Design question, UCS and Nexus 5K - FCP

Hi,
I need some advice (mainly from a Nexus person); I have drawn and attached the proposed solution below.
I am designing a solution with 3 UCS chassis, Nexus 5Ks and 2 x NetApp 3240s (T1 and T2). FC will be used to access disk on the SAN. Also, non-UCS compute will need access to the T2 SAN only (UCS will access T1 and T2). It is a requirement of this solution that non-UCS devices do not connect to the same Nexus switches that the UCS chassis use.
UCS Compute:
The 3 chassis will connect to 2 x 6296 FIs, which will cross-connect to 2 x Nexus 5Ks through FC port channels; the Nexus 5Ks will be configured in NPIV mode to provide access to the SAN. FC from the Nexus 5Ks to the NetApp controllers will be provided through a total of 4 FC port channels (2 FC member ports per port channel): from each Nexus 5K, one goes to controller A and the other to controller B.
Non UCS compute:
These will connect directly through their HBAs to their own Nexus 5Ks and then to the T2 SAN; they will be zoned so that they never have access to the T1 SAN.
Questions:
1- As the UCS compute will need access to T1, what is the best way to connect the Nexus 5Ks on the LHS of the image below to the Nexus 5Ks on the RHS? (This should be an FC connection.)
2- Can Fibre Channel be configured in a vPC domain the way Ethernet can? Is this a better way to build this solution?
3- Is FC better than FCoE for this solution? I hear FCoE is still not highly recommended.
4- Each NetApp controller is only capable of pushing 20Gbps max, which is why each port channel connecting to a controller has only 2 members. However, I'm connecting 4 port-channel members from each Fabric Interconnect (6296) to each Nexus switch. Is this a waste? Remember that connectivity from each FI is also required to the T2 SAN.

Max,
What you are implementing is a traditional FlexPod design with slight variations.
I recommend looking at the FlexPod design zone for some additional material if you have not done so yet.
http://www.cisco.com/en/US/solutions/ns340/ns414/ns742/ns743/ns1050/landing_flexpod.html
To answer your questions:
1 & 2) FC and FCoE do not support vPC. If UCS needs access to T1 only, then there is no need for an ISL between the left and right switch pairs. If UCS needs access to both T1 and T2, the best option would be to set up VSAN trunking on the N5Ks and UCS and configure the vHBAs accordingly; a sketch follows below.
3) Both should work just fine. If you go with FCoE, then UCS would need to be on the latest version for multi-hop FCoE support.
4) If you are only worried about storage throughput, then yes: you will never utilize a 40Gb port channel if your source is a 20Gb port channel. What are your projected peak and average loads on this network?
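Not from the original reply, just a minimal NX-OS sketch of the VSAN-trunking option, assuming hypothetical VSAN IDs 10 (T1) and 20 (T2) and hypothetical interface numbers; fabric B would mirror this with its own VSANs:

    feature npiv
    feature fport-channel-trunk
    vsan database
      vsan 10 name T1-FABRIC-A
      vsan 20 name T2-FABRIC-A
    ! Trunking F-port channel down to the UCS FI (FI in end-host/NPV mode)
    interface san-port-channel 1
      switchport mode F
      switchport trunk mode on
      switchport trunk allowed vsan 10
      switchport trunk allowed vsan add 20
    interface fc2/1-2
      channel-group 1
      no shutdown
    ! ISL to the right-hand N5K pair, carrying only the T2 VSAN (question 1)
    interface fc2/3
      switchport mode E
      switchport trunk mode on
      switchport trunk allowed vsan 20

Restricting the ISL trunk to the T2 VSAN, combined with zoning, keeps the non-UCS hosts away from T1 while still giving UCS both tiers.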

Similar Messages

  • SCA design question - PIX and SCA with dual logical SSL server.

    I have an SCA design question. Please correct or verify my solution.
    1. connectivity.
    <Client with port 443>--<ISP>--<PIX>--<SCA>--<SERVER(two IP on single NIC and each IP associates to WEB server) with port 81>
    * client will access WEB server with x.x.1.100 or x.x.1.101
    2. physical IP address
    - PIX outside=x.x.1.1
    - PIX inside=x.y.1.1
    - SCA device=x.y.1.2
    - SERVER NIC1=x.y.1.10
    - SERVER NIC2=x.y.1.11
    3. PIX NAT
    - static#1=x.x.1.100 map to x.y.1.10
    - static#2=x.x.1.101 map to x.y.1.11
    4. SCA configuration.
    no mode one-port
    ip address x.y.1.2 netmask 255.255.255.0
    ip route 0.0.0.0 0.0.0.0 x.y.1.1
    ssl
    server SERVER1
    ip address x.y.1.10
    localport 443
    remoteport 81
    server SERVER2
    ip address x.y.1.11
    localport 443
    remoteport 81
    Thanks,

    The document http://www.cisco.com/univercd/cc/td/doc/product/webscale/css/scacfggd/ has a link to a page which describes how to use the configuration manager command line interface to configure the Secure Content Accelerator. Several configuration examples are also included in this page.

  • Design Question - BPM and dynamic JDBC adapters

    Hello,
    I need help finishing my scenario.
    Scenario:
    step 1: IDoc > PI (7.1) <> sync JDBC stored procedure call
    step 2: If the sync JDBC call is successful, then make a sync BAPI call to R/3.
    step 3: If the JDBC call fails (in step 1), it should trigger an email and not make the BAPI call (do not execute step 2).
    I have 200 SQL Servers, and each IDoc goes to exactly one of these 200 servers, depending on the connection parameters in one of the IDoc segments.
    Questions:
    1. Can we do this without BPM?
    2. Can we configure a dynamic JDBC adapter depending on the login credentials in the IDoc (server name, port, user name, password)?
    3. If dynamic JDBC adapter configuration is not possible, what should my design be? Do I need to create 200 communication channels, 200 receiver determinations, 200 interface determinations and 200 receiver agreements? I don't think that is a good design.

    Hello,
    It seems doable without using BPM.
    step 1: IDoc > PI (7.1) <> sync JDBC stored procedure call
    step 2: If the sync JDBC call is successful, then make a sync BAPI call to R/3.
    You can use a two-step mapping:
    1.) The first one calls the stored procedure using a UDF or Java mapping (as was suggested in earlier threads).
    2.) The input to the second mapping will be the response from 1. You can use the RFCAccessor to execute the BAPI.
    step 3: If the JDBC call fails (in step 1), it should trigger an email and not make the BAPI call (do not execute step 2).
    Use the XI/PI alerting framework for failed messages. The BAPI call can be executed or skipped by using a try-catch in the root node of the 2nd mapping (1..1 occurrence): return suppress from the root node if the conditions are not met, or return a value otherwise.
    Note: Consider this blog in your design, /people/thorsten.nordholmsbirk/blog/2008/11/04/why-xi-mappings-should-be-free-of-side-effects
    Hope this helps,
    Mark
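    As an illustration of step 1.), here is a hedged plain-JDBC sketch (not SAP-specific API); it assumes the host, port, database, user and password have already been mapped out of the IDoc segment, and every name in it is hypothetical:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;

        public class DynamicJdbcCall {
            // Connection parameters arrive from the IDoc segment at runtime.
            public static boolean callStoredProc(String host, int port, String db,
                                                 String user, String password) {
                String url = "jdbc:sqlserver://" + host + ":" + port
                           + ";databaseName=" + db;
                try (Connection con = DriverManager.getConnection(url, user, password);
                     CallableStatement cs = con.prepareCall("{call dbo.my_proc(?)}")) {
                    cs.setString(1, "payload");
                    cs.execute();
                    return true;   // step 2 (the BAPI call) may proceed
                } catch (Exception e) {
                    // step 3: raise an alert / send the email, skip the BAPI call
                    return false;
                }
            }
        }

    Because the URL is built per message, the lookup does not need 200 preconfigured channels.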

  • New UCS and VMware setup Questions

    We are currently in the process of migrating our VMware infrastructure from HP to UCS. We are utilizing the Virtual Connect adapters for the project. With the migration we also plan on implementing the Cisco Nexus 1000V in our environment. I have demo equipment set up and have had a chance to install a test environment, but still have a few design questions.
    When implementing the new setup, what is a good base setup for the virtual adapters with the 1000V? How many NICs should I dedicate? Right now I run 6 NICs per server (2 console, 2 virtual machines, and 2 VMotion). Is this a setup I should continue with going forward? The only other thing I am looking to implement is another set of NICs for NFS access. In a previous setup at a different job, we had 10 NICs per server (2 console, 4 virtual machines, 2 VMotion and 2 iSCSI). Is there any kind of standard for this setup?
    The reason I am asking is that I want to get the most out of my VMware environment, as we will be looking to migrate Tier 1 app servers once we get everything up and running.
    Thanks for the help!

    Tim,
    Migrating from HP Virtual Connect (VC) to UCS might change your network design slightly, for the better of course. Not sure if you're using 1G or 10G VC modules, but I'll respond as if you're using 10G modules, because that is what UCS will provide. VC modules provide a 10G interface that you can logically chop up into a max of 4 host vNIC interfaces totaling 10G. Though it's handy to divide a single 10G interface into virtual NICs for Service Console, VMotion, iSCSI, etc., this creates the opportunity for wasted bandwidth. The logical NICs VC creates impose a hard upper limit of bandwidth on the adapter. For example, if you create a 2Gb interface for your host to use for vMotion, then 2Gb of your 10G pipe is wasted whenever no vMotions are taking place!
    UCS & the 1000v offer a different solution in terms of bandwidth utilization by means of QoS. We feel it's more appropriate to specify a "minimum" bandwidth guarantee rather than a hard upper limit that leads to wasted pipe. Depending on which UCS blade and mezz card option you have, the number of adapters you can present to the host varies. B200 blades support one mezz card (with 2 x 10G interfaces), while the B250 and B440 are full-width blades and support 2 mezz cards. In terms of mezz cards, there are the Intel/Emulex/QLogic/Broadcom/Cisco VIC options. In my opinion the M81KR (VIC) is best suited for virtualized environments, as you can present up to 56 virtual interfaces to the host, each having various levels of QoS applied. When you roll the 1000v into the mix you have a lethal combination, adding new QoS features that automatically match traffic types such as Service Console, iSCSI, VMotion, etc. See this thread for a list/explanation of new features coming in the next version of the 1000v, due out in a couple of weeks: https://www.myciscocommunity.com/message/61580#61580
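    To make the "minimum guarantee" idea concrete, here is a hedged NX-OS-style queuing sketch (class names, qos-group numbers and percentages are made up for illustration):

        class-map type queuing VMOTION-Q
          match qos-group 2
        policy-map type queuing HOST-UPLINK-OUT
          class type queuing VMOTION-Q
            bandwidth percent 20
          class type queuing class-default
            bandwidth percent 80

    Unlike a VC rate limit, the 20% here is a floor under congestion, not a ceiling: vMotion can burst past it whenever the uplink is otherwise idle.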
    Before you think about design too much, tell us what blades & adapters you're using and we can offer some suggestions for setting them up in the best configuration for your virtual infrastructure.
    Regards,
    Robert
    BTW - Here are a couple of Best Practice Guides for UCS & the 1000v that you might find useful.

  • How would i design the relationship between "question", "subquestion", and "answer"

    Hi all. Consider the following scenario:
    Scenario:
    A Question has an Answer, but some Questions have Subquestions. For example:
    1. Define the following terms: (Question)
    a) Object (1 marks) (Subquestion)
    An instance of a class. (Answer)
    b) ...
    2. Differentiate between a constructor and a destructor (2 marks)
    (Question)
    A constructor constructs while a destructor destroys.
    (Answer)
    Question:
    I want to model Question, Subquestion, and Answer as entities with relationships/associations, preferably binary relationships, as I feel ternary relationships will be problematic while programming. Any suggestions on how I would go about this?
    There are never infinite resources.
    For the Question entity, a question has the attributes QuestionPhrase <String>, Diagram <Binary>, and Marks <Decimal>.
    For the SubQuestion entity, a subquestion has the attributes SubQuestionPhrase <String>, Diagram <Binary>, and Marks <Decimal>.
    For the Answer entity, an answer has the attributes AnswerPhrase <String> and Diagram <Binary>.

    Yes, I am in .Net. I sure do hope I did not ask in the wrong forum. :-|
    Hi KCWamuti,
    If you need to design the relationship between a Question table and an Answer table in SQL Server, then as Uri's and Visakh's posts suggest, you can create foreign keys to establish the relationships between tables, and use joins in queries to get your desired result. For more information about JOINs in SQL Server, please review this article: Different Types of SQL Joins.
    However, if you need to model Question, Subquestion, and Answer as entities in .Net, then the issue concerns data platform development. I suggest you post the question in the Data Platform Development forums at http://social.msdn.microsoft.com/Forums/en-US/home?category=dataplatformdev . That forum is more appropriate, and more experts will assist you there.
    Thanks,
    Lydia Zhang
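    If it helps, here is a minimal sketch of the binary relationships in code (plain classes; the field names mirror the attributes listed in the question, everything else is hypothetical). Each association links exactly two entities: Question-SubQuestion and (Sub)Question-Answer.

        import java.math.BigDecimal;
        import java.util.ArrayList;
        import java.util.List;

        class Answer {
            String answerPhrase;
            byte[] diagram;
        }

        class SubQuestion {
            String subQuestionPhrase;
            byte[] diagram;
            BigDecimal marks;
            Answer answer;                  // binary: SubQuestion - Answer (1..1)
        }

        class Question {
            String questionPhrase;
            byte[] diagram;
            BigDecimal marks;
            Answer answer;                  // null when the question only has subquestions
            List<SubQuestion> subQuestions = new ArrayList<>();  // binary: Question - SubQuestion (1..*)
        }

    In SQL Server the same shape becomes two foreign keys: SubQuestion.QuestionId referencing Question, and Answer rows referencing the Question or SubQuestion they answer.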

  • Method design question...and passing object as parameter to webserice

    I am new to web services... one design question.
    I am writing a web service to check whether a user is a valid user or not. The users are categorized as Member, Admin and Professional, and for each user type I have to hit a different data source to verify.
    I get this user type as a parameter. What is the best approach to define the method?
    Should I have one single method, isValidUser, which all web service clients always call, passing the user type, or should I define a method for each type, like isValidMember and isValidAdmin?
    One more thing... in the future the requirement may change for Professional to have more required fields, in which case the parameters would need more attributes. But on the client side not much changes if I have a single isValidUser method; all they have to do is pass additional values.
    boolean isValidUser(String username, String userType, String[] userAttributes) {
        switch (userType) {
            case "member":       return isValidMember(username, userAttributes);
            case "professional": return isValidProfessional(username, userAttributes);
            case "admin":        return isValidAdmin(username, userAttributes);
            default: throw new IllegalArgumentException("unknown user type: " + userType);
        }
    }
    or
    boolean isValidMember(String username, String[] userAttributes) {
        // member-specific data source lookup
        return true;
    }
    One last question: can a parameter be passed as an object in a web service, like a USER object?

    First of all, here is my code
    CREATE OR REPLACE
    TYPE USERCONTEXT AS OBJECT
    user_login varchar2,
    user_id integer,
    CONSTRUCTOR FUNCTION USERCONTEXT (
    P_LOGIN IN INTEGER
    P_ID_ID IN INTEGER
    ) RETURN SELF AS RESULT
    Either your type won't compile, or this is not the real code...

  • Catalyst 3850 Stack VLANs, layer 2 vs. layer 3 design question

    Hello there:
    Just a generic design question. After doing much reading, I am still not clear on when to use one or the other, and what the benefits/tradeoffs are:
    Should we configure the switch stack w/ layer 3, or layer 2 VLANs?
    We have a Catalyst 3850 stack, connected to an ASA 5545-X firewall via an 8Gb EtherChannel.
    We have about 100 servers (some connected with bonding or mini-EtherChannels), and 30 VLANs.
    We have several 10Gb connections to servers.
    We push large (up to TB-sized) files from VLAN to VLAN, mostly using scp.
    No ip phones, no POE.
    Inter-VLAN connectivity/throughput and security are priorities.
    Originally, we planned to use the ASA to filter connections between VLANs, and VACLs or PACLs on the switch stack to filter connections between hosts within the same VLAN.
    Thank you.

    If all of your servers are going to the 3850, then I'd say you've got the wrong switch model for a DC job. If you don't configure QoS properly, your servers will start dropping packets, because Catalyst switches have very, very shallow memory buffers. These buffers get swamped when servers push non-stop traffic.
    Ideally, Cisco recommends the Nexus line to connect servers to. One of the guys here, Joseph, regularly recommends the Catalyst 4500-X as a suitable (and more affordable) alternative to the more expensive Nexus range.
    In a DC environment, if you have a lot of VM stuff, then stick with Layer 2. vMotion and Layer 3 don't go hand in hand.
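    On the intra-VLAN filtering mentioned in the question, here is a hedged IOS sketch of a VACL (addresses, names and the VLAN number are hypothetical; verify the syntax on your 3850 release):

        ip access-list extended INTRA-VLAN10-BLOCK
         permit ip host 10.1.10.5 host 10.1.10.6
        vlan access-map VLAN10-FILTER 10
         match ip address INTRA-VLAN10-BLOCK
         action drop
        vlan access-map VLAN10-FILTER 20
         action forward
        vlan filter VLAN10-FILTER vlan-list 10

    Traffic matching the ACL's permit entries is dropped by the access map; everything else in VLAN 10 is forwarded. This works the same whether the VLAN's gateway lives on the stack (Layer 3) or on the ASA (Layer 2 stack).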

  • Oracle 10g RAC design with ASM and OCFS

    Hi all,
    I have a question about a proposed Oracle 10g Release 2 RAC design for a 2-node cluster.
    ASM can store database files, but not the Oracle binaries or the OCR and voting disk, and OCFS version 1 does not support a shared Oracle Home. We plan to use OCFS version 2 with ASM v2 on Red Hat Enterprise Linux Server 4 with Oracle 10g Release 2 (10.2.0.1).
    With OCFS v2, a shared Oracle Home and shared OCR and voting disk are supported. My question is: does the following proposed architecture make sense for OCFS v2 with ASM v2 on Red Hat Linux 4?
    Oracle 10g Release 2 on Red Hat Enterprise Linux Server 4:
    OCFS V2:
    - shared Oracle home and binaries
    - shared OCR and vdisk files
    - CRS software shared OCFS v2 filesystem
    - spfile
    - controlfiles
    - tnsnames.ora
    ASM v2 with ASMLib v2:
    Proposed ASM disk groups:
    - data_dg for application data
    - backupdg for flashback and archivelogs
    - undo_rac1dg ASM diskgroup for undo tablespace for racnode1
    - undo_rac2dg ASM diskgroup for undo tablespace for racnode2
    - redo_rac1dg ASM diskgroup to hold redo logs for racnode1
    - redo_rac2dg ASM diskgroup to hold redo logs for racnode2
    - temp1dg temp tablespace for racnode1
    - temp2dg temp tablespace for racnode2
    Does this sound like a good initial design?
    Ben Prusinski, Senior DBA

    OK Tim, thanks for the advice.
    I think NetBackup can be integrated with RMAN, but I don't want to lose time on this (political).
    To summarize:
    ORACLE_HOME and CRS_HOME on each node (RAID1 and NTFS)
    Shared storage:
    Disk1 and Disk2 (RAID1): raw partition 1 for the OCR, raw partition 2 for the voting disk, OCFS for the FLASH_RECOVERY_AREA
    Disk3, Disk4 and Disk5 (RAID0): raw disks in one ASM diskgroup (normal redundancy) for database files
    This is a running project here; we will start testing the design on VMware and then go for the production setup.
    Regards
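    A minimal sketch of creating one of the proposed diskgroups from the ASM instance (the disk labels are hypothetical ASMLib names; with raw devices you would use device paths and an asm_diskstring instead):

        CREATE DISKGROUP data_dg NORMAL REDUNDANCY
          FAILGROUP fg1 DISK 'ORCL:DATA1'
          FAILGROUP fg2 DISK 'ORCL:DATA2';

    Normal redundancy mirrors extents across failure groups, which is why it pairs with plain RAID0 underneath instead of RAID1.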

  • Design question: Scheduling a Variable-timeslot Resource

    I originally posted this in general Java programming, because this seemed like a more high-level design discussion. But now I see some class design questions. Please excuse me if this thread does not belong here (this is my first time using the forum, save answering a couple of questions).
    Forum,
    I am having trouble determining a data structure and applicable algorithm (actually, even more general than the data structure -- the general design to use) for holding a modifiable (but more heavily read/queried than updated), variable-timeslot schedule for a given resource. Here's the situation:
    Let's, for explanation purposes, say we're scheduling a school. The school has many resources. A resource is anything that can be reserved for a given event: classroom, gym, basketball, teacher, janitor, etc.
    Ok, so maybe the school deal isn't the best example. Let's assume, for the sake of explanation, that classes can be any amount of time in length: 50 minutes, 127 minutes, 4 hours, 3 seconds, etc.
    Now, the school has a base operation schedule, e.g. they're open from 8am to 5pm MTWRF and 10am to 2pm on saturday and sunday. Events in the school can only occur during these times, obviously.
    Then, each resource has its own base operation schedule, e.g. the gym is open from noon to 5pm MTWRF and noon to 2pm on sat. and sun. The default base operation schedule for any resource is the school which "owns" the resource.
    But then there are exceptions to the base operation schedule. The school (and therefore all its resources) are closed on holidays. The gym is closed on the third friday of every month for maintenance, or something like that. There are also exceptions to the available schedule due to reservations. I've implemented reservations as exceptions with a different status code to simplify things a little bit: because the basic idea is that an exception is either an addition to or removal from the scheduleable times of that resource. Each exception (reservation, closed for maintenance, etc) can be an (effectively) unrestricted amount of time.
    Ok, enough set up. Somehow I need to be able to "flatten" all this information into a schedule that I can display to the user, query against, and update.
    The issue is complicated more by recurring events, but I think I have that handled already and can make a recurring event be transparent from the application point of view. I just need to figure out how to represent this.
    This is my current idea, and I don't like it at all:
    A TimeSlot object, holding a beginning date and an ending date, and a data structure that holds a list of TimeSlot objects in order by date. I'd probably also hold an index of some sort that maps a constant span of time to the general area in the data structure where times around it can be found, so that I avoid an O(n) search when checking whether a given time is open.
    I don't like this idea, because it requires me to call getBeginningDate() and getEndDate() for every single time slot I search.
    Anyone have any ideas?

    If I am correct, your requirement is to display a schedule showing the occupancy of a resource (open/closed/used/free and other kinds of information) on a timeline.
    I do not say that your design is incorrect. What I state below is strictly my view and should be treated that way.
    I would not go by time slot; instead, I would go by resource. For instance, the gym, the classrooms (identified accordingly), the swimming pool, etc. are all resources. Therefore (for the requirements you have specified), I would create a class, let's say "Resource", to represent all the resources. I would recommend two attributes at this stage ("name" & "identifier").
    The primary attribute of interest in this case would be a date (starting at 00:00 hrs and ending at 24:00 hrs), a span of 24 hrs broken into the smallest unit of a minute (seconds really are not very practical here).
    I would next encapsulate the availability factor, which represents the concept of availability, in a class, for instance "AvailabilityStatus". The recommended attributes would be "date" and "status".
    You have mentioned different statuses, for instance available, booked, closed, under-maintenance, etc. Each of these is a category. Let us say they are numbered from 0 to n (where n < 128).
    The "date" attribute could be a java.util.Date object, representing a date. The "status" is a byte array of 1440 elements (one element for each minute of the day). Each element of the byte array is populated with the number designation of the status (i.e., 0, 1, 2, ..., n), where the number represents the status of that minute.
    The "Resource" class would carry an attribute "resourceStatus", an ordered vector of "AvailabilityStatus" objects.
    The object (all the objects) could be populated manually at any time, or the entire process could be automated (that is a separate area).
    The problem of representation is over. You could add any number of resources as well as any number of status categories.
    This is a simple solution; I do not address the issues of querying this information and rendering the actual schedule, which I believe are straightforward enough.
    It is recognized that there is scope for optimization/design rationalization here; however, this is a simple and effective enough solution.
    regards
    [email protected]
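    A minimal Java sketch of the design described above (the status codes and the helper method are additions for illustration, not part of the original reply):

        import java.util.ArrayList;
        import java.util.Date;
        import java.util.List;

        /** One day of availability for one resource: one status byte per minute. */
        class AvailabilityStatus {
            static final int MINUTES_PER_DAY = 1440;
            static final byte OPEN = 0, BOOKED = 1, CLOSED = 2, MAINTENANCE = 3;

            final Date date;
            final byte[] status = new byte[MINUTES_PER_DAY];  // one element per minute

            AvailabilityStatus(Date date) { this.date = date; }

            /** Mark the half-open range [fromMinute, toMinute) with a status category. */
            void mark(int fromMinute, int toMinute, byte category) {
                for (int m = fromMinute; m < toMinute; m++) status[m] = category;
            }
        }

        class Resource {
            final String identifier;
            final String name;
            final List<AvailabilityStatus> resourceStatus = new ArrayList<>();  // ordered by date

            Resource(String identifier, String name) {
                this.identifier = identifier;
                this.name = name;
            }
        }

    Base hours, closures and reservations all collapse into byte writes, and checking a given minute is a single array index instead of a scan over TimeSlot objects.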

  • Solution Design questions/suggestions/help needed

    Hi,
    I would appreciate any inputs regarding this.
    I am thinking of designing Solutions in SolMan for our landscape, which consists of ECC 6 and PI 7.1, each having a 3-system landscape D-Q-P. Is there an issue if I create Solutions Dev, QA and Prod versus Solutions ECC and PI? I am more in favor of the former because it allows me easier management of the landscape when I implement System Monitoring, EWA and System Administration, letting me see everything for the critical Prod systems together. Do any of you see any downsides to this approach? Or any advantages to creating Solution ECC and Solution PI?
    Also, from what I read in the documentation, I would need to create a logical component for each system (ZECCDEV, ZECCQA, etc.) for the systems to show up in the solution landscape? This is because in the definition of logical components there is only one field to add the Dev, QA or Prod system. I still have to start working on this, so I could be wrong; please feel free to correct me.
    Any inputs would be appreciated since I am fairly new to this.
    Thanks,
    Shreya

    Hello,
    I am not really sure what you are asking.
    Systems are usually defined (SID) in SMSY, then they are added to a logical component.
    Logical components need to be in the customer namespace, so they are copied from the SAP references to a unique name prefixed with a "Z".
    Multiple systems of any system type (PRD, DEV, QAS, TEST, SANDBOX, etc.) can be added to a logical component.
    One or more logical components can then be added to a Solution.
    This is basically how the Solution is built. There are some limitations you need to consider when naming systems.
    For certain functions, like EWA reports, systems need to have a unique Installation # + SID; in the case of a long SID, it would be the Installation # + (first 3 characters of the) long SID.
    So if you want to know whether you can have a PI system called DEV, QAS and PRD and an ECC system called DEV, QAS and PRD: you can. But if you name them ECC_DEV, ECC_QAS and ECC_PRD, then you will have problems.
    A logical component name must be unique. Systems show up in the system landscape as soon as they have been defined.
    When a logical component is added to the Solution, only systems of type Production are automatically set to "Put in Solution", even though you can see the systems of other types in the logical component and in the solution. Any system that is not in status "Put in Solution" will not be visible when you try to use it; as an example, when you create an EWA, you would select the solution and see nothing but production systems to select. This is because system types that are not production need to be manually set to "Put in Solution". This is done in the solution, in Change/Edit mode, by right-clicking on the system you want to put in the solution, selecting that option, and saving.
    From your questions I am not exactly sure what you were getting at, but I do hope this general info helps.
    Regards,
    Paul

  • LDAP design question for multiple sites

    I'm planning to implement Sun Java System Directory Server 5.2 2005Q1 to replace NIS.
    Currently we have 3 sites with different NIS domains.
    Since NFS over the WAN connection is very unreliable, I would like to implement the following:
    1. 3 LDAP servers + a replica for each site.
    2. A single username and password for every end user across those 3 sites.
    3. Different auto_master, auto_home and auto_local maps for the three sites, so that when a user logs in at a different site, the password is the same but the home directory is different (local).
    So the questions are:
    1. Do I need to have 3 domains for LDAP?
    2. If yes to question 1, how can I keep the username and password in sync across the three domains? If no to question 1, what DIT (Directory Information Tree) or directory structure should I use?
    3. How do I make the automount maps work in LDAP and still mount the local home directory?
    I would really appreciate it if some LDAP experts could enlighten me on this project.

    Thanks for your information.
    My current environment has 3 sites with 3 different NIS domain names: SiteA: A.com, SiteB: B.A.com, SiteC: C.A.com (A.com is our company domain name).
    So every time I add a new user account I need to create it in three NIS domains separately. Also, the password gets out of sync if a user changes it at one site.
    I would like to migrate NIS to LDAP.
    I want a single username and password for each user across the 3 sites; however, the home directory is on a local NFS filer.
    Say for userA, his home directory is /user/userA in the passwd file/map. At location X, his home directory will mount FilerX:/vol/user/userA; at location Y, userA's home directory will mount FilerY:/vol/user/userA.
    So the mount source is determined by the auto_user map in NIS.
    In other words, there will be 3 different auto_user maps in the 3 different LDAP servers.
    So userA logging in to hostX at location X will mount a home directory on the local FilerX, and logging in to hostY at location Y will mount a home directory on the local FilerY.
    But the username and password will be the same at all three sites.
    That's my goal.
    Some LDAP experts suggested MMR (Multi-Master Replication), but I am still not quite sure how to do MMR.
    It would be appreciated if some LDAP guru could give me a guideline as a starting point.
    Best wishes
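    On the automount question, the maps are commonly stored using the autofs LDAP schema; here is a hedged LDIF sketch (the DNs, filer name and even the schema are deployment-dependent; older Sun setups used nisObject/nisMapEntry instead):

        dn: automountMapName=auto_user,dc=A,dc=com
        objectClass: top
        objectClass: automountMap
        automountMapName: auto_user

        dn: automountKey=userA,automountMapName=auto_user,dc=A,dc=com
        objectClass: automount
        automountKey: userA
        automountInformation: FilerX:/vol/user/userA

    Each site's servers would carry their own auto_user map pointing at the local filer, while the user's password entry replicates everywhere, matching the single-password/local-home goal.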

  • Design question for database connection in multithreaded socket-server

    Dear community,
    I am programming a multithreaded socket server. The server creates a new thread for each connection.
    The threads, and several objects which are instantiated by each thread, need access to database connectivity. Therefore I implemented a factory class which administers database connections in a pool. At this point I have a design question.
    How should I access the connections from the threads? There are two options:
    a) Should I implement in my server class a new method like "getDatabaseConnection" which calls the factory class and returns a pooled connection to the database? In this case each object has to know the server object and call this method to get a database connection. That could become very complex, as I would have to save an instance of the server object in each object...
    b) Should I develop a static method in my factory class so that each thread could get a database connection by calling the static method of the factory?
    Thank you very much for your answer!
    Kind regards,
    Dak
    Message was edited by:
    dakger

    So your suggestion is to use a static method from a central class. But those static methods are not really object oriented, are they?
    There's only one static method, and that's getInstance().
    If I use the singleton pattern, I only create one instance of the database pooling class in order to configure it (driver, access data to the database and so on). The threads then use a static method of this class to get a database connection?
    They use a static method to get the pool instance; getConnection() is not static.
    Kaj
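    A minimal sketch of that pattern (pool sizing, driver setup and the JDBC URL are hypothetical):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.util.ArrayDeque;
        import java.util.Deque;

        /** Only getInstance() is static; getConnection() is an instance method. */
        final class ConnectionPool {
            private static final ConnectionPool INSTANCE = new ConnectionPool();
            private final Deque<Connection> idle = new ArrayDeque<>();

            private ConnectionPool() { }   // configure driver/URL/credentials once here

            static ConnectionPool getInstance() { return INSTANCE; }

            synchronized Connection getConnection() throws SQLException {
                Connection c = idle.poll();  // reuse an idle connection if one exists
                return (c != null) ? c
                     : DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "");
            }

            synchronized void release(Connection c) { idle.push(c); }
        }

    Each connection-handler thread calls ConnectionPool.getInstance().getConnection() and releases the connection in a finally block, so no object ever needs a reference to the server instance.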

  • SOA real-time design question

    Hi All,
    We are currently working with SOA Suite 11.1.1.4. I have a SOA application requirement to receive a real-time feed for six data tables from an external third party. The implementation consists of five one-way operations in the WSDL to populate the six database tables.
    I have a design question. The organization plans to use this data across various departments, which requires replicating or supplying the data to other internal databases.
    In my understanding there are two options:
    1) Within the SOA application, fork the data hitting the web service to the different databases.
    My concern with this approach: what if departments keep coming with such requests and I keep forking, supplying multiple internal databases with the same data? This feed has to be real-time; too much forking will impact performance and create unwanted dependencies on this critical data-supply link.
    2) I could tell other internal projects to get the data from the populated main database.
    My concern here is that, firstly, the data is pushed into this database flat, without any constraints, and it is difficult to query for specific data; this design was purposely put in place to facilitate real-time performance. Also, asking every internal project to get data from the main database will affect its performance.
    Please suggest which approach I should take (advantages/disadvantages). Apart from the above two solutions, is there any other recommended solution that mitigates the risks? This link between our organization and the external party is somewhat of a lifeline for BAU, so I certainly don't want to create more dependencies and overhead.
    Thanks

    I tried implementing the JMS publisher/subscriber pattern before; unfortunately the performance was not as good as writing directly through the DB adapter. I feel the organization's SOA infrastructure is not set up to cope with the number of messages coming through from the external third party. Our current setup consists of three WebLogic servers (Admin, SOA, BAM) all running on only 8GB of physical RAM on one machine. Is there an Oracle guideline for sizing the infrastructure for a SOA application receiving roughly 600,000 messages a day? I am using SOA 11.1.1.4. The JMS publisher/subscriber pattern just does not cope, and I see significant performance lag after a few hours of running. The JMS server used was WebLogic JMS.
    Thanks
    Edited by: user5108636 on Jun 13, 2011 4:19 PM
    Edited by: user5108636 on Jun 13, 2011 7:03 PM
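    For reference, the publisher side of the pattern being compared here looks roughly like this in plain JMS (a hedged sketch; the JNDI names are hypothetical, and on WebLogic the factory and topic come from the JMS module configuration):

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.MessageProducer;
        import javax.jms.Session;
        import javax.jms.Topic;
        import javax.naming.InitialContext;

        public class FeedPublisher {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/FeedCF");
                Topic topic = (Topic) ctx.lookup("jms/FeedTopic");

                Connection con = cf.createConnection();
                try {
                    Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(topic);
                    producer.send(session.createTextMessage("<row>...</row>"));
                } finally {
                    con.close();
                }
            }
        }

    Each downstream database would attach its own (ideally durable) subscriber, so the producer never changes when a new consumer appears; whether it keeps up at roughly 600,000 messages a day is the sizing question raised above.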

  • Workflow design questions: FM vs WF to call FM

    Here's a couple of workflow design questions.
    1. We have work item 123 that allows the user to navigate to a custom transaction TX1. The user can make changes in TX1. At save, or at a user command in TX1, the program calls a function module (FM1) to delete WI 123 and create a new WI to send to a different agent.
    Since work item 123 is still open and locked, FM1 cannot delete it immediately; it has to use a DO loop to check whether work item 123 has been dequeued before performing the WI delete.
    Alternative: instead of calling FM1, the program can raise an event which starts a new workflow with one step/task/new method which calls FM1. Even with this alternative, work item 123 can still be locked when the new workflow's task/method calls FM1.
    I do not like the alternative, which calls the same FM1 indirectly via a new workflow/step/task/method.
    2. When an application object changes, the user exit calls an FMx which is related to workflow. The ABAP developer does not want to call FMx directly; she wants to raise an event which calls a workflow .. step .. task .. method .. FMx indirectly. This way any commit that happens in FMx will not affect the application object's COMMIT.
    My recommendation is to call FMx 'in update task' so that FMx is only called after the COMMIT of the application object.
    Any recommendations?
    Amy

    Mike,
    Yes, in my first design, the TX can 1. raise a terminating event for the existing work item/workflow and then 2. raise another event to call another workflow. Both 1 and 2 will be in FM1.
    Then the design question is: should FM1 be called from the TX directly, or should the TX raise an event to call a new workflow with one step/task, which calls a method in the business object, which calls FM1?
    In my second design question, when an application object changes, the user exit calls an FMx which is related to workflow. The ABAP developer does not want to call FMx directly; she wants to raise an event which calls a workflow, which has one step/task, which calls a method, which calls FMx indirectly. This way any commit that happens in FMx will not affect the application object's COMMIT.
    My recommendation is to either call FMx 'in update task', so that FMx is only called after the COMMIT of the application object, or raise an event to call a receiver FM (FMx); a short sketch of the update-task variant follows below.
    Thanks.
    Amy
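    A minimal ABAP sketch of the 'in update task' option (the function name and parameter are hypothetical, and Z_FMX must be flagged as an update module in SE37):

        " Registered in the user exit; executed in the update work process
        " only when the application object's COMMIT WORK runs.
        CALL FUNCTION 'Z_FMX' IN UPDATE TASK
          EXPORTING
            iv_objkey = lv_objkey.

    This keeps FMx out of the application object's dialog LUW, which addresses the developer's concern without routing the call through an extra workflow.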

  • Aggregation level - design question

    Hi, All
    we are on BI-IP (NetWeaver 2004s SPS16).
    I have a design question for this scenario.
    The user needs to plan an amount for a duration (start period and end period), e.g. Jan 2008 - Dec 2008 (001.2008 - 012.2008) = 12000.
    We need to distribute this to the periods equally: 001.2008 = 1000, 002.2008 = 1000, ..., 012.2008 = 1000.
    If the user changes the period amounts, the change should be reflected back in the duration amount.
    Please suggest a design for the aggregation levels to achieve this.
    Thanks in advance.
    velu

    Hello Velu,
    As the name suggests, creating an "aggregation level" will only give you aggregation. What your requirement describes is disaggregation, or distribution; this cannot happen automatically. You will either have to use the disaggregation feature in input-ready queries, use the distribution planning function, or create your own planning function using FOX or an exit.
