Dell MD3620i connect to VMware - best practices

Hello Community,
I've purchased a Dell MD3620i with two 10GBase-T Ethernet ports on each controller (two controllers).
My VMware environment consists of two ESXi hosts (each with two 1GBase-T ports) and an HP LeftHand storage array (also 1GBase-T). The switches I have are Cisco 3750s, which only have 1GBase-T Ethernet ports.
I'm going to replace the HP storage with the Dell storage.
As I have never worked with Dell storage before, I need your help to answer my questions:
1. What is the best practice to connect VMware hosts to the Dell MD3620i?
2. What is the process to create LUNs?
3. Can I create multiple LUNs in a single disk group, or is the best practice one LUN per disk group?
4. How do I get the 10GBase-T iSCSI ports working on a 1Gbps switch?
5. Is it best practice to connect the Dell MD3620i directly to the VMware hosts, without a switch?
6. The old iSCSI network on the HP storage is a different network. Can I vMotion all virtual machines from one iSCSI network to the other and then change the iSCSI IP addresses on the VMware hosts without interrupting the virtual machines?
7. Can I bond two iSCSI ports into one 2Gbps interface and connect it to the switch? I'm using two switches, so I want to connect each controller to each switch by bonding its interfaces to 2Gbps. My question is: would the controller fail over to the other controller if the Ethernet link on the switch goes down (e.g. while one switch is rebooting)?
Thanks in advance!

TCP/IP basics: a computer cannot connect to 2 different (isolated) networks (e.g. 2 directly-attached cables between the server and a SAN's iSCSI ports) that share the same subnet.
Data corruption is highly unlikely if you were to share the same VLAN for all iSCSI traffic; however, performance and overall reliability would be impacted.
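You can sanity-check such an addressing plan mechanically. Here is a minimal sketch in Python (IPs taken from the examples below; the /24 masks are an assumption) that verifies each NIC-to-SAN path lands in its own subnet:

    import ipaddress

    # Each (server NIC IP, SAN port IP) pair is one iSCSI path.
    paths = [
        ("192.168.130.110", "192.168.130.101"),
        ("192.168.131.110", "192.168.131.101"),
        ("192.168.132.110", "192.168.132.101"),
        ("192.168.133.110", "192.168.133.101"),
    ]

    networks = []
    for nic_ip, san_ip in paths:
        nic_net = ipaddress.ip_interface(nic_ip + "/24").network
        san_net = ipaddress.ip_interface(san_ip + "/24").network
        # Both ends of a path must share a subnet...
        assert nic_net == san_net, f"{nic_ip} / {san_ip}: endpoints in different subnets"
        networks.append(nic_net)

    # ...and no two paths may share a subnet, or the host's routing becomes ambiguous.
    assert len(set(networks)) == len(networks), "two paths share a subnet"
    print("addressing plan is consistent")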
With an MD3620i, here are a few setup scenarios using the factory-default subnets (for the direct-attached setup I had to add 4 additional subnets):
Single switch (not recommended as the switch becomes your single point of failure):
Controller 0:
iSCSI port 0: 192.168.130.101
iSCSI port 1: 192.168.131.101
iSCSI port 2: 192.168.132.101
iSCSI port 3: 192.168.133.101
Controller 1:
iSCSI port 0: 192.168.130.102
iSCSI port 1: 192.168.131.102
iSCSI port 2: 192.168.132.102
iSCSI port 3: 192.168.133.102
Server 1:
iSCSI NIC 0: 192.168.130.110
iSCSI NIC 1: 192.168.131.110
iSCSI NIC 2: 192.168.132.110
iSCSI NIC 3: 192.168.133.110
Server 2: same as Server 1, but with IPs ending in .120
All ports plug into that 1 switch (obviously).
If you only want to use 2 NICs for iSCSI, have server 1 use the 130 and 131 subnets, server 2 use 132 and 133, and server 3 use 130 and 131 again. This spreads the IO load between the iSCSI ports on the SAN.
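A hedged sketch of that round-robin assignment (Python; the host octets .110, .111, ... are an assumption):

    from itertools import cycle

    # Third octets of the factory-default subnets, taken two at a time per server.
    subnet_pairs = cycle([(130, 131), (132, 133)])

    # Server 1 -> 130/131, server 2 -> 132/133, server 3 -> 130/131 again, etc.
    for server, (a, b) in zip(range(1, 5), subnet_pairs):
        host = 109 + server
        print(f"Server {server}: NIC 0 = 192.168.{a}.{host}, NIC 1 = 192.168.{b}.{host}")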
Dual switches (one VLAN for all the iSCSI ports on each switch, though):
NOTE: Do NOT link the switches together. This helps prevent issues that occur on one switch from affecting the other switch.
Controller 0:
iSCSI port 0: 192.168.130.101 -> To Switch 1
iSCSI port 1: 192.168.131.101 -> To Switch 2
iSCSI port 2: 192.168.132.101 -> To Switch 1
iSCSI port 3: 192.168.133.101 -> To Switch 2
Controller 1:
iSCSI port 0: 192.168.130.102 -> To Switch 1
iSCSI port 1: 192.168.131.102 -> To Switch 2
iSCSI port 2: 192.168.132.102 -> To Switch 1
iSCSI port 3: 192.168.133.102 -> To Switch 2
Server 1:
iSCSI NIC 0: 192.168.130.110 -> To Switch 1
iSCSI NIC 1: 192.168.131.110 -> To Switch 2
iSCSI NIC 2: 192.168.132.110 -> To Switch 1
iSCSI NIC 3: 192.168.133.110 -> To Switch 2
Server 2: same as Server 1, but with IPs ending in .120
The same note applies about using just 2 NICs per server for iSCSI. In this setup each server still uses both switches, so a switch failure should not take down any server's iSCSI connectivity.
Quad switches (or 2 VLANs on each of the 2 switches above):
Controller 0:
iSCSI port 0: 192.168.130.101 -> To Switch 1
iSCSI port 1: 192.168.131.101 -> To Switch 2
iSCSI port 2: 192.168.132.101 -> To Switch 3
iSCSI port 3: 192.168.133.101 -> To Switch 4
Controller 1:
iSCSI port 0: 192.168.130.102 -> To Switch 1
iSCSI port 1: 192.168.131.102 -> To Switch 2
iSCSI port 2: 192.168.132.102 -> To Switch 3
iSCSI port 3: 192.168.133.102 -> To Switch 4
Server 1:
iSCSI NIC 0: 192.168.130.110 -> To Switch 1
iSCSI NIC 1: 192.168.131.110 -> To Switch 2
iSCSI NIC 2: 192.168.132.110 -> To Switch 3
iSCSI NIC 3: 192.168.133.110 -> To Switch 4
Server 2: same as Server 1, but with IPs ending in .120
In this case, using 2 NICs per server means the first server uses the first 2 switches and the second server uses the second pair of switches.
Direct attach:
Controller 0:
iSCSI port 0: 192.168.130.101 -> To server iSCSI NIC 1 (on an example IP of 192.168.130.110)
iSCSI port 1: 192.168.131.101 -> To server iSCSI NIC 2 (on an example IP of 192.168.131.110)
iSCSI port 2: 192.168.132.101 -> To server iSCSI NIC 3 (on an example IP of 192.168.132.110)
iSCSI port 3: 192.168.133.101 -> To server iSCSI NIC 4 (on an example IP of 192.168.133.110)
Controller 1:
iSCSI port 0: 192.168.134.102 -> To server iSCSI NIC 5 (on an example IP of 192.168.134.110)
iSCSI port 1: 192.168.135.102 -> To server iSCSI NIC 6 (on an example IP of 192.168.135.110)
iSCSI port 2: 192.168.136.102 -> To server iSCSI NIC 7 (on an example IP of 192.168.136.110)
iSCSI port 3: 192.168.137.102 -> To server iSCSI NIC 8 (on an example IP of 192.168.137.110)
I left controller 1 on the ".102" host addresses to make changing back to just 4 subnets easier in the future.

Similar Messages

  • NetApp direct connect to UCS best practices

    Folks,
    I have installed many FlexPods this year, but all involved either Nexus 5Ks or 7Ks with vPC. This protects the NFS LUN connections from both network outages and UCSM FI outages. What is not clear to me is the best practice for connecting a NetApp directly to the UCS via appliance ports. Appliance ports seem like a great idea, but they seem to add issues to designs, both in VMware and in the network.
    Does anyone have a configuration example covering the NetApp, UCS & VMware sides?
    I thought I would ask the group their opinion.
    Cheers,
    David Jarzynka

    Hi David
    Can you clarify your last posting a little bit?
    I have never installed direct-attached NetApp storage on UCS either.
    One drawback I see: if you have your NetApp system connected to the FIs in active/standby mode, then, for example, all servers connect through FI A to the NetApp system (the NIC with the VMkernel is active on FI A for all servers). If the NetApp link fails and switches over to the standby link on fabric B, all the traffic will go from the server to FI A, to the uplink switch, to FI B, and then to the NetApp system - because the servers are not aware that the NetApp system did a failover. Not all companies will have 10G uplink switches, so this will be a bottleneck in that case.
    What other things do you see? I agree completely with you - everyone says it is a wonderful feature on the slides - but I don't think it's that smart in practice.
    Thanks for a short reply.
    Cheers
    Patrick

  • SAP BI4 SP2 Patch 7 Webi Connection to BW Best Practice

    We are working with version 4.0 SP2 Patch 7 of BI4 and developing some reports with WebI, and we are wondering which is the best method to access BW data.
    At the moment we are using BICS, because we have read in more than a few places that this is the best method to consume BW data, since it brings improvements in performance, hierarchies, etc., but I don't know if this is really true.
    Is BICS the best method to access BW data? Is this the way recommended by SAP?
    In the filter panel of a WebI document we can't use an "OR" clause - is it not possible to use this clause?
    When we work with hierarchies and change the hierarchy for the dimension value, or vice versa, the report throws an AnswerPrompts API error (30270).
    When we work with BEx queries containing variables and try to merge a variable with a report prompt (from another query), executing the queries shows an error indicating that one prompt has no value.
    Has anyone experienced these problems too? Has anyone found solutions to these issues?
    Best Regards
    Martin.

    Hi Martin
    In BI 4.0, BICS is the method to access BW, not universes; .UNV universes based on BW are there for legacy support.
    Please look at this forum thread with links on best practices for BI 4.0 and BW; if you do a search on SDN you can find many threads on this topic.
    How to access BEx directly in WEBI 4.0
    Regards
    Federica

  • Unity Connection 7.x - Best Practice for Large Report Mailboxes?

    Good morning. We have 150 mailboxes that nurses use to give shift reports. The mailbox quota is 60MB and the message aging policy is on: deleted messages are purged after 14 days. The message aging policy is system-wide, and increasing the quota would cause storage issues. Is there a way to keep the message aging policy but reduce it for one group of users? Is there a way to bulk-administer the mailbox quota changes?
    Version 7.1.3ES9.21004-9
    Thanks

    As for UC 8.x, you're not alone. I don't typically recommend going to an 8.0 release (no offense to Cisco). Let things get vetted a bit and then start looking for the recommended stable version to migrate to.
    As for bulk changes to mailbox store configurations for users, Jeff (Lindborg) may correct me if I am wrong here, but with the given tools I don't think there is a way to bulk edit or update the mailbox info for users (i.e., turn the Message Aging Policy on/off). There is no access to those values via Bulk Edit, and no associated fields in the BAT format either.
    Now, with that said - no one knows better than Lindborg when it comes to Unity.  So I defer to him on that point.
    Hailey
    Please rate helpful posts!

  • Error while Connecting report Best Practices v1.31 with SAP

    Hello experts,
    I'm facing an issue when trying to connect some of my reports from Best Practices for BI to SAP.
    It only happens with InfoSets; the reports that use SAP tables go smoothly, without a problem.
    Most interestingly, I already have one of the reports connected to SAP InfoSets.
    I have already verified the document describing the steps for creating the additional database that comes with the BP pack. They seem OK.
    Here is what Crystal Reports throws at me after changing the data source to SAP:
    For the report "GL Statement" (one of the Financial Analysis reports), which uses InfoSet /KYK/IS_FIGL_I3:
    - Failed to retrieve data from the database - click OK, then...
    - Database connector error: no variant was specified for the fiscal year (something like this, after translating) - click OK, then...
    - Database connector error: RFC_INVALID_HANDLE
    For the report "Cost Analysis: Planned vs. Actual Order Costs" (another of the Financial Analysis reports), which uses InfoSets ZBPBI131_INFO_ODVR and ZBPBI131_INFO_COAS, and also the query CO_OM_OP_20_Q1:
    - Failed to retrieve data from the database - click OK, then...
    - Database connector error: check class for selections raised errors - click OK, then...
    - Database connector error: RFC_INVALID_HANDLE
    Note: those "Z" InfoSets are already created in the SAP environment.
    The one that works fine is one of the Purchasing Analysis reports:
    - Purchasing Group Analysis -> InfoSet: /KYK/IS_MCE1
    I'm kind of lost on how to solve this, because I'm not sure whether the problem is in the SAP JCo or in some parameter that was set wrongly in SAP, and I have already checked possible solutions for both.
    Thanks in advance,
    Carlos Henrique Matos da Silva - SAP BusinessObjects BI - Brazil.

    I re-checked step 3.2.3 - Uploading Crystal User Roles (transaction PFCG) - of the manual, where it talks about the CRYSTAL_ENTITLEMENT and CRYSTAL_DESIGNER roles. I noticed in the Authorizations tab that the status said the profile hadn't been generated and there was a yellow sign, so I generated it, as the manual says.
    Both statuses now say "Authorization profile is generated" and the sign on the tab is now green.
    I had another issue in the User tab (it was yellow, like the Authorizations tab before generating); all I needed to do to turn it green was run a user comparison (User Comparison button).
    After all that, I tried once more to refresh the Crystal report, and the same error messages are still thrown.
    There's one more issue in one of the tabs of the PFCG transaction: the Menu tab has a red sign, but the manual says nothing about it. I just have a folder called "Role menu" with nothing in it.
    Could that be the reason why I'm getting errors when connecting the reports to SAP InfoSets? (Remember, one of my reports that is connected to an InfoSet works fine.)
    Thanks in advance,
    Carlos Henrique Matos da Silva - SAP BusinessObjects BI - Brazil.

  • BEST PRACTICES FOR CREATING DISCOVERER DATABASE CONNECTION -PUBLIC VS. PRIV

    I have enabled SSO for Discoverer, so when you browse to http://host:port/discoverer/viewer you get prompted for your SSO username/password. I have enabled users to create their own private connections. I logged in as portal and created a private connection. Then, from Oracle Portal, I created a portlet and added a Discoverer worksheet using the private connection that I created as the portal user. This works fine: users access the portal and can see the worksheet. When they click the analyze link, though, they are prompted to enter a password for the private connection. The following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or because the public connection password was invalid. Please enter the correct password now to continue.
    I originally created a public connection and then followed the same steps from Oracle Portal to create the portlet and display the worksheet. The worksheet is displayed properly from Portal, and when users click the analyze link they are taken to Discoverer Viewer without having to enter a password. The problem with this is that when any user with an SSO account browses to http://host:port/discoverer/viewer and enters their SSO information, they can see the public connection - very insecure!
    When private connections are used, no connection information is displayed to SSO users when they log into Discoverer Viewer.
    For the very first step, when editing the worksheet portlet from Portal, I enter the following for Database Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is: what are the best practices for creating Discoverer database connections?
    Is there a way to create a public connection but not display it at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    Overall, I want roughly 40 users to have access to my Portal page group. I then want to display portlets with Discoverer worksheets. For certain worksheets I want the ability to display the analyze link; when the SSO user clicks on it, they will be taken to Discoverer Viewer and prompted for no logon information. All SSO users will see the same data - there is no need to restrict access based on SSO username - and 1 database user will be set up in either the public or private connection.

    You can make it happen by creating a private connection for the 40 users with a CAPI script, and when creating the portlet, selecting the 2nd option in the Users Logged In section. That way the portlet uses each user's own private connection every time they log in, so it won't ask for a password.
    Another thing: there is an option for requiring the password or not in ASC, in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks
    Kiran

  • What are the best practices to connect 30-40 iPads to Wi-Fi in a single room?

    What are the best practices to connect 30-40 iPads to Wi-Fi in a single room?

    I don't use it but it does say this in the help section...

  • [XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

    Business Objects XI (3.1) - SP3.
    Running on Red Hat Enterprise Linux OS.
    7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
    All reports connecting to Oracle 10g databases.
    ==================
    In the past, all of this infrastructure ran on Windows Server and provided database access via a named ODBC connection (e.g. "APP_DATA").
    This made it easy to manage, as all the report developers had a standard System DSN called "APP_DATA", which was the same as the System DSN name on all of our DEV, TEST/UAT, and PROD Business Objects servers.
    When we wanted to move/promote a *.rpt file from DEV to PROD, we did not have to change any "Database Connection" info: it was all taken care of by pointing the System DSN called "APP_DATA" at a different physical Oracle server at the ODBC level.
    Now that hardware is moving from Windows to Red Hat Linux, we are trying to determine the best practices (and pros/cons) of using one of the three methods below to access the Oracle database for our *.rpts:
    1.) Oracle Native connection
    2.) ODBC connection
    3.) JDBC connection
    Here's what we have determined so far -
    1a.) An Oracle native connection should be the most efficient method of passing a SQL query to the DB, with the fewest issues and the best speed [PRO]
    1b.) The Oracle native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
    1c.) Using Oracle native would require special handling of the *.rpts at either the source-file or the CMC level to change them from DEV -> TEST -> PROD connections. This would mean a lot more developer/admin overhead than they are currently used to. [CON]
    2a.) A 3rd-party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would let us keep a developer/admin overhead similar to what we are used to. [PRO]
    2b.) Adding a 3rd-party vendor into the mix may lead to support issues if we have problems with the results or speed of our queries. [CON]
    3a.) JDBC appears to be the "de facto standard" for running Oracle SQL queries from Linux. [PRO]
    3b.) There may be issues with the results or speed of our queries when using JDBC. [CON]
    3c.) Using JDBC requires the explicit IP of the Oracle server to be defined for each connection. This would require special handling of the *.rpts at the source-file level (and NOT the CMC level) to change them from DEV -> TEST -> PROD connections. This would mean a lot more developer/admin overhead than they are currently used to. [CON]
    ==================
    We would appreciate some advice from anyone who has been down this road before.
    What were your Best Practices?
    What can you add to the Pros and Cons listed above?
    How do we find the "sweet spot" between the quality/performance/speed of the reports and low overhead for the admins and developers?
    As always, thanks in advance for your comments.

    Hi,
    I just saw this article and I would like to add some info.
    First, you can quite easily reproduce the same way of working as with the ODBC entries by playing with Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server behind a specific name that stays the same across all environments.
    The database name will then be resolved differently depending on the environment, and will therefore point to a different database.
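    For example, a tnsnames.ora entry might look like the sketch below (hostname and service name are placeholders): the alias APP_DATA stays identical on every Business Objects server, while the HOST it resolves to differs per environment.

        APP_DATA =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = dev-oracle.example.com)(PORT = 1521))
            (CONNECT_DATA = (SERVICE_NAME = APPDATA))
          )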
    The second option is to change the connection in the .rpt files in an automated way, for example with the Schedule Manager. This tool is an additional web application to deploy that can change the connection settings of thousands of rpt reports in a few clicks. You can find it here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
    The last option is to do it with a small SDK script; for this purpose, a few lines of code can change all the reports in one pass.
    After some implementations on Linux against Oracle databases, I would also prefer the native connection. ODBC and JDBC are deprecated ways of connecting to the database. You can use DataDirect connectors, which are quite good, but at volume you will see the difference.

  • External connectivity best practice

    Hi,
    I would like to know the best practices for having external users connect to an SSAS cube using Excel. I understand the general concept that a cube user must be in an AD domain and that the same user is made a member of the SSAS cube.
    How can I ensure the Excel connection is secure and that each external user is only able to view their own data? I know the workbook can be password-protected, but can the connection also be password-protected?
    BTW, I am not sure if this is the right forum for this question
    Regards

    Most modern external drives will typically go into a standby mode shortly after they're unmounted or disconnected, which spins down the drive. They'll still use some power, but the drive won't be doing anything, so you don't need to worry about wearing it out. I'm a bit anal about this stuff: I usually go so far as to unplug my external drives when I'm not using them, as the power supply still consumes some juice. But in your case, as it seems like you go back and forth quite a bit, just powering it down would be the best thing. It really doesn't take any extra effort to switch it back on again when you connect your MacBook Pro.

  • Best Practice - WAP connecting switchport configuration.

    Is there a best practice for deploying the WAPs in a WAP/WLC infrastructure? Should the connecting switchport be an access port or a trunk port? I've seen this implemented both ways and wasn't sure if one is a better choice than the other. What is the difference?
    My other question is about applying additional switchport configuration. Is there anything wrong with applying spanning-tree portfast, spanning-tree bpduguard, or switchport port-security?

    Hi Ken,
    Access port all the time, everywhere, UNLESS the AP is configured for H-REAP/FlexConnect - then trunk. Or if you deploy an AP in monitor mode, then trunk.
    QoS: if it's an access port, trust DSCP. If you trunk, trust CoS.
    No, you are fine. Portfast is highly recommended.
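    To tie that advice together, here is a sketch of the AP-facing port for a local-mode AP on Cisco IOS (the interface, VLAN, and QoS trust command are assumptions; the exact trust syntax varies by platform):

        interface GigabitEthernet1/0/10
         description local-mode AP
         switchport mode access
         switchport access vlan 20
         mls qos trust dscp
         spanning-tree portfast
         spanning-tree bpduguard enable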
    "Satisfaction does not come from knowing the solution, it comes from knowing why." - Rosalind Franklin
    ‎"I'm in a serious relationship with my Wi-Fi. You could say we have a connection."

  • Best practice for RAC connections

    Got a question about what people consider best practice for setting up high-availability connection pools to a RAC cluster. Now that you can specify the failover logic right in the thin connection string, there seem to be three options:
    A) Use OCI connections and allow the fail-over logic to be maintained in the TNSNAMES.ORA file.
    B) Use simple thin connections with multi-pools and let WebLogic maintain the fail-over logic.
    C) Use simple thin connections with fail-over logic in the connection string (sketched below).
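    For reference, a sketch of option C - the failover logic embedded in the thin URL's descriptor (hosts and service name are placeholders):

        jdbc:oracle:thin:@(DESCRIPTION=
          (ADDRESS_LIST=(LOAD_BALANCE=on)(FAILOVER=on)
            (ADDRESS=(PROTOCOL=TCP)(HOST=rac-node1)(PORT=1521))
            (ADDRESS=(PROTOCOL=TCP)(HOST=rac-node2)(PORT=1521)))
          (CONNECT_DATA=(SERVICE_NAME=rac_service)))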
    Thanks,
    Rodger...

    If you need XA, then follow the WebLogic documentation. If not, then you have much more freedom. The thin driver can be configured to use the tnsnames.ora file if that helps you. WebLogic much prefers the thin driver to the OCI-based one, which can kill a JVM with OCI bugs.
    If you do driver-level failover, each failed connection will cost a test and replace. If you use multipools, WLS can be configured to flush a whole pool when it finds a connection bad, and also make the failover at the pool level, right then, so application delay is minimized.
    Joe

  • LUN size best practice for UC apps and VMware?

    Hi,
    We have UCS Manager v2.1 with FI 6248, direct FC-attached to a NetApp with plenty of storage.
    Per the following doc, the LUN size for UC apps should be 500GB - 1.5TB, with 4 to 8 VMs per LUN.
    http://docwiki.cisco.com/wiki/UC_Virtualization_Storage_System_Design_Requirements#Best_Practices_for_Storage_Array_LUNs_for_Unified_Communications_Applications
    We have four B200 M3 blades, and 3 to 4 UC apps (CUCM, Unity, UCCX) will be hosted on each blade. We may add more VMs to the blades in the future.
    I am thinking four 1TB LUNs, one for each blade (actually 8 LUNs in total: 4 boot LUNs for ESXi and 4 for the UC apps).
    What is the best practice (or common deployment) for LUN sizing and design?
    Thanks,
    Harry
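    Checking the proposed layout against the docwiki guidance above (a sketch in Python using the post's numbers):

        blades = 4
        vms_per_blade = 4        # 3-4 UC apps per blade, worst case
        boot_luns = blades       # one ESXi boot LUN per blade
        data_luns = blades       # one 1TB app LUN per blade
        print(f"total LUNs: {boot_luns + data_luns}")   # 8, as proposed
        print(f"VMs per data LUN: {vms_per_blade}")     # within the 4-8 guideline
        print("1TB LUNs sit inside the 500GB-1.5TB recommended range")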

    UC apps need low IO, nothing special; standard VMware LUN design guidance is fine.

  • Best Practices for Connecting to WebHelp via an application?

    Greetings,
    My first post on these forums, so I apologize if this has already been covered (I've done some limited searching without success). I'm developing a .NET application which accesses my organization's RoboHelp-generated WebHelp. My organization's RoboHelp documentation team is still new to the software, so it's been up to me to chart the course for establishing the workflow for connecting to the help from the application. I've read up on Peter Grange's 'Calling WebHelp' section on his blog, but I'm still a bit unclear about what the best-practices approach for connecting to WebHelp might be.
    To date, my organization has been delayed in letting me know their TopicIDs or MapIDs for their various documented topics. However, I have been able to acquire the relative paths to those topics (I achieved this by manually browsing their online help and extracting the paths). I've been able to use the strategy of creating the link by constructing a URL (following the syntax "<root URL>?#<relative URI path>", alternating with "<root URL>??#<relative URI path>"). It strikes me, however, that this approach is somewhat of a hack, since RoboHelp provides other approaches to linking to documentation via TopicID and MapID.
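    For what it's worth, a minimal sketch of that URL strategy (Python; the start page and topic path are hypothetical):

        from urllib.parse import quote

        def webhelp_url(start_page: str, topic_path: str) -> str:
            # "<root URL>?#<relative URI path>" as described above;
            # start_page is the WebHelp start page, e.g. ".../WebHelp/index.htm"
            return f"{start_page}?#{quote(topic_path)}"

        print(webhelp_url("http://example.org/docs/index.htm",
                          "topics/getting_started.htm"))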
    What is the recommended/best-practices approach here? Are they all equally valid, or are there pitfalls I'm missing? I'm inclined to use the URL methodology I've established above, since it works for my needs so far, but I'm worried that I'm not seeing the forest for the trees...
    Regards,
    Brett
    contractor to the USGS
    Lakewood, CO
    PS: we're using RoboHelp 9.0

    I've been giving this some thought over the weekend, and this is the best answer I've come up with from a developer's perspective:
    (1) Connecting via URL is convenient if you have an established naming convention that works for everyone (as Peter mentioned in his reply above).
    (2) Connecting via URL has the disadvantage that changes to the file names and/or folder structure by the author will break connectivity.
    (3) Connecting via TopicID/MapID has the advantage that if there is no naming convention, or if it's fluid or under construction, the author can keep an ID stable after changing his/her file or folder structure and still maintain the application connectivity. Another approach to solving this problem when working with URLs would be to set up a web service that matches file addresses to some identifier used by the developer (basically a TopicID/MapID coming from the other direction).
    (4) Connecting via TopicID has an aesthetic appeal in the code, since it's easy to provide a more English-readable identifier. As a .NET developer, I find it easy and convenient to construct an enum that matches my TopicIDs and to use that enum to build my identifier when it comes time to make the documentation call.
    (5) Connecting via URL is more convenient for the author, since he/she doesn't have to worry about maintaining IDs.
    (6) Connecting via TopicIDs/MapIDs forces the author to maintain those IDs, and it allows the documentation to be more easily used in the future by other applications built by developers who might have their own preference, one way or the other, as to how they make their connection.
    Hope that helps for posterity. I'd be interested if anyone else has thoughts to add.
    -Brett

  • What are the best practice JCO Connection Settings for a DC Project

    When multiple users are using the system, data is missing from the Web Dynpro screens. This seems to be due to running out of connections to pull data.
    I have a Web Dynpro project based on component development using DCs. I have one main DC which uses other DCs as lookup windows. All DCs have their own apps. Also, inside the main DC screen, the data is populated from multiple function modules.
    There are about 7 lookup DC apps accessed by the user.
    I have created JCo destinations with the following settings:
    Max Pool Size 20
    Max Number of Connections 200
    Before I moved to the DC project, it was a regular Web Dynpro project with one application, and all the lookup windows were inside the same project. I never had this issue with the same settings.
    Now, maybe because of the DC usage and the increase in applications, I am running out of connections.
    Has anyone faced this problem? Can anyone suggest the best practice for sizing JCo connections?
    It does not make any sense that I am seeing this issue with just 15-20 concurrent users (see the rough math below).
    All lookup components are destroyed after use and are created manually as needed. What else can I do to manage connections?
    Any advice is greatly appreciated.
    Thanks
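    A rough back-of-the-envelope check of the numbers above (a sketch; the assumption is that each open app holds at least one pooled connection per user):

        concurrent_users = 20   # "15-20 concurrent users" from the post
        apps_per_user = 1 + 7   # one main DC app plus ~7 lookup DC apps
        peak = concurrent_users * apps_per_user
        print(f"worst-case concurrent connections: {peak}")  # 160, close to the 200 cap
        print(f"per-destination demand: {concurrent_users} vs. Max Pool Size 20")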

    Hi Ravi,
    Try going through this blog; it's very helpful.
    [Web Dynpro Best Practices: How to Configure the JCo Destination Settings|http://www.sdn.sap.com/irj/scn/weblogs;jsessionid=(J2EE3417600)ID2054522350DB01207252403570931395End?blog=/pub/wlg/1216]
    Hope It will help.
    Regards
    Jeetendra

  • SAP Upgrade from 4.7 to ECC 6.0 connected to BW 7.0 Best Practices

    We are upgrading SAP R/3 4.7 to ECC 6.0. We have been running live in a BW 7.0 environment. We have made some enhancements to the 2LIS_11_VAITM (Sales Document Item Data) and 2LIS_13_VDITM (Billing Document Item Data) DataSources. We currently have a test instance that has been upgraded to ECC 6.0.
    What are the best business practices for testing BW, to ensure that data transfer and the enhancements are working correctly?
    E.g. should we connect the ECC 6.0 instance to BWD and test there, or upgrade the R/3 TST that is connected to BWD and test there, or upgrade QAS and test in BWQ?
    Thanks in advance . . .

    Hi RWC,
    The plug-in will change slightly; you may notice differences in the screens of RSA2 and others after the upgrade.
    Regarding best practices: in a recent upgrade, our project team decided to create a parallel landscape with a new, additional Dev and QA on the R/3 side.
    We connected these new systems to a BW sandbox and the BW QA.
    We identified all DataSources, transfer rules, and InfoPackages in use in production and recorded them, together with related objects, onto transports in BW Dev. Before the import into the BW sandbox and BW QA, we adjusted the system-name conversion table to convert from the old R/3 Dev to the new R/3 Dev, in order to set up all the required connections for testing with the upgraded R/3 systems.
    After the go-live of the upgrade, we renamed the old R/3 Dev system in BW Dev and ran BDLS to convert everything (speak to your Basis team). That way we made sure not to lose any development, and we got rid of the old R/3 Dev system.
    Take a look at this post for issues we encountered during this project, and test everything you load into production.
    Re: Impact on BI 7.0 due to ECC 5.0 to ECC 6.0 Upgrade
    Best,
    Ralf
