Adding 2 New Sites between 2 Data Centers

I have a current site installed with default settings (no additional sites/subnets/site links), and we are planning on creating 3 new DCs in our remote data center, which will be the second site. I want to separate desktop traffic between the 2 data centers. My question is, what is the best way to do this? I want to keep the Default-First-Site-Name site as a catch-all for any subnets that may be missed.

1.  Why do you wish to keep clients from accessing the DCs in the data center? What harm do you think may occur?
2.  A two-site topology with a site link associating the two sites won't accomplish what you want by itself, but it can *reduce* the amount of authentication traffic crossing from one site to the other.
3.  Keeping Default-First-Site-Name is fine - it won't hurt anything - but using it as a 'catch-all' for IP subnets not registered in AD indicates poor IP address management. You should address that issue first.
4.  It might be possible to completely eliminate client authentication traffic in one site from reaching another site by creating three sites in series (Site A <-> Site B <-> Site C) and disabling "Bridge all site links". With site link bridging disabled, the connection from Site A to Site C is no longer transitive, and clients that are members of Site A will not send authentication requests to DCs in Site C (see the sketch below). However, that's a lot of hoops to jump through just to address a problem I'm not sure is all that important.
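
For reference, here is a minimal PowerShell sketch of the site/subnet/site-link setup described above, using the ActiveDirectory module. The site, subnet, and link names are hypothetical placeholders, and the options bit used to disable "Bridge all site links" on the IP transport is an assumption you should verify before changing anything:

# Minimal sketch, assuming the ActiveDirectory (RSAT) module is available.
Import-Module ActiveDirectory

# Create the second site and map a subnet to it (hypothetical names/ranges).
New-ADReplicationSite -Name "Datacenter2"
New-ADReplicationSubnet -Name "10.2.0.0/16" -Site "Datacenter2"

# Link the new site to the default site over the IP transport.
New-ADReplicationSiteLink -Name "DC1-DC2" `
    -SitesIncluded "Default-First-Site-Name","Datacenter2" `
    -Cost 100 -ReplicationFrequencyInMinutes 15 `
    -InterSiteTransportProtocol IP

# Disabling "Bridge all site links" sets bit 0x2 of the options attribute
# on the IP inter-site transport object (assumed bit value - verify first).
$ip = Get-ADObject -Identity ("CN=IP,CN=Inter-Site Transports,CN=Sites," +
      (Get-ADRootDSE).configurationNamingContext) -Properties options
Set-ADObject -Identity $ip -Replace @{ options = ($ip.options -bor 2) }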
-ds
David Shaw [MSFT]

Similar Messages

  • Requirements for delay and bandwidth when using OTV on Nexus 7000 between two data centers separated by 25 miles?

    We have two Nexus 7000s, and I need to use them with OTV between two data centers separated by 25 miles, but I don't know the optimal bandwidth and delay (ms) values for the extended VLAN IDs (production and DAG replication) in a Microsoft Exchange environment. Can somebody please tell me what values are required to operate OTV under optimal conditions in this case? We have about 35,000 users who will use that email platform. Thanks a lot for your comments. Regards.

  • Core switches between 2 data centers

    The existing data center has 2 x Cisco 6500 series core switches; the new data center has 2 x Cisco Nexus core switches. The connection between the 2 data centers will use a leased line. To establish the network connection between the two sides, should we use a trunk port with a direct layer 2 connection, or layer 3 IP routing (configuring the IP on the switchport interface)? Which is the better approach for the design? Please share your ideas.

    Normally, when dealing with off-site traffic, L3 is better, as it limits traffic to only what needs to go off-site.
    However, as you're dealing with data centers, it's no longer uncommon to need to share L2 between them. So much depends on what your needs are.

  • WAN VLAN optimization between 2 Data Centers - 4451 Router

    Hello Group gurus,
    I have a slightly odd question.
    We have 2 data centers with a dedicated 1 Gig link between them, and we want to optimize certain VLAN traffic across that link.
    We have 4451 routers installed at each location, with OSPF running so the end subnets know each other; that covers the routing part.
    However, how we can use the UCS-E module in the 4451 to optimize VLAN traffic across the data centers is still an open question.
    I tried to find a document on Google, but I haven't found one specific to my requirement.
    If someone has already set up this type of scenario, please help.
    Thanks in advance

    Hi John, I think it's best to use the right equipment for the job. If you've already got a router in place and you're not in a campus/metro/ISP environment, it's not really prudent to add another router. A simple layer 2 or layer 3 switch can accomplish this and give you plenty of ports at a much better price per port.
    You may want to look into the SG300 series switches if you want something that can handle the routing load and provide an ample number of ports.

  • Adding a new field to an existing DataSource

    Hello,
    If I want to add an extra field to an existing DataSource in SAP R/3, how would you do it? And how would you extract the data into SAP BW? Could you please explain the steps? I would appreciate it.
    Thanks in Advance.
    Sri

    1) Go to RSA6 and find the DataSource you need to enhance --> Display.
    2) Double-click on the extract structure.
    3) Click on the Append Structure button to add the required fields to the existing structure.
    4) Add your required fields, with their names prefixed with ZZ.
    5) Save and activate the append structure. Then go back and make sure you activate the extract structure as well.
    6) Go back to RSA6 and select your DataSource again, but this time choose Change DataSource and remove the Hide option from the enhanced fields. By default they'll be in hide mode; if you don't clear the Hide flag, the fields will not be visible on the BW side.
    7) Go to SE38 to write the logic that populates the enhanced fields. The program in which to write the logic is ZXRSAU01.
    8) Check + Save + Activate.
    9) Check in RSA3 that the data is populated as per your requirement.
    10) Replicate your DataSource.
    11) Go to the DataSource / Transfer Structure screen. You can now see the enhanced fields on the right-hand side.

  • Can portal session cookies be used between two data centers

    OAS generates the following header and session information for my application. However, when I need to fail over the originating OAS data center to my hot standby for maintenance or upgrades, the OAS in the other data center responds with a 503 web error. We are using Akamai's GTM to manage the liveness of the data centers, so we would need the hot-standby OAS portal in that data center to return a 302 code instead. Is there some method we can add to our portal application that would always return a 302 code?
    See the header information collected through wfetch. The 503 error is caused by the hot-standby data center not accepting or recognizing the cookie. Both OAS data centers are IDENTICAL in Oracle levels, application levels, web servers, portals, and OS patches.
    resolve hostname "170.107.183.32"WWWConnect::Connect("170.107.183.32","80")\nsource port: 2182\r\n
    GET /portal/pls/portal/PORTAL.wwsec_app_priv.login?p_requested_url=%2Fportal%2Fpls%2Fportal%2FPORTAL.home&p_cancel_url=%2Fportal%2Fpls%2Fportal%2FPORTAL.home HTTP/1.1\r\n
    Accept: */*\r\n
    Accept-Language: en-us\r\n
    Accept-Encoding: gzip, deflate\r\n
    User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.0.3705; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)\r\n
    Host: www.thomson-pharma.com\r\n
    Connection: Keep-Alive\r\n
    Cookie: ORA_WX_SESSION="10.225.8.30:80-1#2"; portal=9.0.3+en-us+us+AMERICA+3D66674E7EED0801E04400144F41424E+BBAA98EEB32D58C086231A8D6CBE2E5D402D89B0E79D83A18C668BB0CA7417B4044DEA389C8B50DD37D9272A24B4753B22F29978861DE14503F8B9BEDC2014654B26A434CF074F4D8749B88610ADADF5084A90ADBF749E2A; DATACENTER=EAGAN\r\n
    \r\n
    HTTP/1.1 503 Service Unavailable\r\n
    Cache-Control: private\r\n
    Content-Type: text/html\r\n
    Set-Cookie: ORA_WX_SESSION="10.237.138.33:80-1#2"\r\n
    Set-Cookie: portal=; expires=Wednesday, 27-Dec-95 05:29:10 GMT; path=/\r\n
    Connection: Keep-Alive\r\n
    Keep-Alive: timeout=5, max=999\r\n
    Server: Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server OracleAS-Web-Cache-10g/10.1.2.0.2 (N;ecid=208440262161,0)\r\n
    Content-Length: 710\r\n
    Date: Fri, 26 Oct 2007 14:58:07 GMT\r\n
    \r\n
    Thanks -John

    Hi John,
    This question is probably more appropriate in one of the Portal forums, but perhaps you can take a look at the information in section C.5, Configuring the Portal Session Cookie, in Appendix C of the Portal Configuration Guide.
    Here is a link: http://download.oracle.com/docs/cd/B14099_19/portal.1014/b19305/cg_app_c.htm#sthref1907
    Regards,
    Peter

  • Headers are changing when adding a new section between others

    Hi!
    I am compiling a colloquium program. For each lecture abstract, I have made a file with distinct headers. I open the page sidebar, click on one page, and copy it (Command+C). Then I click in the main file, in the page sidebar, on the abstract that should come before the new one, and paste the new abstract (Command+V).
    The new abstract is neatly pasted after the old one, but the headers are changed: it has the previous abstract's headers. Worse, every abstract, each of which is in a distinct section, gets this previous abstract's header, even though I turned off the «Same headers/footers as before» option!
    Is this a bug? Did I do something wrong?

    I may have found an explanation for this problem.
    When I put a one-page document into the master document, every header I had written came back! I think it is a facing-pages problem: when the second page of an inserted document (a left page) becomes an odd page (a right page), it does not take into account the odd-page header of the inserted document, but rather the odd-page header of the section BEFORE the inserted document.
    Does anyone want to test this?
    I hope it will be fixed soon...

  • Adding new site columns to my issue tracking list results in their Source field being blank inside the Issue content type

    I have added an issue tracking list to my SharePoint 2013 team site. Then I wanted to add 5 extra columns to my list, so I did the following:
    I went to site settings.
    Then I added 5 new site columns.
    I went back to my list settings, clicked on the "Issue" content type, and added the 5 newly created site columns to my "Issue" content type, using the "Add from existing site or list columns" link, as follows:
    These columns were then added automatically to the Edit, Create & Display forms.
    But since this is the first time I have worked on such a task, I want to make sure that I did everything correctly, because I am not sure why the 5 newly added site columns have a blank Source field inside the "Issue" content type, as shown in the picture above. Does this indicate that there is a problem? Or is it because I added the 5 new site columns to the list content type and not to the content type at the site level?
    Thanks

    Hi,
    It's by design; the Source field shows the name of the content type a column comes from.
    As you said, when we create a new column and do not attach it to a site content type, the column will not have a Source value.
    However, if you create a new column and then add it to a site content type, the content type is automatically updated in the site, and you will then see a source content type appear.
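    If you'd rather attach an existing site column to the list content type with server-side PowerShell, here is a minimal sketch, assuming an on-premises farm; the web URL, list name, and column name are hypothetical placeholders:

    # Minimal sketch (assumes the SharePoint server-side snap-in;
    # the URL and names below are placeholders).
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $web  = Get-SPWeb "http://teamsite"             # hypothetical URL
    $list = $web.Lists["Issue Tracker"]             # hypothetical list name

    # Look up the existing site column and the list's "Issue" content type.
    $field = $web.AvailableFields["My New Column"]  # hypothetical column
    $ct    = $list.ContentTypes["Issue"]

    # Attach the site column to the list content type and save.
    $ct.FieldLinks.Add((New-Object Microsoft.SharePoint.SPFieldLink($field)))
    $ct.Update()
    $web.Dispose()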
    Thanks & Regards,
    Jason
    Jason Guo
    TechNet Community Support

  • Search does not crawl new site collection and documents

    We have the following situation: two locations with different farms sharing the same databases (using AlwaysOn for the content databases). Everything works fine, and the second site is read-only while the primary farm is online. For existing databases, the search crawler on the second site is able to crawl existing site collections.
    For new site collections created on the first farm, the crawler on the first farm indexes the content properly. The second farm, though, is not aware of the content unless you force it to re-enumerate the content database. After this procedure the sites are available on the second site as well (they show in the web browser), but the search farm is still not able to see the new site collection and the data created within it.
    Is there any additional step we have to go through to make the crawler aware of the new structure/content?
    Thanks in advance, Jens

    Nope.
    Change log wipes are real; that's how incremental crawls work in SharePoint.
    Site A is created and modified. Changes are mirrored to the second AG, content is added and logged in the change log, and the log entries are then removed as the crawler on the primary farm indexes them.
    This continues until you make farm 2 aware of the changes. At that point farm 2 will look for any changes to the content in the change logs on the newly added sites, which will be empty, or at least will not contain any changes since the primary farm's last crawl.
    That explains why you don't get sites indexed properly when they are added, but it would also explain why some content is indexed afterwards, which I believe is the case?
    The second issue you'll find is that the crawls won't synchronise. Assuming continuous crawls kicking off at the same time, you'll end up in a race between the two. If the primary farm is quicker, the second farm will continuously fall behind, then catch up and go ahead of the primary indexing process; but if the secondary farm is faster, it'll race off into the distance, and any changes that occur between the secondary farm indexing a site and the primary indexing it will be lost on the secondary farm.
    You'll have to run full crawls (a short PowerShell sketch follows below). Unless MS have done a lot of work on the supporting infrastructure, incremental or continuous crawls of AOAGs won't work well.
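    A minimal sketch for kicking off a full crawl from PowerShell; the content source name below is the out-of-the-box default and may differ in your farm:

    # Minimal sketch (assumes the SharePoint server-side snap-in).
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Grab the Search service application and the content source to crawl.
    $ssa = Get-SPEnterpriseSearchServiceApplication
    $cs  = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa `
               -Identity "Local SharePoint sites"

    # Start a full crawl of that content source.
    $cs.StartFullCrawl()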

  • Scheduled system maintenance on AU and EU data centers - March 8th 2015

    To ensure the highest levels of performance and reliability, we've scheduled a database server upgrade on our AU and EU data centers. To minimize customer impact, the upgrade is scheduled at the most convenient hours for those regions and will take up to 6 hours to complete. During the maintenance procedure, partner registration, trial site creation, publishing from Muse, sFTP, APIs, and some site admin sections will not be available for sites on all data centers (including US). Additionally, during the 6-hour maintenance procedure, all sites on the AU and EU data centers will experience 2 sessions of 1-minute downtime. Except for the 2 scheduled downtime sessions, the website front-ends will not be impacted by the maintenance.
    Maintenance schedule:
    Start date and time: Sunday, March 8th, 6:00 AM UTC (check data center times)
    Duration: We are targeting a 6-hour maintenance window
    Customer impact:
    Partner registration, Trial site creation, Adobe Muse publish, APIs, FTP and some admin sections will not be available through the entire maintenance window on all data centers
    All websites and services on the AU and EU data centers will experience 2 sessions of 1-minute downtime at some point within the maintenance window
    For up to date information about system status, check the Business Catalyst System Status page. We apologize for any inconvenience caused by these service interruptions. Please make sure that your customers and team members are made aware of these important updates.
    Thank you for your understanding and support,
    The Adobe Business Catalyst Team

    Congrats to Noah and Mr. X!
    System Center Technical Guru - March 2015
    Noah Stahl
    Make System Center Orchestrator Text Faster than a Teenager using PowerShell and Twilio
    Ed Price: "Wow, I love the breakdown of sections. As Alan wrote in the comments, "Wow! Great article!""
    Mr X
    How to educate your users to regularly reboot their Windows computers
    Ed Price: "I love the table and use of code snippets and images! Great article!"
    Ed Price, Azure & Power BI Customer Program Manager (Blog, Small Basic, Wiki Ninjas, Wiki)
    Answer an interesting question? Create a wiki article about it!

  • Error while adding a used relationship between the New DC and the Web DC

    Hi Gurus,
    We are getting an error in NWDS while adding a used relationship between the new DC and the Web DC.
    Steps we have done:
    1. Created the custom project from inactive DCs.
    2. Created the project for the component crm/b2b in SHRAPP_1.
    3. After that we changed the application.xml and set the context path.
    4. Then we tried to add a dependency to the custom-created DC, and we get an error saying: illegal dependency: the compartment sap.com_CUSTCRMPRJ_1 of DC sap.com/home/b2b_xyz(sap.com.CUSTCRMPRJ_1) must explicitly use compartment sap.com_SAP-SHRWEB_1 of DC sap.com/crm/isa/.
    So we skipped this step and tried to create the build, but it says the build failed.
    Please help us in this regard.
    Awaiting your quick response...
    Regards
    Satish

    Hi
    Please ignore my above message.
    Thanks for your response.
    After your valuable inputs, we added the required dependencies and successfully created the projects; the projects also built successfully, and an EAR file was created.
    We need to deploy this EAR file to the CRM application server by using the NWDI interface.
    To deploy the EAR via NWDI, we need to check in the activities I created for the EAR. Once I check in the activities, NWDI will deploy the EAR to the CRM application server.
    In the activity log we can see the activities marked as Succeeded, but the Deployment column is not showing any status.
    When I right-click on my activity ID, the deployment summary is also disabled.
    So finally my question is: where can I get the deployment log file, and where can I check the deployment status for my application?
    Any pointers in this regard would be of great help.
    Awaiting your valuable responses.
    Regards
    Satish

  • Issues with writing back data to a planning cube after adding a new characteristic

    Hello,
    We have two cubes and a MultiProvider on top of them (a reporting cube and a planning cube). An aggregation layer has also been built, as has a planning BEx query on top of it.
    Before scenario: things were working fine, as we were able to write data back to the planning cube using the BEx query.
    After scenario: we created a new dimension in the planning cube and added a new characteristic (let's call it ZX_KEY). This characteristic holds the concatenated values of some of the other characteristics in its master data tables (for example: Account_Customer ID_Country-code_Industry-code).
    This new characteristic is also present in the reporting cube and has been added and transferred to the MultiProvider.
    The new characteristic was added to the planning BEx query as one of the rows, and we executed the query in the Analyzer. After entering the required values for all characteristics (including the newly added ZX_KEY) and the key figure values, we tried saving. This is where the Analyzer throws the following error:
    ~~Characteristic combination cannot be assigned to part provider~~
    ~~Characteristic 'ZX_KEY'; characteristic value 'R123000000_12_US_TX'~~
    ~~Entered values are incorrect: correct before navigation~~
    The value we are entering (R123000000_12_US_TX) is picked from the master data table of ZX_KEY by double-clicking and selecting it from the fetched values, so I am not sure why it throws the above error. Request your help on this, please.
    regards,
    Karthik

    Hi,
    Try checking that all characteristics are correctly assigned at the MultiProvider level.
    Hope it helps.

  • Adding a new field to the Address Data for a business partner

    Hi Experts,
    I am trying to add a new custom field to the address data (all structures and tables) linked to a business partner in SAP CRM via EEWB. The structure is the address structure within BUS_EI_EXTERN; the table is BUT020. I have been told that this is not possible, as there is no business object that allows it. When using EEWB, the only business object is BUPA, which, when selected, adds the new custom field to BUT000. I would like the field to be added to BUT020 (the address table). This leads me to believe that there is no standard way of doing this, which ultimately means it would need to be done manually. Please help me with this predicament.
    Regards
    Yusuf

    The search help exit allows you to modify the functionality of a search help. If you add a new field to the parameter list that is not contained in the selection method, you can populate it manually within the search help exit.
    This is done within the 'STEP DISP' section. Once within this section, all search help data has been retrieved and is stored in table RECORD_TAB (record_tab-string) as one long string value. You therefore need to read table SHLP in order to locate the position of a value within the string.
    Example:
    To find the position of the personnel number (PERNR) within the elementary search help M_PREMN, you would use the following code:
    * Declarations assumed by the snippets below (added for completeness).
    DATA: wa_shlp   TYPE dfies,
          ld_orgeh  TYPE pa0001-orgeh,
          ld_endda  TYPE pa0001-endda,
          ld_orgtxt TYPE t527x-orgtx.

    LOOP AT record_tab.
      READ TABLE shlp-fielddescr INTO wa_shlp
           WITH KEY tabname   = 'M_PREMN'
                    fieldname = 'PERNR'.
    You could then use this information in the following way, for example, to find a person's organisation unit:
      SELECT orgeh endda
        UP TO 1 ROWS
        FROM pa0001
        INTO (ld_orgeh, ld_endda)
        WHERE pernr EQ record_tab-string+wa_shlp-offset(8)  "PERNR length is 8
        ORDER BY endda DESCENDING.
      ENDSELECT.
      SELECT SINGLE orgtx
        FROM t527x
        INTO ld_orgtxt
        WHERE orgeh EQ ld_orgeh AND
              sprsl EQ sy-langu AND
            ( endda GE sy-datum AND
              begda LE sy-datum ).
    If you have added a new field to the end of the parameter list, the next step is to populate it by appending this data to the end of the record_tab string:
      CONCATENATE record_tab-string ld_orgtxt INTO record_tab-string.
      MODIFY record_tab.
    ENDLOOP.

  • Dreamweaver CC Mac OSX 10.9 Crashes when adding new site

    I just recently updated my OS X to 10.9 (Mavericks) on my MacBook Pro (2012), and now I cannot add new sites using the Site Manager. I tried doing this with both the New Site... and Manage Sites... menu commands. When I select the root folder for the site in my local directory, Dreamweaver shuts down and produces the error report below.
    This error appears:
    Process:         Dreamweaver [1494]
    Path:            /Applications/Adobe Dreamweaver CC/Adobe Dreamweaver CC.app/Contents/MacOS/Dreamweaver
    Identifier:      com.adobe.dreamweaver-13.1
    Version:         13.1.0.6443 (13.1.0)
    Code Type:       X86 (Native)
    Parent Process:  launchd [142]
    Responsible:     Dreamweaver [1494]
    User ID:         501
    Date/Time:       2013-11-05 17:48:51.308 -0500
    OS Version:      Mac OS X 10.9 (13A603)
    Report Version:  11
    Anonymous UUID:  BD382C62-DAF3-C95F-2BA7-C2389654F0FA
    Crashed Thread:  0  CrBrowserMain  Dispatch queue: com.apple.main-thread
    Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
    Exception Codes: KERN_INVALID_ADDRESS at 0x000000008301f69b
    VM Regions Near 0x8301f69b:
        __LINKEDIT             00000000400d0000-000000004010d000 [  244K] r--/rwx SM=COW  /Applications/Adobe Dreamweaver CC/Adobe Dreamweaver CC.app/Contents/Frameworks/AdobeJP2K.framework/Versions/A/AdobeJP2K
    -->
        __TEXT                 000000008fee8000-000000008ff1b000 [  204K] r-x/rwx SM=COW  /usr/lib/dyld

    Re: Mavericks, Java SE 6 is required.
    http://forums.adobe.com/thread/1320137?tstart=0
    What you describe sounds like a permissions problem.
    Have you tried creating a new user account to see if that helps?
    Nancy O.

  • How do you set the number of rows in a spreadsheet, so that even when you drag data in, it writes over those rows instead of adding a new row?

    How do you set the number of rows you want in a spreadsheet, so that even when you drag data in, it writes over those rows instead of adding a new row?

    After the discovery reported above, I filed this report:
    Bug ID# 10073038
    Summary:
    When Numbers is used on a system with a decimal comma, a csv file may be good AND wrong
    Steps to Reproduce:
    With Numbers v2, you introduced an interesting enhancement.
    On systems using the comma as the decimal separator, Numbers requires csv files using the semi-colon as the values delimiter.
    In fact, that's true if we OPEN the document by dragging its icon onto the Numbers icon or through the open dialog.
    This said:
    (1) Drag and drop a csv built with the 'semi-colon' standard onto a table or a sheet
    (2) Drag and drop a csv built with the 'comma' standard onto a table or a sheet
    Expected Results:
    Every normally constituted user assumes that in
    case (1) he will get a perfectly built table
    case (2) he will get every cell of a row in a single cell
    Actual Results:
    In fact you forgot the drag-and-drop way of use, and in
    case (1) every value separated by semi-colons is inserted into a single cell
    case (2) values separated by commas are correctly spread into a table
    Isn't that ridiculous?
    Regression:
    Short of looking in Quick Look to see exactly what the structure of the file is before deciding how to insert it into a Numbers document, we may use an AppleScript good enough to replace the semi-colons with TAB characters,
    or
    to replace the commas with TABs and the decimal periods with commas.
    Notes:
    While I am on this subject, I wish to make two proposals:
    (1) It would be fine to format dates according to the ISO format year-mm-dd when you export a Numbers doc to csv.
    That way, dates would be imported correctly in every country.
    At this time, on an English system, you export as mm/dd/year.
    If the doc is opened on a system using the format dd/mm/year, the results will be odd.
    On a system using the format dd/mm/year, you export that way, and so, if the doc is opened on a system using the format mm/dd/year, the results are odd too.
    As every localized version accepts the ISO format (at least on entry), using it in the export scheme would give correct behavior everywhere.
    (2) It would be fine to add the format Tab Separated Values to the Export pane.
    TSV + the ISO date format would give documents that open flawlessly everywhere.
    Yvan KOENIG (VALLAURIS, France) dimanche 4 septembre 2011 21:27:41
    iMac 21”5, i7, 2.8 GHz, 4 Gbytes, 1 Tbytes, mac OS X 10.6.8 and 10.7.0
    My iDisk is : <http://public.me.com/koenigyvan>
    Please : Search for questions similar to your own before submitting them to the community
