Best Practice: Two ISPs and BGP

Hello Experts.
I'd like to hear opinions on the best way to set up two ISR4431s with two 2960X switches and two ASA firewalls.
My current design is:
ISP1 router -> ISR4431-A ->{2960x pair} -> ASA-A
ISP2 router -> ISR4431-B ->{2960x pair} -> ASA-B
Currently using public BGP and HSRP on the inside with an SLA monitor to a public IP.
If HSRP is the best way to accomplish this, how do I solve these two problems, or is there a better design? (The two 4431s are not connected to each other currently.)
- Least-cost routing (I guess that is what it's called): I want to visit a website that is located on ISP2's network (or close to it), but HSRP currently has ISP1 as active. If I go out ISP1, traffic may go around the country, or take 10 hops, before it hits a site that is 4 hops away on the other ISP.
- Asymmetric routing: I think that is where a reply comes in via the non-active ISP. How do I prevent that?
I am really just looking for design advice on the best way to use this hardware to create as much redundancy and as good performance as possible. If you could just share your opinion of "I would use ____", give a stamp of reassurance on the above design, or offer any opinion on the two problems, I'd appreciate it.
Thanks for the time!

Hi,
If you are running BGP with the service providers, you need an iBGP link between the two ISR 4431 routers. If, for example, you want traffic to go out via SP-1 and come back through the same provider, you need to use AS-path prepending, so that SP-2 sees a longer path to your network and traffic goes out and comes back through the same provider. In this case you use SP-2 as a backup link; otherwise you can end up dealing with asymmetric routing. In addition, for HSRP/VRRP to work, both routers should connect to the pair of 2960X switches. You can simply stack the 2960X switches so they logically appear as one device. The same goes for the firewalls: they should connect to the switch stack.
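To make that concrete, here is a rough sketch of the iBGP peering and outbound AS-path prepending on the backup router. All AS numbers, addresses, and the route-map name below are invented for illustration, not taken from the poster's network:

```
! ISR4431-B (backup exit) - all values are hypothetical
router bgp 65001
 neighbor 10.0.0.1 remote-as 65001             ! iBGP peer: ISR4431-A
 neighbor 10.0.0.1 update-source Loopback0
 neighbor 203.0.113.2 remote-as 64512          ! eBGP peer: SP-2
 neighbor 203.0.113.2 route-map PREPEND-OUT out
!
! Prepend our own AS so SP-2's side sees a longer path
! and the internet prefers the SP-1 advertisement
route-map PREPEND-OUT permit 10
 set as-path prepend 65001 65001 65001
```

With this in place, return traffic favors SP-1 while the SP-2 session stays up as a backup, which also addresses the asymmetric-routing concern.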
HTH

Similar Messages

  • DNS best practices for hub and spoke AD Architecture?

    I have an Active Directory Forest with a forest root such as joe.co and the root domain of the same name, and root DNS servers (Domain Controllers) dns1.joe.co and dns2.joe.co
    I have child domains with names in the form region1.joe.co, region2.joe.co and so on, with DNS servers dns1.region1.joe.co and so on.
    Each region has distributed offices that may have a DC in them, with servers named in the form dns1branch1.region1.joe.co
    Over all my DNS tests out okay, but I want to get the general guidelines for setting up new DCs correct.
    Configuration:
    Root DC/DNS server dns1.joe.co adapter settings points DNS to itself, then two other root domain DNS/DCs dns2.joe.co and dns3.joe.co.
    The other root domain DNS/DCs adapter settings point to root server dns1.joe.co and then to itself dns2.joe.co, and then 127.0.0.1
    The regional domains have a root DNS server dns1.region1.joe.co whose adapter settings point to root server dns1.joe.co and then to itself.
    The additional regional domain DNS/DCs' adapter settings point to dns1.region1.joe.co, then to themselves, then to dns1.joe.co
    What would you do to correct this topology (and settings) or improve it?
    Thanks in advance
    just david

    Hi,
    According to your description, my understanding is that you need suggestions about your DNS topology.
    In theory, there is no obvious problem. Besides namespace and server planning for DNS, zone design also needs consideration. If you place a DNS server in each domain and subdomain, confirm whether the DNS traffic will affect network performance.
    Besides, fault tolerance and security are also necessary.
    We usually recommend that:
    A DC with DNS should point to another DNS server as primary and itself as secondary or tertiary. It should not point to itself as primary due to various DNS islanding and performance issues that can occur. And when referencing a DNS server on itself, a DNS client
    should always use a loopback address and not a real IP address. For detailed information, you may refer to:
    What is Microsoft's best practice for where and how many DNS servers exist? What about for configuring DNS client settings on DC’s and members?
    http://blogs.technet.com/b/askds/archive/2010/07/17/friday-mail-sack-saturday-edition.aspx#dnsbest
    How To Split and Migrate Child Domain DNS Records To a Dedicated DNS Zone
    http://blogs.technet.com/b/askpfeplat/archive/2013/12/02/how-to-split-and-migrate-child-domain-dns-records-to-a-dedicated-dns-zone.aspx
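    As an illustration of that client-settings rule (the interface alias and the partner-DC address here are examples, not values from the poster's environment), on Windows Server 2012 or later it could be applied like this:

    ```powershell
    # Primary DNS = a partner DC; loopback listed last, never first
    Set-DnsClientServerAddress -InterfaceAlias "Ethernet" `
        -ServerAddresses ("10.0.1.11", "127.0.0.1")
    ```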
    Best Regards,
    Eve Wang
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Best Practice for Planning and BI

    What's the best practice for Planning and BI infrastructure - set up combined on one box or separate? What are the factors to consider?
    Thanks in advance..

    There is no way that question could be answered with the information that has been provided.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Best Practice regarding using and implementing the pref.txt file

    Hi All,
    I would like to start a post regarding what is best practice in using and implementing the pref.txt file. We have reached a stage where we are about to go live with Discoverer Viewer, and I am interested to know what others have encountered or done with their pref.txt file and Viewer look and feel.
    Have any of you been able to add additional lines into the file, please share ;-)
    Look forward to your replies.
    Lance

    Hi Lance
    Wow, what a question and the simple answer is - it depends. It depends on whether you want to do the query predictor, whether you want to increase the timeouts for users and lists of values, whether you want to have the Plus available items and Selected items panes displayed by default, and so on.
    Typically, most organizations go with the defaults with the exception that you might want to consider turning off the query predictor. That predictor is usually a pain in the neck and most companies turn it off, thus increasing query performance.
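    For illustration only (the key name below is the one I recall from the Discoverer 10g preferences; verify against your own pref.txt before applying), turning the query predictor off is typically a one-line change:

    ```
    QPPEnable = 0
    ```

    Remember that pref.txt changes generally only take effect after running the applypreferences script and restarting the Discoverer services.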
    Do you have a copy of my Discoverer 10g Handbook? If so, take a look at pages 785 to 799 where I discuss in detail all of the preferences and their impact.
    I hope this helps
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com

  • How can i get best practice for SD and MM

    Please, can anybody tell me how I can get best practices for SD and MM for a functional approach?
    Thanks
    Utpal

    Hello Utpal,
    I am really surprised that in just 10 minutes you searched that site and found it not useful. Check out my previous reply: "you will not find screenshots in this, but you can add them".
    You will not find a ready-made document; you need to add to this as per your requirement.
    By the way, the following link gives you some more links for people new to SAP; this will be helpful. Check out the HOW-TO basics transaction.
    New to Materials Management / Warehouse Management?
    Hope this helps.
    Regards
    Arif Mansuri

  • Best practice for Plan and actual data

    Hello, what is the best practice for plan and actual data? Should they both be in the same application or in different ones?
    Thanks.

    Hi Zack,
    It will be easier for you to maintain the data in a single application. Every application must have the category dimension, so you can use this dimension to separate the actual and plan data.
    Hope this helps.

  • SAP Business One Best-Practice System Setup and Sizing

    <b>SAP Business One Best-Practice System Setup and Sizing</b>
    Get recommendations from SAP and hardware specialists on system setup and sizing
    SAP Business One is a single, affordable, and easy-to-implement solution that integrates the entire business across financials, sales, customers, and operations. With SAP Business One, small businesses can streamline their operations, get instant and complete information, and accelerate profitable growth. SAP Business One is designed for companies with less than 100 employees, less than $75 million in annual revenue, and between 1 and 30 system users, referred to as the SAP Business One sweet spot. The sweet spot covers various industries and micro-verticals which have different requirements when it comes to the use of SAP Business One.
    One of the initial steps during the installation and implementation of SAP Business One is the definition of the system landscape and architecture. Numerous factors affect the system landscape that needs to be created to efficiently run SAP Business One.
    The <a href="http://wiki.sdn.sap.com/wiki/display/B1/BestPractiseSystemSetupand+Sizing">SAP Business One Best-Practice System Setup and Sizing Wiki</a> provides recommendations on how to size and configure the system landscape and architecture for SAP Business One based on best practices.

    For such high volume licenses, you may contact the SAP Local Product Experts.
    You may get their contact info from this site
    [https://websmp209.sap-ag.de/~sapidb/011000358700001455542004#India]

  • Best Practice on using and refreshing the Data Provider

    I have a "users" page that lists all the users in a table; let's call it the master page. One can click on the first column of the master page, and it takes them to the "detail" page, where one can view and update the user detail.
    Master and detail use two different data providers based on two different CachedRowSets.
    Master CachedRowSet (Session scope): SELECT * FROM Users
    Detail CachedRowSet (Session scope): SELECT * FROM Users WHERE User_ID=?
    I want the master to be updated whenever the detail page is updated. There are various options to choose from:
    1. I could call masterDataProvider.refresh() after I call the detailDataProvider.commitChanges() - which is called on the save button on the detail page. The problem with this approach is that the master page will not be refreshed across all user sessions, but only for the one saving the detail page.
    2. I could call masterDataProvider.refresh() on the preRender() event of the master page. The problem with this approach is that refresh() will be called every single time someone views the master page. Furthermore, if someone goes to the next page (using the built-in pagination on the table on the master page), clicks on a user to view its detail, and then closes the detail page, it does not keep track of the pagination (what page the user was on when he/she clicked on a record to view its detail).
    I can find some work around to resolve this problem, but I think this should be a fairly common usage (two page CRUD with master-detail). If we can discuss and document some best practices of doing this, it will help all the developers.
    Discussion:
    1.     What is the best practice on setting the scope of the Data Providers and CachedRowSet? I noticed that in the tutorial examples, they used page/request scope for the Data Provider but session scope for the associated CachedRowSet.
    2.     What is the best practice to refresh the master data provider when a record/row is updated in the detail page?
    3.     How do we keep track of pagination (what page the user was on when he/she clicked on the first column in the master page table), so that upon updating the detail page, we can provide the user with a "Close" button to take them back to whatever page number he/she was on?
    Thanks
    Message was edited by:
    Sabir

    Thanks. I think this is useful information for all. Do we even need two data providers and associated row sets? Can't we just use TableRowDataProvider, like this:
    TableRowDataProvider rowData = (TableRowDataProvider) getBean("currentRow");
    If so, I am trying to figure out how to pass this from the master to the detail page. Essentially the detail page uses a row from the master data provider. Then I need the user to be able to change the detail (row) and save the changes (in the table). This is a fairly common issue in most data-driven web apps. I need to design it right, vs just coding.
    Message was edited by:
    Sabir

  • Best practices with sequences and primary keys

    We have a table of system logs that has a column called created_date. We also have a UI that displays these logs ordered by created_date. Sometimes, two rows have the exact same created_date down to the millisecond and are displayed in the UI in the wrong order. The suggestion was to order by primary key instead, since the application uses an Oracle sequence to insert records, so the order of the primary key will be chronological. I felt this may be a bad idea as a best practice, since the primary key should not be used to guarantee chronological order; although in this particular application's case, since it is not a multi-threaded environment, it will work, so we are proceeding with it.
    The value for created_date is NOT set at the database level (as SYSDATE) but rather by the application when it creates the object, which is persisted by Hibernate. In a multi-threaded environment, thread A could create the object and then get blocked by thread B, which is able to create its object and persist it with key N, after which control returns to thread A, which persists its object with key N+1. In this scenario thread A has an earlier timestamp but a larger key, so a key-based ordering places its row after thread B's, which is in error.
    I like to think of primary keys as solely something to be used for referential purposes at the database level rather than inferring application level meaning (like the larger the key the more recent the record etc.). What do you guys think? Am I being too rigorous in my views here? Or perhaps I am even mistaken in how I interpret this?

    >
    I think the chronological order of records should be using a timestamp (i.e. "order by created_date desc" etc.)
    >
    Not that old MYTH again! That has been busted so many times it's hard to believe anyone still wants to try to do that.
    Times are in chronological order: t1 is earlier (SYSDATE-wise) than t2 which is earlier than t3, etc.
    1. at time t1 session 1 does an insert of ONE record and provides SYSDATE in the INSERT statement (or using a trigger).
    2. at time t3 session 2 does an insert of ONE record and provides SYSDATE
    (which now has a value LATER than the value used by session 1) in the INSERT statement.
    3. at time t5 session 2 COMMITs.
    4. at time t7 session 1 COMMITs.
    Tell us: which row was added FIRST?
    If you extract data at time t4 you won't see ANY of those rows above since none were committed.
    If you extract data at time t6 you will only see session 2 rows that were committed at time t5.
    For example if you extract data at 2:01pm for the period 1pm thru 1:59pm and session 1 does an INSERT at 1:55pm but does not COMMIT until 2:05pm your extract will NOT include that data.
    Even worse - your next extract will pull data for 2pm thru 2:59pm and that extract will NOT include that data either, since the SYSDATE value in the rows is 1:55pm.
    The crux of the problem is that the SYSDATE value stored in the row is determined BEFORE the row is committed but the only values that can be queried are the ones that exist AFTER the row is committed.
    About the best you, the user (i.e. not ORACLE the superuser), can do is to
    1. create the table with ROWDEPENDENCIES
    2. force delayed-block cleanout prior to selecting data
    3. use ORA_ROWSCN to determine the order that rows were inserted or modified
    As luck would have it there is a thread discussing just that in the Database - General forum here:
    ORA_ROWSCN keeps increasing without any DML
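    To sketch steps 1 and 3 above in SQL (the table and column names are invented for illustration; note that without ROWDEPENDENCIES, ORA_ROWSCN is tracked per block rather than per row):

    ```sql
    -- Step 1: row-level SCN tracking must be requested at CREATE TABLE time
    CREATE TABLE system_logs (
      log_id       NUMBER PRIMARY KEY,
      message      VARCHAR2(4000),
      created_date TIMESTAMP
    ) ROWDEPENDENCIES;

    -- Step 3: order by commit SCN instead of the stored timestamp
    SELECT log_id, message, ORA_ROWSCN
    FROM   system_logs
    ORDER  BY ORA_ROWSCN;
    ```

    This orders rows by when they were committed, which is what the extract logic above actually needs, rather than by when SYSDATE was evaluated.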

  • Best practice for PK and indexes?

    Dear All,
    What is the best practice for making primary keys and indexes? Should we keep them in the same tablespace as the table, or should we create a separate tablespace for all indexes and the primary key? Please note I am talking about a table that has 21 million rows at the moment and is increasing by 10k to 20k rows daily. This table is also heavily involved in daily reports and is causing slow performance. Currently the complete table, with all associated objects such as indexes and the PK, is stored in one separate tablespace. If my way is right, then please advise me how I can improve the performance of retrieval or DML operations on this table.
    Thanks in advance..
    Zia Shareef

    Well, thanks for the valuable advice... I am using Oracle 8i, and let me tell you the exact problem...
    My billing database has two major tables having almost 21 million rows each... one has collection data and the other one invoices... many reports show the data by joining the Customer + Collection + Invoices tables.
    There are 5 common fields between the invoices (reading) and collection tables:
    YEAR, MONTH, AREA_CODE, CONS_CODE, BILL_TYPE (adtl)
    One of my batch processes has the following update, and it is VERY, VERY slow:
    UPDATE reading r
    SET bamount = (SELECT SUM(camount)
                   FROM collection cl
                   WHERE r.ryear = cl.byear
                   AND r.rmonth = cl.bmonth
                   AND r.area_code = cl.area_code
                   AND r.cons_code = cl.cons_code
                   AND r.adtl = cl.adtl)
    WHERE area_code = 1
    tentatively area_code(1) has 20,000 consumers
    Each consumer may have 72 invoices, and against these invoices it may have 200 rows in the collection table (the system has provision to record partial payments against one invoice).
    NOTE: Please note that presently my process is based on cursors, so the above query runs for one consumer at a time, but just to give an idea I have written it for the whole area.
    Mr. Yingkuan, can you please tell me how I can check that the table's statistics are not current and how I can make them current? Does it really affect performance?

  • Best Practices regarding AIA and CDP extensions

    Based on the guide "AD CS Step by Step Guide: Two Tier PKI Hierarchy Deployment", I'll have both
    internal and external users (with a CDP in the DMZ) so I have a few questions regarding the configuration of AIA/CDP.
    From here: http://technet.microsoft.com/en-us/library/cc780454(v=ws.10).aspx
    A root CA certificate should have an empty CRL distribution point because the CRL distribution point is defined by the certificate issuer. Since the root's certificate issuer is the root CA, there is no value in including a CRL distribution point for the root CA. In addition, some applications may detect an invalid certificate chain if the root certificate has a CRL distribution point extension set.
    To have an empty CDP do I have to add these lines to the CAPolicy.inf of the Offline Root CA:
    [CRLDistributionPoint]
    Empty = true
    What about the AIA? Should it be empty for the root CA?
    Using only HTTP CDPs seems to be the best practice, but what about the AIA? Should I only use HTTP?
    Since I'll be using only HTTP CDPs, should I use LDAP Publishing? What is the benefit of using it and what is the best practice regarding this?
    If I don't want to use LDAP publishing, should I omit the commands: certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA / certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Thank you,

    Is there any reason why you specified a '2' for the HTTP CDP ("2:http://pki.fabrikam.com/CertEnroll/%1_%3%4.crt"
    )? This will be my only CDP/AIA extension, so isn't it supposed to be '1' in priority?
    I tested the setup of the offline Root CA but after the installation, the AIA/CDP Extensions were already pre-populated with the default URLs. I removed all of them:
    The Root Certificate and CRL were already created after ADCS installation in C:\Windows\System32\CertSrv\CertEnroll\ with the default naming convention including the server name (%1_%3%4.crt).
    I guess I could rename it without impact? If someday I have to revoke the Root CA certificate, or the certificate has expired, how will I update the Root CRL since I have no CDP?
    Based on this guide: http://social.technet.microsoft.com/wiki/contents/articles/15037.ad-cs-step-by-step-guide-two-tier-pki-hierarchy-deployment.aspx,
    the Root certificate and CRL is publish in Active Directory:
    certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA
    certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Is it really necessary to publish the Root CRL in my case?
    Instead of using dspublish, isn't it better to deploy the certificates (Root/Intermediate) through GPO, like in the Default Domain Policy?

  • Best practice - Heartbeat discovery and Clear Install Flag settings (SCCM client) - SCCM 2012 SP1 CU5

    Dear All,
    Is there any best practice to avoid having around 50 clients where the client version number shows on the right side but the client doesn't show as installed? See attached screenshot.
    The SCCM version is 2012 SP1 CU5 (5.00.7804.1600); the server and admin console have been upgraded, and clients are being pushed to SP1 CU5.
    The following settings are set:
    Heartbeat Discovery every 2nd day
    Clear Install Flag maintenance task is enabled - Client Rediscovery period is set to 21 days
    Client Installation settings
    Software Update-based client installation is not enabled
    Automatic site-wide client push installation is enabled.
    Any advise is appreciated

    Hi,
    I saw a case similar to yours. The clients were stuck in provisioning mode.
    "we finally figured out that the clients were stuck in provisioning mode causing them to stop reporting. There are two registry entries we changed under [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\CCM\CcmExec]:
    ProvisioningMode=false
    SystemTaskExcludes=*blank*
    When the clients were affected, ProvisioningMode was set to true and SystemTaskExcludes had several entries listed. After correcting those through a GPO and restarting the SMSAgentHost service the clients started reporting again."
    https://social.technet.microsoft.com/Forums/en-US/6d20b5df-9f4a-47cd-bdc3-2082c1faff58/some-clients-have-suddenly-stopped-reporting?forum=configmanagerdeployment
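    Expressed as a .reg file (the key and values are taken directly from the quote above; apply with care and restart the SMS Agent Host service afterwards), the fix would look like:

    ```
    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\CCM\CcmExec]
    "ProvisioningMode"="false"
    "SystemTaskExcludes"=""
    ```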
    Best Regards,
    Joyce

  • Best Practices for AD and Windows Environment

    Hello Everyone,
    I need to create a document containing the best practices for AD: best practices for DNS, DHCP, AD structure, Group Policy, trusts, etc.
    I just need the best practices, irrespective of what is implemented in our company.
    I just need to create a document for analysis as of now. I searched over the internet but could not find much. I would request you all to pour in your suggestions on where I can find those.
    If anyone could send me the material or point me to a link, that would help. I am pretty new to the technology, so I need your help.
    Thanks in Advance

    I have an article where I shared the best practices to use to avoid known AD/DNS issues: http://www.ahmedmalek.com/web/fr/articles.asp?artid=23
    However, you first need to identify your requirements; based on these requirements, you can identify what should be implemented in your environment and how to manage it. The basic point here is that you need at least two DC/DNS/GC servers per AD domain for high availability. You also need to take a system state backup of at least one DC/DNS/GC server in your domain. As for DHCP, you can use the 50/50 or 80/20 DHCP rule depending on your setup.
    You can also refer to that: https://technet.microsoft.com/en-us/library/cc754678%28v=ws.10%29.aspx
    This posting is provided AS IS with no warranties or guarantees , and confers no rights.
    Ahmed MALEK
    My Website Link
    My Linkedin Profile
    My MVP Profile

  • Portal Design - Best Practices for Role and Workset Tab Menu

    We are looking to identify and promote best practices in SAP Portal Design. 
    First, is there a maximum number of tabs which should exist on the highest level tab menu, commonly called the role menu?  Do a large number of tabs on this menu cause performance issues?  Are there any other issues associated with a large number of tabs on this menu?
    Second, can the workset tab menu be customized to be 2 lines of tabs?  Our goal is to prevent tab scrolling.
    Thanks

    Debra,
    Not aware of any performance issues with the number of tabs in the Level 1 or 2 menus, particularly if you have portal navigation caching enabled.
    From an end user perspective I guess "best practice" would be to avoid scrolling in the top level navigation areas completely if possible.
    You can do a number of things to avoid this, including:
    - Keep the role/folder/workset names as short as possible.
    - If necessary break the role down into multiple level 1 entry points to reduce the number of tabs in level 2.
    An example of the second point would be MSS. Instead of creating a role with a single workset (i.e. level 1 tab), we usually split it into two folders called something like "My Staff" and "My Finance" and define these folders as entry points. We therefore end up with two tabs in level 1 for the MSS role, and consequently a smaller number of tabs in level 2.
    Hope that helps......
    Regards,
    John

  • Two Asa Two Isp and Windows 2008 R2 Server

    Hello Everybody ,
    If you can support my issue , I do appreciate a lot.
    First of all thanks a lot for your interest ..
    Here is my  issue :
    I have two ISP connections (1 Metro Ethernet connection and 1 GHDSL connection).
    1) An ASA 5505 (version 8.0(5)) is for the 1st ISP connection.
    A Windows 2008 R2 server is up and running as a web server behind this ASA 5505, configured as:
    static (inside,outside) mywebsrv.mycompany.com 192.168.5.5 netmask 255.255.255.255
    The IP configuration of the W2008 server is 192.168.5.5 255.255.255.0 192.168.5.1 (gateway = ASA 5505).
    2) An ASA 5510 (version 8.0(5)) is for the 2nd ISP connection.
    A Windows 2003 R2 server is up and running as an FTP server behind this ASA 5510, configured as:
    static (inside,outside) myftpsrv.mycompany.com 192.168.50.10 netmask 255.255.255.255
    The IP configuration of the W2003 server is 192.168.50.10 255.255.255.0 192.168.50.1 (gateway = ASA 5510).
    Here is my question :
    I need to move my FTP server (due to old hardware + old server issues)
    onto the Windows 2008 R2 server (an HP DL server with 4 NICs).
    If I connect my ASA 5510 to the second NIC of the Windows 2008 R2 server
    and give it an IP address of 192.168.50.10 255.255.255.0,
    what should the gateway IP address be?
    Before I go ahead and implement:
    a) What do I need to do on the Windows 2008 R2 server,
    such as persistent route adds with different metrics?
    b) Are there any config additions or changes on the ASA 5505 and ASA 5510 regarding static routes with
       different metrics, and so on?
    Many thanks in advance for your support .

    If you do that, the second interface will work as a failsafe for the first NIC.
    As far as I know, you won't be able to route traffic based on the type of traffic, nor do load balancing between the interfaces.
    I guess the best approach would be to get a newer server and use it as a replacement for the one running 2003 R2.
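    If you do experiment with the dual-homed server anyway, the persistent routes with different metrics that the question mentions would look roughly like this on Windows (the gateways are from the post; the metric values are arbitrary examples):

    ```
    route -p add 0.0.0.0 mask 0.0.0.0 192.168.5.1 metric 10
    route -p add 0.0.0.0 mask 0.0.0.0 192.168.50.1 metric 20
    ```

    The lower-metric route through the ASA 5505 would be preferred, with the ASA 5510 path as a fallback; but as noted above, this gives failover, not per-service routing.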
