Running BGP with a provider, best practices

Hi all
We have recently got a link from a provider to give us a point-to-point connection between two offices, and the provider is running BGP to us.
What best practices should I follow when configuring this? At the moment we have connectivity, with basic neighbour statements etc.
What should I do for security and to protect my environment from the provider?
Cheers
Carl

Hi,
This is a valid concern for both the provider and the customer, as the CE-PE link is a connection between two different administrative entities. When we talk about the CE-PE connection, the main things we can protect are:
1. Securing the BGP neighborship by enabling a password
2. Preventing excessive route flooding
3. Securing the data over an MPLS VPN network
A rough configuration sketch for the first two points follows below. For more detail on these, refer to this document:
http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/WAN_and_MAN/L3VPNCon.html#wp309784
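As a starting point, here is a minimal sketch (not a definitive template) of what points 1 and 2 could look like on a Cisco IOS CE router. The AS numbers, neighbor address, prefix-list names, and prefixes are made-up placeholders; adjust them to your environment and platform:
router bgp 65001
 ! eBGP session to the provider (placeholder AS number and address)
 neighbor 203.0.113.1 remote-as 65000
 ! 1. Protect the session: MD5 password and, where supported, GTSM
 neighbor 203.0.113.1 password S0meStrongSecret
 neighbor 203.0.113.1 ttl-security hops 1
 ! 2. Limit and filter what is learned from / advertised to the provider
 neighbor 203.0.113.1 maximum-prefix 100 80 restart 15
 neighbor 203.0.113.1 prefix-list FROM-PROVIDER in
 neighbor 203.0.113.1 prefix-list TO-PROVIDER out
!
! Accept only the remote office prefix, advertise only your own
ip prefix-list FROM-PROVIDER seq 5 permit 198.51.100.0/24
ip prefix-list TO-PROVIDER seq 5 permit 192.0.2.0/24
Filtering in both directions keeps a misconfiguration on either side from leaking unintended routes into your network or theirs.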
Hope it answers your query.
Thanks & Regards
Sandeep

Similar Messages

  • Portal db provider(best practice)

    Best practice question here. If I wanted to create a few db portlets (suggestions/questions), is there already an existing portal db provider/schema that I should add them to? Or is it best to simply create a schema and db provider?

    That is an interesting question. We created our own schemas for each of the portal sites we have, so basically custom-made providers for all portlets used in those portals.

  • Dealing with Drobo (best practices?)

    I have two second generation Data Robotics Drobos, and have been using them under 10.6 on a MacBook via USB. Like many Drobo users, I have had various "issues" over the years, and even suffered 1TB of data loss probably related to the USB eject bug that was in Mac OS X 10.6.5-10.6.7. I have also used the Drobos on a Mac with FireWire.
    My Drobos are set up as 1TB volumes, so my 4x2TB unit shows six 1TB volumes. Using DiskWarrior on some of my volumes has reported "speed reduced by disk malfunction" and DW was unable to rebuild the directory. I fear for my data, so I have been in the process of moving data away from the drive and starting fresh.
    I would like to use this discussion to see what "best practices" others have come up with when dealing with a Drobo on a Mac.
    When I first set up the Drobo, the documentation stated that the unit would take longer to startup if using one big partition, so I chose the smallest value -- 1TB. This initially gave me a few Drobo volumes to use, and as I swapped in larger hard drives, Drobo would start adding more 1TB volumes. I like this approach, since it lets me unmount volumes I am not using (so iMovie does not have to find every single "iMovie Events" I have across 12TB of drives).
    This was also a good way to protect my data. When my directory structure crashed, and was unrepairable, I only lost 1TB of data. Had that happened on a "big" volume Drobo, I would have lost everything.
    Data Robotics' own KB articles will tell you to never use Disk Utility to partition a Drobo, but other KB articles say this is what you must do to use Time Machine... Er? And, under 10.7, they now say don't do that, even for Time Machine. Apparently, if you partitioned under 10.6 or earlier, you can still use your Time Machine backup under 10.7, but if you are 10.7-only, you have to use some Time Tamer utility and create a sparsebundle image -- and then you cannot browse Time Machine backups (what good is that, then?).
    It's a mess.
    So I am looking for guidance, tips, suggestions, and encouragement. I will soon be resetting one of my Drobos and starting fresh, then after I get everything working again, I will move all my data over to it, and reset my second Drobo.

  • Securing with NAT - Best Practice ?

    Hi,
    Is it forbidden to do NAT exemption from the inside to the DMZ?
    I hear there is a compliance requirement in banking where two servers need to communicate but are forbidden from knowing each other's IP address?
    How about NAT as a second layer of firewalling?
    What is the best practice for securing an enterprise network from a NAT point of view?
    Thx

    Hello Ibrahim,
    No, not at all, that is not a restriction at all. You can do it if needed.
    Now, it looks like in your environment there is a requirement that these two servers communicate with each other but do not know each other's IP address.
    Then NAT is your friend, as it will satisfy the requirement you are looking for; a rough sketch is shown below.
    Well, I do not consider NAT to be a security measure, as for me it does not perform any inspection, rule set, policy, etc., but I can assure you there are a lot of people who think of it as one.
    I see it as an IP service that allows us to preserve the IP address space.
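    For illustration only, here is a minimal sketch of how the "two servers must not learn each other's real address" requirement could be met with twice NAT on an ASA (8.3+ object syntax assumed; the object names and addresses below are made up):
    object network SRV-INSIDE-REAL
     host 10.1.1.10
    object network SRV-INSIDE-XLATE
     host 172.16.1.10
    object network SRV-DMZ-REAL
     host 10.2.2.20
    object network SRV-DMZ-XLATE
     host 172.16.2.20
    !
    ! Each server only ever sees the translated address of its peer
    nat (inside,dmz) source static SRV-INSIDE-REAL SRV-INSIDE-XLATE destination static SRV-DMZ-XLATE SRV-DMZ-REAL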
    For more information about Core and Security Networking follow my website at http://laguiadelnetworking.com
    Any question contact me at [email protected]
    Cheers,
    Julio Carvajal Segura

  • Using XML with Flex - Best Practice Question

    Hi
    I am using an XML file as a dataProvider for my Flex application. My application is quite large and is being fed a lot of data, therefore the XML file that I am using is also quite large.
    I have read some tutorials and looked through some online examples and am just after a little advice. My application is working, but I am not sure if I have gone about setting up and using my data provider in the best possible (most efficient) way. I am basically after some advice as to whether the way I am using (accessing) my XML and populating my Flex application is the best / most efficient way.
    My application consists of the main application (MXML) file and also additional AS files / components.
    I am setting up my connection to my XML file within my main application file using HTTPService:
    <mx:HTTPService
        id="myResults"
        url="http://localhost/myFlexDataProvider.xml"
        resultFormat="e4x"
        result="myResultHandler(event)" />
    and handling my results with the following function:
    public function myResultHandler(event:ResultEvent):void
    {
        myDataFeed = event.result as XML;
    }
    Within my application I am setting my variable values by firstly declaring them:
    public var fName:String;
    public var lName:String;
    public var postCode:String;
    public var telNum:int;
    and then giving them a value by "drilling" into the XML, e.g.:
    fName = myDataFeed.employeeDetails.contactDetails.firstName;
    lName = myDataFeed.employeeDetails.contactDetails.lastName;
    postCode = myDataFeed.employeeDetails.contactDetails.address.postcode;
    telNum = myDataFeed.employeeDetails.contactDetails.postcode;
    etc.
    Therefore, for any of my external components (components in a different AS file), I am referencing their values using Application:
    import mx.core.Application;
    and setting the values / variables within the AS components as follows:
    public var fName:String;
    public var lName:String;
    fName = Application.application.myDataFeed.employeeDetails.contactDetails.firstName;
    lName = Application.application.myDataFeed.employeeDetails.contactDetails.lastName;
    As mentioned, this method seems to work; however, is it the best way to do it?
    - Connect to my XML file
    - Set up my application variables
    - Give my variables values from the XML file
    Bearing in mind that in this particular application there are many variables that need to be set, and therefore a lot of lines of code just setting up and assigning variable values from my XML file.
    Could someone please advise me on this one?
    Thanks a lot,
    Jon.

    I don't see any problem with that.
    Your alternatives are to skip the instance variables and query the XML directly. If you use the values in a lot of places, then the variables will be easier to use and maintain.
    Also, instead of instance variables, you could put the values in an "associative array" (object/hashtable), or in a dictionary.
    Tracy

  • Saving zip code data with PHP - best practices

    I have built my client an application that analyzes uploaded zip codes for matches with a standard set of zips. These uploaded zips can be one at a time, or a copy/paste from an XLS file (just 5-digit ZIPs).
    They are now asking me to save these uploaded zips, and I am wondering what would be the best way to do that. My two obvious choices are:
    1. Write them to an external text file with a programmatically generated name, and enter the name in the database, keyed to the user.
    2. Write the zips themselves into a blob field in the database.
    I'm inclined to the former, since I don't think there would ever need to be any further manipulation of these zip codes, but what do you think? Are there other choices I may have overlooked?
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs, Tutorials & Resources
    ==================

    Dang - sorry. Wrong forum.
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs, Tutorials & Resources
    ==================
    "Murray *ACE*" <[email protected]> wrote
    in message
    news:fvfi5j$ig7$[email protected]..
    >I have built my client an application that analyzes
    uploaded zip codes for
    >matches with a standard set of zips. These uploaded zips
    can be one at a
    >time, or a copy/paste from an XLS file (just 5 digit
    ZIPs).
    >
    > They are now asking me to save these uploaded zips, and
    I am wondering
    > what would be the best way to do that. My two obvious
    choices are -
    >
    > 1. Write them to an external text file with a
    programmatically generated
    > name, and enter the name in the database, keyed to the
    user.
    > 2. Write the zips themselves into a glob field in the
    database.
    >
    > I'm inclined to the former, since I don't think there
    would ever need to
    > be any further manipulation of these zip codes, but what
    do you think?
    > Are there other choices I may have overlooked?
    >
    > --
    > Murray --- ICQ 71997575
    > Adobe Community Expert
    > (If you *MUST* email me, don't LAUGH when you do so!)
    > ==================
    >
    http://www.projectseven.com/go
    - DW FAQs, Tutorials & Resources
    >
    http://www.dwfaq.com - DW FAQs,
    Tutorials & Resources
    > ==================
    >
    >

  • UITableView with Images Best Practices

    I have a UITableView with each row having an image coming from a remote URL. There are a great many strategies for dealing with and caching the images. I've narrowed it down to two:
    1. When the table stops scrolling let all the visible cells know they need to grab their images. Fire off a background thread to get the images then cache them in memory.
    2. Same as above, except write the images to disk.
    Has anyone played with these methods to find the breakpoint for when keeping the images in memory is too much of a burden?

    I have been trying to do either.
    Right now I have the image download happen when the cell is created; the images are then stored in an NSMutableArray. The array is initially populated with NSString values holding the URL of each image. I then test to see if the object at the current table view index is a UIImage; if not, I download the image and replace the existing NSString with the UIImage in the array.
    -(UIImage *)newUIImageWithURLString:(int)index
    {
        // If the entry at this index is still a URL string, download it and cache the UIImage in its place
        if (![[imgarr objectAtIndex:index] isKindOfClass:[UIImage class]]) {
            NSLog(@"image not there");
            UIImage *img2get = [[UIImage alloc] initWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:[imgarr objectAtIndex:index]]]];
            [imgarr replaceObjectAtIndex:index withObject:img2get];
            [img2get release];
        }
        return [imgarr objectAtIndex:index];
    }
    This works fairly well but does stall the scrolling when I download the image, because I am calling it like this in cellForRowAtIndexPath:
    UIImage *cellimage = [self newUIImageWithURLString:indexPath.row];
    cell.image = cellimage;
    I am looking into using a background process for the actual downloading so as not to interfere with the table operations. Have you any thoughts on the best way to do this?

  • UC on UCS RAID with TRCs best practices

    Hi,
    We bought UCSs servers to do UC on UCS. The servers are TRC#1 240M3S, hence with 16x300GB drives.
    I am following this guide to create the RAID (I actually thought they would come pre-configured but it does not seem to be the case):
    http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/virtual/CUCM_BK_CF3D71B4_00_cucm_virtual_servers.pdf
    When it comes to setting up the RAID for the C240-M3, it is mentioned that I should create 2 RAID 5 arrays of 8 disks each, 1 per SAS Adapter.
    The thing is that on my servers I apparently only have 1 adapter that is able to control all the 16 Disks. It might be a new card that was not available at the time the guide was written. 
    So my question is: Should I still configure two RAID 5 volumes although I only have one SAS adapter or can I use a single RAID 5 (or other) volume? 
    If I stick to two volumes, are there recommendations, for example, to put some UC apps on one volume and others on another volume? Those servers will be used for two clusters, so I was thinking of using one datastore per cluster.
    Thanks in advance for your thoughts
    Aurelien

    Define "Best"?
    It really comes down to what your requirements are, i.e what applications are you going to use, are you going to use SAN, how many applications, what is your budget, etc, etc.
    Here is a link to Cisco's UC on UCS wiki:
    http://docwiki.cisco.com/wiki/UC_Virtualization_Supported_Hardware
    HTH,
    Chris

  • Best Practice Two ISPs and BGP

    Hello Experts.
    I was wanting to hear opinions on the best way to set up two ISR4431s with two 2960x switches and two ASA firewalls.
    My current design is:
    ISP1 router -> ISR4431-A ->{2960x pair} -> ASA-A
    ISP2 router -> ISR4431-B ->{2960x pair} -> ASA-B
    Currently using public BGP and HSRP on the inside with an SLA monitor to a public IP.
    If HSRP is the best way to accomplish this, how do I solve these two problems, or is there a better design? (The two 4431s are not connected to each other currently.)
    - Least-cost routing (I guess that is what it's called) - I want to visit a website that is located on ISP2's network (or close to it), but HSRP currently has ISP1 as active. If I go out ISP1 it may go around the country or take 10 hops before it hits a site that is 4 hops away on the other ISP.
    - Asymmetric routing - I think that is where a reply comes in on the non-active ISP - how do I prevent that?
    I am really just looking for design advice about the best way to use this hardware to create as much redundancy as possible and best performance possible. If you could just share your opinion of "I would use ____" or give me a stamp of reassurance on the above design and any opinion on the two problems.
    Thanks for the time!

    Hi,
    If you are running BGP with the service providers, you need an iBGP link between the two ISR4431 routers. If, for example, you want traffic to go out using SP-1 and come back using the same provider, you need to use AS-path prepending towards SP-2, so SP-2 sees a longer path to your network and traffic goes out and comes back through the same provider. In this case you use SP-2 as a backup link; otherwise you can end up dealing with asymmetric routing. In addition, for HSRP/VRRP to work, both routers should connect to the same set of 2960x switches. You can simply stack the 2960x switches so they logically look like one device. The same goes for the firewalls: they should connect to the switch stack. A rough sketch of the iBGP peering and prepending is shown below.
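    As an illustration only (the AS numbers, neighbor addresses, and prepend count are made-up placeholders, and your design may differ), the iBGP peering plus outbound prepending towards the backup provider could look roughly like this on one of the ISRs:
    router bgp 65010
     ! iBGP peering to the other ISR4431
     neighbor 10.255.0.1 remote-as 65010
     ! eBGP to the backup provider (SP-2), with AS-path prepending on advertisements
     neighbor 203.0.113.5 remote-as 64500
     neighbor 203.0.113.5 route-map PREPEND-TO-SP2 out
    !
    route-map PREPEND-TO-SP2 permit 10
     set as-path prepend 65010 65010 65010
    With the prepend in place, most of the Internet should prefer the path via SP-1, so return traffic normally follows the same provider as the outbound traffic.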
    HTH

  • Need advise for best practice when using Toplink with external transaction

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to using external transactions so we can make database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like members of this forum, and experts, to help comment on whether these recommendations are indeed valid or in line with best practice. And for folks that have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression)
    {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes, the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression)
    {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException
    {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As it generally is with this sort of question there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls and when you use a UnitOfWork are still dependent on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached, this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use this, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon

  • Best Practice - Hardware requirements for exchange test environment

    Hi Experts,
    I'm new to Exchange and I want to have a test environment for learning, testing, patches and updates.
    In our environment we have co-existence of 2010 and 2013, and I need to have a close scenario in my test environment.
    I was thinking of having an isolated (not domain-joined) high-end workstation laptop (quad core i7, 32GB RAM, 1TB SSD) to implement the environment on, but management refused and replied "do it on one of the free servers within the live production environment at the Data Center"... !
    I'm afraid that doing so could corrupt the production environment through some mistake in my configuration - I'm not that much of an Exchange expert who could revert back if something went wrong.
    Is there a documented Microsoft recommendation on how and where to do this, so that I can send it to them?
    OR, could someone help with the best practice on where to have my test environment and how to set it up?
    Many Thanks
    Mohamed Ibrahim

    I think this may be useful - it's their official test lab setup guide:
    http://social.technet.microsoft.com/wiki/contents/articles/15392.test-lab-guide-install-exchange-server-2013.aspx
    Also, your spec should be fine as long as you run the VMs within their means.

  • Best practice on integration message augmentation via user exit or rfc call

    I am looking for documentation that would provide best practices on whether to use user exits to augment the data on an IDoc, or to forward the standard SAP-produced IDoc to PI, where RFC calls are made to augment the data as required for the specific target system.
    I am sure there are pros and cons for both solutions, but I am hesitant to use user exits since we would then have moved the knowledge of what a target system wants from the integration layer to the source layer. If a second target system comes along in the future, the user exit becomes more complicated with additional target-specific requirements.
    Any links to best practice documentation on this subject are greatly appreciated.
    Edited by: Sean Sweeney on Oct 15, 2009 6:59 PM

    Hi Steve,
    You might have been trying to find a solution for a long time. If I understood your question correctly, let me clarify a few points.
    You are trying to access the BEx query which is designed with the exits in the background based on the logic, and you are trying to call all the dimensions and key figures in a single connection, then mapping that data into the charts.
    Steve, try to make more connections based on the logic and split them. Use the same query but split it - sales per customer group, sales per day, sales per week - by making three different connections, and try that. You can merge the prompts from all connections.
    Hope this Helps!!!
    Sorry if i misunderstood your question.
    --SumanT

  • Best practices for nested virtualization

    Can someone please provide best practices for creating vSwitches (ports, port groups, etc.)?
    I'm learning vSwitches and just want to not overthink the process. I don't have networking experience but understand the basics.

    Hi,
    In implementing any LO module, generally the following points should be kept in mind.
    1. At the base level, have an ODS to load data from R/3. This ensures that you have exactly the same data content as in R/3. This can be write-optimized.
    2. At the second level, have an ODS with the transformation and modification of data based on the business and functional requirements and enhancements.
    3. Finally, have a cube to consolidate the data and make it available for reporting. Create all the reports based on the cube.
    Hope this gives an idea.
    Regards,
    akhan
    Edited by: Akhan_BI on Sep 5, 2009 12:41 AM

  • SAP CRM V1.2007 Best Practice

    Hello,
    we are preparing the installation of a new CRM 2007 system and we want to have a good demo system.
    We are considering two options:
    . SAP CRM IDES
    . SAP CRM Best Practice
    knowing that we have an ERP 6.0 IDES system we want to connect to.
    The Best Practice seems to have a lot of preconfigured scenarios that will not be available in the IDES system (known as the "SAP all in one").
    How can we start the automatic installation of the scenarios (with solution builder) connecting to the ERP IDES system?
    Reading the BP Quick guide, it is mentioned that in order to have the full BP installation we need to have a ERP system with another Best Practice package.
    Will the pre customized IDES data in ERP be recognized in CRM?
    In other words, is the IDES master data, transactional data and organizational structure the same as the Best Practice package one?
    Thanks a lot in advance for your help
    Benoit

    Thanks a lot for your answer Padma Guda,
    The difficult bit in this evaluation is that we don't know exactly the difference between the IDES and the Best Practice. That is to say, what is the advantage of having a CRM Best Practice connected to an ERP IDES, as opposed to a CRM IDES system connected to an ERP IDES system?
    As I mentioned, we already have an ERP IDES installed as the back-end system.
    I believe that if we decide to use the ERP IDES as the ERP back end, we will lose some of the advantages of having an ERP Best Practice connected to a CRM Best Practice, e.g. sales areas already mapped and known by the CRM system, ERP master data already available in CRM, transactional data already mapped, pricing data already mapped, etc.
    Is that right? Or do we have to do an initial load of ERP in all cases?

  • Design Patterns/Best Practices etc...

    fellow WLI gurus,
    I am looking for design patterns/best practices especially in EAI / WLI.
    Books ? Links ?
    With patterns/best practices I mean f.i.
    * When to use asynchronous/synchronous application view calls
    * where to do validation (if you're connecting 2 EIS: in both EIS, only in WLI, ...)
    * what if an EIS is unavailable? How to handle this in your workflow?
    * performance issues
    Anyone want to share his/her thoughts on this ?
    Kris

    Hi.
    I recently bought the WROX Press book Professional J2EE EAI, which discusses Enterprise Integration. Maybe not on a Design Pattern level (if there is one), but it gave me a good overview and helped me make some design decisions. I'm not sure if it's technical enough for those used to such decisions, but it proved useful to me.
    http://www.wrox.com/ACON11.asp?WROXEMPTOKEN=87620ZUwNF3Eaw3YLdhXRpuVzK&ISBN=186100544X
    HTH
    Oskar
