Is it best practice to use a dedicated link between NX7K pair for keep-alive?

I have been using a dedicated physical 10G link between a pair of NX7Ks for the keep-alive. Is that a best practice? It seems a waste of 10G ports, because keep-alives do not need that much bandwidth.
I'm thinking of just configuring a dedicated VLAN interface in the default VRF and having it routed through other devices (for example, the 6509 core switches) for the keep-alive. Has anyone done that before? The goal is not to burn dedicated 10G ports on keep-alives.
Thanks a lot.

I agree that using a TenGig port for this purpose is not efficient. The whole idea of the peer-keepalive is to have a way to avoid split-brain scenarios in case of a peer-link failure. For this reason the peer-keepalives should never cross the peer link itself, as that would defeat the very purpose I just cited. Following are some viable options:
You can use the N7K management ports for the peer-keepalive. If you have redundant supervisors, make sure you patch the management ports through an external switch (an OOB management switch, for example), because only the active supervisor's mgmt0 port is up at any given time.
You can use dedicated interfaces, just as you are doing now, but in my opinion that makes more sense if you have Gig ports to spare.
You can run the peer-keepalives in-band, but you need to make sure this traffic is never routed over the peer link. It is considered good practice to use a dedicated VRF for this purpose (see the sketch below).
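To make this concrete, here is a minimal NX-OS sketch of the first and third options. The vPC domain number, VLAN, interface, and addresses are made-up placeholders, not values from the original post:

 ! Option 1: peer-keepalive over mgmt0 (default management VRF)
 vpc domain 10
   peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
 !
 ! Option 3: peer-keepalive in-band via an SVI in a dedicated VRF,
 ! routed through the core; VLAN 999 must NOT be allowed on the peer-link
 feature interface-vlan
 vrf context KEEPALIVE
 vlan 999
 interface Vlan999
   vrf member KEEPALIVE
   ip address 10.255.255.1/30
   no shutdown
 vpc domain 10
   peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf KEEPALIVE

Configure the mirror image (source and destination swapped) on the second N7K.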
Atif

Similar Messages

  • Best practice to use MediaPlayer?

    Is it best practice to use the attributes of MediaPlayer (such as playing), or to request the PlayTrait and read its playState attribute?
    (This question comes from a customer.  Reposting here for the OSMF team to respond and for the benefit of the whole group.)
    Sumner Paine
    osmf product manager

    For playback use cases, I'd recommend you stick with MediaPlayer, as it's simpler to use and manages all of the trait event registration.

  • Best practice when using Tangosol with an app server

    Hi,
    I'm wondering what is the best practice when using Tangosol with an app server (Websphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions and it appears to work as expected - I've also been able to see cache data from another app server instance.
    However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
    Data in the cache survives the removal of the EJB, but somewhere later down the line it appears to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
    If that would be the explanation, what would be a better way to keep the cache from being subject to GC - to have a "startup class" in the app server that holds on to the cache object, or would there be other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
    Best regards,
    /Per

    Hi Gene,
    I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
    If I put it elsewhere (on the "classpath"), is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with WebSphere is that "classpath" is a rather... vague concept; we use the J2CA adapter, which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid a lot of trial-and-error corrections to a file just to find that it's not actually being used.
    Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there has been only one or two objects in the test data. And they both disappear...
    Still confused,
    /Per

  • Best Practice regarding using and implementing the pref.txt file

    Hi All,
    I would like to start a post regarding what is best practice in using and implementing the pref.txt file. We have reached a stage where we are about to go live with Discoverer Viewer, and I am interested to know what others have encountered or done with their pref.txt file and the Viewer look and feel.
    If any of you have been able to add additional lines into the file, please share ;-)
    Look forward to your replies.
    Lance

    Hi Lance
    Wow, what a question and the simple answer is - it depends. It depends on whether you want to do the query predictor, whether you want to increase the timeouts for users and lists of values, whether you want to have the Plus available items and Selected items panes displayed by default, and so on.
    Typically, most organizations go with the defaults with the exception that you might want to consider turning off the query predictor. That predictor is usually a pain in the neck and most companies turn it off, thus increasing query performance.
    Do you have a copy of my Discoverer 10g Handbook? If so, take a look at pages 785 to 799 where I discuss in detail all of the preferences and their impact.
    I hope this helps
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com

  • Best practices on using EVALUATE functions

    hi, experts,
    I want to know the best practice for using EVALUATE functions in OBIEE (calling Oracle user-defined functions).
    I found that if I use EVALUATE functions in Answers,
    OBIEE constructs SQL behind the scenes and then executes it.
    Sometimes OBIEE constructs unexpected SQL and returns errors.
    So, is it better to use EVALUATE functions in logical columns?
    thanks

    EVALUATE('DB_Function(%1)' as returntype, {Comma separated Expression})
    Even when used in logical columns, it's going to fire the same SQL.

  • Best practice to use PXE on 802.1X network ?

    Hello,
    We use Cisco ISE 1.2.0.899 on our network (we plan to upgrade to 1.3 in a few months).
    Our network includes Cisco 2960S (and some 2960T) switches for wired and 2602I APs (with WiSM2) for wireless.
    We have to allow PXE boot on one (or many) VLANs.
    Do you know the best practice for using PXE on an 802.1X network?
    Can ISE and/or the switch recognize PXE requests?
    Do we have to configure settings/rules in ISE or on the switch?
    Is the easy way to allow PXE on the WebAuth VLAN?
    Regards,
    Chris

    I am in a similar position.
    We would prefer to keep all switch ports common, even those used for imaging from scratch.
    For PXE, as far as I can see, we need to allow the port to quickly fail 802.1X and MAB into a remediation VLAN.
    Using ISE we can apply an ACL that allows PXE bootp and DHCP requests and responses, along with any other traffic we want in that network, i.e. access to the internet proxy server, anti-virus updates for posturing, etc.
    I haven't configured this yet, so I'm not sure what issues we'll face with timing. We currently use an auth pattern of 802.1X first, then MAB, then fail open to the static VLAN. With ISE 1.3 this is supposedly the suggested method, instead of a hard "closed" mode.
     ! Fallback/static data VLAN for the port
     switchport access vlan XX
     switchport mode access
     network-policy VV
     ! Pre-auth port ACL; must permit PXE/DHCP (see the sketch after this block)
     ip access-group ACL-ALLOW in
     ! On a failed method, fall through: dot1x, then MAB, then open access
     authentication event fail action next-method
     authentication event server dead action reinitialize vlan XX
     authentication event server dead action authorize voice
     authentication host-mode multi-domain
     ! Open mode: traffic is allowed (subject to ACL-ALLOW) before authentication completes
     authentication open
     authentication order dot1x mab
     authentication priority dot1x mab
     authentication port-control auto
     authentication periodic
     authentication violation restrict
     mab
     dot1x pae authenticator
     ! Short EAP retransmit timer so PXE clients aren't stuck waiting on 802.1X
     dot1x timeout tx-period 10
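    As a rough, untested sketch of what that ACL-ALLOW might contain (everything here is illustrative; 10.0.0.0/24 stands in for wherever your DHCP/PXE/imaging servers live):
     ip access-list extended ACL-ALLOW
      remark DHCP from the client (src port bootpc, dst port bootps)
      permit udp any eq bootpc any eq bootps
      remark TFTP uses ephemeral ports after the initial request to port 69,
      remark so permit all UDP toward the imaging/deployment subnet
      permit udp any 10.0.0.0 0.0.0.255
      remark add entries for proxy, AV updates, posturing, etc., as needed
      deny ip any any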

  • Best practice to use Tortoise SVN with LV

    Can anyone recommend the best practice for using and structuring a project with TSVN and LV? I have seen the JKI tool and have also read about some linkage issues when using TSVN with LV, as posted on the forum here. I suppose these linkage issues still exist? Other than Perforce, is there any suggestion for source control that integrates well with LV?
    TIA
    CLD,CTD

    We use Tortoise SVN with LV and it works very well. It's not integrated, in that I cannot check things in and out from within LV; I have to do that in Explorer. That's not a problem for me.
    SVN is a very good source and version control system regardless.
    One small issue with external handling is if you want to change an already used and active filename. In LV you can save to another filename and references will update, but of course SVN doesn't pick up on that automagically. There are two solutions to this:
    1. When you check in, you'll get 1 added and 1 deleted file; select both, right-click and choose "Repair move".
    2. After changing the filename in LV, change it back in Explorer, then right-click the file for an SVN rename and rename it to the new name.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • Best practices to use stored procedure

    Just wondering about best practices for using stored procedures in TOPLink with respect to objects. Any thoughts on this?
    I find the approach suggested in the thread "Re: Coding for Stored Procedures is a lot of work!" to be fine.
    Are there any thoughts on converting results directly into Java objects?
    Murali

    I encountered the same problems.
    See the topic I posted: Re: Mapping a Java attribute to the result of a function call
    The solution I used is good, but has its restrictions.
    I created a database view on the query that uses stored functions. Then, I mapped my object to the database view. Problem solved.
    However, like I said, this solution has its restrictions:
    1) Your database must support views
    2) This only works for read-only queries
    I hope this helps you further.
    Kind regards,
    Erwin

  • Does anyone know the best practices for using Captivate in an eLearning course, please...

    I need to know the best practices for using Captivate in an eLearning course, such as how much information it should contain, etc.

    Hello There,
    Adobe Captivate has multiple workflows that can help you create eLearning courses. It can create various types of learning content, and I suggest you visit the following links.
    Product Info: www.adobe.com/products/captivate/
    OnDemand Seminars to get more info on what captivate can do: http://www.adobe.com/cfusion/event/index.cfm?event=list&type=ondemand_seminar&loc=en_us
    Register for Trainings and Webinars: http://www.adobe.com/cfusion/event/index.cfm?event=list&loc=en_us&type=&product=Captivate&interest=&audience=&monthyear=
    If you have specific scenarios to discuss, you can mail me at [email protected] or tweet me at @vish_adobe
    Thanks,
    Vish
    @vish_adobe

  • How Can we force the Hyper-V Replica to use a dedicated network between two hyper-v hosts?

    Dear Team,
    How can we force Hyper-V Replica to use a dedicated network between two hyper-v hosts?
    For live migration, we can choose the desired network in Hyper-V Manager (Use these IP addresses for Live Migration).
    I have two 10G adapters teamed together; the virtual switch is created on top as a converged network with several vNICs (ManagementOS, LiveMigration, Backup, etc.).
    Each network is set with a specific VLAN to isolate the traffic.
    The two Hyper-V hosts are on the same LAN and domain.
    Thank you.
    Regards,

    I have accomplished this by using a DNS pointer specifying an IP address on that dedicated network. That will force traffic down that network.
    John Johnston

  • Can I use Unix symbolic links between Mountain Lion and Snow Leopard Mail folders?

    After upgrading to Mountain Lion, I partitioned my iMac HD to have two partitions: Macintosh HD has Mountain Lion; I reinstalled Snow Leopard on Macintosh HD 2. Best part: you can access your user-created files from EITHER disk partition. But not so OS X Mail. I wanted to revert to Snow Leopard, since I don't like the iOS-like Mountain Lion (swipe THIS!), but Mail is a problem since all my Snow Leopard Mail was successfully migrated over to Mountain Lion during upgrade.
    Is there a way to use Unix symbolic links between actual OS X Mail folders in Mountain Lion and OS X Mail in Snow Leopard?
    It seems the (trial) symbolic link I created pointed to a blank file.

    It's not a matter of "letting" each maintain its own database, William. By default, I believe, I have no control over what gets written. In fact, if there were a way to set a preference that says, use this index named "spotlightindexSL' only [while in SL], that might solve my problem. Then when booting up in ML, it would just go after the index it last made.
    My guess is that while I am in ML or SL and not the other, there are all sorts of changes to files and the system freaks and says "Oh, now look at what a mess I've made — there are all sorts of files unaccounted for. Now I have to rebuild the whole thing."
    I have 2 hard drives in my Mac, both 500GB. One (Working Disk) has no operating system and holds all my files; the other drive is partitioned 470/30GB, with SL on the 470 and ML on the 30. When I restart in either OS, they auto-start indexing as if for the very first time, and they do both hard drives (in total: 3 partitions of files, not counting the ML Restore partition).
    I know it's all unconventional; I just wanted to see what my $20 new OS will cost me in software upgrades, in particular my $1800 Adobe Design Suite CS4 and a few others.

  • Best Practice in using Business Packages

    Hi All,
    Are there any best practices in the use of Business Package content? Do you assign the roles delivered by the Business Package and make changes to the original iViews?
    or
    Do you copy the content delivered in the Business Package to a new folder and work with it there?
    These questions are purely at the configuration level and not at the Java coding level, for instance if I want to turn off the iView tray, change a parameter such as height, or remove an iView from a page or role.
    I would like to know the various approaches the SDN community uses and the different challenges and benefits that result in each approach.
    Look forward to hearing from you all
    Paul

    Hi Paul,
    I also build my own roles. The only time I might use the standard roles is for demo purposes early in a project.  You will find that in some cases the business packages like MSS don't always even include standard roles, so you have no choice but to build.
    I never change any of the standard iViews/Pages/Worksets - ever.
    The most contentious issue seems to be whether to do a full or delta link copy of the standard objects.  I tend to initially do a full copy of the objects into a custom folder set in the PCD and modify those. Then I only use delta links from Page to iViews where I need the option of setting different properties for the same iView if it appears in multiple pages.  Delta links can be a bit flakey at times, so I tend to only use them where I have to.  I suspect that I may get to a point where I don't use them at all.
    Just my 2 cents worth....
    Regards,
    John

  • Best Practice on using and refreshing the Data Provider

    I have a "users" page that lists all the users in a table; let's call it the master page. One can click on the first column of the master page and it takes them to the "detail" page, where one can view and update the user detail.
    Master and detail use two different data providers based on two different CachedRowSets.
    Master CachedRowSet (Session scope): SELECT * FROM Users
    Detail CachedRowSet (Session scope): SELECT * FROM Users WHERE User_ID=?
    I want the master to be updated whenever the detail page is updated. There are various options to choose from:
    1. I could call masterDataProvider.refresh() after I call the detailDataProvider.commitChanges() - which is called on the save button on the detail page. The problem with this approach is that the master page will not be refreshed across all user sessions, but only for the one saving the detail page.
    2. I could call masterDataProvider.refresh() in the preRender() event of the master page. The problem with this approach is that refresh() will be called every single time someone views the master page. Furthermore, if someone goes to the next page (using the built-in pagination on the master page table), clicks on a user to view its detail and then closes the detail page, it does not keep track of the pagination (what page the user was on when he/she clicked on a record to view its detail).
    I can find some workarounds for this problem, but I think this should be a fairly common usage (two-page CRUD with master-detail). If we can discuss and document some best practices for doing this, it will help all developers.
    Discussion:
    1. What is the best practice for setting the scope of the data providers and CachedRowSets? I noticed that in the tutorial examples, they used page/request scope for the data provider but session scope for the associated CachedRowSet.
    2. What is the best practice to refresh the master data provider when a record/row is updated in the detail page?
    3. How do we keep track of pagination (what page the user was on when he/she clicked on the first column in the master page table), so that upon updating the detail page, we can provide the user with a "Close" button to take them back to whatever page number he/she was on?
    Thanks
    Message was edited by:
    Sabir

    Thanks. I think this is useful information for all. Do we even need two data providers and associated row sets? Can't we just use TableRowDataProvider, like this:
     TableRowDataProvider rowData = (TableRowDataProvider) getBean("currentRow");
    If so, I am trying to figure out how to pass this from the master to the detail page. Essentially the detail page uses a row from the master data provider. Then I need the user to be able to change the detail (row) and save the changes (in the table). This is a fairly common issue in most data-driven web apps. I need to design it right, vs. just coding.
    Message was edited by:
    Sabir

  • Best Practice to use a single root Application Module?

    I was reading in another thread that it may be a good idea to have all application modules nested within a single root application module (AM), so that there is only one session maintained for the root AM versus an individual session for each AM. Is this a best practice? If yes, should the root AM be a skeleton AM (minimal custom service methods), or should you pick the most heavily used AM and nest the other AMs underneath it?
    In my case, I currently have 2 AMs (and will have 3 AMs in the future), each representing a different set of use cases within the application (i.e., one supports user searches / shopping-cart-like functionality, and the second supports an enrollment process). It could be the case that a user only accesses pages on the web site to do searches (first AM), or only to do enrollment (2nd AM), or they may access pages of the site that use both AMs. Right now I have 2 separate AMs that are not nested. Should I nest the AMs and define a root AM?
    thanks

    Hi javaX
    The main physical effect of having 2 separate AMs is that they have their own transactions with the database, and presumably sit in the application module pool as their own instances consuming connections from the connection pool. Alternatively a single root AM with 2 nested AMs share a single transaction through the root AM; only the root AM controls the transaction in this scenario.
    As such it's a question of do you need separate transactions or will one suffice?
    How you group your EOs/VOs etc. within the AMs is up to you, but this usually falls into logical groups such as you have done. If a single transaction is fine, then instead of creating multiple AMs you could just create logical package structures. Neither method is right or wrong; they're just different ways of structuring your application.
    When you create a nested AM structure, within your ViewController project in the Data Control Palette you'll actually see 3 data controls mapped to each AM. In addition expanding the root AM data control, you'll see the nested AMs again. Create a dummy project with a nested AM structure and you'll see what I mean.
    If you base your page definitions on anything from the root AM and its children in the Data Control Palette, this will work on the root AM's transaction.
    If you base your page definitions on something from one of the other AM data controls that isn't inside the main root AM in the Data Control Palette, instead of using the root AM's transaction, the separate child AM will be treated as root AM and will have its own transaction.
    The thing to take care of when developing web pages is to consistently use either the root AM with its nested AMs, or the child AMs directly with their separate transactions; otherwise it might cause a bit of a debugging nightmare later on, when the same application is locking and blocking on the same records from 2 separate AM transactions.
    Hope this helps.
    CM.

  • Best Practice to use MRP results

    Dear Forum,
    As you know, the SAP standard system will not show any exception message for the finished product in case one of the components is arriving late. What is the best practice to deal with that situation if we are not going to use APO?
    Thank You,
    Fadi

    Fadi,
    The planner for the component will see the exception message.
    Depending on the organization of the planning group, the component planner either addresses the entire top-to-bottom material BOM chain, including the FGs, or collaborates with the FGs planner to resolve the issue.
    SAP supports automatic sending of messages to appropriate individuals within the organization, using a number of delivery methods.
    I don't believe SAP best practices speaks to this matter.  It is usually left to the company to decide.
    Rgds,
    DB49
