Nexus 7000 - Management best practice in multi-VDC environment

Hello all, my topology includes an Admin VDC, a Core/Agg VDC and a Storage VDC for FCoE traffic. In the absence of a dedicated OOB management switch, could I patch from an interface in the Core/Agg VDC to the mgmt0 interface of the SUP2 and get access to the mgmt0 IP of all VDCs? Or is an OOB management switch a necessity to manage this topology?
Any information would be appreciated.
Kind regards
Rays

Thanks for the reply Richard. I understand the risk of not having an OOB network, thanks for that.
So without the OOB management network, are you saying I need a separate switch to which I can connect a physical interface from each VDC for management purposes? And to access the Storage and Admin VDCs, would I connect the mgmt0 interfaces to the same switch? As I understand it, you cannot allocate a physical interface to the Admin VDC, so the only access is via the console or the mgmt0 interface...
Thanks for your assistance...
Rays
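
For reference, here is a minimal sketch of how each VDC's mgmt0 interface is typically addressed behind the shared supervisor management port. The VDC name, IP addressing, and gateway below are illustrative assumptions, not taken from this thread.

    ! Illustrative only: the VDC name, addresses, and gateway are assumptions.
    ! Each VDC gets its own mgmt0 IP in the management VRF; all of them are
    ! reached through the physical mgmt0 port on the active supervisor.
    switchto vdc CoreAgg
    configure terminal
      interface mgmt0
        ip address 192.0.2.11/24
        no shutdown
      vrf context management
        ip route 0.0.0.0/0 192.0.2.1
    end
    switchback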

Similar Messages

  • Best practice for the test environment & DBA plan activities

    Dears,
    In our company, we have done sizing for hardware.
    We have three environments (Test/Development, Training, Production).
    However, the test environment has fewer servers than the production environment.
    My question is:
    What is the best practice for the test environment?
    (Are there any recommendations from Oracle related to this, or any PDF files that could help me?)
    Also, can I have a detailed document regarding the DBA plan activities?
    I appreciate your help and advice.
    Thanks
    Edited by: user4520487 on Mar 3, 2009 11:08 PM

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Looking for best practice on J2EE development environment

    Hi,
    We are starting to develop with J2EE. We are looking for best practices for a J2EE development environment. Our concerns are mainly code sharing and deployment.
    Thanks, Charles

    To support "code sharing" you need an integrated source code control system. Several options are out there but CVS (https://www.cvshome.org/) is a nice choice, and it's completely free and it runs on Windows, Linux, and most UNIX variants.
    Your next decision is on IDE and application server. These are usually from a single "source". For instance, you can choose Oracle's JDeveloper and Deploy to Oracle Application Server; or go with free NetBeans IDE and Jakarta Tomcat; or IBM's WebSphere and their application server. Selection of IDE and AppServer will likely result in heated debates.

  • Solution Manager best practices about environments

    Hello,
    we intend to use Solution Manager 4.0.
    My question: I wonder whether we need a single Solution Manager instance (production), or multiple instances (one development SolMan where development and customizing will be performed, and one production SolMan populated with transport requests coming from the development SolMan)?
    What are the best practices ?
    Thank you.
    Regards,
    Fabrice

    Dear Fabrice,
    In principle you do not need two instances of Solution Manager. One instance is sufficient for monitoring all the satellite systems.
    However, if you intend to have customized ABAP on Solution Manager, it might be a good idea to do that in a different client in the same instance, keeping that client as a development client.
    Most of the customizing in Solution Manager is not transportable, hence it should be done directly in the productive client.
    Hope this answers your queries.
    Regards
    Amit

  • What are project management best practices?

    I created a test project in Premiere Elements 12 and saved it in a directory named "Michaels Posters".  Then I archived the project to this directory, and it created a "Copied_My\ new\ video\ project1" directory with all of the media files.  Then I added a video clip to the project, archived it again, and it created the "Copied_My\ new\ video\ project1_001" folder below.
    My first real project will be a video highlights video of my 4-year-old for 2013.  This will involve editing the same project several nights a week, for maybe a couple of months.  This would result in numerous "Copied_My\ new\ video\ project1_NNN" directories being created, assuming I archive the project each night.
    So what are the best practices for managing a larger project like this and avoiding using a lot of disk space for the same project?
    Michaels\ Posters/
    ├── Adobe\ Premiere\ Elements\ Preview\ Files
    │   └── My\ new\ video\ project1.PRV
    ├── Copied_My\ new\ video\ project1
    │   ├── Adobe\ Premiere\ Elements\ Preview\ Files
    │   ├── Encoded\ Files
    │   └── Layouts
    ├── Copied_My\ new\ video\ project1_001
    │   └── Adobe\ Premiere\ Elements\ Preview\ Files
    ├── Encoded\ Files
    │   └── My\ new\ video\ project1.prel
    ├── Layouts
    └── Media\ Cache\ Files

    I do work with the LAST archived project file, which contains ALL necessary resources to edit the video.  But then if I add video clips to the project, these newly added clips are NOT in the archived project, so I archive it again.
    The more I think about it, the more I like this workflow.  One disadvantage as you said is duplicate videos and resource files.  But a couple of advantages I like are:
    1. You can revert to a previous version if there are any issues with a newer version, e.g., project corruption.
    2. You can open the archived project ANYWHERE, and all video and resource files are available.
    In terms of a larger project containing dozens of individual clips, like my upcoming 2013 video highlights video of my 4-year-old, I'll delete older archived projects as I go and keep maybe a couple of previous archived projects, in case I want to revert to them.
    If you are familiar with the lack of project management in iMovie, then you will know why I am elated to be using Premiere Elements 12 and being able to manage projects at all!
    Thanks again for your help, I'm looking forward to starting my next video project.

  • Multiple room management -- best practice -- server side http api update?

    Hi Folks, 
    Some of the forum postings on multiple room management are over a year old now.  I have a student/tutor chat application which has been in the wild for 5 months now and appears to be working well.  There is a single tutor per room, multiple chats, and soon to be a whiteboard per student, which is shared with the tutor in a tabbed UI.
    It is now time to fill out the multiple-tutor functionality, which I considered and researched when building, but did not come to any conclusions about.  I'm leaning towards a server-side implementation.  Is there an impending update to the HTTP API?
    Here is what I understand to be the flow:
    1) server-side management of who is accessing the room
    2) load balancing and managing room access (one-time user and owner sessions) from the server side
    3) for my implementation, a tutor will need to log in to the room in order for it to be available
    4) any reconnection would in turn need to be managed by the server side, and is really a special case of room load balancing.
    My fear is that at some point I'm going to need access to the number of students in the room, or similar, and this is not available, so I'll need client functionality which will need to update the server-side manager.
    As well, I'm concerned that delays in server-side access might create race conditions in a reconnect situation: the user attempts to reconnect, but the server-side manager thinks that the user is already connected.
    Surely this simple room management has been built before; does anyone have any wisdom they can impart?  Is there any best-practice guidance, or any samples?
    Thanks,
    Doug

    Hi Raff, Thanks a ton for the response.
    I wasn't clear on what I was calling load balancing.  What I mean by this is room assignment for student clients.  We have one tutor per room.  There are multiple students per room, but each is in their own one-on-one chat with the tutor.
    I'm very much struggling with where to do the room assignment / room management, on the server side or on the client side (if that is even possible).  In my testing it is taking upwards of 10 seconds minimum to get a list of rooms (4 virtually empty rooms) and to query the users in a single room (also a minimal number of users/nodes in the queried room).  If after this point I 'redirect' the student to the least full room, then the student incurs the cost of creating a new session and logging into the room.  As well, I intend to do a bit of XML parsing and other processing, so that 10 seconds is likely to grow.
    Would I see better performance trying to do this in the client?
    As far as the server side, at what point does a room go to 'not-active'?
    When I'm querying the roomList, I am considered one of the 'OWNER' users in the UserLists.  At what point is it safe to assume that I have left the room?
    Is there documentation on the meaning and lifecycle of the different status codes: not-active, not-running, and ok?  Are there others?
    How much staleness can I expect from the server-side queries?
    As far as the feature set goes, the only thing that comes to mind is XPath and/or wildcard support for getNode(), but I think this was mentioned in other posts.
    Regarding the reconnection issues, I am timing out the student after inactivity, and this is probably by and large the bulk of my reconnect use cases.  This, and any logout interaction from the student, presents a use case where I may want to reassign the student to the same room as before.  I can envision scenarios of a preferred tutor if available, etc.  In this case, I'll need to know the list of rooms.  In terms of reconnection failover, this is not an LCCS / FMS issue.
    Thanks again for responding.

  • SRM EBP User management - best practice followed for your customer.

    Hello All,
    What best practices are followed for SRM user management for your customers?
    (1) When an employee/buyer leaves the organisation, what actions do you take? Do you lock the users?
    (2) If there is anything interesting, please share your experiences.
    (3) What exactly do customers expect from SRM systems regarding user management?
    (4) What SAP audit / customer audit practices apply to user management?
    Any piece of information based on your experience/best practice is appreciated.
    regards
    Muthu

    Thanks Peter.
    It is happening only in SRM, right?
    Is there any workaround for this issue?
    Is SRM planning to take care of this in the future?
    In ECC I can delete the user whenever the user moves.
    All SRM customers will be very happy if SRM gives some workaround for this issue.
    Every customer wants to reduce cost.
    How can I find the open documents for this user in one shot?
    Thanks for answering this question.
    I have seen that the Eden Kelly report helps for shopping carts and other BOs.
    You are doing a good job on the SRM wiki's innovative topics and discussions. I appreciate it.
    The reason I am raising this concern is that one user left the organisation and we want to edit the data entered by that user; the system will not allow us to do so after deleting the user.
    So we are approaching SAP for help on this.
    It is very difficult to convince customers on this issue.
    br
    muthu

  • Nexus 5020/2248: Best Practice for DSCP-trust, verses DSCP-don't trust

    We have several Nexus 5020/2248s for our server access to the network (approximately 1000 ports). On about 20 of these ports we want to trust the DSCP markings, since they come from our Cisco Unified Communications appliances (these are access ports). However, on all other ports (mostly access, so DSCP, but ~100 trunks, so some are CoS as well) we want to "not trust", or essentially zero out, any non-zero DSCP value.
    Does anyone have suggestions on the easiest/best method for setting the trust boundaries on the Nexus 5020/2248s?
    Note: vers 5.0(3) NS(1)
    Thanks!
    Mike.
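
    For reference, a hedged sketch of the classification side for the ~20 trusted ports, using NX-OS type qos class/policy maps. Every name, DSCP value, qos-group, and interface ID below is an illustrative assumption, and DSCP matching support on the 5020/2248 should be verified against the 5.0(3)N QoS configuration guide before use.

      ! Illustrative only: names, DSCP value, qos-group, and interface ID are
      ! assumptions; the qos-group used here must also be defined in the
      ! system network-qos policy.
      class-map type qos UC-TRUST
        match dscp 46
      policy-map type qos UC-TRUST-IN
        class UC-TRUST
          set qos-group 5
      interface Ethernet100/1/10
        service-policy type qos input UC-TRUST-IN

    Whether the remaining "untrusted" ports can have non-zero DSCP values remarked to 0, rather than simply not being classified into a priority qos-group, depends on the platform generation and release, so that piece is best confirmed with the QoS guide or TAC.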

    Hi
    Before anything else, I would like you to upgrade your WLC image to 7.6.130.0 and FUS to 1.9.0.0. The FUS upgrade will require 30-40 minutes of downtime for your wireless.
    http://www.cisco.com/c/en/us/td/docs/wireless/controller/release/notes/fus_rn_OL-31390-01.html
    http://www.cisco.com/c/en/us/td/docs/wireless/controller/release/notes/crn76mr03.html
    "It tells me to disable 802.11a and 802.11b. If I go into each radio on the 'Network' tab and disable these, NOTHING can connect to any SSID. So unless I've misunderstood something, this is a contradiction in the best-practice link I posted above."
    This simply says you have to disable the radio band before changing the QoS profile values. Once you change those values you can re-enable the radio band. :)
    "If I enable FT-PSK and PSK, will devices that support FT-PSK (which I'm under the impression is 802.11r) use that, while devices that don't use 'regular' PSK? How do I know if my client is connecting using PSK or FT-PSK, as both are enabled on the same WLAN?"
    Read these 802.11r posts and you will understand how 802.11r works. Certain client devices do not like PSK and FT-PSK capabilities being advertised on the same SSID and may have connectivity issues, but most clients, like iPhones and iPads, will connect without any problems.
    http://mrncciew.com/2014/09/06/cwsp-802-11r-ft-association/
    http://mrncciew.com/2014/09/07/cwsp-802-11r-over-the-air-ft/
    http://mrncciew.com/2014/09/08/cwsp-802-11r-over-the-ds-ft/
    "show client detail <mac_address>" should indicate if client connected over FT-PSK or PSK.
    HTH
    Rasika
    **** Pls rate all useful responses ****

  • Best Practices for multi-switch MDS 9124 Impelementations

    Hi,
    I was wondering if anyone had any links to best-practice guides, or any experience, building multi-switch fabrics with the Cisco MDS 9124 or similar (small) switches? I've read most of the Fibre Channel books out there and they all seem pretty heavy on theory and Fibre Channel protocol operations, but fall short when it comes to real-world deployment scenarios. Something akin to the Case Studies sections a lot of the CCIE literature has, but anything would be appreciated.
    Regards,
    Meredith Shaebanyan

    Hi Meredith
    www.whitepapers.zdnet.com has links to good reading, including items like:
    http://www.vmware.com/pdf/esx_san_cfg_technote.pdf, which is probably a typical SAN environment these days. It's basic; just put your 9124s in where the switches are.
    http://www.sun.com/bigadmin/features/hub_articles/san_fundamentals.pdf is for bigger SANs such as DR, etc.
    Things to consider with 9124's are:
    They can break, so keep a good current backup on a tftp/ftp/scp server (see the sketch after this list).
    Consider that if you have all the ports in use, the two 8-port licences are not going to work on a replacement switch, as they are bound to your host ID. The vendor that sold you the switch should be able to get replacements quickly, but you will lose time waiting for them.
    Know exactly what the snmp-server command does: if your 9124 is replaced, you load your backup config, and you use Fabric Manager, it won't be able to manage the 9124 unless you change the admin password with snmp-server.
    9124s/9134s don't have enough buffer credits to extend beyond about 10 km.
    Any ISLs used between switches should always come in at least a pair, and use port channels where possible.
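
    A minimal sketch of those two points, the off-switch config backup and a two-link ISL port channel. The server address, file name, and interface/channel numbers are illustrative assumptions, not taken from this thread.

      ! Illustrative only: keep an off-switch copy of the configuration
      copy running-config tftp://192.0.2.50/mds9124-backup.cfg

      ! Bundle both ISLs to the peer switch into a single E-port channel
      configure terminal
        interface port-channel 10
          switchport mode E
        interface fc1/23, fc1/24
          channel-group 10 force
          no shutdown
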
    The 9124, 9124e, and 9134 are great value switches. I keep a spare for training and emergencies. We use them in a core/edge solution and I am very satisfied with them. I have only had one failure with Cisco switches in the last 5 years, and it was a 9140 that had sat around for far too long doing nothing. The spare meant we were up and running within 30 minutes from the time we noticed the failure and got to the data centre. As there were two paths, no one actually noticed anything; my management system alerted me.
    Remember to make absolutely sure that any servers attached to the SAN have multipathing software. The storage array vendors (HDS, EMC, etc.) can sell you software such as HDLM or PowerPath, or you can use an independent solution such as Veritas DMP. Just don't forget to use it.
    Follow the guidelines in the two documents and get some training, as the MDS training is very good indeed. Five days of training and you will be confident about what to do in any sized SAN, including Brocade and McData.
    A small SAN is just as satisfying as a large one. If in doubt, get a consultant to tell you what to do.
    Is that what you were after? I hope it was not too simple.
    Stephen

  • Hotfix Management | Best Practices | WCS | J2EE environment

    Hi All,
    Trying to establish some best practices around hotfix management in a J2EE environment. After some struggle, we managed to handle the tracking of individual hotfixes using one of our home-grown tools. However, the issue remains of how to automate the build of these hotfixes, rather than doing it manually as we currently do.
    Suppose we need to hotfix a particular jar file in a production environment; I would need to understand how to build 'just' that particular jar. I understand we can label the related code (which in this case could be just a few Java files). Suppose this jar contains 10 files, out of which 2 files need to be hotfixed. The challenge is to come up with a build script which builds -
    - ONLY this jar
    - the jar with the 8 old files and the 2 new files.
    - the jar using whatever dependent jars are required.
    - the hotfix build script also needs to be generic enough to handle the hotfix build of any jar in the system.
    Pointers, preferably in line with a WCS environment, would be very much appreciated!
    Regards,
    Mrinal Mukherjee

    Moderator Action:
    This post has been moved from the SysAdmin Build & Release Engineering forum
    to the Java EE SDK forum, hopefully for closer topic alignment.
    @ O.P.:
    I don't think device driver build/release engineering is what you were intending.
    Additionally, your partial post that was accidentally created as a duplicate of this one
    has been removed before it confuses anyone.

  • Working with version management and promotion management best practices BO 4.1

    Hi Experts
    I wondered if anybody knows of a document or something about best practices for working with version management and promotion management in BO 4.1?
    Our environment includes two servers. The first one is our development and test server. The second server is our prod system.
    Now, on the dev server we basically have two folders called dev and test. We control access to them with a rights system based on the folder structure.
    My question now is how you would work in this scenario (a third server is not an option). The main target is to have as few reports as possible. Therefore we try to work with the version management system and only have one version of each report in the dev folder of the CMS. But this is where the problems start. Sometimes the newest version is not the version we want to publish to the test folder or even the prod server.
    How would you publish the report to the other folder? Make a copy of the concerned report (transport to the same system via promotion management is not possible)? Also, how would you use version management in regard to the folder structure? Only use version management in the dev folder and export reports to the test folder (out of VMS control), or also use VMS in the test folder, and how would that work?
    Furthermore, I’d be interested in learning best practices with promotion management. I found out that the promotion of a report that doesn’t exist in prod doesn’t cause any problems. But as soon as an older version already exists, there is only partial success and the prod folder gets renamed to “test”.
    Any suggestions on how to handle these problems?
    Thank you and regards
    Lars

    Thank you for your answer.
    So you are basically proposing to work with the VMS in the dev folder and publish the desired version to the test folder. And the test folder is out of version control in this scenario, if I understood you correctly (like simple data storage)?
    And how would you suggest promoting reports to the prod system? Simply by promoting the desired version from the dev folder directly to prod? This would probably lead to inconsistency, because we would need to promote from the dev system to test and from dev to prod, instead of promoting in a straight line from dev over test to prod. Furthermore, it would not solve the problem of the promotion result itself (a new folder called dev will be generated in prod, but the report gets promoted to the prod folder if there was no report there before).
    Thank you for the link. I came across this page just a few days ago and also found lots of other tutorials and papers describing the basic promotion process. The promotion process in general is clear to me, but I wondered if it is possible to change some parameters to prevent the folder renaming, for example.
    Regards
    Lars

  • X2100M2 Embedded LIghts Out Manager best practice

    Hi guys,
    I'm wondering about the best practice for configuring the network interfaces on an X2100 M2 running Solaris 10 (5.10) for the Embedded Lights Out Manager. Hope you can help. I haven't found any documents that explain the best practice.
    Here is the situation:
    I have 4 network interfaces but I only need two of them, so I decided to use the bge0 and bge1 interfaces.
    bge0 is the server interface, with an IP ending in .157
    bge1 is the ELOM interface, with an IP ending in .156
    In the past it was the reverse: bge0 was the ELOM with .156 and bge1 was the server network interface with .157
    Could you guys please let me know what the best practice is? Must interface 0 be the server one? Is it possible to have network problems with this kind of configuration?
    Thanks
    Cheers,

    Hi guys,
    No one has a clue? I've got some Duke Dollars to offer...
    Thanks

  • Operating system image build and management best practices?

    How do we create gold images for servers/desktops?
    What are the best practices for image management?
    How do we control changes?
    How do we prevent unauthorized changes (installation of software)?
    What tools can we use for the above?

    I use MDT 2013 Lite Touch to create my images
    http://www.gerryhampsoncm.blogspot.ie/2014/03/create-customised-reference-image-with.html
    You should use the built-in ConfigMgr Role-Based Access Control to manage images afterwards (look at the Operating System Deployment Manager role).
    Gerry Hampson | Blog: www.gerryhampsoncm.blogspot.ie | LinkedIn: Gerry Hampson | Twitter: @gerryhampson

  • Best practice for multi-language content in common areas

    I've got a site with some text in the header/footer/nav that needs to be translated between an English and a Spanish site, which use the same design. My intention was to set up all the text as content to facilitate translation. However, if I use a standard dialog with the component's path set to a child of the current page node, I would need to re-enter the text on every page. If I use a design dialog, or a standard dialog with the component's path set absolutely, the English and Spanish sites will share the same text. If I use a standard dialog with the component's path set relatively (e.g. path="../../jcr:content/myPath"), the pages using the component would all need to be at the same level of the hierarchy.
    It appears that the Geometrixx demo doesn't address this situation and leaves the copy in English. Is there a best practice for this scenario?

    I'm finding that something to the effect of <cq:include path="<%= strCommonContentPath + "codeEntry" %>" resourceType ...
    works fine for most components, but not for parsys, or a component containing a parsys. When I attempt that, I get a JS error that says "design.path is null or not an object". Is there a way around this?

  • BPC 7M SP6 - best practice for multi server setup

    Experts,
    We are considering purchasing new hardware for our BPC 7M implementation. My question is: what is the recommended or best-practice setup for SQL and Analysis Services? Should they be on the same server, or each on a dedicated server?
    The hardware we're looking at would have 4 dual-core processors and 32 GB RAM on an x64 base. Would this adequately support both services?
    Our primary application cube is just under 2 GB and the appset database is about 12 GB. We have over 1400 users and a concurrency count of 250 users. We'll have 5 app/web servers to handle this concurrency.
    Please let me know if I am missing information to be able to answer this question.
    Thank you,
    Hitesh

    I don't think there's really a preference on that point. As long as it's 64-bit, the servers scale well (CPU, RAM), so SQL and SSAS can be on the same server. But it is important to also look beyond CPU and RAM and make sure there are no other bottlenecks like storage (best practice is to split the database files across several disks and, of course, to keep the logs on disks that are used only for the logs). Also, the memory allocation in SQL and OLAP should be adjusted so that each has enough memory at all times.
    Another point to consider is high availability. Clustering is quite common on that tier, and you could consider having the active node for SQL on one server and the active node for OLAP (SSAS) on the other server. It costs more in SQL licensing, but you get to fully utilize both servers, at the cost of degraded performance in the event of a failover.
    Bruno
    Edited by: Bruno Ranchy on Jul 3, 2010 9:13 AM
