Best practice for setting up NICs for Hyper-V 2008 R2

I am looking for suggestions on best practice for setting up a Hyper-V 2008 R2 host at a remote location with 5 NICs: one on the management VLAN and the other 4 on the data VLAN. This server will host 2 virtual machines; one is a DC and the other is a member server running the local DHCP service. The server is set up now with one NIC on the management VLAN and the other NICs set to get their IPs from the local DHCP server on the host. We have the virtual networks set up in Hyper-V to point to each of the NICs using the "external connection" type. The virtual servers (DHCP and AD) have their own IPs set within them. The issue we are seeing: when the site loses its external connection for a while, clients can no longer get IP addresses from the local DHCP server.
1. NIC on the management VLAN -- static IP -- physical host
2. NIC on the data network VLAN -- DHCP, linked as an "external" connection in Hyper-V -- virtual server DHCP
3. NIC on the data network VLAN -- DHCP, linked as an "external" connection in Hyper-V -- virtual server domain controller
4. NIC on the data network VLAN -- DHCP, linked as an "external" connection in Hyper-V -- extra
5. NIC on the data network VLAN -- DHCP, linked as an "external" connection in Hyper-V -- extra
Thanks in advance

It looks like you may be overcomplicating things here. More and more of the recommendations from Microsoft at this point are to create a logical switch and then layer on logical networks for your management layers, but here is what I would do for your simple remote office.
Management NIC: Looks good. (Teaming would be better, but only if you had 2 different switches to protect against link failures at the switch level. That doesn't seem relevant in this case, however.)
NIC for the data network VLAN: I would use one NIC in your case, if you have the ability to trunk multiple VLANs at the switch level to the NIC. That way you set the VLAN you want to access on each VM's NIC, and your virtual switch configuration stays very simple. On this virtual switch, however, I would uncheck IPv4 and IPv6. There is no need to give this NIC an address, as you are just passing traffic through it from the VMs that are marked with VLAN tags. Again, if you have multiple physical switches in the building, teaming could be an option, but it probably adds more complexity than is necessary for a small office.
Even if you keep your virtual switches linked to separate NICs, unchecking IPv4 and IPv6 makes sense.
Disable all the other NICs.
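For reference, on Server 2012 or later this whole layout can be scripted; the cmdlets below do not ship with 2008 R2, where you would do the same steps in Hyper-V Manager and the adapter's Properties dialog. This is a rough sketch only, and the adapter names, VM names, and VLAN ID are placeholders:

    # Server 2012+ sketch; "DataNIC", "ExtraNIC1/2", the VM names and VLAN ID are placeholders.
    # One external switch on the trunked data NIC. Denying the management OS a vNIC here
    # has the same effect as unchecking IPv4/IPv6 on the adapter bound to the switch.
    New-VMSwitch -Name "DataSwitch" -NetAdapterName "DataNIC" -AllowManagementOS $false

    # Tag each VM's virtual NIC with the VLAN it should sit on.
    Set-VMNetworkAdapterVlan -VMName "DC01"   -Access -VlanId 10
    Set-VMNetworkAdapterVlan -VMName "DHCP01" -Access -VlanId 10

    # Disable the unused physical NICs.
    Disable-NetAdapter -Name "ExtraNIC1","ExtraNIC2" -Confirm:$false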
Beyond that, check your routing. Can you ping between all hosts when there is no interruption? Which DHCP server are they getting their addresses from normally? Where are your name resolution servers (DNS, WINS)?
No silver bullet here, but maybe a step in the right direction.
Rob McShinsky (VirtuallyAware.com)
VirtuallyAware - Experiences in a Virtual World (Microsoft MVP - Virtual Machine)

Similar Messages

  • Best Practice: Setting up Agents for cross-training

    The post that sparked this topic:
    http://forum.cisco.com/eforum/servlet/NetProf?page=netprof&forum=Unified%20Communications%20and%20Video&topic=Contact%20Center&topicID=.ee6fe12&fromOutline=&CommCmd=MB%3Fcmd%3Ddisplay_location%26location%3D.2cc2d609
    My contribution to this topic:
    The Scenario:
    Agent2 is a primary resource for Q2, which takes a lot of calls. At any given time there are always at least 5 calls in queue. Agent1 is a primary resource for Q1, which takes fewer calls than Q2, and rarely has calls waiting in queue. Agent1 is special, because he/she is cross-trained in Q2 and helps out when needed. Agent1 should never take a call for Q2 if a call for Q1 is waiting; regardless of how long the caller in Q2 has been waiting.
    The Problem:
    CSQs select their resources independently of what is going on in other CSQs. They only look at their own available resource pool. If a resource is available, that resource becomes the selected resource to handle the current contact; regardless of that resource's other CSQ associations.
    Agent1 runs the risk of helping Q2 callers who have been waiting longer than Q1 callers, even though he/she should be primarily helping Q1 callers.
    The Setup:
    Agents
    Agent1 (Skills: Q1 [8]; Q2 [4])
    Agent2 (Skills: Q2 [8])
    Skills
    Q1
    Q2
    CSQs
    Q1_t1 (Most Skilled; Skill Q1 - 6 and above)
    Q1_t2 (Most Skilled; Skill Q1 - 1 and above)
    Q2_t1 (Most Skilled; Skill Q2 - 6 and above)
    Q2_t2 (Most Skilled; Skill Q2 - 1 and above)
    The Solution:
    You create a tiered structure out of your CSQs.
    Instead of having 10 levels of skill to choose from, you have 5. You can think of this like a 5 star rating for your agents.
    We take advantage of the fact that scripts are interruptible: if an agent becomes available at any time during a queue loop, they will be placed into the reserved state immediately.
    We also take advantage of the fact that, if a resource is Ready in a second tier queue, then we know that there are no callers waiting in their primary queue. Otherwise, the resource would be reserved, talking, or not ready.
    In your Q2 script, select from Q2_t1 first.
    If queued and if Get Reporting Statistics shows > 0 resources Ready in Q2_t2, then select from Q2_t2. Dequeue if queued or if a Connect step failure occurs.
    This creates a situation where Agent1, who is skilled in both CSQs, empties his/her primary queue (Q1_t1) before ever taking a call from his/her secondary queue (Q2_t2). If no calls are waiting in Q1, then he/she is still eligible to help out Q2.
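    UCCX scripts are built in the graphical editor rather than written as code, but the control flow above can be sketched in pseudocode. The helpers below (Select-FromCsq, Get-ReadyCount, Invoke-Dequeue) are hypothetical stand-ins for the Select Resource, Get Reporting Statistics, and Dequeue steps, wired to simulated results only so the sketch runs:

        # Control-flow sketch of the Q2 script; the three helpers are hypothetical
        # stand-ins that simulate the real UCCX steps.
        function Select-FromCsq([string]$Csq) { (Get-Random -Maximum 2) -eq 0 }  # simulated Connect
        function Get-ReadyCount([string]$Csq) { Get-Random -Maximum 2 }          # simulated statistic
        function Invoke-Dequeue([string]$Csq) { Write-Host "Dequeued from $Csq" }

        function Invoke-Q2Script {
            # Primary tier first: agents most skilled in Q2 (Agent2).
            if (Select-FromCsq 'Q2_t1') { return 'Connected via Q2_t1' }
            while ($true) {                 # queue loop; interruptible in the real script
                # A Ready resource in Q2_t2 means that agent's own primary queue is
                # empty, so the cross-trained Agent1 can safely be borrowed.
                if ((Get-ReadyCount 'Q2_t2') -gt 0) {
                    if (Select-FromCsq 'Q2_t2') { return 'Connected via Q2_t2' }
                    Invoke-Dequeue 'Q2_t2'  # Connect failed: go back to waiting on Q2_t1 only
                }
                Start-Sleep -Seconds 1      # prompt/delay step in the real script
            }
        }

        Invoke-Q2Script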
    Possible Problems:
    1. There would be a change in the way you look at reporting.
    2. There are now two CSQs, because you cannot change the skill criteria in a script.
    3. In a rare instance the secondary script could get the report stats, see 1 resource ready, and right as it executes the select resource step, the primary script executes its own select resource step. Agent1 is now talking to a secondary contact, and his/her primary contact has to wait.
    The likelihood of this happening increases as the number of callers waiting in Q2 increases.
    Conclusion:
    What are some of your thoughts on this topic?
    How have you solved cross-training previously?
    What would you add, subtract, or modify from my proposed solution?

    Hi Anthony,
    I just found your post about cross-training and I can only say it is great!
    Actually it is really close to the behaviour I have to implement for a customer:
    - A 2 level helpdesk: level 1 takes all the calls, level 2 takes the calls that level 1 could not solve,
    - Agents of level 2 can help those of level 1 if they are available (or if the number of calls in queue is too high; that point needs to be decided),
    - The level 1 is a team of Agents,
    - The level 2 is divided into 2 agent teams, each one dedicated to a specific kind of incident.
    What I planned is the following (I reused your naming and presentation to explain it ):
    Agents
    For level 1 : Agent1 to Agent20 (Skills: S1 [8])
    For level 2 team 1 : Agent21 to Agent30 (Skills: S1 [4]; S2 [8])
    For level 2 team 2 : Agent31 to Agent40 (Skills: S1 [4]; S3 [8])
    Skills
    S1
    S2
    S3
    CSQs
    Q1_t1 (Most Skilled; Skill S1 - 6 and above)
    Q1_t2 (Most Skilled; Skill S1 - 1 and above)
    Q2 (Most Skilled; Skill S2 - 6 and above)
    Q3 (Most Skilled; Skill S3 - 6 and above)
    In the first script
    Select resources from Q1_t1 first.
    If queued and if Get Reporting Statistics shows > 0 resources Ready in Q1_t2, then select from Q1_t2. Dequeue if queued or if a Connect step failure occurs.
    When one of Agent1 to Agent20 answers a call and cannot solve the issue, they transfer the call to the script of Q2 or Q3, depending on the kind of issue.
    In the second script
    There is a single script for queues Q2 and Q3: it is executed differently using a "name of queue" parameter.
    Select resources from Q2/Q3.
    Do you think it would be the best way to answer the need?
    Also, I have understood that the Dequeue step is used for statistics (removing a call from the statistics of a queue): is that correct, or is there another use here?
    Many thanks for your answer!
    Julien

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic, etc. I get that it's best practice to try and separate traffic where you can, especially for things like FT, however I just wondered if there was a preferred method of achieving this. What I mean is ...
    - Is it OK to have everything on one switch but set each respective port group to have a primary and failover NIC (i.e. FT, iSCSI, and all the others fail over)? This would sort of give you a backup in situations where you have limited physical NICs.
    - Or should I always aim to separate things entirely, with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which traffic I should segregate on its own separate switch? Is there some sort of ranking order of priority/importance? FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on however many physical NICs you have at your disposal, but I wondered if there are any golden 100% rules, for example: FT must absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch, etc.

  • Best practice to define length for a varchar field of a table in SQL Server

    What is best practice to define the length for a varchar field in a table,
    where the field is, say, Remarks By Person: varchar(max) or varchar(4000)?
    Could it affect optimization in the future?
    Experts, please reply ...
    Dilip Patil..

    Hi Dilip,
    Varchar(n/max) is variable-length, non-Unicode character data. N defines the string length and can be a value from 1 through 8,000. Max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered + 2 bytes; for example, a 100-character remark stored in a varchar(4000) column takes 102 bytes, not 4,000. We always use varchar when the sizes of the column data entries vary considerably, and if the field's data size might exceed 8,000 bytes, we should use varchar(max).
    So the conclusion is, just as Uri said: whether to use varchar(max) or varchar(4000) depends on how many characters we are going to store.
    The following document about varchar in SQL Server is for your reference:
    http://technet.microsoft.com/en-us/library/ms176089.aspx
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Best practice on Oracle VM for SPARC System

    Dear All,
    I want to test Oracle VM for SPARC, but I don't have a new-model server to test it on. What is the best practice for Oracle VM for SPARC?
    I have a Dell laptop with the spec below:
    - CPU: Intel® Core™ i7-2640M (2.8 GHz, 4 MB cache)
    - RAM: 8 GB DDR3
    - HDD: 750 GB
    - Graphics: 1 GB AMD Radeon
    I want to install Oracle VM VirtualBox on my laptop and then install Oracle VM for SPARC inside VirtualBox; is it possible?
    Please kindly give advice,
    Thanks and regards,
    Heng

    Heng Horn wrote:
    How about a desktop or workstation computer whose latest-version CPU supports Oracle VM for SPARC?
    Nope. The only place you find SPARC T4 processors is in Sun servers (and some Fujitsu servers, I think).

  • Best practices to reduce downtime for database releases (rolling changes)

    Hi,
    What are best practices to reduce downtime for database releases on 10.2.0.3? Which DB changes can be rolling and which can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
    For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle tier environment so that you can point different middle tier servers at one or the other database. When you want to upgrade, you point all the middle tier servers at database A other than 1 that lives on a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly, depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you configure all the app servers to point at B (with the Streams replication process configured to replicate changes from the old data model to the new data model), upgrade A, repeat the smoke test, and then return the middle tier environment to the normal state of balancing between databases.
    This lets you upgrade with 0 downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes on B during the time you're smoke testing A. And you need the middle tier infrastructure in place. And you're obviously going to be involving more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well since you need to practice this sort of thing in lower environments.
    Justin

  • Reporting Tools for Hyper-V 2008 R2

    Are there any documentation or reporting tools for Hyper-V 2008 R2? I just need to get a list of all VM guests running and their associated VHD files, along with where those VHDs are stored.
    Thanks!
    Kristopher Turner | Not the brightest bulb but by far not the dimmest bulb.

    Sam,
    Thanks, we are at the start of an upgrade from 2008 R2 to 2012 R2. I just needed an easier way to pull information about the current environment, like where each VHD for each VM is located and whether there are any snapshots saved, etc.
    Just planning ahead to help make the migration easier. The current environment is a mixture of CSVs and direct LUNs, so it should be interesting when we start our migration planning.
    Noticed you are from the King of Prussia area?  Just spent a few weeks over at the Hilton Garden Inn at Valley Forge. 
    Kris
    Kristopher Turner | Not the brightest bulb but by far not the dimmest bulb.
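    For what it's worth, on 2008 R2 (which has no built-in Hyper-V PowerShell module) the VM-to-VHD mapping can be pulled from the legacy root\virtualization WMI namespace. A rough sketch, assuming the v1 class and association names; verify against your build before relying on it:

        # Inventory each VM and its attached VHD paths on a Hyper-V 2008 R2 host.
        $ns = 'root\virtualization'
        Get-WmiObject -Namespace $ns -Class Msvm_ComputerSystem -Filter "Caption='Virtual Machine'" |
          ForEach-Object {
            $vm = $_
            # Msvm_SettingsDefineState links a VM to its active settings (not snapshots).
            Get-WmiObject -Namespace $ns -Query `
              "ASSOCIATORS OF {$($vm.__PATH)} WHERE AssocClass=Msvm_SettingsDefineState" |
              ForEach-Object {
                # The settings object owns one resource allocation entry per device;
                # virtual hard disks carry their file path in the Connection property.
                Get-WmiObject -Namespace $ns -Query `
                  "ASSOCIATORS OF {$($_.__PATH)} WHERE ResultClass=Msvm_ResourceAllocationSettingData" |
                  Where-Object { $_.ResourceSubType -eq 'Microsoft Virtual Hard Disk' } |
                  ForEach-Object {
                    New-Object PSObject -Property @{ VM = $vm.ElementName; VHD = $_.Connection[0] }
                  }
              }
          } | Format-Table VM, VHD -AutoSize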

  • Best practices with LDIF Development for RBAC?

    I'm currently working on enforcing RBAC (Role Based Access controls) in OID that may be subject to change every few months. What I've currently been doing is writing LDIF files to make changes to the existing RBAC once the changes have been finalized.
    Unfortunately, now we have ended up with a growing list of LDIF files that must be run in sequential order if we were to build a new environment. Any defects or development errors that slip through developer unit testing must be handled in the same manner.
    What is the best-practice process for performing this type of development? Would it make more sense to have one LDIF file that removes all of the RBAC enforcement (via ldapmodify -c), and then a separate file that installs the latest and most up-to-date version? I've also considered just using one LDIF file, appending any updates to the end of it, and running ldapmodify with the -c parameter.

    With regard to the 29.97/30 thing, you'll find that video people are idiosyncratically imprecise about that. We say 60 when we mean 59.94, we say 30 when we mean 29.97 and we say 24 when we mean 23.976.
    We're quirky.
    Whenever somebody says one of those nice, round numbers, you can assume they're really talking about the corresponding ugly fraction.
    Unless they're film people, in which case 24 means 24, dangit.

  • Best Practices or Project Template for Rep/Version

    I have installed the Repository 6i (3) and created the users successfully, even though it has taken a lot of effort to make sure each step is correct.
    However, on setting up the workareas and importing the project files, I have been going back and forth trying to figure out where things go and who has what access.
    Is there something like a best practice or a project template for setting up a basic repository/version control system, that provides
    1. the repository structure,
    2. corresponding file system structure (for different developers, build manager, etc)
    3. access grants, and
    4. work scenarios, etc.
    The Technet demos and white papers are either too high-level (basic) or too oriented to individual functions. I can't get a clear picture of the whole thing, since there are so many concepts and elements that don't easily go together.
    Considering that I am a decent DBA and developer, it has taken me 2 weeks, and I am still not ready to sign up other developers to use this thing. How do you expect any small development teams to ever use it? It's one thing to design it to be scalable and all-possible, it's another to make it easily usable. I have been suggested to use MS VSS. The only reason I am still trying Ora-Rep is its promise to directly support Designer and Oracle objects.

    Andy,
    I have worked extensively with the Repository over the last year and a half. I have collected some of my experiences and the derived guidelines on using the Repository in real life in a number of papers that I will be presenting at ODTUG 2001, next week in San Diego. If you happen to be there (see www.odtug.com), come and see me and we could talk through your specific situation. If you are not and you are interested in those papers, drop me an Email and I could send them to you (they probably will also become available on OTN right after the ODTUG conference).
    best regards,
    Lucas

  • SAP Best Practices on assigning roles for Auditors

    Dear Gurus,
    We need to set up SAP roles for auditors in our system for SRM, ECC & BI.
    Could you please suggest which roles should be granted to the auditors as best practice to follow?
    I will really appreciate your help.
    Best Regards,
    Valentino

    Hi Martin,
    Thanks for your interest. I would be very happy to work with folks like you to slowly improve such roles as we find improvement possibilities for them, and all benefit from the joint knowledge and cool features which go into them. I have been filing away at a set of them for years now; they are not evil but still useful, and I give them to an auditor without being concerned, as long as they can tell me approximately what they have been tasked to look into.
    I then also show them the corresponding user menu of my role for these tasks and then leave them alone for a while... 
    Anyway... SAP told me that if we host the content on SDN for the collaboration and documentation of the changes to the files, then version management of the files can be hosted externally for downloading them (actually, SAP does not have an option, because their software does not support it...).
    I would rather host them on my own site and add the link in the SDN wiki and a sticky forum post link to it than use a generic download service, at least to start with. Via change management of the wiki, we can easily map this to version management of the files on a monthly periodic update cycle, once there are enough changes to the wiki.
    How about "Update Tuesday" as a maintenance cycle --> config updates each second Tuesday of the month... to remove authorizations to access backdoors which are more than "just display"...
    Cheers,
    Julius

  • Best Practice : Creating Custom Renderer for Standard Component

    I've been reading the docs and a few threads about Custom Renderers. The best practice seems to be to create a Custom Component where you need a Custom Renderer. Is this the case?
    See this post: http://forums.sun.com/thread.jspa?forumID=427&threadID=520422
    I've created several Custom Renderers to override the HTML provided by the Standard Components, however I can't see the benefit in also creating a Custom Component when the behaviour of the standard component is just fine.
    Thanks,
    Damian.

    It all depends on what you are trying to accomplish. Generally speaking if all you need is for the user interface output to be changed then a renderer will work just fine. A new component is usually made in order to provide some fundamental change in server side functionality not related to the user interface. - Ponderator

  • Best Practice Advice - Using ARD for Inventorying System Resources Info

    Hello All,
    I hope this is the place I can post a question like this. If not, please direct me to the appropriate location for a topic of this nature.
    We are in the process of utilizing ARD reporting for all the Macs in our district (3500 +/- a few here and there). I am looking for advice and would like some best practices ideas for a project like this. ANY and ALL advice is welcome. Scheduling reports, utilizing a task server as opposed to the Admin workstation, etc. I figured I could always learn from those with experience rather than trying to reinvent the wheel. Thanks for your time.

    Hey, I am also interested in any tips. We are gearing up to use ARD for all of our Macs, current and future.
    I am having a hard time with entering the user/pass for each machine; is there an easier way to do so? We don't have nearly as many Macs running as you do, but it's still a pain to do each one over and over. Any hints? Or am I doing it wrong?
    thanks
    -wilt

  • BPC 5 - Best practices - Sample data file for Legal Consolidation

    Hi,
    we are following the steps indicated in the SAP BPC Best Practice: http://help.sap.com/bp_bpcv151/html/bpc.htm
    A Legal Consolidation prerequisite is to have the sample data file that we do not have: "Consolidation Finance Data.xls"
    Does anybody have this file or know where to find it?
    Thanks for your time!
    Regards,
    Santiago

    Hi,
    From this address: https://websmp230.sap-ag.de/sap/bc/bsp/spn/download_basket/download.htm?objid=012002523100012218702007E&action=DL_DIRECT you can obtain a .zip file for the Best Practice, including all scenarios and the CSV files used in these scenarios under the misc directory.
    Consolidation Finance Data.txt is in there also.
    Regards,
    ergin ozturk

  • Best Practices: iPad/MacBook Pro syncing for video production in education

    My organization just bought 14 MacBook Pros and 14 iPad Minis. Our goal is to have students in single-day classes use the iPads to film something, then sync/export the video to a MacBook Pro where they can edit that video in iMovie. Once that single-day class is over, all of the video will (likely) be deleted, and new students come in a couple of days later and start fresh. I'm trying to figure out the best practices for this, to make it as painless as possible for all involved.
    So, matching Apple IDs for each pair? One Apple ID for all devices and manual syncing through iTunes? Dropbox/cloud syncing instead of iTunes?
    All of these devices are brand new. I have already started prepping the MacBook Pros, but have not even turned on the iPads, since I'm not sure which Apple ID I should attach to the iPads -- I assume the first Apple ID on an iPad will accept the iLife apps much the same way they do on the MacBook Pros.
    Any help is appreciated.
    Thanks
    Jack

    Well, the most important fact to accept is that ALL DRIVES WILL FAIL. It's just a matter of when. I can tell you about a nightmare situation with G-Drives (before Hitachi bought them). What format are you shooting? If you shoot on tape, you can always recapture, as long as you captured with "abort capture on dropped frames" and "make new clip on timecode break" enabled. But that's gonna take "real time." If you shot on a chip-based format, backing up the chips in multiple places (and I mean multiple) can provide a sense of security. But if you need to be able to get back to work immediately after a drive failure, having a backup of your media, or having stored it on a redundant RAID, is crucial. I also seriously recommend having a clone of your startup drive, so if your startup (boot) drive fails, you can get back to work quickly.
    https://discussions.apple.com/docs/DOC-2494

  • Best practices when carrying forward audit adjustments

    Dear experts,
    I would like to know if someone can share his best practices when performing carry forward for audit adjustments.
    We are actually doing legal consolidation for one customer and we are facing one issue.
    The accounting team needs to pass audit adjustments around April-May for last year.
    So from January to April / May, the opening balance must be based on December closing of prior year.
    Then from May / June to December, the opening balance must be based on Audit closing of prior year.
    We originally planned to create two members for the December period, XXXX.DEC and XXXX.AUD
    Once the accountants would know their audit closing balance, they would have to input it on the XXXX.AUD period and a business rule could compute the difference between the closing of AUD and DEC periods and store the result on an opening flow.
    The opening flow hierarchy would be as follow:
    F_OPETOT (Opening balance Total)
        F_OPE (Opening balance from December)
        F_OPEAUD (Opening balance from the difference between closing balance of Audit and December periods)
    Now, assume that we are in October, but for some reason the accountant runs a carry forward for February; he is going to impact the opening balance, because at this time (October) we have the audit adjustments.
    How to avoid such a thing? What are the best practices in this case?
    I guess it is something that you may have encountered if you did a consolidation project.
    Any help will be greatly appreciated.
    Thanks
    Antoine Epinette

    Cookman and I have been arguing about this since the Paleozoic era. Here's my logic for capturing everything.
    Less wear and tear on the tape and the deck.
    You've got everything on the system. I can't tell you how many times a client has said, "I know that there was a better take." The only way to disabuse them of this notion is to look at every take. If it's not on the system, you've got to spend more time finding the tape, adding "wear and tear on the tape and the deck." And then there's the moment where you need to replace the audio for one word from another take. You can quickly check all the other takes (particularly if you've done a thorough job logging the material - see below).
    Once it's on the system, you still need to log and learn the material. You can scan through material much faster once it's captured. Jumping around the material is much easier.
    There's no question that logging the material before you capture makes you learn the material in a more thorough way, but with enough self-discipline, you can learn the material as thoroughly once it's been captured.

Maybe you are looking for

  • Is there a way of persuading iCloud to sync the smart groups I have in address book on my Mac

    I have a small number of smart groups in Address Book on my iMac, but these do not sync through iCloud to my MacBook or iPod touch. Everything else syncs, even ordinary groups, but not the smart groups. Is there a way of persuading iCloud to do it?

  • Regarding SY-SAPRL

    Hi, in ECC 6, the system variable SY-SAPRL returns the version of "WAS" and not the R/3 application. Is there any system variable available to replace this variable to get the same output as in 4.6C? If there is any other solution, please let me know. Poi

  • Wide gamut LCD monitors - Actually a hindrance?

    There may possibly be a huge misconception about a key monitor spec: color gamut. I thought/assumed that the wider a display's gamut, the better. Well, I could be wrong. I have a NEC 2690 on the way that I intend to color correct with using Apple Co

  • Call report from forms6i, use report server from 10gAS

    Hi there, I have been searching for any info I can get that will help me do the following: I want to call an Oracle 6i report from Forms6i, but I want to specify a Report Server running on an Oracle10gAS box. Can this be done in a client/server fas

  • Is Crystal Reports correct for me / my needs?

    I'm wondering if Crystal Reports is correct for my needs. I don't mind investing the time/money to figure this out, but I really can't waste any time, so I thought I'd ask the question first: is Crystal Reports correct for my needs? some detail of m