Replica info - filtered vs. read/write of the root partition

Hello all,
According to Craig's setup guide, the BM server should be in its own partition, of which it is the master replica, and it should also hold a read/write replica of the root partition.
We currently have this setup, but the one issue we are having is that our content filter uses LDAP to monitor eDirectory authentication, and it seems to be grabbing some workstation authentications as opposed to the user authentications.
One suggestion to resolve this issue is to use a filtered replica which only sees users and user groups. Is this an option with our BM servers, or should I be looking at using a different server for LDAP authentication and putting a filtered replica on that one?
Any thoughts are greatly appreciated.
Steve D.

BorderManager needs to read license objects when it launches, along with
filtering objects, access rules, and its own configuration from NDS. It
also needs to read NMAS-related information from NDS.
I have found that the most efficient way to a) get BMgr to read its
information and b) fix filtering issues is to have the BMgr server in
its own OU. In the past, there was also a site-to-site VPN dependency on
reading a root replica, but that was fixed some time ago. (VPN may
launch faster if the BM server has a root replica, but it doesn't have
to have one.)
BM wants to read licenses initially from the root of the replica ring,
so it helps if the BM server is the master of the replica ring holding
the licenses. This is not a requirement, but it usually makes BM launch
faster, and it is especially important in branch offices with a
site-to-site VPN. BM reads filters from the NBMRuleContainer, which is
almost always in the same OU as the server. It is easier to fix
filtering issues if you can simply delete them all and remigrate them
into NDS without having to worry about filters from some other BM server
being in the same container. These are the main reasons I like to have
BM in its own partition and the master of that replica ring.
It may help to have a replica of the security container on the server
as well, for NMAS-related VPN logins, but I'm not sure on that. If you
are running NMAS RADIUS on the same server, you also need replicas
of the user partitions on the server. And with NMAS-related
logins for VPN, you really want all the clients and all the servers
with user replicas up to the latest version of NMAS.
Access rules are normally applied to the server object, but if they are
applied to OUs above BM, it may help to have replicas holding those
OUs on the server; it's not required, though. (BM will have to read the
OUs from some other server if it can't get them from itself.)
Offhand, those are the NDS-related requirements I can think of for BM.
I would put my efforts into fixing the LDAP calls the application is
using so that it doesn't look at workstation objects, rather than
trying to filter those objects out. Alternatively, perhaps you could
alter your NDS design and put all the workstation objects into one or
more OUs that the LDAP app doesn't look at?
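One way to attack it from the LDAP side, assuming the content filter lets you edit its search filter, is to restrict the query to user object classes so workstation objects never match. The server name, credentials, and base DN below are hypothetical, and this is a non-runnable sketch rather than a tested recipe:

```shell
# Sketch only: an eDirectory LDAP query limited to user objects, so
# workstation objects are excluded at the filter level. Hostname,
# bind DN, and base DN are made-up examples.
ldapsearch -H ldap://bmserver.example.com \
    -D "cn=admin,o=acme" -W \
    -b "o=acme" \
    "(objectClass=inetOrgPerson)" cn
```

If the filtering application supports a configurable search filter, the same `(objectClass=...)` restriction can usually be pasted in there directly, which avoids needing a filtered replica at all.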
Craig Johnson
Novell Support Connection SysOp
*** For a current patch list, tips, handy files and books on
BorderManager, go to http://www.craigjconsulting.com ***

Similar Messages

  • While syncing it says that it can't read/write to the iPod. I've reset to factory settings and it still won't work.

    Every time I connect it to sync, it says it's been synced with another library and asks if I want to erase and sync with the current iTunes library. It has never been synced with another iTunes library, just this one on my Mac. When it starts to sync, it says that it can't read/write to the iPod. I've restarted it, reset it, and made sure my iTunes is the most recent. HELP.

    If you still have the problem after restoring to factory defaults/new iPod, then you very likely have a hardware problem, and an appointment at the Genius Bar of an Apple Store is in order.

  • Mount point won't allow read/write for non-root user

    Any ideas why this particular fstab line leads to root user only read/write for any disk referenced in my fstab?
    Example:
    UUID=496E-7B5E   /media/STORAGE   vfat   defaults 0   0
    I have tried all variations of what "defaults" should be (rw,suid,dev,exec)
    I had even added uid=0777, and no matter what options I add there, doing
    sudo mount -a
    or with the line in fstab commented out and
    mount -t vfat -U 496E-7B5E /media/STORAGE -o defaults
    causes the same issue.
    This results in every filesystem there being mounted read-only for me as a user, and I can only write to them as root.
    Weird
    I have run
    sudo chmod -v -R a+rwx /media/STORAGE
    and similarly
    sudo chmod -v -R 0777 /media/STORAGE
    Both were tried on the directory as mounted and unmounted. When mounted, the verbose output DOES NOT error out and shows the property change of the files.
    Oddly, if no fstab reference is used, the disk shows up in the Dolphin panel, can be mounted in that manner, and is read/write as a usual user.
    I'm using a Chakra live system installed with UNetbootin, so perhaps that is the issue... so:
    How is mounting through Dolphin handled, and what might I use at the command line to accomplish this same routine? I only need one partition to mount read/write when the system starts, so maybe I can add the command to rc.local.
    Last edited by bwh1969 (2009-01-18 23:04:59)

    # fstab generated by gen_fstab
    #<file system>   <dir>         <type>      <options>    <dump> <pass>
    none            /dev/pts      devpts      defaults        0     0
    none            /dev/shm      tmpfs       defaults        0     0
    UUID=496E-7B5E /media/STORAGE vfat    defaults,user,users,rw,exec,uid=777,gid=777   0       0
    /dev/sr0     /mnt/sr0_cd  auto     user,noauto,exec,unhide 0     0
    # This would do for a floppy
    #/dev/fd0        /mnt/floppy    vfat,ext2 rw,user,noauto    0     0
    #    +   mkdir /mnt/floppy
    # E.g. for USB storage:
    #/dev/sdb1        /mnt/usb      auto      rw,user,noauto   0     0
    #    +   mkdir /mnt/usb
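A possible fix, offered with the caveat that it is untested here: vfat (FAT) has no Unix permission bits at all, so `chmod` on a mounted vfat filesystem has no lasting effect; ownership and mode have to be set at mount time. Note also that `uid=`/`gid=` take numeric user/group IDs (e.g. 1000 for the first regular user, found with `id -u`), not an octal mode, which is likely why `uid=0777` and `uid=777` changed nothing.

```shell
# Hypothetical corrected /etc/fstab line. 1000 is an assumed desktop
# user/group ID (check with `id -u` and `id -g`); umask=000 makes the
# whole filesystem read/write for everyone.
UUID=496E-7B5E  /media/STORAGE  vfat  rw,user,uid=1000,gid=1000,umask=000  0  0
```

Dolphin mounts removable media through udisks/HAL, which applies equivalent uid/umask options automatically for the logged-in user, which would explain why the disk is writable when mounted that way.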

  • [solved] unable to boot to the root partition on my new (usb) HDD

    Hello,
    I got an "unable to determine major/minor number of root device" error message when the system tried to find the root partition (after it successfully booted from the boot partition).
    Shuttle XPC SB65G2 (mainboard FB65), USB HDD: Intenso INIC-1608L, Linux ctkarch 2.6.37-ARCH #1 SMP PREEMPT, GRUB legacy.
    sdb1: /boot (ext2), sdb2: swap, sdb3: / (ext4), sdb4: /home (ext4)
    Of course I have usb in the HOOKS line in mkinitcpio.conf.
    I tried to install with sdb1: / (ext4), sdb2: swap, sdb3: /home (ext4),
    but I got another error: it can't find the file /dev/blabla (root partition) (again, after it successfully booted from the boot partition!).
    At last I found the solution: I have to comment out "root (hdx,x)" in menu.lst for GRUB legacy!:
    # (0) Arch Linux
    title Arch Linux
    # root (hd1,0)
    kernel ... by-uuid...
    You don't need this line when you specify the root device by UUID, by label, or similar;
    it seems this line perturbs the behavior of the system.
    I hope this helps someone...
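For reference, a complete hypothetical menu.lst stanza of this shape; the kernel/initrd file names and the UUID below are placeholders for illustration, not values taken from the original post:

```shell
# menu.lst (GRUB legacy) -- hypothetical example
title Arch Linux
# root (hd1,0)      <-- commented out, as described above
kernel /vmlinuz26 root=/dev/disk/by-uuid/XXXXXXXX-XXXX ro
initrd /kernel26.img
```

With `root=` given by UUID, GRUB's `root (hdX,Y)` mapping (which can shift for USB disks between boots) is no longer needed to locate the kernel's root filesystem.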

    You do not need the "raw". If you do:
    ok devalias
    it will show you how other aliases are formed
    do ls -l /dev/dsk/c0t2d0s0 and you'll see the "non-raw" device path
    ok reset
    will "lock" your alias in NVRAM but it will attempt to boot from boot-device (normally disk) the next time unless auto-boot? is set to false
    ok setenv auto-boot? false
    My guess is that it could not understand "raw", reset, and booted from boot-device.
    You can:
    ok setenv auto-boot? false
    ok nvunalias altboot
    ok nvalias altboot /pci@1f,0/ide@d/dad@2,0
    ok reset
    ok boot altboot

  • Can't boot os 10.8 after setting in macintosh hd "get info" - "everyone" from "read/write" to "no access"

    Hello to everyone,
    While exploring my Mac (this is my first one, a rMBP), I clicked "Get Info" on "Macintosh HD". At the bottom, under Sharing & Permissions, there was an entry "Everyone" set to "Read/Write". I changed that setting to "No Access". After that the OS was effectively dead (stuck): the mouse was moving, but I couldn't do anything; none of the applications or buttons were working. So I turned off the laptop with the power button and pressed it again to turn it on. It tries to load Mac OS, but it gets stuck at the white display with the gray Apple in the middle and tries to load forever. I tried to boot with the Shift key pressed (safe mode), but it does the same.
    PS. Windows 7 in the bootcamp partition works just fine and I can see Macintosh HD from there.
    Any help please??

    Here's a good guide: http://support.apple.com/kb/TS1417
    You'll probably have to boot from the recovery system, as described in the "Try disk utility" step 1 of the guide.
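To see why the Mac hangs, and what the fix amounts to: setting "Everyone" to "No Access" on the startup volume removes the "other" read/execute bits that every system process needs to traverse the volume root. A portable illustration using a scratch directory (on the real machine the target would be the startup volume, e.g. "/Volumes/Macintosh HD" from the Recovery Terminal, so the path here is only an example):

```shell
# Illustration of the breakage and the fix, on a throwaway directory.
mkdir -p /tmp/macroot
chmod 755 /tmp/macroot      # a normal volume root: everyone can read/traverse
chmod o-rwx /tmp/macroot    # what setting Everyone to "No Access" did
chmod o+rx /tmp/macroot     # the fix: restore read/execute for everyone
stat -c %a /tmp/macroot     # → 755
```

The same `chmod o+rx` applied to the real volume root from the Recovery Terminal should restore bootability, though on a live system Disk Utility's repair is the safer first step.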

  • Formatted on  XP - EXTREMELY SLOW read/write on the Mac !!!!

    Man, this iPod is wack!! I'm using the iPod mini in Disk Mode to transfer bits (updaters, since my XP machine is NEVER connected to the net) back and forth between my PowerBook and my XP Intel-based machine. Obviously I set up and formatted the iPod mini on the XP machine.
    Observation: file read/write times are unacceptably SLOW on the Mac and the PC. Did Apple deliberately make this thing unusable in this situation? This thing has a transfer speed as if it were connected to a USB 1.1 port, but it is NOT, on any of my machines. I format my USB flash drive (SanDisk Cruzer Titanium) on the same XP machine, use the same ports, and its read/write times are at least 5 times faster than the mini!!!
    When I format and set up the iPod mini on the Mac and then install MacDrive on the PC, astonishingly the read/write times significantly increase on BOTH the Mac and the PC. As I suggested, I think there is something drastically wrong with Apple's PC iPod setup; there is NO way my other devices formatted on the PC have such abysmal performance. Man, I click on a folder in the Finder of the PC-formatted iPod and it takes about 3 seconds until the files inside are displayed... this is also the case on the PC.
    Anyone else experienced this?
    I was going to buy another iPod specifically for transferring data between my Macs and my colleagues' PCs, but there is no way I will with such severely degraded read/write performance.
    Best

    Apparently this is not a single isolated case.
    Scott Aronian posts:
    "On Mac FAT32 Format Slower Than HFS "
    My 60GB iPod came in Windows FAT32 format which I liked since I could use it in disk mode on both Mac & Windows. However it seemed slow to start playing video and when switching between video files.
    I did some timing tests and on the Mac FAT32 was much slower than HFS.
    FAT32
    Start playing movie = 12s
    Stop movie and restart at same location = between 12-30s
    HFS
    Start playing movie = 5s
    Stop movie and restart at same location = about 6s
    If you have a Mac and your new 5G iPod is formatted for Windows, you may want to use the iPod Updater to restore and make the drive HFS.
    There are NO such performance problems with USB flash drives when formatted to FAT32 on PCs; I have tested many and they perform similarly on BOTH platforms.
    Something fishy going on here.

  • How does LR read/write to the HD?

    OWC just came out with their new 6Gb/s SSDs (http://eshop.macsales.com/shop/SSD/OWC/Mercury_Extreme_Pro_6G) and it made me wonder how LR reads and writes to the hard drive... Does it do it more in 4k blocks?
    I'm asking because I want to know what benchmark tests I should be referring to when looking at SSDs...
    Thanks!

    There is a C API for accessing shared variables. I think it should be possible to use this from Java.
    Norbert
    EDIT: You can look here for a more elaborate answer. I am not sure if there is a generic C/C++ API available without going via DataSocket nowadays.
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • Still not sure if InputStream Reader/Writer is the right thing

    Well, I changed my simple client/server code to use Readers and Writers, since I figured they should be correct because I am sending text back and forth between the client and server. But I just realized that the read methods of Readers return ints. I can convert them to chars, but that seems relatively inefficient. Before, I was using DataInputStream.readUTF/writeUTF, which read whole lines of data (strings). That seems better, but the API confuses me, and I'm not sure why it says Readers and Writers should be used for textual data when read() returns ints and casting overhead is needed. Thanks

    To be honest, man, I'm not sure. :(
    But perhaps it's because the base class of all the input classes reads one value at a time. Whoever programmed it may have thought it wiser to return the raw int code instead of converting it into a letter.
    Just because it doesn't serve your purposes doesn't mean it is inefficient; it may be more efficient for someone who actually wants the int code. Bringing the int code to the user first might be the reason they did it this way, instead of casting it.
    There might be an extending class that does the conversion. However,
    as I said before,
    I have no idea. :(

  • Is there a way to enter an event time in Calendar as a reminder for me and NOT have Calendar erase it in the info as it reads it into the event start and end times?

    Calendar can recognize and derive event start and end times from the text description entered. However, when it seizes on a time (start and/or end) and automatically enters it as the start and/or end time, it also (sigh) deletes the time from the text. I enter the times as reminders for myself; how can one prevent them from being deleted by Calendar?

    I am having the same darn problem. I really dislike this auto feature in Calendar.
    Can anyone help us please?

  • How do you create default Read/Write Permissions for more than 1 user?

    My wife and I share an iMac, but use separate User accounts for separate mail accounts, etc.
    However, we have a business where we both need access to the same files, and both of us need Read/Write permissions when either of us creates a new file/folder.
    By default, new files and folders grant Read/Write to the creator of the new file/folder, read-only to the group "Staff" in our own accounts or "Wheel" in the /Users/Public/ folder, and read-only to Everyone.
    We are both administrators on the machine, and I know we can manually override the settings for a particular file/folder by changing the permissions, but I would like to set things up so that Read/Write permissions are assigned to both of us in the folder that holds our business files.
    It is only the two of us on the machine; we trust each other and need complete access to the many files we share. I have archiving programs running so I can get back old versions if we need them, so I'm not worried about us overwriting a file with bad info. I'm more concerned about us having out-of-date duplicates in our respective user accounts.
    Here is what I have tried so far:
    1. I tried to just set the permissions of the containing folder with us both having read/write permissions, and applied that to all contained elements.
    RESULT -> This did nothing for newly created files or folders, they still had the default permissions of Read/Write for the creating User, Read for the default Group, Read for Everyone
    2. I tried using Sandbox ( http://www.mikey-san.net/sandbox/ ) to set the inheritance of the folder using the methods laid out at http://forums.macosxhints.com/showthread.php?t=93742
    RESULT -> Still this did nothing for newly created files or folders, they still had the default permissions of Read/Write for the creating User, Read for the default Group, Read for Everyone
    3. I have set the umask to 002 ( http://support.apple.com/kb/HT2202 ) so that new files and folders have default permissions that give the default group Read/Write access. Unfortunately, this changes the default for the entire computer, not just a given folder.
    I then had to add my wife's user account to the "Staff" group, because for some reason her account was not included in it. I think this is because her account was ported onto the computer when we upgraded, whereas mine was created new. I read something about that somewhere, but don't recall where now. I discovered which groups we were each in by using the Terminal and typing "groups username", where username was the user I was checking on.
    I added my wife to the "Staff" group, and both of us to the "Wheel" group, using the procedures I found at
    http://discussions.apple.com/thread.jspa?messageID=8765421&#8765421
    RESULT -> I could create a new file using TextEdit and save it anywhere in my account and it would have the permissions: My Username - Read/Write, "Staff" or "Wheel" (depending on where I saved it) - Read/Write, Everyone - Read Only, as expected from the default umask.
    I could then switch over to my wife's account, open the file, edited it, and save it, but then the permissions changed to: Her Username - Read/Write, (unknown) - Read/Write, Everyone - Read Only.
    And when I switch back to my account, now I can open the file, but I can't save it with my edits.
    I'm at my wits' end with this, and I can't believe it is impossible to create a common folder that we can both put files into with Read/Write permissions, like a true shared folder. Anyone who has used Windows knows what you can do with the Shared folder in that operating system, i.e. anyone with access can do anything with those files.
    So if anyone can provide some insight on how to accomplish what I really want to do here, and help me undo the things it seems I have screwed up, I would greatly appreciate it.
    I tried to give as detailed a description of the problem and what I have done as possible without being too long-winded, but if you need to know anything else to help me, please ask; I certainly won't be offended!
    Thanks In Advance!
    Steve

    Thanks again, V.K., for your assistance and especially for the very prompt responses.
    I was unaware that I could create a volume on the HD non-destructively using disk utility. This may then turn out to be the better solution after all, but I will have to free up space on this HD and try that.
    Also, I was obviously unaware of the special treatment of file creation by TextEdit. I have been using this to test my various settings, and so the inheritance of ACLs has probably been working properly, I just have been testing it incorrectly. URGH!
    I created a file from Word in my wife's account, and it properly inherited the permissions of the company folder: barara - Custom, steve - Custom, barara - Read/Write, admin - Read Only, Everyone - Read Only
    I tried doing the chmod commands on $TMPDIR for both of us from each of our accounts, but I still have the same behavior for TextEdit files though.
    I changed the group on your shared folder to admin from wheel as you instructed with chgrp. I had already changed the umask to 002, and I just changed it back to 022 because it didn't seem to help. But now I know my testing was faulty. I will leave it this way though because I don't think it will be necessary to have it set to 002.
    I do apparently still have a problem though, probably as a result of all the things I have tried to get this work while I was testing incorrectly with TextEdit.
    I have just discovered that the "unknown user" only appears when I create a file from my wife's account. It happens with any file or folder I create in her account, and it exists for very old files and folders that were migrated from the old computer, i.e. new and old files and folders have permissions: barara - Read/Write, unknown user - Read Only, Everyone - Read Only.
    Apparently the unknown user gets the default permissions of a group, as the umask is currently set to 022 and unknown user now gets Read Only permissions on new items, but when I had umask set to 002, the unknown user got Read/Write permissions on new items.
    I realize this is now taking this thread in a different direction, but perhaps you know what might be the cause of this and how to correct or at least know where to point me to get the answer.
    Also, do you happen to know how to remove users from groups? I added myself and my wife to the Wheel group because that kept showing up as the default group for folders in /Users/Shared
    Thanks for your help on this, I just don't know how else one can learn these little "gotchas" without assistance from people like you!
    Steve
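For readers hitting the same problem: on plain POSIX systems the classic way to get a folder that two users can both write is a common group plus the setgid bit on the directory, combined with a 002 umask in each account. The Mac-native route is ACL inheritance (`chmod +a ... file_inherit,directory_inherit`, which is what Sandbox automates), but the portable sketch below illustrates the underlying idea; the path and mode are examples, not taken from the thread:

```shell
# Portable sketch of a group-shared folder: the setgid bit (the "2" in
# 2770) makes files created inside inherit the directory's group, so
# with a 002 umask in each account both users get group write access.
mkdir -p /tmp/business_shared
chmod 2770 /tmp/business_shared   # rwx for owner and group, setgid set
ls -ld /tmp/business_shared
```

The setgid bit only fixes group *ownership* of new files; the group *write* bit on new files still comes from each user's umask, which is why the umask 002 step in the thread is part of the picture.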

  • Permissions of files exported from iPhoto; read/write owner-only

    Hi folks,
    When I export a set of files as JPG from iPhoto to a folder, I notice that the permissions are set to read & write for the owner only; the rest of the group and world are not allowed to see them.
    I noticed this when I exported a set of files, then burned that to DVD for someone else to have; he couldn't read the files on the DVD.
    Actually, it's even a bit weirder: 2 or 3 files (out of several dozen) are actually world readable, and when I export the original (RAW) images, these are group & world readable.
    Does anyone have any idea if I'm missing a setting or something? It's relatively easy to adjust the permissions, but seems very odd and illogical.
    Thanks,
    Evert

    Evert:
    First, Control-click on one of the thumbnails in iPhoto and select Show File from the contextual menu. Check that file's permissions. If they are the same as the exported file's, then you'll have to change the permissions on the library and all files therein.
    One way to do it is to download and run BatChmod on the iPhoto Library folder with the settings shown here, putting your administrator login name, long or short, in the owner field and setting the group to what is already the default (often it's staff). Set the permissions for the group and others to what you want them to be and click the Apply button. You can either type in the path to the folder or just drag the folder into that field.
    Or, you can select the library folder, type Command-i, and set the permissions in the Info window, clicking Apply to enclosed items when done. (I'm running Leopard, and the permissions section in the Info window is different than it was in Tiger, but I think the button is Apply to enclosed items.)
    Do you Twango?
    TIP: For insurance against the iPhoto database corruption that many users have experienced I recommend making a backup copy of the Library6.iPhoto database file and keep it current. If problems crop up where iPhoto suddenly can't see any photos or thinks there are no photos in the library, replacing the working Library6.iPhoto file with the backup will often get the library back. By keeping it current I mean backup after each import and/or any serious editing or work on books, slideshows, calendars, cards, etc. That insures that if a problem pops up and you do need to replace the database file, you'll retain all those efforts. It doesn't take long to make the backup and it's good insurance.
    I've created an Automator workflow application (requires Tiger), iPhoto dB File Backup, that will copy the selected Library6.iPhoto file from your iPhoto Library folder to the Pictures folder, replacing any previous version of it. It's compatible with iPhoto 08 libraries and Leopard. iPhoto does not have to be closed to run the application, just idle. You can download it at Toad's Cellar. Be sure to read the Read Me pdf file.
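If changing the library permissions is overkill, the exported copies can also be opened up with plain `chmod` after each export, before burning the DVD. The folder and file names below are stand-ins for the real export folder:

```shell
# Example: making exported JPGs group- and world-readable before
# burning them to DVD. Paths are illustrative only.
mkdir -p /tmp/iphoto_export
touch /tmp/iphoto_export/photo1.jpg
chmod 600 /tmp/iphoto_export/photo1.jpg   # the owner-only mode iPhoto set
chmod -R go+r /tmp/iphoto_export          # grant group and world read
stat -c %a /tmp/iphoto_export/photo1.jpg  # → 644
```

(macOS's `stat` uses `-f '%A'` instead of `-c %a`, but the `chmod` invocation is the same on both platforms.)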

  • Slow read/write over Thunderbolt and SSD

    Hey all,
    I recently picked up a Seagate GoFlex Thunderbolt sled and a 256GB Toshiba SSD for it.
    I am wanting to use it as my Lightroom 5 catalog and Smart Preview drive, and leave the main HD clean.
    When transferring all the info over from my main HDD (158GB), I checked Activity Monitor to see the real-time read/write to the drive, and it only topped out at 150MB/sec. What gives?
    I tested with an external FW800 and got 50-60MB/sec
    Then tested with esata drive and got 80MB/sec
    Both of these are platter drives.
    I was expecting WAAAYYY more from the Seagate TB and SSD combo.
    Did I set something up wrong?  Have I been reading data wrong?
    2011 i7 MBP
    16GB ram
    Toshiba SSD
    Seagate GOflex Thunderbolt Adapter
    Thank you

    Hey,
    I have the Seagate GoFlex Thunderbolt with a C300 SSD connected to it. I am getting write speeds of 196.5 MB/s and read speeds of 323.4 MB/s on the SSD.
    You can use something called Blackmagic Disk Speed Test to monitor how fast your drive is actually reading and writing.
    The speed limitation you see may be from the actual SSD, or from copying from the HDD to the SSD. This tool will test direct reading and writing on the SSD, and will give you a better/truer speed.
    Hope that helps
    Ed
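Worth noting: a copy from the internal HDD is bounded by the *source* drive's read speed, so ~150MB/s during that transfer doesn't necessarily indict the SSD. Besides Blackmagic (a GUI tool), a rough command-line sequential check can be done with `dd`; the `/tmp` path below is only an example, and on the real setup `of=` would point at a file on the Thunderbolt volume (GNU `dd` syntax assumed):

```shell
# Rough sequential write then read test, 64 MiB. conv=fdatasync forces
# the data to disk so the write timing isn't just the page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
dd if=/tmp/ddtest of=/dev/null bs=1M
```

`dd` prints the elapsed time and throughput on stderr after each run; comparing the direct figure against the copy-time figure separates the SSD's capability from the HDD bottleneck.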

  • Changes to Read/Write permissions not working

    I have had issues authorizing and licensing third-party music production plugin applications. I'm running Yosemite 10.10.2 on an iMac. When authorizing a specific program, an error message appears reading "Registration File Error (permission denied, error #8313)". I was told to change the disk permissions to Read and Write for everyone. I unlocked the window and changed the permissions accordingly. I ran the authorization again and the same error showed up, despite the change having been made. Will the Disk Utility "Repair Permissions" tab correct this problem? I have retried this operation more than several times and no dice. It's a very frustrating problem and is delaying several production projects.

    Hi Linc - I went to the Get Info page for the Mac hard disk and changed the "everyone" entry to Read & Write in the Sharing and Permissions section. In each case where I changed the access, I unlocked and then locked the window once it was changed. I even restarted the computer. Each time I did this and then tried to authorize the software, the same error message appeared. It just seems like the Read & Write setting isn't taking effect. I've been working with the software company and they have replicated the problem, but they were able to change their preferences so that the authorization went through. Is there a way around this problem? I've been going back and forth with the software company trying to solve this for several weeks now.

  • Wiki "Read Only" users can read & write

    I am looking to use the built in Wiki software on our Snow Leopard mac mini server. The wiki is meant to be private with one user able to read and write and another account that can read only. After selecting "read only" the user can still read & write. The read only account is not an administrator so I am not sure why they are not read only, could someone point me in the right direction?
    Thanks for your help!

    Hi,
    According to your post, my understanding is that users with read-only permissions can't see the custom master page and managed metadata navigation in SharePoint 2013.
    I suggest customizing the client object model code to retrieve the info with elevated privileges, and then re-deploying the master page.
    Here is a similar thread for you to look at:
    http://social.technet.microsoft.com/Forums/en-US/bd7aa817-114a-46ae-a08a-71d903227ce0/sharepoint-2010-custom-master-page-and-read-only-permission-issue?forum=sharepointadminprevious
    If that doesn't work, please check more information about the custom master page, such as whether it contains content like web parts with limited permissions, or whether you have made any changes to the read-permission group.
    In addition, please check the SharePoint ULS log for more information; the ULS log file is in the location: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\LOGS
    You can check the ULS log using the methods here:
    http://blog.credera.com/technology-insights/microsoft-solutions/troubleshooting-sharepoint-errors/
    Best Regards,
    Linda Li

  • Windows Server 2012 - Hyper-V - iSCSI SAN - All Hyper-V Guests stops responding and extensive disk read/write

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2 node cluster connected to a iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM, 2 teamed NICs dedicated to virtual machines and management, 2 teamed NICs dedicated to iSCSI storage. This is the primary host, and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM, 1 NIC dedicated to virtual machines and management, 2 teamed NICs dedicated to iSCSI storage. This is the secondary host and is intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts, and the scheduled ShadowCopy (Previous Versions of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 Intel SSDSA2CW160G3 160 GB drives in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation
    Normally this setup works just fine, and we see no real difference in startup, file copy, and processing speed in LoB applications compared to a single host with two 10000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during a file repair.
    Our Problem
    Our problem is that, for some reason, all of the VMs stop responding or respond very slowly; you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    Symptoms (i.e. these things happen, or do not happen, at the same time)
    If we look at Resource Monitor on the host, we often see an extensive read from a VHDX of one of the VMs (40-60 MByte/s) and a combined write to many files in \HarddiskVolume5\System Volume Information\{<someguid and no file extension>}.
    See image below.
    The combined network speed to the iSCSI SAN is about 500-600 Mbit/s.
    When this happens it is usually during and after a VSS ShadowCopy backup, but it has also happened during hours when no backup should be running (i.e. during daytime, when the backup finished hours ago according to the log files). However, there are not that many writes to the backup file created on the external hard drive, and this does not seem to happen during all backups (we have checked manually a few times, but it is hard to say, since this error does not seem to leave any traces in Event Viewer).
    We cannot find any indication that the VMs themselves detect any problem, and we see no increase in errors (for example, storage-related errors) in the event log inside the VMs.
    The QNAP uses about 50% processing Power on all cores.
    We see no dropped packets on the switch.
    (I have split the image to save horizontal space).
    Unable to recreate the problem / find definitive trigger
    We have not succeeded in recreating the problem manually by, for instance, running chkdsk or defrag in VM and Hosts, copy and remove large files to VMs, running CPU and Disk intensive operations inside a VM (for instance scan and repair a database file).
    Questions
    Why do all the VMs stop responding, and why are there such intensive reads/writes to the iSCSI SAN?
    Could it be anything in our setup that cannot handle all the read/write requests? For instance the iSCSI SAN, the hosts, etc.?
    What can we do about this? Should we use Multipath I/O instead of NIC teaming to the SAN, limit bandwidth to the SAN, etc.?

    Hi,
    > All VMs are using dynamic disks (as recommended by Microsoft).
    If this is a testing environment, it's okay, but if this is a production environment, it's not recommended. Fixed VHDs are recommended for production instead of dynamically expanding or differencing VHDs.
    Hyper-V: Dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx
    > This is the primary host and normaly all VMs run on this host.
    According to your posting, we know that you have Cluster Shared Volumes in the Hyper-V cluster, so why not distribute your VMs across the two Hyper-V hosts?
    Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/en-us/library/jj612868.aspx
    > 2 teamed NIC dedicated to iSCSI storage.
    Use Microsoft Multipath I/O (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. (At least, it was not supported up to Windows Server 2008 R2. Although Windows Server 2012 has a built-in network teaming feature, I haven't found an article which declares that Windows Server 2012 network teaming supports iSCSI connections.)
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
    > I have seen using MPIO suggests using different subnets, is this a requirement for using MPIO
    > or is this just a way to make sure that you do not run out of IP adressess?
    What I found is: if possible, isolate the iSCSI and data networks that reside on the same switch infrastructure through the use of VLANs and separate subnets. Redundant network paths from the server to the storage system via MPIO will maximize availability and performance. Of course you can put these two NICs in separate subnets, but I don't think it is necessary.
    > Why should it be better to not have dedicated wireing for iSCSI and Management?
    It is recommended that the iSCSI SAN network be separated (logically or physically) from the data network workloads. This ‘best practice’ network configuration optimizes performance and reliability.
    Check that and modify cluster configuration, monitor it and give us feedback for further troubleshooting.
    For more information please refer to following MS articles:
    Volume Shadow Copy Service
    http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
    Support for Multipath I/O (MPIO)
    http://technet.microsoft.com/en-us/library/cc770294.aspx
    Deployments and Tests in an iSCSI SAN
    http://technet.microsoft.com/en-US/library/bb649502(v=SQL.90).aspx
    Hope this helps!
    TechNet Subscriber Support
    Lawrence
    TechNet Community Support
