Shared ExFAT drive over SMB failing

I am running a Server 2012 R2 VM (hosted on OS X 10.7.5) and set up a drive formatted as ExFAT so that both the Boot Camp Windows install and the Mac OS could work with the data without issues. There are some System Center tools I want to load onto the server VM, but I can't access that specific share across the network. Grrrr!
I can connect to the MiniServer that has the drive using the macadmin credentials, but loading the shared folders is an absolute pain.
I can move the data to another drive, but it's still a problem that this doesn't work the way it normally should.
My Mavericks Mac also can't connect to it over SMB, but the twist is that I can access the drive without any problem over AFP with the same credentials.
The permissions look fine; however, the creation date is a bit off...

Yes I did. I will try it again in the morning (my friend is asleep now in the same room as his computer).
Does SharePoints need to be running for me to connect to his external drive, or do I simply need to set it up as a share and use it as I was hoping to in the first place?
Max

Similar Messages

  • Sharing firewire drive over smb

I have an external LaCie FW400 drive I'm using as a media drive. I need to be able to access this drive from other computers (both Windows and Linux) via SMB.
    The LaCie is connected to my Powermac and is mounted on my desktop. I can access my home folder on the Powermac via SMB from either Linux or Windows machines, but the LaCie doesn't show up in my /Users/username/Desktop folder over an SMB connection. I am unable to figure out how to access files stored on the LaCie.
    Anyone done this before and care to share how they did it?
    Powermac G5   Mac OS X (10.4.5)  

    Answered my own question. Found Share Points application and prefs pane.
    That did the trick!
    Powermac G5   Mac OS X (10.4.5)  

  • Sharing external drive over network - problem

    Hey guys,
    I searched the forum, but nothing posted to other threads helped.
I want to share a LaCie external USB drive over my network, which consists of two Macs: a Mac mini and a MacBook, both running OS X 10.4.9.
    The external drive is formatted as FAT32. I enabled sharing in System Preferences, but I don't see the drive when trying to connect. I installed SharePoints and shared it from there; still nothing. I tried connecting using the "afp://..:" address, but I get an error message saying I cannot connect.
    Got some ideas?

    Thanks for your reply
    Did you use the little "Show File System Attributes" yet?
    Yep, it's r&w
Did you set Windows (SMB) Sharing to Shared?
    Yes
    Did you add Users yet?
    Yes, my username which i use to connect is added.

  • Problem sharing FAT32 drive over network

I have a Mac mini running Mac OS X Server 10.4.11. It has three external drives connected to it (two FireWire and one USB).
    The first FireWire drive (HFS+) is visible throughout the network and appears in the sharing section of Workgroup Manager. The other FireWire drive (FAT32) and the USB drive (FAT32) are seen on the desktop of the server, and there is no problem reading and writing to them, but they don't show up when I want to add them as a sharepoint. Is it not possible to share a FAT32 drive? Permissions on these drives are correct, and read/write access is enabled.
    I have a MBP and a Powerbook G4 on the network which need to access files on the FAT32 formatted drive and backup to them as well. Any help in this matter is appreciated.

    Are the sharepoints you're seeking here using AFS or SMB services, or a mixture?
    Any Windows boxes in the mixture? If not, have you tried reformatting one of the drives as HFS+ and exporting that, if you don't need the FAT volume structure? (FAT32 is really "FAT28" internally, but I digress.) If you can run HFS+ and export as AFS and it works for your needs, that's a path out of here.

  • Shared nothing live migration over SMB. Poor performance

    Hi,
I'm experiencing really poor performance when migrating VMs between newly installed Server 2012 R2 Hyper-V hosts.
    Hardware:
    Dell M620 blades
256 GB RAM
    2 × 8-core Intel E5-2680 CPUs
    Samsung 840 Pro 512 GB SSDs running in RAID 1
    6 × Intel X520 10 Gb NICs connected to Force10 MXL enclosure switches
    The NICs are using the latest drivers from Intel (18.7) and firmware version 14.5.9.
    The OS installation is pretty clean: Windows Server 2012 R2 + Dell OMSA + Dell HIT 4.6.
    I have removed the NIC teams and vmSwitch/vNICs to simplify the troubleshooting. Now there is one NIC configured with one IP. RSS is enabled, no VMQ.
The graphs are from 4 tests.
    Tests 1 and 2 are NTttcp tests to establish that the network is working as expected.
    Test 3 is a shared nothing live migration of a live VM over SMB.
    Test 4 is a storage migration of the same VM when shut down; the VM is transferred using BITS over HTTP.
    It's obvious that the network and NICs can push a lot of data. Test 2 had a throughput of 1130 MB/s (9 Gb/s) using 4 threads. The disks can handle a lot more than 74 MB/s, as proven by test 4.
    While the graph above doesn't show the CPU load, I have verified that no CPU core was even close to running at 100% during tests 3 and 4.
    Any ideas?
Test | Config | Vmswitch | RSS | VMQ | Live Migration Config | Throughput (MB/s)
    NTtcp | NTttcp.exe -r -m 1,*,172.17.20.45 -t 30 | No | Yes | No | N/A | 500
    NTtcp | NTttcp.exe -r -m 4,*,172.17.20.45 -t 30 | No | Yes | No | N/A | 1130
    Shared nothing live migration | Online VM, 8 GB disk, 2 GB RAM, migrated from host 1 -> host 2 | No | Yes | No | Kerberos, use SMB, any available net | 74
    Storage migration | Offline VM, 8 GB disk, migrated from host 1 -> host 2 | No | Yes | No | Unencrypted BITS transfer | 350
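    The arithmetic in the post checks out; here is a quick sketch (assuming decimal units, as NIC vendors use) converting the quoted MB/s figures to Gb/s:

```python
def mbps_to_gbps(mb_per_s: float) -> float:
    """Convert megabytes/second to gigabits/second (decimal units)."""
    return mb_per_s * 8 / 1000

# The two headline numbers from the tests above:
print(round(mbps_to_gbps(1130), 2))  # 4-thread NTttcp run
print(round(mbps_to_gbps(74), 2))    # live migration over SMB
```

    So the live migration is using well under a tenth of the measured link capacity, which is exactly the gap being asked about.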

    Hi Per Kjellkvist,
Please try changing the "advanced features" settings of "Live Migrations" in Hyper-V Settings: select "Compression" in the "Performance options" area.
    Then rerun tests 3 and 4.
    Best Regards
    Elton Ji

  • Macbook pro hard drive apple claimed 'failed', has changed it's tune. BUT....

    This is a very looooong story, but here's the short version of it (I swear). I can add more details as needed. And there is definitely a question at the end.
    And I should preface this by saying I am a novice computer user, and really don't understand how hard drives, or permissions, or any of that stuff works.
    1. Two weeks ago my mid 2011 17" top-of-the-line macbook pro wouldn't wake from sleep upon opening from clamshell mode (familiar story to many)
    2. Did a hard shut down using power button
    3. Powered back up and got spinny rainbow no matter what I clicked. Took 10 minutes for any single action to respond, even opening text edit.
    4. Could not force quit, could not restart, system eventually froze
    5. Did another hard shut down
    6. System would then NOT boot normally
    7. Booting in safe mode showed invalid node errors
8. Was able to get the OS to work and was preparing to do a backup when, in a "smack my head later" move, I closed the lid without thinking as I went to the other room to get something to drink (had I known what was to come, that drink would have been a double gin & tonic). The MBP would not wake up again. Again: hard shut down.
    9. This time would not boot at all- even in safe mode, even running repair disk permissions on screen while booting. Would get to the end of booting and just die- not start, not finish, nothing.
    10. Brought to apple genius bar. They confirmed the hard drive had failed. They replaced hard drive for free (with the exact same model and make of drive!! TOSHIBA MK7559GSFX. Thanks for instilling more confidence apple!!)
    11. I brought my failed drive home and swapped it out with the drive in my 2008 15" macbook pro, which is where it has been living since then.
    12. Fired it up in target disk mode with my iMac, and the drive never appeared as an external drive/device. Tried this multiple times after multiple restarts
    13. Booted the 15" mbp to Lion's 'over the air' recovery, and was able to get disk utility open that way
    14. Will NOT boot from the drive (get the apple logo, the spinny circle and the bar that never progresses, no matter how many hours/days I leave it for)
    In the past week I have:
used DiskWarrior (says the drive is damaged and can't be repaired)
    used Data Rescue 3 (which I was getting error messages with too, but which now tells me it can't do anything because the drive is the bootable drive and not an external one, presumably because it's in the machine)
    In Disk Utility I have verified permissions, repaired permissions, verified the disk and repaired the disk. They all report errors, including Repair Disk, which tells me the drive is damaged and can't be repaired. (I get it, I have a bad hardware problem.)
    booted from a Lion-installed external travel drive, to try to have the drive be recognized
    attempted to run Drive Genius 3, with no success
    attempted to create a new image using both Disk Utility and one of my software programs (I don't remember which), with no success
    attempted to restore a disk image using Disk Utility, with no success
    am now running Stellar Phoenix 'recover lost volumes' with what appears to be success: it has found 5 lost volumes, but appears to be only 1/15th of the way done after about 24 hours. I'll let it run until it's done, however long that may be. I have no idea what happens when it ends.
    Now, here is where the story gets interesting.
    For 5 days the drive would not only not boot, but it wouldn't mount either. It just didn't exist. There was no information on the drive in disk utility, and I couldn't access it no matter what mode I chose.
    BUT THEN, sometime shortly after hitting 'repair disk' about 12 times (getting failure messages each time), it mounted. Wahoo!
Not only did it mount, but it has been mounting reliably each time after many restarts, using an external drive loaded with Lion as the boot drive.
    AND, now the drive's S.M.A.R.T. status says it's verified- both on the main Toshiba drive and the volume that contains my user account. It also shows the correct GB of data on the volume: 316.73 GB.
I can also run Verify Permissions, which completed successfully.
    AND, I was able to click on the volume name (Louise. My iMac's name is Thelma. They work great together, LOL), and see the contents on the drive!
    First, when I opened the applications folder, it showed it as empty. But then I left it alone for about 5-10 minutes, and suddenly all of my apps appeared in the folder. Awesome!! There were a few in there I didn't have on my iMac, and didn't purchase through the app store, so I backed those up right away to another external hard drive I had plugged in.
    This was surprising and encouraging until I got to my user folder- 'jamie'.
    I opened it and it too shows as being empty. Zero kb. Left it alone for a long time, but sadly no files appeared. This was a few days ago- still no files appearing. 
    To make this clear- my main user folder- called 'jamie'-  says 'zero kb', 0 items. This is the folder that houses all of my documents and image files.
BUT, Disk Utility, Stellar Phoenix, and Drive Genius 3 are all reading 'Louise' and telling me the data is there: 316.73 GB worth. I just can't see it.
    So I realized this may be a problem with permissions because I am using a new- different- user account. I was also thinking that if I could show hidden files, I might be able to see something in the folder that might help. So I downloaded TinkerTool and selected 'show hidden and system files'. No dice, still empty. Then I right-clicked on my user folder to get info and under 'sharing and permissions', the users, and the settings at the bottom are greyed out and I can't change anything.
    In system preferences, the user for the failed drive does not show at all, and when I try to add it by using the same name and password, it tells me the user already exists.
    Now I'm stumped. I feel like I've made progress and am getting close to getting the contents, but not sure what to do next, or if I'm fighting a losing battle.
    Here is what I am doing now:
    I am going to let Stellar Phoenix finish finding lost volumes, because it's the only app that seems to be doing anything.
    I purchased an external drive enclosure from amazon (this one: http://www.amazon.com/gp/product/B00655YT9C/ref=oh_details_o00_s00_i00), which I will be receiving on Monday.
    I'll pull the drive out of the 15" mbp and put it in the new external enclosure and will then try to run Data Recovery 3 on the drive
    If that's unsuccessful, I'll try DiskWarrior again, and/or maybe one of the other software programs
    And then if those don't work I guess I'm done.
    Based on the information I provided, do I indeed have a failed hard drive, and will NOT be able to recover the contents of my drive, and am SOL and am just wasting my own time?
    Or is there a way I can get the files in my user account to show?
    I wanted to provide all of the backstory on this first, instead of just saying "user folder empty" in case the backstory helps someone know what's going on.
    Anyone want a crack at this? I can upload screen grabs if that helps. Insight? Suggestions? Ideas? Thank you in advance!!
    ================================================================================ ================
BTW, about backing up: I am diligent about backing up and always have redundant backups of everything (I'm a professional photographer). I DID have my mbp fully backed up via Time Machine to an external drive. But my iMac ALSO failed recently and needed repair work, so I had to wipe that drive clean and do a full backup of my iMac, which filled the last drive I had left. I had also recently purchased a Canon 5D Mark III, which produces insanely huge RAW files that helped completely fill the single drive I had used to back up both the iMac and the mbp. Couple that with writing a book under deadline for a major publisher and not having time to research or purchase another drive, plus a failing macbook pro drive a short time later, and it was the perfect storm: a total cluster.
    (Also, tried Data Rescue 3 on the external drive that DID have the mbp backup on it and was unsuccessful).
    The good news is, since the hard drive was less than a year old when it failed (again, thanks Apple!), and it was all mirrored content to my iMac to begin with, (most of which I would pass over to the iMac whenever I had the chance), only about 15% of the data on this failed drive is data I don't have on my iMac. Unfortunately it's all documents- word docs, pages docs, text docs, excel docs, that can't be recreated, including, what I think is the funniest blog post I've ever written. Sigh...... No client work or images thank god, But still. I really want it back.

You can try using an external enclosure to back up your files; I hope you will get the files back. Or, with your hard disk in the external enclosure, boot into Linux and try to mount the disk there.
    There are a number of tutorials on mounting Apple partitions in Linux.

  • Can you share an external hard drive over a network when your Apple Airport Extreme is in bridge mode?

Hello, is it possible to share an external hard drive over a network when I have my Airport Extreme in bridge mode?  I can't use my AE as my main router at the moment but still want to be able to use the hard drive on the network, and the router I am using isn't capable of adding an external hard drive.  I use Windows 7 and the other router is a Netgear.  I have searched the communities and have not come across an answer to this question.  I have tried several configurations within Windows to try and see the hard drive, but none have worked.  I can see the hard drive when I run AirPort Utility, but it cannot be seen on the network.  Thanks to anyone who can help!

    I think there is some confusion in this thread..
    If you are sharing on a local LAN port forwarding is not required.
    is it possible to share an external hard drive over a network when I have my Airport Extreme in bridge mode?
Answer is yes.. no port forwarding, mapping, or whatever term is used, is needed. Port mapping is required when you cross a NAT router; as long as all the devices are inside a single LAN, no port mapping is needed.
    I assign to my Airport Extreme, do I do so with the settings of:
    Service: SMB
    Type: TCP
    Server IP: xx.x.x.x
    Port Start: 445
    Port End: 445
This would not work even from the WAN. SMB is blocked by all responsible ISPs; there are simply too many unprotected Windows machines out there. If they allowed SMB, the world would be flooded with hijacked bots and stolen data like bank accounts. SMB is not a secure protocol.
    But this is not necessary on a LAN.
The problem can be Mavericks, which does a terrible job presenting network drives. The usual recommendations are to use AFP or force the connection to CIFS (i.e. SMB1, not SMB2).
    If you use the AirPort disk, then use AFP.
    In Finder: Go, Connect to Server.
    afp://AEname or afp://AEIPaddress (replace with the network name of the AE or its actual IP address).
    When asked for a password, type "public" if you did not change it, or use whatever password you set.
    Store the password in the keychain.

  • Users cannot connect over SMB 10.10.1 server.app 4.0 and 4.0.3

    Hello,
    I have an issue where users cannot connect to a server for files sharing over SMB.
    Info:
    All users on 10.10.1
    2 Servers on 10.10.1
    Server.app 4.0.3 but issue was also present using 4.0
SMB connection works when connecting to the OD Master.
    SMB does not work when connecting to the OD Replica server, but AFP works fine when connecting to the OD Replica server.
    I have destroyed and re-added the OD replica, but that did not seem to help.
    This is what I see in the logs each time I try to connect (logs have been cleaned to remove client details):
    Jan  9 14:37:12 server.pretendco.com digest-service[9961]: label: default
    Jan  9 14:37:12 server.pretendco.com digest-service[9961]: dbname: od:/Local/Default
    Jan  9 14:37:12 server.pretendco.com digest-service[9961]: mkey_file: /var/db/krb5kdc/m-key
    Jan  9 14:37:12 server.pretendco.com digest-service[9961]: acl_file: /var/db/krb5kdc/kadmind.acl
    Jan  9 14:37:12 server.pretendco.com digest-service[9961]: digest-request: uid=0
    Jan  9 14:37:12 server.pretendco.com digest-service[9961]: digest-request: netr probe 0
    Jan  9 14:37:12 server.pretendco.com digest-service[9961]: digest-request: init request
    Jan  9 14:37:12 server.pretendco.com digest-service[9961]: digest-request: init return domain: SERVER2 server: SERVER2 indomain was: <NULL>
    Jan  9 14:37:13 server.pretendco.com digest-service[9961]: digest-request: uid=0
    Jan  9 14:37:13 server.pretendco.com digest-service[9961]: digest-request: init request
    Jan  9 14:37:13 server.pretendco.com digest-service[9961]: digest-request: init return domain: SERVER2 server: SERVER2 indomain was: <NULL>
    Jan  9 14:37:13 server.pretendco.com kdc[4802]: Got a canonicalize request for a LKDC realm from local-ipc
    Jan  9 14:37:13 server.pretendco.com kdc[4802]: Asked for LKDC, but there is none
    Jan  9 14:37:13 server.pretendco.com sandboxd[395] ([4802]): kdc(4802) deny file-read-data /private/etc/krb5.conf
    Jan  9 14:37:22 server.pretendco.com kdc[4802]: Got a canonicalize request for a LKDC realm from local-ipc
    Jan  9 14:37:22 server.pretendco.com kdc[4802]: Asked for LKDC, but there is none
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: uid=0
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: init request
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: init return domain: SERVER2 server: SERVER2 indomain was: <NULL>
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: uid=0
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: init request
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: init return domain: SERVER2 server: SERVER2 indomain was: <NULL>
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: uid=0
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: od failed with 2 proto=ntlmv2
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: user=SERVER2\\username
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: kdc failed with 36150275 proto=unknown
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: guest failed with -1561745590 proto=ntlmv2
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: uid=0
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: init request
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: init return domain: SERVER2 server: SERVER2 indomain was: <NULL>
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: uid=0
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: init request
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: init return domain: SERVER2 server: SERVER2 indomain was: <NULL>
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: uid=0
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: od failed with 2 proto=ntlmv2
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: user=SERVER2\\codywood
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: kdc failed with 36150275 proto=unknown
    Jan  9 14:37:23 server.pretendco.com digest-service[9961]: digest-request: guest failed with -1561745590 proto=ntlmv2
I suspect the problem has to do with Kerberos, in relation to this server being an OD Replica.
    I would really appreciate anyone's insight into this.
    Thanks
    Morgs

    I have the same problem although I upgraded from Lion Server to Mountain Lion Server. The error appears to go hand in hand with this error.
    userInit: CFPreferences: user home directory for user kCFPreferencesCurrentUser at /Network/Servers/fullyqualifieddomainname/Users/user is unavailable. User domains will be volatile.
    I've read a number of things to try. A lot of people point to DNS being a problem, but I'm confident this is correct in my environment.
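    As a side note, a log dump like the one above can be tallied with a few lines of Python. This is only a hedged sketch: the regex is an assumption about the digest-service line format, applied to failure lines copied from the post:

```python
import re
from collections import Counter

log = """\
digest-service[9961]: digest-request: od failed with 2 proto=ntlmv2
digest-service[9961]: digest-request: kdc failed with 36150275 proto=unknown
digest-service[9961]: digest-request: guest failed with -1561745590 proto=ntlmv2
digest-service[9961]: digest-request: od failed with 2 proto=ntlmv2
digest-service[9961]: digest-request: kdc failed with 36150275 proto=unknown
digest-service[9961]: digest-request: guest failed with -1561745590 proto=ntlmv2
"""

# Count each (auth stage, protocol) failure pair so repeated attempts stand out.
pattern = re.compile(r"digest-request: (\w+) failed with (-?\d+) proto=(\w+)")
failures = Counter((stage, proto) for stage, _code, proto in pattern.findall(log))
for (stage, proto), n in sorted(failures.items()):
    print(f"{stage} ({proto}): {n}")
```

    Here every attempt fails twice at each of the od, kdc and guest stages, which is consistent with the Kerberos suspicion in the post.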

  • ACL control on OS X for ExFat drive

Is it possible to use ACLs on OS X 10.9 (Server) when using an ExFAT-formatted drive?  I am sharing the drive via the File Sharing service, and I want to make it available to only certain users or groups. Currently I get an error notice saying that POSIX permissions will not apply to an ExFAT drive, but I cannot tell where I am supposed to enable, add, or otherwise control ACL features in the Server app.

Thank you, I found the ACL interface now... That was very helpful. The kicker (in my case) is that the ExFAT drives are not showing up in the storage options tab. (Step #2 in your list.) So it seems that from the interface I cannot apply the ACLs. However, it should be noted that the File Sharing service does see the drives (and their icons are also on the desktop and in the Finder sidebar). So the drives are mounting but are not visible in the storage options tab. Any suggestions?
    FYI: the drives are Western Digital Passports with USB 3 interfaces. At the time of the original posting I was connecting them through a USB 3.0 hub by Plugable (USB3-HUB7A). I tried removing the hub and directly connecting the drives to the Mac mini, but this did not result in any change in visibility of the drives on the storage options tab.
    To answer the question about why ExFAT over HFS+: HFS+ is my preferred option, but these drives are only sometimes connected to the network, and sometimes they are needed by a Windows machine for local data processing. File sizes are often over 2 GB, as the data on the drives is used in a workflow for audio processing and the files are 96 kHz / 24-bit .wav. So cross-platform support plus large file sizes dictated the drive format options. The server is deployed as part of a home network, so buying dedicated drives just for connecting to the server is not in scope at the moment, but may be in the future; at that time the data would be transferred to HFS+-formatted drives.
    Would it be better to format the drives as HFS+ and use a utility like MacDrive on our Windows machines?
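    The file-size constraint mentioned above is easy to check: an uncompressed 96 kHz / 24-bit WAV stream has a fixed data rate, so (assuming stereo, which the post doesn't state) one can estimate when a single recording would cross FAT32's 4 GiB single-file limit:

```python
def wav_bytes_per_second(sample_rate: int, bit_depth: int, channels: int) -> int:
    """Raw PCM data rate of an uncompressed WAV stream, in bytes/second."""
    return sample_rate * (bit_depth // 8) * channels

rate = wav_bytes_per_second(96_000, 24, 2)
print(rate)  # bytes per second

# FAT32 caps a single file at 2**32 - 1 bytes; how long until a take hits it?
minutes = (2**32 - 1) / rate / 60
print(round(minutes))
```

    At that rate a single take passes 2 GB in about an hour, so the "files often over 2 GB" constraint rules out FAT32 and leaves exFAT as the cross-platform option, as described.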

  • Creating a folder alias on a shared network drive

    Hi,
    I have a folder that is housed on my computer that I would like an alias of on my company's shared network drive. I will have access to the shared network drive from my home computer either through Citrix or a VPN, so I would like the alias on that drive so that it will reflect any changes I make to the files within either from my work or from my home computer. My question therefore is, how do I create an alias of a folder on a shared network drive? I tried creating an alias on my local drive and then dragging it to the shared network drive, but that just copied the folder and removed the alias.
    Any help would be greatly appreciated!

    Nevermind. I must have copied over the original folder the first time instead of the alias, as a simple drag and drop of an alias folder just worked fine for me.

  • Running a VM from an exFAT drive

I use a Windows 7 x64 box at work, on which I have my Linux VM running in VMware Player 3. How can I use the same VM on a Mac? Right now I'm running it from an NTFS external HDD, which is perfect for Windows. However, I need to use it on my Mac (Snow Leopard) through VMware Fusion. The VM is almost 60 GB, ruling out FAT32 by a long mile. I've heard a lot about exFAT. Any ideas if VMware Fusion will run a VM from an exFAT drive?

Fusion by itself doesn't deal with exFAT.  It's up to the host OS (i.e. OS X) to mount the exFAT partition; Fusion will then use that drive just like any other drive that your VM files are stored on.
    Personally, I use Tuxera NTFS to mount NTFS partitions.  You can get the free NTFS-3G to do the same.  I was getting suspicious errors when running large monolithic VMDK files (i.e. a 60 GB VMDK), which caused me to switch from NTFS-3G to Tuxera.  But Tuxera is not free, so you may want to try NTFS-3G first.
    However, I will explain an alternate method for you: 2 GB split VMDKs.
    VMware Player, Workstation and Fusion all support VMDK files split into multiple 2 GB chunks.  Normally you have to specify this when you create the virtual disk, but it can be converted later.  (Try creating a new test VM in Fusion or Player and experimenting with the advanced settings.)  Having the VMDK in 2 GB chunks will allow you to use a FAT32-partitioned disk, which can then be used with Windows or OS X without any additional software.  Additionally, copying 2 GB chunks is more reliable than copying a single 60 GB file.  And if you ever need to recover data from a corrupt VMDK, it's easier to work with 2 GB files.  (Many tools have trouble opening very large files.)
    The only time most people will "need" a monolithic (i.e. not split) VMDK is if the virtual machine must be compatible with a VMware ESX based host.  However, you can use tools such as VMware Converter to copy virtual machines to and from ESX hosts and, in the same process, convert the VMDKs to split files.  The source and destination virtual machines can both be Player, so you'd use Converter just to "convert" to 2 GB split VMDKs.  The biggest requirement is that you need enough disk space.  (Converter "copies" the data, so you end up with a clone of the original virtual machine; thus you need twice the space.)  After you convert and confirm the converted virtual machine works, you can delete the original, if desired.  (This copy method is a good thing because if something goes wrong during the conversion, you simply delete the bad copy and start over.  If it converted in place and something messed up, you could destroy your original.)
    Personally, I "need" to have my VMDKs monolithic because they're mostly running on ESX hosts.  I occasionally copy out some of the virtual machines for testing, so being able to run them without going through Converter is highly desirable in my situation.  I still use Converter in various instances when I create the virtual machine in Fusion first, since ESX doesn't support growable disks, which are useful on my space-limited Mac.  But if you never use ESX, then 2 GB split VMDKs on a FAT32 drive make the most sense for compatibility between a Player/Windows host and a Fusion/Mac host.
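    The split-disk arithmetic in the reply can be sketched quickly (the 2 GB chunk size is the split-VMDK convention mentioned above; the 60 GB figure is the VM from the question):

```python
import math

FAT32_MAX_FILE = 2**32 - 1  # FAT32's single-file ceiling, in bytes

def split_chunks(disk_gb: float, chunk_gb: float = 2.0) -> int:
    """Number of extents a split VMDK of disk_gb gigabytes needs."""
    return math.ceil(disk_gb / chunk_gb)

print(split_chunks(60))              # a 60 GB disk becomes 30 extents
print(2 * 2**30 <= FAT32_MAX_FILE)   # each 2 GiB extent fits on FAT32
print(60 * 2**30 <= FAT32_MAX_FILE)  # a monolithic 60 GB VMDK does not
```

    In other words, splitting is what makes the FAT32 workaround possible at all: each extent stays under the single-file ceiling while the monolithic disk never could.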

  • Hyper-V over SMB 3.0 poor performance on 1GB NIC's without RDMA

    This is a bit of a repost as the last time I tried to troubleshoot this my question got hijacked by people spamming alternative solutions (starwind) 
    For my own reasons I am currently evaluating Hyper-V over SMB with a view to designing our new production cluster based on this technology.  Given our budget and resources a SoFS makes perfect sense.
    The problem I have is that in all my testing, as soon as I host a VM's files on a SMB 3.0 server (SoFS or standalone) I am not getting the performance I should over the network.  
    My testing so far:
4 different decent-spec machines with 4-8 GB RAM and dual/quad-core CPUs.
    Test machines are mostly Server 2012 R2 with one Windows 8.1 hyper-v host thrown in for extra measure.
    Storage is a variety of HD and SSD and are easily capable of handling >100MB/s of traffic and 5k+ IOPS
    Have tested storage configurations as standalone, storage spaces (mirrored, spanned and with tiering)
    All storage is performing as expected in each configuration.
Multiple 1 Gb NICs from Broadcom, Intel and Atheros.  The Broadcoms are server-grade dual-port adapters.
    Switching has been a combination of HP E5400zl, HP 2810 and even direct connect with crossover cables.
    Have tried stand alone NIC's, teamed NIC's and even storage through hyper-v extensible switch.
    File copies between machines will easily max out 1GB in any direction.
    VM's hosted locally show internal benchmark performance in line with roughly 90% of underlying storage performance.
    Tested with dynamic and fixed vhdx's
    NIC's have been used in combinations of RSS and TCP offload enabled/disabled.
    Whenever I host VM files on a different server from where it is running, I observe the following:
Write speeds within the VM to any attached VHDs are severely affected and run at around 30-50% of 1 Gb.
    Read speeds are not as badly affected, but just about manage to hit 70% of 1 Gb.
    Random IOPS are not noticeably affected.
    Running multiple tests at the same time over the same 1 Gb links results in the same total throughput.
    The same results are observed no matter which machine hosts the VM or the vhdx files.
    Any host involved in a test will show a healthy amount of CPU time allocated to hardware interrupts. On a 6-core 3.8 GHz CPU this is around 5% of total; on the slowest machine (dual-core 2.4 GHz) it is roughly 30% of CPU load.
    Things I have yet to test:
    Gen 1 VM's
    VM's running anything other than server 2012 r2
    Running the tests on actual server hardware. (hard as most of ours are in production use)
Is there a default QoS or IOPS limit when SMB detects Hyper-V traffic? I just can't wrap my head around how all the tests hit an identical bottleneck as soon as the storage traffic goes over SMB.
    What else should I be looking for? There must be something obvious that I am overlooking!
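    As a rough sanity check on the numbers above: a 1GbE link carries at most 125MB/s of raw payload before protocol overhead, so the observed write fractions translate into absolute rates like this (a back-of-the-envelope sketch, illustrative figures only, not measurements):

    ```cpp
    #include <cassert>
    #include <cmath>

    // Raw payload ceiling of a 1Gb/s link: 1e9 bits/s / 8 bits per byte
    // = 125MB/s, before TCP/SMB overhead (real-world ceiling ~110-118MB/s).
    constexpr double kGigabitLineRateMBps = 1000.0 / 8.0;

    // Convert an observed fraction of line rate into MB/s.
    constexpr double fraction_to_mbps(double fraction) {
        return kGigabitLineRateMBps * fraction;
    }
    ```

    By this arithmetic, the 30-50% write figure corresponds to roughly 37-62MB/s, and the 70% read figure to about 87MB/s, well below what the same storage sustains when tested standalone.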

    By nature of a SOFS, reads are really good, but there is no write cache; SOFS only seems to perform well with disk mirroring, which improves write performance and redundancy but halves your disk capacity.
    A mirror (RAID 1 or RAID 10) actually reduces write IOPS relative to reads. On reads, every spindle takes part in I/O request processing (assuming the I/O is big enough to cover the stripe), so you multiply IOPS and MB/s by the number of spindles in the RAID group; every write, however, has to go to the duplicated locations. That's why reads are fast and writes are slow (roughly half the read performance). This is an absolutely basic thing, and SOFS layered on top can do nothing to change it.
    StarWind iSCSI SAN & NAS
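    The read/write asymmetry described above can be put into a toy model (purely illustrative; the clean "reads scale with spindles, writes cost two spindle operations" rule is a simplification, not a measurement):

    ```cpp
    #include <cassert>

    // Simplified mirror (RAID 1/10) IOPS model: reads can be served by
    // every spindle in parallel, while each logical write must land on
    // both copies, consuming two spindle operations per write. So write
    // IOPS come out at roughly half of read IOPS.
    struct MirrorIops {
        double reads;
        double writes;
    };

    MirrorIops mirror_iops(double per_spindle_iops, int spindles) {
        return { per_spindle_iops * spindles,
                 per_spindle_iops * spindles / 2.0 };
    }
    ```

    For example, four spindles at a nominal 100 IOPS each would give about 400 read IOPS but only about 200 write IOPS under this model.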
    Not wanting to put the cat amongst the pigeons, this isn't strictly true, RAID 1 and 10 give you the best IOP performance of any Raid group, this is why all the best performing SQL Cluster use RAID 10 for most of their storage requirements,
    | Feature                      | RAID 0 | RAID 1 | RAID 1E | RAID 5 | RAID 5EE |
    |------------------------------|--------|--------|---------|--------|----------|
    | Minimum # drives             | 2 | 2 | 3 | 3 | 4 |
    | Data protection              | No protection | Single-drive failure | Single-drive failure | Single-drive failure | Single-drive failure |
    | Read performance             | High | High | High | High | High |
    | Write performance            | High | Medium | Medium | Low | Low |
    | Read performance (degraded)  | N/A | Medium | High | Low | Low |
    | Write performance (degraded) | N/A | High | High | Low | Low |
    | Capacity utilization         | 100% | 50% | 50% | 67% - 94% | 50% - 88% |
    | Typical applications         | High-end workstations, data logging, real-time rendering, very transitory data | Operating system, transaction databases | Operating system, transaction databases | Data warehousing, web serving, archiving | Data warehousing, web serving, archiving |
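    The capacity-utilization ranges for the parity levels follow from simple arithmetic (a sketch under the usual assumptions: RAID 5 spends one drive's worth of space on parity, RAID 5EE one on parity plus one on a distributed hot spare; exact supported drive counts vary by controller):

    ```cpp
    #include <cassert>
    #include <cmath>

    // Usable fraction of raw capacity for an n-drive array.
    // RAID 5: one drive's worth of parity -> (n - 1) / n.
    double raid5_utilization(int drives)   { return (drives - 1.0) / drives; }
    // RAID 5EE: parity plus distributed hot spare -> (n - 2) / n.
    double raid5ee_utilization(int drives) { return (drives - 2.0) / drives; }
    ```

    Assuming array sizes from the minimum up to 16 drives, RAID 5 yields 67% (3 drives) to about 94% (16 drives), and RAID 5EE yields 50% (4 drives) to about 88% (16 drives), matching the ranges in the table.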

  • How do I get the second HD to be a shared network drive??

    PowerPC G5 / dual 2Ghz - used as our file server.
    running OS X Server - v.10.3.9
    Have one internal HD - 160GB partitioned into two drives (General Access & Archive) in the Public folder of this computer
    Appears as Network: Local / G5 Server. When you click the connect button you get a choice of which volume to mount/connect to (ie. General Access; Archive or both)
    Running out of space on the drive, so I purchased a second 160GB HD. Installed it, formatted it (with Disk Utility), named it, and copied over all the files from the Archive partition.
    The thought is to make this new 160GB drive Archive, and to remove the partition from the original 160GB HD so that it is just General Access.
    How do I get the second HD to be a shared network drive??
    Also, can I remove the partition from the first drive without having to clone it and reformat?

    Roam & Ali B,
    Thanks!!
    Next dumb question(s).
    We have a laCie FW external HD that I was using as a back-up drive (temporarily)
    I do have some files on it that I have to bring back onto the server to use. I get an admin user & password prompt every time I have to copy something back. It usually copies over as a locked file that can only be deleted from the Server terminal.
    1. Why isn't this working without the admin prompt? I used Disk Utility to format it before I could use it.
    2. How exactly do I "clone" the current content? It's not a simple 'Select All' and drag & drop, is it? Will it bring over the partition as well?
    3. Once I have everything on the FW HD, re-boot using it as the System, and re-format the original 160GB HD, will I run into #1?

  • Managing shared hard drive on linksys e3000

    I setup a shared hard drive on my Linksys E3000 and everything works fine except for this issue:
    I created two shares, say Folder1 and Folder2. However, any edits made to Folder1 from any laptop's file explorer (Windows & Mac) are reflected in Folder2. If I create a folder in Folder1, it is automatically created in Folder2 and vice versa, so it is like having just one folder. How can I edit these folders independently, without one folder updating the other automatically?
    Thanks

    I recommend that you phone your regional Linksys support office and ask for help and information regarding this. We find that phone contact has better immediate results than email.
    Let us know how it goes please.

  • Driver's SQLSetConnectAttr failed/SQL Server 2005/ODBC Native Client

    I have an ODBC application that I am writing in VS 2008 C++. I freely admit to not being an ODBC expert. I am having trouble with SetConnectAttr failures. I've got them down to one specific case that I will describe here. The environment is 64-bit Windows
    7 (client) and SQL Server 2005 running on Win Server 2003. I am using a 32-bit ODBC connection. For the most part the application is working: I am successfully using the connection to do multiple INSERTs into multiple tables. I get lots of hits when I
    search on the above error message but they are so "all over the map" (Oracle, packaged applications, specific situations) that I have not been able to find one that seems to apply. Connection Pooling is not turned on in the connection.
    I make the following call:
    SQLUINTEGER timeout = 10;  // seconds
    retCode = SQLSetConnectAttr(ConnectionHandle, SQL_ATTR_LOGIN_TIMEOUT, &timeout, 0);
    I receive a zero in retCode.
    However when I subsequently issue the SQLConnect() I receive
    [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed
    I was setting three different connection attributes and getting the above message twice, but for debugging purposes I have commented out two of the sets. The above call is the only attribute I am setting, so it is apparently the one that is failing. Any help would be appreciated.
    Charles

    In case anyone else runs into this, here is the real problem with the original code:
    retCode = SQLSetConnectAttr(ConnectionHandle, SQL_ATTR_LOGIN_TIMEOUT, &timeout, 0);
    The SQL_ATTR_LOGIN_TIMEOUT attribute takes an integer by value, not a pointer to an integer.  This is a confusing aspect of this API.  If you pass a pointer, ODBC is really "seeing" a large integer value, which causes the code to effectively behave as though there were an infinite timeout.
    Instead, just pass the value of the timeout:
    retCode = SQLSetConnectAttr(ConnectionHandle, SQL_ATTR_LOGIN_TIMEOUT, (SQLPOINTER) timeout, SQL_IS_UINTEGER);
    Then the timeout value should be respected.  Using SQLSetConnectOption(SQL_LOGIN_TIMEOUT) seems to be equivalent, provided you pass the timeout by value and not via a pointer.
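    Because the by-value convention trips people up, here is a self-contained sketch of what goes wrong (this only mimics how the driver manager reads an integer-valued attribute argument; the function names are illustrative stand-ins, not the real ODBC API):

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Mimics the driver manager's view of an integer-valued attribute:
    // the pointer argument itself IS the value.
    std::uintptr_t read_integer_attribute(void* value) {
        return reinterpret_cast<std::uintptr_t>(value);
    }

    // Correct usage: cast the timeout value into the pointer slot,
    // as with (SQLPOINTER) timeout in the fixed call above.
    std::uintptr_t set_timeout_by_value(unsigned timeout) {
        return read_integer_attribute(
            reinterpret_cast<void*>(static_cast<std::uintptr_t>(timeout)));
    }

    // The original bug: passing &timeout makes the driver see the
    // pointer's numeric value - an enormous, effectively infinite timeout.
    unsigned g_timeout = 10;
    std::uintptr_t set_timeout_by_pointer() {
        return read_integer_attribute(&g_timeout);
    }
    ```

    The by-value call yields the intended 10 seconds; the by-pointer call yields whatever address the variable happens to live at, which is why the login timeout appeared to be ignored.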
