Backup NFS-mounted disks

My understanding (and my experience) is that Time Machine ignores NFS-mounted disks. Can other programs (SuperDuper, Carbon Copy Cloner, etc.) back them up?
I have a bunch of Linux systems in my lab that I can mount on a Mac. It would be nice to have a program that backs them all up every night.

Carbon Copy Cloner allows backups of remote drives; SuperDuper does not. You might be better off asking in the CCC support forums whether it works with NFS and non-HFS+ file systems; I would guess not. As for case sensitivity, there is HFSX, otherwise known as Mac OS Extended (Case-sensitive), and you can format a drive that way. All OS X backup programs will work with such drives, but there may be problems if you ever try to back up a non-case-sensitive file system to a case-sensitive formatted drive.
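For example, formatting an external drive as case-sensitive HFS+ from the command line might look like this (the disk identifier and volume name are placeholders; check diskutil list first):

$ diskutil listFilesystems
$ diskutil eraseDisk "Case-sensitive Journaled HFS+" LinuxBackups disk2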
See this link for additional OS X backup tools:
http://discussions.apple.com/thread.jspa?messageID=7495315#7495315
If none of them will back up remote non-HFS drives (as I suspect is the case), there is always rsync.
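If you end up scripting it yourself, a minimal nightly pull with rsync could look something like the sketch below (hostnames and paths are placeholders; this pulls over SSH rather than reading the NFS mounts, and you would schedule it with cron or launchd):

$ for h in linuxbox1 linuxbox2 linuxbox3; do
>   rsync -aH --delete --numeric-ids "$h:/home/" "/Volumes/Backups/$h/home/"
> done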

Similar Messages

  • Can Oracle 9.2 RMAN back up to an NFS-mounted disk?

    Running into disk space issues.
    We're using Oracle 9.2; can we run RMAN backups to an NFS-mounted disk?

    To RMAN there is no difference between a locally mounted disk and an NFS-mounted disk.
    It's just a mount point with some directory structure underneath.
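    As a rough sketch (the mount point, mount options and backup format are illustrative; Oracle has its own recommendations for NFS mount options):

    $ mount -o rw,hard,rsize=32768,wsize=32768 nfsserver:/backups /u02/rman_backups
    $ rman target / <<'EOF'
    RUN {
      ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '/u02/rman_backups/%U';
      BACKUP DATABASE PLUS ARCHIVELOG;
    }
    EOF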

  • Time Machine Backup of mounted disk

    Hi,
    For sharing my iPhoto library among multiple users on a MacBook Pro (currently running 10.6.8), I followed an Apple Support guide and created a disk image file, which I mount for storing my pictures without permission issues.
    However, to allow for future expansion, the disk image is 100 GB, while the current photo library is roughly 50 GB. Is there any way to get Time Machine to back up the files within the mounted disk rather than the disk image file itself, so that the backup only takes up the actual size of the disk's contents rather than the whole image? And if the disk image file is copied, will the files within it also be backed up?
    Hoping this makes sense to some of you. Thanks!

    Create a sparse bundle disk image and copy your library there. A sparse bundle disk image consists of many small band files rather than one single file, and it grows to fit your needs by adding bands. If something changes inside the bundle, Time Machine should be smart enough to back up only the changed bands, not the entire image. So if you tell Disk Utility to create an empty sparse bundle disk image of 1000 GB and you put only 50 GB in it, the bundle will occupy only 50 GB.
    When you delete a lot of data, you can even shrink the disk image back (through a Terminal command) to free up real disk space.
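    A minimal sketch of both steps with hdiutil, assuming the image lives in ~/Pictures (adjust size, volume name and path to taste):

    $ hdiutil create -type SPARSEBUNDLE -size 100g -fs HFS+J -volname "iPhoto Library" ~/Pictures/PhotoStore.sparsebundle
    $ hdiutil compact ~/Pictures/PhotoStore.sparsebundle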

  • How to backup a mounted & encrypted disk image?

    I have read all the posts about creating sparse bundles. I guess this allows you to back up the disk image as long as it is not mounted? Will it still be backed up if it is mounted? Will it be usable when I restore it?
    What if all I want to do is back up the mounted disk and not the image? I did some tests and it seems to work: when I look at the backup from a different machine by taking over the disk, I see the volume as a possible restore and I see the files. Am I missing something? I thought I had read that this does not work. I do realize the backup is not very secure, since I can now restore files from an image I had intended to keep secure. It would be nice if Time Machine could back up in an encrypted format that required a password for the restore. Create a sparse bundle on the backup disk and back up to that mounted image?
    Steven

    Well, since it is encrypted, no matter how you delete it, it should remain secure. That is an advantage of an encrypted file.
    However, if you want to overwrite the contents of the file, you can just drag it to the Trash and select Secure Empty Trash.
    NOTE: if you have a lot of other deleted files in the Trash, Secure Empty Trash will take longer, because those other files are also securely erased. I've made the mistake of doing a Secure Empty Trash with several hundred files in the Trash, some of which were rather large.

  • Missing menu "File|NFS Mounts" in Disk Utility

    I had many NFS mounts set up through Disk Utility's "File > NFS Mounts" menu, but now that option is missing; I can't see my mounts, nor are those mounts working. So I have two questions:
    1. Where can I see my old mounts listed?
    2. How can I make the NFS mounts work again?

    The problem is Boot Camp: it uses a hybrid GPT/MBR partitioning scheme, which ends up hiding the Recovery HD partition (an EFI partition, neither GPT nor MBR).
    I would expect a new version of Boot Camp to be released fairly soon because of the Recovery HD invisibility issue.
    In this article it is suggested that rEFIt should be used to partition a hard drive that is going to support multiple boots, including Mac OS X, Linux, and Windows.
    (http://wiki.onmac.net/index.php/Triple_Boot_via_BootCamp)
    The key in the article to using Boot Camp with rEFIt is this:
    "Run the Boot Camp Assistant and create the Windows XP driver cd. Then exit Boot Camp.  DO NOT PARTITION USING BOOT CAMP: you are only using Boot Camp for the drivers, not the partitioning."
    All partitioning is done in terminal mode using the "diskutil" command.
    rEFIt is used to update both the GPT and MBR records so that all partitions will be visible using its "gptsync" command.
    Then - you replace the standard Mac boot menu with the rEFIt boot menu.  THAT will show the Mac OS X partition, Recovery HD (an EFI partition), and the Windows partition.
    My caveat is that rEFIt - which is open sourced and available here:  http://refit.sourceforge.net
    has not been recently updated and tested with respect to Mac OS X Lion.
    Hope this helps!
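    If you go the rEFIt route, the relevant commands look roughly like this (the disk identifier is an example; double-check it with diskutil list before touching anything, and gptsync comes from the rEFIt tools, not from OS X itself):

    $ diskutil list
    $ sudo gptsync /dev/rdisk0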

  • Where does Disk Utility define NFS mounts?

    Hi, I used to use Disk Utility to define an NFS mount point for my Drobo, but then I sold the Drobo and deleted the mount point from Disk Utility. However, my system.log file shows that rpc.statd is still trying to find the Drobo once every hour. I double-checked and there is nothing listed in auto_master, so the only place I can think of where Disk Utility might define the mounts is Directory Services, but I can't find where. Does anyone know where Disk Utility defines NFS mounts and how I can clear it out?

    Mountain Lion NFS Mounts Missing In...: Apple Support Communities
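    For what it's worth, static mounts created by Disk Utility are stored as records in the local Directory Services node, so leftovers can usually be found and removed with dscl; a hedged sketch (the exact record name depends on what the Drobo entry was called):

    $ dscl . -list /Mounts
    $ sudo dscl . -delete /Mounts/<record name shown by the list, with any "/" escaped as "\/">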

  • Cannot mount disk from a Sol8 system via nfs

    Hi All
    I have Sol10 up and running. I noticed that it was not very easy to mount a Linux disk with NFS; I found out that I had to add -o vers=3. Now I would like to mount a disk from a Sol8 system.
    So far I have only managed to get:
    mount sol8sys:/storage /mnt/sol8sys
    nfs mount: sol8sys:/storage: Permission denied
    This gives me the impression that something is not configured well on the Sol8 system.
    Well, here is some info about that:
    $ exportfs
    -               /storage   rw=serv1:serv2:sol10sys,ro=serv3   "Storage Array"
    sol10sys represents the system I'm trying to do the mount on.
    Any suggestions why I cannot mount this disk? Or some hints on how to proceed in solving this problem?
    Thanks a lot
    LuCa

    The reverse resolution has to match exactly. Most machines are in DNS, so the reverse resolution will be a name, not an IP address; therefore they won't match.
    Also, how were you specifying the IP address? You can't just throw an IP address into an access list. From the share_nfs man page we can see that an access list contains:
    hostname, netgroup, domain name suffix, or network. An IP address isn't a valid member.
    Darren
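    To illustrate, the export on the Solaris 8 side normally lives in /etc/dfs/dfstab and uses hostnames (not IPs) in the access list; the entry below simply mirrors your exportfs output, and the follow-up commands are the usual way to re-export and verify from the client:

    # on sol8sys, in /etc/dfs/dfstab (then run shareall)
    share -F nfs -o rw=serv1:serv2:sol10sys,ro=serv3 -d "Storage Array" /storage
    # on sol10sys, confirm the export is visible before mounting
    showmount -e sol8sys
    mount sol8sys:/storage /mnt/sol8sys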

  • Backing up to an NFS mount

    Has anyone figured out a way to back up to an NFS mount? Ideally, I'd like to create something like a 500 GB sparse bundle disk image at /Network/Servers/foo/Backups/my_machine.sparsebundle and just have Time Machine write its backups to this image, similar to how it works for AFP mounts. I can't seem to figure out how to get a disk image to show up as a valid TM target disk, though.

    There seems to be an unsupported solution (search for "time machine nfs" with your favourite search engine). Nevertheless, it does not (yet?) work.
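    For reference, the commonly cited unsupported workaround is to tell Time Machine to show network volumes it normally hides and then point it at a sparse bundle created on the NFS mount; as noted above, reports on whether this actually works are mixed (the path and size below are taken from the question and are only illustrative):

    $ defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
    $ hdiutil create -type SPARSEBUNDLE -size 500g -fs HFS+J -volname "TM Backup" /Network/Servers/foo/Backups/my_machine.sparsebundle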

  • Accessing NFS mounted share in Finder no longer works in 10.5.3+

    I had previously set up an automounted NFS share with Leopard against a RHEL 5 server at the office. I had to jump through a few hoops to punch a hole through the appfirewall to get the share accessible in the Finder.
    A few months later, when I returned to the office after a consultancy stint and upgrades to 10.5.3 and 10.5.4, the NFS mount no longer works. I investigated it today and I can't get it to run even with the appfirewall disabled.
    I've been doing some troubleshooting, and the interaction between statd, lockd, and perhaps portmap seems a bit fishy, even with the appfirewall disabled. Both statd and lockd complain that they cannot register; lockd once and statd indefinitely.
    Jul 2 15:17:10 ySubmarine com.apple.statd[521]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd[521]): Exited with exit code: 1
    Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    ... and rpcinfo -p gets connection refused unless I start portmap using the launchctl utility.
    This may be a bit obscure, and I'm not exactly an expert on NFS, so I wonder if someone else has stumbled across this and can point me in the right direction?
    Johan

    Sorry for my late response, but I have finally got around to some trial and error. I can mount the share using mount_nfs (but need to use sudo), and it shows up as a mounted disk in the Finder. However, when I start to browse a directory on the share that I can write to, I end up with the lockd and statd failures.
    $ mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
    mount_nfs: /Users/yyyy/xxxx-home: Permission denied
    $ sudo mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
    Jul 7 10:37:34 zzzz com.apple.statd[253]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd[253]): Exited with exit code: 1
    Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:44 zzzz com.apple.statd[254]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd[254]): Exited with exit code: 1
    Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:54 zzzz com.apple.statd[255]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd[255]): Exited with exit code: 1
    Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:58 zzzz loginwindow[25]: 1 server now unresponsive
    Jul 7 10:37:59 zzzz KernelEventAgent[26]: tid 00000000 unmounting 1 filesystems
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /net updated
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /home updated
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: no unmounts
    Jul 7 10:38:02 zzzz loginwindow[25]: No servers unresponsive
    ... and firewall wide open.
    I guess that the Finder somehow triggers file locking over NFS.
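    If it really is NFS locking that trips the Finder up, one thing worth trying is mounting with locking disabled or handled locally; nolocks and locallocks are standard mount_nfs options (they also appear in another post further down this page), though whether they are appropriate for your setup is a judgment call:

    $ sudo mount_nfs -o resvport,nolocks,locallocks xxxx:/home /Users/yyyy/xxxx-home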

  • Cannot access external NFS mounts under Snow Leopard

    I was previously running Leopard (10.5.x) and automounted an Ubuntu (9.04 Jaunty) Linux NFS mount from my iMac. I had set this up with Directory Utility, it was instantly functional, and I never had any issues. After upgrading to Snow Leopard, I set up the same mount point on the same machine (using Disk Utility now), without changing any of the export settings, and Disk Utility stated that the external server had responded and appeared to be working correctly. However, when attempting to access the share, I get an 'Operation not permitted' error. I also cannot manually create the NFS mount using mount or mount_nfs. I get a similar error if I try to cd into /net/<remote-machine>/<share>. I can see the shared folder in /net/<remote-machine>, but I cannot access it (cd, ls, etc.). I can see on the Linux machine that the iMac has mounted the share (showmount -a), so the problem appears to be solely in the permissions. But I have not changed any of the permissions on the remote machine, and even then, they are blown wide open (777), so I'm not sure what is causing the issue. I have tried everything both as a regular user and as root. Any thoughts?
    On the Linux NFS server:
    % cat /etc/exports
    /share 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
    % showmount -a
    All mount points on <server>:
    192.168.1.100:/share <-- <server> address
    192.168.1.101:/share <-- iMac address
    On the iMac:
    % rpcinfo -t 192.168.1.100 nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting
    program 100003 version 4 ready and waiting
    % mount
    trigger on /net/<server>/share (autofs, automounted, nobrowse)
    % mount -t nfs 192.168.1.100:/share /Volumes/share1
    mount_nfs: /Volumes/share1: Operation not permitted

    My guess is that the Linux server is refusing NFS requests coming from a non-reserved (<1024) source port. If that's the case, adding "insecure" to the Linux export options should get it working. (Note: requiring the use of reserved ports doesn't actually make things any more secure on most networks, so the name of the option is a bit misleading.)
    If you were previously able to mount that same export from a Mac, you must have been specifying the "-o resvport" option and doing the mounts as root (via sudo or automount which happens to run as root). So that may be another fix.
    HTH
    --macko
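    If that is indeed the cause, the export line would become something like the following (option names per exports(5); re-export afterwards):

    /share 192.168.1.0/24(rw,sync,insecure,no_subtree_check,no_root_squash)
    % sudo exportfs -ra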

  • Anyone else having problems with NFS mounts not showing up?

    Since Lion, I cannot see NFS shares anymore. The folder that had them is still there, but the share will not mount. It worked fine in 10.6.
    nfs://192.168.1.234/volume1/video
    /Volumes/DiskStation/video
    resvport,nolocks,locallocks,intr,soft,wsize=32768,rsize=3276
    Any ideas?
    Thanks

    Since the NFS mounts show up in Terminal, go to the local mount directory (i.e. the mount location set in NFS Mounts in Disk Utility) and do the following:
    First, create a link file:
    sudo ln -s full_local_path link_path_name
    sudo ln -s /Volumes/linux/projects/ linuxProjects
    Next, create a new directory, say in the root of the host drive (e.g. Macintosh HDD):
    sudo mkdir new_link_storage_directory
    sudo mkdir /Volumes/Macintosh\ HDD/Links
    Move the above link file to the new directory:
    sudo mv link_path_name new_link_storage_directory
    sudo mv linuxProjects /Volumes/Macintosh\ HDD/Links/
    Then, in the Finder, locate the NEW_LINK_STORAGE_DIRECTORY; the link files should allow opening of these NFS mount points.
    Finally, after all links have been created and placed into the new directory, add it to the left sidebar. Now it works just like before.

  • Nfs mount created with Netinfo not shown by Directory Utility in Leopard

    On Tiger I used to mount a few directories dynamically using NFS. To do so, I used NetInfo.
    I have upgraded to Leopard and the mounted directories are still working, although NetInfo is no longer present.
    I was expecting to see these mount points and be able to modify them using Directory Utility, which has replaced NetInfo, but they are not even shown in the Mounts panel of Directory Utility.
    Is there a way to see and modify NFS mount points previously created with NetInfo using the new Directory Utility?

    Thank you very much! I was able to recreate the static automount that I had previously had. I just had to create the "mounts" directory in /var/db/dslocal/nodes/Default/ and then I saved the following text as a .plist file within "mounts".
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>dir</key>
    <array>
    <string>/Network/Backups</string>
    </array>
    <key>generateduid</key>
    <array>
    <string>0000000-0000-0000-0000-000000000000</string>
    </array>
    <key>name</key>
    <array>
    <string>server:/Backups</string>
    </array>
    <key>opts</key>
    <array>
    <string>url==afp://;AUTH=NO%20USER%[email protected]/Backups</string>
    </array>
    <key>vfstype</key>
    <array>
    <string>url</string>
    </array>
    </dict>
    </plist>
    I don't think the specific name of the .plist file matters, or the value for "generateduid". I'm listing all this info assuming that someone out there might care.
    I assume this would work for SMB shares also... if SMB worked, which it hasn't on my system since I installed Leopard.
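    As a follow-up note: for an NFS export rather than an AFP URL, the same record format should work with "nfs" as the vfstype, the export (e.g. server:/Backups) as the name, and ordinary mount options in opts, much like the dsimport example further down this page. After creating or editing a record, the automounter can be nudged to pick it up without rebooting:

    $ sudo automount -vc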

  • Does Time Machine still ignore mounted disk images? Specifically, encrypted sparse bundle disk images?

    My Time Machine backs up to a Time Capsule, which cannot be encrypted. I also have confidential data in an encrypted sparse bundle disk image in my home folder. When TM backs up while the encrypted sparse bundle disk image is mounted and I'm accessing the data, does TM back up the data in decrypted, "in the clear" form, or does it exclude the disk image because it's mounted? I've done a little research, but there's conflicting information. Not sure what happens in Lion now...

    Time Machine does not back up mounted disk images! The encrypted sparse bundle disk image was mounted, I updated a doc and ran a TM backup: the file was not listed in the TM repository and the doc remained unchanged inside the encrypted sparse bundle disk image on the TM volume. Then I ejected the disk image and ran a TM backup again: the updated doc was backed up inside the encrypted sparse bundle disk image! Thank you!

  • Permanent NFS Mount

    Hi
    I'm trying to figure out how to create a permanent NFS mount on my 10.5.6 Server hosts. Using Directory Utility seems to create only autofs mounts, which I've had trouble with in the past on other platforms, so I'm not very trusting of it.
    /etc/fstab.hd is apparently ignored, so I'm not sure how else to get a permanent mount. Is it even possible on Leopard Server?
    Thanks.

    I also found that nested/hierarchical mounts don't work with the Apple version of Sun's automounter. I thought for sure that if I made a "multiple mounts" entry they would work, since (as I understand it) in that case the automounter gets to see the whole proposed subtree at once from a single Directory Services lookup, but no, it doesn't work either. There are "multiple mounts" examples on both Apple's man page and Sun's, but on Sun's page the example is nested and on Apple's it isn't. I guess that's a kind of transparency, but a rather CYA-ish kind that leaves us out here wagging our jaws quite a bit when we expect it to behave like other automounters.
    However! Nested mounts with the 'net' option DO work. ?!
    And in this case, unlike the traditional Sun "multiple mounts" case, the automounter must build the tree with multiple Directory Services lookups, not just one. How can it even do that? Is it searching the directory instead of doing a simple lookup?
    I can load all the mounts into Open Directory as separate records (or whatever you call them), like this:
    cat > nested-example
    0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Mounts 3 dsAttrTypeStandard:RecordName dsAttrTypeStandard:VFSType dsAttrTypeStandard:VFSOpts
    terabithia\:/arrchive/incoming:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/Radio:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/backup:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/ebooks:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/fonts:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/movies:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/music/Antoine:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/music/Lauren:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/music/Roger:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive/music/jen:nfs:nosuid,nodev,hard,intr,net
    terabithia\:/arrchive:nfs:nosuid,nodev,hard,intr,net
    ^D
    dsimport -g nested-example /Local/Default I -u someadminuser
    and they will show up under /Network/Servers/terabithia/arrchive. I can no longer choose the mountpoint myself, which is a disadvantage for more than vanity---with the Solaris automounter, it's possible to build a single nested tree on the client out of filesystems pulled in from a bunch of different NFS servers, while the Mac's 'net' naming convention straightjackets me into only rebuilding trees that exist within one NFS server.
    Also, this works on 10.4, too, though in that case of course you use netinfo or niload fstab instead of dsimport.
    Now can someone explain why 'net' suddenly works so much better? And is there a hidden downside to using it?

  • Setting umask on NFS mounts under OS X 10.6

    I don't see any up-to-date information on setting umasks with 10.6. We use NFS SAN storage for several folders, mounted using the new NFS Mounts feature of Disk Utility, and the SAN and the Mac are playing nicely together for UID and GID mapping, even with our Active Directory LDAP groups and accounts.
    However, it creates everything as 755 (umask 0022, the system default), and I'd like to change that umask to 002 so it creates 775s instead. Nothing for 10.4 or 10.5 seems to apply here (NSUmask, etc.), and it doesn't appear to be an NFS Mounts option, either.
    NFSv4 and ACLs are not really a good option for me, since it's a multiprotocol SAN environment; we want to keep NFSv3 permission bits for UNIX/Linux/Mac and ACLs for Windows. Any ideas how to set the default umask, or at least a way to set the umask for each mount?

    xzi wrote:
    Yes, I tried that, it does not appear to work on 10.6 that's my point.
    I'm sorry, but I just can't figure out "etc". All I can do is keep asking if you have tried this or that. Have you tried both variations of that config file, and did they not work? Have you tried the instructions in this document, which seems updated for Snow Leopard?
    Have you considered that you may have already tried the correct solution, but simply didn't do it correctly? Have you tried logging out and back in? Rebooting?
