Backup to Remote Disk Array

Hi,
I have a remote disk array of IDE drives (2TB) and want to use RMAN to back up my database to these disks. The question is: how can I do this? I thought of using NFS or Samba, but I'm not sure that's a good idea. Does anyone have any suggestions? A plug-in library for RMAN? I'm using 9i on Red Hat Linux.
Thanks in advance,
Steve.

Technically speaking, nothing is stopping you from using NFS, as far as I know.
As long as the NFS server is available during the backup you should not have any issues; the only concern is the time it takes to back up the data.
In your case you can present those disk arrays to the database host, create some volumes on them, and back up your database there.
There are no plug-in libraries for disk. The RMAN installation for your particular version and O/S takes care of it.
You only have to configure RMAN with a media management library when you want to back up the data to a tape drive.
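For illustration, a minimal sketch of pointing a disk channel at an NFS mount (the mount point and format string are assumptions, not taken from your setup):

# On the database host, with the array's export mounted at /nfs/rmanbackup
# (Oracle generally wants NFS mounted with options like hard,rsize=32768,wsize=32768)
rman target /
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/nfs/rmanbackup/%d_%U';
RMAN> BACKUP DATABASE;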
mukundan

Similar Messages

  • RMAN backup to remote disk in Windows

    Hi,
I'm running Oracle 9.2.0.1 on Windows Server 2003 on both Server A and Server B. My primary DB is on Server A, while my RMAN catalog is on Server B. I'm running RMAN to back up the DB on Server A, which is running out of space, so I have decided to store my backupsets on Server B, where I have a few hundred GB free. In short, I will run the backup script against the RMAN catalog on Server B and store the backupsets on Server B as well.
    My problem is that whenever I run the script, whether from Server A or Server B, I get the following error. The RMAN configuration parameters are attached below.
    Error:
    RMAN> backup database;
    Starting backup at 20-AUG-09
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=13 devtype=DISK
    allocated channel: ORA_DISK_2
    channel ORA_DISK_2: sid=9 devtype=DISK
    allocated channel: ORA_DISK_3
    channel ORA_DISK_3: sid=14 devtype=DISK
    channel ORA_DISK_1: starting full datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    input datafile fno=00012 name=F:\ORADATA\DRDB\DATA\DATA01.DBF
    channel ORA_DISK_1: starting piece 1 at 20-AUG-09
    channel ORA_DISK_2: starting full datafile backupset
    channel ORA_DISK_2: specifying datafile(s) in backupset
    input datafile fno=00002 name=F:\ORADATA\DRDB\DATA\DATA07.DBF
    channel ORA_DISK_2: starting piece 1 at 20-AUG-09
    channel ORA_DISK_3: starting full datafile backupset
    channel ORA_DISK_3: specifying datafile(s) in backupset
    input datafile fno=00006 name=E:\ORADATA\DRDB\INDEX\INDX01.DBF
    channel ORA_DISK_3: starting piece 1 at 20-AUG-09
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 08/20/2009 17:06:56
    ORA-19504: failed to create file "\\ServerB\f$\drdb\backupset\BackupDRDB_DB_39kn665g_105_1"
    ORA-27040: skgfrcre: create error, unable to create file
    OSD-04002: unable to open file
    O/S-Error: (OS 5) Access is denied.
    RMAN> show all;
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 10;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '\\ServerB\f$\drdb\controlfile\Backup%d_CTL_%F';
    CONFIGURE DEVICE TYPE DISK PARALLELISM 3;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '\\ServerB\f$\drdb\backupset\Backup%d_DB_%u_%s_%p';
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '\\ServerB\f$\drdb\controlfile\SNCFDRDB.CTL';
My login ID exists in the Administrators and ORA_DBA groups on both Server A and Server B. I can manually open the path '\\ServerB\f$\drdb\backupset\' from Server A and am able to create/modify/delete files inside this folder, so I don't understand why I'm getting the 'Access denied' error. Also, is this a correct way to back up to a remote disk? I'm quite new to Oracle and just attended the RMAN course with OU. I have Googled this problem but nothing has helped. Please help me or share your knowledge to resolve this problem.
    Thanks & regards,
    Nesan

Hi Andre,
    Yes, you're correct: it's because the Oracle service runs under the Local System account. I went to its properties and changed the logon details to my own login ID, which is an Administrator on both Server A and B. Now my remote backup is working great...
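For anyone who wants to script the same change, a hedged sketch from the command line (the service name follows the OracleService<SID> pattern and the account is an assumption; a dedicated service account is safer than Administrator):
    REM Run while logged on as an administrator; OracleServiceDRDB is an assumed name
    sc config OracleServiceDRDB obj= "DOMAIN\orabackup" password= "secret"
    REM Restart the service so the new logon takes effect (this restarts the database)
    net stop OracleServiceDRDB
    net start OracleServiceDRDB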
But will this change affect other apps/jobs/results in Oracle?
    Guys,
If you have any better idea to solve this issue, please drop it into this thread, or let me know if you find anything wrong with my solution.
    Thanks a lot to Andre... you have solved my two-week-old problem.
    Regards,
    Nesan

  • BRtools Two-Phase-Backup to remote disk and remote tape drive

    Hello,
I want to do a two-phase backup of an SAP system: to a remote disk, and from there to a tape drive.
    Backup to remote disk works fine, but can I use BRtools to write the backup files from the remote machine to a tape drive that is connected to that remote machine?
    I'm using SLES 10 on both machines.
    I can't find anything in the SAP documentation on this. It is possible to back up to a remote disk and to a remote tape, but it is not clear whether this works when, in phase 1 of the backup, the files are written to a remote disk. It seems I can only tell BRtools to back up files from a local disk to a remote tape drive; is this right?
    Thanks in advance for your help.
    Regards
    Phil

Hi Nirmal,
    thanks for your answer.
    But I can do disk-to-tape backups: when I choose "Backup of Disk Backup" in BRtools, I see a list of all the backups I wrote to the local disk and can write them to remote tape by using pipe as the backup device.
    The only thing is, my backups are stored on a remote disk, and from there it seems I have to write them to tape with cron jobs running a shell script, as sketched below. That's not my favourite solution, but it's OK.
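For reference, a minimal sketch of such a cron-driven script on the remote machine (the stage directory layout and tape device are assumptions):
    #!/bin/sh
    # Write yesterday's staged BRtools backup directory to the local tape drive.
    # /backup/DEV is an assumed stage directory; /dev/nst0 is an assumed device.
    BACKUP_DIR="/backup/DEV/$(date -d yesterday +%Y%m%d)"
    TAPE=/dev/nst0
    mt -f "$TAPE" rewind
    tar -cvf "$TAPE" "$BACKUP_DIR"
    # crontab entry, e.g.: 0 3 * * * /usr/local/bin/disk_to_tape.sh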
    Regards
    Phil

  • Remote Disk Backup

    Hi gurus,
I'd like to ask some questions again...
    I want to back up my SAP database to a remote disk.
    My server is using:
    OS : Windows Server 2003 Enterprise 64bit
    DB : Oracle 10.2.0.4
As I've read here: http://help.sap.com/saphelp_nwmobile71/helpdata/en/47/0b9ed8434f57dde10000000a1553f7/content.htm
    my only option as a Windows user is to use backup_dev_type = ftp, right?
    So I have configured everything according to that article.
    I've mapped the disk storage location to drive z: on my SAP server.
    When I execute the backup, SAP returns an error saying that z: is an unknown location.
    I use the Windows FTP server that comes with IIS.
    The picture in the article above shows SAPFTP being used.
    Must I use SAPFTP to back up to a remote disk?
    If so, how do I configure it?
    I have tried to open SAPFTP in 'TCP/IP Connections' via tcode 'SM59' and 'Test Connection', which returned something like:
    Logon:                    569  msec
      0  KB:                    1  msec
    10  KB:                    4  msec
    20  KB:                    5  msec
    30  KB:                    7  msec
Can anyone help me?
    Thanks very much.

gurus,
    I've changed stage_root_dir to ftp://folder, and now it seems more promising, but another error is displayed:
    Detail log:                    adzqrhjw.sve            
    BR0002I BRARCHIVE 7.00 (32)            
    BR0006I Start of offline redo log processing: adzqrhjw.sve 2009-01-08 13.45.24       
    BR0484I BRARCHIVE log file: D:\oracle\DEV\saparch\adzqrhjw.sve       
    BR0477I Oracle pfile D:\oracle\DEV\102\database\initDEV.ora created from spfile D:\oracle\DEV\102\database\spfileDEV.ora       
    BR0101I Parameters       
    Name                           Value       
    oracle_sid                     DEV       
    oracle_home                    D:\oracle\DEV\102       
    oracle_profile                 D:\oracle\DEV\102\database\initDEV.ora       
    sapdata_home                   D:\oracle\DEV       
    sap_profile                    D:\oracle\DEV\102\database\initDEV_remote.sap       
    backup_dev_type                stage       
    archive_stage_dir              ftp://backupdev       
    compress                       no       
    stage_copy_cmd                 ftp       
    remote_host                    backupDEV       
    remote_user                    administrator       
    system_info                    SAPServiceDEV SAPDEV Windows 5.2 Build 3790 Service Pack 1 AMD64       
    oracle_info                    DEV 10.2.0.2.0 8192 2582 20859434 SAPDEV UTF8 UTF8       
    sap_info                       700 SAPSR3 0002LK0003DEV0011S14735821930013NetWeaver_ORA       
    make_info                      NTAMD64 OCI_10201_SHARE Feb  6 2008       
    command_line                   brarchive -u / -jid LOG__20090108134524 -c force -p initDEV_remote.sap -s       
    BR0280I BRARCHIVE time stamp: 2009-01-08 13.45.43       
    BR0008I Offline redo log processing for database instance: DEV       
    BR0009I BRARCHIVE action ID: adzqrhjw       
    BR0010I BRARCHIVE function ID: sve       
    BR0048I Archive function: save       
    BR0011I 9 offline redo log files found for processing, total size 388.720 MB       
    BR0112I Files will not be compressed       
    BR0130I Backup device type: stage       
    BR0106I Files will be saved on disk in directory: backupDEV:ftp://backupdev       
    BR0134I Unattended mode with 'force' active - no operator confirmation allowed       
    BR0202I Saving init_ora       
    BR0203I to ftp://backupdev\DEV ...       
    BR0278E Command output of 'C:\usr\sap\DEV\SYS\exe\uc\NTAMD64\sapftp.exe -v -n -i backupDEV -u D:\oracle\DEV\saparch\.adzqrhjw.ftp -a -c put D:\oracle\DEV\102\database\initDEV.ora ftp://backupdev\DEV\initDEV.ora -c bin -c put D:\oracle\DEV\102\database\spf       
    Connected to backupDEV Port 21.       
    220 Microsoft FTP Service
    331 Password required for administrator.
    230 User administrator logged in.
    200 Type set to A.
    200 PORT command successful.
    550 ftp://backupdev\DEV\initDEV.ora: The filename, directory name, or volume label syntax is incorrect.       
    200 Type set to I.       
    200 PORT command successful.       
    550 ftp://backupdev\DEV\spfileDEV.ora: The filename, directory name, or volume label syntax is incorrect.       
    221       
    BR0280I BRARCHIVE time stamp: 2009-01-08 13.45.43       
    BR0279E Return code from 'C:\usr\sap\DEV\SYS\exe\uc\NTAMD64\sapftp.exe -v -n -i backupDEV -u D:\oracle\DEV\saparch\.adzqrhjw.ftp -a -c put D:\oracle\DEV\102\database\initDEV.ora ftp://backupdev\DEV\initDEV.ora -c bin -c put D:\oracle\DEV\102\database\spfi       
    BR0222E Copying init_ora to/from ftp://backupdev\DEV failed due to previous errors       
    BR0016I 0 offline redo log files processed, total size 0.000 MB       
    BR0007I End of offline redo log processing: adzqrhjw.sve 2009-01-08 13.45.43       
    BR0280I BRARCHIVE time stamp: 2009-01-08 13.45.43       
    BR0005I BRARCHIVE terminated with errors     
Can you tell me what's wrong with it?
    Thanks, gurus
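For reference: the 550 errors above show the FTP server rejecting 'ftp://backupdev\DEV\...' as a file path, which suggests the stage directory was given as a URL. A hedged sketch of plain-path stage parameters in init<SID>_remote.sap (the directory names are assumptions; with backup_dev_type = stage, the stage directories are paths on the remote host, not URLs):
    backup_dev_type = stage
    # stage_root_dir is an assumed path under the FTP root, not a URL
    stage_root_dir = /backup/DEV
    # archive_stage_dir likewise an assumption (it appears as archive_stage_dir in your log)
    archive_stage_dir = /backup/DEV/arch
    stage_copy_cmd = ftp
    remote_host = backupDEV
    remote_user = administrator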

  • Low Network Utilization in Remote Disk Backup via SAP FTP

    Hi gurus,
I'm running a whole online backup + redo log backup of my SAP server to a backup server via SAPFTP (remote disk backup).
    I'm using a 100Mbps network connection between the two servers, with the SAP server configured to 'maximize data throughput for network applications'.
    When I copy data from the SAP server to the backup server manually, the speed is OK (around 98% network utilization).
    But when I use the put command in program RSFTP002 (SAPFTP) for the same file, network utilization is only around 10%.
    I use Windows Server 2003 x64 for the SAP server and Windows Server 2008 x64 for the backup server.
    Each has adequate RAM: 8GB for the SAP server and 4GB for the backup server.
    Is this a technical problem, or is it because the backup is considered a background process and therefore throttled?
    Help will be very much appreciated.
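One hedged way to tell SAPFTP overhead apart from a network or FTP-server problem is to push the same file with the stock Windows ftp.exe and compare throughput (host name, credentials, and file name are assumptions):
    REM Build a command script, then run ftp.exe non-interactively with it
    echo open backupserver> ftptest.txt
    echo user administrator password>> ftptest.txt
    echo binary>> ftptest.txt
    echo put D:\oracle\DEV\sapbackup\testfile.dat>> ftptest.txt
    echo bye>> ftptest.txt
    ftp -n -s:ftptest.txt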

Go read http://help.sap.com/saphelp_45b/helpdata/en/0d/d311324a0c11d182b80000e829fbfe/frameset.htm
    It's old, but it should help you nonetheless.
    I would not use SAPFTP; as you found out, it is slow.
    Use dd instead, which is much faster.

  • RAC node configuration when disk array fails on one node

Hi,
    We recently had all the filesystems on node 1 of our RAC cluster turn read-only. Further investigation revealed that this was due to a disk array failure on node 1. The database instance on node 2 is up and running fine. The OS team is rebuilding node 1 from scratch and will restore the Oracle installation from backup.
    My question is, once all files are restored:
    Do we need to add the node back to the RAC configuration?
    Do we need to relink the Oracle binaries?
    Can the node be brought up directly once the Oracle installation is restored properly, or will the Oracle team need to perform additional steps to bring the node back into the RAC configuration?
    Thanks,
    Sachin K

Hi,
    If the restore fails in some way, we will need to first remove node 1 from the cluster and then add it back, right? Kindly confirm the steps below.
    In case of such a situation, these are the steps we plan to follow:
    Version: 10.2.0.5
    Affected node :prd_node1
    Affected instance :PRDB1
    Surviving Node :prd_node2
    Surviving instance: PRDB2
    DB Listener on prd_node1:LISTENER_PRD01
    ASM listener on prd_node1:LISTENER_PRDASM01
    DB Listener on prd_node2:LISTENER_PRD02
    ASM listener on prd_node2:LISTENER_PRDASM02
    Login to the surviving node .In our case its prd_node2
    Step 1 - Remove ONS information :
    Execute as root the following command to find out the remote port number to be used
    $cat $CRS_HOME/opmn/conf/ons.config
    and remove the information pertaining the node to be deleted using
    #$CRS_HOME/bin/racgons remove_config prd_node1:6200
Step 2 - Remove resources:
    In this step, the resources that were defined on this node have to be removed. These resources include (a) database, (b) instance, and (c) ASM. A list of these can be acquired by running the crs_stat -t command from any node.
    The srvctl remove listener command used below is only applicable in 10.2.0.4 and higher releases, including 11.1.0.6. The command will report an error if the clusterware version is lower than 10.2.0.4; in that case, use netca to remove the listener.
    srvctl remove listener -n prd_node1 -l LISTENER_PRD01
    srvctl remove listener -n prd_node1 -l LISTENER_PRDASM01
    srvctl remove instance -d PRDB -i PRDB1
    srvctl remove asm -n prd_node1 -i +ASM1
Step 3 - Execute rootdeletenode.sh:
    From a node that you are not deleting, execute as root the following command, which will help find out the node number of the node that you want to delete:
    #$CRS_HOME/bin/olsnodes -n
This number can be passed to the rootdeletenode.sh command, which is to be executed as root from any node that is going to remain in the cluster.
    #$CRS_HOME/install/rootdeletenode.sh prd_node1,1
Step 4 - Update the inventory:
    From a node which is going to remain in the cluster, run the following command as the owner of the CRS_HOME. The argument passed to CLUSTER_NODES is a comma-separated list of the node names that are going to remain in the cluster. This step needs to be performed once per home (Clusterware, ASM, and RDBMS homes).
    ## Example of running runInstaller to update inventory in Clusterware home
    $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORA_CRS_HOME "CLUSTER_NODES=prd_node2" CRS=TRUE
    ## Optionally enclose the host names with {}
    ## Example of running runInstaller to update inventory in ASM home
    $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ASM_HOME "CLUSTER_NODES=prd_node2"
    ## Optionally enclose the host names with {}
    ## Example of running runInstaller to update inventory in RDBMS home
    $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=prd_node2"
    ## Optionally enclose the host names with {}
We need steps to add the node back into the cluster. Can anyone please help us with this? (A sketch follows after this post.)
    Thanks,
    Sachin K
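For completeness, a hedged sketch of the reverse (add-node) direction, based on the standard 10gR2 add-node flow; the node, VIP, and home names follow the example above, and every command should be verified against the clusterware documentation for your exact patch level:
    ## From a surviving node, run addNode.sh from the Clusterware home first,
    ## then repeat it from the ASM and RDBMS homes:
    $CRS_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={prd_node1}" \
        "CLUSTER_NEW_VIRTUAL_HOSTNAMES={prd_node1-vip}"
    ## Recreate the listeners with netca, then register ASM and the instance:
    srvctl add asm -n prd_node1 -i +ASM1 -o $ASM_HOME
    srvctl add instance -d PRDB -i PRDB1 -n prd_node1
    ## Re-add the ONS configuration removed in Step 1:
    $CRS_HOME/bin/racgons add_config prd_node1:6200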

  • Using RMAN to back up a remote database

I am after a good guide or advice on what to install and how to run Oracle RMAN
    to back up a remote Oracle database.
    RMAN must be installed and run on a Solaris (SunOS 5.8) box remote from the Solaris
    box (also SunOS 5.8) running the Oracle database but connected to it via a TCP/IP
    network.
    The Oracle Recovery Manager and Database version is 9.2.0.1.0.
    RMAN must perform the backup of the remote database over the network and store
    the backup on a disk local to the RMAN box.
    The backups may be full or incremental and may be "hot" backups taken while the
    Oracle database is open.
    The disk files are to be then backed up to tape using Veritas.
    What software do I need to install on the RMAN box ?
    What software do I need to install on the remote Oracle database box ?
    Thanks,
    Brett Morgan.

Regarding "writes to an NFS-mounted disk": RMAN writes to disk or to tape (tape only when it is bundled with the libraries of some backup software such as Veritas). The disk device can be local, or a disk mounted from another host or storage (over NFS as Werner mentioned, SMB, and others).
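To make that concrete, a minimal hedged sketch of the client side (connect strings and paths are assumptions). Note that a disk channel writes on the database server, so for the backup to land on a disk local to the RMAN box, that disk has to be exported and NFS-mounted on the database host:
    # On the RMAN box (the rman binary ships with the Oracle installation);
    # /mnt/rmanbox is the assumed NFS mount of the RMAN box's disk on the DB server
    rman target sys/password@remotedb <<EOF
    run {
      allocate channel d1 type disk format '/mnt/rmanbox/backup/%d_%U';
      backup incremental level 0 database;
    }
    EOF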

  • How to resolve errors (-50 and -8058) when moving Time Machine backups to new disk?

    I'm trying to move my Time Machine backups (about 600GB total) to a new external hard drive.  I started the process last night, but after an hour or so received two errors, each repeated multiple times:
    "The operation couldn't be completed because an unexpected error occurred (error code -8058)." 
    "The operation couldn't be completed because an unexpected error occurred (error code -50)."
    I opted to cancel the file transfer.  I looked up the error codes but I'm still not sure what they mean.  I found an old support article about error code -50 (Mac OS X 10.1: "Error Code -50" Alert Appears When Copying Files From a Remote Disk) and an old support article about error code -8058 that doesn't appear to be entirely relevant to my issue (Mac OS X 10.4: Error -8058, unable to eject when trying to copy a disc in Finder).  I've also found a number of Support Community discussions, none of which are particularly helpful.
    Questions:
    What do these two error codes mean?  Are the files that cause these errors somehow corrupt?
    If I click "Okay" when the error dialog appears, are the files that are causing the errors transferred or are they omitted from the transfer?
    If I transfer the files and click "Okay" when the errors appear, or if I use Terminal and cp -R as suggested in Mac OS X 10.1: "Error Code -50" Alert Appears When Copying Files From a Remote Disk, will I have trouble recovering files from the new backup disk?
    Do I need to verify/repair permissions on the original Time Machine disk before attempting the transfer?
    Is there some other method I should use (e.g. Terminal instead of Finder) to transfer the backups?
    Details:
    MacBook Air
    Mac OS 10.8.5
    Both the old and new Time Machine disks are formatted Mac OS Extended (Journaled)

    The two drives are handled as separate drives, even if they have the same name.
    In essence, the old backups are from a drive that's no longer connected; see #E3 in the Time Machine - Troubleshooting *User Tip,* also at the top of this forum.

  • Disk array configuration with Oracle redo logs and flash recovery area

    Dear Oracle users,
We are planning to buy a new server for Oracle Database 10g Standard Edition. We will put the Oracle database files, redo logs, and flash recovery area each on their own disk array. My question is: what is the best disk array configuration for the redo logs and flash recovery area, RAID 10 or RAID 1? Also, is it possible to duplicate the flash recovery area to another location (such as a network drive) at the same time? Since we only have a single disk array controller connecting to the disk arrays, I am trying to avoid a single point of failure that would lose the archive logs and daily backups.
    thanks,
    Belinda

Thank you so much for the suggestion. Could you please let me know the answer to my question about FRA redundancy?
    "Is it possible to duplicate the flash recovery area to another location (such as a network drive) at the same time? Since we only have a single disk array controller connecting to the disk arrays, I am trying to avoid a single point of failure that would lose the archive logs and daily backups."
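One hedged approach to the redundancy part of the question: rather than mirroring the whole FRA, Oracle can multiplex the archived logs to a second destination, so losing the FRA array does not lose them (the paths are assumptions, and you should check which archive-destination parameters your edition supports before relying on this):
    -- Keep the FRA as the primary archive destination...
    ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=BOTH;
    -- ...and write an optional second copy to another location (assumed path)
    ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/netdrive/arch OPTIONAL' SCOPE=BOTH;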

  • Stripe size for scratch disk array

I am building a CS5/64 workstation running on Win 7/64 that will be used to edit 1-4GB images. The scratch disks will consist of a RAID 0 array using 3-4 WD 600GB 10K drives, short-stroked, on an Areca card.
    What is the best stripe size for a 3-4 disk array for large images? Does Adobe publish how Photoshop reads and writes to the scratch disk, block sizes, etc.?
    Larry

    Right click on the root of your C: drive, and choose Properties.
    Click the Hardware tab, select the drive (array) you'd like to set advanced caching on, and click the [Properties] button.
Click the Policies tab, and note the setting of the [ ] Turn off Windows write-cache buffer flushing on the device checkbox. This may not be available, depending on the drivers.
    Note, specifically, that this feature can cause quite a lot of disk data to end up in your RAM for a while, if an application gets significantly ahead of the drive's ability to write data.  This is where the warning about having good battery backup comes in.  I'll add my own comment:  Your system should be very stable as well.  You don't want it crashing when a lot of writes are pending.
    -Noel

  • XSan/iMovie HD problems - won't render to remote disk, please help!

Without warning last week, my G4s started to crash on the simple task of rendering a title in iMovie to the Xsan; shortly after that, the G5s started to choke on a title render in iDVD. No changes whatsoever were made to any of the systems, configurations, permissions, or settings; one day things were fine, and the next, yikes! I know there have been problems with iMovie writing to remote disks (for instance, the fast/slow/reverse effect never did work with the Xsan being a remote disk), and I've worked around them in the past, but this one is rather enormous: right now I can only import and export video to the Xsan. Anything else seems to lead my machines into a full crash, and to make matters worse, I've got kids lined up around the block wanting to get in and spruce up their videos right now. We've done a defrag of the Xsan and that didn't help. Does anyone have any suggestions or advice? It's important that iMovie function with our Xsan, since that was why it was purchased initially. Thanks.

So, are you re-sharing the Xsan volume to a bunch of workstations using AFP, or are all of these direct-attached?
    Did the G4/G5 workstations get updated to a newer OS without you knowing? Or a new version of QuickTime?
    Do you have a metadata controller and a backup? Try failing over from the master to the backup.
    You could always unmount the volume from all clients, stop the volume, and run cvfsck.

  • Events on remote disks are missing from event library

The iMovie events on remote disks are missing. I've looked for the com.apple.iMovieApp.plist file and it's missing (that was a solution I found in earlier discussions). The icon for the remote disks shows a disk with a yellow caution symbol.
    I have a directory on this remote drive with iMovie events. How can I reconnect these events in iMovie?

Do you not have a backup copy of your iTunes library?
    If not, then you can transfer iTunes purchases from an iPod: File > Transfer Purchases

  • Backup NFS-mounted disks

My understanding (and my experience) is that Time Machine ignores NFS-mounted disks. Can other programs (SuperDuper, Carbon Copy Cloner, etc.) back them up?
    I have a bunch of Linux systems in my lab which I can mount on a Mac. If I had a program that would back them all up every night, that would be nice.

Carbon Copy Cloner allows backups of remote drives; SuperDuper does not. But you might be better off asking in the CCC support forums to see if it will work with NFS and non-HFS+ file systems; I would guess not. As for case sensitivity, there is HFSX, otherwise known as Mac OS Extended (Case-sensitive), and you can format a drive that way. All OS X backup programs will work with such drives, but there may be problems if you ever try to back up a case-insensitive file system to a case-sensitive formatted drive.
    see this link for additional OS X backup tools.
    http://discussions.apple.com/thread.jspa?messageID=7495315#7495315
If none of them will back up remote non-HFS drives (as I suspect is the case), there is always rsync; a sketch follows.
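A minimal sketch of the rsync route (the mount points are assumptions): run it nightly from cron against the NFS mounts.
    #!/bin/sh
    # Nightly pull of an NFS-mounted Linux tree onto a local backup disk.
    # /Volumes/linuxbox and /Volumes/BackupDisk are assumed mount points.
    rsync -a --delete /Volumes/linuxbox/home/ /Volumes/BackupDisk/linuxbox-home/
    # crontab entry, e.g.: 0 2 * * * /usr/local/bin/nfs-backup.sh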

  • Can you get Time Machine to backup to a disk image?

I just bought an OWC external enclosure that holds two SATA drives. I upgraded my internal (main) HD to a 1TB drive, and then put my old internal 500GB (main) HD and my old internal 500GB (Time Machine) HD into the external enclosure.
    So I'm good, with an external Time Machine RAID array that I can take off-site.
    But I really wanted to encrypt the data, so that if someone broke into my car and stole my offsite backup, they would not have my data.
    I thought about creating a 1TB (well, 930GB) encrypted image in Disk Utility on the external drive, mounting it, and choosing that for Time Machine. When I leave, shutting down would unmount the image; then I turn off the drive, unplug it, and go.
    But Time Machine won't allow me to choose the image.
    I read about a Terminal hack for backing up to unsupported network volumes... I tried that just for kicks; it didn't work.
    Is there any way to get Time Machine to allow me to choose a mounted disk image as my backup volume?
    Thanks!

    Likewise, I'm interested in the same question, but for a different reason: backing up to a NAS. Since a disk image gives you a nice virtual HFS+ partition, it seems a lot less suicidal than trying to shoehorn a Time Machine backup into whatever the NAS's native filesystem happens to be.
    In theory it should be possible, since nested disk images (even mixing sparsebundle and plain old DMG) work just fine normally. And having your data in a nice monolithic DMG file would make it easier to copy and move the TM backups -- not to mention, you cap the max size and do encryption easily, as you described.
    But it's almost impossible to find this information on the Web -- Googling for this topic results in a bunch of noise about Time Machine's sparsebundle images. I managed to find one similar thread:
    http://forums.macosxhints.com/showthread.php?t=91297
    But that thread, like this thread, so far doesn't have any solution.
    I'm going to try to copy an existing TM backup to the disk image and see if that works. Will post again later, and hope that someone might have a solution someday.
Incidentally, if all you want is encryption, apparently you can get an encrypted TM backup by manually creating the sparsebundle file and turning on encryption (a sketch follows). See the parent thread to this comment:
    http://www.macosxhints.com/comment.php?mode=view&cid=98733
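A hedged sketch of creating such a bundle by hand (the size, volume name, and MachineName_MACaddress file name are assumptions; check Time Machine's exact naming convention for your OS version):
    # Create an encrypted sparse bundle; hdiutil prompts for a passphrase
    hdiutil create -size 930g -type SPARSEBUNDLE -fs HFS+J \
        -encryption AES-128 -volname "Time Machine Backups" \
        /Volumes/External/MyMac_001122334455.sparsebundle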
    Anyway, let's hope someone might have a time-machine-inside-a-disk-image solution someday...

  • Remote disk and Time Machine?

I have my external hard drive attached to my AirPort Extreme, and all my computers, Mac and Windows, play nice and can copy files to and from it. I have partitioned the hard drive to allow space for Time Machine to work, but Time Machine won't find it, or if it does, it refuses to use it as a disk. Does Time Machine work with remote disks?

Hey,
    I had the same issue with an external 250GB USB WD disk...
    I found the solution to my issue in the following manner:
    1) The disk/partition must be formatted Mac OS Extended.
    2) You should be able to use the disk/partition with TM while it is connected directly to the Mac (this is just to make sure TM works fine).
    3) Connect the disk to the AirPort Extreme via USB.
    4) Mount the disk on the desktop.
    5) Go to TM preferences and change the backup disk to the disk/partition you just mounted.
    6) Activate the TM "status info" in the menu bar.
    7) Turn TM "ON" (DO NOT ATTEMPT TO RUN TM, or you'll have to do the entire procedure from scratch).
    8) Close the TM preferences.
    9) Unmount the disk from your desktop (use the eject button on the shared devices tab in the Finder).
    10) Restart the Mac (you don't strictly need to do this; it is just to make sure that the disk is unmounted completely).
    11) Once the boot-up is completed, go to the TM status icon and choose "Back Up Now".
    You must make sure that the disk and/or partitions you want to use with TM are never mounted before or during the TM process.
    As long as the disk/partition is mounted, TM will not function.
    If you need to use the disk for purposes other than TM, wait until TM has done its first run... then you can mount and unmount freely. TM is very sensitive to this "mount" issue.
    Note that sometimes TM will fail to function... this is not a big problem...
    You just need to make sure the disk/partition is unmounted and then run the TM backup from the menu bar.
    Also note that your first backup will most likely take a very, very long time to complete. Just leave the Mac on and make sure it does not fall asleep.
    Try the above and then let me know whether it works...
    www.gekko-systems.com
