Online backup client eats CPU!

I have the Mozy online backup client installed on my Mac Mini, and the backup itself is scheduled to run daily at 23:00; automatic backup is switched off. However, the backup client process runs at all times and sometimes takes 50-60% of the CPU even when the scheduled daily backup is not being taken. This is not always momentary and can continue for minutes, not just a few seconds. As you may imagine, this has a serious effect on the performance of the machine and is very frustrating.
An additional problem is that it seems impossible to stop the process from Activity Monitor. It can be force-quit, but it restarts almost immediately, and repeating the force quit does not help.
I have referred this to Mozy support several times, but they appear unable to resolve the issue. This is unfortunate, as the backup itself is useful and works well. I am therefore wondering whether there is anything I can do within OS X to bring this process under control, for example by setting up a cron job to start and stop the backup client process.
Any thoughts and suggestions would be of interest and gratefully received!
Nigel

Thanks for that.
Unfortunately, it seems my idea was not the best one, as I have now received the following advice from Mozy support:
"I spoke with on of our senior support agents about the idea of quiting the Mozy client when not needed and he told me if you were to do that, Mozy wont know when files are changed and so wont back them up when you run a backup. Even when you aren't running a backup the Mozy client is watching all of the files on the machine and noting when a file needs to be backed up.
If you like, however, I can still provide you with the commands to effectively quit the Mozy backup daemon and launch it on demand. Alternatively, if the older Mozy client was better on resources and works for you, by all means use that version instead. To avoid having to re-select all of your files while changing the Mozy client version, when you select Uninstall from the MozyHome icon in the menubar, make sure the checkbox for Keep settings and log files is checked.
I know the Mozy Mac engineers are working towards reducing the resource footprint as much as possible, but Mozy is committed first to providing the most advanced and feature-rich products. As such there will be some performance degradation as subsequent versions are released and run on legacy hardware."
So switching the daemon off is clearly not a good idea, and the implication of the other comment is that Mozy don't care how much processor their software uses. Having researched elsewhere, it seems this problem was identified a couple of years ago, and Mozy don't seem to have made any serious effort to resolve it. From my perspective, I just want a reliable backup; the other bells and whistles are not interesting, and I would have thought that other users of a budget product would have similar requirements. However, Mozy seem content to leave us out in the cold. Fortunately, there are now other similar products, and a recent review in Macworld identified one called CrashPlan, which seems to offer similar (or slightly lower) prices to Mozy.
For the time being I have reinstated an old version of Mozy that did not give this trouble, but I will be looking seriously at CrashPlan, as reviews seem quite favourable.
Nigel

Similar Messages

  • Online Backup of supported Linux VM on Hyper-V 2012 R2 / SC DPM 2012 R2

    Hi,
    I'm trying to set up a lab environment:
    Win 2012 R2 with Hyper-V
    running 2 Linux Machines:
    Linux2 - CentOS 6.4 with manually installed Linux Integration services 3.4
    Linux3 - CentOS 6.4 without LIS (should be already included in CentOS)
    Another machine running Win 2012 R2 Server with SC DPM 2012 R2
    but both VMs show as "Offline" when trying to back them up via DPM. Tried local Windows Server Backup with the same result.
    I am able to backup the VMs "offline" (pausing the VM, taking snapshot, resume VM) but according to MS, SC DPM 2012 R2 should be able to do Online backups for supported Linux VMs (http://blogs.technet.com/b/virtualization/archive/2013/07/24/enabling-linux-support-on-windows-server-2012-r2-hyper-v.aspx)
    The only things in the EventLog are these:
    A storage device in 'Linux3' loaded but has a different version from the server.  Server version 6.0  Client version 4.2 (Virtual machine ID 4F5CDDD8-B855-41CF-83B2-772C1B99090D). The device will work, but this is an unsupported configuration.
    This means that technical support will not be provided until this problem is resolved. To fix this problem, upgrade the integration services. To upgrade, connect to the virtual machine and select Insert Integration Services Setup Disk from the Action menu.
    Any Ideas ?
    Thanks

    Hi,
    That list would need to come from the Windows Hyper-V group; they are responsible for adding the feature to the integration components for the various Linux OSes. DPM just backs up whatever the Hyper-V writer presents to us: if the guest supports online backup, we back it up online; if not, Hyper-V saves the guest before the VSS snapshot is taken and DPM takes the backup from the saved state.
    NEW NOTE ADDED 1-29-14: The Windows group just released "Linux Integration Services Version 3.5 for Hyper-V". The document mentions that some versions of Red Hat and CentOS are now supported for online backup.
    Live virtual machine backup support
    ======================
    RHEL/CentOS 6.0-6.3
    RHEL/CentOS 5.7-5.8
    RHEL/CentOS 5.5-5.6
    ADDTL NOTES: If there are open file handles during a live virtual machine backup operation, the backed-up virtual hard disks (VHDs) might have to undergo a file system consistency check (fsck) when restored.
    Live backup operations can fail silently if the virtual machine has an attached iSCSI device or a physical disk that is directly attached to a virtual machine (“pass-through disk”).
    Please remember to click "Mark as Answer" on the post that helps you, and to click "Unmark as Answer" if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.
    Regards, Mike J. [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights.

  • Online Backup Failed from DB13, brbackup runs on application server

    Hi Everyone,
    I am having a problem when scheduling an online backup from the DB13 transaction.
    The backup fails because the logical command BRBACKUP executes on our application server host rather than on the DB/CI server host.
    I have pasted the job logs below:
    Job started
    Step 001 started (program RSDBAJOB, variant &0000000000389, user ID BASIS)
    Execute logical command BRBACKUP On host xyzapp - (Application server)
    Parameters:-u / -jid ALLOG20110824210000 -c force -t online -m all -p initSEP.sap -a -c force -p initSEP.sap -sd
    BR0051I BRBACKUP 7.00 (40)
    BR0252E Function fopen() failed for '/oracle/client/10x_64/instantclient/dbs/initSEP.sap' at location BrInitSapRead-1
    BR0253E errno 2: No such file or directory
    BR0159E Error reading BR*Tools profile /oracle/client/10x_64/instantclient/dbs/initSEP.sap
    BR0280I BRBACKUP time stamp: 2011-09-02 21.01.09
    BR0301E SQL error -12545 at location BrDbConnect-2, SQL statement:
    'CONNECT /'
    ORA-12545: Connect failed because target host or object does not exist
    BR0310E Connect to database instance SEP failed
    BR0056I End of database backup: begroqyn.log 2011-09-02 21.01.09
    BR0280I BRBACKUP time stamp: 2011-09-02 21.01.09
    BR0054I BRBACKUP terminated with errors
    External program terminated with exit code 3
    BRBACKUP returned error status E
    Job finished
    Please help.
    Thanks,
    Ocean

    Hello
    The previous contributor is correct; the profile is usually read from ORACLE_HOME. I have never seen this error pointing to the Oracle client directory, so please double-check the following:
    Ensure that your environment variables ORACLE_HOME and DIR_LIBRARY are correctly set.
    BR*Tools are always installed on the DB server and can be called from any application server. If you are scheduling from DB13 on the application server, what is the connection setup? RSH, RFC, or a standalone gateway?
    If using a standalone gateway, it must be installed on the DB server; please refer to SAP Notes 446172 and 1025707.
    Best Regards
    Rachel

  • Verizon Online Backup and Sharing using FTP

    Can I access my Verizon Online Backup and Sharing 250MB with an FTP client instead of installing the backup software?

    No FTP; access is only through the software.

  • Whole online backup from DB13 failed due to processing error

    Hello!
    I am having difficulty executing a whole database online backup from DB13 via FTP to a remote target. The backup runs very slowly and breaks off after a while.
    The BRBACKUP action log looks as follows:
    backup_mode ALL
    backup_type online
    backup_dev_type stage
    stage_root_dir /sap/DEVB
    compress no
    stage_copy_cmd ftp
    remote_host 192.168.200.3
    remote_user sapbackup
    #FILE..... E:\ORACLE\DEV\SAPDATA2\SR3_10\SR3.DATA10
    #SAVED.... /sap/DEVB/bdwkuudu/SR3.DATA10 #1/6
    BR0280I BRBACKUP time stamp: 2007-10-23 01.44.35
    BR0063I 6 of 41 files processed - 12000.047 MB of 93092.766 MB done
    BR0204I Percentage done: 12.89%, estimated end time: 23:06
    BR0001I ******____________________________________________
    BR0202I Saving E:\ORACLE\DEV\SAPDATA2\SR3_6\SR3.DATA6
    BR0203I to /sap/DEVB/bdwkuudu/SR3.DATA6 ...
    #FILE..... E:\ORACLE\DEV\SAPDATA2\SR3_6\SR3.DATA6
    #SAVED.... /sap/DEVB/bdwkuudu/SR3.DATA6 #1/7
    BR0280I BRBACKUP time stamp: 2007-10-23 02.38.05
    BR0063I 7 of 41 files processed - 14000.055 MB of 93092.766 MB done
    BR0204I Percentage done: 15.04%, estimated end time: 21:36
    BR0001I ********__________________________________________
    BR0202I Saving E:\ORACLE\DEV\SAPDATA2\SR3_7\SR3.DATA7
    BR0203I to /sap/DEVB/bdwkuudu/SR3.DATA7 ...
    BR0278E Command output of 'F:\usr\sap\DEV\SYS\exe\uc\NTAMD64\sapftp.exe -v -n -i 192.168.200.3 -u H:\oracle\DEV\sapbackup\.bdwkuudu.ftp -b -c put E:\ORACLE\DEV\SAPDATA2\SR3_7\SR3.DATA7 /sap/DEVB/bdwkuudu/SR3.DATA7':
    Connected to 192.168.200.3 Port 21.
    220-FTP server ready.
    220 This is a private system - No anonymous login
    331 User sapbackup OK. Password required
    230-User sapbackup has group access to: administrator
    230-This server supports FXP transfers
    230-OK. Current restricted directory is /
    230-************************************************
    230-* Use SITE command to change client codepage: *
    230-* ie, site codepage [client codepage] *
    230 ************************************************
    200 TYPE is now 8-bit binary
    200 PORT command successful
    150 Connecting to port 4977
    NiWrite error: -6, bytes to send: 32767 bytes written: 0
    BR0280I BRBACKUP time stamp: 2007-10-23 03.08.37
    BR0279E Return code from 'F:\usr\sap\DEV\SYS\exe\uc\NTAMD64\sapftp.exe -v -n -i 192.168.200.3 -u H:\oracle\DEV\sapbackup\.bdwkuudu.ftp -b -c put E:\ORACLE\DEV\SAPDATA2\SR3_7\SR3.DATA7 /sap/DEVB/bdwkuudu/SR3.DATA7': 1
    BR0222E Copying E:\ORACLE\DEV\SAPDATA2\SR3_7\SR3.DATA7 to/from /sap/DEVB/bdwkuudu/SR3.DATA7 failed due to previous errors
    BR0280I BRBACKUP time stamp: 2007-10-23 03.08.43
    BR0317I 'Alter tablespace PSAPSR3 end backup' successful
    BR0056I End of database backup: bdwkuudu.ans 2007-10-23 03.08.37
    BR0280I BRBACKUP time stamp: 2007-10-23 03.08.43
    BR0054I BRBACKUP terminated with errors
    Any helpful information will be appreciated.
    regards!
    Thom

    This post is duplicated at Re: Backup to remote stage failed due to RFC error.
    Please only post your question once, and only in one forum.
    Thanks.

  • Online Backup Failed (status shows Scheduling failed)

    Dear all,
    OS: AIX, DB: Oracle, SAP ECC 6.0, Tivoli Storage Manager
    Scheduled online backups are failing (the backups are triggered through DB13).
    Job status: Scheduling failed
    Job Log:
    02.12.2009     17:00:01     Job started
    02.12.2009     17:00:01     Step 001 started (program RSDBAJOB, variant &0000000000657, user ID BASIS)
    02.12.2009     17:00:01     No application server found on database host - rsh/gateway will be used
    02.12.2009     17:00:01     Execute logical command BRBACKUP On host PRODORADB
    02.12.2009     17:00:01     Parameters:-u / -jid ALLOG20091201170000 -c force -t online -m all -p initIRPoffsite.sap -a -c force -p initIRP
    02.12.2009     17:00:01     offsite.sap -cds
    02.12.2009     17:01:01     SXPG_STEP_XPG_START: is_local_host: rc = 403
    02.12.2009     17:01:01     SXPG_STEP_XPG_START: host = PRODORADB
    02.12.2009     17:01:01     SXPG_STEP_XPG_START: is_local_r3_host: rc = 802
    02.12.2009     17:01:01     SXPG_STEP_XPG_START: RFC_TCPIP_CONNECTION_OPEN: rc = 1003
    02.12.2009     17:01:01     SXPG_STEP_COMMAND_START: SXPG_STEP_XPG_START returned: 1.003
    02.12.2009     17:01:01     SXPG_COMMAND_EXECUTE(LONG)
    02.12.2009     17:01:01     <timestamp> = 20091202170101
    02.12.2009     17:01:01     COMMANDNAME = BRBACKUP
    02.12.2009     17:01:01     ADDITIONAL_PARAMETERS =
    02.12.2009     17:01:01     -u / -jid ALLOG20091201170000 -c force -t online -
    02.12.2009     17:01:01     m all -p initIRPoffsite.sap -a -c force -p initIRP
    02.12.2009     17:01:01     offsite.sap -cds
    02.12.2009     17:01:01     LONG_PARAMS
    02.12.2009     17:01:01     OPERATINGSYSTEM = ANYOS
    02.12.2009     17:01:01     TARGETSYSTEM = PRODORADB
    02.12.2009     17:01:01     DESTINATION
    02.12.2009     17:01:01     SY-SUBRC = 1003
    02.12.2009     17:01:01     SXPG_COMMAND_EXECUTE failed for BRBACKUP - Reason: program_start_error: For More Information, See SYS
    02.12.2009     17:01:01     Job cancelled after system exception ERROR_MESSAGE
    Note:
    When I execute the backup immediately or run it from the command line, it finishes fine, but when I schedule it, the job is cancelled (the status shows Scheduling failed).
    Please give me your suggestions.
    sathessh

    Dear Yoganand.V,
    Thanks for your immediate response. I am still facing this issue.
    The backups are triggered from DB13 and scheduled from the central instance, not the application server, and the auto script is running at OS level. Please see the finished backup log below for your reference.
    08.12.2009     17:50:18     Job started
    08.12.2009     17:50:18     Step 001 started (program RSDBAJOB, variant &0000000000717, user ID S2K_BASIS)
    08.12.2009     17:50:18     No application server found on database host - rsh/gateway will be used
    08.12.2009     17:50:18     Execute logical command BRBACKUP On host PRODORADB
    08.12.2009     17:50:18     Parameters:-u / -jid ALLOG20091208175017 -c force -t online -m all -p initIRPdaily.sap -a -c force -p initIRPda
    08.12.2009     17:50:18     ily.sap -cds
    08.12.2009     18:29:23     BR0051I BRBACKUP 7.00 (34)
    08.12.2009     18:29:23     BR0055I Start of database backup: becbwhwk.anf 2009-12-08 17.50.18
    08.12.2009     18:29:23     BR0484I BRBACKUP log file: /oracle/IRP/sapbackup/becbwhwk.anf
    08.12.2009     18:29:23     BR0477I Oracle pfile /oracle/IRP/102_64/dbs/initIRP.ora created from spfile /oracle/IRP/102_64/dbs/spfileIRP.ora
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     BR0280I BRBACKUP time stamp: 2009-12-08 17.50.19
    08.12.2009     18:29:23     BR0319I Control file copy created: /oracle/IRP/sapbackup/cntrlIRP.dbf 15122432
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     BR0280I BRBACKUP time stamp: 2009-12-08 17.50.20
    08.12.2009     18:29:23     BR0057I Backup of database: IRP
    08.12.2009     18:29:23     BR0058I BRBACKUP action ID: becbwhwk
    08.12.2009     18:29:23     BR0059I BRBACKUP function ID: anf
    08.12.2009     18:29:23     BR0110I Backup mode: ALL
    08.12.2009     18:29:23     BR0077I Database file for backup: /oracle/IRP/sapbackup/cntrlIRP.dbf
    08.12.2009     18:29:23     BR0061I 42 files found for backup, total size 141794.742 MB
    08.12.2009     18:29:23     BR0143I Backup type: online
    08.12.2009     18:29:23     BR0130I Backup device type: util_file_online
    08.12.2009     18:29:23     BR0109I Files will be saved by backup utility
    08.12.2009     18:29:23     BR0142I Files will be switched to backup status during the backup
    08.12.2009     18:29:23     BR0289I BRARCHIVE will be started at the end of processing
    08.12.2009     18:29:23     BR0134I Unattended mode with 'force' active - no operator confirmation allowed
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     BR0280I BRBACKUP time stamp: 2009-12-08 17.50.20
    08.12.2009     18:29:23     BR0229I Calling backup utility with function 'backup'...
    08.12.2009     18:29:23     BR0278I Command output of '/usr/sap/IRP/SYS/exe/run/backint -u IRP -f backup -i /oracle/IRP/sapbackup/.becbwhwk.lst -t file_onli
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     Data Protection for SAP(R)
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     Interface between BR*Tools and Tivoli Storage Manager
    08.12.2009     18:29:23     - Version 5, Release 5, Modification0.0  for AIX LF 64-bit -
    08.12.2009     18:29:23     Build: 316B  compiled on Oct 23 2007
    08.12.2009     18:29:23     (c) Copyright IBM Corporation, 1996, 2007, All Rights Reserved.
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     BKI2027I: Using TSM-API version 5.5.1.0 (compiledwith 5.3.0.0).
    08.12.2009     18:29:23     BKI2000I: Successfully connected to ProLE on porttdpr3ora64.
    08.12.2009     18:29:23     BKI0005I: Start of program at: Tue Dec  8 17:50:20 IST 2009 .
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     -- Parameters --
    08.12.2009     18:29:23     Input File            : /oracle/IRP/sapbackup/.becbwhwk.lst
    08.12.2009     18:29:23     Profile               : /oracle/IRP/102_64/dbs/initIRPdaily.utl
    08.12.2009     18:29:23     Configfile            : /oracle/IRP/102_64/dbs/initIRP.bki
    08.12.2009     18:29:23     Manual sorting file   : disabled
    08.12.2009     18:29:23     Tracefile             : disabled
    08.12.2009     18:29:23     Traceflags            : disabled
    08.12.2009     18:29:23     Parallel sessions     : 1
    08.12.2009     18:29:23     Multiplexed files     : 1
    08.12.2009     18:29:23     RL compression        : 0
    08.12.2009     18:29:23     Exit on error         : disabled
    08.12.2009     18:29:23     BATCH                 : enabled
    08.12.2009     18:29:23     Buffer size           : 131072
    08.12.2009     18:29:23     Buffer Copy Mode      : SIMPLE
    08.12.2009     18:29:23     Redologcopies         : disabled
    08.12.2009     18:29:23     Versioning            : enabled
    08.12.2009     18:29:23     Current Version      : 581
    08.12.2009     18:29:23     Versions to keep     : 6
    08.12.2009     18:29:23     Delete Versions      : <= 575
    08.12.2009     18:29:23     Backup Type           : file_online
    08.12.2009     18:29:23     TSM log server        : disabled
    08.12.2009     18:29:23     TSM server            : [email protected] with 2 sessions configured, using 1 session
    08.12.2009     18:29:23     TSM client node      : PRODORADB_TDP
    08.12.2009     18:29:23     Days for backup      : Sun Mon Tue Wed Thu Fri Sat
    08.12.2009     18:29:23     Backup mgmt class    : DAILYMGMTCLASS
    08.12.2009     18:29:23     Archiv mgmt class    : DAILYMGMTCLASS
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     BR0280I BRBACKUP time stamp: 2009-12-08 18.24.44
    08.12.2009     18:29:23     BR0232I 8 of 8 files saved by backup utility
    08.12.2009     18:29:23     BR0230I Backup utility called successfully
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     BR0056I End of database backup: becbwhwk.anf 2009-12-08 18.24.44
    08.12.2009     18:29:23     BR0280I BRBACKUP time stamp: 2009-12-08 18.24.44
    08.12.2009     18:29:23     BR0052I BRBACKUP completed successfully
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     BR0280I BRBACKUP time stamp: 2009-12-08 18.24.44
    08.12.2009     18:29:23     BR0291I BRARCHIVE will be started with options '-U -jid ALLOG20091208175017 -d util_file -c force -p initIRPdaily.sap -cds'
    08.12.2009     18:29:23     
    08.12.2009     18:29:23     BR0280I BRBACKUP time stamp: 2009-12-08 18.29.23
    08.12.2009     18:29:23     BR0292I Execution of BRARCHIVE finished with return code 0
    08.12.2009     18:29:23     Job finished
    regards

  • Online Backup from Tape failed

    Hi All,
    We are trying to restore an online backup from tape; the backup device is the NetBackup utility.
    Please find the log below:
    BR0280I BRRESTORE time stamp: 2011-01-07 16.13.44
    BR0229I Calling backup utility with function 'restore'...
    BR0278I Command output of '/usr/sap/QC4/SYS/exe/run/backint -u PC4 -f restore -i /oracle/QC4/sapbackup/.reeypwkq.lst -t file -p /oracle/QC4/102_64/dbs/initQC
    4.utl':
    Program:                /usr/sap/QC4/SYS/exe/run/backint 5.0GA
    Input File:             /oracle/QC4/sapbackup/.reeypwkq.lst
    Profile:                /oracle/QC4/102_64/dbs/initQC4.utl
    Function:               RESTORE
    BR0386E File '/oracle/PC4/sapdata1/pc4_1/pc4.data1' reported as not found by backup utility
    BR0386E File '/oracle/PC4/sapdata2/pc4_103/pc4.data103' reported as not found by backup utility
    BR0386E File '/oracle/PC4/sapdata1/pc4_64/pc4.data64' reported as not found by backup utility
    BR0386E File '/oracle/PC4/sapdata2/pc4_116/pc4.data116' reported as not found by backup utility
    BR0386E File '/oracle/PC4/sapbackup/cntrlPC4.dbf' reported as not found by backup utility
    BR0280I BRRESTORE time stamp: 2011-01-07 16.13.47
    BR0279E Return code from '/usr/sap/QC4/SYS/exe/run/backint -u PC4 -f restore -i /oracle/QC4/sapbackup/.reeypwkq.lst -t file -p /oracle/QC4/102_64/dbs/initQC4
    .utl': 2
    BR0374E 0 of 161 files restored by backup utility
    BR0280I BRRESTORE time stamp: 2011-01-07 16.13.47
    BR0231E Backup utility call failed
    BR0406I End of file restore: reeypwkq.rsb 2011-01-07 16.13.47
    BR0280I BRRESTORE time stamp: 2011-01-07 16.13.47
    BR0404I BRRESTORE terminated with errors
    Please help us out in resolving this error.
    Regards
    John N

    >
    John Namala wrote:
    > BR0278I Command output of '/usr/sap/QC4/SYS/exe/run/backint -u PC4 -f restore -i /oracle/QC4/sapbackup/.reeypwkq.lst -t file -p /oracle/QC4/102_64/dbs/initQC
    Hi,
    You are restoring the PC4 database on system QC4.
    This requires a NetBackup permission configuration to allow client QC4 to access the backup from system PC4.
    Volker

  • Online Backups

    Hi,
    I am working as a DBA and I am not familiar with online backups. I now want to implement them at one of my client sites, so please can anybody help me with this? Time is short.
    Please, anybody, help me.
    regards,
    srinivasr

    Hi 581156,
    You can use RMAN to do your online backups. Please can you tell us your RDBMS version and OS, and whether the DB is in ARCHIVELOG mode or not? I need this to send you the right links to help you with this process.
    Cheers,
    Francisco Munoz Alvarez

  • How does recovery work after an online backup

    Hello,
    While trying to conceptually understand how backup and recovery work, I came across a question concerning hot (online) backup.
    This is a conceptual question (I am trying to understand how things work), it is not a "how should I proceed/ what should I do step by step" question.
    As far as I understand, an online backup of a tablespace can be performed by copying the OS files making up a tablespace while the database is up and being used (i.e. transactions are modifying data in the database). Before the copying of the OS files starts, the Oracle RDBMS must be notified that an online backup is being taken via "ALTER...BEGIN BACKUP" (such that some additional information is written to the Redo Log, which may be required for subsequent recovery using the online backup). During recovery the Oracle RDBMS uses the copies of the OS files together with the online and archived redo logs in order to reconstruct all committed transactions and it further uses the UNDO tablespace to roll back open (uncommitted) transactions.
    Thinking about this, it seems to me, that in order for this to work in all possible scenarios the undo information from the time the backup was taken may be required. Therefore backup of the UNDO tablespace should be taken as well (see the explanation for this assumption below). However browsing the internet (including the Oracle online documentation) I did not find any statements concerning the backup of the UNDO tablespace when an online backup is taken. Moreover I couldn't figure out when exactly such a backup of the UNDO tablespace must be done, to ensure that the database can be recovered in all scenarios.
    I believe that undo information from the time the hot backup was taken may be required e.g. in the following scenario:
    Assume we are taking a hot backup of a given tablespace, i.e. we are copying all OS files that make up this tablespace, while the database is potentially being used. Let D1 be one of the datafiles in our tablespace and let transaction T1 modify datafile D1. Let transaction T1 further be uncommitted while the copy of datafile D1 is being made and let (at least some of) the changes from T1 be included in the backup copy D1' of D1 (because DBWR has already written the modified blocks at the time they were being copied to the backup). Let transaction T1 be rolled back after the copy is completed. D1' will thus contain modifications from T1, while D1 will not.
    Now some time later the datafile D1 is lost. When recovering D1 from the copy D1', the (archived) redo logs will be applied to D1'. Before that, transaction T1 should be rolled back in the copy D1', because modifications from T1 must not appear in the recovered version of the database.
    I do however not understand, where the information to rollback transaction T1 exactly comes from. It may still be in the current UNDO tablespace. I do however assume that rollback information is not kept in the UNDO tablespace forever. I see three possible answers to this
    (a) There are some requirements which I missed so far to backup the UNDO tablespace whenever a hot backup is made.
    (b) Since the Oracle "RDBMS" has to be notified that an online backup is being done, it might store all relevant undo information (e.g. write it to the redo log) when the tablespace is put in backup mode.
    (c) There are situations when recovery is not possible due to "missing old UNDO information".
    Answer (b) seems the most plausible to me. I did however not find any confirmation of this and if (b) really is the answer, I would be interested to understand what information is stored where by the Oracle RDBMS and how it is used for recovery.
    To summarize I have the following questions:
    (I) Is there any requirement to backup the UNDO tablespace together with an online backup of a tablespace, and if so, where is this stated in the Oracle documentation?
    (II) What mechanisms ensure that uncommitted transactions can be cleared from the online copy of a tablespace (potentially a long time after the copy was taken)?
    (III) Do you know any links (Oracle documentation or other online resources) explaining these details?
    Thank you for any hints and answers
    kind regards
    Martin

    It's a highly technical question and I could be completely wrong given my limited knowledge, but I will attempt to answer anyway. I hope I say something sensible, so bear with me.
    > As far as I understand, an online backup of a tablespace can be performed by copying the OS files making up a tablespace while the database is up and being used (i.e. transactions are modifying data in the database).
    Correct. But it would depend on the tool you are going to use to do so. Using OS-level commands like cp you would have to manually copy the files to the backup location; using RMAN it is a lot easier, and RMAN takes care of everything.
    > Before the copying of the OS files starts, the Oracle RDBMS must be notified that an online backup is being taken via "ALTER...BEGIN BACKUP" (such that some additional information is written to the Redo Log, which may be required for subsequent recovery using the online backup).
    Again, this is a requirement only in the case of a user-managed backup. In that case, because of the fractured block issue, it is important that the corresponding older information/image of the buffer is also copied into the redo stream, and that is done when the BEGIN BACKUP command is used. Using RMAN this is not needed, as RMAN can read the consistent image, which it stores in the backup piece, exactly in the same way a select request is fulfilled by Oracle for a dirty buffer which is yet to be made consistent.
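    To make the user-managed case concrete, here is a minimal SQL*Plus sketch of an online backup of a single tablespace; the tablespace name and the OS-level copy step are illustrative and not taken from the original question:
    ALTER TABLESPACE users BEGIN BACKUP;
    -- copy the datafiles of tablespace USERS at the OS level while the database stays open
    ALTER TABLESPACE users END BACKUP;
    -- archive the current redo log so the redo generated during the copy is on disk
    ALTER SYSTEM ARCHIVE LOG CURRENT;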
    > During recovery the Oracle RDBMS uses the copies of the OS files together with the online and archived redo logs in order to reconstruct all committed transactions and it further uses the UNDO tablespace to roll back open (uncommitted) transactions.
    Correct!
    > Thinking about this, it seems to me, that in order for this to work in all possible scenarios the undo information from the time the backup was taken may be required. Therefore backup of the UNDO tablespace should be taken as well (see the explanation for this assumption below). However browsing the internet (including the Oracle online documentation) I did not find any statements concerning the backup of the UNDO tablespace when an online backup is taken. Moreover I couldn't figure out when exactly such a backup of the UNDO tablespace must be done, to ensure that the database can be recovered in all scenarios.
    The reason it is not a must to do so is that, while a transaction is still active, there is no way Oracle will overwrite its undo information; even if you come back after 100 years, it will be there. The undo segment marks as active those undo blocks which contain the information of a transaction whose status in the transaction table of that undo segment is still marked as active. So it is there all the time in the undo tablespace. Now, for a moment, let's assume the undo is not there (it would be, but let's assume): the changes made to the undo segment's blocks are also recorded in the redo, as they are just changes happening to a segment like any other (EMP, DEPT), except that they are made not by you but by Oracle. Using that information, if those changes ever need to be replayed, the necessary information can be brought back from the redo blocks stored in the redo/archive logs. Yes, if there are pending transactions that require undo information to be rolled back and you have lost the undo tablespace with no backup of it, you won't be able to bring back the database, as it would be inconsistent and Oracle would not let you open it. In that case you may need to resort to hacks to get it up, and that is a really tricky situation.
    > (I) Is there any requirement to backup the UNDO tablespace together with an online backup of a tablespace, and if so, where is this stated in the Oracle documentation?
    As I said above, a backup of it must be there if you are anticipating loss of the undo tablespace. If you have lost it, you will need a backup and all the archive logs and redo logs to recover it and bring it back to the point where the current database is. The rest Oracle takes care of, as it reapplies the redo contents of the undo segments over the undo segments and makes them consistent.
    > (II) What mechanisms ensure that uncommitted transactions can be cleared from the online copy of a tablespace (potentially a long time after the copy was taken)?
    As I said, a pending transaction's undo is never overwritten by Oracle. It is always kept and marked as active undo. Only the end of the transaction makes it eligible to be overwritten, and even that does not happen immediately (undo_retention kicks in).
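    To see the distinction between active and reusable undo described here, a small query against DBA_UNDO_EXTENTS (a standard dictionary view; this is only an illustrative check, not a step from the thread) shows how undo extents age:
    -- extents backing open transactions stay ACTIVE and are never reused;
    -- committed undo moves from UNEXPIRED to EXPIRED as undo_retention lapses
    SELECT status, COUNT(*) AS extents, ROUND(SUM(bytes)/1024/1024) AS mb
    FROM   dba_undo_extents
    GROUP  BY status;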
    > (III) Do you know any links (Oracle documentation or other online resources) explaining these details?
    I have to see whether it is documented step by step somewhere, and I shall update this reply once I find the link. Hopefully someone else finds it in the meantime.
    HTH
    Aman....

  • Open resetlogs is not working when creating a clone DB with online backup

    Hi All,
    I am trying to create a clone database using a hot backup of an existing database.
    STEPS THAT I FOLLOWED
    Let the current DB name = DEV and the clone database name = DEVCLONE.
    Steps performed on the DEV DB:
    - put the database in backup mode using 'alter database begin backup'
    - copy all the data files to a different folder
    - during the copy I performed some operations on the DB (creating users, tables, DMLs etc.)
    - in between copying I also performed a log switch
    - after completion of the copy, "alter database end backup"
    - created a backup control file in a human readable format (alter database backup controlfile to trace as ........)
    Steps performed on the clone DB side (DEVCLONE):
    - created a parameter file for the database .
    - modified the backup control file so that it will point to the location of copied destination of datafiles
    - set the ORACLE_SID
    - then 'sqlplus / as sysdba'
    - startup nomount
    - run the modified control file ( created a control file for the clone database)
    - recover the database using "recover database using backup controlfile"
    I provided the archive files it was asking for (the archive logs that had been generated in the DEV DB),
    then I cancelled the recovery by typing "cancel".
    - recover database using backup controlfile until cancel;
    then typed "cancel"
    - then tried to open the database with open resetlogs, but it showed the error below:
    alter database open resetlogs
    ERROR at line 1:
    ORA-01195: online backup of file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\DATA_GUARD\DEVHOT\SYSTEM01.DBF'
    Please help me on this.
    Thanks

    Thanks, now I am able to open the DB with open resetlogs.
    Previously, when I had not taken the archive log generated after "alter database end backup", I was not able to open the DB with open resetlogs because the fuzzy status of all the datafile headers was YES.
    After taking the archive log that was generated after "alter database end backup" and applying it on the clone DB (created with the hot backup), the datafile header status changed from YES to NO.
    That is why I am now able to open the clone DB with open resetlogs.
    Can you please help me with a short description of why this happens?
    Thanks.......
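    For reference, the fuzzy status discussed above can be checked directly on the clone with a simple query; V$DATAFILE_HEADER is a standard view and this is just an illustrative check, not a step from the original post:
    -- every file must show FUZZY = NO before OPEN RESETLOGS will succeed
    SELECT file#, fuzzy, checkpoint_change#
    FROM   v$datafile_header
    ORDER  BY file#;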

  • Database open fails after online backup recovery

    Hi Friends
    We are trying to set up an additional server using an online backup of our DEV server, following SAP Note 549828. Having restored the online backup, the database open failed.
    To resolve this, in accordance with SAP Note 549828, we successfully created a backup control file using the command
    create controlfile reuse set database DEV resetlogs noarchivelog
    However, on issuing the command
    RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL
    we run into the following error:
    ORA-00279: change 794638222 generated at 10/25/2007 12:43:20 needed for thread 1
    ORA-00289: suggestion : /oracle/DEV/oraarch/DEVarch1_9766.dbf
    ORA-00280: change 794638222 for thread 1 is in sequence #9766
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00308: cannot open archived log '/oracle/DEV/oraarch/DEVarch1_9766.dbf'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/DEV/sapdata1/system_1/system.data1'
    We even manually copied the file system.data1 from the source to the target server but to no avail.
    Also the SQL command
    SELECT FILE#, CHANGE# FROM V$RECOVER_FILE
    displays a different CHANGE# for system.data1, while it shows the same number for all the other datafiles.
    Please advise at the earliest, as we are stuck. Points await their master.
    Regards
    Lokesh Gupta

    Some inputs in addition to Eric's comments.
    The problem is that you don't have the archives (i.e. offline redo log files) in the correct location.
    In /oracle/DEV/oraarch/DEVarch1_9766.dbf, DEVarch1_9766 is the archive file which is missing from the location /oracle/DEV/oraarch. To recover the DB you need the archives generated during the hot backup.
    Generally these steps will give you the desired result:
    select * from v$logfile;
    We normally switch the log files as many times as there are log groups:
    alter system switch logfile;
    Create a backup directory to hold the hot backup datafiles and archives.
    When the backup is complete check the backup location to see if all the files are available. We could now either FTP the same to the other system or copy over these files to another location in case of cloning on the same system.
    Copy over all the files to their respective filesystems and directories and then edit the file that was created using the backup controlfile to trace. Copy that file to the remote system and edit it accordingly.
    check that all the files are in the right location and edit that information in the control file
    Once the controlfile is run successfully and you get the statement processed, we can start applying the archive logs that we have moved to the archive log destination directory as per the init<sid>.ora file.
    do a recover of the database to its consistent state
    recover database using backup controlfile until cancel;
    The create controlfile command only changes the structure of the database and the SID name; the headers of the datafiles still hold all the required information. The above command will ask you to input the archive log file names one by one to do the recovery, or you can choose the AUTO option. Once the recovery process is complete, open the database with the resetlogs option.
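    As a rough summary of the sequence described above, the clone-side commands might look like the following SQL*Plus sketch (the script name create_controlfile.sql is an example, not from the thread):
    STARTUP NOMOUNT
    @create_controlfile.sql
    -- run the edited "backup controlfile to trace" script to create the control file
    RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    -- supply the archive logs generated during the hot backup, or choose AUTO, then CANCEL
    ALTER DATABASE OPEN RESETLOGS;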
    Regards
    Vinod

  • Swing application eats CPU on resume after standby

    My application contains a mix of regular Swing components and about 200 (small) custom components (JPanels) that draw themselves. The problem is that if I put my laptop into Standby, then restart it, the application eats CPU. It goes from ~4% pre-standby to ~48% after standby. Totally repeatable. There is only the main thread, the AWT thread and a couple of timers firing: one to get data from a server and one to redraw the custom controls to make the control blink (calls repaint() to queue up a call to paintComponent()). I put timing code in the two timer-driven threads and both are running fine. NOTE: each of the custom components is just a rectangle with a short text string in it. Think checkerboard with text. The background color is all that is being changed. It's basically a grid of alarm/status "lights".
    I did some searching and only found a reference to a drawing bug that was fixed back in 1.4.x that had to do with WinXP and dual-headed computers.
    Note that another simple Swing application I wrote (no custom controls), doesn't appear to behave this way, so I suspect it has to do with the custom drawing stuff (rolling my own paintComponent() method).
    Any thoughts on how to figure out what is eating the CPU? I'm using NetBeans 6.7 and JDK 1.6_14 under Windows XP.

    It doesn't matter if it's a Swing timer or a Util timer. It behaves the same either way. That was one of the first things I tried.
    I ended up refactoring the code to go from a grid of individual controls (think checker board where each square is a control that draws itself), to a single "board" control that draws a grid. That seems to have fixed the issue. I can go in/out of Standby and the app doesn't eat a ton of CPU any more. The real problem may still be there, just too small of an effect to notice.
    I don't know why the original design would work fine before a Standby, but not after. Having ~200 little controls doing repaints twice a second must confuse something in the JVM/Windows versus having 4 large controls doing the same amount of drawing.
    It's working, so time to move on.

  • Getting error while running script for online backup

    Hi,
    I am running a script for an online backup but ended up with the error below.
    *ERROR* [Backup Worker Thread] com.day.crx.core.backup.Backup Failed to create temporary directory
    Please help out in resolving this.
    Thanks in advance.
    Maheswar

    Hi mahesh,
    If you are using the backup feature from the CRX console (http://localhost:4502/crx/config/backup.jsp), I can say that we also had some problems with this functionality.
    First of all, what you need to check are the permissions, because when you look at the source code there is a line which creates a File object using the path you specify for the repository backup:
    File targetDir = new File(req.getParameter("targetDir", listDir.getParentFile().getAbsolutePath()));
    You need to make sure that proper read/write access has been granted for this path.
    Another point is that a hotfix may already be available if you are using CQ 5.4. Please refer to the following link:
    http://dev.day.com/content/kb/home/Crx/CrxSystemAdministration/CRXOnlineBackup.html
    and also to this one:
    http://dev.day.com/content/docs/en/crx/current/release_notes/overview.html, which mentions hotfix #34797, applied to the backup.jsp file.
    Regards,
    kasq

  • Problem in new database creation with the help of online  backup

    Dear DBAs,
    I am using an Oracle 11gR2 database on Windows Server 2003. The database is running in ARCHIVELOG mode.
    I have taken an online backup of all datafiles, the control file and the spfile. Then I created folders in all the locations required for the new database.
    Then I registered the service of the new database, named 'newdb', with
    oradim -NEW -SID newdb
    Then I created a password file manually in the 'oracle_home\database' location.
    I created a new control file named controlfile_01.ctl. The content is as follows:
    STARTUP NOMOUNT
    CREATE CONTROLFILE SET DATABASE "NEWDB" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
    LOGFILE
    GROUP 1 (
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\ONLINELOG\O1_MF_1_7FK0XG7B_.LOG',
    'D:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\NEWDB\ONLINELOG\O1_MF_1_7FK0XHWB_.LOG'
    ) SIZE 50M,
    GROUP 2 (
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\ONLINELOG\O1_MF_2_7FK0XKB8_.LOG',
    'D:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\NEWDB\ONLINELOG\O1_MF_2_7FK0XM0Z_.LOG'
    ) SIZE 50M,
    GROUP 3 (
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\ONLINELOG\O1_MF_3_7FK0XNOZ_.LOG',
    'D:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\NEWDB\ONLINELOG\O1_MF_3_7FK0XOWB_.LOG'
    ) SIZE 50M
    DATAFILE
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\DATAFILE\O1_MF_SYSTEM_7FK0SKN0_.DBF',
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\DATAFILE\O1_MF_SYSAUX_7FK0SKPG_.DBF',
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\DATAFILE\O1_MF_UNDOTBS1_7FK0SKTC_.DBF',
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\DATAFILE\O1_MF_USERS_7FK0SKWB_.DBF',
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\DATAFILE\O1_MF_EXAMPLE_7FK0Z5LK_.DBF',
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\MARSH.DBF',
    'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\JOMARSH.DBF'
    CHARACTER SET AL32UTF8
    The control file path was registered in the pfile as well.
    Then I brought the database to the nomount stage.
    The problem is that when I try to mount the database it shows the following error. Can anyone help me overcome this issue?
    SQL> startup pfile='D:\app\Administrator\product\11.1.0\db_1\database\INITnewdb.ora' nomount;
    ORACLE instance started.
    Total System Global Area 535662592 bytes
    Fixed Size 1334380 bytes
    Variable Size 301990804 bytes
    Database Buffers 226492416 bytes
    Redo Buffers 5844992 bytes
    SQL> ALTER DATABASE MOUNT;
    ALTER DATABASE MOUNT
    ERROR at line 1:
    ORA-00205: error in identifying control file, check alert log for more info
    The alert log message is:
    ORA-00210: cannot open the specified control file
    ORA-00202: control file: 'D:\APP\ADMINISTRATOR\ORADATA\NEWDB\CONTROLFILE\CONTROLFILE_01.CTL'
    ORA-27048: skgfifi: file header information is invalid
    OSD-04001: invalid logical block size (OS 1413563730)
    Fri Dec 09 13:11:59 2011
    Checker run found 1 new persistent data failures
    ORA-205 signalled during: ALTER DATABASE MOUNT...
    Thanks & Regards,
    John Marshal.A

    Hi;
    Error: ORA-205
    Text: error in identifying control file <name>
    Cause: The system could not find a control file of the specified name and size.
    Action: Either:
    Check that the proper control filename is referenced in the CONTROL_FILES initialization parameter in the initialization parameter file and try again.
    When using mirrored control files, that is, more than one control file is referenced in the initialization parameter file, remove the control filename listed in the message from the initialization parameter file and restart the instance. If the message does not recur, remove the problem control file from the initialization parameter file and create another copy of the control file with a new filename in the initialization parameter file.
    Regards,
    Helios
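    Given the ORA-27048 above, one quick check in SQL*Plus (a sketch only; the pfile and control file paths are the ones quoted in the post) is to confirm what CONTROL_FILES points to and make sure the file at that location is a binary control file written by Oracle, not a copied file or the text of the creation script:
    STARTUP NOMOUNT PFILE='D:\app\Administrator\product\11.1.0\db_1\database\INITnewdb.ora'
    SHOW PARAMETER control_files
    -- CONTROL_FILES must name the file that CREATE CONTROLFILE will write, e.g.
    -- D:\APP\ADMINISTRATOR\ORADATA\NEWDB\CONTROLFILE\CONTROLFILE_01.CTL; if a file
    -- with an invalid header already sits at that path, MOUNT fails with ORA-27048.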

  • Can't access online backup and sharing web site

    Does anyone know how to access the Online Backup and Sharing web site from a computer that does not have the O/B/S software installed?  I used to be able to go to My Verizon/My Services/Internet and hit "Launch" for O/B/S, and the link would go directly to the web site for uploading or downloading material.  Now, that link takes me to a Verizon page that asks to activate the O/B/S service, and wants to download the software.  My O/B/S service was activated years ago and has worked until recently without the software installed.  There is no other obvious way to get to the O/B/S web site.  (If it matters, I tried this in both IE 9, with Windows 7 64-bit, and Safari.)
    After some effort, I got the help number for the outside vendor that runs the O/B/S service.  They say that the Verizon web site should link directly to their O/B/S web site, regardless of whether or not the O/B/S software is installed locally, but that the failure to do so is entirely an error of Verizon's on the Verizon web site and they have nothing to do with it.  They also said there is no way to bypass the Verizon web site to get to a Verizon O/B/S account on their web site.  They said they have received many complaints about this from Verizon O/B/S customers, and that the same problem has even arisen when the O/B/S software is installed on the local computer.
    So does anyone know if it is really necessary to install the O/B/S software on any computer used to access O/B/S, and if it is now impossible to access the material from any other computer? That would be a major reduction in the usefulness of the service that was not announced anywhere that I know of. I saw a string of comments on this issue in the Forums from a few years ago, but the conclusion was that the problem had been fixed. It seems to have arisen again.

    Anthony, I tried to private message you several times, but I kept getting an error message stating that my message had invalid HTML that was being removed, and to resend.  It did not actually have any HTML, and hitting the resend button just led to the same message.
    Also, since my last post, I downloaded the OBS software onto my computer.  This gave me access to OBS from that computer.  However, even after doing this, I cannot use the old method of directly getting to the OBS web site from My Verizon.  The link from "Launch" does not work either on the computer with the OBS software or on any other computer.  Rather, the link goes only to the Verizon page saying that my OBS service has not been activated (which it surely has been) and asking to download the OBS software.
    If this is happening to everyone and not just to me, this means that material stored on OBS is only available from a computer that has the OBS software downloaded on it.  That is certainly not supposed to be the case, and it would make OBS much less useful.
    Again, the OBS technical people said that this was a problem that they knew about, but it was entirely a problem with the Verizon web site that they could not do anything about.
