Incremental BACKUP to resolve large gap in Data Guard

Hi, all,
We ran into a situation where the standby database was shut down for a long time, leaving a large archive log gap between the primary and the standby. I am interested in the idea of using an incremental backup from the primary database to resolve the gap. Does anyone know the procedure for doing this? Our environment is Oracle 10g R2 RAC on both primary and standby, with ASM. Any suggestions are highly appreciated. Thanks in advance.

Check the documentation section "Using RMAN Incremental Backups to Roll Forward a Physical Standby Database".
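For reference, that documented procedure boils down to the outline below. The SCN placeholder and the /tmp paths are illustrative; check the 10.2 Data Guard guide for the exact steps in your environment.

```
-- 1. On the standby: stop redo apply and note the SCN to roll forward from
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> SELECT CURRENT_SCN FROM V$DATABASE;

-- 2. On the primary: take an incremental backup starting at that SCN
RMAN> BACKUP INCREMENTAL FROM SCN <standby_scn> DATABASE
      FORMAT '/tmp/ForStandby_%U' TAG 'FORSTANDBY';

-- 3. Copy the pieces to the standby host, then catalog and apply them there
RMAN> CATALOG BACKUPPIECE '/tmp/ForStandby_<piece>';
RMAN> RECOVER DATABASE NOREDO;

-- 4. Refresh the standby control file from the primary, then restart apply
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/StbyCF.bck';  -- primary
RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/StbyCF.bck';               -- standby
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;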

Similar Messages

  • Duplicate vs Standby -- What To Do Without Data Guard

    WHAT WE HAVE
    Three 10.2.0.3.0 databases in production on the same Linux x86 server (Red Hat OS). An identical Linux x86 server (with same OS) is available for housing a corresponding backup instance (either duplicate or physical standby).
    THE ASSIGNMENT
    Develop a plan that, in the event of a loss of our prod server, provides the shortest time before substitute databases are up and running on the backup server. WE CANNOT USE DATA GUARD (due to server licensing issues). It isn't necessarily a high priority to apply archive logs to duplicates or standbys on a regular basis, but I realize the more up-to-date a backup database is, the less recovery time would be needed if and when it's pressed into service. (BTW: We take weekly incremental level 0 and daily incremental level 1 backups of our prod databases with RMAN and keep them for 28 days.)
    QUESTIONS
    1. Does our 'no Data Guard' rule eliminate using standbys - even if I could use OS processes to transfer archive logs between servers and manually apply redo? (I'm not quite sure what exactly constitutes Data Guard.)
    2. If I'm allowed to create standbys, I'll create them using RMAN. Since the scenario being planned for is a loss of the production server, I figure I'm worried about failover and not switchover. Thus, once standbys are created, in addition to manually transferring and applying archives and creating redo files for them, what other action(s) would I need to take before they could be activated, if that's ever needed? Would the redo files I would have to create be of type STANDBY or ONLINE?
    3. Can a standby be opened in read/write mode without being activated?
    4. If I can only use duplicates and if the production server were to go down one day, could the most recent valid RMAN backup(s) for the prod databases be used in recovering the duplicates?
    I'd prefer the backup instances have the SAME SID (db_name) as the corresponding one in prod, if possible.
    Thanks.

    Hi
    1. Does our 'no Data Guard' rule eliminate using standbys - even if I could use OS processes to transfer archive logs between servers and manually apply redo? (I'm not quite sure what exactly constitutes Data Guard.)
    Ans: Not necessarily. If you can transfer the archived logs to the standby destination yourself, you can still run a standby; you will also have to register the archived logs in the standby control file manually, and only then can you recover the standby with them.
    2. If I'm allowed to create standbys, I'll create them using RMAN. Since the scenario being planned for is a loss of the production server, I figure I'm worried about failover and not switchover. Thus, once standbys are created, in addition to manually transferring and applying archives and creating redo files for them, what other action(s) would I need to take before they could be activated, if that's ever needed? Would the redo files I would have to create be of type STANDBY or ONLINE?
    Ans: Yes, you can use RMAN to create the standby database. You only need to activate the standby in the case of a failover. Standby redo log files are needed only if you use real-time redo apply; in your case you are not enabling Data Guard to transport the logs, so real-time apply is not possible and standby redo logs are not required.
    3. Can a standby be opened in read/write mode without being activated ?
    Ans: No. You must do a switchover or failover (i.e., activate it) to open it read/write; without a switchover or failover, a physical standby can be opened only in read-only mode.
    4. If I can only use duplicates and if the production server were to go down one day, could the most recent valid RMAN backup(s) for the prod databases be used in recovering the duplicates?
    Ans: If you make a duplicate database, it will not accept any redo from the primary database. To bring a duplicate up to date, you would restore and recover it from the most recent valid RMAN backups of the production databases.
    Hope this will help you.
    Regards
    Tinku
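    To illustrate the manual registration Tinku describes, the standby-side commands would look roughly like this (the archived log path is illustrative):

    ```
    -- after copying an archived log over from the primary:
    SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE '/u01/arch/arch_1_1234.arc';
    -- then apply all registered logs with manual recovery:
    SQL> RECOVER AUTOMATIC STANDBY DATABASE;
    ```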

  • Oracle 11G Direct SQL Load with Data Guard

    Does SQL*Loader in direct mode always bypass the writing of redo logs?
    If the database has force logging on, will SQL*Loader in direct mode bypass the writing of redo logs?
    Is there a way to run SQL*Loader in direct mode that will create redo logs that can be applied by Data Guard to the backup database?

    846797 wrote:
    Does SQL*Loader in direct mode always bypass the writing of redo logs?
    If the database has force logging on, will SQL*Loader in direct mode bypass the writing of redo logs?
    Is there a way to run SQL*Loader in direct mode that will create redo logs that can be applied by Data Guard to the backup database?
    In a Data Guard setup (where the database runs with force logging), redo will always be generated, even by direct-path loads, so the changes can be applied on the standby.
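    The reason redo is still generated is FORCE LOGGING: with it enabled, direct-path (NOLOGGING) operations are forced to log anyway, which is what Data Guard relies on. A quick way to check and enable it on the primary:

    ```
    -- is the database forcing redo generation for NOLOGGING operations?
    SQL> SELECT force_logging FROM v$database;
    -- enable it (recommended on any Data Guard primary)
    SQL> ALTER DATABASE FORCE LOGGING;
    ```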

  • Active data guard standby database

    Hello,
    we have to choose which technology to use to keep an Oracle backup center synchronized with the main center. This backup center should be read only, and data generated in the main center should be replicated to the backup. So I would like to use Active Data Guard, but I have a possible issue with reading in the backup center, because reading is not just SELECTs but also filling (inserting data into) some temporary tables (from the client side and from procedures in the database). Will the standby database in the backup center allow this if Active Data Guard is used? We would go with Oracle 11gR2.
    We would also welcome suggestions on other replication technologies (I know there are RAC, ADG, Oracle Streams, GoldenGate), or any document which would help us decide which of these technologies would suit our needs best.
    Thank You.

    Hi,
    user12121832 wrote:
    we have to choose which technology to use for maintain Oracle backup center synchronized with main center. This backup center should be read only, and data generated in main should be replicated into backup. So I would like to use Active Data Guard, but I have a possible issue with reading in backup center, because reading is not just select, but filling(inserting data) some temporary tables (from client side, and from procedures in database). Will Standby database in backup center allow this if Active Data Guard is used. We would go with Oracle 11gR2.
    You can use a standby database for your backup center.
    In 11.1 or later you can use the "Snapshot Standby Database" feature if these writes are temporary (i.e. you will not keep the new data); you convert the standby to a snapshot standby only for the periods when read-write access is needed.
    A snapshot standby database is a physical standby database that you temporarily convert into an updatable standby database. You can use snapshot standby databases as clones or test databases to validate new functionality and new releases, and when finished you then convert the database back into a physical standby. While running in the snapshot standby database role, it continues to receive and queue redo data so that data protection and the RPO are not sacrificed.
    A snapshot standby database has the following characteristics:
    * A snapshot standby database cannot be the target of a switchover or failover. A snapshot standby database must first be converted back into a physical standby database before performing a role transition to it.
    * A snapshot standby database cannot be the only standby database in a Maximum Protection Data Guard configuration.
    Note:
    Flashback Database is used to convert a snapshot standby database back into a physical standby database. Any operation that cannot be reversed using Flashback Database technology will prevent a snapshot standby from being converted back to a physical standby.
    http://docs.oracle.com/cd/E11882_01/server.112/e17157/unplanned.htm#HAOVW11832
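    The conversions themselves are two statements; this sketch assumes a fast recovery area is configured (the implicit guaranteed restore point needs it):

    ```
    -- on the mounted physical standby, with redo apply stopped:
    SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
    SQL> ALTER DATABASE OPEN;  -- now read/write; redo is still received and queued
    -- when done, restart to MOUNT and discard the changes:
    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP MOUNT;
    SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    ```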
    Also other suggestions on replication technologies (I know there are RAC, ADG, Oracle Streams, Golden Gate), or any document which would help us decide which of these technologies would suit our needs the best.
    If you need the "standby" database opened read/write all the time, the best option is to use GoldenGate.
    Regards,
    Levi Pereira

  • Robocopy won't reset archive attribute with /M switch... incremental backup to zip folder... partially RESOLVED!!!

    Hello. OS: Server 2012, 2008 R2, W7 Ultimate.
    I want to back up only files which have the archive attribute set. The /M switch is what I need, but robocopy won't reset the archive bit in subfolders; it only resets the bit on files in the root of the source folder. How can I force robocopy to reset the archive bit on all copied files?
    This is syntax:
    @echo off
    set year=%date:~10,4%
    set month=%date:~4,2%
    set day=%date:~7,2%
    SET Now=%Time: =0%
    SET Hours=%Now:~0,2%
    SET Minutes=%Now:~3,2%
    Robocopy c:\source c:\dest /m /s /r:1 /w:1 /log:"c:\dest\Incremental_%month%-%day%-%year% %hours%h-%minutes%m.log" /NP
    "C:\Program Files\7-Zip\7z.exe" a -tzip -mx1 C:\c\Incremental_%month%-%day%-%year%-%hours%h-%minutes%m.zip C:\dest
    echo Y | DEL C:\Temp\*.*
    I could use the attrib command instead, but the working folder has 900,000 files, so that is not an option.
    Edit: robocopy works as expected on 2012 Standard. The /M switch resets the archive attribute, so I can use robocopy for incremental backups.
    Script: c:\a is the source and c:\b is the destination. c:\b\temp is a temporary folder from which 7-Zip packages the backed-up files into an archive. After 7-Zip finishes the job, the script deletes all files and folders from the temp folder. I want to use zip archives because this is much easier on the file system: over time I will have a handful of zip files instead of millions of files and folders, and each zip file carries the date and time in its name for easier tracking.
    @echo off
    set year=%date:~10,4%
    set month=%date:~4,2%
    set day=%date:~7,2%
    SET Now=%Time: =0%
    SET Hours=%Now:~0,2%
    SET Minutes=%Now:~3,2%
    Robocopy c:\a c:\b\temp /s /m /r:1 /w:1 /log:"c:\b\temp\Incremental_%month%-%day%-%year% %hours%h-%minutes%m.log" /NP
    "C:\Program Files\7-Zip\7z.exe" a -tzip -mx1 C:\b\Incremental_%month%-%day%-%year%-%hours%h-%minutes%m.zip C:\b\Temp
    set folder="C:\b\temp\"
    cd /d %folder%
    for /F "delims=" %%i in ('dir /b') do (rmdir "%%i" /s/q || del "%%i" /s/q)
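    One caveat on the scripts above: the %date:~10,4% style substring parsing depends on the regional date format, so the same lines can produce wrong file names on another machine. A locale-independent sketch using WMIC (assuming wmic.exe is available, as it is on 2008 R2/2012):

    ```
    :: LocalDateTime comes back as e.g. 20150408100712.123456+180
    :: regardless of regional settings
    for /f %%a in ('wmic os get LocalDateTime ^| find "."') do set dt=%%a
    set year=%dt:~0,4%
    set month=%dt:~4,2%
    set day=%dt:~6,2%
    set hours=%dt:~8,2%
    set minutes=%dt:~10,2%
    ```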

    Hi,
    Thank you for sharing your solution! It will help others who are fighting similar issues.
    Regards,
    Mandy

  • Why are the incremental backups so large??!!   16GB vs. 300MB

    I have the Time Machine system preference turned to Off, as I do the backups manually every couple of weeks (Back Up Now from the Menu Bar). Here is what puzzles me: I don't work with large files. My incremental increase in size in a couple of weeks might be 200-300 megabytes. Yet, each incremental backup is 12-16 GB, which of course takes forever. Can anyone shed any light on why the large size?

    PrefabSprouter wrote:
    I have the Time Machine system preference turned to Off, as I do the backups manually every couple of weeks (Back Up Now from the Menu Bar). Here is what puzzles me: I don't work with large files. My incremental increase in size in a couple of weeks might be 200-300 megabytes. Yet, each incremental backup is 12-16 GB, which of course takes forever. Can anyone shed any light on why the large size?
    What Tesserax said! Or, if you have Disk Warrior installed, it could be its "trash protection" folder.
    Install TimeTracker from here. It will let you look at any given backup and determine what has been backed up. That way, I discovered the DW folder was the culprit and excluded the sucker from being backed up. Voila, backups came back to a regular size.

  • HT3275 my time machine says the backup is too large but it should only be incremental

    I have time machine trying to back up my mac (OSX10.6) to a time capsule but it gives the error:
    "This backup is too large for the backup disk. The backup requires 240.66 GB but only 8.25 GB are available."
    This time capsule is used for backups on multiple macs in our house and has previously performed a successful full backup for this mac. I thought it would just delete old backups to create space for new ones, but it doesn't seem to want to do this.
    How can I fix this?

    TM often seems unable to manage the old backups if the disk is used by several computers, so this is not unusual.
    You probably need to delete the sparsebundle for your computer and start a fresh backup. 240 GB is a lot of stuff, by the way; it suggests TM is either restarting from scratch or has corrupted an old backup and needs to start over.
    The alternative is to use a USB drive connected to the TC and use that as the TM target. A local external disk would do the same thing, of course.

  • Level 1 incremental backup taking larger space

    Hi All,
    Yesterday I took a level 1 incremental backup. The backup completed successfully, but its size was almost equal to that of a full backup.
    What could be the possible reason for that?
    The environment is Oracle 10g R2 two-node RAC on Windows.

    Hi,
    You can enable block change tracking and see the results; for details, see:
    ORACLE 10G BLOCK CHANGE TRACKING INSIDE OUT (Doc ID 1528510.1)
    Thanks,
    Renu
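    A sketch of what that looks like (the +DATA file location is illustrative):

    ```
    -- is block change tracking already on?
    SQL> SELECT status, filename FROM v$block_change_tracking;
    -- enable it
    SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DATA/bct.f';
    -- after the next level 0, check whether level 1 backups used the file
    SQL> SELECT completion_time, used_change_tracking, blocks_read, datafile_blocks
         FROM v$backup_datafile WHERE incremental_level = 1;
    ```

    Note that change tracking mainly reduces the time a level 1 spends reading; if many blocks really did change (or the level 1 has no usable parent level 0), the backup can still be nearly as large as a full one.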

  • Apply problem after sync with incremental backup

    Hello all,
    I have a primary that is a four-node RAC and one physical standby (all on ASM).
    I took an incremental backup to re-sync the physical standby with the primary, and when I tried to start the Data Guard processes, the shipping is working but the apply is not. Please check:
    THREAD#,Last Standby Seq Received
    1,29598
    2,22308
    3,27230
    4,21868
    THREAD#,Last Standby Seq Applied
    1,28634
    2,21227
    3,25780
    4,21104
    when i run apply command i see in alert log file this :
    Media Recovery Waiting for thread 4 sequence 1
    Fetching gap sequence in thread 4, gap sequence 1-100
    Tue Apr 07 12:06:49 2015
    FAL[client]: Failed to request gap sequence
    GAP - thread 4 sequence 1-100
    DBID 1722434320 branch 849356882
    FAL[client]: All defined FAL servers have been attempted.
    Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
    parameter is defined to a value that's sufficiently large
    enough to maintain adequate log switch information to resolve
    archivelog gaps.
    The system is searching for a gap starting with sequence 1 on thread 4!
    How can I deal with that, please?

    Thanks Stefan. Kindly find below the output for both primary and standby:
    1- select NAME,RESETLOGS_CHANGE#, CHECKPOINT_CHANGE#,CONTROLFILE_CHANGE#,CURRENT_SCN,RECOVERY_TARGET_INCARNATION# from v$database
    Primary :
    ORACLE,925702,3231069178,3231935937,3231937322,2
    Standby :
    ORACLE,925702,3096376393,3096507373,3096507372,2
    2- list incarnation
    Primary :
    List of Database Incarnations
    DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
    1       1       ORACLE   1722434320       PARENT  1          24-AUG-13
    2       2       ORACLE   1722434320       CURRENT 925702     04-JUN-14
    Standby:
    List of Database Incarnations
    DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
    1       1       ORACLE   1722434320       PARENT  1          24-AUG-13
    2       2       ORACLE   1722434320       CURRENT 925702     04-JUN-14
    3- select FILE#,RESETLOGS_CHANGE#,RESETLOGS_TIME, CHECKPOINT_CHANGE#,CHECKPOINT_TIME from v$datafile_header;
    FILE# RESETLOGS_CHANGE# RESETLOGS CHECKPOINT_CHANGE# CHECKPOIN
             1            925702 04-JUN-14         3231779881 08-APR-15
             2            925702 04-JUN-14         3231779881 08-APR-15
             3            925702 04-JUN-14         3231779881 08-APR-15
             4            925702 04-JUN-14         3231779881 08-APR-15
             5            925702 04-JUN-14         3231779881 08-APR-15
             6            925702 04-JUN-14         3231779881 08-APR-15
             7            925702 04-JUN-14         3231779881 08-APR-15
             8            925702 04-JUN-14         3231779881 08-APR-15
             9            925702 04-JUN-14         3231779881 08-APR-15
            10            925702 04-JUN-14         3231779881 08-APR-15
            11            925702 04-JUN-14         3231779881 08-APR-15
            12            925702 04-JUN-14         3231779881 08-APR-15
            13            925702 04-JUN-14         3231779881 08-APR-15
            14            925702 04-JUN-14         3231779881 08-APR-15
            15            925702 04-JUN-14         3231779881 08-APR-15
            16            925702 04-JUN-14         3231779881 08-APR-15
            17            925702 04-JUN-14         3231779881 08-APR-15
            18            925702 04-JUN-14         3231779881 08-APR-15
            19            925702 04-JUN-14         3231779881 08-APR-15
            20            925702 04-JUN-14         3231779881 08-APR-15
            21            925702 04-JUN-14         3231779881 08-APR-15
            22            925702 04-JUN-14         3231779881 08-APR-15
            23            925702 04-JUN-14         3231779881 08-APR-15
            24            925702 04-JUN-14         3231779881 08-APR-15
            25            925702 04-JUN-14         3231779881 08-APR-15
            26            925702 04-JUN-14         3231779881 08-APR-15
            27            925702 04-JUN-14         3231779881 08-APR-15
            28            925702 04-JUN-14         3231779881 08-APR-15
            29            925702 04-JUN-14         3231779881 08-APR-15
    29 rows selected.
    Standby:
    FILE# RESETLOGS_CHANGE# RESETLOGS CHECKPOINT_CHANGE# CHECKPOIN
             1            925702 04-JUN-14         3065328471 25-MAR-15
             2            925702 04-JUN-14         3065328464 25-MAR-15
             3            925702 04-JUN-14         3065328464 25-MAR-15
             4            925702 04-JUN-14         3065328461 25-MAR-15
             5            925702 04-JUN-14         3065328471 25-MAR-15
             6            925702 04-JUN-14         3065328471 25-MAR-15
             7            925702 04-JUN-14         3065328461 25-MAR-15
             8            925702 04-JUN-14         3065328471 25-MAR-15
             9            925702 04-JUN-14            1016743 04-JUN-14
            10            925702 04-JUN-14            1020278 04-JUN-14
            11            925702 04-JUN-14            1020681 04-JUN-14
            12            925702 04-JUN-14            1021083 04-JUN-14
            13            925702 04-JUN-14            1021086 04-JUN-14
            14            925702 04-JUN-14            1021877 04-JUN-14
            15            925702 04-JUN-14            1021880 04-JUN-14
            16            925702 04-JUN-14            1021883 04-JUN-14
            17            925702 04-JUN-14            1021886 04-JUN-14
            18            925702 04-JUN-14            1031089 04-JUN-14
            19            925702 04-JUN-14            1031555 04-JUN-14
            20            925702 04-JUN-14            1032064 04-JUN-14
            21            925702 04-JUN-14            1032525 04-JUN-14
            22            925702 04-JUN-14            1032922 04-JUN-14
            23            925702 04-JUN-14            1033338 04-JUN-14
            24            925702 04-JUN-14            1033731 04-JUN-14
            25            925702 04-JUN-14            1034126 04-JUN-14
            26            925702 04-JUN-14           90283375 09-JUL-14
            27            925702 04-JUN-14           90291448 09-JUL-14
            28            925702 04-JUN-14         3065328471 25-MAR-15
            29            925702 04-JUN-14         3065328461 25-MAR-15
    29 rows selected.
    4- select name,CHECKPOINT_CHANGE#,CHECKPOINT_TIME, UNRECOVERABLE_CHANGE# ,UNRECOVERABLE_TIME from v$datafile
    Primary:
    NAME                                                 CHECKPOINT_CHANGE#  CHECKPOINT_TIME      UNRECOVERABLE_CHANGE#  UNRECOVERABLE_TIME
    +ASM_ORADATA/oracle/datafile/system.274.849356797    3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/sysaux.260.849356797    3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/undotbs1.263.849356799  3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/users.269.849356799     3232100160  04/08/2015 10:07:12  404,796,458  08/25/2014 22:36:50
    +ASM_ORADATA/oracle/datafile/example.272.849356895   3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/undotbs2.262.849357043  3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/undotbs3.261.849357043  3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/undotbs4.267.849357045  3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/asd01.dbf               3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/dev01.dbf               3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/feed01.dbf              3232100160  04/08/2015 10:07:12  404,790,786  08/25/2014 22:36:42
    +ASM_ORADATA/oracle/datafile/indx01.dbf              3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/indx02.dbf              3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/website01.dbf           3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/website02.dbf           3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/website03.dbf           3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/website04.dbf           3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/users01.dbf             3232100160  04/08/2015 10:07:12  404,796,505  08/25/2014 22:36:50
    +ASM_ORADATA/oracle/datafile/users02.dbf             3232100160  04/08/2015 10:07:12  404,796,502  08/25/2014 22:36:50
    +ASM_ORADATA/oracle/datafile/users03.dbf             3232100160  04/08/2015 10:07:12  404,796,499  08/25/2014 22:36:50
    +ASM_ORADATA/oracle/datafile/users04.dbf             3232100160  04/08/2015 10:07:12  404,796,496  08/25/2014 22:36:50
    +ASM_ORADATA/oracle/datafile/users05.dbf             3232100160  04/08/2015 10:07:12  404,796,493  08/25/2014 22:36:50
    +ASM_ORADATA/oracle/datafile/users06.dbf             3232100160  04/08/2015 10:07:12  404,796,490  08/25/2014 22:36:50
    +ASM_ORADATA/oracle/datafile/users07.dbf             3232100160  04/08/2015 10:07:12  404,796,487  08/25/2014 22:36:50
    +ASM_ORADATA/oracle/datafile/users08.dbf             3232100160  04/08/2015 10:07:12  404,796,505  08/25/2014 22:36:50
    +ASM_ORADATA/oracle/datafile/feed02.dbf              3232100160  04/08/2015 10:07:12  134,340,010  07/16/2014 22:06:21
    +ASM_ORADATA/oracle/datafile/indx03.dbf              3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/oracle/datafile/system_2_01             3232100160  04/08/2015 10:07:12  0
    +ASM_ORADATA/catalog01                               3232100160  04/08/2015 10:07:12  0
    Standby:
    NAME                                              CHECKPOINT_CHANGE#  CHECKPOINT_TIME      UNRECOVERABLE_CHANGE#  UNRECOVERABLE_TIME
    +DATA/oracledrs/datafile/system.3248.875795981    3096432813  03/30/2015 07:55:21  0
    +DATA/oracledrs/datafile/sysaux.3244.875796111    3096432814  03/30/2015 07:55:21  0
    +DATA/oracledrs/datafile/undotbs1.3249.875796547  3096432814  03/30/2015 07:55:21  0
    +DATA/oracledrs/datafile/users.3255.875796621     3096432811  03/30/2015 07:55:20  404,796,458  08/25/2014 22:36:50
    +DATA/oracledrs/datafile/example.3258.875796817   3096432814  03/30/2015 07:55:21  0
    +DATA/oracledrs/datafile/undotbs2.3256.875796823  3096432813  03/30/2015 07:55:21  0
    +DATA/oracledrs/datafile/undotbs3.3253.875796869  3096432811  03/30/2015 07:55:20  0
    +DATA/oracledrs/datafile/undotbs4.3247.875797223  3096432813  03/30/2015 07:55:21  0
    +DATA/oracledrs/datafile/asd01.dbf                3096432813  03/30/2015 07:55:21  0
    +DATA/oracledrs/datafile/dev01.dbf                3096432814  03/30/2015 07:55:21  0
    +DATA/oracledrs/datafile/feed01.dbf               3096432813  03/30/2015 07:55:21  404,790,786  08/25/2014 22:36:42
    +DATA/oracledrs/datafile/indx01.dbf               3096432803  03/30/2015 07:55:20  0
    +DATA/oracledrs/datafile/indx02.dbf               3096432803  03/30/2015 07:55:20  0
    +DATA/oracledrs/datafile/website01.dbf            3096432813  03/30/2015 07:55:21  0
    +DATA/oracledrs/datafile/website02.dbf            3096432811  03/30/2015 07:55:20  0
    +DATA/oracledrs/datafile/website03.dbf            3096432811  03/30/2015 07:55:20  0
    +DATA/oracledrs/datafile/website04.dbf            3096432811  03/30/2015 07:55:20  0
    +DATA/oracledrs/datafile/users01.dbf              3096432811  03/30/2015 07:55:20  404,796,505  08/25/2014 22:36:50
    +DATA/oracledrs/datafile/users02.dbf              3096432803  03/30/2015 07:55:20  404,796,502  08/25/2014 22:36:50
    +DATA/oracledrs/datafile/users03.dbf              3096432803  03/30/2015 07:55:20  404,796,499  08/25/2014 22:36:50
    +DATA/oracledrs/datafile/users04.dbf              3096432814  03/30/2015 07:55:21  404,796,496  08/25/2014 22:36:50
    +DATA/oracledrs/datafile/users05.dbf              3096432813  03/30/2015 07:55:21  404,796,493  08/25/2014 22:36:50
    +DATA/oracledrs/datafile/users06.dbf              3096432803  03/30/2015 07:55:20  404,796,490  08/25/2014 22:36:50
    +DATA/oracledrs/datafile/users07.dbf              3096432811  03/30/2015 07:55:20  404,796,487  08/25/2014 22:36:50
    +DATA/oracledrs/datafile/users08.dbf              3096432814  03/30/2015 07:55:21  404,796,505  08/25/2014 22:36:50
    +DATA/oracledrs/datafile/feed02.dbf               3096432813  03/30/2015 07:55:21  134,340,010  07/16/2014 22:06:21
    +DATA/oracledrs/datafile/indx03.dbf               3096432814  03/30/2015 07:55:21  0
    +DATA/oracledrs/datafile/system_2_01              3096432814  03/30/2015 07:55:21  0
    +DATA/oracledrs/datafile/catalog.288.873258217    3096432811  03/30/2015 07:55:20  0
    Thanks in advance,
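    For what it's worth, a FAL request for "gap sequence 1-100" immediately after an incremental roll-forward usually indicates the standby control file is stale relative to the primary. The commonly documented remedy (11g RMAN syntax; the +DATA path is the one shown above, used illustratively) is to refresh the control file and re-catalog the standby datafiles:

    ```
    -- on the primary:
    RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/StbyCF.bck';
    -- copy the piece to the standby, then with the standby mounted:
    RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/StbyCF.bck';
    RMAN> CATALOG START WITH '+DATA/oracledrs/datafile/';
    RMAN> SWITCH DATABASE TO COPY;
    ```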

  • Time Machine Stuck (Incremental backup, not Initial)

    Hey...me again.
    I was finally successful yesterday backing up 160GB of data with Time Machine.
    However, this morning, the incremental backup is taking forever. It's been stuck at "3.6MB of 6.7MB" for over 20 minutes. It would seem to me that 6.7MB should be pretty quick to back up.
    I have received these messages from TM Buddy:
    Starting standard backup
    Backing up to: /Volumes/Time Machine/Backups.backupdb
    No pre-backup thinning needed: 100.0 MB requested (including padding), 279.56 GB available
    Unable to rebuild path cache for source item. Partial source path:
    Copied 305079 files (49.1 MB) from volume Macintosh HD.
    No pre-backup thinning needed: 100.0 MB requested (including padding), 279.31 GB available
    And Console shows this:
    Aug 25 10:23:53 Kieran-Roys-MacBook-Pro /System/Library/CoreServices/backupd[254]: Backup requested by user
    Aug 25 10:23:53 Kieran-Roys-MacBook-Pro /System/Library/CoreServices/backupd[254]: Starting standard backup
    Aug 25 10:23:53 Kieran-Roys-MacBook-Pro /System/Library/CoreServices/backupd[254]: Backing up to: /Volumes/Time Machine/Backups.backupdb
    Aug 25 10:26:21 Kieran-Roys-MacBook-Pro /System/Library/CoreServices/backupd[254]: No pre-backup thinning needed: 100.0 MB requested (including padding), 279.56 GB available
    Aug 25 10:27:51 Kieran-Roys-MacBook-Pro /System/Library/CoreServices/backupd[254]: Unable to rebuild path cache for source item. Partial source path:
    Aug 25 10:41:06 Kieran-Roys-MacBook-Pro /System/Library/CoreServices/backupd[254]: Copied 305079 files (49.1 MB) from volume Macintosh HD.
    Aug 25 10:42:21 Kieran-Roys-MacBook-Pro /System/Library/CoreServices/backupd[254]: No pre-backup thinning needed: 100.0 MB requested (including padding), 279.31 GB available
    Any ideas on how to resolve this?
    Thanks,
    K

    kieranroy wrote:
    Hey...me again
    Hi again. You seem to have really angered the god of TM, haven't you!
    However, this morning, the incremental backup is taking forever. It's been stuck at "3.6MB of 6.7MB" for over 20 minutes. It would seem to me that 6.7MB should be pretty quick to back up.
    Unable to rebuild path cache for source item. Partial source path:
    This usually doesn't cause a real problem, as long as it's occasional.
    Copied 305079 files (49.1 MB) from volume Macintosh HD.
    This we see on occasion -- the crazy high file count, and very slow backup.
    Aug 25 10:42:21 Kieran-Roys-MacBook-Pro /System/Library/CoreServices/backupd[254]: No pre-backup thinning needed: 100.0 MB requested (including padding), 279.31 GB available
    Normally, TM will make a second, "catch-up" pass if changes were made during its first pass. Usually this is very quick, but since the first one was very slow, this one may be worse.
    Do you have any folders, particularly email mailboxes, with a very large number of files (thousands), that are actively being added-to or changed? There was a similar situation recently where an app named SpamSieve was putting messages into a "spam" folder, and the user never cleared it out. Clearing it out helped, but excluding it from Time Machine solved the problem.
    Click here to download the TimeTracker app. It shows most of the files saved by TM for each backup (excluding some hidden/system files, etc.). I think it will only show completed backups, so you may not be able to find anything. But look at what's being backed-up to see if any of the files, even if there's only one, are in extremely large folders. If so, try excluding the folder (TM Preferences > Options).

  • Freeze when writing large amount of data to iPod through USB

    I used to take backups of my PowerBook to my 60G iPod video. Backups are taken with tar in terminal directly to mounted iPod volume.
    Now, every time I try to write a big amount of data to iPod (from MacBook Pro), the whole system freezes (mouse cursor moves, but nothing else can be done). When the USB-cable is pulled off, the system recovers and acts as it should. This problem happens every time a large amount of data is written to iPod.
    The same iPod works perfectly (when backing up) on the PowerBook, and small amounts of data can easily be written to it (on the MacBook Pro) without problems.
    Does anyone else have the same problem? Any ideas why is this and how to resolve the issue?
    MacBook Pro, 2.0Ghz, 100GB 7200RPM, 1GB Ram   Mac OS X (10.4.5)   IPod Video 60G connected through USB

    Ex-PC user... never had a problem.
    Got a MacBook Pro last week... having the same issues... and this is now with an exchanged machine!
    I've read elsewhere that it's something to do with the USB timing out, and that if you get a new USB port (one that is powered separately) and attach the iPod there, it should work. Kind of a bummer, but those folks who tried it say it works.
    Me, I can upload to the iPod piecemeal, manually... but even then, it sometimes freezes.
    The good news is that once the iPod is loaded, the problem shouldn't happen. It's the large amounts of data.
    Apple should DEFINITELY fix this, though. Unbelievable.
    MacBook Pro 2.0   Mac OS X (10.4.6)  

  • Error: "This backup is too large for the backup volume."

    Well TM is acting up. I get an error that reads:
    "This backup is too large for the backup volume."
    Both the internal boot disk and the external backup drive are 1TB. The internal one has two partitions: a 900GB OS X one and a 32GB NTFS one for Boot Camp.
    The external drive is a single OS X Extended partition of 932GB.
    Both the Time Machine disk and the Boot Camp disk are excluded from the backup, along with a "Crap" folder for temporary large files as well as the EyeTV temp folder.
    Time Machine says it needs 938GB to back up only the OS X disk, which has 806GB in use with the rest free. WTFFF? The TM pane says that "only" 782GB are going to be backed up. Where did the 938GB figure come from?
    This happened after moving a large folder (128GB in total) from the root of the OS X disk over to my Home folder.
    I have reformatted the Time Machine drive and have no backups at all of my data, and it refuses to back up!!
    Why would it need 938GB of space to back up if the disk has "only" 806GB in use??? Is there any way to reset Time Machine completely???
    Some screenshots:
    http://www.xcapepr.com/images/tm2.png
    http://www.xcapepr.com/images/tm1.png
    http://www.xcapepr.com/images/tm4.png

    xcapepr wrote:
    Time Machine says it needs 938GB to back up only the OS X disk, which has 806GB in use with the rest free. WTFFF? The TM pane says that "only" 782GB are going to be backed up. Where did the 938GB figure come from?
    Why would it need 938GB of space to back up if the disk has "only" 806GB in use??? Is there any way to reset Time Machine completely???
    TM makes an initial "estimate" of how much space it needs, "including padding", that is often quite high. Why that is, and just what it means by "padding", are rather mysterious. But it does also need work space on any drive, including your TM drive.
    But beyond that, your TM disk really is too small for what you're backing up. The general "rule of thumb" is that it should be 2-3 times the size of what it's backing up, but it really depends on how you use your Mac. If you frequently update lots of large files, even 3 times may not be enough. If you're a light user, you might get by with 1.5 times. But that's about the lower limit.
    Note that although it does skip a few system caches, work files, etc., by default it backs up everything else, and does not do any compression.
    All this is because TM is designed to manage its backups and space for you. Once its initial full backup is done, it will by default back up any changes hourly. It only keeps those hourly backups for 24 hours, but converts the first of the day into a "daily" backup, which it keeps for a month. After a month, it converts one per week into a "weekly" backup that it will keep for as long as it has room.
    What you're up against is finding room for those 30 dailies and up to 24 hourlies.
    You might be able to get it to work, sort of, temporarily, by excluding something large, like your home folder, until that first full backup completes, then removing the exclusion for the next run. But pretty soon it will begin to fail again, and you'll have to delete backups manually (from the TM interface, not via the Finder).
    Longer term, you need a bigger disk; or exclude some large items (back them up to a portable external drive or even DVD/RWs first); or adopt a different strategy.
    You might want to investigate CarbonCopyCloner, SuperDuper!, and other apps that can be used to make bootable "clones". Their advantage, beyond needing less room, is that when your HD fails, you can immediately boot and run from the clone, rather than waiting to restore from TM to your repaired or replaced HD.
    Their disadvantages are that you don't have the previous versions of changed or deleted files, and, because of the way they work, their "incremental" backups of changed items take much longer and far more CPU.
    Many of us use both a "clone" (I use CCC) and TM. On my small (roughly 30 GB) system, the difference is dramatic: I rarely notice TM's hourly backups -- they usually run under 30 seconds; CCC takes at least 15 minutes and most of my CPU.
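    The hourly/daily/weekly retention schedule described in this reply can be sketched in code. This is a hypothetical simplification for illustration (the function name and exact tier boundaries are assumptions), not Apple's actual implementation:

```python
from datetime import datetime, timedelta

def thin_backups(timestamps, now):
    """Sketch of the retention ('thinning') policy described above:
    keep every backup from the last 24 hours, the first backup of each
    day for the last 30 days, and the first backup of each week before
    that (until the disk fills). Hypothetical, not Apple's code."""
    keep = set()
    seen_days, seen_weeks = set(), set()
    for t in sorted(timestamps):          # oldest first
        age = now - t
        if age <= timedelta(hours=24):
            keep.add(t)                   # hourly tier: keep everything
        elif age <= timedelta(days=30):
            day = t.date()
            if day not in seen_days:      # first backup of that day
                seen_days.add(day)
                keep.add(t)
        else:
            week = t.isocalendar()[:2]    # (year, ISO week number)
            if week not in seen_weeks:    # first backup of that week
                seen_weeks.add(week)
                keep.add(t)
    return keep
```

    Running it over a set of timestamps shows why a 24-hour-old hourly backup can disappear while an older "first of the day" backup survives.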

  • "Backup is too large for the backup volume" error

    I've been backing up with TM for a while now, and finally it seems as though the hard drive is full, since I'm down to 4.2GB available of 114.4GB.
    Whenever TM tries to do a backup, it gives me the error "This backup is too large for the backup volume. The backup requires 10.8 GB but only 4.2GB are available. To select a larger volume, or make the backup smaller by excluding files, open System Preferences and choose Time Machine."
    I understand that I have those two options, but why can't TM just erase the oldest backup and use that free space to make the new backup? I know a 120GB drive is pretty small, but if I have to just keep accumulating backups infinitely, I'm afraid I'll end up with 10 years of backups and a 890-zettabyte drive taking up my garage. I'm hoping there's a more practical solution.

    John,
    Please review the following article as it might explain what you are encountering.
    *_“This Backup is Too Large for the Backup Volume”_*
    First, much depends on the size of your Mac’s internal hard disk, the quantity of data it contains, and the size of the hard disk designated for Time Machine backups. It is recommended that any hard disk designated for Time Machine backups be +at least+ twice as large as the hard disk it is backing up from. You see, the more space it has to grow, the greater the history it can preserve.
    *Disk Management*
    Time Machine is designed to use the space it is given as economically as possible. When backups reach the limit of expansion, Time Machine will begin to delete old backups to make way for newer data. The less space you provide for backups the sooner older data will be discarded. [http://docs.info.apple.com/article.html?path=Mac/10.5/en/15137.html]
    However, Time Machine will only delete what it considers “expired”. Within the Console logs this process is referred to as “thinning”. It appears that many of these “expired” backups are deleted when hourly backups are consolidated into daily backups and daily backups are consolidated into weekly backups. This consolidation takes place once hourly backups reach 24 hours old and daily backups reach about 30 days old. Weekly backups will only be deleted, or ‘thinned’, once the backup drive nears full capacity.
    One thing seems certain, though: if a new incremental backup happens to be larger than what Time Machine currently considers “expired”, then you will get the message “This backup is too large for the backup volume.” In other words, Time Machine believes it would have to sacrifice too much to accommodate the latest incremental backup. This is probably why Time Machine always overestimates incremental backups by 2 to 10 times the actual size of the data currently being backed up. Within the Console logs this is referred to as “padding”. This is so that backup files never actually reach the physical limits of the backup disk itself.
    *Recovering Backup Space*
    If you have discovered that large unwanted files have been backed up, you can use the Time Machine “time travel” interface to recover some of that space. Do NOT, however, delete files from a Time Machine backup disk by manually mounting the disk and dragging files to the trash. You can damage or destroy your original backups by this means.
    Additionally, deleting files you no longer wish to keep on your Mac does not immediately remove such files from Time Machine backups. Once data has been removed from your Mac's hard disk it will remain in backups for some time until Time Machine determines that it has "expired". That's one of its benefits - it retains data you may have unintentionally deleted. But eventually that data is expunged. If, however, you need to remove backed-up files immediately, do this:
    Launch Time Machine from the Dock icon.
    Initially, you are presented with a window labeled “Today (Now)”. This window represents the state of your Mac as it exists now. +DO NOT+ delete or make changes to files while you see “Today (Now)” at the bottom of the screen. Otherwise, you will be deleting files that exist "today" - not yesterday or last week.
    Click on the window just behind “Today (Now)”. This represents the last successful backup and should display the date and time of this backup at the bottom of the screen.
    Now, navigate to where the unwanted file resides. If it has been some time since you deleted the file from your Mac, you may need to go farther back in time to see the unwanted file. In that case, use the time scale on the right to choose a date prior to when you actually deleted the file from your Mac.
    Highlight the file and click the Actions menu (Gear icon) from the toolbar.
    Select “Delete all backups of <this file>”.
    *Full Backup After Restore*
    If you are running out of disk space sooner than expected, it may be that Time Machine is ignoring previous backups and is trying to perform another full backup of your system. This will happen if you have reinstalled the System Software (Mac OS), replaced your computer with a new one, or had significant repair work done on your existing Mac. Time Machine will perform a new full backup. This is normal. [http://support.apple.com/kb/TS1338]
    You have several options if Time Machine is unable to perform the new full backup:
    A. Delete the old backups, and let Time Machine begin afresh.
    B. Attach another external hard disk and begin backups there, while keeping this current hard disk. After you are satisfied with the new backup set, you can later reformat the old hard disk and use it for other storage.
    C. Ctrl-Click the Time Machine Dock icon and select "Browse Other Time Machine disks...". Then select the old backup set. Navigate to files/folders you don't really need backups of and go up to the Action menu ("Gear" icon) and select "Delete all backups of this file." If you delete enough useless stuff, you may be able to free up enough space for the new backup to take place. However, this method is not assured as it may not free up enough "contiguous space" for the new backup to take place.
    *Outgrown Your Backup Disk?*
    On the other hand, your computer's drive contents may very well have outgrown the capacity of the Time Machine backup disk. It may be time to purchase a larger-capacity hard drive for Time Machine backups. Alternatively, you can begin using the Time Machine Preferences exclusion list to prevent Time Machine from backing up unneeded files/folders.
    Consider as well: Do you really need ALL that data on your primary hard disk? It sounds like you might need to archive to a different hard disk anything that is not of immediate importance. You see, Time Machine is not designed for archiving purposes, just as a backup of your local drive(s). In the event of disaster, it can get your system back to its current state without having to reinstall everything. But if you need LONG-TERM storage, then you need another drive that is removed from your normal everyday working environment.
    This KB article discusses this scenario with some suggestions including Archiving the old backups and starting fresh [http://docs.info.apple.com/article.html?path=Mac/10.5/en/15137.html]
    Let us know if this clarifies things.
    Cheers!

  • Is the only way to import a large amount of data and database objects into a primary database to shut down the standby, turn off archive log mode, do the import, then rebuild the standby?

    I have a primary database that needs to import a large amount of data and database objects. 1.) Do I shut down the standby? 2.) Turn off archive log mode? 3.) Perform the import? 4.) Rebuild the standby? Or is there a better way or best practice?

    Instead of rebuilding the (whole) standby, you take an incremental (from SCN) backup from the Primary and restore it on the Standby. That way, for example:
    a. If only two out of 12 tablespaces are affected by the import, the incremental backup would effectively contain only the blocks changed in those two tablespaces (and some other changes in SYSTEM and UNDO), provided that there are no other changes in the other ten tablespaces.
    b. If the size of the import is only 15% of the database, the incremental backup to restore to the standby is correspondingly small.
    Hemant K Chitale
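    For reference, the roll-forward Hemant describes (documented in the "Using RMAN Incremental Backups to Roll Forward a Physical Standby Database" note mentioned in the accepted answer) looks roughly like this. The SCN, paths, and tag are placeholders; stop managed recovery on the standby before cataloging, and on 10g you may also need to refresh the standby controlfile afterwards:

```
-- 1. On the standby: find the SCN from which the primary must back up
SQL> SELECT CURRENT_SCN FROM V$DATABASE;

-- 2. On the primary: back up all blocks changed since that SCN
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE
      FORMAT '/tmp/stby_incr_%U' TAG 'STBY_ROLLFWD';

-- 3. Transfer the backup pieces to the standby host, then on the standby:
RMAN> CATALOG START WITH '/tmp/stby_incr';
RMAN> RECOVER DATABASE NOREDO;

-- 4. Restart managed recovery so redo apply resumes from the new SCN
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
     DISCONNECT FROM SESSION;
```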

  • Incremental Backup question

    Hi,
    I have a hot backup taken every Wednesday and archive log backups taken every day (using RMAN and NetBackup). Oracle is 10.2.0.3 and the OS is RHEL4.
    Do I still need or (will it be helpful to have ) incremental backups? What is its purpose? Will the recovery be faster if I have incremental backups? Please comment.
    Thank you,

    I guess this is a good best practice, found in Expert Oracle Database by Sam Alapati:
    "If you expect few changes in data, you are better off using incremental backups, since they
    won’t consume a lot of space. Incremental backups, as part of your backup strategy, will reduce the
    time required to apply redo during recovery. However, if most of your database blocks change frequently,
    your incremental backups will be quite large. In such a case, you are better off making a
    complete image copy of the database at regular intervals."
    Edited by: doro on Aug 19, 2011 1:57 PM
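    To make the quoted advice concrete, a weekly level 0 plus daily level 1 scheme might look like this in RMAN (the change-tracking file path is illustrative; block change tracking is optional but keeps level 1 backups from scanning every datafile):

```
-- Weekly: level 0 incremental (the baseline for later level 1s)
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;

-- Daily: level 1 incremental (only blocks changed since the last 0/1)
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;

-- Optional: track changed blocks so level 1 backups run faster
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
     USING FILE '/u01/oradata/bct.f';
```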
