Speeding up full backup of Replicate database ASE 15.5

Greetings all
I need to speed up the backup of a replicate database.
ASE version 15.5
Adaptive Server Enterprise/15.5/EBF 20633 SMP ESD#5.2/P/RS6000/AIX 5.3/asear155/2602/64-bit/FBO/Sun Dec  9 11:59:29 2012
Backup Server/15.5/EBF 20633 ESD#5.2/P/RS6000/AIX 5.3/asear155/3350/32-bit/OPT/Sun Dec  9 08:34:37 2012
RS version
Replication Server/15.7.1/EBF 21656 SP110 rs1571sp110/RS6000/AIX 5.3/1/OPT64/Wed Sep 11 12:46:38 2013
Primary database is 1.9 TB, about 85% occupied
Replicate database is the same size but only about 32% used (mostly dbo tables are replicated)
As noted above, the backup server is 32-bit on AIX.
SIMILARITIES
Both servers use SAN storage with locally mounted directories for backup files/stripes.
Databases are on 'raw' devices for data and log.
Both backup servers have similar RUN files with the following:
-N25 \
-C20 \
-M/sybase/15/ASE-15_0/bin/sybmultbuf \
-m2000 \
The number of stripes is 20 for both the primary and replicate databases.
DIFFERENCES
Replicate has less memory and fewer engines.
Devices on the primary are mostly 32 GB and those on the replicate are mostly 128 GB.
OBSERVATIONS
Full backup times on the primary are consistently about 1 hour.
Full backup times on the replicate are consistently double that (120 to 130 minutes).
A full backup with replication suspended, or with minimal activity, does not reduce the run times much.
What do I need to capture to pinpoint the cause of the slow backup on the replicate side?
Thanks in advance
Avinash
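The observations above imply very different effective throughput per used gigabyte. A back-of-envelope sketch (assuming dump database has to read roughly the allocated pages, which is an approximation):

```python
# Effective dump throughput over the space the dump actually has to read,
# using the sizes and run times quoted above.

def dump_rate_mb_s(db_size_tb, used_fraction, minutes):
    """MB/s over the used portion of the database."""
    used_mb = db_size_tb * used_fraction * 1024 * 1024
    return used_mb / (minutes * 60)

primary = dump_rate_mb_s(1.9, 0.85, 60)      # ~470 MB/s
replicate = dump_rate_mb_s(1.9, 0.32, 125)   # ~85 MB/s
print(round(primary), round(replicate), round(primary / replicate, 1))
```

So per used gigabyte the replicate dump is roughly 5.5x slower, which points at the I/O path (device layout, SAN LUNs, Backup Server configuration) rather than at the amount of data being dumped.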

Mark
Thanks for the inputs.
We use compression level 2 on both primary and replicate.
This was tried out before the upgrade to 15.5 and seems good enough.
BTW, on a different server I also tried the new compression levels 100 and 101
for a database of the same size and did not get a substantial reduction in run times.
Stripe sizes increased from 23 GB to 30-33 GB.
As far as I have noted, the replicate side is not starved for CPU.
sp_sysmon outputs during the backup period do not show high CPU usage.
Will it be accurate to say that, like a huge report query, backup activity also churns the caches?
(i.e. each allocated/used page, if not found in the cache, is brought into cache by a physical read)
Avinash

Similar Messages

  • DPM is Only Allowing Express Full Backups For a Database Set to Full Recovery Model

    I have just transitioned my SQL backups from a server running SCDPM 2012 SP1 to a different server running 2012 R2.  All backups are working as expected except for one.  The database in question is supposed to be backed up with a daily express
    full and hourly incremental schedule.  Although the database is set to the full recovery model, the new DPM server says that recovery points will be created for that database based on the express full backup schedule.  I checked the logs on the old DPM
    server and the transaction log backups were working just fine up until I stopped protecting the data source.  The SQL server is 2008 R2 SP2.  Other databases on the same server that are set to the full recovery model are working just fine.  If we
    switch the recovery model of a database that isn't protected by DPM and then start the wizard to add it to the protection group, it properly sees the difference when we flip the recovery model back and forth.  We also tried switching the recovery model
    on the failing database from full to simple and then back again, but to no avail.  Both the SQL server and the DPM server have been rebooted.  We have successfully set up transaction log backups in a SQL maintenance plan as a test, so we know the
    database is really using the full recovery model.
    Is there anything that someone knows about that can trigger a false positive for recovery model to backup type mismatches?

    I was having this same problem and appear to have found a solution.  I wanted hourly recovery points for all my SQL databases.  I was getting hourly for some but not for others.  The others were only getting a recovery point for the Full Express
    backup.  I noted that some of the databases were in simple recovery mode so I changed them to full recovery mode but that did not solve my problem.  I was still not getting the hourly recovery points.
    I found an article that seemed to indicate that SCDPM did not recognize any change in the recovery model once protection had started.  My database was in simple recovery mode when I added it (auto) to protection so even though I changed it to full recovery
    mode SCDPM continued to treat it as simple. 
    I tested this by 1) verifying my db is set to full recovery, 2) backing it up and restoring it with a new name, 3) allowing SCDPM to automatically add it to protection overnight, 4) verifying the next day that I am getting hourly recovery points on the copy of the db.
    It worked.  The original db was still only getting express full recovery points and the copy was getting hourly.  I suppose that if I don't want to restore a production db with an alternate name I will have to remove the db from protection, verify
    that it is set to full, and then add it back to protection.   I have not tested this yet.
    This is the article I read: 
    Article I read

  • Recover full backup into another database

    Hello,
    I have a particular need that does not seem to come up often, and I just cannot get it working.
    So here is the situation: I have a backup of a full database. That means I have the init parameter file, the auto-backed-up controlfile (+pfile) and the auto-backed-up backupset.
    The source database is release 10.2.0.5.0, RAC instance.
    On another server, I have a simple instance, same release and I would like to recover the full backup in the second database.
    I have already done this once before, but I had both the pfile and controlfile backed up manually and the two instances were simple ones.
    Here I have tried the same way: shut down my target database, changed my pfile backup parameters to match the target database, started the target database in nomount mode using the pfile, created the spfile from the pfile, then restored the controlfile from the backed-up controlfile with RMAN.
    But here this step is a problem.
    My question is simple : what is the best way / good practices to get this working?
    Thanks in advance for your help. Ask if you need any further information.
    Max

    Hello,
    Here is the original init pfile content :
    instdb1.__db_cache_size=1107296256
    instdb2.__db_cache_size=1023410176
    instdb2.__java_pool_size=16777216
    instdb1.__java_pool_size=16777216
    instdb2.__large_pool_size=16777216
    instdb1.__large_pool_size=16777216
    instdb1.__shared_pool_size=436207616
    instdb2.__shared_pool_size=520093696
    instdb2.__streams_pool_size=16777216
    instdb1.__streams_pool_size=16777216
    *.audit_trail='DB'
    *.background_dump_dest='/u1/app/oracle/admin/instdb/bdump'
    *.cluster_database_instances=2
    *.cluster_database=TRUE
    *.compatible='10.2.0.0.0'
    *.control_file_record_keep_time=95
    *.control_files='+DG_DATA/instdb/controlfile/backup.305.615208725','+DG_FLASH/instdb/controlfile/current.256.614223119'
    *.core_dump_dest='/u1/app/oracle/admin/instdb/cdump'
    *.db_block_size=8192
    *.db_create_file_dest='+DG_DATA'
    *.db_create_online_log_dest_1='+DG_FLASH'
    *.db_domain='inst.xx'
    *.db_file_multiblock_read_count=16
    *.db_flashback_retention_target=1440
    *.db_name='inst'
    *.db_recovery_file_dest='+DG_DATA'
    *.db_recovery_file_dest_size=53687091200
    instdb1.instance_number=1
    instdb2.instance_number=2
    *.job_queue_processes=10
    instdb1.local_listener='LISTENER_INST1.INST.XX'
    instdb2.local_listener='LISTENER_INST2.INST.XX'
    instdb1.log_archive_dest_1='LOCATION=/u1/app/oracle/admin/inst/arch_orainst1'
    instdb2.log_archive_dest_1='LOCATION=/u1/app/oracle/admin/inst/arch_orainst2'
    *.log_archive_dest_2='SERVICE=INSTB.INST.XX VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) OPTIONAL LGWR ASYNC NOAFFIRM NET_TIMEOUT=10'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='inst_%t_%s_%r.arc'
    *.max_dump_file_size='200000'
    *.open_cursors=300
    *.parallel_max_servers=20
    *.pga_aggregate_target=824180736
    *.processes=550
    instdb1.remote_listener='LISTENER_INST1.INST.XX'
    instdb2.remote_listener='LISTENER_INST2.INST.XX'
    *.remote_login_passwordfile='EXCLUSIVE'
    *.resource_limit=TRUE
    *.session_max_open_files=20
    *.sessions=480
    *.sga_target=1610612736
    instdb1.thread=1
    instdb2.thread=2
    *.undo_management='AUTO'
    instdb1.undo_tablespace='UNDOTBS1'
    instdb2.undo_tablespace='UNDOTBS2'
    *.user_dump_dest='/u1/app/oracle/admin/inst/udump'
    And here is the test I have done :
    *1. modified the init pfile to this :*
    inst.__db_cache_size=1107296256
    inst.__java_pool_size=16777216
    inst.__large_pool_size=16777216
    inst.__shared_pool_size=436207616
    inst.__streams_pool_size=16777216
    *.audit_trail='DB'
    *.background_dump_dest='C:\Oracle\admin\inst\bdump'
    *.compatible='10.2.0.5.0'
    *.control_file_record_keep_time=95
    *.control_files='C:\Oracle\oradata\inst\control01.ctl','C:\Oracle\oradata\inst\control02.ctl','C:\Oracle\oradata\inst\control03.ctl'
    *.core_dump_dest='C:\Oracle\admin\inst\cdump'
    *.db_block_size=8192
    *.db_create_file_dest='C:\Oracle\oradata\inst'
    *.db_create_online_log_dest_1='C:\Oracle\inst'
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_flashback_retention_target=1440
    *.db_name='inst'
    *.db_recovery_file_dest='C:\Oracle\oradata'
    *.db_recovery_file_dest_size=53687091200
    *.job_queue_processes=10
    inst.log_archive_dest_1='LOCATION=C:\Oracle\oradata'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='inst_%t_%s_%r.arc'
    *.max_dump_file_size='200000'
    *.open_cursors=300
    *.parallel_max_servers=20
    *.pga_aggregate_target=824180736
    *.processes=550
    *.remote_login_passwordfile='EXCLUSIVE'
    *.resource_limit=TRUE
    *.session_max_open_files=20
    *.sessions=480
    *.sga_target=1610612736
    inst.thread=1
    *.undo_management='AUTO'
    inst.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\Oracle\admin\inst\udump'
    *2. shutdown the database, startup in nomount and restore controlfile (with the error when trying to restore controlfile) :*
    RMAN> shutdown immediate;
    Oracle instance shut down
    RMAN> startup nomount pfile='C:\Oracle\init\initInst.ora';
    connected to target database (not started)
    Oracle instance started
    Total System Global Area 1610612736 bytes
    Fixed Size 1305856 bytes
    Variable Size 369099520 bytes
    Database Buffers 1233125376 bytes
    Redo Buffers 7081984 bytes
    RMAN> restore controlfile from 'C:\Oracle\ctl\inst_ctrl_c-2972284490-20120318-00';
    Starting restore at 04-MAY-12
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=596 devtype=DISK
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 05/04/2012 14:20:12
    RMAN-06172: no autobackup found or specified handle is not a valid copy or piece
    Thank you for your help.
    Max

  • AUXILIARY database update using full backup from target database

    Hi,
    I am now facing the problem of how to keep an AUXILIARY database consistent with the target database over a certain period (a week). I do a full backup of our target database every day using RMAN. I know it is possible to use expdp to achieve this, but I want to use the current full backup instead. Does anybody have ideas or experience with that? Thanks in advance!
    Regards,
    lik

    That's OK. If you don't use RMAN to clone your database, you can simply create the clone from a cold backup of the primary database.
    Important things are
    1) you must catalog all datafiles as image copy level 0 in the cloned database
    RMAN> connect catalog rman/rman@rcvcat (in host 1)
    RMAN> connect target sys/manager@clonedb (in host 2)
    RMAN> catalog datafilecopy
    '/oracle/oradata/CLONE/datafile/abc.dbf',
    '/oracle/oradata/CLONE/datafile/def.dbf',
    '/oracle/oradata/CLONE/datafile/ghi.dbf'
    level 0 tag 'CLONE';
    2) You need to make incrementals of the primary database to refresh the clone database. Make sure that you specify a tag for the incremental, and that the tag name is exactly the same as the one used in step (1).
    RMAN> connect catalog rman/rman@rcvcat (in host 1)
    RMAN> connect target sys/manager@prod (in host 3)
    RMAN> backup incremental level 1 tag 'CLONE' for recover of copy with tag 'CLONE' database format '/backup/%u';
    3) Copy the newly created incrementals (in host 3) to the clone database site (host 2). Make sure the directory is exactly the same.
    $ rcp /backup/<incr_backup> /backup/
    -- rcp <the loc of a incremental in host 3> <the loc of a incremental in host 2>
    4) Apply incrementals to update the clone database. Make sure you provide the tag you specified.
    RMAN> connect catalog rman/rman@rcvcat
    RMAN> connect target sys/manager@clone
    RMAN> recover copy of database with tag 'CLONE';
    5) After updating the clone database, delete the incremental backups and uncatalog the image copies
    RMAN> delete backup tag 'CLONE';
    RMAN> change copy like '/oracle/oradata/CLONE/datafile/%' uncatalog;
    *** As you can see, you can clone a database using any method. The key is that you have to catalog the clone database's datafile copies when you refresh it. After finishing, uncatalog them.
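    The role of the tag in the steps above can be sketched with a toy model (an illustration only, not RMAN itself): 'recover copy ... with tag' pairs incrementals with the cataloged datafile copies that carry the same tag.

```python
# Toy catalog: one cataloged datafile copy and two incrementals with
# different tags. Only the matching tag rolls the copy forward.

catalog = {
    "copies": [{"file": "abc.dbf", "tag": "CLONE", "scn": 100}],
    "incrementals": [{"tag": "CLONE", "to_scn": 200},
                     {"tag": "OTHER", "to_scn": 300}],
}

def recover_copy(catalog, tag):
    for copy in catalog["copies"]:
        if copy["tag"] != tag:
            continue
        for incr in catalog["incrementals"]:
            if incr["tag"] == tag and incr["to_scn"] > copy["scn"]:
                copy["scn"] = incr["to_scn"]   # roll the copy forward
    return catalog["copies"]

print(recover_copy(catalog, "CLONE"))   # scn advances to 200, not 300
```

    This is why a mismatched tag in step (2) silently leaves the clone's copies unrefreshed.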

  • One full backup job to run a full backup of all databases, and it failed. I post the error message. Any help?

    Executed as user: abc\user1. ... 2004-2009, Quest Software Inc. Registered Name: abc INC 
    Processed 1152 pages for database 'abc123', file 'abc123' on file 1. Processed 4 pages for database 'abc123', file 'abc123_log' on file 1. BACKUP DATABASE successfully processed 1156 pages in 0.725 seconds (13.051 MB/sec). 
    Backup added as file number: 1  Native Size: 11.19 MB Backup Size: 1.87 MB CPU Seconds: 0.27 [SQ 
    The backup set on file 1 is valid.  CPU Seconds: 0.25 [SQLSTATE 01000] (Message 1) 
    LiteSpeed(R) for SQL Server Version 5.1.0.1293 Copyright 2004-2009, Quest Software Inc. Registered Name: abc INC 
    Processed 456 pages for database 'WSS_Search_abc1', file 'WSS_Search_abc1' on file 1. Processed 24 pages for database 'WSS_Search_abc1', file 'WSS_Search_abc1_Data2' on file 1. Processed 1 pages for database 'WSS_Search_abc1', file 'WSS_Search_abc1_log'
    ...  The step failed.

    Hi bestrongself,
    Before you use a SQL Server Agent job to back up all databases, I recommend you run the backup statement in a query window directly and check whether it runs well. I did a test using the following statement:
    DECLARE @name VARCHAR(50) -- database name
    DECLARE @path VARCHAR(256) -- path for backup files
    DECLARE @fileName VARCHAR(256) -- filename for backup
    DECLARE @fileDate VARCHAR(20) -- used for file name
    -- specify database backup directory
    SET @path = 'C:\Backup\'
    -- specify filename format
    SELECT @fileDate = CONVERT(VARCHAR(20),GETDATE(),112)
    DECLARE db_cursor CURSOR FOR
    SELECT name
    FROM master.dbo.sysdatabases
    WHERE name NOT IN ('master','model','msdb','tempdb') -- exclude these databases
    OPEN db_cursor
    FETCH NEXT FROM db_cursor INTO @name
    WHILE @@FETCH_STATUS = 0
    BEGIN
    SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
    BACKUP DATABASE @name TO DISK = @fileName
    FETCH NEXT FROM db_cursor INTO @name
    END
    CLOSE db_cursor
    DEALLOCATE db_cursor
    The script allows you to back up each database within your instance of SQL Server.
    Your account needs read and write permission on the backup path.
    If the above script runs well directly, then you can create a job and put your backup statement inside it. There is a detailed article about how to create a simple backup job in SQL Server. You can review it.
    http://www.petri.co.il/create-backup-job-in-sql-server.htm
    Regards,
    Sofiya Li
    TechNet Community Support

  • How can I do a FULL Backup to Oracle 8i Database?

    Hi,
    We are using a product that uses Oracle 8i and we are trying to perform a FULL backup of the database. I am not an Oracle person and am not sure how to do this.
    Please also inform me on how to perform a FULL restore of the backed-up database. I would really appreciate it.

    Before I can amply reply to your post, I need to know what tools you have.
    What is your O.S.?
    What do you want to backup to? (Tape Drive/Hard Disk)
    What time frame requirements do you have?
    Are you running in Archive Log Mode? <-- (in case you do not know, log in with svrmgrl, connect internal, run ARCHIVE LOG LIST; and tell me what is reported on the top line of output.)
    Do you have requirements that the database must be Online at all times?
    Are you using 3rd party Backup Software, if so, which one(s)?
    I know these are a lot of questions, but it helps me help you. :)
    Rgds,
    Mark Brown
    SAP Japan

  • Is it possible to restore database from a full backup without controlfile?

    Hi,
    I have done a full backup for the database oracle 11g using:
    run{
    allocate channel c1 type disk;
    set limit channel c1 kbytes 102400;
    sql 'alter system archive log current';
    backup full tag 'dbfull' format 'd:\dbbackup\f_%d_%u_%s_%p_%t' database include current controlfile;
    backup archivelog all delete input format 'd:\dbbackup\al__%d_%u_%s_%p_%t';
    release channel c1;
    }
    but I deleted all the datafiles, including all control files.
    Is it possible to restore the database?
    Is there a way to restore the control file from the backup files?

    Thanks Werner!
    I finally found the correct backup piece for the controlfile.
    D:\dbbackup
    2010-03-19 18:41 <DIR> .
    2010-03-19 18:41 <DIR> ..
    2010-03-19 18:41 5,711,872 AL__ORCL_04L8VSN6_4_1_714076902
    2010-03-19 18:41 104,857,600 F_ORCL_02L8VSJ0_2_10_714076768
    2010-03-19 18:41 104,857,600 F_ORCL_02L8VSJ0_2_11_714076768
    2010-03-19 18:41 104,857,600 F_ORCL_02L8VSJ0_2_12_714076768
    2010-03-19 18:41 24,535,040 F_ORCL_02L8VSJ0_2_13_714076768
    2010-03-19 18:39 104,857,600 F_ORCL_02L8VSJ0_2_1_714076768
    2010-03-19 18:39 104,857,600 F_ORCL_02L8VSJ0_2_2_714076768
    2010-03-19 18:40 104,857,600 F_ORCL_02L8VSJ0_2_3_714076768
    2010-03-19 18:40 104,857,600 F_ORCL_02L8VSJ0_2_4_714076768
    2010-03-19 18:40 104,857,600 F_ORCL_02L8VSJ0_2_5_714076768
    2010-03-19 18:40 104,857,600 F_ORCL_02L8VSJ0_2_6_714076768
    2010-03-19 18:40 104,857,600 F_ORCL_02L8VSJ0_2_7_714076768
    2010-03-19 18:40 104,857,600 F_ORCL_02L8VSJ0_2_8_714076768
    2010-03-19 18:40 104,857,600 F_ORCL_02L8VSJ0_2_9_714076768
    2010-03-19 18:41 9,830,400 F_ORCL_03L8VSMP_3_1_714076889 ----> the controlfile backup piece; the file name contains '03' instead of '02' (datafile).
    After running 'startup nomount', I restored the controlfile using: restore controlfile from 'd:\dbbackup\F_ORCL_03L8VSMP_3_1_714076889';
    The key point is to find the right backup file piece for the control file.

  • Is it possible to take a full backup without shutting down an instance? (10g)

    Hi. everyone.
    Is it possible to take a full backup without shutting down an instance?
    The db version is 10gr2.
    As far as I know, in order to take a full backup of a database,
    one needs to shut down the database.
    Our db environment is a RAC environment which has two nodes.
    Yesterday, I heard from a hardware vendor that we do not need to shut down the
    instances (2 nodes) in order to take a full backup.
    Best Regards.

    Hi,
    >>As far as I know, in order to take a full backup of a database, one need to shutdown a database.
    In fact, for a database operating in NOARCHIVELOG mode it is necessary to shut down the database in order to perform a full backup (cold backup); but for a database operating in ARCHIVELOG mode, it is possible to perform a full database backup with the database in an open state (hot backup).
    For more information see if these links below can help you:
    Oracle Backup and Recovery
    http://www.oracle.com/technology/deploy/availability/htdocs/BR_Overview.htm
    Oracle Backup and Recovery FAQ
    http://orafaq.com/faqdbabr.htm
    Cheers

  • Backup UDB DPF Databases

    Hi,
    I am trying to back up a DPF database via transaction DB13, but there seems to be no way to do it.
    The standard backup actions shown in the action pad are only for single-partition databases. Because DB13 does not support event control, I am not able to chain single jobs for each partition to run one after another. Time-oriented control is no
    workable alternative for backing up DPF databases.
    How do you think I should back up the database?
    Best regards, Tino

    Hi Waldemar,
    yes, you are right, I can generate a backup command using the "CLP Script Maintenance" in NW2004s. But I am not able to use the recommended commands...
    db2_all "<<+0< db2 backup database lxd to /backup/LXD"
    db2_all "|<<-0< db2 backup database lxd to /backup/LXD"
    ...because the CLP does not support db2_all.
    The SAPGUI offers the option to generate a script for every node and run these scripts from the Planning Calendar - but I am not able to run them in parallel. A bad limitation when dealing with bigger databases.
    The second problem I see is that it is not supported to define relationships between the end of one backup (backup catalog node) and the beginning of the other nodes - no event control is implemented!
    My impression is still the same: I am not able to plan a full backup for DPF databases using the DB6 Cockpit.

  • Upgraded to 11.2.0.2 and can't take a full backup now

    Hi!
    I'm new to this forum (and SAP) so hopefully I'm posting in the right place.  My role is Database Administrator.
    I upgraded a test database to 11.2.0.2 using the DBUA and now am unable to get a full backup of the database.  Backups are fine in the dev database that was upgraded - a cron job calls a .ksh script which calls BRBACKUP.
    BR0252E Function fopen() failed for '/oracle/$/.saparch$arch1_659_748560082.dbf' at location main-27
    BR0253E errno 2: No such file or directory
    It's looking for archive log 659, which hasn't been written to the saparch directory - the archive logs written out only go up to sequence 658.
    I opened an SR with Oracle, and they said that because the backup is being done with an SAP tool, I should be looking to SAP for support.
    A lot of the BR0252E errors that I've seen online seem to be related to permissions.  Another important thing to note is that my Exceed session with the DBUA crashed when I remoted in from home to finish the upgrade.  I checked the error logs and didn't see any errors, so I proceeded with the upgrade and the database is healthy.  Could there be a saproot.sh script that I missed running, even though I'm not getting a permissions error?
    Thanks for your assistance!
    Erin

    Hi,
    Read this sap note
    Note 952080 - BR*Tools fail with BR0252E Function stat() failed
    [https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=952080]
    Symptom
    BR*Tools fail with the following error messages:
    BR0252E Function stat() failed for 'origlog' at location BrDiskStatGet-1
    BR0253E errno 2: No such file or directory
    BR0277E Determination of disk volume status for origlog failed
    Other terms
    BR*Tools
    Reason and Prerequisites
    The problem is caused by configuration problems with soft links.
    Solution
    Due to technical reasons, BR*Tools does not accept relative soft links in the SAPDATA_HOME directory. The following soft links cause the above mentioned errors:
    /oracle/SID/origlogA -> origlog
    /oracle/SID/sapdata9 -> sapdata
    Use absolute soft links instead:
    /oracle/SID/origlogA -> /oracle/SID/origlog
    /oracle/SID/sapdata9 -> /oracle/SID/sapdata
    Thanks
    Siva

  • Understanding replica volume and recovery point volume usage with SQL Express Full Backup

    I am running some trials to test DPM 2012 R2's suitability for protecting a set of SQL Server databases, and I am trying to understand what happens when I create a recovery point with Express Full Backup.
    The databases use the simple recovery model, and in the tests I have made so far I have loaded more data into the databases between recovery points, since that will be a typical scenario - the databases will grow over time. The database files are set to autogrow
    by 10%.
    I have been looking at the change in USED space in the replica volume and in the recovery point volume after new recovery points and have a hard time understanding it.
    After the first test, where data was loaded into the database and an Express Full Backup recovery point was created, I saw an increase in used space of 85 GB in the replica volume and 29 GB in the recovery point volume. That is somewhat more than I think
    the database grew (I realize that I should have monitored that, but did not), but anyway it is not completely far out.
    In the next test I did the same thing except I loaded twice as much data into the database.
    Here is where it gets odd: This causes zero increased usage in the replica volume and 33 GB increased use in the recovery point volume.
    I do not understand why the replica volume use increases with some recovery points and not with others.
    Note that I am only discussing increased usage in the volumes - not actual volume growth. The volumes are still their original size.
    I have been using 3-4 days on the test and the retention period is set to 12 days, so nothing should be expired yet.

    Hi,
    The replica volume usage represents the physical database file(s) size. The database file size on the replica should be equal to the database file size on the protected server.  This includes both .mdf and .ldf files.  If, when you load data
    into the database, you overwrite current tables rather than adding new ones, or if there is white space in the database files and the load simply uses that white space, then there will not be any increase in the file size, so there will not be any increase
    in the replica used space.
    The recovery point volume will only contain delta changes applied to the database files.  As the changed blocks overwrite the files on the replica during express full backup, VSS (volsnap.sys) driver copies the old blocks about to be overwritten
    to the recovery point volume before allowing the change to be applied to the file on the replica. 
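    The copy-on-write accounting described above can be sketched with a toy model (an illustration, not DPM's actual implementation): overwriting existing blocks grows only the recovery point volume, while appending new blocks grows only the replica.

```python
# Toy model of VSS copy-on-write during an express full backup.

class Replica:
    def __init__(self, blocks):
        self.blocks = dict(blocks)        # block_id -> data (replica volume)
        self.recovery_point = {}          # old versions saved by volsnap

    def express_full(self, changes):
        for block_id, data in changes.items():
            if block_id in self.blocks and self.blocks[block_id] != data:
                # volsnap copies the old block before it is overwritten
                self.recovery_point.setdefault(block_id, self.blocks[block_id])
            self.blocks[block_id] = data

r = Replica({0: "a", 1: "b"})
r.express_full({0: "A", 1: "B"})              # pure overwrite
print(len(r.blocks), len(r.recovery_point))   # prints: 2 2 (replica unchanged)
r.express_full({2: "c"})                      # new data appended
print(len(r.blocks), len(r.recovery_point))   # prints: 3 2 (replica grew)
```

    This matches the observation in the question: a load that reuses existing space shows up only as recovery point volume growth.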
    Hope this helps explain what you are seeing.
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Create Incremental Full Backup

    Hey, I thought I had created an incremental full backup with the attached RMAN script.
    But at the point of restoring the database it turns out that I just have an incremental backup of the source database.
    What's wrong in my configuration? I would like to have a daily full backup of the database where just the changes since the full backup are backed up, but the backup file should always be a full backup.
    I am using 10gR2 Standard Edition
    CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backup/app/control_%F';
    CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/app/LEVEL0_%u_%T';
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/product/10gR2/db_1/dbs/snapcf_db01.f'; # default
    # Database RAC Logswitch
    sql 'alter system archive log current';
    # Fullbackup Database - Perform incremental level 0 backup
    run {
    BACKUP incremental level 0 format '/backup/app/%d_Level0_%T_%U' database PLUS ARCHIVELOG format '/backup/app/%d_Archivelog_%T_%U';
    }

    An incremental backup is a fresh backup of all the changes that have occurred in the database since the last backup.
    Thus, a level 1 incremental backup is a smaller backup than the preceding level 0 backup.
    However, when you say:
    "I like to have a daily full backup of the database where just the changes to the full backup are backed up, but the backup file should always be a full backup"
    - you seem to want an Incrementally Updated Backup. That is a backup strategy where your backup itself is updated.
    I am not sure if this is available in the Standard Edition.
    Here's documentation on Incrementally Updated Backups : at 4.4.3 Incrementally Updated Backups: Rolling Forward Image Copy Backups
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkup004.htm#sthref408
    While this is the "normal" Incremental Backups : at 4.4 RMAN Incremental Backups
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkup004.htm#sthref383
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
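    The incrementally-updated-backup idea from the links above can be sketched as a small model (conceptual only, not RMAN itself): a level 0 image copy is rolled forward each day by merging the latest level 1 incremental, so the on-disk copy always stays a usable full backup.

```python
# Model: image copy rolled forward by incrementals ("recover copy of database").

database = {1: "v0", 2: "v0", 3: "v0"}    # block_id -> contents
image_copy = dict(database)               # level 0 image copy

def take_incremental(db, since):
    """Level 1 incremental: only blocks that changed relative to the copy."""
    return {b: v for b, v in db.items() if since.get(b) != v}

# Day 2: one block changes, then the copy is rolled forward
database[2] = "v1"
incr = take_incremental(database, image_copy)
image_copy.update(incr)                   # merge step: copy is full again
print(incr, image_copy == database)
```

    The incremental stays small (only the changed blocks), yet after the merge the image copy is once again a complete full backup.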

  • Syntax for full backup thru RMAN

    hi all ..
    I have Oracle 11g installed on an M5000, Solaris 10.
    I want to take a full backup of my database through RMAN.
    I have entered the following on the backup server:
    . .db/env
    connect catalog rrkas/rr345
    (connected)
    connect target sys/root123@grid1
    (connected)
    RMAN> run {
    allocate channel dev1 type disk;
    backup database
    format '/export/home/oracle/dumps/grid3_%t_%sp%p';
    release channel dev1;
    }
    It started the job. On my db server the file is created as per the format.
    Just want to ask, is this the right way to take a full/physical backup? Is this backup enough to fully recover the files, in case a new RAC node is added to the cluster?
    thanks in advance.

    If your database is running in ARCHIVELOG mode, this is not enough: you must back up archived redo logs, for example with
    backup database plus archivelog. Read about consistent and inconsistent backups in http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmcncpt.htm#BABIHBBE and about archived redo log backups in http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmbckba.htm#i1006454
    Edited by: P. Forstmann on 20 nov. 2009 07:42

  • Full backup ( ~ 1 TB ) using external HD - speed USB vs. Firewire vs. eSATA - what are relative speeds - how to install eSATA on mid-2010 Mac Pro desktop ( dual hex-core processors)

        Hi All,
    I'm trying to resume regularly scheduled full backups ( ~ 1 TB ) of my drives using an external HD (to allow off-site redundant backup storage).
    What are the relative speeds of USB vs. Firewire vs. eSATA?
    I suspect an eSATA connection may be considerably faster … how do I install eSATA on a mid-2010 Mac Pro desktop ( dual hex-core processors)?
    ( The quicker and easier a backup protocol is, the more likely one is to use it to back up on a routine, repetitive basis.)
    Thanks

    Jim Bogy wrote:
    ...I suspect eSATA connection may be considerably faster … how to install eSATA on mid-2010 Mac Pro desktop ( dual hex-core processors)?
          ( The quicker and easier backup protocol is, the more likely one is to use it to backup on a routine repetitive basis.)
    Adding a USB 3.0 + eSATA PCIe card, which The hatter mentions, is the best solution that I've found. See http://eshop.macsales.com/item/CalDigit/FASTA6GU3/. The card is not cheap, but the USB 3.0 works flawlessly (which can't be assumed; ask me how I know) and the eSATA connection allows booting from the connected drive.
    Grant Bennet-Alder's point about HD speed is important to consider; in addition, the size of the individual files being backed up, and where on the backup disk they're going, will affect overall transfer speed. For example, using the USB 3.0 connection on that CalDigit card going to a Toshiba 3 TB external, the transfer rate for a big file (say a virtual machine file) from an internal SSD boot drive was about 145 MB/sec, while a bunch of little files might drop to 30 MB/sec, and both rates decrease as an inner partition on the external is used. All told, a nearly 700 GB backup took under 1.5 hours. Using a HD as the source added almost an extra hour, though a WD external was used for that. Using a WD green drive plugged into this http://eshop.macsales.com/item/NewerTech/FWU3ES2HDK/ with an eSATA connection took about 2.5 hours, but that was bootable, whereas the USB 3.0 connection is not.
    Another point to consider is that USB 3.0 is ubiquitous on PCs now, so there's lots of price competition for externals; not so much for eSATA externals.
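
    The rates quoted above translate directly into rough duration estimates. A small sketch (sizes and rates taken from the reply above; 1 GB = 1024 MB, as disk tools usually report):

    ```python
    def transfer_hours(size_gb: float, rate_mb_per_s: float) -> float:
        """Estimate transfer time in hours for size_gb gigabytes
        at a sustained rate of rate_mb_per_s megabytes per second."""
        return size_gb * 1024 / rate_mb_per_s / 3600

    # ~700 GB of large files over USB 3.0 at ~145 MB/s: a bit under 1.5 hours
    print(round(transfer_hours(700, 145), 2))   # → 1.37
    # the same data at ~30 MB/s (many small files): several times longer
    print(round(transfer_hours(700, 30), 2))    # → 6.64
    ```

    This is why the file-size mix matters as much as the interface: a backup dominated by small files can take far longer than the interface's headline speed suggests.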

  • Restore database in the past using only archivelog without full backup

    Hi,
    We have an 11g Oracle database up and running.
    We don't have a full backup, but we have all archived logs from the last 2 months.
    Is it possible to "restore" the database to a date in the past using only archived logs?
    I mean, for example, 3 days ago?
    Thanks in advance.

    user8973191 wrote:
    Oh, ok Vijayaraghavan K.
    Thanks for your help.
    And about the users? Do I need to create the "same" user on the other machine?
    For example: on this machine I am using the "system" user, where I have my tables. On the other machine, when I restore, will my tables go to the "system" user too? Or can I choose? Or do I need to create one?

    A true backup is a copy of the data files at the file/block level. The restore is therefore a restore of the data files/blocks. Applying the redo (archive logs) is also done at the block level. None of that knows or cares (or needs to know or care) about logical objects (such as users, tablespaces, tables, rows, etc.) within the database. So if you do a proper restore, you are restoring files to a consistent state, and thus everything that was defined within those files will be there when restored.
