Recover full backup into another database

Hello,
I have a somewhat unusual need and I just cannot get it to work.
So here is the situation: I have a full backup of a database. That means I have the init parameter file, the controlfile autobackup (+ pfile), and the backupsets.
The source database is release 10.2.0.5.0, RAC instance.
On another server I have a single (non-RAC) instance of the same release, and I would like to recover the full backup into this second database.
I have already done this once before, but in that case both the pfile and the controlfile had been backed up manually, and both instances were single-instance databases.
Here I have tried the same approach: shut down my target database, edit the backed-up pfile so its parameters match the target database, start the target database in NOMOUNT mode using that pfile, create an spfile from the pfile, then restore the controlfile from the controlfile backup with RMAN.
But this last step fails.
My question is simple : what is the best way / good practices to get this working?
Thanks in advance for your help. Ask if you need any further information.
Max

Hello,
Here is the original init pfile content:
instdb1.__db_cache_size=1107296256
instdb2.__db_cache_size=1023410176
instdb2.__java_pool_size=16777216
instdb1.__java_pool_size=16777216
instdb2.__large_pool_size=16777216
instdb1.__large_pool_size=16777216
instdb1.__shared_pool_size=436207616
instdb2.__shared_pool_size=520093696
instdb2.__streams_pool_size=16777216
instdb1.__streams_pool_size=16777216
*.audit_trail='DB'
*.background_dump_dest='/u1/app/oracle/admin/instdb/bdump'
*.cluster_database_instances=2
*.cluster_database=TRUE
*.compatible='10.2.0.0.0'
*.control_file_record_keep_time=95
*.control_files='+DG_DATA/instdb/controlfile/backup.305.615208725','+DG_FLASH/instdb/controlfile/current.256.614223119'
*.core_dump_dest='/u1/app/oracle/admin/instdb/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DG_DATA'
*.db_create_online_log_dest_1='+DG_FLASH'
*.db_domain='inst.xx'
*.db_file_multiblock_read_count=16
*.db_flashback_retention_target=1440
*.db_name='inst'
*.db_recovery_file_dest='+DG_DATA'
*.db_recovery_file_dest_size=53687091200
instdb1.instance_number=1
instdb2.instance_number=2
*.job_queue_processes=10
instdb1.local_listener='LISTENER_INST1.INST.XX'
instdb2.local_listener='LISTENER_INST2.INST.XX'
instdb1.log_archive_dest_1='LOCATION=/u1/app/oracle/admin/inst/arch_orainst1'
instdb2.log_archive_dest_1='LOCATION=/u1/app/oracle/admin/inst/arch_orainst2'
*.log_archive_dest_2='SERVICE=INSTB.INST.XX VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) OPTIONAL LGWR ASYNC NOAFFIRM NET_TIMEOUT=10'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='inst_%t_%s_%r.arc'
*.max_dump_file_size='200000'
*.open_cursors=300
*.parallel_max_servers=20
*.pga_aggregate_target=824180736
*.processes=550
instdb1.remote_listener='LISTENER_INST1.INST.XX'
instdb2.remote_listener='LISTENER_INST2.INST.XX'
*.remote_login_passwordfile='EXCLUSIVE'
*.resource_limit=TRUE
*.session_max_open_files=20
*.sessions=480
*.sga_target=1610612736
instdb1.thread=1
instdb2.thread=2
*.undo_management='AUTO'
instdb1.undo_tablespace='UNDOTBS1'
instdb2.undo_tablespace='UNDOTBS2'
*.user_dump_dest='/u1/app/oracle/admin/inst/udump'
And here is the test I have done:
*1. I modified the init pfile to this:*
inst.__db_cache_size=1107296256
inst.__java_pool_size=16777216
inst.__large_pool_size=16777216
inst.__shared_pool_size=436207616
inst.__streams_pool_size=16777216
*.audit_trail='DB'
*.background_dump_dest='C:\Oracle\admin\inst\bdump'
*.compatible='10.2.0.5.0'
*.control_file_record_keep_time=95
*.control_files='C:\Oracle\oradata\inst\control01.ctl','C:\Oracle\oradata\inst\control02.ctl','C:\Oracle\oradata\inst\control03.ctl'
*.core_dump_dest='C:\Oracle\admin\inst\cdump'
*.db_block_size=8192
*.db_create_file_dest='C:\Oracle\oradata\inst'
*.db_create_online_log_dest_1='C:\Oracle\inst'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_flashback_retention_target=1440
*.db_name='inst'
*.db_recovery_file_dest='C:\Oracle\oradata'
*.db_recovery_file_dest_size=53687091200
*.job_queue_processes=10
inst.log_archive_dest_1='LOCATION=C:\Oracle\oradata'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='inst_%t_%s_%r.arc'
*.max_dump_file_size='200000'
*.open_cursors=300
*.parallel_max_servers=20
*.pga_aggregate_target=824180736
*.processes=550
*.remote_login_passwordfile='EXCLUSIVE'
*.resource_limit=TRUE
*.session_max_open_files=20
*.sessions=480
*.sga_target=1610612736
inst.thread=1
*.undo_management='AUTO'
inst.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\Oracle\admin\inst\udump'
*2. I shut down the database, started up in NOMOUNT and tried to restore the controlfile (the restore fails with the error shown):*
RMAN> shutdown immediate;
Oracle instance shut down
RMAN> startup nomount pfile='C:\Oracle\init\initInst.ora';
connected to target database (not started)
Oracle instance started
Total System Global Area 1610612736 bytes
Fixed Size 1305856 bytes
Variable Size 369099520 bytes
Database Buffers 1233125376 bytes
Redo Buffers 7081984 bytes
RMAN> restore controlfile from 'C:\Oracle\ctl\inst_ctrl_c-2972284490-20120318-00';
Starting restore at 04-MAY-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=596 devtype=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 05/04/2012 14:20:12
RMAN-06172: no autobackup found or specified handle is not a valid copy or piece
Thank you for your help.
Max
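Two general things may be worth checking here; this is hedged, since I can only go by the output above. RMAN-06172 against an explicitly named piece usually means RMAN does not consider the file a valid autobackup: either the piece was corrupted in transit (a classic cause when moving files to a Windows host is an FTP transfer in ASCII mode, so comparing byte sizes with the original is a cheap sanity check), or the instance has not been told the source DBID yet. The autobackup name c-2972284490-20120318-00 follows the c-DBID-YYYYMMDD-sequence pattern, so the source DBID appears to be 2972284490, and setting it before the restore may help:

```
RMAN> SET DBID 2972284490;
RMAN> RESTORE CONTROLFILE FROM 'C:\Oracle\ctl\inst_ctrl_c-2972284490-20120318-00';
```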

Similar Messages

  • Time Machine is making a full backup into another directory

    Hi, I had been using Time Machine for a month, then stopped using it because of a trip outside the country. When I came back (4 days ago) I tried to back up again and saw that it was backing up the entire machine again... I looked in the Finder under the backupdb folder and it now has 2 directories: one with the full backups since June and another with no backups, because I cancelled the backup.
    Does anyone know how to fix it?
    Thanks.

    Patrick Lafferty wrote:
    Today though in proceeding with a new backup, it, (like with your experience), created a new directory with a whole backup of the entire machine....I see two directories now on the TM volume. Is there a way to combine the two directories?
    No. TM keeps the backups for each Mac completely separate, using each Mac's unique Ethernet Address, contained in the hardware. The main purpose of this is to allow you to back up 2 or more Macs to the same drive without conflicts.
    There is no way to combine them.
    My 1TB backup HDD just had a huge chunk taken out of it with this backup of what would seem to be redundant data.
    Yes, most of it is. But the old Mac's version of OSX is somewhat different from your new one: not only is much of the hardware different, so are the processors.
    Worse, the old backups are now "stranded:" since they're for a different Mac, TM on the new Mac won't delete the old ones when it needs to make room for new backups. Instead, it will delete the oldest backups from the new Mac.
    So your best bet would be to simply erase the TM drive with Disk Utility, and let TM start fresh with your new Mac.
    As an alternative, you can selectively delete the old backups via Time Machine (do not use the Finder). This is one-by-one, so rather tedious and time-consuming.
    Since they're from a different Mac, you can view them via the instructions in item 17 of the Frequently Asked Questions post at the top of this forum, and use the procedure in item 12 to delete individual backups.

  • Import data backup into another DB instance

    Hello all.
    I need to import a data backup into another database instance. MaxDB version 7.3.
    On higher versions of MaxDb I performed this operation without any problems (http://help.sap.com/erp2005_ehp_04/helpdata/EN/43/d5ebc2c9ed3ab3e10000000a422035/frameset.htm).
    For this version of MaxDB I have the following problem:
    the commands don't match the help description. I define a backup template, put the DB instance into admin state, open a utility session and try to start the recovery (there is no db_activate RECOVER <template> command):
    dbmcli on Q46>util_connect
    OK
    dbmcli on Q46>recover_start DEMO
    ERR
    -24988,ERR_SQL: sql error
    -903,Message not available,blockcount mismatch
    What can I do? How can I start a recovery from a data backup of another DB on this version of MaxDB?

    I have tried to perform the backup of the original system and the recovery into another database one more time.
    This time there is another error:
    dbmcli on Q46>recover_start DemoD46_recover
    ERR
    -24988,ERR_SQL: sql error
    -3014,Invalid end of SQL statement
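    Not a definitive answer, but on the MaxDB versions I have used, a recovery is driven by a backup medium defined with medium_put rather than by a template, and a "blockcount mismatch" has usually meant the backup file does not match what the medium definition promises. A rough sketch of that route; the medium name, the file path, and whether 7.3 accepts exactly this syntax are all assumptions on my part:

```
dbmcli -d Q46 -u dbm,dbm
dbmcli on Q46>medium_put DemoMedium /backup/demo.dat FILE DATA
dbmcli on Q46>db_admin
dbmcli on Q46>util_connect
dbmcli on Q46>recover_start DemoMedium DATA
```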

  • Can I plug in (restore) an RMAN tablespace backup into another DB ?

    11.2.0.3/AIX 6.1
    We accidentally dropped a development DB without taking the latest expdp backup of an important schema. All the objects in this schema belonged to a single tablespace, and we have the RMAN backup of that tablespace. Is there any way we could recreate that schema by restoring the tablespace backup into another database?

    Yes.
    Use the TRANSPORT TABLESPACE feature/method :
    http://oracle.su/docs/11g/backup.112/e10643/rcmsynta2021.htm
    Also check out sys.dbms_tts.transport_set_check
    RMAN> transport tablespace emp_data, emp_data2
               tablespace destination '/u01/app/oracle/oradata'
               auxiliary destination '/u04/app/oracle/oradata';
    If you need to check your endian format, use this query:
    SELECT
      PLATFORM_NAME,
      ENDIAN_FORMAT
    FROM
      V$TRANSPORTABLE_PLATFORM;
    http://www.fadalti.com/oracle/database/how_to_transportable_tablespaces.htm
    http://husnusensoy.wordpress.com/2008/07/12/migrating-data-using-transportable-tablespacetts/
    Best Regards
    mseberg

  • RESTORE BACKUP IN ANOTHER DATABASE

    Hi,
    I used RMAN to make a full backup of a production database. Now I need to recover this same backup into another, test database. It is on another machine. How can I do it?
    My work so far:
    - I copied the backup files to the other machine, which has another database installed with the same version of Oracle Database 10g.
    - I put the test database in archivelog mode.
    And now?
    Sorry for my English.
    Thanks

    Hi,
    For the last 2 weeks I have been trying to clone a production database into a test database.
    I have searched and searched and have not gotten anywhere. I wrote and executed the following script:
    ==========================================================================
    RMAN> run {
    2> SET NEWNAME FOR DATAFILE 1 TO '/u02/oradata/oralab/system01.dbf';
    3> SET NEWNAME FOR DATAFILE 2 TO '/u02/oradata/oralab/undotbs01.dbf';
    4> SET NEWNAME FOR DATAFILE 3 TO '/u02/oradata/oralab/sysaux01.dbf';
    5> SET NEWNAME FOR DATAFILE 4 TO '/u02/oradata/oralab/users01.dbf';
    6> SET NEWNAME FOR DATAFILE 5 TO '/u02/oradata/oralab/integracao.dbf';
    7> SET NEWNAME FOR DATAFILE 6 TO '/u02/oradata/oralab/legis_data.dbf';
    8> SET NEWNAME FOR DATAFILE 7 TO '/u02/oradata/oralab/NOVA_INTERNET';
    9> SET NEWNAME FOR DATAFILE 8 TO '/u02/oradata/oralab/legis_text.dbf';
    10> SET NEWNAME FOR DATAFILE 9 TO '/u02/oradata/oralab/log01.dbf';
    11> SET NEWNAME FOR DATAFILE 10 TO '/u02/oradata/oralab/tecwin_web.dbf';
    SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
    12> 13> 14> 15> DUPLICATE TARGET DATABASE TO ORALAB
    16> PFILE=/u01/app/oracle/product/10.1.0/Db_1/dbs/initoralab.ora
    17> LOGFILE
    18> GROUP 1 ('/u02/oradata/oralab/redo01.log','/u02/oradata/oralab/redo01b.log') SIZE 10M REUSE,
    19> GROUP 2 ('/u02/oradata/oralab/redo02.log','/u02/oradata/oralab/redo02b.log') SIZE 10M REUSE,
    20> GROUP 3 ('/u02/oradata/oralab/redo03.log','/u02/oradata/oralab/redo03b.log') SIZE 10M REUSE,
    21> GROUP 4 ('/u02/oradata/oralab/redo04.log','/u02/oradata/oralab/redo04b.log') SIZE 10M REUSE,
    22> GROUP 5 ('/u02/oradata/oralab/redo05.log','/u02/oradata/oralab/redo05b.log') SIZE 10M REUSE;
    23> }
    EXIT
    ==========================================================================
    And I obtained the following result :
    ==========================================================================
    executing command: SET NEWNAME
    using target database controlfile instead of recovery catalog
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    sql statement: ALTER SYSTEM ARCHIVE LOG CURRENT
    Starting Duplicate Db at 07-AUG-06
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: sid=160 devtype=DISK
    contents of Memory Script:
    set until scn 213934104;
    set newname for datafile 1 to
    "/u02/oradata/oralab/system01.dbf";
    set newname for datafile 2 to
    "/u02/oradata/oralab/undotbs01.dbf";
    set newname for datafile 3 to
    "/u02/oradata/oralab/sysaux01.dbf";
    set newname for datafile 4 to
    "/u02/oradata/oralab/users01.dbf";
    set newname for datafile 5 to
    "/u02/oradata/oralab/integracao.dbf";
    set newname for datafile 6 to
    "/u02/oradata/oralab/legis_data.dbf";
    set newname for datafile 7 to
    "/u02/oradata/oralab/NOVA_INTERNET";
    set newname for datafile 8 to
    "/u02/oradata/oralab/legis_text.dbf";
    set newname for datafile 9 to
    "/u02/oradata/oralab/log01.dbf";
    set newname for datafile 10 to
    "/u02/oradata/oralab/tecwin_web.dbf";
    restore
    check readonly
    clone database
    executing Memory Script
    executing command: SET until clause
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    Starting restore at 07-AUG-06
    using channel ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: starting datafile backupset restore
    channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to /u02/oradata/oralab/system01.dbf
    restoring datafile 00002 to /u02/oradata/oralab/undotbs01.dbf
    restoring datafile 00003 to /u02/oradata/oralab/sysaux01.dbf
    restoring datafile 00004 to /u02/oradata/oralab/users01.dbf
    restoring datafile 00005 to /u02/oradata/oralab/integracao.dbf
    restoring datafile 00006 to /u02/oradata/oralab/legis_data.dbf
    restoring datafile 00007 to /u02/oradata/oralab/NOVA_INTERNET
    restoring datafile 00008 to /u02/oradata/oralab/legis_text.dbf
    restoring datafile 00009 to /u02/oradata/oralab/log01.dbf
    restoring datafile 00010 to /u02/oradata/oralab/tecwin_web.dbf
    channel ORA_AUX_DISK_1: restored backup piece 1
    piece handle=/u03/admin/integra/flash_recovery_area/INTEGRA/backupset/2006_08_07/o1_mf_nnndf_TAG20060807T103544_2fg5w27d_.bkp tag=TAG20060807T103544
    restore not complete
    failover to previous backup
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 08/07/2006 14:36:15
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-06026: some targets not found - aborting restore
    RMAN-06023: no backup or copy of datafile 10 found to restore
    RMAN-06023: no backup or copy of datafile 9 found to restore
    RMAN-06023: no backup or copy of datafile 8 found to restore
    RMAN-06023: no backup or copy of datafile 7 found to restore
    RMAN-06023: no backup or copy of datafile 6 found to restore
    RMAN-06023: no backup or copy of datafile 5 found to restore
    RMAN-06023: no backup or copy of datafile 4 found to restore
    RMAN-06023: no backup or copy of datafile 3 found to restore
    RMAN-06023: no backup or copy of datafile 2 found to restore
    RMAN-06023: no backup or copy of datafile 1 found to restore
    RMAN>
    Recovery Manager complete.
    ==========================================================================
    But the Backuset is OK, look:
    ==========================================================================
    RMAN> list backupset of database;
    using target database controlfile instead of recovery catalog
    List of Backup Sets
    ===================
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    264 Full 24G DISK 01:33:39 07-AUG-06
    BP Key: 264 Status: AVAILABLE Compressed: NO Tag: TAG20060807T103544
    Piece Name: /u03/admin/integra/flash_recovery_area/INTEGRA/backupset/2006_08_07/o1_mf_nnndf_TAG20060807T103544_2fg5w27d_.bkp
    List of Datafiles in backup set 264
    File LV Type Ckp SCN Ckp Time Name
    1 Full 213646558 07-AUG-06 /u03/integra/system01.dbf
    2 Full 213646558 07-AUG-06 /u03/integra/undotbs01.dbf
    3 Full 213646558 07-AUG-06 /u03/integra/sysaux01.dbf
    4 Full 213646558 07-AUG-06 /u03/integra/users01.dbf
    5 Full 213646558 07-AUG-06 /u03/integra/integracao.dbf
    6 Full 213646558 07-AUG-06 /u03/integra/legis_data.dbf
    7 Full 213646558 07-AUG-06 /u03/integra/NOVA_INTERNET
    8 Full 213646558 07-AUG-06 /u03/integra/legis_text.dbf
    9 Full 213646558 07-AUG-06 /u03/integra/log01.dbf
    10 Full 213646558 07-AUG-06 /u03/integra/tecwin_web.dbf
    =========================================================================
    I created the dir "/u03/admin/integra/flash_recovery_area" on the AUXILIARY database host
    and mounted - with NFS - the location where the backupsets are on the PRODUCTION database host.
    1 - The share works fine.
    2 - The RMAN backup works fine.
    3 - The restore to another database ON A DIFFERENT MACHINE does not work.
    I am certain that the backupset file exists.
    Can somebody help me? Why doesn't this script work?
    PS: Sorry for my English. I'm practising.
    Thanks

  • Import AWR Html report into another database

    Hello,
    I'm on 11gR2. Is it possible to import the AWR HTML report from database 1 into another database 2?
    I would like to use ADDM on database 2 to analyze the AWR HTML report from database 1.
    Thanks,

    Importing an AWR report into another database?
    I don't think so.

  • DBMS_DATAPUMP import into another database instance

    Hi there,
    I have a quick question, as I didn't find an answer in my Oracle documentation...
    I use a PL/SQL program in an Oracle Application Express (APEX) application to import a dumpfile via the dbms_datapump API. It's no problem to import it into the database on which APEX runs and to which I'm connected in my application, but now I want to start an import of the dumpfile into another database. I know that with the command-line Data Pump tool it is possible to import a dumpfile from one database instance into another using a connect identifier in the connect string.
    Could someone tell me where I have to put this information in the Data Pump API?
    Fennek

    Hi,
    Thanks for the replies...
    I think damorgan is right; in the meantime I read somewhere else that you apparently can't start an import into another database instance from PL/SQL...
    Well, maybe I will try to use dbms_scheduler to start an external script which starts the import tool, like apex_disco wrote...
    Fennek

  • Backup 3.1.2 - Break up FULL backups into Parts (Locally)

    I use "Backup" for local and iDisk backups. I noticed that when I back up to iDisk, Backup automatically breaks a full backup into several parts. How can I force Backup to break up a large backup locally? Breaking it up would make for easier copies over to my iDisk than these massive files I'm trying to move over. See below for more detail on my issue. Thanks!
    --- I have large iTunes and iPhoto libraries - 5 GB and 11 GB - that I would like to back up to iDisk. Backing them up directly to iDisk hasn't worked for me. What I do instead is back them up locally using "Backup", then copy the result over to iDisk. However, I'm having problems copying these large files. I'm getting Error Code 0 when trying to move them.

    You might want to take a look at this similar thread in the Backup forum:
    http://discussions.apple.com/thread.jspa?threadID=2016476&tstart=0
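    For what it's worth, the splitting itself can also be done locally with the standard split tool before copying to iDisk. A self-contained sketch; the backup file name is a stand-in, and a dummy file is created first so the commands run as-is:

```shell
# Create a ~8 MB stand-in for the real local Backup file.
dd if=/dev/zero of=big_backup.dmg bs=1024 count=8192
# Break it into 2 MB parts: big_backup.part_aa, big_backup.part_ab, ...
split -b 2M big_backup.dmg big_backup.part_
# ...copy the big_backup.part_* files over to iDisk, then reassemble:
cat big_backup.part_* > big_backup_restored.dmg
# Verify the reassembled copy matches the original.
cmp big_backup.dmg big_backup_restored.dmg && echo "reassembled OK"
```

    cat reassembles the parts in name order, so the default alphabetical suffixes are safe.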

  • DPM is Only Allowing Express Full Backups For a Database Set to Full Recovery Model

    I have just transitioned my SQL backups from a server running SCDPM 2012 SP1 to a different server running 2012 R2. All backups are working as expected except for one. The database in question is supposed to be backed up with a daily express full and hourly incremental schedule. Although the database is set to the full recovery model, the new DPM server says that recovery points will be created for that database based on the express full backup schedule. I checked the logs on the old DPM server and the transaction log backups were working just fine up until I stopped protection on the data source. The SQL server is 2008 R2 SP2. Other databases on the same server that are set to the full recovery model are working just fine. If we switch the recovery model of a database that isn't protected by DPM and then start the wizard to add it to the protection group, it properly sees the difference when we flip the recovery model back and forth. We also tried switching the recovery model on the failing database from full to simple and then back again, but to no avail. Both the SQL server and the DPM server have been rebooted. We have successfully set up transaction log backups in a SQL maintenance plan as a test, so we know the database is really using the full recovery model.
    Is there anything that someone knows about that can trigger a false positive for recovery model to backup type mismatches?

    I was having this same problem and appear to have found a solution. I wanted hourly recovery points for all my SQL databases. I was getting hourly for some but not for others; the others were only getting a recovery point for the Express Full backup. I noted that some of the databases were in simple recovery mode, so I changed them to full recovery mode, but that did not solve my problem. I was still not getting the hourly recovery points.
    I found an article that seemed to indicate that SCDPM does not recognize any change in the recovery model once protection has started. My database was in simple recovery mode when I added it (auto) to protection, so even though I changed it to full recovery mode, SCDPM continued to treat it as simple.
    I tested this by 1) verifying my db is set to full recovery, 2) backing it up and restoring it with a new name, 3) allowing SCDPM to automatically add it to protection overnight, 4) verifying the next day that I am getting hourly recovery points on the copy of the db.
    It worked. The original db was still only getting express full recovery points, and the copy was getting hourly. I suppose that if I don't want to restore a production db with an alternate name, I will have to remove the db from protection, verify that it is set to full, and then add it back to protection. I have not tested this yet.
    This is the article I read: 
    Article I read
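    If it helps anyone hitting the same mismatch, a quick way to see the recovery model SQL Server itself reports right now (plain catalog-view T-SQL, nothing DPM-specific) is:

```sql
-- Current recovery model per database, as reported by SQL Server
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;
```

    Comparing this output against what the DPM wizard claims makes it obvious whether DPM is working from stale information.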

  • Select records from one database and insert it into another database

    Hi
    I need to write a statement to select records from one database which is on machine 1 and insert these records on a table in another database which is on machine 2. Following is what I did:
    1. I created the following script on machine 2
    sqlplus remedy_intf/test@sptd @load_hrdata.sql
    2. I created the following sql statements in file called load_hrdata.sql:
    rem This script will perform the following steps
    rem 1. Delete previous HR data/table to start w/ clean import tables
    rem 2. Create database link to HR database, and
    rem 3. Create User Data import table taking info from HR
    rem 4. Drop HRP link before exiting
    SET COPYCOMMIT 100
    delete from remedy.remedy_feed;
    commit;
    COPY FROM nav/donnelley@hrp -
    INSERT INTO remedy.remedy_feed -
    (EMPLID, FIRST_NAME, MI, LAST_NAME, BUSINESS_TITLE, WORK_PHONE, -
    RRD_INTRNT_EMAIL, LOCATION, RRD_OFFICE_MAIL, RRD_BUS_UNIT_DESCR) -
    USING SELECT EMPLID, FIRST_NAME, MI, LAST_NAME, BUSINESS_TITLE, WORK_PHONE, -
    RRD_INTRNT_EMAIL, LOCATION, RRD_OFFICE_MAIL, RRD_BUS_UNIT_DESCR -
    FROM ps_rrd_intf_medium -
    where empl_status IN ('A', 'L', 'P', 'S', 'X')
    COMMIT;
    EXIT;
    However, whenever I run the statement I keep getting the following error:
    SP2-0498: missing parenthetical column list or USING keyword
    Do you have any suggestions on how I can fix this or what am I doing wrong?
    Thanks
    Ali

    This doesn't seem to relate to Adobe Reader. Please let us know the product you are using so we may redirect you or refer to the list of forums at http://forums.adobe.com/
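    An alternative that sidesteps SQL*Plus COPY and its line-continuation quirks entirely is a database link plus a plain INSERT ... SELECT. A sketch reusing the names from the post; the link name and the TNS alias 'hrp' are illustrative:

```sql
-- One-time setup on machine 2 (credentials as in the original COPY command):
CREATE DATABASE LINK hrp CONNECT TO nav IDENTIFIED BY donnelley USING 'hrp';

-- Then the load becomes ordinary SQL:
DELETE FROM remedy.remedy_feed;

INSERT INTO remedy.remedy_feed
  (EMPLID, FIRST_NAME, MI, LAST_NAME, BUSINESS_TITLE, WORK_PHONE,
   RRD_INTRNT_EMAIL, LOCATION, RRD_OFFICE_MAIL, RRD_BUS_UNIT_DESCR)
SELECT EMPLID, FIRST_NAME, MI, LAST_NAME, BUSINESS_TITLE, WORK_PHONE,
       RRD_INTRNT_EMAIL, LOCATION, RRD_OFFICE_MAIL, RRD_BUS_UNIT_DESCR
FROM ps_rrd_intf_medium@hrp
WHERE empl_status IN ('A', 'L', 'P', 'S', 'X');

COMMIT;
```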

  • Speeding up full backup of Replicate database ASE 15.5

    Greetings all
    I need to speed up replicate database backup.
    ASE version 15.5
    Adaptive Server Enterprise/15.5/EBF 20633 SMP ESD#5.2/P/RS6000/AIX 5.3/asear155/2602/64-bit/FBO/Sun Dec  9 11:59:29 2012
    Backup Server/15.5/EBF 20633 ESD#5.2/P/RS6000/AIX 5.3/asear155/3350/32-bit/OPT/Sun Dec  9 08:34:37 2012
    RS version
    Replication Server/15.7.1/EBF 21656 SP110 rs1571sp110/RS6000/AIX 5.3/1/OPT64/Wed Sep 11 12:46:38 2013
    Primary database is 1.9 TB about 85% occupied
    Replicate database is same size but used about 32%  (mostly dbo tables are replicated)
    As noted above backup sever is 32 bit on AIX.
    SIMILARITIES
    Both servers use SAN with locally mounted folder for backup files/stripes.
    Databases are on  'raw' devices for data and log
    Both backup servers have the similar RUN files with following
    -N25 \
    -C20 \
    -M/sybase/15/ASE-15_0/bin/sybmultbuf \
    -m2000 \
    Number of stripes are 20 for both primary and replicate databases.
    DIFFERENCES
    Replicate has less memory and less number of engines.
    Devices on primary are mostly 32 GB and those on replicate are mostly 128 GB
    OBSERVATIONS
    Full backup times on primary are consistently about 1 hour.
    Full backup times on replicate are consistently double that (120 to 130 minutes).
    A full backup with replication suspended, or with minimal activity, does not reduce the run times much.
    What do I need to capture to pinpoint cause of the slow backup on replicate side ?
    Thanks in advance
    Avinash

    Mark
    Thanks for the inputs.
    We use compression level 2 on both primary and replicate.
    This was tried out before the upgrade to 15.5 and seems good enough.
    BTW, on a different server I also tried the new compression levels 100 and 101
    for a database of the same size, and did not get a substantial reduction in run times.
    Stripe sizes increased from 23 GB to 30-33 GB.
    As far as I have noted Replicate side is not starved for CPU.
    sp_sysmon outputs during the backup period do not show high CPU usage.
    Will it be accurate to say that, like a huge report query, backup activity also churns the caches?
    (i.e. each allocated/used page, if not found in the cache, is brought into the cache by a physical read)
    Avinash

  • How to restore RMAN hot backup to another database on another server?

    I want to know how to restore RMAN hot backup from production server to another database on a testing server.
    The hot backup is from a database named PROD on the production server
    The database to be restored with the hot backup is TEST on the testing server. There is already a PROD database on the testing server and this PROD database must be kept.
    I have read some threads about changing initTEST.ora to PROD to restore such a backup, but (I think) that will not work in my case, since I already have a PROD database on the testing server.
    The version is 11gR2 on Linux but the compatible parameter is set to 10.2.0.1.0.
    Thanks for any help.

    Hi,
    Since you are on 11g, hope this helps you http://shivanandarao.wordpress.com/2012/04/28/duplicating-database-without-connecting-to-target-database-or-catalog-database-in-oracle-11g/
    Looks like forum is of no help to you. To get better responses, consider closing your threads by providing appropriate points if you feel that they have been answered. Keep the forum clean !!
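    To make that pointer concrete: 11g RMAN can duplicate from a backup location without connecting to the target database at all, which fits this case since the backups have already been copied over. This is a rough sketch only; the paths are illustrative, and the file-name converts matter precisely because a PROD database already exists on the testing server:

```
# On the testing server, with an auxiliary instance TEST started NOMOUNT
# from a minimal pfile (db_name=TEST):
$ rman AUXILIARY /

DUPLICATE DATABASE TO TEST
  SPFILE
    SET DB_FILE_NAME_CONVERT '/u01/oradata/PROD/','/u01/oradata/TEST/'
    SET LOG_FILE_NAME_CONVERT '/u01/oradata/PROD/','/u01/oradata/TEST/'
  BACKUP LOCATION '/u01/backup/PROD'
  NOFILENAMECHECK;
```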

  • AUXILIARY database update using full backup from target database

    Hi,
    I am now facing the problem of how to keep an AUXILIARY database consistent with the target database over a certain period (a week). I take a full backup of our target database every day using RMAN. I know it is possible to use expdp for this, but I want to use the existing full backups to do it. Does anybody have ideas or experience with that? Thanks in advance!
    Regards,
    lik

    That's OK if you don't use RMAN to clone your database; you can simply create the database from a cold backup of the primary.
    The important things are:
    1) you must catalog all datafiles as image copy level 0 in the cloned database
    RMAN> connect catalog rman/rman@rcvcat (in host 1)
    RMAN> connect target sys/manager@clonedb (in host 2)
    RMAN> catalog datafilecopy
    '/oracle/oradata/CLONE/datafile/abc.dbf',
    '/oracle/oradata/CLONE/datafile/def.dbf',
    '/oracle/oradata/CLONE/datafile/ghi.dbf'
    level 0 tag 'CLONE';
    2) You need to make incrementals of the primary database to refresh the clone database. Make sure you specify a tag for the incremental, and that the tag name is exactly the same as the one used in step (1).
    RMAN> connect catalog rman/rman@rcvcat (in host 1)
    RMAN> connect target sys/manager@prod (in host 3)
    RMAN> backup incremental level 1 tag 'CLONE' for recover of copy with tag 'CLONE' database format '/backup/%u';
    3) Copy the newly created incrementals (in host 3) to the clone database site (host 2). Make sure the directory is exactly the same.
    $ rcp /backup/<incr_backup> /backup/
    -- rcp <the loc of a incremental in host 3> <the loc of a incremental in host 2>
    4) Apply incrementals to update the clone database. Make sure you provide the tag you specified.
    RMAN> connect catalog rman/rman@rcvcat
    RMAN> connect target sys/manager@clone
    RMAN> recover copy of database with tag 'CLONE';
    5) After updating the clone database, delete the incremental backups and uncatalog the image copies:
    RMAN> delete backup tag 'CLONE';
    RMAN> change copy like '/oracle/oradata/CLONE/datafile/%' uncatalog;
    *** As you can see, you can clone a database using any method. The key is that you have to catalog the clone database's datafiles when you refresh it, and uncatalog them after finishing.

  • One full backup job to run a full backup of all databases, and it failed. I post the error message. Any help?

    Executed as user: abc\user1. ... 2004-2009, Quest Software Inc. Registered Name: abc INC 
    Processed 1152 pages for database 'abc123', file 'abc123' on file 1. Processed 4 pages for database 'abc123', file 'abc123_log' on file 1. BACKUP DATABASE successfully processed 1156 pages in 0.725 seconds (13.051 MB/sec). 
    Backup added as file number: 1  Native Size: 11.19 MB Backup Size: 1.87 MB CPU Seconds: 0.27 [SQ 
    The backup set on file 1 is valid.  CPU Seconds: 0.25 [SQLSTATE 01000] (Message 1) 
    LiteSpeed(R) for SQL Server Version 5.1.0.1293 Copyright 2004-2009, Quest Software Inc. Registered Name: abc INC 
    Processed 456 pages for database 'WSS_Search_abc1', file 'WSS_Search_abc1' on file 1. Processed 24 pages for database 'WSS_Search_abc1', file 'WSS_Search_abc1_Data2' on file 1. Processed 1 pages for database 'WSS_Search_abc1', file 'WSS_Search_abc1_log'
    ...  The step failed.

    Hi bestrongself,
    Before you use a SQL Server Agent job to back up all databases, I recommend you run the backup statement directly in a query window and check that it runs correctly. I tested with the following statement:
    DECLARE @name VARCHAR(50)        -- database name
    DECLARE @path VARCHAR(256)       -- path for backup files
    DECLARE @fileName VARCHAR(256)   -- filename for backup
    DECLARE @fileDate VARCHAR(20)    -- used for file name
    -- specify database backup directory
    SET @path = 'C:\Backup\'
    -- specify filename format (YYYYMMDD)
    SELECT @fileDate = CONVERT(VARCHAR(20), GETDATE(), 112)
    DECLARE db_cursor CURSOR FOR
        SELECT name
        FROM master.dbo.sysdatabases
        WHERE name NOT IN ('master','model','msdb','tempdb') -- exclude system databases
    OPEN db_cursor
    FETCH NEXT FROM db_cursor INTO @name
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
        BACKUP DATABASE @name TO DISK = @fileName
        FETCH NEXT FROM db_cursor INTO @name
    END
    CLOSE db_cursor
    DEALLOCATE db_cursor
    This script lets you back up each database within your instance of SQL Server.
    The account running it needs read and write permission on the backup path.
    If the script above runs correctly on its own, you can create a job and put your backup statement inside it. The following article walks through creating a simple backup job in SQL Server:
    http://www.petri.co.il/create-backup-job-in-sql-server.htm
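    If you also want the job to sanity-check each file it writes, a small addition (a sketch, reusing the @name and @fileName variables from the loop above) could verify the backup right after taking it:

    ```
    BACKUP DATABASE @name TO DISK = @fileName
    -- check that the backup file is readable and complete
    RESTORE VERIFYONLY FROM DISK = @fileName
    ```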
    Regards,
    Sofiya Li
    TechNet Community Support

  • Restore Cold Backup Into Test Database

    Hi all.
    Here's what I want to do...
    Copy TESTDB1 into TESTDB2 just by moving the datafiles across the network.
    Which files will I need to move over, etc? (control files and datafiles good enough?)
    Any input is appreciated.
    Thanks.

    I hope I'm not missing anything:
    spfile/pfile
    pwd file
    control files
    data files
    redo log files
    Some of them can be recreated, but why bother when you can copy the full DB while it is down? I'm assuming you have the same directory structure.
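    Before copying, you can enumerate exactly which files make up TESTDB1 from the standard dynamic views (a sketch; run as SYSDBA with the database mounted or open):

    ```
    -- datafiles
    SELECT name FROM v$datafile;
    -- temp files (often forgotten; these can also be recreated)
    SELECT name FROM v$tempfile;
    -- online redo logs
    SELECT member FROM v$logfile;
    -- control files
    SELECT name FROM v$controlfile;
    ```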

Maybe you are looking for

  • Ipod not syncing because of error message

    I am getting an error message each time I try to sync that says that "Stuff you need to know part 3" step three was not copied to my ipod because it could not be found. Any idea of what I need to do now to locate it?

  • How to download a CSV file in AL11 to XL into diferent tabs  ?

    Hello All,         I have a file in AL11 of type CSV. When I download this file into XLS from AL11 all the data is downloaded under one tab itself. But I want the data in separate columns wherever there is a comma in the line. How can I do this ? Reg

  • Removing an old email account

    I found out how to hide the old account but I have no idea how to delete them??   Does anyone know..Thank You

  • Form Personalization - Validation for an existing field value can be done?

    Dear All, I have a requirement to have a validation on a form value based on its current value. The field should be editable as well it should check its current value with the newly eneterd value, which should not be less than the originally displaye

  • Photo Booth Crashing

    Every time I try to boot up photo booth after a fresh install it crashes and closes immediately. I have tried to explore the DVD to install just Photo Booth but can not find it. Is there a way to JUST INSTALL Photo Booth? Thanks, Kevin