Using Data Pump when the database is read-only

Hello,
I used Flashback Database to return my database to a past point in time, then opened the database read-only.
Then I wanted to use Data Pump (expdp) to export a schema, but I encountered this error:
ORA-31626: job does not exist
ORA-31633: unable to create master table "SYS.SYS_EXPORT_SCHEMA_05"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 863
ORA-16000: database open for read-only access
However, I was able to export that schema with the original exp utility.
My question: can't I use Data Pump while the database is read-only? Or do you know of any resolution for this issue?
Thanks

Data Pump has to create its master table in the database it runs against, which is impossible when that database is read-only. You need to use NETWORK_LINK, so the required tables are created in a read/write database and the data is read from the read-only database over a database link:
SYSTEM@db_rw> create database link db_r_only
  2   connect to system identified by oracle using 'db_r_only';
$ expdp system/oracle@db_rw network_link=db_r_only directory=data_pump_dir schemas=scott dumpfile=scott.dmp
But I tried it with 10.2.0.4 and found an error:
Export: Release 10.2.0.4.0 - Production on Thursday, 27 November, 2008 9:26:31
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39006: internal error
ORA-39065: unexpected master process exception in DISPATCH
ORA-02054: transaction 1.36.340 in-doubt
ORA-16000: database open for read-only access
ORA-02063: preceding line from DB_R_ONLY
ORA-39097: Data Pump job encountered unexpected error -2054
I found in Metalink bug 7331929, which is fixed in 11.2. I haven't tested this procedure with earlier versions or with 11g, so I don't know whether the bug affects only 10.2.0.4 or all 10g and 11.1 releases.
HTH
Enrique
PS. If your problem was solved, consider marking the question as answered.

Similar Messages

  • How to consolidate data files using data pump when migrating 10g to 11g?

    We have one 10.2.0.4 database to be migrated to a new box running 11.2.0.1. The 10g database has too many data files scattered across too many file systems. I'd like to consolidate the data files into one or two large chunks in one file system. Both OSs are RHEL 5. How should I do that using Data Pump Export/Import? I know there is a "Remap" option, but it's only one-to-one mapping. How can I map multiple old data files into one new data file?

    Hi,
    Data Pump can be terribly slow. Make sure you have as much memory as possible allocated for Oracle, but the bottleneck can also be I/O throughput.
    Use the PARALLEL option, and also set these parameters:
    * DISK_ASYNCH_IO=TRUE
    * DB_BLOCK_CHECKING=FALSE
    * DB_BLOCK_CHECKSUM=FALSE
    Set these high enough to allow for maximum parallelism:
    * PROCESSES
    * SESSIONS
    * PARALLEL_MAX_SERVERS
    More:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_perf.htm
    That's it; patience welcome ;-)
    P.S.
    For maximum throughput, do not set PARALLEL to much more than twice the number of CPUs (two workers for each CPU).
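    To illustrate, a minimal sketch of a parallel export (directory and file names are hypothetical); the %U wildcard lets each worker write its own dump file:
    $ expdp system full=y directory=data_pump_dir parallel=4 dumpfile=full_%U.dmp logfile=full_exp.log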
    P.S.2
    Breaking news ;-)
    I am playing with storage performance now, and I turned on the disk cache option (also called write-back cache; it applies at least to RAID 0 and RAID 5, and enabling it does not lose any existing data on the volume), and it gave me a 1.5 to 2 times speed-up!
    Some say there's a risk of losing more data when an outage happens, but there's always such a risk; with the cache off you simply lose less. Anyway, if you can afford it (and for an import it's OK, as the system is not in production at that moment), I recommend trying it. It takes 15 minutes to set up, but can save you 2.5 hours out of a 10-hour import.
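    On the original consolidation question: REMAP_DATAFILE only maps one file to one file, but you can pre-create the consolidated tablespaces on the 11g target and exclude the tablespace DDL from the import, so the data lands in your new layout. A sketch (names and sizes are hypothetical):
    SQL> create tablespace app_data datafile '/u01/oradata/db11g/app_data01.dbf' size 30g;
    $ impdp system full=y directory=data_pump_dir dumpfile=full.dmp exclude=tablespace logfile=full_imp.log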

  • Exporting whole database (10GB) using Data Pump export utility

    Hi,
    I have a requirement to export the whole database (10 GB) using the Data Pump export utility, but it is not possible to send a 10 GB dump on a single CD/DVD to the system vendor of our application (who needs it to analyze a few issues we have).
    When I checked online, I saw that a full export is available, but I am not able to understand how it works, as we have never used the Data Pump utility; we use the normal export method. Also, will Data Pump reduce the size of the dump file so it can fit on a DVD? Or can we use a parallel full-database export to split the dump into multiple files and spread them across DVDs; is that possible?
    Please correct me if I am wrong, and kindly help.
    Thanks for your help in advance.

    You need to create a directory object.
    sqlplus user/password
    create directory foo as '/path_here';
    grant all on directory foo to public;
    exit;
    Then run your expdp command.
    Data Pump can compress the dumpfile if you are on 11.1 and have the appropriate options. The reason for suggesting FILESIZE is to limit the size of each dumpfile. If you have 10 GB, are not compressing, and the total dump size is 10 GB, then by specifying 600 MB you will get 10 GB / 600 MB = 17 dumpfiles of 600 MB each, so you will have to send 17 CDs (probably a few more, since dumpfiles don't get filled 100% when PARALLEL is used).
    Data Pump dumpfiles are written by the server, not the client, so the dumpfiles don't get created in the directory where the job is run.
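    For example, a minimal sketch of the FILESIZE approach (file names are hypothetical); the %U wildcard generates as many 600 MB pieces as needed:
    $ expdp system full=y directory=foo dumpfile=full_%U.dmp filesize=600M logfile=full_exp.log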
    Dean

  • Selecting tables when importing using the Data Pump API

    Hi,
    Sorry for the trivial question. I export the data using the Data Pump API in "TABLE" mode,
    so all tables are exported into one .dmp file.
    My question is: how do I then import only a few tables using the Data Pump API? How do I define the "TABLES" property, as in the command-line interface?
    Should I use the DATA_FILTER procedures? If yes, how?
    Really thanks in advance
    Regards,
    Kahlil

    Hi,
    You should be using the METADATA_FILTER procedure for this.
    e.g.:
    dbms_datapump.metadata_filter
                (handle1
                ,'NAME_EXPR'
                ,'IN (''TABLE1'', ''TABLE2'')'
                );
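    A more complete sketch of a table-mode import through the API (dump file, directory, and table names are hypothetical):
    DECLARE
      h  NUMBER;
      st VARCHAR2(30);
    BEGIN
      -- open a table-mode import job
      h := dbms_datapump.open(operation => 'IMPORT', job_mode => 'TABLE');
      -- point the job at the dump file written by the export
      dbms_datapump.add_file(h, 'exp_tables.dmp', 'DATA_PUMP_DIR');
      -- import only the two tables of interest
      dbms_datapump.metadata_filter(h, 'NAME_EXPR', 'IN (''TABLE1'', ''TABLE2'')');
      dbms_datapump.start_job(h);
      dbms_datapump.wait_for_job(h, st);
    END;
    /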
    Regards
    Anurag

  • Differences between using Data Pump to back up a database and using RMAN?

    What are the differences between using Data Pump to back up a database and using RMAN? What are the pros and cons?
    Thanks

    Search for Database backup in
    http://docs.oracle.com/cd/B28359_01/server.111/b28318/backrec.htm#i1007289
    In short
    RMAN -> physical backup (copies of the physical database files)
    Datapump -> logical backup (logical data such as tables and procedures)
    Docs for RMAN--
    http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmcncpt.htm#
    Docs for Datapump
    http://docs.oracle.com/cd/B19306_01/server.102/b14215/dp_overview.htm
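    To make the contrast concrete, a minimal command of each kind (connection details omitted):
    RMAN> backup database plus archivelog;
    $ expdp system full=y directory=data_pump_dir dumpfile=full.dmp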

  • Migration from 10g to 12c using data pump

    Hi there. While I've used Data Pump at the schema level before, I'm rather new to full database imports.
    We are attempting a full database migration from 10.2.0.4 to 12c using the full-database Data Pump method over a database link.
    The DBA has advised that we avoid moving SYSTEM and SYSAUX objects, but when reviewing the documentation it initially appeared that these objects would not be exported from the source system given TRANSPORTABLE=NEVER. Can someone confirm this? The export/import log refers to objects that I believed would not be targeted:
    23-FEB-15 19:41:11.684:
    Estimated 3718 TABLE_DATA objects in 77 seconds
    23-FEB-15 19:41:12.450: Total estimation using BLOCKS method: 52.93 GB
    23-FEB-15 19:41:14.058: Processing object type DATABASE_EXPORT/TABLESPACE
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"UNDOTBS1" already exists
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"SYSAUX" already exists
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"TEMP" already exists
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"USERS" already exists
    23-FEB-15 20:10:33.200:
    Completed 96 TABLESPACE objects in 1759 seconds
    23-FEB-15 20:10:33.208: Processing object type DATABASE_EXPORT/PROFILE
    23-FEB-15 20:10:33.445:
    Completed 7 PROFILE objects in 1 seconds
    23-FEB-15 20:10:33.453: Processing object type DATABASE_EXPORT/SYS_USER/USER
    23-FEB-15 20:10:33.842:
    Completed 1 USER objects in 0 seconds
    23-FEB-15 20:10:33.852: Processing object type DATABASE_EXPORT/SCHEMA/USER
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"OUTLN" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"ANONYMOUS" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"OLAPSYS" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"MDDATA" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"SCOTT" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"LLTEST" already exists
    23-FEB-15 20:10:52.372:
    Completed 1140 USER objects in 19 seconds
    23-FEB-15 20:10:52.375: Processing object type DATABASE_EXPORT/ROLE
    23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"SELECT_CATALOG_ROLE" already exists
    23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"EXECUTE_CATALOG_ROLE" already exists
    23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"DELETE_CATALOG_ROLE" already exists
    23-FEB-15 20:10:55.256: ORA-31684: Object type ROLE:"RECOVERY_CATALOG_OWNER" already exists
    Any insight is most appreciated.

    See "Schemas SYS, CTXSYS, MDSYS and ORDSYS are Not Exported using exp/expdp",
    Doc ID: Note 228482.1.
    I suppose the 12c software was already installed and a database created, so when you imported you got these "already exists" messages.
    Whenever a database is created and the software installed, SYSTEM, SYS and SYSAUX exist by default.
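    If the "already exists" messages for tablespaces and users are just noise, one option is to exclude those object types from the full import; a sketch, assuming a database link named src10g (link and log names are hypothetical):
    $ impdp system network_link=src10g full=y transportable=never exclude=tablespace logfile=full_imp.log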

  • Backing up database with read-only tablespaces

    I am trying to develop a script that will dynamically build RMAN scripts for backing up
    a database with read-only tablespaces. The application running on this database creates
    new tablespaces in read-write mode on a weekly basis, populates them, and then puts them in read-only mode. So I need to back up all read-write tablespaces, plus take a backup of each read-only tablespace once. The problem is that the application also includes a process that puts a tablespace back into read-write mode, updates it, and puts it back into read-only mode. So I need to be able to access a "history" of each tablespace (when it was put into read-only mode) to compare with the history of backups. While the history of backups is available in RMAN views, I couldn't find any way to extract the tablespace history.
    There should be an RMAN command to the effect of
    "back up all read-write tablespaces, and read-only tablespaces only if they have not been backed up at least once since becoming read-only".
    Regards,
    Sev

    Just rsync the files to a compressed zpool. Do this using shadow migration, and you only lose access to the data for a few seconds.
    1) make a new dataset with compression
    2) enable shadow migration between the new and the old
    3) change the database to use the new location
    4) watch as the data is automatically copied and compressed :-)
    The downside: you need extra space to pull this off.
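    For what it's worth, RMAN's backup optimization behaves much like the command the original poster asked for: with optimization enabled, BACKUP DATABASE skips read-only datafiles that already have sufficient backups under the retention policy. A minimal sketch:
    RMAN> configure backup optimization on;
    RMAN> backup database;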

  • What are the 'gotchas' for exporting using Data Pump (10205) from HP-UX to Windows?

    Hello,
    I have to export a schema using Data Pump from a 10.2.0.5 database on HP-UX 64-bit to a Windows 64-bit database of the same 10.2.0.5 version. What gotchas can I expect from doing this? I mean, Data Pump export is cross-platform, so this sounds straightforward. But are there issues I might face exporting with Data Pump on the HP-UX platform and then importing the dump onto the Windows 2008 platform, same database version? Thank you in advance.

    On the HPUX database, run this statement and look for the value for NLS_CHARACTERSET
    SQL> select * from NLS_DATABASE_PARAMETERS;
    http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_4218.htm#sthref2018
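    A narrower query against the same view, if you only want the character set:
    SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';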
    When creating the database on Windows, you have two options - manually create the database or use DBCA. If you plan to create the database manually, specify the database characterset in the CREATE DATABASE statement - http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5004.htm#SQLRF01204
    If using DBCA, see http://docs.oracle.com/cd/B19306_01/server.102/b14196/install.htm#ADMQS0021 (especially http://docs.oracle.com/cd/B19306_01/server.102/b14196/install.htm#BABJBDIF)
    HTH
    Srini

  • Putting apps database in Read-Only mode

    Hi,
    I want to put the apps database in read-only mode so that users will be able to log in to the applications and see data, but will not be able to update it.
    What is the best way to do this?
    Thanks

    I am still looking at how to do this, because if users
    update the UAT/test database, when the prod upgrade
    is completed, they will think that their updates will
    be available and this will cause some issues.

    This is a training/expectation-setting issue, not a technical one. You need to make sure that your users understand the difference between a test system and production, and that changes made in testing will not be present in production. They also need to understand that this situation is actually to their benefit: it enables them to really work with the test system to uncover potential problems and learn new features, without fear of making changes that could negatively impact their day-to-day work in production.
    Please note that I'm not trying to be a jerk here. I very respectfully submit that attempting to make an instance read-only for training purposes, even if possible, will involve a great deal of technical work for very little (and perhaps even negative) overall benefit to the users.
    Regards,
    John P.

  • Best Approach for using Data Pump

    Hi,
    I configured a new database, which I set up with schemas imported from another production database. Now, before this database becomes the new production database, I need to re-import the schemas so that the data is up to date.
    Is there a way to use Data Pump so that I don't have to drop all the schemas first? Can I just export the schemas and somehow just overwrite what's in there already?
    Thanks,
    Nora

    Hi, you can use the NETWORK_LINK parameter to import data directly from the remote database.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#i1007380
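    A sketch, assuming a database link named prod_db (link, schema, and log names are hypothetical); TABLE_EXISTS_ACTION=REPLACE overwrites pre-existing tables instead of requiring you to drop the schemas first:
    $ impdp system network_link=prod_db schemas=hr table_exists_action=replace logfile=refresh_imp.log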
    Regards.

  • Standby database errors - Alter database open read only

    alter database open read only
    AUDIT_TRAIL initialization parameter is changed to OS, as DB is NOT compatible for database opened with read-only access
    Signalling error 1152 for datafile 1!
    Beginning standby crash recovery.
    Serial Media Recovery started
    Managed Standby Recovery starting Real Time Apply
    Media Recovery Waiting for thread 1 sequence 216
    Mon Dec 20 11:58:18 2010
    Standby crash recovery need archive log for thread 1 sequence 216 to continue.
    Please verify that primary database is transporting redo logs to the standby database.
    Wait timeout: thread 1 sequence 216
    Standby crash recovery aborted due to error 16016.
    Errors in file /u01/app/oracle/diag/rdbms/mdm2/MDM2/trace/MDM2_ora_17442.trc:
    ORA-16016: archived log for thread 1 sequence# 216 unavailable
    Recovery interrupted!
    Completed standby crash recovery.
    Signalling error 1152 for datafile 1!
    Errors in file /u01/app/oracle/diag/rdbms/mdm2/MDM2/trace/MDM2_ora_17442.trc:
    ORA-10458: standby database requires recovery
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '+MDMDG1/mdm2/datafile/system.280.738243341'
    ORA-10458 signalled during: alter database open read only...
    Mon Dec 20 12:13:46 2010
    ALTER DATABASE RECOVER managed standby database using current logfile disconnect
    Attempt to start background Managed Standby Recovery process (MDM2)
    Mon Dec 20 12:13:46 2010
    MRP0 started with pid=23, OS id=18974
    MRP0: Background Managed Standby Recovery process started (MDM2)
    started logmerger process
    Mon Dec 20 12:13:51 2010
    Managed Standby Recovery starting Real Time Apply
    Parallel Media Recovery started with 2 slaves
    Waiting for all non-current ORLs to be archived...
    All non-current ORLs have been archived.
    Media Recovery Waiting for thread 1 sequence 216
    Completed: ALTER DATABASE RECOVER managed standby database using current logfile disconnect
    The above lines are from alert log of standby database.
    Standby database:
    SQL> alter database open read only;
    alter database open read only
    ERROR at line 1:
    ORA-10458: standby database requires recovery
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '+MDMDG1/mdm2/datafile/system.280.738243341'
    Parameters set on the primary are:
    log_archive_dest_1 LOCATION=+MDMDG3/MDM1/ARCH VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=MDM1
    log_archive_dest_state_1 ENABLE
    log_archive_dest_2 SERVICE=MDM2 SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=MDM2
    log_archive_dest_state_2 ENABLE
    dg_broker_config_file1 +MDMDG2/mdm/dg_config/dgconfig1_mdm.dat
    dg_broker_config_file2 +MDMDG2/mdm/dg_config/dgconfig2_mdm.dat
    fal_server MDM2
    standby_file_management AUTO
    log_archive_config dg_config=(MDM1,MDM2)
    db_file_name_convert MDM2, MDM1
    ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE availability ;
    Standby pfile
    *.archive_lag_target=900
    *.audit_file_dest='/u01/app/oracle/admin/MDM2/adump'
    *.audit_trail='db'
    *.compatible='11.2.0.0.0'
    *.control_files='+MDMDG1/MDM2/CONTROLFILE/controlfile01.ctl','+MDMDG2/MDM2/CONTROLFILE/controlfile02.ctl'
    *.db_block_size=8192
    *.db_create_file_dest='+MDMDG1'
    *.db_domain=''
    *.db_file_name_convert='MDM1','MDM2'
    *.db_name='MDM'
    *.db_recovery_file_dest='+MDMDG2'
    *.db_recovery_file_dest_size=10485760000
    *.db_unique_name='MDM2'
    *.dg_broker_config_file1='+MDMDG2/MDM/DG_CONFIG/dgconfig1_MDM.dat'
    *.dg_broker_config_file2='+MDMDG2/MDM/DG_CONFIG/dgconfig2_MDM.dat'
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=MDM2XDB)'
    *.fal_server='MDM11','MDM12'
    *.instance_name='MDM2'
    *.log_archive_config='dg_config=(MDM1,MDM2)'
    *.log_archive_dest_1='LOCATION=+MDMDG3/MDM2/ARCH VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=MDM2'
    *.log_archive_dest_2='SERVICE=MDM1 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=MDM1'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='MDM_%t_%s_%r.arc'
    *.log_file_name_convert='MDM1','MDM2'
    *.memory_target=838860800
    *.nls_language='ENGLISH'
    *.nls_territory='UNITED KINGDOM'
    *.open_cursors=300
    *.processes=500
    *.remote_login_passwordfile='exclusive'
    *.sessions=555
    *.standby_file_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    On standby ASM
    ASMCMD [+] > find * *
    +MDMDG1/ASM/
    +MDMDG1/ASM/ASMPARAMETERFILE/
    +MDMDG1/ASM/ASMPARAMETERFILE/REGISTRY.253.737811541
    +MDMDG1/MDM/
    +MDMDG1/MDM2/
    +MDMDG1/MDM2/CONTROLFILE/
    +MDMDG1/MDM2/CONTROLFILE/controlfile01.ctl
    +MDMDG1/MDM2/CONTROLFILE/current.265.738243333
    +MDMDG1/MDM2/DATAFILE/
    +MDMDG1/MDM2/DATAFILE/CANVAS_POPULARITY_DATA.264.738243343
    +MDMDG1/MDM2/DATAFILE/CANVAS_POPULARITY_IDX.277.738243343
    +MDMDG1/MDM2/DATAFILE/MDM_SRC_DATA.282.738243343
    +MDMDG1/MDM2/DATAFILE/MDM_SRC_IDX.275.738243343
    +MDMDG1/MDM2/DATAFILE/MIPS_MDM_DATA.283.738243341
    +MDMDG1/MDM2/DATAFILE/MIPS_MDM_IDX.276.738243343
    +MDMDG1/MDM2/DATAFILE/SYSAUX.281.738243341
    +MDMDG1/MDM2/DATAFILE/SYSTEM.280.738243341
    +MDMDG1/MDM2/DATAFILE/TEST_TBSP1.273.738243345
    +MDMDG1/MDM2/DATAFILE/TEST_TBSP2.272.738243345
    +MDMDG1/MDM2/DATAFILE/UNDOTBS1.256.738243343
    +MDMDG1/MDM2/DATAFILE/UNDOTBS2.279.738243343
    +MDMDG1/MDM2/DATAFILE/USERS.278.738243347
    +MDMDG1/MDM2/ONLINELOG/
    +MDMDG1/MDM2/ONLINELOG/group_1.259.738243429
    +MDMDG1/MDM2/ONLINELOG/group_2.257.738243431
    +MDMDG1/MDM2/ONLINELOG/group_21.284.738243505
    +MDMDG1/MDM2/ONLINELOG/group_22.261.738243505
    +MDMDG1/MDM2/ONLINELOG/group_23.274.738243505
    +MDMDG1/MDM2/ONLINELOG/group_3.258.738243431
    +MDMDG1/MDM2/ONLINELOG/group_31.262.738243513
    +MDMDG1/MDM2/ONLINELOG/group_32.270.738243513
    +MDMDG1/MDM2/ONLINELOG/group_33.263.738243513
    +MDMDG1/MDM2/ONLINELOG/group_4.260.738243431
    +MDMDG2/MDM/
    +MDMDG2/MDM/DG_CONFIG/
    +MDMDG2/MDM2/
    +MDMDG2/MDM2/AUTOBACKUP/
    +MDMDG2/MDM2/AUTOBACKUP/2010_12_20/
    +MDMDG2/MDM2/AUTOBACKUP/2010_12_20/s_738242861.263.738244155
    +MDMDG2/MDM2/CONTROLFILE/
    +MDMDG2/MDM2/CONTROLFILE/controlfile02.ctl
    +MDMDG2/MDM2/CONTROLFILE/current.271.738243335
    +MDMDG2/MDM2/ONLINELOG/
    +MDMDG2/MDM2/ONLINELOG/group_1.270.738243429
    +MDMDG2/MDM2/ONLINELOG/group_2.269.738243431
    +MDMDG2/MDM2/ONLINELOG/group_21.268.738243505
    +MDMDG2/MDM2/ONLINELOG/group_22.272.738243505
    +MDMDG2/MDM2/ONLINELOG/group_23.262.738243505
    +MDMDG2/MDM2/ONLINELOG/group_3.273.738243431
    +MDMDG2/MDM2/ONLINELOG/group_31.266.738243513
    +MDMDG2/MDM2/ONLINELOG/group_32.265.738243513
    +MDMDG2/MDM2/ONLINELOG/group_33.264.738243513
    +MDMDG2/MDM2/ONLINELOG/group_4.261.738243431
    +MDMDG3/MDM/
    +MDMDG3/MDM/ARCH/
    +MDMDG3/MDM2/
    +MDMDG3/MDM2/ARCH/
    Please can you tell me how to open the standby database read-only?

    user5846399 wrote:
    ORA-16016: archived log for thread 1 sequence# 216 unavailable
    Recovery interrupted!

    The archived log for thread 1 sequence# 216 is needed for recovery. Find it and make it available on the standby side.
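    If that sequence still exists on the primary, a common fix is to copy it over and register it so that managed recovery can apply it; a sketch (the archived log path is hypothetical):
    SQL> alter database register logfile '+MDMDG3/MDM2/ARCH/MDM_1_216_738243333.arc';
    SQL> alter database recover managed standby database using current logfile disconnect;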

  • Database open read only

    Does the alert<sid>.log always show a warning message like this when a standby database is put into read-only mode?
    ***Warning - Executing transaction without active Undo Tablespace

    Hi,
    It's a bug that happens in 9i:
    Bug 3270493:EXCESSIVE QMNX TRACE FILES WHEN PLACING STANDBY IN READ ONLY MODE
    This does not reproduce in 10g. It looks like there were some modifications made to kwqitnfy to check whether the database is in read-write mode before starting the QMNC process.
    The workaround is to set aq_tm_processes=0 when using the database in read-only mode for the long-term. This can be done with a simple alter system command:
    ALTER SYSTEM SET aq_tm_processes=0;
    By running the above command, the errors would stop and therefore so would the trace files.
    Note: make the change in the spfile as well, to ensure aq_tm_processes stays zero after a restart.
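    For instance (assuming the instance was started with an spfile):
    SQL> ALTER SYSTEM SET aq_tm_processes=0 SCOPE=BOTH;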
    Mike

  • Startup command starts database but alter database open read only gives error

    Hi,
    I'm seeing some strange behavior. I have a 10.2.0.3 database on Windows 2003.
    The startup command starts the database without any issues,
    but if I try the following, it gives an error:
    startup mount (successful, no errors)
    alter database open read only; gives an error that file Mnnnnnn is missing.
    Why is this happening, and how do I fix it?
    Thanks.

    You really need to show us exactly what Oracle is telling you, using copy and paste, so we can have some clue.  You can hide details like instance and host names if they show up.
    I'm wondering if you have some messed-up offline datafile in a tablespace, where Oracle handles it during startup but gets upset when you try to open read-only. It's some bizarro sequence of events like: add a datafile to a tablespace, decide that was a mistake, alter it offline, then remove it from the OS without telling Oracle any more about it. I've come back from vacation to find scenarios like this; it winds up being a time bomb when trying to recreate a standby later. Or something like that, I could be misremembering some details.
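    A quick way to check for that scenario, using a standard dictionary view:
    SQL> select file#, name, status from v$datafile where status not in ('ONLINE', 'SYSTEM');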

  • Can we load data in chunks using data pump?

    We are loading data using data pump, so I want to check my understanding.
    Please correct me if I am wrong in my understanding:
    ODI will fetch all data from the source (whether it is INIT or CDC) in one go and unload it into the staging area.
    If that is true, will performance suffer with very large data volumes (50 million records at the source), as ODI tries to load all the data in one go? I believe it would perform better if we loaded in chunks using data pump.
    Please confirm and correct.
    Also, I would like to know how we can configure chunked loads using data pump.
    Thanks in Advance.
    Regards,
    Dinesh.

    You may consider using LKM Oracle to Oracle (datapump):
    http://docs.oracle.com/cd/E28280_01/integrate.1111/e12644/oracle_db.htm#r15c1-t2
    In 11g, ODI reads from the source and writes to the target in parallel. This is the case where you specify a select query in the source command and an insert/update query in the target command. On the source side, ODI reads records and adds them to a data queue; on the target side, a parallel thread reads data from the queue and writes to the target. So the overall performance will be bounded by the slower of the read and write processes.
    Thanks,

  • Error during open standby database in read only mode

    hi,
    alter database open read only;
    alter database open read only
    ERROR at line 1:
    ORA-10458: standby database requires recovery
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '/u02/app/oracle/oradata/standby/system01.dbf'
    Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
    What is the reason for this?
    Thanks in advance.

    Thanks for your reply.
    [oracle@standby admin]$ cat listener.ora
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS_LIST =
            (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
          )
          (ADDRESS_LIST =
            (ADDRESS = (PROTOCOL = TCP)(HOST = 10.105.1.124)(PORT = 1521))
          )
        )
      )
    LISTENER_STANDBY =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = 10.105.1.124)(PORT = 1521))
        )
      )
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = /opt/app/oracle/product/11.2.0/db_1)
          (PROGRAM = extproc)
        )
      )
    tnsnames.ora file:
    [oracle@standby admin]$ cat tnsnames.ora
    standby =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.105.1.124)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = standby)
        )
      )
    ora11g =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.105.1.120)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = ora11g)
        )
      )
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = 10.105.1.120)(PORT = 1521))
        )
      )
    EXTPROC_CONNECTION_DATA =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
        )
        (CONNECT_DATA =
          (SID = PLSExtProc)
          (PRESENTATION = RO)
        )
      )
    pfile
    [oracle@standby standby]$ cat initstandby.ora
    ora11g.__db_cache_size=130023424
    ora11g.__java_pool_size=4194304
    ora11g.__large_pool_size=4194304
    ora11g.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    ora11g.__pga_aggregate_target=146800640
    ora11g.__sga_target=276824064
    ora11g.__shared_io_pool_size=0
    ora11g.__shared_pool_size=121634816
    ora11g.__streams_pool_size=8388608
    *.audit_file_dest='/u01/app/oracle/admin/ora11g/adump'
    *.audit_trail='db'
    *.compatible='11.2.0.1.0'
    #*.control_files='/u02/app/ora11g/oradata/ora11g/control01.ctl','/u01/app/oracle/flash_recovery_area/ora11g/control02.ctl'
    *.db_block_size=8192
    *.db_domain=''
    *.db_name='ora11g'
    *.db_recovery_file_dest='/opt/app/oracle/flash_recovery_area'
    *.db_recovery_file_dest_size=4039114752
    *.diagnostic_dest='/opt/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=ora11g)'
    *.memory_target=2016M
    *.open_cursors=300
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.undo_tablespace='UNDOTBS1'
    *.audit_file_dest='/opt/app/oracle/admin/standby/adump'
    *.audit_trail=none
    #*.background_dump_dest='/opt/app/oracle/admin/standby/bdump'
    #*.compatible='10.2.0.2.0'
    #*.control_files='/opt/app/oracle/oradata/standby/control01.ctl'
    #,'/opt/app/oracle/oradata/standby/control02.ctl','/opt/app/oracle/or
    *.control_files='/u02/app/oracle/oradata/standby/control_sb01.ctl'
    #,'/u02/app/oracle/oradata/standby/control_02.ctl','/u02/app/oracle/oradata/standby/control_03.ctl'
    *.core_dump_dest='/u01/app/oracle/diag/rdbms/standby/standby/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    #*.db_name='standby'
    #*.dispatchers='(PROTOCOL=TCP) (SERVICE=standby)'
    *.job_queue_processes=10
    *.log_archive_dest_1='location=/opt/app/oracle/arch'
    *.log_archive_config='dg_config=(standby,ora11g)'
    *.log_archive_dest_1='LOCATION=/opt/app/oracle/oradata/standby/archivelog'
    *.log_archive_dest_2='service=orcl valid_for=(online_logfiles,primary_role) db_unique_name=ora11g'
    *.log_archive_format='%t_%s_%r.dbf'
    *.standby_file_management=auto
    *.db_unique_name =standby
    *.fal_server='ora11g'
    *.fal_client='standby'
    *.service_names='standby'
    *.open_cursors=300
    *.pga_aggregate_target=525336576
    *.processes=1500
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=1576009728
    *.undo_management='AUTO'
    *.undo_tablespace='undotbs02'
    #*.user_dump_dest='/opt/app/oracle/diag/rdbms/standby/standby/trace'
    *.standby_file_management ='manual'
    *.instance_name =standby
    #*.standby_archive_dest=/opt/app/oracle/oradata/standby/archivelog
    *.db_file_name_convert=(/u02/app/ora11g/oradata/ora11g,/u02/app/oracle/oradata/standby)
    *.log_file_name_convert='/u02/app/ora11g/oradata/ora11g','/u02/app/oracle/oradata/standby'
    #*.remote_listener=LISTENER_ora11g
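    One thing that stands out: log_archive_dest_2 in the pfile points at service=orcl, but there is no orcl entry in this tnsnames.ora, which could explain the ORA-12154. A quick check from the standby host:
    $ tnsping orcl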
