Consistent=y in exp

Hi all,
I read the Oracle documentation to learn about the CONSISTENT parameter in export and the "set transaction read only" command, but I didn't understand it properly.
Why do we use this parameter and the set transaction command? Can you please give an example?
Thanks a lot.

When you export a table, you are guaranteed that the contents of that table will be consistent with the time that the export of that table was started.
I suggest you read Tom Kyte's article on this question:
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:21545894805637
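As a minimal sketch of both (the user, table and file names here are hypothetical):
exp scott/tiger owner=scott file=scott.dmp log=scott.log consistent=y
With CONSISTENT=Y, every table in the dump is read as of the moment the export started, even if other sessions keep committing changes while it runs.
SET TRANSACTION READ ONLY gives the same kind of snapshot inside a SQL session: every query until the next COMMIT or ROLLBACK sees the database as of the SET TRANSACTION statement.
SET TRANSACTION READ ONLY;
SELECT COUNT(*) FROM emp;   -- sees data as of the SET TRANSACTION
SELECT COUNT(*) FROM dept;  -- same snapshot, even if others commit in between
COMMIT;                     -- ends the read-only transaction
In fact, exp issues SET TRANSACTION READ ONLY under the covers when CONSISTENT=Y is specified, which is why a consistent export needs enough rollback/undo space to cover its whole run time.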

Similar Messages

  • Consistent parameter in Data Pump.

    Hi All,
    As we know, there is no CONSISTENT parameter in Data Pump; can anyone tell me how Data Pump takes care of this?
    From the net I got this one-liner:
    "Data Pump Export determines the current time and uses FLASHBACK_TIME." But I failed to understand what exactly it means.
    Regards,
    Sphinx

    This is the equivalent of consistent=y in exp. If you use flashback_time=systimestamp, the Data Pump export will be "as of the point in time the export began; every table will be as of the same commit point in time".
    According to the docs:
    “The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent as of this SCN.”
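    For example, a minimal sketch (the schema, directory and dump file names are hypothetical; on 10g you may need an explicit TO_TIMESTAMP string instead of SYSTIMESTAMP):
    expdp hr/hr schemas=hr directory=DATA_PUMP_DIR dumpfile=hr_consistent.dmp flashback_time=systimestamp
    Data Pump resolves the timestamp to the nearest SCN when the job starts and exports every table as of that single SCN.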

  • CONSISTENT and FLASHBACK_TIME parameters in export

    Version: 11.2
    This is what I gather about CONSISTENT (original exp) and its Data Pump equivalent FLASHBACK_TIME:
    If I give CONSISTENT=Y (for exp) or FLASHBACK_TIME=SYSTIMESTAMP (for expdp Data Pump) and start an export job at 3PM, Oracle will export all tables (and all other objects) as of 3PM. Any changes to schema objects done after 3PM will be ignored. I think Oracle internally uses the UNDO tablespace for this feature.
    From your experience, what consequences have you had from not setting CONSISTENT=Y (for exp) or FLASHBACK_TIME=SYSTIMESTAMP (for expdp Data Pump)?

    What you said is right; Oracle will export all objects as of a specific SCN at 3PM.
    When you use CONSISTENT, FLASHBACK_TIME or FLASHBACK_SCN in your export, Oracle reads consistent data from the UNDO tablespace. It is common for ORA-01555 to happen when you use this feature.
    The "snapshot too old" error is caused by Oracle's read consistency mechanism. If you have lots of updates, long-running SQL and too small an UNDO tablespace, the ORA-01555 error will appear.
    Therefore, when you use CONSISTENT in your export, make sure that your UNDO tablespace is large enough.
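    As a quick sanity check before a long consistent export (the tablespace name is an assumption; adjust it to your database), you can see how much undo space you currently have:
    SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS size_mb
      FROM dba_data_files
     WHERE tablespace_name = 'UNDOTBS1'
     GROUP BY tablespace_name;
    and compare it against your undo generation rate multiplied by the expected export duration.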

  • EXP-00105: parameter CONSISTENT is not supported for this user

    Hi,
    I am using Oracle 10g on a Unix platform.
    The export is taken with the following command:
    exp \'/ as sysdba\' file=t.dmp full=y buffer=10485760 log=0101.log CONSISTENT=y statistics=none
    The export is successful but gives one warning:
    EXP-00105: parameter CONSISTENT is not supported for this user
    If I use it without the CONSISTENT parameter, the export is successful with no warning.
    Why did the EXP-00105 error occur?

    As per Oracle Error Notes:
    EXP-00105: parameter string is not supported for this user
    Cause: The user attempted to specify either CONSISTENT or OBJECT_CONSISTENT when connected as sysdba.
    Action: If a consistent export is needed, then connect as another user.
    Looks like the SYS user cannot do a transaction-level consistent read (read-only transaction). You could have performed this as the SYSTEM user or any DBA-privileged user to take the complete export of your DB.
    Anyway, for more information on the error "EXP-00105", please take a look at the same question on another Oracle-related forum:
    http://www.freelists.org/archives/oracle-l/05-2006/msg00236.html
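    In other words, a sketch of the fix (the password is a placeholder) is to run the same export connected as SYSTEM rather than as SYSDBA:
    exp system/your_password file=t.dmp full=y buffer=10485760 log=0101.log consistent=y statistics=none
    SYS cannot open the read-only transaction that CONSISTENT=Y relies on, which is why exp raises EXP-00105 for a SYSDBA connection.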
    Regards,
    Sabdar Syed.

  • Why exp fail and what other method I should use

    Hi Everybody,
    I plan to "copy" data from a table partition of a transactional database to a remote historical database table; both the source and destination tables are partitioned in the same way.
    In the source 9i database, I do the exp using the command below:
    exp reporter/password file=rs_p20101128.dmp tables=(reporter.reporter_status:p_20101128)
    About to export specified tables via Conventional Path ...
    . . exporting table REPORTER_STATUS
    . . exporting partition P_20101128 212932 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    Export terminated successfully with warnings.
    In the remote 10g database, I do the imp using the command below, but it fails:
    imp reporter/password01   FROMUSER=reporter file=/tmp/rs_p20101128.dmp tables=(REPORTER_STATUS:P_20101128)
    Import: Release 10.2.0.2.0 - Production on Mon Nov 29 17:52:31 2010
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V09.02.00 via conventional path
    import done in US7ASCII character set and AL16UTF16 NCHAR character set
    import server uses AL32UTF8 character set (possible charset conversion)
    . importing REPORTER's objects into REPORTER
    . importing REPORTER's objects into REPORTER
    IMP-00015: following statement failed because the object already exists:
    "CREATE TABLE "REPORTER_STATUS" ("IDENTIFIER" VARCHAR2(255), "SERIAL" NUMBER"
    "(16, 0), "NODE" VARCHAR2(64), "NODEALIAS" VARCHAR2(255), "MANAGER" VARCHAR2"
    "(64), "AGENT" VARCHAR2(64), "ALERTGROUP" VARCHAR2(64), "ALERTKEY" VARCHAR2("
    "255), "SEVERITY" NUMBER(16, 0), "SUMMARY" VARCHAR2(255), "FIRSTOCCURRENCE" "
    ......
    "0 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 10485760 FREELISTS 1 FREELIST GRO"
    "UPS 1) TABLESPACE "REPORTER" LOGGING NOCOMPRESS )"
    IMP-00055: Warning: partition or subpartition "REPORTER_STATUS":"P_20101128" not found in export file
    Import terminated successfully with warnings.
    Any suggestions to make this work?
    Clay

    Thanks for all your suggestions, but the problem persists. Please have a look at the commands and output captures below.
    In the source 9i database, I do the export using the command below and get this output:
    reporter@xxam[tmp] 554 %scp rs_p20101128.dmp [email protected]:/tmp/. Password:
    HGCP@hgcam02[tmp] 555 %exp reporter/passwdx01 file=rs_p20101127.dmp tables=(reporter.reporter_status:P_20101127) statistics=none INDEXES=N TRIGGERS=N CONSTRAINTS=N consistent=y
    Export: Release 9.2.0.6.0 - Production on Mon Nov 29 18:18:36 2010
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.6.0 - Production
    Export done in US7ASCII character set and AL16UTF16 NCHAR character set
    server uses WE8ISO8859P1 character set (possible charset conversion)
    Note: indexes on tables will not be exported
    Note: constraints on tables will not be exported
    About to export specified tables via Conventional Path ...
    . . exporting table REPORTER_STATUS
    . . exporting partition P_20101127 195127 rows exported
    Export terminated successfully without warnings.
    =================================
    In the destination 10g database, I do the import using the command below and get this output:
    bash-3.00$ imp reporter/passwd0001   FROMUSER=REPORTER TOUSER=REPORTER file=/tmp/rs_p20101127.dmp tables=(REPORTER_STATUS:P_20101127)
    Import: Release 10.2.0.2.0 - Production on Mon Nov 29 18:23:54 2010
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V09.02.00 via conventional path
    import done in US7ASCII character set and AL16UTF16 NCHAR character set
    import server uses AL32UTF8 character set (possible charset conversion)
    . importing REPORTER's objects into REPORTER
    IMP-00015: following statement failed because the object already exists:
    "CREATE TABLE "REPORTER_STATUS" ("IDENTIFIER" VARCHAR2(255), "SERIAL" NUMBER"
    "(16, 0), "NODE" VARCHAR2(64), "NODEALIAS" VARCHAR2(255), "MANAGER" VARCHAR2"
    "(64), "AGENT" VARCHAR2(64), "ALERTGROUP" VARCHAR2(64), "ALERTKEY" VARCHAR2("
    "255), "SEVERITY" NUMBER(16, 0), "SUMMARY" VARCHAR2(255), "FIRSTOCCURRENCE" "
    "DATE NOT NULL ENABLE, "LASTOCCURRENCE" DATE, "LASTMODIFIED" DATE, "INTERNAL"
    "LAST" DATE, "POLL" NUMBER(16, 0), "TYPE" NUMBER(16, 0), "TALLY" NUMBER(16, "
    "0), "CLASS" NUMBER(16, 0), "GRADE" NUMBER(16, 0), "LOCATION" VARCHAR2(64), "
    ""OWNERUID" NUMBER(16, 0), "OWNERGID" NUMBER(16, 0), "ACKNOWLEDGED" NUMBER(1"
    .................
    "0 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 10485760 FREELISTS 1 FREELIST GRO"
    "UPS 1) TABLESPACE "REPORTER" LOGGING NOCOMPRESS )"
    Import terminated successfully with warnings.
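    Since the target table already exists with all its partitions, a common way past the IMP-00015 (a sketch, not tested against this exact dump) is to tell imp to ignore the failing CREATE TABLE and just load the rows into the existing partition:
    imp reporter/passwd0001 fromuser=REPORTER touser=REPORTER file=/tmp/rs_p20101127.dmp tables=(REPORTER_STATUS:P_20101127) ignore=y
    With ignore=y, object-creation errors are suppressed and the row data from the dump is inserted into the pre-existing table.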

  • Shell script for exp

    Hello All,
    OS: AIX 5.2
    We are doing a full export of our database schema as below:
    exp
    Username: schema name
    Enter array fetch buffer size: 4096 >
    Export file: expdat.dmp > exp020806v1.dmp
    (2)U(sers), or (3)T(ables): (2)U > U
    Export grants (yes/no): yes > yes
    Export table data (yes/no): yes > yes
    Compress extents (yes/no): yes > yes
    Export done in US7ASCII character set and AL16UTF16 NCHAR character set
    Export terminated successfully without warnings.
    Now I want all of this to be done by a shell script:
    The user logs on as abc.
    He will get menu options:
    1. full database export
    The user selects this option (types 1 and presses Enter) and it does the exp for this particular schema.
    DN

    Below you'll find a Korn-shell export script that works on AIX 4.3.3; it'll work on AIX 5.3 as well. It creates a compressed full export file by exporting to a named pipe and feeding the output to compress. Because export is a client tool, the script first establishes the NLS character set of the database and exports it into the NLS_LANG variable.
    You'll need to customize the script, e.g. set your own directory names etc.
    The script can be run from cron.
    #!/usr/bin/ksh
    # 26-10-2004 - Hans Wijte
    # Return the database character set so NLS_LANG can be set for the client.
    function get_characterset
    {
        sqlplus -s / << !
    set feedback off
    set heading off
    select value
    from v\$nls_parameters
    where parameter = 'NLS_CHARACTERSET';
    !
    }
    if [[ -z ${ORACLE_HOME} ]]
    then
        . $HOME/.profile 1>/dev/null 2>&1
    fi
    # Set the necessary Oracle variables #
    export ORACLE_SID=${1:-$ORACLE_SID}
    export_dir=${2:-/oracle/exports/$ORACLE_SID}
    export ORAENV_ASK=NO
    . oraenv
    export ORAENV_ASK=YES
    export NLS_LANG="."`get_characterset`
    id=$$
    # Create a named pipe to export to #
    PARFILE=exp_${ORACLE_SID}_exp_tables.par
    mknod oraexp${id}.pipe p
    touch ${PARFILE}
    chmod 600 ${PARFILE}
    # Fill the parameter file with the desired #
    # parameters; exp writes into the named pipe #
    echo "
    userid = /
    buffer = 1024000
    log = /oracle/home/log/exp_${ORACLE_SID}_exp_tables.log
    file = oraexp${id}.pipe
    full = yes
    consistent = y
    compress = n
    " > ${PARFILE}
    # Let compress receive the output from the named #
    # pipe and write the compressed dumpfile #
    compress < oraexp${id}.pipe > ${export_dir}/exp_${ORACLE_SID}_exp_tables.dmp.Z &
    # Execute the EXPORT utility with #
    # the parfile just created #
    exp parfile=${PARFILE}
    # Sync the file systems and clean up #
    sync; sync; sync
    rm -f oraexp${id}.pipe
    rm -f ${PARFILE}
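    For reference, a sample crontab entry to drive it (the script path and SID are hypothetical) might be:
    0 1 * * * /home/oracle/scripts/exp_full.ksh ORCL /oracle/exports/ORCL >/tmp/exp_full_ORCL.out 2>&1
    i.e. run the export at 01:00 every night for the ORCL instance, passing the SID and export directory as the two positional parameters the script reads.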

  • How to use a subquery in query parameter in exp command

    I am trying to export certain rows of a table using the exp command:
    exp TRA/simple@TRA consistent=y indexes=n constraints=n tables=TRA$EMPL query=\"where deptid in \(select objectid from Person\)\" file=/dbase/dump/archv.2009-10-2917:24:00.dmp
    but I'm getting an error:
    LRM-00112: multiple values not allowed for parameter 'query'
    EXP-00019: failed to process parameters, type 'EXP HELP=Y' for help
    EXP-00000: Export terminated unsuccessfully

    On what OS are you trying to do that?
    What Oracle version?
    Did you try using a parameter file?
    Keep in mind that when using a parameter file, special characters don't need to be escaped.
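    As a sketch (the parfile name is hypothetical), the same export through a parameter file avoids the shell-escaping problem entirely. Contents of archv.par:
    tables=TRA$EMPL
    query="where deptid in (select objectid from Person)"
    consistent=y
    indexes=n
    constraints=n
    file=/dbase/dump/archv.dmp
    Then run:
    exp TRA/simple@TRA parfile=archv.par
    Because the shell never parses the QUERY string, no backslash escaping is needed and the LRM-00112 error should go away.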

  • Hoping for a quick response : EXP and Archived REDO log files

    I apologize in advance if this question has been asked and answered 100 times. I admit I didn't search, I don't have time. I'm leaving on vacation tomorrow, and I need to know if I'm correct about something to do with backup / restore.
    we have 10g R2 running a single instance on a single server. The application vendor has "embedded" oracle with their application. The vendor's backup is a batch file using EXP - thus:
    exp system/xpwdxx@db full=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt direct=y compress=y
    This command is executed nightly at midnight. The files are then backed up by our nightly backup to offsite storage media.
    The database is running in autoarchive mode. The problem is, the archived redo files filled the drive they were being stored on, and it is the drive the database is on. I used OS commands to move 136G of archived redo logs onto other storage media to free the drive.
    My question: since the EXP runs at midnight, when there is likely NO activity, do I need to run in autoarchive mode? From what I have read, you cannot even apply archived redo log files to this type of backup strategy (IMP). Is that true? We are OK losing changes since our last EXP. I have read a lot of stuff about restoring consistent vs. inconsistent, and just need to know: if my disk fails and I have to start with a clean install of Oracle and nothing else, can I IMP this EXP and get back up and running as of the last EXP? Or do I need the archived redo log files back to July 2009 (136G of them)?
    Hoping for a quick response
    Best Regards, and thanks in advance
    Bruce Davis

    Bruce Davis wrote:
    Amardeep Sidhu
    Thank you for your quick reply. I am reading in the other responses that since I am using EXP without consistent=y, I might not even have a backup. The application vendor said that with this dmp file they can restore us to the most recent backup. I don't really care for this strategy as it is untested. I asked them to verify that they could restore us and they said they tested the dmp file and it was OK.
    Thank you for taking the time to reply.
    Best Regards
    Bruce
    The dump file is probably OK in the sense that it is not corrupted and can be used in an imp operation. That doesn't mean the data in it is transactionally consistent. And to use it at all, you have to have a database up and running. If the database is physically corrupted, you'll have to rebuild a new database from scratch before you can even think about using your dmp file.
    Vendors never understand databases. I once had a vendor tell me that Oracle's performance would be intolerable if there were more than 5 concurrent connections. Well, maybe in HIS product ..... Discussions terminated quickly after he made that statement.

  • Export (exp) taking long time and reading UNDO

    Hi Guys,
    Oracle 9.2.0.7 on AIX 5.3
    A schema-level export job is scheduled at night. Since the day before yesterday it has been taking a really long time. It used to finish in 8 hours or so, but yesterday it took around 20 hours and was still running. The schema size to be exported is around 1 TB. (I know it is a bit stupid to take such daily exports, but customer requirement, you know ;) ) Today it is again still running, although I scheduled it to start even earlier, by 1 and 1/2 hours.
    The command used is:
    exp userid=abc/abc file=expabc.pipe buffer=100000 rows=y direct=y
    recordlength=65535 indexes=n triggers=n grants=y
    constraints=y statistics=none log=expabc.log owner=abc
    I have monitored the session, and all the time the wait event is db file sequential read. From P1 I figured out that all the datafiles it reads belong to the UNDO tablespace. What surprises me is: when consistent=y is not specified, should it go to read UNDO so frequently?
    There are around 1800 tables in total in the schema; what I can see from the export log is that it exported around 60 tables and has been stuck since then. Neither the logfile nor the dumpfile has been updated for a long time.
    Any hints or clues on which direction to diagnose, please?
    If any other information is required, please let me know.
    Regards,
    Amardeep Sidhu

    Thanks Hemant.
    As i wrote above, it runs from a cron job.
    Here is the output from a simple SQL querying v$session_wait & v$datafile:
    13:50:00 SQL> l
      1* select a.sid,a.p1,a.p2,a.p3,b.file#,b.name
      from v$session_wait a,v$datafile b where a.p1=b.file# and a.sid=154
    13:50:01 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        509     158244          1        509 /<some_path_here>/undotbs_45.dbf
    13:50:03 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        509     157566          1        509 /<some_path_here>/undotbs_45.dbf
    13:50:07 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        509     157016          1        509 /<some_path_here>/undotbs_45.dbf
    13:50:11 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        509     156269          1        509 /<some_path_here>/undotbs_45.dbf
    13:50:16 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        508     167362          1        508 /<some_path_here>/undotbs_44.dbf
    13:50:58 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        508     166816          1        508 /<some_path_here>/undotbs_44.dbf
    13:51:02 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        508     165024          1        508 /<some_path_here>/undotbs_44.dbf
    13:51:14 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        507     159019          1        507 /<some_path_here>/undotbs_43.dbf
    13:52:09 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        506     193598          1        506 /<some_path_here>/undotbs_42.dbf
    13:52:12 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        506     193178          1        506 /<some_path_here>/undotbs_42.dbf
    13:52:14 SQL>
    Regards,
    Amardeep Sidhu
    Edited by: Amardeep Sidhu on Jun 9, 2010 2:26 PM
    Replaced a few paths with <some_path_here> ;)

  • Export with consistent=y raise snapshot too old error.

    Hi,
    Oracle version: 9.2.0.4
    It raises
    EXP-00056: ORACLE error 1555 encountered
    ORA-01555: snapshot too old: rollback segment number 2 with name "_SYSSMU2$" too small
    when I do an export with the consistent=y option.
    And I find below information in alert_orcl.log
    Wed Apr 20 07:50:01 2005
    SELECT /*NESTED_TABLE_GET_REFS*/ "XXX"."TABLENAME".* FROM
    "XXX"."TABLENAME"
    ORA-01555 caused by SQL statement below (Query Duration=1140060307
    sec, SCN: 0x0000.00442609):
    The undo parameters:
    undo_retention=10800 (default value)
    undo_retention is larger than the time the export runs (only 1800 seconds), so I think the default value is enough.
    undo_management=auto (default value)
    Maybe the rollback tablespace is too small (about 300M)? But I think Oracle should increase the size of the datafile in this mode. Is that right?
    undo_tablespace=undotbs1
    undo_suppress_errors=false
    I think I must miss something.
    Any suggestions will be very appreciated.
    Thanks.
    wy

    UNDO_RETENTION is a request, not a mandate. If your UNDO tablespace is too small, Oracle may have to discard UNDO segments before UNDO_RETENTION is reached.
    How much UNDO is your database generating every second?
    SELECT stat.undoblks * param.value / 1024 / 1024 / 10 / 60 undo_mb_per_sec
      FROM v$undostat  stat,
           v$parameter param
    WHERE param.name = 'db_block_size'
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
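    If that rate times the expected export duration exceeds what the undo tablespace can hold, the fix is more undo space. A sketch of how to check and grow it (the datafile path and tablespace name are hypothetical):
    SELECT file_name, bytes/1024/1024 AS mb, autoextensible
      FROM dba_data_files
     WHERE tablespace_name = 'UNDOTBS1';
    ALTER DATABASE DATAFILE '/u02/oradata/orcl/undotbs01.dbf' RESIZE 1000M;
    Note that even with undo_management=auto, Oracle will not grow an undo datafile on its own unless AUTOEXTEND is enabled for it, which answers the question above about the datafile growing automatically.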

  • Dreaded ORA 1555 and EXP-00056 and LOB Corruption

    I am on Oracle 10.2.0.4 on HP UNIX 11.2.
    I have started getting
    EXP-00056: ORACLE error 1555 encountered
    ORA-01555: snapshot too old: rollback segment number with name "" too small
    ORA-22924: snapshot too old
    I have looked into various causes and still have no clue why it is happening:
    1. undo_retention is set to 5 hours (converted to seconds). My export backup lasts for 1.5 to 2 hours.
    2. My undo tablespace size is 28GB. Looking at the undo advisor, I only need 5GB.
    3. Yes, the table where the error message consistently occurs has a LOB (BLOB) column.
    I did check for LOB corruption as per the Metalink note (script shown below) and it gives me these messages:
    rowid AABV8QAAJAAGAn6AAM is corrupt. ORA-01403: no data found
    rowid AABV8QAAKAAAcaAAAX is corrupt. ORA-01403: no data found
    rowid AABV8QAAKAAAcamABr is corrupt. ORA-01403: no data found
    rowid AABV8QAAKAAAcamABu is corrupt. ORA-01403: no data found
    I do not know what to make of these messages, because when I look in the table where the error occurs:
    select pr_id, col1, col2 from pr where rowid in (above rowids);
    there are no rows. What does this mean? Why is it corruption?
    Below is the script used to find LOB corruption…
    declare
        len    number;
        c      varchar2(10);
        charpp number := 8132/2;  -- characters read per "page" of the LOB
    begin
        -- Read every LOB in the table page by page; any read error is
        -- reported with its rowid as a potentially corrupt LOB.
        for r in (select rowid rid, dbms_lob.getlength(ls_value) len
                  from pr_addtl_data) loop
            if r.len is not null then
                for page in 0 .. r.len / charpp loop
                    begin
                        select dbms_lob.substr(ls_value, 1, 1 + (page * charpp))
                          into c
                          from pr_addtl_data
                         where rowid = r.rid;
                    exception
                        when others then
                            dbms_output.put_line('rowid ' || r.rid || ' is corrupt. ' || sqlerrm);
                            commit;
                    end;
                end loop;
            end if;
        end loop;
    end;
    /

    user632098 wrote:
    Thanks, but the script in my thread is one supplied by Oracle to check for LOB corruption. It has nothing to do with the export error.
    What I am asking is: if there is no row for a rowid (ORA-01403), does that really mean there is corruption? If I were getting an exception like ORA-01555 when running this script, that would mean there is LOB corruption.
    ORA-01555 has NOTHING to do with "corruption", LOB-related or otherwise!
    The most likely cause is that some session is doing DML against the table and doing "frequent" COMMITs, while some (other?) session is doing a SELECT against the same table.
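    On the LOB side specifically, consistent reads of LOB columns use the LOB segment's own version space (PCTVERSION, or RETENTION from 9i on) rather than ordinary undo, so a commonly suggested mitigation (a sketch; the table and column names come from the post above) is to give the LOB more version space:
    ALTER TABLE pr_addtl_data MODIFY LOB (ls_value) (PCTVERSION 20);
    That lets long-running readers such as a consistent export still find older LOB versions instead of failing with ORA-22924/ORA-01555.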

  • Is there any issues to use exp in a live production instance?

    Hi all,
    I'll migrate from 9.0.1 on HP-UX to 10gR2 on Red Hat AS E4.
    To prepare my upgrade, I need the whole structure from our 9.0.1 instance.
    Can I, without any danger/problem, use exp to export the structure of our live instance while it's actively running? (I mean while all our users are connected.)
    I suppose it's OK, but I'm trying to avoid all problems...
    Thanks in advance; your advice is greatly appreciated.

    I'm trying to avoid all problems...
    Well, it depends on what you consider a problem. Assuming people are doing updates, an export on a running DB means that you'll lose some changes.
    Another "problem" can be consistency: if you don't use CONSISTENT=Y you risk inconsistencies; if you set it, then you may need a lot of undo segments.
    Apart from that, it should work...
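    Since only the structure is needed here, a sketch that sidesteps the consistency question altogether (the account and file names are hypothetical) is to export without row data:
    exp system/manager owner=app_user rows=n file=structure.dmp log=structure.log
    With rows=n the dump contains only DDL (tables, indexes, grants, etc.), so there is no row data to be transactionally inconsistent, and the run is very light on a live instance.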

  • Exp XML Schema (.xsd) to another schema on same instance

    Hello,
    I hope someone on this forum can help me or point me in the right direction. I want to export a schema which contains a table (EQUIPMENT) that has an XMLType column defined against a registered .xsd. I would then like to restore this schema to another schema on the same instance (development instance) and also onto another db instance.
    I have been trying to do this with the help of "Chapter 30 Importing and Exporting XMLType Tables" from the Oracle XML DB Developer's Guide 10g Release 2 (10.2), unfortunately without success. At the end of this message I have included sample error messages that I am receiving for the creation of TYPES, which is causing my import to fail.
    I cannot find any examples on the web of how to achieve an exp from one schema and an imp into another, either on the same instance or another one.
    DDL for my table is :
    create table EQUIPMENT (
      ID number(7) not null,
      STATUSID number(7) not null,
      ATTRIBUTEDATA xmltype
    )
    xmltype column ATTRIBUTEDATA xmlschema "EQUIPMENT.xsd" element EQUIPMENT_XML
    tablespace TBS_DATA1;
    Three test runs as follows:
    1. Using an empty U2 schema, I register the .xsd file. Then I try to import (FROMUSER - TOUSER imp mode) my dump file, which leads to the following error:
    IMP-00017: following statement failed with ORACLE error 2304:
    "CREATE TYPE "MTA1440_T" TIMESTAMP '2007-11-14:14:42:16' OID '3EE57B10307317"
    "74E044080020C94102' AS OBJECT ("SYS_XDBPD$" "XDB"."XDB$RAW_LIST_T",""
    "ATTRIBUTE_01" NUMBER(38),"ATTRIBUTE_02" VARCHAR2(4000 CHAR),"ATTRIBUTE_03" "
    "VARCHAR2(4000 CHAR),"ATTRIBUTE_04" NUMBER(38),"ATTRIBUTE_05" VARCHAR2(4000 "
    "CHAR))FINAL INSTANTIABLE "
    IMP-00003: ORACLE error 2304 encountered
    NOTE: Even though import failed, I now see a new TYPE created called "MTA1526_T".
    2. If I try to create the TYPE as is from the error above I get the following error:
    SQL> CREATE TYPE MTA1440_T
    2 OID '3EE57B1030731774E044080020C94102'
    3 AS OBJECT (SYS_XDBPD$ XDB.XDB$RAW_LIST_T,
    4 ATTRIBUTE_01 NUMBER(38),
    5 ATTRIBUTE_02 VARCHAR2(4000 CHAR),
    6 ATTRIBUTE_03 VARCHAR2(4000 CHAR),
    7 ATTRIBUTE_04 NUMBER(38),
    8 ATTRIBUTE_05 VARCHAR2(4000 CHAR)) FINAL INSTANTIABLE;
    9 /
    CREATE TYPE MTA1440_T
    ERROR at line 1:
    ORA-02304: invalid object identifier literal
    3. So now I create the "MTA1440_T" type without the OID value and retry the import.
    IMP-00061: Warning: Object type "U2"."MTA1440_T" already exists with a different identifier
    "CREATE TYPE "MTA1440_T" TIMESTAMP '2007-11-14:14:42:16' OID '3EE57B10307317"
    "74E044080020C94102' AS OBJECT ("SYS_XDBPD$" "XDB"."XDB$RAW_LIST_T",""
    "ATTRIBUTE_01" NUMBER(38),"ATTRIBUTE_02" VARCHAR2(4000 CHAR),"ATTRIBUTE_03" "
    "VARCHAR2(4000 CHAR),"ATTRIBUTE_04" NUMBER(38),"ATTRIBUTE_05" VARCHAR2(4000 "
    "CHAR))FINAL INSTANTIABLE "
    Questions from me:
    A. Can I export TYPES only, as suggested by the online documentation?
    B. If importing onto the same instance in another schema, surely the OID for the TYPE will always fail; so why can the import not create the required TYPE name itself during the import?
    C. Should I use global TYPES and register the .xsd globally for all schemas in an instance to validate against? Would this prevent errors on an import?
    I would appreciate any insight any one could provide me. Many thanks in advance.
    Dom

    Hi Guys,
    Thank you all for the replies. I am disappointed to hear that 10g does not support exp/imp of schema-based structured XML. However, I am a little confused, or should I say misled, by the documentation.
    Here is an extract from chapter "30 - Importing and Exporting XMLType Tables" from the Oracle XML DB 10g Developers Guide documentation:
    "..... Oracle Database supports the import and export of XML schema-based XMLType tables. An XMLType table depends on the XML schema used to define it. Similarly the XML schema has dependencies on the SQL object types created or specified for it. Thus, exporting a user with XML schema-based XMLType tables, consists of the following steps:
    1. Exporting SQL Types During XML Schema Registration. As a part of the XML
    schema registration process .....
    2. Exporting XML Schemas. After all the types are exported, XML schemas are
    exported as XML text .....
    3. Exporting XML Tables. The next step is to export the tables. Export of each table consists of two steps:
    A. The table definition is exported as a part of the CREATE TABLE statement....
    B. The data in the table is exported as XML text. Note that data for out-of-line
    tables is.....
    From this documentation I was under the impression that exp/imp of XML schema-based XMLType tables was supported.
    Regarding the backup mechanism/strategy for database schemas containing tables with schema-based XMLTypes, what would you recommend as the best online backup method to use - tablespace backups?
    What I need to be able to do in day-to-day work is to take a copy of a customer's UAT or production database schema and apply it to a dev or test db instance here for bug testing etc. Do you have any advice on how best to achieve this without the use of exp/imp, given that the schema will contain schema-based XMLType tables?
    Thank you all for your assistance so far.

  • EXP/IMP does not preserve MONITORING on tables

    Consider the following (on 8.1.7):
    1. First, create a new user named TEST.
    SQL> CONNECT SYSTEM/MANAGER
    Connected.
    SQL> CREATE USER TEST IDENTIFIED BY TEST;
    User created.
    SQL> GRANT CONNECT, RESOURCE TO TEST;
    Grant succeeded.
    2. Connect as that user, create a table named T and enable monitoring on the table.
    SQL> CONNECT TEST/TEST
    Connected.
    SQL> CREATE TABLE T(X INT);
    Table created.
    SQL> ALTER TABLE T MONITORING;
    Table altered.
    SQL> SELECT TABLE_NAME, MONITORING FROM USER_TABLES WHERE TABLE_NAME = 'T';
    TABLE_NAME                     MON
    T                              YES
    3. Export the schema using EXP.
    SQL> HOST EXP OWNER=TEST FILE=TEST.DMP CONSISTENT=Y COMPRESS=N
    4. Drop and recreate the user.
    SQL> CONNECT SYSTEM/MANAGER
    Connected.
    SQL> DROP USER TEST CASCADE;
    User dropped.
    SQL> CREATE USER TEST IDENTIFIED BY TEST;
    User created.
    SQL> GRANT CONNECT, RESOURCE TO TEST;
    Grant succeeded.
    5. Finally, connect as the user, and import the schema.
    SQL> CONNECT TEST/TEST
    Connected.
    SQL> HOST IMP FROMUSER=TEST TOUSER=TEST FILE=TEST.DMP
    Now monitoring is no longer enabled:
    SQL> SELECT TABLE_NAME, MONITORING FROM USER_TABLES WHERE TABLE_NAME = 'T';
    TABLE_NAME                     MON
    T                              NO
    Is this behaviour documented anywhere?
    Are there any IMP/EXP options that will preserve MONITORING?

    Apparently it's a non-public bug (#809007) in 8.1.7 which should be fixed in 9i.
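    In the meantime, a workaround sketch is simply to re-enable monitoring after the import, per table or (on 9i and later, where DBMS_STATS offers it) for the whole schema at once:
    SQL> ALTER TABLE T MONITORING;
    SQL> EXEC DBMS_STATS.ALTER_SCHEMA_TAB_MONITORING('TEST', TRUE);
    The DBMS_STATS call flips the MONITORING flag for every table in the schema in one shot.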

  • Data Pump Consistent parameter?

    Hi All,
    Is there any CONSISTENT parameter in Data Pump as there is in exp/imp?
    Because we are using Data Pump for backups and want to disable the consistent behaviour.
    Please let me know how I can disable the consistent parameter in Data Pump.
    Thanks

    if it's not a backup method then how do you do a logical full database backup????
    From my thinking it's called a logical DB backup (when you are using exp or expdp)
    There are many reasons that export shouldn't be used as a backup method:
    1. It's very slow to do an export on a huge database (which you are already experiencing); the import will take much longer.
    2. You only have a 'snapshot' of your database at the time of backup; in the event of a disaster, you will lose all data changes after the backup.
    3. It has a performance impact on a busy database (which you are also experiencing).
    Other than all these, if you turn CONSISTENT to N, your 'logical' backup is logically corrupted.
