ORA-1555 while taking export

Hi Guys,
I am getting the below error while taking the export:
EXP-00008: ORACLE error 1555 encountered
ORA-01555: snapshot too old: rollback segment number 3 with name "_SYSSMU3$" too small
EXP-00056: ORACLE error 1555 encountered
ORA-01555: snapshot too old: rollback segment number 3 with name "_SYSSMU3$" too small
EXP-00000: Export terminated unsuccessfully
SQL> select SEGMENT_NAME,status,max_extents from dba_rollback_segs where tablespace_name='ESUT001'
and status='ONLINE';
SEGMENT_NAME STATUS MAX_EXTENTS
_SYSSMU1$                      ONLINE                 32765
_SYSSMU2$                      ONLINE                 32765
_SYSSMU3$                      ONLINE                 32765
_SYSSMU4$                      ONLINE                 32765
_SYSSMU5$                      ONLINE                 32765
_SYSSMU6$                      ONLINE                 32765
_SYSSMU7$                      ONLINE                 32765
_SYSSMU8$                      ONLINE                 32765
_SYSSMU9$                      ONLINE                 32765
_SYSSMU10$                     ONLINE                 32765
_SYSSMU11$                     ONLINE                 32765
_SYSSMU12$                     ONLINE                 32765
_SYSSMU13$                     ONLINE                 32765
_SYSSMU14$                     ONLINE                 32765
_SYSSMU15$                     ONLINE                 32765
_SYSSMU16$                     ONLINE                 32765
_SYSSMU17$                     ONLINE                 32765
_SYSSMU18$                     ONLINE                 32765
Could you please suggest how I can resolve this issue.
Thanks
Sumit

You'd better do the export during a quiet hour of your database.
Otherwise, use the CONSISTENT=N flag.
Increasing your undo space will also help.
CONSISTENT – [N] Specifies the SET TRANSACTION READ ONLY statement for export, ensuring data consistency. This option should be set to "Y" if activity is anticipated while the exp command is executing. If "Y" is set, confirm that there is sufficient undo segment space to avoid the export session getting the ORA-1555 snapshot too old error.
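For illustration, a minimal sketch of both suggestions (the query and file names are illustrative, not from the original post):

SELECT name, value
FROM v$parameter
WHERE name IN ('undo_management', 'undo_tablespace', 'undo_retention');

exp system/<password> FULL=Y CONSISTENT=N FILE=full.dmp LOG=full.log

The query shows the undo configuration you have to work with; CONSISTENT=N avoids the single long-running read-consistent transaction, at the price of tables being exported as of slightly different points in time.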

Similar Messages

  • Error while taking export

    Hi,
    I am taking an export of my database. My database is 9i, and I am taking the export from a 10g installation...
    This is the error I am getting:
    [oracle@oracle system]$ exp system@db file=fulldatabase.dmp log=fulldb.log full=y
    Export: Release 10.2.0.2.0 - Production on Fri Mar 2 09:50:53 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Password:
    EXP-00056: ORACLE error 6550 encountered
    ORA-06550: line 1, column 41:
    PLS-00302: component 'SET_NO_OUTLINES' must be declared
    ORA-06550: line 1, column 15:
    PL/SQL: Statement ignored
    EXP-00000: Export terminated unsuccessfully

    Use the 9i client to make the export, then use the 10g client to perform the import. This error is produced because the 10g client tries to find some objects that are defined only in 10g.
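    If it helps, the sequence would look something like this (the Oracle home paths and connect strings are illustrative only; the point is which home's binary runs):

    /u01/app/oracle/product/9.2.0/bin/exp system@db file=fulldatabase.dmp log=fulldb.log full=y
    /u01/app/oracle/product/10.2.0/bin/imp system@db10g file=fulldatabase.dmp log=fullimp.log full=y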

  • Buffer size while taking export of database

    Hi,
    I am taking an export backup of my Oracle database. When running exp, the default buffer size is shown as 4096. How do I determine the optimum buffer size so that I can reduce the export time?
    Thanks in advance..
    Regards,
    Jibu

    Jibu wrote:
    Hi,
    I am taking an export backup of my Oracle database. When running exp, the default buffer size is shown as 4096. How do I determine the optimum buffer size so that I can reduce the export time?
    Thanks in advance..
    Regards,
    Jibu
    In addition to Sybrand's comments about alternatives, I'd like to add that just as a general class of problem, this is not the kind of thing I'd waste a lot of time on trying to come up with some magic optimal number. With exp and imp, I generally make the buffer about 10 times the default and forget it. This is the kind of thing where you very quickly reach a point of diminishing returns in terms of time spent "optimizing" the process vs. actual worthwhile gain in run time.
    By "worthwhile" gain, I mean this . . .
    In terms of a batch process like exp,
    -- is a 50% reduction in run time worthwhile when your starting point is a 1 hour run time?
    -- is a 50% reduction in run time worthwhile when your starting point is a 5 minute run time?
    -- is a 50% reduction in run time worthwhile when your starting point is a 30 second run time?
    -- how about if the run is scheduled for 2:00 am when there are no other processes and no customers on the database?
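    Following that rule of thumb, a sketch (credentials and file names are placeholders; BUFFER is in bytes and applies to conventional-path exports):

    exp system/<password> full=y buffer=40960 file=full.dmp log=full.log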

  • ORA-3113 while running export (first run ok, second run fails)

    Hi,
    I'm running 10g (10.1.0.2.0) on Gentoo Linux.
    After a restart of the database I can do an EXP for any user without errors. But when I repeat the same command after the export I get the following error:
    . . exporting table WEB_SESSION_DATA 0 rows exported
    . exporting synonyms
    EXP-00008: ORACLE error 3113 encountered
    ORA-03113: end-of-file on communication channel
    EXP-00000: Export terminated unsuccessfully
    It's always the same error on the same task (exporting synonyms), but always just for the second and any later export, while the first one is OK.
    I read through some forums and found that ORA-3113 is a generic error which just hides the real error. So I looked into the trace files and found:
    ksedmp: internal or fatal error
    ORA-07445: exception encountered: core dump [jox_lookup_known_object()+413] [SIGSEGV] [Address not mapped to object] [0x1AD14034] [] []
    Current SQL statement for this session:
    SELECT SYNNAM, DBMS_JAVA.LONGNAME(SYNNAM), DBMS_JAVA.LONGNAME(SYNTAB), TABOWN, TABNODE, PUBLIC$, SYNOWN, SYNOWNID, TABOWNID, SYNOBJNO FROM SYS.EXU9SYN WHERE SYNOWNID = :1 ORDER BY SYNTIME
    If I call this SQL from sqlplus I get the same ORA-3113, so this seems to be the cause for the export failure.
    When I remove the DBMS_JAVA.LONGNAME calls from the statement, it runs fine. It also runs fine when I add an "and 1=0" to the EXU9SYN view. But this will not really solve the problem, because then no synonyms get exported at all.
    I also checked all the synonyms and they are valid, the referenced tables exist and can be queried.
    Because we use Java stored procedures, I cannot remove the Java features in Oracle. I did reinstall them (rmjvm.sql and initjvm.sql), but this did not help.
    Does anyone have an idea what's happening here?
    Especially the "first run ok, second run fails" seems to be very strange, because DB objects including DBMS_JAVA should be ok, if the first export can be done. So what changes in the database during or after the first run?
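    For reference, the test statement with the DBMS_JAVA.LONGNAME calls replaced by the raw columns (a sketch derived from the query above; :1 is the owner-id bind variable) would be:

    SELECT SYNNAM, SYNNAM, SYNTAB, TABOWN, TABNODE, PUBLIC$, SYNOWN, SYNOWNID, TABOWNID, SYNOBJNO
    FROM SYS.EXU9SYN
    WHERE SYNOWNID = :1
    ORDER BY SYNTIME;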

    Maybe the description of bug 3953108 (ORA-7445 AND ORA-3113 DURING DIRECT FULL DATABASE EXPORT OF PUBLIC SYNONYMS) is helpful.
    Werner

  • Multiple conditions while taking export

    Hi,
    When I'm trying to take an export using multiple conditions:
    < EXP query='where branch_code=0130 and loan_date> 01-jan-2005' file=d:\yyy tables=xxxxx >
    I get the error 'failed to process parameters'.
    Can anybody help me to solve this?
    Regards

    Hi Satish,
    This is the exact query:
    EXP query='where branch_code=0130 and loan_date> 01-jan-2005' file=d:\yyy tables=xxxxx userid=zz/xxx@test
    Server and client are 9i.
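    A common cause of 'failed to process parameters' here is the OS shell stripping the quotes inside the QUERY string, and the date literal also needs its own quotes and an explicit format mask. A sketch of a PARFILE workaround (untested; it assumes loan_date is a DATE and branch_code is a character column, so adjust the literals to your datatypes):

    userid=zz/xxx@test
    tables=xxxxx
    file=d:\yyy.dmp
    query="where branch_code='0130' and loan_date > to_date('01-jan-2005','DD-MON-YYYY')"

    Save that as, say, exp_query.par and run: exp parfile=exp_query.par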

  • ERROR WHILE SYSTEM EXPORT (BEK) ERROR: SAPSYSTEMNAME not in environment

    Hi all,
    We have started a migration of ECC 6.0 from Windows (MSSQL) to Linux (MaxDB).
    While taking the ABAP export, all the export packages ended with the same error.
    I have set the environment variables, but even so I am getting the same error. Here I paste the log of the 'TODIR' export package.
    All the export ABAP package jobs ended with the same error, so I pasted only one package log:
    E:\usr\sap\DE5\SYS\exe\run\R3load.exe: sccsid @(#) $Id: //bas/700_REL/src/R3ld/R3load/R3ldmain.c#21 $ SAP
    E:\usr\sap\DE5\SYS\exe\run\R3load.exe: version R7.00/V1.4 [UNICODE]
    Compiled May 3 2010 23:35:53
    E:\usr\sap\DE5\SYS\exe\run\R3load.exe -e TODIR.cmd -datacodepage 4103 -l TODIR.log -stop_on_error (DB) INFO: connected to DB (DB)
    INFO: Export without hintfile (RD)
    ERROR: unexpected end of file "F:\mexico_latest_export\ABAP\DATA\TODIR.TOC" (WTF)
    INFO: expected error, don't panic ... (GSI)
    INFO: dbname = "DE5MEXICO " (GSI)
    INFO: vname = "MSSQL " (GSI)
    INFO: hostname = "MEXICO " (GSI)
    INFO: sysname = "Windows NT" (GSI)
    INFO: release = "5.2" (GSI)
    INFO: version = "3790 Service Pack 2" (GSI)
    INFO: machine = "2x Intel 80686 (Mod 23 Step 8)"
    (BEK) ERROR: SAPSYSTEMNAME not in environment
    E:\usr\sap\DE5\SYS\exe\run\R3load.exe: job finished with 1 error(s)
    E:\usr\sap\DE5\SYS\exe\run\R3load.exe: END OF LOG: 2011051008
    Can anyone please help me in solving this issue.
    Thanks in advance.
    Vardhan

    Hi Markus,
    Thanks for the reply.
    I am taking the export as Administrator.
    Could you please tell me how to define SAPSYSTEMNAME?
    I can define environment variables, but what would be the format for defining the SAPSYSTEMNAME variable?
    Thanks!
    Vardhan.
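    For what it's worth, SAPSYSTEMNAME is just an ordinary environment variable holding the SID; a sketch for Windows, assuming the SID is DE5 as the R3load paths above suggest:

    set SAPSYSTEMNAME=DE5

    Set it system-wide (Control Panel > System > Environment Variables) or in the shell that launches R3load, so that the R3load process inherits it.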

  • R3load export of table REPOSRC with LOB col - error ORA-1555 and ORA-22924

    Hello,
    I have tried to export data from our production system for a system copy and then an upgrade test. While exporting, the R3load job reported an error on table REPOSRC, which has the LOB column DATA. I have pasted below the conversation in which I requested SAP to help; they said it comes under consulting support. The problem is in 2 rows of the table.
    But I would like to know: if I delete these 2 rows and then copy them from our development system to the production system at the Oracle level, will there be any problem with the upgrade or operation of these programs, and will it have any license complications if I do it?
    Regards
    Ramakrishna Reddy
    __________________________ SAP SUPPORT Conversation_____________________________________________________
    Hello,
    we are performing a data export for a system copy of our production
    system; during the export, the R3load job gave an error as follows:
    R3LOAD Log----
    Compiled Aug 16 2008 04:47:59
    /sapmnt/DB1/exe/R3load -datacodepage 1100 -
    e /dataexport/syscopy/SAPSSEXC.cmd -l /dataexport/syscopy/SAPSSEXC.log -stop_on_error
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): WE8DEC
    (DB) INFO: Export without hintfile
    (NT) Error: TPRI_PAR: normal NameTab from 20090828184449 younger than
    alternate NameTab from 20030211191957!
    (SYSLOG) INFO: k CQF :
    TPRI_PAR&20030211191957&20090828184449& rscpgdio 47
    (CNV) WARNING: conversion from 8600 to 1100 not possible
    (GSI) INFO: dbname = "DB120050205010209
    (GSI) INFO: vname = "ORACLE "
    (GSI) INFO: hostname
    = "dbttsap "
    (GSI) INFO: sysname = "AIX"
    (GSI) INFO: nodename = "dbttsap"
    (GSI) INFO: release = "2"
    (GSI) INFO: version = "5"
    (GSI) INFO: machine = "00C8793E4C00"
    (GSI) INFO: instno = "0020111547"
    (DBC) Info: No commits during lob export
    DbSl Trace: OCI-call 'OCILobRead' failed: rc = 1555
    DbSl Trace: ORA-1555 occurred when reading from a LOB
    (EXP) ERROR: DbSlLobGetPiece failed
    rc = 99, table "REPOSRC"
    (SQL error 1555)
    error message returned by DbSl:
    ORA-01555: snapshot too old: rollback segment number with name "" too
    small
    ORA-22924: snapshot too old
    (DB) INFO: disconnected from DB
    /sapmnt/DB1/exe/R3load: job finished with 1 error(s)
    /sapmnt/DB1/exe/R3load: END OF LOG: 20100816104734
    END of R3LOAD Log----
    Then, as per note 500340, I changed the PCTVERSION of the LOB column
    DATA of table REPOSRC to 30, but I still get the error (see the sketch
    at the end of this thread). I have also added more space to PSAPUNDO
    and PSAPTEMP; still the same error.
    Then I ran the export as:
    exp SAPDB1/sap file=REPOSRC.dmp log=REPOSRC.log tables=REPOSRC
    exp log----
    dbttsap:oradb1 5> exp SAPDB1/sap file=REPOSRC.dmp log=REPOSRC.log
    tables=REPOSRC
    Export: Release 9.2.0.8.0 - Production on Mon Aug 16 13:40:27 2010
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit
    Production
    With the Partitioning option
    JServer Release 9.2.0.8.0 - Production
    Export done in WE8DEC character set and UTF8 NCHAR character set
    About to export specified tables via Conventional Path ...
    . . exporting table REPOSRC
    EXP-00056: ORACLE error 1555 encountered
    ORA-01555: snapshot too old: rollback segment number with name "" too
    small
    ORA-22924: snapshot too old
    Export terminated successfully with warnings.
    SQL> select table_name, segment_name, cache,
                nvl(to_char(pctversion),'NULL') pctversion,
                nvl(to_char(retention),'NULL') retention
         from dba_lobs
         where table_name = 'REPOSRC';
    TABLE_NAME | SEGMENT_NAME              | CACHE | PCTVERSION | RETENTION
    REPOSRC    | SYS_LOB0000014507C00034$$ | NO    | 30         | 21600
    please help to solve this problem.
    Regards
    Ramakrishna Reddy
    Dear customer,
    Thank you very much for contacting us at SAP global support.
    Regarding your issue would you please attach your ORACLE alert log and
    trace file to this message?
    Thanks and regards.
    Hello,
    Thanks for helping,
    I attached the alert log file. I have gone through it, but I could
    not find the corresponding ORA-01555 for table REPOSRC.
    Regards
    Ramakrishna Reddy
    +66 85835-4272
    Dear customer,
    I have found some previous issues with a similar symptom to your
    system's. I think this symptom is described in note 983230.
    As you can see, this symptom is mainly caused by ORACLE bug 5212539, and
    it should be fixed in 9.2.0.8, which is just your version. But although
    the fix for 5212539 is implemented, only the occurrence of new corruptions is
    avoided; the already existing ones will stay in the system regardless of the patch.
    The reason why metalink 452341.1 was created is bug 5212539, since this
    is the most common software-caused LOB corruption in recent times.
    Basically, any system that was running without a patch for bug 5212539 at some time in the past could potentially be affected by the problem.
    In order to be sure about bug 5212539, can you please verify whether the
    affected LOB really is a NOCACHE LOB? You can do this as described in
    the mentioned note #983230. If yes, then there are basically only two
    options left:
    -> You apply a backup to the system that does not contain these
    corruptions.
    -> In case a good backup is not available, it would be possible to
    rebuild the table including the LOB segment, with possible data loss. Since this is beyond the scope of support, this would have to be
    done via remote consulting.
    Any further question, please contact us freely.
    Thanks and regards.
    Hello,
    Thanks for the help and support.
    I have gone through note 983230 and metalink 452341.1,
    and I have run the script and found that there are 2 corrupted rows in
    the table REPOSRC. These rows belong to the standard SAP programs
    MABADRFENTRIES & SAPFGARC.
    To reconfirm, I tried to display them in our development system
    and production system. The development system shows the source code in
    SE38, but in the production system it goes to the short dump DBIF_REPO_SQL_ERROR.
    So is it possible to delete these 2 rows and update them ourselves from our
    development system at the Oracle level? Will it have any impact on SAP
    operation or upgrades in the future?
    Regards
    Ramakrishna Reddy

    Hello, we have solved the problem.
    To help someone with the same error, what we have done is:
    1.- Wait until all the processes have finished and the export is stopped.
    2.- Start up SAP.
    3.- Go to SE14 and look up the tables. Create the tables in the database.
    4.- Stop SAP.
    5.- Retry the export (if you did all the steps with sapinst running but the dialogue window still on the screen), or begin sapinst again with the option "continue with the old options".
    Regards to all.
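    For reference, the PCTVERSION change discussed earlier in this thread would look something like the following sketch (the owner SAPDB1 is inferred from the exp command above):

    ALTER TABLE SAPDB1.REPOSRC MODIFY LOB (DATA) (PCTVERSION 30);

    On 9i with automatic undo management you could alternatively put the LOB under retention-based versioning:

    ALTER TABLE SAPDB1.REPOSRC MODIFY LOB (DATA) (RETENTION);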

  • ORA-1555 during exports and imports: possible causes?

    From my understanding: I know that this error can occur due to the undo retention being sized too small; or rather, I should put it that increasing this parameter should help fix the issue.
    What's not clear is below:
    Qn. Is it possible that ORA-1555 errors can occur during an 'import' even if no other sessions are connected and performing any transactions/DMLs?
    Qn. Also, why does an ORA-1555 occur during an 'export'? Is it for the same reasons, i.e. there could be DMLs occurring?

    Hello,
    About your first question:
    Qn. Is it possible that ORA-1555 errors can occur during an 'import' even if no other sessions are connected and performing any transactions/DMLs?
    I've never got this error during import, but I always take care to have enough room in the UNDO tablespace.
    With classical import you have a commit after each table's import (by default), and a commit after each buffer of rows if COMMIT=Y, so as to use less space in the rollback segment.
    With Data Pump, I often decrease the undo_retention parameter before importing so as to use less space in the UNDO tablespace.
    About the second question:
    Qn. Also, why does an ORA-1555 occur during an 'export'? Is it for the same reasons, i.e. there could be DMLs occurring?
    To get a consistent image of the exported data with the classical export you may use the parameter CONSISTENT=Y, while with Data Pump you may use the FLASHBACK_TIME parameter (so the undo_retention should be large enough when exporting).
    Both use the undo entries, so I imagine it's possible to get some error (maybe ORA-01555) if you don't have enough room in your UNDO tablespace.
    It's possible (thanks to the rollback segments) to have concurrent DML on the database while exporting.
    Anyway, from my point of view, while exporting or importing, if you have enough space in your UNDO tablespace and a correct undo_retention setting (not too large when importing, not too small when exporting), it should be fine.
    Hope this helps.
    Best regards,
    Jean-Valentin
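    To make both knobs concrete, a sketch (values, names, and the timestamp are illustrative; on a real shell the FLASHBACK_TIME quoting usually goes into a parfile):

    ALTER SYSTEM SET undo_retention = 7200 SCOPE = BOTH;

    exp system/<password> FULL=Y CONSISTENT=Y FILE=full.dmp LOG=full.log

    expdp system/<password> FULL=Y DUMPFILE=full.dmp LOGFILE=full.log FLASHBACK_TIME="TO_TIMESTAMP('2010-08-16 13:00:00','YYYY-MM-DD HH24:MI:SS')"

    The first widens the undo window, the second makes classical export read-consistent across the whole run, and the third pins Data Pump to a single point in time.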

  • ORA-00604 error while taking tkprof of a trace file

    Sorry, I am giving the full error but omitting exact table names.
    Hi,
    I have an error while taking tkprof of a trace file.
    I gave the following command:
    tkprof <source.trc> <file.prc> sys=no sort=exeela,fchela,prsela explain= /
    The error is:
    Error in create table of EXPLAIN PLAN table : unix_session_user.prof$paln_table
    ORA-00604: error occurred at recursive SQL level 1
    ORA-20001: Step-6:DDL
    Event Security. You are not permitted to perform the requested structural
    changes to PROF (TABLE)
    Event triggered : CREATE
    ora_login_user
    (session_user) : unix_session_user(dummy)
    Search : select count(*) from
    tabl(dummy table name) where obj_name like '%\%%' escape '\' and obj_type =
    'TABLE' and obj_type = 'USER' and ( event_CREATE = 'Y' or status =
    'Override')
    ORA-06512: at line 162
    ORA-06510: PL/SQL: unhandled
    user-defined exception
    EXPLAIN PLAN option disabled.
    I searched for the error, and in an Oracle forum I found a solution: http://forums.oracle.com/forums/thread.jspa?threadID=844287&tstart=0
    But after giving the table option it gives the same error:
    tkprof <source.trc> <file.prc> sys=no sort=exeela,fchela,prsela table=old_schema.plan_table explain= /
    It again gave the same error.
    In both cases it gives elapsed time results, library cache misses etc., but before giving this it throws the ORA-00604 error as stated above.
    Then I corrected the tkprof statement again:
    tkprof <source.trc> <file.prc> sys=no sort=exeela,fchela,prsela table=new_schema.plan_table explain= /
    (the schema name used here is a dummy schema name).
    My question is: did this error come because we had insufficient privileges in old_schema, privileges that we do have in new_schema?
    My database version is 9.2.0.4.0.
    Thanks in advance

    Please post the full error message here; there should be lines with ORA-00604 and then some other ORA errors as well.
    Are there any trace files generated during this error?
    And as you can see from the error description, you will probably have to contact Oracle Support in order to solve this case:
    oerr ora 00604
    00604, 00000, "error occurred at recursive SQL level %s"
    // *Cause:  An error occurred while processing a recursive SQL statement
    // (a statement applying to internal dictionary tables).
    // *Action: If the situation described in the next error on the stack
    // can be corrected, do so; otherwise contact Oracle Support.

  • Error ORA-02375 while trying to export/import JTF.JTF_PF_REPOSITORY table

    We have already created an SR. In the meantime, we are trying to see whether anyone else has come across this issue. Thanks.
    On : 11.2.0.3 version, Data Pump Import
    Error ORA-02375 while trying to import JTF.JTF_PF_REPOSITORY table
    We are getting the below error while performing the full db import:
    ORA-02375: conversion error loading table "JTF"."JTF_PF_REPOSITORY" partition "EBIZ"
    ORA-22337: the type of accessed object has been evolved
    ORA-02372: data for row: SYS_NC00040$ : 0X'8801FE000004AD0313FFFF0009198401190A434F4E4E454354'
    This issue is stopping our upgrade of the database from 10.2.0.4 to 11.2.0.3. It is very critical for us to get this resolved.

    Hi,
    It seems this is a character set issue between the source and target DB. Check this doc: Unable to Export Table WF_ITEM_ATTRIBUTE_VALUES due to errors ORA-02374, ORA-22337, and ORA-02372 (Doc ID 1522761.1)
    HTH
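    A quick way to compare the two databases' character sets (a sketch; run it on both source and target):

    SELECT parameter, value
    FROM nls_database_parameters
    WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');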

  • Dreaded ORA 1555 and EXP-00056 and LOB Corruption

    I am on Oracle 10.2.0.4 on HP-UX 11.2.
    I have started getting:
    EXP-00056: ORACLE error 1555 encountered
    ORA-01555: snapshot too old: rollback segment number with name "" too small
    ORA-22924: snapshot too old
    I have looked into various causes and still have no clue why it is happening:
    1. UNDO_RETENTION is set to 5 hours (converted to seconds). My export backup lasts for 1.5 to 2 hours.
    2. My undo tablespace size is 28GB. Looking at the undo advisor, I only need 5GB.
    3. Yes, the table where the error message consistently appears has a LOB (BLOB) column.
    I did check for LOB corruption as per the metalink note (script shown below) and it gives
    me these messages:
    rowid AABV8QAAJAAGAn6AAM is corrupt. ORA-01403: no data found
    rowid AABV8QAAKAAAcaAAAX is corrupt. ORA-01403: no data found
    rowid AABV8QAAKAAAcamABr is corrupt. ORA-01403: no data found
    rowid AABV8QAAKAAAcamABu is corrupt. ORA-01403: no data found
    I do not know what to make of these messages, because when I look in the table where
    the error occurs:
    select pr_id, col1, col2 from pr where rowid in (above rowids);
    there are no rows. What does this mean? Why is it corruption?
    Below is the script used to find LOB corruption…
    declare
      pag    number;
      len    number;
      c      varchar2(10);
      charpp number := 8132/2;  -- characters per LOB page
    begin
      for r in (select rowid rid, dbms_lob.getlength (LS_VALUE) len
                  from PR_ADDTL_DATA) loop
        if r.len is not null then
          -- read one character from every page of the LOB; a corrupt page
          -- raises an exception, which is reported together with the rowid
          for page in 0 .. r.len/charpp loop
            begin
              select dbms_lob.substr (LS_VALUE, 1, 1 + (page * charpp))
                into c
                from PR_ADDTL_DATA
               where rowid = r.rid;
            exception
              when others then
                dbms_output.put_line('rowid ' || r.rid || ' is corrupt. ' || sqlerrm);
                commit;
            end;
          end loop;
        end if;
      end loop;
    end;
    /

    user632098 wrote:
    Thanks; but the script in my thread is one supplied by Oracle to check for LOB corruption. It has nothing to do with the export error. What I am asking is: if there is no row on a page (ORA-1403), doesn't that mean there is no corruption? If I were getting an exception like ORA-1555 when running this script, that would mean there is LOB corruption.
    ORA-01555 has NOTHING to do with "corruption", LOB related or otherwise! The most likely cause is that some session is doing DML against the table & doing "frequent" COMMITs,
    while some (other?) session is doing SELECT against the same table.
  • Why am I still getting ORA-1555?

    Hi All
    I am getting ORA-1555 snapshot too old errors in my alerts.
    My undo tablespace is 16GB and undo_retention is set to 1 hour.
    When a cronjob starts expdp for one table, it throws ORA-1555 in the alert log, and expdp skips that table and moves ahead.
    I want to know: while my undo_retention is set to 1 hour and I have an undo tablespace of 16GB which is even set to autoextend... why am I getting this error?
    I believe I should not get this error, because if expdp requires more undo for this table, then it should have extended the undo tablespace, as it is allowed to do so.
    Any help on this would be appreciated.
    OS: Sun Solaris 10
    DB: 10.2.0.3
    Thanks
    aps

    Hi Oradba
    I forgot to mention: for the last two days we have been getting this error for one new table, which is very small in size compared to the big one I was talking about.
    I decided to test the PL/SQL method now, and I chose the new table which has given the 1555 error for the last two days.
    When I described the table to find the BLOB column to put into the PL/SQL procedure... I suddenly realized that this table doesn't have one.
    I described the table again to confirm this, and yes, there is no BLOB column in it, and for the last two days we have been getting this error. So I believe there is something more to this than the corrupt-BLOB story.
    What do you say?
    Thanks
    aps
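    One way to check whether the 1555s line up with undo pressure is v$undostat, which records ORA-01555 counts in ten-minute buckets (a sketch):

    SELECT begin_time,
           maxquerylen,   -- longest-running query (seconds) in the interval
           ssolderrcnt,   -- ORA-01555 occurrences in the interval
           nospaceerrcnt  -- failures to find undo space in the interval
      FROM v$undostat
     ORDER BY begin_time;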

  • ORA-39126 during an export of a partition via dbms_datapump

    Hi ,
    I did the export using Data Pump on the command line and everything went fine, but while exporting via dbms_datapump I got this:
    ORA-39126 during an export of a partition via dbms_datapump
    ORA-00920
    'SELECT FROM DUAL WHERE :1' P20060401
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 6228
    The procedure is:
    PROCEDURE pr_depura_bitacora
    IS
      l_job_handle NUMBER;
      l_job_state  VARCHAR2(30);
      l_partition  VARCHAR2(30);
      v_sql        VARCHAR2(2000);
    BEGIN
      -- Create a user-named Data Pump job to do a "table:partition-level" export
      -- Local
      select 'P' || to_char((select min(STP_LOG_DATE) from SAI_AUDITBITACORA), 'YYYYMM') || '01'
        into l_partition
        from user_tab_partitions
       where table_name = 'SAI_AUDITBITACORA'
         and rownum = 1;
      l_partition := rtrim(l_partition, ' ');
      l_job_handle := DBMS_DATAPUMP.OPEN(
        operation => 'EXPORT',
        job_mode  => 'TABLE',
        job_name  => 'EXPORT_ORACLENSSA');
      -- Schema filter
      DBMS_DATAPUMP.METADATA_FILTER(
        handle => l_job_handle,
        name   => 'SCHEMA_EXPR',
        value  => 'IN (''ORACLENSSA'')');
      DBMS_OUTPUT.PUT_LINE('Added filter for schema list');
      -- Table filter
      DBMS_DATAPUMP.METADATA_FILTER(
        handle => l_job_handle,
        name   => 'NAME_EXPR',
        value  => '=''SAI_AUDITBITACORA''');
      DBMS_OUTPUT.PUT_LINE('Added filter for table expression');
      -- Partition filter
      DBMS_DATAPUMP.DATA_FILTER(
        handle     => l_job_handle,
        name       => 'PARTITION_EXPR',
        value      => l_partition,
        table_name => 'SAI_AUDITBITACORA');
      DBMS_OUTPUT.PUT_LINE('Partition filter for schema list');
      DBMS_DATAPUMP.ADD_FILE(
        handle    => l_job_handle,
        filename  => 'EXP' || l_partition || '.DMP',
        directory => 'EXP_DATA_PUMP',
        filetype  => 1);
      DBMS_DATAPUMP.ADD_FILE(
        handle    => l_job_handle,
        filename  => 'EXP' || l_partition || '.LOG',
        directory => 'EXP_DATA_PUMP',
        filetype  => 3);
      DBMS_DATAPUMP.START_JOB(
        handle       => l_job_handle,
        skip_current => 0);
      DBMS_DATAPUMP.WAIT_FOR_JOB(
        handle    => l_job_handle,
        job_state => l_job_state);
      DBMS_OUTPUT.PUT_LINE('Job completed - job state = ' || l_job_state);
      DBMS_DATAPUMP.DETACH(handle => l_job_handle);
    END;
    I've already dropped and recreated the directory, granted read/write to public and to the user, granted create session, create table, create procedure, and exp_full_database to the user, restarted the database and the listener with the LD_LIBRARY_PATH variable pointing first to $ORACLE_HOME/lib, and added more space to the temporary tablespace.

    The basic problem is:
    Error: ORA 920
    Text: invalid relational operator
    Cause: A search condition was entered with an invalid or missing relational
    operator.
    Action: Include a valid relational operator such as =, !=, ^=, <>, >, <, >=, <=
    , ALL, ANY, [NOT] BETWEEN, EXISTS, [NOT] IN, IS [NOT] NULL, or [NOT]
    LIKE in the condition.
    Obviously this refers to the invalid statement 'SELECT FROM DUAL ...'. I also recommend that you contact Oracle Support, because it happens inside an Oracle-provided package.
    Werner
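    Reading the failed statement 'SELECT FROM DUAL WHERE :1', the PARTITION_EXPR filter value is substituted into a WHERE clause as-is, so it needs a relational operator rather than a bare partition name. A sketch of what the filter call might look like instead (untested; per the DBMS_DATAPUMP.DATA_FILTER documentation for PARTITION_EXPR):

    DBMS_DATAPUMP.DATA_FILTER(
      handle     => l_job_handle,
      name       => 'PARTITION_EXPR',
      value      => 'IN (''' || l_partition || ''')',
      table_name => 'SAI_AUDITBITACORA');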

  • Error while taking backup through RMAN in 10g XE

    While taking a backup through RMAN on an XE instance, an error comes out.
    The contents of the oxe_backup_current file are as below:
    XE Backup Log
    Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jul 6 15:49:51 2011
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    connected to target database: XE (DBID=2635631092)
    RMAN>
    echo set on
    RMAN> shutdown immediate;
    using target database control file instead of recovery catalog
    database closed
    database dismounted
    Oracle instance shut down
    RMAN> startup mount;
    connected to target database (not started)
    Oracle instance started
    database mounted
    Total System Global Area     805306368 bytes
    Fixed Size                     1261444 bytes
    Variable Size                209715324 bytes
    Database Buffers             591396864 bytes
    Redo Buffers                   2932736 bytes
    RMAN> configure retention policy to redundancy 2;
    old RMAN configuration parameters:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
    new RMAN configuration parameters:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
    new RMAN configuration parameters are successfully stored
    RMAN> configure controlfile autobackup format for device type disk clear;
    RMAN configuration parameters are successfully reset to default value
    RMAN> configure controlfile autobackup on;
    old RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP OFF;
    new RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    new RMAN configuration parameters are successfully stored
    RMAN> sql "create pfile=''/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/dbs/spfile2init.ora'' from spfile";
    sql statement: create pfile=''/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/dbs/spfile2init.ora'' from spfile
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of sql command on default channel at 07/06/2011 15:50:57
    RMAN-11003: failure during parse/execution of SQL statement: create pfile='/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/dbs/spfile2init.ora' from spfile
    ORA-27086: unable to lock file - already in use
    Linux Error: 11: Resource temporarily unavailable
    Additional information: 8
    Additional information: 16476
    RMAN> backup as backupset device type disk database;
    Starting backup at 06-JUL-11
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=102 devtype=DISK
    channel ORA_DISK_1: starting full datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    input datafile fno=00003 name=/usr/lib/oracle/xe/oradata/XE/sysaux.dbf
    input datafile fno=00005 name=/usr/lib/oracle/xe/oradata/XE/ftress_data_log01.dbf
    input datafile fno=00006 name=/usr/lib/oracle/xe/oradata/XE/ftress_data_lrg01.dbf
    input datafile fno=00001 name=/usr/lib/oracle/xe/oradata/XE/system.dbf
    input datafile fno=00009 name=/usr/lib/oracle/xe/oradata/XE/ftress_indx_log01.dbf
    input datafile fno=00010 name=/usr/lib/oracle/xe/oradata/XE/ftress_indx_lrg01.dbf
    input datafile fno=00002 name=/usr/lib/oracle/xe/oradata/XE/undo.dbf
    input datafile fno=00004 name=/usr/lib/oracle/xe/oradata/XE/users.dbf
    input datafile fno=00008 name=/usr/lib/oracle/xe/oradata/XE/ftress_data_sml01.dbf
    input datafile fno=00012 name=/usr/lib/oracle/xe/oradata/XE/ftress_indx_sml01.dbf
    input datafile fno=00011 name=/usr/lib/oracle/xe/oradata/XE/ftress_indx_mdm01.dbf
    input datafile fno=00007 name=/usr/lib/oracle/xe/oradata/XE/ftress_data_mdm01.dbf
    channel ORA_DISK_1: starting piece 1 at 06-JUL-11
    channel ORA_DISK_1: finished piece 1 at 06-JUL-11
    piece handle=/usr/lib/oracle/xe/app/oracle/flash_recovery_area/XE/backupset/2011_07_06/o1_mf_nnndf_TAG20110706T155057_718dw649_.bkp tag=TAG20110706T155057 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
    Finished backup at 06-JUL-11
    Starting Control File and SPFILE Autobackup at 06-JUL-11
    piece handle=/usr/lib/oracle/xe/app/oracle/flash_recovery_area/XE/autobackup/2011_07_06/o1_mf_s_755797849_718dwofy_.bkp comment=NONE
    Finished Control File and SPFILE Autobackup at 06-JUL-11
    RMAN> configure controlfile autobackup off;
    old RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    new RMAN configuration parameters:
    CONFIGURE CONTROLFILE AUTOBACKUP OFF;
    new RMAN configuration parameters are successfully stored
    RMAN> alter database open;
    database opened
    RMAN> delete noprompt obsolete;
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to redundancy 2
    using channel ORA_DISK_1
    Deleting the following obsolete backups and copies:
    Type                 Key    Completion Time    Filename/Handle
    Backup Set           3      06-JUL-11        
      Backup Piece       3      06-JUL-11          /usr/lib/oracle/xe/app/oracle/flash_recovery_area/XE/backupset/2011_07_06/o1_mf_nnndf_TAG20110706T133443_7184wr4d_.bkp
    Backup Set           4      06-JUL-11        
      Backup Piece       4      06-JUL-11          /usr/lib/oracle/xe/app/oracle/flash_recovery_area/XE/autobackup/2011_07_06/o1_mf_s_755789675_7184x7fp_.bkp
    deleted backup piece
    backup piece handle=/usr/lib/oracle/xe/app/oracle/flash_recovery_area/XE/backupset/2011_07_06/o1_mf_nnndf_TAG20110706T133443_7184wr4d_.bkp recid=3 stamp=755789684
    deleted backup piece
    backup piece handle=/usr/lib/oracle/xe/app/oracle/flash_recovery_area/XE/autobackup/2011_07_06/o1_mf_s_755789675_7184x7fp_.bkp recid=4 stamp=755789699
    Deleted 2 objects
    RMAN>
    Recovery Manager complete.
    RMAN error: See log for details.

    I think it is trying to overwrite an existing file. Either give it a different name or clean up the files in the preferred location.
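    For example, pointing the CREATE PFILE at a fresh path (the path here is illustrative) sidesteps the lock on the existing spfile2init.ora, as would deleting the leftover file from the previous run before the backup script reruns:

    RMAN> sql "create pfile=''/tmp/spfile2init.ora'' from spfile";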

  • Getting error while taking MAX DB trans log backup.

    Hi,
    I am getting an error while taking a trans log backup of the MaxDB database for archived logs through Data Protector, as below:
    [Critical] From: OB2BAR_SAPDBBAR@ttcmaxdb "MAX" Time: 08/19/10 02:10:41
    Unable to back up archive logs: no autolog medium found in media list
    But I am able to take complete data and incremental backups through Data Protector.
    I have already enabled autolog for the MaxDB database, and it is writing the log files directly to the HP-UX file system. Now I want to take a backup of these archived logs through Data Protector, i.e. through a trans log backup, so that once the trans log backup completes the archived logs on the file system are deleted and I don't have to delete them manually.
    Thanks,
    Subba

    Hi Lars,
    Thanks for the reply...
    Now I am able to take an archive log backup, but the problem is I can back up only one archive file, not the multiple archive log files generated by autolog on the file system, i.e. /sapdb/MAX/saparch.
    I have enabled autolog and it is putting the autolog files in the Unix directory /sapdb/MAX/saparch.
    I am then using Data Protector 6.11 with a trans log backup to back up the archived files in /sapdb/MAX/saparch. When I start the trans backup session through Data Protector, it uses the archive stage command "archive_stage BACKDP-Archive LOGBackup NOVERIFY REMOVE". If /sapdb/MAX/saparch has only one archive file, it backs it up and removes the file successfully. But if /sapdb/MAX/saparch has multiple archive files, it gives an error as below:
      Preparing backup.
                Setting environment variable 'BI_CALLER' to value 'DBMSRV'.
                Setting environment variable 'BI_REQUEST' to value 'OLD'.
                Setting environment variable 'BI_BACKUP' to value 'ARCHIVE'.
                Constructed Backint for MaxDB call '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.
    bsi_in -c'.
                Created temporary file '/var/opt/omni/tmp/MAX.bsi_out' as output for Backint for MaxDB.
                Created temporary file '/var/opt/omni/tmp/MAX.bsi_err' as error output for Backint for MaxDB.
                Writing '/sapdb/data/wrk/MAX/dbm.ebf' to the input file.
                Writing '/sapdb/data/wrk/MAX/dbm.knl' to the input file.
            Prepare passed successfully.
            Starting Backint for MaxDB.
                Starting Backint for MaxDB process '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.
    bsi_in -c >>/var/opt/omni/tmp/MAX.bsi_out 2>>/var/opt/omni/tmp/MAX.bsi_err'.
                Process was started successfully.
            Backint for MaxDB has been started successfully.
            Waiting for the end of Backint for MaxDB.
                2010-09-06 03:15:21 The backup tool is running.
                2010-09-06 03:15:24 The backup tool process has finished work with return code 0.
            Ended the waiting.
            Checking output of Backint for MaxDB.
            Have found all BID's as expected.
        Have saved the Backup History files successfully.
        Cleaning up.
            Removing data transfer pipes.
                Removing data transfer pipe /var/opt/omni/tmp/MAX.BACKDP-Archive.1 ... Done.
            Removed data transfer pipes successfully.
            Copying output of Backint for MaxDB to this file.
    Begin of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
                #SAVED 1009067:1 /sapdb/data/wrk/MAX/dbm.ebf
                #SAVED 1009067:1 /sapdb/data/wrk/MAX/dbm.knl
    End of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
            Removed Backint for MaxDB's temporary output file '/var/opt/omni/tmp/MAX.bsi_out'.
            Copying error output of Backint for MaxDB to this file.
    Begin of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
    End of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
            Removed Backint for MaxDB's temporary error output file '/var/opt/omni/tmp/MAX.bsi_err'.
            Removed the Backint for MaxDB input file '/var/opt/omni/tmp/MAX.bsi_in'.
        Have finished clean up successfully.
    The backup of stage file '/export/sapdb/arch/MAX_LOG.040' was successful.
    2010-09-06 03:15:24
    Backing up stage file '/export/sapdb/arch/MAX_LOG.041'.
        Creating pipes for data transfer.
            Creating pipe '/var/opt/omni/tmp/MAX.BACKDP-Archive.1' ... Done.
        All data transfer pipes have been created.
        Preparing backup tool.
            Setting environment variable 'BI_CALLER' to value 'DBMSRV'.
            Setting environment variable 'BI_REQUEST' to value 'OLD'.
            Setting environment variable 'BI_BACKUP' to value 'ARCHIVE'.
            Constructed Backint for MaxDB call '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_
    in -c'.
            Created temporary file '/var/opt/omni/tmp/MAX.bsi_out' as output for Backint for MaxDB.
            Created temporary file '/var/opt/omni/tmp/MAX.bsi_err' as error output for Backint for MaxDB.
            Writing '/var/opt/omni/tmp/MAX.BACKDP-Archive.1 #PIPE' to the input file.
        Prepare passed successfully.
        Constructed pipe2file call 'pipe2file -d file2pipe -f /export/sapdb/arch/MAX_LOG.041 -p /var/opt/omni/tmp/MAX.BACKDP-Archive.1 -nowait'.
        Starting pipe2file for stage file '/export/sapdb/arch/MAX_LOG.041'.
            Starting pipe2file process 'pipe2file -d file2pipe -f /export/sapdb/arch/MAX_LOG.041 -p /var/opt/omni/tmp/MAX.BACKDP-Archive.1 -nowait >>/var/tmp/tem
    p1283767880-0 2>>/var/tmp/temp1283767880-1'.
            Process was started successfully.
        Pipe2file has been started successfully.
        Starting Backint for MaxDB.
            Starting Backint for MaxDB process '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_
    in -c >>/var/opt/omni/tmp/MAX.bsi_out 2>>/var/opt/omni/tmp/MAX.bsi_err'.
            Process was started successfully.
        Backint for MaxDB has been started successfully.
        Waiting for end of the backup operation.
            2010-09-06 03:15:25 The backup tool process has finished work with return code 2.
            2010-09-06 03:15:25 The backup tool is not running.
            2010-09-06 03:15:25 Pipe2file is running.
            2010-09-06 03:15:25 Pipe2file is running.
            2010-09-06 03:15:30 Pipe2file is running.
            2010-09-06 03:15:40 Pipe2file is running.
            2010-09-06 03:15:55 Pipe2file is running.
            2010-09-06 03:16:15 Pipe2file is running.
            Killing not reacting pipe2file process.
            Pipe2file killed successfully.
            2010-09-06 03:16:26 The pipe2file process has finished work with return code -1.
        The backup operation has ended.
        Filling reply buffer.
            Have encountered error -24920:
                The backup tool failed with 2 as sum of exit codes and pipe2file was killed.
            Constructed the following reply:
                ERR
                -24920,ERR_BACKUPOP: backup operation was unsuccessful
                The backup tool failed with 2 as sum of exit codes and pipe2file was killed.
        Reply buffer filled.
        Cleaning up.
            Removing data transfer pipes.
                Removing data transfer pipe /var/opt/omni/tmp/MAX.BACKDP-Archive.1 ... Done.
            Removed data transfer pipes successfully.
            Copying output of Backint for MaxDB to this file.
    Begin of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
    End of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
            Removed Backint for MaxDB's temporary output file '/var/opt/omni/tmp/MAX.bsi_out'.
            Copying error output of Backint for MaxDB to this file.
    Begin of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
    End of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
            Removed Backint for MaxDB's temporary error output file '/var/opt/omni/tmp/MAX.bsi_err'.
            Removed the Backint for MaxDB input file '/var/opt/omni/tmp/MAX.bsi_in'.
            Copying pipe2file output to this file.
    Begin of pipe2file output (/var/tmp/temp1283767880-0)----
    End of pipe2file output (/var/tmp/temp1283767880-0)----
            Removed pipe2file output '/var/tmp/temp1283767880-0'.
            Copying pipe2file error output to this file.
    Begin of pipe2file error output (/var/tmp/temp1283767880-1)----
    End of pipe2file error output (/var/tmp/temp1283767880-1)----
            Removed pipe2file error output '/var/tmp/temp1283767880-1'.
        Have finished clean up successfully.
    The backup of stage file '/export/sapdb/arch/MAX_LOG.041' was unsuccessful.
    2010-09-06 03:16:26
    Cleaning up.
        Have encountered error -24919:
            Can not remove file '/var/tmp/temp1283767880-0'.
            (System error 2; No such file or directory)
        Could not remove temporary output file of pipe2file ('/var/tmp/temp1283767880-0' ).
        Have encountered error -24919:
            Can not remove file '/var/tmp/temp1283767880-1'.
            (System error 2; No such file or directory)
        Could not remove temporary output file of pipe2file ('/var/tmp/temp1283767880-1' ).
    Have finished clean up successfully.
    Thanks,
    Subba
