EXPDP 10g - skip estimate

I have a big database (1.6 TB), and many rows in many tables were recently deleted.
The tablespaces are still very large, but I want to shrink them to free up some disk space.
So I want to export the whole database and import it with TABLE_EXISTS_ACTION=REPLACE (after that I can shrink the tablespaces).
When I start the export, the utility estimates the size of the dump before the export really begins.
I can switch the estimate mode to "STATISTICS" (as you can see in my example), but is there no way to skip it completely?
expdp system/<secret_pwd> DUMPFILE=expdp_<mydbsid>.dmp LOGFILE=expdp_<mydbsid>_shrinktest.log DIRECTORY=EXPDP_SHRINK_TEST_DIR FULL=Y ESTIMATE=STATISTICS
Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 01 December, 2010 15:15:05
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
Starting "SYSTEM"."SYS_EXPORT_FULL_13":  system/<secret_pwd> DUMPFILE=expdp_<mydbsid>.dmp LOGFILE=expdp_<mydbsid>_shrinktest.log DIRECTORY=EXPDP_SHRINK_TES
_DIR FULL=Y ESTIMATE=STATISTICS
Estimate in progress using STATISTICS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
...For this database the estimate phase consumes a lot of time (several hours), and on a production database time is very expensive.
So is there a way to skip the default estimate phase?
(I'm not interested in an explanation of the expdp parameter ESTIMATE_ONLY and so on... I also know that there are other ways to shrink the tablespaces - "alter table move ..." or "alter table shrink ..." - ...)
Anyway... if there is a way to deactivate the estimation, I would be interested in it.
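
For reference, a minimal sketch of the shrink alternative mentioned above (schema, table, and datafile names are placeholders; SHRINK SPACE assumes the segment lives in an ASSM tablespace):

ALTER TABLE my_schema.my_table ENABLE ROW MOVEMENT;
ALTER TABLE my_schema.my_table SHRINK SPACE;  -- compacts the segment and lowers the high-water mark
-- once the free space sits at the end of the datafile, the file can be resized down:
ALTER DATABASE DATAFILE '/u01/oradata/mydb/users01.dbf' RESIZE 10G;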

@N Gasparotto:
:-/ So if there is no way... I will try one of the other shrinking methods.
@sb92075:
I don't know if I understood your question... but I will shrink the tablespaces only once. For this database such a case (deleting so many rows from the tables) will not occur again in the future.
@sybrand_b:
Thumbs up... big words from a senior DBA.

Similar Messages

  • Expdp 10g to 11g

    Hi,
    we are planning to migrate the data from 10.2 to 11.1 with expdp.
    What should we check before starting, like character set etc.?
    1. In 10.2 we have 15 schemas, but only 5 schemas own tables; the others have only DML privileges. Do we need to take all the schemas or only the 5?
    2. In 10.2, some tablespaces have more than two datafiles. Do I have to create all the datafiles that belong to a tablespace? Can we create more or fewer datafiles? Can we use a different mount point?
    3. Any other checks, like dblinks, parameters, NLS_LANG in source and target?
    Edited by: user12017181 on Feb 21, 2012 4:21 AM

    user12017181 wrote:
    1. In 10.2 we have 15 schemas, but only 5 schemas own tables; the others have only DML privileges. Do we need to take all the schemas or only the 5?
    You mean those schemas belong to DB users? Then you'll have to recreate them and regrant permissions.
    2. In 10.2, some tablespaces have more than two datafiles. Do I have to create all the datafiles that belong to a tablespace? Can we create more or fewer datafiles? Can we use a different mount point?
    It is not necessary to have a similar file structure. Data Pump creates a logical dump and does not consider the file structure of the target DB.
    3. Any other checks, like dblinks, parameters, NLS_LANG in source and target?
    Those checks are up to the DBA, because that information belongs to the SYS/SYSTEM schemas, and you should not import it from 10g to 11g anyway.
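
    For reference, a schema-mode export of just the schemas that own tables might look like this (schema names, directory, and file names are placeholders):

    expdp system/<pwd> SCHEMAS=app1,app2,app3,app4,app5 DIRECTORY=dp_dir DUMPFILE=mig_10g.dmp LOGFILE=mig_10g.log

    The five table-owning schemas carry the data; users that hold only DML grants can be recreated on the target and regranted, as noted above.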

  • OBIEE 10g skipping a hierarchy

    Hi,
    I have the hierarchy set up as Program, Project, Directorate, Office, Division. For one report only, I have to exclude Directorate and use just Program, Project, Office, Division. Please help on how this can be done.
    Thanks for your time and help.

    You can create multiple hierarchies within the same BMM. In your other hierarchy, exclude Directorate and use that one in your single report.
    Regards,
    Bharath

  • Long estimate times when exporting with EXPDP

    Hello Everyone,
    I have a few questions about using EXPDP. I'm running Oracle Database 10g Enterprise Edition Release 10.2.0.1.0.
    I'm trying to understand whether or not the export times I'm seeing are reasonable. First, I am performing a SCHEMA export, excluding users, grants, roles, statistics, and several tables. When using block estimation, the following operations take 10 minutes to complete for a single schema:
    6/7/2010 12:39:39 PM: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    6/7/2010 12:49:12 PM: Total estimation using BLOCKS method: 512 KB
    6/7/2010 12:49:13 PM: Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
    6/7/2010 12:49:14 PM: Processing object type SCHEMA_EXPORT/TABLE/TABLE
    10 minutes seems completely unreasonable for 6 tables containing so little data. The 10-minute duration appears to be constant regardless of the data size. For example, another schema (with about 30 tables) produces the following output:
    6/7/2010 12:10:52 PM: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    6/7/2010 12:20:33 PM: Total estimation using BLOCKS method: 16.08 GB
    6/7/2010 12:20:35 PM: Processing object type SCHEMA_EXPORT/TABLE/TABLE
    6/7/2010 12:20:36 PM: Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Also about 10 minutes! I wouldn't expect these times to be so close for about 100000x more data! Why does the estimate take so much time? I would expect the block estimate to be a simple calculation over data the DB server already keeps around.
    When using estimate=statistics, the behavior is even stranger. In this case, the "Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA" operations all take ~17 minutes (regardless of the volume of data in the schema), and the actual export lines in the output (". . exported "test_schema"."table_a" 5.056 MB 5782 rows") complete very quickly (< 2 secs).
    Is this kind of behavior expected?
    To collect these times, I wrote a small application that kicks off expdp and reads its output one line at a time, along with a timestamp. If the tool is buffering its output, that might explain some of this, though for the block estimate I see the following output during the table exports, which looks reasonable:
    6/7/2010 12:20:47 PM: . . exported "test_schema"."table_a" 2.332 MB 915 rows
    6/7/2010 12:22:58 PM: . . exported "test_schema"."table_b" 954.8 MB 32348 rows
    6/7/2010 12:23:59 PM: . . exported "test_schema"."table_c" 975.6 MB 60573 rows
    6/7/2010 12:24:38 PM: . . exported "test_schema"."table_d" 553.8 MB 973 rows
    6/7/2010 12:24:56 PM: . . exported "test_schema"."table_e" 159.6 MB 2562 rows

    The command:
    expdp 'schema/********@server' estimate=blocks directory=DATA_PUMP_DIR dumpfile=8be7d007-e6c1-4e10-8164-db27d9fce103.dmp logfile=8be7d007-e6c1-4e10-8164-db27d9fce103.log SCHEMAS='schema' EXCLUDE=SCHEMA_EXPORT/USER,SCHEMA_EXPORT/SYSTEM_GRANT,SCHEMA_EXPORT/ROLE_GRANT,SCHEMA_EXPORT/DEFAULT_ROLE,SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA,SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS,SCHEMA_EXPORT/TABLE/COMMENT,SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    The new output:
    6/7/2010 4:22:19 PM: Estimate in progress using BLOCKS method...
    6/7/2010 4:22:19 PM: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    6/7/2010 4:29:48 PM: . estimated "schema"."table1" 11.89 GB
    6/7/2010 4:29:48 PM: . estimated "schema"."table2" 1.298 GB
    6/7/2010 4:29:48 PM: . estimated "schema"."table3" 1.041 GB
    Judging by the output, I'm guessing that expdp is buffering its output and my tool is reading it all at once, such that each "estimated" line has nearly the same timestamp. Is there a better way to get timing information from the estimation?
    Thanks for the help!
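
    One way to get progress information with timestamps that do not depend on client-side buffering is the STATUS parameter, which makes expdp report job status at a fixed interval (a sketch; credentials, directory, and file names are placeholders):

    expdp 'schema/<pwd>@server' SCHEMAS=schema DIRECTORY=DATA_PUMP_DIR DUMPFILE=timing_test.dmp STATUS=30

    With STATUS=30 the client prints the current object, worker state, and bytes processed every 30 seconds, so phase boundaries can be timed without parsing the buffered log output.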

  • Urgent: need weekly, monthly report function in 6i for 10g

    Hi, we are trying to move some reports built in 6i to 10g and need some help urgently.
    The y-axis is values and the x-axis is dates; if there is a value between two dates, a line connects them. If there is no value, it skips that date and tries the next one. I have two problems creating the same graph in 10g.
    1. The report built in 6i has a function where you can specify a graphic x-axis for a time period, i.e., from date1 to date2, and then choose weekly, monthly, or quarterly... dates are put into those periods automatically by the report builder. Do we still have this kind of function in 10g?
    2. How can the graph builder in 10g skip a date and jump to the next date if there is no value for it? Now, if there is no value for a date, it is treated as 0, so the line goes back to 0 instead of moving forward to the next date.
    Thanks!

    Validate the parameter for null and then go for something like
    Col LIKE (CASE WHEN Parameter='' THEN '%%' ELSE Parameter END)

  • Need help on improving expdp speed

    I just tested an export of one table of 3.5 GB; it took almost an hour and a half.
    See logs here:
    Export: Release 11.1.0.7.0 - 64bit Production on Saturday, 28 April, 2012 22:27:53
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** parfile=exp_t454.par
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 22.59 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "ADMIN"."T454" 3.833 GB 3340156 rows
    Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
    /u01/export/admin_migration/exp_admin_t454_01.dmp
    /u01/export/admin_migration/exp_admin_t454_02.dmp
    Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 23:55:15
    my par file looks like this:
    tables=admin.t454 DIRECTORY=data_pump_dir dumpfile=exp_admin_t454_%U.dmp logfile=exp_admin_t454.log parallel=3 filesize=5000m compression=all
    In the middle of the expdp run, I attached to the job and checked its status:
    admin1 $ expdp system attach=SYS_EXPORT_TABLE_01
    Export: Release 11.1.0.7.0 - 64bit Production on Saturday, 28 April, 2012 22:49:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Password:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options
    Job: SYS_EXPORT_TABLE_01
    Owner: SYSTEM
    Operation: EXPORT
    Creator Privs: TRUE
    GUID: BEC5BBC2966860B0E0430AEC944B60B0
    Start Time: Saturday, 28 April, 2012 22:28:07
    Mode: TABLE
    Instance: admin1
    Max Parallelism: 3
    EXPORT Job Parameters:
    Parameter Name Parameter Value:
    CLIENT_COMMAND system/******** parfile=exp_t454.par
    COMPRESSION ALL
    State: EXECUTING
    Bytes Processed: 0
    Current Parallelism: 3
    Job Error Count: 0
    Dump File: /u01/export/admin_migration/exp_admin_t454_%u.dmp
    size: 5,242,880,000
    Dump File: /u01/export/admin_migration/exp_admin_t454_01.dmp
    size: 5,242,880,000
    bytes written: 4,096
    Dump File: /u01/export/admin_migration/exp_admin_t454_02.dmp
    size: 5,242,880,000
    bytes written: 28,672
    Worker 1 Status:
    Process Name: DW01
    State: WORK WAITING
    Worker 2 Status:
    Process Name: DW02
    State: EXECUTING
    Object Schema: admin
    Object Name: T454
    Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
    Completed Objects: 1
    Total Objects: 1
    Completed Rows: 1,695,732
    Worker Parallelism: 1
    Export>
    The database version is 11.1.0.7, and the OS is AIX.
    I wonder what I can do to speed up expdp. I soon have to migrate a 1 TB database with expdp.
    Thanks in advance.

    Is the table partitioned? Have you tried traditional export to see how long it takes?
    Please see these MOS documents for possible causes:
    Checklist for Slow Performance of Export Data Pump (expdp) and Import DataPump (impdp) [ID 453895.1]
    Bug 12780993 - Poor Datapump EXPDP performance for ESTIMATE phase [ID 12780993.8]     
    Data Pump Export (EXPDP) Runs Very Slow After Upgrade From 11.1.0.6 to 11.1.0.7 [ID 1075468.1]     
    Oracle DataPump Export (EXPDP) Is Slow On Partitioned Tables [ID 1300895.1]     
    Expdp Slow for a Small Table [ID 950995.1]     
    Slow Performance of DataPump Export during Estimate Phase [ID 1354535.1]     
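
    As a quick experiment (a sketch, not a guaranteed fix), two parfile changes often shorten jobs like this one: excluding statistics, and dropping COMPRESSION=ALL, which trades CPU for dump size and can dominate the elapsed time:

    tables=admin.t454
    directory=data_pump_dir
    dumpfile=exp_admin_t454_%U.dmp
    logfile=exp_admin_t454.log
    parallel=3
    filesize=5000m
    exclude=statistics

    Statistics can be regathered on the target after import, which avoids unloading and reloading them here.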
    HTH
    Srini

  • EXPDP Database Link on 9i!

    Hi all,
    I want to do an expdp (10g) from a 10g database using a database link to an Oracle 9i database.
    Has anyone done this before? Does it work?
    Tks,
    Paulo Portugal.


  • How to move or copy a database to new server

    Greetings All,
    Oracle Enterprise 11g r2, on a Windows2008 platform.
    I would appreciate some advice regarding moving/copying a database to a new server. Some of the information below may not be pertinent to my goal. Please be patient as I am a newbie.
    I have installed oracle and created a database (prod03) on the new/target server. I created the same tablespaces (and datafiles/names) as are on the existing/source server (prod01), except that on the new/target server (prod03) there is 1 more data file for the USERS tablespace than there is on the existing/source server (prod01).
    My initial thought was to perform an expdp full=y.
    The database contains 220 schemas; when I performed an expdp full=y ESTIMATE_ONLY run, it indicated 220 GB. I think exporting and then importing would take much more time than what I hope to be able to do below.
    I would like to be able to copy the datafiles from the source server (prod01) over to the target server (prod03); some names/locations will change.
    One scenario I found (http://www.dba-oracle.com/oracle_tips_db_copy.htm) was to back up the control file to trace on the old/source server (prod01), copy everything to the new/target server, and tweak the file that creates the new control file.
    Step 4 of the above mentioned link says to change
    CREATE CONTROLFILE REUSE DATABASE "PROD01" NORESETLOGS to
    CREATE CONTROLFILE SET DATABASE "PROD03" RESETLOGS
    Notice the change from REUSE to SET. I am not sure if this is right for my situation.
    Could I issue a backup control file to trace on the target server (prod03), add the reference to the additional datafile, copy over all of the datafiles for all of the tablespaces (users, system, sysaux, undotbs1, temp),
    delete the existing control file, and generate the new control file?
    Then perhaps issue a startup resetlogs or startup recover?
    Thanks for your time,
    Bob
    Edited by: Snyds on May 17, 2012 12:26 PM

    So unless someone provides me with an rman script I can't use rman.
    Google is your friend.
    Simply telling someone to get the experience does not help. So your post is useless to me.
    I suppose you do not have experience with "old-school" manual cloning either.
    Import of an entire 200GB DB with Data Pump or imp will also require some experience; otherwise it will be a long, long exercise.
    So, basically, any advice may be useless to you, because of your "the fact is I don't have the experience. Nor do I have the time to obtain the experience."
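
    For reference, a minimal RMAN duplication sketch (assumptions: an auxiliary instance prod03 started NOMOUNT, Oracle Net connectivity to both instances, placeholder passwords; DB_FILE_NAME_CONVERT would handle the changed paths):

    rman target sys/<pwd>@prod01 auxiliary sys/<pwd>@prod03
    RMAN> DUPLICATE TARGET DATABASE TO prod03 FROM ACTIVE DATABASE NOFILENAMECHECK;

    Active database duplication copies the datafiles over the network and takes care of the control file and resetlogs, which avoids the hand-editing of CREATE CONTROLFILE described above.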

  • Popup document through PL/SQL Procedure

    Hi, I am wondering how we can pop up a document sitting on my server through a PL/SQL procedure. I do not want to read the content of that document, which we can do through UTL_FILE, but to open the document itself. I have the URL (location) of the document, but just don't know how I can open the document via that URL in PL/SQL.
    So, help me out. Thanks.

    Reading a web page using PL/SQL is really easy.
    If you are running 10g, skip this ACL prerequisite and jump to the code below.
    If you are running 11g, there are a couple of housekeeping prerequisites. You need to create the network ACLs in the database to allow access to the pages you are interested in. Like this:
    BEGIN
          -- create the ACL
          DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
             acl          => 'oreillynet-permissions.xml'
            ,description  => 'Network permissions for www.oreillynet.com'
            ,principal    => 'SYSTEM'
            ,is_grant     => TRUE
            ,privilege    => 'connect'
            ,start_date   => SYSTIMESTAMP
            ,end_date     => NULL
          );
          -- assign privileges to the ACL
          DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE (
              acl        => 'oreillynet-permissions.xml'
             ,principal  => 'SYSTEM'
             ,is_grant   => TRUE
             ,privilege  => 'connect'
             ,start_date => SYSTIMESTAMP
             ,end_date   => NULL
          );
          -- define the allowable destinations
          DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
              acl        => 'oreillynet-permissions.xml'
             ,host       => 'www.oreillynet.com'   -- must match the host fetched below
             ,lower_port => 80
             ,upper_port => 80
          );
          COMMIT;
    END;
    /
    Here is the code to read the web page:
    DECLARE
       WebPageURL HttpUriType;
       WebPage CLOB;
    BEGIN
       --Create an instance of the type pointing
       --to Arup Nanda's Author Bio page at OReilly
       WebPageURL := HttpUriType.createUri('http://www.oreillynet.com/pub/au/2307');
       --Retrieve the web page via HTTP
       WebPage := WebPageURL.getclob();
       --Display the page title
       DBMS_OUTPUT.PUT_LINE(regexp_substr(WebPage,'<title>.*</title>'));
    END;
    /

  • 100% Complete Schema Duplication

    100% Complete Schema Duplication
    10gR2
    Need to duplicate a schema 100%, including packages, types, and all objects generated by XDB etc.
    Does expdp/impdp do that?
    (Or what does expdp/impdp skip, if anything?)
    thanks

    A schema-level export should do it. It gets all of the objects owned by that user/schema.
    Thanks
    chandra
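
    For reference, a minimal sketch of duplicating a schema into a new schema (names, directory, and file names are placeholders):

    expdp system/<pwd> SCHEMAS=src_user DIRECTORY=dp_dir DUMPFILE=src_user.dmp LOGFILE=src_user_exp.log
    impdp system/<pwd> REMAP_SCHEMA=src_user:dup_user DIRECTORY=dp_dir DUMPFILE=src_user.dmp LOGFILE=src_user_imp.log

    One caveat worth verifying against your own objects: items that merely reference the schema but are owned elsewhere, such as public synonyms, are not part of a schema-mode export.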

  • Expdp impdp fails from 10g to 11g db version

    Hello folks,
    Export DB Version : 10.2.0.4
    Import DB Version : 11.2.0.1
    Export Log File
    Export: Release 10.2.0.4.0 - Production on Wednesday, 03 November, 2010 2:19:20
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, Data Mining and Real Application Testing options
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 45 GB
    . . exported "DYM"."CYCLE_COUNT_MASTER" 39.14 GB 309618922 rows
    Master table "DYM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    Dump file set for DYM.SYS_EXPORT_SCHEMA_01 is:
    Job "DYM"."SYS_EXPORT_SCHEMA_01" successfully completed at 02:56:49
    Import Log File
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    ORA-31693: Table data object "DYM_PRJ4"."CYCLE_COUNT_MASTER" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 10:54:38
    Is impdp into 11g from a 10g expdp dump not allowed? Any thoughts appreciated.

    Nope, I do not see any error file.
    Current log# 2 seq# 908 mem# 0:
    Thu Nov 04 11:58:20 2010
    DM00 started with pid=530, OS id=1659, job SYSTEM.SYS_IMPORT_FULL_02
    Thu Nov 04 11:58:20 2010
    DW00 started with pid=531, OS id=1661, wid=1, job SYSTEM.SYS_IMPORT_FULL_02
    Thu Nov 04 11:58:55 2010
    DM00 started with pid=513, OS id=1700, job SYSTEM.SYS_IMPORT_FULL_02
    Thu Nov 04 11:58:55 2010
    DW00 started with pid=520, OS id=1713, wid=1, job SYSTEM.SYS_IMPORT_FULL_02
    Thu Nov 04 12:00:54 2010
    Thread 1 cannot allocate new log, sequence 909
    Private strand flush not complete
    Current log# 2 seq# 908 mem# 0: ####################redo02.log
    Thread 1 advanced to log sequence 909 (LGWR switch)
    Current log# 3 seq# 909 mem# 0: ###################redo03.log
    Thu Nov 04 12:01:51 2010
    Thread 1 cannot allocate new log, sequence 910
    Checkpoint not complete
    Current log# 3 seq# 909 mem# 0:###################redo03.log
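
    ORA-29913 from ODCIEXTTABLEOPEN during impdp is often a problem with the directory object rather than with the dump or the version difference; a hedged first check is that the DIRECTORY used for the import points to a path the 11g database's OS user can read:

    SELECT directory_name, directory_path FROM dba_directories;

    Importing a 10g expdp dump into 11g is supported in general, so the 10.2 to 11.2 step by itself should not cause this error.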

  • Expdp without estimate

    Hi,
    I'm trying to do an expdp of the full database.
    The problem is that the whole expdp takes 2 hours, and of those 2 hours the estimate part takes 40 minutes, which sounds unreasonable.
    I'm trying to find a way to cut the estimate time. I tried to do the estimate with STATISTICS instead of BLOCKS, but it didn't help.
    Is there another possible solution for that? I'm willing to skip the estimate part completely, but I didn't find a way to do that.
    The DB version I use is 11.2.0.2 on Red Hat Linux.
    I will appreciate any help,
    Arnon
    Edited by: arnon82 on Jan 31, 2013 10:22 PM

    Hi,
    The estimate phase is a critical part of the export job. It is not only estimating the amount of data that is being exported. It is actually gathering data that is describing what we call TABLE_DATA objects. These are objects where data is stored. Let me explain:
    for a table, it is the table itself
    for a partitioned table, it is each partition
    for a subpartitioned table, it is each subpartition
    So, Data Pump starts by collecting this information. It then stores all of it in the master table so the job knows what data needs to be exported. This "phase" needs to be done sooner or later; we do it at the beginning of the job in case you specify PARALLEL. Once this data is collected, parallel worker processes can start to unload data. If it were done later, the job could not begin parallel operation until it was collected. If the parallelism of the job is 1, then it could be done later, but it still needs to be done.
    Now, having said all of that, there have been some performance fixes in this area from release to release, so you could check with Oracle Support to see if there is a patch available for your version.
    Hope this explains things a bit and helps.... a little :^)
    Dean
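
    To get a feel for how many TABLE_DATA objects the estimate phase has to enumerate, you can count partitions per table in the schemas being exported (one TABLE_DATA object per non-partitioned table, partition, or subpartition; the owner below is a placeholder):

    SELECT table_name, COUNT(*) AS partition_count
      FROM dba_tab_partitions
     WHERE table_owner = 'APP_OWNER'
     GROUP BY table_name
     ORDER BY partition_count DESC;

    Databases with many thousands of partitions spend correspondingly longer in this enumeration, which is one reason the estimate phase can run long even when the data volume is modest.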

  • How to schedule a expdp job on dbconsole 10g

    Hi all,
    is there any way to create a scheduled expdp job that executes automatically every day in Enterprise Manager 10g (dbconsole)?
    Thanks
    Wander (Brazil)

    Hi,
    In dbconsole I can schedule a job only to execute immediately or at a later time, just once, but I want it to run every day at a specific time.
    Wander (Brazil)
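
    If dbconsole's one-time scheduling is the only blocker, a workaround is to schedule the export from inside the database with DBMS_SCHEDULER (a sketch; the shell script path is a placeholder for a wrapper that calls expdp with your parameters):

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'DAILY_EXPDP',
        job_type        => 'EXECUTABLE',
        job_action      => '/home/oracle/scripts/daily_expdp.sh',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=2; BYMINUTE=0',
        enabled         => TRUE);
    END;
    /

    The job then runs at 02:00 every day regardless of dbconsole, and its run history can be checked from the Scheduler pages in the console.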

  • Error in Estimate of schema for EXPDP

    Hi all,
    OS: Windows
    DB: 11.2 Express Edition
    I needed to estimate the size of the dump file before exporting schema A, so I used the following command:
    expdp schema_A/schema_a estimate_only=y nologfile=y
    but I faced the error below:
    Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
    ORA-31626: job does not exist
    ORA-31633: unable to create master table "schema_A.SYS_EXPORT_SCHEMA_05"
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPV$FT", line 1020
    ORA-01031: insufficient privileges
    Kindly help.
    Regards,
    Sphinx

    Also refer to this:
    http://arjudba.blogspot.in/2008/09/expdp-fails-with-ora-31626-ora-31633.html
    Privileges needed for Import and export
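
    For context, ORA-31633 together with ORA-01031 usually means the exporting user cannot create the Data Pump master table in its own schema. A hedged sketch of the usual remedy, run as a DBA (the tablespace name is a placeholder):

    GRANT CREATE TABLE TO schema_a;
    ALTER USER schema_a QUOTA UNLIMITED ON users;

    Data Pump creates a master table (here SYS_EXPORT_SCHEMA_05) for every job, even for ESTIMATE_ONLY runs, so the CREATE TABLE privilege and tablespace quota are required.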

  • Oracle 10g expdp fatal error (ORA-39125 & ORA-01801)

    Key phrases: ORA-39125 ... DBMS_METADATA.FETCH_XML_CLOB [TABLE_STATISTICS] ... date format is too long
    Hello,
    While performing a routine export (expdp) of schemas in an Oracle 10.2 instance on Linux, a fatal error (ORA-39125) was generated. A screen capture, including the error messages, follows.
    Export: Release 10.2.0.1.0 - 64bit Production on Wednesday, 20 May, 2009 15:34:44
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-39125: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA while calling DBMS_METADATA.FETCH_XML_CLOB [TABLE_STATISTICS]
    ORA-01801: date format is too long for internal buffer
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
    ORA-06512: at "SYS.KUPW$WORKER", line 6241
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    0x807d2378 14916 package body SYS.KUPW$WORKER
    0x807d2378 6300 package body SYS.KUPW$WORKER
    0x807d2378 2340 package body SYS.KUPW$WORKER
    0x807d2378 6861 package body SYS.KUPW$WORKER
    0x807d2378 1262 package body SYS.KUPW$WORKER
    0x801d2490 2 anonymous block
    Job "SYSTEM"."SYS_EXPORT_SCHEMA_13" stopped due to fatal error at 15:34:54
    Has anyone run into this problem?
    Is it related to an Oracle bug?
    Are there fixes/workarounds?
    Thanks in advance,
    Alan

    Hi,
    There have been fixes in this area. I don't know if this exact error has been fixed, but getting the latest patchset would be a good place to start.
    Thanks
    Dean
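
    As a possible interim workaround (an assumption based on the failing TABLE_STATISTICS step, not a confirmed fix for this bug), excluding statistics from the export skips the call that raises ORA-01801; statistics can then be regathered with DBMS_STATS after import:

    expdp system/<pwd> SCHEMAS=<schema> DIRECTORY=<dir> DUMPFILE=<file>.dmp EXCLUDE=STATISTICS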
