New disk for Data Pump

Hi to all,
OS: Windows Server 2003 EE 64bit, Database: Oracle Database 10g
I do not know whether this question belongs here, but …
I have a database server with four SCSI hard drives. The last drive I added (300 GB) shows up in Disk Management as Layout: Simple, Type: Dynamic, while all of my other drives are set up as Layout: Partition, Type: Basic. I want to use the new drive as the location for Data Pump dump files.
I don't plan to mirror this new drive with another disk or put it in any RAID configuration, as is the case with the other existing drives.
Should I leave it as Dynamic or convert it to Basic?
I am curious what the differences are, and whether I will have any issues if I continue to use it as Dynamic.
Thanks.

Similar Messages

  • ORA-39080: failed to create queues "" and "" for Data Pump job

    When I run a Data Pump export (expdp) I receive the following error:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    ORA-31626: job does not exist
    ORA-31637: cannot create job SYS_EXPORT_SCHEMA_01 for user CHESHIRE_POLICE_LOCAL
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPV$FT_INT", line 600
    ORA-39080: failed to create queues "" and "" for Data Pump job
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPC$QUE_INT", line 1555
    ORA-01403: no data found
    SYS currently has the following two objects invalid after running catproc.sql, utlrp.sql, and manual compilation:
    OBJECT_NAME OBJECT_TYPE
    AQ$_KUPC$DATAPUMP_QUETAB_E QUEUE
    SCHEDULER$_JOBQ QUEUE
    When I run catdpb.sql, the Data Pump queue table is not created:
    BEGIN
      dbms_aqadm.create_queue_table(
        queue_table        => 'SYS.KUPC$DATAPUMP_QUETAB',
        multiple_consumers => TRUE,
        queue_payload_type => 'SYS.KUPC$_MESSAGE',
        comment            => 'DataPump Queue Table',
        compatible         => '8.1.3');
    EXCEPTION
      WHEN OTHERS THEN
        IF SQLCODE = -24001 THEN  -- ORA-24001: queue table already exists
          NULL;
        ELSE
          RAISE;
        END IF;
    END;
    /
    ERROR at line 1:
    ORA-01403: no data found
    ORA-06512: at line 7

    Snehashish Ghosh wrote:
    > When I run catdpb.sql, the Data Pump queue table is not created:
    > dbms_aqadm.create_queue_table(... compatible => '8.1.3');
    Does it work better when specifying an Oracle version that is from this century, newer than 8.1?
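    As a quick sanity check after re-running the catalog scripts, a dictionary query along these lines shows whether the Data Pump queue infrastructure exists and is valid (a sketch; run as a DBA):
    -- List the Data Pump queue table and related AQ objects with their status
    SELECT owner, object_name, object_type, status
    FROM dba_objects
    WHERE object_name LIKE 'KUPC$%'
    ORDER BY object_name;
    Anything missing or INVALID here points back at the failed queue-table creation above.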

  • Need new disk for 2011 Mac Mini i5 with Radeon 6630M

    I need a new disk for 2011 Mac Mini i5 with Radeon 6630M
    Will this fit?
    Western Digital 2.5 inch Scorpio Black 750GB 7200rpm SATA 16MB Internal Hard Drive WD7500BPKT
    Will I need special screwdrivers or tools?

    Wow, I worked on my MacMini after watching the video.
    But BEWARE!
    That OWC video misses a crucial step! (I emailed them)
    My MacMini had a bit of "invisible" black tape holding the disk connector to the disk, very hard to see. When I pulled off the connector, it came apart!
    That bit of tape must be something Apple or the manufacturer added, but it really messed me up. I had to send my MacMini off to be repaired, at a cost of about $200 including parts and labour.

  • Best Practice for data pump or import process?

    We are trying to copy an existing schema to a newly created schema. We used Data Pump export to successfully export the schema.
    However, we encountered some errors when importing the dump file into the new schema, even after remapping the schema and tablespaces, etc.
    Most errors occur in PL/SQL... For example, we have views like the one below in the original schema:
    CREATE VIEW oldschema.myview AS
    SELECT col1, col2, col3
    FROM oldschema.mytable
    WHERE col1 = 10
    Quite a few functions, procedures, packages, and triggers contain "oldschema.mytable" in DML (insert, select, update) statements, for example.
    Getting the following errors in import log:
    ORA-39082: Object type ALTER_FUNCTION:"TEST"."MYFUNCTION" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TEST"."MYPROCEDURE" created with compilation warnings
    ORA-39082: Object type VIEW:"TEST"."MYVIEW" created with compilation warnings
    ORA-39082: Object type PACKAGE_BODY:"TEST"."MYPACKAGE" created with compilation warnings
    ORA-39082: Object type TRIGGER:"TEST"."MYTRIGGER" created with compilation warnings
    A lot of the actual errors/invalid objects in the new schema are due to:
    ORA-00942: table or view does not exist
    My questions are:
    1. What can we do to fix those errors?
    2. Is there a better way to do the import under these conditions?
    3. Should we update the PL/SQL and recompile in the new schema? Or update in the original schema first and then export?
    Your help will be greatly appreciated!
    Thank you!

    I routinely get many (MANY) errors as follows, and they always compile when I recompile using utlrp.sql.
    ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."RPTSF_WR_LASTOUTPUNCH" created with compilation warnings
    ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."RPTSF_WR_REFPERIODENDFOREMP" created with compilation warnings
    ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."RPTSF_WR_TAILOFFSECS" created with compilation warnings
    ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."FN_GDAPREPORTGATHERER" created with compilation warnings
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ABSENT_EXCEPTION" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACCRUAL_BAL_PROJ" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACCRUAL_DETAILS" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACCRUAL_SUMMARY" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACTUAL_SCHEDULE" created with compilation warnings
    It works in all my databases: PeopleSoft, Kronos, and others.
    I should qualify that it may still be necessary to debug specific problems, but the most common ones are easily resolved by running utlrp.sql. The usual problems I run into are caused by database links that point to another database, such as a production database that we firewall our test and development databases from linking to (for obvious reasons).
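    For reference, the standard recompile pass after such an import looks something like this (run as a DBA; the schema name is a placeholder). utlrp.sql recompiles every invalid object in the database, while DBMS_UTILITY.COMPILE_SCHEMA limits the pass to one schema:
    -- Recompile all invalid objects in the database
    SQL> @?/rdbms/admin/utlrp.sql
    -- Or recompile only the invalid objects in the imported schema
    SQL> EXEC DBMS_UTILITY.COMPILE_SCHEMA(schema => 'TEST', compile_all => FALSE);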

  • Help needed for Data Pump.

    Hi,
    I am using Oracle 11g Release 2 and I am not able to find the Data Pump utility in SQL Developer.
    Please let me know if I need to install it. I am new to this utility.
    Thanks.

    I do not believe SQL Developer includes the Data Pump utility.
    As the name implies, SQL Developer is intended as a tool to facilitate development (typically of SQL code) and to provide additional features that assist developers in their daily routines.
    Data Pump is more of a DBA tool, used for extracting logical copies of data from an existing database and/or porting them to another database. To use the utility you would typically need to be in the dba group (although that is not strictly necessary if things are configured properly), but you do at least need to be on the same server where the Data Pump utility software resides.
    There are many options and features in Data Pump, including NETWORK_LINK, which allows extraction of data from a remote database into another database on a different server without ever creating a file on the operating system.
    If I recall correctly, in older versions of the Oracle client (e.g., 8.x) the client provided the export/import utility, but those days are long gone now. With Data Pump, it is primarily a DBA tool.
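    As an illustration of that NETWORK_LINK option, a direct server-to-server copy can run with no dump file at all. A minimal sketch, assuming a pre-created database link SRC_DB and placeholder credentials:
    impdp system/password schemas=HR network_link=SRC_DB remap_schema=HR:HR_COPY
    The rows are pulled straight across the link into the target database, so nothing is ever written to disk on either server (apart from the log file).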

  • Syntax for Data Pump in Oracle Database 11g

    Hi, thanks to all. I have cleared up many doubts myself by referring to the posts in this forum.
    I need the syntax for Data Pump in Oracle 11g.
    Could you post the syntax please?
    Thanks & Regards..

    Hello,
    You can see all the Data Pump options by typing the statements below:
    expdp help=y
    impdp help=y
    Otherwise, you'll find much more detail and examples in the Utilities book of the Oracle 11g documentation at the link below:
    http://www.comp.dit.ie/btierney/Oracle11gDoc/server.111/b28319/part_dp.htm#i436481
    Hope this helps.
    Best regards,
    Jean-Valentin
    Edited by: Lubiez Jean-Valentin on Apr 19, 2010 11:07 PM
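    For a concrete starting point, a minimal schema-level export and matching import might look like the following (a sketch; the schema, dump file, and log file names are placeholders, and DATA_PUMP_DIR is the default directory object):
    expdp scott/tiger schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log
    impdp scott/tiger schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_imp.log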

  • New disk for TM backups

    I am replacing my current 500 GB disk used for TM backups with a 1 TB disk.
    Can I just copy the 500 GB disk onto the 1 TB disk before I configure TM to use the 1 TB disk, so that I don't lose my previous backups and can just continue with incremental backups?

    You can copy the old backups to the new drive, but don't do it using the Finder. Clone the old TM drive to the new one using either SuperDuper or the Restore tab in Disk Utility.

  • Check for directory for Data Pump Export

    Hi
    In order to perform a Data Pump export, a directory object must first be created as a prerequisite, e.g.:
    SQL> create directory expdp_dir as '/u01/backup/exports';
    I need to know if there is any way to check whether the folder already exists.
    Please advise.
    Regards,
    Dheeraj

    Dheeraj Kumar M wrote:
    > I need to know if there is any way to check whether the folder already exists.
    Do you mean check whether the OS folder exists and is accessible to the oracle user? Try creating a file in that directory using UTL_FILE. If it succeeds, you are OK. Another option is writing a Java stored procedure to check the OS folder's existence and permissions.
    SY.
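    A minimal sketch of that UTL_FILE probe (the directory object name comes from the example above; the scratch file name is a placeholder; run with SET SERVEROUTPUT ON):
    DECLARE
      fh UTL_FILE.FILE_TYPE;
    BEGIN
      -- Try to create a scratch file in the directory; this fails if the OS
      -- folder is missing or not writable by the oracle user
      fh := UTL_FILE.FOPEN('EXPDP_DIR', 'dir_check.tmp', 'w');
      UTL_FILE.FCLOSE(fh);
      UTL_FILE.FREMOVE('EXPDP_DIR', 'dir_check.tmp');
      DBMS_OUTPUT.PUT_LINE('Directory is present and writable');
    EXCEPTION
      WHEN UTL_FILE.INVALID_PATH OR UTL_FILE.INVALID_OPERATION THEN
        DBMS_OUTPUT.PUT_LINE('Directory is missing or not accessible');
    END;
    /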

  • Use BD disk for data storage?

    A Blu-ray disc can hold 25 GB of data, as opposed to a DVD's 4.7 GB.
    Is there any way I can store standard original AVI footage from an old DV camera (via FireWire) on a BD disc and use it later for
    archiving or editing some time in the future?
    That way I could store a whole DV tape on a BD disc.

    Yes. Use a program like the great, free ImgBurn to make BD data discs. Just copy your files/folders to the blank disc.
    You might want to see this ARTICLE, however. Just something, possibly potential, to think about.
    Good luck,
    Hunt

  • Need new description for Data Element based on Company Code.

    Hi,
    Requirement: The data element AUFNR which has the description "Order Number" needs to have description as "Engagement Number" for a specific company code. All other company codes to retain the description "Order Number".
    Example: There are three company codes: AAAA, BBBB and CCCC. The company code CCCC wants the description of data element: AUFNR as "Engagement Number" while AAAA and BBBB wants to retain the standard description: "Order Number".
    Pre-Work: Before posting this question, I searched but did not find a solution for this.
    In SAP I tried to create an Implicit Enhancement at:
    Program: RADBTDDF
    Sub Routine: SE_DD04T_ROLLNAME_ROLLNAME
    However, as the program is an SAP Basis program, I was not successful.
    Any hints are appreciated and solution is rewarded.
    Thanks for your time.
    With Regards,
    Goutham.

    Dear Matthew,
    Thank you for the reply.
    I am giving some more details into the requirement.
    If a user assigned to company code CCCC (perhaps not directly; the relationship will be derived) has logged into the SAP system and opens a transaction related to Internal Orders, like KO01, KO02, KO03, etc., he should see the description of the field "Order Type" as "Engagement Type".
    Similarly, on the next screen the field "Order" needs to be displayed as "Engagement Number".
    However, for users who belong to company codes AAAA and BBBB, the SAP standard descriptions will be shown.
    Please let me know if this is possible; any inputs in that direction are highly appreciated.
    Let me know if further details are required.
    Best Regards,
    Goutham.
    P.S.: As far as I know, Anees Jawad's proposal would work if it were required to change the description at the system level.

  • Create New Operations for data collection(View Object)

    An ADF data collection (View Object) supports Create, CreateInsert, Delete, etc. operations.
    I would like to add more operations, like Clear. How do I create my own operation and add it to the data control for all the view objects in the data control?
    Thanks
    JP

    hi esjp2000
    I'm not sure what kind of "clear operation" you are looking for, but I recently asked about a "clear search" in this forum thread:
    ADF BC : "clear search" using executeEmptyRowSet()
    It is about the executeEmptyRowSet() method, and that might not be what you are looking for. (It might not even be what I am looking for, because it is undocumented; that is what my question is about, and I haven't had an answer yet.)
    But ... it does provide an example of a "service method" Shay is referring to, in the example application:
    http://verveja.footsteps.be/~verveja/files/oracle/ClearSearchStuff-v0.01.zip
    (So, the specific service method implementation, "ScottServiceImpl.executeEmptyRowSetOnEmpWithParamsVOVI()" in the example, would be different in your case, depending on the specific "clear operation" you are looking for.)
    success
    Jan Vervecken

  • Data Pump issue for Oracle 10g on Windows 2003

    Hi Experts,
    I am trying to run Data Pump on Oracle 10g on a Windows 2003 server.
    I got this error:
    D:\>cd D:\oracle\product\10.2.0\SALE\BIN
    D:\oracle\product\10.2.0\SALE\BIN>expdp system/xxxxl@sale full=Y directory=dumpdir dumpfile=expdp_sale_20090302.dmp logfile=exp_sale_20090302.log
    Export: Release 10.2.0.4.0 - Production on Tuesday, 03 March, 2009 8:05:50
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-31626: job does not exist
    ORA-31650: timeout waiting for master process response
    However, I can run the old exp utility and it works well.
    What is wrong with my Data Pump?
    Thanks
    JIM

    Hi Anand,
    I did not see any error log at that time; actually, it did not work any more. I will test it again, based on your email, after the exp is done.
    Based on new testing, I got the errors below:
    ORA-39014: One or more workers have prematurely exited.
    ORA-39029: worker 1 with process name "DW01" prematurely terminated
    ORA-31671: Worker process DW01 had an unhandled exception.
    ORA-04030: out of process memory when trying to allocate 4108 bytes (PLS non-lib hp,pdzgM60_Make)
    ORA-06512: at "SYS.KUPC$QUEUE_INT", line 277
    ORA-06512: at "SYS.KUPW$WORKER", line 1366
    ORA-04030: out of process memory when trying to allocate 65036 bytes (callheap,KQL tmpbuf)
    ORA-06508: PL/SQL: could not find program unit being called: "SYS.KUPC$_WORKERERROR"
    ORA-06512: at "SYS.KUPW$WORKER", line 13360
    ORA-06512: at "SYS.KUPW$WORKER", line 15039
    ORA-06512: at "SYS.KUPW$WORKER", line 6372
    ORA-39125: Worker unexpected fatal error in KUPW$WORKER.DISPATCH_WORK_ITEMS while calling DBMS_METADATA.FETCH_XML_CLOB [PROCOBJ:"SALE"."SQLSCRIPT_2478179"]
    ORA-06512: at "SYS.KUPW$WORKER", line 7078
    ORA-04030: out of process memory when trying to allocate 4108 bytes (PLS non-lib hp,pdzgM60_Make)
    ORA-06500: PL/SQL: storage error
    ORA-04030: out of process memory when trying to allocate 16396 bytes (koh-kghu sessi,pmucpcon: tds)
    ORA-04030: out of process memory when trying to allocate 16396 bytes (koh-kghu sessi,pmucalm coll)
    Job "SYSTEM"."SYS_EXPORT_FULL_01" stopped due to fatal error at 14:41:36
    ORA-39014: One or more workers have prematurely exited.
    The trace file shows:
    *** 2009-03-03 14:20:41.500
    *** ACTION NAME:() 2009-03-03 14:20:41.328
    *** MODULE NAME:(oradim.exe) 2009-03-03 14:20:41.328
    *** SERVICE NAME:() 2009-03-03 14:20:41.328
    *** SESSION ID:(159.1) 2009-03-03 14:20:41.328
    Successfully allocated 7 recovery slaves
    Using 157 overflow buffers per recovery slave
    Thread 1 checkpoint: logseq 12911, block 2, scn 7355467494724
    cache-low rba: logseq 12911, block 251154
    on-disk rba: logseq 12912, block 221351, scn 7355467496281
    start recovery at logseq 12911, block 251154, scn 0
    ----- Redo read statistics for thread 1 -----
    Read rate (ASYNC): 185319Kb in 1.73s => 104.61 Mb/sec
    Total physical reads: 189333Kb
    Longest record: 5Kb, moves: 0/448987 (0%)
    Change moves: 1378/5737 (24%), moved: 0Mb
    Longest LWN: 1032Kb, moves: 45/269 (16%), moved: 41Mb
    Last redo scn: 0x06b0.9406fb58 (7355467496280)
    ----- Recovery Hash Table Statistics ---------
    Hash table buckets = 32768
    Longest hash chain = 3
    Average hash chain = 35384/25746 = 1.4
    Max compares per lookup = 3
    Avg compares per lookup = 847056/876618 = 1.0
    *** 2009-03-03 14:20:46.062
    KCRA: start recovery claims for 35384 data blocks
    *** 2009-03-03 14:21:02.171
    KCRA: blocks processed = 35384/35384, claimed = 35384, eliminated = 0
    *** 2009-03-03 14:21:02.531
    Recovery of Online Redo Log: Thread 1 Group 2 Seq 12911 Reading mem 0
    *** 2009-03-03 14:21:04.718
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 12912 Reading mem 0
    *** 2009-03-03 14:21:16.296
    ----- Recovery Hash Table Statistics ---------
    Hash table buckets = 32768
    Longest hash chain = 3
    Average hash chain = 35384/25746 = 1.4
    Max compares per lookup = 3
    Avg compares per lookup = 849220/841000 = 1.0
    *** 2009-03-03 14:21:28.468
    tkcrrsarc: (WARN) Failed to find ARCH for message (message:0x1)
    tkcrrpa: (WARN) Failed initial attempt to send ARCH message (message:0x1)
    *** 2009-03-03 14:26:25.781
    kwqmnich: current time:: 14: 26: 25
    kwqmnich: instance no 0 check_only flag 1
    kwqmnich: initialized job cache structure
    ktsmgtur(): TUR was not tuned for 360 secs
    Windows Server 2003 Version V5.2 Service Pack 2
    CPU : 8 - type 586, 4 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:7447M/8185M, Ph+PgF:6833M/9984M, VA:385M/3071M
    Instance name: vmsdbsea
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 0
    Windows thread id: 2460, image: ORACLE.EXE (SHAD)
    Dynamic strand is set to TRUE
    Running with 2 shared and 18 private strand(s). Zero-copy redo is FALSE
    *** 2009-03-03 08:06:51.921
    *** ACTION NAME:() 2009-03-03 08:06:51.905
    *** MODULE NAME:(expdp.exe) 2009-03-03 08:06:51.905
    *** SERVICE NAME:(xxxxxxxxxx) 2009-03-03 08:06:51.905
    *** SESSION ID:(118.53238) 2009-03-03 08:06:51.905
    SHDW: Failure to establish initial communication with MCP
    SHDW: Deleting Data Pump job infrastructure
    Is this a system memory issue for Data Pump? My exp works well.
    How do I fix this issue?
    JIM
    Edited by: user589812 on Mar 3, 2009 5:07 PM
    Edited by: user589812 on Mar 3, 2009 5:22 PM

  • Data Pump and Data Manipulation (DML)

    Hi all,
    I am wondering if there is a method to manipulate table column data as part of a Data Pump export. For example, is it possible to make the following change during a DP export:
    update employee
    set firstname = 'some new string';
    We are looking for a way to protect sensitive data columns. We already have a script to update column data (i.e., scramble text to protect personal information), but we would like to automate the changes as we export the database schema, rather than having to export, import, run DML, and then export again.
    Any ideas or advice would be great!
    Regards,
    Leigh.

    Thanks Francisco!
    I haven't read the entire document yet, but at first glance it appears to give me what I need. Do you know if this functionality for Data Pump is available in Oracle Database versions 10.1.0.4 and 10.2.0.3 (the versions we currently use)?
    Regards,
    Leigh.
    Edited by: lelliott78 on Sep 10, 2008 9:48 PM
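    For reference, the kind of on-the-fly scrambling described above is what the REMAP_DATA parameter provides in 11g Data Pump (it is not available in the 10g releases mentioned). A minimal sketch, assuming a hypothetical SCRAMBLE_PKG package owned by the exporting user:
    -- The remap function must accept and return the column's datatype
    CREATE OR REPLACE PACKAGE scramble_pkg AS
      FUNCTION mask_name(p_value VARCHAR2) RETURN VARCHAR2;
    END scramble_pkg;
    /
    CREATE OR REPLACE PACKAGE BODY scramble_pkg AS
      FUNCTION mask_name(p_value VARCHAR2) RETURN VARCHAR2 IS
      BEGIN
        RETURN 'some new string';  -- replace with real scrambling logic
      END mask_name;
    END scramble_pkg;
    /
    expdp scott/tiger tables=EMPLOYEE directory=DATA_PUMP_DIR dumpfile=emp.dmp remap_data=EMPLOYEE.FIRSTNAME:SCRAMBLE_PKG.MASK_NAME
    The dump file then contains the scrambled values, so no post-import DML pass is needed.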

  • Data Pump execution time

    I am exporting 2 tables from my database as a preliminary test before doing a full export (as part of a server migration to a new server).
    I have some concerns about the time the export took (and therefore the time the corresponding import will take, which is typically considerably longer than the export).
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 19.87 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SAPPHIRE"."A_SDIDATAITEM" 15.45 GB 88263813 rows
    . . exported "SAPPHIRE"."A_SDIDATA" 1.775 GB 14011593 rows
    Master table "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully loaded/unloaded
    Dump file set for SYSTEM.EXPORT_TABLES_LIMSLIVE is:
    E:\ORACLE\PRODUCT\10.2.0\ADMIN\LIMSLIVE\DPDUMP\EXP_TABLES_LIMSLIVE.DMP
    Job "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully completed at 15:43:38
    These 2 tables alone took nearly an hour to export. The bulk of the time seemed to be spent on the line
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Q1. Is that really the line that was taking the time, or was the export actually still working on the table data shown on the following line?
    Q2. Will such table stats be brought in on an import? I.e., are table stats part of the dictionary, and therefore part of the SYS/SYSTEM schemas, and so will not be brought in on the import into my newly created target database?
    Q3. Does anyone know of any performance improvements that can be made to this export/import? I am exporting with the 10.2.0.1 source Data Pump and will be importing with a new target 11gR2 Data Pump. From experimenting with the command line I have found that 10.2.0.1 does not support PARALLEL, so I am not able to use it on the export side (I should be able to use it on the 11gR2 import side).
    thanks,
    Jim

    Jim,
    > Q1. What difference does it make knowing how long the metadata versus the actual data takes on export? Is this because you could decide to exclude some objects and manually create them before import?
    You asked what was taking so long; this was just a test to see whether it was metadata or data. It may help us figure out if there is a problem or not, and knowing what is slow helps narrow things down.
    > With the old exp/imp utility I sometimes manually created the tablespaces and indexes in this manner. However, for Data Pump the metadata contains a lot more than just tablespaces and indexes; I couldn't imagine manually creating all the tables and grants, for example. I guess you can be selective about what objects you include/exclude in the export or import (via the INCLUDE and EXCLUDE settings)?
    No, I'm not suggesting that you change your process, just trying to figure out what is slow. Also, old exp/imp and Data Pump treat metadata and data the same way. Just to clear things up: when you say content=metadata_only, it exports everything except the data. It will export tablespaces, grants, users, tables, statistics, etc. Everything but the data. When you say content=data_only, it exports only the data. You can use this method to export and import everything, but it's not the best solution for most cases. If you create all of your metadata and then load the data, any indexes on the tables need to be maintained while the data is being loaded, and this will slow down the data-only job.
    > Q2. If I do a data-only export, I presume that means I need to manually pre-create every object I want imported into my target database. Does this mean every tablespace, table, index, grant, etc. (not an attractive option)?
    Again, I was not suggesting this method, just trying to figure out what was slow. If I were to do it this way, I would run impdp on the metadata-only dump file, then run the import on the data-only dump file.
    > Q3. If I use EXCLUDE=statistics, does that mean I can simply regenerate the stats on the target database after the import completes (how would I do that)?
    Yes, you can do that. There are different statistics-gathering levels: you can collect them per table, per index, per schema, and I think per database. You want to look at the documentation for dbms_stats.gather...
    Dean
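    Following up on Q3, a sketch of regathering statistics on the target after an import that used EXCLUDE=STATISTICS (the schema name is taken from the export log above):
    -- Gather fresh optimizer statistics for the imported schema and its indexes
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SAPPHIRE', cascade => TRUE);
    END;
    /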

  • How to find tables with columns not supported by Data Pump network_link

    Hi Experts,
    We are trying to import a database into a new DB via the Data Pump network_link option.
    As the Oracle documentation states, tables with columns that are object types are not supported in a network export; an ORA-22804 error will be generated and the export will move on to the next table. To work around this restriction, you can manually create the dependent object types within the database from which the export is being run.
    My question: how do I find the tables with object-type columns that are not supported in a network export?
    We have LOB objects and the Oracle Spatial SDO_GEOMETRY object type. Our database is about 300 GB; normally exp takes 30 hours.
    We are trying to use Data Pump with network_link to speed up the export process.
    How do we fix the Oracle Spatial SDO_GEOMETRY type issue during Data Pump?
    Our system is 32-bit Windows 2003 with a 10gR2 database.
    Thanks
    Jim
    Edited by: user589812 on Nov 3, 2009 12:59 PM

    Hi,
    I remember there being issues with SDO_GEOMETRY and Data Pump. You may want to contact Oracle Support about this issue.
    Dean
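    To answer the "how do I find them" part, a dictionary query along these lines should list the tables with user-defined object-type columns (a sketch; adjust the owner filter to your own schemas):
    SELECT c.owner, c.table_name, c.column_name, c.data_type
    FROM dba_tab_columns c
    JOIN dba_types t
      ON t.owner = c.data_type_owner
     AND t.type_name = c.data_type
    WHERE t.typecode = 'OBJECT'
      AND c.owner NOT IN ('SYS', 'SYSTEM')
    ORDER BY c.owner, c.table_name;
    Tables listed here (for example, those with SDO_GEOMETRY columns) are candidates for the ORA-22804 skip described above.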
