Data Pump and LOB Counts

I was recently tasked with copying three schemas from one 10gR2 database to another 10gR2 database on the same RHEL server. I decided to use Data Pump in network mode.
After performing the transfer, the count of database objects for the schemas was the same EXCEPT for LOBs: there were significantly fewer on the target (new) database than on the source (old) database.
To be certain, I retried the operation using an intermediate dump file. Again, everything ran without error. The counts were the same as before - fewer LOBs. I also compared row counts of the tables, and they were identical on both systems.
Testing by the application user seemed to indicate everything was fine.
My assumption is that some consolidation happens when the data is imported, resulting in fewer LOB segments ... I haven't worked much (really, at all) with LOBs before. I'm just looking for confirmation that this is the case and that nothing is "missing".
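One way to double-check that nothing is missing, independent of the DBA_OBJECTS counts, is to compare the LOB columns themselves on both databases; a minimal sketch of that check (run it on source and target and diff the output):

-- One row per LOB column that actually has a segment behind it.
-- If the counts (and the column lists) match on both databases,
-- every LOB column survived the transfer.
select   owner,
         count(*) as lob_columns
from     dba_lobs
where    owner in ('COGAUDIT_DEV', 'COGSTORE_DEV', 'PLANNING_DEV')
group by owner
order by owner;

-- Column-level version, if you want to diff the exact lists:
select   owner, table_name, column_name
from     dba_lobs
where    owner in ('COGAUDIT_DEV', 'COGSTORE_DEV', 'PLANNING_DEV')
order by owner, table_name, column_name;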
Here are the results:
ORIGINAL SOURCE DATABASE
OWNER                          OBJECT_TYPE         STATUS         CNT
COGAUDIT_DEV                   INDEX               VALID            6
                               LOB                 VALID           12
                               TABLE               VALID           21
COGSTORE_DEV                   INDEX               VALID          286
                               LOB                 VALID          390
                               SEQUENCE            VALID            1
                               TABLE               VALID          200
                               TRIGGER             VALID            2
                               VIEW                VALID            4
PLANNING_DEV                   INDEX               VALID           37
                               LOB                 VALID           15
                               SEQUENCE            VALID            3
                               TABLE               VALID           31
                               TRIGGER             VALID            3
14 rows selected.

Here are the counts on the BOPTBI (target) database:
NEW TARGET DATABASE
OWNER                          OBJECT_TYPE         STATUS         CNT
COGAUDIT_DEV                   INDEX               VALID            6
                               LOB                 VALID            6
                               TABLE               VALID           21
COGSTORE_DEV                   INDEX               VALID          286
                               LOB                 VALID           98
                               SEQUENCE            VALID            1
                               TABLE               VALID          200
                               TRIGGER             VALID            2
                               VIEW                VALID            4
PLANNING_DEV                   INDEX               VALID           37
                               LOB                 VALID           15
                               SEQUENCE            VALID            3
                               TABLE               VALID           31
                               TRIGGER             VALID            3
14 rows selected.

We're just curious ... thanks for any insight on this!
Chris

OK, here is the SQL that produced the object listing:
break on owner skip 1;
select   owner,
         object_type,
         status,
         count(*) cnt
from     dba_objects
where    owner in ('COGAUDIT_DEV', 'COGSTORE_DEV', 'PLANNING_DEV')
group by owner,
         object_type,
         status
order by 1, 2;

Here is the export parameter file:
# cog_all_exp.par
userid = chamilton
content = all
directory = xfer
dumpfile = cog_all_%U.dmp
full = n
job_name = cog_all_exp
logfile = cog_all_exp.log
parallel = 2
schemas = (cogaudit_dev, cogstore_dev, planning_dev)

Here is the import parameter file:
# cog_all_imp.par
userid = chamilton
content = all
directory = xfer
dumpfile = cog_all_%U.dmp
full = n
job_name = cog_all_imp
logfile = cog_all_imp.log
parallel = 2
reuse_datafiles = n
schemas = (cogaudit_dev, cogstore_dev, planning_dev)
skip_unusable_indexes = n
table_exists_action = replace

The above parameter files were for the dumpfile version. For the original network link version, I omitted the dumpfile parameter and substituted "network_link = boptcog_xfer".
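For reference, the network-mode run also depends on a database link in the target database that points back at the source; a minimal sketch (the password and TNS alias below are placeholders, not the real ones):

-- Created in the target database as the importing user.
-- 'BOPTCOG' is a placeholder TNS alias for the source database.
create database link boptcog_xfer
  connect to chamilton identified by change_me
  using 'BOPTCOG';

-- Quick sanity check that the link resolves before running impdp:
select * from dual@boptcog_xfer;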
Chris

Similar Messages

  • Data Pump and Data Manipulation (DML)

    Hi all,
    I am wondering if there is a method to manipulate table column data as part of a Data Pump export? For example, is it possible to make the following changes during the export:
    update employee
    set firstname = 'some new string';
    We are looking at a way to protect sensitive data columns. We already have a script to update column data (i.e. scramble text to protect personal information), but we would like to automate the changes as we export the database schema, rather than having to export, import, run the DML and then export again.
    Any ideas or advice would be great!
    Regards,
    Leigh.

    Thanks Francisco!
    I haven't read the entire document yet, but at first glance it appears to give me what I need. Do you know if this functionality for Data Pump is available on Oracle Database versions 10.1.0.4 and 10.2.0.3 (the versions we currently use)?
    Regards,
    Leigh.
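    For later readers: the ability to rewrite column values in flight arrived with the REMAP_DATA parameter in 11g, so it would not help on the 10.1/10.2 releases mentioned here. A minimal sketch of the idea, with illustrative names (REMAP_DATA expects a packaged function):

    -- Illustrative masking package; usable with REMAP_DATA on 11g onward only.
    create or replace package mask_pkg as
      function scramble(p_in varchar2) return varchar2;
    end mask_pkg;
    /
    create or replace package body mask_pkg as
      function scramble(p_in varchar2) return varchar2 is
      begin
        return 'some new string';   -- same effect as the UPDATE in the question
      end scramble;
    end mask_pkg;
    /
    -- Then, on 11g: expdp ... remap_data=employee.firstname:mask_pkg.scramble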

  • Showing date, time and/or counter - HELP!

    I need the date and a counter to show (superimposed?) over the video in my iMovie. I know the date and time recorded was imported with the video files from my DV camcorder because they are available in File Info. The video recorded was a court deposition for a lawyer, and they will want the date recorded to show on the video. I'm also guessing they'll want a counter for easy access to certain parts of the video if needed. How can I do this?

    Sue,
    I did purchase and download the timecode overlay plug-ins you referenced. While they appear to be what I need, when I try to apply them to any of my clips, the clip just turns completely black with no sound. I know this gets into another topic, but any ideas?
    Thanks so much!
    Shannon

  • Data Pump and Physical Standby

    I want to perform a Data Pump export of a schema in a physical standby database. Is this possible, and what are the steps?

    Thanks for the information. I will give it a try. I just realized it might not work, because Data Pump needs to write its job information (the master table), and a physical standby is not open read-write.
    What's the concern with not running the export on the primary?

  • Data pump and SGA, system memory in window 2003

    Hi Experts,
    I have a question about Oracle 10g Data Pump. Based on the Oracle documentation,
    all Data Pump ".dmp" and ".log" files are created on the Oracle server, not the client machine.
    Does that mean Data Pump needs to use the Oracle server's SGA or other system memory? Is that true?
    Or must Data Pump be supported by Oracle server memory?
    We use Oracle 10g R4 on 32-bit Windows 2003 and have a memory issue, so we are careful about this point.
    At present we use exp to do the export job. Our DB size is 280 GB and the SGA is 2 GB on Windows.
    For testing, I could see Data Pump messages in the alert file. I did not see an export job break the DB or the data replication job.
    Does any expert have experience with this point?
    Thanks,
    JIM

    Hi Jim,
    user589812 wrote:
    > I have a question about Oracle 10g Data Pump. Based on the Oracle documentation, all Data Pump ".dmp" and ".log" files are created on the Oracle server, not the client machine.
    Yes, they are, but you can always point the directory at a shared location on another server.
    > That means Data Pump needs to use the Oracle server's SGA or other system memory. Is that true? Or must Data Pump be supported by Oracle server memory?
    Either way, the SGA is used for a conventional export. You can reduce the overhead to some extent with a direct export (direct=y), but the server resources will still be in use.
    > We use Oracle 10g R4 on 32-bit Windows 2003 and have a memory issue, so we are careful about this point.
    If you have Windows Enterprise Edition, why don't you enable PAE to use more memory, provided the server has more than 3 GB of RAM?
    > At present we use exp to do the export job. Our DB size is 280 GB and the SGA is 2 GB on Windows.
    With respect to the size of the database, your SGA is too small.
    Hope it helps.
    Regards
    Z.K.

  • Data Pump and Java Classes

    I tried importing the data from one schema to another but the Java classes that we use (via Java Stored Procedures) are showing up as invalid. Is there any reason why? We are using Oracle 10g R2. I tried resolving them by running the following sql, but that didn't work either:
    ALTER JAVA CLASS "<java_clss>" RESOLVER (("*" <schema_name>)(* PUBLIC)) RESOLVE;
    Any thoughts will be appreciated.
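    One low-risk thing to try before resolving classes one by one is a blanket recompile of the imported schema; a minimal sketch, assuming UTL_RECOMP is available (it is what utlrp.sql drives) and using a placeholder schema name:

    -- Recompile everything invalid in the schema; utlrp.sql uses this same package.
    exec utl_recomp.recomp_serial('TARGET_SCHEMA');

    -- Anything still invalid afterwards:
    select object_name, object_type
    from   dba_objects
    where  owner = 'TARGET_SCHEMA'
    and    status = 'INVALID'
    order  by object_type, object_name;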

    There are two ways to instantiate a target's data. One is to use a native data loader or utility. In Oracle's case, Oracle Data Pump (not the "data pump" in GG) or SQL*Loader, as an example. The other way is to use GoldenGate's data pump.
    You can configure DDL synchronization for Oracle, but you have to turn off the recycle bin. See Chapter 13 in the admin guide.

  • Data Guard and LOBs

    Hi, this is my first post.
    I have Oracle9i Enterprise Edition Release 9.2.0.5.0 and my OS is AIX 5.2
    I have a primary database with a standby database. I want to know whether Data Guard can transport tables with LOB fields created in my primary database to the standby database.
    Thanks.

    Physical standby: yes. Logical standby: not sure; I would need to check the Data Guard Concepts and Administration manual, where this is documented.
    Sybrand Bakker
    Senior Oracle DBA
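    For the logical standby case, the dictionary can answer this directly; a quick check (run on the primary), assuming the DBA_LOGSTDBY_UNSUPPORTED view exists on that release:

    -- Tables a logical standby could not maintain because of their data types.
    -- If your LOB tables do not appear here, they are supported.
    select distinct owner, table_name
    from   dba_logstdby_unsupported
    order  by owner, table_name;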

  • Data Pump and Grants - Frustration

    Hello All,
    I've been working on this for days, and I think I've finally got to the point where I can ask a meaningful question:
    My task is to copy the meta-data from 6 important schemas in a PROD db to equivalent schemas in a DEV db.
    Now these schemas all interact by way of foreign keys, views, triggers, packages and need various grants
    imported for these objects to compile.
    Also, there are about 10 read-only users in PROD that don't need to be represented in the DEV db. So,
    I don't want to import grants for these users.
    How can I import just the grants that I need, and leave out the others?
    At present, I either:
    * use exclude=grant, in which case a bunch of grants need to be applied later, and then all those re-compiled
    or
    * don't exclude grant, in which case I get several thousand errors, masking any real errors that may arise.
    Does anyone know a way to solve my problem?
    Thanks in advance,
    Chris
    OS: Solaris 10
    DB: 10.2.0.2.0
    P.S. I could create all the RO users in DEV, but eventually I will be doing this for about 20 DB's and I don't
    want to clutter the user space with 10*20 useless accounts for 6*20 useful ones.

    CONTENT
    Specifies data to load.
    Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.
    DATA_OPTIONS
    Data layer option flags.
    Valid keywords are: SKIP_CONSTRAINT_ERRORS.
    ==========================================
    Excluding Constraints
    The following constraints cannot be excluded:
    * NOT NULL constraints.
    * Constraints needed for the table to be created and loaded successfully (for example, primary key constraints for index-organized tables or REF SCOPE and WITH ROWID constraints for tables with REF columns).
    This means that the following EXCLUDE statements will be interpreted as follows:
    * EXCLUDE=CONSTRAINT will exclude all nonreferential constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading.
    * EXCLUDE=REF_CONSTRAINT will exclude referential integrity (foreign key) constraints.
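    One workable middle ground: keep EXCLUDE=GRANT on the import, then re-create only the grants exchanged between the six schemas, generated from the source dictionary. A minimal sketch (the schema list is a placeholder for the six real names):

    -- Run on the PROD source and spool the output: object grants where both
    -- the owner and the grantee are among the six schemas, so the read-only
    -- accounts never enter the picture.
    select 'grant ' || privilege
           || ' on "' || owner || '"."' || table_name || '"'
           || ' to ' || grantee
           || decode(grantable, 'YES', ' with grant option', '')
           || ';' as grant_ddl
    from   dba_tab_privs
    where  owner   in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5','SCHEMA6')
    and    grantee in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5','SCHEMA6');

    Run the resulting script in DEV after the import, then recompile the invalid objects.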

  • Reconfigure one data pump extract into 4 and several replicats

    Hi
    I have set up basic replication between two databases, source 9iR2 and target 11gR2 (same platform); the goal is to perform a zero-downtime migration.
    I started from a very basic configuration just to see how this works - one extract, one data pump and one replicat - but I have observed some lagging problems, so I think I might need more data pumps and more replicats.
    How can I reconfigure the current configuration into one with multiple replicats and data pumps without needing to re-instantiate the target database? I have looked into support note How to Merge Two Extract And Two Replicate Group To One Each, from one source DB to a target DB (Doc ID 1518039.1), but I need to do the reverse.
    Thanks in advance

    Good example in the Apress book. Basically, stop extract, let pump and replicat finish the current stream, then split data pump and replicat as needed. Figure out where to start each pump and replicat with respect to the CSN.

  • Data Pump between differnt oracle versions

    This is a follow-on to the posting from yesterday. We have a customer database that is at 10.2.N, which we currently do not support. So they are planning on buying a new server and installing Oracle at the supported version of 10.1.0.5. We are then planning on exporting the data from the 10.2 database using the schema option and then importing it back into the 10.1.0.5 database.
    I thought I could use Data Pump and specify the version parameter as 10.1.0.5 during the export, but according to the help that option/parameter is only valid for NETWORK_LINK and SQLFILE. I was not planning on using those options, just the default dmp file.
    Am I better off just using the old export/import utilities? The schema owner has tables that contain LOB elements, so how does that work with SQLFILE?
    Any advice is appreaciated!
    thanks!

    I don't think that's the case - it appears to be supported:
    C:\Documents and Settings\ATorelli>expdp system/xxxxx version=10.1.0.1 schemas=pippo
    Export: Release 10.2.0.4.0 - Production on Wednesday, 29 June, 2011 18:42:19
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** version=10.1.0.1 schemas=pippo
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
    C:\ORA\ATORELLI\APP\PRODUCT\ADMIN\MRMANREP\DPDUMP\EXPDAT.DMP
    Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 18:42:23
    Regards
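    The same version-pinned export can also be driven server-side through DBMS_DATAPUMP if that is more convenient; a rough sketch under the thread's assumptions (directory, file and schema names are placeholders):

    -- Schema-mode export whose dump file is limited to 10.1.0.5 compatibility.
    declare
      h number;
    begin
      h := dbms_datapump.open(operation => 'EXPORT',
                              job_mode  => 'SCHEMA',
                              version   => '10.1.0.5');
      dbms_datapump.add_file(h, 'downgrade_exp.dmp', 'XFER');
      dbms_datapump.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''PIPPO'')');
      dbms_datapump.start_job(h);
      dbms_datapump.detach(h);
    end;
    /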

  • Data Pump data compression

    We are seriously looking to begin using Data Pump and discovered via a Metalink bulletin that Data Pump does not support named pipes or OS compression utilities (Doc ID: Note:276521.1). Does anyone know a workaround for this, or have experience using Data Pump with very large databases and managing the size of the dump files?

    With Oracle Data Pump you can set the maximum size of each dump file and, in that way, also spread the dump across multiple directories.
    I found the following on the AskTom website. It looks like the things you want are possible with Oracle 10g Release 2:
    Fortunately, Oracle Database 10g Release 2 makes it easier to create and partially compress DMP files than the old tools ever did. EXPDP itself will now compress all metadata written to the dump file and IMPDP will decompress it automatically—no more messing around at the operating system level. And Oracle Database 10g Release 2 gives Oracle Database on Windows the ability to partially compress DMP files on the fly for the first time (named pipes are a feature of UNIX/Linux).

  • Access 97 Data Pump

    Greetings,
    I have written a data pump in Access 97. I am currently using the latest ODBC drivers from the OTN site (downloaded as of today). I am running Oracle 8.1.7.
    The problem I am having is that I am pumping information from an older (Oracle 7) database into my reports database every night. However, at different stages of the dump, Access seems to Dr. Watson on me.
    I have loaded every patch and driver upgrade I can find both here and at Microsoft. If anyone here can identify anything stupid I am doing wrong it would be appreciated.
    I can provide the code I am using on request, however I have changed the way I have executed the code many times with the same result.
    Thanks

    Jim Thompson wrote:
    Hi, I have had a look at the 11gR2 Utilities manual with regard to access methods for Data Pump and I need a bit of clarification -
    My understanding is that Data Pump (DBMS_DATAPUMP) can access a table's metadata (i.e. what is needed to create the DDL of the table) via DBMS_METADATA. However, when accessing the data itself it can do this via
    Data File copying ( basically a non sql method of using Transportable tablespaces ) or
    Direct Path ( which bypasses Sql ) or
    External Table ( a table to dump file mapping system ).
    > Q1. What does Direct Path use if it is bypassing SQL?
    Direct path inserts use free space above the high water mark in the tablespace files, and when the inserts are finished the HWM is moved so that Oracle can see the newly inserted rows.
    > Q2. Do any of the methods use SQL? (I am trying to ascertain what part SQL plays at all)
    Yes. Import can be done using normal inserts into the table, generating normal amounts of redo/undo.
    > Q3. I appreciate that under certain circumstances the same method does not have to be used on the export and import side, e.g. export could use Direct Path and import could use External Table - however, is that a choice you make as a user, or one that Data Pump makes automatically?
    You basically have no say in which method is used; Data Pump makes that decision for you, based on what is most efficient.
    Edit: Found more information and details on datapump here:
    http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
    Look at the bottom for links to the documentation for applicable Oracle-versions.
    HtH
    Johan
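    The external-table path is easy to see in isolation, because the same driver is exposed directly in SQL; a small sketch (directory, table and file names are placeholders):

    -- Unload a table into a Data Pump format file via the ORACLE_DATAPUMP
    -- driver - the same mechanism behind Data Pump's external-table method.
    create table emp_unload
      organization external (
        type oracle_datapump
        default directory xfer
        location ('emp_unload.dmp')
      )
    as
    select * from scott.emp;

    -- The file can then be attached read-only on another database with a
    -- matching CREATE TABLE ... ORGANIZATION EXTERNAL definition.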

  • Data Pump execution time

    I am exporting 2 tables from my database as a preliminary test before doing a full export (as part of a server migration to a new server).
    I have some concerns about the time the export took (and therefore the time the corresponding import will take - which is typically considerably longer than the export).
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 19.87 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SAPPHIRE"."A_SDIDATAITEM" 15.45 GB 88263813 rows
    . . exported "SAPPHIRE"."A_SDIDATA" 1.775 GB 14011593 rows
    Master table "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully loaded/unloaded
    Dump file set for SYSTEM.EXPORT_TABLES_LIMSLIVE is:
    E:\ORACLE\PRODUCT\10.2.0\ADMIN\LIMSLIVE\DPDUMP\EXP_TABLES_LIMSLIVE.DMP
    Job "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully completed at 15:43:38
    These 2 tables alone took nearly an hour to export. The bulk of the time seemed to be on the line
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Q1. Is that really the line that was taking the time, or was the export actually working on the export of the table on the following line?
    Q2. Will such table stats be brought in on an import? i.e. are table stats part of the dictionary and therefore part of the SYS/SYSTEM schemas, and so will not be brought in on the import to my newly created target database?
    Q3. Does anyone know of any performance improvements that can be made to this export / import? I am exporting from the 10.2.0.1 source Data Pump and will be importing on a new target 11gR2 Data Pump. From experimenting with the command line I have found that 10.2.0.1 does not support PARALLEL, so I am not able to use that on the export side (I should be able to use it on the 11gR2 import side).
    thanks,
    Jim

    Jim,
    > Q1. What difference does it make knowing the difference between how long the metadata and the actual data take on export? Is this because you could decide to exclude some objects and manually create them before import?
    You asked what was taking so long. This was just a test to see if it was metadata or data. It may help us figure out whether there is a problem or not; knowing what is slow would help narrow things down.
    > With the old exp/imp utility I sometimes manually created the tablespaces and indexes in this manner; however, for Data Pump the metadata contains a lot more than just tablespaces and indexes - I couldn't imagine manually creating all the tables and grants, for example. I guess you can be selective about which objects you include/exclude in the export or import (via the INCLUDE & EXCLUDE settings)?
    No, I'm not suggesting that you change your process, just trying to figure out what is slow. Also, old exp/imp and Data Pump treat metadata and data the same way. Just to maybe clear things up - when you say content=metadata_only, it exports everything except for data. It will export tablespaces, grants, users, tables, statistics, etc. Everything but the data. When you say content=data_only, it only exports the data. You can use this method to export and import everything, but it's not the best solution for most. If you create all of your metadata and then load the data, any indexes on the tables need to be maintained while the data is being loaded, and this will slow down the data-only job.
    > Q2. If I do a DATA ONLY export I presume that means I need to manually pre-create every object I want imported into my target database. Does this mean every tablespace, table, index, grant etc. (not an attractive option)?
    Again - I was not suggesting this method, just trying to figure out what was slow. If I were to do it this way, I would run the impdp on the metadata-only dump file, then run the import on the data-only dump file.
    > Q3. If I use EXCLUDE=statistics does that mean I can simply regenerate the stats on the target database after the import completes (how would I do that?)
    Yes, you can do that. There are different statistics gathering levels. You can collect them per table, per index, per schema, and I think per database. You want to look at the documentation for dbms_stats.gather...
    Dean
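    As an aside on Q3, regenerating the statistics after the import is a short call per schema; a minimal sketch using the schema name from the log above:

    -- Gather fresh optimizer statistics for the imported schema
    -- (cascade => true covers the indexes as well).
    begin
      dbms_stats.gather_schema_stats(
        ownname => 'SAPPHIRE',
        cascade => true);
    end;
    /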

  • Oracle data pump vs import/export utility

    Hello all,
    What is the difference between Oracle Data Pump and the Import/Export utility? Which one is faster?

    Handle:      user9362044
    Status Level:      Newbie
    Registered:      Jan 26, 2011
    Total Posts:      31
    Total Questions:      11 (7 unresolved)
    so many questions & so few answers.
    > What is the difference between Oracle Data Pump and Import/Export utility?
    Unwilling or unable to Read The Fine Manual yourself?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/toc.htm

  • Data pump + statistics + dba_tab_modifications

    Database 11.2, persons involved with replicating data are using data pump and importing the statistics (that's all I know about their process). When I look at this query:
    select /*mods.table_owner,
           mods.table_name,*/
           mods.inserts,
           mods.updates,
           mods.deletes,
           mods.timestamp,
           tabs.num_rows,
           tabs.table_lock,
           tabs.monitoring,
           tabs.sample_size,
           tabs.last_analyzed
    from   sys.dba_tab_modifications mods join dba_tables tabs
           on mods.table_owner = tabs.owner and mods.table_name = tabs.table_name
    where  mods.table_name = &tab;
    I see this:
    INSERTS    UPDATES  DELETES  TIMESTAMP         NUM_ROWS   TABLE_LOCK  MONITORING  SAMPLE_SIZE  LAST_ANALYZED
    119333320  0        0        11/22/2011 19:27  116022939  ENABLED     YES         116022939    10/24/2011 23:10
    As we can see, the source database last gathered stats on 10/24 and the data was loaded into the destination on 11/22.
    The database is giving bad execution plans as indicated in previous thread: Re: Understanding results from dbms_xplan.display_cursor
    My first inclination is to run the following, but since they imported the stats, shouldn't those already be "good", with dba_tab_modifications simply being out of sync? What gives?
    exec dbms_stats.gather_schema_stats(
         ownname          => 'SCHEMA_NAME',
         options          => 'GATHER AUTO'
       )

    In your previous post you mentioned that the explain plan has 197 records. That is one big SQL statement, so the CBO has plenty of opportunity to mess it up.
    That said, it is a good idea to verify that your statistics are fine. One way to accomplish that is to gather statistics into a pending area and compare the pending area stats with what is currently in the dictionary using DBMS_STATS.DIFF_TABLE_STATS_IN_PENDING.
    As mentioned by Tubby in your previous post, extended stats are a powerful and easy way to improve the quality of the CBO's plans. Note that in 11.2.0.2 you can create extended stats specifically tailored to your SQLs (based on AWR or on a live system) - http://iiotzov.wordpress.com/2011/11/01/get-the-max-of-oracle-11gr2-right-from-the-start-create-relevant-extended-statistics-as-a-part-of-the-upgrade/
    Iordan Iotzov
    http://iiotzov.wordpress.com/
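    A sketch of the pending-area comparison described above (owner and table names are placeholders):

    -- 1) Gather into the pending area instead of publishing immediately.
    exec dbms_stats.set_table_prefs('SCHEMA_NAME', 'BIG_TABLE', 'PUBLISH', 'FALSE');
    exec dbms_stats.gather_table_stats('SCHEMA_NAME', 'BIG_TABLE');

    -- 2) Compare the pending stats against what the dictionary currently holds.
    select report, maxdiffpct
    from   table(dbms_stats.diff_table_stats_in_pending('SCHEMA_NAME', 'BIG_TABLE'));

    -- 3) Publish or discard, depending on the outcome.
    exec dbms_stats.publish_pending_stats('SCHEMA_NAME', 'BIG_TABLE');
    -- or: exec dbms_stats.delete_pending_stats('SCHEMA_NAME', 'BIG_TABLE');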
