Exclude in data pump

Hi all,
We are using Oracle 10g on the Windows platform. During the export operation,
we use the command EXCLUDE=SCHEMA:"='HR'" to exclude the HR schema.
Can anybody suggest the syntax for excluding more than one schema
with the EXCLUDE parameter?

This is just an expression clause. You used the 'equal' sign. There is no reason you can't use the 'IN' clause.
You used:
EXCLUDE=SCHEMA:"='HR'"
How about
EXCLUDE=SCHEMA:"IN ('HR', 'SCOTT', 'BLAKE')"
Dean

Similar Messages

  • INCLUDE & EXCLUDE in Data Pump

    Any reason why DATA PUMP does not allow INCLUDE parameter when you have the EXCLUDE parameter in your statement and vice versa?

    "Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job. Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file."
    Source: http://www.oracle.com/technology/products/database/utilities/pdf/datapump11g2007_quickstart.pdf
    Jonathan Ferreira
    http://oracle4dbas.blogspot.com

  • Exclude DBMS_SCHEDULER jobs in DATA PUMP import

    Hello,
    I need to exclude all DBMS_SCHEDULER jobs during DATA PUMP import.
    I tried to do this with: EXCLUDE=JOB but this only works on DBMS_JOB.
    Is there a way to exclude all DBMS_SCHEDULER jobs during import?
    Kind Regards

There is PROCOBJ, which can be excluded (procedural objects in the selected schemas), but I'm afraid it excludes more than just DBMS_SCHEDULER jobs.
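A rough sketch of that filter in an import parameter file (the directory and dump file names are made up); again, this drops all procedural objects, not only scheduler jobs:
DIRECTORY=dump_dir
DUMPFILE=source.dmp
EXCLUDE=PROCOBJ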
    Any ideas?

  • How to exclude statistic using Data Pump API?

    How to exclude all statistics while exporting data using Oracle Data Pump API (DBMS_DATAPUMP package)?

You would call the metadata filter API like this:
dbms_datapump.metadata_filter(
    handle => your_handle_here,  -- the handle returned by dbms_datapump.open
    name   => 'EXCLUDE_PATH_LIST',
    value  => 'STATISTICS');
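For context, a minimal end-to-end sketch of the same call inside a schema-mode export job (the schema, file, and directory names are made up; error handling omitted):

DECLARE
  h NUMBER;
BEGIN
  -- open a schema-mode export job
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  dbms_datapump.add_file(handle => h, filename => 'hr_no_stats.dmp', directory => 'DUMP_DIR');
  -- limit the job to one schema
  dbms_datapump.metadata_filter(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''HR'')');
  -- drop all statistics from the job, as above
  dbms_datapump.metadata_filter(handle => h, name => 'EXCLUDE_PATH_LIST', value => 'STATISTICS');
  dbms_datapump.start_job(h);
  dbms_datapump.detach(h);
END;
/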
    Hope this helps.
    Dean

  • Data pump include exclude statistics

I am new to Oracle. Database statistics are stored in the data dictionary (SYS schema), so how can we include and exclude statistics (SYS objects) when doing a Data Pump export?

These statistics are for your own schema objects (in most cases tables). You have permission to create/update your schema objects' statistics, i.e. to analyze them.
This link gives you a bit more info on what original export/import does when you use the STATISTICS parameter:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_imp.htm#sthref2786
When using Data Pump, export statistics are always saved for tables, and if the source table has statistics, they are imported.
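If the goal is the opposite - keeping statistics out of the dump file - the command-line filter is a one-liner. A sketch (the schema, directory and file names are made up):

expdp scott SCHEMAS=scott DIRECTORY=dump_dir DUMPFILE=scott_nostats.dmp EXCLUDE=STATISTICS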

  • Data Pump exclude

I want to export a few schemas, and all of them have a table with the same name, e.g. table_name. My question is: how do I export the full database but exclude table_name from just one schema?

    Hi,
If the table name is not unique then you can't use the approach that Srini linked to above. The problem is that the filter only accepts the object name; there is no schema prefix and no other way of qualifying it (e.g. via object id).
You can do something like the following to exclude the data, but unfortunately you can't exclude the table definition. This might be good enough for you? Other than that, I'm not sure how you address this other than having separate schema exports, which is probably a lot of hassle for you.
You could temporarily rename the table in the one schema, or give it a mixed-case name which you can exclude (creating a synonym which is uppercase, perhaps) - but that's just a bodge to get round what is a restriction of Data Pump.
    SYS@EIBEM>create user testa identified by testa profile eis_dba;
    User created.
    SYS@EIBEM>grant connect,resource to testa;
    Grant succeeded.
    SYS@EIBEM>create user testb identified by testb profile eis_dba;
    User created.
    SYS@EIBEM>grant connect,resource to testb;
    Grant succeeded.
    SYS@EIBEM>create table testa.tab1(col1 date);
    Table created.
    SYS@EIBEM>create table testb.tab1(col1 date);
    Table created.
    SYS@EIBEM>insert into testa.tab1 values (sysdate);
    1 row created.
    SYS@EIBEM>insert into testb.tab1 values (sysdate);
    1 row created.
    SYS@EIBEM>commit;
    Commit complete.
    SYS@EIBEM>
    [oracle@sl02190]:EIBEM:/oracle# expdp / schemas=testa,testb query=TESTA.TAB1\:\"where 1=2\" directory=TMP
    Export: Release 11.2.0.3.0 - Production on Tue Nov 12 14:21:05 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning option
    Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_01":  /******** schemas=testa,testb query=TESTA.TAB1:"where 1=2" directory=TMP
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 128 KB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    . . exported "TESTA"."TAB1"                              4.929 KB       0 rows
    . . exported "TESTB"."TAB1"                              4.945 KB       1 rows
    Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_01 is:
      /tmp/expdat.dmp
    Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:21:25
    Cheers,
    Harry

  • Data pump, Query "1=2" performance?

    Hi guys
I am trying to export a schema using Data Pump; however, I need no data from a few of the tables, since they are irrelevant, but I'd still like to have the structure of each table along with any constraints and such.
I thought of using the QUERY parameter with a "1=2" predicate so that I can filter out all data from certain tables in the export while keeping everything else.
While this works, I wonder if Data Pump/Oracle is smart enough not to run this query through the entire table? If it does perform a full table scan, can anybody recommend another way of excluding just the data of certain tables while still getting the table structure itself along with everything related to it?
    I have been unable to find such information after searching the net for a good while.
    Regards
    Alex

    Thanks.
    Does that mean 1=2 actually scans the entire table so it should be avoided in the future?
    Regards
    Alex
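One alternative to QUERY="1=2" that avoids touching the rows at all: TABLE_DATA is a filterable object path, so a data-only exclusion for the named tables should keep each table's definition, constraints and indexes while skipping the unload of its rows entirely. A sketch parameter file (the table, directory and file names are made up):

SCHEMAS=myschema
DIRECTORY=dump_dir
DUMPFILE=no_big_data.dmp
EXCLUDE=TABLE_DATA:"IN ('BIG_TAB1', 'BIG_TAB2')"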

  • 10g to 11gR2 Upgrade using Data Pump Import

    Hi,
    I am intending to move a 10g database from one windows server to another. However there is also a requirement to upgrade this database to 11gR2. Therefore I was going to combine the 2 in one movement by -
    1. take a full data pump export of the source 10g database
    2. create a new empty 11g database on the target environment
    3. import the dump file into the target database
    However I have a couple of queries running over in my mind about this approach -
    Q1. What happens with the SYSTEM and SYS objects from SYSTEM and SYSAUX during the import ? Given that I will have in effect already created a new dictionary on the empty target database - will any import of SYSTEM or SYS simply produce errror messages which should be ignored ?
    Q2. should I use EXCLUDE on SYS and SYSTEM ( is this EXCLUDE better on the export or import side ) ?
    Q3. what happens if there are things such as scheduled jobs etc on the source system - since these would be stored in SYSTEM owned tables, how would I bring these across to the new target 11g database ?
    thanks,
    Jim

    Jim Thompson wrote:
    Hi,
    I am intending to move a 10g database from one windows server to another. However there is also a requirement to upgrade this database to 11gR2. Therefore I was going to combine the 2 in one movement by -
    1. take a full data pump export of the source 10g database
    2. create a new empty 11g database on the target environment
    3. import the dump file into the target database
    However I have a couple of queries running over in my mind about this approach -
    Q1. What happens with the SYSTEM and SYS objects from SYSTEM and SYSAUX during the import ? Given that I will have in effect already created a new dictionary on the empty target database - will any import of SYSTEM or SYS simply produce errror messages which should be ignored ?
You won't get errors related to the SYSTEM and SYSAUX tablespaces because these won't be exported at all. Schemas like SYS, CTXSYS, MDSYS and ORDSYS are never exported by Data Pump. That's why Oracle recommends not creating any objects under the SYS or SYSTEM schemas.
    Q2. should I use EXCLUDE on SYS and SYSTEM ( is this EXCLUDE better on the export or import side ) ?
Not required; as the data dictionary schemas won't be exported, specifying EXCLUDE won't do anything.
    Q3. what happens if there are things such as scheduled jobs etc on the source system - since these would be stored in SYSTEM owned tables, how would I bring these across to the new target 11g database ?
The DDL will get exported and imported into the new database. For example, if you have schema A and you defined a job whose owner is A, then this information also gets imported. For user-defined jobs there are views like USER_SCHEDULER_JOBS etc., so don't worry about the user jobs; they will be created.
    Also see
    MOS note - Schema's CTXSYS, MDSYS and ORDSYS are Not Exported [ID 228482.1]
    Export system or sys schema
I also ran the following test in my db, which shows I cannot export objects under the SYS schema:
    SQL> show user
    USER is "SYS"
    SQL> create table pump (id number);
    Table created.
    SQL> insert into pump values (1);
    1 row created.
    SQL> insert into pump values (2);
    1 row created.
    SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
    C:\Documents and Settings\rnagi\My Documents\Ranjit Doc\Performance\SQL>expdp tables=sys.pump logfile=test.log
    Export: Release 11.2.0.1.0 - Production on Mon Feb 27 18:11:29 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA tables=sys.pump logfi
    le=test.log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
ORA-39166: Object SYS.PUMP was not found.
ORA-31655: no data or metadata objects selected for job
Job "SYS"."SYS_EXPORT_TABLE_01" completed with 2 error(s) at 18:11:57

  • Database Upgrade using Data Pump

    Hi,
    I am moving my database from a Windows 2003 server to a Windows 2007 server. At the same time I am upgrading this database from 10g to 11gR2(11.2.0.3).
    therefore I am using the export / import method of upgrade ( via Data Pump not the old exp/imp ).
    I have successfully exported by source database and have created the empty shell database ready to take the import. However I have a couple of queries
    Q1. regarding all the SYSTEM objects from the source database. How will they import given that the new target database already has a SYSTEM tablespace
    I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best ?
    Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces or leave this to the import but use the REMAP DATAFILE option - what is everyone's experience as to which is the better way to go ? Again if I pre-create the tablespaces, how do I inform the import to ignore the creation of the tablespaces
    Q3. these 2 databases are on the same network, so in theorey instead of a manual export, copy of the dump file to the new server and then the import, I could use a Network Link for Import. I was just wondering where there any con's of this method over using the explicit export dump file ?
    thanks,
    Jim

    Jim,
    Q1. regarding all the SYSTEM objects from the source database. How will they import given that the new target database already has a SYSTEM tablespace
I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best?
If all you have is the base database and nothing else created, then you can do the full=y import. In fact, this is probably what you want. The SYSTEM tablespace will already be there, so when Data Pump tries to create it, that one create statement will fail; nothing else will. In most cases your system tables will already be there too, and this is OK. If you do schema-mode imports, you will miss out on some of the other stuff.
Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces, or leave this to the import but use the REMAP_DATAFILE option - what is everyone's experience as to which is the better way to go? Again, if I pre-create the tablespaces, how do I tell the import to skip creating them?
If the directory structure is different (which it usually is) then there is no easier way. You can run impdp with SQLFILE and include=tablespace. This will give you all of the create tablespace commands in a text file, which you can edit to change whatever you want. You can then tell Data Pump to skip tablespace creation by using exclude=tablespace; see the sketch below.
Q3. These 2 databases are on the same network, so in theory, instead of a manual export, a copy of the dump file to the new server, and then the import, I could use a network link for the import. I was just wondering whether there are any cons to this method over using an explicit export dump file?
The only con would be a slow network. That would make it slower, but if you have to copy the dump file over the same network anyway, you will still see the same basic traffic. The pro is that you don't need the extra disk space. Here is how I look at it:
1. you need XX GB for the source database
2. you need YY GB for the source dumpfile
3. you need YY GB for the target dumpfile that you copy
4. you need XX GB for the target database.
By going over the network you get rid of the 2*YY GB for the dumpfiles.
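A sketch of the SQLFILE approach from Q2 (the directory, dump file and script names are made up):

impdp system DIRECTORY=dump_dir DUMPFILE=full.dmp SQLFILE=create_ts.sql INCLUDE=TABLESPACE
(edit create_ts.sql to change the datafile paths and run it in SQL*Plus, then)
impdp system DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=Y EXCLUDE=TABLESPACE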
    Dean

  • FILESIZE parameter in DATA PUMP

    Hi All,
As per the Data Pump syntax, if we define the FILESIZE parameter, it creates dump files capped at that size.
But my question is: if I omit the FILESIZE parameter, how does Oracle determine the size of each file?
I am running expdp with the following parameter file. It creates dump files named SCHEMA.ENV.080410..p1%U.dmp, with %U becoming 1, 2, 3 etc.
It creates the files with different sizes.
    JOB_NAME=SCHEMA.ENV.080410..p1
    DIRECTORY=dump_dir
    DUMPFILE=dump_dir:SCHEMA.ENV.080410..p1%U.dmp
    LOGFILE=SCHEMA.ENV.080410..p1.explog
    PARALLEL=16
    CONTENT=ALL
    EXCLUDE=INDEX,CONSTRAINT,TABLE_STATISTICS
    TABLES= TABLE NAMES

    user4005330 wrote:
But my question is: if I omit the FILESIZE parameter, how does Oracle determine the size of each file?
As you defined PARALLEL=16, Data Pump creates 16 worker processes and each process writes to its own file; that is why you get files of different sizes. Without FILESIZE there is no per-file cap (the default is unlimited), so each %U file simply grows as its worker writes to it.
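If you want the pieces capped at a uniform size instead, FILESIZE sets the maximum size of each dump file and %U keeps generating new file names as each one fills. A sketch (the 5G value is just an example):

DIRECTORY=dump_dir
DUMPFILE=exp%U.dmp
FILESIZE=5G
PARALLEL=16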

  • Data Pump execution time

I am exporting 2 tables from my database as a preliminary test before doing a full export (as part of a server migration to a new server).
    I have some concerns about the time the export took ( and therefore the time the corresponding import will take - which typically is considerably longer than the export )
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 19.87 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SAPPHIRE"."A_SDIDATAITEM" 15.45 GB 88263813 rows
    . . exported "SAPPHIRE"."A_SDIDATA" 1.775 GB 14011593 rows
    Master table "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully loaded/unloaded
    Dump file set for SYSTEM.EXPORT_TABLES_LIMSLIVE is:
    E:\ORACLE\PRODUCT\10.2.0\ADMIN\LIMSLIVE\DPDUMP\EXP_TABLES_LIMSLIVE.DMP
    Job "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully completed at 15:43:38
    These 2 tables alone took nearly an hour to export. The bulk of the time seemed to be on the line
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Q1. Is that really the step that was taking the time, or was the export actually working on the table export shown on the following line?
Q2. Will such table stats be brought in on an import? i.e. are table stats part of the dictionary (and therefore part of the SYS/SYSTEM schemas), and so will not be brought in on the import to my newly created target database?
Q3. Does anyone know of any performance improvements that can be made to this export/import? I am exporting from the 10.2.0.1 source Data Pump and will be importing on a new target 11gR2 Data Pump. From experimenting with the command line I have found that 10.2.0.1 does not support PARALLEL, so I am not able to use that on the export side (I should be able to use it on the 11gR2 import side).
    thanks,
    Jim

    Jim,
Q1. What difference does it make knowing how long the metadata versus the actual data takes on export? Is this because you could decide to exclude some objects and manually create them before import?
You asked what was taking so long. This was just a test to see whether it was metadata or data. It may help us figure out if there is a problem or not; knowing what is slow helps narrow things down.
With the old exp/imp utility I sometimes manually created the tablespaces and indexes in this manner, however for Data Pump the metadata contains a lot more than just tablespaces and indexes - I couldn't imagine manually creating all the tables and grants for example. I guess you can be selective about what objects you include/exclude in the export or import (via the INCLUDE & EXCLUDE settings)?
No, I'm not suggesting that you change your process, just trying to figure out what is slow. Also, old exp/imp and Data Pump treat metadata and data the same way. Just to clear things up: when you say content=metadata_only, it exports everything except the data - tablespaces, grants, users, tables, statistics, etc. When you say content=data_only, it only exports the data. You can use this method to export and import everything, but it's not the best solution for most. If you create all of your metadata and then load the data, any indexes on the tables need to be maintained while the data is being loaded, and this will slow down the data-only job.
Q2. If I do a DATA ONLY export, I presume that means I need to manually pre-create every object I want imported into my target database. Does this mean every tablespace, table, index, grant etc (not an attractive option)?
Again - I was not suggesting this method, just trying to figure out what was slow. If I were to do it this way, I would run impdp on the metadata-only dump file first, then run the import on the data-only dump file.
Q3. If I use EXCLUDE=statistics, does that mean I can simply regenerate the stats on the target database after the import completes (how would I do that)?
Yes, you can do that. There are different statistics gathering levels: you can collect them per table, per index, per schema, and I think per database. You want to look at the documentation for dbms_stats.gather... (see the sketch below).
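For instance, regathering statistics for the imported schema could look like this (using the schema name from the log above; the cascade option also gathers index stats):

EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SAPPHIRE', cascade => TRUE);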
    Dean

  • Data Pump API Question

    Hi,
    Is there a way to use the Data Pump API to export tables from multiple schemas in the same job? I can't figure out what the filters would be. It seems like I can either specify many tables from 1 schema only, or I can specify multiple schemas but not limit the tables I want to export.
    I keep running into this error: ORA-31655: no data or metadata objects selected for job
    I'd like to do something like this:
    --METADATA FILTER: SPECIFY TABLES TO EXPORT
    dbms_datapump.metadata_filter(
    handle => hdl,
    name => 'NAME_EXPR',
    value => 'IN(''schema1.table1'',''schema2.table2'')');
This does not seem to be possible.
    Any help would be appreciated.
    Thanks,
    Nora

A user that has the EXP_FULL_DATABASE role should be able to do what you want.
Search here for that role: http://students.kiv.zcu.cz/doc/oracle/server.102/b14215/dp_export.htm#i1007837
It seems you could do what you want by using that role
in conjunction with the exclude and include parameters: http://students.kiv.zcu.cz/doc/oracle/server.102/b14215/dp_export.htm#i1009903
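One way to sketch this with the API (all names made up): run the job in TABLE mode and stack a schema filter on top of a name filter. The caveat is that the two filters combine as schema-matches AND name-matches, so if SCHEMA1 also owned a table called TABLE2 it would be exported too:

DECLARE
  h NUMBER;
BEGIN
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE');
  dbms_datapump.add_file(handle => h, filename => 'multi_schema.dmp', directory => 'DUMP_DIR');
  -- which schemas to look in (EXP_FULL_DATABASE is needed for schemas other than your own)
  dbms_datapump.metadata_filter(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''SCHEMA1'', ''SCHEMA2'')');
  -- which table names to take (unqualified - no schema prefix allowed here)
  dbms_datapump.metadata_filter(handle => h, name => 'NAME_EXPR', value => 'IN (''TABLE1'', ''TABLE2'')');
  dbms_datapump.start_job(h);
  dbms_datapump.detach(h);
END;
/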

  • Data Pump Export issue - no streams pool created and cannot automatically c

I am trying to use data pump on a 10.2.0.1 database that has VLM enabled, and am getting the following error:
    Export: Release 10.2.0.1.0 - Production on Tuesday, 20 April, 2010 10:52:08
    Connected to: Oracle Database 10g Release 10.2.0.1.0 - Production
    ORA-31626: job does not exist
    ORA-31637: cannot create job SYS_EXPORT_TABLE_01 for user E_AGENT_SITE
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPV$FT_INT", line 600
    ORA-39080: failed to create queues "KUPC$C_1_20100420105208" and "KUPC$S_1_20100420105208" for Data Pump job
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPC$QUE_INT", line 1555
    ORA-00832: no streams pool created and cannot automatically create one
    This is my script (that I currently use on other non vlm databases successfully):
    expdp e_agent_site/<password>@orcl parfile=d:\DailySitePump.par
    this is my parameter file :
    DUMPFILE=site_pump%U.dmp
    PARALLEL=1
    LOGFILE=site_pump.log
    STATUS=300
    DIRECTORY=DATA_DUMP
    QUERY=wwv_document$:"where last_updated > sysdate-18"
    EXCLUDE=CONSTRAINT
    EXCLUDE=INDEX
    EXCLUDE=GRANT
    TABLES=wwv_document$
    FILESIZE=2000M
    My oracle directory is created and the user has rights
Googling the issue suggests that the shared pool is too small or that streams_pool_size needs setting. shared_pool_size = 1200M, and when I query v$parameter it shows that streams_pool_size = 0.
    I've tried alter system set streams_pool_size=1M; but I just get :
    ORA-02097: parameter cannot be modified because specified value is invalid
    ORA-04033: Insufficient memory to grow pool
    The server is a windows enterprise box with 16GB ram and VLM enabled, pfile memory parameters listed below:
    # resource
    processes = 1250
    job_queue_processes = 10
    open_cursors = 1000 # no overhead if set too high
    # sga
    shared_pool_size = 1200M
    large_pool_size = 150M
    java_pool_size = 50M
    # pga
    pga_aggregate_target = 850M # custom
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_tablespace=UNDOTBS1
    # vlm support
    USE_INDIRECT_DATA_BUFFERS = TRUE
    DB_BLOCK_BUFFERS = 1500000
Any ideas why I cannot run data pump? I am assuming that I just need to set streams_pool_size, but I don't understand why I cannot increase its size on this db. It is set to 0 on other databases that work fine, and I can set it there, which is why I suspect the issue is linked to VLM.
    thanks
    Robert

Check SGA_MAX_SIZE. If the SGA is already at its cap, you have to shrink another component (e.g. the buffer cache) before the streams pool can grow:
    SQL> ALTER SYSTEM SET streams_pool_size=32M SCOPE=BOTH;
    ALTER SYSTEM SET streams_pool_size=32M SCOPE=BOTH
    ERROR at line 1:
    ORA-02097: parameter cannot be modified because specified value is invalid
    ORA-04033: Insufficient memory to grow pool
    SQL> show parameter sga_max
    NAME                         TYPE      VALUE
    sga_max_size                    big integer 480M
    SQL> show parameter cache
    NAME                         TYPE      VALUE
    db_16k_cache_size               big integer 0
    db_2k_cache_size               big integer 0
    db_32k_cache_size               big integer 0
    db_4k_cache_size               big integer 0
    db_8k_cache_size               big integer 0
    db_cache_advice                string      ON
    db_cache_size                    big integer 256M
    db_keep_cache_size               big integer 0
    db_recycle_cache_size               big integer 0
    object_cache_max_size_percent          integer      10
    object_cache_optimal_size          integer      102400
    session_cached_cursors               integer      20
    SQL> ALTER SYSTEM SET db_cache_size=224M SCOPE=both;
    System altered.
    SQL> ALTER SYSTEM SET streams_pool_size=32M SCOPE=both;
System altered.
Lukasz

  • Data Pump - How to avoid exporting/importing dbms_scheduler jobs?

    Hi,
I am using Data Pump to export a user's objects. When I import them, it also imports any jobs that user has created with dbms_scheduler - how can I avoid this? I tried EXCLUDE=JOBS but no luck.
    Thanks,
    Jon.
Here are my export and import parameter files.
Export:
DIRECTORY=dpump_dir1
DUMPFILE=reveal.dmp
CONTENT=METADATA_ONLY
SCHEMAS=REVEAL
EXCLUDE=TABLE_STATISTICS
EXCLUDE=INDEX_STATISTICS
LOGFILE=reveal.log
Import:
DIRECTORY=dpump_dir1
DUMPFILE=reveal.dmp
CONTENT=METADATA_ONLY
SCHEMAS=reveal
REMAP_SCHEMA=reveal:reveal_backup
TRANSFORM=SEGMENT_ATTRIBUTES:n
EXCLUDE=TABLE_STATISTICS
EXCLUDE=INDEX_STATISTICS
LOGFILE=reveal.log

    Sorry for the reply to an old post.
    It seems that now (10.2.0.4) JOB is included in the list of SCHEMA_EXPORT_OBJECTS.
    SQL> SELECT OBJECT_PATH FROM SCHEMA_EXPORT_OBJECTS WHERE object_path LIKE '%JOB%';
    OBJECT_PATH
    JOB
    SCHEMA_EXPORT/JOB
Unfortunately, EXCLUDE=JOB still generates an invalid argument on my schema imports. I also don't know whether these are old-style jobs or scheduler jobs. I don't see anything for object_path LIKE '%SCHED%', which is my real interest anyway.
The data pump is so rich already, I hate to ask for more, but ... may we please have even more?? scheduler_programs, scheduler_jobs, scheduler etc.
    Thanks
    Steve

  • Can we load data in chunks using data pump ?

We are loading data using data pump, so I want to check my understanding.
Please correct me if I am wrong:
ODI will fetch all data from the source (whether it is INIT or CDC) in one go and unload it into the staging area.
If that is true, will performance suffer with very large volumes (50 million records at source) as ODI tries to load all the data in one go? I believe it would give better performance if we loaded in chunks using data pump.
Please confirm and correct.
Also I would like to know how we can configure chunked loading using data pump.
    Thanks in Advance.
    Regards,
    Dinesh.

You may consider using LKM Oracle to Oracle (datapump):
    http://docs.oracle.com/cd/E28280_01/integrate.1111/e12644/oracle_db.htm#r15c1-t2
In 11g, ODI reads from the source and writes to the target in parallel. This is the case where you specify a select query in the source command and an insert/update query in the target command. On the source side, ODI reads records from the source and adds them to a data queue. On the target side, a parallel thread reads data from the data queue and writes to the target. So the overall performance will be bounded by the slower of the read and write processes.
    Thanks,
