INCLUDE & EXCLUDE in Data Pump
Is there any reason why Data Pump does not allow the INCLUDE parameter when you have the EXCLUDE parameter in your statement, and vice versa?
"Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job. Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file."
Source: http://www.oracle.com/technology/products/database/utilities/pdf/datapump11g2007_quickstart.pdf
Jonathan Ferreira
http://oracle4dbas.blogspot.com
Similar Messages
-
Hi all,
We are using Oracle 10g on the Windows platform. During the export operation,
we are using the command EXCLUDE=SCHEMA:"='HR'" to exclude the HR schema.
Can anybody suggest the syntax if I need to exclude more than one schema
using the EXCLUDE parameter?
This is just an expression clause. You used the 'equal' sign. There is no reason you can't use the 'IN' clause.
You used:
EXCLUDE=SCHEMA:"='HR'"
How about
EXCLUDE=SCHEMA:"IN ('HR', 'SCOTT', 'BLAKE')"
Dean -
Data pump include exclude statistics
I am new to Oracle. Database statistics are stored in the data dictionary (SYS schema), so how can we include and exclude statistics (SYS objects) when doing a Data Pump job?
These statistics belong to your schema objects (in most cases tables). You have permission to create/update your schema objects' statistics, i.e. to analyze them.
This link gives you a bit more info on what the original export/import does when you use the STATISTICS parameter:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_imp.htm#sthref2786
With Data Pump, export statistics are always saved for tables, and if the source table has statistics, they are imported.
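If you do not want the statistics in the dump at all, you can filter them out explicitly; a hedged example (directory and dumpfile names are illustrative):
expdp hr/hr schemas=hr directory=DATA_PUMP_DIR dumpfile=hr_nostats.dmp exclude=statistics
-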
Exclude DBMS_SCHEDULER jobs in DATA PUMP import
Hello,
I need to exclude all DBMS_SCHEDULER jobs during DATA PUMP import.
I tried to do this with: EXCLUDE=JOB but this only works on DBMS_JOB.
Is there a way to exclude all DBMS_SCHEDULER jobs during import?
Kind Regards
There is a PROCOBJ path that can be excluded (procedural objects in the selected schemas), but I'm afraid it excludes not only DBMS_SCHEDULER jobs.
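For example, a hedged sketch (schema, directory and dumpfile names are illustrative):
impdp system/password schemas=hr directory=DATA_PUMP_DIR dumpfile=hr.dmp exclude=PROCOBJ
Just be aware that this also skips any other procedural objects in the schema, not only the scheduler jobs.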
Any ideas? -
How to exclude statistic using Data Pump API?
How to exclude all statistics while exporting data using Oracle Data Pump API (DBMS_DATAPUMP package)?
You would call the metadata filter API like this:
dbms_datapump.metadata_filter(
  handle => your_handle_here,
  name   => 'EXCLUDE_PATH_LIST',
  value  => 'STATISTICS');
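For context, here is a fuller sketch of a schema-mode export job using that filter (all names are illustrative; some published examples quote the path name inside the value, i.e. '''STATISTICS'''):
DECLARE
  h NUMBER;
BEGIN
  -- open a schema-mode export job
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  dbms_datapump.add_file(handle => h, filename => 'hr_nostats.dmp',
                         directory => 'DATA_PUMP_DIR');
  dbms_datapump.metadata_filter(handle => h, name => 'SCHEMA_EXPR',
                                value => 'IN (''HR'')');
  -- exclude every statistics object path from the job
  dbms_datapump.metadata_filter(handle => h, name => 'EXCLUDE_PATH_LIST',
                                value => 'STATISTICS');
  dbms_datapump.start_job(h);
  dbms_datapump.detach(handle => h);
END;
/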
Hope this helps.
Dean -
I want to export a few schemas, and all of them have a table with the same name, e.g. table_name. My question is: how do I export the full database but exclude table_name from one schema only?
Hi,
If the table name is not unique then you can't use my approach that Srini linked to above. The problem is that the filter can only accept the name - no schema prefix, and no other way of identifying the table (i.e. via object id or some other method).
You can do something like the following to exclude the data, but unfortunately you can't exclude the table definition. This might be good enough for you? Other than that, I'm not sure how you address this short of separate schema exports, which is probably a lot of hassle for you.
You could temporarily rename the table in the one schema, or give it a mixed-case name which you can exclude (perhaps creating an uppercase synonym) - but that's just a bodge to get round what is a restriction of Data Pump.
SYS@EIBEM>create user testa identified by testa profile eis_dba;
User created.
SYS@EIBEM>grant connect,resource to testa;
Grant succeeded.
SYS@EIBEM>create user testb identified by testb profile eis_dba;
User created.
SYS@EIBEM>grant connect,resource to testb;
Grant succeeded.
SYS@EIBEM>create table testa.tab1(col1 date);
Table created.
SYS@EIBEM>create table testb.tab1(col1 date);
Table created.
SYS@EIBEM>insert into testa.tab1 values (sysdate);
1 row created.
SYS@EIBEM>insert into testb.tab1 values (sysdate);
1 row created.
SYS@EIBEM>commit;
Commit complete.
SYS@EIBEM>
[oracle@sl02190]:EIBEM:/oracle# expdp / schemas=testa,testb query=TESTA.TAB1\:\"where 1=2\" directory=TMP
Export: Release 11.2.0.3.0 - Production on Tue Nov 12 14:21:05 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
Starting "OPS$ORACLE"."SYS_EXPORT_SCHEMA_01": /******** schemas=testa,testb query=TESTA.TAB1:"where 1=2" directory=TMP
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "TESTA"."TAB1" 4.929 KB 0 rows
. . exported "TESTB"."TAB1" 4.945 KB 1 rows
Master table "OPS$ORACLE"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for OPS$ORACLE.SYS_EXPORT_SCHEMA_01 is:
/tmp/expdat.dmp
Job "OPS$ORACLE"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:21:25
Cheers,
Harry -
Database Upgrade using Data Pump
Hi,
I am moving my database from a Windows 2003 server to a Windows 2007 server. At the same time I am upgrading this database from 10g to 11gR2 (11.2.0.3).
Therefore I am using the export / import method of upgrade (via Data Pump, not the old exp/imp).
I have successfully exported my source database and have created the empty shell database ready to take the import. However, I have a couple of queries:
Q1. regarding all the SYSTEM objects from the source database - how will they import, given that the new target database already has a SYSTEM tablespace?
I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However, should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best?
Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces, or leave this to the import but use the REMAP_DATAFILE option? What is everyone's experience as to which is the better way to go? Again, if I pre-create the tablespaces, how do I inform the import to ignore the creation of the tablespaces?
Q3. these 2 databases are on the same network, so in theory, instead of a manual export, a copy of the dump file to the new server and then the import, I could use a Network Link for the import. I was just wondering whether there are any cons to this method over using an explicit export dump file?
thanks,
Jim
Jim,
Q1. regarding all the SYSTEM objects from the source database. How will they import given that the new target database already has a SYSTEM tablespace
I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best?
If all you have is the base database and nothing created, then you can do the full=y. In fact, this is probably what you want. The system tablespace will be there, so when Data Pump tries to create it, it will just fail that create statement. Nothing else will fail. In most cases, your system tables will already be there, and this is OK too. If you do schema-mode imports, you will miss out on some of the other stuff.
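A hedged sketch of such a full import (directory and file names are illustrative; TABLE_EXISTS_ACTION defaults to SKIP anyway for a normal import):
impdp system/password full=y directory=DATA_PUMP_DIR dumpfile=full.dmp table_exists_action=skip logfile=full_imp.log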
Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces, or leave this to the import but use the REMAP_DATAFILE option? What is everyone's experience as to which is the better way to go? Again, if I pre-create the tablespaces, how do I inform the import to ignore the creation of the tablespaces?
If the directory structure is different (which it usually is) then there is no easier way. You can run impdp with sqlfile and include=tablespace. This will give you all of the create tablespace commands in a text file, and you can edit the text file to change whatever you want to change. You can then tell Data Pump to skip the tablespace creation by using exclude=tablespace.
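For example (a hedged sketch; file names are illustrative):
# extract the CREATE TABLESPACE statements into a script, importing nothing
impdp system/password directory=DATA_PUMP_DIR dumpfile=full.dmp sqlfile=create_ts.sql include=tablespace
# after editing and running create_ts.sql, import everything except the tablespaces
impdp system/password directory=DATA_PUMP_DIR dumpfile=full.dmp full=y exclude=tablespace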
Q3. these 2 databases are on the same network, so in theory, instead of a manual export, a copy of the dump file to the new server and then the import, I could use a Network Link for the import. I was just wondering whether there are any cons to this method over using an explicit export dump file?
The only con could be if you have a slow network. This will make it slower, but if you have to copy the dumpfile over the same network, then you will still see the same basic traffic. The pro is that you don't have to have extra disk space. Here is how I look at it:
1. you need XX GB for the source database
2. you need YY GB for the source dumpfile
3. you need YY GB for the target dumpfile that you copy
4. you need XX GB for the target database.
By doing the network import you get rid of 2*YY GB for the dumpfiles.
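A hedged sketch of a network-mode import (the database link name is an assumption; it must point at the source database):
impdp system/password network_link=source_db full=y directory=DATA_PUMP_DIR logfile=net_imp.log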
Dean -
I am exporting 2 tables from my database as a preliminary test before doing a full export (as part of a server migration to a new server).
I have some concerns about the time the export took (and therefore the time the corresponding import will take - which is typically considerably longer than the export).
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 19.87 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SAPPHIRE"."A_SDIDATAITEM" 15.45 GB 88263813 rows
. . exported "SAPPHIRE"."A_SDIDATA" 1.775 GB 14011593 rows
Master table "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully loaded/unloaded
Dump file set for SYSTEM.EXPORT_TABLES_LIMSLIVE is:
E:\ORACLE\PRODUCT\10.2.0\ADMIN\LIMSLIVE\DPDUMP\EXP_TABLES_LIMSLIVE.DMP
Job "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully completed at 15:43:38
These 2 tables alone took nearly an hour to export. The bulk of the time seemed to be on the line
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Q1. Is that really the line that was taking time, or was the export really working on the export of the table on the following line?
Q2. Will such table stats be brought in on an import ? i.e. are table stats part of the dictionary therefore part of the SYS/SYSTEM schemas and will not be brought in on the import to my newly created target database ?
Q3. Does anyone know of any performance improvements that can be made to this export / import? I am exporting from the 10.2.0.1 source Data Pump and will be importing on a new target 11gR2 Data Pump. From experimenting with the command line I have found that 10.2.0.1 does not support PARALLEL, so I am not able to use that on the export side (I should be able to use it on the 11gR2 import side).
thanks,
Jim
Jim,
Q1. what difference does it make knowing the difference between how long the metadata and the actual data takes on export? Is this because you could decide to exclude some objects and manually create them before import?
You asked what was taking so long. This was just a test to see if it was metadata or data. It may help us figure out if there is a problem or not; knowing what is slow helps narrow things down.
With the old exp/imp utility I sometimes manually created the tablespaces and indexes in this manner; however, for Data Pump the metadata contains a lot more than just tablespaces and indexes - I couldn't imagine manually creating all the tables and grants, for example. I guess you can be selective about what objects you include / exclude in the export or import (via the INCLUDE & EXCLUDE settings)?
No, I'm not suggesting that you change your process, just trying to figure out what is slow. Also, old exp/imp and Data Pump treat metadata and data the same way. Just to maybe clear things up - when you say content=metadata_only, it exports everything except for data. It will export tablespaces, grants, users, tables, statistics, etc. Everything but the data. When you say content=data_only, it only exports the data. You can use this method to export and import everything, but it's not the best solution for most. If you create all of your metadata and then load the data, any indexes on the tables need to be maintained while the data is being loaded, and this will slow down the data-only job.
Q2. If I do a DATA ONLY export I presume that means I need to manually pre-create every object I want imported into my target database. Does this mean every tablespace, table, index, grant etc (not an attractive option)?
Again - I was not suggesting this method, just trying to figure out what was slow. If I were to do it this way, I would run the impdp on the metadata-only dumpfile, then run the import on the data-only dump file.
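In outline, a hedged sketch of that two-pass approach (schema and file names are illustrative):
expdp system/password schemas=sapphire directory=DATA_PUMP_DIR dumpfile=meta.dmp content=metadata_only
expdp system/password schemas=sapphire directory=DATA_PUMP_DIR dumpfile=data.dmp content=data_only
impdp system/password directory=DATA_PUMP_DIR dumpfile=meta.dmp
impdp system/password directory=DATA_PUMP_DIR dumpfile=data.dmp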
Q3. If I use EXCLUDE=statistics does that mean I can simply regenerate the stats on the target database after the import completes (how would I do that?)
Yes, you can do that. There are different statistics-gathering levels. You can collect them per table, per index, per schema, and I think per database. You want to look at the documentation for dbms_stats.gather...
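For instance, a hedged one-liner to regather statistics for the schema shown in the log above:
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SAPPHIRE', cascade => TRUE);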
Dean -
Hi,
Is there a way to use the Data Pump API to export tables from multiple schemas in the same job? I can't figure out what the filters would be. It seems like I can either specify many tables from 1 schema only, or I can specify multiple schemas but not limit the tables I want to export.
I keep running into this error: ORA-31655: no data or metadata objects selected for job
I'd like to do something like this:
--METADATA FILTER: SPECIFY TABLES TO EXPORT
dbms_datapump.metadata_filter(
handle => hdl,
name => 'NAME_EXPR',
value => 'IN(''schema1.table1'',''schema2.table2'')');
This does not seem to be possible..
Any help would be appreciated.
Thanks,
Nora
A user that has the EXP_FULL_DATABASE role should be able to do what you want.
Search here for that role http://students.kiv.zcu.cz/doc/oracle/server.102/b14215/dp_export.htm#i1007837
It seems like you could do what you want by using that role in combination with the EXCLUDE and INCLUDE parameters: http://students.kiv.zcu.cz/doc/oracle/server.102/b14215/dp_export.htm#i1009903
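One hedged workaround, when the table names do not collide across schemas, is a schema-mode job combining a SCHEMA_EXPR filter with a NAME_EXPR filter (names are illustrative; note the NAME_EXPR list applies to every schema in the job, so a table name present in both schemas would be exported from both):
dbms_datapump.metadata_filter(
  handle => hdl,
  name   => 'SCHEMA_EXPR',
  value  => 'IN (''SCHEMA1'', ''SCHEMA2'')');
dbms_datapump.metadata_filter(
  handle => hdl,
  name   => 'NAME_EXPR',
  value  => 'IN (''TABLE1'', ''TABLE2'')',
  object_path => 'TABLE');
-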
Data Pump - How to avoid exporting/importing dbms_scheduler jobs?
Hi,
I am using Data Pump to export a user's objects. When I import them, it also imports any jobs that user has created with dbms_scheduler - how can I avoid this? I tried EXCLUDE=JOBS but no luck.
Thanks,
Jon.
Here are my export and import parameter files:
DIRECTORY=dpump_dir1
DUMPFILE=reveal.dmp
CONTENT=METADATA_ONLY
SCHEMAS=REVEAL
EXCLUDE=TABLE_STATISTICS
EXCLUDE=INDEX_STATISTICS
LOGFILE=reveal.log
DIRECTORY=dpump_dir1
DUMPFILE=reveal.dmp
CONTENT=METADATA_ONLY
SCHEMAS=reveal
REMAP_SCHEMA=reveal:reveal_backup
TRANSFORM=SEGMENT_ATTRIBUTES:n
EXCLUDE=TABLE_STATISTICS
EXCLUDE=INDEX_STATISTICS
LOGFILE=reveal.log
Sorry for the reply to an old post.
It seems that now (10.2.0.4) JOB is included in the list of SCHEMA_EXPORT_OBJECTS.
SQL> SELECT OBJECT_PATH FROM SCHEMA_EXPORT_OBJECTS WHERE object_path LIKE '%JOB%';
OBJECT_PATH
JOB
SCHEMA_EXPORT/JOB
Unfortunately, EXCLUDE=JOB still generates an invalid argument on my schema imports. I also don't know whether these are old-style jobs or scheduler jobs. I don't see anything for object_path LIKE '%SCHED%', which is my real interest anyway.
The Data Pump is so rich already that I hate to ask for more, but ... may we please have even more? scheduler_programs, scheduler_jobs, scheduler, etc.
Thanks
Steve -
Data Pump - expdp and slow performance on specific tables
Hi there
I have a Data Pump export of a schema. Most of the 700 tables are exported very quickly (direct path), but a couple of them seem to be extremely slow.
I have checked:
- no lobs
- no long/raw
- no VPD
- no partitions
- no bitmapped index
- just date, number, varchar2's
I'm running with trace 400300,
but I'm having trouble reading the output from it. It seems that some of the slow-performing tables are running with method 4??? Can anyone explain the method codes in the trace:
1 > direct path (i think)
2 > external table (i think)
4 > ?
others?
I have done some stats using v$filestat/v$session_wait (history) - it seems that we always wait on 'db file sequential read', doing lots and lots of single-block reads (SINGLEBLKRDS). No undo is read.
I have a table 2.5 GB -> 3 minutes
and then this (in my eyes) similar table of 2.4 GB -> 1½ hrs.
There are 367,000 blocks (8 K) and avg row length = 71.
I'm on Oracle 11.2 on a Linux box with plenty of RAM and CPU power.
Trace file /opt/oracle112/diag/rdbms/prod/prod/trace/prod_dw00_24268.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /opt/oracle112/product/11.2.0.2/dbhome_1
System name: Linux
Node name: tiaprod.thi.somethingamt.dk
Release: 2.6.18-194.el5
Version: #1 SMP Mon Mar 29 22:10:29 EDT 2010
Machine: x86_64
VM name: Xen Version: 3.4 (HVM)
Instance name: prod
Redo thread mounted by this instance: 1
Oracle process number: 222
Unix process pid: 24268, image: [email protected] (DW00)
*** 2011-09-20 09:39:39.671
*** SESSION ID:(401.8395) 2011-09-20 09:39:39.671
*** CLIENT ID:() 2011-09-20 09:39:39.671
*** SERVICE NAME:(SYS$BACKGROUND) 2011-09-20 09:39:39.671
*** MODULE NAME:() 2011-09-20 09:39:39.671
*** ACTION NAME:() 2011-09-20 09:39:39.671
KUPP:09:39:39.670: Current trace/debug flags: 00400300 = 4195072
*** MODULE NAME:(Data Pump Worker) 2011-09-20 09:39:39.672
*** ACTION NAME:(SYS_EXPORT_SCHEMA_09) 2011-09-20 09:39:39.672
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML called.
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML returned.
KUPC:09:39:39.693: Setting remote flag for this process to FALSE
prvtaqis - Enter
prvtaqis subtab_name upd
prvtaqis sys table upd
KUPW:09:39:39.819: 0: KUPP$PROC.WHATS_MY_ID called.
KUPW:09:39:39.819: 1: KUPP$PROC.WHATS_MY_ID returned.
KUPW:09:39:39.820: 1: worker max message number: 1000
KUPW:09:39:39.822: 1: Full cluster access allowed
KUPW:09:39:39.823: 1: Original job start time: 11-SEP-20 09:39:38 AM
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME called.
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME returned. Process name: DW00
KUPW:09:39:39.862: 1: KUPV$FT_INT.GET_INSTANCE_ID called.
KUPW:09:39:39.866: 1: KUPV$FT_INT.GET_INSTANCE_ID returned. Instance name: prod
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE called.
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE returned.
KUPW:09:39:39.871: 1: KUPF$FILE.INIT called.
KUPW:09:39:39.996: 1: KUPF$FILE.INIT returned.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH called.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH returned.
KUPW:09:39:39.998: 1: Max character width: 1
KUPW:09:39:39.998: 1: Max clob fetch: 32757
KUPW:09:39:39.998: 1: Max varchar2a size: 32757
KUPW:09:39:39.998: 1: Max varchar2 size: 7990
KUPW:09:39:39.998: 1: In procedure GET_PARAMETERS
KUPW:09:39:40.000: 1: In procedure GET_METADATA_FILTERS
KUPW:09:39:40.001: 1: In procedure GET_METADATA_TRANSFORMS
KUPW:09:39:40.002: 1: In procedure GET_DATA_FILTERS
KUPW:09:39:40.004: 1: In procedure GET_DATA_REMAPS
KUPW:09:39:40.005: 1: In procedure PRINT_MT_PARAMS
KUPW:09:39:40.005: 1: Master table : "SYSTEM"."SYS_EXPORT_SCHEMA_09"
KUPW:09:39:40.005: 1: Metadata job mode : SCHEMA_EXPORT
KUPW:09:39:40.005: 1: Debug enable : TRUE
KUPW:09:39:40.005: 1: Profile enable : FALSE
KUPW:09:39:40.005: 1: Transportable enable : FALSE
KUPW:09:39:40.005: 1: Metrics enable : FALSE
KUPW:09:39:40.005: 1: db version : 11.2.0.2.0
KUPW:09:39:40.005: 1: job version : 11.2.0.0.0
KUPW:09:39:40.005: 1: service name :
KUPW:09:39:40.005: 1: Current Edition : ORA$BASE
KUPW:09:39:40.005: 1: Job Edition :
KUPW:09:39:40.005: 1: Abort Step : 0
KUPW:09:39:40.005: 1: Access Method : AUTOMATIC
KUPW:09:39:40.005: 1: Data Options : 0
KUPW:09:39:40.006: 1: Dumper directory :
KUPW:09:39:40.006: 1: Master only : FALSE
KUPW:09:39:40.006: 1: Data Only : FALSE
KUPW:09:39:40.006: 1: Metadata Only : FALSE
KUPW:09:39:40.006: 1: Estimate : BLOCKS
KUPW:09:39:40.006: 1: Data error logging table :
KUPW:09:39:40.006: 1: Remote Link :
KUPW:09:39:40.006: 1: Dumpfile present : TRUE
KUPW:09:39:40.006: 1: Table Exists Action :
KUPW:09:39:40.006: 1: Partition Options : NONE
KUPW:09:39:40.006: 1: Tablespace Datafile Count: 0
KUPW:09:39:40.006: 1: Metadata Filter Index : 1 Count : 10
KUPW:09:39:40.006: 1: 1 Name - INCLUDE_USER
KUPW:09:39:40.006: 1: Value - TRUE
KUPW:09:39:40.006: 1: Object Name - SCHEMA_EXPORT
KUPW:09:39:40.006: 1: 2 Name - SCHEMA_EXPR
KUPW:09:39:40.006: 1: Value - IN ('TIA')
KUPW:09:39:40.006: 1: 3 Name - NAME_EXPR
KUPW:09:39:40.006: 1: Value - ='ACC_PAYMENT_SPECIFICATION'
KUPW:09:39:40.006: 1: Object - TABLE
KUPW:09:39:40.006: 1: 4 Name - INCLUDE_PATH_EXPR
KUPW:09:39:40.006: 1: Value - IN ('TABLE')
KUPW:09:39:40.006: 1: 5 Name - ORDERED
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE_DATA
KUPW:09:39:40.006: 1: 6 Name - NO_XML
KUPW:09:39:40.006: 1: Value - TRUE
KUPW:09:39:40.006: 1: Object - XMLSCHEMA/EXP_XMLSCHEMA
KUPW:09:39:40.006: 1: 7 Name - XML_OUTOFLINE
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE/TABLE_DATA
KUPW:09:39:40.006: 1: 8 Name - XDB_GENERATED
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE/TRIGGER
KUPW:09:39:40.007: 1: 9 Name - XDB_GENERATED
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE/RLS_POLICY
KUPW:09:39:40.007: 1: 10 Name - PRIVILEGED_USER
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: MD remap schema Index : 4 Count : 0
KUPW:09:39:40.007: 1: MD remap other Index : 5 Count : 0
KUPW:09:39:40.007: 1: MD Transform ddl Index : 2 Count : 11
KUPW:09:39:40.007: 1: 1 Name - DBA
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - JOB
KUPW:09:39:40.007: 1: 2 Name - EXPORT
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: 3 Name - PRETTY
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: 4 Name - SQLTERMINATOR
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: 5 Name - CONSTRAINTS
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 6 Name - REF_CONSTRAINTS
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 7 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 8 Name - RESET_PARALLEL
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - INDEX
KUPW:09:39:40.007: 1: 9 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - TYPE
KUPW:09:39:40.007: 1: 10 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - INC_TYPE
KUPW:09:39:40.007: 1: 11 Name - REVOKE_FROM
KUPW:09:39:40.008: 1: Value - SYSTEM
KUPW:09:39:40.008: 1: Object - ROLE
KUPW:09:39:40.008: 1: Data Filter Index : 6 Count : 0
KUPW:09:39:40.008: 1: Data Remap Index : 7 Count : 0
KUPW:09:39:40.008: 1: MD remap name Index : 8 Count : 0
KUPW:09:39:40.008: 1: In procedure DISPATCH_WORK_ITEMS
KUPW:09:39:40.009: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.009: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.036: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2011
KUPW:09:39:40.036: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:40.037: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:40.038: 1: Flags: 18
KUPW:09:39:40.038: 1: Start sequence number:
KUPW:09:39:40.038: 1: End sequence number:
KUPW:09:39:40.038: 1: Metadata Parallel: 1
KUPW:09:39:40.038: 1: Primary worker id: 1
KUPW:09:39:40.041: 1: In procedure GET_TABLE_DATA_OBJECTS
KUPW:09:39:40.041: 1: In procedure CREATE_MSG
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.041: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.041: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.044: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.044: 1: Estimate in progress using BLOCKS method...
KUPW:09:39:40.044: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:40.044: 1: Old Seqno: 0 New Path: SCHEMA_EXPORT/TABLE/TABLE_DATA PO Num: -5 New Seqno: 62
KUPW:09:39:40.046: 1: Created type completion for duplicate 62
KUPW:09:39:40.046: 1: In procedure CREATE_MSG
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.046: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.046: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.047: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.047: 1: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
KUPW:09:39:40.048: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:40.048: 1: Phase: ESTIMATE_PHASE Filter Name: Filter Value:
KUPW:09:39:40.048: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:40.182: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 100001
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: ESTIMATE_PHASE
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:40.194: 1: DBMS_METADATA.SET_PARSE_ITEM called.
*** 2011-09-20 09:39:40.325
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:40.328: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:40.328: 1: DBMS_METADATA.FETCH_XML_CLOB called.
*** 2011-09-20 09:39:42.603
KUPW:09:39:42.603: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.603: 1: In procedure CREATE_TABLE_DATA_OBJECT_ROWS
KUPW:09:39:42.603: 1: In function GATHER_PARSE_ITEMS
KUPW:09:39:42.603: 1: In function CHECK_FOR_REMAP_NETWORK
KUPW:09:39:42.603: 1: Nothing to remap
KUPW:09:39:42.603: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:39:42.604: 1: In procedure LOCATE_DATA_FILTERS
KUPW:09:39:42.604: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.620: 1: In procedure DETERMINE_METHOD_PARALLEL
KUPW:09:39:42.620: 1: flags mask: 0
KUPW:09:39:42.620: 1: dapi_possible_meth: 1
KUPW:09:39:42.620: 1: data_size: 3019898880
KUPW:09:39:42.620: 1: et_parallel: TRUE
KUPW:09:39:42.620: 1: object: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: l_dapi_bit_mask: 7
KUPW:09:39:42.648: 1: l_client_bit_mask: 7
KUPW:09:39:42.648: 1: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" either, parallel: 12 <<<<< Here is says either (I thought that was method ?) <<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: FORALL BULK INSERT called.
KUPW:09:39:42.658: 1: FORALL BULK INSERT returned.
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM called. v_md_xml_clob
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM returned.
KUPW:09:39:42.660: 1: DBMS_METADATA.FETCH_XML_CLOB called.
KUPW:09:39:42.678: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:42.678: 1: In procedure UPDATE_TD_ROW_EXP with seqno: 62
KUPW:09:39:42.680: 1: 1 rows fetched
KUPW:09:39:42.680: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.680: 1: Next table data array entry: 1 Parallel: 12 Size: 3019898880 Method: 4Creation_level: 0 <<<<<<<<<<<<<<<< HERE IT SAYS METHOD = 4 and PARALLEL=12 (I'm not using the parallel parameter ???) <<<<<<<<<<<<<<<<<<
KUPW:09:39:42.681: 1: In procedure UPDATE_TD_BASE_PO_INFO
KUPW:09:39:42.683: 1: Updated 1 td objects with bpo between 1 and 1
KUPW:09:39:42.684: 1: Send table_data_varray called. Count: 1
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:42.695: 1: Send table_data_varray returned.
KUPW:09:39:42.695: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.695: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:42.695: 1: Old Seqno: 62 New Path: PO Num: -5 New Seqno: 0
KUPW:09:39:42.695: 1: Object count: 1
KUPW:09:39:42.697: 1: 1 completed for 62
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE called. Handle: 100001
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:39:42.697: 1: In procedure CREATE_MSG
KUPW:09:39:42.697: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:42.698: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:42.698: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.698: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:42.699: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:42.699: 1: Total estimation using BLOCKS method: 2.812 GB
KUPW:09:39:42.699: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:42.699: 1: Phase: WORK_PHASE Filter Name: BEGIN_WITH Filter Value:
KUPW:09:39:42.699: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:42.837: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 200001
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: WORK_PHASE
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:42.847: 1: DBMS_METADATA.SET_PARSE_ITEM called.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:42.967: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT called.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT returned.
KUPW:09:39:42.968: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
*** 2011-09-20 09:40:01.798
KUPW:09:40:01.798: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:40:01.798: 1: Object seqno fetched:
KUPW:09:40:01.799: 1: Object path fetched:
KUPW:09:40:01.799: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.799: 1: In procedure COMPLETE_EXP_OBJECT
KUPW:09:40:01.799: 1: KUPF$FILE.FLUSH_LOB called.
KUPW:09:40:01.815: 1: KUPF$FILE.FLUSH_LOB returned.
KUPW:09:40:01.815: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:40:01.815: 1: Old Seqno: 226 New Path: PO Num: -5 New Seqno: 0
KUPW:09:40:01.815: 1: Object count: 1
KUPW:09:40:01.815: 1: 1 completed for 226
KUPW:09:40:01.815: 1: DBMS_METADATA.CLOSE called. Handle: 200001
KUPW:09:40:01.816: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:40:01.816: 1: KUPF$FILE.CLOSE_CONTEXT called.
KUPW:09:40:01.820: 1: KUPF$FILE.CLOSE_CONTEXT returned.
KUPW:09:40:01.821: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.821: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:40:01.827: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2012
KUPW:09:40:01.827: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:40:01.828: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:40:01.828: 1: Process order range: 1..1
KUPW:09:40:01.828: 1: Method: 1
KUPW:09:40:01.828: 1: Parallel: 1
KUPW:09:40:01.828: 1: Creation level: 0
KUPW:09:40:01.830: 1: BULK COLLECT called.
KUPW:09:40:01.830: 1: BULK COLLECT returned.
KUPW:09:40:01.830: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:40:01.836: 1: In procedure MOVE_DATA UNLOADing process_order 1 TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:40:01.839: 1: KUPD$DATA.OPEN called.
KUPW:09:40:01.840: 1: KUPD$DATA.OPEN returned.
KUPW:09:40:01.840: 1: KUPD$DATA.SET_PARAMETER - common called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - common returned.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags returned.
KUPW:09:40:01.843: 1: KUPD$DATA.START_JOB called.
KUPW:09:40:01.918: 1: KUPD$DATA.START_JOB returned. In procedure GET_JOB_VERSION
This is how I called expdp:
expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y LOGFILE=expdp:$LOGFILE TRACE=400300
Hi there ...
I have read the note - that's where I found the link to the trace note 286496.1 - on how to set up a trace.
But I still need an explanation of the methods (1, 2, 4, etc.).
regards
Mette -
EXPORT ONLY TABLES IN A SCHEMA USING DATA PUMP
Hi folks,
Good day. I would appreciate a Data Pump command to export only TABLE objects in a specific schema.
The server is a 4 node RAC with 16 CPU per node.
Thanks in advance
If all you want is the table definitions, why not use something like:
expdp user/password directory=my_dir dumpfile=my_dump.dmp tables=schema1.table1,schema1.table2,etc content=metadata_only include=table
This will export just the table definitions. If you want the data too, then remove content=metadata_only; if you want the dependent objects, like indexes, table statistics, etc., then remove include=table.
Dean -
Data pump + inserting new values in a table
Hello Everybody!!
My question is: is it possible to insert new rows into a table that is included in a running Data Pump (export) process?
Best regards
Marcin Migdal
Have a look here, never done it myself but seems kind of cool :)
http://www.pythian.com/blogs/766/oracle-data-pump-11g-little-known-new-feature
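In general, DML on the source tables is allowed while the export runs; each table is unloaded as a read-consistent snapshot. If you need the whole dump to be consistent to one point in time while inserts continue, a hedged option (11g syntax; names are illustrative) is:
expdp scott/tiger schemas=scott directory=DATA_PUMP_DIR dumpfile=scott.dmp flashback_time=systimestamp
-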
File name substitution with Data pump
Hi,
I'm experimenting with Oracle data pump export, 10.2 on Windows 2003 Server.
On my current export scripts, I am able to create the dump file name dynamically.
This name includes the database name, date, and time such as the
following : exp_testdb_01192005_1105.dmp.
When I try to do the same thing with Data Pump, it doesn't work. Has anyone
had success with this? Thanks.
ed lewis
Hi Ed
This is an example for your issue:
[oracle@dbservertest backups]$ expdp gsmtest/gsm directory=dpdir dumpfile=exp_testdb_01192005_1105.dmp tables=ban_banco
Export: Release 10.2.0.1.0 - Production on Thursday, 19 January, 2006 12:23:55
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "GSMTEST"."SYS_EXPORT_TABLE_01": gsmtest/******** directory=dpdir dumpfile=exp_testdb_01192005_1105.dmp tables=ban_banco
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "GSMTEST"."BAN_BANCO" 7.718 KB 9 rows
Master table "GSMTEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for GSMTEST.SYS_EXPORT_TABLE_01 is:
/megadata/clona/exp_testdb_01192005_1105.dmp
Job "GSMTEST"."SYS_EXPORT_TABLE_01" successfully completed at 12:24:18
This works OK.
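The dynamic part has to be built by the caller, though - expdp itself only substitutes %U for multiple files, not dates. A hedged sketch for a Unix shell (same directory object and table as above; assumes ORACLE_SID is set):
#!/bin/sh
# build exp_<sid>_<MMDDYYYY>_<HHMM>.dmp dynamically, then hand it to expdp
STAMP=`date +%m%d%Y_%H%M`
expdp gsmtest/gsm directory=dpdir dumpfile=exp_${ORACLE_SID}_${STAMP}.dmp tables=ban_banco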
Regards,
Wilson -
RE: (forte-users) Including/Excluding plans
Pascal,
Before you partition try setting "trc:cf:6".
Good luck,
John Hornsby
DS Data Systems (UK) Ltd.
John.Hornsby@dsdata.co.uk
Tel. +44 01908 847100
Mob. +44 07966 546189
Fax. +44 0870 0525800
-----Original Message-----
From: Rottier, Pascal [mailto:Rottier.Pascal@pmintl.ch]
Sent: 11 August 2000 10:42
To: 'Forte Users'
Subject: (forte-users) Including/Excluding plans
Hi,
I know there is a way to determine why Forte includes plans in a partition
during partitioning. This shows which plans are (direct or indirect)
supplier plans, but also which plans are included for other reasons. But I
forgot how to do this. Can anyone point me in the right direction?
Pascal Rottier
Origin Nederland (BAS/West End User Computing)
Tel. +31 (0)10-2661223
Fax. +31 (0)10-2661199
E-mail: Pascal.Rottier@nl.origin-it.com
++++++++++++++++++++++++++++
Philip Morris (Afd. MIS)
Tel. +31 (0)164-295149
Fax. +31 (0)164-294444
E-mail: Rottier.Pascal@pmintl.ch
Use fscript:
findplan xxx
findactenv
showapp
This will give you the partitioning scheme, placement of all
service objects and a list of all included projects per partition.
David Bell
Technical Manager - North Europe
iPlanet Professional Services
Sun Microsystems | Netscape Alliance
St James's House Phone : +44 1344 482100
Oldbury Fax: +44 1344 420905
Bracknell Mobile: +44 7718 808062
RG12 8SA Email : david.j.bell@uk.sun.com
http://www.iplanet.com