Schema level Import issue
Hi,
Recently I faced an issue:
A schema backup was created from one database for <SCHEMA1>, whose default tablespace is <TABS1>. When I try to import it into <SCHEMA1> of a different database, whose default tablespace is <TABS2>, the import still looks for the <TABS1> tablespace. I used the fromuser/touser clause during the import.
So, how can I perform this task without creating a <TABS1> tablespace and assigning it as the default tablespace for <SCHEMA1>, and without renaming the <TABS2> tablespace to <TABS1>, which is a tedious task in Oracle 9i?
1. Set up a default tablespace for the target user.
2. Make sure the target user doesn't have the RESOURCE role and/or the UNLIMITED TABLESPACE privilege.
3. Make sure the target user has quota on the default tablespace ONLY:
alter user <target user> quota unlimited on <target tablespace> quota 0 on <the rest>;
4. Import without importing indexes; those won't be relocated.
5. Run imp with indexfile=<any file> to generate a file with the CREATE INDEX statements (a worked sketch follows this list).
6. Edit this file, adjusting the tablespaces.
7. Run it.
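For illustration, the whole sequence might look like this; the user, tablespace, and file names below are placeholders, not taken from the original post:
-- connected as a DBA in the target 9i database
REVOKE UNLIMITED TABLESPACE FROM schema1;
ALTER USER schema1 DEFAULT TABLESPACE tabs2 QUOTA UNLIMITED ON tabs2 QUOTA 0 ON users;
-- repeat QUOTA 0 for every other tablespace the user has quota on
Then, from the operating system prompt:
imp system/<password> file=schema1.dmp fromuser=schema1 touser=schema1 indexes=n
imp system/<password> file=schema1.dmp fromuser=schema1 touser=schema1 indexfile=cr_idx.sql
Edit cr_idx.sql, changing TABS1 to TABS2 in the TABLESPACE clauses, and run it in SQL*Plus.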
Tablespaces can't be renamed in 9i.
Sybrand Bakker
Senior Oracle DBA
Similar Messages
-
Steps to Perform before schema level import
Hi,
We are planning to migrate Oracle 8i database to Oracle 10g Database.
The approach we have decided on is export/import.
Can anyone tell me what steps we have to perform before importing the dmp into the new database?
We are planning to go for schema level export/import.
Thanks in Advance
AT
1. Get a list of users to be exported:
select distinct owner from dba_segments where owner NOT LIKE '%SYS%';
2. exp parfile=gen8i.prf
The parameter file gen8i.prf (edited with vi) contains:
system/sys
owner=(<list generated above>)
plus the rest of the exp parameters, such as statistics and consistent (a fuller example parameter file is sketched after this list).
3. imp the dump file
4. recompile the packages
5. Done.
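For illustration only, a minimal schema-level export parameter file along these lines might look as follows; the password placeholder, owner list, and file names are hypothetical, not from the original reply:
userid=system/<password>
owner=(SCOTT, HR, APPDATA)
file=schemas_8i.dmp
log=schemas_8i.log
statistics=none
consistent=y
The owner list would come from the dba_segments query in step 1; the dump would then be imported into the 10g database with imp, followed by recompilation of invalid packages.
-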
Error while doing schema level import using datapump
Hi, I get the following errors while importing a schema from the prod to the dev database... can anyone help? Thanks!
impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=abcdprod.DMP LOGFILE=abcdprod.log REMAP_SCHEMA=abcdprod:abcddev
ORA-39002: invalid operation
ORA-31694: master table "SYSTEM"."SYS_IMPORT_FULL_01" failed to load/unload
ORA-31644: unable to position to block number 170452 in dump file "/ots/oracle/echo/wxyz/datapump/abcdprod.DMP"
Post the complete command line of the expdp run that made the abcdprod.DMP file. -
Sequence nextval after a schema-level export/import
If I export a schema that has some sequences, then truncate the tables and import the schema, do I get the old sequence values?
I guess my question is do sequences get stored at the schema level or the database level.
I noticed that sequences are exported at the schema level when I do an export, so that may be the answer, but your confirmation would be appreciated.
Hi,
Nothing to worry about; imp/exp does not change the value of NEXTVAL. You can truncate the tables after the export, then import the dump.
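As a quick sanity check (this query is an illustration, not part of the original reply), you can compare the recorded sequence state before the export and after the import:
select sequence_name, last_number from user_sequences order by sequence_name;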
Regards
Vinay Agarwal
OCP -
Hi,
I am in the process of setting up Oracle Streams schema-level replication on version 10.2.0.3. I was able to set up replication for one table properly. Now I want to add 10 more new tables to the schema-level replication. A few questions regarding this:
1. If I create the new tables in the source, do I have to create the tables in the target database manually, or do I have to do an export/import with STREAMS_INSTANTIATION=Y?
2. Can you tell me metalink note id to read more on this topic ?
thanks & regards
parag
The same capture and apply process can be used to replicate other tables. The following steps should suffice:
Say table NEW is the new table to be added with owner SANTU
downstr_cap is the capture process which is already running
downstr_apply is the apply process which is already there
1. Now stop the apply process
2. Stop the capture process
3. Add the new table to the capture process using a positive rule:
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SANTU.NEW',
streams_type => 'capture',
streams_name => 'downstr_cap',
queue_name => 'strmadmin.DOWNSTREAM_Q',
include_dml => true,
include_ddl => true,
source_database => '<name of the source database>',
inclusion_rule => true);
END;
/
4. Take an export of the new table with the OBJECT_CONSISTENT=Y option (see the sketch after this list)
5. Import the table at the destination with the STREAMS_INSTANTIATION=Y option
6. Start the apply process
7. Start the capture process
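A minimal exp/imp sketch for steps 4 and 5; the passwords, connect strings, and file name are placeholders, not from the original reply:
exp system/<password>@<source> tables=SANTU.NEW file=new_tab.dmp object_consistent=y
imp system/<password>@<target> file=new_tab.dmp fromuser=SANTU touser=SANTU streams_instantiation=y ignore=y
-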
Hello,
Can anybody guide me on how to generate an AWR (Automatic Workload Repository) report at the schema level? I have created one user named xyz and imported some 1000 objects (tables, views, etc.).
I have a small doubt here: when we create a user, is a schema for that user created automatically? If so, I need to generate an AWR report for that user or schema.
I don't think this is possible: AWR only works at the database/instance level and not at the schema level.
-
Schema level and table level supplemental logging
Hello,
I'm setting up bi-directional DML replication between two Oracle databases. I have enabled supplemental logging at the database level by running this command:
SQL>alter database add supplemental log data (primary key) columns;
Database altered.
SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;
SUPPLEME SUP SUP
IMPLICIT YES NO
My question is: should I enable supplemental logging at the table level also (for DML replication only)? Should I also run the command below?
GGSCI (db1) 1> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
Successfully logged into database.
GGSCI (db1) 2> ADD TRANDATA schema.<table-name>
What is the difference between schema-level and table-level supplemental logging?
For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log group includes one of the following sets of columns, in the listed order of priority, depending on what is defined on the table:
1. Primary key
2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based columns, and no nullable columns
3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based columns, but can include nullable columns
4. If none of the preceding key types exist (even though there might be other types of keys defined on the table), Oracle GoldenGate constructs a pseudo key of all columns that the database allows to be used in a unique key, excluding virtual columns, UDTs, function-based columns, and any columns that are explicitly excluded from the Oracle GoldenGate configuration.
The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that
is appropriate for the type of unique constraint (or lack of one) that is defined for the table.
When to use ADD TRANDATA for an Oracle source database
Use ADD TRANDATA only if you are not using the Oracle GoldenGate DDL replication feature.
If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA command to log the required supplemental data. It is possible to use ADD
TRANDATA when DDL support is enabled, but only if you can guarantee one of the following:
● You can stop DML activity on any and all tables before users or applications perform DDL on them.
● You cannot stop DML activity before the DDL occurs, but you can guarantee that:
❍ There is no possibility that users or applications will issue DDL that adds new tables whose names satisfy an explicit or wildcarded specification in a TABLE or MAP
statement.
❍ There is no possibility that users or applications will issue DDL that changes the key definitions of any tables that are already in the Oracle GoldenGate configuration.
ADD SCHEMATRANDATA ensures replication continuity should DML ever occur on an object for which DDL has just been performed.
You can use ADD TRANDATA even when using ADD SCHEMATRANDATA if you need to use the COLS option to log any non-key columns, such as those needed for FILTER statements and KEYCOLS clauses in the TABLE and MAP parameters.
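For illustration, a GGSCI session combining the two commands described above might look like the following; the schema, table, and column names are placeholders, not from the original post:
GGSCI (db1) 1> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
GGSCI (db1) 2> ADD SCHEMATRANDATA scott
GGSCI (db1) 3> ADD TRANDATA scott.emp, COLS (deptno, hiredate)
Here ADD SCHEMATRANDATA covers the whole schema, while the ADD TRANDATA ... COLS entry additionally logs the named non-key columns for that one table.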
Additional requirements when using ADD TRANDATA
Besides table-level logging, minimal supplemental logging must be enabled at the database level in order for Oracle GoldenGate to process updates to primary keys and
chained rows. This must be done through the database interface, not through Oracle GoldenGate. You can enable minimal supplemental logging by issuing the following DDL
statement:
SQL> alter database add supplemental log data;
To verify that supplemental logging is enabled at the database level, issue the following statement:
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
The output of the query must be YES or IMPLICIT. LOG_DATA_MIN must be explicitly set, because it is not enabled automatically when other LOG_DATA options are set.
If you require more details, refer to the Oracle® GoldenGate Windows and UNIX Reference Guide 11g Release 2 (11.2.1.0.0). -
Dml_Handler at the Schema Level?
Hi:
I'm using 11g R2 and doing a one-way streams replication within the same database. I've got a subset of tables within the same schema setup now for capture and I'm using dml_handlers on apply. The handlers are specified on each table with a package that takes each trapped LCR and writes its data out to a different table than the one that the LCR got captured from. This was done for a bolt-on reporting issue that popped up. This is all working great. Streams rocks!
Here's my next issue. I want to expand/morph the above approach in the following way. I want to do my capture at the schema level for all tables and also run just one dml_handler to take all LCRs for the specified schema and write them out as XML into a clob column. I've got the XML portion working and I pretty much know how I can get the streams part going as well, using a dml_handler-per-table approach similar to what I did above. What I would like to know is whether there's a way to avoid having to setup a dml_hander for each insert, update, and delete LCR on every table within the specified schema.
Instead of doing this....
BEGIN
DBMS_APPLY_ADM.SET_DML_HANDLER(
object_name => 'schema.table_a',
object_type => 'TABLE',
operation_name => 'INSERT',
error_handler => false,
user_procedure => 'package.procedure',
apply_database_link => NULL,
apply_name => 'apply_name');
END;
BEGIN
DBMS_APPLY_ADM.SET_DML_HANDLER(
object_name => 'schema.table_a',
object_type => 'TABLE',
operation_name => 'UPDATE',
error_handler => false,
user_procedure => 'package.procedure',
apply_database_link => NULL,
apply_name => 'apply_name');
END;
BEGIN
DBMS_APPLY_ADM.SET_DML_HANDLER(
object_name => 'schema.table_a',
object_type => 'TABLE',
operation_name => 'DELETE',
error_handler => false,
user_procedure => 'package.procedure',
apply_database_link => NULL,
apply_name => 'apply_name');
END;
...and so on, once for each table in the schema.
I'd like to be able to do the following:
BEGIN
DBMS_APPLY_ADM.SET_DML_HANDLER(
schema_name => 'schema', ---This line is totally made up by me. The real argument is object_name
object_type => 'TABLE',
operation_name => 'ALL', ---This line is also totally made up by me. The real allowed options are Insert, Update, and Delete.
error_handler => false,
user_procedure => 'package.procedure',
apply_database_link => NULL,
apply_name => 'apply_name');
END;
Is there a way to do this? I don't see a procedure in dbms_apply_adm to accomplish it, or I just missed it. I could also do this within a loop using dynamic SQL but I'm hoping I won't have to.
Thanks for any help!
Cheers,
Mike
SCHEMA level is not possible with this procedure.
However, you can set the operation_name to 'DEFAULT', which indicates all operations (INSERT/UPDATE/DELETE/LOB_UPDATE).
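A minimal sketch of that suggestion, reusing the placeholder handler package and apply name from the question (still one call per table, but no longer one per operation):
BEGIN
DBMS_APPLY_ADM.SET_DML_HANDLER(
object_name => 'schema.table_a',
object_type => 'TABLE',
operation_name => 'DEFAULT',
error_handler => false,
user_procedure => 'package.procedure',
apply_database_link => NULL,
apply_name => 'apply_name');
END;
/
-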
Hi,
I have successfully configured schema-level replication, but I have run into some issues, so I am going to explain my scenario as follows.
1: Site1 has an Oracle 10g DB with two schema-level processes: Capture and Propagate.
2: Site2 has an Oracle 10g DB with two schema-level processes: Capture and Propagate.
3: Central_DB has an Oracle 10g DB with two APPLY processes, one for Site1 and a second for Site2.
I have a plan to connect a total of 18 branches to our Central_DB in future.
From all sites to Central_DB we will configure schema-level replication using Streams (all sites have the same schema) (DML + DDL).
But from Central_DB to all sites we will configure table-level rule-based replication (only DMLs).
My questions are:
1: Please advise whether this is a good model or not.
2: When Site1 makes DML or DDL changes locally, the changes are captured by the capture process, then propagated from the source queue (Site1) to the destination queue (Central_DB), and applied by the apply process of Central_DB.
> If Site1 makes changes and at the same time Central_DB shuts down and restarts again, the changes are still applied from Site1 to Central_DB (normal working).
> If Site1 makes changes while Central_DB is shut down, and Site1 keeps making more changes locally, then after a few minutes Site1 also shuts down, so now Site1 and Central_DB are both down.
> After one day, when both machines are up and running again, the propagation process reports connection errors and no further changes are replicated, even though the status of the processes is ENABLED.
ERROR:
ORA-12545: Connect failed because target host or object does not exist
Please assist me in troubleshooting why no further changes are replicated.
Thanks,
Fazi
That would happen if an IP or DNS lookup (depending on what is specified as HOST in the tnsnames entry) is failing.
Possibly the lookups before the servers went down were via DNS, and now the DNS is no longer available (has it gone down?). Or someone has recreated a hosts file and the earlier entry for the target host is missing.
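A quick way to check this from the site that reports ORA-12545 (the TNS alias and host name below are placeholders):
tnsping CENTRAL_DB
nslookup <hostname used in the HOST= entry of tnsnames.ora>
ping <hostname used in the HOST= entry of tnsnames.ora>
If tnsping or the name lookup fails, fix the DNS entry or the hosts file, then check the propagation again.
-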
EXPORT at schema level, but exclude some tables within the export
I have been searching, but had no luck in finding the correct syntax for my situation.
I'm simply trying to export at the schema level, but I want to omit certain tables from the export.
exp cltest/cltest01@clprod file=exp_CLPROD092508.dmp log=exp_CLPROD092508.log statistics=none compress=N
Thanks!
Hi,
Think simple first: you can use the TABLES clause.
Example.
exp scott/tiger file=empdept.expdat tables=(EMP,DEPT) log=empdept.log
That works if your schema contains only a small number of tables.
If you have a large number of tables (hundreds of tables in your schema), an alternative solution is:
Create a new schema and, using CTAS, create copies there of the tables you need from the current schema (leaving out the ones you want to skip).
Do an export and, once the job is done, take the backup from the new schema
and import it into the destination DB (a sketch follows below).
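As an illustration of that idea (run as a DBA; the copy schema, tablespace, table, and file names are hypothetical, not from the original reply):
CREATE USER cltest_copy IDENTIFIED BY <password> DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE TO cltest_copy;
CREATE TABLE cltest_copy.orders AS SELECT * FROM cltest.orders;
-- repeat the CTAS for every table that should be in the export, skipping the unwanted ones
Then export the copy schema instead of the original:
exp system/<password>@clprod file=exp_subset.dmp log=exp_subset.log owner=cltest_copy statistics=none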
- Pavan Kumar N -
Schema Level Bidirectional streams - works only in one direction
Hi All,
We are implementing bidirectional Streams at the schema level (using the scott schema for testing).
Our environment and different parameters are:
Source:
OS =Win2003 64bit
DB Version= 10.2.0.5.0 64bit
DB SID=CIBSPROD
log_archive_dest_1 LOCATION=E:\DBFILES\ArchiveLog\CIBSPROD\PrimaryRole VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=CIBSPROD
log_archive_dest_2 SERVICE=CIBSREP LGWR ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=CIBSREP
job_queue_processes=2
Dest:
OS =Win2003 64bit
DB Version= 10.2.0.5.0 64bit
DB SID=CIBSREP
log_archive_dest_1 LOCATION=E:\DBFILES\ArchiveLog\CIBSREP\PrimaryRole VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=CIBSREP
log_archive_dest_2 SERVICE=CIBSPROD LGWR ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=CIBSPROD
job_queue_processes=2
Follow the "Streams Bi-Directional Setup [ID 471845.1]" article
Problem we are facing is changes are propagating from Source(CIBSPROD) to Destination(CIBSREP) BUT NOT from Destination to Source Database(although archivelogs are shipping from Destination to Source).
Executed below script for configuration:
SET ECHO ON
SPOOL strm-reconfig-scott.out
conn sys/&sys_pwd_source@CIBSPROD as sysdba
EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
DROP USER STRMADMIN CASCADE;
CREATE USER strmadmin IDENTIFIED BY strmadmin DEFAULT TABLESPACE streams_tbs QUOTA UNLIMITED ON streams_tbs;
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
EXECute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
drop database link CIBSREP;
create public database link CIBSREP connect to strmadmin identified by strmadmin using 'CIBSREP';
conn sys/&sys_pwd_downstream@CIBSREP as sysdba
EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
DROP USER STRMADMIN CASCADE;
CREATE USER strmadmin IDENTIFIED BY strmadmin DEFAULT TABLESPACE streams_tbs QUOTA UNLIMITED ON streams_tbs;
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
EXECute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
drop user SCOTT CASCADE;
create user SCOTT identified by scott175;
GRANT CONNECT, RESOURCE to SCOTT;
drop database link CIBSPROD;
create public database link CIBSPROD connect to strmadmin identified by strmadmin using 'CIBSPROD';
--Set up 2 queues for Capture and apply in CIBSPROD Database
conn strmadmin/strmadmin@CIBSPROD
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_table => 'APPLY_CIBSPROD_TAB', queue_name => 'APPLY_CIBSPROD', queue_user => 'strmadmin');
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_table => 'CAPTURE_CIBSPROD_TAB', queue_name => 'CAPTURE_CIBSPROD', queue_user => 'strmadmin');
--Set up 2 queues for Capture and apply in CIBSREP Database
conn strmadmin/strmadmin@CIBSREP
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_table => 'APPLY_CIBSREP_TAB', queue_name => 'APPLY_CIBSREP', queue_user => 'strmadmin');
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_table => 'CAPTURE_CIBSREP_TAB', queue_name => 'CAPTURE_CIBSREP', queue_user => 'strmadmin');
--Configure capture,apply and propagation process on CIBSPROD database.
conn strmadmin/strmadmin@CIBSPROD
EXEC dbms_streams_adm.add_schema_rules ( schema_name => 'scott', streams_type => 'CAPTURE', streams_name => 'CAPTURE_CIBSPROD', queue_name => 'CAPTURE_CIBSPROD', include_dml => true, include_ddl => true, inclusion_rule => true);
EXEC dbms_streams_adm.add_schema_rules ( schema_name => 'scott', streams_type => 'APPLY', streams_name => 'APPLY_CIBSPROD', queue_name => 'APPLY_CIBSPROD', include_dml => true, include_ddl => true, source_database => 'CIBSREP');
EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSPROD', parameter => 'PARALLELISM', value => '5');
EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSPROD', parameter => '_HASH_TABLE_SIZE', value => '10000000');
EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSPROD', parameter => 'TXN_LCR_SPILL_THRESHOLD', value => '1000000');
EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSPROD', parameter => 'DISABLE_ON_ERROR', value => 'N');
EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSPROD', parameter => '_dynamic_stmts',value => 'Y');
EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSPROD', parameter => 'COMMIT_SERIALIZATION',value => 'NONE');
EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSPROD', parameter => '_RESTRICT_ALL_REF_CONS',value => 'N');
EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSPROD', parameter => 'ALLOW_DUPLICATE_ROWS',value => 'Y');
EXEC dbms_streams_adm.add_schema_propagation_rules ( schema_name => 'scott', streams_name => 'PROP_CIBSPROD_to_CIBSREP', source_queue_name => 'CAPTURE_CIBSPROD', destination_queue_name => 'APPLY_CIBSREP@CIBSREP', include_dml => true, include_ddl => true, source_database => 'CIBSPROD');
--Configure capture, apply and propagation processes on CIBSREP Database
conn strmadmin/strmadmin@CIBSREP
EXEC dbms_streams_adm.add_schema_rules ( schema_name => 'scott', streams_type => 'CAPTURE', streams_name => 'CAPTURE_CIBSREP', queue_name => 'CAPTURE_CIBSREP', include_dml => true, include_ddl => true);
EXEC dbms_streams_adm.add_schema_rules ( schema_name => 'scott', streams_type => 'APPLY', streams_name => 'APPLY_CIBSREP', queue_name => 'APPLY_CIBSREP', include_dml => true, include_ddl => true, source_database => 'CIBSPROD');
EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSREP', parameter => 'PARALLELISM', value => '5');
EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSREP', parameter => '_HASH_TABLE_SIZE', value => '10000000');
EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSREP', parameter => 'TXN_LCR_SPILL_THRESHOLD', value => '1000000');
EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSREP', parameter => 'DISABLE_ON_ERROR', value => 'N');
EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSREP', parameter => '_dynamic_stmts',value => 'Y');
EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSREP', parameter => 'COMMIT_SERIALIZATION',value => 'NONE');
EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSREP', parameter => '_RESTRICT_ALL_REF_CONS',value => 'N');
EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSREP', parameter => 'ALLOW_DUPLICATE_ROWS',value => 'Y');
EXEC dbms_streams_adm.add_schema_propagation_rules ( schema_name => 'scott', streams_name => 'PROP_CIBSREP_to_CIBSPROD', source_queue_name => 'CAPTURE_CIBSREP', destination_queue_name => 'APPLY_CIBSPROD@CIBSPROD', include_dml => true, include_ddl => true, source_database => 'CIBSREP');
--Import export schema
host exp USERID=SYSTEM/cibsmgr@CIBSPROD parfile=expparfile-scott.txt
host imp USERID=SYSTEM/cibsmgr@CIBSREP parfile=impparfile-scott.txt
--Start capture and apply processes on CIBSREP
conn strmadmin/strmadmin@CIBSREP
EXEC dbms_capture_adm.start_capture (capture_name=>'CAPTURE_CIBSREP');
EXEC DBMS_APPLY_ADM.START_APPLY(apply_name => 'APPLY_CIBSREP');
--Start capture and apply processes on CIBSPROD
conn strmadmin/strmadmin@CIBSPROD
EXEC dbms_capture_adm.start_capture (capture_name=>'CAPTURE_CIBSPROD');
EXEC DBMS_APPLY_ADM.START_APPLY(apply_name => 'APPLY_CIBSPROD');
SPOOL OFF
What we have missed in the configuration?
Regards,
Asim
Found the error on the source database: "ORA-26687: no instantiation SCN provided for SCOTT.STRMTEST in source database CIBSREP"
SCOTT.STRMTEST is a heartbeat table used for Streams replication on the source and destination databases.
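One common way to address ORA-26687 is to set the missing instantiation SCN. This sketch is not from the original reply; it assumes it is run as the Streams administrator on CIBSREP (the source named in the error), pushing the SCN to the other site over the existing CIBSPROD database link:
DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN@CIBSPROD(
    source_schema_name   => 'SCOTT',
    source_database_name => 'CIBSREP',
    instantiation_scn    => iscn);
END;
/
Table-level SCNs (DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN) may also be needed for tables that already existed, such as SCOTT.STRMTEST.
-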
How can we take an incremental export using Data Pump at the schema level?
Hi,
How can we take an incremental export using Data Pump at the schema level? For example, today I took a full export of one schema.
After 7 days from now, how can I export only the data that has changed since my full export?
Using the export Data Pump parameter FLASHBACK_TIME, can we mention a date range, for example sysdate - (sysdate - 7)?
Please advice
thanks
Naveen.
Think of the Data Pump Export/Import tools as taking a "picture" or "snapshot."
When you use these utilities it exports the data as it appears (by default) while it's doing the export operation. There is no way to export a delta in comparison to a previous export.
The FLASHBACK_TIME parameter allows you to get a consistent export as of a particular point in time.
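For illustration, a schema-level export consistent as of a specific point in time might use a parameter file like the one below; the schema, file names, and timestamp are hypothetical, and the result is still a full picture of the schema as of that time, not a delta:
schemas=SCOTT
directory=DATA_PUMP_DIR
dumpfile=scott_asof.dmp
logfile=scott_asof.log
flashback_time="TO_TIMESTAMP('15-01-2024 08:00:00','DD-MM-YYYY HH24:MI:SS')"
Run it with: expdp system/<password> parfile=exp_asof.par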
I recommend you go to http://tahiti.oracle.com. Click on your version and go to the Utilities Guide. This is where all the information on Data Pump is located and should answer a lot of your questions. -
Schema-level refresh: help required with types
Hi,
I have the following environments :
OS : windows 2003
Database version : 10.2.0.3
no archive log mode:
I have four schemas: schema1, schema2, schema3 and schema4.
I created schema5 like schema4 by copying dba_role_privs / dba_tab_privs and dba_sys_privs.
I exported schema4 and imported it into schema5.
Three tables were not imported.
On exploring, I found the three tables are built on types in schema4.
We have types present in all the schemas except schema5.
When I tried to create the types, they came back with compilation errors.
Any idea how to handle types in schema-level refreshes?
I am using exp/imp with fromuser/touser clauses.
Thanks,
Raman.
My work log is copied below:
USERNAME   PASSWORD           DEFAULT_TABLESPACE   TEMPORARY_TABLESPACE   PROFILE
schema4    2072A1370A380D8A   IMPACT               TEMP                   DEFAULT
GRANTEE GRANTED_ROLE ADM DEF
schema4 CONNECT NO YES
schema4 RESOURCE NO YES
schema4 GRP_IMPACT NO YES
schema4 GRUPPE_IMPACTNET NO YES
schema4 GRUPPE_IMPACTLOGIN NO YES
GRANTEE PRIVILEGE ADM
schema4 CREATE TABLE NO
schema4 CREATE ANY TABLE NO
schema4 UNLIMITED TABLESPACE NO
schema4 EXECUTE ANY PROCEDURE NO
create user schema5 identified by ***** default tablespace impact temporary tablespace temp profile default;
grant connect,resource,GRP_IMPACT,GRUPPE_IMPACTNET,GRUPPE_IMPACTLOGIN to schema5;
grant CREATE TABLE,CREATE ANY TABLE,UNLIMITED TABLESPACE,EXECUTE ANY PROCEDURE to schema5;
set head off
spool grants.sql
SELECT 'GRANT '||PRIVILEGE||' ON '||OWNER||'.'||TABLE_NAME||' TO '||'schema5;' FROM DBA_TAB_PRIVS WHERE GRANTEE='schema4';
spool off
@grants.sql
exp sys/******@****.world file=Impact_1.dmp,Impact_2.dmp,Impact_3.dmp FILESIZE =1000M log=schema4_exp.log consistent=y OWNER=schema4 STATISTICS=none
imp 'sys/*****@****.world as sysdba' file=Impact_1.dmp,Impact_2.dmp,Impact_3.dmp log=schema5_imp.log ignore=y fromuser=schema4 touser=schema5
SQL> select count(*),object_type,status,owner from dba_objects where owner like 'schema4' group by object_type,status,owner;
1 LOB VALID schema4
98 TYPE VALID schema4
407 VIEW VALID schema4
786 INDEX VALID schema4
1379 TABLE VALID schema4
44 PACKAGE VALID schema4
19 SYNONYM VALID schema4
50 TRIGGER VALID schema4
153 FUNCTION VALID schema4
22 SEQUENCE VALID schema4
460 PROCEDURE VALID schema4
3 TYPE BODY VALID schema4
42 PACKAGE BODY VALID schema4
3 DATABASE LINK VALID schema4
14 rows selected.
SQL> select count(*),object_type,status,owner from dba_objects where owner like 'schema5' group by object_type,status,owner;
1 LOB VALID schema5
59 TYPE VALID schema5
392 VIEW VALID schema5
15 VIEW INVALID schema5
780 INDEX VALID schema5
1376 TABLE VALID schema5
41 PACKAGE VALID schema5
3 PACKAGE INVALID schema5
19 SYNONYM VALID schema5
50 TRIGGER VALID schema5
89 FUNCTION VALID schema5
64 FUNCTION INVALID schema5
22 SEQUENCE VALID schema5
126 PROCEDURE VALID schema5
334 PROCEDURE INVALID schema5
24 PACKAGE BODY VALID schema5
18 PACKAGE BODY INVALID schema5
3 DATABASE LINK VALID schema5
SQL> select count(*) from dba_objects where owner like 'schema4';
3467
SQL> select count(*) from dba_objects where owner like 'schema5';
3416
=================
15 VIEW INVALID schema5
3 PACKAGE INVALID schema5
64 FUNCTION INVALID schema5
334 PROCEDURE INVALID schema5
18 PACKAGE BODY INVALID schema5 -
Strange import issue .. no longer works in 2.6 or Beta 3
Strange import issue.
I use LR 2.6 every day but have only played around with LR 3 for a limited amount of time, so the other day I decided to check it out and tried to import a handful of RAW (Canon) files on an SDXC 8 GB card via a USB card reader. I set it up to copy and convert to DNG and apply a few keywords and metadata, but when I tried it, it acted like it imported the first image and then froze. The card reader lights stopped blinking and just stayed on, and the program task bar doesn't show any progress, not even for the first image. I then went to the folder that I was going to have the images copied to and it contained a file of the first image, but there is no indication that LR 3 even created it, and it doesn't show up in the catalog. When I tried to close out LR 3 it popped up the "in the middle of a task" warning, so I just closed the program and tried it a couple more times to no avail.
So I thought, oh well, I will just open up LR 2.6 and import them from there; guess what, now LR 2.6 does the exact same thing!!
I thought maybe my card reader was a POS, but I was able to drag all the files to a new folder on my desktop and tried to import them from there, nope. Tried rebooting a couple of times, even opened a new catalog to test with, in both LR 2.6 and LR 3, and neither will work.
I am stumped, any ideas?
Thank you for everyone's assistance.
I actually figured out what the problem was; like most issues it comes back to operator error. It seems I had inadvertently chosen a "rename file on import" template that I had set up to name the files based on a custom fill-in field in the metadata. I normally only used that template when I was exporting images that had been completely edited, so since these images were new imports that field was blank; hence it would import the first image and then, due to "don't import suspected duplicates" being checked, it would just stop the import process.
IMPORT issues: Works with iMovie 3 but not iMovie 6 ???
Okay, still trying to get my import issue resolved.
I have a SONY HDR-HC3 DVmini.
No problems with iMovie 3 on my Powerbook.
iMovie 6 on my MDD will let me control the camera, but the screen stays BLUE with no IMPORT.
My buddy thinks it might be a Quicktime issue.
He said to download Quicktime Pro, that might resolve this issue.
I can copy the iMovie 3 file to my machine with iMovie 6 and do the editing, just not the importing...
THANX
Hi
Can be so much.
Often Camera has to be connected to Charger/mains during Capture.
A majority is a faulty, badly connected FW-Cable (USB doesn't work at all)
My list on this
*NO CAMERA or A/D-box*
Cable
• Sure that You use the FireWire - USB will not work for miniDV tape Cameras
FireWire - Sure you're not using the accompanying USB-Cable but bought a 4-pin to 6-pin FW one?
• Test another FW-Cable very often the problem maker.
Camera
• Test Your Camera on another Mac so that DV-in still works OK
• Toggle in iMovie pref. Play-back via Camera (on<->off some times)
• Some Cameras has a Menu where You must select DV-out to get it to work
• Camera connected to "charger" (mains adaptor) - not just on battery
Does Your Camera work on another Mac ?
Sorry to say, it is too easy to turn the 6-pin end of the FW-cable 180 deg wrong.
This is lethal to the A/D-chip in the Camera = needs an expensive repair.
(Hard to find out - else than import/export to another Mac ceased to work
everything else is OK eg recording and playback to TV)
Connections
• Daisy Chaining most often doesn’t work (some unique cases - it’s the only way that work (some Canon Cameras ?))
Try to avoid connecting Camera <--> external HD <--> Mac but import directly to the Mac then move
the Movie project to dedicated external hard disk.
Mac
• Free space on internal (start-up) hard disk ? Please specify the amount of free space.
(Other hard disks don't count)
I go for a minimum of 25Gb free space for 4x3 SD Video - and my guess is 5 times more for 16x9 HD ones
after material is imported and edited.
SoftWare
• Delete iMovie pref file may help sometimes. I rather start a new account, log into this and have a re-try.
• Any strange Plug-ins into QuickTime as Perian etc ? Remove and try again.
• FileVault is off ? (hopefully)
Using WHAT versions ? .
• Mac OS - X.5.4 ?
• QuickTime version ? (This is the heart in both iMovie and FinalCut)
• iMovie 8 (7.1.?) ?
• iMovie HD 6 (6.0.4/3) ?
*Other ways to import Your miniDV tape*
• Use another Camera. There where tape play-back stations from SONY
but they costed about 2-4 times a normal miniDV Camera.
• If Your Camera works on another Mac. Make an iMovie movie project here and move it
over to Your Mac via an external hard disk.
(HAS TO BE Mac OS Extended formatted - USB/DOS/FAT32/Mac OS Exchange WILL NOT DO)
(Should be a FireWire one - USB/USB2 performs badly)
from LKN 1935.
Hi Bengt W, I tried it all, but nothing worked. Your answer has been helpful insofar as all the different trials led to the conclusion that there was something wrong with my iMovie software. I therefore threw everything away and reinstalled iMovie from the HD. After that the export of DV videos (there has not been any problem with HDV videos) to my Sony camcorders worked properly as it did before. Thank you. LKN 1935
from Karsten.
in addition to Bengt's excellent '9 yards of advice' ..
camera set to 'Play' , not rec/computer/etc.?
camera not on battery, but power-line?
did your Mac 'recognize' this camera before...?
a technical check.
connect camera, on, playback, fw-connected...
click on the Blue Apple, upper left of your screen ..
choose 'About../More..
under Firewire.. what do you read..?
More
• FileVault - Secure that it’s turned off
• Network storage - DOESN’T WORK
• Where did You store/capture/import Your project ?
External USB hard disk = Bad Choice / FireWire = Good
If so it has to be Mac OS Extended formatted
----> UNIX/DOS/FAT32/Mac OS Exchange is NOT Working for VIDEO !
mbolander
Thanks for all your suggestions. What I learned is that I had a software problem. I had something called "Nikon Transfer" on my Mac that was recognizing my Canon camcorder as a still camera and was preventing iMovie from working properly. After uninstalling Nikon Transfer and doing a reboot, everything worked great.
I never liked the Nikon Transfer software anyway--I guess I'll get a cheap card reader and use that to transfer photos in the future.
*No Camera or bad import*
• USB hard disk
• Network storage
• File Vault is on
jiggaman15dg wrote
if you have adobe cs3 or 4 and have the adobe bridge on close that
or no firewire will work
see if that halps
DJ1249 wrote
The problem was the external backup hard drive that is connected; you need to disconnect the external drive before the Mac can see the video camera.
Yours Bengt W