Datapump Export/Import
Hey, I have a training database, which is actually a copy of my production database.
Initially I did a Data Pump export and imported the complete file into the training database. Everything worked fine.
Now I have to "update" the training database: new functions and procedures were added in production, and several thousand rows were deleted from tables.
I export the user schema, but while importing it I get loads of error messages saying that the tables, functions, and procedures already exist.
How can I just update the training user's schema?
I tried table_exists_action=replace,
but I still get error messages for functions and procedures!
Do I have to drop the user's schema before importing again?
Chris
Christian wrote:
Do I have to drop the user's schema before importing again?
Yes, you do. If using Data Pump, there is no other way to ensure a consistent copy of the schema: TABLE_EXISTS_ACTION only applies to tables, so pre-existing functions, procedures, and other object types are simply skipped with errors.
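A minimal sketch of such a refresh cycle, with hypothetical names (train_user, dp_dir, prod_schema.dmp) and assuming the export was taken by a privileged account so the dump includes the user definition:
SQL> DROP USER train_user CASCADE;
impdp system DIRECTORY=dp_dir DUMPFILE=prod_schema.dmp LOGFILE=refresh.log SCHEMAS=TRAIN_USER
The import then recreates the account together with its tables, functions, and procedures.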
Similar Messages
-
Datapump network import from 10g to 11g database is not progressing
We have an 11.2.0.3 shell database on RedHat Linux 5 (64-bit), and we are pulling data from a 10g database (10.2.0.4) on the HP-UX platform using Data Pump via NETWORK_LINK. However, the import does not seem to progress or complete. We have left it running for almost 13 hours and it did not import a single row into the 11g database. We even tried to import only one table with 0 rows, but it still does not complete; the log file just keeps looping on the following:
Worker 1 Status:
Process Name: DW00
State: EXECUTING
Estimate in progress using BLOCKS method...
Job: IMP_NONLONG4_5
Operation: IMPORT
Mode: TABLE
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Worker 1 Status:
Process Name: DW00
State: EXECUTING
We also see this:
EVENT SECONDS_IN_WAIT STATUS STATE ACTION
SQL*Net message from dblink 4408 ACTIVE WAITING IMP_NONLONG4_5
Below is our par file:
NETWORK_LINK=DATABASE_10G
DIRECTORY=MOS_UPGRADE_DUMPLOC1
LOGFILE=imp_nonlong_grp-4_5.log
STATUS=300
CONTENT=DATA_ONLY
JOB_NAME=IMP_NONLONG4_5
TABLES=SYSADM.TEST_TBL
Any ideas? Thanks.
Thanks a lot to all who have looked at and responded to this; I appreciate you giving time for suggestions. As a recap, Data Pump export and import via dump file work on both the 10g and 11g databases; we are only having an issue pulling data from the 10g database by executing a Data Pump network import (using a dblink) from the 11g database.
SOLUTION: The culprit was the parameter optimizer_features_enable='8.1.7' that was set on the 10g database; we took it out of the 10g database's parameter file and the network import worked flawlessly.
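A hedged sketch of that change on an spfile-managed 10g instance (commands from memory; adjust to your environment):
SQL> ALTER SYSTEM RESET optimizer_features_enable SCOPE=SPFILE SID='*';
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP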
HOW WE FIGURED IT OUT: We turned on a trace for the Data Pump sessions, and the trace file showed something about the optimizer (see below). So we removed, one by one, each parameter related to the optimizer and found that it was the optimizer_features_enable='8.1.7' parameter that, when removed, made the network import successful.
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS opt_param('parallel_execution_enabled',
'false') NO_PARALLEL(SAMPLESUB) NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE
*/ NVL(SUM(C1),0), NVL(SUM(C2),0), NVL(SUM(C3),0)
FROM
(SELECT /*+ NO_PARALLEL("SYS_IMPORT_TABLE_02") INDEX("SYS_IMPORT_TABLE_02"
SYS_C00638241) NO_PARALLEL_INDEX("SYS_IMPORT_TABLE_02") */ 1 AS C1, 1 AS C2,
1 AS C3 FROM "DBSCHEDUSER"."SYS_IMPORT_TABLE_02" "SYS_IMPORT_TABLE_02"
WHERE "SYS_IMPORT_TABLE_02"."PROCESS_ORDER"=:B1 AND ROWNUM <= 2500)
SAMPLESUB
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0)
FROM
(SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("SYS_IMPORT_TABLE_02")
FULL("SYS_IMPORT_TABLE_02") NO_PARALLEL_INDEX("SYS_IMPORT_TABLE_02") */ 1
AS C1, CASE WHEN "SYS_IMPORT_TABLE_02"."PROCESS_ORDER"=:B1 THEN 1 ELSE 0
END AS C2 FROM "DBSCHEDUSER"."SYS_IMPORT_TABLE_02" "SYS_IMPORT_TABLE_02")
SAMPLESUB
-
Which background process is involved in datapump export/import?
Hi guys,
Could anyone please tell me which background process is involved in Data Pump export and import activity? Any information, please.
/mR
Data Pump export and import are done by foreground server processes (master and workers), not background ones.
http://www.acs.ilstu.edu/docs/Oracle/server.101/b10825/dp_overview.htm#sthref22 -
Datapump table import, simple question
Hi,
As a junior DBA, I am a little bit confused.
Suppose that a user wants me to import a table with Data Pump which he exported from another DB with different schemas and tablespaces (he exported with expdp using tables=XX and I don't know the details)...
If I only know the table name, should I ask these three questions: 1) schema 2) tablespace 3) Oracle version?
From the docs I know the remapping capabilities of impdp. But are they mandatory when importing just a table?
Thanks in advance
Hi,
Suppose that a user wants me to import a table with Data Pump which he exported from another DB with different schemas and tablespaces (he exported with expdp using tables=XX and I don't know the details)...
If I only know the table name, should I ask these three questions: 1) schema 2) tablespace 3) Oracle version?
You can get this information from the dump file if you want - just to make sure you get the right information. If you run your import command but add:
sqlfile=mysql.sql
Then you can edit that SQL file to see what is in the dump file. It won't show data, but it will show all of the metadata (tables, tablespaces, etc.). It is a .sql file containing all of the CREATE statements that would have been executed if you had not added the sqlfile parameter.
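For example, a sketch of such a metadata-only run (directory and dump file names are hypothetical):
impdp scott DIRECTORY=dp_dir DUMPFILE=expdat.dmp SQLFILE=mysql.sql
No rows are loaded; Data Pump only writes the DDL into mysql.sql in that directory.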
From the docs I know the remapping capabilities of impdp. But are they mandatory when importing just a table?
Thanks in advance
You never have to remap anything, but if the dump file contains a table scott.emp, then it will import that table into scott.emp. If you want it to go into blake, then you need REMAP_SCHEMA. If it is going into tablespace tbs1 and you want it in tbs2, then you need REMAP_TABLESPACE.
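A sketch combining both remaps, reusing the names from above (dump file name assumed):
impdp system DIRECTORY=dp_dir DUMPFILE=expdat.dmp TABLES=scott.emp REMAP_SCHEMA=scott:blake REMAP_TABLESPACE=tbs1:tbs2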
Suppose that an end user wants me to export a specific table using Data Pump...
Should I also give him the name of the tablespace where the exported table resides?
It would be nice, but see above - you can get the tablespace name from the sqlfile command on import.
Hope this helps.
Dean -
Datapump API: Import all tables in schema
Hi,
how can I import all tables using a wildcard in the Data Pump API?
Thanks in advance,
tensai_tensai_ wrote:
Thanks for the links, but I already know them...
My problem is that I couldn't find an example which shows how to perform an import via the API which imports all tables, but nothing else.
Can someone please help me with a code example?
I'm not sure what you mean by "imports all tables, but nothing else". It could mean that you only want to import the tables, but not the data, and/or not the statistics, etc.
Using the samples provided in the manuals:
DECLARE
ind NUMBER; -- Loop index
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
spos NUMBER; -- String starting position
slen NUMBER; -- String length for output
BEGIN
-- Create a (user-named) Data Pump job to do a "schema" import
h1 := DBMS_DATAPUMP.OPEN('IMPORT','SCHEMA',NULL,'EXAMPLE8');
-- Specify the single dump file for the job (using the handle just returned)
-- and directory object, which must already be defined and accessible
-- to the user running this procedure. This is the dump file created by
-- the export operation in the first example.
DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DATA_PUMP_DIR');
-- A metadata remap will map all schema objects from one schema to another.
DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','RANDOLF','RANDOLF2');
-- Include and exclude
dbms_datapump.metadata_filter(h1,'INCLUDE_PATH_LIST','''TABLE''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/C%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/F%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/G%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/I%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/M%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/P%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/R%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/TR%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/STAT%''');
-- no data please
DBMS_DATAPUMP.DATA_FILTER(h1, 'INCLUDE_ROWS', 0);
-- If a table already exists in the destination schema, skip it (leave
-- the preexisting table alone). This is the default, but it does not hurt
-- to specify it explicitly.
DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');
-- Start the job. An exception is returned if something is not set up properly.
DBMS_DATAPUMP.START_JOB(h1);
-- The import job should now be running. In the following loop, the job is
-- monitored until it completes. In the meantime, progress information is
-- displayed. Note: this is identical to the export example.
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- If any work-in-progress (WIP) or Error messages were received for the job,
-- display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and gracefully detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(h1);
exception
when others then
dbms_output.put_line('Exception in Data Pump job');
dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error,0,
job_state,sts);
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
spos := 1;
slen := length(le(ind).LogText);
if slen > 255
then
slen := 255;
end if;
while slen > 0 loop
dbms_output.put_line(substr(le(ind).LogText,spos,slen));
spos := spos + 255;
slen := length(le(ind).LogText) + 1 - spos;
end loop;
ind := le.NEXT(ind);
end loop;
end if;
end if;
-- dbms_datapump.stop_job(h1);
dbms_datapump.detach(h1);
END;
/
This should import nothing but the tables (excluding the data and the table statistics) from a schema export (including the remapping shown here); you can play around with the EXCLUDE_PATH_EXPR expressions. Check the serveroutput generated for possible values to use in EXCLUDE_PATH_EXPR.
Use the DBMS_DATAPUMP.DATA_FILTER procedure if you want to exclude the data.
For more samples, refer to the documentation:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_api.htm#i1006925
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Datapump Network Import from Oracle 10.2.0 to 10.1.0
I have a self-developed table bulk copy system based on the dbms_datapump API. It is pretty stable and has worked very well for a long time on many servers with Oracle 10.1.0. But some days ago we installed a new server with Oracle 10.2.0 and noticed that our system can't copy tables from the new server to the other servers. After some time debugging we got a description of our situation: the dbms_datapump.open() call fails with an "ORA-39006: internal error" exception, and dbms_datapump.get_status() returns a more detailed description: "ORA-39022: Database version 10.2.0.1.0 is not supported."... We're really disappointed... what can we do to use dbms_datapump.open on 10.1.0 to copy a table from 10.2.0?
P.S. We have tried different values (COMPATIBLE, LATEST, 10.0.0, 10.1.0) for the version parameter of open()... but without luck...
Please help, because we can't upgrade so many old servers to 10.2.0 and we really need to use this new server...
Thanks in advance,
Alexey Kovyrin
Hello,
Your problem is about the character set.
WE8ISO8859P1 stores a character in 1 byte, but AL32UTF8 (Unicode) uses up to 4 bytes.
So, when you import from WE8ISO8859P1 to AL32UTF8 you may get the offending error ORA-12899.
To solve it, you may change the length semantics of your columns' datatype and choose the option CHAR
instead of BYTE (the default), for instance:
VARCHAR2(100) --> VARCHAR2(100 CHAR)
Then, you could import the data.
You may also change the parameter NLS_LENGTH_SEMANTICS to CHAR.
alter system set nls_length_semantics=char scope=spfile;
Then, restart your database.
NB: Although the parameter nls_length_semantics is dynamic, you have to restart the database so that it is taken into account.
Then, you create the tables (empty) and afterwards you run your import.
Hope this helps.
Best regards,
Jean-Valentin
-
R0 Rollback Segment in Import of 10gR2 into 11gR2 Database
I have a new install of Oracle Database Server Enterprise Edition 11.2.0.3.6 on AIX 7.1.
I used the DBCA to create two databases and used Data Pump export and import to upgrade two 10gR2 databases to 11gR2.
One of the import logs includes the following messages:
Processing object type DATABASE_EXPORT/ROLLBACK_SEGMENT
ORA-39083: Object type ROLLBACK_SEGMENT failed to create with error:
ORA-02221: invalid MAXEXTENTS storage option value
Failing sql is:
CREATE ROLLBACK SEGMENT "R0" TABLESPACE "SYSTEM" STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 2 MAXEXTENTS 2147483645)
I verified that the source database has R0 with maxextents of 2147483645.
I can copy and modify the CREATE ROLLBACK statement and lower the value assigned to MAXEXTENTS, or use MAXEXTENTS UNLIMITED, then run the command in the target database.
In light of the fact that the DBCA did not create R0 in my two 11gR2 databases, I want to know if I still need to create R0 in the one that showed the error in the import log. Has anyone else noticed the disappearance of R0?
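For reference, the adjusted statement the poster describes would look like the sketch below - illustrative only, since with automatic undo management the segment is normally unnecessary:
CREATE ROLLBACK SEGMENT "R0" TABLESPACE "SYSTEM"
STORAGE (INITIAL 131072 NEXT 131072 MINEXTENTS 2 MAXEXTENTS UNLIMITED);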
Here are the rollback segments that exist in the 11gR2 database at this time:
SQL> Select SEGMENT_NAME, OWNER, TABLESPACE_NAME, INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE, STATUS
from SYS.DBA_ROLLBACK_SEGS;
SEGMENT_NAME OWNER TABLESPACE_NAME INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE STATUS
SYSTEM SYS SYSTEM 114688 57344 1 32765 ONLINE
_SYSSMU1_3638931391$ PUBLIC UNDOTBS1 131072 65536 2 32765 ONLINE
_SYSSMU2_3033359625$ PUBLIC UNDOTBS1 131072 65536 2 32765 ONLINE
_SYSSMU3_2670780772$ PUBLIC UNDOTBS1 131072 65536 2 32765 ONLINE
_SYSSMU4_286801445$ PUBLIC UNDOTBS1 131072 65536 2 32765 ONLINE
_SYSSMU5_1738828719$ PUBLIC UNDOTBS1 131072 65536 2 32765 ONLINE
_SYSSMU6_3548494004$ PUBLIC UNDOTBS1 131072 65536 2 32765 ONLINE
_SYSSMU7_700714424$ PUBLIC UNDOTBS1 131072 65536 2 32765 ONLINE
_SYSSMU8_2755301871$ PUBLIC UNDOTBS1 131072 65536 2 32765 ONLINE
_SYSSMU9_2087597455$ PUBLIC UNDOTBS1 131072 65536 2 32765 ONLINE
_SYSSMU10_3267518184$ PUBLIC UNDOTBS1 131072 65536 2 32765 ONLINE
11 rows selected.
Thanks,
Bill
Srini,
Both the 10gR2 and 11gR2 versions of the database are using automatic undo management.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
SQL> show parameter undo
NAME TYPE VALUE
_undo_autotune boolean TRUE
undo_management string AUTO
undo_retention integer 14400
undo_tablespace string UNDOTBS1
SQL>
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> show parameter undo
NAME TYPE VALUE
undo_management string AUTO
undo_retention integer 14400
undo_tablespace string UNDOTBS1
SQL>
Do you know anything about Oracle no longer automatically creating R0 in 11gR2?
Thanks,
Bill -
Import/export--need help gurus!!!!!!
C:\DOCUME~1\ADMINI~1>set oracle_sid=orcl
C:\DOCUME~1\ADMINI~1>expdp scott/tiger directory=orcl_exp_datapump schemas=scott
Export: Release 10.2.0.1.0 - Production on Tuesday, 12 February, 2008 19:35:34
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/******** directory=orcl_exp_datapump schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 32.62 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."SYS_EXPORT_FULL_02" 7.628 MB 12224 rows
. . exported "SCOTT"."SYS_EXPORT_FULL_03" 7.670 MB 12231 rows
. . exported "SCOTT"."SYS_EXPORT_FULL_01" 7.080 MB 9573 rows
. . exported "SCOTT"."SYS_EXPORT_TABLESPACE_01" 255.8 KB 318 rows
. . exported "SCOTT"."DEPT" 5.656 KB 4 rows
. . exported "SCOTT"."EMP" 7.820 KB 14 rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
. . exported "SCOTT"."TESTDEPT" 5.726 KB 7 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
C:\ORCL_EXP_DATAPUMP\EXPDAT.DMP
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 19:37:35
C:\DOCUME~1\ADMINI~1>impdp abc/abc directory=orcl_exp_datapump dumpfile=expdat.dmp remap_schema=scott:abc
Import: Release 10.2.0.1.0 - Production on Tuesday, 12 February, 2008 19:43:48
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-31626: job does not exist
ORA-31633: unable to create master table "ABC.SYS_IMPORT_FULL_05"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 863
ORA-01031: insufficient privileges
Any advice is appreciated!
Trying to import from the scott schema into the abc schema, but unsuccessful.
The DBA creates a directory,
grants permission on the directory to the scott and abc schemas,
exports the scott schema (userid/password scott/tiger),
imports into the abc schema (userid/password abc/abc).
Is this the right way?
I need some assistance, guys!
Please check that the following minimum requirements are met:
-CREATE SESSION
-CREATE TABLE
-object privileges READ and WRITE on the directory
-sufficient tablespace quota on the user's default tablespace
Additionally, the role IMP_FULL_DATABASE is needed:
-to run an Import DataPump job that imports into a different schema.
These requirements apply to the user connected to the database while performing the Import DataPump job.
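A minimal sketch of the corresponding grants for the abc user from the original post (directory name taken from the post; the USERS default tablespace is an assumption):
SQL> GRANT CREATE SESSION, CREATE TABLE TO abc;
SQL> GRANT READ, WRITE ON DIRECTORY orcl_exp_datapump TO abc;
SQL> ALTER USER abc QUOTA UNLIMITED ON users;
SQL> GRANT IMP_FULL_DATABASE TO abc;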
Additionally, review the Oracle Metalink document "Export/Import DataPump: The Minimum Requirements to Use Export DataPump and Import DataPump (System Privileges)", Doc ID: Note 351598.1:
https://metalink.oracle.com/metalink/plsql/f?p=130:14:5033277956381036007::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,351598.1,1,1,1,helvetica
Adith -
Grant Privileges in Database Vault for DATAPUMP.
Hi,
I am using ORACLE DATABASE 11g R2.
I have installed/enabled DATABASE VAULT 11g on it.
I have configured many users in it with privileges like 'SELECT on table', 'INSERT on table', DELETE, .....
I want to give a user the DATAPUMP privilege so that he can export and import.
I have 2 users.
1) MAIN
2) BACKUP
MAIN user is the owner and the most important schema. Now I want one more schema, named BACKUP, which will be able to take a backup of the MAIN schema. NO OTHER SCHEMA SHOULD BE ALLOWED TO TAKE A BACKUP OF THE MAIN SCHEMA, NOT EVEN SYS.
Can anyone tell me how I can grant the proper privileges to the BACKUP schema so that it can use Data Pump and import/export the MAIN schema from the OS prompt?
NOTE: I have Database Vault installed on my server. Please let me know what RULES or RULE SETS I need to create to make this happen.
Thanks in advance.
I have managed, through privileges, to grant the BACKUP user the right to start an import, but I get these errors while importing:
Failing sql is:
CREATE TABLE "MAIN"."FLX_PM_OFFER_SELECTOR_B" ("USER_NAME" VARCHAR2(50 BYTE), "PRODUCT_GROUP" VARCHAR2(5 BYTE), "REFERENCE_NUM" VARCHAR2(30 BYTE) NOT NULL ENABLE, "SESSION_STATE" VARCHAR2(5 BYTE), "OFFER_FEATURES" BLOB, "RECOMMENDED_OFFERS" VARCHAR2(500 BYTE), "SELECTED_OFFERS" VARCHAR2(500 BYTE), "MAKER_NAME" VARCHAR2(12 BYTE), "MAK
ORA-39083: Object type TABLE:"MAIN"."FLX_PM_ACCOUNT_ROLE_FLOW" failed to create with error:
ORA-47401: Realm violation for CREATE TABLE on MAIN.FLX_PM_ACCOUNT_ROLE_FLOW
I am getting this error for all the object types: SYNONYM, SEQUENCE,
I have granted the MAIN user all the privileges but I still get these errors. Do I need to create a realm or rule set for this?
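On 11gR2, Database Vault also requires an explicit Data Pump authorization in addition to any realm grants; a sketch, run as a user with the DV_OWNER role (procedure name as documented for 11gR2 - verify on your exact release):
SQL> EXEC DVSYS.DBMS_MACADM.AUTHORIZE_DATAPUMP_USER('BACKUP', 'MAIN');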
Thanks. -
Dump file has larger size than the import location
Hi,
I'm very new to export/import-level logical backup.
I have an export file of around 80 GB from my database (dev db), and I need to import that file into another schema in another database (testdb) which does not have adequate space for the import.
What are the steps I need to perform in order to import the exported file?
I'm using the RHEL4 OS
Oracle 11.2.0.2
Kindly advise me on this.
Thanks in advance
Regards,
Faiz
Hello,
It may be a way.
You may pre-create the Default Tablespace used by the User/Schema of the Target Database with the option COMPRESS FOR OLTP:
http://download.oracle.com/docs/cd/E11882_01/server.112/e17120/tspaces002.htm#CIHGCFBB
Then, assuming you use Data Pump, you import with the parameter below so that the imported tables can inherit the options of the schema's default tablespace:
transform=segment_attributes:n:table
But there's no guarantee about the compression rate.
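Putting the two pieces together, a sketch with hypothetical names (tablespace, datafile path, schemas, and file names are all assumptions):
SQL> CREATE TABLESPACE comp_data DATAFILE '/u01/oradata/testdb/comp_data01.dbf' SIZE 10G
DEFAULT COMPRESS FOR OLTP;
impdp system DIRECTORY=dp_dir DUMPFILE=dev_schema.dmp REMAP_SCHEMA=devuser:testuser TRANSFORM=segment_attributes:n:table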
So, as previously posted, it would be better to get enough space on the target database.
Hope this helps.
Best regards,
Jean-Valentin -
Some objects in sys schema not imported using full database import/export.
Hello All,
We need to migrate a database from one server to a new server using the same database version, 11g Release 2 (11.2.0.1.0). We have some production objects in the SYS schema. We took a full export using full=y and then imported on the new server using full=y. Objects belonging to other tablespaces and users were successfully imported, but the production objects (i.e., the table in the SYS schema) were not.
We used following commands for export and import:
# exp system file=/u01/backup/orcl_full_exp.dmp log=/u01/backup/orcl_full_exp.log full=y statistics=none
# imp system file=/u01/backup/orcl_full_exp.dmp log=/u01/backup/orcl_full_imp.log full=y ignore=y
Kind Regards,
Sharjeel
Hi,
First of all, it is not good practice to keep user objects in the SYS schema; a conventional full export deliberately skips objects owned by SYS, which is why they were not carried over.
Second, you are using version 11gR2, so why not use the Data Pump export/import method?
Third, did you get any errors during the import? What was the error?
Export/import efficiency
Hi,
I have a basic question here. Which is more efficient, exp/imp or expdp/impdp, and why? I would like to read about it; good documents are welcome.
Thanks
Kris
Hi,
Definitely expdp/impdp (Data Pump export/import) is much better than the original exp/imp, which was mostly used on Oracle 9i databases.
The top 10 differences between exp/imp (export/import) and expdp/impdp (Data Pump export and import) are:
1)Data Pump Export and Import operate on a group of files called a dump file set
rather than on a single sequential dump file.
2)Data Pump Export and Import access files on the server rather than on the client.
This results in improved performance. It also means that directory objects are
required when you specify file locations.
3)The Data Pump Export and Import modes operate symmetrically, whereas original
export and import did not always exhibit this behavior.
For example, suppose you perform an export with FULL=Y, followed by an import using SCHEMAS=HR. This will produce the same results as if you performed an
export with SCHEMAS=HR, followed by an import with FULL=Y.
4)Data Pump Export and Import use parallel execution rather than a single stream of
execution, for improved performance. This means that the order of data within
dump file sets and the information in the log files is more variable.
5)Data Pump Export and Import represent metadata in the dump file set as XML
documents rather than as DDL commands. This provides improved flexibility for
transforming the metadata at import time.
6)Data Pump Export and Import are self-tuning utilities. Tuning parameters that
were used in original Export and Import, such as BUFFER and RECORDLENGTH,
are neither required nor supported by Data Pump Export and Import.
7)At import time there is no option to perform interim commits during the
restoration of a partition. This was provided by the COMMIT parameter in original
Import.
8)There is no option to merge extents when you re-create tables. In original Import,
this was provided by the COMPRESS parameter. Instead, extents are reallocated
according to storage parameters for the target table.
9)Sequential media, such as tapes and pipes, are not supported.
10)The Data Pump method for moving data between different database versions is
different than the method used by original Export/Import. With original Export,
you had to run an older version of Export (exp) to produce a dump file that was
compatible with an older database version. With Data Pump, you can use the
current Export (expdp) version and simply use the VERSION parameter to specify the target database version
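As an illustration of point 10 (names are hypothetical): to move data from an 11g database into a 10.2 database, run the 11g expdp with the VERSION parameter and import the resulting file with the 10.2 impdp:
expdp system SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr_for_102.dmp VERSION=10.2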
For more details and options:
exp help=y
imp help=y
expdp help=y
impdp help=y
Fine manuals for referring:
http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
Hope it helps.
Best regards,
Rafi.
http://rafioracledba.blogspot.com -
Post Upgrade SQL Performance Issue
Hello,
I just upgraded/migrated my database from 11.1.0.6 SE to 11.2.0.3 EE. I did this with a Data Pump export/import out of the 11.1.0.6 database and into a new 11.2.0.3 database. Both the old and the new database are on the same Linux server. The new database has 2 GB more RAM assigned to its SGA than the old one. Both DBs are using AMM.
The strange part is that I have a SQL statement that completes in 1 second in the old DB and takes 30 seconds in the new one. I even moved the SQL plan from the old DB into the new DB, so they are using the same plan.
To sum up the issue: I have one SQL statement, using the same SQL plan, running at dramatically different speeds on two different databases on the same server. The databases are 11.1.0.7 SE and 11.2.0.3 EE.
Not sure what is going on or how to fix it; any help would be great!
I have included explain plans and autotraces from both the NEW and OLD databases.
NEW DB Explain Plan (Slow)
Plan hash value: 1046170788
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 94861 | 193M| | 74043 (1)| 00:18:52 |
| 1 | SORT ORDER BY | | 94861 | 193M| 247M| 74043 (1)| 00:18:52 |
| 2 | VIEW | PBM_MEMBER_INTAKE_VW | 94861 | 193M| | 31803 (1)| 00:08:07 |
| 3 | UNION-ALL | | | | | | |
| 4 | NESTED LOOPS OUTER | | 1889 | 173K| | 455 (1)| 00:00:07 |
|* 5 | HASH JOIN | | 1889 | 164K| | 454 (1)| 00:00:07 |
| 6 | TABLE ACCESS FULL| PBM_CODES | 2138 | 21380 | | 8 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL| PBM_MEMBER_INTAKE | 1889 | 145K| | 446 (1)| 00:00:07 |
|* 8 | INDEX UNIQUE SCAN | ADJ_PK | 1 | 5 | | 1 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 92972 | 9987K| | 31347 (1)| 00:08:00 |
| 10 | NESTED LOOPS OUTER| | 92972 | 8443K| | 31346 (1)| 00:08:00 |
|* 11 | TABLE ACCESS FULL| PBM_MEMBERS | 92972 | 7989K| | 31344 (1)| 00:08:00 |
|* 12 | INDEX UNIQUE SCAN| ADJ_PK | 1 | 5 | | 1 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | PBM_EMPLOYER_UK1 | 1 | 17 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - access("C"."CODE_ID"="MI"."STATUS_ID")
7 - filter("MI"."CLAIM_NUMBER" LIKE '%A0000250%' AND "MI"."CLAIM_NUMBER" IS NOT NULL)
8 - access("MI"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
11 - filter("M"."THEIR_GROUP_ID" LIKE '%A0000250%' AND "M"."THEIR_GROUP_ID" IS NOT NULL)
12 - access("M"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
13 - access("M"."GROUP_CODE"="E"."GROUP_CODE" AND "M"."EMPLOYER_CODE"="E"."EMPLOYER_CODE")
Note
- SQL plan baseline "SYS_SQL_PLAN_a3c20fdcecd98dfe" used for this statement
OLD DB Explain Plan (Fast)
Plan hash value: 1046170788
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 95201 | 193M| | 74262 (1)| 00:14:52 |
| 1 | SORT ORDER BY | | 95201 | 193M| 495M| 74262 (1)| 00:14:52 |
| 2 | VIEW | PBM_MEMBER_INTAKE_VW | 95201 | 193M| | 31853 (1)| 00:06:23 |
| 3 | UNION-ALL | | | | | | |
| 4 | NESTED LOOPS OUTER | | 1943 | 178K| | 486 (1)| 00:00:06 |
|* 5 | HASH JOIN | | 1943 | 168K| | 486 (1)| 00:00:06 |
| 6 | TABLE ACCESS FULL| PBM_CODES | 2105 | 21050 | | 7 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL| PBM_MEMBER_INTAKE | 1943 | 149K| | 479 (1)| 00:00:06 |
|* 8 | INDEX UNIQUE SCAN | ADJ_PK | 1 | 5 | | 0 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 93258 | 9M| | 31367 (1)| 00:06:17 |
| 10 | NESTED LOOPS OUTER| | 93258 | 8469K| | 31358 (1)| 00:06:17 |
|* 11 | TABLE ACCESS FULL| PBM_MEMBERS | 93258 | 8014K| | 31352 (1)| 00:06:17 |
|* 12 | INDEX UNIQUE SCAN| ADJ_PK | 1 | 5 | | 0 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | PBM_EMPLOYER_UK1 | 1 | 17 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - access("C"."CODE_ID"="MI"."STATUS_ID")
7 - filter("MI"."CLAIM_NUMBER" LIKE '%A0000250%')
8 - access("MI"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
11 - filter("M"."THEIR_GROUP_ID" LIKE '%A0000250%')
12 - access("M"."ADJUSTER_ID"="A"."ADJUSTER_ID"(+))
13 - access("M"."GROUP_CODE"="E"."GROUP_CODE" AND "M"."EMPLOYER_CODE"="E"."EMPLOYER_CODE")
NEW DB Auto trace (Slow)
active txn count during cleanout 0
blocks decrypted 0
buffer is not pinned count 664129
buffer is pinned count 3061793
bytes received via SQL*Net from client 3339
bytes sent via SQL*Net to client 28758
Cached Commit SCN referenced 662366
calls to get snapshot scn: kcmgss 3
calls to kcmgas 0
calls to kcmgcs 8
CCursor + sql area evicted 0
cell physical IO interconnect bytes 0
cleanout - number of ktugct calls 0
cleanouts only - consistent read gets 0
cluster key scan block gets 0
cluster key scans 0
commit cleanout failures: block lost 0
commit cleanout failures: callback failure 0
commit cleanouts 0
commit cleanouts successfully completed 0
Commit SCN cached 0
commit txn count during cleanout 0
concurrency wait time 0
consistent changes 0
consistent gets 985371
consistent gets - examination 2993
consistent gets direct 0
consistent gets from cache 985371
consistent gets from cache (fastpath) 982093
CPU used by this session 3551
CPU used when call started 3551
CR blocks created 0
cursor authentications 1
data blocks consistent reads - undo records applied 0
db block changes 0
db block gets 0
db block gets direct 0
db block gets from cache 0
db block gets from cache (fastpath) 0
DB time 3553
deferred (CURRENT) block cleanout applications 0
dirty buffers inspected 0
Effective IO time 0
enqueue releases 0
enqueue requests 0
execute count 3
file io wait time 0
free buffer inspected 0
free buffer requested 0
heap block compress 0
Heap Segment Array Updates 0
hot buffers moved to head of LRU 0
HSC Heap Segment Block Changes 0
immediate (CR) block cleanout applications 0
immediate (CURRENT) block cleanout applications 0
IMU Flushes 0
IMU ktichg flush 0
IMU Redo allocation size 0
IMU undo allocation size 0
index fast full scans (full) 2
index fetch by key 0
index scans kdiixs1 12944
lob reads 0
LOB table id lookup cache misses 0
lob writes 0
lob writes unaligned 0
logical read bytes from cache -517775360
logons cumulative 0
logons current 0
messages sent 0
no buffer to keep pinned count 10
no work - consistent read gets 982086
non-idle wait count 6
non-idle wait time 0
Number of read IOs issued 0
opened cursors cumulative 4
opened cursors current 1
OS Involuntary context switches 853
OS Maximum resident set size 0
OS Page faults 0
OS Page reclaims 2453
OS System time used 9
OS User time used 3549
OS Voluntary context switches 238
parse count (failures) 0
parse count (hard) 0
parse count (total) 1
parse time cpu 0
parse time elapsed 0
physical read bytes 0
physical read IO requests 0
physical read total bytes 0
physical read total IO requests 0
physical read total multi block requests 0
physical reads 0
physical reads cache 0
physical reads cache prefetch 0
physical reads direct 0
physical reads direct (lob) 0
physical write bytes 0
physical write IO requests 0
physical write total bytes 0
physical write total IO requests 0
physical writes 0
physical writes direct 0
physical writes direct (lob) 0
physical writes non checkpoint 0
pinned buffers inspected 0
pinned cursors current 0
process last non-idle time 0
recursive calls 0
recursive cpu usage 0
redo entries 0
redo size 0
redo size for direct writes 0
redo subscn max counts 0
redo synch time 0
redo synch time (usec) 0
redo synch writes 0
Requests to/from client 3
rollbacks only - consistent read gets 0
RowCR - row contention 0
RowCR attempts 0
rows fetched via callback 0
session connect time 0
session cursor cache count 1
session cursor cache hits 3
session logical reads 985371
session pga memory 131072
session pga memory max 0
session uga memory 392928
session uga memory max 0
shared hash latch upgrades - no wait 284
shared hash latch upgrades - wait 0
sorts (memory) 3
sorts (rows) 243
sql area evicted 0
sql area purged 0
SQL*Net roundtrips to/from client 4
switch current to new buffer 0
table fetch by rowid 1861456
table fetch continued row 9
table scan blocks gotten 0
table scan rows gotten 0
table scans (short tables) 0
temp space allocated (bytes) 0
undo change vector size 0
user calls 7
user commits 0
user I/O wait time 0
workarea executions - optimal 10
workarea memory allocated 342
OLD DB Auto trace (Fast)
active txn count during cleanout 0
buffer is not pinned count 4
buffer is pinned count 101
bytes received via SQL*Net from client 1322
bytes sent via SQL*Net to client 9560
calls to get snapshot scn: kcmgss 15
calls to kcmgas 0
calls to kcmgcs 0
calls to kcmgrs 1
cleanout - number of ktugct calls 0
cluster key scan block gets 0
cluster key scans 0
commit cleanouts 0
commit cleanouts successfully completed 0
concurrency wait time 0
consistent changes 0
consistent gets 117149
consistent gets - examination 56
consistent gets direct 115301
consistent gets from cache 1848
consistent gets from cache (fastpath) 1792
CPU used by this session 118
CPU used when call started 119
cursor authentications 1
db block changes 0
db block gets 0
db block gets from cache 0
db block gets from cache (fastpath) 0
DB time 123
deferred (CURRENT) block cleanout applications 0
Effective IO time 2012
enqueue conversions 3
enqueue releases 2
enqueue requests 2
enqueue waits 1
execute count 2
free buffer requested 0
HSC Heap Segment Block Changes 0
IMU Flushes 0
IMU ktichg flush 0
index fast full scans (full) 0
index fetch by key 101
index scans kdiixs1 0
lob writes 0
lob writes unaligned 0
logons cumulative 0
logons current 0
messages sent 0
no work - consistent read gets 117080
Number of read IOs issued 1019
opened cursors cumulative 3
opened cursors current 1
OS Involuntary context switches 54
OS Maximum resident set size 7868
OS Page faults 12
OS Page reclaims 2911
OS System time used 57
OS User time used 71
OS Voluntary context switches 25
parse count (failures) 0
parse count (hard) 0
parse count (total) 3
parse time cpu 0
parse time elapsed 0
physical read bytes 944545792
physical read IO requests 1019
physical read total bytes 944545792
physical read total IO requests 1019
physical read total multi block requests 905
physical reads 115301
physical reads cache 0
physical reads cache prefetch 0
physical reads direct 115301
physical reads prefetch warmup 0
process last non-idle time 0
recursive calls 0
recursive cpu usage 0
redo entries 0
redo size 0
redo synch writes 0
rows fetched via callback 0
session connect time 0
session cursor cache count 1
session cursor cache hits 2
session logical reads 117149
session pga memory -983040
session pga memory max 0
session uga memory 0
session uga memory max 0
shared hash latch upgrades - no wait 0
sorts (memory) 2
sorts (rows) 157
sql area purged 0
SQL*Net roundtrips to/from client 3
table fetch by rowid 0
table fetch continued row 0
table scan blocks gotten 117077
table scan rows gotten 1972604
table scans (direct read) 1
table scans (long tables) 1
table scans (short tables) 2
undo change vector size 0
user calls 5
user I/O wait time 0
workarea executions - optimal 4
Hi Srini,
Yes, the stats on the tables and indexes are current in both DBs. However, the NEW DB has "System Stats" in sys.aux_stats$ and the OLD DB does not. The old DB has optimizer_index_caching=0 and optimizer_index_cost_adj=100. The new DB has them at optimizer_index_caching=90 and optimizer_index_cost_adj=25, but should not be using them because of the "System Stats".
Also, I thought none of the SQL optimizer settings would matter because I forced in my own SQL plan using SPM.
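One way to test that theory is to compare the system statistics between the two databases and, if they differ significantly, fall back to the defaults on the new one; a hedged sketch:
SQL> SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';
SQL> EXEC DBMS_STATS.DELETE_SYSTEM_STATS;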
Differences in init.ora
OLD-11 _optimizer_push_pred_cost_based = FALSE
NEW-15 audit_sys_operations = FALSE
audit_trail = "DB, EXTENDED"
awr_snapshot_time_offset = 0
OLD-16 audit_sys_operations = TRUE
audit_trail = "XML, EXTENDED"
NEW-22 cell_offload_compaction = "ADAPTIVE"
cell_offload_decryption = TRUE
cell_offload_plan_display = "AUTO"
cell_offload_processing = TRUE
NEW-28 clonedb = FALSE
NEW-32 compatible = "11.2.0.0.0"
OLD-27 compatible = "11.1.0.0.0"
NEW-37 cursor_bind_capture_destination = "memory+disk"
cursor_sharing = "FORCE"
OLD-32 cursor_sharing = "EXACT"
NEW-50 db_cache_size = 4294967296
db_domain = "my.com"
OLD-44 db_cache_size = 0
NEW-54 db_flash_cache_size = 0
NEW-58 db_name = "NEWDB"
db_recovery_file_dest_size = 214748364800
OLD-50 db_name = "OLDDB"
db_recovery_file_dest_size = 8438939648
NEW-63 db_unique_name = "NEWDB"
db_unrecoverable_scn_tracking = TRUE
db_writer_processes = 2
OLD-55 db_unique_name = "OLDDB"
db_writer_processes = 1
NEW-68 deferred_segment_creation = TRUE
NEW-71 dispatchers = "(PROTOCOL=TCP) (SERVICE=NEWDBXDB)"
OLD-61 dispatchers = "(PROTOCOL=TCP) (SERVICE=OLDDBXDB)"
NEW-73 dml_locks = 5068
dst_upgrade_insert_conv = TRUE
OLD-63 dml_locks = 3652
drs_start = FALSE
NEW-80 filesystemio_options = "SETALL"
OLD-70 filesystemio_options = "none"
NEW-87 instance_name = "NEWDB"
OLD-77 instance_name = "OLDDB"
NEW-94 job_queue_processes = 1000
OLD-84 job_queue_processes = 100
NEW-104 log_archive_dest_state_11 = "enable"
log_archive_dest_state_12 = "enable"
log_archive_dest_state_13 = "enable"
log_archive_dest_state_14 = "enable"
log_archive_dest_state_15 = "enable"
log_archive_dest_state_16 = "enable"
log_archive_dest_state_17 = "enable"
log_archive_dest_state_18 = "enable"
log_archive_dest_state_19 = "enable"
NEW-114 log_archive_dest_state_20 = "enable"
log_archive_dest_state_21 = "enable"
log_archive_dest_state_22 = "enable"
log_archive_dest_state_23 = "enable"
log_archive_dest_state_24 = "enable"
log_archive_dest_state_25 = "enable"
log_archive_dest_state_26 = "enable"
log_archive_dest_state_27 = "enable"
log_archive_dest_state_28 = "enable"
log_archive_dest_state_29 = "enable"
NEW-125 log_archive_dest_state_30 = "enable"
log_archive_dest_state_31 = "enable"
NEW-139 log_buffer = 7012352
OLD-108 log_buffer = 34412032
OLD-112 max_commit_propagation_delay = 0
NEW-144 max_enabled_roles = 150
memory_max_target = 12884901888
memory_target = 8589934592
nls_calendar = "GREGORIAN"
OLD-114 max_enabled_roles = 140
memory_max_target = 6576668672
memory_target = 6576668672
NEW-149 nls_currency = "$"
nls_date_format = "DD-MON-RR"
nls_date_language = "AMERICAN"
nls_dual_currency = "$"
nls_iso_currency = "AMERICA"
NEW-157 nls_numeric_characters = ".,"
nls_sort = "BINARY"
NEW-160 nls_time_format = "HH.MI.SSXFF AM"
nls_time_tz_format = "HH.MI.SSXFF AM TZR"
nls_timestamp_format = "DD-MON-RR HH.MI.SSXFF AM"
nls_timestamp_tz_format = "DD-MON-RR HH.MI.SSXFF AM TZR"
NEW-172 optimizer_features_enable = "11.2.0.3"
optimizer_index_caching = 90
optimizer_index_cost_adj = 25
OLD-130 optimizer_features_enable = "11.1.0.6"
optimizer_index_caching = 0
optimizer_index_cost_adj = 100
NEW-184 parallel_degree_limit = "CPU"
parallel_degree_policy = "MANUAL"
parallel_execution_message_size = 16384
parallel_force_local = FALSE
OLD-142 parallel_execution_message_size = 2152
NEW-189 parallel_max_servers = 320
OLD-144 parallel_max_servers = 0
NEW-192 parallel_min_time_threshold = "AUTO"
NEW-195 parallel_servers_target = 128
NEW-197 permit_92_wrap_format = TRUE
OLD-154 plsql_native_library_subdir_count = 0
NEW-220 result_cache_max_size = 21495808
OLD-173 result_cache_max_size = 0
NEW-230 service_names = "NEWDB, NEWDB.my.com, NEW"
OLD-183 service_names = "OLDDB, OLD.my.com"
NEW-233 sessions = 1152
sga_max_size = 12884901888
OLD-186 sessions = 830
sga_max_size = 6576668672
NEW-238 shared_pool_reserved_size = 35232153
OLD-191 shared_pool_reserved_size = 53687091
OLD-199 sql_version = "NATIVE"
NEW-248 star_transformation_enabled = "TRUE"
OLD-202 star_transformation_enabled = "FALSE"
NEW-253 timed_os_statistics = 60
OLD-207 timed_os_statistics = 5
NEW-256 transactions = 1267
OLD-210 transactions = 913
NEW-262 use_large_pages = "TRUE" -
Startup open ORA-12514 error after shutdown, just upgraded 9i to 10g
Hi,
Noobe here, again! Cheers and happy New Year to all!
How should I edit the script to start a db that has been shut down prior to backup?
We are having a problem with backups; the failure occurs executing the SQL*Plus startup after shutdown. I successfully upgraded our 9i to 10g, 10.2.0.5.0, on Windows Server 2003, 32-bit. We have a back_script.bat that runs all this; the only edits I had to make to it were the names of the new Oracle services, but that is not a problem. When I run through the actions in the batch file, it fails at the sql>startup open; action and throws the error "ORA-12514 TNS:listener does not currently know of service requested in connect descriptor". What is different here between 9i and 10g that this no longer works? By commenting out the line calling our oracle_stop_start.sql I am getting successful >exp dumps, but there could be processes running because I'm not stopping everything before proceeding with the export.
I have a complete image of the physical server running in a virtual that I am able to test on. Hopefully I'll not put too much info in here, but here is some background! Both the batch file and little stop_start script predate my involvement.
Here are the pertinent statements in our backup_script.bat:
SET NLS_LANG=AMERICAN_AMERICA.UTF8
SET ORACLE_SID=wind
net stop Windchill3_MethodServer (the third party application that uses the database wind)
sqlplus "system/manager@wind as sysdba" @D:\ptc\windchill\backups\oracle_stop_start
exp system/manager@wind owner=guest file=......\oracle_backup.dmp.....etc, etc, etc
Here is oracle_stop_start.sql
shutdown immediate;
startup open;
quit
As stated earlier, this has worked at least since 2007 when the last edits were made to the batch file. Now it fails at the sql>startup open; throwing the error ORA-12514. I can duplicate this executing each statement in a dos shell. The shutdown command closes, dismounts and shuts down the database. I have tried changing the startup statement to sql>STARTUP OPEN wind; but that makes no difference. I have found that if I exit sql*plus and log back in /nolog (in the same shell window) then I can restart the instance, what I do is:
sqlplus /nolog
sql>connect sys/* as sysdba;
sql>startup open wind;
This does successfully start the Oracle instance, mount and open the database. How else might I do this? I can certainly duplicate these actions, adding the necessary lines to oracle_start_stop.sql, but is there a better & cleaner way to do this? Right now I just want to make these existing scripts work. Once that is done, then I'm going to investigate and learn more about RMAN and datapump export/import and update our backup procedures.
Any thoughts & suggestions?
Thank you,
Tom
It might be a difference in behaviour between 9i and 10g. It looks like you are connecting through SQL*Net (sqlplus "system/manager@wind as sysdba"), so the ORACLE_SID setting is not part of your connection. When the database is shut down you are no longer connected to any database. When you issue the startup command the ORACLE_SID is not set, so there is no service specified.
If you are running this on the database server, try setting the ORACLE_SID then running your script like:
sqlplus "system/manager as sysdba" @D:\ptc\windchill\backups\oracle_stop_start
and you should still have the service set for your connection.
John -
How to move E-Business from HP-UX to AIX
Do you have any reference on how to move E-Business suite to different operating systems or more specifically from HP-UX to AIX? Thx.
We have done the platform move and upgrade at the same time:
RHEL 2.1 -> AIX 5.3
DB 9.2 -> DB 10.2
eBS 11.5.9 -> eBS 11.5.10.CU2
High level steps were:
1. take a copy of the 9i database to a staging area (using data guard etc)
2. upgrade database to 10g to get datapump capabilities (we had to upgrade to 10.1 since 10.2 was not certified on RHEL 2.1)
3. datapump export all metadata and data (see the sketch after this list)
4. create fresh eBS install on target platform, then delete all database objects
5. Create new database,
6. datapump import (from step 3) and recreate indexes
7. upgrade database to 10gR2
8. upgrade apps to 11.5.10CU2
9. re-install upgraded CEMLI components
10. validate and reconcile data
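A hedged sketch of what steps 3 and 6 look like at the command level (directory, file names, and parallel settings are assumptions, not the exact commands we used):
expdp system FULL=y DIRECTORY=mig_dir DUMPFILE=ebs_full_%U.dmp PARALLEL=4 LOGFILE=ebs_exp.log
impdp system FULL=y DIRECTORY=mig_dir DUMPFILE=ebs_full_%U.dmp PARALLEL=4 LOGFILE=ebs_imp.log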
We are preparing a presentation for AUOUG that will describe this in more detail; we are happy to share it as soon as it is ready.
If you are not doing an upgrade (of the DB and eBS) at the same time as the migration, your process can be simplified.
Usage of the 10g Data Pump export/import was an essential part of our strategy.
We decided not to use transportable tablespaces, as there is no documented way to do this with an 11i DB (the SYSTEM tablespace is not transportable, which causes a number of problems with AQ, etc.).
If you are moving to another platform that has the same endian format (AIX and HPUX are both big endian), and you are on DB 10.2 (or ready to upgrade to 10.2) you can use Transportable Database
Transportable Database is a new feature in Oracle Database 10g Release 2 (10.2) that is the recommended method for migrating an entire database to another platform that has the same endian format. The principal restriction on cross-platform transportable database is that the source and destination platforms must share the same endian format. For example, while you can transport a database from Microsoft Windows to Linux for x86 (both little-endian), or from HP-UX to AIX (both big-endian), you cannot transport a whole database from HP-UX to Linux for x86 using this feature.
More info on http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/dbxptrn002.htm#sthref1397
For the 11i middle tier, the safest and easiest way is to re-install it from CD, and then add your custom CEMLI bits.
Regards,
Radomir