Oracle Unicode Migration Scan Issue
When converting a database to unicode, how do I get past the scanning errors caused by a table without rows, such as some of those in the Apex schemas?
I get a red circle with a white X for any table that has no rows (such as the BONUS table in the SCOTT schema).
If I add a row to the SCOTT.BONUS table and then rescan it with the DMU version 1.2 tool, the red-circle-with-white-X message goes away.
(I have the DMU version 1.2 tool installed locally on Windows 7, and am connecting to an 11g database on Linux 5.)
There are at least hundreds, and in my case thousands, of tables that have no rows.
So unless I am missing something, the DMU tool won't be of any use until there is a workaround or fix for scanning tables without rows.
- Mark
I can scan the SCOTT.BONUS table successfully even when it has no rows. Are you able to select from the table in SQL*Plus? What error message do you see when you highlight the failing table in the DMU scan progress tab?
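If the failure really correlates with empty tables, two quick checks may narrow it down (a sketch only; SCOTT.BONUS stands in for any failing table, and the SEGMENT_CREATED column assumes an 11.2 dictionary). On 11g, deferred segment creation can leave a brand-new empty table with no segment at all, which would also be consistent with the observation that inserting a single row makes the scan succeed:

```sql
-- Is the table readable at all outside the DMU?
SELECT COUNT(*) FROM scott.bonus;

-- Does the empty table have a segment yet? 'NO' means deferred
-- segment creation has not allocated one (11.2+ dictionary column).
SELECT segment_created
  FROM dba_tables
 WHERE owner = 'SCOTT'
   AND table_name = 'BONUS';
```

If SEGMENT_CREATED comes back 'NO' for the failing tables, that detail (plus the exact DMU error text) would be worth including in the reply.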
Similar Messages
-
Oracle 11g Migration performance issue
Hello,
There is a performance issue with the migration from Oracle 10g (10.2.0.5) to Oracle 11g (11.2.0.2).
A very simple statement hangs for more than a day; we later found that the query plan is very, very bad. An example of the query is given below:
INSERT INTO TABLE_XYZ
SELECT F1,F2,F3
FROM TABLE_AB, TABLE_BC
WHERE F1=F4;
Looking at the cost in the explain plan:
on 10g --> 62567
on 11g --> 9879652356776
The strange thing is that:
Scenario 1: if I issue just the query, as shown below, it displays rows immediately:
SELECT F1,F2,F3
FROM TABLE_AB, TABLE_BC
WHERE F1=F4;
Scenario 2: if I create a table as shown below, it works correctly.
CREATE TABLE TABLE_XYZ AS
SELECT F1,F2,F3
FROM TABLE_AB, TABLE_BC
WHERE F1=F4;
What could be the issue here with INSERT INTO <TAB> SELECT <COL> FROM <TAB1>?

Table:
CREATE TABLE AVN_WRK_F_RENEWAL_TRANS_T (
"PKSRCSYSTEMID" NUMBER(4,0) NOT NULL ENABLE,
"PKCOMPANYCODE" VARCHAR2(8 CHAR) NOT NULL ENABLE,
"PKBRANCHCODE" VARCHAR2(8 CHAR) NOT NULL ENABLE,
"PKLINEOFBUSINESS" NUMBER(4,0) NOT NULL ENABLE,
"PKPRODUCINGOFFICELIST" VARCHAR2(2 CHAR) NOT NULL ENABLE,
"PKPRODUCINGOFFICE" VARCHAR2(8 CHAR) NOT NULL ENABLE,
"PKEXPIRYYR" NUMBER(4,0) NOT NULL ENABLE,
"PKEXPIRYMTH" NUMBER(2,0) NOT NULL ENABLE,
"CURRENTEXPIRYCOUNT" NUMBER,
"CURRENTRENEWEDCOUNT" NUMBER,
"PREVIOUSEXPIRYCOUNT" NUMBER,
"PREVIOUSRENEWEDCOUNT" NUMBER
) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE (
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT )
TABLESPACE "XYZ";
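Two low-cost first steps for a cost explosion like this, before comparing the two plans below line by line (a sketch only; EXEC is SQL*Plus shorthand, and gathering statistics on AVN_F_TRANSACTIONS is just an assumption about which stats may be stale):

```sql
-- Refresh optimizer statistics on one of the tables in the join
-- (repeat for the other tables involved).
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'ODS', tabname => 'AVN_F_TRANSACTIONS');

-- Capture the estimated plan of the slow INSERT ... SELECT so it can be
-- compared directly with the plan of the plain SELECT.
EXPLAIN PLAN FOR
  INSERT INTO table_xyz
  SELECT f1, f2, f3 FROM table_ab, table_bc WHERE f1 = f4;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

If the two plans differ only in the INSERT case, that points at an optimizer transformation applied during DML rather than at the data itself.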
Explain Plan (with INSERT statement and query):
INSERT STATEMENT, GOAL = ALL_ROWS Cost=9110025395866 Cardinality=78120 Bytes=11952360
LOAD TABLE CONVENTIONAL Object owner=ODS Object name=AVN_WRK_F_RENEWAL_TRANS
NESTED LOOPS OUTER Cost=9110025395866 Cardinality=78120 Bytes=11952360
TABLE ACCESS FULL Object owner=ODS Object name=AVN_WRK_F_RENEWAL_TRANS_1ST Cost=115 Cardinality=78120 Bytes=2499840
VIEW PUSHED PREDICATE Object owner=ODS Cost=116615788 Cardinality=1 Bytes=121
SORT GROUP BY Cost=116615788 Cardinality=3594 Bytes=406122
VIEW Object owner=SYS Object name=VW_DAG_1 Cost=116615787 Cardinality=20168 Bytes=2278984
SORT GROUP BY Cost=116615787 Cardinality=20168 Bytes=4073936
NESTED LOOPS OUTER Cost=116614896 Cardinality=20168 Bytes=4073936
VIEW Object owner=SYS Cost=5722 Cardinality=20168 Bytes=2157976
NESTED LOOPS Cost=5722 Cardinality=20168 Bytes=2097472
HASH JOIN Cost=924 Cardinality=1199 Bytes=100716
NESTED LOOPS
NESTED LOOPS Cost=181 Cardinality=1199 Bytes=80333
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=159 Cardinality=1199 Bytes=39567
INDEX RANGE SCAN Object owner=ODS Object name=IX_INWPOLDTLS_SYSCOMPANYBRANCH Cost=7 Cardinality=1199
INDEX UNIQUE SCAN Object owner=ODS Object name=PK_AVN_D_MASTERPOLICYDETAILS Cost=0 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=1 Cardinality=1 Bytes=34
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=288498 Bytes=4904466
VIEW PUSHED PREDICATE Object owner=ODS Cost=4 Cardinality=1 Bytes=20
FILTER
SORT AGGREGATE Cardinality=1 Bytes=21
TABLE ACCESS BY GLOBAL INDEX ROWID Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=4 Cardinality=1 Bytes=21
INDEX RANGE SCAN Object owner=ODS Object name=PK_AVN_F_TRANSACTIONS Cost=3 Cardinality=1
VIEW PUSHED PREDICATE Object owner=ODS Cost=5782 Cardinality=1 Bytes=95
SORT GROUP BY Cost=5782 Cardinality=2485 Bytes=216195
VIEW Object owner=SYS Object name=VW_DAG_0 Cost=5781 Cardinality=2485 Bytes=216195
SORT GROUP BY Cost=5781 Cardinality=2485 Bytes=278320
HASH JOIN Cost=5780 Cardinality=2485 Bytes=278320
VIEW Object owner=SYS Object name=VW_GBC_15 Cost=925 Cardinality=1199 Bytes=73139
SORT GROUP BY Cost=925 Cardinality=1199 Bytes=100716
HASH JOIN Cost=924 Cardinality=1199 Bytes=100716
NESTED LOOPS
NESTED LOOPS Cost=181 Cardinality=1199 Bytes=80333
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=159 Cardinality=1199 Bytes=39567
INDEX RANGE SCAN Object owner=ODS Object name=IX_INWPOLDTLS_SYSCOMPANYBRANCH Cost=7 Cardinality=1199
INDEX UNIQUE SCAN Object owner=ODS Object name=PK_AVN_D_MASTERPOLICYDETAILS Cost=0 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=1 Cardinality=1 Bytes=34
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=288498 Bytes=4904466
VIEW Object owner=SYS Object name=VW_GBF_16 Cost=4854 Cardinality=75507 Bytes=3850857
SORT GROUP BY Cost=4854 Cardinality=75507 Bytes=2340717
VIEW Object owner=ODS Cost=4207 Cardinality=75507 Bytes=2340717
SORT GROUP BY Cost=4207 Cardinality=75507 Bytes=1585647
PARTITION HASH ALL Cost=3713 Cardinality=75936 Bytes=1594656
TABLE ACCESS FULL Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=3713 Cardinality=75936 Bytes=1594656
Explain Plan (query only):
SELECT STATEMENT, GOAL = ALL_ROWS Cost=62783 Cardinality=89964 Bytes=17632944
HASH JOIN OUTER Cost=62783 Cardinality=89964 Bytes=17632944
TABLE ACCESS FULL Object owner=ODS Object name=AVN_WRK_F_RENEWAL_TRANS_1ST Cost=138 Cardinality=89964 Bytes=2878848
VIEW Object owner=ODS Cost=60556 Cardinality=227882 Bytes=37372648
HASH GROUP BY Cost=60556 Cardinality=227882 Bytes=26434312
VIEW Object owner=SYS Object name=VW_DAG_1 Cost=54600 Cardinality=227882 Bytes=26434312
HASH GROUP BY Cost=54600 Cardinality=227882 Bytes=36005356
HASH JOIN OUTER Cost=46664 Cardinality=227882 Bytes=36005356
VIEW Object owner=SYS Cost=18270 Cardinality=227882 Bytes=16635386
HASH JOIN Cost=18270 Cardinality=227882 Bytes=32587126
HASH JOIN Cost=12147 Cardinality=34667 Bytes=2912028
HASH JOIN Cost=10076 Cardinality=34667 Bytes=2322689
TABLE ACCESS FULL Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=137 Cardinality=34667 Bytes=1178678
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=9934 Cardinality=820724 Bytes=27083892
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=866377 Bytes=14728409
VIEW Object owner=ODS Cost=5195 Cardinality=227882 Bytes=13445038
HASH GROUP BY Cost=5195 Cardinality=227882 Bytes=4785522
PARTITION HASH ALL Cost=3717 Cardinality=227882 Bytes=4785522
TABLE ACCESS FULL Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=3717 Cardinality=227882 Bytes=4785522
VIEW Object owner=ODS Cost=26427 Cardinality=227882 Bytes=19369970
HASH GROUP BY Cost=26427 Cardinality=227882 Bytes=18686324
VIEW Object owner=SYS Object name=VW_DAG_0 Cost=26427 Cardinality=227882 Bytes=18686324
HASH GROUP BY Cost=26427 Cardinality=227882 Bytes=25294902
HASH JOIN Cost=20687 Cardinality=227882 Bytes=25294902
VIEW Object owner=SYS Object name=VW_GBC_15 Cost=12826 Cardinality=34667 Bytes=2080020
HASH GROUP BY Cost=12826 Cardinality=34667 Bytes=2912028
HASH JOIN Cost=12147 Cardinality=34667 Bytes=2912028
HASH JOIN Cost=10076 Cardinality=34667 Bytes=2322689
TABLE ACCESS FULL Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=137 Cardinality=34667 Bytes=1178678
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=9934 Cardinality=820724 Bytes=27083892
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=866377 Bytes=14728409
VIEW Object owner=SYS Object name=VW_GBF_16 Cost=7059 Cardinality=227882 Bytes=11621982
HASH GROUP BY Cost=7059 Cardinality=227882 Bytes=6836460
VIEW Object owner=ODS Cost=5195 Cardinality=227882 Bytes=6836460
HASH GROUP BY Cost=5195 Cardinality=227882 Bytes=4785522
PARTITION HASH ALL Cost=3717 Cardinality=227882 Bytes=4785522
TABLE ACCESS FULL Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=3717 Cardinality=227882 Bytes=4785522 -
We have Oracle 11gR2 (11.2.0.1) RAC (2-node) on HP-UX, and we have some issues with the SCAN listener configuration.
Issue 1:
LISTENER_SCAN1 is running on node 1, and LISTENER_SCAN2 and LISTENER_SCAN3 are running on node 2; this is how it always runs. We have three RAC databases (cadprod, omsprod, and eaiprod) running on this cluster. Both instances of cadprod and eaiprod are registered with all three SCAN listeners.
But omsprod is having issues: omsprod1 and omsprod2 are registered with LISTENER_SCAN1 on node 1, but omsprod2 does not register with LISTENER_SCAN2 and LISTENER_SCAN3 on node 2, and sometimes we are not able to connect to the second instance (omsprod2).
Issue 2:
If node 1 goes down, all three databases fail over to node 2; after that, we are not able to connect to the omsprod and eaiprod databases, and the client side reports the following error:
ORA-12516, TNS:listener could not find available handler with matching protocol stack
Once node 1 comes back, it allows the connections again. But cadprod does not have this issue; we are able to connect to it after the failover.
Listener Status:
oracle@hublhp1:/home/oracle$ lsnrctl status LISTENER_SCAN1
LSNRCTL for HPUX: Version 11.2.0.1.0 - Production on 04-MAR-2011 09:53:29
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
Alias LISTENER_SCAN1
Version TNSLSNR for HPUX: Version 11.2.0.1.0 - Production
Start Date 28-FEB-2011 13:40:09
Uptime 3 days 20 hr. 13 min. 20 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /app/oracle/grid/product/11.2.0/network/admin/listener.ora
Listener Log File /app/oracle/grid/product/11.2.0/log/diag/tnslsnr/hublhp1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.35.110)(PORT=1521)))
Services Summary...
Service "cadprod" has 2 instance(s).
Instance "cadprod1", status READY, has 1 handler(s) for this service...
Instance "cadprod2", status READY, has 1 handler(s) for this service...
Service "eaiprod" has 2 instance(s).
Instance "eaiprod1", status READY, has 1 handler(s) for this service...
Instance "eaiprod2", status READY, has 1 handler(s) for this service...
Service "eaiprodXDB" has 2 instance(s).
Instance "eaiprod1", status READY, has 1 handler(s) for this service...
Instance "eaiprod2", status READY, has 1 handler(s) for this service...
Service "omsprod" has 2 instance(s).
Instance "omsprod1", status READY, has 1 handler(s) for this service...
Instance "omsprod2", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@hublhp3:/home/oracle$ lsnrctl status LISTENER_SCAN2
LSNRCTL for HPUX: Version 11.2.0.1.0 - Production on 04-MAR-2011 09:52:40
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2)))
STATUS of the LISTENER
Alias LISTENER_SCAN2
Version TNSLSNR for HPUX: Version 11.2.0.1.0 - Production
Start Date 28-FEB-2011 12:23:33
Uptime 3 days 21 hr. 29 min. 6 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /app/oracle/grid/product/11.2.0/network/admin/listener.ora
Listener Log File /app/oracle/grid/product/11.2.0/log/diag/tnslsnr/hublhp3/listener_scan2/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN2)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.35.109)(PORT=1521)))
Services Summary...
Service "cadprod" has 2 instance(s).
Instance "cadprod1", status READY, has 1 handler(s) for this service...
Instance "cadprod2", status READY, has 1 handler(s) for this service...
Service "eaiprod" has 2 instance(s).
Instance "eaiprod1", status READY, has 1 handler(s) for this service...
Instance "eaiprod2", status READY, has 1 handler(s) for this service...
Service "eaiprodXDB" has 2 instance(s).
Instance "eaiprod1", status READY, has 1 handler(s) for this service...
Instance "eaiprod2", status READY, has 1 handler(s) for this service...
Service "omsprod" has 1 instance(s).
Instance "omsprod1", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@hublhp3:/home/oracle$ lsnrctl status LISTENER_SCAN3
LSNRCTL for HPUX: Version 11.2.0.1.0 - Production on 04-MAR-2011 09:52:27
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3)))
STATUS of the LISTENER
Alias LISTENER_SCAN3
Version TNSLSNR for HPUX: Version 11.2.0.1.0 - Production
Start Date 28-FEB-2011 12:23:34
Uptime 3 days 21 hr. 28 min. 53 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /app/oracle/grid/product/11.2.0/network/admin/listener.ora
Listener Log File /app/oracle/grid/product/11.2.0/log/diag/tnslsnr/hublhp3/listener_scan3/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN3)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.35.111)(PORT=1521)))
Services Summary...
Service "cadprod" has 2 instance(s).
Instance "cadprod1", status READY, has 1 handler(s) for this service...
Instance "cadprod2", status READY, has 1 handler(s) for this service...
Service "eaiprod" has 2 instance(s).
Instance "eaiprod1", status READY, has 1 handler(s) for this service...
Instance "eaiprod2", status READY, has 1 handler(s) for this service...
Service "eaiprodXDB" has 2 instance(s).
Instance "eaiprod1", status READY, has 1 handler(s) for this service...
Instance "eaiprod2", status READY, has 1 handler(s) for this service...
Service "omsprod" has 1 instance(s).
Instance "omsprod1", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@hublhp3:/home/oracle$
oracle@hublhp3:/home/oracle$
Node 1:
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))) # line added by Agent
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3)))) # line added by Agent
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2)))) # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))) # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON # line added by Agent
Node 2:
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3)))) # line added by Agent
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2)))) # line added by Agent
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))) # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))) # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON # line added by Agent -
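Back on the two SCAN issues above, two hedged checks often apply to exactly these symptoms (a sketch; the SCAN hostname below is a placeholder for your environment's SCAN name):

```sql
-- Issue 1: an instance only registers with all SCAN listeners when its
-- remote_listener parameter references the SCAN name. Verify and, if
-- needed, set it on each omsprod instance:
SHOW PARAMETER remote_listener
ALTER SYSTEM SET remote_listener = 'your-scan-name:1521' SID = '*' SCOPE = BOTH;
ALTER SYSTEM REGISTER;

-- Issue 2: ORA-12516 right after a failover often just means the surviving
-- node has exhausted its PROCESSES/SESSIONS limits under the doubled load:
SELECT resource_name, current_utilization, max_utilization, limit_value
  FROM v$resource_limit
 WHERE resource_name IN ('processes', 'sessions');
```

If MAX_UTILIZATION is at LIMIT_VALUE after a failover, raising PROCESSES (and SESSIONS) on both nodes would be the usual remedy.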
Oracle Database Migration Assistant for Unicode (DMU) is now available!
Oracle Database Migration Assistant for Unicode (DMU) is a next-generation GUI migration tool that helps you migrate your databases to the Unicode character set. It is free for customers with database support contracts. The DMU is built on the same GUI platform as SQL Developer and JDeveloper. It uses dedicated RDBMS functionality to scan and convert a database to AL32UTF8 (or to the deprecated UTF8, if needed for some reason). For existing AL32UTF8 and UTF8 databases, it provides a validation mode to check whether data is really encoded in UTF-8. Learn more about the tool on its OTN pages.
There is a new Database Migration Assistant for Unicode forum. We encourage you to post all questions related to the tool, and to the database character set migration process in general, in that forum.
Thanks,
The DMU Development Team
Hi there!
7.6.03? Why are you using outdated software for your migration?
At least use 7.6.06 or 7.7.07!
About the performance topic: you have to figure out what the database is waiting for.
Activate time measurement, activate the DBAnalyzer with a short snapshot interval (say 120 or 60 seconds), and check what warnings you get.
You should also use the parameter check to make sure that you don't run into any setup-induced bottlenecks.
Apart from these very basic prerequisites for analyzing this issue, you may want to check
SAP Note 1464560 FAQ: R3load on MaxDB
Maybe you can use some of the performance features available in the current R3load versions.
Regards,
Lars
P.S. Open a support message if you're not able to do the performance analysis yourself. -
Unicode Migration on MaxDB - create indices - R3Load
Hi Folks!
We are currently performing a Unicode migration (CU&UC) of ECC 6.0 on Windows 2003 / MaxDB 7.6.
We are using the Migration Monitor for parallel export and import, with 12 R3load processes on each side.
The export finished successfully, but the import is taking a very long time for index creation.
There is one strange behaviour:
import_state.properties -> all packages are marked with "+" except one:
S006=?
When I take a look at S006.STR, only one table (S006) and two indexes (S006ADQ, S006Z00) are listed.
Taking a look at the import installDir:
S006.TSK shows:
T S006 C ok
P S006~0 C ok
D S006 I ok
I S006~ADQ C ok
S006.TSK.bck shows:
T S006 C xeq
P S006~0 C xeq
D S006 I xeq
I S006~ADQ C xeq
I S006~Z00 C xeq
S006.log shows:
(DB) INFO: S006 created#20081203192452
(DB) INFO: Savepoint on database executed#20081203192453
(DB) INFO: S006~0 created#20081203192453
(DB) INFO: Savepoint on database executed#20081203193256
(IMP) INFO: import of S006 completed (4858941 rows) #20081203193256
(DB) INFO: S006~ADQ created#20081203193351
(DB) INFO: COSP~1 created#20081203193504
(DB) INFO: COSP~2 created#20081203193920
(DB) INFO: RESB~M created#20081203194032
(and many many more)
Only one R3load is running at the moment (the others are finished).
I am not a MaxDB pro, but here is what it looks like to me:
All packages have been imported successfully, and the Migration Monitor is creating the MaxDB indexes using just one R3load, logging to S006.log all the (secondary) indexes that have been created.
This procedure is taking very long (more than 12 hours now). Is this normal behaviour, or is there any way to speed up index creation after the data has been imported?
If I recall correctly, with Oracle each R3load creates the table, imports the data, and creates the indexes right afterwards.
On MaxDB it looks like index creation is performed by just one R3load after all tables have been imported.
I have read a note about MaxDB parallel index creation which might be the reason for this behaviour (only one index creation at a time, using parallel server tasks). But given the import runtime, this doesn't seem to be efficient.
Any ideas or suggestions?
Thanks a lot.
/cheers
You're right with what you see/saw. That behaviour is caused by the way MaxDB creates indexes. It uses "server tasks" to get all the blocks/pages from disk and create the index. If more than one index creation ran in parallel, it would take much more time, because they would "fight" for the maximum number of server tasks.
You can see what the database is doing by executing
x_cons <SID> sh ac 1
Unfortunately, there is no way of changing that behaviour.
Markus -
Sybase 12.5 to Oracle 9i migration: error while capturing through offline capture
Hi,
While migrating data from Sybase 12.5 to Oracle 9i we are getting the following error.
java.sql.SQLException: ORA-01401: inserted value too large for column
Exception :Sybase12DisconnSourceModelLoad.loadSourceModel(): oracle.mtg.migration.MigrationStopException: java.lang.NumberFormatException: For input string: "« ? » ±"
The log shows that it failed while loading data from the sybase12_syssptvalues.dat file. Please advise what could be causing this error?
Regards,
Prasad Jaldu
Prasad,
The data in the column contains the column separator we are using. Either alter the data, load it, and alter the data back, or change the delimiter and re-run the offline capture scripts.
See the user guide, Omwb/docs/usersguide/trouble.htm#sthref455,
or look up "delimiters" in the index of the user guide.
This should solve your NumberFormatException issue.
Regards,
Turloch -
Hi all,
After the Oracle 11g migration, I have been facing issues with OEM scheduling. Twice the job did not run, even after the 11.2.0.7 upgrade. Hence I would like to explore the possibility of scheduling the same jobs as DBMS jobs (we have not faced even a single failure there since migration). However, there is no mail alert available for successful completion of a job. In this regard I would like to have your views, and help creating alerts, to proceed further.
Thankx
Hi Ravi,
The link really helps, even though I have been looking for the root cause of the (scheduled) job failure.
Actually, even after my scheduled job completes/succeeds, I may still get an "INITIALIZATION ERROR" status message, so I need to re-schedule the particular job again for the next day.
Here is the text of the error:
Job Name=OEM_FLEET_ICL_SEND_6_30_AM_DAILY
Job Owner=SYSMAN
Job Type=SQL Script
Target Type=Database Instance
Timestamp=Apr 19, 2011 6:30:44 AM
Status=Initialization Error
Step Output=
Command:Output Log
SQL*Plus: Release 11.2.0.2.0 Production on Tue Apr 19 06:30:15 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
SQL> SQL> SQL> SQL> Connected.
SQL> SQL> SQL> SQL>
PL/SQL procedure successfully completed.
SQL> SP2-0103: Nothing in SQL buffer to run.
SQL>
Commit complete.
SQL> SQL> SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
~~~End Step Output/Error Log~~~
Kindly advise me on this.
Thanks
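On the original question of a mail alert for successful completion: if the jobs can be defined as DBMS_SCHEDULER jobs (rather than classic DBMS_JOB), 11.2 can send the notification natively (a sketch; the SMTP server, job name, and recipient below are placeholders):

```sql
BEGIN
  -- Tell the scheduler which SMTP relay to use (placeholder host:port).
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('email_server', 'smtp.example.com:25');

  -- Send mail when the job succeeds or fails.
  DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION(
    job_name   => 'MY_NIGHTLY_JOB',
    recipients => 'dba@example.com',
    events     => 'JOB_SUCCEEDED, JOB_FAILED');
END;
/
```

This keeps the alerting inside the database, independent of OEM's job system.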
IBM DB2 to Oracle Database Migration Using SQL Developer
Hi,
We are migrating the whole database from IBM DB2 8.2, which is running on Windows, to an Oracle 11g database on Linux.
As part of the prerequisites, we have installed Oracle SQL Developer 4.0.1 (4.0.1.14.48) on the Linux server with JDK 1.7, and established a connection with the Oracle database.
Questions:
1) How can we enable third-party database connectivity in SQL Developer?
I have copied the files db2jcc.jar and db2jcc_license_cu.jar from IBM DB2 (Windows) to Oracle (Linux).
2) Are these JAR files universal drivers? Will these JAR files work on the Linux platform?
3) I have a DB2 full-privileged schema named "assistdba". Shall I create a new user with the same name "assistdba" in the Oracle database and grant it the DBA privilege? (This is for repository creation.)
4) We have around 35 GB of data in DB2; shall I proceed with ONLINE CAPTURE during the migration?
5) Do you have any approximate estimate of the time needed to migrate 35 GB of data?
6) In case of any issue during the migration activity, can I get support from the Oracle team (we have a valid support ID)?
7) What are the necessary test cases to confirm the status of a VALID migration?
Please share the relevant Metalink documents!
Kindly guide me in order to go ahead with a successful migration.
Thanks in advance!
Nagu
[email protected]
Hi Klaus,
Continuing from the posts above: we are now doing another database migration from IBM DB2 to Oracle, this time with much less data (e.g., 20 tables & 22 indexes).
As with the previous database migration, we have done the prerequisite steps.
DB Using SQL Developer
Created Migration Repository
Connected with the created User in SQL Developer
Captured the Source Database
Converted Captured Model to Oracle
Before the translation phase, we clicked on the "Proceed Summary".
Captured database objects & converted database objects have been created under the PROJECT section.
While checking the status of the captured & converted database objects, it shows the chart below as a sample:
OVERVIEW
PHASE TABLE DETAILS TABLE PCT
CAPTURE 20/20 100%
CONVERT 20/20 100%
COMPILE 0/20 0%
TARGET STATUS
DESC_OBJECT_NAME  SCHEMANAME  OBJECTNAME  STATUS
INDEX             TRADEIN1    ARG_I1      Missing
INDEX             TRADEIN1    H0INDEX01   Missing
INDEX             TRADEIN1    H1INDEX01   Missing
INDEX             TRADEIN1    H2INDEX01   Missing
INDEX             TRADEIN1    H3INDEX01   Missing
INDEX             TRADEIN1    H4INDEX01   Missing
INDEX             TRADEIN1    H4INDEX02   Missing
INDEX             TRADEIN1    H5INDEX01   Missing
INDEX             TRADEIN1    H7INDEX01   Missing
INDEX             TRADEIN1    H7INDEX02   Missing
INDEX             TRADEIN1    MAPIREP1    Missing
INDEX             TRADEIN1    MAPISWIFT1  Missing
INDEX             TRADEIN1    MAPITRAN1   Missing
INDEX             TRADEIN1    OBJ_I1      Missing
INDEX             TRADEIN1    OPR_I1      Missing
INDEX             TRADEIN1    PRD_I1      Missing
INDEX             TRADEIN1    S1TABLE01   Missing
INDEX             TRADEIN1    STMT_I1     Missing
INDEX             TRADEIN1    STM_I1      Missing
INDEX             TRADEIN1    X0IAS39     Missing
We see only "Missing" in the chart, and we have no option to trace it in a log file.
Only after the status is VALID can we proceed with the translation & migration phases.
Kindly help us with how to approach this issue now.
Thanks
Nagu -
Oracle Database migration to Exadata
Dear Folks,
I have a requirement to migrate our existing Oracle Database to Exadata Machine. Below is the source & destination details:
Source:
Oracle Database versions 11.1.0.6 & 11.2.0.3
Non-Exadata Server
Linux Environment
DB Size: 12TB
Destination:
Oracle Exadata 12.1
Oracle Database 12.1
Linux Environment
System downtime would be available for 24-30 hours.
Kindly clarify below:
1. Do we need to upgrade the source database (either 11.1 or 11.2) to 12c before migration?
2. Is there any upgrade activity after migration?
3. Which migration method is best suited in our case?
4. What should be noted before the migration activity?
Thanks for your valuable inputs.
Regards
Saurabh
Saurabh,
1. Do we need to upgrade the source database (either 11.1 or 11.2) to 12c before migration?
This would help if you wanted to drop the database in place, as it would allow either a standby database to be used (which reduces downtime) or a backup-and-recovery approach to move the database as-is onto the Exadata. However, it does not give you the chance to put in some things that could help you on the Exadata, such as additional or adjusted partitioning, Advanced Compression, and HCC compression.
2. Any upgrade activity after migration?
If you upgrade the current environment first, there would not be additional work. However, if you do not, you will need to explore a few options, depending on your requirements and desires for your Exadata.
3. Which migration method is best suited in our case?
I would suggest some conversations with Oracle and/or a trusted firm that has done a few Exadata implementations, to explore your migration options and what would be best for your environment; that depends on a lot of variables that are hard to cover completely in a forum. At a high level, when moving to Exadata I typically recommend setting up the database to utilize the features of the Exadata for best results. The Exadata migrations I have done so far were done using GoldenGate: we examine the partitioning of tables, partition the ones that make sense, and implement Advanced Compression and HCC compression where it makes sense. This gives us an environment that fits the Exadata, rather than dropping an existing database in place (though that also works very well). Doing it with GoldenGate eliminates the migration issues arising from the database version difference, as well as other potential migration issues, and offers the most flexibility. There is a cost for GoldenGate to be aware of, so it may not work for you; but GoldenGate will keep your downtime way down and give you the opportunity to ensure that the upgrade/implementation is smooth, by allowing real-workload testing beforehand.
4. Things to be noted before migration activity?
Again, I would suggest some conversations with Oracle and/or a trusted firm that has done a few Exadata implementations, to explore your migration options and what would be best for your environment. In short, keep in mind that Exadata is a platform with some advantages no other platform can offer; while a drop-in-place migration works and does bring improvements, it is nothing compared to what is possible if you plan well and implement the features Exadata has to offer. Using Real Application Testing (Database Replay) and Flashback Database allows you to implement the features, test them with a real workload, and tune them well before production day, so you can be nearly 100% confident that you have a well-tuned system on the Exadata before going live. GoldenGate lets you keep a database in sync while running many workload replays on the Exadata without losing the sync, giving you the time and ability to test different partitioning and compression options. Very nice flexibility.
Hope this helps...
Mike Messina -
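As a rough illustration of the partitioning and HCC compression options mentioned above, a sketch might look like this (table names, columns, and date ranges are hypothetical, not from the original post; HCC requires Exadata or other supported Oracle storage):

```sql
-- Hypothetical: range-partitioned history table with Hybrid Columnar
-- Compression (HCC) applied to older, read-mostly partitions only.
CREATE TABLE sales_hist (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER(12,2)
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01')
    COMPRESS FOR QUERY HIGH,                 -- HCC warehouse compression
  PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01')
    COMPRESS FOR QUERY HIGH,
  PARTITION p_cur VALUES LESS THAN (MAXVALUE) -- active data, uncompressed
);
```

The idea is that hot partitions stay uncompressed for fast DML while cold partitions get the storage and scan benefits of HCC.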
Can UCCHECK be performed on already UNICODE migrated system during ECC upgrade
Hi there,
Can UCCHECK be performed on already UNICODE migrated system during ECC upgrade?
If NO, why? What is the technical reason?
Prakash
Hi,
First, what Unicode means:
Unicode is a 16-bit code representing a universal character set, which is used to facilitate a better exchange of data between different systems.
As for UCCHECK:
The transaction UCCHECK is used to find Unicode-related issues. You can use transaction UCCHECK to examine a set of programs for Unicode syntax errors without having to set the program attribute "Unicode checks active" for every individual program. From the list of Unicode syntax errors, you can go directly to the affected programs and remove the errors. It is also possible to automatically create transport requests and set the Unicode program attribute for a set of programs.
So why are you checking a second time?
Regards,
Madhu -
Can I separate ABAP conversion from the Unicode Migration Process?
Hi all,
I don't have detailed technical information about the ABAP programs, and I need to deliver a Unicode migration project on a tight schedule. Because of that, we're trying to find ways to decrease the total time.
As the SAP guide says, we must convert the ABAP programs before the migration. When we check them, only the ABAP programs our own developers wrote need to be converted. So the question is: is it possible to separate these steps? I mean, proceed with the Unicode migration without converting the ABAP programs and, meanwhile, convert the ABAP programs in another SAP system (similar to the live one). That would save some time; we would then import the converted ABAP programs into the migrated Unicode system. Is it possible?
Any help will be appreciated, thanks in advance.
I am currently using Postbox during the trial period because the frustration of trying to solve the ridiculous problem I posted about above was too much. Unlike most people, I won't rant and rave about OS X's decline after Lion and Mountain Lion, as I never had major issues after upgrading to either of them. It's just beyond me that a problem of this magnitude can break an app within something that is just supposed to "work".
Anyway, I was wondering if anyone who uses Postbox might be able to offer a suggestion or two about how to import mailboxes from a Time Machine backup? I would still like to have access to those folders which were "On My Mac", with all of the archived messages contained in each of them. I know that you can import from Apple Mail, but only if it is currently set up the way you want it to be. Since I no longer have those settings (I can't restore to the point before I deleted all the accounts, etc. from Mail in my frustration), I'm sure I can't just point Postbox to some plist file to have it mimic my old Mail setup.
Anyone? -
ECC6 CUUC and Oracle 11 migration
Hi experts!
I would like to know if you have already done a SAP ECC6/EHP4 upgrade combined with a Unicode conversion (CUUC), coupled with an Oracle database migration from 10.2 to 11.2 performed during the Unicode DB import (i.e., a heterogeneous OS/DB migration at that stage: the Oracle 10.2 DB export parallelized with the Oracle 11.2 DB import).
Is the CUUC coupled with Oracle 11 migration certified by SAP?
For information, the source system is 4.6c.
Thanks for your feedback.
Chris
Hi, thanks for your answer.
Yes, SAP ERP6 works fine with Oracle 10.2.
Our customer target is Oracle 11 in our case.
I understood from your message that upgrading to Oracle 11 can be done while doing the Unicode conversion.
Do you know if this procedure is certified (or agreed) by SAP when done during the CUUC procedure?
It makes sense to me, but I wanted confirmation, as I did not find any notes or guides relating to the CUUC process AND the option of upgrading to Oracle 11 at the same time...
Thanks for your insight.
Chris -
Paper Jam and Scanning issues for the HP Officejet 6600
I had to send my printer in under warranty for the same issue, and the refurbished one is worse.
Hi @Uguessedit,
Welcome to the HP Forums!
I am sorry to hear that your HP Officejet 6600 refurbished printer is worse than your previous printer, but I am happy to help!
For further assistance, I will need some additional information:
If you are using a Windows or Mac Operating System, and the version number. To find the exact version, visit this link. Whatsmyos.
If the printer is connected, Wireless, or USB.
If the printer is able to make copies. Copying a Multipage Original in Order (Collating).
If the power cable is plugged into a surge protector, or directly to the wall outlet. Issues when Connected to an Uninterruptible Power Supply/Power Strip/Surge Protector. This applies to Inkjet printers as well.
What type of scanning issues are you having? Are you getting any errors?
In the meantime, please see this guide, A 'Paper Jam' Error Displays on the HP Officejet 6600 e-All-in-One and 6700 Premium e-All-in-One Pri....
Thank you for posting, and have a good day!
RnRMusicMan
I work on behalf of HP
Acrobat 7 Pro Windows Scanning Issues for Non-Admins
Hello,
We have several users that are having scanning issues. We recently removed administrator rights from our end users, and now for some reason Acrobat 7 Professional doesn't show any scanners in the list when they try to create a PDF from the scanner. Windows shows the scanner in the Scanners and Cameras section of Control Panel. Any help would be appreciated.
-Jared
I was able to install it on Win 7, even the 64-bit version. But neither AcroTray nor the Adobe PDF print driver for AA 7 works on Win 7. That means you have to use workarounds to get things done if you succeed in installing. It is probably time for you to buy a new copy.
-
Oracle Database Migration Verifier - Can it be used for different table structures
Hello,
We have re-engineered the existing Sybase tables into a new structure in Oracle for a few of the tables. For example, one table in Sybase is normalized into two tables in Oracle. In these cases, can the "Oracle Database Migration Verifier" be mapped such that the columns of the one table in Sybase map to the two tables in Oracle with their respective column names?
In a gist, can the tool be used even if the structure is not the same in the source and target databases?
Please let me know if you need more clarifications regarding my query.
Regards,
Ramanathan.
Not really. The DMV was a simple tool for verifying that what you now had in Oracle was what you had in Sybase. It does not do what you are expecting.
B
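For illustration, the kind of restructuring described above — splitting one denormalized Sybase table into two normalized Oracle tables — could be sketched as follows (all table and column names here are hypothetical assumptions, not from the original post):

```sql
-- Hypothetical denormalized source: customer data repeated on every order row.
-- Split into a customers table plus an orders table that references it.
CREATE TABLE customers AS
  SELECT DISTINCT cust_id, cust_name, cust_city
  FROM legacy_orders;              -- one row per distinct customer

CREATE TABLE orders AS
  SELECT order_id, cust_id, order_date, amount
  FROM legacy_orders;              -- customer detail columns dropped

ALTER TABLE customers ADD CONSTRAINT pk_customers PRIMARY KEY (cust_id);
ALTER TABLE orders ADD CONSTRAINT fk_orders_cust
  FOREIGN KEY (cust_id) REFERENCES customers (cust_id);
```

Verifying such a migration would require a column-level mapping from the single source table to both targets, which is the kind of mapping the answer above says the DMV does not support.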