Capture of SQL7 database on OMB
When I perform a Capture in OMW ver. 1.2.4 it gives me the
error "Failed to truncate Source Model. ORA-00942: table or
view does not exist".
I would appreciate any help.
Thanks.
Anish,
You can work around this by creating a new user (or by deleting and
recreating the repository user) and restarting the Workbench
using the new login. The Workbench will then rebuild the
repository.
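For illustration, a minimal sketch of creating a fresh repository user (the username, password and grants here are assumptions; grant whatever privileges your Workbench version actually requires):
-- Run as a DBA; names and privileges are illustrative only
CREATE USER omwb_rep IDENTIFIED BY omwb_rep
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;
GRANT CONNECT, RESOURCE TO omwb_rep;
-- then restart the Workbench and log in as OMWB_REP so it rebuilds the repository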
If this situation is easy to reproduce and a test case
is available, it should be easy to fix.
Turloch
Oracle Migration Workbench Team
Anish (guest) wrote:
: When I perform a Capture in OMW ver. 1.2.4 it gives me the
: error "Failed to truncate Source Model. ORA-00942: table or
: view does not exist".
: I would appreciate any help.
: Thanks.
Similar Messages
-
Design Capture of AS400 "Database" Files
I would like to know if anyone has any experience with the Design Capture of AS400 file information. We are using Designer 6/6i and we have the Transparent Gateway running on the AS400. We can access and download data from the AS400 easily because the Transparent Gateway makes files look like Oracle tables. But we cannot seem to connect using Designer.
Any help would be appreciated.
I read somewhere that SQL*Loader doesn't support FileMaker. You might have to export from FMP to Access and then to Oracle. PowerDesigner might help also.
Sorry, not much help. -
E-R Design Capture for a Database.
Hi.
I am using Oracle 8.1.7, and all the tables for my application are owned by a user named newscott.
Now I want to create an E-R diagram for my application, in other words reverse engineer it.
Which tools will I need to do this, and which steps do I have to follow?
I would very much appreciate any help.
Sachin
Sachin,
The Design Editor is the place to start. Reverse engineer your tables into the Designer Repository. From there you can then do a table to entity retrofit. This is a simple one to one mapping and will require further manual work to arrive at the true logical data model.
David -
Oracle distributed doc capture - database error
Hi all,
I am working on a capture/archive project and I implemented ODDC (installation and integration).
I did all the configuration, including the scanning profiles (the last step).
Then, when I tested the system, I found that no data is saved on scanning, and after commit the only data is in the commit table that I built and mapped to the index fields, plus the ECAudit table; there is no data in tables like ECBatches and others.
I also see more than one column of type BLOB in many of the capture system core tables (for example ECBatches), and these tables contain no data at all.
I am using the evaluation period license,
Oracle 10g Release 2,
Windows Server 2003 Standard Edition R2 SP2.
Any ideas, please?
It has started, but is there any BLOB field in the capture system's usual database, i.e. the system database that Capture builds? Is there any BLOB field? Any idea, please?
What about the check integrity utility: is it stable? What can it do for me?
I am tired of this problem; can anyone help me, please?
Is there any data that must be added to the database before the commit operation?
I don't have any data beforehand, and not all tables collect data; only ECAUDIT and my commit table, where the index fields are saved. I named it DBCOMMIT. Details below:
Database info:
user: capture
password: capture
database: capture
Commit table: DBCOMMIT
Column name   Data type   Size   Description
PERMIT_ID     integer
DOC_TYPE      varchar2    500    from 1 to 14, from a pick list
AREA_ID       integer            default = 09
SCAN_DATE     date
USER_ID       integer
DOC_PATH      varchar2    500    document path on the Capture server
INDEX_DATE    date
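For reference, the commit table described above corresponds roughly to the following DDL (a sketch derived only from the column list; adjust sizes and defaults as needed):
CREATE TABLE dbcommit (
  permit_id   INTEGER,
  doc_type    VARCHAR2(500),        -- value from 1 to 14, from a pick list
  area_id     INTEGER DEFAULT 9,    -- described as "default = 09"
  scan_date   DATE,
  user_id     INTEGER,
  doc_path    VARCHAR2(500),        -- document path on the Capture server
  index_date  DATE
);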
When I run a SELECT statement such as:
select * from ECBATCHES;
I get the error SP2-0678, which means:
The "SP2" error messages are messages issued by SQL*Plus. The SP2-0678 error message text is "Column or attribute type can not be displayed by SQL*Plus." What you probably tried to do is to dump a binary data type, i.e. BLOB, to the screen in SQL*Plus. The problem is that SQL*Plus can only handle text data. It cannot handle binary data. Therefore, you are getting an error message telling you that this data type can not be displayed in your SQL*Plus session. Try your SELECT statement again but eliminate the BLOB column(s) from your query.
The question is why some columns in tables like ECBATCHES are of type BLOB while the other columns are VARCHAR2 and INTEGER, and why these columns are empty. There is an error somewhere, but I tried reinstalling on a new machine and database and nothing changed.
Any ideas, please? :(
Edited by: yos on Nov 21, 2008 3:56 AM -
Migration - dropping multiple objects from captured database
Hi,
I am doing some migrations from MSSQL to Oracle using SQL Developer. So far I have found it to be a great tool and very useful.
However, one area I can't seem to figure out is the step between capturing the database and converting it to the Oracle schema. I have captured my MSSQL database and can view it in the "Captured Objects" window - at this point there are a number of objects (e.g. tables and views) that I either need to drop or rename. I can click on each one individually and do this, but it takes time and is rather laborious. If I multi-select some objects, the option to drop the object disappears. Is there some way to drop multiple objects?
Ideally I'd like to be able to open up a SQL Worksheet and point it at the captured database so I could manipulate the objects with SQL, is that possible? (I could not see a way of doing it).
Thanks in advance.
Hi,
What is the DB version? Is there any additional info in the alert.log?
Please see:
Error ORA-29533 or ORA-29537 When Loading a Java Class File into the Database. [ID 98786.1]
Regards
Helios -
Can you help me about change data captures in 10.2.0.3
Hi,
I did some research on Change Data Capture and tried to implement it between two databases for two small tables in 10g Release 2. My CDC implementation uses archive logs to replicate data,
in Change Data Capture asynchronous autolog archive mode. It works correctly (except for DDL). Now I have some questions about CDC implementation for large tables.
I have one scenario to implement, but I cannot find exactly how to do it correctly.
I have one table (named test) that consists of 100,000,000 rows; every day 1,000,000 transactions occur on this table, and I manually archive the
data older than one year. This table is in the source DB. I want to replicate this table using Change Data Capture to another staging database.
Here are some questions about my scenario.
1. How can I do the first load operation? (The test table has 100,000,000 rows in the source DB.)
2. CDC uses a change table (named test_ch) that contains extra rows describing the operations for the staging table. But I need the original table (named test) for the application to work in the staging database. How can I move the data from the change table (test_ch) to the original table (test) in the staging database? (I'd prefer not to use a view for the test table.)
3. How can I remove some data from the change table (test_ch) in the staging DB? Does that cause a problem or not?
4. Is there a way to replicate DDL operations between the two databases?
5. How can I find the last applied log on the staging DB in CDC? How can I find an archive gap between the source DB and the staging DB?
6. How can I do maintenance of the change tables in the staging DB?
Asynchronous CDC uses Streams to generate the change records. Basically, it is a pre-packaged DML handler that converts the changes into inserts into the change table. You indicated that you want the changes to be written to the original table, which is the default behavior of Streams replication. That is why I recommended that you use Streams directly.
Yes, it is possible to capture changes from a production redo/archive log at another database. This capability is called "downstream" capture in the Streams manuals. You can configure this capability using the MAINTAIN_* procedures in DBMS_STREAMS_ADM package (where * is one of TABLES, SCHEMAS, or GLOBAL depending on the granularity of change capture).
A couple of tips for using these procedures for downstream capture:
1) Don't forget to set up log shipping to the downstream capture database. Log shipping is set up exactly the same way for Streams as for Data Guard. Instructions can be found in the Streams Replication Administrator's Guide. This configuration has probably already been done as part of your initial CDC setup.
2) Run the command at the database that will perform the downstream capture. This database can also be the destination (or target) database where the changes are to be applied.
3) Explicitly define the parameters capture_queue_name and apply_queue_name to be the same queue name. Example:
capture_queue_name => 'STRMADMIN.STREAMS_QUEUE'
apply_queue_name => 'STRMADMIN.STREAMS_QUEUE'
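As a rough sketch (not a complete or verified configuration), a MAINTAIN_TABLES call with both queue parameters pointing at the same queue might look like this; the table, directory and database names are placeholders, and the other arguments required depend on your release and instantiation choices:
BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_TABLES(
    table_names                  => 'SCOTT.TEST',          -- placeholder
    source_directory_object      => 'SRC_DIR',             -- placeholder
    destination_directory_object => 'DEST_DIR',            -- placeholder
    source_database              => 'SRCDB.EXAMPLE.COM',   -- placeholder
    destination_database         => 'STAGEDB.EXAMPLE.COM', -- placeholder
    capture_queue_name           => 'STRMADMIN.STREAMS_QUEUE',
    apply_queue_name             => 'STRMADMIN.STREAMS_QUEUE',
    perform_actions              => FALSE,                 -- only generate a script
    script_name                  => 'maintain_tables.sql',
    script_directory_object      => 'SCRIPT_DIR');         -- placeholder
END;
/
Run this at the downstream (capture) database, as point 2 above notes. -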
Source database in NOARCHIVELOG temporarily for maintenance
Hi all,
We have a big maintenance to run next week end. There will be mostly index rebuild, statistics gathered and so. We are using 10.2.0.3. Source database is a RAC one.
We would like to put our database in NOARCHIVELOG mode temporarily during this maintenance window. This may sound like an easy task, but I consider it a risky one - I need a 100% safe procedure!
I wonder if there are articles in the Oracle documentation or on Metalink that could help.
I think I know several of the steps required:
- Stop the capture
- Put the database in NOARCHIVELOG mode
- Re-instantiate the tables using dbms_capture_adm.prepare_table_instantiation on the source database
- Re-instantiate the tables using dbms_apply_adm.set_table_instantiation_scn on the destination database
- I think I will have to advance the SCN on the capture, but I'm not sure whether I have to change the FIRST_SCN, the START_SCN, or both
- Put the database back in ARCHIVELOG mode
- Start the capture
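For what it's worth, a minimal sketch of the two re-instantiation calls listed above (object name, database name and SCN are placeholders, not a verified procedure):
-- On the source database, for each replicated table (placeholder names):
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'SCOTT.EMP');
END;
/

-- On the destination database, using the SCN obtained at the source:
BEGIN
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'SCOTT.EMP',
    source_database_name => 'SRCDB.EXAMPLE.COM',
    instantiation_scn    => 1234567);   -- placeholder SCN
END;
/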
I'm not sure in which order I have to do these tasks.
Thank you for any advice/link on that subject!
Jocelyn
Thanks a lot Rijesh for your response,
In the meantime, I do my own search and came to the same conclusion.
I also experimented with note 471695.1 on Metalink (Required Steps to Recreate a Capture Process). It "almost" works! It recreates the capture correctly, but it does not carry the existing table rules... (the content of DBA_STREAMS_TABLE_RULES before dropping the capture).
So it's probably going to be much easier to recreate the whole thing from scratch (we have an existing procedure that is very mature).
Regards,
Jocelyn -
N96 won't detect my captured pictures
hi
I have a Nokia N96, and two days ago I got a corrupted (mass) memory error, so I restarted the phone and it was OK.
But now the Photos application won't detect my Captured pictures. The directory tree of Images was (probably) intact after the corruption, so nothing changed there.
I've got around 500 pictures and they all show up in All pictures.
Is there any way to get them back into Captured? Is there any database file to delete to refresh the image gallery?
Thanks,
Kenan
Tried it... lost ALL data - problem fixed - or is it? How can I know, when I just lost all my images?
So this reminds me never to do a (hard) *#7370# reset again.
Close the thread, please.
Message Edited by kenaneo on 09-Sep-2009 05:17 PM -
[LV2013SP1] Strange cluster ordering from a database
Hello,
I'm not an expert in LabView since I come from the
embedded world (C/C++) yet I found something
quite strange when reading from a database.
Attached are some screen captures of the
database content and structure, then the LabVIEW
diagram with the probe output (yeah, I had to stretch
it wide because there is no word wrap option).
Finally there is a little Excel snapshot of the result.
The database read is performed with a certain
column ordering, and the "casting" cluster mirrors
the same order and datatypes, yet 'Variant to Data'
scrambles things.
And even though I do use dis/assemble cluster
by name to avoid confusion, the resulting cluster
at probe 20 is right. Go figure...
Could someone help and explain what's going
on under the hood? At least with C/C++ you have
some grasp of how memory is ordered, but
LabVIEW really baffles me...
David
Attachments:
capture_001_10092014_171106.png 170 KB
capture_002_10092014_171116.png 125 KB
capture_003_10092014_171213.png 129 KB
OK, I've found the bug in LabVIEW 2013 SP1.
The 'Pdt_D' cluster had suffered several modifications and
reordering, but 'Variant to Data' doesn't follow the cluster
virtual ordering but the cluster's memory ordering.
In short, if you create a cluster one way, delete members,
add some more, adapt and reorder, the 'Variant to Data'
VI will take the members in the order in which they were added
to the cluster, regardless of the hypothetical ordering you
might have specified afterward.
It is probably a problem with a linked list: one listing the members
as they are added, one listing the members in the specified
ordering, and perhaps the bug comes from pointing at the wrong
list.
If you delete the 'Pdt_D' cluster, create a brand new cluster
constant, then add the members of the right datatype one by one,
'Variant to Data' works as expected and dispatches
the variant's data into the corresponding members.
There is no use providing you with a snapshot, because
it would all look the same as what I have already provided,
just that the cluster has been recreated from scratch and
not reordered. That's a rather annoying, spurious bug
you'd better be strong-hearted to deal with.
And as for the VI, it's a rather complex proprietary application
I cannot disclose, but I'll create an example VI in due
time so that you can see it for yourself.
Sorry for not being specific enough about my problem.
David -
Using Change Data Capture in SSIS - how to handle schema changes
I was asked to consider change data capture for a database recently. I can see that from the database perspective it's quite nice. When I considered how I'd do this in SSIS, it seemed pretty obvious that I might have a problem, but I wanted to
confirm here.
The database in question changes its schema about once per month in production. We have a lot of controls in our environment, so every time a table's schema is changed, I'd have to file a formal change request to deal with a change to my code
base, in this case my SSIS package; it can be a lot of work. If I wanted to track the data changes for inserts, updates, and deletes using an SSIS package to send the data changes to the destination tables, would I have to change my SSIS package
with every schema change, or is there a way to keep the exact same SSIS package with CDC without having to change it every month?
Thanks,
Keith
Hi Keith,
What is your exact requirement?
If you want to capture the object_created, object_deleted, or object_altered information, you can try using
Extended Events.
As mentioned in your OP:
"If I wanted to track the data changes for inserts, update and deletes using an SSIS package to send the data changes to the destination tables, would I have to change my SSIS package with
every schema change, or is there a way to keep the exact same SSIS package with CDC without having to change it every month?"
If you want the databases in two different environments to be in sync, then take periodic
backups and apply (restore) them on the destination DB,
(or)
you can also try
SQL Server replication if it is really needed.
As I understand from your description, if you want the table data & schema to be in sync in two different databases,
then create a job (a script that will drop the destination DB table and recreate it as a copy of the source DB table) as per your requirement:
--CREATE DATABASE db1
--CREATE DATABASE db2
-- Create a sample source table in db1
USE db1
GO
CREATE TABLE tbl(Id INT)
-- In db2, drop the destination copy if it already exists,
-- then rebuild it from the source table with SELECT ... INTO
USE db2
GO
IF EXISTS (SELECT * FROM sys.objects WHERE name = 'tb1' AND type = 'U')
DROP TABLE dbo.tb1
SELECT * INTO db2.dbo.tb1 FROM db1.dbo.tbl
-- Verify the copy
SELECT * FROM dbo.tb1
--DROP DATABASE db1,db2
sathya - www.allaboutmssql.com ** Mark as answered if my post solved your problem and Vote as helpful if my post was useful **. -
Oracle SQL Developer 3.1 Migration Third Party Databases Issues
Hi,
I had the following issues migrating from DB2 v8 to Oracle 11.2.
Online:
Due to missing privileges and roles for the migration repository DB user, some steps failed (CREATE USER -> ORA-01031 ...).
After correcting this as described in "Creating a Database User for the Migration Repository" in the SQL Developer online help, it worked.
The problems are:
a) on the overview page at the end of the migration assistant, all steps (CAPTURE, CONVERT, GENERATE, DATAMOVE) are shown as complete, even if nothing was done
b) on page 6/9 of the migration assistant, all changes for datatype conversion are ignored, for example CHAR to VARCHAR2
c) generated files are not visible, even if you click refresh in the file view
d) after restarting SQL Developer, generated files are visible in the file view, but when you add generated files to SVN the error message "svn: File: xxx has inconsistent newlines" is shown
e) after a successful migration, on the opened migration project's "data quality" pane, sourcenumrows is NULL, even though the rows are never NULL and count(*) on any table on both sides is equal
Offline:
Generated scripts contain errors:
./startDump.sh: line 157: syntax error near unexpected token `done'
'/startDump.sh: line 157: `done < "schemas.dat"
Can anybody help?
Thanks in advance
André
Hi kgronau,
thanks for your fast answer.
Today I found 2 new issues.
When you open a migration project from the repository, on the "data quality" pane sourcenumrows is always null,
and
sourcename and targetname always show database object names from the first migration project in the repository, independently of the selection in the model and source drop-down boxes.
kgronau wrote:
André,
I used SQL Dev 3.1 and I captured a DB2 database. Then I've changed the rule to map char to varchar2 and started the migration.
When I now check out my custom tables, all of them that had a char column in the source model are now using varchar2.
I had tried to change the datatype for the target database in place via the drop-down box, not the Edit Rule button. It's a little bit confusing to have this option when it doesn't work.
After using the Edit Rule button, everything works fine. Only the summary page 9/9 doesn't report the changed datatype assignment.
>
Could you please explain what you mean with your options c and d?
c)
Yes, I mean View -> Files. Sorry, but on German Windows I just have German menu items. That is sometimes tricky to retranslate for support questions and also not helpful when using the online help, where all menu items are referenced in English :-(
(Do you have an idea how I can configure SQL Developer with an English menu on German Windows?)
I think this problem is specific to output folders under Subversion control.
d)
Files generated at the end of the migration are only visible in output folders under Subversion control after restarting SQL Developer.
>
Edited by: kgronau on Mar 7, 2012 12:30 PM
Are you talking about Opening a File viewer window (View -> Files)? In my case I have chosen d:\temp\DB2 as output and monitored it during the migration. It isn't refreshed until I manually click on the refresh button - but once the migration has finished and written the output and when I then click on the refresh button I'll see all the directories and the files included.
Edited by: kgronau on Mar 7, 2012 12:39 PM
When a migration has finished, SQL Developer 3.1 now creates in the top directory an unload_script.sh file which calls the other unload scripts.
That's right, all scripts are generated.
Also the data unload scripts were created - I need to find a DB2 on Unix to check the script - a quick check of the windows scripts worked correctly.
Edited by: kgronau on Mar 7, 2012 1:22 PM
These unload shell scripts to unload the data out of a DB2 database are also working.
Unfortunately I'm not able to test the shell script used for a source model unload as my UDB is running on Windows.
Didn't the online source model collection work? For me it looks like it did, as you mentioned you changed the char data types to varchar2, and this already requires a connection to the source database - except if you used the scripts that were generated using startDump.sh, which has failed.
Yes, online source model collection did work. Just the Unix shell script produces an error on the source Unix system with DB2. Please see the generated script below.
So please provide some more details here.
./startDump.sh was started for testing purposes without any arguments:
./startDump.sh: line 157: syntax error near unexpected token `done'
'/startDump.sh: line 157: `done < "
if [[ $# != 3 ]]; then
echo "Usage: startDump <database> <user> <password>";
exit 1;
fi
ROWTAG="'<row>'";
ENDROWTAG="'</row>'";
COLTAG="'<col><![CDATA['";
ENDCOLTAG="']]></col>'";
# Clear any other dat files
echo "Clearing older data files"
rm -f *.dat
echo "Connnecting to $1 as $2";
db2 -r connect.dat "connect to $1 user $2 using $3";
if [[ $? != 0 ]]; then
echo "Connection failed.";
exit 20;
fi
# GET SCHEMA QUERY.
echo "Get all schemas";
db2 +o -x -r schemas.dat "select SCHEMANAME SCHEMA_NAME from SYSCAT.SCHEMATA WHERE DEFINER <> 'SYSIBM' AND
SCHEMANAME <> 'NULLID' AND SCHEMANAME <> 'SQLJ'
AND SCHEMANAME <> 'SYSTOOLS'";
if [[ $? != 0 ]]; then
echo "Get schemas failed.";
exit 30;
fi
# Loop through file containing schema names and extract db objects for each of them
while read SCHEMA_NAME
do
# Create schema directory
rm -rf "${SCHEMA_NAME}";
mkdir "${SCHEMA_NAME}";
if [[ $? != 0 ]]; then
echo "Could not create schema directory ${SCHEMA_NAME}.";
exit 40;
fi
echo "Get all tables for schema $SCHEMA_NAME";
tablesFile="${SCHEMA_NAME}/""tables.dat";
# GET TABLES QUERY. */
db2 -x +o -r $tablesFile "select "$ROWTAG", "$COLTAG"||COLUMNS.TABSCHEMA||"$ENDCOLTAG", "$COLTAG"||COLUMNS.TABNAME||"$ENDCOLTAG",
"$COLTAG"||COLUMNS.COLNAME||"$ENDCOLTAG", "$COLTAG"||(CASE WHEN (COLUMNS.CODEPAGE = 0 and (COLUMNS.TYPENAME = 'VARCHAR' OR COLUMNS.TYPENAME = 'CHAR'
OR COLUMNS.TYPENAME = 'LONG VARCHAR' OR COLUMNS.TYPENAME = 'CHARACTER')) THEN COLUMNS.TYPENAME || ' FOR BIT DATA'
ELSE COLUMNS.TYPENAME END)||"$ENDCOLTAG", "$COLTAG"||CHAR(COLUMNS.LENGTH)||"$ENDCOLTAG",
"$COLTAG"||CHAR(COLUMNS.SCALE)||"$ENDCOLTAG", "$COLTAG"||COLUMNS.NULLS||"$ENDCOLTAG",
"$COLTAG"||COALESCE(COLUMNS.DEFAULT, '')||"$ENDCOLTAG", "$ENDROWTAG" from
SYSCAT.COLUMNS COLUMNS, SYSCAT.TABLES TABLES WHERE
COLUMNS.TABSCHEMA = '${SCHEMA_NAME}' AND
COLUMNS.TABNAME = TABLES.TABNAME AND
COLUMNS.TABSCHEMA = TABLES.TABSCHEMA AND
TABLES.TYPE = 'T'
ORDER BY COLUMNS.TABNAME, COLUMNS.COLNO";
if [[ $? != 0 ]]; then
echo "No tables found.";
fi
# GET SYNONYMS QUERY. */
echo "Get all synonyms for schema $SCHEMA_NAME";
synonymsFile="${SCHEMA_NAME}/""synonyms.dat";
db2 -x +o -r $synonymsFile "select "$ROWTAG", "$COLTAG"||TABNAME||"$ENDCOLTAG", "$COLTAG"||BASE_TABSCHEMA||"$ENDCOLTAG",
"$COLTAG"||BASE_TABNAME||"$ENDCOLTAG", "$ENDROWTAG" from syscat.tables
where tabschema = '${SCHEMA_NAME}' and type = 'A'";
if [[ $? != 0 ]]; then
echo "No synonyms found.";
fi
# GET VIEW QUERY. */
echo "Get all views for schema $SCHEMA_NAME";
viewsFile="${SCHEMA_NAME}/""views.dat";
db2 -x +o -r $viewsFile "select "$ROWTAG", "$COLTAG"||VIEWSCHEMA||"$ENDCOLTAG", "$COLTAG"||VIEWNAME||"$ENDCOLTAG",
"$COLTAG"||COALESCE(TEXT, '')||"$ENDCOLTAG",
"$COLTAG"||DEFINER||"$ENDCOLTAG", "$COLTAG"||READONLY||"$ENDCOLTAG", "$COLTAG"||VALID||"$ENDCOLTAG", "$ENDROWTAG"
from syscat.views
WHERE VIEWSCHEMA = '${SCHEMA_NAME}'
ORDER BY VIEWNAME";
if [[ $? != 0 ]]; then
echo "No views found.";
fi
# GET INDEXES QUERY. */
echo "Get all indexes for schema $SCHEMA_NAME";
indexesFile="${SCHEMA_NAME}/""indexes.dat";
db2 -x +o -r $indexesFile "select "$ROWTAG", "$COLTAG"||INDSCHEMA||"$ENDCOLTAG", "$COLTAG"||INDNAME||"$ENDCOLTAG",
"$COLTAG"||TABSCHEMA||"$ENDCOLTAG", "$COLTAG"||TABNAME||"$ENDCOLTAG", "$COLTAG"||INDEXTYPE||"$ENDCOLTAG",
"$COLTAG"||UNIQUERULE||"$ENDCOLTAG", "$ENDROWTAG" from SYSCAT.INDEXES
WHERE INDSCHEMA = '${SCHEMA_NAME}' AND UNIQUERULE <> 'P'
ORDER BY TABNAME, INDNAME";
if [[ $? != 0 ]]; then
echo "No indexes found.";
fi
# GET INDEX DETAILS QUERY. */
echo "Get all index details for schema $SCHEMA_NAME";
indexeDetailsFile="${SCHEMA_NAME}/""indexDetails.dat";
db2 -x +o -r $indexeDetailsFile "select "$ROWTAG", "$COLTAG"||INDSCHEMA||"$ENDCOLTAG", "$COLTAG"||INDNAME||"$ENDCOLTAG",
"$COLTAG"||COLNAME||"$ENDCOLTAG", "$COLTAG"||CHAR(COLSEQ)||"$ENDCOLTAG", "$ENDROWTAG" from SYSCAT.INDEXCOLUSE
WHERE INDSCHEMA = '${SCHEMA_NAME}'";
if [[ $? != 0 ]]; then
echo "No index details found.";
fi
# GET TRIGGERS QUERY. */
echo "Get all triggers for schema $SCHEMA_NAME";
triggersFile="${SCHEMA_NAME}/""triggers.dat";
db2 -x +o -r $triggersFile "select "$ROWTAG", "$COLTAG"||TRIGSCHEMA||"$ENDCOLTAG",
"$COLTAG"||TRIGNAME||"$ENDCOLTAG", "$COLTAG"||DEFINER||"$ENDCOLTAG", "$COLTAG"||TABSCHEMA||"$ENDCOLTAG",
"$COLTAG"||TABNAME||"$ENDCOLTAG", "$COLTAG"||TRIGEVENT||"$ENDCOLTAG", "$COLTAG"||VALID||"$ENDCOLTAG",
"$COLTAG"||COALESCE(TEXT, '')||"$ENDCOLTAG",
"$COLTAG"||COALESCE(REMARKS, '')||"$ENDCOLTAG", "$ENDROWTAG"
from SYSCAT.TRIGGERS
WHERE TRIGSCHEMA = '${SCHEMA_NAME}'";
if [[ $? != 0 ]]; then
echo "No triggers found.";
fi
# The for GET Promary Key CONSTRAINT QUERY. */
echo "Get all primary keys for schema $SCHEMA_NAME";
primarykeysFile="${SCHEMA_NAME}/""primarykeys.dat";
db2 -x +o -r $primarykeysFile "select "$ROWTAG", "$COLTAG"||X.CONSTNAME||"$ENDCOLTAG", "$COLTAG"||X.TYPE||"$ENDCOLTAG",
"$COLTAG"||X.TABSCHEMA||"$ENDCOLTAG", "$COLTAG"||X.TABNAME||"$ENDCOLTAG", "$COLTAG"||Z.COLNAME||"$ENDCOLTAG",
"$COLTAG"||CHAR(Z.COLSEQ)||"$ENDCOLTAG", "$COLTAG"||COALESCE(X.REMARKS, '')||"$ENDCOLTAG", "$ENDROWTAG" from
(select CONSTNAME, TYPE, TABSCHEMA, TABNAME, REMARKS from SYSCAT.TABCONST where (type = 'P' OR type = 'U')) X
FULL OUTER JOIN
(select COLNAME, COLSEQ, CONSTNAME, TABSCHEMA, TABNAME from SYSCAT.KEYCOLUSE) Z
on
(X.CONSTNAME = Z.CONSTNAME and X.TABSCHEMA = Z.TABSCHEMA and X.TABNAME = Z.TABNAME)
WHERE X.TABSCHEMA='${SCHEMA_NAME}'
ORDER BY X.CONSTNAME";
if [[ $? != 0 ]]; then
echo "No primary keys found.";
fi
# The for GET Check constraints QUERY. */
echo "Get all Check constraints for schema $SCHEMA_NAME";
constraintsFile="${SCHEMA_NAME}/""checkConstraints.dat";
db2 -x +o -r $constraintsFile "SELECT "$ROWTAG", "$COLTAG"||A.CONSTNAME||"$ENDCOLTAG", "$COLTAG"|| COALESCE(TEXT, '') ||"$ENDCOLTAG", "$COLTAG"|| A.TABSCHEMA||"$ENDCOLTAG", "$COLTAG"|| A.TABNAME ||"$ENDCOLTAG", "$COLTAG"|| COLNAME ||"$ENDCOLTAG", "$ENDROWTAG" FROM SYSCAT.CHECKS A , SYSCAT.COLCHECKS B
WHERE A.CONSTNAME = B.CONSTNAME AND A.TABSCHEMA = B.TABSCHEMA AND A.TABNAME=B.TABNAME AND A.TABSCHEMA = '${SCHEMA_NAME}'";
if [[ $? != 0 ]]; then
echo "No check constraints found.";
fi
done < "schemas.dat"
# GET PROCEDURES QUERY. */
. getProcedures.sh schemas.dat
# The for GET Foreign Key CONSTRAINT QUERY. */
. getForeignKeys.sh schemas.dat -
IBM DB2 to Oracle Database Migration Using SQL Developer
Hi,
We are doing a migration of the whole database from IBM DB2 8.2, which is running on WINDOWS, to an Oracle 11g Database on LINUX.
As part of the prerequisites we have installed Oracle SQL Developer 4.0.1 (4.0.1.14.48) on the Linux server with JDK 1.7. We have also established a connection with the Oracle Database.
Questions:
1) How can we enable the Third Party Database Connectivity in SQL Developer?
I have copied the files db2jcc.jar and db2jcc_license_cu.jar from the IBM DB2 server (Windows) to the Oracle server (Linux).
2) Are these JAR files universal drivers? Will these JAR files work on the Linux platform?
3) I got a fully privileged DB2 schema named "assistdba". Shall I create a new user with the same name "assistdba" in the Oracle Database and grant it the DBA privilege? (This is for repository creation.)
4) We have around 35 GB of data in DB2; shall I proceed with ONLINE CAPTURE during the migration?
5) Do you have an approximate estimate of the time needed to migrate 35 GB of data?
6) In case of any issue during the migration activity, can I get support from the Oracle team (we have a valid Support ID)?
7) What are the necessary test cases to confirm a valid migration?
Please share the relevant Metalink documents.
Kindly guide me in order to go ahead with a successful migration.
Thanks in advance!
Nagu
[email protected]
Hi Klaus,
Continuing from the above posts: now we are doing another database migration from IBM DB2 to Oracle, which has much less data (e.g. 20 tables & 22 indexes).
As with the previous database migration, we have done the prerequisite steps.
DB Using SQL Developer
Created Migration Repository
Connected with the created User in SQL Developer
Captured the Source Database
Converted Captured Model to Oracle
Before the Translation phase we clicked on "Proceed Summary".
Captured Database Objects & Converted Database Objects have been created under the PROJECT section.
Here, while checking the status of the captured & converted database objects, it shows the below chart as a sample:
OVERVIEW
PHASE     TABLE DETAILS   TABLE PCT
CAPTURE   20/20           100%
CONVERT   20/20           100%
COMPILE   0/20            0%
TARGET STATUS
DESC_OBJECT_NAME   SCHEMANAME   OBJECTNAME   STATUS
INDEX              TRADEIN1     ARG_I1       Missing
INDEX              TRADEIN1     H0INDEX01    Missing
INDEX              TRADEIN1     H1INDEX01    Missing
INDEX              TRADEIN1     H2INDEX01    Missing
INDEX              TRADEIN1     H3INDEX01    Missing
INDEX              TRADEIN1     H4INDEX01    Missing
INDEX              TRADEIN1     H4INDEX02    Missing
INDEX              TRADEIN1     H5INDEX01    Missing
INDEX              TRADEIN1     H7INDEX01    Missing
INDEX              TRADEIN1     H7INDEX02    Missing
INDEX              TRADEIN1     MAPIREP1     Missing
INDEX              TRADEIN1     MAPISWIFT1   Missing
INDEX              TRADEIN1     MAPITRAN1    Missing
INDEX              TRADEIN1     OBJ_I1       Missing
INDEX              TRADEIN1     OPR_I1       Missing
INDEX              TRADEIN1     PRD_I1       Missing
INDEX              TRADEIN1     S1TABLE01    Missing
INDEX              TRADEIN1     STMT_I1      Missing
INDEX              TRADEIN1     STM_I1       Missing
INDEX              TRADEIN1     X0IAS39      Missing
We see only "Missing" in the chart, and we don't have any option to trace it in the log file.
Only after the status is VALID can we proceed with the Translation & Migration phase.
Kindly help us with how to approach this issue now.
Thanks
Nagu -
GoldenGate Downstream Integrated capture from SQL Server 2012 as source
Hi,
Previously, I had done OGG integrated capture from downstream mining database. In that case SourceDB, Downstream MiningDB and TargetDB, all databases were Oracle.
MiningDB was in sync with source for log shipping and OGG extract was hooked with MiningDB to capture changes and apply on targetDB by replicat.
This time, I am looking for a similar solution with MS SQL Server 2012 as the SourceDB. I am exploring my options, as I am not allowed to let the OGG extract capture directly from the source database.
I have to come up with a solution where the source database can ship its logs to another location (the mining DB) and let the OGG extract capture from there.
What could my options be? I would appreciate your suggestions.
Thanks
Neetu
Hi,
You could move this post to the GoldenGate forum and also add the following information:
I see you are using integrated capture, which means the version is 11 or higher.
1. Add the GG version (11/12)?
2. Add the Oracle DB version (10/11/12).
In Oracle downstream capture you basically ship the archive logs to the downstream server, but have you thought about how the transaction logs are handled in SQL Server?
HTH,
Pradeep -
Is it possible? Change data capture from VSAM, DB2, SQL Server to Oracle
Dear Professionals ,
We plan to build a warehouse project. The source systems are:
* VSAM files in zOS
* SQL Server
* DB2
Warehouse database is Oracle .
Every night the changes in the source systems will be applied to the Oracle warehouse DB,
but only the changes will be applied. Exporting VSAM files to flat files, loading them into Oracle, and finding the data differences in Oracle is not accepted. Nor is moving the entire tables to Oracle and finding the changes in Oracle accepted. Only the changes will pass through the network.
Are "Oracle Connect" and the "VSAM adapter" capable of this?
Is there a solution for SQL Server and DB2 change data capture?
Is it possible? If possible, is it a headache or a simple install-and-forget process?
Thank you
Bunyamin
Bunyamin,
I do not know about VSAM, but I heard/read that Oracle Data Integrator is able to do change data capture on several databases. It uses the source database mechanism to do CDC. So maybe give it a try at the fusion middleware forum where ODI is being discussed -
DB Migration from MYSQL to ORACLE Using Offline Capture
Hi
I am doing a database migration from MySQL to Oracle using SQL Developer (version 2.1.1.64). So far, I've successfully captured the MySQL database and converted it to the Oracle model. However, when generating offline scripts to create the converted model schema as Oracle DDL scripts, it managed to generate SQL to create: 1) users, 2) sequences, 3) tables, 4) triggers, and 5) constraints.
It created the SQL to add the primary key constraints and index constraints. Although it generated the foreign key constraints in the SQL, they seem to have missed the cascading options; i.e. there is no reference to whether the foreign key constraint will restrict or cascade on delete, etc.
We have foreign keys in the MySQL database that have different cascading options, and these have not been ported over into the migration SQL. Therefore, all the foreign keys generated in the SQL default to restrict on delete.
Does 'Generate Oracle DDL' not take into account a foreign key's on delete cascading option?
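For illustration, this is the difference in question: the generated scripts emit only the bare constraint, so the delete rule has to be restated by hand (the table and column names below are made up):
-- What the generated script effectively produces (restrict / no action on delete):
ALTER TABLE child_table
  ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent_table (id);

-- What a MySQL FK declared with ON DELETE CASCADE would need in Oracle:
ALTER TABLE child_table
  ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent_table (id)
  ON DELETE CASCADE;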
Any help or information would be greatly appreciated.
Thanks
Hello,
that reminds me of the following thread:
Migration Microsoft SQL Server 2005 to Oracle 11g cascade on delete problem
That is a similar issue, isn't it?
I opened a bug for that, and it will be fixed in SQL Developer 3.1 (not in any 3.0 Early Adopter version). If you hit the same issue, there is no other way than to use the workaround from the mentioned thread.
Regards
Wolfgang