After DB migration (Informix to Oracle), do SQL statements need to be reviewed and changed?

Dear all,
We migrated our system DB from Informix to Oracle 9i.
The current problem is that all data in CHAR fields is now padded with trailing spaces.
For example,
there is a table named TABLE1 with one field named FIELD_1 in it.
The type of FIELD_1 is CHAR(4).
In Informix, the data in TABLE1 is
FIELD_1
===============
CM
ALL
TEST
After migrating to Oracle, the data looks the same:
FIELD_1
===============
CM
ALL
TEST
But when using TOAD or SQL*Plus to query the length of the data, like:
SELECT LENGTH(FIELD_1) AS LEN, FIELD_1 FROM TABLE1;
the returned result is:
LEN FIELD_1
==== ==============
4 CM
4 ALL
4 TEST
All the lengths are 4.
So all SQL in the current AP needs to be reviewed and changed (if necessary).
For example, a statement like:
WHERE FIELD_1 = 'CM' would be changed to WHERE TRIM(FIELD_1) = 'CM'
Does anybody have the same problem?
Another question: if we always use TRIM in the WHERE clause, will that cause bad performance?
(I tried changing one SQL statement, and the result now takes more time than before.)
Thanks a lot.
Best Regards,
Claire

The current solution is to review all code in the AP and change the WHERE statements.

Don't do that! You'll have all kinds of problems, and in any new code you develop you'll have to remember this hack. Instead, fix the tables once so they are correct. Continuing my example:
SQL> alter table t modify (object_name varchar2(30));
Table altered.
SQL> update t
  2  set object_name = trim(object_name);
7852 rows updated.
SQL> commit;
Commit complete.
SQL> select length(object_name), count(*)
  2  from t
  3  group by length(object_name);
LENGTH(OBJECT_NAME)             COUNT(*)
                   1                    1
                   3                   22
                   4                   14
                   5                   33
                   6                   46
                   7                  213
                   8                  283
                   9                  266
                  10                  295
                  11                  288
                  12                  291
                  13                  339
                  14                  354
                  15                  367
                  16                  428
                  17                  396
                  18                  368
                  19                  377
                  20                  348
                  21                  340
                  22                  353
                  23                  306
                  24                  347
                  25                  320
                  26                  325
                  27                  269
                  28                  332
                  29                  299
                  30                  232
29 rows selected.

For each table, you can do one modify and one update statement to fix it. With a little clever use of the data dictionary, you might even be able to script it.
Much easier than hacking your code.
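If there are many tables, a generator query along the lines of the sketch below can write the ALTER and UPDATE statements for every CHAR column in the schema (this is only a sketch against USER_TAB_COLUMNS; spool and review the generated statements before running them). And if some column really has to stay CHAR, a function-based index on TRIM(column) is one option worth testing for the WHERE TRIM(...) predicates Claire mentioned.

-- Sketch only: generate the "CHAR -> VARCHAR2 + trim" fix statements for every
-- CHAR column in the current schema. Review the output before running it.
SELECT 'ALTER TABLE ' || table_name || ' MODIFY (' || column_name
       || ' VARCHAR2(' || data_length || '));' AS fix_stmt
FROM   user_tab_columns
WHERE  data_type = 'CHAR'
UNION ALL
SELECT 'UPDATE ' || table_name || ' SET ' || column_name
       || ' = TRIM(' || column_name || ');'
FROM   user_tab_columns
WHERE  data_type = 'CHAR';

-- If a column must remain CHAR, a function-based index keeps TRIM() predicates
-- indexable (illustrative names only):
-- CREATE INDEX table1_field1_trim_idx ON table1 (TRIM(field_1));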
Message was edited by:
Eric H

Similar Messages

  • After WLS10.3 migration getBinaryStream in Oracle.sql.BLOB not reading data

    Please, someone help me. After the 10.3 migration, the values in the CSV file (stored as a BLOB in the database) are not read by the getBinaryStream method; it returns 0. It was working before. Please find the code below:
    import oracle.jdbc.OracleResultSet;
    import oracle.sql.BLOB;
    public InputStream RetriveISfromBlob(int batchid)
                        BLOB blob;
                        InputStream instream=null;
                        try
                             Statement stmt = connection.createStatement ();
                             ResultSet resSet = stmt.executeQuery
                             ("SELECT content FROM file_upload WHERE batch_id="+batchid);
                             resSet.next();
                             System.out.println("after query");
                             /*Get the BLOB locator.*/
                             blob = ((OracleResultSet)resSet).getBLOB(1);               
                             /*get the blob's outputstream
                             any data read from this stream comes from the BLOB*/
                             instream = blob.getBinaryStream();
    Below is code from Another class which call the above method RetriveISfromBlob:
    InputStream inputstream = fileDataDAO.RetriveISfromBlob(batchId);
                   System.out.println("afterfileread");
                   CommaFileInputStream reader = new CommaFileInputStream(inputstream);
                   reader.setIgnoreFirstLine(true);
                   CommaRecord comma = reader.getCommaRecord();
                   System.out.println("Number of records -" + comma.size());     /This returns o but CSV file has lot of datas
    NOTE: When I use Ojdbc14.jar, the above code returns 0. When I use Ojdbc_6g.jar it throws a NullPointerException, because the code was written with import oracle.jdbc.driver.OracleResultSet;. But in all cases the data in the BLOB was not read.
    Edited by: 833987 on Feb 4, 2011 9:07 AM
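    Before blaming the JDBC driver, it may be worth confirming on the database side that the BLOB rows actually contain data; a minimal SQL check (the file_upload table and batch_id column are from the post above, and 123 is just a placeholder value):

    -- A zero or NULL length here means the stored data is the problem,
    -- not getBinaryStream().
    SELECT batch_id, DBMS_LOB.GETLENGTH(content) AS blob_bytes
    FROM   file_upload
    WHERE  batch_id = 123;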

    Hi,
    The problems 1, 2 and 3 under the heading Issue 2 should be fixed in the Early Adopter release 3.1 EA3, which is now available.
    David

  • Migration informix to oracle

    Can I use Oracle Migration Workbench to migrate from Informix Dynamic Server 7.0 to Oracle 8.1?
    Thanks in Advance
    Alessandro Guimaraes

    Hi,
    We have not tested against version 7.0, but the Workbench does migrate 7.23 and 7.3.
    Download it and try it. If it doesn't work, you can upgrade your schema from 7.0 to 7.3 and use the Workbench to migrate the schema. You can then use the Workbench to generate Informix unload and Oracle SQL*Loader scripts to migrate the data from Informix 7.0.
    If you need further clarification then please contact [email protected]
    Regards
    John

  • Database migration informix to Oracle

    Hello
    Could you tell me about the technical improvements of changing our Informix DB to Oracle? We have an SAP system with 800 GB on Informix, with good functionality and response times, so I'd like to know how Oracle could help us improve these.
    So, the question is: why migrate our database?

    Scalability, reliability, security, performance, availability.
    If you have no issues with the above then you may not have a compelling business need to migrate.
    If you have problems with SAP running on Informix then Oracle may/will be able to resolve them.

  • Why migrating Informix to Oracle ?

    Can anyone tell me how I can convince my client to migrate from Informix to Oracle?
    My customer may ask whether there are any technical issues besides using the Workbench.
    Thank you

    There are many reasons to move to Oracle. However, most of these reasons are not business reasons. If the business decides to move from Informix, then the logical choices arising will be DB2 or Oracle.
    In this case, the next step is comparing DB2 to Oracle. Here are some of the features which Oracle supports but which are limited in DB2.
    Here are some points that may be used.
    Multi-version read consistency:
    IBM has table and page locking, leading to escalating locks and potential deadlock under load. It also allows dirty reads and has the potential for writers to block readers and
    vice versa.
    Then there's the clustering story. Most IBM cluster additions need extensive rewrites; see, for example, their TPC-C benchmark.
    From examination of their TPC full disclosure report:
    TPC Benchmark - Full Disclosure Report for IBM Netfinity 8500R using IBM DB2 Universal Database V7.1 and Microsoft Windows 2000 Advanced Server - Submitted for Review, July 3, 2000 (58 pages of database design scripting, at 100+ lines of script per page).
    See also a paper given at a user group: DB2 UDB EEE as an OLTP Database, Gene Kligerman, DB2 and Business Intelligence Technical Conference, Las Vegas, Oct 16-20, 2000.
    The list goes on, from business intelligence features such as range partitioning (which DB2 offers only on the AS/400) to essential security features like fine-grained auditing.
    regards,
    Barry
    <disclaimer>
    These opinions are my own and do not constitute a corporate opinion
    </disclaimer>

  • Application is throwing NSAPI and other errors after the migration

    We started seeing NSAPI and other errors after migrating new code and recently making some changes.
    1) During the migration, all 4 WLS cluster instances were bounced simultaneously. What is the best practice and recommendation from BEA Oracle? Do we need to bounce them one by one for any specific reason?
    2) We recently introduced the 4th WebLogic cluster instance in a Solaris 10 zone. All other WLS instances are running on Solaris 8.
    Is it okay to have two different OS versions (Solaris 8 and Solaris 10) and different hardware hosting WLS cluster instances? Does BEA Oracle support this kind of configuration?
    3) We are consistently seeing the errors below in the iPlanet web server error logs, even though the site is working fine:
    [11/Sep/2009:08:53:50] failure ( 4453): for host XX.XX.XX.XX trying to POST /servlet/FSO, wl-proxy reports: exception occurred for backend host 'YY.YY.YY.YY/23120/0': 'CONNECTION_REFUSED [os error=0, line 1732 of URL.cpp]: Error connecting to host YY.YY.YY.YY.YY:23120'
    Do we need to upgrade our WLS proxy plug-in? The current version of the WLS proxy plug-in is libproxy_61.so. Our WebLogic version is WLS 9.2 MP2.
    I'd really appreciate it if someone could answer these questions.

    OK, booted into VirtualBox; it seems to be working fine this way. Maybe it is the ATI drivers.
    Any help would be appreciated

  • Javax.sql.rowset.serial.SerialBlob cannot be cast to oracle.sql.BLOB

    Hi there,
    I'm facing this exception.
    My code:
    final ReportTemplateAttachment rta = new ReportTemplateAttachment();
    FileBlob fb = new FileBlob();
    fb.setContentType(attachmentDto.getContentType().getValue());
    fb.setCreationDate(sysDate);
    fb.setCreationUser(userId);
    final SerialBlob blob = new SerialBlob(attachmentDto.getFileVal());//it's a  byte[]
    //SerialBlob is of type javax.sql.rowset.serial.SerialBlob
    fb.setFileData(blob);
    fb.setName(attachmentDto.getFileName());
    fb.setLastUpdateDate(sysDate);
    fb.setLastUpdateUser(userId);
    fb.setLength(blob.length());
    rta.setFileBlob(fb);
    and the object rta will be inserted into another object.
    The variable of interest in FileBlob hbm is defined as follows:
    <property name="fileData" type="blob">
                <column name="FILE_DATA" />
    </property>
    where the type blob is oracle.sql.BLOB.
    If I change the oracle.sql.BLOB to javax.sql.rowset.serial.SerialBlob in the hbm, I can insert but when I try to get the blob back I get a deserialize exception so this change is not an option.
    but when I save the main Object (a cascade operation) I get the following exception:
    org.springframework.jdbc.UncategorizedSQLException:
    Hibernate flushing: could not insert: [pt.sc.data.entities.FileBlob];
    uncategorized SQLException for SQL
    [insert into WP_ADMIN.file_blob (name, content_type, length, file_data, status, creation_user, creation_date, last_update_user, last_update_date, id)
    values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)];
    SQL state [null];
    error code [0];
    An SQLException was provoked by the following failure: java.lang.ClassCastException:
    javax.sql.rowset.serial.SerialBlob cannot be cast to oracle.sql.BLOB;
    nested exception is java.sql.SQLException: An SQLException was provoked by the following failure:
    java.lang.ClassCastException: javax.sql.rowset.serial.SerialBlob cannot be cast to oracle.sql.BLOB
    Any light on the subject?
    Thanks in advance,
    mleiria

    You don't seem to understand the difference between an object model and a database. Hibernate, being an ORM package, works on the object model and will, based on what you request it to do, generate proper SQL statements to get the database part going. But you shouldn't be thinking about databases when using Hibernate; your main focus is the object model. There is no blob, only binary data. Binary data in Java is generally handled through a byte array.
    If you want to think in databases where blobs do exist, you should be using JDBC directly, not Hibernate.
    (I'm far from being an expert in Hibernate)
    I would really suggest taking a few hours and reading through the manual. You don't need to be an expert, but you should at least know what kind of tool you're working with.

  • Error opening Oracle SQL Developer 1.2.1

    In my IDE, I saw an extensions pane titled "Extensions - Log" which has
    the following contents:
    oracle.ide.db
    Warning: Classpath entry \DISTAPPS$\ORACLE\v10\Oracle10g\jdbc\lib\ojdbc14.jar not found
    Warning: Classpath entry \DISTAPPS$\ORACLE\v10\Oracle10g\jlib\orai18n.jar not found
    oracle.jdeveloper.db.connection
    Warning: Classpath entry \DISTAPPS$\ORACLE\v10\Oracle10g\jdbc\lib\ojdbc14.jar not found
    Warning: Classpath entry \DISTAPPS$\ORACLE\v10\Oracle10g\jlib\orai18n.jar not found
    (I am running it on Windows XP; the problem happened after I killed a running Oracle SQL Developer session. Now I cannot connect to anything, even after reinstalling Oracle SQL Developer.) Has anybody had the same issue, and is there any way to fix it?
    Thanks,
    -caoy

    Oracle SQL Developer 1.2.0 is working correctly at Argonne National Lab with Oracle 10g Client Version 10.2.1 and Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - 64bit. The correct UNC and java class path is:
    \\Canary\DISTAPPS$\ORACLE\V10\Ora10g\jdbc\lib.
    SQL*Plus: Release 10.2.0.1.0 - Production on Sat Nov 17 16:23:58 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    SQL> exit
    Successfully tested Oracle SQL Developer to this same database and retrieved data correctly.
    Please see me in my office.
    H. Johnstad
    CIS 2-5069
    [email protected]

  • Error after update Oracle SQL Developer Migration Tools

    Hi,
    I'm receiving an error after updating Oracle SQL Developer (v.1.1.3.27.66) to work with the MySQL migration tool.
    This error appears when it is restarting:
    Exception Stack Trace:
    java.lang.IllegalAccessError: tried to access class oracle.ide.net.IdeURLStreamHandler from class oracle.ide.net.URLFileSystem$1
         at oracle.ide.net.URLFileSystem$1.createURLStreamHandler(URLFileSystem.java:87)
         at oracle.ide.boot.URLStreamHandlerFactoryQueue.createURLStreamHandler(URLStreamHandlerFactoryQueue.java:119)
         at java.net.URL.getURLStreamHandler(URL.java:1106)
         at java.net.URL.<init>(URL.java:393)
         at java.net.URL.<init>(URL.java:283)
         at oracle.ide.net.URLFactory.newURL(URLFactory.java:636)
         at oracle.ide.layout.URL2String.toURL(URL2String.java:104)
         at oracle.ideimpl.editor.EditorUtil.getURL(EditorUtil.java:150)
         at oracle.ideimpl.editor.EditorUtil.getNode(EditorUtil.java:122)
         at oracle.ideimpl.editor.EditorUtil.loadContext(EditorUtil.java:91)
         at oracle.ideimpl.editor.TabGroupState.loadStateInfo(TabGroupState.java:950)
         at oracle.ideimpl.editor.TabGroup.loadLayout(TabGroup.java:1751)
         at oracle.ideimpl.editor.TabGroupXMLLayoutPersistence.loadComponent(TabGroupXMLLayoutPersistence.java:31)
         at oracle.ideimpl.controls.dockLayout.DockLayoutInfoLeaf.loadLayout(DockLayoutInfoLeaf.java:123)
         at oracle.ideimpl.controls.dockLayout.AbstractDockLayoutInfoNode.loadLayout(AbstractDockLayoutInfoNode.java:631)
         at oracle.ideimpl.controls.dockLayout.AbstractDockLayoutInfoNode.loadLayout(AbstractDockLayoutInfoNode.java:628)
         at oracle.ideimpl.controls.dockLayout.AbstractDockLayoutInfoNode.loadLayout(AbstractDockLayoutInfoNode.java:614)
         at oracle.ideimpl.controls.dockLayout.DockLayout.loadLayout(DockLayout.java:302)
         at oracle.ideimpl.controls.dockLayout.DockLayoutPanel.loadLayout(DockLayoutPanel.java:128)
         at oracle.ideimpl.editor.Desktop.loadLayout(Desktop.java:356)
         at oracle.ideimpl.editor.EditorManagerImpl.init(EditorManagerImpl.java:1879)
         at oracle.ide.layout.Layouts.activate(Layouts.java:784)
         at oracle.ide.layout.Layouts.activateLayout(Layouts.java:186)
         at oracle.ideimpl.MainWindowImpl$6.runImpl(MainWindowImpl.java:734)
         at oracle.javatools.util.SwingClosure$1Closure.run(SwingClosure.java:50)
         at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:199)
         at java.awt.EventQueue.dispatchEvent(EventQueue.java:597)
         at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:273)
         at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:183)
         at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:173)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:168)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:160)
         at java.awt.EventDispatchThread.run(EventDispatchThread.java:121)
    Does anyone know why this is happening?
    Thanks in advance,
    Felipe.
    Message was edited by:
    felipe.b

    Hello,
    I get the same error after updating through Help->Check for Updates. I am at version 1.1.2.25, and the update does not change the version number of Oracle SQL Developer (Help->About); it remains at 1.1.2.25 (although if I look in the extensions tab, it says oracle.sqldeveloper 1.1.3.27.66 Loaded; possibly a bug in displaying the version number?). These are the updates applied:
    Upgraded Extensions
    Oracle Microsoft Access Browser version 1.1.3.27.66
    Oracle SQL Developer - Snippet version 10.1.3.27.66
    Oracle SQL Developer version 1.1.3.27.66
    Oracle SQL Developer - Reports version 10.1.3.27.66
    Oracle SQL Server Browser version 1.1.3.27.66
    Oracle MySQL Browser version 1.1.3.27.66
    Oracle SQL Developer - Worksheet version 10.1.3.27.66
    Oracle SQL Developer - Extras version 1.1.3.27.66
    Oracle SQL Developer - SearchBar version 10.1.3.27.66
    Oracle SQL Developer - Object Viewer version 10.1.3.27.66
    I ran the update several times and was able to reproduce the error under a specific condition. If I am connected to a database at the time of the update, I get the error. If I am not connected at the time of the update, I do not get the error.
    As stated by Donal, SQL Developer seems to get over the problem after restarting itself twice.
    Mark

  • How do I migrate views from MS SQL 2008 to Oracle 11g through SQL Developer

    Is there any way to migrate the views from MS SQL 2008 to Oracle 11g through SQL Developer? Please give me some detailed steps. Thanks for your help.
    Kevin

    Hi Kevin,
    user13531850 wrote:
    Hi Turloch,
    When I use Migrate to Oracle, I have a problem: the migration tool creates a new schema for me, in my case AZTECA_KSMMS, and migrates everything under that schema (AZTECA_KSMMS). However, my application needs all the Oracle data under schema AZTECA instead of AZTECA_KSMMS. Is there any way to specify a specific schema (AZTECA) for the target Oracle database?
    Schema remapping is available: first Capture (separately), then during right-click Convert on the captured model there is a "Specify the conversion options" step with an Object Naming tab where the schema (and other) name changes are editable. I have not used this recently.
    Also, during the migration process, when I choose the repository there is a check box for truncate, to reset the repository to an empty state. Do I need to check that truncate check box so the repository will be cleared from the last migration?
    The repository can hold multiple migration attempts. Check truncate to get rid of the previous attempts' information. This cleans up the repository - not the destination database.
    There are also online database and offline database options during the migration process; what is the difference between these two choices? After I migrated to Oracle, all my views have a red cross icon next to them. Does that mean the view migration failed or not? Please give me your comments. Thanks for your help.
    Offline: for big (amount of data) databases with simple data types; uses bcp + files + scripts + sqlldr.
    Online: for small (amount of data) databases (easier); uses (Java) JDBC.
    The view is likely to be broken - recompiling it may help.
    The Oracle schema is created using a .sql file - see under generated in the directory you gave originally in the wizard. There is a .out file that contains the result of running this script including any errors. During conversion there are also likely to be warnings displayed on the UI.
    There may be a single issue that is causing multiple issues - if viewa depends on functionb, and functionb is broken, viewa will also fail.
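    To see exactly which migrated objects are broken and to try recompiling them, something along these lines can be run in the target schema (a rough sketch; viewa is only the placeholder name used above):

    -- List objects that failed to compile after the migration
    SELECT object_type, object_name
    FROM   user_objects
    WHERE  status = 'INVALID'
    ORDER  BY object_type, object_name;

    -- Recompile a single view (viewa is just a placeholder name)
    ALTER VIEW viewa COMPILE;

    -- Or recompile all invalid objects in the schema
    EXEC DBMS_UTILITY.COMPILE_SCHEMA(schema => USER, compile_all => FALSE)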
    >
    Kevin-Turloch
    SQLDeveloper Team

  • Oracle SQL Developer 3.1 Migration Third Party Databases Issues

    Hi,
    I had the following issues migrating from DB2 v8 to Oracle 11.2.
    Online:
    Due to missing privileges and roles for the DB user used for the migration, some steps failed (CREATE USER -> ORA-01031 ...).
    After correcting this as described in "Creating a Database User for the Migration Repository" in the SQL Developer online help, it worked.
    The problems are:
    a) on the overview page at the end of the migration assistant, all steps (CAPTURE, CONVERT, GENERATE, DATAMOVE) are shown as complete, even if nothing was done
    b) on page 6/9 of the migration assistant, all changes to data type conversion are ignored, for example CHAR to VARCHAR2
    c) generated files are not visible, even if you click refresh in the file view
    d) after restarting SQL Developer, generated files are visible in the file view, but when you add the generated files to SVN the error message "svn: File: xxx has inconsistent newlines" is shown
    e) after a successful migration, in the opened migration project's "data quality" pane, sourcenumrows is NULL, even though the actual counts are never NULL and count(*) on every table on both sides is equal
    Offline:
    Generated scripts contain errors:
    ./startDump.sh: line 157: syntax error near unexpected token `done'
    '/startDump.sh: line 157: `done < "schemas.dat"
    Can anybody help?
    Thanks in advance
    André

    Hi kgronau,
    thanks for your fast answer.
    Today I found 2 new issues.
    When you open a migration project from the repository, in the "data quality" pane sourcenumrows is always null,
    and
    sourcename and targetname always show the database object names from the first migration project in the repository, regardless of the selection in the model and source drop-down boxes.
    kgronau wrote:
    André,
    I used SQL Dev 3.1 and I captured a DB2 database. Then I've changed the rule to map char to varchar2 and started the migration.
    When I now check out my custom tables, all of them that had a char column in the source model are now using varchar2.
    I had tried to change the data type for the target database in place via the drop-down box, not the Edit Rule button. It's a little bit confusing to have this option when it doesn't work.
    After using the Edit Rule button, all works fine. Just the summary page 9/9 doesn't report the changed data type assignment.
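    Regardless of what the summary page reports, a quick data dictionary query against the migrated Oracle schema shows whether the CHAR-to-VARCHAR2 rule really took effect (a minimal sketch; run it as the target schema owner):

    -- Any rows returned here are columns that are still CHAR after the migration
    SELECT table_name, column_name, data_type, data_length
    FROM   user_tab_columns
    WHERE  data_type = 'CHAR'
    ORDER  BY table_name, column_name;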
    >
    Could you please explain what you mean with your options c and d?
    c)
    Yes, I mean View -> Files. Sorry, but on German Windows I have only German menu items. That is sometimes tricky to retranslate for support questions, and it is also not helpful when using the online help, where all menu items are referenced in English :-(
    (Do you have an idea how I can configure SQL Developer with English menus on German Windows?)
    I think this problem is specific to output folders under Subversion control.
    d)
    Generated files at the end of the migration are only visible in output folders under Subversion control after restarting SQL Developer.
    >
    Edited by: kgronau on Mar 7, 2012 12:30 PM
    Are you talking about Opening a File viewer window (View -> Files)? In my case I have chosen d:\temp\DB2 as output and monitored it during the migration. It isn't refreshed until I manually click on the refresh button - but once the migration has finished and written the output and when I then click on the refresh button I'll see all the directories and the files included.
    Edited by: kgronau on Mar 7, 2012 12:39 PM
    When a migration has finished, SQL Developer 3.1 now creates an unload_script.sh file in the top directory which calls the other unload scripts.
    That's right, all scripts are generated.
    Also the data unload scripts were created - I need to find a DB2 on Unix to check the script - a quick check of the windows scripts worked correctly.
    Edited by: kgronau on Mar 7, 2012 1:22 PM
    These unload shell scripts to unload the data out of a DB2 database are also working.
    Unfortunately I'm not able to test the shell script used for a source model unload as my UDB is running on Windows.
    Didn't the online source model collection work? For me it looks like it did, as you mentioned you changed the char data types to varchar2, and this already requires a connection to the source database - unless you used the scripts that were generated using startDump.sh, which failed.
    Yes, the online source model collection did work. Just the Unix shell script produces an error on the source Unix system with DB2. Please see the generated script below.
    So please provide some more details here.
    ./startDump.sh was started for testing purposes without any arguments:
    ./startDump.sh: line 157: syntax error near unexpected token `done'
    '/startDump.sh: line 157: `done < "
    if [[ $# != 3 ]]; then
    echo "Usage: startDump <database> <user> <password>";
    exit 1;
    fi
    ROWTAG="'<row>'";
    ENDROWTAG="'</row>'";
    COLTAG="'<col><![CDATA['";
    ENDCOLTAG="']]></col>'";
    # Clear any other dat files
    echo "Clearing older data files"
    rm -f *.dat
    echo "Connnecting to $1 as $2";
    db2 -r connect.dat "connect to $1 user $2 using $3";
    if [[ $? != 0 ]]; then
    echo "Connection failed.";
    exit 20;
    fi
    # GET SCHEMA QUERY.
    echo "Get all schemas";
    db2 +o -x -r schemas.dat "select SCHEMANAME SCHEMA_NAME from SYSCAT.SCHEMATA WHERE DEFINER <> 'SYSIBM' AND
    SCHEMANAME <> 'NULLID' AND SCHEMANAME <> 'SQLJ'
    AND SCHEMANAME <> 'SYSTOOLS'";
    if [[ $? != 0 ]]; then
    echo "Get schemas failed.";
    exit 30;
    fi
    # Loop through file containing schema names and extract db objects for each of them
    while read SCHEMA_NAME
    do
    # Create schema directory
    rm -rf "${SCHEMA_NAME}";
    mkdir "${SCHEMA_NAME}";
    if [[ $? != 0 ]]; then
    echo "Could not create schema directory ${SCHEMA_NAME}.";
    exit 40;
    fi
    echo "Get all tables for schema $SCHEMA_NAME";
    tablesFile="${SCHEMA_NAME}/""tables.dat";
    # GET TABLES QUERY. */
    db2 -x +o -r $tablesFile "select "$ROWTAG", "$COLTAG"||COLUMNS.TABSCHEMA||"$ENDCOLTAG", "$COLTAG"||COLUMNS.TABNAME||"$ENDCOLTAG",
    "$COLTAG"||COLUMNS.COLNAME||"$ENDCOLTAG", "$COLTAG"||(CASE WHEN (COLUMNS.CODEPAGE = 0 and (COLUMNS.TYPENAME = 'VARCHAR' OR COLUMNS.TYPENAME = 'CHAR'
    OR COLUMNS.TYPENAME = 'LONG VARCHAR' OR COLUMNS.TYPENAME = 'CHARACTER')) THEN COLUMNS.TYPENAME || ' FOR BIT DATA'
    ELSE COLUMNS.TYPENAME END)||"$ENDCOLTAG", "$COLTAG"||CHAR(COLUMNS.LENGTH)||"$ENDCOLTAG",
    "$COLTAG"||CHAR(COLUMNS.SCALE)||"$ENDCOLTAG", "$COLTAG"||COLUMNS.NULLS||"$ENDCOLTAG",
    "$COLTAG"||COALESCE(COLUMNS.DEFAULT, '')||"$ENDCOLTAG", "$ENDROWTAG" from
    SYSCAT.COLUMNS COLUMNS, SYSCAT.TABLES TABLES WHERE
    COLUMNS.TABSCHEMA = '${SCHEMA_NAME}' AND
    COLUMNS.TABNAME = TABLES.TABNAME AND
    COLUMNS.TABSCHEMA = TABLES.TABSCHEMA AND
    TABLES.TYPE = 'T'
    ORDER BY COLUMNS.TABNAME, COLUMNS.COLNO";
    if [[ $? != 0 ]]; then
    echo "No tables found.";
    fi
    # GET SYNONYMS QUERY. */
    echo "Get all synonyms for schema $SCHEMA_NAME";
    synonymsFile="${SCHEMA_NAME}/""synonyms.dat";
    db2 -x +o -r $synonymsFile "select "$ROWTAG", "$COLTAG"||TABNAME||"$ENDCOLTAG", "$COLTAG"||BASE_TABSCHEMA||"$ENDCOLTAG",
    "$COLTAG"||BASE_TABNAME||"$ENDCOLTAG", "$ENDROWTAG" from syscat.tables
    where tabschema = '${SCHEMA_NAME}' and type = 'A'";
    if [[ $? != 0 ]]; then
    echo "No synonyms found.";
    fi
    # GET VIEW QUERY. */
    echo "Get all views for schema $SCHEMA_NAME";
    viewsFile="${SCHEMA_NAME}/""views.dat";
    db2 -x +o -r $viewsFile "select "$ROWTAG", "$COLTAG"||VIEWSCHEMA||"$ENDCOLTAG", "$COLTAG"||VIEWNAME||"$ENDCOLTAG",
    "$COLTAG"||COALESCE(TEXT, '')||"$ENDCOLTAG",
    "$COLTAG"||DEFINER||"$ENDCOLTAG", "$COLTAG"||READONLY||"$ENDCOLTAG", "$COLTAG"||VALID||"$ENDCOLTAG", "$ENDROWTAG"
    from syscat.views
    WHERE VIEWSCHEMA = '${SCHEMA_NAME}'
    ORDER BY VIEWNAME";
    if [[ $? != 0 ]]; then
    echo "No views found.";
    fi
    # GET INDEXES QUERY. */
    echo "Get all indexes for schema $SCHEMA_NAME";
    indexesFile="${SCHEMA_NAME}/""indexes.dat";
    db2 -x +o -r $indexesFile "select "$ROWTAG", "$COLTAG"||INDSCHEMA||"$ENDCOLTAG", "$COLTAG"||INDNAME||"$ENDCOLTAG",
    "$COLTAG"||TABSCHEMA||"$ENDCOLTAG", "$COLTAG"||TABNAME||"$ENDCOLTAG", "$COLTAG"||INDEXTYPE||"$ENDCOLTAG",
    "$COLTAG"||UNIQUERULE||"$ENDCOLTAG", "$ENDROWTAG" from SYSCAT.INDEXES
    WHERE INDSCHEMA = '${SCHEMA_NAME}' AND UNIQUERULE <> 'P'
    ORDER BY TABNAME, INDNAME";
    if [[ $? != 0 ]]; then
    echo "No indexes found.";
    fi
    # GET INDEX DETAILS QUERY. */
    echo "Get all index details for schema $SCHEMA_NAME";
    indexeDetailsFile="${SCHEMA_NAME}/""indexDetails.dat";
    db2 -x +o -r $indexeDetailsFile "select "$ROWTAG", "$COLTAG"||INDSCHEMA||"$ENDCOLTAG", "$COLTAG"||INDNAME||"$ENDCOLTAG",
    "$COLTAG"||COLNAME||"$ENDCOLTAG", "$COLTAG"||CHAR(COLSEQ)||"$ENDCOLTAG", "$ENDROWTAG" from SYSCAT.INDEXCOLUSE
    WHERE INDSCHEMA = '${SCHEMA_NAME}'";
    if [[ $? != 0 ]]; then
    echo "No index details found.";
    fi
    # GET TRIGGERS QUERY. */
    echo "Get all triggers for schema $SCHEMA_NAME";
    triggersFile="${SCHEMA_NAME}/""triggers.dat";
    db2 -x +o -r $triggersFile "select "$ROWTAG", "$COLTAG"||TRIGSCHEMA||"$ENDCOLTAG",
    "$COLTAG"||TRIGNAME||"$ENDCOLTAG", "$COLTAG"||DEFINER||"$ENDCOLTAG", "$COLTAG"||TABSCHEMA||"$ENDCOLTAG",
    "$COLTAG"||TABNAME||"$ENDCOLTAG", "$COLTAG"||TRIGEVENT||"$ENDCOLTAG", "$COLTAG"||VALID||"$ENDCOLTAG",
    "$COLTAG"||COALESCE(TEXT, '')||"$ENDCOLTAG",
    "$COLTAG"||COALESCE(REMARKS, '')||"$ENDCOLTAG", "$ENDROWTAG"
    from SYSCAT.TRIGGERS
    WHERE TRIGSCHEMA = '${SCHEMA_NAME}'";
    if [[ $? != 0 ]]; then
    echo "No triggers found.";
    fi
    # The for GET Promary Key CONSTRAINT QUERY. */
    echo "Get all primary keys for schema $SCHEMA_NAME";
    primarykeysFile="${SCHEMA_NAME}/""primarykeys.dat";
    db2 -x +o -r $primarykeysFile "select "$ROWTAG", "$COLTAG"||X.CONSTNAME||"$ENDCOLTAG", "$COLTAG"||X.TYPE||"$ENDCOLTAG",
    "$COLTAG"||X.TABSCHEMA||"$ENDCOLTAG", "$COLTAG"||X.TABNAME||"$ENDCOLTAG", "$COLTAG"||Z.COLNAME||"$ENDCOLTAG",
    "$COLTAG"||CHAR(Z.COLSEQ)||"$ENDCOLTAG", "$COLTAG"||COALESCE(X.REMARKS, '')||"$ENDCOLTAG", "$ENDROWTAG" from
    (select CONSTNAME, TYPE, TABSCHEMA, TABNAME, REMARKS from SYSCAT.TABCONST where (type = 'P' OR type = 'U')) X
    FULL OUTER JOIN
    (select COLNAME, COLSEQ, CONSTNAME, TABSCHEMA, TABNAME from SYSCAT.KEYCOLUSE) Z
    on
    (X.CONSTNAME = Z.CONSTNAME and X.TABSCHEMA = Z.TABSCHEMA and X.TABNAME = Z.TABNAME)
    WHERE X.TABSCHEMA='${SCHEMA_NAME}'
    ORDER BY X.CONSTNAME";
    if [[ $? != 0 ]]; then
    echo "No primary keys found.";
    fi
    # The for GET Check constraints QUERY. */
    echo "Get all Check constraints for schema $SCHEMA_NAME";
    constraintsFile="${SCHEMA_NAME}/""checkConstraints.dat";
    db2 -x +o -r $constraintsFile "SELECT "$ROWTAG", "$COLTAG"||A.CONSTNAME||"$ENDCOLTAG", "$COLTAG"|| COALESCE(TEXT, '') ||"$ENDCOLTAG", "$COLTAG"|| A.TABSCHEMA||"$ENDCOLTAG", "$COLTAG"|| A.TABNAME ||"$ENDCOLTAG", "$COLTAG"|| COLNAME ||"$ENDCOLTAG", "$ENDROWTAG" FROM SYSCAT.CHECKS A , SYSCAT.COLCHECKS B
    WHERE A.CONSTNAME = B.CONSTNAME AND A.TABSCHEMA = B.TABSCHEMA AND A.TABNAME=B.TABNAME AND A.TABSCHEMA = '${SCHEMA_NAME}'";
    if [[ $? != 0 ]]; then
    echo "No check constraints found.";
    fi
    done < "schemas.dat"
    # GET PROCEDURES QUERY. */
    . getProcedures.sh schemas.dat
    # The for GET Foreign Key CONSTRAINT QUERY. */
    . getForeignKeys.sh schemas.dat

  • Migrating data from Oracle 9i to SQL Server 2005

    I am new to both. I first need to migrate data from Oracle to SQL Server. After this I need to create a nightly batch process to insert new records from Oracle into that table in SQL Server.
    My knowledge of SQL Server is zero; can somebody help me with how I can accomplish this?
    Somebody told me that I can use SQL Server import/export to do the initial data dump into SQL Server, and after that I can create a link in Oracle to do inserts for new records. Does anyone have an example of this? I would really appreciate it if someone could give me a step-by-step example. Thanks.

    I have been to SQL Server training, but my SQL Server databases are off-the-shelf systems, so I don't have to muck with them. Anyway, SQL Server is just MS Access on steroids, so some of the same concepts apply. You need to create external table links to Oracle. The following tidbits I found by googling might help you.
    http://www.sqlmag.com/Article/ArticleID/22264/sql_server_22264.html
    http://www.lazydba.com/sql/1__152.html
    http://www.sswug.org/see/35034
    http://decipherinfosys.wordpress.com/2007/07/16/linked-servers-in-sql-server/
    Some of the above require subscriptions (free and/or paid). Hope this helps.
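    For the nightly part, one common pattern (only a rough sketch, not tested in your environment; every name below, such as ORACLE_LINK, ORCL_TNS_ALIAS, dbo.target_table, owner.source_table and the load_date column, is a placeholder) is to define a linked server from SQL Server to Oracle and let a SQL Server Agent job pull the new rows each night:

    -- Run on SQL Server: create a linked server pointing at Oracle (placeholder names)
    EXEC sp_addlinkedserver
         @server = 'ORACLE_LINK',
         @srvproduct = 'Oracle',
         @provider = 'OraOLEDB.Oracle',
         @datasrc = 'ORCL_TNS_ALIAS';
    EXEC sp_addlinkedsrvlogin
         @rmtsrvname = 'ORACLE_LINK',
         @useself = 'FALSE',
         @rmtuser = 'scott',
         @rmtpassword = 'tiger';

    -- Nightly load (schedule via SQL Server Agent); assumes the Oracle table has a
    -- LOAD_DATE column that identifies new rows.
    INSERT INTO dbo.target_table (id, col1, load_date)
    SELECT id, col1, load_date
    FROM OPENQUERY(ORACLE_LINK,
         'SELECT id, col1, load_date FROM owner.source_table
          WHERE load_date >= TRUNC(SYSDATE) - 1');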

  • How to Migrate Stored procedure on Sql server 2008 to Oracle Database

    Guys, I need help very badly, as I am new to this field.
    The problem is that I have to migrate stored procedures on SQL Server 2008 to an Oracle database.
    The whole scenario:
    1. SQL Server 2008 application on a Windows server (source machine)
    2. I have to migrate 70 stored procedures
    3. To an Oracle database on a Linux machine (target machine)
    Any method is fine.
    Please help me or give me any reference, as I don't know which keywords differ between the two databases.
    Thanks in advance

    Hi,
    You could use the free Oracle SQL Developer to do this.
    There is information and a download link here -
    Oracle SQL Developer
    and information on using it for migrations here -
    http://www.oracle.com/technetwork/database/migration/index-084442.html
    You could use it in 2 ways -
    1. Go through a migration but just pull the stored procedure code from the file created after you generate the SQL from the SQL*Server database
    2. Use the scratch editor accessed from -
    - Tools - Migration - Scratch Editor
    and paste the SQL*Server stored procedure code into the window and it will convert it to Oracle code. The tool is very good but may have problems if you have very complicated procedures that use SQL*Server specific utilities.
    Regards,
    Mike

  • Issue during migrating from Sybase to Oracle using Oracle SQL Developer

    I am using SQL Developer v3.2.20.09 to migrate from Sybase to Oracle 12c (PL/SQL).
    While migrating a stored procedure, the following block did not convert; I got NULL instead of OBJECT_ID:
    Sybase Block:
    IF OBJECT_ID('dbo.CheckEst') IS NOT NULL
    BEGIN
        DROP PROCEDURE dbo.CheckEst
        IF OBJECT_ID('dbo.CheckEst') IS NOT NULL
            PRINT '<<>>'
        ELSE
            PRINT '<<>>'
    END
    Oracle Block after conversion:
    BEGIN
        IF NULL/*TODO:OBJECT_ID('dbo.CheckEst')*/ IS NOT NULL THEN
        BEGIN
            DROP PROCEDURE CheckEst;
            IF NULL/*TODO:OBJECT_ID('dbo.CheckEst')*/ IS NOT NULL THEN
                DBMS_OUTPUT.PUT_LINE('<<>>');
            ELSE
                DBMS_OUTPUT.PUT_LINE('<<>>');
            END IF;
        END;
        END IF;
    END;
    Lines 1 & 4 got converted to NULL. I have many places where such code is written.
    Is there any quick way to overcome such an issue? or what needs to be done in such case?

    Hi,
      You are using an older version of SQL*Developer.  Could you download the latest 4.0.2 version available from here -
    Oracle SQL Developer
    and check if you still have the problem ?
    Regards,
    Mike
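    If upgrading still leaves the TODO markers, one manual workaround is to rewrite the OBJECT_ID existence test against the Oracle data dictionary. A rough, hand-written sketch (CheckEst is the procedure name from the post, the '<<>>' messages are kept as the same placeholders, and DROP PROCEDURE has to go through EXECUTE IMMEDIATE inside PL/SQL):

    -- Rough Oracle equivalent of the Sybase "IF OBJECT_ID(...) IS NOT NULL ... DROP PROCEDURE" block
    DECLARE
      FUNCTION proc_exists RETURN BOOLEAN IS
        v_count PLS_INTEGER;
      BEGIN
        SELECT COUNT(*) INTO v_count
        FROM   user_objects
        WHERE  object_name = 'CHECKEST'   -- unquoted identifiers are stored in upper case
        AND    object_type = 'PROCEDURE';
        RETURN v_count > 0;
      END;
    BEGIN
      IF proc_exists THEN
        EXECUTE IMMEDIATE 'DROP PROCEDURE CheckEst';
        IF proc_exists THEN
          DBMS_OUTPUT.PUT_LINE('<<>>');   -- placeholder messages, as in the original block
        ELSE
          DBMS_OUTPUT.PUT_LINE('<<>>');
        END IF;
      END IF;
    END;
    /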

  • Migrating from non-logging Informix to Oracle (use of transactions)

    I wonder what happens when I migrate an Informix non-logging database to Oracle. Since my database doesn't use transactions (that is to say, a single DML statement defines a transaction, and begin/commit/rollback transactions are not used), the applications are written with no transaction philosophy. But Oracle is ANSI-compliant, so when I execute a DML statement a transaction begins and doesn't end until I issue a commit or log out. So, even if the migration is OK, my applications will create very large transactions.
    What should I do? Is there some parameter to configure Oracle in such a way that it creates single-DML transactions (I heard there's something like that in SQL*Plus, but I'm not sure)? Or should I rewrite the applications to send commits after every statement (the worst case, I think)?
    Thanks in advance
    Omar Muqoz

    Hi Omar,
    Without actually viewing the Client application code, I can only make general assumptions..
    You will have to change the client code anyway in order for it to work with Oracle e.g. Informix E/SQL -> Oracle Pro*C. The E/SQL Client code will have to be updated to reflect various changes in the DB environment; for example:
    1. The use of REF CURSORS (passing them back to the client code)
    2. Changes to the hardcoded Informix SQL statements to make them Oracle-friendly (especially OUTER joins if you are migrating to Oracle 8i)
    3. Altering any E/SQL code that dynamically builds SQL statements (to make sure these SQL statements are syntactically correct in the Oracle model).
    4. DB Connection methodologies.
    5. Changing Informix #include files to reference equivalent Oracle #includes
    6. Differences in date structs and how E/SQL and Pro*C handle dates (Oracle did not support milliseconds until 9i)
    7. Exception handling.
    8. Datatype changes between Informix and Oracle.
    Again, there is no simple solution. A migration project that migrated the DB and applications 'in tandem' would make it easier to remove logic from the client code and place it in the server (always a good thing), but this may not be feasible in your case.
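    On the question about a single-DML-per-transaction mode: the SQL*Plus setting the poster half-remembers is probably AUTOCOMMIT, and most client interfaces have a similar auto-commit switch; a minimal SQL*Plus illustration (this only changes the behaviour of the SQL*Plus session itself, not of the applications):

    -- Commit automatically after each successful DML statement
    SET AUTOCOMMIT ON
    -- Or commit after every 10 statements
    SET AUTOCOMMIT 10
    -- Show the current setting
    SHOW AUTOCOMMIT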
