Query on Oracle Database migration verifier

Hi,
I would like to know whether the migration verifier can be used to verify a data migration between two different Oracle databases, where the schema structures of the two databases are totally different.
Thanks,
Vijaya.

What "migration verifier" are you talking about?
I would be reasonably confident that no automated tool could verify data migration between different schema designs unless you told the tool about every mapping your ETL routine did. If you can do that, though, you can always roll your own queries.
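For example, rolling your own verification might start with row counts and column aggregates compared across a database link, one query per mapping. A minimal sketch, assuming a database link SRC_LINK to the source and a hypothetical mapping of source table SRC_ORDERS onto target table ORDERS:
-- row counts must match for a one-to-one mapping
SELECT (SELECT COUNT(*) FROM src_orders@src_link) AS src_rows,
       (SELECT COUNT(*) FROM orders)              AS tgt_rows
FROM dual;
-- a cheap content check: compare an aggregate over a migrated numeric column
SELECT (SELECT SUM(order_total) FROM src_orders@src_link) AS src_sum,
       (SELECT SUM(order_total) FROM orders)              AS tgt_sum
FROM dual;
Anything more precise (per-row checksums, key-by-key comparison) has to encode the same transformation rules your ETL applied.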
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC

Similar Messages

  • Oracle Database Migration Verifier - Can it be used for diff table structure

    Hello,
    We have re-engineered some of the existing Sybase tables to a new structure in Oracle. For example, one table in Sybase is normalized into two tables in Oracle. In these cases, can the "Oracle Database Migration Verifier" be configured such that the columns of one table in Sybase are mapped to two tables in Oracle with their respective column names?
    In a gist, can the tool be used even if the structure is not the same in the source and target databases?
    Please let me know if you need more clarifications regarding my query.
    Regards,
    Ramanathan.K

    Not really. The DMV was a simple tool for verifying that what you now had in Oracle was what you had in Sybase. It does not do what you are expecting.
    B
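    As an illustration of why the tool cannot do this for you: verifying a one-table-to-two-tables split is one hand-written query per mapping. A minimal sketch, assuming a database link SYB_LINK to the Sybase side and a hypothetical CUSTOMER table that was normalized into CUSTOMER and CUSTOMER_ADDRESS in Oracle:
    -- every source row should survive the split exactly once
    SELECT (SELECT COUNT(*) FROM customer@syb_link) AS src_rows,
           (SELECT COUNT(*)
              FROM customer c
              JOIN customer_address a ON a.customer_id = c.customer_id) AS tgt_rows
    FROM dual;
    Checks on the actual column values would follow the same pattern, joining the two Oracle tables back into the original shape.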

  • Oracle Database Migration Verifier

    I would like to ask a few questions.
    Is there any workaround to use this tool against an MS SQL Server 7.0 database?
    Is there any date or project planned to adapt this tool, using jTDS (or something else), to access SQL Server 7.0?
    Just in case...
    I would like to request support for MS SQL Server 7.0 in the Oracle Database Migration Verifier. Also, if this tool could be made VPD-aware, it would be quite an improvement, since it was almost natural for our data model to migrate databases to schemas and then use VPD to simplify management. That way, different column counts would not be treated as a warning or error.
    Thank you very much for your answer.

    Right now, the DMV is not really a supported tool. We put it up to help users who needed to independently verify that their tables and data were correct when they were migrated to Oracle. It does not do what you are asking.
    We will be moving this functionality into the SQL Developer Migration Workbench over time, and we should capture your requirements at that point.

  • Oracle Database Migration Verifier - Is it part of Oracle 10g Client or Server

    Hello,
    I have some restrictions on downloading the tool from the Oracle website.
    Can I know if this is part of the Oracle 10g suite, either Client or Server?
    Please let me know this useful information.
    Regards,
    Ramanathan.K

    It's not available anymore. We are rebuilding this as part of SQL Developer, and in the meantime we have withdrawn access to it.
    Thanks
    Barry

  • Query in timesten taking more time than query in oracle database

    Hi,
    Can anyone please explain why a query in TimesTen takes more time
    than the same query in the Oracle database?
    I will describe in detail my settings and what I have done,
    step by step:
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
      id NUMBER(9) PRIMARY KEY,
      first_name VARCHAR2(10),
      last_name VARCHAR2(10)
    );
    2. THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE (TOTAL 2599999 ROWS):
    declare
      firstname varchar2(12);
      lastname  varchar2(12);
      catt      number(9);   -- declared but never used in the original post
    begin
      for cntr in 1..2599999 loop
        firstname := (cntr+8)||'f';
        lastname  := (cntr+2)||'l';
        -- progress message roughly every 10,000 rows (relies on implicit
        -- number-to-string conversion in the LIKE comparison)
        if cntr like '%9999' then
          dbms_output.put_line(cntr);
        end if;
        insert into student values (cntr, firstname, lastname);
      end loop;
      commit;   -- not in the original block; added so the load persists
    end;
    /
    3. MY DSN IS SET THE FOLLOWING WAY:
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5. THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6. NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY.
    I SET THE TIMING:
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7. NOW I PERFORM SIMILAR QUERIES FROM SQL*PLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time
    than the query in the Oracle database?
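    One thing worth checking before comparing the two engines: neither side appears to have an index on FIRST_NAME (only the primary key on ID exists), so both queries scan all 2.6 million rows and the timings mostly measure scan speed. A minimal sketch of the obvious fix, with a hypothetical index name, to be created on whichever side you want to test:
    CREATE INDEX student_fn_idx ON student (first_name);
    With such an index in place, a single-row lookup on FIRST_NAME should be far faster on both databases.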

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    (
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    )
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken from Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    where I uncomment one column at a time and rerun. I have improved the TimesTen results since my first post by retyping the NUMBER columns as BINARY_FLOAT. The results I got were:
    No. of columns    Oracle    TimesTen
     1                1.05      0.94
     2                1.07      1.47
     3                2.04      1.80
     4                2.06      2.08
     5                2.09      2.40
     6                3.01      2.67
     7                4.02      3.06
     8                4.03      3.37
     9                4.04      3.62
    10                4.06      4.02
    11                4.08      4.31
    12                4.09      4.61
    13                5.01      4.76
    14                5.02      5.06
    15                5.04      5.25
    16                5.05      5.48
    17                5.08      5.84
    18                6.00      6.21
    19                6.02      6.34
    20                6.04      6.75

  • Oracle Database Migration Assistant for Unicode (DMU) is now available!

    Oracle Database Migration Assistant for Unicode (DMU) is a next-generation GUI migration tool to help you migrate your databases to the Unicode character set. It is free for customers with database support contracts. The DMU is built on the same GUI platform as SQL Developer and JDeveloper. It uses dedicated RDBMS functionality to scan and convert a database to AL32UTF8 (or to the deprecated UTF8, if needed for some reason). For existing AL32UTF8 and UTF8 databases, it provides a validation mode to check whether data is really encoded in UTF-8. Learn more about the tool on its OTN pages.
    There is a new Database Migration Assistant for Unicode forum. We encourage you to post all questions related to the tool, and to the database character set migration process in general, to that forum.
    Thanks,
    The DMU Development Team
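    As a point of reference before scanning or converting, the current character sets of a database can be checked with a simple query:
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');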

    Hi there!
    7.6.03? Why do you use outdated software for your migration?
    At least use 7.6.06 or 7.7.07!
    About the performance topic - well, you have to figure out what the database is waiting for.
    Activate time measurement, activate the DBanalyzer with a short snapshot interval (say 120 or 60 seconds), and check what warnings you get.
    Also, you should use the parameter check to make sure that you don't run into any setup-induced bottlenecks.
    Apart from these very basic prerequisites for the analysis of this issue, you may want to check
    SAP Note 1464560 FAQ: R3load on MaxDB
    Maybe you can use some of the performance features available in the current R3load versions.
    regards,
    Lars
    p.s.
    open a support message if you're not able to do the performance analysis yourself.

  • Step-by-step approach for "manual" Oracle Database Migration

    Hi All,
    I'm looking for a step-by-step approach for a "manual" Oracle database migration - basically, the steps followed in a real production DB implementation.
    I assume the steps may vary for different setups; however, any scenario-based docs would be of great help!

    The Oracle Upgrade Guide that comes with your download is, as far as I know, very detailed.

  • Store SQL Query in Oracle database

    Hello,
    I am storing SQL queries in the database. Here is an example of a query:
    INSERT INTO TABLE_1 Values (' " + getValue1() + " ', ' " + getValue2() + " ' )
    getValue1() and getValue2() are functions that retrieve values.
    When I read the query from the database and execute it, it doesn't evaluate the functions.
    After I read the query from the database I would expect it to be
    INSERT INTO TABLE_1 Values ('value1', 'value2'), but it does not fill in the values; it tries to execute exactly what it got from the database.
    Any help?
    Thanks

    > Thanks for your reply. Let me give you more info. All I am trying to do
    > is store queries in the Oracle database to use in the future for
    > reporting. As I am doing reporting, the where clause will be different
    > every time. Reporting involves queries, not updates.
    But in any case, that means you would use a Java SQL string that is usable via a prepared statement. As such your statement would look like...
    INSERT INTO TABLE_1 Values (?,?)
    > Is it not possible to substitute the variable string I got from the
    > database?
    What do you mean? SQL is SQL. You construct SQL so it is valid and runs. Can you construct SQL so it runs a function/select and uses that value in some more SQL? Depends on the database, but usually yes. But that has nothing to do with Java - it is SQL.
    > The query contains variables that I have in Java code.
    Then it is SQL and it is Java. You use bind variables (see PreparedStatement) and assign values to them.
    The SQL for such a query would look like the following...
    select field1, field2 from table1 where id=?
    In your Java you would then use something like PreparedStatement.setInt(1, 100) to fill in the value for the '?' in the above.
    You might also note that your solution will not work for an arbitrary SQL statement unless you are also storing metadata about the SQL itself. For instance, the following SQL statements would return the same result set, but your Java code would have to populate the bind variables differently for each...
    select field1, field2 from table1 where id=? and name=?
    select field1, field2 from table1 where name=? and id=?
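    To make the storage side concrete, here is a minimal sketch with hypothetical names: keep the parameterized text in a table and feed it to a PreparedStatement at runtime instead of concatenating values into the string.
    CREATE TABLE stored_queries (
      query_name  VARCHAR2(30) PRIMARY KEY,
      query_text  VARCHAR2(4000) NOT NULL
    );
    INSERT INTO stored_queries (query_name, query_text)
    VALUES ('find_by_id', 'select field1, field2 from table1 where id = ?');
    The Java side then reads QUERY_TEXT, prepares it, and binds the values for the placeholders.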

  • Querying an Oracle Database Using XSL

    I know from years of experience as an Oracle DBA that to suggest that a program generate SQL is anathema, but I'm going to ask anyway!
    Is there currently, or will there soon be, a means of submitting XSL (or, perhaps, some variant based on the most recently published "standard") to an Oracle database and receiving an XML result set?
    Note that I have not mentioned writing any SQL myself. I'm either looking for some middleware to translate the XSL "query" into the appropriate SQL query (with appropriate after-query transformations) or for the database engine itself to be able to parse and optimize XSL.
    I just read a position paper (dated 1998) on this site that says that Oracle intends to closely follow developments in the world of XML Query. And, I've read in a "Byte" article ("Where Are The XML Apps We Can See And Touch? Report From XML DevCon 2001", April 2001, Jon Udell) that Steve Muench had demonstrated some means of super-imposing an XML schema over top of a database in order to query it via XPath. Where does the product stand in regards to this type of functionality?
    Thanks in advance,
    David Park
    [email protected]

    For your reference:
    1/ XML SQL Utility (XSU) in the XDK for Java (also with a PL/SQL API)
    http://technet.oracle.com/tech/xml/xdk_java
    - The XSU can transform data retrieved from object-relational database tables or views into XML.
    - The XSU can extract data from an XML document and, using a canonical mapping, insert the data into the appropriate columns/attributes of a table or a view.
    - The XSU can extract data from an XML document and apply this data to updating or deleting values of the appropriate columns/attributes.
    2/ The documentation for XMLType and DBUri (available in Oracle9i).
    http://otn.oracle.com/docs/products/oracle9i/doc_library/901_doc/appdev.901/a88894/adx05xml.htm#1012692
    3/ Oracle Text also has XPath-based search.
    http://otn.oracle.com/docs/products/oracle9i/doc_library/901_doc/text.901/a90122/csectio4.htm#33034
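    For completeness: in Oracle9i and later, the DBMS_XMLGEN package offers a purely server-side way to get an XML result set from a SQL query. A minimal sketch, assuming the classic EMP demo table:
    SELECT DBMS_XMLGEN.GETXML('SELECT ename, sal FROM emp WHERE ROWNUM <= 2') AS result
      FROM dual;
    This still requires writing the SQL yourself; the XSL-to-SQL translation layer asked about above is a separate problem.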

  • Problem querying an Oracle Database using PHP

    Hi there, I'm rather new to Oracle, so pardon my ignorance if this is completely obvious, but I am creating a web form that queries an Oracle database. First is a user form, which looks like this:
    Search for Investments
    <form action="query.php" method="POST">
    <input type="text" name="invest">
    <input type="submit" value="Search" name="submit" />
    </form>
    -----------------------
    and the php script, which looks like this
    -----------------------
    <?php
    print "<HTML><PRE>";
    require_once('connect.php');
    select_data($conn);
    $invest = $_POST['invest'];
    echo "Returning results related to $invest";
    function select_data($conn)
    { $stmt = ociparse($conn,"select * from investmentbase where INVTYPE='$invest'");
    ociexecute($stmt,OCI_DEFAULT);
    echo "<table border = 1 class=results>";
    echo "<tr>";
    echo "<td>Reference Number</td>";
    echo "<td>User ID</td>";
    echo "<td>Investment Type</td>";
    echo "<td>Investment Description</td>";
    echo "<td>Amount Required</td>";
    echo "<td>Due Date<td>";
    echo "</tr>";
    while (ocifetch($stmt)){
    echo "<tr>";
    echo "<td>";
    echo ociresult($stmt,"INVREF")." ";
    echo "</td>";
    echo "<td>";
    echo ociresult($stmt,"USERID")." ";
    echo "</td>";
    echo "<td>";
    echo ociresult($stmt,"INVTYPE")." ";
    echo "</td>";
    echo "<td>";
    echo ociresult($stmt,"INVDESCRIPTION")." ";
    echo "</td>";
    echo "<td>";
    echo ociresult($stmt,"AMOUNTREQUIRED")." ";
    echo "</td>";
    echo "<td>";
    echo ociresult($stmt,"REQUIREDATE");
    echo "</td>";
    echo "</tr>";
    }
    echo "</table>";
    }
    ocilogoff($conn);
    print "</PRE></HTML>";
    ?>
    When I search for a value that is in the database, no results are returned; however, the form is definitely posting the variable, as the echo statement in the PHP script displays it. Any ideas would be gratefully appreciated.
    Many Thanks
    Paul

    Do you need to set $invest before calling select_data() and pass it in as a parameter? Also, watch out for SQL injection security risks. Try using a bind variable:
    $invest = $_POST['invest'];
    echo "Returning results related to $invest";
    select_data($conn, $invest);
    function select_data($conn, $invest)
    {
      $stmt = ociparse($conn,"select * from investmentbase where INVTYPE=:ibv");
      ocibindbyname($stmt, ':ibv', $invest);
      ociexecute($stmt,OCI_DEFAULT);
      . . .
    Although the PHP 4 naming style for oci8 functions can be used with PHP 5, there is a possibility you are using the PHP 4 OCI8 extension. If you are, then upgrade at least OCI8. There are some notes on this in
    Re: frustrations with oci_fetch_array()
    -- cj

  • IBM DB2 to Oracle Database Migration Using SQL Developer

    Hi,
    We are doing a migration of a whole database from IBM DB2 8.2, which is running on Windows, to an Oracle 11g database on Linux.
    As part of the prerequisites we have installed Oracle SQL Developer 4.0.1 (4.0.1.14.48) on the Linux server with JDK 1.7, and established a connection with the Oracle database.
    Questions:
    1) How can we enable third-party database connectivity in SQL Developer? I have copied the files db2jcc.jar and db2jcc_license_cu.jar from the IBM DB2 (Windows) machine to the Oracle (Linux) server.
    2) Are these JAR files universal drivers? Will they work on the Linux platform?
    3) I have a DB2 full-privileged schema named "assistdba". Shall I create a new user with the same name "assistdba" in the Oracle database and grant it the DBA privilege? (This is for repository creation.)
    4) We have around 35 GB of data in DB2; shall I proceed with ONLINE CAPTURE during the migration?
    5) Do you have any approximate estimate of the time needed to migrate 35 GB of data?
    6) In case of any issue during the migration activity, can we get support from the Oracle team (we have a valid support ID)?
    7) What are the necessary test cases to confirm a VALID migration?
    Request you to share the relevant Metalink documents!
    Kindly guide us in order to go ahead with a successful migration.
    Thanks in advance!
    Nagu
    [email protected]
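    On question 3: one common approach is a dedicated repository user rather than reusing "assistdba" with the DBA role. A minimal sketch, with a hypothetical user name and password; the exact grants needed vary by SQL Developer version, so check its documentation:
    CREATE USER migrep IDENTIFIED BY migrep_pwd
      DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
    GRANT CONNECT, RESOURCE, CREATE VIEW TO migrep;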

    Hi Klaus,
    Continuing from the above posts - now we are doing another database migration from IBM DB2 to Oracle, this time with much less data (e.g., 20 tables and 22 indexes).
    As with the previous database migration, we have done the prerequisite steps in SQL Developer:
    Created the migration repository
    Connected with the created user in SQL Developer
    Captured the source database
    Converted the captured model to Oracle
    Before the translation phase, clicked on the "Proceed Summary"
    Captured Database Objects and Converted Database Objects have been created under the PROJECT section.
    While checking the status of the captured and converted database objects, it shows the chart below as a sample:
    OVERVIEW
    PHASE     TABLE DETAILS    TABLE PCT
    CAPTURE   20/20            100%
    CONVERT   20/20            100%
    COMPILE   0/20             0%
    TARGET STATUS (every entry is an index in schema TRADEIN1; in the original output each object name was wrapped in a SQL Developer drill-down link):
    DESC_OBJECT_NAME  SCHEMANAME  OBJECTNAME   STATUS
    INDEX             TRADEIN1    ARG_I1       Missing
    INDEX             TRADEIN1    H0INDEX01    Missing
    INDEX             TRADEIN1    H1INDEX01    Missing
    INDEX             TRADEIN1    H2INDEX01    Missing
    INDEX             TRADEIN1    H3INDEX01    Missing
    INDEX             TRADEIN1    H4INDEX01    Missing
    INDEX             TRADEIN1    H4INDEX02    Missing
    INDEX             TRADEIN1    H5INDEX01    Missing
    INDEX             TRADEIN1    H7INDEX01    Missing
    INDEX             TRADEIN1    H7INDEX02    Missing
    INDEX             TRADEIN1    MAPIREP1     Missing
    INDEX             TRADEIN1    MAPISWIFT1   Missing
    INDEX             TRADEIN1    MAPITRAN1    Missing
    INDEX             TRADEIN1    OBJ_I1       Missing
    INDEX             TRADEIN1    OPR_I1       Missing
    INDEX             TRADEIN1    PRD_I1       Missing
    INDEX             TRADEIN1    S1TABLE01    Missing
    INDEX             TRADEIN1    STMT_I1      Missing
    INDEX             TRADEIN1    STM_I1       Missing
    INDEX             TRADEIN1    X0IAS39      Missing
    We see only "Missing" in the chart, and we have no option to trace it in the log file.
    Only after the status is VALID can we proceed with the translation and migration phases.
    Kindly help us with how to approach this issue now.
    Thanks
    Nagu

  • Oracle Database Migration 10g  between Cross Platform...!

    Hi,
    I would like to know whether there is any third-party tool available in the market for easy cross-platform database migration between AIX 10g R2 and Windows 10g R2.
    There used to be a third-party tool called DBPUT, which is no longer available. I am looking for something of a similar type.
    Any suggestions, please?
    Regards,

    Hi,
    Do it yourself; I suggest you use export/import.
    Master Note For Oracle Database Upgrades and Migrations (Doc ID 1152016.1)
    Export/Import DataPump: The Minimum Requirements to Use Export DataPump and Import DataPump (System Privileges) (Doc ID 351598.1)
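    As a rough illustration of the export side, here is a minimal sketch using the DBMS_DATAPUMP PL/SQL API (the expdp command line is equivalent). DP_DIR is a hypothetical directory object that must already exist on the server:
    -- run as a privileged user; assumes: CREATE DIRECTORY dp_dir AS '/some/path';
    DECLARE
      h         NUMBER;
      job_state VARCHAR2(30);
    BEGIN
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'FULL');
      DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'full_db.dmp', directory => 'DP_DIR');
      DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'full_db.log', directory => 'DP_DIR',
                             filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
      DBMS_OUTPUT.PUT_LINE('Export finished in state: ' || job_state);
    END;
    /
    The matching import runs on the target with operation => 'IMPORT'.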

  • Oracle Database migration to Exadata

    Dear Folks,
    I have a requirement to migrate our existing Oracle Database to Exadata Machine. Below is the source & destination details:
    Source:
    Oracle Database version 11.1.0.6 and Oracle DB 11.2.0.3
    Non-Exadata Server
    Linux Enivrionment
    DB Size: 12TB
    Destination:
    Oracle Exadata 12.1
    Oracle Database 12.1
    Linux Environment
    System downtime of 24-30 hours would be available.
    Kindly clarify below:
    1. Do we need to upgrade the source database (either 11.1 or 11.2) to 12c before migration?
    2. Any upgrade activity after migration?
    3. Which migration method is best suited in our case?
    4. Things to be noted before migration activity?
    Thanks for your valuable inputs.
    Regards
    Saurabh

    Saurabh,
    1. Do we need to upgrade the source database (either 11.1 or 11.2) to 12c before migration?
    This would help if you wanted to drop the database in place, since it would allow either a standby database to be used (which reduces downtime) or a backup and recovery to move the database as-is onto the Exadata. It does not, however, give you the chance to put in place things that could help you on the Exadata, such as additional or adjusted partitioning, Advanced Compression, and HCC compression.
    2. Any upgrade activity after migration?
    If you upgrade the current environment first, then there would not be additional work. However, if you do not, you will need to explore a few options depending on your requirements and desires for your Exadata.
    3. Which migration method is best suited in our case?
    I would suggest some conversations with Oracle and/or a trusted firm that has done a few Exadata implementations, to explore your migration options and what would be best for your environment, as that depends on a lot of variables that are hard to cover completely in a forum. At a high level, when moving to Exadata I typically recommend setting up the database to utilize the features of the Exadata for best results. The Exadata migrations I have done thus far used Golden Gate: we examine the partitioning of tables, partition the ones that make sense, and implement Advanced Compression and HCC compression where they make sense. This gives us an environment that fits the Exadata, rather than dropping an existing database in place, though that also works very well. Doing it with Golden Gate eliminates the migration issues from the database version difference, as well as other potential migration issues, and offers the most flexibility. Golden Gate will also keep your downtime way down and give you the opportunity to ensure the upgrade/implementation is smooth through real-workload testing. But be aware that Golden Gate has a licensing cost, so it may not work for you.
    4. Things to be noted before migration activity?
    Again, I would suggest some conversations with Oracle and/or a trusted firm that has done a few Exadata implementations. In short, keep in mind that Exadata is a platform with advantages no other platform can offer; while a drop-in-place migration works and does bring improvements, it is nothing compared to the improvements possible if you plan well and implement the features Exadata has to offer. The use of Real Application Testing (Database Replay) and Flashback Database will allow you to implement the features, test them with a real workload, and tune them well before production day, so you can be nearly 100% confident that you have a well-running, well-tuned system on the Exadata before going live. The use of Golden Gate gives you an in-sync database while you run many workload replays on the Exadata without losing the sync, giving you the time and ability to test different workload partitioning and compression options. Very nice flexibility.
    Hope this helps...
    Mike Messina

  • Hyperion Oracle database migration

    Hi,
    We have Hyperion 11.1.2.0 running on Oracle Database 11.1. We want to upgrade the Oracle database from 11.1 to 11.2. Is that possible, and will there be an impact on Hyperion, which is running fine now?
    Regards,
    Ragav.

    John,
    Thanks for your response. The document does not talk about migrating the data from the old DB to the new DB server.
    1. Should the data be migrated for all schemas to the new DB server before the reconfiguration?
    2. From the doc - If you updated the datasource configuration for any deployed Web applications, rerun EPM System Configurator and select the Deploy to Application Server task for every Web Application with updated database connection properties. Alternatively, you can manually update the datasource configuration using WebLogic Administration Console.
    So once Configure Database is done, the applications should be redeployed to the application server using the "Deploy to Application Server" task. Is that right?
    Regards,
    Ragav.

  • Changes to be made in EP server during Oracle database migration

    Hi,
    Currently we have a scenario where both our Oracle database 10.2 and our NW 7.0 server are on the same system, a 32-bit Windows 2003 machine. Due to memory constraints, the DB team is planning to migrate the database to another system running a 64-bit Unix OS.
    Now, before they migrate the Oracle database from the current system to the new system, I wanted to know which parameters and changes have to be carried out on the NW/EP end so that, after the database is moved, EP points to the same Oracle database on the new system and runs smoothly.
    Thanks in Advance,
    Regards,
    Vipin.

    > Currently we have a scenario where both our Oracle database 10.2 and our NW 7.0 server are on the same system, a 32-bit Windows 2003 machine. Due to memory constraints, the DB team is planning to migrate the database to another system running a 64-bit Unix OS.
    This is a heterogeneous migration (you change the operating system), which requires a certified migration consultant. If you do it on your own, you will lose support for the target system (see http://service.sap.com/osdbmigration).
    > Now, before they migrate the Oracle database from the current system to the new system, I wanted to know which parameters and changes have to be carried out on the NW/EP end so that, after the database is moved, EP points to the same Oracle database on the new system and runs smoothly.
    You will have to use the Java export/import method, not copying database files over.
    Check the official system copy guides.
    Markus
