Column-Oriented Oracle Database

Hi all,
I'm researching the concept of a Column-Oriented Database for Oracle. I have read that column-based databases can be much faster to query than a traditional row-based database (like Oracle), but are slower at inserting rows.
Does anyone know if there are any tools or packages that Oracle or a 3rd party provides that could make an Oracle instance act in a Column-Oriented way?
I need to stick with oracle since my ERP uses Oracle as the back end (accessed using PL-SQL packages).
I am currently on Oracle 9i.
Thanks,
Scott

Thanks Venkat...
The reason I am looking into Column-Oriented databases is faster querying. I do not want to aggregate the data; instead, I just want a copy made nightly (or weekly) that would allow people to query production data in a column-oriented way. That way, we get the best of both worlds:
1) Inserting new data using Oracle (using Traditional row-based methods)
2) Querying old data using an Oracle-like copy (using Column-based methods)
I know I can replicate the data; I just don't know if there is a database out there that can be used in a column-oriented way while still looking like Oracle to the ERP system that sits on top of it.
Thanks for any help anyone can give.
Scott
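One note on the nightly-copy half of this: plain Oracle (including 9i) will not store data column-oriented, but the scheduled copy itself can be done with a materialized view. A minimal sketch, with hypothetical names:

    -- Full nightly rebuild of a reporting copy, refreshed at midnight.
    -- NEXT is evaluated at each refresh, so this repeats daily.
    CREATE MATERIALIZED VIEW sales_copy
      REFRESH COMPLETE
      START WITH TRUNC(SYSDATE) + 1
      NEXT TRUNC(SYSDATE) + 1
    AS SELECT * FROM sales;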

Similar Messages

  • How to get JDBC remarks column info from an Oracle database

    How can I add a description to tables in an Oracle database that describes the table, and retrieve it from the JDBC remarks column?

    Perform the following Oracle SQL command:
    COMMENT ON TABLE MyTable IS 'This table is cool';
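    The comment can be read back from the data dictionary as shown below; on the JDBC side, as far as I recall the Oracle driver only fills in the REMARKS column of DatabaseMetaData.getTables() when remarks reporting is enabled on the connection (the remarksReporting connection property).

    -- read the comment back from the data dictionary (table name hypothetical)
    SELECT comments
    FROM   user_tab_comments
    WHERE  table_name = 'MYTABLE';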

  • Error in Loading a BLOB Column in Oracle Database from one schema to a different schema

    Hi ALL,
    I am in a POC where I have to load BLOB data from one schema to a different schema, i.e. from Staging to Target.
    I load my staging (Oracle schema) through Oracle PL/SQL. Now I have to load it into my ODS.
    I am using the LKM (LKM SQL to Oracle) and IKM (IKM Oracle Incremental Update PLSQL).
    It errors out in the 3rd step, 3-Loading-SS_0 Load Data.
    The script used and the error message are in the attachment. However, if I run the script manually it runs and the load happens successfully. I was also able to load the same BLOB objects if the tables were in the same schema (in that case the LKM is bypassed).
    Any Thoughts on this?
    The Error I receive is:
    java.lang.NumberFormatException: For input string: "4294967295"
         at java.lang.NumberFormatException.forInputString(Unknown Source)
         at java.lang.Integer.parseInt(Unknown Source)
         at java.lang.Integer.parseInt(Unknown Source)
         at oracle.jdbc.driver.OracleResultSetMetaData.getPrecision(OracleResultSetMetaData.java:303)
         at com.sunopsis.sql.SnpsQuery.getResultSetParams(SnpsQuery.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execCollOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlC.treatTaskTrt(SnpSessTaskSqlC.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    Thanks & Regards,
    Krishna

    Hi,
    Are you on the same database? If yes, it is better to grant SELECT on the source schemas to your Staging Area user.
    Then, in Topology, define all the physical schemas under the same Data Server.
    If you are on distinct databases, a good solution is to use a DB link (KM), because when a SQL-to-SQL KM is used the data goes through the agent, which converts it to Java types.
    I'm not sure if it can handle LOB fields... every time I needed this, I used it in the ways I described.
    Does that make any sense to you?
    Cezar
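    For the DB link route Cezar mentions, a minimal sketch (names and connect string hypothetical; his caveat about LOB support over links still applies):

    CREATE DATABASE LINK stg_link
      CONNECT TO staging IDENTIFIED BY staging_pwd
      USING 'STAGEDB';

    -- copy rows server-to-server, without routing the data through the agent
    INSERT INTO target_blob_table
      SELECT * FROM source_blob_table@stg_link;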

  • How to insert and retrieve zip files into a BLOB column in an Oracle database?

    This is the code to retrieve:
    while (rs.next()) {
        Blob fileBlobContent = rs.getBlob(1);
        String fileName = rs.getString(2);
        String value = rs.getString(3);
        long fileNum = new Long(value).longValue();
        // stream the BLOB contents straight to a file on disk
        java.io.InputStream is =
            ((oracle.sql.BLOB) fileBlobContent).getBinaryStream();
        FileOutputStream fos =
            new FileOutputStream("c:\\temp\\" + fileName);
        int c = -1;
        while ((c = is.read()) != -1) {
            fos.write(c);
        }
        fos.close();
        is.close();
    }
    This is the code to insert into the row after inserting an empty BLOB:
               // step 3 - now put the contents of the file into the BLOB
               // (the original looped over an empty scratch buffer; write the
               // actual data bytes, chunk by chunk, instead)
               byte[] dataBytes = data.getBytes();
               int length = 0;
               int buff_size = 1024;
               OutputStream outstream =
                    ((oracle.sql.BLOB) fileBlobContent).getBinaryOutputStream();
               int total = dataBytes.length;
               while ((length + buff_size) < total) {
                    outstream.write(dataBytes, length, buff_size);
                    length += buff_size;
               }
               // write remaining data
               int remaining = total - length;
               outstream.write(dataBytes, length, remaining);
               // steps 4-5: flush and close
               outstream.flush();
               outstream.close();
    Any help is greatly appreciated. Thanks!

    String localStrFile = importFormFileArg.getFileName();
    try {
         localStrFileExt = localStrFile.substring(localStrFile.indexOf(".") + 1);
         localInputStream = importFormFileArg.getInputStream();
         localBr = new BufferedReader(new InputStreamReader(localInputStream));
         /* Read line by line and count the records present */
         while ((localStrReadLine = localBr.readLine()) != null) {
              localIntRecords++;
              // localIsMandatoryPresent = isMandatoryFieldsPresent(localStrReadLine, ',');
              /* xls exports are tab-delimited; otherwise parse with the comma delimiter */
              if ("xls".equalsIgnoreCase(localStrFileExt)) {
                   localStk = new StringTokenizer(localStrReadLine, "\t");
              } else {
                   localStk = new StringTokenizer(localStrReadLine, ",");
              }
         }
    } catch (Exception localException) {
         localException.printStackTrace();
    }
    // Original poster's note: "here I am unable to get the records - please can anybody help me"
    Edited by: mulamahi on Oct 26, 2007 12:32 AM

  • How to insert and retrieve zip files into a BLOB column in an Oracle database?

    Hi All,
    I have a requirement where I need to insert zip files into a BLOB and retrieve them.
    Please suggest a good example or redirect me to one.

    You already have a post on this subject here: read and write compressed data from blob in ADF
    Please do not post duplicate questions.

  • How to find encrypted columns in oracle 10g database

    Hi,
    How do I find encrypted columns in an Oracle 10g database? We can see them using the views dba_encrypted_columns or all_encrypted_columns.
    My question is: apart from these, are there any other views or tables?
    Thanks..

    user602872 wrote:
    Hi,
    How to find encrypted columns in oracle 10g database? We can see using view dba_encrypted_columns or all_encrypted_columns .
    my question is apart from this is there anyother views or tables?
    Hmm, none that I could find:
    SQL> select * from dict where lower(table_name) like '%encrypted%';
    TABLE_NAME                  COMMENTS
    DBA_ENCRYPTED_COLUMNS       Encryption information on columns in the database
    ALL_ENCRYPTED_COLUMNS       Encryption information on all accessible columns
    USER_ENCRYPTED_COLUMNS      Encryption information on columns of tables owned by the user
    SQL>
    HTH
    Aman....
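    For completeness, those dictionary views answer the question directly; a quick sketch (column names as I recall them):

    select owner, table_name, column_name, encryption_alg, salt
    from   dba_encrypted_columns;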

  • Encrypting and decrypting a database column in Oracle 10g

    hi guys...
    I am Sai Sandeep. I have a doubt about how to encrypt a database column in Oracle 10g.
    I am using a table "emp_uid", and its structure is as follows:
    create table emp_uid(user_id varchar2(20), pwd varchar2(20));
    I need to encrypt the pwd column in emp_uid.
    How do I do it?
    Thanking you in advance...

    Ok, here's a basic example...
    SQL> create table myusers (username varchar2(30), password varchar2(40));
    Table created.
    SQL> create or replace procedure add_user(username in varchar2
      2                                      ,password in varchar2) is
      3  begin
      4    insert into myusers (username, password)
      5      values (add_user.username
      6             ,dbms_crypto.hash(utl_raw.cast_to_raw(add_user.username||'!'||add_user.password)
      7                              ,dbms_crypto.hash_sh1)
      8             );
      9    commit;
    10  end;
    11  /
    Procedure created.
    SQL> exec add_user('Fred','Fr3ddy')
    PL/SQL procedure successfully completed.
    SQL> select * from myusers
      2  /
    USERNAME                       PASSWORD
    Fred                           E5C975DB4C0A1CF65683E36421A6305F09F4EA9A
    SQL> set serverout on;
    SQL> create or replace procedure loginuser(username in varchar2
      2                                       ,password in varchar2) is
      3    v_hash     varchar2(40);
      4    v_username varchar2(30);
      5  begin
      6    v_hash := dbms_crypto.hash(utl_raw.cast_to_raw(loginuser.username||'!'||loginuser.password), dbms_crypto.hash_sh1);
      7    select username
      8    into   v_username
      9    from   myusers
    10    where  username = loginuser.username
    11    and    password = v_hash;
    12    dbms_output.put_line('User: '||v_username||' logged in.');
    13  exception
    14    when no_data_found then
    15      dbms_output.put_line('Username/Password is not valid!');
    16  end;
    17  /
    Procedure created.
    SQL> exec loginuser('Fred','Freddy');
    Username/Password is not valid!
    PL/SQL procedure successfully completed.
    SQL> exec loginuser('Fred','Fr3ddy');
    User: Fred logged in.
    PL/SQL procedure successfully completed.
    Ideally you would do the hashing of the password inside the client-side application so only the hashed value goes over the network, but the above demonstrates the principle of using hashes to store passwords. Because it's a one-way algorithm, only a brute-force method can be used to try to determine the original password; there is no way to directly un-hash the value. To check for a valid login, we don't retrieve the password and try to un-hash it to compare against what the user has supplied; we take what the user has supplied, hash it in the same way, and then compare the hashes.
    The point of including the username or some other data in the hashing process is that if two users have the same password, they will still have different hash values, so it won't be apparent that the passwords are the same. In my example, the point of putting another character between the concatenation of username and password is in case the username and password together would give the same result, e.g.:
    If we had one user "Fred" with password "Fr3ddy", then just concatenating the strings would give "FredFr3ddy".
    If we had another user "FredF" who happened to choose the password "r3ddy", then just concatenating those would also give "FredFr3ddy".
    By introducing a known breaking character they would be different, e.g. "Fred!Fr3ddy" and "FredF!r3ddy", and hence give different hash values.
    That's the basics of how passwords are stored for security. It would take a lot of processing power and brute-force effort just to determine a single password for a single user when hashing is used.
    With encryption, a brute-force method could be used to find the decryption key, and once found it could be used to decrypt ALL the encrypted data, hence it is less secure, especially when some clever person will no doubt have written the key down somewhere so they don't forget it. With hashing there's no key to write down.
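    If you genuinely need two-way encryption rather than hashing (for data you must read back), DBMS_CRYPTO can also encrypt and decrypt. A minimal sketch, with a hard-coded key purely for illustration (real code needs proper key management, plus EXECUTE on DBMS_CRYPTO):

    declare
      -- 16-byte key for AES-128; hard-coded here only for the demo
      v_key  raw(16) := utl_raw.cast_to_raw('0123456789abcdef');
      v_mode pls_integer := dbms_crypto.encrypt_aes128
                          + dbms_crypto.chain_cbc
                          + dbms_crypto.pad_pkcs5;
      v_enc  raw(2000);
    begin
      v_enc := dbms_crypto.encrypt(utl_raw.cast_to_raw('Fr3ddy'), v_mode, v_key);
      -- decrypt with the same key and mode to get the original value back
      dbms_output.put_line(
        utl_raw.cast_to_varchar2(dbms_crypto.decrypt(v_enc, v_mode, v_key)));
    end;
    /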

  • Column 'Blocked' of view v$instance in Oracle Database 10g

    Hi All,
    What is the description of column "blocked" of view "v$instance" in Oracle Database 10g?
    I could not find the information in Oracle® Database Reference
    10g Release 2 (10.2)
    Thanks and Regards,
    Vaibhav

    SQL> describe v$instance
    Name                            Null?    Type
    INSTANCE_NUMBER                          NUMBER
    INSTANCE_NAME                            VARCHAR2(16)
    HOST_NAME                                VARCHAR2(64)
    VERSION                                  VARCHAR2(17)
    STARTUP_TIME                             DATE
    STATUS                                   VARCHAR2(12)
    PARALLEL                                 VARCHAR2(3)
    THREAD#                                  NUMBER
    ARCHIVER                                 VARCHAR2(7)
    LOG_SWITCH_WAIT                          VARCHAR2(15)
    LOGINS                                   VARCHAR2(10)
    SHUTDOWN_PENDING                         VARCHAR2(3)
    DATABASE_STATUS                          VARCHAR2(17)
    INSTANCE_ROLE                            VARCHAR2(18)
    ACTIVE_STATE                             VARCHAR2(9)
    BLOCKED                                  VARCHAR2(3)
    SQL> select version from v$instance;
    VERSION
    10.2.0.2.0

  • Oracle Database Columns to PDF

    Hi
    I would like to select some columns from the Oracle database using a select statement and upload them into a PDF file. But in the Oracle database my column names are like SALES_FORECAST, FORECAST_ACCURACY, ... I don't want to use the same names; I would like to give descriptions like Sales Forecast, Forecast Accuracy, ...
    Where can I give these descriptions for the columns?
    My BLT is like this:
    1. PDF actions
    2. SQL Query
    3. PDF Table
    Thanks
    Madhu

    Hi Madhu,
    if you want to use spaces for your PDF table columns, you can use the following method.
    In your BLS, write an MII query to retrieve the values from your DB. Create an MII XML output document with all the columns you want to use in your PDF. Here you can use spaces in the column names that will be displayed in the PDF table.
    Then use a Repeater action to run through all the output rows of your query. Use a "Row" action from the MII XML Output to add the query rows to your XML output document. Finally, use the XML output document as the source for your PDF table.
    This way you can use the column names you like.
    Michael

  • Sequence on Oracle Database column

    Hi All,
    In my scenario I have to insert data/records into an Oracle database. The table has a primary key populated from a sequence (the sequence generates the next unique value for each new record). So how can I insert the records into that table? I have created a JDBC receiver message type for that particular table.
    Please let me know the possible ways.
    Thanks
    Praveen Kumar

    Using a stored procedure is one option. Write a stored procedure that takes the rest of the column values as inputs and then uses the sequence plus these values to insert the data into the database.
    Can you provide an example of how the insert query with the call to the sequence looks, so that a standard insert option can also be looked into?
    Regards
    Bhavesh
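    For reference, a minimal sketch of both options (all object names hypothetical):

    -- Option 1: plain insert that pulls the key straight from the sequence
    INSERT INTO orders (order_id, customer_name, amount)
    VALUES (orders_seq.NEXTVAL, 'ACME', 100);

    -- Option 2: a stored procedure that hides the sequence from the caller
    CREATE OR REPLACE PROCEDURE add_order(p_customer IN VARCHAR2,
                                          p_amount   IN NUMBER) IS
    BEGIN
      INSERT INTO orders (order_id, customer_name, amount)
      VALUES (orders_seq.NEXTVAL, p_customer, p_amount);
    END;
    /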

  • What is the best way to put LabVIEW DSC data into an Oracle database?

    I have been collecting data using LabVIEW DSC 7.0 for several years and have always accessed the data from the Citadel database via the Historical Data Viewer.  I would now like to begin putting this data into an Oracle database.  My company stores all their data in Oracle and it would provide me all the benefits of their existing infrastructure such as automated backups, data mining tools, etc.
    My initial thought is to use "Read Trace.vi" in LabVIEW to pull historical data from the Citadel database at regular intervals (e.g. every minute) and insert it into Oracle via ODBC. That way, I do not need to track value changes in order to know when to write to Oracle. I also considered replicating the Citadel database by some other method, but I recall that the tables used by Citadel are somewhat complicated. I only need a simple table with columns for channel, timestamp, and data, and "Read Trace.vi" provides data in this format.
    I do not need to update the Oracle database in real time, a few minute delay is acceptable. If anyone has a better idea or additional insight please let me know. Thanks.
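    To visualize the simple table described above, a minimal sketch (all names hypothetical; NUMBER works in place of BINARY_DOUBLE on older releases):

    -- one row per sample pulled from Citadel via Read Trace.vi
    CREATE TABLE dsc_history (
      channel    VARCHAR2(64) NOT NULL,   -- tag/channel name
      sample_ts  TIMESTAMP    NOT NULL,   -- sample timestamp
      value      BINARY_DOUBLE,           -- sampled value
      CONSTRAINT dsc_history_pk PRIMARY KEY (channel, sample_ts)
    );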

    In terms of connectivity, you want to use ADO, not ODBC. Beyond that, it all depends on the structure of the data and what you are going to want to do with it. This is a very big question for which you should get some in-depth assistance.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • CORBA: Using IOR with Aurora Visibroker in Oracle Database

    My problem concerns Oracle's session-oriented CORBA support:
    I would like to store an IOR in an Oracle Database in order to use
    the associated object later when I need it.
    If I have the IOR of an object instantiated from a class placed in the
    Name Service Context of an Oracle Database, is this IOR still valid outside
    its session once the session has finished (and if not, can I have
    an infinite session)? In order to use this object,
    must I always authenticate with a username and password first, or can I use
    the object directly after calling stringToIOR?
    I am using the CORBA implementation (Aurora) of Oracle 8i.
    Thank you very much, and excuse my English.

    Even if I create the table manually in the Oracle Database using VARCHAR2, when I copy the data and query the MySQL database it still shows spaces in the value column.

  • Query in timesten taking more time than query in oracle database

    Hi,
    Can anyone please explain why a query in TimesTen takes more time
    than the same query in the Oracle database?
    I will describe my settings and what I have done, step by step...
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
    id NUMBER(9) primary key,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2. THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE (TOTAL 2599999 ROWS):
    declare
    firstname varchar2(12);
    lastname varchar2(12);
    catt number(9);
    begin
    for cntr in 1..2599999 loop
    firstname:=(cntr+8)||'f';
    lastname:=(cntr+2)||'l';
    if cntr like '%9999' then
    dbms_output.put_line(cntr);
    end if;
    insert into student values(cntr,firstname, lastname);
    end loop;
    end;
    3. MY DSN IS SET THE FOLLOWING WAY:
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5. THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6. NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM QUERIES.
    I SET THE TIMING:
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7. NOW I PERFORM SIMILAR QUERIES FROM SQL*PLUS:
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time
    than the query in the Oracle database?
    Message was edited by: Dipesh Majumdar
    user542575
    Message was edited by:
    user542575

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE) (
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    )
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    where I uncomment a column at a time and rerun. I have improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
    No. of columns   Oracle   TimesTen
     1               1.05     0.94
     2               1.07     1.47
     3               2.04     1.8
     4               2.06     2.08
     5               2.09     2.4
     6               3.01     2.67
     7               4.02     3.06
     8               4.03     3.37
     9               4.04     3.62
    10               4.06     4.02
    11               4.08     4.31
    12               4.09     4.61
    13               5.01     4.76
    14               5.02     5.06
    15               5.04     5.25
    16               5.05     5.48
    17               5.08     5.84
    18               6        6.21
    19               6.02     6.34
    20               6.04     6.75

  • Import error on Oracle Database Express 10.2.0.1.0

    Hi,
    I am trying to import data from Oracle V10.01.02 running on SUSE 10 Linux to
    Oracle 10g (10.1.0.2.0) running on Windows Server 2003.
    I am able to import most of my data, but not all of it.
    The begin of my log file is:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0
    - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V10.01.00 via conventional path
    And after some time I receive:
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "SAFCI"."EMP"."NAME"
    (actual: 65, maximum: 64)
    Column 1 1000005025
    Column 2 ??????? ???????
    Column 3 19-SEP-2002:00:00:00
    Column 4 9089
    Column 5 ??. ?????
    Column 6 1.9828
    Column 7 377.77
    Column 8 75.55
    Column 9 ???????????? ???????? ? ??? ???? ? 32 ??.
    Column 10 19-SEP-2002:00:00:00
    Column 11 ?????? ?????, ?.?. 172747675, ???. 23.11.200?
    Column 12 ????????? ?????????
    Column 13 ? ????
    Column 14 T
    Column 15
    Column 16 F
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "SAFCI"."EMP"."NAME"
    (actual: 65, maximum: 64)
    Column 1 1000006408
    Column 2 ??????? ???????
    Column 3 05-NOV-2002:00:00:00
    Column 4 9089
    Column 5 ??. ?????
    Column 6 1.939
    Column 7 82
    Column 8 16.4
    Column 9 ?????????? ? ???? ???? ? 40 ??.
    Column 10 05-NOV-2002:00:00:00
    Column 11 ?????? ?????, ?.?. 172747675, ???. 23.11.200?
    Column 12 ????????? ?????????
    Column 13 ? ????
    Column 14 T
    Column 15
    Column 16 F 36943 rows imported
    I cannot understand this problem, because I exported the whole user and
    also tried to import the whole user into my new system.
    Please, can someone point me to a paper about this problem or help me
    solve it?
    Thanks
    configuration Oracle10g (EXP source)
    SQLWKS> select * from nls_database_parameters
    2>
    PARAMETER VALUE
    NLS_LANGUAGE BRAZILIAN PORTUGUESE
    NLS_TERRITORY BRAZIL
    NLS_CURRENCY R$
    NLS_ISO_CURRENCY BRAZIL
    NLS_NUMERIC_CHARACTERS ,.
    NLS_CHARACTERSET WE8ISO8859P1
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD/MM/RR
    NLS_DATE_LANGUAGE BRAZILIAN PORTUGUESE
    NLS_SORT WEST_EUROPEAN
    NLS_TIME_FORMAT HH24:MI:SSXFF
    NLS_TIMESTAMP_FORMAT DD/MM/RR HH24:MI:SSXFF
    NLS_TIME_TZ_FORMAT HH24:MI:SSXFF TZR
    NLS_TIMESTAMP_TZ_FORMAT DD/MM/RR HH24:MI:SSXFF TZR
    NLS_DUAL_CURRENCY Cr$
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_RDBMS_VERSION 10.1.0.2.0
    configuration Oracle10g Express (IMP destination)
    SQL> select * from nls_database_parameters;
    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET AL32UTF8
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_RDBMS_VERSION 10.2.0.1.0

    Hi
    Your import database uses a multibyte character set; your export DB uses a single-byte one.
    This means a character can need more than one byte.
    Try this before the import (and before creating the tables!!):
    alter system set nls_length_semantics=char;
    Greetings
    Sven
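    As a concrete illustration of the fix (table names hypothetical): with character-length semantics, VARCHAR2(64) means 64 characters rather than 64 bytes, so a multibyte value needing 65 bytes no longer overflows the column.

    alter session set nls_length_semantics = CHAR;
    create table emp_demo (name varchar2(64));        -- now 64 characters, not bytes
    -- equivalent per-column form, independent of the session setting:
    create table emp_demo2 (name varchar2(64 char));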
