Tables in TimesTen 11

Why is the 'tables' command not yielding the expected output in TimesTen 11? If it is deprecated, what is the alternative?
Thanks...

The 'tables' command is not deprecated in 11g but its functionality has changed due to the major changes to the security model in 11g.
In TimesTen 11g, the ttIsql 'tables' command only lists tables owned by the current user (i.e. the user as whom you are connected to TimesTen).
If you have suitable privileges, the new command 'alltables' behaves similarly to how the 'tables' command behaved in previous versions.
Also, depending on your privileges, you can query the various metadata views such as ALL_OBJECTS, DBA_OBJECTS, USER_OBJECTS etc.
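For example, in a ttIsql session (privileges permitting):
Command> tables;
Command> alltables;
Command> select owner, object_name from all_objects where object_type = 'TABLE';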
Chris

Similar Messages

  • Unable to create Entity objects for tables in TimesTen database using ADF

    Hi,
    I am not able to create Entity and View objects for tables in a TimesTen database using ADF. I have installed the TimesTen client on my machine.
    I have created a database connection using connection type "Generic JDBC", giving the driver class and JDBC URL. I am attaching a screenshot of the same.
    I am right-clicking on the Model project and selecting the New option; after that I am selecting ADF Business Components, and in it I am selecting Business Components from Tables, and there I am querying for tables. I am getting the list of tables, but when I try to create an Entity object from a table, after clicking Finish, JDeveloper closes by itself giving an error.
    Can anyone please help me with how to create Entity objects for tables using TimesTen as the database? I might be missing some JARs, the way I am creating the connection might be wrong, or some plugin may be required to connect to TimesTen.

    What is the actual error being given by Jdev? Are you sure that the JDBC connection is using the TimesTen JDBC driver JAR and not some other JDBC driver or the Generic JDBC/ODBC bridge?
    Is ADF even supported with TimesTen?
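    For reference, the TimesTen JDBC driver class is com.timesten.jdbc.TimesTenDriver, and a client/server URL takes the form jdbc:timesten:client:dsn=my_tt_dsn (the DSN name here is illustrative). The driver lives in the TimesTen JDBC JAR (e.g. ttjdbc6.jar) in the installation's lib directory; if a different driver or JAR is being picked up, odd failures like this are plausible.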
    Chris

  • Way to get the actual size of a table in TimesTen 70512

    hi gurus
    Is there a way to obtain the number of bytes actually occupied by a table in TT version 70512 (not ttSize), or a
    corresponding view like DBA_SEGMENTS in Oracle?
    Thanks

    Hi KevinMao,
    You can use the SYS.MONITOR table (PERM_IN_USE_SIZE attribute). You should read note 956830.1 on My Oracle Support about monitoring database and table size in TimesTen.
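    For example, from ttIsql (this reports space for the whole database, not per table; values are in KB):
    Command> select perm_allocated_size, perm_in_use_size from sys.monitor;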

  • Alter table in TimesTen

    Hi,
    I need some suggestions on changing a column's data type for an existing table. I understand that "The format of TimesTen columns cannot be altered. It is possible to add or remove columns but not to change column definitions. To add or remove columns, use the ALTER TABLE statement. To change column definitions, an application must first drop the table and then recreate it with the new definitions."
    The schema for the table is below:
    CREATE TABLE CHQ_DICT_INT (
         CHQINTID          TT_SMALLINT NOT NULL PRIMARY KEY,
         CHQINTTEXT        VARCHAR2(256) NOT NULL,
         TT_AUTOTIMESTAMP  BINARY(8)
    );
    Now we are in a situation wherein the column type of CHQINTID needs to be changed from TT_SMALLINT to TT_INTEGER. Since it is a primary key, NOT NULL column, is there a way to still alter the column at runtime? The idea is not to stop the application from accessing the table.
    And is it also possible to modify the column definition if it is not a primary key and is nullable, as in:
    CREATE TABLE CACHE (
         MDN TT_BIGINT NOT NULL PRIMARY KEY,
         CHQINTID TT_SMALLINT,
         ROLE CHAR(1),
         TT_AUTOTIMESTAMP BINARY(8)
    );

    I'm afraid that for both cases, the only way to do this currently in TimesTen is to:
    1. Unload the table's data to an external file using ttBulkCp.
    2. Drop the table and re-create it with the correct column definitions.
    3. Reload the table's data using ttBulkCp.
    Of course this requires application downtime, stopping and dropping (and then re-creating/redeploying) replication etc.
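    A minimal sketch of that cycle for the CHQ_DICT_INT table above (DSN and file path are illustrative):
    ttBulkCp -o DSN=mydsn CHQ_DICT_INT /tmp/chq_dict_int.dat
    Command> drop table chq_dict_int;
    Command> create table chq_dict_int (
    chqintid tt_integer not null primary key,
    chqinttext varchar2(256) not null,
    tt_autotimestamp binary(8));
    ttBulkCp -i DSN=mydsn CHQ_DICT_INT /tmp/chq_dict_int.dat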
    Chris

  • Query in TimesTen taking more time than query in Oracle database

    Hi,
    Can anyone please explain why a query in TimesTen takes more time than the same query in the Oracle database?
    Below I describe my settings and what I have done, step by step.
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
    id NUMBER(9) PRIMARY KEY,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2. This is the anonymous block I used to populate the STUDENT table (2,599,999 rows in total):
    declare
      firstname varchar2(12);
      lastname varchar2(12);
      catt number(9);
    begin
      for cntr in 1..2599999 loop
        firstname := (cntr + 8) || 'f';
        lastname := (cntr + 2) || 'l';
        -- print a progress marker every 10,000 rows
        if cntr like '%9999' then
          dbms_output.put_line(cntr);
        end if;
        insert into student values (cntr, firstname, lastname);
      end loop;
      commit; -- make the inserted rows durable
    end;
    /
    3. My DSN is set the following way:
    Data store path: G:\dipesh3repo\db
    Log directory: G:\dipesh3repo\log
    Perm data size: 1000
    Temp data size: 1000
    My TimesTen version:
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    Then I connect to the TimesTen database:
    C:\Documents and Settings\dipesh> ttisql
    Command> connect "dsn=dipesh3;oraclepwd=tiger";
    4. Then I set the cache credentials and start the cache agent:
    Command> call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> call ttCacheStart();
    5. Then I create the read-only cache group and load it:
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6. Now I can access the tables from TimesTen and run queries. I turn timing on:
    Command> timing 1;
    Consider this query:
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7. Now I run similar queries from SQL*Plus:
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
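    (One thing worth checking, offered here as a hedged note rather than part of the original post: both TimesTen queries filter on FIRST_NAME, which has no index in the cache group definition, so each query scans all 2.6 million rows. An index on the filter column should cut the TimesTen times dramatically:
    Command> create index student_fn on student (first_name);
    )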

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken from Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    I uncomment one column at a time and rerun. I have improved the TimesTen results since my first post by retyping the NUMBER columns as BINARY_FLOAT. The results I got (times in seconds) were:
    Columns   Oracle (s)   TimesTen (s)
    1         1.05         0.94
    2         1.07         1.47
    3         2.04         1.80
    4         2.06         2.08
    5         2.09         2.40
    6         3.01         2.67
    7         4.02         3.06
    8         4.03         3.37
    9         4.04         3.62
    10        4.06         4.02
    11        4.08         4.31
    12        4.09         4.61
    13        5.01         4.76
    14        5.02         5.06
    15        5.04         5.25
    16        5.05         5.48
    17        5.08         5.84
    18        6.00         6.21
    19        6.02         6.34
    20        6.04         6.75

  • Are there any TimesTen installations for a data warehouse environment?

    Hi,
    I wonder if there is a way to install TimesTen as an in-memory database for a data warehouse environment?
    The DW today consists of a large Oracle database, and I wonder if and how a TimesTen implementation could be done,
    and what kind of application changes such an implementation would involve.
    I know the answer is probably complex, but if anyone knows about such an implementation and has some information about it, it would be great to learn from that experience.
    Thanks,
    Adi

    Adi,
    It depends on what you want to do with the data in the TimesTen database. If you know the "hot" dataset that you want to cache in TimesTen, you can use Cache Connect to Oracle to cache a subset of your Oracle tables into TimesTen. The key is to figure out what queries you want to run and see if the queries are supported in TimesTen.
    Assuming you know the dataset you need to cache and you have control of your application code to change the connection to TimesTen (using ODBC or JDBC), you can give it a try. If you are using a third party tool, you need to see if the tool supports JDBC or ODBC access to the database and change the tool to point to your TimesTen database instead of the Oracle database.
    If you are using the TimesTen Cache Connect to Oracle product option, data synchronization between Oracle and TimesTen is handled automatically by the product.
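    As a minimal sketch of the Cache Connect approach (schema, table, and column names are illustrative, not from the thread):
    create readonly cache group dw_hot autorefresh
    interval 10 minutes from dw.sales_summary
    (sale_date date not null,
    region_id tt_integer not null,
    total_amt binary_float,
    primary key (sale_date, region_id));
    load cache group dw_hot commit every 1000 rows;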
    Without further details of what you'd like to do, it's difficult to provide more detailed recommendation.
    -scheung

  • The following SQL works in 11g but not in TimesTen

    This is a simple test on the SH.SALES table in Oracle 11g; you can easily do it yourself.
    I created a SALES table in TimesTen in the image of SH.SALES, as follows:
    Command> desc sales;
    Table HR.SALES:
    Columns:
    PROD_ID NUMBER NOT NULL
    CUST_ID NUMBER NOT NULL
    TIME_ID DATE NOT NULL
    CHANNEL_ID NUMBER NOT NULL
    PROMO_ID NUMBER NOT NULL
    QUANTITY_SOLD NUMBER (10,2) NOT NULL
    AMOUNT_SOLD NUMBER (10,2) NOT NULL
    I spooled out the rows (comma-separated) from 11g to a flat file and used ttBulkCp to import them into TimesTen:
    ttbulkcp -i -s "," DSN=ttdemo1 SALES /var/tmp/abc.log
    So far so good
    Command> select count(1) from sales;
    < 918843 >
    Run the following code in Oracle 11g
    select cust_id "Customer ID",
    count(amount_sold) "Number of orders",
    sum(amount_sold) "Total customer's amount",
    avg(amount_sold) "Average order",
    stddev(amount_sold) "Standard deviation"
    from sales
    group by cust_id
    having sum(amount_sold) > 94000
    and avg(amount_sold) < stddev(amount_sold)
    order by 3 desc;
    Customer ID Number of orders Total customer's amount Average order Standard deviation
    11407 248 103412.66 416.986532 623.479751
    10747 256 99578.09 388.976914 601.938312
    42167 266 98585.96 370.62391 592.079099
    4974 235 98006.16 417.047489 625.670115
    12783 240 97573.55 406.556458 591.6785
    6395 268 97010.48 361.979403 577.991448
    2994 227 94862.61 417.89696 624.53793
    429 231 94819.41 410.473636 615.038404
    1743 238 94786.13 398.26105 582.26845
    9 rows selected.
    The same code in TimesTen fails
    Command> select cust_id "Customer ID",
    count(amount_sold) "Number of orders",
    sum(amount_sold) "Total customer's amount",
    avg(amount_sold) "Average order",
    stddev(amount_sold) "Standard deviation"
    from sales
    group by cust_id
    having sum(amount_sold) > 94000
    and avg(amount_sold) < stddev(amount_sold)
    order by 3 desc;
    2811: Not a group by expression
    The command failed.
    Any ideas what is causing this problem?
    Thanks

    I suppose we can write our own user defined function to work out STDDEV in TT.
    Command> select sqrt((sum(power(AMOUNT_SOLD * 1.0, 2)) - (COUNT(1) * POWER(AVG(AMOUNT_SOLD * 1.0), 2)))/(COUNT(1) -1)) FROM SALES;
    < 273.172955207024629817079833788659174255 >
    1 row found.
    In Oracle 11g on the same table (1 million rows) it returns
    SQL> set timing off
    SQL> set numwidth 20
    SQL> select sqrt((sum(power(AMOUNT_SOLD * 1.0, 2)) - (COUNT(1) * POWER(AVG(AMOUNT_SOLD * 1.0), 2)))/(COUNT(1) -1)) FROM SALES;
    SQRT((SUM(POWER(AMOUNT_SOLD*1.0,2))-(COUNT(1)*POWER(AVG(AMOUNT_SOLD*1.0),2)))/(COUNT(1)-1))
    273.1729552070246298
    SQL> select STDDEV(AMOUNT_SOLD) from SALES;
    STDDEV(AMOUNT_SOLD)
    273.1729552070246298
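    Pending such a function, the same expression can also be grouped to approximate the original query (a sketch, untested on this version, and assuming every customer has more than one sale so COUNT(1)-1 is never zero):
    Command> select cust_id, count(amount_sold), sum(amount_sold), avg(amount_sold),
    sqrt((sum(power(amount_sold*1.0,2)) - count(1)*power(avg(amount_sold*1.0),2))/(count(1)-1))
    from sales
    group by cust_id
    having sum(amount_sold) > 94000
    and avg(amount_sold) < sqrt((sum(power(amount_sold*1.0,2)) - count(1)*power(avg(amount_sold*1.0),2))/(count(1)-1))
    order by 3 desc;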
    Any volunteers to write this user defined function?
    Cheers

  • Partitioning support in TimesTen

    Does Oracle TimesTen support partitioning?
    If so, how can I add a partitioned table to a TimesTen cache group?
    And does it give any performance improvement in Oracle TimesTen as well?

    A partitioned table in Oracle is a single logical table with its data spread across several physical partitions based on some partitioning scheme. The partitioning is invisible to regular SQL access (i.e. when you insert, update, delete or query the table, your SQL does not need to know anything about the underlying partitioning). The TimesTen caching mechanism uses regular SQL for its synchronisation operations, so the partitioning is invisible to us also. The partitioned table is cached as a single table in TimesTen (subject to any WHERE clause used in the cache group definition or LOAD statement). Any inserts will be inserted into the cached table in TimesTen. When they are propagated to Oracle they will be inserted there using a regular SQL INSERT statement, and the partitioning layer in Oracle will make sure they go into the correct partition.
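    For example (owner, table and column names are illustrative), the cache group definition references only the logical table, optionally with a WHERE clause to cache a subset of rows; whether the Oracle table is partitioned makes no difference to this DDL:
    create readonly cache group part_cg autorefresh
    interval 5 minutes from owner1.part_tab
    (id tt_integer not null primary key,
    region_id tt_integer)
    where (owner1.part_tab.region_id < 100);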
    Regards,
    Chris

  • TimesTen evaluation questions

    Hi,
    We are evaluating TimesTen.
    Our motivation is not performance, but rather network problems: in case of network problems, we wish the client to keep working against a local TimesTen cache, and hopefully merge it with the main DB server once the network recovers.
    We'd appreciate hints on the following:
    1. We heard TimesTen (or extensions of it) can be configured to persist data on disk instead of keeping it only in memory.
    Could anyone please point us to the appropriate documentation on how to configure this?
    2. Does TimesTen support stored procedures?
    3. Does TimesTen support sequences?
    Thanks very much.

    When thinking about this kind of issue you need to understand that TimesTen is a complete database in its own right. When it is acting as a 'cache' for Oracle, the caching should be thought of more as a form of replication to/from Oracle. TimesTen sequences are local to TimesTen; they are not a 'cached copy' of an Oracle sequence, and they are therefore not coordinated with Oracle sequences. You cannot realistically cache a table in TimesTen such that it is writeable in both TimesTen and Oracle; this would be a form of multi-master replication with all the attendant problems. If you are inserting data into an AWT cache group in TimesTen and are using a sequence to generate the key, then the sequence value generation occurs only in TimesTen. As long as the underlying tables are not also being inserted into directly in Oracle (which they should not be) or via a different TT cache (a configuration that you should avoid), there is no problem with uniqueness. In this case there is no issue with network outages.
    TimesTen sequences, along with everything else, are documented in the comprehensive TimesTen documentation set (see the SQL Reference for details on sequences).
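    For example (table and sequence names are illustrative), a TimesTen-local sequence generating keys for a cached table looks like this:
    Command> create sequence order_seq;
    Command> insert into app.orders (id, item) values (order_seq.nextval, 'widget');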
    Chris

  • Unable to create cache groups from CASE-SENSITIVE Oracle table names

    Hello All
    I have some case-sensitive tables in an Oracle database, and their columns are case-sensitive too. I've tried to cache these tables into TimesTen under a read-only cache group. I think TimesTen cannot find the case-sensitive tables, because as soon as I changed the table names the creation succeeded. What can I do to overcome this issue? I don't want to lose the case-sensitive names. Is it because I'm using an old version of TimesTen (11.2.1.4.0)?

    Hi Chris,
    Thanks for your answer. I'm using SQL Developer (both the GUI and the command worksheet) to manage the TimesTen DB. When I'm about to select the root table for the cache group I can see the table, but when I select it the caching procedure cannot be completed and it says the table does not have a primary key; you can see below that this is not true, as the table has a two-column primary key. When I try to create the cache group via a command in the worksheet, the error is "TT5140: could not find HLR.SUBSCRIBER. may not have privileges".
    in Oracle:
    CREATE TABLE "HLR"."Subscriber"
    "SSI" NUMBER(10,0) NOT NULL ENABLE,
    "CCNC" VARCHAR2(50 BYTE) NOT NULL ENABLE,
    "Code" VARCHAR2(128 BYTE) DEFAULT NULL NOT NULL ENABLE,
    "Account" NVARCHAR2(32),
    "Mnemonic" NVARCHAR2(15),
    "Region" NVARCHAR2(32),
    "UserAddress" NVARCHAR2(32),
    "Name" NVARCHAR2(32) NOT NULL ENABLE,
    "VPNCode" NUMBER(10,0),
    "VPNCCNC" VARCHAR2(50 BYTE),
    "SubOrgId" NUMBER(10,0),
    "SubscriberTypeId" NUMBER(2,0) DEFAULT 5 NOT NULL ENABLE,
    "StatusId" NUMBER(2,0) DEFAULT 1 NOT NULL ENABLE,
    "SubscriberClass" NUMBER(2,0),
    "DefinedIpAddressId" NUMBER(10,0),
    CONSTRAINT "Subscriber_PK" PRIMARY KEY ("SSI", "CCNC") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE,
    CONSTRAINT "FK_DefinedIpAdd_Subscriber" FOREIGN KEY ("DefinedIpAddressId") REFERENCES "HLR"."DefinedIPAddress" ("Id") ENABLE,
    CONSTRAINT "Fk_Status_Subscriber" FOREIGN KEY ("StatusId") REFERENCES "HLR"."Status" ("Id") ENABLE,
    CONSTRAINT "Fk_SubOrg_Subscriber" FOREIGN KEY ("SubOrgId") REFERENCES "HLR"."SubOrganization" ("Id") ENABLE,
    CONSTRAINT "Fk_SubscriberType_Subscriber" FOREIGN KEY ("SubscriberTypeId") REFERENCES "HLR"."SubscriberType" ("Id") ENABLE,
    CONSTRAINT "Fk_VPN_Subscriber" FOREIGN KEY ("VPNCode", "VPNCCNC") REFERENCES "HLR"."VPN" ("SSI", "CCNC") ENABLE
    in TimesTen:
    CREATE READONLY CACHE GROUP "PRO1"
    AUTOREFRESH MODE INCREMENTAL INTERVAL 5 MINUTES
    STATE PAUSED
    FROM "HLR"."Subscriber"
    "SSI" NUMBER(10,0) NOT NULL ,
    "CCNC" VARCHAR2(50 BYTE) NOT NULL ,
    "Code" VARCHAR2(128 BYTE) NOT NULL ,
    "Account" NVARCHAR2(32),
    "Mnemonic" NVARCHAR2(15),
    "Region" NVARCHAR2(32),
    "UserAddress" NVARCHAR2(32),
    "Name" NVARCHAR2(32) NOT NULL ,
    "VPNCode" NUMBER(10,0),
    "VPNCCNC" VARCHAR2(50 BYTE),
    "SubOrgId" NUMBER(10,0),
    "SubscriberTypeId" NUMBER(2,0) DEFAULT 5 NOT NULL ,
    "StatusId" NUMBER(2,0) DEFAULT 1 NOT NULL ,
    "SubscriberClass" NUMBER(2,0),
    "DefinedIpAddressId" NUMBER(10,0),
    PRIMARY KEY("CCNC","SSI")
    );
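    (A hedged aside, not from the original thread: the error text shows the name folded to upper case, HLR.SUBSCRIBER, so it is worth checking that every reference to the table is double-quoted exactly as it is in Oracle, and that the Oracle-side cache administration user has been granted access to it, e.g.:
    GRANT SELECT ON "HLR"."Subscriber" TO cacheadmin;
    where cacheadmin stands for whatever cache administration user you configured.)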

  • Updates executed in Oracle don't appear in TimesTen

    Hello. I'm evaluating the Oracle + TimesTen cache group feature and ran into the following problem. I ran the init scripts on Oracle, created both Oracle and TimesTen users (they have the same names and passwords), created a test table (named SUB), a grid, and a global dynamic asynchronous writethrough cache group from this table. Then I perform an insert and see the changes in Oracle. But when I perform an insert on the Oracle side, I don't see the inserted data from TimesTen. Here is the full set of commands I perform to create the cache group.
    As instance administrator I create a user (tengri) and a cache administrator (tengriadm):
    Command> create user tengri identified by tengri;
    User created.
    Command> create user tengriadm identified by tengriadm;
    User created.
    Command> GRANT CREATE SESSION, CACHE_MANAGER, CREATE ANY TABLE TO tengriadm;
    As the cache manager user (tengriadm) I create the grid and cache group, start replication, and attach the node to the grid:
    Command> call ttCacheUidPwdSet('tengriadm','tengriadm');
    Command> call ttGridCreate('ttGrid');
    Command> call ttGridNameSet('ttGrid');
    Command> CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH GLOBAL CACHE GROUP ttGroup
    FROM tengri.sub (
    id integer not null primary key,
    msisdn varchar(32) not null,
    name varchar (64),
    balance integer,
    short_num varchar (32),
    address varchar (512),
    type integer not null,
    is_locked integer not null,
    charging_profile_id integer,
    cos_id integer,
    currency_id integer,
    language_id integer
    );
    Command> call ttRepStart;
    Command> call ttGridAttach(1,'tt1','10.50.3.160',5001);
    As instance administrator:
    Command> grant select, update, insert, delete on tengri.sub to tengriadm;
    As the cache manager user I perform an insert:
    Command> insert into tengri.sub values (1, '1', '1', 1, '1', '1', 1, 1, 1, 1, 1, 1);
    1 row inserted.
    And check it on Oracle:
    select count(*) from sub;
    It returns 1.
    Then I perform an insert on Oracle:
    insert into tengri.sub values (2, '2', '1', 1, '1', '1', 1, 1, 1, 1, 1, 1);
    And perform select on TimesTen
    Command> select count(*) from tengri.sub;
    < 1 >
    1 row found.
    Still one row. My first thought was a commit issue, but I double-checked on Oracle, executed a commit explicitly, and still don't see any changes in TimesTen. Could anyone clarify this behavior?
    Thanks in advance!

    For local dynamic cache groups, dynamic load is only possible for SQL statements that include a full key equality predicate on a defined primary key or foreign key. For statements that meet these requirements TimesTen will first try to access the requested data locally. If no matching rows are found then it will try Oracle. If the data is found in Oracle it will be retrieved, inserted into the cache tables in TimesTen (ready for the next access to it), and then the application SQL will execute against the local data and return its result. This works for SELECT, UPDATE, INSERT and DELETE operations.
    For global dynamic cache groups, TimesTen tracks, at a cache instance level, whether data is already present somewhere in the grid (and if so where it is located). For SQL that qualifies for dynamic load, if no matching rows are found locally then they will be retrieved (moved) from another grid member (if the data is already present in the grid), but if not they will be loaded dynamically from Oracle as for the local cache group case.
    In the current implementation of Cache Grid, Oracle serves many functions and the concept is of a 'cache' grid. It is not currently possible to use Cache Grid without a backend Oracle database. This may change in a future release.
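    For example, with the cache group above, this statement carries a full primary-key equality predicate, qualifies for dynamic load, and would fetch the row inserted on Oracle (a sketch based on the poster's schema):
    Command> select * from tengri.sub where id = 2;
    By contrast, select count(*) from tengri.sub has no key predicate, does not qualify for dynamic load, and therefore only sees rows already cached locally.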
    Chris

  • How to monitor errors in TimesTen

    Hello DB experts,
    I have cached one table in TimesTen and I am inserting records into it; those records are successfully inserted in TimesTen. We generate the records' keys through a sequence.
    These data are not replicated to Oracle. It has been observed from the tterrors.log file that it is giving a "unique constraint violation" error.
    Now my doubt is: how does the end user know that these records failed and were not replicated to Oracle? The application does not throw any errors.
    Is there any way I can find out that my Oracle table and cache table are not in sync?
    Thanks.

    It would help if you told us a bit about your setup. It sounds like you have an AWT cache group? If so, then it is not possible for TimesTen to return an error to the application, since the propagation to Oracle is asynchronous and occurs after the commit of the transaction in TimesTen. As well as the TimesTen daemon log, there is also a file <databasename>.awterrs in the same directory as the checkpoint files which will have details of any errors encountered by AWT.
    The error you refer to suggests that the data in TimesTen and Oracle are not the same, or that you have not created the same set of indexes/constraints in TimesTen as you have in Oracle. Or maybe you are inserting/updating this data in both TimesTen and Oracle? You should review your setup and usage to make sure you are conforming to all best practices.
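    For example (DSN and path are illustrative), you can watch the replication agent status and the AWT error file with:
    ttRepAdmin -showstatus mydsn
    more <checkpoint_dir>/<databasename>.awterrs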
    Chris

  • Oracle TimesTen with Oracle Apex

    I use Oracle 11gR2 and Apex 4.2.4. How can I use Oracle TimesTen with Oracle Apex?
    I want to use TimesTen to improve the performance of a report by adding two tables to Oracle TimesTen.

    As far as I am aware, recent (11g onwards) releases of Oracle Heterogeneous Services do not work with TimesTen, as they now require an ODBC 3.x driver and the TimesTen driver is currently 2.0. Even if they did work, this would not be a useful solution. It might allow Apex to access TimesTen after a fashion (though that is far from certain), but the performance would be very poor due to all the network hops and software layers between the application and TimesTen.
    If you put one or two tables in TimesTen, then one problem from an Apex perspective is that it is now dealing with two databases: the TimesTen cache containing two tables, and the Oracle database containing all the other tables. Is Apex designed to cope with this? Does it have the concept of data located in multiple databases where one of them is not the Oracle database? Also, do you need transactions or queries (joins) that span the TimesTen tables and the tables in the Oracle DB? If so then this also will not work, as that is not possible today.
    I have to say that as far as Apex goes I think this is likely a non-starter. However, if you do try it and have any success then please do post the results here, as we'd be interested to hear about it.
    Chris

  • Suggestions required for read-only cache group in TimesTen IMDB Cache

    Hi
    In IMDB Cache, if the underlying Oracle RAC database has two schemas ("KAEP" and "AAEP", with the same structure and the same object names), I want to create a read-only cache group with an active/standby pair in TimesTen.
    Schema KAEP: tables Abc1, Abc2, Abc3
    Schema AAEP: tables Abc1, Abc2, Abc3
    Can a read-only cache group be created using a UNION ALL query?
    The result set of the cache group should contain records from both schemas; will that be possible in a TimesTen read-only cache group?
    Will there be any performance issue?

    You cannot create a cache group that uses UNION ALL. The only 'query' capability in a cache group definition is to use predicates in the WHERE clause, and these must be simple filter predicates on the tables in the cache group.
    Your best approach is to create separate cache groups for these tables in TimesTen and then define one or more VIEWS using UNION ALL in TimesTen in order to present the tables in the way that you want.
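    A sketch of that approach, using the table names from the post (the cache group DDL for each table is elided):
    create view all_abc1 as
    select * from kaep.abc1
    union all
    select * from aaep.abc1;
    The application then queries ALL_ABC1 as if it were a single table.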
    Chris

  • TimesTen and 4TB database

    Hi,
    I am investigating this product for use with a 4 TB client/server OLTP Oracle database. We plan to migrate to an n-tier architecture (Oracle iAS in the middle) and plan on keeping the Oracle database. How will TimesTen help in this scenario? Will I have to store the entire 4 TB in TT on the middle tier? How will updates on the backend be replicated to TT? In real time? How much memory will I require to store that much data in TT?
    For now I am looking for some high-level feedback on how TT actually works in my scenario.
    Thank you

    In this kind of scenario, typically you will cache just selected tables, or subsets of tables, in TimesTen in order to improve response times for key parts of the application. Before you can decide how, or if, TimesTen can help, you need to determine which parts of the application you need to speed up and whether the current bottleneck(s) relate to database access and/or the network hops related to database access. Then you should evaluate what data would need to be cached in TimesTen to enable those parts of the application to run against TimesTen rather than Oracle. You also need to evaluate what is involved in moving the application, and its SQL, from Oracle to TimesTen. What API(s) does the application use to communicate with Oracle? Is any PL/SQL involved? Any esoteric database types? You should also perform some basic validation to confirm that, for your data and queries, TimesTen provides a useful speed-up over Oracle.
    Chris

Maybe you are looking for

  • Consuming a Webservice & 'bapi_transaction_commit' in adobe form

    Hi All, I am working on a scenario wherein i am consuming a webservice in an offline adobe form. This webservice is a function module exposed as webservice. Now if i attempt to use following statement in my Function Module: call function 'bapi_transa

  • Merge the Attachment Files from DMS into smart forms

    Hi Gurus, I have a requirement of merging documents and printing it on a smart form. I need to read the active files of SAP DMS. It could be DOC or PDF or Image. I will have a smart form with some data to be printed related to the document(may be 2 p

  • Apple TV is weak for Flickr

    pretty sure i will be returning the apple tv i bought yesterday because they really did a lame job with the interface.... login for youtube but not flickr?... and the interface did include login/id based access in the previous version?! basically pat

  • IDOC_INPUT_INVOIC_MRM URGENT ERROR plz help

    hello expert im using IDOC_INPUT_INVOIC_MRM to create Idoc. im facing a problem. the docnum in EDIDC and EDIDD is not generated. it always equal to 00000000000000 so the program cant create the idoc. do you have any idea plzz, it s urgent im facing t

  • New document type to be copied from old document type

    Dear Sir i have created one document in document type draw no i want to delete that document from draw and i have to copy its all document to my new docmuent type Z09 please guide how can i do it. regards kunal