TimesTen scalability

I have a question concerning my research. Is there a way to connect to more than one TimesTen database or TimesTen Cache using a single connection? The situation is that I have several TimesTen instances on different nodes and don't want to know which database holds the data; I just want to ask for the data and have the particular instance or node on which it resides answer. Is there some kind of mechanism that tracks which node the data is on, or do I have to ask each node whether it has the data?

Cache grid delivers a number of benefits and is the platform for further developments in the future. The main benefit in this release is horizontal scalability of both processing and storage for middle-tier cache use cases based on commodity hardware.
There is not really any concept of load balancing here as such. An application will usually be co-located with the TimesTen datastore so that it can use the very high performance direct connection mode, but it is also possible for the application to be located on a different machine and connect to TimesTen via client/server. In either case, the application connects to a specific datastore within the grid. All SQL operations that the application executes are sent to the datastore to which it is connected.
If the SQL operation touches TT tables and/or tables in a local cache group then the operation is executed completely locally (although depending on configuration and usage it might involve the need to dynamically load some [not yet] cached data from Oracle).
If the operation touches tables in one or more global cache groups then it may execute completely locally (if all necessary data is already present in, and owned by, the local datastore) or it may require some data to be dynamically loaded from another grid member or from Oracle (or both). Once the data has been brought to the local datastore, the query becomes a local-only query. TimesTen does not at this time support any form of distributed query or distributed transaction (at least not without an external transaction manager).
Local execution in TimesTen is very fast, while dynamic loading of data is relatively slow. Therefore, in order to deliver high levels of overall performance, applications using cache grid need to exhibit a high degree of 'node affinity' in their data access. Put another way, although global cache groups allow any node to access any data at any time, the cost of excessive data movement diminishes overall performance. The best scenario is one where application connections are distributed such that the data naturally partitions itself, so that most accesses are local-only with just a small percentage needing a dynamic load (at steady state).
Attaching and detaching grid nodes is an explicit action initiated by calling built-in procedures within TimesTen. It would normally be performed by a DBA or a management program of some kind, based on changes in overall workload or the need to replace failed hardware, etc.
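For illustration, here is roughly what that plumbing looks like in ttIsql (a sketch based on the 11.2.1 cache grid built-ins; the grid name, member name, host and port below are hypothetical, and exact signatures may vary by release):
-- One-time, as the cache administration user: create the grid
call ttGridCreate('myGrid');
-- On each datastore that is to become a grid member:
call ttGridNameSet('myGrid');
call ttGridAttach(1, 'node1', 'host1', 5001);  -- join the grid as member 'node1'
-- Tables shared across the grid live in dynamic global cache groups:
CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH GLOBAL CACHE GROUP g_orders
FROM scott.orders (order_id NUMBER NOT NULL PRIMARY KEY, status VARCHAR2(10));
-- Detach again, e.g. before decommissioning the node:
call ttGridDetach;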
Chris

Similar Messages

  • TimesTen and minimal logging

    As a matter of interest, does TimesTen support minimal logging operations at the database level? This would mean that TT contents are lost when TT goes down unexpectedly. The advantage would be that fully logged operations would not be required, thus potentially speeding up TimesTen.
    Thanks,
    Mich

    TimesTen used to support a 'no logging' mode (Logging=0) and a 'diskless logging' mode (Logging=2), but both have been removed in recent releases. Although both sounded attractive for special use cases, the limitations inherent in both modes meant they were of very limited use in reality, and having to support them complicated the log manager code and was an impediment to implementing things like the parallel log manager introduced in TimesTen 11.2.1. Basically, delivering maximum performance, scalability and functionality for mainstream use cases was deemed more important than supporting niche options of limited usability.
    Chris

  • TimesTen permsize is triple Oracle's tablespace file

    (1) I created an Oracle 11gR2 instance, a user dev, and its tablespace.
    dev's tablespace size: 576,724,992 bytes (about 476,800 records in the main tables)
    (2) I then created a TimesTen instance to cache dev's tables.
    TimesTen data store file size: 1783951360
    TimesTen perm memory size: 1723348 KB (PERM_IN_USE_SIZE of SYS.MONITOR)
    Two questions:
    (1) Why on earth does TimesTen triple the data size for the very same data?
    (2) How can I estimate the TimesTen perm size needed for the very same data from Oracle? Do I simply triple it?

    It depends... Variable-length types (VARCHAR2/NVARCHAR2/VARBINARY) may be stored INLINE or NOT INLINE. If stored INLINE then they always occupy the maximum declared size. This is not good from a storage perspective but is very good from a performance and scalability perspective. If they are stored NOT INLINE then they occupy a fixed overhead plus the actual size of the data stored. NOT INLINE is better for space efficiency if the values are typically reasonably large, but for small values the overheads are quite high. By default, columns declared with a size <= 128 bytes are stored INLINE and anything declared larger is stored NOT INLINE, but there is explicit syntax to control this on a per-column basis. See the TimesTen SQL Reference for more details.
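    For example, a minimal sketch of that per-column syntax (the table and column names here are hypothetical):
    CREATE TABLE dev.doc (
      id    NUMBER NOT NULL PRIMARY KEY,
      code  VARCHAR2(64),                -- <= 128 bytes: stored INLINE by default
      note  VARCHAR2(200) INLINE,        -- forced INLINE: fast, but always occupies the full 200 bytes
      body  VARCHAR2(4000) NOT INLINE    -- forced out-of-line: fixed overhead plus actual data size
    );
    For estimating perm space, the ttSize utility can scale a populated table to a target row count, e.g. ttSize -tbl dev.mytable -rows 476800 myDSN (a sketch; check the utility's options for your release).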
    Chris

  • Query in TimesTen taking more time than query in Oracle database

    Hi,
    Can anyone please explain why a query in TimesTen takes more time than the same query in the Oracle database?
    Below I describe in detail, step by step, my settings and what I have done.
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
    id NUMBER(9) PRIMARY KEY,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2. This is the anonymous block I used to populate the student table (2,599,999 rows in total):
    declare
      firstname varchar2(12);
      lastname varchar2(12);
    begin
      for cntr in 1..2599999 loop
        firstname := (cntr+8)||'f';
        lastname := (cntr+2)||'l';
        if cntr like '%9999' then
          dbms_output.put_line(cntr);
        end if;
        insert into student values (cntr, firstname, lastname);
      end loop;
    end;
    3. My DSN is set up the following way:
    DATA STORE PATH: G:\dipesh3repo\db
    LOG DIRECTORY: G:\dipesh3repo\log
    PERM DATA SIZE: 1000
    TEMP DATA SIZE: 1000
    My TimesTen version:
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    Then I connect to the TimesTen database:
    C:\Documents and Settings\dipesh> ttisql
    command> connect "dsn=dipesh3;oraclepwd=tiger";
    4. Then I start the cache agent:
    Command> call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> call ttCacheStart();
    5. Then I create the read-only cache group and load it:
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6. Now I can access the tables from TimesTen and run queries.
    I enable timing:
    Command> timing 1;
    Consider this query now:
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    Another query:
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7. Now I run similar queries from SQL*Plus:
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time than the query in the Oracle database?

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken from Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    where I uncomment one column at a time and rerun. I have improved the TimesTen results since my first post by retyping the NUMBER columns as BINARY_FLOAT. The results I got (times in seconds) were:
    No. of columns   Oracle   TimesTen
    1                1.05     0.94
    2                1.07     1.47
    3                2.04     1.8
    4                2.06     2.08
    5                2.09     2.4
    6                3.01     2.67
    7                4.02     3.06
    8                4.03     3.37
    9                4.04     3.62
    10               4.06     4.02
    11               4.08     4.31
    12               4.09     4.61
    13               5.01     4.76
    14               5.02     5.06
    15               5.04     5.25
    16               5.05     5.48
    17               5.08     5.84
    18               6        6.21
    19               6.02     6.34
    20               6.04     6.75

  • Can I configure TimesTen not to hold all data in memory?

    In our production environment we use a server with 32 GB of memory to run TimesTen, but the developer machines only have 2 GB of memory. We now need to fix an issue that requires duplicating the production data to a developer machine. Can we configure TimesTen not to hold all data in memory, so that it is possible to duplicate the production data to the developer machine? Does TimesTen support something like the cached tables in HSQLDB?
    http://hsqldb.sourceforge.net/web/hsqlFAQ.html#BIGRESULTS

    TimesTen is an in-memory database. All the data that is managed directly by TimesTen must be 'in memory' and the machine hosting the datastore must have enough physical memory or system performance will be severely degraded.
    The only way to have a combined in-memory and on-disk solution is to introduce a backend Oracle database (10g or 11g) and use TimesTen Cache Connect. Of course, this then becomes a very different configuration, and depending on what 'issue' it is that you need to investigate and fix, the change to a Cache Connect configuration may hinder that investigation.
    Chris

  • Difference between scalable and failover cluster

    Difference between scalable and failover cluster

    A scalable cluster is usually associated with HPC clusters, though some might argue that Oracle RAC is this type of cluster. In a scalable cluster, the workload can be divided up and sent to many compute nodes. It is usually used for a vectored workload.
    A failover cluster is one where a standby system or systems are available to take over the workload when needed. It is usually used for scalar workloads.

  • TimesTen DB Installation problem on Windows 7 OS

    Hi,
    I tried installing TimesTen DB version 7.0 on the Windows 7 OS. During installation I get an 'unable to create registry key' error.
    Because of this error I am not able to install the DB. I need help resolving it.
    I also want to know whether TimesTen supports Windows 7.

    Windows 7 is not currently officially supported, but I routinely use TimesTen 7.0 and 11.2.1, both 32 and 64 bit, on Windows 7 (64-bit) with no issues.
    What version of Windows 7 are you using, and is it 32 or 64 bit? Are you trying to install 32 or 64 bit TimesTen 7.0? Also, why are you not using TimesTen 11.2.1?
    Did the error give any more information (such as which registry key)? Are you running the install as an Administrator? Have you disabled any anti-virus or other security software prior to running the install?
    Chris

  • How to make scalable text button

    I'm learning Captivate at the moment and I cannot understand this at all...
    I want to create a text button. Captivate allows NO options to change the grey text box or blue rollover effect. So I found out that the only way to do it is to create your own image button, BUT Captivate does not allow Illustrator import, thus the images are not scalable. This means that if I want to create text buttons of various widths, the image has to be stretched, and this looks awful.
    Someone please tell me that my only option is not to create a separate image for every instance of text due to variance in width...
    Please help.
    I tried calling the Adobe Technical Support line... I even spoke with a specialist. They did not speak English well and kept telling me over and over to import my own image... but that does not solve the problem; it still stretches!!! They basically said there is no way around this. I cannot believe this to be the case... Even PowerPoint allows text button customization...
    Thank you

    The look and feel of the default text button object in Captivate is actually controlled by your operating system. So if you look at the same course on a Windows 2000, Windows XP, and Windows 7 box, the buttons will all look slightly different. This is one reason that Captivate gives you the option to use other button types such as image buttons, which you can also create for yourself if you are handy with an image editor. There is currently no import facility for vector art from Illustrator or other similar programs.
    Since the OS controls the default text button appearance, you don't have a lot of control over things like the corner radius etc., because in the OS this might actually be square (e.g. in Win2000).
    It sounds more to me like what you REALLY want is to be able to use some of the Captivate text caption types as if they were buttons. Some of these have rounded corners. Alternatively, you can create a rounded rectangle using the drawing tool and configure the corner radius and background colour any number of ways. The only things you'd be missing are rollover effects (doable with rollover captions) and interactivity. Lilybiri suggested that you can just place an invisible click box over the top of the caption to make it work like a button. The user will never know that it's the click box instead of the caption that they're clicking.
    If you want even more options for interactivity triggered by other mouse events such as mouse over and mouse out, you can make almost any Captivate screen object interactive using the Event Handler widget from Infosemantics.
    Free trial versions are available for download if you feel like playing.

  • TimesTen lock issue

    We are facing a TimesTen lock timeout issue in a client environment very frequently, but it does not happen in our local test environment even though we have tried many times to reproduce it. Below is the error message:
    An exception occured while inserting data to USER_DATA_SATGING_TABLE : [TimesTen][TimesTen 11.2.1.8.0 ODBC Driver][TimesTen]TT6003: Lock request denied because of time-out
    Details: Tran 199.4005 (pid 11169) wants Xni lock on rowid BMUFVUAAABPBAAAOjo, table DATA_STAGING_TABLE. But tran 194.2521 (pid 11169) has it in X (request was X). Holder SQL (DELETE FROM USER_DATA_STAGING_TABLE WHERE STATUS= 'COMPLETE') -- file "table.c", lineno 15230, procedure "sbTblStorLTupSlotAlloc":false
    SYS.SYSTEMSTATS output on lock details:
    < lock.locks_granted.immediate, 5996359, 0, 0 >
    < lock.locks_granted.wait, 0, 0, 0 >
    < lock.timeouts, 8, 0, 0 >
    < lock.deadlocks, 0, 0, 0 >
    < lock.locks_acquired.table_scans, 10256, 0, 0 >
    < lock.locks_acquired.dml, 4, 0, 0 >
    After rolling back the transaction using ttXactAdmin, everything runs fine again without any issue, but the problem recurs 2-3 days later.
    Previous lock information from the ttXactAdmin command:
    Program File Name: java
    PID    Context      TransID    TransStatus  Resource  ResourceID           Mode  SqlCmdID     Name
    9291   0x16907fd0   198.6732   Active       Database  0x01312d0001312d00   IX    0
                                                Command   1056230912           S     1056230912
           0x16988810   199.4371   Active       Database  0x01312d0001312d00   IX    0
                                                Row       BMUFVUAAABxAAAADD9   X     1055528832   USER.USER_DATA_STAGING_TABLE
                                                Table     718096               IXn   1055528832   USER.USER_DATA_STAGING_TABLE
                                                EndScan   BMUFVUAAAAKAAAAGD1   U     1055528832   USER.USER_DATA_STAGING_VIEW
                                                Table     718176               IXn   1055528832   USER.USER_DATA_STAGING_VIEW
                                                Command   1056230912           S     1056230912
           0x1882e610   201.16275  Active       Database  0x01312d0001312d00   IX    0
    TTVERSION : TimesTen Release 11.2.1.8.0 (64 bit Linux/x86_64) (tt11:17001) 2011-02-02T02:20:46Z
    sys.odbc.ini file configuration:
    UID=user
    PWD=user
    Driver=/opt/TimesTen/tt11/lib/libtten.so
    DataStore=/var/TimesTen/tt11/ds/TT_USER_DS
    LogDir=/var/TimesTen/tt11/ttlog/TT_User
    CacheGridEnable = 0
    PermSize         = 1000
    TempSize         = 200
    CkptLogVolume    = 200
    DatabaseCharacterSet=WE8ISO8859P1
    Connections      = 128
    Here we are using an XLA message listener to get notifications on USER.USER_DATA_STAGING_VIEW, which in turn looks at the table USER.USER_DATA_STAGING_TABLE. A delete operation on this table locks the table and blocks further inserts.
    I want to know:
    1) Why is this happening?
    2) Why is this happening in one specific environment, while we are not able to reproduce it in the other environment?
    3) How do we resolve this issue?
    This is a very critical issue which blocks our further progress on the client solution. Expecting your reply ASAP.

    For critical issues you should always log an SR as Steve advised. These forums are not monitored 24x7 and are not the best way to get rapid assistance for critical problems.
    For this issue it seems like a DELETE operation on the table was holding a lock on a row that an INSERT operation needed to acquire (it looks like an index 'next key' lock). The inserter was not able to acquire the lock within the defined timeout (which is 10 seconds by default, unless you have configured it differently). 10 seconds is a long time, so the question is why the DELETE operation had not committed (which would release the locks) within 10 seconds. Maybe it was deleting a very large number of rows? If that is the issue then the best solution is to break such a large delete up into a series of smaller deletes. For example:
    repeat
       DELETE FIRST 256 FROM USER_DATA_STAGING_TABLE WHERE STATUS= 'COMPLETE';
       COMMIT;
    until 'rows deleted' = 0
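    As a slightly more concrete sketch of that loop in TimesTen PL/SQL (assuming a release with PL/SQL support, i.e. 11.2.1 or later, and that the DELETE FIRST clause is accepted inside PL/SQL; the batch size of 256 is arbitrary):
    BEGIN
      LOOP
        -- delete a small batch so the transaction holds its locks only briefly
        DELETE FIRST 256 FROM user_data_staging_table WHERE status = 'COMPLETE';
        EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing left to delete
        COMMIT;                      -- release row and index locks after each batch
      END LOOP;
      COMMIT;
    END;
    /
    If FIRST is not accepted inside PL/SQL in your release, the same loop can be driven from application code. The lock timeout itself can be changed per connection (the LockWait attribute or the ttLockWait built-in), but as noted above that only hides the issue.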
    Alternatively, you could just increase the lock timeout value (but that is really just 'hiding' the issue).
    Chris

  • View creation not working + TimesTen

    Hi All,
    I am not able to create a view in TimesTen with the following SQL.
    Can anyone tell me: don't we have an OVER (PARTITION BY) clause in TimesTen?
    thanks ,
    -AK

    No, TimesTen does not support this syntax at present. You can see exactly what SQL syntax is supported by looking in the TimesTen SQL guide.
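    As a workaround, analytics of this shape can often be rewritten as a join against a grouped derived table (a sketch using a hypothetical table t(id, dept, amt); if your release does not support subqueries in the FROM clause, materialize the aggregate into a helper table first):
    -- Not supported: SELECT id, amt, SUM(amt) OVER (PARTITION BY dept) FROM t;
    -- Equivalent rewrite without OVER (PARTITION BY):
    SELECT t.id, t.amt, s.dept_total
    FROM t, (SELECT dept, SUM(amt) AS dept_total FROM t GROUP BY dept) s
    WHERE t.dept = s.dept;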
    Chris

  • Unable to replicate Oracle data into TimesTen

    I have created CACHE GROUP COMPANY_MASTER
    Cache group definition:
    Cache Group TSLALGO.COMPANY_MASTER_TT:
      Cache Group Type: Read Only
      Autorefresh: Yes
      Autorefresh Mode: Incremental
      Autorefresh State: On
      Autorefresh Interval: 1 Minute
      Autorefresh Status: ok
      Aging: No aging defined
      Root Table: TSLALGO.COMPANY_MASTER
      Table Type: Read Only
    But whenever I start the TimesTen server, the following lock is seen in ttXactAdmin <dsn_name>:
    Program File Name: timestenorad
    30443   0x7fab902c02f0        7.22     Active      Database  0x01312d0001312d00   IX    0
                                                       Table     1733200              S     4221354128           TSLALGO.COMPANY_MASTER
                                                       Row       BMUFVUAAAAaAAAAFBy   S     4221354128           SYS.TABLES
                                                       Row       BMUFVUAAACkAAAALAF   Sn    4221354128           SYS.CACHE_GROUP
    Due to this lock, Oracle data is not replicated into TimesTen.
    When we check the SqlCmdID it shows the following output:
    Query Optimizer Plan:
    Query Text: CALL ttCacheLockCacheGp(4, '10752336#10751968#10751104#10749360#', 'S', '1111')
      STEP:             1
      LEVEL:            1
      OPERATION:        Procedure Call
      TABLENAME:
      TABLEOWNERNAME:
      INDEXNAME:
      INDEXEDPRED:
      NONINDEXEDPRED:
    Please suggest why TimesTen takes these locks on the tables above.

    966234 wrote:
    Unable to download Oracle Data Integrator with version 11.1.1.6. Hope this could be resolved ASAP.
    What is the file you are trying to download? Is it for Windows or Linux or All Platforms?
    Thanks,
    Hussein

  • Cannot connect to TimesTen using ODBC

    Hi,
    I have a TT datastore set up on an AIX server. I have a DSN on the AIX server set up in such a fashion that I can connect to the TT datastore and query the TimesTen database.
    Now I have installed a TT client on my Windows XP laptop and I am trying to establish an ODBC connection to the AIX TT database.
    On the XP client I have setup a dsn as test3 it has the following TTC settings:
    TTC_SERVER = aix server name
    TTC_SERVER_DSN = CacheData_tt70 (this is the DSN used to connect to the TT database on the AIX server machine)
    TTC_TIMEOUT = 60
    Now when I try connecting from the Windows XP box I use the following command:
    connect "dsn=test3;UID=ttadmin;PWD=ttadmin";
    I am getting the following message:
    12701: DatabaseCharacterSet attribute required for data store creation. Refer to the TimesTen documentation for information on selecting a character set.
    What I don't understand is that I have ALREADY created the datastore on the AIX server. I do not see what the issue is here.
    One last thing... the datastore was created with character set AL32UTF8, and the Oracle database used in the Cache Connect setup was also created with character set AL32UTF8.
    Any help from anyone would be appreciated.
    Regards Chris Tabb

    Hi Susan,
    The .ODBC.INI file on AIX is quite long (sorry for that). I am using the "CacheData_tt70", here it is:
    # Copyright (C) 1999, 2007, Oracle. All rights reserved.
    # The following are the default values for connection attributes.
    # In the Data Sources defined below, if the attribute is not explicitly
    # set in its entry, TimesTen 7.0 uses the defaults as
    # specified below. For more information on these connection attributes,
    # see the accompanying documentation.
    # Lines in this file beginning with # or ; are treated as comments.
    # In attribute=_value_ lines, the value consists of everything
    # after the = to the end of the line, with leading and trailing white
    # space removed.
    # Authenticate=1 (client/server only)
    # AutoCreate=1
    # CkptFrequency (if Logging == 1 then 600 else 0)
    # CkptLogVolume=0
    # CkptRate=0 (0 = rate not limited)
    # ConnectionCharacterSet (if DatabaseCharacterSet == TIMESTEN8
    # then TIMESTEN8 else US7ASCII)
    # ConnectionName (process argv[0])
    # Connections=64
    # DatabaseCharacterSet (no default)
    # Diagnostics=1
    # DurableCommits=0
    # ForceConnect=0
    # GroupRestrict (none by default)
    # Isolation=1 (1 = read-committed)
    # LockLevel=0 (0 = row-level locking)
    # LockWait=10 (seconds)
    # Logging=1 (1 = write log to disk)
    # LogAutoTruncate=1
    # LogBuffSize=65536 (measured in KB)
    # LogDir (same as checkpoint directory by default)
    # LogFileSize=64 (measured in MB)
    # LogFlushMethod=0
    # LogPurge=1
    # MatchLogOpts=0
    # MemoryLock=0 (HP-UX, Linux, and Solaris platforms only)
    # NLS_LENGTH_SEMANTICS=BYTE
    # NLS_NCHAR_CONV_EXCP=0
    # NLS_SORT=BINARY
    # OverWrite=0
    # PermSize=2 (measured in MB; default is 2 on 32-bit, 4 on 64-bit)
    # PermWarnThreshold=90
    # Preallocate=0
    # PrivateCommands=0
    # PWD (no default)
    # PWDCrypt (no default)
    # RecoveryThreads=1
    # SQLQueryTimeout=0 (seconds)
    # Temporary=0 (data store is permanent by default)
    # TempSize (measured in MB; default is derived from PermSize,
    # but is always at least 6MB)
    # TempWarnThreshold=90
    # TypeMode=0 (0 = Oracle types)
    # UID (operating system user ID)
    # WaitForConnect=1
    # Oracle Loading Attributes
    # OracleID (no default)
    # OraclePWD (no default)
    # PassThrough=0 (0 = SQL not passed through to Oracle)
    # RACCallback=1
    # TransparentLoad=0 (0 = do not load data)
    # Client Connection Attributes
    # ConnectionCharacterSet (if DatabaseCharacterSet == TIMESTEN8
    # then TIMESTEN8 else US7ASCII)
    # ConnectionName (process argv[0])
    # PWD (no default)
    # PWDCrypt (no default)
    # TTC_Server (no default)
    # TTC_Server_DSN (no default)
    # TTC_Timeout=60
    # UID (operating system user ID)
    [ODBC Data Sources]
    TT_tt70=TimesTen 7.0 Driver
    TpcbData_tt70=TimesTen 7.0 Driver
    TptbmDataRepSrc_tt70=TimesTen 7.0 Driver
    TptbmDataRepDst_tt70=TimesTen 7.0 Driver
    TptbmData_tt70=TimesTen 7.0 Driver
    BulkInsData_tt70=TimesTen 7.0 Driver
    WiscData_tt70=TimesTen 7.0 Driver
    RunData_tt70=TimesTen 7.0 Driver
    CacheData_tt70=TimesTen 7.0 Driver
    ttdbgdev=TimesTen 7.0 Driver
    TpcbDataCS_tt70=TimesTen 7.0 Client Driver
    TptbmDataCS_tt70=TimesTen 7.0 Client Driver
    BulkInsDataCS_tt70=TimesTen 7.0 Client Driver
    WiscDataCS_tt70=TimesTen 7.0 Client Driver
    RunDataCS_tt70=TimesTen 7.0 Client Driver
    ttdbgdevCS=TimesTen 7.0 Driver
    # Instance-Specific System Data Store
    # A predefined instance-specific data store reserved for system use.
    # It provides a well-known data store for use when a connection
    # is required to execute commands.
    [TT_tt70]
    Driver=/u01/app/oracle/TimesTen/tt70/lib/libtten.a
    DataStore=/u01/app/oracle/TimesTen/tt70/info/TT_tt70
    DatabaseCharacterSet=US7ASCII
    # Data source for TPCB
    # This data store is created on connect; if it doesn't already exist.
    # (AutoCreate=1 and Overwrite=0). For performance reasons, database-
    # level locking is used. However, logging is turned on. The initial
    # size is set to 16MB.
    [TpcbData_tt70]
    Driver=/u01/app/oracle/TimesTen/tt70/lib/libtten.a
    DataStore=/u01/app/oracle/TimesTen/tt70/info/DemoDataStore/TpcbData
    DatabaseCharacterSet=US7ASCII
    PermSize=16
    WaitForConnect=0
    Authenticate=0
    # Data source for TPTBM demo
    # This data store is created everytime the benchmark is run.
    # Overwrite should always be 0 for this benchmark. All other
    # attributes may be varied and performance under those conditions
    # evaluated. The initial size is set to 20MB and durable commits are
    # turned off.
    [TptbmData_tt70]
    Driver=/u01/app/oracle/TimesTen/tt70/lib/libtten.a
    DataStore=/u01/app/oracle/TimesTen/tt70/info/DemoDataStore/TptbmData
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Source data source for TPTBM demo in replication mode
    # This data store is created everytime the replication benchmark demo
    # is run. This datastore is set up for the source data store.
    [TptbmDataRepSrc_tt70]
    Driver=/u01/app/oracle/TimesTen/tt70/lib/libtten.a
    DataStore=/u01/app/oracle/TimesTen/tt70/info/DemoDataStore/TptbmDataRepSrc_tt70
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Destination data source for TPTBM demo in replication mode
    # This data store is created everytime the replication benchmark demo
    # is run. This datastore is set up for the destination data store.
    [TptbmDataRepDst_tt70]
    Driver=/u01/app/oracle/TimesTen/tt70/lib/libtten.a
    DataStore=/u01/app/oracle/TimesTen/tt70/info/DemoDataStore/TptbmDataRepDst_tt70
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Data source for BULKINSERT demo
    # This data store is created on connect; if it doesn't already exist
    # (AutoCreate=1 and Overwrite=0).
    [BulkInsData_tt70]
    Driver=/u01/app/oracle/TimesTen/tt70/lib/libtten.a
    DataStore=/u01/app/oracle/TimesTen/tt70/info/DemoDataStore/BulkInsData
    DatabaseCharacterSet=US7ASCII
    LockLevel=1
    PermSize=32
    WaitForConnect=0
    Authenticate=0
    # Data source for WISCBM demo
    # This data store is created on connect if it doesn't already exist
    # (AutoCreate=1 and Overwrite=0). For performance reasons,
    # database-level locking is used. However, logging is turned on.
    [WiscData_tt70]
    Driver=/u01/app/oracle/TimesTen/tt70/lib/libtten.a
    DataStore=/u01/app/oracle/TimesTen/tt70/info/DemoDataStore/WiscData
    DatabaseCharacterSet=US7ASCII
    LockLevel=1
    PermSize=16
    WaitForConnect=0
    Authenticate=0
    # Default Data source for TTISQL demo and utility
    # Use default options.
    [RunData_tt70]
    Driver=/u01/app/oracle/TimesTen/tt70/lib/libtten.a
    DataStore=/u01/app/oracle/TimesTen/tt70/info/DemoDataStore/RunData
    DatabaseCharacterSet=US7ASCII
    Authenticate=0
    # Sample Data source for the xlaSimple demo
    # See manual for discussion of this demo.
    [Sample_tt70]
    Driver=/u01/app/oracle/TimesTen/tt70/lib/libtten.a
    DataStore=/u01/app/oracle/TimesTen/tt70/info/DemoDataStore/Sample
    DatabaseCharacterSet=US7ASCII
    TempSize=16
    PermSize=16
    Authenticate=0
    # Sample data source using OracleId.
    # Before using the CacheData DSN, uncomment both the OracleId and
    # DatabaseCharacterSet attributes and insert the appropriate values for
    # the name of your Oracle database and its database character set.
    # See the jdbc demo README for information on how to obtain these values.
    [CacheData_tt70]
    Driver=/u01/app/oracle/TimesTen/tt70/lib/libtten.a
    DataStore=/u01/app/oracle/TimesTen/tt70/info/DemoDataStore/Test1
    DatabaseCharacterSet=AL32UTF8
    OracleId=dbgdev
    PermSize=16
    UID=oracle
    # Authenticate=1
    # New data source definitions can be added below.
    [ttdbgdev]
    Driver=/u01/app/oracle/TimesTen/tt70/lib/libtten.so
    DataStore=/u01/app/oracle/TimesTen/tt70/info/DemoDataStore/Test1
    PermSize=20
    TempSize=20
    DatabaseCharacterSet=AL32UTF8
    #UID=chris
    OracleID=dbgdev
    # This following sample definitions should be in the .odbc.ini file
    # that is used for the TimesTen 7.0 Client.
    # The Server Name is set in the TTC_SERVER attribute.
    # The Server DSN is set in the TTC_SERVER_DSN attribute.
    # Data source for TPCB
    [TpcbDataCS_tt70]
    TTC_SERVER=LocalHost_tt70
    TTC_SERVER_DSN=TpcbData_tt70
    # Data source for TPTBM demo
    [TptbmDataCS_tt70]
    TTC_SERVER=LocalHost_tt70
    TTC_SERVER_DSN=TptbmData_tt70
    # Data source for BULKINSERT demo
    [BulkInsDataCS_tt70]
    TTC_SERVER=LocalHost_tt70
    TTC_SERVER_DSN=BulkInsData_tt70
    # Data source for WISCBM demo
    [WiscDataCS_tt70]
    TTC_SERVER=LocalHost_tt70
    TTC_SERVER_DSN=WiscData_tt70
    # Default Data source for TTISQL demo and utility
    [RunDataCS_tt70]
    TTC_SERVER=LocalHost_tt70
    TTC_SERVER_DSN=RunData_tt70
    # Default Data Source for ttdbgdev
    [ttdbgdevCS]
    TTC_SERVER=LocalHost_tt70
    TTC_SERVER_DSN=ttdbgdev
    Now for the Windows ODBC.ini file: I am trying to use the "test3" connection. The specifics of this connection are in the Windows registry. Please note I have tried setting ConnectionCharacterSet to AL32UTF8 as well as a few others:
    [ODBC 32 bit Data Sources]
    dwhdev=Oracle in OraHome92 (32 bit)
    dwhqas=Oracle in OraHome92 (32 bit)
    RunDataCS_tt70_32=TimesTen Client 7.0 (32 bit)
    TpcbDataCS_tt70_32=TimesTen Client 7.0 (32 bit)
    TptbmDataCS_tt70_32=TimesTen Client 7.0 (32 bit)
    BulkInsDataCS_tt70_32=TimesTen Client 7.0 (32 bit)
    WiscDataCS_tt70_32=TimesTen Client 7.0 (32 bit)
    test1=TimesTen Client 7.0 (32 bit)
    MS Access Database=Microsoft Access Driver (*.mdb) (32 bit)
    Excel Files=Microsoft Excel Driver (*.xls) (32 bit)
    dBASE Files=Microsoft dBase Driver (*.dbf) (32 bit)
    test2=TimesTen Client 7.0 (32 bit)
    test3=TimesTen Client 7.0 (32 bit)
    dbgdev=Oracle in OraHome92 (32 bit)
    [dwhdev]
    Driver32=c:\oracle\BIN\SQORA32.DLL
    [dwhqas]
    Driver32=c:\oracle\BIN\SQORA32.DLL
    [RunDataCS_tt70_32]
    Driver32=C:\TimesTen\tt70_32\bin\ttcl70.dll
    [TpcbDataCS_tt70_32]
    Driver32=C:\TimesTen\tt70_32\bin\ttcl70.dll
    [TptbmDataCS_tt70_32]
    Driver32=C:\TimesTen\tt70_32\bin\ttcl70.dll
    [BulkInsDataCS_tt70_32]
    Driver32=C:\TimesTen\tt70_32\bin\ttcl70.dll
    [WiscDataCS_tt70_32]
    Driver32=C:\TimesTen\tt70_32\bin\ttcl70.dll
    [test1]
    Driver32=C:\TimesTen\tt70_32\bin\ttcl70.dll
    [MS Access Database]
    Driver32=C:\WINDOWS\system32\odbcjt32.dll
    [Excel Files]
    Driver32=C:\WINDOWS\system32\odbcjt32.dll
    [dBASE Files]
    Driver32=C:\WINDOWS\system32\odbcjt32.dll
    [test2]
    Driver32=C:\TimesTen\tt70_32\bin\ttcl70.dll
    [test3]
    Driver32=C:\TimesTen\tt70_32\bin\ttcl70.dll
    [dbgdev]
    Driver32=c:\oracle\BIN\SQORA32.DLL
    Regards,
    Chris

  • TimesTen replication with multiple interfaces sharing the same hostname

    Hi,
    we have in our environment two Sun T2000 nodes, running SunOS 5.10, each hosting a TT server currently at release 7.0.5.9.0, replicated between each other.
    I would like to have some more information on the behavior of replication with respect to network reliability when using two interfaces associated with the same hostname, the hostname used to define the replication element.
    To make an example we have our nodes sharing this common /etc/hosts elements:
    151.98.227.5 TBMAS10df2 TBMAS10df2-10 TBMAS10df2-ttrep
    151.98.226.5 TBMAS10df2 TBMAS10df2-01 TBMAS10df2-ttrep
    151.98.227.4 TBMAS9df1 TBMAS9df1-10 TBMAS9df1-ttrep
    151.98.226.4 TBMAS9df1 TBMAS9df1-01 TBMAS9df1-ttrep
    with the following element defined for replication:
    ALTER REPLICATION REPLSCHEME
    ADD ELEMENT HDF_GNP_CDPN_1 TABLE HDF_GNP_CDPN
    CHECK CONFLICTS BY ROW TIMESTAMP
    COLUMN ConflictResTimeStamp
    REPORT TO '/sn/sps/HDF620/datamodel/tt41dataConflict.rpt'
    MASTER tt41data ON "TBMAS9df1-ttrep"
    SUBSCRIBER tt41data ON "TBMAS10df2-ttrep"
    RETURN RECEIPT BY REQUEST
    ADD ELEMENT HDF_GNP_CDPN_2 TABLE HDF_GNP_CDPN
    CHECK CONFLICTS BY ROW TIMESTAMP
    COLUMN ConflictResTimeStamp
    REPORT TO '/sn/sps/HDF620/datamodel/tt41dataConflict.rpt'
    MASTER tt41data ON "TBMAS10df2-ttrep"
    SUBSCRIBER tt41data ON "TBMAS9df1-ttrep"
    RETURN RECEIPT BY REQUEST;
    On this subject there have been some changes, moving from 6.0.x to 7.0.x, that I would like to better understand.
    The 6.0.x documentation for Unix systems reported:
    If a host contains multiple network interfaces (with different IP addresses),
    TimesTen replication tries to connect to the IP addresses in the same order as
    returned by the gethostbyname call. It will try to connect using the first address;
    if a connection cannot be established, it tries the remaining addresses in order
    until a connection is established.
    Now, on Solaris I don't know how to make gethostbyname return more than one interface (the documentation notes at this point:
    If you have multiple network interface cards (NICs), be sure that "multi on" is specified in the /etc/host.conf file. Otherwise, gethostbyname will not return multiple addresses).
    But I understand this is valid for Linux-based systems, not for Solaris.
    Now, if I understand the above properly, how was 6.0.x able to realize that the first interface in the list (using the same -ttrep hostname) was down and use the other one, if gethostbyname was reporting only a single entry?
    Once upgraded to 7.0.x, we noticed the ADD ROUTE clause had been added to tell TT how to use different interfaces associated with the same hostname. In our environment we did not include this clause, but replication still worked fine regardless of which interface we brought down.
    Both my questions in the end lead to the same doubt: what algorithm does TT use to reach the replicated node with respect to the entries in /etc/hosts?
    Looking at the nodes I can see that by default both routes are being used:
    TBMAS10df2:/-# netstat -an|grep "151.98.227."
    151.98.225.104.45312 151.98.227.4.14000 1049792 0 1049800 0 ESTABLISHED
    151.98.227.5.14005 151.98.227.4.47307 1049792 0 1049800 0 ESTABLISHED
    151.98.227.5.14005 151.98.227.4.48230 1049792 0 1049800 0 ESTABLISHED
    151.98.227.5.46050 151.98.227.4.14005 1049792 0 1049800 0 ESTABLISHED
    TBMAS10df2:/-# netstat -an|grep "151.98.226."
    151.98.226.5.14000 151.98.226.4.47699 1049792 0 1049800 0 ESTABLISHED
    151.98.226.5.14005 151.98.226.4.47308 1049792 0 1049800 0 ESTABLISHED
    151.98.226.5.44949 151.98.226.4.14005 1049792 0 1049800 0 ESTABLISHED
    I tried to trace with ttTraceMon, but once I brought down one of the interfaces I did not see any reaction on either node. If you have any info it would be really appreciated!
    Cheers,
    Mike

    Hi Chris,
    Thanks for the reply. I have a few more queries on this.
    1. Using the ROUTE clause we can list multiple IPs with priority levels set, so that if the highest-priority IP in the ROUTE clause is not active, replication falls back to the IP with the next priority level. But can't we use the ROUTE clause to make replication use the multiple route IPs simultaneously? (See the sketch at the end of this post.)
    2. Can we run multiple schemes for the same DSN and replication scheme but with different replication route IPs?
    For example:
    At present on my system I have a replication scheme running for a specific DSN with a standalone master-subscriber mechanism, with a specific route IP through VLAN-xxx used for replication.
    Now I want to create and start another replication scheme for the same DSN and replication mechanism, with a different VLAN-yyy route IP to be used for replication, running in parallel to the existing replication scheme and without making any changes to it.
    For the above scenarios, will there be any specific changes depending on the replication scheme mechanism, i.e. active-standby versus standalone master-subscriber, etc.?
    If so, what are the steps? I.e., how do we need to change the existing scheme?
    Thanks In advance.
    Naveen
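    For reference, a minimal sketch of the ROUTE clause under discussion (hostnames and addresses taken from the earlier post; the priorities are illustrative):
    ALTER REPLICATION REPLSCHEME
      ADD ROUTE MASTER tt41data ON "TBMAS9df1-ttrep"
                SUBSCRIBER tt41data ON "TBMAS10df2-ttrep"
        MASTERIP "151.98.227.4" PRIORITY 1
        MASTERIP "151.98.226.4" PRIORITY 2
        SUBSCRIBERIP "151.98.227.5" PRIORITY 1
        SUBSCRIBERIP "151.98.226.5" PRIORITY 2;
    As far as the priority mechanism is documented, the replication agent tries the priority 1 addresses first and falls back to priority 2 on failure; the routes provide failover rather than being used simultaneously.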

  • External memory allocation and management using C / LabVIEW 8.20 poor scalability

    Hi,
    I have multiple C functions that I need to interface. I need to support numeric scalars, strings and booleans, and 1-4 dimensional arrays of these. The programming problem I am trying to avoid is that I have multiple different functions in my DLLs that all take as input, or return, all these datatypes. Now I can create a polymorphic interface for all these functions, but I end up having about 100 interface VIs for each of my C functions. This was still somehow acceptable in LabVIEW 8.0, but in LabVIEW 8.2 all these polymorphic VIs in my LVOOP project get read into memory at project open. So I have close to 1000 VIs read into memory whenever I open my project. It now takes about ten minutes to open the project and some 150 MB of memory is consumed instantly. I still need to expand my C interface library, and LabVIEW simply doesn't scale up to meet the needs of my project anymore.
    I currently reserve my LabVIEW datatypes using the DSNewHandle and DSNewPtr functions. I then initialize the allocated memory blocks correctly and return the handles to LabVIEW. The LabVIEW compiler interprets the Call Library Function Node terminals of my memory block as a specific data type.
    So what I thought was the following. I don't want the LabVIEW compiler to interpret the data type at compile time. What I want to do is return a handle to the memory structure together with some metadata describing the data type. Then all of my many functions would return this kind of handle. Let's call this a data handle. I can then later convert this handle into a real datatype, either by typecasting it somehow or by passing it back to C code and expecting a certain type as a return. This way I can reduce the number of needed interface VIs to close to 100, which is still acceptable (i.e. LabVIEW 8.2 doesn't freeze).
    So I practically need functionality similar to a variant. I cannot use variants, since I need to avoid making memory copies, and when I convert to and from a variant my memory consumption triples. I handle arrays that consume almost all available memory and I cannot accept memory being consumed ineffectively.
    The questions are: can I use the DSNewPtr and DSNewHandle functions to reserve a memory block but not return a LabVIEW structure of that size? Does LabVIEW garbage collection automatically decide to dispose of my block if I don't correctly return it from my C code immediately, but only later at the next call to C code? And can I typecast a 1D U8 array to an array of any dimensionality and any numeric data type without a memory copy (i.e. does typecast work the way it works in C)?
    If I cannot find a solution to this LabVIEW 8.20 scalability issue, I will have to seriously consider transferring our project from LabVIEW to some other development environment like C++ or one of the .NET languages.
    Regards,
    Tomi
    Tomi Maila

    I have to answer myself since nobody else has answered yet. I came up with one solution that relies on LabVIEW queues. Queues of different types are all referenced the same way and can also be typecast from one type to another. This means that one can use single-element queues as a kind of variant data type, which is quite safe. However, one copy of the data is made when you enqueue and dequeue the data.
    See the attached image for details.
    Tomi Maila
    Attachments:
    variant.PNG (9 KB)

  • TimesTen Error 802/6220: permanent data partition free space insufficient

    Hi,
    I am new to TimesTen and am trying to evaluate it as an IMDB cache.
    I am facing this error repeatedly. I have increased the perm size from 6.25 GB to 10 GB.
    After inserting about 460,000 rows I get the error again. Is it possible that 460,000 rows need 3.75 GB of space?
    In the Oracle database these rows occupy about 200 MB.
    Any ideas for this situation?
    Regards
    Thomas

    Hi Jim,
    thank you for your answer. My drive filled up completely, so my TT DB crashed and I couldn't get it started again.
    I think I will set up another DB to continue my tests.
    By the way, I didn't measure a big gap compared to the Oracle database when writing large amounts of data...
    I think I will change the OS to Linux for the next test.
    Kind regards
    Thomas
