TimesTen Queries

Dear Forum,
Background: we have a single database instance and 22 application instances. (RAC is not an allowed solution, per a higher-management decision.)
If we implement IMDB Cache:
1. Will there be any overhead associated with reads and writes (synchronous)? If yes, how do we quantify it?
2. The application demands the use of the global cache (for transactional data). Will there be latency issues owing to multiple nodes? How does Oracle manage the nodes in the cache grid?
3. What is the optimum grid size to propose? Is there any guideline (e.g. 8 grid members, 16, or all 22)?
4. What is the minimum infrastructure required?
5. The application uses XML files stored as BLOB/CLOB. Can TimesTen handle BLOBs/CLOBs?
Sorry for flooding you with queries, but any pointers/advice/guidance would be greatly appreciated.
Thanks
ORA_IMPL

Thanks for your detailed answer. This really helps and gives me insight into TT. I would appreciate it if you could answer the follow-up queries tagged 'DM' below.
Thanks for sharing your experience; your contribution is very helpful for junior colleagues like us.
Hi,
These questions are not easily answered in isolation. A decision to adopt TimesTen, and especially TimesTen Grid, needs to take into account many factors. For example, adding TimesTen into the picture will not be transparent to the application. Application code changes, possibly significant ones, will be needed. Also, the behaviour and characteristics of cache grid impose constraints on the kind of SQL that is grid capable. And application data access needs to (a) be mostly localised to the grid node where the data currently resides and (b) have a strong temporal locality of reference if good performance is to be achieved. So, there is a lot to consider and investigate before you can make an informed decision as to whether TT cache grid is the correct solution for you. Having said that, I will try to answer your questions as best I can:
1. Will there be any overhead associated with reads and writes (Synchronous) ? If yes, how do we quantify the same.
CJ>> I don't really understand this question. Reads would normally be satisfied from the IMDB Cache and so will be much, much faster than reading from Oracle. For grid-capable queries that reference data not present in the local grid node, the data will be retrieved from another grid node (preferentially) or, if the data is not in any grid node, from Oracle. Clearly there is significant overhead in this case compared to the case where the data is already present in the local grid node. There are many factors that can affect the relative performance, but let us say that typically a local access will take a few microseconds, a retrieval from another grid member a millisecond or two, and a retrieval from Oracle several milliseconds. The only way to know is to test your queries with your schema on your typical hardware and measure it. For write operations, all changes are propagated asynchronously to Oracle (there is no synchronous option), so the overhead is very small and updates are very fast (much faster than in Oracle). However, if a query/DML tries to access a row located in a different grid member for which there are committed, unpropagated updates, then that access will block until those updates have reached Oracle (to ensure data consistency). In this case there could be a large overhead. Again, this depends very much on your specific setup and data access patterns, so you need to measure it.
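To make those figures concrete with a purely illustrative calculation (the percentages are assumptions, not measurements): if 95% of accesses are local at ~5 microseconds, 4% go to another grid member at ~1.5 milliseconds and 1% fall through to Oracle at ~5 milliseconds, the weighted average access time is 0.95 x 5 + 0.04 x 1500 + 0.01 x 5000, roughly 115 microseconds. Shifting even a few percent of accesses from local to remote moves that average substantially, which is why data locality dominates grid performance.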
DM: If there are unpropagated changes on node 1 and node 2 tries to read the same data, which data will node 2 read? (The old data? Or is node 2's read blocked? Or are node 1's unpropagated changes committed to Oracle first, after which node 2's read proceeds?)
2. Application demands the use of the global cache (as transactional data). Will there be latency issues owing to multiple nodes. How oracle manages the node in the cache grid.
CJ>> That question has a big answer! Best to read some of the information on Cache Grid in the documentation. Essentially, cache grid implements the concept of data ownership and enforces that there will only ever be one copy of any piece of cached data within the grid at any one time. Only the grid node that 'owns' the data can read/write it. If a different grid node needs access to the data, the data itself is transferred to that grid node and the ownership updated. Depending on how data is distributed and the application access pattern, there may be very little overhead or a great deal of overhead. Your application must be designed in a 'grid aware' fashion to gain full benefit from grid.
DM: The database contains around 60% semi-static reference data. This data is used dynamically by all the different applications to validate their work content. Essentially all of these applications are reading the same data. It is a prime requirement that any changes to the semi-static reference data are propagated in a timely fashion to all downstream processing.
3. What is the optimum grid size to be proposed. Is there any guideline (for e.g. 8 grid, 16 or for all 22 grids)
CJ>> There is no one optimum size. It depends on data volumes, the type of hardware and the workload to be supported.
4. What is the minimum infrastructure required
CJ>> Realistically you need to use replicated grid nodes and Oracle Clusterware for grid management. The very smallest grid that could be configured is therefore two physical machines (each acting as the replication standby for the other) running TimesTen and Clusterware. You also need, for Clusterware, some supported shared storage for the OCR and voting disks (e.g. NetApp filers or some kind of SAN).
5. The application uses XML file in terms of BLOB/CLOB. Is Timesten okay to handle BLOB/CLOB.
CJ>> TimesTen does not currently support XML or CLOBs/BLOBs. You can cache Oracle CLOB/BLOB data in TimesTen VARCHAR2/VARBINARY columns. Access to this data in TimesTen will be via the SQL API and you must retrieve/update the entire CLOB/BLOB value. Any XML interpretation must be done by the application.
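As an illustrative sketch of that mapping (the table and column names are hypothetical, and you should verify the maximum VARCHAR2 length for your TimesTen release), a CLOB on the Oracle side can be cached into a large VARCHAR2 on the TimesTen side:
-- Oracle side (hypothetical): APP.PAYMENTS(PAY_ID NUMBER PRIMARY KEY, PAY_XML CLOB)
-- TimesTen side: the CLOB is mapped to a large VARCHAR2 in the cache group definition
CREATE READONLY CACHE GROUP cg_payments
AUTOREFRESH INTERVAL 10 SECONDS
FROM app.payments
(pay_id  NUMBER NOT NULL PRIMARY KEY,
 pay_xml VARCHAR2(4194304));  -- the whole XML document is read/written as one value
LOAD CACHE GROUP cg_payments COMMIT EVERY 256 ROWS;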
DM: Currently all the applications' to-and-fro communication happens as CLOB/BLOB and XML. If we have to rely on the SQL API and update the whole CLOB/BLOB value, it seems that this will increase processing time. I see this as a possible constraining factor for using TT.
Context: The application deals with XML files the majority of the time. It receives payments in XML, converts them to an internal format (again XML), applies business logic, converts them again to the outbound formats (XML) and sends them on. They are currently stored as BLOBs and CLOBs in the database. The size of the database is 4 TB.
There is one single database instance and 20 application instances. The application uses JDO as its ORM framework. With the growing volumes of data, JDO, together with the multiple calls to the database during the processing of each transaction (mainly due to BLOBs/CLOBs), is a known performance bottleneck.
I am trying to find an improvement. TT may not fit the bill. Is any other Oracle feature worth investigating?

Similar Messages

  • TimesTen Queries not returning

    We are evaluating TimesTen IMDB Cache as an option to improve our performance of queries that do aggregate operations across millions of rows.
    My environment is Windows Server 2003 R2 x64 with an Intel Xeon X5660 (16 CPU cores) and 50 GB of RAM. The database is Oracle Enterprise 11.2.0.2.
    I have the following DSN parameters.
    DataStore Path + Name : H:\ttdata\database\my_ttdb
    Transaction Log Directory : H:\ttdata\logs
    Database Character Set: WE8MSWIN1252
    First Connection:
    Permanent Data Size: 26000
    Temporary Data Size: 1600
    IMDB Cache:
    PassThrough :1
    Rest of the parameters are default.
    I have created 2 read only cache groups and 1 asynchronous write-through cache group.
    The first read-only cache group is on table A with 108 rows, the second read-only cache group is on table B with 878,689 rows, and table C is the fact table with 20.5 million rows.
    I have loaded these cache groups. Now I am trying to do join queries across these tables that do aggregation and group by on some measures in table C.
    I have seen using dssize that the data has been loaded in permanent data area.
    These queries execute in Oracle in around 5s and my expectation was that these would return in sub-seconds.
    But these queries are not returning back at all even after hours. I have looked at the query plan and I do not have any lookups which say that they are not indexed.
    I have even tried simpler join queries without any aggregation. Even those get stuck. The only queries that I have been able to run are select * from tables.
    What may be wrong in this setup/configuration? How do I debug what is causing the problem?
    Thanks,
    Mehta

    Dear user2057059,
    Could you specify more details about your question:
    - Table structures (columns, indexes, constraints)
    - Your query and its execution plan
    20M rows is not a big database, especially for your hardware.
    In my example:
    CPU: Intel Core 2 Duo CPU 2.33 GHz,
    RAM: 4 GB DDR2
    HDD: 100 Gb SATA-II
    OS: Fedora 8 x64 (Linux 2.6.23.1-42.fc8)
    +
    Oracle TimesTen 7.0.5.0.0 (64 bit Linux)
    Command> select count(*) from accounts;
    Query Optimizer Plan:
      STEP:                1
      LEVEL:               1
      OPERATION:           TblLkSerialScan
      TBLNAME:             ACCOUNTS
      IXNAME:              <NULL>
      INDEXED CONDITION:   <NULL>
      NOT INDEXED:         <NULL>
    < 30000000 >
    1 row found.
    Average time: 1.920321 sec (direct connection).
    Regards,
    Gennady

  • Some simple queries on TimesTen

    Hi, I am completely new to TimesTen and I was just wondering how it can interact with standard Oracle databases.
    Q1.What is the connection method to TimesTen ? i.e. is it Sql*Net ?
    Q2. What is used to backup TimesTen databases ? Are they basically a backup of the transaction logs, therefore any file based backup method ? eg standard backup facility of the OS on which TimesTen is deployed ?
    Q3. I presume tools like Data Pump, export/import etc do not work with TimesTen ?
    Q4. Can TimesTen be used as a Data Source for Oracle BI Server ( i.e. OBIEE ) or alternatively to hold the BI Repository ?
    Q5. Are all the basic DML and DDL commands for Sql, available to be used with TimesTen ?
    thanks,
    Jim

    Hello Jim. With regards to your questions:
    Q1.What is the connection method to TimesTen ? i.e. is it Sql*Net ?
    A1: When TimesTen communicates with Oracle it uses regular SQL*Net mechanisms. To Oracle DB it appears as just another client.
    Q2. What is used to backup TimesTen databases ? Are they basically a backup of the transaction logs, therefore any file based backup method ? eg standard backup facility of the OS on which TimesTen is deployed ?
    A2: TimesTen provides its own backup and restore utilities (ttBackup/ttRestore) which allow you to create online, transactionally consistent backups, both full and incremental. These tools are the only supported way to back up a TimesTen database. You should not use OS-level file backup tools to back up an active database, as the resulting 'backup' will not be consistent and will most likely not be usable.
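    As a rough illustration (the DSN and paths here are hypothetical; see the ttBackup/ttRestore reference pages for the full option list), a full online backup and a later restore look something like:
    ttBackup -type fileFull -dir /backups/tt -fname mydb my_dsn
    ttRestore -dir /backups/tt -fname mydb my_dsn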
    Q3. I presume tools like Data Pump, export/import etc do not work with TimesTen ?
    A3: Correct, however TimesTen does have its own set of tools that provide many capabilities.
    Q4. Can TimesTen be used as a Data Source for Oracle BI Server ( i.e. OBIEE ) or alternatively to hold the BI Repository ?
    A4: Yes, TimesTen is supported as a data source for OBIEE. In fact TimesTen is one of the key technologies within the OBIEE stack in the Exalytics BI engineered system. It is not currently supported to store the BI Repository in TimesTen.
    Q5. Are all the basic DML and DDL commands for Sql, available to be used with TimesTen ?
    A5: Yes, though there are differences in the syntax and features supported by TimesTen compared to Oracle DB. But TimesTen supports most of the usual SQL.
    You can find out a lot more detail about TimesTen by skimming through the presentations, whitepapers and documentation available here:
    http://www.oracle.com/technetwork/database/database-technologies/timesten/overview/index.html
    Chris

  • Oracle TimesTen Architecture Queries

    Hi Gurus
    We have six different physical locations and want to deploy the telecom application at each location, all using the same data.
    Each location is connected via leased lines.
    We want to deploy the Oracle IMDB solution:
    Oracle TimesTen
    Oracle TimesTen Cache Connect
    Oracle Database (Enterprise Edition)
    We may have two RHEL server boxes at each site.
    Constraint:
    Oracle Database (Enterprise Edition) is installed at one location only.
    Please suggest the different possible architecture scenarios.
    Regards
    Hitgon

    My reply was only assuming Oracle at a single primary site. The architecture I was suggesting is:
    Remote Location1-Application<----Read Only-------TimesTen1 | <------TimesTen Replication--------|
    Remote Location2-Application<----Read Only-------TimesTen2 | <------TimesTen Replication--------|
    Remote Location3-Application<----Read Only-------TimesTen3 | <------TimesTen Replication--------| <-----TimesTen A/S Pair <--- Autorefresh <----- | Oracle Database
    (2 machines)
    Remote Location4-Application<----Read Only-------TimesTen4 | <------TimesTen Replication--------|
    Remote Location5-Application<----Read Only-------TimesTen5 | <------TimesTen Replication--------|
    This is by far the best architecture for performance, resilience etc.
    If the volume/rate of data changes that must be refreshed to the caches is very low then you could consider this architecture but it will impose significantly more load on the Oracle DB.
    Remote Location1-Application<----Read Only-------TimesTen1 | <---------------Auto Refresh------Timesten Cache Connect --------------------------|
    Remote Location2-Application<----Read Only-------TimesTen2 | <---------------Auto Refresh------Timesten Cache Connect --------------------------|
    Remote Location3-Application<----Read Only-------TimesTen3 | <---------------Auto Refresh------Timesten Cache Connect --------------------------| Oracle Database
    Remote Location4-Application<----Read Only-------TimesTen4 | <---------------Auto Refresh------Timesten Cache Connect --------------------------|
    Remote Location5-Application<----Read Only-------TimesTen5 | <---------------Auto Refresh------Timesten Cache Connect --------------------------|
    Chris

  • Queries with ":" failing in TimesTen 11.2.1  ..NodeId :NodeId

    After upgrading to TimesTen 11.2.1, we've noticed that some of our current SQL statements are failing. These same statements execute from the applications on TT 6.0. This is one of the selects that is failing.
    select NodeId , Idx from Santera.eswitchInterface where (  ( NodeId >= :NodeId ) and (  ( NodeId > :NodeId ) or ( Idx > :Idx )  )  ) order by NodeId , Idx (1, 0)(2, 0)
    CharacterSet is set to TimesTen8.
    Has something changed between TT6.0 and TT11.2 concerning this functionality?

    I just noticed your mention of the ':' character in the title of the post. I suspect that your issue is not related to the ':' specifically but to the change in how multiple parameters with the same name are handled. Please see the description of the DSN/connection attribute 'DuplicateBindMode' in the TimesTen 11.2.1 Reference Guide. To quote it here:
    DuplicateBindMode
    This attribute determines whether applications use traditional TimesTen parameter binding for duplicate occurrences of a parameter in a SQL statement or Oracle-style parameter binding.
    Traditionally, in TimesTen, multiple instances of the same parameter name in a SQL statement are considered to be multiple occurrences of the same parameter. When assigning parameter numbers to parameters, TimesTen assigns parameter numbers only to the first occurrence of each parameter name. The second and subsequent occurrences of a given name do not get their own parameter numbers. In this case, a TimesTen application binds a value for every unique parameter in a SQL statement. It cannot bind different values for different occurrences of the same parameter name, nor can it leave any parameters or parameter occurrences unbound.
    In Oracle Database, multiple instances of the same parameter name in a SQL statement are considered to be different parameters. When assigning parameter numbers, Oracle Database assigns a number to each parameter occurrence without regard to name duplication. An Oracle Database application, at a minimum, binds a value for the first occurrence of each parameter name. For the subsequent occurrences of a given parameter, the application can either leave the parameter occurrence unbound or bind a different value for the occurrence.
    The default value for this attribute is 0 (Oracle mode). It looks to me like you need to use 1 (TimesTen legacy mode). A better long-term fix is to change your SQL to be:
    select NodeId , Idx from Santera.eswitchInterface where (  ( NodeId >= :NodeId1 ) and (  ( NodeId > :NodeId2 ) or ( Idx > :Idx )  )  ) order by NodeId , Idx
    and change the application code to bind an input value for both ':NodeId1' and ':NodeId2'. This will work in either binding mode and will allow you to move to the default (Oracle) binding mode, which will future-proof your code against any future time when we may decide to deprecate TimesTen binding mode.
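    For completeness, the binding mode is selected via a connection attribute, for example in ttIsql (the DSN name is hypothetical):
    connect "dsn=my_dsn;DuplicateBindMode=1";
    With DuplicateBindMode=1 the original statement binds a single value for :NodeId; with the default of 0, the rewritten statement binds :NodeId1 and :NodeId2 separately.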
    Chris

  • Oracle TimesTen geo-redundant architecture deployment queries

    Hi,
    We have 6 sites, each connected by leased lines.
    There are 5 different physical remote sites, and we want to deploy active/standby TimesTen databases at all 5 sites.
    We want to create a READ ONLY LOCAL CACHE GROUP at each of the 5 sites, autorefreshed from a single central database site (2-node RAC).
    Data modification always happens at the central site.
    The TimesTen active/standby pair is on the local LAN at each site, but each site connects over the WAN to the central site where the Oracle RAC database resides,
    with Cache Connect configured at each site to autorefresh the read-only local cache group.
    ARCHITECTURE OVERVIEW
    Remote Site Location-1 ==>(Read Only Local cache Group )Oracle TT A/S<<====== Auto Refresh <---------||
    ||
    Remote Site Location-2 ==>(Read Only Local cache Group )Oracle TT A/S<<====== Auto Refresh <---------|| (CENTRAL LOCATION SITE)
    ||
    Remote Site Location-3 ==>(Read Only Local cache Group )Oracle TT A/S<<====== Auto Refresh <---------|| (2-Node RAC Database)
    ||
    Remote Site Location-4 ==>(Read Only Local cache Group )Oracle TT A/S<<====== Auto Refresh <---------||
    ||
    Remote Site Location-5 ==>(Read Only Local cache Group )Oracle TT A/S<<====== Auto Refresh <---------||
    We have gone through the Oracle technical paper below:
    http://www.oracle.com/technetwork/database/performance/wp-imdb-cache-130299.pdf
    Please see Figure 7, 'Incremental Autorefresh of Read-Only Local Cache Groups'.
    Please advise on the above architecture deployment.
    Regards
    Hitgon

    There is a great deal of stuff you need to understand and bear in mind when implementing any complex system, and this is no exception. You should start by studying the TimesTen documentation. In particular you should familiarise yourself with the information in the Cache User's Guide and the Replication Guide. These contain lots of very important information regarding the setup/deployment and operation of A/S pairs and cache groups, configuring cache connect for RAC, etc. You should also read through the troubleshooting guide to familiarise yourself with what to look at if things do not seem to work as expected.
    If you are not already very familiar with TimesTen I would also strongly recommend that you take the time to read the rest of the documentation. TimesTen is not Oracle DB, and while it is very compatible with Oracle in many areas there are also a lot of significant differences which you need to take into account when developing applications, managing the system, etc. If you are able to take an OU training course on TimesTen then I would recommend that, but if not then reading the documentation is a good second best.
    We do not 'recommend' operating systems specifically but 64-bit Linux is certainly a good choice. You might like to consider Oracle Enterprise Linux instead of Redhat; it has some advantages.
    Within each site both nodes in the TimesTen active/standby pair should be on the same LAN. I would recommend GigaBit ethernet as a minimum.
    While it is possible to write your own scripts to handle deployment, monitoring, failover and recovery of A/S pairs it is much, much easier (and much more robust) if you deploy Oracle Clusterware at each site to provide fully automated management of the A/S pairs. That is our very strong recommendation and is also best practice.
    From a TimesTen perspective, system clock synchronisation is only needed within each site (i.e. the system clocks on both nodes in an A/S pair need to be closely aligned). However, it may be desirable to have all the nodes in all the sites have their clocks aligned for other reasons.
    You need to ensure that the bandwidth and latency of the WAN connections is adequate for the amount of refresh traffic that you will have. There is no easy way to estimate/calculate this; you will need to determine this empirically.
    Those are probably the most important things. As you progress then you can of course ask questions in this forum and use Oracle Support.
    Chris

  • Query in TimesTen taking more time than query in Oracle database

    Hi,
    Can anyone please explain why a query in TimesTen is taking more time
    than the same query in the Oracle database?
    I describe my settings and what I have done, step by step, below.
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
    id NUMBER(9) PRIMARY KEY,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2. THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE (TOTAL 2599999 ROWS)...
    declare
    firstname varchar2(12);
    lastname varchar2(12);
    catt number(9);
    begin
    for cntr in 1..2599999 loop
    firstname:=(cntr+8)||'f';
    lastname:=(cntr+2)||'l';
    if cntr like '%9999' then
    dbms_output.put_line(cntr);
    end if;
    insert into student values(cntr,firstname, lastname);
    end loop;
    end;
    3. MY DSN IS SET THE FOLLOWING WAY:
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
    I SET THE TIMING..
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time
    than the query in the Oracle database?

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: SunOS 5.10; 24 x 1.8 GHz CPUs (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken from Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    where I uncomment one column at a time and rerun. I have improved the TimesTen results since my first post by retyping the NUMBER columns as BINARY_FLOAT. The results I got were:
    No. of columns    Oracle (s)    TimesTen (s)
    1                 1.05          0.94
    2                 1.07          1.47
    3                 2.04          1.80
    4                 2.06          2.08
    5                 2.09          2.40
    6                 3.01          2.67
    7                 4.02          3.06
    8                 4.03          3.37
    9                 4.04          3.62
    10                4.06          4.02
    11                4.08          4.31
    12                4.09          4.61
    13                5.01          4.76
    14                5.02          5.06
    15                5.04          5.25
    16                5.05          5.48
    17                5.08          5.84
    18                6.00          6.21
    19                6.02          6.34
    20                6.04          6.75

  • Timesten replication with multiple interfaces sharing the same hostname

    Hi,
    we have in our environment two Sun T2000 nodes, running SunOS 5.10 and hosting a TT server, currently at release 7.0.5.9.0, replicated between each other.
    I would like some more information on the behaviour of replication with respect to network reliability when using two interfaces associated with the same hostname (the one used to define the replication element).
    As an example, our nodes share these common /etc/hosts entries:
    151.98.227.5 TBMAS10df2 TBMAS10df2-10 TBMAS10df2-ttrep
    151.98.226.5 TBMAS10df2 TBMAS10df2-01 TBMAS10df2-ttrep
    151.98.227.4 TBMAS9df1 TBMAS9df1-10 TBMAS9df1-ttrep
    151.98.226.4 TBMAS9df1 TBMAS9df1-01 TBMAS9df1-ttrep
    with the following element defined for replication:
    ALTER REPLICATION REPLSCHEME
    ADD ELEMENT HDF_GNP_CDPN_1 TABLE HDF_GNP_CDPN
    CHECK CONFLICTS BY ROW TIMESTAMP
    COLUMN ConflictResTimeStamp
    REPORT TO '/sn/sps/HDF620/datamodel/tt41dataConflict.rpt'
    MASTER tt41data ON "TBMAS9df1-ttrep"
    SUBSCRIBER tt41data ON "TBMAS10df2-ttrep"
    RETURN RECEIPT BY REQUEST
    ADD ELEMENT HDF_GNP_CDPN_2 TABLE HDF_GNP_CDPN
    CHECK CONFLICTS BY ROW TIMESTAMP
    COLUMN ConflictResTimeStamp
    REPORT TO '/sn/sps/HDF620/datamodel/tt41dataConflict.rpt'
    MASTER tt41data ON "TBMAS10df2-ttrep"
    SUBSCRIBER tt41data ON "TBMAS9df1-ttrep"
    RETURN RECEIPT BY REQUEST;
    On this subject, moving from 6.0.x to 7.0.x there have been some changes I would like to understand better.
    6.0.x reported in the documentation for Unix systems:
    If a host contains multiple network interfaces (with different IP addresses),
    TimesTen replication tries to connect to the IP addresses in the same order as
    returned by the gethostbyname call. It will try to connect using the first address;
    if a connection cannot be established, it tries the remaining addresses in order
    until a connection is established.
    Now, on Solaris I don't know how to make gethostbyname return more than one interface (the documentation notes at this point:
    If you have multiple network interface cards (NICs), be sure that "multi
    on" is specified in the /etc/host.conf file. Otherwise, gethostbyname will not
    return multiple addresses).
    But I understand this may be valid for Linux-based systems, not for Solaris.
    Now, if I understand the above properly, how was 6.0.x able to realise that the first interface in the list (using the same -ttrep hostname) was down, and use the other, if gethostbyname was reporting only a single entry?
    Once upgraded to 7.0.x, we realised the ADD ROUTE option had been added to teach TT how to use different interfaces associated with the same hostname. In our environment we did not include this clause, but the replication still worked fine regardless of which interface we brought down.
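    (For reference, a ROUTE clause of this kind looks roughly like the sketch below, reusing the hostnames and addresses above; this is illustrative only, so check the Replication Guide for your release for the exact syntax:
    ALTER REPLICATION REPLSCHEME
    ADD ROUTE MASTER tt41data ON "TBMAS9df1-ttrep"
    SUBSCRIBER tt41data ON "TBMAS10df2-ttrep"
    MASTERIP "151.98.227.4" PRIORITY 1
    MASTERIP "151.98.226.4" PRIORITY 2
    SUBSCRIBERIP "151.98.227.5" PRIORITY 1
    SUBSCRIBERIP "151.98.226.5" PRIORITY 2;)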
    Both my questions ultimately lead to the same doubt about which algorithm TT uses to reach the replicated node with respect to the entries in /etc/hosts.
    Looking at the nodes I can see that by default both routes are being used:
    TBMAS10df2:/-# netstat -an|grep "151.98.227."
    151.98.225.104.45312 151.98.227.4.14000 1049792 0 1049800 0 ESTABLISHED
    151.98.227.5.14005 151.98.227.4.47307 1049792 0 1049800 0 ESTABLISHED
    151.98.227.5.14005 151.98.227.4.48230 1049792 0 1049800 0 ESTABLISHED
    151.98.227.5.46050 151.98.227.4.14005 1049792 0 1049800 0 ESTABLISHED
    TBMAS10df2:/-# netstat -an|grep "151.98.226."
    151.98.226.5.14000 151.98.226.4.47699 1049792 0 1049800 0 ESTABLISHED
    151.98.226.5.14005 151.98.226.4.47308 1049792 0 1049800 0 ESTABLISHED
    151.98.226.5.44949 151.98.226.4.14005 1049792 0 1049800 0 ESTABLISHED
    I tried to trace with ttTraceMon, but once I brought down one of the interfaces I did not see any reaction on either node. If you have some info it would be really appreciated!
    Cheers,
    Mike

    Hi Chris,
    Thanks for the reply. I have a few more queries on this.
    1. Using the ROUTE clause we can list multiple IPs with priority levels, so that if the highest-priority IP set in the ROUTE clause is not active, replication falls back to the next-priority IP. But can't we use the ROUTE clause to use multiple route IPs for replication simultaneously?
    2. Can we run multiple replication schemes for the same DSN and replication mechanism, but with different replication route IPs?
    For example:
    At present on my system I have a replication scheme running for a specific DSN with a standalone master-subscriber mechanism, using a specific route IP through VLAN-xxx for replication.
    Now I want to create and start another replication scheme for the same DSN and replication mechanism, with a different VLAN-yyy route IP to be used for replication in parallel to the existing scheme, without making any changes to the pre-existing replication scheme.
    For the above scenarios, will there be any specific changes depending on the replication scheme mechanism, i.e. active/standby versus standalone master-subscriber, etc.?
    If so, what are the steps, i.e. how do we need to change the existing scheme?
    Thanks in advance.
    Naveen

  • Creation of procedures and views in the TimesTen IMDB

    Hi,
    Query No. 1:
    We are trying to compile an existing Oracle Database procedure in the TimesTen in-memory database.
    We have cached one temporary table from that procedure in the TimesTen IMDB, but there are other objects (sequences, tables, functions) in the procedure which are not cached in the TT IMDB. During compilation in the TT database it gives a "table or view does not exist" error.
    Is there any way to retrieve data from the Oracle database even though those objects are not part of the TT IMDB?
    Our understanding is that if particular objects are cached in the TT IMDB then it will read them from the TT IMDB, and the rest, which are not cached, it should take from the backend (Oracle database). Please confirm whether our understanding is correct; if not, please guide us.
    Query No. 2:
    Can we create a view in the TimesTen IMDB on objects which are part of the Oracle database but are not cached in the TT IMDB?

    For #1:
    TimesTen does support a feature called PassThrough which allows 'transparent' access to objects in the Oracle database under specific circumstances. However, the functionality of PassThrough is very limited when PL/SQL is being used. In general, any object referenced by a PL/SQL procedure that is executing in TimesTen must exist in TimesTen.
    Even when PassThrough can be used, queries and transactions cannot span TimesTen and Oracle. Every individual query will always execute completely in TimesTen (all objects must reside in TimesTen) or in Oracle (all objects must reside in Oracle). You cannot execute a single SQL statement that references objects in both TimesTen and Oracle. Similarly, transactions on TimesTen are separate from transactions in Oracle; we do not do any form of distributed transaction. If you update objects in both TimesTen and Oracle within one transaction you actually have two separate transactions: one in Oracle and one in TimesTen. When you commit, it is possible that one of the transactions will succeed and the other fail. This type of mixed usage is not recommended.
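    For what it is worth, pass-through behaviour is controlled by the PassThrough connection attribute, set at connect time; for example, in ttIsql (the DSN name is hypothetical):
    connect "dsn=my_dsn;PassThrough=1";
    At level 0 everything executes in TimesTen; higher levels pass progressively more statements through to Oracle (see the Reference Guide for the precise definition of each level).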
    For #2:
    No. You could cache tables from Oracle into TimesTen and then create a view in TimesTen on the cached tables.
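    A minimal sketch of that approach (the object names are hypothetical; the cache group syntax follows the same pattern as elsewhere in this thread):
    CREATE READONLY CACHE GROUP cg_ref
    AUTOREFRESH INTERVAL 30 SECONDS
    FROM scott.ref_codes
    (code  VARCHAR2(10) NOT NULL PRIMARY KEY,
     descr VARCHAR2(100));
    LOAD CACHE GROUP cg_ref COMMIT EVERY 256 ROWS;
    -- the view is then defined on the cached table, entirely inside TimesTen
    CREATE VIEW v_ref AS SELECT code, descr FROM scott.ref_codes;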
    Chris

  • Hierarchical Query in TimesTen?

    I found an old forum question from 2006 asking about availability of Hierarchical queries in TimesTen, the response given at that time was in the negative.
    Now almost 9 years later - have Hierarchical Queries become available in TimesTen?
    I am referring to queries using    " .... start with ... connect by ...  "
    I tried it in SQLDeveloper but got a syntax error.
    I am wondering if this is because it's still not available, or possibly because the syntax is different in TimesTen than in Oracle.
    Thanks

    Nope, these are still not supported. Sorry.
    Chris

  • Oracle 11g result cache and TimesTen

    Oracle 11g has introduced the concept of a result cache, whereby the result set of a frequently executed query is stored in cache and used later when other users request the same query. This is different from caching the data blocks and executing the query over and over again.
    Tom Kyte calls this a just-in-time materialized view, whereby the results are dynamically evaluated without DBA intervention:
    http://www.oracle.com/technology/oramag/oracle/07-sep/o57asktom.html
    My point is: in view of utilities like result_cache, and the possible use of solid state disks in Oracle to speed up physical I/O, etc., is there any need for a product like TimesTen? It sounds to me like it may just add another layer of complexity.

    The Oracle result cache is a useful tool but it is distinctly different from TimesTen. My understanding of Oracle's result cache is that it caches result sets for seldom-changing data, like lookup tables (currency IDs/codes) and reference data that does not change often (lists of counterparties). It would be pointless to cache result sets where the underlying data changes frequently.
    There is also another argument for the SQL result cache: if your CPU usage is running high and you have enough memory, then you can cache some of the result sets, saving CPU cycles.
    Considering the arguments about hard-wired RDBMSs and solid state disks (SSDs): we can talk about it all day, but having SSDs does not eliminate the optimiser's consideration of physical I/O. A table scan is a table scan whether the data resides on a SCSI or an SSD disk. SSDs will be faster, but we are still performing physical I/Os.
    With regard to TimesTen, the product positioning is different. TimesTen sits closer to the middle tier than Oracle. It is designed to work close to the application layer, whereas Oracle has a much wider purpose. For real-time response and moderate volumes there is no way one can substitute any hard-wired RDBMS for TimesTen. The request for a result cache has been around for some time. In areas like program trading and market data, where the underlying data changes rapidly, TimesTen comes in very handy, as the data is real-time/transient and the calculations have to be done almost in real time, with the least complication from the execution engine. I fail to see how one could deploy a result cache in this scenario. Because the underlying data changes, Oracle would be forced to recalculate the queries almost every time and the result cache would just be wasted.
    Hope this helps,
    Mich

  • Are there any timesten installation for data warehouse environment?

    Hi,
    I wonder if there is a way to install TimesTen as an in-memory database for a data warehouse environment?
    The DW today consists of a large Oracle database, and I wonder whether and how a TimesTen implementation could be done.
    What kind of application changes would such an implementation involve, and so on?
    I know the answer is probably complex, but if anyone knows of such an implementation and has some information about it, it would be great to learn from that experience.
    Thanks,
    Adi

    Adi,
    It depends on what you want to do with the data in the TimesTen database. If you know the "hot" dataset that you want to cache in TimesTen, you can use Cache Connect to Oracle to cache a subset of your Oracle tables into TimesTen. The key is to figure out what queries you want to run and see if the queries are supported in TimesTen.
    Assuming you know the dataset you need to cache and you have control of your application code to change the connection to TimesTen (using ODBC or JDBC), you can give it a try. If you are using a third party tool, you need to see if the tool supports JDBC or ODBC access to the database and change the tool to point to your TimesTen database instead of the Oracle database.
    If you are using the TimesTen Cache Connect to Oracle product option, data synchronization between Oracle and TimesTen is handled automatically by the product.
    Without further details of what you'd like to do, it's difficult to provide more detailed recommendation.
    -scheung

  • A question about cache group error in TimesTen 7.0.5

    Hello Chris,
    We got some errors about a cache group:
    2008-09-21 08:56:15.99 Err : ORA: 229574: ora-229574-3085-ogTblGC00405: Failed calling OCI function: OCIStmtFetch()
    2008-09-21 08:56:15.99 Err : ORA: 229574: ora-229574-3085-raUtils00373: Oracle native error code = 1405, msg = ORA-01405: fetched column value is NULL
    2008-09-21 08:56:28.16 Err : ORA: 229576: ora-229576-2057-raStuff09837: Unexpected row count. Expecting 1. Got 0.
    The exact scenario is: our Oracle server was restarted for some reason, but we did not restart the cache agent. These errors then started appearing.
    We want to know: if the Oracle server is restarted, do we need to restart the cache agent? Thank you.

    Yes, the tracking table will track all changes to the associated base table. Only changes that meet the cache group WHERE clause predicate will be refreshed to TimesTen.
    The tracking table is managed automatically by the cache agent. As long as the cache agent is running and AUTOREFRESH is occurring the table will be space managed and old data will be purged.
    It is okay if very occasionally an AUTOREFRESH is unable to complete within its defined interval but if this happens with any regularity then this is a problem since this situation is unsustainable. To remedy this you need to try one or more of:
    1. Tune execution of AUTOREFRESH queries in Oracle. This may mean adding additional indexes to some of the cached Oracle tables. There is an article on this in MetaLink (doc note 473493.1).
    2. Increase the AUTOREFRESH interval so that a refresh can always complete within the defined interval (see the sketch below).
    In any event it is important that you have enough space to cope with the 'steady state' size of the tracking table. If the cache agent will not be running for any significant length of time you need to manually clean up the tracking table. In TimesTen 11g a script to do this is provided, but it is not officially supported in TimesTen 7.0.
    If the rate of updates on the base table is such that you cannot arrive at a sustainable situation by tuning etc. then you will need to consider more radical options such as breaking the table into multiple separate tables :-(
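    As a sketch of option 2 (the cache group name is hypothetical; check the SQL reference for your TimesTen release), the interval can be changed in place:
    ALTER CACHE GROUP cg_orders SET AUTOREFRESH INTERVAL 30 SECONDS;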
    Chris

  • Query takes more time on TimesTen

    Hi
    One query takes a lot of time on TimesTen, while the same query takes less time on Oracle.
    Query:
    select *
                    from (SELECT RELD_EM_ENTITY_ID,
                                 RELD_EXM_EXCH_ID,
                                 RELD_SEGMENT_TYPE,
                                NVL(RELD_ACC_TYPE, 0) ACC_TYPE, --END ASHA
                                 ROUND(NVL(sum(RELD_RTO_EXP), -1), 2) RTO_EXP,
                                 ROUND(NVL(sum(RELD_NE_EXP), -1), 2) NET_EXP,
                                 ROUND(NVL(sum(RELD_MAR_UTILIZATION), -1),
                                       2) MAR_EXP,
                                 ROUND(decode(sign(sum(C.reld_m2m_exp)),
                                              -1,
                                              abs(sum(C.reld_m2m_exp)),
                                              0),
                                       2) M2M_EXP,
                                 NVL(OLM_SUSPNSN_FLG, 'A') SUSPNSN_FLG,
                                 EM_RP_PROFILE_ID
                            FROM ENTITY_MASTER         A,
                                 ORDER_LMT_MASTER      B,
                                 RMS_ENTITY_LIMIT_DTLS C
                           WHERE A.EM_CONTROLLER_ID = 100100010000
                             AND A.EM_ENTITY_ID = C.RELD_EM_ENTITY_ID
                             AND C.RELD_EM_ENTITY_ID = B.OLM_EPM_EM_ENTITY_ID(+)
                             AND C.RELD_SEGMENT_TYPE = 'E'
                             AND C.RELD_EXM_EXCH_ID = B.OLM_EXCH_ID(+)
                             AND C.RELD_EXM_EXCH_ID <> 'ALL'
                             AND B.OLM_SEM_SMST_SECURITY_ID(+) =
                                 'ALL'
                             AND ((A.EM_ENTITY_TYPE IN ('CL') AND
                                 A.EM_CLIENT_TYPE <> 'CC') OR
                                 A.EM_ENTITY_TYPE <> 'CL')
                             AND B.OLM_PRODUCT_ID(+) = 'M' --Added by Harshit Shah on 4th June 09
                           GROUP BY RELD_EM_ENTITY_ID,
                                    RELD_EXM_EXCH_ID,
                                    RELD_SEGMENT_TYPE,
                                    RELD_ACC_TYPE,
                                    OLM_SUSPNSN_FLG,
                                    EM_RP_PROFILE_ID,
                                    OLM_PRODUCT_ID
                          UNION --union all removed by pramod on 08-jan-2012 as it was giving multiple rows
                          SELECT RELD_EM_ENTITY_ID,
                                 RELD_EXM_EXCH_ID,
                                 RELD_SEGMENT_TYPE SEGMENTID,
                               NVL(RELD_ACC_TYPE, 0) ACC_TYPE, --END ASHA
                                 ROUND(NVL(SUM(RELD_RTO_EXP), -1), 2) RTO_EXP,
                                 ROUND(NVL(SUM(RELD_NE_EXP), -1), 2) NET_EXP,
                                 ROUND(NVL(SUM(RELD_MAR_UTILIZATION), -1),
                                       2) MAR_EXP,
                                 ROUND(decode(sign(SUM(C.reld_m2m_exp)),
                                              -1,
                                              abs(SUM(C.reld_m2m_exp)),
                                              0),
                                       2) M2M_EXP,
                                 NVL(OLM_SUSPNSN_FLG, 'A') SUSPNSN_FLG,
                                 EM_RP_PROFILE_ID
                            FROM ENTITY_MASTER         A,
                                 ORDER_LMT_MASTER      B,
                                 RMS_ENTITY_LIMIT_DTLS C
                           WHERE A.EM_CONTROLLER_ID = 100100010000
                             AND A.EM_ENTITY_ID = B.OLM_EPM_EM_ENTITY_ID
                             AND B.OLM_EPM_EM_ENTITY_ID = C.RELD_EM_ENTITY_ID(+)
                             AND C.RELD_SEGMENT_TYPE = 'E'
                             AND B.OLM_EXCH_ID = 'ALL'
                             AND B.OLM_SEM_SMST_SECURITY_ID(+) =
                                 'ALL'
                             AND ((A.EM_ENTITY_TYPE IN ('CL') AND
                                 A.EM_CLIENT_TYPE <> 'CC') OR
                                 A.EM_ENTITY_TYPE <> 'CL')
                             AND B.OLM_PRODUCT_ID(+) = 'M' --Added by Harshit Shah on 4th June 09
                           GROUP BY RELD_EM_ENTITY_ID,
                                    RELD_EXM_EXCH_ID,
                                    RELD_SEGMENT_TYPE,
                                    RELD_ACC_TYPE,
                                    OLM_SUSPNSN_FLG,
                                    EM_RP_PROFILE_ID,
                                    OLM_PRODUCT_ID
                          UNION --union all removed by pramod on 08-jan-2012 as it was giving multiple rows
                          SELECT RELD_EM_ENTITY_ID,
                                 RELD_EXM_EXCH_ID,
                                 RELD_SEGMENT_TYPE,
                                 NVL(RELD_ACC_TYPE, 0) ACC_TYPE, --END ASHA
                                 ROUND(NVL(sum(RELD_RTO_EXP), -1), 2) RTO_EXP,
                                 ROUND(NVL(sum(RELD_NE_EXP), -1), 2) NET_EXP,
                                 ROUND(NVL(sum(RELD_MAR_UTILIZATION), -1),
                                       2) MAR_EXP,
                                 ROUND(decode(sign(sum(C.reld_m2m_exp)),
                                              -1,
                                              abs(sum(C.reld_m2m_exp)),
                                              0),
                                       2) M2M_EXP,
                                 NVL(OLIM_SUSPNSN_FLG, 'A') SUSPNSN_FLG,
                                 EM_RP_PROFILE_ID
                            FROM ENTITY_MASTER             A,
                                 DRV_ORDER_INST_LMT_MASTER B,
                                 RMS_ENTITY_LIMIT_DTLS     C
                           WHERE A.EM_CONTROLLER_ID = 100100010000
                             AND A.EM_ENTITY_ID = C.RELD_EM_ENTITY_ID
                             AND C.RELD_EM_ENTITY_ID =
                                 B.OLIM_EPM_EM_ENTITY_ID(+)
                             AND C.RELD_SEGMENT_TYPE = 'D'
                             AND C.RELD_EXM_EXCH_ID = B.OLIM_EXCH_ID(+)
                             AND C.RELD_EXM_EXCH_ID <> 'ALL'
                             AND B.OLIM_INSTRUMENT_ID(+) = 'ALL'
                             AND ((A.EM_ENTITY_TYPE IN ('CL') AND
                                 A.EM_CLIENT_TYPE <> 'CC') OR
                                 A.EM_ENTITY_TYPE <> 'CL')
                           GROUP BY RELD_EM_ENTITY_ID,
                                    RELD_EXM_EXCH_ID,
                                    RELD_SEGMENT_TYPE,
                                    RELD_ACC_TYPE,
                                    OLIM_SUSPNSN_FLG,
                                    EM_RP_PROFILE_ID
                          UNION --union all removed by pramod on 08-jan-2012 as it was giving multiple rows
                          SELECT RELD_EM_ENTITY_ID,
                                 RELD_EXM_EXCH_ID,
                                 RELD_SEGMENT_TYPE SEGMENTID,
                                 NVL(RELD_ACC_TYPE, 0) ACC_TYPE, --END ASHA
                                 ROUND(NVL(SUM(RELD_RTO_EXP), -1), 2) RTO_EXP,
                                 ROUND(NVL(SUM(RELD_NE_EXP), -1), 2) NET_EXP,
                                 ROUND(NVL(SUM(RELD_MAR_UTILIZATION), -1),
                                       2) MAR_EXP,
                                 ROUND(decode(sign(SUM(C.reld_m2m_exp)),
                                              -1,
                                              abs(SUM(C.reld_m2m_exp)),
                                              0),
                                       2) M2M_EXP,
                                 NVL(OLIM_SUSPNSN_FLG, 'A') SUSPNSN_FLG,
                                 EM_RP_PROFILE_ID
                            FROM ENTITY_MASTER             A,
                                 DRV_ORDER_INST_LMT_MASTER B,
                                 RMS_ENTITY_LIMIT_DTLS     C
                           WHERE A.EM_CONTROLLER_ID = 100100010000
                             AND A.EM_ENTITY_ID = B.OLIM_EPM_EM_ENTITY_ID
                             AND B.OLIM_EPM_EM_ENTITY_ID =
                                 C.RELD_EM_ENTITY_ID(+)
                             AND C.RELD_SEGMENT_TYPE = 'D'
                             AND B.OLIM_EXCH_ID = 'ALL'
                             AND B.OLIM_INSTRUMENT_ID(+) = 'ALL'
                             AND ((A.EM_ENTITY_TYPE IN ('CL') AND
                                 A.EM_CLIENT_TYPE <> 'CC') OR
                                 A.EM_ENTITY_TYPE <> 'CL')
                           GROUP BY RELD_EM_ENTITY_ID,
                                    RELD_EXM_EXCH_ID,
                                    RELD_SEGMENT_TYPE,
                                    RELD_ACC_TYPE,
                                    OLIM_SUSPNSN_FLG,
                                    EM_RP_PROFILE_ID)
                   ORDER BY RELD_EM_ENTITY_ID,
                            RELD_SEGMENT_TYPE,
                            RELD_EXM_EXCH_ID;
    Please suggest what I should check for this.

    As always when examining SQL performance, start by checking the query execution plan. If you use ttIsql you can just prepend EXPLAIN to the query and the plan will be displayed, e.g.:
    EXPLAIN select ...........;
    Check that the plan is optimal and that all necessary indexes are in place. You may need to add indexes depending on what the plan shows.
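    For instance, a minimal ttIsql session might look like this (the index name EM_CTRL_IX is purely illustrative; whether such an index helps depends entirely on what the plan shows for your schema):
        Command> EXPLAIN SELECT * FROM ENTITY_MASTER WHERE EM_CONTROLLER_ID = 100100010000;
        -- examine the plan steps for serial scans of large tables
        Command> CREATE INDEX EM_CTRL_IX ON ENTITY_MASTER (EM_CONTROLLER_ID);
        Command> EXPLAIN SELECT * FROM ENTITY_MASTER WHERE EM_CONTROLLER_ID = 100100010000;
        -- re-check the plan to confirm the new index is actually used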
    Please note that Oracle database can, and usually does, execute many types of query in parallel using multiple CPU cores. TimesTen does not currently support parallelisation of individual queries. Hence in some cases Oracle database may indeed be faster than TimesTen due to the parallel execution that occurs in Oracle.
    Chris

  • Process of initialization of DB into memory for TimesTen

    Hi
    Can I know the internal process by which TimesTen initializes the database into memory when a new connection is being established?
    Will TimesTen create tables and indexes in RAM when the first connection is established, if the RAM policy is the default?
    I want to know the internal functional flow of TimesTen when any command is fired against it.
    Regards
    Siva Kumar

    TimesTen is a fully persistent database. The database contents, including tables, indexes and other objects along with the data, exist both in memory (the operational database) and in the checkpoint files on disk (a persistent copy). When the database is started up (ramLoaded) the most recent checkpoint file is loaded into memory. There is no need to 'create' any structures etc. at this point since they are already present in the checkpoint image loaded from disk. Although it is possible to have a TimesTen database that gets loaded into memory when the first application connection occurs and gets unloaded from memory when the last application connection disconnects, in general we recommend that you explicitly control when the database is ramLoaded/ramUnloaded using the 'manual' ramPolicy and the ttAdmin -ramLoad / -ramUnload commands. In this way you can avoid excessive loading and unloading of the database to/from memory.
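    For example, with a hypothetical DSN named mydsn, manual control looks like this (commands run from the OS shell):
        ttAdmin -ramPolicy manual mydsn    # set the RAM policy to manual
        ttAdmin -ramLoad mydsn             # load the database into memory now
        ttAdmin -ramUnload mydsn           # unload it when you choose to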
    TimesTen works similarly to other databases; when a SQL statement is prepared (parsed, in Oracle database parlance) the TimesTen optimiser uses the information in the data dictionary tables to generate a query plan. There is no concept of a data dictionary 'cache' since all data in TimesTen is permanently in memory. Active query plans are cached within TimesTen memory since they are transient data. When executing a query (using a plan) TimesTen accesses index and table data directly in memory. There is no 'buffer cache' since all data stored in TimesTen is permanently resident in memory. In general, for OLTP-type queries, a query is parameterised, prepared just once and then executed many, many times. This is the way to achieve maximum performance with any SQL database.
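    As a sketch of that prepare-once/execute-many pattern using ttIsql's prepared-statement commands (the table is taken from the earlier post; ttIsql prompts for the parameter value on each execution, whereas a real application would bind parameters via ODBC/JDBC):
        Command> prepare 1 SELECT olm_exch_id FROM order_lmt_master WHERE olm_epm_em_entity_id = ?;
        Command> exec 1;
        -- statement compiled once; ttIsql asks for the parameter value
        Command> exec 1;
        -- subsequent executions reuse the already-prepared plan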
    As a result of these (and many other) optimisations/simplifications, TimesTen can achieve very high performance through simpler algorithms that require fewer CPU cycles to do the same work as in a more complex database.
    Does that help?
    Chris
