How to load a cache group?

Dear ChrisJenkins,
My project uses TimesTen. There is a table (in a read-only cache group) in TimesTen.
For example:
create table A (id number, content varchar(20));
insert into A values (1, 'a1');
insert into A values (2, 'a2');
insert into A values (n, 'an');
commit;
The table A is loaded with 10 rows ('a1' through 'a10'). If I execute the following SQL:
"Load cache group A where id >= 2 and id <= 11"
, how will TimesTen execute it?
My guess is:
TimesTen won't load the rows with id = 2 through 10, because those rows are already in memory;
TimesTen will only load the row with id = 11, because that row is not in memory.
Is that correct?
Thanks, regards,
TuanTA

In your example you are using a regular table, not a read-only cache group table. If you were using a read-only cache group then the table would be created like this:
CREATE READONLY CACHE GROUP CG_A
AUTOREFRESH MODE INCREMENTAL INTERVAL 10 SECONDS STATE PAUSED
FROM
ORACLEOWNER.A ( ID NUMBER, CONTENT VARCHAR(20));
This assumes that the table ORACLEOWNER.A already exists in Oracle with the same schema. The table in TimesTen will start off empty. Also, you cannot insert, delete or update the rows in this table directly in TimesTen (that is why it is called a READONLY cache group); if you try you will get an error. All data for this table has to originate in Oracle. Let's say that in Oracle you now do the following:
insert into A values (1, 'a1');
insert into A values (2, 'a2');
insert into A values (10, 'a10');
commit;
Still the table in TimesTen is empty. We can load the table with the data from Oracle using:
LOAD CACHE GROUP CG_A COMMIT EVERY 256 ROWS;
Now the table in TimesTen has the same rows as the table in Oracle. Also, the LOAD operation changes the AUTOREFRESH state from PAUSED to ON. You still cannot directly insert, update or delete in this table in TimesTen, but any data changes arising from DML executed on the Oracle table will be captured and propagated to TimesTen by the AUTOREFRESH mechanism. If you now did, in Oracle:
UPDATE A SET CONTENT = 'NEW' WHERE ID = 3;
INSERT INTO A VALUES (11, 'a11');
COMMIT;
Then, after the next autorefresh cycle (every 10 seconds in this example), the table in TimesTen would contain:
1, 'a1'
2, 'a2'
3, 'NEW'
4, 'a4'
5, 'a5'
6, 'a6'
7, 'a7'
8, 'a8'
9, 'a9'
10, 'a10'
11, 'a11'
So your question does not apply to READONLY cache groups.
If you used a USERMANAGED cache group then your question could apply (as long as the cache group was not using AUTOREFRESH and the table had not been marked READONLY). In that case a LOAD CACHE GROUP command will only load qualifying rows that do not already exist in the cache table in TimesTen. Rows whose primary key already exists in TimesTen are not loaded from Oracle, even if the other columns have different values from those in TimesTen. Contrast this with REFRESH CACHE GROUP, which will replace all matching rows in TimesTen with the rows from Oracle.
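For example, a minimal sketch contrasting the two operations (CG_UM is a hypothetical USERMANAGED cache group name):
LOAD CACHE GROUP CG_UM COMMIT EVERY 256 ROWS;    -- loads only qualifying rows whose primary keys are not already cached
REFRESH CACHE GROUP CG_UM COMMIT EVERY 256 ROWS; -- replaces the matching cached rows with the current values from Oracle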
Chris

Similar Messages

  • Error loading cache group, but cache group created without error

    Hi
    I have created a cache group, but when I load that cache group I get the following error:
    Command> load cache group SecondCache commit every 1 rows;
    5056: The cache operation fails: error_type=<Oracle Error>, error_code=<972>, error_message:ORA-00972: identifier is too long
    5037: An error occurred while load TESTUSER.SECONDCACHE:Load failed (ORA-00972: identifier too long)
    The command failed.
    Please help.
    Looking forward for your reply.
    /Ahmad

    Hi Chris!
    Thanks for the quick response. I have solved my problem to some extent but want to share.
    Actually I had a column named # which is also the primary key. When I changed that column name from # to something else, the cache group loaded successfully.
    Is there any way in TimesTen to load column names like #?
    I read in the TimesTen documentation that it allows # in column names, which is presumably why the cache group is created, but it fails to load and I do not know why.
    The code for creating cache group is as follows:
    create cache group MEASCache from testuser."MEAS"(
    "UPDATED" number not null,
    "UNOCCUPIEDRECORD" number not null,
    "VALUECURRENT" number not null,
    "EQSFREF" number not null,
    "IMPLEMENTED" number not null,
    "FORMAT" number not null,
    "#" number not null,
    primary key("#"))
    When I change the # column name to something like IDENTITY, it works fine.
    /Ahmad

  • Load cache group with parallel error, 907

    hello, chris:
    We have hit another problem. When we create a cache group and then load the data with PARALLEL 8, we get a unique constraint conflict. We checked the data but did not find any problem with it, so we loaded the data again without the parallel clause and it worked well; all the data loaded. Then we unloaded and loaded with PARALLEL 8 again, and the unique constraint conflict appeared again. What happened?
    thank you...
    The script is:
    create readonly cache group FSZW_OCECS.SP_SUBSCRIBER_RELATION_CACHE
    autorefresh
    mode incremental
    interval 29000 milliseconds
    /* state on */
    from
    FSZW_OCECS.SP_SUBSCRIBER_RELATION (
    SUBS_RELATION_ID TT_BIGINT NOT NULL,
    PRIVID VARCHAR2(32 BYTE) INLINE NOT NULL,
    SUBSID TT_BIGINT,
    SWITCH_FLAG VARCHAR2(2 BYTE) INLINE,
    DISCOUNT_CODE VARCHAR2(8 BYTE) INLINE NOT NULL,
    DISCOUNT_SERIAL TT_INTEGER,
    START_DATE DATE NOT NULL,
    END_DATE DATE,
    MOBILENO VARCHAR2(15 BYTE) INLINE NOT NULL,
    APPLY_DATE DATE,
    primary key (SUBS_RELATION_ID))
    where NODEID = '334' or NODEID IS NULL,
    FSZW_OCECS.SP_SUBSCRIBER_ATTRINFO (
    SUB_ATTACH_ID TT_BIGINT NOT NULL,
    SUBS_RELATION_ID TT_BIGINT,
    SUB_ATTACH_INFO VARCHAR2(16 BYTE) INLINE NOT NULL,
    SUB_ATTACH_TYPE VARCHAR2(2 BYTE) INLINE,
    primary key (SUB_ATTACH_ID),
    foreign key (SUBS_RELATION_ID)
    references FSZW_OCECS.SP_SUBSCRIBER_RELATION (SUBS_RELATION_ID));
    Command> load cache group SP_SUBSCRIBER_RELATION_CACHE commit every 25600 rows PARALLEL 8;
    5056: The cache operation fails: error_type=<TimesTen Error>, error_code=<907>, error_message: [TimesTen]TT0907: Unique constraint (SP_SUBSCRIBER_ATTRINFO) violated at Rowid <0x0000000091341e88>
    5037: An error occurred while loading FSZW_OCECS.SP_SUBSCRIBER_RELATION_CACHE:Load failed ([TimesTen]TT0907: Unique constraint (SP_SUBSCRIBER_ATTRINFO) violated at Rowid <0x0000000091341e88>
    Command> load cache group FSZW_OCECS.SP_SUBSCRIBER_RELATION_CACHE commit every 25600 rows;
    5746074 cache instances affected.

    This looks like a bug to me but I haven't been able to find a known candidate. Are you able to log an SR and provide a testcase so we can reproduce it here and verify if it is a new bug? Thanks.

  • More than one root table: how to design cache groups?

    hi,
    Each cache group has only one root table and possibly many child tables. My relational model is:
    A (id number, name ..., primary key (id))
    B (id number, ..., primary key (id))
    A_B_rel (aid number, bid number, foreign key (aid) references A (id),
    foreign key (bid) references B (id))
    My select statement is "select ... from a, b, a_b_rel where ....".
    I want to cache these three tables; how should I create the cache groups?
    My design is three AWT cache groups: cache group A for A, cache group B for B, and cache group AB for A_B_rel.
    Is there a better solution?

    As you have discovered, you cannot put all three of these tables into one cache group. For READONLY cache groups the solution is simple: put two of the tables (say A and A_B_rel) in one cache group and the other table (B) in a different cache group, and make sure that both use the same AUTOREFRESH interval.
    For your case, using AWT cache groups, the situation is a bit more complicated. You must cache the tables as two different cache groups as mentioned above, but you cannot define a foreign key relationship in TimesTen between tables in different cache groups. Hence you will need to add logic to your application to check and enforce the 'missing' foreign key relationship (B to A_B_rel in this example) to ensure that you do not inadvertently insert data that would violate the FK relationship defined in Oracle. Otherwise you could insert invalid data in TimesTen and this would then fail to propagate to Oracle. A sketch of the split is shown below.
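    A minimal sketch of the two-cache-group split (illustrative only: the column lists are assumed, and I have added the primary keys that cache group tables require; adjust to your real schema):
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP CG_A_AB
    FROM
    MYOWNER.A (ID NUMBER NOT NULL, NAME VARCHAR2(30), PRIMARY KEY (ID)),
    MYOWNER.A_B_REL (AID NUMBER NOT NULL, BID NUMBER NOT NULL,
    PRIMARY KEY (AID, BID),
    FOREIGN KEY (AID) REFERENCES MYOWNER.A (ID));
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP CG_B
    FROM
    MYOWNER.B (ID NUMBER NOT NULL, PRIMARY KEY (ID));
    The BID to B(ID) foreign key cannot be declared here because the tables are in different cache groups, so the application has to enforce that relationship itself.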
    Chris

  • CPU usage high when loading cache group

    Hi,
    What are the possible reasons for high CPU usage when loading a read-only cache group with a big root table (~1 million records)? I have tried setting Logging=0 (without cache agent), 1 or 2, but it doesn't help. Is any other tuning configuration required to avoid high CPU consumption?
    ttVersion: TimesTen Release 6.0.2 (32 bit Solaris)
    Any help would be highly appreciated. Thanks in advance.

    High CPU usage is not necessarily a problem as long as the CPU is being used to do useful work. In that case, high CPU usage shows that things are being processed taking maximum advantage of the available CPU power. The single most common mistake is not properly sizing the primary key hash index in TimesTen. Whenever you create a table with a PK in TimesTen (whether it is part of a cache group or just a standalone table) you must always specify the size of the PK hash index using the UNIQUE HASH ON (pk columns) PAGES = n clause (see the documentation). n should be set to the maximum number of rows expected in the table divided by 256. The default is sized for a table of just 4000 rows! If you try to load 1M rows into such a table, a lot of CPU time will be wasted serially scanning the (very long) hash chains in each bucket for every row inserted.
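    A minimal sketch for a table expected to hold about 1 million rows (the table and column names are made up); 1,000,000 / 256 is roughly 3907, rounded up here to 4000:
    CREATE TABLE MYSCHEMA.BIG_TABLE (
    ID      NUMBER NOT NULL,
    CONTENT VARCHAR2(100),
    PRIMARY KEY (ID))
    UNIQUE HASH ON (ID) PAGES = 4000;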

  • IMDB Cache group load and long running transaction

    Hello,
    We are investigating the use of IMDB Cache to cache a number of large Oracle tables. When loading the cache I have noticed logs accumulating and I am not quite sure why this should be. I have a read-only cache group consisting of 3 tables with approximately 88 million, 74 million and 570 million rows in each table. To load the cache group I run the following -
    LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows -
    Command> call ttLogHolds ;
    < 0, 12161024, Long-Running Transaction      , 1.1310 >
    < 170, 30025728, Checkpoint                    , Entity.ds0 >
    < 315, 29945856, Checkpoint                    , Entity.ds1 >
    3 rows found.
    I read this as saying from log 0 to current must be kept for the long running transaction. From what I can see the long running transaction is the cache group load. Is this expected? I was expecting the commit in the load cache group to allow the logs to be deleted. I am able to query the contents of the tables at various times in the load so I can see that the commit is taking place.
    Thanks
    Mark

    Hello,
    I couldn't recall whether I had changed the Autocommit settings when I ran the load so I tried a couple more runs. From what I could see the value of autocommit did not influence how the logs were treated. For example -
    1. Autocommit left as the default -
    Connection successful: DSN=Entity;UID=cacheadm;DataStore=/prod100/oradata/ENTITY/Entity;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so;LogDir=/prod100/oradata/ENTITY;PermSize=66000;TempSize=2000;TypeMode=0;OracleNetServiceName=TRAQPP.world;
    (Default setting AutoCommit=1)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    Logholds shows a long running transaction -
    Command> call ttlogholds ;
    < 0, 11915264, Long-Running Transaction      , 1.79 >
    < 474, 29114368, Checkpoint                    , Entity.ds0 >
    < 540, 1968128, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    2011-01-19 14:10:03.135
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: timestenorad
    28427   0x16fd6910            7.26     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69211971680          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69211971680          TRAQDBA.AADNA
                                                       Command   69211971680          S     69211971680         
                                  8.10029  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.10582  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.10477  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.10332  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.10546  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.10261  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.10637  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.10669  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.10111  Active      Database  0x01312d0001312d00   IX    0                   
    Program File Name: ttIsqlCmd
    29317   0xde257d0             1.79     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211584104          SYS.TABLES
                                                       Command   69211584104          S     69211584104         
    11 outstanding transactions found
    And the commands were
    < 69211971680, 2048, 1, 1, 0, 0, 1392, CACHEADM                       , load cache group CACHEADM.ER_RO_CG commit every 1000 rows parallel 10 _tt_bulkFetch 4096 _tt_bulkInsert 1000 >
    < 69211584104, 2048, 1, 1, 0, 0, 1400, CACHEADM                       , LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 >
    Running the load again with autocommit off -
    Command> AutoCommit
    autocommit = 1 (ON)
    Command> AutoCommit 0
    Command> AutoCommit
    autocommit = 0 (OFF)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    Logholds shows a long running transaction
    Command>  call ttlogholds ;
    < 1081, 6617088, Long-Running Transaction      , 2.50157 >
    < 1622, 10377216, Checkpoint                    , Entity.ds0 >
    < 1668, 55009280, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    er.oracle$ ttXactAdmin entity                                             
    2011-01-20 07:23:54.125
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: ttIsqlCmd
    2368    0x12bb37d0            2.50157  Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211634216          SYS.TABLES
                                                       Command   69211634216          S     69211634216         
    Program File Name: timestenorad
    28427   0x2abb580af2a0        7.2358   Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69212120320          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69212120320          TRAQDBA.AADNA
                                                       Command   69212120320          S     69212120320         
                                  8.24870  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.26055  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.25659  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.25469  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.25694  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.25465  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.25841  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.26288  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.24924  Active      Database  0x01312d0001312d00   IX    0                   
    11 outstanding transactions found
    What I did notice was that TimesTen runs three queries against the Oracle server: the first to select from the parent table, the second to join the parent to the first child and the third to join the parent to the second child. Logholds seems to show a long running transaction once the second query starts. For example, I was monitoring the load of the parent table, checking ttlogholds to watch for a long running transaction. As shown below, a long running transaction entry appeared around 09:01:41 -
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:37 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39278592, Checkpoint                    , Entity.ds1 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    2 rows found.
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:41 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39290880, Long-Running Transaction      , 2.50167 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    < 2929, 65347584, Checkpoint                    , Entity.ds1 >
    3 rows found.
    This roughly matches the time the query that selects the rows for the first child table started in Oracle:
    traqdba@TRAQPP> select sm.sql_id,sql_exec_start,sql_fulltext
      2  from v$sql_monitor sm, v$sql s
      3  where sm.sql_id = 'd6fmfrymgs5dn'
      4  and sm.sql_id = s.sql_id ;
    SQL_ID        SQL_EXEC_START       SQL_FULLTEXT
    d6fmfrymgs5dn 20/JAN/2011 08:59:27 SELECT "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_
                                       MAP"."EVIDENCE_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."EVIDENCE_VALUE", "TRAQDBA"
                                       ."ENT_TO_EVIDENCE_MAP"."CREATED_DATE_TIME" FROM "TRAQDBA"."ENT_TO_EVIDENCE_MAP",
                                        "TRAQDBA"."AADNA" WHERE "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY" = "TRAQDBA
                                       "."AADNA"."ADR_ADDRESS_NAME_KEY"
    Elapsed: 00:00:00.00
    Thanks
    Mark

  • Unable to create Cache Group from Cache Administrator

    Folks,
    I am attempting to create a cache group from the Cache Administrator.
    I have set all the data source properties and am able to log in to the data source, but when I attempt to create a cache group (i.e. I specify the name and type of the cache group), I get a message in red at the bottom saying "Gathering table information, please wait" and... that's it. Nothing happens!
    I am able to move the cursor etc. but the cache group is not defined.
    Anybody have any suggestions as to what I'm doing wrong? Any help would be appreciated!
    keshava

    You cannot have multiple root tables within one cache group. The requirements for putting tables together into one cache group are very strict; there must be one top level table (the root table) and there can optionally be multiple child tables. The child tables must be related via foreign keys either to the root table or to a child table higher in the hierarchy.
    The solution for your case is to put one of the root tables and the child table into one cache group and the other root table into a separate cache group. If you do that you need to take care of a few things:
    1. You cannot define any foreign keys between tables in different cache groups in TimesTen (the keys can exist in Oracle) so the application must enforce the referential integrity itself for those cases.
    2. If you load data into one cache group (using LOAD CACHE GROUP or 'load on demand') then TimesTen will not automatically load the corresponding data into the other cache group (since it does not know about the relationship). The application will need to load the data into the other cache group explicitly.
    There are no issues regarding transactional consistency when changes are pushed to Oracle. TimesTen correctly maintains and enforces transactional consistency regardless of how tables are arranged in cache groups.
    Chris

  • Aggregate query on global cache group table

    Hi,
    I set up two global cache nodes. As we know, global cache groups are dynamic.
    As I understand it, the cache group can be dynamically loaded by primary key or foreign key.
    There are three records in the Oracle cache table; one record is loaded on node A and the other two records on node B.
    Oracle:
    1 Java
    2 C
    3 Python
    Node A:
    1 Java
    Node B:
    2 C
    3 Python
    If I select count(*) in Node A or Node B, the result respectively is 1 and 2.
    The questions are:
    How can I get the real count of 3?
    Is it reasonable to run this query on a global cache group table?
    One idea I have is to create another read-only node for aggregate queries, but it seems weird.
    Thanks very much.
    Regards,
    Nesta

    Do you mean something like
    UPDATE sometable SET somecol = somevalue;
    where you are updating all rows (or where you may use a WHERE clause that matches many rows and is not an equality)?
    This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys of every row that must be updated, then you could simply execute multiple individual updates. If the number of rows is large, or you do not know all the keys in advance, then maybe you would adopt the approach of ensuring that all relevant rows are in the local cache grid node already via LOAD CACHE GROUP ... WHERE ... (sketched below). Alternatively, if you do not need Grid functionality, you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
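    A rough sketch of that approach (the cache group name and the filter column are placeholders; the WHERE clause must select the same rows as the intended update):
    LOAD CACHE GROUP myadmin.my_cg WHERE region = 'EMEA' COMMIT EVERY 256 ROWS;
    UPDATE sometable SET somecol = somevalue WHERE region = 'EMEA';
    COMMIT;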
    I would not try and use JTA to update rows in multiple grid nodes in one transaction; it will be slow and you would have to know which rows are located in which nodes...
    Chris

  • Drop cache group in TimesTen 11.2.1

    Hello,
    I am trying to drop an asynchronous cache group in TimesTen. I follow the steps below to do so:
    a) I use the connection string with the DSN, UID, PWD, OracleID, OraclePWD specified
    b) If replication policy is 'always', change it to 'manual'
    c) Stop replication
    d) Drop the AWT cache group (drop cache group cachegroupname;)
    e) Create the modified AWT
    f) Start replication
    g) Set replication policy back to 'always'
    After step (d), I get the following error:
    Command> drop cache group cachegroupname;
    5219: Temporary Oracle connection failure error in OCIServerAttach(): ORA-12541: TNS:no listener rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "servicename", uid = "inputuid", pwd is hidden, TNS_ADMIN = "/opt/TT/linux/info", ORACLE_HOME= "/opt/TT/linux/ttoracle_home/instantclient_11_1"
    5109: Cache Connect general error: BDB connection not open.
    The command failed.
    Command>
    Does the error suggest that cache connect has a problem? Should I restart the timesten daemon and try again? Please let me know what the real problem is.
    Let me know if you need information.
    Thanks,
    V

    The SQL*Plus problem is simply because you don't have all the correct directories listed in LD_LIBRARY_PATH. It's likely that your .profile (or equivalent) was setting those based on ORACLE_HOME, and if this is now unset that could be the problem. Check that LD_LIBRARY_PATH is set properly and this problem will go away.
    The character set issue is potentially more problematic. It is mandatory that the database character set used by TimesTen exactly matches that of the Oracle DB when TimesTen is being used as a cache. If the character sets truly are different then this is very serious and you need to rectify it, as many things will fail otherwise. You either need to switch the Oracle DB back to US7ASCII (probably a big job) or change the TimesTen character set to WE8MSWIN1252.
    To accomplish the latter you would:
    1. Take a backup of the TT datastore using ttBackup (just for safety).
    2. For any non-cache tables (i.e. TT only tables), unload data to flat files using ttBulkCp -o ...
    3. Save the schema for the datastore using ttSchema.
    4. Stop cache and replication agents.
    5. Ensure datastore is unloaded from memory and then destroy the datastore (ttDestroy)
    6. Edit sys.odbc.ini to change Datastore character set.
    7. Connect to datastore as instance administrator (to create datastore). Create all necessary users and grant required privileges.
    8. Set the cache userid/password (call ttCacheUidPwdSet(...,...))
    9. Start the cache agent.
    10. Run the SQL script generated by ttSchema to re-create all database objects (tables and cache groups etc.)
    11. Re-populate all non-cache tables from the flat files using ttBulkCp -i
    12. Re-load all cache groups using LOAD CACHE GROUP ...
    13. Restart the replication agent.
    That's pretty much it (hopefully I have not missed out any vital step).
    Chris

  • Load Cache and UnLoad Cache Problem

    Hi,
    I have inserted 150K rows in TimesTen and they have been replicated successfully from TimesTen to my Oracle DB. I checked the number of rows in TimesTen and Oracle; both show the same 150K rows.
    AT Oracle End
    The count starts from 2 for me, so with 153599 rows I get rows 2 to 153600.
    SQL> Select Count(*) from oratt.test_rep;
    COUNT(*)
    153599
    SQL> Select Col108 from oratt.test_rep where Col108=153600;
    COL108
    153600
    SQL> Update oratt.test_rep set Col108=Col108+1 where Col108=153600;
    1 row updated.
    SQL> Select Col108 from oratt.test_rep where Col108=153600;
    no rows selected
    SQL> Select Col108 from oratt.test_rep where Col108=153601;
    COL108
    153601
    AT TimesTen End
    Command> UNLOAD CACHE GROUP CACHEADMIN.TESTCACHE;
    Command> LOAD CACHE GROUP CACHEADMIN.TESTCACHE COMMIT every 1000 Rows;
    153599 cache instances affected.
    Command> Select Col108 from oratt.test_rep where Col108=153600;
    < 153600 >
    1 row found.
    Command> Select Col108 from oratt.test_rep where Col108=153601;
    5213: Bad Oracle login error in OCISessionBegin(): ORA-01017: invalid username/password; logon denied rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "MYDB", uid = "Userid", pwd is hidden, TNS_ADMIN = "", ORACLE_HOME= ""
    5109: Cache Connect general error: BDB connection not open.
    0 rows found.
    The command failed.
    Command> cachegroups;
    Cache Group CACHEADMIN.TESTCACHE:
    Cache Group Type: Asynchronous Writethrough (Dynamic)
    Autorefresh: No
    Aging: LRU on
    Root Table: ORATT.TEST_REP
    Table Type: Propagate
    Why am I getting this error? I have updated the row in Oracle but it is not LOADED into TimesTen; the old value is still there in TimesTen.
    Thanks!

    This is a dynamic cache group, so when you run a dynamic-load-capable statement such as Select Col108 from oratt.test_rep where Col108=153600; (presumably Col108 is a key column?) then, if there are no matching rows in TimesTen, TimesTen will attempt to go to Oracle to fetch the row(s). These rows will then be inserted into the TimesTen cache (for future access) as well as being returned to the application. The error occurs because your ttIsql session does not have the correct credentials for Oracle (maybe you omitted the OraclePWD= attribute when you connected with ttIsql?).
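    For example, a ttIsql connection that supplies the Oracle password via the OraclePWD attribute (the DSN, user and passwords here are placeholders):
    Command> connect "DSN=my_dsn;UID=oratt;PWD=tt_password;OraclePWD=oracle_password";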
    If you do not want/need this dynamic load behaviour then you should create the cache group as a non-dynamic cache group.
    With regard to your question about bi-directional cache groups: no, we do not support those. If you change data in the cached Oracle table by executing DML against it directly in Oracle, then those changes may get overwritten by later changes propagated from TimesTen. If your workload is partitioned so that different sets of rows are updated in Oracle versus TimesTen then that is okay, of course. Any updates made in Oracle will not automatically be propagated to TimesTen; you can manually refresh the cache group to pick up any new data if you want to.
    Chris

  • Unloading a large cache group

    Hi,
    We have a read only cache group consisting of three tables. I am able to load this cache group in approximately 40 minutes using parallelism on the Oracle tables and on the load cache group statement. The cache group has just over 93 million rows. We have a requirement where we need to update a number of rows in one of the Oracle tables (approximately 6 million Oracle rows). The approach I had planned to take was -
    1. Alter the cache group to set the AUTOREFRESH state to OFF.
    2. Unload the cache group.
    3. Perform the update on the Oracle table
    4. Alter the cache group to set the AUTOREFRESH state to PAUSED.
    5. Load the cache group.
    I tested this in our pre-production environment, which has similar sizes to production, and I found the unload of the cache group took just under 4 hours to complete. While it was running I was issuing a number of ttXactAdmin commands against the datastore and it seemed most of the time the process had a TransStatus of "Committing". When I ran strace against the process I could see a lot of reading happening against the log files. Is this behaviour correct? i.e. should it take this long to unload a cache group? Is there a better way to perform a mass update like this on the Oracle base table?
    Thanks
    Mark

    Hi,
    With the current implementation of TimesTen, committing or rolling back very large transactions is very slow and results in a lot of disk I/O, as TimesTen works through all the log records for the transaction on disk in order to reclaim space (the reclaim phase of commit and rollback processing). The trick is to keep transactions relatively small (a few thousand rows at most). For 'smaller' transactions TimesTen does not need to go to disk, and commit/rollback is much faster.
    The best way to unload a very large number of rows is to repeatedly execute the sequence:
    UNLOAD CACHE GROUP mycg WHERE rownum <= 10000;
    commit;
    in a loop until it indicates that no rows were unloaded. If you are using TimesTen 11.2.1 then this logic could easily be incorporated into a PL/SQL procedure for ease of use, as sketched below.
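    A rough PL/SQL sketch of that loop (assumptions: autocommit is off, UNLOAD CACHE GROUP can be issued through EXECUTE IMMEDIATE, and the cache group and root table names are placeholders):
    DECLARE
      remaining PLS_INTEGER;
    BEGIN
      LOOP
        -- keep each unload transaction small so the commit reclaim stays cheap
        EXECUTE IMMEDIATE 'UNLOAD CACHE GROUP myadmin.my_cg WHERE ROWNUM <= 10000';
        COMMIT;
        -- cheap existence check on the root table; stop once it is empty
        SELECT COUNT(*) INTO remaining FROM myschema.my_root WHERE ROWNUM <= 1;
        EXIT WHEN remaining = 0;
      END LOOP;
    END;
    /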
    Chris

  • How does INSERT work on a global cache group?

    Hi all, I'm doing some tests of how many transactions per second TimesTen can process.
    With a normal "direct" configuration I reached 5200 transactions per second on my machine (OS Windows, a normal workstation).
    Now I'm using global cache groups because we need more than one data source, and they have to be in sync with each other.
    From what I read in the guide, global cache groups are perfect for this purpose.
    After configuring the two environments with different TimesTen databases (those machines are Sun servers, much better than my workstation :P), I tried a simple test of inserts on a single node.
    But I reached only 1500 transactions per second as the maximum.
    The 5200 value from the test on my workstation was with a normal dynamic cache group, not a global one. So I was wondering whether this performance issue is related to how the INSERT statement works on a global cache group.
    Some questions:
    1) Before the insert is done on Oracle, does the cache group run some query against the other global cache group to avoid conflicts on the primary key?
    2) Is any operation performed from one global cache to the others when a statement is sent?
    The two global caches are otherwise working well, locking and changing owner on a cache instance, so no problems detected so far about "how they have to work" :).
    The problem is only that we need the global cache to do it faster :P, at least the 5200 transactions per second I reached on my workstation.
    Thanks in advance for any suggestions.
    P.S.: I don't know much about the server configuration (Solaris, some version) but anyway, good machines :).

    Okay, the rows here are quite large so you need to do some tuning. In the ODBC (DSN) parameters I see that you are using the default log buffer and log file sizes. These are totally inadequate for this kind of workload; you should increase both to a larger value. For this kind of workload, typical values would be in the range of 256 MB to 1024 MB for both the log buffer and the log file size. If you are using 32-bit TimesTen you may be constrained on how large you can make these, since the log buffer is part of the overall datastore memory allocation, which on 32-bit platforms is quite limited. On 64-bit TimesTen you have no such restriction (as long as the machine has enough memory). Here is an example of the directives you would use to set both to 1 GB. The key one is the log buffer size, but it is important that LogFileSize is >= LogBufMB.
    [my_ds]
    LogBufMB=1024
    LogFileSize=1024
    For this change to take effect you need to shutdown (unload from memory) and restart (load back into memory) the datastore.
    Secondly, it's hard to be sure from your example code, but it looks like maybe you are preparing the INSERT each time you execute it? If that is the case, this is very expensive and unnecessary. You only need to prepare once; then you can execute many times as follows:
    insPs = connection.prepareStatement("Insert into test.transactions (ID_ ,NUMBE,SHORT_CODE,REQUEST_TIME) Values (?,?,?,?)");
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.execute();
        connection.commit();
    }
    This should improve performance noticeably. If you can get away with only committing every 'N' inserts you will see a further uplift. For example:
    int COMMIT_INTVL = 100;
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.execute();
        if ((i % COMMIT_INTVL) == 0) {
            connection.commit();
        }
    }
    connection.commit(); // pick up any inserts after the last interval boundary
    And lastly, the fastest way of all is to use JDBC batch operations; see the JDBC documentation about batch operations. That will improve insert performance still more.
    Finally, a word of caution. Although you will probably be able to easily achieve more than 5000 inserts per second into TimesTen, TimesTen may not be able to push the data to Oracle at this rate; the rate of push to Oracle is likely to be significantly slower. Thus, if you are executing a continuous high-volume insert workload into TimesTen, two things will happen: (a) the datastore will become full and unable to accept any more inserts until you explicitly remove some data, and (b) a backlog will build up (in the TimesTen transaction logs on disk) of data waiting to be pushed to Oracle.
    This kind of setup is not really suited to support sustained high insert levels; you need to look at the maximum that can be sustained for the whole application -> TimesTen -> Oracle pathway. Of course, if the workload is 'bursty' then this may not be an issue at all.
    Chris

  • How many Read Only Cache Groups?

    How many read-only cache groups can we create in one DSN?
    I mean, are e.g. 100 possible?
    Thanks
    BR
    Andrzej

    As many as you like. There is no fixed upper limit.
    Chris

  • How to load entities into a cache?

    Hi..
    I have a CMP (read-only) entity bean, and whenever I call the bean it loads from the database each time, which degrades the page-load performance. Hence I need to keep the CMP entity in a memory cache when the application server starts, so that I can access the entities from the cache rather than the database each time.
    Tell me how to load the entity bean (CMP, read-only) into memory when the application server starts?

    Many Application Servers have special features for CMP beans that are read-only or read-mostly.
    It's not required by the EJB specification but it's pretty common. E.g., in SUN's application servers,
    you can mark the CMP bean as read-only using the <is-read-only-bean> element in
    sun-ejb-jar.xml. You can also set the <refresh-period-in-seconds> element to let the container
    know when it needs to be refreshed. Doing so will prevent the majority of the database accesses.
    --ken

  • How to query data from a grid cache group after creating a global AWT cache group

    It is me again.
    As I mentioned in my previous posts, I am in the process of setting up an IMDB grid environment, and now I am at the stage of creating cache groups. I created a global AWT cache group on one node (cachealone2), but I cannot query this global cache group from the other node (cachealone1).
    Thanks Chris and J, I have successfully set up the IMDB grid environment and have two nodes in this grid, as below:
    Command> call ttGridNodeStatus;
    < MYGRID, 1, 1, T, igs_imdb02, MYGRID_cachealone1_1, 10.214.10.176, 5001, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    < MYGRID, 2, 1, T, igsimdb01, MYGRID_cachealone2_2, 10.214.10.119, 5002, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    2 rows found.
    and I created a global AWT cache group on cachealone2:
    Command> cachegroups;
    Cache Group CACHEUSER.SUBSCRIBER_ACCOUNTS:
    Cache Group Type: Asynchronous Writethrough global (Dynamic)
    Autorefresh: No
    Aging: LRU on
    Root Table: ORATT.SUBSCRIBER
    Table Type: Propagate
    1 cache group found.
    Command> SELECT * FROM oratt.subscriber;
    0 rows found.
    However, I cannot query this from the other node, cachealone1:
    Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
    2206: Table ORATT.SUBSCRIBER not found
    The command failed.
    Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
    2206: Table ORATT.SUBSCRIBER not found
    The command failed.
    Command> SELECT * FROM oratt.subscriber;
    2206: Table ORATT.SUBSCRIBER not found
    This is the example from the Oracle docs; I am not sure what I missed. Thanks for your help.

    It sounds like you have not created the global AWT cache group in the second datastore. There is a multi-step process needed to roll out a cache grid, and various things must be done on each node in the correct order; have you done that? A sketch of the missing step is below.
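    A minimal sketch of the kind of statement that would also need to be run on cachealone1 (the column list here is a made-up placeholder; the real definition must match exactly what you created on cachealone2):
    CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH GLOBAL CACHE GROUP cacheuser.subscriber_accounts
    FROM oratt.subscriber (
      subscriberid NUMBER NOT NULL,
      name VARCHAR2(30),
      PRIMARY KEY (subscriberid))
    AGING LRU ON;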
    Try checking out the QuickStart example here:
    http://download.oracle.com/otn_hosted_doc/timesten/1121/quickstart/index.html
    Chris

Maybe you are looking for

  • When I receive an email, I get a message that says delivery failure

    Every time I receive an email for the last 2 days, I get another email saying this: This report relates to a message you sent with the following header fields: Message-id: <[email protected]> Date: Thu, 07 Mar 20

  • Serial number not working

    Hi all, I am trying to install Design Standard CS4 on my new laptop. I have an Adobe CS4 folder on my desktop, opened the Illustrator folder, opened Adobe CS4 and double clicked on setup Application (2,942kb) It goes through testing the system (and g

  • 2 external monitors for imac?

    Is it possible to connect two external monitors to the new 2013 IMAC?  If so, how?

  • PS-Tools For Mac?

    Is anyone aware of a program similar to PS-TOOLS for the Mac? I need to be able to remotely reboot a PC on the same network from my G-5 (10.4.11). Thanks!

  • YM problem with 9700

    It starts few days ago. Till then everything was perfect. Some of my contacts in my YM list, can`t see what i`m typing. Anyone with same problem or any ideea how to solve it? Thanks alot!