Unloading a large cache group

Hi,
We have a read only cache group consisting of three tables. I am able to load this cache group in approximately 40 minutes using parallelism on the Oracle tables and on the load cache group statement. The cache group has just over 93 million rows. We have a requirement where we need to update a number of rows in one of the Oracle tables (approximately 6 million Oracle rows). The approach I had planned to take was -
1. Alter the cache group to set the AUTOREFRESH state to OFF.
2. Unload the cache group.
3. Perform the update on the Oracle table.
4. Alter the cache group to set the AUTOREFRESH state to PAUSED.
5. Load the cache group.
I tested this in our pre-production environment, which is similar in size to production, and I found the unload of the cache group took just under 4 hours to complete. While it was running I issued a number of ttXactAdmin commands against the datastore and it seemed that most of the time the process had a TransStatus of "Committing". When I ran strace against the process I could see a lot of reads against the log files. Is this behaviour correct, i.e. should it take this long to unload a cache group? Is there a better way to perform a mass update like this on the Oracle base table?
Thanks
Mark

Hi,
With the current implementation of TimesTen, committing or rolling back very large transactions is very slow and results in a lot of disk I/O, as TimesTen works through all the log records for the transaction on disk in order to reclaim space (the reclaim phase of commit and rollback processing). The trick is to keep transactions relatively small (a few thousand rows at most). For 'smaller' transactions TimesTen does not need to go to disk and commit/rollback is much faster.
The best way to unload a very large number of rows is to repeatedly execute the sequence:
UNLOAD CACHE GROUP mycg WHERE rownum <= 10000;
commit;
in a loop until it indicates that no rows were unloaded. If you are using TimesTen 11.2.1 then this logic can easily be incorporated into a PL/SQL procedure for ease of use, along the lines of the sketch below.
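A minimal PL/SQL sketch of that loop (the procedure name, cache group name and batch size are illustrative; it assumes UNLOAD CACHE GROUP can be issued via EXECUTE IMMEDIATE from TimesTen PL/SQL and that SQL%ROWCOUNT then reports the number of cache instances unloaded):
CREATE OR REPLACE PROCEDURE purge_mycg AS
  batch_rows CONSTANT PLS_INTEGER := 10000;  -- keep each transaction small
BEGIN
  LOOP
    -- Unload at most batch_rows cache instances per transaction so the commit
    -- never has to work through a huge amount of log (the slow reclaim phase).
    EXECUTE IMMEDIATE 'UNLOAD CACHE GROUP mycg WHERE rownum <= ' || batch_rows;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/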
Chris

Similar Messages

  • How does INSERT work on a global cache group?

    Hi all, I'm running some tests to see how many transactions per second TimesTen can process.
    With a normal "direct" connection I reached 5200 transactions per second on my machine (a normal Windows workstation).
    Now I'm using global cache groups because we need more than one data store, and they have to stay in sync with each other.
    From what I read in the guide, global cache groups are perfect for this purpose.
    After configuring the two environments with different TimesTen databases (those machines are SUN servers, much better than my workstation :P), I tried a simple insert test
    on a single node.
    But I reached only 1500 transactions per second as a maximum.
    The 5200 figure on my workstation was with a normal dynamic cache group, not a global one. So I was wondering whether this performance issue is related to how the INSERT statement works on a global cache group.
    Some questions:
    1) Before the insert is done on Oracle, does the cache group run a query against the other global cache group node to avoid primary key conflicts?
    2) Is any operation performed from one global cache node to the others when a statement is sent?
    The two global cache nodes are otherwise working well, locking and changing ownership of cache instances, so no problems detected so far with how they are supposed to work :).
    The problem is only that we need the global cache to go faster :P, ideally up to the 5200 transactions per second I reached on my workstation.
    Thanks in advance for any suggestions.
    P.S.: I don't know much about the server configuration (Solaris, some version) but they are good machines anyway :).

    Okay, the rows here are quite large so you need to do some tuning. In the ODBC (DSN) parameters I see that you are using the default log buffer and log file sizes; these are totally inadequate for this kind of workload. You should increase both to a larger value. For this kind of workload, typical values would be in the range of 256 MB to 1024 MB for both the log buffer and the log file size. If you are using 32-bit TimesTen you may be constrained in how large you can make these, since the log buffer is part of the overall datastore memory allocation, which on 32-bit platforms is quite limited. On 64-bit TimesTen there is no such restriction (as long as the machine has enough memory). Here is an example of the directives you would use to set both to 1 GB. The key one is the log buffer size, but it is important that LogFileSize is >= LogBufMB.
    [my_ds]
    LogBufMB=1024
    LogFileSize=1024
    For this change to take effect you need to shutdown (unload from memory) and restart (load back into memory) the datastore.
    Secondly, it's hard to be sure from your example code, but it looks like you may be preparing the INSERT each time you execute it. If that is the case, it is very expensive and unnecessary. You only need to prepare once and can then execute many times, as follows:
    insPs = connection.prepareStatement("INSERT INTO test.transactions (ID_, NUMBE, SHORT_CODE, REQUEST_TIME) VALUES (?,?,?,?)");
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.execute();
        connection.commit();
    }
    This should improve performance noticeably. If you can get away with committing only every 'N' inserts you will see a further uplift. For example:
    int COMMIT_INTVL = 100;
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.execute();
        if ((i % COMMIT_INTVL) == 0) {
            connection.commit();
        }
    }
    connection.commit();
    The fastest way of all, though, is to use JDBC batch operations; see the JDBC documentation on batch operations. That will improve insert performance still further.
    Lastly, a word of caution. Although you will probably be able to achieve more than 5000 inserts per second into TimesTen quite easily, TimesTen may not be able to push the data to Oracle at that rate; the rate of push to Oracle is likely to be significantly slower. Thus if you run a continuous high-volume insert workload into TimesTen, two things will happen: (a) the datastore will become full and unable to accept any more inserts until you explicitly remove some data, and (b) a backlog of data waiting to be pushed to Oracle will build up (in the TimesTen transaction logs on disk).
    This kind of setup is not really suited to support sustained high insert levels; you need to look at the maximum that can be sustained for the whole application -> TimesTen -> Oracle pathway. Of course, if the workload is 'bursty' then this may not be an issue at all.
    Chris

  • Drop cache group in TimesTen 11.2.1

    Hello,
    I am trying to drop an asynchronous writethrough (AWT) cache group in TimesTen. I follow the steps below to do so:
    a) I use the connection string with the DSN, UID, PWD, OracleID, OraclePWD specified
    b) If replication policy is 'always', change it to 'manual'
    c) Stop replication
    d) Drop the AWT cache group (drop cache group cachegroupname;)
    e) Create the modified AWT
    f) Start replication
    g) Set replication policy back to 'always'
    After step (d), I get the following error:
    Command> drop cache group cachegroupname;
    5219: Temporary Oracle connection failure error in OCIServerAttach(): ORA-12541: TNS:no listener rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "servicename", uid = "inputuid", pwd is hidden, TNS_ADMIN = "/opt/TT/linux/info", ORACLE_HOME= "/opt/TT/linux/ttoracle_home/instantclient_11_1"
    5109: Cache Connect general error: BDB connection not open.
    The command failed.
    Command>
    Does the error suggest that cache connect has a problem? Should I restart the timesten daemon and try again? Please let me know what the real problem is.
    Let me know if you need information.
    Thanks,
    V

    The SQL*Plus problem is simply because you don't have all the correct directories listed in LD_LIBRARY_PATH. It's likely that your .profile (or equivalent) was setting those based on ORACLE_HOME, and if that is now unset it could be the problem. Check that LD_LIBRARY_PATH is set properly and this problem will go away.
    The character set issue is potentially more problematic. It is mandatory that the database character set used by TimesTen exactly matches that of the Oracle database when TimesTen is being used as a cache. If the character sets truly are different then this is very serious and you need to rectify it, as many things will fail otherwise. You either need to switch the Oracle database back to US7ASCII (probably a big job) or change the TimesTen character set to WE8MSWIN1252.
    To accomplish the latter you would:
    1. Take a backup of the TT datastore using ttBackup (just for safety).
    2. For any non-cache tables (i.e. TT only tables), unload data to flat files using ttBulkCp -o ...
    3. Save the schema for the datastore using ttSchema.
    4. Stop cache and replication agents.
    5. Ensure datastore is unloaded from memory and then destroy the datastore (ttDestroy)
    6. Edit sys.odbc.ini to change the datastore character set (see the example after this list).
    7. Connect to datastore as instance administrator (to create datastore). Create all necessary users and grant required privileges.
    8. Set the cache userid/password (call ttCacheUidPwdSet(...,...)).
    9. Start the cache agent.
    10. Run the SQL script generated by ttSchema to re-create all database objects (tables and cache groups etc.)
    11. Re-populate all non-cache tables from the flat files using ttBulkCp -i
    12. Re-load all cache groups using LOAD CACHE GROUP ...
    13. Restart the replication agent.
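    For step 6, the change is a single attribute in the DSN definition in sys.odbc.ini; for example (the DSN name below is just a placeholder):
    [my_ds]
    DatabaseCharacterSet=WE8MSWIN1252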
    That's pretty much it (hopefully I have not missed out any vital step).
    Chris

  • Which kind of cache group is suitable for intensive insert operations

    Hi Chris, sorry for calling on you directly, but you have given me many good answers to my newbie questions these days :)
    You told me that a dynamic cache group is not suitable for an insert-intensive workload
    because each INSERT into a child table has to perform an existence check against Oracle, even if I load the cache group into RAM manually (please correct me if I am wrong).
    Here I have many log tables that have only a primary key and no foreign key references; they are basically used to record changes to the related main tables.
    Every insert/update/delete on a main table inserts a log record into the related logging table (no direct foreign key references).
    In order to cache these log tables, I have to create an independent cache group for each one, right?
    I do not want to load the existing log data into RAM because my application does not use it; it would clearly just waste RAM.
    So here comes my question: which kind of cache group should I use to get the best performance without loading the data into RAM?
    As I understand it, a dynamic cache group loads data on demand, while a regular cache group has to load all the data into RAM first and will not load data from Oracle after that?
    Thanks in advance
    SuoNayi

    Let me be more specific. Consider this cache group:
    CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH CACHE GROUP CG_SWT
    FROM
    TPARENT (
        PPK NUMBER(8,0) NOT NULL PRIMARY KEY,
        PCOL1 VARCHAR2(100)
    ),
    TCHILD (
        CPK NUMBER(6,0) NOT NULL PRIMARY KEY,
        CFK NUMBER(8,0) NOT NULL,
        CCOL1 VARCHAR2(20),
        FOREIGN KEY ( CFK ) REFERENCES TPARENT ( PPK )
    );
    INSERTs into TPARENT will not do any existence check in Oracle. An INSERT into TCHILD has to verify that the corresponding parent row exists. If the parent row exists in TimesTen then no check is done in Oracle. If the parent row does not exist in TimesTen then we have to check whether it exists in Oracle, and if it does we will load it into TimesTen from Oracle (along with any other child rows) before completing the INSERT in TimesTen. So in the case where the parent already exists in TimesTen there is no overhead, but in the other case there is a lot of overhead.
    If your log tables are truly not related to the main tables (neither in TimesTen nor in Oracle) then they should go into separate cache groups. If each insert into the log table has a unique key and there is no possibility of duplicates then you do not need to load anything into RAM. You can start with an empty table and just insert into it (since each insert is unique). Of course, if you just keep inserting you will eventually fill up the memory in TimesTen. So you need a mechanism to 'purge' no-longer-needed rows from TimesTen (they will still exist in Oracle, of course). There are really two options: investigate TimesTen automatic aging (see the documentation) - this may be adequate if the insert rate is not too high - or implement a custom purge mechanism using UNLOAD CACHE GROUP (see the documentation and the sketch below).
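    As a hedged illustration of the aging option (the owner, table and column names, the two-day lifetime and the aging cycle are all assumptions, not details from this thread), a time-based aging clause can be declared on the cached table, and the same timestamp column could also drive a custom purge:
    CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH CACHE GROUP cg_log
    FROM
    applog.event_log (
        LOG_ID   NUMBER(12,0) NOT NULL PRIMARY KEY,
        LOG_TIME DATE NOT NULL,
        LOG_TEXT VARCHAR2(200)
    )
    AGING USE LOG_TIME LIFETIME 2 DAYS CYCLE 60 MINUTES;
    -- Custom purge alternative: run periodically from a scheduled job to unload
    -- rows older than two days from TimesTen (the rows remain in Oracle).
    UNLOAD CACHE GROUP cg_log WHERE log_time < SYSDATE - 2;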
    Chris

  • Aggregate query on global cache group table

    Hi,
    I set up two global cache nodes. As we know, a global cache group is dynamic.
    As I understand it, the cache group can be dynamically loaded by primary key or foreign key.
    There are three records in the Oracle cached table; one record is loaded on node A and the other two records on node B.
    Oracle:
    1 Java
    2 C
    3 Python
    Node A:
    1 Java
    Node B:
    2 C
    3 Python
    If I run select count(*) on node A or node B, the result is 1 and 2 respectively.
    The questions are:
    How can I get the real count, 3?
    Is it reasonable to run this kind of query against a global cache group table?
    One idea I had is to create another read-only node just for aggregation queries, but that seems weird.
    Thanks very much.
    Regards,
    Nesta

    Do you mean something like
    UPDATE sometable SET somecol = somevalue;
    where you are updating all rows (or where you may use a WHERE clause that matches many rows and is not an equality)?
    This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys of every row that must be updated, then you could simply execute multiple individual updates. If the number of rows is large, or you do not know all the keys in advance, then you could adopt the approach of first ensuring that all relevant rows are in the local cache grid node via LOAD CACHE GROUP ... WHERE ... (see the sketch below). Alternatively, if you do not need Grid functionality, you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
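    A rough sketch of that approach, with placeholder names taken from the UPDATE example above and a purely illustrative predicate:
    -- Ensure every row the update will touch is present on this grid member,
    -- then run the update locally, keeping the transaction to a manageable size.
    LOAD CACHE GROUP mycg WHERE somecol2 = 'somevalue2' COMMIT EVERY 1000 ROWS;
    UPDATE sometable SET somecol = somevalue WHERE somecol2 = 'somevalue2';
    COMMIT;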
    I would not try and use JTA to update rows in multiple grid nodes in one transaction; it will be slow and you would have to know which rows are located in which nodes...
    Chris

  • IMDB Cache group load and long running transaction

    Hello,
    We are investigating the use of IMDB Cache to cache a number of large Oracle tables. When loading the cache I have noticed logs accumulating and I am not quite sure why this should be. I have a read-only cache group consisting of 3 tables with approximately 88 million, 74 million and 570 million rows respectively. To load the cache group I run the following -
    LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows -
    Command> call ttLogHolds ;
    < 0, 12161024, Long-Running Transaction      , 1.1310 >
    < 170, 30025728, Checkpoint                    , Entity.ds0 >
    < 315, 29945856, Checkpoint                    , Entity.ds1 >
    3 rows found.
    I read this as saying that everything from log 0 to the current log must be kept for the long-running transaction. From what I can see, the long-running transaction is the cache group load. Is this expected? I was expecting the COMMIT EVERY clause in the LOAD CACHE GROUP to allow the logs to be deleted. I am able to query the contents of the tables at various points during the load, so I can see that the commits are taking place.
    Thanks
    Mark

    Hello,
    I couldn't recall whether I had changed the Autocommit settings when I ran the load so I tried a couple more runs. From what I could see the value of autocommit did not influence how the logs were treated. For example -
    1. Autocommit left as the default -
    Connection successful: DSN=Entity;UID=cacheadm;DataStore=/prod100/oradata/ENTITY/Entity;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so;LogDir=/prod100/oradata/ENTITY;PermSize=66000;TempSize=2000;TypeMode=0;OracleNetServiceName=TRAQPP.world;
    (Default setting AutoCommit=1)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows a long running transaction -
    Command> call ttlogholds ;
    < 0, 11915264, Long-Running Transaction      , 1.79 >
    < 474, 29114368, Checkpoint                    , Entity.ds0 >
    < 540, 1968128, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    2011-01-19 14:10:03.135
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: timestenorad
    28427   0x16fd6910            7.26     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69211971680          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69211971680          TRAQDBA.AADNA
                                                       Command   69211971680          S     69211971680         
                                  8.10029  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.10582  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.10477  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.10332  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.10546  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.10261  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.10637  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.10669  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.10111  Active      Database  0x01312d0001312d00   IX    0                   
    Program File Name: ttIsqlCmd
    29317   0xde257d0             1.79     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211584104          SYS.TABLES
                                                       Command   69211584104          S     69211584104         
    11 outstanding transactions found
    And the commands were
    < 69211971680, 2048, 1, 1, 0, 0, 1392, CACHEADM                       , load cache group CACHEADM.ER_RO_CG commit every 1000 rows parallel 10 _tt_bulkFetch 4096 _tt_bulkInsert 1000 >
    < 69211584104, 2048, 1, 1, 0, 0, 1400, CACHEADM                       , LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 >
    Running the load again with autocommit off -
    Command> AutoCommit
    autocommit = 1 (ON)
    Command> AutoCommit 0
    Command> AutoCommit
    autocommit = 0 (OFF)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows a long running transaction -
    Command>  call ttlogholds ;
    < 1081, 6617088, Long-Running Transaction      , 2.50157 >
    < 1622, 10377216, Checkpoint                    , Entity.ds0 >
    < 1668, 55009280, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    er.oracle$ ttXactAdmin entity                                             
    2011-01-20 07:23:54.125
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: ttIsqlCmd
    2368    0x12bb37d0            2.50157  Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211634216          SYS.TABLES
                                                       Command   69211634216          S     69211634216         
    Program File Name: timestenorad
    28427   0x2abb580af2a0        7.2358   Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69212120320          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69212120320          TRAQDBA.AADNA
                                                       Command   69212120320          S     69212120320         
                                  8.24870  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.26055  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.25659  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.25469  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.25694  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.25465  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.25841  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.26288  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.24924  Active      Database  0x01312d0001312d00   IX    0                   
    11 outstanding transactions found
    What I did notice was that TimesTen runs three queries against the Oracle server: the first to select from the parent table, the second to join the parent to the first child and the third to join the parent to the second child. ttLogHolds seems to show a long running transaction once the second query starts. For example, I was monitoring the load of the parent table, checking ttLogHolds to watch for a long running transaction. As shown below, a long running transaction entry appeared around 09:01:41 -
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:37 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39278592, Checkpoint                    , Entity.ds1 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    2 rows found.
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:41 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39290880, Long-Running Transaction      , 2.50167 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    < 2929, 65347584, Checkpoint                    , Entity.ds1 >
    3 rows found.
    This roughly matches the time the query that selects the rows for the first child table started in Oracle
    traqdba@TRAQPP> select sm.sql_id,sql_exec_start,sql_fulltext
      2  from v$sql_monitor sm, v$sql s
      3  where sm.sql_id = 'd6fmfrymgs5dn'
      4  and sm.sql_id = s.sql_id ;
    SQL_ID        SQL_EXEC_START       SQL_FULLTEXT
    d6fmfrymgs5dn 20/JAN/2011 08:59:27 SELECT "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_
                                       MAP"."EVIDENCE_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."EVIDENCE_VALUE", "TRAQDBA"
                                       ."ENT_TO_EVIDENCE_MAP"."CREATED_DATE_TIME" FROM "TRAQDBA"."ENT_TO_EVIDENCE_MAP",
                                        "TRAQDBA"."AADNA" WHERE "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY" = "TRAQDBA
                                       "."AADNA"."ADR_ADDRESS_NAME_KEY"
    Elapsed: 00:00:00.00
    Thanks
    Mark

  • Load cache group with parallel error, 907

    hello, chris:
    We have run into another problem. When we create a cache group and then load the data with PARALLEL 8, we get a unique constraint violation. We checked the data but could not find anything wrong with it, so we loaded the data again without the parallel option and it worked fine; all the data loaded. Then we unloaded and loaded with PARALLEL 8 again and the unique constraint violation appeared again. What is happening?
    thank you...
    The script is:
    create readonly cache group FSZW_OCECS.SP_SUBSCRIBER_RELATION_CACHE
    autorefresh
    mode incremental
    interval 29000 milliseconds
    /* state on */
    from
    FSZW_OCECS.SP_SUBSCRIBER_RELATION (
    SUBS_RELATION_ID TT_BIGINT NOT NULL,
    PRIVID VARCHAR2(32 BYTE) INLINE NOT NULL,
    SUBSID TT_BIGINT,
    SWITCH_FLAG VARCHAR2(2 BYTE) INLINE,
    DISCOUNT_CODE VARCHAR2(8 BYTE) INLINE NOT NULL,
    DISCOUNT_SERIAL TT_INTEGER,
    START_DATE DATE NOT NULL,
    END_DATE DATE,
    MOBILENO VARCHAR2(15 BYTE) INLINE NOT NULL,
    APPLY_DATE DATE,
    primary key (SUBS_RELATION_ID))
    where NODEID = '334' or NODEID IS NULL,
    FSZW_OCECS.SP_SUBSCRIBER_ATTRINFO (
    SUB_ATTACH_ID TT_BIGINT NOT NULL,
    SUBS_RELATION_ID TT_BIGINT,
    SUB_ATTACH_INFO VARCHAR2(16 BYTE) INLINE NOT NULL,
    SUB_ATTACH_TYPE VARCHAR2(2 BYTE) INLINE,
    primary key (SUB_ATTACH_ID),
    foreign key (SUBS_RELATION_ID)
    references FSZW_OCECS.SP_SUBSCRIBER_RELATION (SUBS_RELATION_ID));
    Command> load cache group SP_SUBSCRIBER_RELATION_CACHE commit every 25600 rows PARALLEL 8;
    5056: The cache operation fails: error_type=<TimesTen Error>, error_code=<907>, error_message: [TimesTen]TT0907: Unique constraint (SP_SUBSCRIBER_ATTRINFO) violated at Rowid <0x0000000091341e88>
    5037: An error occurred while loading FSZW_OCECS.SP_SUBSCRIBER_RELATION_CACHE:Load failed ([TimesTen]TT0907: Unique constraint (SP_SUBSCRIBER_ATTRINFO) violated at Rowid <0x0000000091341e88>
    Command> load cache group FSZW_OCECS.SP_SUBSCRIBER_RELATION_CACHE commit every 25600 rows;
    5746074 cache instances affected.

    This looks like a bug to me but I haven't been able to find a known candidate. Are you able to log an SR and provide a testcase so we can reproduce it here and verify if it is a new bug? Thanks.

  • Automating the process of creating cache groups

    Hi All,
    I've got ~100 tables in an Oracle DB and I'd like to cache them.
    I don't want to define 100 cache groups by hand because it is time consuming.
    Is there any way to automate the process of creating the cache groups?
    Thanks.

    Hi 928879,
    Unfortunately, there is no way to automate this process. You can write a script to extract the table definitions, but the cache group definitions themselves (CREATE READONLY CACHE GROUP ...) you will have to write by hand.
    regards,
    Gennady

  • Error in creating Cache Group

    Hi,
    When I tried to create a cache group I got the error below:
    CREATE READONLY CACHE GROUP customer_orders
    FROM myuser.customer
    (cust_num NUMBER(6) NOT NULL,
    region VARCHAR2(10),
    name VARCHAR2(50),
    address VARCHAR2(100),
    PRIMARY KEY(cust_num)),
    myuser.orders
    (ord_num NUMBER(10) NOT NULL,
    cust_num NUMBER(6) NOT NULL,
    when_placed DATE NOT NULL,
    when_shipped DATE NOT NULL,
    PRIMARY KEY(ord_num),
    FOREIGN KEY(cust_num) REFERENCES myuser.customer(cust_num)) ;
    5220: Permanent Oracle connection failure error in OCIServerAttach(): ORA-12154: TNS:could not resolve the connect identifier specified rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "orcl_db", uid = "XXXXXXX", pwd is hidden, TNS_ADMIN = "C:\TimesTen11.2.2", ORACLE_HOME= ""
    But my Oracle database Name is MYdatabase
    Oracle LSNRCTL
    LSNRCTL> status
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1522)))
    STATUS of the LISTENER
    Alias LISTENER
    Version TNSLSNR for 32-bit Windows: Version 11.2.0.2.0 - Production
    Start Date 07-AUG-2012 10:31:38
    Uptime 4 days 3 hr. 1 min. 55 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File C:\TimesTen11.2.2\listener.ora
    Listener Log File E:\app\XXXXXXX\diag\tnslsnr\localhost\listener\alert\log.xml
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1522ipc)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1522)))
    Services Summary...
    Service "MYdatabaseXDB" has 1 instance(s).
    Instance "MYdatabase", status READY, has 1 handler(s) for this service...
    Service "MYdatabase" has 1 instance(s).
    Instance "MYdatabase", status READY, has 1 handler(s) for this service...
    Service "orcl" has 1 instance(s).
    Instance "orcl", status UNKNOWN, has 1 handler(s) for this service...
    The command completed successfully
    How do I change OracleNetServiceName = "orcl_db" to OracleNetServiceName = "MYdatabase"?
    Thanks!

    You should create the cache groups using the cache administration user (cacheadmin), not the object owner user.
    In Oracle DB:
    SQL> @grantCacheAdminPrivileges "cacheadmin"
    Please enter the administrator user id
    The value chosen for administrator user id is cacheadmin
    ***************** Initialization for cache admin begins ******************
    0. Granting the CREATE SESSION privilege to CACHEADMIN
    1. Granting the TT_CACHE_ADMIN_ROLE to CACHEADMIN
    2. Granting the DBMS_LOCK package privilege to CACHEADMIN
    3. Granting the RESOURCE  privilege to CACHEADMIN
    4. Granting the CREATE PROCEDURE  privilege to CACHEADMIN
    5. Granting the CREATE ANY TRIGGER  privilege to CACHEADMIN
    6. Granting the DBMS_LOB package privilege to CACHEADMIN
    7. Granting the SELECT on SYS.ALL_OBJECTS privilege to CACHEADMIN
    8. Granting the SELECT on SYS.ALL_SYNONYMS privilege to CACHEADMIN
    9. Checking if the cache administrator user has permissions on the default
    tablespace
         Permission exists
    11. Granting the CREATE ANY TYPE privilege to CACHEADMIN
    ********* Initialization for cache admin user done successfully *********
    SQL>
    In TimesTen:
    Command> CREATE USER cacheadmin IDENTIFIED BY oracle;
    User created.
    Command> GRANT CREATE SESSION, CACHE_MANAGER, CREATE ANY TABLE, DROP ANY TABLE TO cacheadmin;
    Command>
    Command> CREATE USER oratt IDENTIFIED BY oracle;
    User created.
    Command> grant create session to oratt;
    Command>
    [oracle@tt1 ~]$ ttIsql "DSN=db_cache;UID=cacheadmin;PWD=oracle;OraclePWD=oracle"
    Copyright (c) 1996-2010, Oracle.  All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    connect "DSN=db_cache;UID=cacheadmin;PWD=oracle;OraclePWD=oracle";
    Connection successful: DSN=db_cache;UID=cacheadmin;DataStore=/u01/app/oracle/datastore/db_cache;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=/u01/app/oracle/product/11.2.1/TimesTen/tt1/lib/libtten.so;PermSize=100;TempSize=32;TypeMode=0;CacheGridEnable=0;OracleNetServiceName=ORCL;
    (Default setting AutoCommit=1)
    Command> call ttCacheUidPwdSet('cacheadmin','oracle');
    Command>
    Command> CREATE READONLY CACHE GROUP readcache
           >   AUTOREFRESH INTERVAL
           >   5 SECONDS
           > FROM oratt.readtab (
           >        a NUMBER NOT NULL PRIMARY KEY,
           >        b VARCHAR2(100) );
    Command>
    Additionally, don't forget to issue the grants for the cacheadmin user in the Oracle DB:
    SQL> GRANT SELECT ON readtab TO cacheadmin;
    Grant succeeded.
    Regards,
    Gennady

  • A question about cache group error in TimesTen 7.0.5

    hello, chris:
    we got some errors about cache group :
    2008-09-21 08:56:15.99 Err : ORA: 229574: ora-229574-3085-ogTblGC00405: Failed calling OCI function: OCIStmtFetch()
    2008-09-21 08:56:15.99 Err : ORA: 229574: ora-229574-3085-raUtils00373: Oracle native error code = 1405, msg = ORA-01405: fetched column value is NULL
    2008-09-21 08:56:28.16 Err : ORA: 229576: ora-229576-2057-raStuff09837: Unexpected row count. Expecting 1. Got 0.
    The exact scenario is: our Oracle server was restarted for some reason, but we did not restart the cache agent, and after that these errors started appearing.
    We want to know: if the Oracle server is restarted, do we need to restart the cache agent as well? Thank you.

    Yes, the tracking table will track all changes to the associated base table. Only changes that meet the cache group WHERE clause predicate will be refreshed to TimesTen.
    The tracking table is managed automatically by the cache agent. As long as the cache agent is running and AUTOREFRESH is occurring the table will be space managed and old data will be purged.
    It is okay if very occasionally an AUTOREFRESH is unable to complete within its defined interval but if this happens with any regularity then this is a problem since this situation is unsustainable. To remedy this you need to try one or more of:
    1. Tune execution of AUTOREFRESH queries in Oracle. This may mean adding additional indexes to some of the cached Oracle tables. There is an article on this in MetaLink (doc note 473493.1).
    2. Increase the AUTOREFRESH interval so that a refresh can always complete within the defined interval.
    In any event it is important that you have enough space to cope with the 'steady state' size of the tracking table. If the cache agent will not be running for any significant length of time you need to manually cleanup the tracking table. In TimesTen 11g a script to do this is provided but it is not officially supported in TimesTen 7.0.
    If the rate of updates on the base table is such that you cannot arrive at a sustainable situation by tuning etc. then you will need to consider more radical options such as breaking the table into multiple separate tables :-(
    Chris

  • Problem creating cache group for a table with data type varchar2(1800 CHAR)

    Hi,
    I am using TimesTen 7.0 with an Oracle 10.2.0.4 server. While creating a cache group for one of my tables I get the following error:
    5121: Non-standard type mapping for column TICKET.DESCRIPTION, cache operations are restricted
    5168: Restricted cache groups are deprecated
    5126: A system managed cache group cannot contain non-standard column type mapping
    The command failed.
    One of my field types in the Oracle table is VARCHAR2(1800 CHAR). If I change the field size to <= 1000 (e.g. VARCHAR2(1000 CHAR)) then the CREATE CACHE GROUP command works fine.
    My database character set is UTF8.
    Is it possible to solve this without changing the field size in the Oracle table?
    Request your help on this.
    Thanks,
    Sunil

    Hi Chris.
    The TimesTen server and the Oracle client are installed on a 32-bit system.
    1. ttVersion
    TimesTen Release 7.0.5.0.0 (32 bit Linux/x86) (timesten122:17000) 2008-04-04T00:09:04Z
    Instance admin: root
    Instance home directory: /appl/TimesTen/timesten122
    Daemon home directory: /var/TimesTen/timesten122
    Access control enabled.
    2. Oracle DB details
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
    PL/SQL Release 10.2.0.3.0 - Production
    CORE 10.2.0.3.0 Production
    TNS for Linux: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    Oracle Client - Oracle Client 10.2.0.4 running in a 32 bit Linux/x86
    3. ODBC Details
    Driver=/appl/TimesTen/timesten122/lib/libtten.so
    DataStore=/var/TimesTen/data
    PermSize=1700
    TempSize=244
    PassThrough=2
    UID=testuser
    OracleId=oraclenetservice
    OraclePwd=testpwd
    DatabaseCharacterSet=UTF8
    Thanks,
    Sunil

  • New FAQ Entry on JVM Parameters for Large Cache Sizes

    I've posted a new FAQ entry (http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#60) on JVM parameters for large cache sizes. The text of it is as follows:
    What JVM parameters should I consider when tuning an application with a large cache size?
    If your application has a large cache size, tuning the Java GC may be necessary. You will almost certainly be using a 64-bit JVM (i.e. -d64), the -server option, and setting your heap and stack sizes with -Xmx and -Xms. Be sure that you don't set the cache size too close to the heap size, so that your application has plenty of room for its data and to avoid excessive full GCs. We have found that the Concurrent Mark Sweep GC is generally the best in this environment since it yields more predictable GC results. This can be enabled with -XX:+UseConcMarkSweepGC.
    Best practice dictates that you disable System.gc() calls with -XX:+DisableExplicitGC.
    Other JVM options which may prove useful are -XX:NewSize (start with 512m or 1024m as a value), -XX:MaxNewSize (try 1024m as a value), and -XX:CMSInitiatingOccupancyFraction=55. NewSize is typically tuned in relationship to the overall heap size so if you specify this parameter you will also need to provide a -Xmx value. A convenient way of specifying this in relative terms is to use -XX:NewRatio. The values we've suggested are only starting points. The actual values will vary depending on the runtime characteristics of the application.
    You may also want to refer to the following articles:
    * Java SE 6 HotSpot Virtual Machine Garbage Collection Tuning
    * The most complete list of -XX options for Java 6 JVM
    * My Favorite Hotspot JVM Flags

    First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
    The root cause of the problem you describe could be related to a timeout in the ODBC driver (especially considering your comment that it happens only for larger tables):
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    indicates that the driver or the database abends the connection due to a timeout.
    Check out the wait_timeout MySQL variable on the server and increase it.

  • More than one root table: how to design the cache groups?

    hi,
    Each cache group can have only one root table and many child tables. My relational model is:
    A (id number, name ..., primary key id)
    B (id number, ..., primary key id)
    A_B_rel (aid number, bid number, foreign key aid references A (id),
    foreign key bid references B (id))
    My select statement is "select ... from a, b, a_b_rel where ....".
    I want to cache these three tables; how should I create the cache groups?
    My design is three AWT cache groups: cache group A for A, cache group B for B, and cache group AB for a_b_rel.
    Is there a better solution?

    As you have discovered, you cannot put all three of these tables into one cache group. For READONLY cache groups the solution is simple, put two of the tables (say A and A_B) in one cache group and the other table (B) in a different cache group and make sure that both use the same AUTOREFRESH interval.
    For your case, using AWT cache groups, the situation is a bit more complicated. You must cache the tables as two different cache groups as mentioned above, but you cannot define a foreign key relationship in TimesTen between tables in different cache groups. Hence you will need to add logic to your application to check and enforce the 'missing' foreign key relationship (B + A_B in this example) to ensure that you do not inadvertently insert data that would violate the FK relationship defined in Oracle. Otherwise you could insert invalid data in TimesTen and it would then fail to propagate to Oracle.
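    For the READONLY case described above, a minimal sketch of the two cache groups might look like this (the owner, column types, the primary key assumed for A_B_REL and the 30-second interval are illustrative assumptions; both groups deliberately use the same AUTOREFRESH interval):
    CREATE READONLY CACHE GROUP cg_a_ab
    AUTOREFRESH MODE INCREMENTAL INTERVAL 30 SECONDS
    FROM
    myuser.A (
        ID NUMBER NOT NULL,
        NAME VARCHAR2(50),
        PRIMARY KEY (ID)
    ),
    myuser.A_B_REL (
        AID NUMBER NOT NULL,
        BID NUMBER NOT NULL,
        PRIMARY KEY (AID, BID),
        FOREIGN KEY (AID) REFERENCES myuser.A (ID)
    );
    CREATE READONLY CACHE GROUP cg_b
    AUTOREFRESH MODE INCREMENTAL INTERVAL 30 SECONDS
    FROM
    myuser.B (
        ID NUMBER NOT NULL,
        PRIMARY KEY (ID)
    );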
    Chris

  • How to query data from a grid cache group after creating a global AWT cache group

    It is me again.
    As I mentioned in my previous posts, I am in the process of setting up an IMDB Cache grid environment and I am now at the stage of creating cache groups. I created a global AWT cache group on one node (cachealone2), but I cannot query this global cache group from the other node (cachealone1).
    Thanks Chris and J, I have successfully set up the IMDB grid environment and have two nodes in the grid, as shown below:
    Command> call ttGridNodeStatus;
    < MYGRID, 1, 1, T, igs_imdb02, MYGRID_cachealone1_1, 10.214.10.176, 5001, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    < MYGRID, 2, 1, T, igsimdb01, MYGRID_cachealone2_2, 10.214.10.119, 5002, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    2 rows found.
    And I created the global AWT cache group on cachealone2:
    Command> cachegroups;
    Cache Group CACHEUSER.SUBSCRIBER_ACCOUNTS:
    Cache Group Type: Asynchronous Writethrough global (Dynamic)
    Autorefresh: No
    Aging: LRU on
    Root Table: ORATT.SUBSCRIBER
    Table Type: Propagate
    1 cache group found.
    Command> SELECT * FROM oratt.subscriber;
    0 rows found.
    However, I cannot query this table from the other node, cachealone1:
    Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
    2206: Table ORATT.SUBSCRIBER not found
    The command failed.
    Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
    2206: Table ORATT.SUBSCRIBER not found
    The command failed.
    Command> SELECT * FROM oratt.subscriber;
    2206: Table ORATT.SUBSCRIBER not found
    This is the example from the Oracle docs; I am not sure what I missed. Thanks for your help.

    It sounds like you have not created the global AWT cache group in the second datastore. There is a multi-step process needed to roll out a cache grid and various things must be done on each node in the correct order; have you done that?
    Try checking out the QuickStart example here:
    http://download.oracle.com/otn_hosted_doc/timesten/1121/quickstart/index.html
    Chris

  • Can a unique index be created on a read-only cache group

    Hi
    Can a unique index be created on a read-only cache group?
    Regards
    Siva Kumar

    No, I do not think so. Creating a unique index could cause autorefresh operations to fail if the data being refreshed contains duplicate values that would not be allowed by the index. You can create regular indexes on a table in a readonly cache group.
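    As an illustration (the index name is made up, and the oratt.readtab table is borrowed from the earlier cache group example on this page), a regular non-unique index on a cached table is created just like on any other TimesTen table:
    -- A non-unique index on a table belonging to a READONLY cache group;
    -- unlike a unique index, it cannot cause autorefresh to fail.
    CREATE INDEX ix_readtab_b ON oratt.readtab (b);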
    Chris
