AWT cache group with CacheAwtParallelism

I have a question.
ttVersion: TimesTen Release 11.2.2.3.0 (64 bit Linux/x86_64) (tt112230:53376) 2012-05-24T09:20:08Z
We are testing an AWT cache group (with CacheAwtParallelism=4).
An application (1 process) generates DML against TimesTen (DSN=TEST).
In this case, is the DML applied to Oracle across the 4 parallel threads?
[TEST]
Driver=/home/TimesTen/tt112230/lib/libtten.so
DataStore=/home/TimesTen/DataStore/TEST/test
PermSize=1024
TempSize=512
PLSQL=1
DatabaseCharacterSet=KO16MSWIN949
ConnectionCharacterSet=KO16MSWIN949
OracleNetServiceName=ORACLE
OraclePWD=tiger
CachegridEnable=0
LogBufMB=512
LogFileSize=1024
RecoveryThreads=8
LogBufParallelism=8
CacheAwtParallelism=4
ReplicationParallelism=4
ReplicationApplyOrdering=0
UID=scott
PWD=tiger
Thank you very much.
GooGyum

Let me try and elaborate a little on 'parallel AWT' (and parallel replication). AWT uses the TimesTen replication infrastructure to capture changes made to AWT cached tables and propagate those changes to the Oracle DB. The replication infrastructure captures changes to tables by mining the TimesTen transaction (redo) logs. The replication/AWT capture/propagate/apply processing is completely decoupled from application transaction execution.
In TimesTen releases earlier than 11.2.2, the replication infrastructure was completely single threaded in terms of capture/propagate/apply. This means that if you have a TimesTen datastore with several application processes, each with multiple threads, all executing DML against TimesTen, there is just a single replication thread capturing all these changes, propagating them to the target and applying them there. This was clearly a performance bottleneck in some situations. In 11.2.2 the replication infrastructure has been parallelised to improve performance. This is a very difficult task as we still need to guarantee 'correctness' in all scenarios. The implementation tracks both operation and commit order dependencies at the source (i.e. where the transactions are executed) and encodes this dependency information into the replication stream. Changes are captured, propagated and applied in parallel, and on the apply side the dependency information is used to ensure that non-dependent transactions can be applied in parallel (still subject to commit order enforcement) while dependent transactions are always applied in a serial fashion. So, depending on the actual workload, you may see significant performance improvements using parallel replication / parallel AWT.
Note that parallelism is applied between transactions; there is no parallelism for the operations within an individual transaction.
In the case mentioned, CacheAwtParallelism=4, this means that up to 4 threads will be used to apply transactions in parallel to Oracle. The actual degree of parallelism obtained is subject to inter-transactional dependencies in the workload and adjusts dynamically in real-time.
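As a rough illustration (table T, its columns and the values are hypothetical), the two transactions below touch disjoint rows and have no commit-order dependency, so with CacheAwtParallelism=4 they are eligible to be applied to Oracle by different apply threads, whereas the three operations inside the first transaction are always applied serially by a single thread:
-- transaction 1: all of its operations are applied by one thread, in order
INSERT INTO t VALUES (1, 'a');
INSERT INTO t VALUES (2, 'b');
UPDATE t SET val = 'c' WHERE id = 1;
COMMIT;
-- transaction 2: shares no rows with transaction 1, so it may be applied in parallel with it
INSERT INTO t VALUES (100, 'x');
COMMIT;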
Chris

Similar Messages

  • Equivalent to -duplicate w/awt cache group?

    Is there any equivalent to ttRepAdmin -duplicate when using AWT cache groups?
    The behavior we're observing is this:
    1. create cache group
    2. load data into cache group using ttbulkcp, ttisql, etc.
    3. start rep agent
    No data makes it into Oracle. However, if we reverse steps 2 & 3 (start rep agent first, then load data), everything's fine.
    My concern is: what happens if the rep agent falls over for some reason and has to be restarted? If data doesn't get propagated to Oracle unless the rep agent is running when the txn is committed, that could be a problem.
    Can anyone clarify this for me -- is this intended behavior? Thanks.

    Bill,
    When the replication agent is down, the AWT transactions are in the transaction logs and will be sent to Oracle when the rep agent comes back up.
    What you described below is not the expected behavior. There is something else going on that needs to be looked at. Please file an SR so we can take a look at the details.
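    As a quick illustration (the DSN name TEST is just a placeholder), once the replication agent is restarted the AWT work queued in the logs is applied to Oracle:
    Command> connect "DSN=TEST";
    Command> call ttRepStart;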
    Susan

  • Load cache group with parallel error, 907

    Hello, Chris:
    We have run into another problem. After creating a cache group, loading the data with PARALLEL 8 fails with a unique constraint conflict, although we checked the data and found no problems with it. When we load the data again without the PARALLEL parameter, it works well and all the data loads. If we then unload and load with PARALLEL 8 again, the unique constraint conflict appears again. What is happening?
    thank you...
    The script is:
    create readonly cache group FSZW_OCECS.SP_SUBSCRIBER_RELATION_CACHE
    autorefresh
    mode incremental
    interval 29000 milliseconds
    /* state on */
    from
    FSZW_OCECS.SP_SUBSCRIBER_RELATION (
    SUBS_RELATION_ID TT_BIGINT NOT NULL,
    PRIVID VARCHAR2(32 BYTE) INLINE NOT NULL,
    SUBSID TT_BIGINT,
    SWITCH_FLAG VARCHAR2(2 BYTE) INLINE,
    DISCOUNT_CODE VARCHAR2(8 BYTE) INLINE NOT NULL,
    DISCOUNT_SERIAL TT_INTEGER,
    START_DATE DATE NOT NULL,
    END_DATE DATE,
    MOBILENO VARCHAR2(15 BYTE) INLINE NOT NULL,
    APPLY_DATE DATE,
    primary key (SUBS_RELATION_ID))
    where NODEID = '334' or NODEID IS NULL,
    FSZW_OCECS.SP_SUBSCRIBER_ATTRINFO (
    SUB_ATTACH_ID TT_BIGINT NOT NULL,
    SUBS_RELATION_ID TT_BIGINT,
    SUB_ATTACH_INFO VARCHAR2(16 BYTE) INLINE NOT NULL,
    SUB_ATTACH_TYPE VARCHAR2(2 BYTE) INLINE,
    primary key (SUB_ATTACH_ID),
    foreign key (SUBS_RELATION_ID)
    references FSZW_OCECS.SP_SUBSCRIBER_RELATION (SUBS_RELATION_ID));
    Command> load cache group SP_SUBSCRIBER_RELATION_CACHE commit every 25600 rows PARALLEL 8;
    5056: The cache operation fails: error_type=<TimesTen Error>, error_code=<907>, error_message: [TimesTen]TT0907: Unique constraint (SP_SUBSCRIBER_ATTRINFO) violated at Rowid <0x0000000091341e88>
    5037: An error occurred while loading FSZW_OCECS.SP_SUBSCRIBER_RELATION_CACHE:Load failed ([TimesTen]TT0907: Unique constraint (SP_SUBSCRIBER_ATTRINFO) violated at Rowid <0x0000000091341e88>
    Command> load cache group FSZW_OCECS.SP_SUBSCRIBER_RELATION_CACHE commit every 25600 rows;
    5746074 cache instances affected.

    This looks like a bug to me but I haven't been able to find a known candidate. Are you able to log an SR and provide a testcase so we can reproduce it here and verify if it is a new bug? Thanks.

  • AWT cache group

    Hi,
    I have created 2 AWT cache groups. When I insert data from the TimesTen command prompt
    using an insert query, the data is committed in TimesTen as well as in Oracle for both tables.
    When I insert data from the application, the data is not inserted into TimesTen but is passed through to Oracle.
    One cache group can insert data into TimesTen, whereas the other cache group passes the insert through to Oracle.
    Can anyone help me with this?

    Hello,
    Data is not inserted into TimesTen but is passed through to Oracle.
    I think that you should post the question to the TimesTen Forum:
    https://forums.oracle.com/community/developer/english/oracle_database/timesten_in-memory_database
    Hope this helps,
    Best regards,
    Jean-Valentin Lubiez

  • How to query data from grid cache group after created global AWT group

    It is me again.
    As I mentioned in my previous posts, I am in the process of setting up an IMDB grid environment and am now at the stage of creating cache groups. I created a global AWT cache group on one node (cachealone2), but I cannot query this global cache group from another node (cachealone1).
    Thanks Chris and J, I have successfully set up the IMDB grid environment and have two nodes in this grid, as shown below:
    Command> call ttGridNodeStatus;
    < MYGRID, 1, 1, T, igs_imdb02, MYGRID_cachealone1_1, 10.214.10.176, 5001, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    < MYGRID, 2, 1, T, igsimdb01, MYGRID_cachealone2_2, 10.214.10.119, 5002, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    2 rows found.
    I created the global AWT cache group on cachealone2:
    Command> cachegroups;
    Cache Group CACHEUSER.SUBSCRIBER_ACCOUNTS:
    Cache Group Type: Asynchronous Writethrough global (Dynamic)
    Autorefresh: No
    Aging: LRU on
    Root Table: ORATT.SUBSCRIBER
    Table Type: Propagate
    1 cache group found.
    Command> SELECT * FROM oratt.subscriber;
    0 rows found.
    however I can not query this from another node cachealone1
    Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
    2206: Table ORATT.SUBSCRIBER not found
    The command failed.
    Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
    2206: Table ORATT.SUBSCRIBER not found
    The command failed.
    Command> SELECT * FROM oratt.subscriber;
    2206: Table ORATT.SUBSCRIBER not found
    This is the example from the Oracle docs; I am not sure what I missed here. Thanks for your help.

    Sounds like you have not created the global AWT cache group in the second datastore? There is a multi-step process needed to roll out a cache grid and various things must be done on each node in the correct order. Have you done that?
    Try checking out the QuickStart example here:
    http://download.oracle.com/otn_hosted_doc/timesten/1121/quickstart/index.html
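    As a hedged sketch (the column list here is illustrative and must match both the definition already used on cachealone2 and the Oracle table), the same global cache group also has to be created on cachealone1 before ORATT.SUBSCRIBER can be queried there, for example:
    CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH GLOBAL CACHE GROUP cacheuser.subscriber_accounts
    FROM oratt.subscriber (
        subscriberid NUMBER NOT NULL,
        name         VARCHAR2(100),
        PRIMARY KEY (subscriberid))
    AGING LRU ON;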
    Chris

  • Drop cache group in timesten 11.2.1

    Hello,
    I am trying to drop an asynchronous cache group in timesten. I follow the below steps to do so:
    a) I use the connection string with the DSN, UID, PWD, OracleID, OraclePWD specified
    b) If replication policy is 'always', change it to 'manual'
    c) Stop replication
    d) Drop the AWT cache group (drop cache group cachegroupname;)
    e) Create the modified AWT
    f) Start replication
    g) Set replication policy back to 'always'
    After step (d), I get the following error:
    Command> drop cache group cachegroupname;
    5219: Temporary Oracle connection failure error in OCIServerAttach(): ORA-12541: TNS:no listener rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "servicename", uid = "inputuid", pwd is hidden, TNS_ADMIN = "/opt/TT/linux/info", ORACLE_HOME= "/opt/TT/linux/ttoracle_home/instantclient_11_1"
    5109: Cache Connect general error: BDB connection not open.
    The command failed.
    Command>
    Does the error suggest that cache connect has a problem? Should I restart the timesten daemon and try again? Please let me know what the real problem is.
    Let me know if you need information.
    Thanks,
    V

    The SQL*Plus problem is simply because you don't have all the correct directories listed in LD_LIBRARY_PATH. It's likely that your .profile (or equivalent) was setting those based on ORACLE_HOME, and if this is now unset that could be the problem. Check that LD_LIBRARY_PATH is set properly and this problem will go away.
    The character set issue is potentially more problematic. It is mandatory that the database character set used by TimesTen exactly matches that of the Oracle DB when TimesTen is being used as a cache. If the character sets truly are different then this is very serious and you need to rectify it, as many things will fail otherwise. You either need to switch the Oracle DB back to US7ASCII (probably a big job) or you need to change the TimesTen character set to WE8MSWIN1252.
    To accomplish the latter you would:
    1. Take a backup of the TT datastore using ttBackup (just for safety).
    2. For any non-cache tables (i.e. TT only tables), unload data to flat files using ttBulkCp -o ...
    3. Save the schema for the datastore using ttSchema.
    4. Stop cache and replication agents.
    5. Ensure datastore is unloaded from memory and then destroy the datastore (ttDestroy)
    6. Edit sys.odbc.ini to change Datastore character set.
    7. Connect to datastore as instance administrator (to create datastore). Create all necessary users and grant required privileges.
    8. Set the cache userid/password (call ttCacheUidPwdSet(...,...))
    9. Start the cache agent.
    10. Run the SQL script generated by ttSchema to re-create all database objects (tables and cache groups etc.)
    11. Re-populate all non-cache tables from the flat files using ttBulkCp -i
    12. Re-load all cache groups using LOAD CACHE GROUP ...
    13. Restart the replication agent.
    That's pretty much it (hopefully I have not missed out any vital step).
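    A minimal sketch of some of these steps, with placeholder DSN, directory, user and table names (adjust everything to your own environment):
    ttBackup -dir /backups myDSN                                (step 1)
    ttBulkCp -o myDSN myuser.mytable /tmp/mytable.dump          (step 2, repeat per TT-only table)
    ttSchema myDSN > /tmp/schema.sql                            (step 3)
    ttDestroy myDSN                                             (step 5, after unloading from memory)
    (edit DatabaseCharacterSet in sys.odbc.ini for step 6, reconnect as instance administrator and recreate users for step 7, then:)
    Command> call ttCacheUidPwdSet('cacheuser','cachepwd');     (step 8)
    Command> call ttCacheStart;                                 (step 9)
    ttIsql -f /tmp/schema.sql myDSN                             (step 10)
    ttBulkCp -i myDSN myuser.mytable /tmp/mytable.dump          (step 11)
    Command> LOAD CACHE GROUP myuser.mycachegroup COMMIT EVERY 1000 ROWS;    (step 12)
    Command> call ttRepStart;                                   (step 13)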
    Chris

  • Bidirectional replication of cache groups to cache

    We set up bidirectional replication of AWT cache groups to AWT cache groups.
    ================================================
    create asynchronous writethrough cache group T1_CACHE from T1 (
         a NUMBER(12) NOT NULL,
         b NUMBER(12),
         c NUMBER(12),
         PRIMARY KEY (a));
    CREATE REPLICATION rep.mytt
    ELEMENT a DATASTORE
    MASTER mydata ON "ser1"
    SUBSCRIBER mydata ON "ser2" RETURN TWOSAFE
    ELEMENT b DATASTORE
    MASTER mydata ON "cx_pdscp2"
    SUBSCRIBER mydata ON "cx_pdscp1" RETURN TWOSAFE;
    ========================================================
    I wrote an application. It forks 30 child processes that insert values into t1. The replication between the master and subscriber is OK, but the replication from the master to Oracle does not work well. After 3~5 minutes it prints this log:
    ========================================================
    19:15:15.13 Err : REP: 6359: MYDATA:receiver.c(5668): TT16038: Failed to begin transaction for caller: rxBegin
    Tx()
    19:15:15.13 Err : REP: 6359: MYDATA:receiver.c(5668): TT864: TT0864: Operation prohibited with an active trans
    action -- file "dbAPI.c", lineno 3822, procedure "sb_xactBeginQ()"
    19:15:15.13 Err : REP: 6359: MYDATA:receiver.c(5002): TT16187: Transaction 1187176202/995; Error: transient 1,
    permanent 0
    =======================================================
    Please help me! Thanks!

    Hi,
    This configuration is not supported; you can configure a bi-directional 2-safe scheme without AWT (as long as you only use it in a logical active/standby fashion).
    If you want to achieve high availability using 2-safe replication and AWT then you must use the explicit ACTIVE STANDBY pair replication (you will need TimesTen 7.0 to do this). This fully supports the combination of 2-safe replication and AWT.
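    For reference, a hedged sketch of the active standby pair definition using the store and host names from your post (the AWT cache group itself is created separately, before the pair is defined):
    CREATE ACTIVE STANDBY PAIR mydata ON "ser1", mydata ON "ser2" RETURN TWOSAFE;
    and then, on the store that is to become the active:
    Command> call ttRepStateSet('ACTIVE');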
    Regards, Chris

  • More than one root table, how to design cache group?

    hi,
    Each cache group has only one root table and many child tables. If my relational model is:
    A(id number,name ....,primary key id)
    B(id number,.....,primary key id)
    A_B_rel (aid number,bid number,foreign key aid references a (id),
    foreign key bid references b(id))
    my select statement is "select ... from a,b,a_b_rel where ....",
    I want to cache these three tables; how should I create the cache groups?
    My design is three AWT cache groups: cache group A for A, cache group B for B, and cache group AB for a_b_rel.
    Are there any better solutions?

    As you have discovered, you cannot put all three of these tables into one cache group. For READONLY cache groups the solution is simple, put two of the tables (say A and A_B) in one cache group and the other table (B) in a different cache group and make sure that both use the same AUTOREFRESH interval.
    For your case, using AWT cache groups, the situation is a bit more complicated. You must cache the tables as two different cache groups as mentioned above, but you cannot define a foreign key relationship in TimesTen between tables in different cache groups. Hence you will need to add logic to your application to check and enforce the 'missing' foreign key relationship (B + A_B in this example) to ensure that you do not inadvertently insert data that would violate the FK relationship defined in Oracle. Otherwise you could insert invalid data in TimesTen and this would then fail to propagate to Oracle.
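    A hedged sketch of that two-cache-group split (the column types and the composite primary key on A_B_REL are assumptions; every cached table needs a primary key, and the foreign key from A_B_REL to B is deliberately omitted because B lives in the other cache group):
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP cg_a_ab
    FROM a (id NUMBER NOT NULL, name VARCHAR2(50), PRIMARY KEY (id)),
         a_b_rel (aid NUMBER NOT NULL, bid NUMBER NOT NULL,
                  PRIMARY KEY (aid, bid),
                  FOREIGN KEY (aid) REFERENCES a (id));
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP cg_b
    FROM b (id NUMBER NOT NULL, PRIMARY KEY (id));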
    Chris

  • About cache group

    A program can insert data into a TimesTen table normally, but it does not work when the table is in a Cache Group with Oracle.
    I have a weird problem: a program can insert data into a TimesTen table when there is no Cache Group with Oracle.
    However, it cannot do this when it is connected to Oracle using a cache group. Any idea why this happens?
    error message:
    *** ERROR in tt_main.c, line 90:
    *** [TimesTen][TimesTen 7.0.3.0.0 ODBC Driver][TimesTen]TT5102: Cannot load backend library 'libclntsh.so' for Cache Connect.
    OS error message 'ld.so.1: test_C: ???: libclntsh.so: ????: ???????'. -- file "bdbOciFuncs.c", lineno 257,
    procedure "loadSharedLibrary()"
    *** ODBC Error/Warning = S1000, Additional Error/Warning = 5102

    I think I can exclude the above possibilities, as I have checked all the settings above.
    We can use SQL statements as input, and inserting and querying can be done at both ends.
    It is only the program that does not work. My "connection string" is the following:
    connstr=====DSN=UTEL7;UID=utel7;PWD=utel7;AutoCreate=0;OverWrite=0;Authenticate=1
    Maybe it is an incorrect property, a permission issue, or a switch parameter? Please give me some suggestions.
    Thank you very much.
    Create cache group command is:
    Create Asynchronous Writethrough Cache Group utel7_load
    From
    utel7.load(col0 binary_float, col1 binary_float ......
    My odbc.ini is the following:
    # Copyright (C) 1999, 2007, Oracle. All rights reserved.
    # The following are the default values for connection attributes.
    # In the Data Sources defined below, if the attribute is not explicitly
    # set in its entry, TimesTen 7.0 uses the defaults as
    # specified below. For more information on these connection attributes,
    # see the accompanying documentation.
    # Lines in this file beginning with # or ; are treated as comments.
    # In attribute=_value_ lines, the value consists of everything
    # after the = to the end of the line, with leading and trailing white
    # space removed.
    # Authenticate=1 (client/server only)
    # AutoCreate=1
    # CkptFrequency (if Logging == 1 then 600 else 0)
    # CkptLogVolume=0
    # CkptRate=0 (0 = rate not limited)
    # ConnectionCharacterSet (if DatabaseCharacterSet == TIMESTEN8
    # then TIMESTEN8 else US7ASCII)
    # ConnectionName (process argv[0])
    # Connections=64
    # DatabaseCharacterSet (no default)
    # Diagnostics=1
    # DurableCommits=0
    # ForceConnect=0
    # GroupRestrict (none by default)
    # Isolation=1 (1 = read-committed)
    # LockLevel=0 (0 = row-level locking)
    # LockWait=10 (seconds)
    # Logging=1 (1 = write log to disk)
    # LogAutoTruncate=1
    # LogBuffSize=65536 (measured in KB)
    # LogDir (same as checkpoint directory by default)
    # LogFileSize=64 (measured in MB)
    # LogFlushMethod=0
    # LogPurge=1
    # MatchLogOpts=0
    # MemoryLock=0 (HP-UX, Linux, and Solaris platforms only)
    # NLS_LENGTH_SEMANTICS=BYTE
    # NLS_NCHAR_CONV_EXCP=0
    # NLS_SORT=BINARY
    # OverWrite=0
    # PermSize=2 (measured in MB; default is 2 on 32-bit, 4 on 64-bit)
    # PermWarnThreshold=90
    # Preallocate=0
    # PrivateCommands=0
    # PWD (no default)
    # PWDCrypt (no default)
    # RecoveryThreads=1
    # SQLQueryTimeout=0 (seconds)
    # Temporary=0 (data store is permanent by default)
    # TempSize (measured in MB; default is derived from PermSize,
    # but is always at least 6MB)
    # TempWarnThreshold=90
    # TypeMode=0 (0 = Oracle types)
    # UID (operating system user ID)
    # WaitForConnect=1
    # Oracle Loading Attributes
    # OracleID (no default)
    # OraclePWD (no default)
    # PassThrough=0 (0 = SQL not passed through to Oracle)
    # RACCallback=1
    # TransparentLoad=0 (0 = do not load data)
    # Client Connection Attributes
    # ConnectionCharacterSet (if DatabaseCharacterSet == TIMESTEN8
    # then TIMESTEN8 else US7ASCII)
    # ConnectionName (process argv[0])
    # PWD (no default)
    # PWDCrypt (no default)
    # TTC_Server (no default)
    # TTC_Server_DSN (no default)
    # TTC_Timeout=60
    # UID (operating system user ID)
    [ODBC Data Sources]
    TT_tt70=TimesTen 7.0 Driver
    TpcbData_tt70=TimesTen 7.0 Driver
    TptbmDataRepSrc_tt70=TimesTen 7.0 Driver
    TptbmDataRepDst_tt70=TimesTen 7.0 Driver
    TptbmData_tt70=TimesTen 7.0 Driver
    BulkInsData_tt70=TimesTen 7.0 Driver
    WiscData_tt70=TimesTen 7.0 Driver
    RunData_tt70=TimesTen 7.0 Driver
    CacheData_tt70=TimesTen 7.0 Driver
    Utel7=TimesTen 7.0 Driver
    TpcbDataCS_tt70=TimesTen 7.0 Client Driver
    TptbmDataCS_tt70=TimesTen 7.0 Client Driver
    BulkInsDataCS_tt70=TimesTen 7.0 Client Driver
    WiscDataCS_tt70=TimesTen 7.0 Client Driver
    RunDataCS_tt70=TimesTen 7.0 Client Driver
    # Instance-Specific System Data Store
    # A predefined instance-specific data store reserved for system use.
    # It provides a well-known data store for use when a connection
    # is required to execute commands.
    [TT_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/TT_tt70
    DatabaseCharacterSet=US7ASCII
    # Data source for TPCB
    # This data store is created on connect; if it doesn't already exist.
    # (AutoCreate=1 and Overwrite=0). For performance reasons, database-
    # level locking is used. However, logging is turned on. The initial
    # size is set to 16MB.
    [TpcbData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TpcbData
    DatabaseCharacterSet=US7ASCII
    PermSize=16
    WaitForConnect=0
    Authenticate=0
    # Data source for TPTBM demo
    # This data store is created everytime the benchmark is run.
    # Overwrite should always be 0 for this benchmark. All other
    # attributes may be varied and performance under those conditions
    # evaluated. The initial size is set to 20MB and durable commits are
    # turned off.
    [TptbmData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TptbmData
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Source data source for TPTBM demo in replication mode
    # This data store is created everytime the replication benchmark demo
    # is run. This datastore is set up for the source data store.
    [TptbmDataRepSrc_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TptbmDataRepSrc_tt70
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Destination data source for TPTBM demo in replication mode
    # This data store is created everytime the replication benchmark demo
    # is run. This datastore is set up for the destination data store.
    [TptbmDataRepDst_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TptbmDataRepDst_tt70
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Data source for BULKINSERT demo
    # This data store is created on connect; if it doesn't already exist
    # (AutoCreate=1 and Overwrite=0).
    [BulkInsData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/BulkInsData
    DatabaseCharacterSet=US7ASCII
    LockLevel=1
    PermSize=32
    WaitForConnect=0
    Authenticate=0
    # Data source for WISCBM demo
    # This data store is created on connect if it doesn't already exist
    # (AutoCreate=1 and Overwrite=0). For performance reasons,
    # database-level locking is used. However, logging is turned on.
    [WiscData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/WiscData
    DatabaseCharacterSet=US7ASCII
    LockLevel=1
    PermSize=16
    WaitForConnect=0
    Authenticate=0
    # Default Data source for TTISQL demo and utility
    # Use default options.
    [RunData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/RunData
    DatabaseCharacterSet=US7ASCII
    Authenticate=0
    # Sample Data source for the xlaSimple demo
    # see manual for discussion of this demo
    [Sample_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/Sample
    DatabaseCharacterSet=US7ASCII
    TempSize=16
    PermSize=16
    Authenticate=0
    # Sample data source using OracleId.
    [CacheData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/CacheData
    DatabaseCharacterSet=US7ASCII
    OracleId=MyData
    PermSize=16
    # New data source definitions can be added below. Here is my datastore!!!
    [Utel7]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/tt70_data/utel7
    DatabaseCharacterSet=ZHS16GBK
    Uid=utel7
    Authenticate=0
    OracleID=db3
    OraclePWD=utel7
    PermSize=6000
    Connections=20
    #permsize*20%
    TempSize=400
    CkptFrequency=600
    CkptLogVolume=256
    LogBuffSize=256000
    LogFileSize=256

  • Suggestions required for Read-only cache group in timesten IMDB cache

    Hi
    In IMDB Cache, the underlying Oracle RAC has two schemas ("KAEP" & "AAEP", with the same structure and the same object names) and we want to create a read-only cache group with an AS pair in TimesTen.
    Schema                                              
        KAEP  
    Table  
        Abc1
        Abc2
        Abc3                                    
    Schema
        AAEP
    Table
        Abc1
        Abc2
        Abc3
    Can a read-only cache group be created using a UNION ALL query?
    The result set of the cache group should contain records from both schemas in the TimesTen read-only cache group; will that be possible?
    Will there be any performance issues?

    You cannot create a cache group that uses UNION ALL. The only 'query' capability in a cache group definition is to use predicates in the WHERE clause and these must be simple filter predicates on the  tables in the cache group.
    Your best approach is to create separate cache groups for these tables in TimesTen and then define one or more VIEWS using UNION ALL in TimesTen in order to present the tables in the way that you want.
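    A hedged sketch with assumed column names and an illustrative autorefresh interval (adjust both to the real ABC1 definition):
    CREATE READONLY CACHE GROUP kaep_abc1_cg
    AUTOREFRESH MODE INCREMENTAL INTERVAL 60 SECONDS
    FROM kaep.abc1 (id NUMBER NOT NULL, val VARCHAR2(100), PRIMARY KEY (id));
    CREATE READONLY CACHE GROUP aaep_abc1_cg
    AUTOREFRESH MODE INCREMENTAL INTERVAL 60 SECONDS
    FROM aaep.abc1 (id NUMBER NOT NULL, val VARCHAR2(100), PRIMARY KEY (id));
    CREATE VIEW all_abc1 AS
        SELECT id, val FROM kaep.abc1
        UNION ALL
        SELECT id, val FROM aaep.abc1;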
    Chris

  • Cpu usage high when loading cache group

    Hi,
    What are the possible reasons for high CPU usage when loading a read-only cache group with a big root table (~1 million records)? I have tried setting Logging=0 (without cache agent), 1, or 2 but it doesn't help. Is there any other tuning configuration required to avoid high CPU consumption?
    ttVersion: TimesTen Release 6.0.2 (32 bit Solaris)
    Any help would be highly appreciated. Thanks in advance.

    High CPU usage is not necessarily a problem as long as the CPU is being used to do useful work. In that case high CPU usage shows that things are being processed taking maximum advantage of the available CPU power. The single most common mistake is to not properly size the primary key hash index in TimesTen. Whenever you create a table with a PK in TimesTen (whether it is part of a cache group or just a standalone table) you must always specify the size of the PK hash index using the UNIQUE HASH ON (pk columns) PAGES = n clause (see the documentation). n should be set to the maximum number of rows expected in the table / 256. The default is sized for a table of just 4000 rows! If you try to load 1M rows into such a table we will waste a lot of CPU time serially scanning the (very long) hash chains in each bucket for every row inserted...
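    For example, a sketch with a hypothetical table sized for roughly 1 million rows (1,000,000 / 256 ≈ 3906, rounded up to 4000):
    CREATE TABLE myuser.mytable (
        id  NUMBER NOT NULL,
        val VARCHAR2(100),
        PRIMARY KEY (id))
    UNIQUE HASH ON (id) PAGES = 4000;
    The same UNIQUE HASH ON ... PAGES clause can be given after the table definition inside a CREATE CACHE GROUP statement.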

  • Problem creating cache group for a table with data type varchar2(1800 CHAR)

    Hi,
    I am using TimesTen 7.0 with an Oracle 10.2.0.4 server. While creating a Cache Group for one of my tables I'm getting the following error.
    5121: Non-standard type mapping for column TICKET.DESCRIPTION, cache operations are restricted
    5168: Restricted cache groups are deprecated
    5126: A system managed cache group cannot contain non-standard column type mapping
    The command failed.
    One of the field types in my Oracle table is VARCHAR2(1800 CHAR). If I change the field size to <=1000 (e.g. VARCHAR2(1000 CHAR)) then the CREATE CACHE GROUP command works fine.
    My database character set is UTF8.
    Is it possible to solve this without changing the field size in the Oracle table?
    Request your help on this.
    Thanks,
    Sunil

    Hi Chris.
    The TimesTen server and the Oracle Client are installed on a 32-bit system.
    1. ttVersion
    TimesTen Release 7.0.5.0.0 (32 bit Linux/x86) (timesten122:17000) 2008-04-04T00:09:04Z
    Instance admin: root
    Instance home directory: /appl/TimesTen/timesten122
    Daemon home directory: /var/TimesTen/timesten122
    Access control enabled.
    2. Oracle DB details
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
    PL/SQL Release 10.2.0.3.0 - Production
    CORE 10.2.0.3.0 Production
    TNS for Linux: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    Oracle Client - Oracle Client 10.2.0.4 running in a 32 bit Linux/x86
    3. ODBC Details
    Driver=/appl/TimesTen/timesten122/lib/libtten.so
    DataStore=/var/TimesTen/data
    PermSize=1700
    TempSize=244
    PassThrough=2
    UID=testuser
    OracleId=oraclenetservice
    OraclePwd=testpwd
    DatabaseCharacterSet=UTF8
    Thanks,
    Sunil

  • Error loading Cache group but Cache group created with out error

    Hi
    I have created a cache group, but when I load that cache group I get the following error:
    Command> load cache group SecondCache commit every 1 rows;
    5056: The cache operation fails: error_type=<Oracle Error>, error_code=<972>, error_message:ORA-00972: identifier is too long
    5037: An error occurred while load TESTUSER.SECONDCACHE:Load failed (ORA-00972: identifier too long)
    The command failed.
    Please help.
    Looking forward to your reply.
    /Ahmad

    Hi Chris!
    Thanks for the quick response. I solved my problem to some extent but want to share it.
    Actually, I had a column named # which is also the primary key. When I change that column name from # to some other name made of ordinary characters, the cache group loads successfully.
    Is there any way in TimesTen to load a column named #?
    I read in the TimesTen documentation that it allows column names such as #, which is why it creates the cache group, but it then fails to load and I do not know the reason.
    The code for creating cache group is as follows:
    create cache group MEASCache from testuser."MEAS"(
        "UPDATED" number not null,
        "UNOCCUPIEDRECORD" number not null,
        "VALUECURRENT" number not null,
        "EQSFREF" number not null,
        "IMPLEMENTED" number not null,
        "FORMAT" number not null,
        "#" number not null,
        primary key("#"))
    When I change the # column to like eg Identity it works fine.
    /Ahmad

  • Synchronous writethrough and  Asynchronous writethrough cache group

    Hi!
    My question is: can we use the PassThrough feature with synchronous or asynchronous writethrough cache groups,
    and with which level (PassThrough=0, 1, 2, 3)?
    Please help.
    regards
    USman

    Yes, PassThrough can be used with AWT and SWT cache groups. Any value is allowed but the only values that make sense are 0, 1 and 3. For AWT and SWT, PassThrough=2 is the same as PassThrough=1.
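    For example (the DSN and credentials are placeholders), PassThrough can simply be set as a connection attribute at connect time:
    Command> connect "DSN=mydsn;UID=scott;PWD=tiger;OraclePWD=tiger;PassThrough=1";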
    Chris

  • Synchronous Writethrough Cache Group in 11.2.1.6.1

    Hello,
    I have created the following SWT cachegroup -
    Cache Group CACHEADM.ENT_GRP_SWT_CG:
      Cache Group Type: Synchronous Writethrough
      Autorefresh: No
      Aging: No aging defined
      Root Table: TRAQDBA.ENT_GRP
      Table Type: Propagate
    based on this DDL:
    create synchronous writethrough cache group CACHEADM.ENT_GRP_SWT_CG
    from
        TRAQDBA.ENT_GRP (
                GRP_REPRESENTATIVE     NUMBER(12) NOT NULL DEFAULT 0,
                GRP_PARENT_KEY         NUMBER(12) NOT NULL,
                GRP_MEMBER_KEY         NUMBER(12) NOT NULL,
                EVIDENCE_KEY           NUMBER,
                NAME_SCORE             NUMBER(3) NOT NULL,
                ADDRESS_SCORE          NUMBER(3),
                DATE_TIME              TIMESTAMP(6),
                ALERT_ID               NUMBER,
                OLD_GRP_REPRESENTATIVE NUMBER(12),
            primary key (GRP_MEMBER_KEY));
    When I attempt to insert a row into the cache group via ttIsql I get the following error -
    Command> insert into ent_grp values (1,1,1,null,100,null,sysdate,null,null);
    5213: Bad Oracle login error in OCISessionBegin(): ORA-01017: invalid username/password; logon denied rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "TRAQQA.world", uid = "TRAQDBA", pwd is hidden, TNS_ADMIN = "/app1/oracle/network", ORACLE_HOME= "/opt/oracle-local/home/oracle"
    The command failed.
    When connecting to ttIsql I supplied the TimesTen table owner userid and password. I know that if I supply the Oracle password at connect time the insert will work.
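    For reference, supplying the Oracle password at connect time looks like this (the values are placeholders):
    Command> connect "DSN=mydsn;UID=TRAQDBA;PWD=tt_password;OraclePWD=oracle_password";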
    I have AWT cachegroups defined, for example -
    Cache Group CACHEADM.EXCLUDED_GRP_ENTITY_AWT_CG:
      Cache Group Type: Asynchronous Writethrough
      Autorefresh: No
      Aging: No aging defined
      Root Table: TRAQDBA.EXCLUDED_GRP_ENTITY
      Table Type: Propagate
    and I can insert into these cache groups without needing to supply the Oracle password. For example -
    Command> autocommit 0
    Command> insert into EXCLUDED_GRP_ENTITY values (1,'1',1,1,1,sysdate,1);
    1 row inserted.
    Is this behaviour correct? Do I need to supply the Oracle password for SWT cache groups but not AWT cache groups?
    Thanks in advance.
    Mark

    Hi Chris,
    Thanks very much for the information. One other thing we have noticed with the SWT cache groups that you may be able to comment on: we are using stored procedures in the cache as APIs on each of our tables. The stored procedures are owned by the same schema that owns the tables underneath the cache groups. We have an application user that has execute permission on the stored procedures, and our Java application connects as the application user to execute them. We have found that when, for example, we insert into the SWT cache group via the stored procedures, a connection is made to the Oracle server using the userid that owns the stored procedure, not the userid that we connected to TimesTen with. In our case the connection fails because we are supplying the application user's Oracle password in OraclePWD and not the schema owner's password. Is this the expected behaviour when using stored procedures?
    Regards
    Mark
