Connect to cache group

Hi,
I have two TimesTen DSNs. If I add a new cache group in the first DSN, I can see the contents of the new cache group's table.
But in the second DSN, if I add a new cache group and query its table, TimesTen returns zero rows.
Why does this happen, and how can I resolve it?
Thanks

Can you please provide:
1. Details of exact TimesTen version being used
2. DSN definitions for both datastores
3. The steps you are performing at each datastore (in detail).
4. The result (including any errors etc.) after each step.
Thanks,
Chris

Similar Messages

  • Error in creating Cache Group

    Hi,
    When I tried to create a cache group I got the error below:
    CREATE READONLY CACHE GROUP customer_orders
    FROM myuser.customer
    (cust_num NUMBER(6) NOT NULL,
    region VARCHAR2(10),
    name VARCHAR2(50),
    address VARCHAR2(100),
    PRIMARY KEY(cust_num)),
    myuser.orders
    (ord_num NUMBER(10) NOT NULL,
    cust_num NUMBER(6) NOT NULL,
    when_placed DATE NOT NULL,
    when_shipped DATE NOT NULL,
    PRIMARY KEY(ord_num),
    FOREIGN KEY(cust_num) REFERENCES myuser.customer(cust_num)) ;
    5220: Permanent Oracle connection failure error in OCIServerAttach(): ORA-12154: TNS:could not resolve the connect identifier specified rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "orcl_db", uid = "XXXXXXX", pwd is hidden, TNS_ADMIN = "C:\TimesTen11.2.2", ORACLE_HOME= ""
    But my Oracle database name is MYdatabase.
    Oracle LSNRCTL
    LSNRCTL> status
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1522)))
    STATUS of the LISTENER
    Alias LISTENER
    Version TNSLSNR for 32-bit Windows: Version 11.2.0.2.0 - Production
    Start Date 07-AUG-2012 10:31:38
    Uptime 4 days 3 hr. 1 min. 55 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File C:\TimesTen11.2.2\listener.ora
    Listener Log File E:\app\XXXXXXX\diag\tnslsnr\localhost\listener\alert\log.xml
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1522ipc)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1522)))
    Services Summary...
    Service "MYdatabaseXDB" has 1 instance(s).
    Instance "MYdatabase", status READY, has 1 handler(s) for this service...
    Service "MYdatabase" has 1 instance(s).
    Instance "MYdatabase", status READY, has 1 handler(s) for this service...
    Service "orcl" has 1 instance(s).
    Instance "orcl", status UNKNOWN, has 1 handler(s) for this service...
    The command completed successfully
    How do I change OracleNetServiceName = "orcl_db" to OracleNetServiceName = "MYdatabase"?
    Thanks!

    You should create the cache groups using the cache admin user, not the object owner user.
    In Oracle DB:
    SQL> @grantCacheAdminPrivileges "cacheadmin"
    Please enter the administrator user id
    The value chosen for administrator user id is cacheadmin
    ***************** Initialization for cache admin begins ******************
    0. Granting the CREATE SESSION privilege to CACHEADMIN
    1. Granting the TT_CACHE_ADMIN_ROLE to CACHEADMIN
    2. Granting the DBMS_LOCK package privilege to CACHEADMIN
    3. Granting the RESOURCE  privilege to CACHEADMIN
    4. Granting the CREATE PROCEDURE  privilege to CACHEADMIN
    5. Granting the CREATE ANY TRIGGER  privilege to CACHEADMIN
    6. Granting the DBMS_LOB package privilege to CACHEADMIN
    7. Granting the SELECT on SYS.ALL_OBJECTS privilege to CACHEADMIN
    8. Granting the SELECT on SYS.ALL_SYNONYMS privilege to CACHEADMIN
    9. Checking if the cache administrator user has permissions on the default
    tablespace
         Permission exists
    11. Granting the CREATE ANY TYPE privilege to CACHEADMIN
    ********* Initialization for cache admin user done successfully *********
    SQL>
    In TimesTen:
    Command> CREATE USER cacheadmin IDENTIFIED BY oracle;
    User created.
    Command> GRANT CREATE SESSION, CACHE_MANAGER, CREATE ANY TABLE, DROP ANY TABLE TO cacheadmin;
    Command>
    Command> CREATE USER oratt IDENTIFIED BY oracle;
    User created.
    Command> grant create session to oratt;
    Command>
    [oracle@tt1 ~]$ ttIsql "DSN=db_cache;UID=cacheadmin;PWD=oracle;OraclePWD=oracle"
    Copyright (c) 1996-2010, Oracle.  All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    connect "DSN=db_cache;UID=cacheadmin;PWD=oracle;OraclePWD=oracle";
    Connection successful: DSN=db_cache;UID=cacheadmin;DataStore=/u01/app/oracle/datastore/db_cache;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=/u01/app/oracle/product/11.2.1/TimesTen/tt1/lib/libtten.so;PermSize=100;TempSize=32;TypeMode=0;CacheGridEnable=0;OracleNetServiceName=ORCL;
    (Default setting AutoCommit=1)
    Command> call ttCacheUidPwdSet('cacheadmin','oracle');
    Command>
    Command> CREATE READONLY CACHE GROUP readcache
           >   AUTOREFRESH INTERVAL
           >   5 SECONDS
           > FROM oratt.readtab (
           >        a NUMBER NOT NULL PRIMARY KEY,
           >        b VARCHAR2(100) );
    Command>
    Additionally, don't forget to issue the grants for the cacheadmin user in the Oracle DB:
    SQL> GRANT SELECT ON readtab TO cacheadmin;
    Grant succeeded.
    Regards,
    Gennady
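    As a side note on the original question about changing OracleNetServiceName from "orcl_db" to "MYdatabase": that value comes from the OracleNetServiceName attribute of the DSN definition (set via the ODBC Data Source Administrator on Windows, or in sys.odbc.ini on Unix/Linux), and it must name a service resolvable through the tnsnames.ora found under TNS_ADMIN. A minimal Unix-style sketch, with a hypothetical DSN name and paths:
    [my_cache_dsn]
    Driver=/path/to/TimesTen/lib/libtten.so
    DataStore=/path/to/datastore/my_cache_dsn
    DatabaseCharacterSet=AL32UTF8
    OracleNetServiceName=MYdatabase
    You may need to unload and reload the datastore (with no connections) for the change to take effect.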

  • How does Insert work on a global cache group?

    Hi all, I'm doing some tests to see how many transactions per second TimesTen can process.
    With a normal "direct" configuration I reached 5,200 transactions per second on my machine (a normal Windows workstation).
    Now I'm using global cache groups because we need more than one data source, and they have to stay in sync with each other.
    From what I read in the guide, global cache groups are perfect for this purpose.
    After configuring the two environments with different TimesTen databases (those machines are SUN servers, much better than my workstation :P), I tried a simple test
    of inserts on a single node.
    But I reached only 1,500 transactions per second as the maximum.
    The 5,200 value from testing on my workstation was with a normal dynamic cache group, not a global one. So I was wondering whether this performance issue is related to how the INSERT statement works on a global cache group.
    Some questions:
    1) Before the insert is done on Oracle, does the cache group run some query against the other global cache group to avoid primary key conflicts?
    2) Is any operation performed from one global cache to the others when a statement is sent?
    The two global caches are otherwise working well, locking and changing ownership of cache instances, so no problems detected so far with how they are supposed to work :).
    The problem is only that we need the global cache to be much faster :P, at least the 5,200 transactions per second I reached on my workstation.
    Thanks in advance for any suggestion.
    P.S.: I don't know much about the server configuration (some version of Solaris), but they are good machines anyway :).

    Okay, the rows here are quite large so you need to do some tuning. In the ODBC (DSN) parameters I see that you are using the default log buffer and log file sizes; these are totally inadequate for this kind of workload. You should increase both to a larger value. For this kind of workload typical values would be in the range of 256 MB to 1024 MB for both the log buffer and the log file size. If you are using 32-bit TimesTen you may be constrained in how large you can make these, since the log buffer is part of the overall datastore memory allocation, which on 32-bit platforms is quite limited. On 64-bit TimesTen you have no such restriction (as long as the machine has enough memory). Here is an example of the directives you would use to set both to 1 GB. The key one is the log buffer size, but it is important that LogFileSize is >= LogBufMB.
    [my_ds]
    LogBufMB=1024
    LogFileSize=1024
    For this change to take effect you need to shutdown (unload from memory) and restart (load back into memory) the datastore.
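    To illustrate the unload/reload step, a sketch using the ttAdmin utility (my_ds is the DSN from the example above; make sure all applications have disconnected first):
    ttAdmin -ramPolicy manual my_ds
    ttAdmin -ramUnload my_ds
    ttAdmin -ramLoad my_ds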
    Secondly, it's hard to be sure from your example code, but it looks like maybe you are preparing the INSERT each time you execute it? If that is the case, this is very expensive and unnecessary. You only need to prepare once and can then execute many times, as follows:
    // prepare once, outside the loop
    insPs = connection.prepareStatement("Insert into test.transactions (ID_ ,NUMBE,SHORT_CODE,REQUEST_TIME) Values (?,?,?,?)");
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.execute();
    }
    connection.commit();
    This should improve performance noticeably. If you can get away with only committing every 'N' inserts you will see a further uplift. For example:
    int COMMIT_INTVL = 100;
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.execute();
        if ((i % COMMIT_INTVL) == 0) {
            connection.commit();   // commit every COMMIT_INTVL rows
        }
    }
    connection.commit();           // final commit for any remaining rows
    And lastly, the fastest way of all is to use JDBC batch operations; see the JDBC documentation about batch operations. That will improve insert performance still more.
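    A minimal sketch of the batch approach (reusing the same hypothetical table and getSequence() helper as above; the batch size of 256 is arbitrary):
    int BATCH_SIZE = 256;
    insPs = connection.prepareStatement("Insert into test.transactions (ID_ ,NUMBE,SHORT_CODE,REQUEST_TIME) Values (?,?,?,?)");
    for (int i = 1; i < 1000000; i++) {
        insPs.setString(1, "" + getSequence());
        insPs.setString(2, "TEST_CODE");
        insPs.setString(3, "TT Insert test");
        insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        insPs.addBatch();                    // queue the row instead of executing it immediately
        if ((i % BATCH_SIZE) == 0) {
            insPs.executeBatch();            // send the queued rows in one call
            connection.commit();
        }
    }
    insPs.executeBatch();                    // flush any remaining queued rows
    connection.commit();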
    Lastly, a word of caution. Although you will probably be able to easily achieve more than 5000 inserts per second into TimesTen, TimesTen may not be able to push the data to Oracle at this rate; the rate of push to Oracle is likely to be significantly slower. Thus if you are executing a continuous high-volume insert workload into TimesTen, two things will happen: (a) the datastore will become full and unable to accept any more inserts until you explicitly remove some data, and (b) a backlog will build up (in the TT transaction logs on disk) of data waiting to be pushed to Oracle.
    This kind of setup is not really suited to support sustained high insert levels; you need to look at the maximum that can be sustained for the whole application -> TimesTen -> Oracle pathway. Of course, if the workload is 'bursty' then this may not be an issue at all.
    Chris

  • About cache group

    I have a weird problem: a program can insert data into a TimesTen table without any issue when there is no cache group with Oracle.
    However, it cannot do this when the table is part of a cache group connected to Oracle. Any idea why this happens?
    error message:
    *** ERROR in tt_main.c, line 90:
    *** [TimesTen][TimesTen 7.0.3.0.0 ODBC Driver][TimesTen]TT5102: Cannot load backend library 'libclntsh.so' for Cache Connect.
    OS error message 'ld.so.1: test_C: ???: libclntsh.so: ????: ???????'. -- file "bdbOciFuncs.c", lineno 257,
    procedure "loadSharedLibrary()"
    *** ODBC Error/Warning = S1000, Additional Error/Warning = 5102

    I think I can exclude the above possibilities, as I have checked all the settings mentioned above.
    We can use SQL statements as input, and inserts and queries work at both ends.
    It is only the program that does not work. My "connection string" is the following:
    connstr = DSN=UTEL7;UID=utel7;PWD=utel7;AutoCreate=0;OverWrite=0;Authenticate=1
    Maybe it is a wrong property, a permission, or a switch parameter? Please give some suggestions.
    Thank you very much.
    The create cache group command is:
    Create Asynchronous Writethrough Cache Group utel7_load
    From
    utel7.load(col0 binary_float, col1 binary_float ......
    My odbc.ini is the following:
    # Copyright (C) 1999, 2007, Oracle. All rights reserved.
    # The following are the default values for connection attributes.
    # In the Data Sources defined below, if the attribute is not explicitly
    # set in its entry, TimesTen 7.0 uses the defaults as
    # specified below. For more information on these connection attributes,
    # see the accompanying documentation.
    # Lines in this file beginning with # or ; are treated as comments.
    # In attribute=_value_ lines, the value consists of everything
    # after the = to the end of the line, with leading and trailing white
    # space removed.
    # Authenticate=1 (client/server only)
    # AutoCreate=1
    # CkptFrequency (if Logging == 1 then 600 else 0)
    # CkptLogVolume=0
    # CkptRate=0 (0 = rate not limited)
    # ConnectionCharacterSet (if DatabaseCharacterSet == TIMESTEN8
    # then TIMESTEN8 else US7ASCII)
    # ConnectionName (process argv[0])
    # Connections=64
    # DatabaseCharacterSet (no default)
    # Diagnostics=1
    # DurableCommits=0
    # ForceConnect=0
    # GroupRestrict (none by default)
    # Isolation=1 (1 = read-committed)
    # LockLevel=0 (0 = row-level locking)
    # LockWait=10 (seconds)
    # Logging=1 (1 = write log to disk)
    # LogAutoTruncate=1
    # LogBuffSize=65536 (measured in KB)
    # LogDir (same as checkpoint directory by default)
    # LogFileSize=64 (measured in MB)
    # LogFlushMethod=0
    # LogPurge=1
    # MatchLogOpts=0
    # MemoryLock=0 (HP-UX, Linux, and Solaris platforms only)
    # NLS_LENGTH_SEMANTICS=BYTE
    # NLS_NCHAR_CONV_EXCP=0
    # NLS_SORT=BINARY
    # OverWrite=0
    # PermSize=2 (measured in MB; default is 2 on 32-bit, 4 on 64-bit)
    # PermWarnThreshold=90
    # Preallocate=0
    # PrivateCommands=0
    # PWD (no default)
    # PWDCrypt (no default)
    # RecoveryThreads=1
    # SQLQueryTimeout=0 (seconds)
    # Temporary=0 (data store is permanent by default)
    # TempSize (measured in MB; default is derived from PermSize,
    # but is always at least 6MB)
    # TempWarnThreshold=90
    # TypeMode=0 (0 = Oracle types)
    # UID (operating system user ID)
    # WaitForConnect=1
    # Oracle Loading Attributes
    # OracleID (no default)
    # OraclePWD (no default)
    # PassThrough=0 (0 = SQL not passed through to Oracle)
    # RACCallback=1
    # TransparentLoad=0 (0 = do not load data)
    # Client Connection Attributes
    # ConnectionCharacterSet (if DatabaseCharacterSet == TIMESTEN8
    # then TIMESTEN8 else US7ASCII)
    # ConnectionName (process argv[0])
    # PWD (no default)
    # PWDCrypt (no default)
    # TTC_Server (no default)
    # TTC_Server_DSN (no default)
    # TTC_Timeout=60
    # UID (operating system user ID)
    [ODBC Data Sources]
    TT_tt70=TimesTen 7.0 Driver
    TpcbData_tt70=TimesTen 7.0 Driver
    TptbmDataRepSrc_tt70=TimesTen 7.0 Driver
    TptbmDataRepDst_tt70=TimesTen 7.0 Driver
    TptbmData_tt70=TimesTen 7.0 Driver
    BulkInsData_tt70=TimesTen 7.0 Driver
    WiscData_tt70=TimesTen 7.0 Driver
    RunData_tt70=TimesTen 7.0 Driver
    CacheData_tt70=TimesTen 7.0 Driver
    Utel7=TimesTen 7.0 Driver
    TpcbDataCS_tt70=TimesTen 7.0 Client Driver
    TptbmDataCS_tt70=TimesTen 7.0 Client Driver
    BulkInsDataCS_tt70=TimesTen 7.0 Client Driver
    WiscDataCS_tt70=TimesTen 7.0 Client Driver
    RunDataCS_tt70=TimesTen 7.0 Client Driver
    # Instance-Specific System Data Store
    # A predefined instance-specific data store reserved for system use.
    # It provides a well-known data store for use when a connection
    # is required to execute commands.
    [TT_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/TT_tt70
    DatabaseCharacterSet=US7ASCII
    # Data source for TPCB
    # This data store is created on connect; if it doesn't already exist.
    # (AutoCreate=1 and Overwrite=0). For performance reasons, database-
    # level locking is used. However, logging is turned on. The initial
    # size is set to 16MB.
    [TpcbData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TpcbData
    DatabaseCharacterSet=US7ASCII
    PermSize=16
    WaitForConnect=0
    Authenticate=0
    # Data source for TPTBM demo
    # This data store is created everytime the benchmark is run.
    # Overwrite should always be 0 for this benchmark. All other
    # attributes may be varied and performance under those conditions
    # evaluated. The initial size is set to 20MB and durable commits are
    # turned off.
    [TptbmData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TptbmData
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Source data source for TPTBM demo in replication mode
    # This data store is created everytime the replication benchmark demo
    # is run. This datastore is set up for the source data store.
    [TptbmDataRepSrc_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TptbmDataRepSrc_tt70
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Destination data source for TPTBM demo in replication mode
    # This data store is created everytime the replication benchmark demo
    # is run. This datastore is set up for the destination data store.
    [TptbmDataRepDst_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TptbmDataRepDst_tt70
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Data source for BULKINSERT demo
    # This data store is created on connect; if it doesn't already exist
    # (AutoCreate=1 and Overwrite=0).
    [BulkInsData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/BulkInsData
    DatabaseCharacterSet=US7ASCII
    LockLevel=1
    PermSize=32
    WaitForConnect=0
    Authenticate=0
    # Data source for WISCBM demo
    # This data store is created on connect if it doesn't already exist
    # (AutoCreate=1 and Overwrite=0). For performance reasons,
    # database-level locking is used. However, logging is turned on.
    [WiscData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/WiscData
    DatabaseCharacterSet=US7ASCII
    LockLevel=1
    PermSize=16
    WaitForConnect=0
    Authenticate=0
    # Default Data source for TTISQL demo and utility
    # Use default options.
    [RunData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/RunData
    DatabaseCharacterSet=US7ASCII
    Authenticate=0
    # Sample Data source for the xlaSimple demo
    # see manual for discussion of this demo
    [Sample_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/Sample
    DatabaseCharacterSet=US7ASCII
    TempSize=16
    PermSize=16
    Authenticate=0
    # Sample data source using OracleId.
    [CacheData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/CacheData
    DatabaseCharacterSet=US7ASCII
    OracleId=MyData
    PermSize=16
    # New data source definitions can be added below. Here is my datastore!!!
    [Utel7]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/tt70_data/utel7
    DatabaseCharacterSet=ZHS16GBK
    Uid=utel7
    Authenticate=0
    OracleID=db3
    OraclePWD=utel7
    PermSize=6000
    Connections=20
    #permsize*20%
    TempSize=400
    CkptFrequency=600
    CkptLogVolume=256
    LogBuffSize=256000
    LogFileSize=256
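    Regarding the TT5102 "Cannot load backend library 'libclntsh.so'" error at the top of this post: that message generally means the connecting process cannot find the Oracle client shared library at run time. A hedged sketch of the usual environment check (the ORACLE_HOME path here is hypothetical):
    export ORACLE_HOME=/oracle/app/oracle/product/10.2.0
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
    The directory containing libclntsh.so must be on LD_LIBRARY_PATH for the program making the connection (and, for agent-side connections, in the environment the TimesTen daemon was started with).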

  • Drop cache group in timesten 11.2.1

    Hello,
    I am trying to drop an asynchronous cache group in timesten. I follow the below steps to do so:
    a) I use the connection string with the DSN, UID, PWD, OracleID, OraclePWD specified
    b) If replication policy is 'always', change it to 'manual'
    c) Stop replication
    d) Drop the AWT cache group (drop cache group cachegroupname;)
    e) Create the modified AWT
    f) Start replication
    g) Set replication policy back to 'always'
    After step (d), I get the following error:
    Command> drop cache group cachegroupname;
    5219: Temporary Oracle connection failure error in OCIServerAttach(): ORA-12541: TNS:no listener rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "servicename", uid = "inputuid", pwd is hidden, TNS_ADMIN = "/opt/TT/linux/info", ORACLE_HOME= "/opt/TT/linux/ttoracle_home/instantclient_11_1"
    5109: Cache Connect general error: BDB connection not open.
    The command failed.
    Command>
    Does the error suggest that cache connect has a problem? Should I restart the timesten daemon and try again? Please let me know what the real problem is.
    Let me know if you need information.
    Thanks,
    V

    The SQL*Plus problem is simply because you don't have all the correct directories listed in LD_LIBRARY_PATH. It's likely that your .profile (or equivalent) was setting those based on ORACLE_HOME, and if this is now unset that could be the problem. Check that LD_LIBRARY_PATH is set properly and this problem will go away.
    The character set issue is potentially more problematic. It is mandatory that the database character set used by TimesTen exactly matches that of the Oracle DB when TimesTen is being used as a cache. If the character sets truly are different then this is very serious and you need to rectify it, as many things will fail otherwise. You either need to switch the Oracle DB back to US7ASCII (probably a big job) or change the TT character set to WE8MSWIN1252.
    To accomplish the latter you would:
    1. Take a backup of the TT datastore using ttBackup (just for safety).
    2. For any non-cache tables (i.e. TT only tables), unload data to flat files using ttBulkCp -o ...
    3. Save the schema for the datastore using ttSchema.
    4. Stop cache and replication agents.
    5. Ensure datastore is unloaded from memory and then destroy the datastore (ttDestroy)
    6. Edit sys.odbc.ini to change Datastore character set.
    7. Connect to datastore as instance administrator (to create datastore). Create all necessary users and grant required privileges.
    8. Set the cache userid/password (call ttCacheUidPwdSet(...,...))
    9. Start the cache agent.
    10. Run the SQL script generated by ttSchema to re-create all database objects (tables and cache groups etc.)
    11. Re-populate all non-cache tables from the flat files using ttBulkCp -i
    12. Re-load all cache groups using LOAD CACHE GROUP ...
    13. restart replication agent.
    That's pretty much it (hopefully I have not missed out any vital step).
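    For reference, here is a command-level sketch of the main utilities mentioned above (the DSN my_dsn, the table name and the file paths are just placeholders):
    ttBackup -dir /backups/my_dsn my_dsn
    ttBulkCp -o my_dsn myuser.tt_only_tab /tmp/tt_only_tab.dat
    ttSchema my_dsn > /tmp/my_dsn_schema.sql
    ttDestroy /path/to/datastore/my_dsn
    ttBulkCp -i my_dsn myuser.tt_only_tab /tmp/tt_only_tab.dat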
    Chris

  • Active standby cache group replication on same machine

    Hi,
    To validate a replication design, I am trying to set up an active standby pair of a cache group on my Linux (Ubuntu) laptop. I have set up two datastores on the same TT instance - cgdsn and master2.
    I created a usermanaged cache group on the active TT DS and started active standby replication using the following SQL:
    command>create active standby pair cgdsn on "simu-t61" , master2 on "simu-t61" return receipt STORE cgdsn PORT 21000 TIMEOUT 30 STORE master2 PORT 20000 TIMEOUT 30;
    I read in an earlier thread that replicating on same machine will require ports to be specifically provided to the replication command.
    Here is the output to check the status of the master datastore:
    <output>
    oracle@simu-t61:~$ ttrepadmin -showstatus cgdsn_tt70
    Enter password for 'ora':
    Replication Agent Status as of: 2010-03-14 20:22:43
    DSN : cgDSN_tt70
    Process ID : 18496 (Started)
    Replication Agent Policy : manual
    Host : SIMU-T61
    RepListener Port : 21000
    Last write LSN : 0.19340024
    Last LSN forced to disk : 0.19340024
    Replication hold LSN : 0.19320752
    Replication Peers:
    Name : MASTER2
    Host : SIMU-T61
    Port : 0 (Not Connected)
    Replication State : STARTED
    Communication Protocol : 24
    TRANSMITTER thread(s):
    For : MASTER2
    Start/Restart count : 1
    Send LSN : 0.19320752
    Transactions sent : 0
    Total packets sent : 0
    Tick packets sent : 0
    Total Packets received: 0
    </output>
    I then tried to duplicate the standby but got the following error
    <output>
    oracle@simu-t61:~$ ttRepAdmin -duplicate -from cgdsn_tt70 -host "simu-t61" -keepCG -cacheUid ora -cachePwd ora -localhost "simu-t61" -verbosity 2 "dsn=master2_tt70;UID=;PWD="
    Enter password for 'ora':
    20:27:02 Contacting remote main daemon at 127.0.1.1 port 17000
    20:27:02 Duplicate Operation Ends
    TT12039: Could not get port number of TimesTen replication agent on remote host. Either the replication agent was not started, or it was just started and has not communicated its port number to the TimesTen daemon
    </output>
    From log file
    <output>
    00:30:33.20 Info: : 8143: Got hello from pid 20405, type utility (/usr/lib/oracle/xe/TimesTen/tt70/bin/ttRepAdminCmd -duplicate -from cgdsn_tt70 -host simu-t61 -keepCG -cacheUid ora -cachePwd ora -localhost simu-t61 -verbosity 2 dsn=master2_tt70;UID=;PWD= )
    00:30:33.20 Info: : 8143: Accepting incoming message from 127.0.1.1 with remote protocol (we are TimesTen 7.0.5.0.0.tt70, they are TimesTen 7.0.5.0.0.tt70 remote)
    00:30:33.20 Info: : 8143: 20405 ------------------: Utility program registering
    00:30:33.20 Info: : 8143: maind: done with request #1336.4606
    00:30:33.20 Info: : 8143: maind 1336: socket closed, calling recovery (last cmd was 4607)
    00:30:33.20 Info: : 8143: Starting daRecovery for 20405
    00:30:33.20 Info: : 8143: Finished daRecovery for pid 20405.
    00:30:33.20 Info: : 8143: maind got #1335.4608 from 20405, not in restore: path=/tmp/master2
    00:30:33.20 Err : : 8143: TT14000: TimesTen daemon internal error: Got notInRestore command with unknown data store '/tmp/master2'
    00:30:33.20 Info: : 8143: maind: done with request #1335.4608
    00:30:33.20 Info: : 8143: maind 1335: socket closed, calling recovery (last cmd was 4608)
    00:30:33.20 Info: : 8143: Starting daRecovery for 20405
    00:30:33.20 Info: : 8143: 20405 ------------------: process exited
    00:30:33.20 Info: : 8143: Finished daRecovery for pid 20405.
    00:30:34.14 Info: REP: 20246: CGDSN:transmitter.c(1358): TT16114: Attempting to connect to MASTER2 on SIMU-T61 (127.0.1.1); port: 20000
    00:30:35.14 Info: REP: 20246: CGDSN:transmitter.c(1358): TT16114: Attempting to connect to MASTER2 on SIMU-T61 (127.0.1.1); port: 20000
    00:30:36.14 Info: REP: 20246: CGDSN:transmitter.c(1358): TT16114: Attempting to connect to MASTER2 on SIMU-T61 (127.0.1.1); port: 20000
    0
    </output>
    Output from netstat
    <output>
    oracle@simu-t61:~$ netstat -a | grep 21000
    tcp 0 0 *:21000 *:* LISTEN
    </output>
    Am I somehow supposed to provide the port number of my active ds in the duplicate command?
    thanks,
    Raj
    Edited by: user8936481 on Mar 14, 2010 9:41 PM

    Thanks Chris and jspalmer, changing the -from fixed the problem! This and the documentation are the only places where I seem to get any help on TT....
    Now, I moved from my previous problem to the next one :-) Here is the output
    <output>
    oracle@simu-t61:~$ ttRepAdmin -duplicate -from cgdsn -host "simu-t61" -uid ora -pwd ora -keepCG -cacheUid ora -cachePwd ora -localhost "simu-t61" -verbosity 2 "dsn=master2_tt70;UID=;PWD="
    Enter password for 'ora':
    09:21:48 Contacting remote main daemon at 127.0.1.1 port 17000
    09:21:48 Contacting the replication agent for CGDSN ON SIMU-T61 (127.0.1.1) port 21000
    09:21:48 Beginning transfer from CGDSN ON SIMU-T61 to MASTER2 ON SIMU-T61
    09:21:59 Checkpoint transfer 10 percent complete
    09:21:59 Checkpoint transfer 100 percent complete
    09:21:59 Checkpoint transfer phase complete
    09:22:00 Log transfer 100 percent complete
    09:22:00 Log transfer phase complete
    09:22:00 Transfer complete
    09:22:04 Duplicate Operation Ends
    TT12078: Failed to reset is_local_store
    TT12078: TT15001: User lacks privilege WRITE -- file "comp.c", lineno 4620, procedure "sbPtCheckPriv". File: repSelf.c, line: 946
    </output>
    My TT internal user was created with following commands:
    ttIsql TT_tt70
    Command> CREATE USER ora IDENTIFIED BY 'ora';
    Command> GRANT ADMIN, DDL TO ora;
    Reading the error I granted 'WRITE' to the user and ran the -duplicate command again but it told me :
    <output>
    09:26:39 Duplicate Operation Ends
    TT16231: The duplicate operation on this store was not successfully completed -- file "db.c", lineno 10493, procedure "sbDbConnect"
    </output>
    That's probably because I had already run the command before granting WRITE. So I tried to drop the master2 but it tells me that cache groups must be dropped first. So I tried to ttIsql into master2 but it would not let me do that either! Again it shows me the error:
    <output>
    oracle@simu-t61:~$ ttisql master2_tt70
    Copyright (c) 1996-2008, Oracle. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    All commands must end with a semicolon character.
    connect "DSN=master2_tt70";
    Enter password for 'ora':
    16231: The duplicate operation on this store was not successfully completed
    The command failed.
    Done.
    </output>
    What shall I try next?

  • Synchronous Writethrough Cache Group in 11.2.1.6.1

    Hello,
    I have created the following SWT cachegroup -
    Cache Group CACHEADM.ENT_GRP_SWT_CG:
      Cache Group Type: Synchronous Writethrough
      Autorefresh: No
      Aging: No aging defined
      Root Table: TRAQDBA.ENT_GRP
      Table Type: Propagate
    based on this DDL:
    create synchronous writethrough cache group CACHEADM.ENT_GRP_SWT_CG
    from
        TRAQDBA.ENT_GRP (
                GRP_REPRESENTATIVE     NUMBER(12) NOT NULL DEFAULT 0,
                GRP_PARENT_KEY         NUMBER(12) NOT NULL,
                GRP_MEMBER_KEY         NUMBER(12) NOT NULL,
                EVIDENCE_KEY           NUMBER,
                NAME_SCORE             NUMBER(3) NOT NULL,
                ADDRESS_SCORE          NUMBER(3),
                DATE_TIME              TIMESTAMP(6),
                ALERT_ID               NUMBER,
                OLD_GRP_REPRESENTATIVE NUMBER(12),
            primary key (GRP_MEMBER_KEY));
    When I attempt to insert a row into the cache group via ttIsql I get the following error -
    Command> insert into ent_grp values (1,1,1,null,100,null,sysdate,null,null);
    5213: Bad Oracle login error in OCISessionBegin(): ORA-01017: invalid username/password; logon denied rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "TRAQQA.world", uid = "TRAQDBA", pwd is hidden, TNS_ADMIN = "/app1/oracle/network", ORACLE_HOME= "/opt/oracle-local/home/oracle"
    The command failed.
    When connecting via ttIsql I supplied the TimesTen table owner userid and password. I know that if I supply the Oracle password at connect time the insert will work.
    I have AWT cachegroups defined, for example -
    Cache Group CACHEADM.EXCLUDED_GRP_ENTITY_AWT_CG:
      Cache Group Type: Asynchronous Writethrough
      Autorefresh: No
      Aging: No aging defined
      Root Table: TRAQDBA.EXCLUDED_GRP_ENTITY
      Table Type: Propagate
    and I can insert into these cache groups without needing to supply the Oracle password. For example -
    Command> autocommit 0
    Command> insert into EXCLUDED_GRP_ENTITY values (1,'1',1,1,1,sysdate,1);
    1 row inserted.
    Is this behaviour correct? Do I need to supply the Oracle password for SWT cache groups but not for AWT cache groups?
    Thanks in advance.
    Mark
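    (For reference, supplying the Oracle password at connect time, as mentioned above, looks roughly like this; the DSN name and passwords are placeholders:)
    ttIsql "DSN=my_cache_dsn;UID=TRAQDBA;PWD=<timesten_password>;OraclePWD=<oracle_password>"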

    Hi Chris,
    Thanks very much for the information. One other thing we have noticed with the SWT cache groups that you may be able to comment on: we are using stored procedures in the cache as APIs on each of our tables. The stored procedures are owned by the same schema that owns the tables underneath the cache groups. We have an application user that has execute permission on the stored procedures, and our Java application connects as the application user to execute them. We have found that when, for example, we insert into the SWT cache group via the stored procedures, a connection is made to the Oracle server using the userid that owns the stored procedure, not the userid that we connected to TimesTen with. In our case the connection fails, as we are supplying the application user's Oracle password in OraclePWD and not the schema owner's password. Is this the expected behaviour when using stored procedures?
    Regards
    Mark
    Edited by: user557876 on Apr 7, 2011 10:21 PM

  • XAER_RMERR error when configuring readonly cache group

    Hello,
    I am having some problem with XA transaction controll on websphere.
    error log looks like below.
    ====================================================================
    [09. 6. 26 13:28:15:511 KST] 0000002b ServiceLogger I com.ibm.ws.ffdc.IncidentStreamImpl open FFDC0009I: FFDC opened the incident stream file /was/WebSphere61/AppServer/profiles/AppSrv01/logs/ffdc/server1_0000002b_09.06.26_13.28.15_2.txt.
    [09. 6. 26 13:28:15:536 KST] 0000002b ServiceLogger I com.ibm.ws.ffdc.IncidentStreamImpl resetIncidentStream FFDC0010I: FFDC closed the incident stream file /was/WebSphere61/AppServer/profiles/AppSrv01/logs/ffdc/server1_0000002b_09.06.26_13.28.15_2.txt.
    [09. 6. 26 13:28:15:551 KST] 0000002b ServiceLogger I com.ibm.ws.ffdc.IncidentStreamImpl open FFDC0009I: FFDC opened the incident stream file /was/WebSphere61/AppServer/profiles/AppSrv01/logs/ffdc/server1_0000002b_09.06.26_13.28.15_3.txt.
    [09. 6. 26 13:28:15:555 KST] 0000002b ServiceLogger I com.ibm.ws.ffdc.IncidentStreamImpl resetIncidentStream FFDC0010I: FFDC closed the incident stream file /was/WebSphere61/AppServer/profiles/AppSrv01/logs/ffdc/server1_0000002b_09.06.26_13.28.15_3.txt.
    [09. 6. 26 13:28:15:557 KST] 0000002b RegisteredRes E WTRN0078E: An error occurred while the transaction manager was attempting to call start on a transactional resource. The error code was XAER_RMERR. The exception stack trace follows: javax.transaction.xa.XAException: errorCode=XAER_RMERR, a resource manager error has occured in the transaction branch.
         at com.timesten.jdbc.xa.XAJdbcOdbc.XAStandardError(XAJdbcOdbc.java:298)
         at com.timesten.jdbc.xa.XAJdbcOdbc.SQLXAStart(XAJdbcOdbc.java:67)
         at com.timesten.jdbc.xa.TimesTenXAResource.start(TimesTenXAResource.java:273)
         at com.ibm.ws.rsadapter.spi.WSRdbXaResourceImpl.start(WSRdbXaResourceImpl.java:1417)
         at com.ibm.ejs.j2c.XATransactionWrapper.start(XATransactionWrapper.java:1467)
         at com.ibm.ws.Transaction.JTA.JTAResourceBase.start(JTAResourceBase.java:145)
         at com.ibm.ws.Transaction.JTA.RegisteredResources.startRes(RegisteredResources.java:1240)
         at com.ibm.ws.Transaction.JTA.RegisteredResources.enlistResource(RegisteredResources.java:648)
         at com.ibm.ws.Transaction.JTA.TransactionImpl.enlistResource(TransactionImpl.java:3294)
         at com.ibm.ws.Transaction.JTA.TranManagerSet.enlist(TranManagerSet.java:405)
         at com.ibm.ejs.j2c.XATransactionWrapper.enlist(XATransactionWrapper.java:693)
         at com.ibm.ejs.j2c.ConnectionManager.lazyEnlist(ConnectionManager.java:1909)
         at com.ibm.ws.rsadapter.spi.WSRdbManagedConnectionImpl.lazyEnlist(WSRdbManagedConnectionImpl.java:2219)
         at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.beginTransactionIfNecessary(WSJdbcConnection.java:643)
         at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.prepareStatement(WSJdbcConnection.java:2083)
         at com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.prepareStatement(WSJdbcConnection.java:2038)
         at com.huni.framework.front.dao.HmmDAO.showPrg(HmmDAO.java:181)
         at com.huni.framework.ejbcommand.CommandExecuter.initCommand(CommandExecuter.java:53)
         at com.huni.framework.ejbcommand.CommandExecuter.executeCommand(CommandExecuter.java:27)
         at com.huni.framework.front.bean.HMMFacadeBean.executeCommand(HMMFacadeBean.java:104)
         at com.huni.framework.front.bean.EJSRemoteStatelessHMMFacade_905ddb17.executeCommand(Unknown Source)
         at com.huni.framework.front.bean._HMMFacade_Stub.executeCommand(_HMMFacade_Stub.java:267)
         at com.huni.common.servlet.HmmServlet.doPost(HmmServlet.java:103)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
         at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1143)
         at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:591)
         at com.ibm.ws.wswebcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:481)
         at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:3453)
         at com.ibm.ws.webcontainer.webapp.WebGroup.handleRequest(WebGroup.java:267)
         at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:815)
         at com.ibm.ws.wswebcontainer.WebContainer.handleRequest(WebContainer.java:1466)
         at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:119)
         at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:458)
         at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:387)
         at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.ready(HttpInboundLink.java:267)
         at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.sendToDiscriminators(NewConnectionInitialReadCallback.java:214)
         at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:113)
         at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165)
         at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
         at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
         at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:136)
         at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:196)
         at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:751)
         at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:881)
         at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1473)
    =================================================================================
    Having seen a similar forum message saying that XA transactions are not allowed with replication configured, I am wondering whether a readonly cache group is also affected.
    Is it not allowed either?
    best regards,
    Yu-Mi

    Cache Connect is not supported with XA transactions. This is mentioned in the Docs, albeit only in the Error Messages section (Error 11037 per my 7.0.3. copy).

  • RAC degradation by cache group

    Hi:
    Since three weeks ago, I have been seeing that my Oracle RAC database has been degraded.
    According to Enterprise Manager, we have found that the TimesTen process holds many commit enqueues. On the Oracle RAC there is an application which is performing close to 200 inserts per second. But at some moment in time, the TimesTen enqueues cause this application's inserts to also queue on commits, so my Oracle application gets delays on commits.
    The cache groups from TimesTen to Oracle are declared as readonly using autorefresh, with the refresh executing on a 1-minute interval. I have 6 cache groups using this configuration; in fact I have 3 databases using those cache groups, plus replicas for each of them.
    Has someone had this kind of issue?

    It's just a numerical designation for that particular protocol version of Cache Connect.

  • Proxy account for Cache group creation

    I have a customer that wants to use a proxy account on the database to create TimesTen cache groups. Schema_owner owns all the base tables, and app_user has select, insert, delete, update privileges on the base tables as well as private synonyms. The customer wants to use the app_user account to configure the Cache Connect settings and create cache groups. Is this possible and supported?
    thanks

    It depends on the type of cache group that you are creating. The quick start section of the TimesTen Cache Connect Guide (cacheconnect.pdf) gives full details of the type of Oracle users required, and the privileges they must have.
    Chris

  • Fatal error 78: Cannot connect to User Group LDAP Server

    After configuring Calendar Server, trying to start it gives the following error:
    # ./start-cal
    Restarting calendar services
    Stopping all calendar services
    Starting all calendar services
    # enpd is started
    csnotifyd is started
    csadmind is started
    Fatal error 78: Cannot connect to User Group LDAP Server
    cshttpd is not started
    Calendar service(s) not started
    cshttpd is not started
    Calendar service(s) not started
    Following logs are from http logs of calendar server
    [13/Sep/2004:22:02:47 +0100] Vigor11 cshttpd[17916]: General Information: Log created (1095109367)
    [13/Sep/2004:22:02:47 +0100] Vigor11 cshttpd[17916]: General Notice: Sun Java System Calendar Server 6 2004Q2 (built Apr 28 2004) cshttpd starting up
    [13/Sep/2004:22:02:47 +0100] Vigor11 cshttpd[17916]: General Notice: cshttpd attempting to open Counters Database
    [13/Sep/2004:22:02:47 +0100] Vigor11 cshttpd[17916]: General Notice: cshttpd successfully opened the Counters Database
    [13/Sep/2004:22:02:48 +0100] Vigor11 cshttpd[17916]: General Notice: HTTP Module is refreshing
    [13/Sep/2004:22:02:48 +0100] Vigor11 cshttpd[17916]: General Notice: cshttpd is refreshing
    [13/Sep/2004:22:02:48 +0100] Vigor11 cshttpd[17916]: General Notice: cshttpd is refreshed
    [13/Sep/2004:22:02:48 +0100] Vigor11 cshttpd[17916]: General Notice: HTTP Module has refreshed
    [13/Sep/2004:22:02:48 +0100] Vigor11 cshttpd[17916]: General Notice: cshttpd: argc=3 argv[0]=/opt/SUNWics5/cal/lib/cshttpd
    [13/Sep/2004:22:02:48 +0100] Vigor11 cshttpd[17916]: General Notice: session_init: attempting to open session database for cshttpd
    [13/Sep/2004:22:02:49 +0100] Vigor11 cshttpd[17916]: General Notice: session_init: session database open completed for cshttpd
    [13/Sep/2004:22:02:49 +0100] Vigor11 cshttpd[17916]: Store Critical: Error checking session database: DB->set_alloc: method not permitted in shared environment
    [13/Sep/2004:22:02:49 +0100] Vigor11 cshttpd[17916]: General Notice: LdapCacheInit: Ldap Cache not enabled.
    [13/Sep/2004:22:02:49 +0100] Vigor11 cshttpd[17916]: General Notice: cshttpd_parse_commandline: successfully bind process 17916 to processor 0
    [13/Sep/2004:22:02:49 +0100] Vigor11 cshttpd[17916]: General Critical: Fatal error 78: Cannot connect to User Group LDAP Server
    Has anybody seen this before?
    Regards

    The server was running fine for a few months until I restarted the calendar server. I started to see the same error, and the problem was that the machine name had changed at some point.
    I added the old hostname to the /etc/hosts file, restarted the calendar server, and it started to work fine.
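    (A sketch of the kind of /etc/hosts entry involved; the IP address and host names are placeholders:)
    192.0.2.10   current-hostname   old-hostname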

  • IMDB Cache group load and long running transaction

    Hello,
    We are investigating the use of IMDB Cache to cache a number of large Oracle tables. When loading the cache I have noticed logs accumulating and I am not quite sure why this should be. I have a read-only cache group consisting of 3 tables with approximately 88 million, 74 million and 570 million rows respectively. To load the cache group I run the following -
    LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows -
    Command> call ttLogHolds ;
    < 0, 12161024, Long-Running Transaction      , 1.1310 >
    < 170, 30025728, Checkpoint                    , Entity.ds0 >
    < 315, 29945856, Checkpoint                    , Entity.ds1 >
    3 rows found.
    I read this as saying that everything from log file 0 to the current one must be kept for the long-running transaction. From what I can see the long-running transaction is the cache group load. Is this expected? I was expecting the COMMIT EVERY in the load cache group to allow the logs to be deleted. I am able to query the contents of the tables at various times during the load, so I can see that the commits are taking place.
    Thanks
    Mark

    Hello,
    I couldn't recall whether I had changed the Autocommit settings when I ran the load so I tried a couple more runs. From what I could see the value of autocommit did not influence how the logs were treated. For example -
    1. Autocommit left as the default -
    Connection successful: DSN=Entity;UID=cacheadm;DataStore=/prod100/oradata/ENTITY/Entity;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so;LogDir=/prod100/oradata/ENTITY;PermSize=66000;TempSize=2000;TypeMode=0;OracleNetServiceName=TRAQPP.world;
    (Default setting AutoCommit=1)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    Logholds shows a long running transaction -
    Command> call ttlogholds ;
    < 0, 11915264, Long-Running Transaction      , 1.79 >
    < 474, 29114368, Checkpoint                    , Entity.ds0 >
    < 540, 1968128, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    2011-01-19 14:10:03.135
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: timestenorad
    28427   0x16fd6910            7.26     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69211971680          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69211971680          TRAQDBA.AADNA
                                                       Command   69211971680          S     69211971680         
                                  8.10029  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.10582  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.10477  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.10332  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.10546  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.10261  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.10637  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.10669  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.10111  Active      Database  0x01312d0001312d00   IX    0                   
    Program File Name: ttIsqlCmd
    29317   0xde257d0             1.79     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211584104          SYS.TABLES
                                                       Command   69211584104          S     69211584104         
    11 outstanding transactions found
    And the commands were
    < 69211971680, 2048, 1, 1, 0, 0, 1392, CACHEADM                       , load cache group CACHEADM.ER_RO_CG commit every 1000 rows parallel 10 _tt_bulkFetch 4096 _tt_bulkInsert 1000 >
    < 69211584104, 2048, 1, 1, 0, 0, 1400, CACHEADM                       , LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 >
    Running the load again with autocommit off -
    Command> AutoCommit
    autocommit = 1 (ON)
    Command> AutoCommit 0
    Command> AutoCommit
    autocommit = 0 (OFF)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    Logholds shows a long running transaction
    Command>  call ttlogholds ;
    < 1081, 6617088, Long-Running Transaction      , 2.50157 >
    < 1622, 10377216, Checkpoint                    , Entity.ds0 >
    < 1668, 55009280, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    er.oracle$ ttXactAdmin entity                                             
    2011-01-20 07:23:54.125
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: ttIsqlCmd
    2368    0x12bb37d0            2.50157  Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211634216          SYS.TABLES
                                                       Command   69211634216          S     69211634216         
    Program File Name: timestenorad
    28427   0x2abb580af2a0        7.2358   Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69212120320          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69212120320          TRAQDBA.AADNA
                                                       Command   69212120320          S     69212120320         
                                  8.24870  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.26055  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.25659  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.25469  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.25694  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.25465  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.25841  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.26288  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.24924  Active      Database  0x01312d0001312d00   IX    0                   
    11 outstanding transactions found
    What I did notice was that TimesTen runs three queries against the Oracle server, the first to select from the parent table, the second to join the parent to the first child and the third to join the parent to the second child. Logholds seems to show a long running transaction once the second query starts. For example, I was monitoring the load of the parent table, checking ttlogholds to watch for a long running transaction. As shown below, a long running transaction entry appeared around 09:01:41 -
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:37 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39278592, Checkpoint                    , Entity.ds1 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    2 rows found.
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:41 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39290880, Long-Running Transaction      , 2.50167 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    < 2929, 65347584, Checkpoint                    , Entity.ds1 >
    3 rows found.
    This roughly matches the time the query that selects the rows for the first child table started in Oracle:
    traqdba@TRAQPP> select sm.sql_id,sql_exec_start,sql_fulltext
      2  from v$sql_monitor sm, v$sql s
      3  where sm.sql_id = 'd6fmfrymgs5dn'
      4  and sm.sql_id = s.sql_id ;
    SQL_ID        SQL_EXEC_START       SQL_FULLTEXT
    d6fmfrymgs5dn 20/JAN/2011 08:59:27 SELECT "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_
                                       MAP"."EVIDENCE_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."EVIDENCE_VALUE", "TRAQDBA"
                                       ."ENT_TO_EVIDENCE_MAP"."CREATED_DATE_TIME" FROM "TRAQDBA"."ENT_TO_EVIDENCE_MAP",
                                        "TRAQDBA"."AADNA" WHERE "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY" = "TRAQDBA
                                       "."AADNA"."ADR_ADDRESS_NAME_KEY"
    Elapsed: 00:00:00.00
    Thanks
    Mark

  • Permission problem while creating cache group

    Hi,
    I am trying to create cache groups in SQL Developer on a TimesTen Cache DB. I have connected to the cache DB using the schema owner/password (say HR/HR). While creating a cache group, I am required to set a cache administrator, which in turn requires this user (that is, the schema user) to have the CACHE_MANAGER permission.
    Is it necessary for the schema user to also be the cache_manager to be able to create a cache group?
    As per the documentation, no such requirement has been mentioned, which means I am probably missing a step in between.
    Also, if I connect to the DB via Cache Manager user (cacheuser/timesten with oracle as ORA pwd), I don't see the HR schema tables while creating a cache group.
    Could someone clarify this?
    Thanks in advance!
    Regards,
    Silky
    Applications Engineer
    Oracle India Pvt. Ltd.

    A cache admin user requires the CACHE_MANAGER privilege to perform all cache operations. It is documented here:
    http://docs.oracle.com/cd/E21901_01/doc/timesten.1122/e21634/prereqs.htm#CHDJBIAE
    Please go through the section "Configuring a TimesTen database to cache Oracle data" for setting up the cache on the TimesTen side.
    "As the instance administrator, use the ttIsql utility to grant the cache manager user cacheuser the required privileges:
    Command> GRANT CREATE SESSION, CACHE_MANAGER, CREATE ANY TABLE TO cacheuser;
    Command> exit "
    Hopefully you have also run the required script on the Oracle side. See the section: Configuring the Oracle database to cache data in TimesTen.
    Regards
    Rajesh

  • User managed cache group

    Hi,
    I have created user managed cache group as follows :
    create usermanaged cache group writewherecache
    AUTOREFRESH
    MODE INCREMENTAL
    INTERVAL 30 SECONDS
    STATE ON
    from interchange.writewhere
    (PK NUMBER NOT NULL primary key,
         ATTR VARCHAR2(40),PROPAGATE)
    where (interchange.writewhere.pk between '105' and '106');
    Oracle has 5 rows in the table, but now from TT the statement 'select * from interchange.writewhere' does not return any results.
    What is the problem?
    Edited by: user11969173 on Nov 4, 2009 2:30 AM

    ttmesg.log shows the following:
    17:34:00.91 Info: ORA: 3049: ora-3049-1077582144-lMarker01387: Datastore: CACHEGENI Log Table Marker marked 0 rows of log table TT_05_87616_L with logseq 2 through 2
    17:34:01.83 Info: ORA: 3049: ora-3049-1107204416-refresh04075: Datastore: CACHEGENI Starting autorefresh number 2092 for interval 5000ms
    17:34:01.83 Info: ORA: 3049: ora-3049-1107204416-refresh04097: Datastore: CACHEGENI Autorefresh thread for interval 5000ms is connected to instance geni11g on host isgcent216. Server handle 46918156651896
    17:34:01.85 Info: ORA: 3049: ora-3049-1107204416-lMarker01387: Datastore: CACHEGENI Log Table Marker marked 0 rows of log table TT_05_87616_L with logseq 2 through 2
    17:34:01.87 Info: ORA: 3049: ora-3049-1107204416-refresh04762: Datastore: CACHEGENI Cache agent refreshed cache group CACHEUSER.READCACHE: Number - 2092, Duration - 0ms, NumRows - 0, NumRootTblRows - 0, NumOracleBytes - 0, queryExecDuration - 0ms, queryFetchDuration - 0ms, ttApplyDuration - 0ms, totalNumRows - 0, totalNumRootTblRows - 0, totalNumOracleBytes - 0, totalDuration - 0ms
    17:34:01.87 Info: ORA: 3049: ora-3049-1107204416-refresh04824: Datastore: CACHEGENI Autorefresh number 2092 finished for interval 5000ms successfully
    17:34:01.87 Info: ORA: 3049: ora-3049-1107204416-fresher01709: Datastore: CACHEGENI Autorefresh number 2092 succeeded for interval 5000 milliseconds
    17:34:05.09 Info: ORA: 3049: ora-3049-1105090880-eporter00385: Datastore: CACHEGENI object_id 89922, bookmark 1
    17:34:05.09 Info: ORA: 3049: ora-3049-1105090880-eporter00385: Datastore: CACHEGENI object_id 89832, bookmark 6
    17:34:05.10 Info: ORA: 3049: ora-3049-1105090880-eporter00385: Datastore: CACHEGENI object_id 87616, bookmark 1
    17:34:05.93 Info: ORA: 3049: ora-3049-1077582144-lMarker01387: Datastore: CACHEGENI Log Table Marker marked 0 rows of log table TT_05_87616_L with logseq 2 through 2
    17:34:06.83 Info: ORA: 3049: ora-3049-1107204416-refresh04075: Datastore: CACHEGENI Starting autorefresh number 2093 for interval 5000ms
    17:34:06.83 Info: ORA: 3049: ora-3049-1107204416-refresh04097: Datastore: CACHEGENI Autorefresh thread for interval 5000ms is connected to instance geni11g on host isgcent216. Server handle 46918156651896
    17:34:06.86 Info: ORA: 3049: ora-3049-1107204416-lMarker01387: Datastore: CACHEGENI Log Table Marker marked 0 rows of log table TT_05_87616_L with logseq 2 through 2
    17:34:06.88 Info: ORA: 3049: ora-3049-1107204416-refresh04762: Datastore: CACHEGENI Cache agent refreshed cache group CACHEUSER.READCACHE: Number - 2093, Duration - 0ms, NumRows - 0, NumRootTblRows - 0, NumOracleBytes - 0, queryExecDuration - 0ms, queryFetchDuration - 0ms, ttApplyDuration - 0ms, totalNumRows - 0, totalNumRootTblRows - 0, totalNumOracleBytes - 0, totalDuration - 0ms
    17:34:06.88 Info: ORA: 3049: ora-3049-1107204416-refresh04824: Datastore: CACHEGENI Autorefresh number 2093 finished for interval 5000ms successfully
    17:34:06.88 Info: ORA: 3049: ora-3049-1107204416-fresher01709: Datastore: CACHEGENI Autorefresh number 2093 succeeded for interval 5000 milliseconds
    17:34:10.91 Info: ORA: 3049: ora-3049-1077582144-lMarker01387: Datastore: CACHEGENI Log Table Marker marked 0 rows of log table TT_05_87616_L with logseq 2 through 2
    17:34:11.83 Info: ORA: 3049: ora-3049-1107204416-refresh04075: Datastore: CACHEGENI Starting autorefresh number 2094 for interval 5000ms
    17:34:11.83 Info: ORA: 3049: ora-3049-1107204416-refresh04097: Datastore: CACHEGENI Autorefresh thread for interval 5000ms is connected to instance geni11g on host isgcent216. Server handle 46918156651896
    and tterrors.log did not show any error messages.
    shubha

  • Maximum number of connection profiles and group policies for Cisco ASA

    Hi,
    We have a Cisco ASA 5520 running 8.0(2) that we use only for Remote Access VPN.
    Does anyone know how many connection profiles and group policies are supported on the box? I have not been able to find this in the manual.
    Thanks in advance for your help!
    Best regards,
    Harry

    There is no limit for connection profiles or group policies that can be configured on ASA. However the numbers do depend upon the memory available in the device as the profiles are stored in memory during execution.
