AWT propagation in TimesTen

Hi Chris,
I have a few general questions related to TimesTen.
1. Can we create STORED Procedures in TimesTen?
2. I have created an AWT cache group, and I want to unload some data from it based on a WHERE clause. Before I unload, I want to make sure that the data was successfully propagated to Oracle. Since AWT propagation is system managed, how do I make sure my propagation was successful?
3. I was getting the following error when I inserted more than 3000 records. My PermSize=16 and TempSize=16.
802: Data store space exhausted
6220: Permanent data partition free space insufficient to allocate 269692 bytes of memory
The command failed.
Command>
I read in one of your TT forum threads that it is recommended to set PermSize=5000 and TempSize=128. I could not increase PermSize to 5000; it gave me an error.
6203: Overflow in converting data store or log file size from megabytes to bytes, or in converting log buffer size from kilobytes to bytes
The command failed.
Done.
For the time being I have increased PermSize to 1000 and the application has been able to insert more than 50,000 records so far. I am afraid I might hit the 802 error again.
Why am I not able to increase PermSize to 5000 even though I have 8 GB of RAM?
4. If the RAM is full, does TT then use the swap area?

Hi,
Here are some answers:
1. Can we create STORED Procedures in TimesTen?
CJ>> No. At present TimesTen does not support stored procedures. Stored procedure support is on the roadmap for a future release.
2. I have created an AWT cache group, and I want to unload some data from it based on a WHERE clause. Before I unload, I want to make sure that the data was successfully propagated to Oracle. Since AWT propagation is system managed, how do I make sure my propagation was successful?
CJ>> AWT propagation is based on the TimesTen replication technology. As soon as a transaction is committed in TimesTen, the data is 'queued' in the transaction logs for AWT push to Oracle. So, it is safe to UNLOAD data at any time, and as long as you do not delete the transaction log files on disk manually (a very bad thing to do for any database...) the queued changes are safe. AWT push to Oracle can only actually send data to Oracle when the TimesTen replication agent is running for the datastore, so the repagent should be running at all times.
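If you want an extra check before unloading, you can confirm the agent is up and then wait for the AWT queue to drain using the ttRepSubscriberWait built-in (the '_ORACLE' subscriber represents the AWT push) before issuing the UNLOAD. A rough ttIsql sketch; the cache group name and WHERE column are placeholders for your own, and the ttRepSubscriberWait argument order should be checked against the reference for your release:
Command> call ttRepStart;
Command> call ttRepSubscriberWait(NULL, NULL, '_ORACLE', NULL, 120);
Command> UNLOAD CACHE GROUP myuser.my_awt_cg WHERE status = 'PROCESSED';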
3. I was getting the following error when I inserted more than 3000 records. My PermSize=16 and TempSize=16.
802: Data store space exhausted
6220: Permanent data partition free space insufficient to allocate 269692 bytes of memory
The command failed.
Command>
I read in one of your TT forum threads that it is recommended to set PermSize=5000 and TempSize=128. I could not increase PermSize to 5000; it gave me an error.
6203: Overflow in converting data store or log file size from megabytes to bytes, or in converting log buffer size from kilobytes to bytes
The command failed.
Done.
For the time being I have increased PermSize to 1000 and the application has been able to insert more than 50,000 records so far. I am afraid I might hit the 802 error again.
Why am I not able to increase PermSize to 5000 even though I have 8 GB of RAM?
CJ>> There is no 'one size fits all' value for PermSize. You need to choose a value that is adequate for the amount of data you need to store in TimesTen and which will fit into the available physical RAM on your machine. You could not use a value of 5000 (5000 MB) as you are running a 32-bit version of TimesTen, and with that the maximum size of the datastore (PermSize + TempSize + LogBuffSize + ~8 MB) must be <= 2 GB. To have a datastore larger than 2 GB you must use 64-bit TimesTen (and be running a supported 64-bit O/S).
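To illustrate the arithmetic (the numbers are only an example, not a recommendation): on 32-bit TimesTen something like PermSize=1700, TempSize=256 and LogBuffSize=65536 (64 MB) comes to roughly 1700 + 256 + 64 + 8 = 2028 MB, which just fits under the 2 GB ceiling, whereas PermSize=5000 cannot. On 64-bit TimesTen the DSN could simply be sized as needed, for example (DSN name and paths are placeholders):
[my_dsn_64]
Driver=/path/to/64bit/lib/libtten.so
DataStore=/path/to/datastore/my_dsn
PermSize=5000
TempSize=128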
4. If the RAM is full, does TT then use the swap area?
CJ>> TimesTen has no concept of data that is not 'in memory'. The only place we store data is in memory. The disk files used by TT (checkpoint and log files) are there purely to provide persistence and recoverability, not to hold 'overflow' data that won't fit in memory. On most operating systems you can define a TT datastore larger than your available memory, but then the O/S will have to page it in and out of memory as required. This will give very bad performance for TimesTen and will also greatly impact the overall performance of the machine. This kind of configuration is absolutely not recommended.
Someone must manage the data to ensure you do not completely fill the available datastore memory. One way to do this is to UNLOAD rows that no longer need to be in TimesTen. Another way is to use the automatic data aging feature in TimesTen 7.0.
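As an illustration of the aging option (a sketch using 11.2-style syntax; the owner, table and columns are placeholders), aging can be declared as part of the cache group definition:
CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP awt_orders
FROM myuser.orders
 (order_id   NUMBER(10) NOT NULL,
  order_date DATE,
  status     VARCHAR2(10),
  PRIMARY KEY (order_id))
AGING LRU ON;
Time-based aging is the other variant; replacing the last line with something like AGING USE order_date LIFETIME 2 DAYS CYCLE 30 MINUTES ON removes rows older than the given lifetime. Check the exact syntax for the release you are running.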
Chris

Similar Messages

  • TimesTen to Oracle AWT Transaction semantics

    Hi,
    I had a question with regard to the transaction semantics maintained between TimesTen and Oracle during an asynchronous write-through from TimesTen to Oracle. Assume we had a single AWT cache group on TimesTen to update a single table on Oracle.
    If on TimesTen, we perform the following transactions -
    1. Insert a row into the TimesTen cache group.
        Commit - end of transaction 1.
    2. Update the same row on TimesTen.
        Commit - end of transaction 2.
    3. Update the same row again on TimesTen.
        Commit - end of transaction 3.
    The question is: would the update to Oracle via AWT consist of 3 transactions (insert followed by commit, update followed by commit, and another update followed by a commit), or would it happen differently? I believe that the ordering (insert followed by one update and then another update) would be maintained. How about the transaction semantics?
    To add further, the TimesTen version we are using is 11.2.2.4.11 running on a 64-bit Linux machine. We have a 2-safe commit architecture, with two instances of TimesTen, a primary (active) and a secondary (standby), and with the AWT push running off the secondary (standby) instance.
    Any input would be very helpful.
    Thank you!

    Thank you very much Chris. Here's what our observation is -
    We have TimesTen to Oracle AWT replication, followed by a Golden Gate replication (set to transactional mode) on the same Oracle table. The transaction activity is as follows:
    1. Insert a row into the TimesTen cache group.
        Commit - end of transaction 1.
    2. Update the same row on TimesTen.
        Commit - end of transaction 2.
    3. Update the same row again on TimesTen.
        Commit - end of transaction 3.
    Now on the Golden Gate end, we expect to receive 3 separate messages/events to indicate that there were 3 separate transactions. However, what we observe is that we mostly receive 2 messages/events, and sometimes only 1 message/event, for the same transaction activity listed above.
    Note that we also have an XLA subscriber on these tables and we get the 3 _COMMIT messages correctly, which I believe indicates that the transactions were correctly persisted in TimesTen. However, since the Golden Gate output is unpredictable, I wanted to know whether AWT, perhaps for efficiency purposes, merges two or more transactions into one.
    In our case, the results coming out of Golden Gate vary:
    1. Sometimes, the insert followed by two updates were received in one event/message.
    2. The insert was received in one event/message, followed by both updates in one more message.
    Is it at all possible that AWT is smart enough to merge transactions? I read this piece - Oracle TimesTen and others Oracle Technologies: Updatable cache and transactional order - and I wanted your thoughts on the same.
    Thanks a ton!

  • AWT cache group with CacheAwtParallelism

    I have some question.
    TTversion : TimesTen Release 11.2.2.3.0 (64 bit Linux/x86_64) (tt112230:53376) 2012-05-24T09:20:08Z
    We are testing an AWT cache group (with CacheAwtParallelism=4).
    A single application process generates DML against TimesTen (DSN=TEST).
    At this point, is the DML applied to Oracle through the 4 parallel threads?
    [TEST]
    Driver=/home/TimesTen/tt112230/lib/libtten.so
    DataStore=/home/TimesTen/DataStore/TEST/test
    PermSize=1024
    TempSize=512
    PLSQL=1
    DatabaseCharacterSet=KO16MSWIN949
    ConnectionCharacterSet=KO16MSWIN949
    OracleNetServiceName=ORACLE
    OraclePWD=tiger
    CachegridEnable=0
    LogBufMB=512
    LogFileSize=1024
    RecoveryThreads=8
    LogBufParallelism=8
    CacheAwtParallelism=4
    ReplicationParallelism=4
    ReplicationApplyOrdering=0
    UID=scott
    PWD=tiger
    Thank you very much.
    GooGyum

    Let me try and elaborate a little on 'parallel AWT' (and parallel replication). AWT uses the TimesTen replication infrastructure to capture changes made to AWT cached tables and propagate those changes to Oracle DB. The replication infrastructure captures changes to tables by mining the TimesTen transaction (redo) logs. The replication/AWT capture/propagate/apply processing is completely decoupled from application transaction execution.
    In TimesTen releases earlier than 11.2.2, the replication infrastructure was completely single threaded in terms of capture/propagate/apply. This means that if you have a TimesTen datastore with several application processes, each with multiple threads, all executing DML against TimesTen, there is just a single replication thread capturing all these changes, propagating them to the target and applying them there. This was clearly a performance bottleneck in some situations. In 11.2.2 the replication infrastructure has been parallelised to improve performance. This is a very difficult task as we still need to guarantee 'correctness' in all scenarios. The implementation tracks both operation and commit order dependencies at the source (i.e. where the transactions are executed) and encodes this dependency information into the replication stream. Changes are captured, propagated and applied in parallel, and on the apply side the dependency information is used to ensure that non-dependent transactions can be applied in parallel (still subject to commit order enforcement) while dependent transactions are always applied in a serial fashion. So, depending on the actual workload you may see significant performance improvements using parallel replication / parallel AWT.
    Note that parallelism is applied between transactions; there is no parallelism for the operations within an individual transaction.
    In the case mentioned, CacheAwtParallelism=4, this means that up to 4 threads will be used to apply transactions in parallel to Oracle. The actual degree of parallelism obtained is subject to inter-transactional dependencies in the workload and adjusts dynamically in real-time.
    Chris

  • ASYNCHRONOUS WRITETHROUGH cache groups

    Hi,
    What service in TimesTen controls AWT to Oracle?
    Do we need to start any services to enable AWT to Oracle?
    If an Oracle connection is readily available to TimesTen, will the data be updated in Oracle immediately, or will TimesTen wait for something to be triggered?
    TIA...
    Regards,
    Praksh T

    The AWT push to Oracle is handled by the TimesTen replication agent. So, you have to be sure that the repagent is running for the datastore:
    ttAdmin -repstart DSNname
    or connect to the datastore via ttIsql and then
    'call ttRepStart;'
    AWT push, like TimesTen to TimesTen replication, is continuous transactional replication. If there is data queued, then it will be sent 'as fast as possible'. The limiting factor is often the rate at which Oracle can absorb the incoming data.
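    If you want the agent to come up automatically with the main daemon (and be restarted if it fails), you can also set the replication restart policy, and ttStatus shows whether the agents are currently running for each datastore. For example (DSNname is a placeholder):
    ttAdmin -repPolicy always DSNname
    ttStatus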
    Chris

  • Log purge

    Hi,
    I have AWT cache groups in a TimesTen data store.
    The replication and cache agents are started.
    In the beginning, log purging works well. Later I found that more and more log files were accumulating in the log directory.
    So I run:
    call ttLogHolds
    < 245, 40265728, Replication , PERF7420:_ORACLE >
    < 841, 48642048, Checkpoint , rccpp.ds0 >
    < 841, 48646144, Checkpoint , rccpp.ds1 >
    Command> call ttRepSTart;
    12026: The agent is already running for the data store.
    The command failed.
    It shows that the replication agent is running. I tried to stop it and restart it, and ran ttCkpt and ttLogHolds.
    Command> call ttLogHolds;
    < 414, 56417344, Replication , PERF7420:_ORACLE >
    < 845, 35186688, Checkpoint , rccpp.ds0 >
    < 845, 38623232, Checkpoint , rccpp.ds1 >
    3 rows found.
    It seems that there is some issue with the replication agent.
    Regards,
    Nesta

    Hi Nesta,
    Just because the repagent is running doesn't mean it is able to apply the changes in Oracle. Is Oracle up? Is it accepting connections? Check the TimesTen daemon log and the dsname.awterrs file to see what they show.
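    For example, on Linux/Unix something along these lines (paths are illustrative; the daemon log is usually ttmesg.log / tterrors.log under the instance's info directory unless logging has been redirected via ttendaemon.options, and the .awterrs file sits alongside the checkpoint files):
    tail -100 /opt/TimesTen/tt70/info/ttmesg.log
    cat /path/to/datastore/rccpp.awterrs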
    Chris

  • Aging behavior in Timesten

    Hi,
    I have installed TimesTen Release 11.2.2.7.4.
    An AWT cache group is created and we have applied LRU aging on the cache group.
    Below are my doubts:
    1) While LRU aging is applied on the cache group, is data also aged out of the data store (checkpoint) files?
    2) The file size of the data store has increased; how can we shrink the ds files once the data store has exhausted its permanent size?

    The checkpoint files on disk are simply an image of the in-memory database. They are 'sparse' files and the reported size (via ls etc.) reflects the highest address memory block that has ever been used during the life of the database. Hence the normal behaviour, if you do not create the database with PreAllocate=1 (which BTW is highly recommended for production systems), is for the checkpoint files to start off relatively small and to grow over time until they reach their maximum size of PermSize + ~32 MB. After this they will not grow any further. This is normal and expected. You *cannot* use the size of the checkpoint files to infer anything about how full or otherwise the database is; for that you need to use the available metrics such as the columns in SYS.MONITOR (as reported by the ttIsql 'dssize' and 'monitor' commands) or the specific built-ins and ttIsql commands for computing the actual memory used by specific tables.
    Memory for individual tables is allocated and de-allocated in units of a 'page', where a page is defined as a block of memory that can hold 256 rows. As data is inserted into a table, new pages are allocated from the free heap(s) as required. When rows are deleted, that space becomes available for new rows to be inserted into the same table. If, as rows are deleted, a page becomes completely free then it will be returned to the free heap for reuse by any other table.
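    For example, to see how full the database really is (a sketch; SYS.MONITOR sizes are reported in KB):
    Command> dssize;
    Command> SELECT perm_allocated_size, perm_in_use_size, perm_in_use_high_water FROM sys.monitor;
    For per-table figures you can use the ttSize utility, or call ttComputeTabSizes and then query SYS.ALL_TAB_SIZES.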
    Hope that helps,
    Chris

  • Can a PL/SQL code of timesten be called in oracle or vice versa

    Hi
    In an IMDB Cache setup with an AWT cache group, can PL/SQL code or a procedure that is written in Oracle DB be called from TimesTen, and vice versa?
    For example: in a stored procedure, the DML that is performed will update the cache tables and a log table in Oracle DB.
    Will there be any performance impact?
    Regards
    Siva Kumar

    A PL/SQL procedure can exist in Oracle DB, in TimesTen, or in both. You control that by where you create the procedure. Procedures that exist in Oracle can really only be called in Oracle and can only access data in Oracle. Procedures that exist in TimesTen can only be called in TimesTen and can only access data in TimesTen. There is a limited capability, using the TimesTen PassThrough feature, to call PL/SQL procedures located in Oracle from TimesTen, and for TimesTen PL/SQL procedures to access data in Oracle. Using PassThrough does have some overhead.
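    As an illustration of the pass-through route (a sketch; ORACLE_ONLY_TAB is a hypothetical table that exists only in Oracle, and the level you need should be checked against the documentation):
    Command> ALTER SESSION SET PASSTHROUGH = 1;
    Command> SELECT COUNT(*) FROM oracle_only_tab;
    Command> ALTER SESSION SET PASSTHROUGH = 0;
    With level 1, statements referencing objects TimesTen does not know about are sent to Oracle; higher levels send progressively more (including PL/SQL calls) to Oracle, at the cost of a round trip per statement.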
    Chris

  • How to query data from grid cache group after created global AWT group

    It is me again.
    As I mentioned in my previous posts, I am in the process of setting up an IMDB grid environment, and now I am at the stage of creating cache groups. I created a global AWT cache group on one node (cachealone2), but I cannot query this global cache group from the other node (cachealone1).
    Thanks Chris and J, I have successfully set up the IMDB grid environment and have two nodes in this grid, as below:
    Command> call ttGridNodeStatus;
    < MYGRID, 1, 1, T, igs_imdb02, MYGRID_cachealone1_1, 10.214.10.176, 5001, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    < MYGRID, 2, 1, T, igsimdb01, MYGRID_cachealone2_2, 10.214.10.119, 5002, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    2 rows found.
    I created a global AWT cache group on cachealone2:
    Command> cachegroups;
    Cache Group CACHEUSER.SUBSCRIBER_ACCOUNTS:
    Cache Group Type: Asynchronous Writethrough global (Dynamic)
    Autorefresh: No
    Aging: LRU on
    Root Table: ORATT.SUBSCRIBER
    Table Type: Propagate
    1 cache group found.
    Command> SELECT * FROM oratt.subscriber;
    0 rows found.
    However, I cannot query this from the other node, cachealone1:
    Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
    2206: Table ORATT.SUBSCRIBER not found
    The command failed.
    Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
    2206: Table ORATT.SUBSCRIBER not found
    The command failed.
    Command> SELECT * FROM oratt.subscriber;
    2206: Table ORATT.SUBSCRIBER not found
    This is the example from the Oracle docs; I am not sure what I missed here. Thanks for your help.

    Sounds like you have not created the global AWT cache group in the second datastore? There is a multi-step process needed to roll out a cache grid, and various things must be done on each node in the correct order. Have you done that?
    Try checking out the QuickStart example here:
    http://download.oracle.com/otn_hosted_doc/timesten/1121/quickstart/index.html
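    For reference, the cache group (and hence the table) has to be created separately in each grid member's datastore. A dynamic global AWT definition along these lines, run while connected to cachealone1, is what is missing (the column list here is only a placeholder; the real definition must match what you created on cachealone2 and the Oracle table):
    CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH GLOBAL CACHE GROUP cacheuser.subscriber_accounts
    FROM oratt.subscriber
     (subscriberid NUMBER(10) NOT NULL,
      name         VARCHAR2(100),
      PRIMARY KEY (subscriberid))
    AGING LRU ON;
    Only after that does ORATT.SUBSCRIBER exist in that datastore, so dynamic loads and queries on cachealone1 can find it.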
    Chris

  • Handling update conflicts or overwriting of records from timesten to oracle

    Hi Chris,
    My question is this that I have read the documentation and found out the following:-
    An update is committed on a cache table in an AWT cache group. The same update is committed on the cached Oracle table using a passthrough operation. The cache table update, which is automatically and asynchronously propagated to Oracle, may overwrite the passed-through update that was processed directly on the cached Oracle table, depending on when the propagated update and the passed-through update are processed on Oracle.
    To handle or restrict this, what option can I go with? I want this to be treated as an exception that can be handled, not silently ignored or overwritten by TimesTen in the database.
    Moreover, I would prefer to work with an AWT cache group.
    Can you please help me with this part.

    Chris,
    First, thanks for showing your interest.
    Let me explain it to you with a different example:
    On two separate data stores (DS1, DS2), there is an AWT table for the same Oracle base table. A row is updated on DS1 and is committed. A row is updated on DS2 and is committed. Because the cache group behavior is asynchronous, the change on DS2 may be applied to the Oracle database before the change on DS1, resulting in the DS1 change overwriting the DS2 change.
    I want to handle this overwriting so that I will be able to know which record is overwritten by which new value.
    Or there is one more scenario:
    If at some point my TimesTen is not connected to the Oracle database, but I have the table cached in TimesTen and an update is done to a particular record in it, and the same record is also updated in the Oracle database with some different value, then whenever I reconnect TimesTen to the database it will overwrite the value that was updated earlier in the database.
    I hope now that you are able to understand what I am trying to do here.
    Please suggest me some method to handle all these things.
    Thanks for your support.

  • Handling oracle commit erros at Appilcaton level in case of AWT

    Hi All,
    In the case of AWT, all transactions committed successfully in the TimesTen database will be successfully propagated to and committed in the Oracle database. Execution errors on Oracle cause the transaction in the Oracle database to be rolled back.
    BUT AWT only recognizes these errors later, since AWT propagates the committed updates to Oracle asynchronously (the errors may be reported in the AWT error file long after the commit to TimesTen occurs). How are these kinds of errors handled at the application level? Since TimesTen does not raise these errors immediately, the application might meanwhile assume the data got committed in Oracle and proceed with another set of transactions.
    Kindly help on this
    Thanks,
    -AK

    This is a problem with any type of asynchronous propagation. Details of any AWT apply errors are reported in the datastore's .awterrs file. You can process that file to determine if there were any errors and maybe take recovery action. If your application cannot tolerate this type of behaviour then asynchronous propagation is not appropriate for you and you should use a different option.
    Chris

  • Timesten running slower than Oracle RDBMS

    Hi,
    I've installed TimesTen and just wanted to compare the performance of the following PL/SQL block on TimesTen with the same block on Oracle.
    declare
        temp_date  date;
        temp_date1 date;
        my_id      number;
        my_data    varchar2(200);
        cursor c1 is
            select MASTER_ID, MDATA
              from AKS_TAB_MASTER;
        cursor c2(p_id number) is
            select detail_ID, dDATA
              from AKS_TAB_DETAIL
             where master_id = p_id;
    begin
        for t in c1 loop
            open c2(t.master_id);
            fetch c2 into my_id, my_data;
            insert into aks_temp values (t.master_id, my_id, t.MDATA, my_data);
            close c2;
        end loop;
    end;
    I've created a cache group in TimesTen to cache the tables AKS_TAB_MASTER & AKS_TAB_DETAIL.
    I've created the table AKS_TAB_DETAIL in Oracle and TimesTen separately to avoid pass-through.
    Somehow, TimesTen is taking 4 times more time than Oracle.
    I've gone through the link TimesTen Database Performance Tuning and my database parameters are as follows:
    Permanent Data Size 640
    Temporary Data Size 300
    Replicate Parallelism Buffer MB 480
    Log File Size (MB) NULL
    Log Buffer Size (MB) 320
    Cache AWT Method 1-PLSQL
    Cache AWT Parallelism NULL
    PL/SQL Connection Memory Limit (MB) 320
    PL/SQL Optimisation Level 2
    PL/SQL Memory Size (MB) 240
    PL/SQL Timeout (seconds) 600
    Still I'm getting poor performance from TimesTen.
    Any idea what might be wrong with my instance?
    Please suggest.
    Thanks
    Amit

    Hello Chris, Please find the details here:
    1.   Output of the ttVersion command (so we know the TimesTen version and platform).
    C:\TimesTen\tt1122_64\bin>ttVersion
    TimesTen Release 11.2.2.5.0 (64 bit NT) (tt1122_64:53396) 2013-05-23T16:26:12Z
      Instance admin: shuklaam
      Instance home directory: C:\TimesTen\TT1122~1\
      Group owner: ES\Domain Users
      Daemon home directory: C:\TimesTen\TT1122~1\srv\info
      PL/SQL enabled.
    2.    The full set of DSN attributes for this DSN from sys.odbc.ini
    Please see the screen shots from page 1 to 5 in the doc available at
    https://docs.google.com/file/d/0BxQyEfoOqCkDX05JNVdqOWItSEE/edit?usp=sharing
    Please let me know if you are looking for something else.
    3.    The full definitions of the cache group(s) including indexes.:
    I've created Two CACHE GROUPS as follows:
    a. AKS_DT_CG:
    -- Database is in Oracle type mode
    create readonly cache group MTAX.AKS_DT_CG
        autorefresh
            mode incremental
            interval 300000 milliseconds
            /* state on */
    from
        MTAX.AKS_TAB_DETAIL (
                DETAIL_ID NUMBER(38) NOT NULL,
                MASTER_ID NUMBER(38),
                DDATA     VARCHAR2(135 BYTE) NOT INLINE,
            primary key (DETAIL_ID));
    b. AKS_MT_CG:
    -- Database is in Oracle type mode
    create readonly cache group MTAX.AKS_MT_CG
        autorefresh
            mode incremental
            interval 300000 milliseconds
            /* state on */
    from
        MTAX.AKS_TAB_MASTER (
                MASTER_ID NUMBER(38) NOT NULL,
                MDATA     VARCHAR2(128 BYTE) INLINE,
                STATUS    VARCHAR2(7 BYTE) INLINE,
            primary key (MASTER_ID));
    To view the indexes, please see the screen shot on page 6 in the doc available at
    https://docs.google.com/file/d/0BxQyEfoOqCkDX05JNVdqOWItSEE/edit?usp=sharing
    4.    The definition (in TimesTen) of the table aks_temp including indexes.
    -- Database is in Oracle type mode
    create table MTAX.AKS_TEMP (
            MID   NUMBER,
            DID   NUMBER,
            MDATA VARCHAR2(200 BYTE) NOT INLINE,
            DDATA VARCHAR2(200 BYTE) NOT INLINE);
    There is no index on this table.
    5.    The row counts (in TimesTen) for the tables AKS_TAB_MASTER & AKS_TAB_DETAIL.
    Command> select count(*) from AKS_TAB_MASTER;
    < 81183 >
    1 row found.
    Command> select count(*) from AKS_TAB_DETAIL;
    < 175176 >
    1 row found.
    Command>
    Please let me know if you need any other info to debug it.
    Many Thanks
    Amit

  • Performance tuning of Timesten

    Our application writes data to the TT tables on a millisecond scale. The data in TT is then written to the Oracle database through a write-through cache group. When there is a large amount of data being written frequently, we observe a lag in writing data from TT to Oracle. What adjustments do we need to make on our system to optimize the writes from TT to the Oracle database? (Oracle 10g, TimesTen 7.0)

    What value do you have set for LogBuffSize in the DSN definition? If the log buffer is very small this could be a factor.
    However, the issue here is generally that all AWT propagation for a single datastore is handled over a single connection into Oracle DB, and that is usually a bottleneck which imposes a very real limit on the throughput that AWT propagation can sustain. Other than tuning Oracle to maximise the throughput that you can achieve over a single connection, there is not much that can be done in TT 7.0. If you are consistently generating updates at a higher rate than that at which they can be pushed to Oracle over a single connection, then you need to either reduce the update rate (easy to say but probably impossible to do!) or consider splitting the datastore into 2 (or more) and distributing the traffic across those (each datastore will have its own separate AWT connection to Oracle).
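    As a concrete starting point, these are the log-related attributes to look at in the 7.0 DSN definition (values are purely illustrative; in 7.0 LogBuffSize is specified in KB, so 262144 is 256 MB, while LogFileSize is in MB):
    [mydsn]
    LogBuffSize=262144
    LogFileSize=256
    If the log buffer is too small, writers stall waiting for log buffer space (visible as LOG_BUFFER_WAITS in SYS.MONITOR), which slows the writers down as well.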
    This issue will be addressed in a future TimesTen release when we will support parallel AWT push to Oracle. Please don't ask when this release will be available; all I can say is that it is expected this calendar year, but that is, as always, subject to change.
    Chris

  • Drop cache group in timesten 11.2.1

    Hello,
    I am trying to drop an asynchronous (AWT) cache group in TimesTen. I follow the steps below to do so:
    a) I use the connection string with the DSN, UID, PWD, OracleID, OraclePWD specified
    b) If replication policy is 'always', change it to 'manual'
    c) Stop replication
    d) Drop the AWT cache group (drop cache group cachegroupname;)
    e) Create the modified AWT
    f) Start replication
    g) Set replication policy back to 'always'
    After step (d), I get the following error:
    Command> drop cache group cachegroupname;
    5219: Temporary Oracle connection failure error in OCIServerAttach(): ORA-12541: TNS:no listener rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "servicename", uid = "inputuid", pwd is hidden, TNS_ADMIN = "/opt/TT/linux/info", ORACLE_HOME= "/opt/TT/linux/ttoracle_home/instantclient_11_1"
    5109: Cache Connect general error: BDB connection not open.
    The command failed.
    Command>
    Does the error suggest that Cache Connect has a problem? Should I restart the TimesTen daemon and try again? Please let me know what the real problem is.
    Let me know if you need information.
    Thanks,
    V

    The SQL*Plus problem is simply because you don't have all the correct directories listed in LD_LIBRARY_PATH. It's likely that your .profile (or equivalent) was setting those based on ORACLE_HOME, and if this is now unset that could be the problem. Check that LD_LIBRARY_PATH is set properly and this problem will go away.
    The character set issue is potentially more problematic. It is mandatory that the database character set used by TimesTen exactly matches that of Oracle DB when TT is being used as a cache. If the character sets truly are different then this is very serious and you need to rectify it, as many things will fail otherwise. You either need to switch Oracle DB back to US7ASCII (this is probably a big job) or you need to change the TT character set to WE8MSWIN1252.
    To accomplish the latter you would:
    1. Take a backup of the TT datastore using ttBackup (just for safety).
    2. For any non-cache tables (i.e. TT only tables), unload data to flat files using ttBulkCp -o ...
    3. Save the schema for the datastore using ttSchema.
    4. Stop cache and replication agents.
    5. Ensure datastore is unloaded from memory and then destroy the datastore (ttDestroy)
    6. Edit sys.odbc.ini to change Datastore character set.
    7. Connect to datastore as instance administrator (to create datastore). Create all necessary users and grant required privileges.
    8. Set the cache userid/password (call ttCacheUidPwdSet(...,...))
    9. Start the cache agent.
    10. Run the SQL script generated by ttSchema to re-create all database objects (tables and cache groups etc.)
    11. Re-populate all non-cache tables from the flat files using ttBulkCp -i
    12. Re-load all cache groups using LOAD CACHE GROUP ...
    13. Restart the replication agent.
    That's pretty much it (hopefully I have not missed out any vital step); a command-level sketch of these steps follows below.
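    A command-level sketch of those steps (all names, paths and options are placeholders; check each utility's syntax for your release):
    ttBackup -dir /backups/pre_charset mydsn            # 1. safety backup
    ttBulkCp -o mydsn myuser.tt_only_tab /tmp/tab.dat   # 2. repeat per TT-only table
    ttSchema mydsn > /tmp/mydsn_schema.sql              # 3. save the DDL
    ttAdmin -repStop mydsn                              # 4. stop the agents
    ttAdmin -cacheStop mydsn
    ttAdmin -ramUnload mydsn                            # 5. unload from memory...
    ttDestroy /path/to/datastore/mydsn                  #    ...and destroy
    #   6. edit sys.odbc.ini: DatabaseCharacterSet=WE8MSWIN1252
    ttIsql mydsn                                        # 7-9. as instance admin: create users/grants,
                                                        #      call ttCacheUidPwdSet(...), call ttCacheStart
    ttIsql -f /tmp/mydsn_schema.sql mydsn               # 10. recreate tables and cache groups
    ttBulkCp -i mydsn myuser.tt_only_tab /tmp/tab.dat   # 11. repeat per TT-only table
    ttIsql -e "LOAD CACHE GROUP myuser.my_cg COMMIT EVERY 256 ROWS;" mydsn   # 12.
    ttAdmin -repStart mydsn                             # 13.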
    Chris

  • TimesTen evaluation questions

    Hi,
    We are evaluating TimesTen.
    Our motivation is not performance, but rather network problems: in case of network problems, we wish the client to keep working against a local TimesTen cache, and hopefully merge it with the main DB server once the network recovers.
    We'd appreciate hints on the following:
    1) We heard TimesTen (or extensions of it) can be configured to persist data on disk, instead of in memory.
    Could anyone please point us to the appropriate documentation on how to configure this?
    2. Does TimesTen support stored procedures?
    3. Does TimesTen support Sequences?
    Thanks very much.

    When thinking about this kind of issue you need to understand that TimesTen is a complete database in its own right. When it is acting as a 'cache' for Oracle, the caching should be thought of more as a form of replication to/from Oracle. TimesTen sequences are local to TimesTen; they are not a 'cached copy' of an Oracle sequence and they are therefore not co-ordinated with Oracle sequences. You cannot realistically cache a table in TimesTen such that it is writeable in both TimesTen and Oracle. This would be a form of multi-master replication with all the attendant problems. If you are inserting data into an AWT cache group in TimesTen and are using a sequence to generate the key, then the sequence value generation occurs only in TimesTen. As long as the underlying tables are not also being inserted into directly in Oracle (which they should not be) or via a different TT cache (a configuration that you should avoid) then there is no problem with uniqueness. In this case there is no issue with network outages.
    TimesTen sequences, along with everything else, are documented in the comprehensive TimesTen documentation set (see the SQL Reference for details on sequences).
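    For example (a sketch with hypothetical names), a sequence created in TimesTen and used to key rows going into an AWT cache table:
    Command> CREATE SEQUENCE order_seq MINVALUE 1 INCREMENT BY 1;
    Command> INSERT INTO myuser.orders (order_id, status) VALUES (order_seq.NEXTVAL, 'NEW');
    The generated values originate purely in TimesTen and reach Oracle only via the AWT push, so nothing on the Oracle side has to be coordinated with the sequence.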
    Chris

  • Should I use TimesTen for unstable table

    Hi all,
    I have a table with a lot of access operations against it: select, insert, update, delete. The table has 80 columns, about 1M rows, one primary key (2 columns), and 5 indexes on other columns.
    I want to increase select speed.
    Should I use TimesTen Cache for this table? Which cache group type (read-only, AWT, SWT, ...) should I use?
    Thanks!

    Sure you could use TimesTen to cache this table from the Oracle database. As you are performing DML against it I would recommend you look at the Asynchronous Write-Through (AWT) Cache Group.
    Start here -> http://docs.oracle.com/cd/E21901_01/doc/timesten.1122/e21634/concepts.htm#BABFBIEC
    Info on AWT -> http://docs.oracle.com/cd/E21901_01/doc/timesten.1122/e21634/define.htm#CHDJAJAC
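    As a starting point, an AWT cache group for a table like yours would look roughly like this (a sketch; the owner, table and columns are placeholders for your real 80-column table, and the cache admin user, Oracle-side objects and agents must already be set up):
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP awt_bigtab
    FROM myschema.bigtab
     (key1 NUMBER(10) NOT NULL,
      key2 NUMBER(10) NOT NULL,
      col3 VARCHAR2(100),
      PRIMARY KEY (key1, key2));
    LOAD CACHE GROUP awt_bigtab COMMIT EVERY 256 ROWS;
    You would then recreate the five secondary indexes on the TimesTen table with CREATE INDEX so the SELECTs are served from memory, and keep the replication agent running so the DML is pushed back to Oracle.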
