IMDB Cache and transaction logs

Hi,
We have installed the IMDB Cache as part of a proof of concept. We want to cache a large Oracle table (approx. 900 million rows) into a read-only local cache group and are finding that the amount of space taken by transaction logs during the initial cache load exceeds the disk space available. Is there a way to prevent transaction logging during the initial cache load? A failure during the initial load is acceptable for us, as we can always reload the cache from the base Oracle table. We are using a datastore with 60GB of memory; however, the filesystem available is 273GB less the 120GB for the two datastore backing files, leaving approximately 150GB for transaction logs. To date we have only been able to load approximately 350 million rows before failing with:
5056: The cache operation fails: error_type=<TimesTen Error>, error_code=<802>, error_message: [TimesTen]TT0802: Data store space exhausted
The datastore attributes we are using are:
[EntResPP]
Driver=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so
DataStore=/prod100/oradata/EntResPP
LogPurge=1
PermSize=60000
TempSize=2000
PLSQL=1
DatabaseCharacterSet=AL32UTF8
OracleNetServiceName=TRAQPP.world
The command we use to load the cache is:
load cache group ro commit every 256 rows parallel 4
Thanks
Mark

The replication agent is only involved if you have AWT cache groups or if you are using replication. If this is a standalone datastore with a read-only cache group then it is not necessary (or possible) to run the replication agent.
The error message you mentioned is nothing to do with transaction log space. What has happened is that the memory allocated to the permanent data region within the datastore (where table data, indexes etc. reside) has become full (this corresponds to PermSize in your DSN attributes). This means you have not allocated enough memory in TimesTen to hold all the data. Be aware that there is typically significant storage space 'inflation' when caching data. This can range from 2x through to 5x or more. So, if the table data occupies a real 10 GB in Oracle it will require between 20 and 50 GB in TimesTen.
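As a rough sizing check before reloading, you could measure the table's footprint in Oracle and apply that 2x-5x inflation factor to see whether PermSize is realistic. This is only a sketch; the owner and segment names below are placeholders, and the factor is an estimate, not a guarantee:
select round(sum(bytes)/1024/1024/1024, 1) as gb_in_oracle,
       round(sum(bytes)/1024/1024/1024 * 2, 1) as timesten_low_estimate_gb,
       round(sum(bytes)/1024/1024/1024 * 5, 1) as timesten_high_estimate_gb
from   dba_segments
where  owner = 'MYSCHEMA'                                    -- placeholder schema
and    segment_name in ('MY_BIG_TABLE', 'MY_BIG_TABLE_PK');  -- placeholder table plus its indexes
Compare the high estimate against your PermSize (which is specified in MB) before starting the load.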
It is possible to suppress logging while loading the cache data (or at least it used to be prior to TT 11.2.1 - I haven't tried this in 11.2.1 myself). You'd do this as follows (a sketch of the whole sequence appears after the steps):
1. Stop all application connections etc. to the datastore, stop the cache and replication agents, and make sure that the datastore is unloaded from memory.
2. Change the value for 'Logging' in the DSN attributes to 0 and connect to the DSN using ttIsql as the instance administrator user.
3. Start the cache agent. From the ttIsql session, issue the command:
load cache group ro commit every 0 rows;
You have to use 0 (load the entire group as a single 'transaction'), and you cannot use the 'parallel' clause.
If this fails you may have to manually delete any rows that were loaded, since TimesTen cannot roll them back.
4. When the load has completed successfully, stop the cache agent and disconnect the ttIsql session.
5. Change Logging back to 1 and reconnect as instance administrator from ttIsql. Restart the cache agent.
6. Start applications etc. as required.
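For illustration, the whole sequence might look roughly like this, using the DSN from the original post. This is only a sketch: Logging is the pre-11.2.1 attribute referred to above, so check that your release still accepts it, and connect with whatever cache admin / OraclePwd credentials you normally use for the load.
[EntResPP]
Driver=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so
DataStore=/prod100/oradata/EntResPP
LogPurge=1
Logging=0
PermSize=60000
TempSize=2000
PLSQL=1
DatabaseCharacterSet=AL32UTF8
OracleNetServiceName=TRAQPP.world
Then, connected as the instance administrator:
$ ttIsql EntResPP
Command> call ttCacheStart;
Command> load cache group ro commit every 0 rows;
Command> call ttCacheStop;
Command> quit
Afterwards set Logging=1 again, reconnect and restart the cache agent as in steps 5 and 6.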
Note that I would consider this at best a temporary workaround. Really, you need to ensure you have enough disk space to perform the load using logging. Of course, as I mentioned, the error you are getting right now is nothing to do with log disk space...
Chris

Similar Messages

  • WAE 512 and transaction logs problem

    Hi guys,
    I have a WAE 512 with ACNS 5.5.1b7 and I'm not able to export archived logs correctly. I tried to configure the WAE as below:
    transaction-logs enable
    transaction-logs archive interval every-day at 23:00
    transaction-logs export enable
    transaction-logs export interval every-day at 23:30
    transaction-logs export ftp-server 10.253.8.125 cache **** .
    and the WAE exported only one file of about 9 MB, even though the files were stored on the WAE as you can see from the output:
    Transaction log configuration:
    Logging is enabled.
    End user identity is visible.
    File markers are disabled.
    Archive interval: every-day at 23:00 local time
    Maximum size of archive file: 2000000 KB
    Log File format is squid.
    Windows domain is not logged with the authenticated username
    Exporting files to ftp servers is enabled.
    File compression is disabled.
    Export interval: every-day at 23:30 local time
    server type username directory
    10.253.8.125 ftp cache .
    HTTP Caching Proxy logging to remote syslog host is disabled.
    Remote syslog host is not configured.
    Facility is the default "*" which is "user".
    Log HTTP request authentication failures with auth server to remote syslog host.
    HTTP Caching Proxy Transaction Log File Info
    Working Log file - size : 96677381
    age: 44278
    Archive Log file - celog_213.175.3.19_20070420_210000.txt size: 125899771
    Archive Log file - celog_213.175.3.19_20070422_210000.txt size: 298115568
    Archive Log file - celog_213.175.3.19_20070421_210000.txt size: 111721404
    I ran a test and configured archiving every hour from 12:00 to 15:00 and the export at 15:10; the WAE transferred only three files (the 12:00, 13:00 and 14:00 ones) and the 15:00 file was missed.
    What can I do?
    Thx
    davide

    Hi Davide,
    You seem to be missing the path on the FTP server, which goes on the export command.
    Disable transaction logs, then remove the export command and add it again like this: transaction-logs export ftp-server 10.253.8.125 cache **** / ; after that, enable transaction logs again and test it.
    Let me know how it goes. Thanks!
    Jose Quesada.

  • Backup and restore full and transaction log in nonrecovery mode failed due to LSN

    In SQL 2012 SP1 Enterprise, when taking a full backup followed immediately by a transaction log backup, the transaction log backup starts with an earlier LSN than the ending LSN of the full backup. As a result, I cannot restore
    the transaction log backup after the full backup, both as NORECOVERY, on another machine. I was trying to bring the two machines in sync for mirroring purposes. An example is as follows.
    full backup:       first 1121000022679500037, last 1121000022681200001
    transaction log: first 1121000022679000001, last 1121000022682000001
    --- SQL Scripts used  
    BACKUP DATABASE xxx  TO DISK = xxx WITH FORMAT
    go
    backup log  xxx to disk = xxx
    --- When restoring, I tried the following:
    restore log BarraOneArchive  from disk=xxx  WITH STOPATMARK  = 'lsn:1121000022682000001', NORECOVERY
    Also tried StopBeforeMark; that did not work either. It complained about the LSN being too early to apply to the database.
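    For reference (not taken from this thread; file names are placeholders), the usual sequence when seeding a mirror is to restore the full backup and then apply the log backup, both WITH NORECOVERY and without STOPATMARK:
    RESTORE DATABASE BarraOneArchive FROM DISK = 'X:\backup\full.bak' WITH NORECOVERY, REPLACE
    go
    RESTORE LOG BarraOneArchive FROM DISK = 'X:\backup\log.trn' WITH NORECOVERY
    go
    A log backup whose first LSN precedes the full backup's last LSN is generally still applicable, provided its LSN range spans the point at which the full backup finished.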

    I think that what I am saying is correct. I said that in sync mirroring (I was not talking about a witness), if the network goes down for a few minutes or somewhat longer, maybe 20 minutes (more than that is a rare scenario; the IS team has a backup plan for that), the logs on the principal will
    continue to grow, as transactions won't be able to commit because the connection with the mirror is gone and the commit acknowledgement from the mirror is not coming. After the network comes back online, the mirror will replay all the logs and will soon try to catch up with the principal.
    Books Online says this: This is achieved by waiting to commit a transaction on the principal database until the principal server receives a message from the mirror server stating that it has hardened the transaction's log to disk. That is,
    if the remote server went away in a way that the principal did not notice, transactions would not commit and the principal would also be stalled.
    In practice it does not work that way. When a timeout expires, the principal will consider the mirror to be gone, and Books Online says about this case:
    If the mirror server instance goes down, the principal server instance is unaffected and runs exposed (that is, without mirroring the data). In this section BOL does not discuss transaction logs, but it appears reasonable that the log records are
    retained so that the mirror can resync once it is back.
    In async mirroring the transaction log is sent to the mirror, but the principal does not wait for an acknowledgement from the mirror before committing the transaction.
    But I would expect that the principal still gets an acknowledgement that the log records have been consumed, or else your mirroring could start failing if you back up the log too frequently. That is, I would not expect any major difference between sync and async
    mirroring in this regard. (Where it matters is when you fail over: with async mirroring, you are prepared to accept some data loss in case of a failover.)
    These are theories that could be fairly easily tested if you have a mirroring environment set up in a lab, but I don't.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Temp tables and transaction log

    Hi All,
    I am on SQL 2000.
    When I am inserting (or updating or deleting) data to/from temp tables (i.e. # tables), are transaction log records generated for those DML operations?
    The process is: we have a huge input dataset to process, so we insert subsets of the input data into a temp table, treat that as our input set, and do the processing in parts. Can I avoid transaction log generation for these intermediate steps?
    Soon we will be moving to 2008 R2. Are there any features in 2008 which can help me avoid this transaction logging?
    Thanks in advance

    Every DML operation is logged in the log file. Is it possible to insert the data in small chunks?
    http://www.dfarber.com/computer-consulting-blog/2011/1/14/processing-hundreds-of-millions-records-got-much-easier.aspx
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Blog: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
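    As an illustration of processing in small chunks, here is a hedged sketch (SQL 2000 syntax; every table and column name is made up, and #work and dbo.processed are assumed to already exist). Keeping each batch in its own small transaction keeps the active portion of the log small:
    DECLARE @rows INT
    SET @rows = 1
    WHILE @rows > 0
    BEGIN
        -- move the next batch of unprocessed rows into the temp table
        INSERT INTO #work (id, payload)
        SELECT TOP 50000 i.id, i.payload
        FROM dbo.big_input i
        WHERE NOT EXISTS (SELECT 1 FROM dbo.processed p WHERE p.id = i.id)
        SET @rows = @@ROWCOUNT
        -- ... process the batch and record the processed ids in dbo.processed ...
        TRUNCATE TABLE #work   -- truncating the temp table is minimally logged
    END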

  • Question about full backup and Transaction Log file

    I have a query: will taking a full backup daily keep my log file from growing? After taking the full backup I still see some of the VLFs in status 2; they only went away when I manually took a backup of the log file. I am a bit confused: should I
    perform both a transaction log backup and a full database backup daily to avoid this in future? Also, until I run SHRINKFILE, the storage space on the server won't be reduced, right?

    Yes, a full backup does not clear the log file; only a log backup does. Once the log backup is taken, it sets the inactive VLFs in the log file to status 0.
    You should perform log backups per your business SLA for data loss.
    Go ahead and ask yourself:
    If a disaster strikes and your database server is lost and your only option is to restore it from backup,
    how much data loss can your business handle?
    The answer to this question determines how frequent your log backups should be.
    If the answer is 10 minutes, you should have log backups at least every 10 minutes.
    If the answer is 30 minutes, you should have log backups at least every 30 minutes.
    If the answer is 90 minutes, you should have log backups at least every 90 minutes.
    So, when you restore, you will restore the latest full backup + the latest differential taken after that full backup,
    and all the log backups taken since that (restored full or differential) backup.
    There are several resources on the web, including YouTube videos, that explain these concepts clearly; I advise you to look at them.
    To release file space to the OS, you should shrink the file. A log file shrink works from the end of the file up to the point where it reaches an active VLF.
    If there are no inactive VLFs at the end, then no matter how many inactive VLFs the log file has at the beginning, the log file is not shrinkable.
    Hope it Helps!!
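    For illustration, a minimal sketch of the sequence described above (the database, logical file and path names are placeholders):
    -- back up the log so the inactive VLFs at the end become reusable
    BACKUP LOG MyDb TO DISK = 'X:\backup\MyDb_log.trn'
    go
    -- only if you also need to return space to the OS after unusual growth
    DBCC SHRINKFILE (MyDb_log, 1024)   -- logical log file name, target size in MB
    go
    Routine shrinking is normally avoided, since the file will just have to grow again.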

  • Multiple caches and transactions...

    As I read the documentation it is possible for several caches (belonging to the same service) to participate in one XA transaction - is this correct or are there any problems / limitations one needs to be aware of?
    Note : I am here talking about partitioned caches with no cache loader (cache beside pattern).
    Best Regards
    Magnus

    Hi Magnus,
    Yes, it is possible - please take a look at this example: Q: How do I test the Transactional Cache?.
    Regards,
    Dimitri

  • Transaction log and access log

    The transaction log (TransactionLogFilePrefix) and the access log are stored
    relative to the directory where the server is started rather than where it
    resides as with the rest of the log files. Why is this?
    Eg.
    I start the server with a batch file contained in
    projects\bat
    My server is in
    projects\server\config\myDomain
    When I start the server the access and transaction logs end up in
    projects\bat
    while all the rest of the log files (such as the domain and server log) end
    up in
    projects\server
    My batch file that starts the server looks like this
    "%JAVA_HOME%\bin\java" -hotspot -ms64m -mx64m -classpath %CLASSPATH%
    "-Dbea.home=e:\bea"
    "-Djava.security.policy==i:\projects\server\config\myDomain\weblogic.policy"
    "-Dweblogic.Domain=myDomain" "-Dweblogic.Name=adminServer"
    "-Dweblogic.RootDirectory=i:/projects/server"
    "-Dweblogic.management.password=weblogic" weblogic.Server
    Thanks for help on this,
    Myles

    The same case with me; I sent an email to Apple support but got no reply.
    The Apple status page indicated that everything is fine now, what a joke.
    Many devs are in this situation too; I guess we can do nothing but wait for their system to come up.

  • How to delete Transaction Logs in SQL database

    Hi,
    Can anyone explain the process of how to delete the transaction logs in a SQL database?
    Thanks
    Sunil

    Sunil,
    Yes you can take online backup in MS SQL server.
    The transaction log files contain information about all changes made to the database. The log files are necessary components of the database and may never be deleted. Why do you want to delete them?
    If I am taking a backup, do I need to turn off the SAP server that is running at the moment, or can I take the backup online?
    There are three main types of SQL Server backup: Full Database Backup, Differential Database Backup and Transaction Log Backup. All of these backups can be made while the database is online and do not require you to stop the SAP system.
    Check below link for details
    http://help.sap.com/erp2005_ehp_04/helpdata/EN/89/68807c8c984855a08b60f14b742ced/frameset.htm
    Thanks
    Sushil
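    For reference, all three backup types mentioned above can be run with the database (and SAP) online; a minimal sketch with placeholder database name and paths:
    BACKUP DATABASE PRD TO DISK = 'X:\backup\PRD_full.bak'
    BACKUP DATABASE PRD TO DISK = 'X:\backup\PRD_diff.bak' WITH DIFFERENTIAL
    BACKUP LOG PRD TO DISK = 'X:\backup\PRD_log.trn'
    The log backup is what allows SQL Server to reuse (truncate) the log internally; the log files themselves should never be deleted manually.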

  • IMDB Cache group load and long running transaction

    Hello,
    We are investigating the use of IMDB Cache to cache a number of large Oracle tables. When loading the cache I have noticed logs accumulating and I am not quite sure why this should be. I have a read-only cache group consisting of 3 tables with approximately 88 million rows, 74 million rows and 570 million rows respectively. To load the cache group I run the following -
    LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows -
    Command> call ttLogHolds ;
    < 0, 12161024, Long-Running Transaction      , 1.1310 >
    < 170, 30025728, Checkpoint                    , Entity.ds0 >
    < 315, 29945856, Checkpoint                    , Entity.ds1 >
    3 rows found.
    I read this as saying that everything from log 0 to the current log must be kept for the long-running transaction. From what I can see, the long-running transaction is the cache group load. Is this expected? I was expecting the commit clause in the load cache group to allow the logs to be deleted. I am able to query the contents of the tables at various times during the load, so I can see that the commits are taking place.
    Thanks
    Mark

    Hello,
    I couldn't recall whether I had changed the Autocommit settings when I ran the load so I tried a couple more runs. From what I could see the value of autocommit did not influence how the logs were treated. For example -
    1. Autocommit left as the default -
    Connection successful: DSN=Entity;UID=cacheadm;DataStore=/prod100/oradata/ENTITY/Entity;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so;LogDir=/prod100/oradata/ENTITY;PermSize=66000;TempSize=2000;TypeMode=0;OracleNetServiceName=TRAQPP.world;
    (Default setting AutoCommit=1)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    Logholds shows a long running transaction -
    Command> call ttlogholds ;
    < 0, 11915264, Long-Running Transaction      , 1.79 >
    < 474, 29114368, Checkpoint                    , Entity.ds0 >
    < 540, 1968128, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    2011-01-19 14:10:03.135
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: timestenorad
    28427   0x16fd6910            7.26     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69211971680          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69211971680          TRAQDBA.AADNA
                                                       Command   69211971680          S     69211971680         
                                  8.10029  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.10582  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.10477  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.10332  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.10546  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.10261  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.10637  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.10669  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.10111  Active      Database  0x01312d0001312d00   IX    0                   
    Program File Name: ttIsqlCmd
    29317   0xde257d0             1.79     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211584104          SYS.TABLES
                                                       Command   69211584104          S     69211584104         
    11 outstanding transactions found
    And the commands were:
    < 69211971680, 2048, 1, 1, 0, 0, 1392, CACHEADM                       , load cache group CACHEADM.ER_RO_CG commit every 1000 rows parallel 10 _tt_bulkFetch 4096 _tt_bulkInsert 1000 >
    < 69211584104, 2048, 1, 1, 0, 0, 1400, CACHEADM                       , LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 >
    Running the load again with autocommit off -
    Command> AutoCommit
    autocommit = 1 (ON)
    Command> AutoCommit 0
    Command> AutoCommit
    autocommit = 0 (OFF)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    Logholds shows a long running transaction -
    Command>  call ttlogholds ;
    < 1081, 6617088, Long-Running Transaction      , 2.50157 >
    < 1622, 10377216, Checkpoint                    , Entity.ds0 >
    < 1668, 55009280, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    er.oracle$ ttXactAdmin entity                                             
    2011-01-20 07:23:54.125
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: ttIsqlCmd
    2368    0x12bb37d0            2.50157  Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211634216          SYS.TABLES
                                                       Command   69211634216          S     69211634216         
    Program File Name: timestenorad
    28427   0x2abb580af2a0        7.2358   Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69212120320          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69212120320          TRAQDBA.AADNA
                                                       Command   69212120320          S     69212120320         
                                  8.24870  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.26055  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.25659  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.25469  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.25694  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.25465  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.25841  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.26288  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.24924  Active      Database  0x01312d0001312d00   IX    0                   
    11 outstanding transactions found
    What I did notice was that TimesTen runs three queries against the Oracle server: the first to select from the parent table, the second to join the parent to the first child and the third to join the parent to the second child. Logholds seems to show a long running transaction once the second query starts. For example, I was monitoring the load of the parent table, checking ttlogholds to watch for a long running transaction. As shown below, a long running transaction entry appeared around 09:01:41 -
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:37 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39278592, Checkpoint                    , Entity.ds1 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    2 rows found.
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:41 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39290880, Long-Running Transaction      , 2.50167 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    < 2929, 65347584, Checkpoint                    , Entity.ds1 >
    3 rows found.
    This roughly matches the time the query that selects the rows for the first child table started in Oracle:
    traqdba@TRAQPP> select sm.sql_id,sql_exec_start,sql_fulltext
      2  from v$sql_monitor sm, v$sql s
      3  where sm.sql_id = 'd6fmfrymgs5dn'
      4  and sm.sql_id = s.sql_id ;
    SQL_ID        SQL_EXEC_START       SQL_FULLTEXT
    d6fmfrymgs5dn 20/JAN/2011 08:59:27 SELECT "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_
                                       MAP"."EVIDENCE_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."EVIDENCE_VALUE", "TRAQDBA"
                                       ."ENT_TO_EVIDENCE_MAP"."CREATED_DATE_TIME" FROM "TRAQDBA"."ENT_TO_EVIDENCE_MAP",
                                        "TRAQDBA"."AADNA" WHERE "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY" = "TRAQDBA
                                       "."AADNA"."ADR_ADDRESS_NAME_KEY"
    Elapsed: 00:00:00.00
    Thanks
    Mark

  • Log Reader Agent: transaction log file scan and failure to construct a replicated command

    I encountered the following error message related to Log Reader job generated as part of transactional replication setup on publisher. As a result of this error, none of the transactions propagated from publisher to any of its subscribers.
    Error Message
    2008-02-12 13:06:57.765 Status: 4, code: 22043, text: 'The Log Reader Agent is scanning the transaction log for commands to be replicated. Approximately 24500000 log records have been scanned in pass # 1, 68847 of which were marked for replication, elapsed time 66018 (ms).'.
    2008-02-12 13:06:57.843 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on ServerName.'.
    2008-02-12 13:06:57.843 Status: 0, code: 18805, text: 'The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {00065e22:0002e3d0:0006}. Back up the publication database and contact Customer Support Services.'.
    2008-02-12 13:06:57.843 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'ServerName'.'.
    Replication agent job kept trying after specified intervals and kept failing with that message.
    Investigation
    I could clearly see there were transactions waiting to be delivered to subscribers from the following:
    SELECT * FROM dbo.MSrepl_transactions -- 1162
    SELECT * FROM dbo.MSrepl_commands -- 821922
    The following steps were taken to further investigate the problem. They further confirmed that transactions were queued waiting to be delivered to the distribution database.
    -- Returns the commands for transactions marked for replication
    EXEC sp_replcmds
    -- Returns a result set of all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
    EXEC sp_repltrans
    -- Returns the commands for transactions marked for replication in readable format
    EXEC sp_replshowcmds
    Resolution
    Taking a backup as suggested in the message wouldn't resolve the issue. None of the commands retrieved from sp_browsereplcmds for the LSN mentioned in the message had any syntactic problems either.
    exec sp_browsereplcmds @xact_seqno_start = '0x00065e220002e3d00006'
    In a desperate attempt to resolve the problem, I decided to drop all subscriptions. To my surprise, the Log Reader kept failing with the same error again. I thought that with no subscriptions for the publications, the Log Reader Agent would have no reason to scan the publisher's transaction log, but obviously I was wrong. Even adding a new log reader using sp_addlogreader_agent after deleting the old one was no help. Restarting the server couldn't do much good either.
    EXEC sp_addlogreader_agent
    @job_login = 'LoginName',
    @job_password = 'Password',
    @publisher_security_mode = 1;
    When nothing else worked for me, I decided to try the following procedures, which are reserved for troubleshooting replication:
    --Updates the record that identifies the last distributed transaction of the server
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
    -- Flushes the article cache
    EXEC sp_replflush
    Bingo !
    The Log Reader Agent managed to start successfully this time. I wish I had used both commands before I decided to drop the subscriptions; it would have saved me the considerable effort and time spent re-doing the subscriptions.
    Question
    Even though I managed to resolve the error and have replication functioning again, I think there might have been a better solution. I would appreciate any feedback and a proposal for how you would have approached resolving the problem.

    Hi Hilary,
    Will the command below truncate the log records marked for replication? Is there any data loss when we execute it? Can you please help me understand the internal workings of this command?
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1

  • Problems with transaction-logs on cache engines

    Good Day All,
    I have a Cache Engine 550 here and the transaction log working.log file got quite large.
    I was not able to export it to my ftp server so I logged into the Cache engine via ftp and downloaded the file to a PC.
    I then deleted the working.log file on the Cache Engine and rebooted the cache engine.
    The working.log file was not re-created as I had hoped it might be.
    I have created a file called working.log in the correct directory. This file does not seem to get updated though so this must not be right either.
    Any suggestions?
    regards,
    amanda

    Hi Zach,
    Thank you so much for writing back. I am running an archaic version of the software... I can check tomorrow. As to the logging... I had not enabled transaction logging itself, so it was a silly config error ...
    :) amanda

  • Coherence 3.6.0 transactional cache and POF - NULL values

    Hi,
    We are trying to use the new transactional scheme defined in 3.6.0 and we are encountering abnormal behaviour. The code executes without any exceptions or warnings, but in the cache we find the key associated with a NULL value.
    To try to identify the problem, we defined two services (see cache-config below):
    - one transactional cache
    - one distributed cache
    If we try to insert primitives or strings into the transactional cache, everything is normal (both key and value are visible using the Coherence console). But if we try to insert custom classes using POF, the key is inserted with a NULL value.
    In the same cluster we defined a distributed cache that uses the same POF classes/configuration. A call to put succeeds in any scenario (both key and value are visible using the Coherence console).
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>cnt.*</cache-name>
                   <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>stt.*</cache-name>
                   <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <transactional-scheme>
                   <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
                   <service-name>storage.transactionalcache.cnt</service-name>
                   <thread-count>10</thread-count>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>cnt-pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme>
                             <high-units>250M</high-units>
                             <unit-calculator>binary</unit-calculator>
                        </local-scheme>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </transactional-scheme>
              <distributed-scheme>
                   <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
                   <service-name>storage.distributedcache.stt</service-name>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>cnt-pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme>
                             <high-units>250M</high-units>
                             <unit-calculator>binary</unit-calculator>
                        </local-scheme>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </distributed-scheme>
         </caching-schemes>
    </cache-config>
    Failing code (uses transaction APIs 3.6.0):
     public static void main(String[] args) {
          Connection con = new DefaultConnectionFactory().createConnection("storage.transactionalcache.cnt");
          con.setAutoCommit(false);
          try {
               OptimisticNamedCache cache = con.getNamedCache("cnt.t1");
               CId tID = new CId();
               tID.setId(11111L);
               C tC = new C();
               tC.setVal(new BigDecimal("100.1"));
               cache.insert(tID, tC);
               con.commit();
          } catch (Exception e) {
               e.printStackTrace();
               con.rollback();
          } finally {
               con.close();
          }
     }
    Code that succeeds (but without transaction APIs):
     public static void main(String[] args) {
          try {
               NamedCache cache = CacheFactory.getCache("stt.t1");
               CId tID = new CId();
               tID.setId(11111L);
               C tC = new C();
               tC.setVal(new BigDecimal("100.1"));
               cache.put(tID, tC);
          } catch (Exception e) {
               e.printStackTrace();
          } finally {
          }
     }
    And here is what we list using coherence console if we use transactional APIs:
    Map (cnt.t1): list
    CId {
    id = 11111
    } = null
    Any suggestion, please?

    Cristian,
    After looking at your configuration I noticed that it is incorrect: for a transactional scheme you cannot specify a backing-map-scheme.
    Your config contained:
    <backing-map-scheme>
    <local-scheme>
    <high-units>250M</high-units>
    <unit-calculator>binary</unit-calculator>
    </local-scheme>
    </backing-map-scheme>
    To specify high-units for a transactional scheme, simply provide a high-units element directly under the transactional-scheme element.
    <transactional-scheme>
        <scheme-name>small-high-units</scheme-name>
        <service-name>TestTxnService</service-name>
        <autostart>true</autostart>
        <high-units>1M</high-units>
    </transactional-scheme>
    http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/api_transactionslocks.htm#BEIBACHA
    The reason that it is not allowable to specify a backing-map-scheme for a transactional scheme is that transactional caches use their own storage.
    I am not sure why this would work with primitives and only fail with POF. We will look into this further here and try to reproduce.
    Can you please change your configuration with the above changes and let us know your results.
    Thanks,
    John
    Edited by: jspeidel on Sep 16, 2010 10:44 AM

  • What is the current schedule for 6.1.2 and will it fix the Exchange transaction log issue in 6.1?

    Just spent the entire night with virtually no sleep, working with our firm's group of IT engineers trying to keep our Exchange system online due to massive transaction log growth. We confirmed the issue is related to the 6.1 calendar bug with ActiveSync. The workarounds are not practical for large groups of users who depend on their mobile devices for work. Our users have no way of knowing that they are causing an issue, so the Apple guidance isn't terribly useful to communicate to 1000 users. When can we expect a resolution? The problem is only going to get worse as more and more users hit the bug. Does anyone know if the issue will resolve as soon as someone installs the 6.1.2 update, assuming that has the fix? I'm not trying to bash anyone, but this is a very serious problem in enterprise deployments.

    The update was released some time today. 6.1.2 appears to specifically fix the Exchange issue causing the excess communications and logging. However, although the update is available, I do not see the notification badge on the Settings icon. Is this controlled by Apple, or is there a user setting I am missing somewhere? I would prefer that all users see the badge to expedite user action.

  • After clearing cache and cookies, my same dropdown list keeps returning when I log in to Facebook.

    After clearing cache and deleting cookies, my same dropdown list keeps on returning whenever I log in to Facebook. I need to know how to delete login history.

    That data is called Form History
    Options/Preferences > Privacy > Clear your recent history > Everything + Form history

  • Block size in tt for writing data to transaction log and checkpoint files

    Hello,
    What block size does TimesTen use when it writes data to the transaction log and checkpoint files? Does it use some fixed block size for filesystem writes?

    Although in theory logging can write 2 KB blocks, in almost all circumstances it will write 4 KB or larger, so yes, a filesystem with a 4 KB block size is fine for both checkpointing and logging.
    Chris
