Transactions involving multiple caches and a database

Hi all,
I'm curious if the following is possible with the transaction support in Coherence.
I need to write data to two caches and a database from within my Tomcat container, and the whole operation must be atomic.
Example:
Write to database (this is NOT via a CacheStore)
Write to cache 1 (uses write-through to database)
Write to cache 2 (uses write-through to database)
If any operation fails, the whole transaction needs to roll back. This feels like an XA transaction, but it doesn't look like it will work the way I expect, because the cache must be the last resource and I have two caches. The ordering of operations is not important.
Thanks,
Rob

MagnusE wrote:
When using write-through there is (as far as I know) no way to get fully transactional behaviour (assuming you have more than one cache node), since each node is responsible for persisting its own data items (they will each use a separate connection to the database).
If you, on the other hand, use a "cache beside" pattern, this can be made transactional using XA. As long as both caches belong to the same cache service they count as a single "last resource"....
/Magnus

You also need to use the same transaction isolation and concurrency settings for them to count as the same last resource. In practice you should have only a single CacheAdapter instance enrolled in the same transaction.
Best regards,
Robert
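
To make the "cache beside" suggestion above concrete, here is a rough sketch of what the update could look like inside one container-managed transaction: both caches are assumed to be plain partitioned caches without write-through, the database is written directly over an XA-capable DataSource, and both caches go through a single CacheAdapter so Coherence is enlisted as one last resource. The JNDI names, cache names, table and the CacheAdapter constructor arguments are illustrative only (loosely following the Coherence JCA example) and should be verified against your Coherence version:

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;
    import com.tangosol.net.NamedCache;
    import com.tangosol.run.jca.CacheAdapter;

    public class AtomicUpdate {
        public void update(Object key, Object value) throws Exception {
            Context ctx = new InitialContext();
            UserTransaction tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            tx.begin();
            try {
                // 1. Plain JDBC write through an XA DataSource (JNDI name and SQL are hypothetical).
                DataSource ds = (DataSource) ctx.lookup("jdbc/MyXADataSource");
                try (java.sql.Connection con = ds.getConnection();
                     java.sql.PreparedStatement ps =
                         con.prepareStatement("INSERT INTO audit_log (item_key) VALUES (?)")) {
                    ps.setObject(1, key);
                    ps.executeUpdate();
                }

                // 2. Both caches through ONE CacheAdapter (same isolation and concurrency),
                //    so Coherence participates as a single last resource.
                CacheAdapter adapter = new CacheAdapter(ctx, "tangosol.coherenceTx",
                        CacheAdapter.CONCUR_OPTIMISTIC, CacheAdapter.TRANSACTION_GET_COMMITTED, 0);
                ClassLoader loader = Thread.currentThread().getContextClassLoader();
                NamedCache cache1 = adapter.getDistributedCache("cache-one", loader);
                NamedCache cache2 = adapter.getDistributedCache("cache-two", loader);
                cache1.put(key, value);
                cache2.put(key, value);

                tx.commit();   // database and both caches commit or roll back together
            } catch (Exception e) {
                tx.rollback();
                throw e;
            }
        }
    }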

Similar Messages

  • WebLogic 5.1, Transaction involving SQL Server and AS400/DB2

    Hi, after reading quite a few posts regarding this issue, I believe that demarcating a
    transaction involving SQL Server and AS400/DB2 is not impossible, but we have to
    handle the rollback and other details on our own. Am I correct? Is there any way around
    this?
    I've tried to use an EJB (ejbY) that inserts a record into AS400/DB2; subsequently,
    ejbY invokes another EJB (ejbX) that resides on another WL5.1 instance to insert a record
    into SQL Server. Of course this is demarcated with UserTransaction. Is this technically
    possible? I've encountered some exceptions with this approach and
    would like to seek some advice.
    Thanks.
              

    Hi,
      If you have access to My Oracle Support then have a look at this note -
    How To Migrate Non-Oracle Databases For Which a SQL*Developer or Migration Workbench Option Is Not Available (Doc ID 393760.1)
    Regards,
    Mike

  • Transaction Help? JMS and DB2 database without appserver

    We have a queue and a DB2 database.
    We need to read a message from the queue and write it to the database in a standalone application, without an appserver. The question is how to have a single transaction cover both jobs.
    Please advise!

    If you want to receive a JMS message and write to a database in the same transaction, then it sounds as if you are asking for the ability to enlist both transactional resources within the same XA transaction and then perform a two-phase commit of the two resources. If you use an application server then this is easy to configure.
    Even if you don't want to use an application server (or some other container offering similar services), you will still need a transaction manager, but will need to enlist the transactional resources yourself. Possible but not easy. If an application server is too heavyweight, then look for lighter-weight alternatives.
    Nigel
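
    In case it helps, here is a rough sketch of what Nigel describes, using only the standard JTA, JMS and JDBC XA interfaces. The TransactionManager must come from whichever standalone JTA implementation you pick, and the queue name, table and column are made up for the example:

        import javax.jms.MessageConsumer;
        import javax.jms.TextMessage;
        import javax.jms.XAConnection;
        import javax.jms.XAConnectionFactory;
        import javax.jms.XASession;
        import javax.sql.XADataSource;
        import javax.transaction.Transaction;
        import javax.transaction.TransactionManager;

        public class QueueToDatabase {
            // tm, jmsFactory and dataSource must be obtained from your chosen standalone
            // transaction manager and from the JMS / DB2 drivers (not shown here).
            public void moveOneMessage(TransactionManager tm, XAConnectionFactory jmsFactory,
                                       XADataSource dataSource, String queueName) throws Exception {
                XAConnection jmsCon = jmsFactory.createXAConnection();
                javax.sql.XAConnection dbCon = dataSource.getXAConnection();
                try {
                    jmsCon.start();
                    XASession session = jmsCon.createXASession();

                    tm.begin();
                    Transaction tx = tm.getTransaction();
                    // Enlist both resources so the receive and the insert share one XA transaction.
                    tx.enlistResource(session.getXAResource());
                    tx.enlistResource(dbCon.getXAResource());

                    MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
                    TextMessage msg = (TextMessage) consumer.receive(5000);
                    if (msg != null) {
                        try (java.sql.Connection con = dbCon.getConnection();
                             java.sql.PreparedStatement ps =
                                 con.prepareStatement("INSERT INTO inbound_messages (body) VALUES (?)")) {
                            ps.setString(1, msg.getText());
                            ps.executeUpdate();
                        }
                    }
                    tm.commit();   // two-phase commit across the queue and the database
                } catch (Exception e) {
                    try { tm.rollback(); } catch (Exception ignored) { }
                    throw e;
                } finally {
                    jmsCon.close();
                    dbCon.close();
                }
            }
        }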

  • Multiple caches and transactions...

    As I read the documentation, it is possible for several caches (belonging to the same service) to participate in one XA transaction - is this correct, or are there any problems / limitations one needs to be aware of?
    Note: I am talking about partitioned caches with no cache loader (the "cache beside" pattern).
    Best Regards
    Magnus

    Hi Magnus,
    Yes, it is possible - please take a look at this example: Q: How do I test the Transactional Cache?.
    Regards,
    Dimitri
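
    For anyone reading along, here is a bare-bones sketch of two caches committed together. This uses the plain (non-JCA) local-transaction API rather than the XA path in that example; the cache names are made up, and the exact TransactionMap / commitTransactionCollection signatures should be checked against your Coherence release:

        import java.util.Arrays;
        import java.util.Collection;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.TransactionMap;

        public class TwoCacheTransaction {
            public static void main(String[] args) {
                NamedCache cacheA = CacheFactory.getCache("dist-a");   // both caches must belong
                NamedCache cacheB = CacheFactory.getCache("dist-b");   // to the same cache service

                TransactionMap txA = CacheFactory.getLocalTransaction(cacheA);
                TransactionMap txB = CacheFactory.getLocalTransaction(cacheB);
                for (TransactionMap tx : new TransactionMap[] { txA, txB }) {
                    tx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                    tx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
                    tx.begin();
                }

                txA.put("key-1", "value-1");
                txB.put("key-2", "value-2");

                // Prepares and commits both maps together; if any prepare fails,
                // all of them are rolled back.
                Collection txns = Arrays.asList(new TransactionMap[] { txA, txB });
                CacheFactory.commitTransactionCollection(txns, 1);
            }
        }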

  • How to handle multiple EARs and a common database

    Hi Friends,
    Need urgent help.
    I am working on a project where there will be multiple EARs, one per application.
    There are 4 projects, 4 EARs, and one common utility WAR file which will be shared by all EARs.
    The problem is: 3 of the projects will use the same database tables. I am using Hibernate ORM mappings for the database tables.
    Q1. In the above scenario the database access tables are 90% common across the 3 projects. Should I combine the 3 EARs into 1 EAR, with the current 3 projects as 3 modules inside that combined EAR, or keep one EAR per project?
    Please suggest an approach and its advantages and disadvantages.
    Q2: Should I put the data beans and table mapping files (e.g. account.hbm.xml) and the hibernate.cfg.xml file in the common (centralized or shared) WAR, or replicate copies of all bean mapping XML files and bean classes in every EAR?
    If you suggest the centralized/shared data configuration approach, how will each EAR's JVM handle transactions?
    I am using WebSphere Application Server 6.1, Hibernate 3, and DB2 8.0.
    Waiting for your interesting replies.
    Thanking you in advance.
    Navin.

    How are you going to handle upgrades if a database change is needed?
    If it is per application then you need to keep the code by app. If across applications then you need to keep the code separate.

  • What is the difference of specifying affinity to span multiple caches?

    What is the difference between specifying affinity that spans multiple caches and specifying it on a single cache?
    Can I do it in the same way?

    Thank you for the reply.
    From the Coherence docs we can see that data affinity commonly refers to related entries being contained within a single cache. The following are some fragments excerpted from the Coherence docs:
    "Data affinity describes the concept of ensuring that a group of related cache entries is contained within a single cache partition. This ensures that all relevant data is managed on a single primary cache node (without compromising fault-tolerance)."
    "Affinity may span multiple caches (as long as they are managed by the same cache service, which will generally be the case)."

  • Bridge CS6 Cache and Freezing Issues

    I downloaded Bridge CS6 and I have not been able to use it yet. I am on a MacBook Pro running 10.7.4 and every time I launch Bridge I get the following error:
    Bridge encountered a problem and is unable to read the cache. Please try purging the central cache in Cache Preferences to correct the situation.
    When I try to purge the cache, Bridge freezes up. If I don't try to purge the cache, I am unable to navigate to any folders on my computer and it just hangs. I eventually have to force quit to get out of it. So far Bridge CS6 has been completely unusable.
    Please help.
    Thanks.

    When I go to ~/Library/Caches/ there is a folder for com.adobe.bridge5 but there isn't one for com.adobe.bridge6
    You're in the wrong folder. Follow the path you discovered yourself using the preference settings Curt suggested:
    ~/Library/Caches/Adobe/Bridge CS6, and in here there should be 2 items:
    Adobe Bridge Plug-in Cache
    and the folder called Cache. In here there should be 4 folders called '254', '1024', 'data' and 'full'.
    Those folders hold the thumbnail and preview content at their respective quality levels, as well as the 100% cache and the Bridge database.
    Anyway, since you have not really used Bridge yet, you have no significant amount of cache, so I would suggest you quit both PS and Bridge and manually delete both items (the plug-in cache file and the folder called Cache at the end of the Bridge CS6 path mentioned above).
    (Both will be recreated after a restart of Bridge.)
    To be on the safe side, also visit the Preferences folder in the same user library and look for the file called "com.adobe.bridge5.plist" (yes, it is Bridge 5 that comes with CS6; all Adobe apps have their own version numbers even though they are presented in the same Suite version. CS6 has PS version 13 and Bridge version 5.)
    Also drag this file to the trash.
    Then hold down the option key (alt) while starting Bridge; this should give you the option to reset the preferences, as mentioned earlier in this thread.
    Choose reset prefs and try again.

  • Transactional Caches and Write Through

    I've been trying to implement the use of multiple caches, each with write-through, all within a transaction.
    The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first transactionMap throws an exception in the database code.
    If the second transactionMap throws exceptions, the caches do not appear to roll back correctly.
    I can wrap the whole operation in a JDBC transaction that rolls back the database correctly, but the caches are not all rolled back because they are committed one by one?
    For example, I write to two transaction maps, each one created from separate caches. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and doesn't roll back.
    Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
    I've also been trying to look at using coherence-tx.rar as described in the forums within WebLogic, but I'm getting @@@@@ Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
    (SMARTPC being my pc name)
    Has anybody else had this problem? Bonus points for describing how to fix it!
    Mike

    > The transaction support in Coherence is for Local
    > Transactions. Basically, what this means is that the
    > first phase of the commit ("prepare") acquires locks
    > and ensures that there are no conflicts. The second
    > phase ("commit") does nothing but push data out to
    > the caches.
    This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
    > The problem is that when you are using a
    > CacheStore module, the exception is occurring during
    > the second phase.
    If you start using a CacheStore module, then the database update has to be part of the atomic procedure.
    >
    > For this reason, write-through and cache transactions
    > are not a supported combination.
    This is not true for a cache transaction that updates a single cache entry, right?
    >
    > For single-cache-entry updates, CacheStore operations
    > are fully fault-tolerant in that the cache and
    > database are guaranteed to be consistent during any
    > server failure (including failures during partial
    > updates). While the mechanisms for fault-tolerance
    > vary, this is true for both write-through and
    > write-behind caches.
    For the write-through case, I believe the database and cache are atomically updated.
    > Coherence does not support two-phase CacheStore
    > operations across multiple CacheStore instances. In
    > other words, if two cache entries are updated,
    > triggering calls to CacheStore modules sitting on
    > separate servers, it is possible for one database
    > update to succeed and for the other to fail.
    But once we have multiple CacheStore modules, then once one atomic write-through put succeeds, the database is already updated for that specific put. There is no way to roll back the database update (although we can roll back the cache update). Therefore, you may end up with partial commits in situations where multiple cache entries are updated across different CacheStore modules.
    If I use write-behind CacheStore modules, can I roll back entirely and avoid partial commits, since writes are not immediately propagated to the database? So in essence, write-behind cache stores are no different from local transactions... Is my understanding correct?
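
    To see why a failure on the second entry cannot undo the first in the write-through case, consider what a plain JDBC-backed CacheStore does: each store() call runs and commits its own database statement on whichever storage node owns the entry, with no global coordinator. A rough, hypothetical sketch (the table and SQL are invented for illustration):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.util.Collection;
        import java.util.Collections;
        import java.util.Iterator;
        import java.util.Map;
        import javax.sql.DataSource;
        import com.tangosol.net.cache.CacheStore;

        public class AccountCacheStore implements CacheStore {
            private final DataSource dataSource;   // assumed to be configured per storage node

            public AccountCacheStore(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            public void store(Object key, Object value) {
                // Autocommit connection: the row is persisted here and now, independently
                // of any other entry touched by the same cache transaction.
                try (Connection con = dataSource.getConnection();
                     PreparedStatement ps = con.prepareStatement(
                             "UPDATE account SET balance = ? WHERE id = ?")) {
                    ps.setObject(1, value);
                    ps.setObject(2, key);
                    ps.executeUpdate();
                } catch (Exception e) {
                    throw new RuntimeException(e);   // surfaces as a store failure for this entry only
                }
            }

            public void storeAll(Map entries) {
                for (Iterator it = entries.entrySet().iterator(); it.hasNext(); ) {
                    Map.Entry e = (Map.Entry) it.next();
                    store(e.getKey(), e.getValue());
                }
            }

            public void erase(Object key) { /* DELETE by key, omitted for brevity */ }
            public void eraseAll(Collection keys) { /* omitted for brevity */ }
            public Object load(Object key) { return null; /* SELECT by key, omitted */ }
            public Map loadAll(Collection keys) { return Collections.emptyMap(); }
        }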

  • Using ATMI and Tuxedo to institute distributed transactions across multiple DBs

    I am creating the framework for a given application that needs to ensure that data
    integrity is maintained spanning multiple databases, not necessarily within an
    instance of WebLogic. In other words, I need to basically have 2-phase-commit
    "internet transactions" between a given coordinator and n participants without
    having any real knowledge of their internal systems.
    Originally I was thinking of using WebLogic, but it appears that I may need to
    have all my particular data stores registered with my WebLogic instance. This
    cannot be the case, as I will not have access to that information for the other
    participating systems.
    I next thought I would write my own TP... ouch. Every time I get through another
    iteration I keep hitting the same issue of falling into an infinite loop trying
    to ensure that my coordinator and the set of participants were each able to perform
    the directed action.
    My next attempt has led me to the world of ATMI. Would ATMI be able to help me
    here? Granted, I am using Java, so I am assuming that I would have to use CORBA
    to make the calls, but will ATMI enable me to truly manage and create distributed
    transactions across multiple databases? Please, any advice at all would be greatly
    appreciated.
    Thanks
    Chris

    Andy,
    I will not have multiple instances of WebLogic, as I cannot enforce that
    the other participants involved in the transaction have WebLogic as
    their application server. That being said, I may not have the choice
    but to use WTC.
    Does this make more sense?
    Andy Piper <[email protected]> wrote in message news:<[email protected]>...
    "Chris" <[email protected]> writes:
    I am creating the framework for a given application that needs to ensure that data
    integrity is maintained spanning multiple databases not necessarily within an
    instance of weblogic. In other words, I need to basically have 2 phase commit
    "internet transactions" between a given coordinator and n participants without
    having any real knowlegde of their internal system.
    Originally I was thinking of using Weblogic but it appears that I may need to
    have all my particular data stores registered with my weblogic instance. This
    cannot be the case as I will not have access to that information for the other
    participating sytems.
    I don't really understand this. From 6.0 onwards you can do 2PC
    between WebLogic instances, so as long as the things you are calling
    are transactional (EJBs for instance) it should all work out fine.
    I next thought I would write my own TP...ouch. Everytime I get through another
    iteration I kept hitting the same issue of falling into an infinite loop trying
    to ensure that my coordinator and the set of participants were each able to perform
    the directed action.
    My next attempt has led me to the world of ATMI. Would ATMI be able to help me
    here. Granted I am using JAVA so I am assuming that I would have to use CORBA
    to make the calls but will ATMI enable me to truly manage and create distributed
    transactions across multiple databases. Please, any advice at all would be greatly
    appreciated.
    I don't see that ATMI would give you anything different. Transaction
    management in Tux is fairly similar to WebLogic (it was written by the
    same people). If you are trying to do interposed transactions
    (i.e. multiple coordinators) then WTC would give you this, but it is
    only a beta feature in WLS 6.1. Using Tux domain gateways would also
    give you interposed behaviour, but that would require you to write your servers
    in C or C++ ....
    andy

  • Control multiple updates and queries within one transaction in JPA

    Hi,
    I have a question regarding controlling multiple updates and queries within one transaction. We are using EclipseLink 2.3.1. With the code below, will I be able to:
    - have all insert, update, select queries committed in one transaction;
    - queryGetBalance will return the latest OrgBalance after update;
    - if one fails, everything rolls back.
    Thanks!
    Jeffrey
    PS: I realized that I cannot use em.getTransaction().begin() and em.getTransaction().commit(), since I am using JTA.
    =============
    @PersistenceContext(unitName="Test")
    EntityManager em;
    em.setFlushMode(FlushModeType.COMMIT);
    newTransaction.setAmount(1000);
    newTransaction.setType("check");
    em.persist(newTransaction);
    orgAudit.setUpdateUser("Joe");
    orgAudit.setUpdateTime(time);
    em.merge(orgAudit);
    Query queryUpdateBalance = em.createQuery("update OrgBalance o set o.balance = o.balance + :amount where o.orgId = :myOrgId");
    queryUpdateBalance.setParameter("amount", 1000);
    queryUpdateBalance.setParameter("myOrgId", 1234);
    queryUpdateBalance.executeUpdate();
    Query queryGetBalance = em.createQuery("select o from OrgBalance o where o.orgId = :myOrgId");
    queryGetBalance.setParameter("myOrgId", 1234);
    queryGetBalance.setHint("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH);
    queryGetBalance.setHint("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS);
    queryGetBalance.getResultList();
    em.flush();

    Yes, the operation will be in a single transaction, assuming you are using a JTA managed SessionBean and the code is part of a SessionBean method.
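
    For what it's worth, here is a minimal sketch of that arrangement (the bean and method names are invented; the body would be the code from the question):

        import java.util.List;
        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @Stateless
        public class BalanceService {
            @PersistenceContext(unitName = "Test")
            private EntityManager em;

            // Container-managed transaction (REQUIRED by default): the persist, merge,
            // update query and select all run in one JTA transaction, and any unchecked
            // exception rolls the whole thing back.
            public List<?> postTransactionAndGetBalance() {
                // ... the persist/merge/update/select code from the question goes here ...
                return null;   // placeholder
            }
        }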

  • IMDB Cache and transaction logs

    Hi,
    We have installed the IMDB Cache as part of a proof of concept. We want to cache a large Oracle table (approx 900 million rows) into a read-only local cache group, and we are finding that the amount of space taken by transaction logs during the initial cache load operation exceeds the amount of disk space available. Is there a way to prevent transaction logging during the initial cache load? A failure during the initial load is acceptable for us, as we can always reload the cache from the base Oracle table. We are using a datastore with 60GB of memory; however, the filesystem available is 273GB less the 120GB for the two datastore backing files, leaving approximately 150GB for transaction logs. To date we have only been able to load approximately 350 million rows before failing with:
    5056: The cache operation fails: error_type=<TimesTen Error>, error_code=<802>, error_message: [TimesTen]TT0802: Data store space exhausted
    The datastore attributes we are using are:
    [EntResPP]
    Driver=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so
    DataStore=/prod100/oradata/EntResPP
    LogPurge=1
    PermSize=60000
    TempSize=2000
    PLSQL=1
    DatabaseCharacterSet=AL32UTF8
    OracleNetServiceName=TRAQPP.world
    The command we use to load the cache is:
    load cache group ro commit every 256 rows parallel 4
    Thanks
    Mark

    The replication agent is only involved if you have AWT cache groups or if you are using replication. If this is a standalone datastore with a readonly cache group then it is not necessary (or possible) to run the replication agent.
    The error message you mentioned is nothing to do with transaction log space. What has happened is that the memory allocated to the permanent data region within the datastore (where table data, indexes etc. reside) has become full (this corresponds to PermSize in your DSN attributes). This means you have not allocated enough memory in TimesTen to hold all the data. Be aware that there is typically significant storage space 'inflation' when caching data. This can range from 2x through to 5x or more. So, if the table data occupies a real 10 GB in Oracle, it will require between 20 and 50 GB in TimesTen.
    It is possible to suppress logging while loading the cache data (or at least it used to be prior to TT 11.2.1 - I haven't tried this in 11.2.1 myself). You'd do this as follows:
    1. Stop all application connections etc. to the datastore, stop the cache and replication agents, and make sure that the datastore is unloaded from memory.
    2. Change the value for 'Logging' in the DSN attributes to 0 and connect to the DSN using ttIsql as the instance administrator user.
    3. Start the cache agent. From the ttIsql session, issue the command:
    load cache group ro commit every 0 rows;
    You have to use 0 (load the entire group as a single 'transaction') and you cannot use the 'parallel' clause.
    If this fails, you may have to manually delete any rows that were loaded, since TT cannot roll back.
    4. When the load has completed successfully, stop the cache agent and disconnect the ttIsql session.
    5. Change Logging back to 1 and reconnect as instance administrator from ttIsql. Restart the cache agent.
    6. Start applications etc. as required.
    Note that I would consider this at best a temporary workaround. Really, you need to ensure you have enough disk space to perform the load using logging. Of course, as I mentioned, the error you are getting right now is nothing to do with log disk space...
    Chris

  • Difference between Transaction database and relational database

    What's the difference between a transaction database and a relational database?

    'Transaction' refers to the usage of a database.  'Relational' refers to the way in which a given database stores data.
    A 'transaction database' (or operational database) could be relational, hierarchical, et al.  A transaction database supports business process flows and is typically an online, real-time system.  The way in which that data is stored is typically
    based on the application(s).  Companies often have multiple 'transaction databases'.
    An 'operational data store' (ODS) is an integrated view or compilation of transaction data.
    Then you get into data warehouse databases, where the transaction data is optimized for querying, reporting, and analysis activities.

  • Lookup-table and query-database do not use global transaction

    Hi,
    following problem:
    DbAdapter inserts data into DB (i.e. an invoice).
    Process takes part in global transaction.
    After the insert there is a transformation which uses query-database and / or lookup-table.
    It seems these XPath / XSLT functions are NOT taking part in the transaction and so we can not access information from the current db transaction.
    I know workarounds like using DbAdapter for every query needed, etc. but this will cost a lot of time to change.
    Is there any way to share the transaction between the DbAdapter insert AND lookup-table / query-database?
    Thanks, Best Regards,
    Martin

    One DBA contacted me and made this statement:
    "Import & export utilities are not independent of character set. All
    user data in text-related datatypes is exported using the character set
    of the source database. If the character sets of the source and target
    databases do not match, a single conversion is performed."
    So far, that does not appear to be correct.
    nls_characterset = AL32UTF8
    nls_nchar_characterset = UTF8
    Running on Windows.
    EXP produces a backup in WE8MSWIN1252.
    I found that if I change the setting of the NLS_LANG registry setting for my oracle home, the exp utility exports to that character set.
    I changed the nls_lang
    from AMERICAN_AMERICA.WE8MSWIN1252
    to AMERICAN_AMERICA.UTF8
    Unfortunately, the export isn't working right, although it did change character sets.
    I get a warning on a possible character set conversion issue from AL32UTF8 to UTF8.
    Plus, I get an EXP-00056: Oracle error 932 encountered
    ORA-00932: inconsistent datatypes: expected BLOB, CLOB, got CHAR.
    EXP-00000: Export terminated unsuccessfully.
    The schema I'm exporting with has exactly one procedure in it. Nothing else.
    I guess getting a new error message is progress. :)
    Still can't store multi-lingual characters in data tables.

  • Forms/Reports: Role of the Database cache and Web cache

    Hello oracle experts,
    I am running a purely Forms and Reports based environment (9iAS).
    My question are:
    a. Is it possible to use features from the Web Cache and
    Database Cache to boost the performance of my applications?
    b. Are all components monitorable from the OEM?
    Please guide me so that I can configure my OEM to monitor my
    Forms and Reports services.
    thanks in advance for your reply
    Kind regards
    Yogeeraj

    Hi BradW,
    The way this is supposed to be done in Web Cache is by keeping separate copies of a cached page for different types of browsers distinguished by User-Agent header.
    In the case of a cache miss, Web Cache expects origin servers to return the appropriate version of the page based on browser type, and the page from the origin server is just forwarded back to the browser.
    Here, if the page is cacheable, Web Cache retains a separate copy for each type of User-Agent header value.
    And when there is a hit on this cached page, Web Cache returns the version of page with the User-Agent header that matches the request.
    Check out the config screen titled "Header Association" for this feature.
    About forwarding requests to different origin servers based on User-Agent header value, Web Cache does not have such capability.
