Transactions across cache regions
Hello,
Can a single unit of work (transaction) be guaranteed to be coordinated across multiple cache regions (both with and without a CacheStore plugged into the nodes)?
Hi good stuff,
it depends on what you mean by transactions.
Coherence transactions (both via the local transaction API and the Cache Adapters) are available to gather operations on one or more caches together, to provide isolation of such transactions from one another with locking and version checking, and to commit changes to the cache conditionally, subject to version checking.
However, such operations are NOT atomically rolled back in case an exception is thrown during serialization or from a cache store.
So provided that your serialization/deserialization code is error-free, and you don't use synchronous writes or removes on cache stores (write-through operations), then yes, such transactions would either succeed or fail atomically in the prepare phase (due to version checking detecting a version mismatch).
However, when write-through cache stores are used, it is not guaranteed that the outcome would be an atomic success or atomic rollback, as each cache-store operation is a separate backend transaction of its own, and a failure writing a later entry cannot roll back an already committed write of an earlier processed entry, or of an entry successfully written on another cache node.
With write-behind, the writes resulting from put operations are delayed, so failures in those are not thrown back to the client (the transaction). The remove operations are handled synchronously, but I am not sure whether failures in those are thrown back to the client. My guess would be that they are not, considering that failures in other write-behind operations are NOT thrown back to the client. For operations whose failures are not thrown back to the client, from the point of view of the client transaction Coherence behaves as if no cache stores were used at all.
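To make the prepare-phase behaviour concrete, here is a toy, self-contained sketch of optimistic version checking across two caches. This is NOT the Coherence API — the class, fields, and method names are invented for illustration; it only shows why, absent cache stores, the unit of work either applies everywhere or fails as a whole during prepare:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of optimistic, version-checked commit across two caches:
// either all writes apply, or the whole unit of work fails in the
// prepare phase when a version mismatch is detected.
public class VersionedCommit {
    // Each cache maps key -> (value, version).
    public static class Entry {
        public Object value;
        public long version;
        public Entry(Object value, long version) { this.value = value; this.version = version; }
    }

    // Prepare phase: every entry the transaction read must still carry the
    // version it saw; otherwise another transaction got there first.
    static boolean prepare(Map<String, Entry> cache, Map<String, Long> readVersions) {
        for (Map.Entry<String, Long> e : readVersions.entrySet()) {
            Entry current = cache.get(e.getKey());
            if (current == null || current.version != e.getValue()) return false;
        }
        return true;
    }

    // Commit a unit of work spanning two caches: all writes apply, or none do.
    public static boolean commit(Map<String, Entry> cacheA, Map<String, Long> readA, Map<String, Object> writesA,
                                 Map<String, Entry> cacheB, Map<String, Long> readB, Map<String, Object> writesB) {
        if (!prepare(cacheA, readA) || !prepare(cacheB, readB)) return false; // atomic failure in prepare
        writesA.forEach((k, v) -> cacheA.merge(k, new Entry(v, 1), (old, n) -> new Entry(v, old.version + 1)));
        writesB.forEach((k, v) -> cacheB.merge(k, new Entry(v, 1), (old, n) -> new Entry(v, old.version + 1)));
        return true;
    }
}
```

A write-through cache store would break exactly this guarantee: each store write would commit to the backend inside the loop, so a later failure could no longer undo it.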
Best regards,
Robert
Similar Messages
-
Using ATMI and Tuxedo to institute distributed transactions across multiple DBs
I am creating the framework for a given application that needs to ensure that data
integrity is maintained spanning multiple databases not necessarily within an
instance of weblogic. In other words, I need to basically have 2 phase commit
"internet transactions" between a given coordinator and n participants without
having any real knowledge of their internal system.
Originally I was thinking of using Weblogic but it appears that I may need to
have all my particular data stores registered with my weblogic instance. This
cannot be the case as I will not have access to that information for the other
participating systems.
I next thought I would write my own TP... ouch. Every time I get through another
iteration I kept hitting the same issue of falling into an infinite loop trying
to ensure that my coordinator and the set of participants were each able to perform
the directed action.
My next attempt has led me to the world of ATMI. Would ATMI be able to help me
here? Granted I am using Java, so I am assuming that I would have to use CORBA
to make the calls but will ATMI enable me to truly manage and create distributed
transactions across multiple databases. Please, any advice at all would be greatly
appreciated.
Thanks
Chris
Andy
I will not have multiple instances of weblogic as I cannot enforce that
the other participants involved in the transaction have weblogic as
their application server. That being said, I may not have the choice
but to use WTC.
Does this make more sense?
Andy Piper <[email protected]> wrote in message news:<[email protected]>...
"Chris" <[email protected]> writes:
I am creating the framework for a given application that needs to ensure that data
integrity is maintained spanning multiple databases not necessarily within an
instance of weblogic. In other words, I need to basically have 2 phase commit
"internet transactions" between a given coordinator and n participants without
having any real knowlegde of their internal system.
Originally I was thinking of using Weblogic but it appears that I may need to
have all my particular data stores registered with my weblogic instance. This
cannot be the case as I will not have access to that information for the other
participating systems.
I don't really understand this. From 6.0 onwards you can do 2PC
between weblogic instances, so as long as the things you are calling
are transactional (EJBs for instance) it should all work out fine.
I next thought I would write my own TP...ouch. Everytime I get through another
iteration I kept hitting the same issue of falling into an infinite loop trying
to ensure that my coordinator and the set of participants were each able to perform
the directed action.
My next attempt has led me to the world of ATMI. Would ATMI be able to help me
here. Granted I am using JAVA so I am assuming that I would have to use CORBA
to make the calls but will ATMI enable me to truly manage and create distributed
transactions across multiple databases. Please, any advice at all would be greatly
appreciated.
I don't see that ATMI would give you anything different. Transaction
management in Tuxedo is fairly similar to WebLogic (it was written by the
same people). If you are trying to do interposed transactions
(i.e. multiple co-ordinators) then WTC would give you this but it is
only a beta feature in WLS 6.1. Using Tuxedo domain gateways would also
give you interposed behaviour but would require you to write your servers
in C or C++ ...
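For readers unfamiliar with the protocol being discussed, the coordinator side of two-phase commit can be sketched generically. The `Participant` interface below is purely illustrative — it is not a WebLogic, Tuxedo, or ATMI API:

```java
import java.util.List;

// Generic sketch of two-phase commit: the coordinator asks every participant
// to prepare (vote); only a unanimous "yes" leads to commit, and any "no"
// causes a rollback everywhere.
public class TwoPhaseCommit {
    public interface Participant {
        boolean prepare();   // phase 1: persist enough state to commit later, then vote
        void commit();       // phase 2a: make the prepared changes durable
        void rollback();     // phase 2b: undo the prepared work
    }

    public static boolean run(List<Participant> participants) {
        for (Participant p : participants) {
            if (!p.prepare()) {                        // any "no" vote aborts everyone
                participants.forEach(Participant::rollback);
                return false;
            }
        }
        participants.forEach(Participant::commit);     // unanimous "yes": commit everywhere
        return true;
    }
}
```

The infinite-loop problem Chris ran into is exactly what the prepare phase solves: participants promise durability before anyone commits, so the coordinator never has to re-check them afterwards (failure handling and coordinator recovery logging are omitted here).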
andy
-
Using ATMI and Tuxedo for distributed transactions across multiple DBs
I am creating the framework for a given application that needs to ensure that data
integrity is maintained spanning multiple databases not necessarily within an
instance of weblogic. In other words, I need to basically have 2 phase commit
"internet transactions" between a given coordinator and n participants without
having any real knowledge of their internal system.
Originally I was thinking of using Weblogic but it appears that I may need to
have all my particular data stores registered with my weblogic instance. This
cannot be the case as I will not have access to that information for the other
participating systems.
I next thought I would write my own TP... ouch. Every time I get through another
iteration I kept hitting the same issue of falling into an infinite loop trying
to ensure that my coordinator and the set of participants were each able to perform
the directed action.
My next attempt has led me to the world of ATMI. Would ATMI be able to help me
here? Granted I am using Java, so I am assuming that I would have to use CORBA
to make the calls but will ATMI enable me to truly manage and create distributed
transactions across multiple databases. Please, any advice at all would be greatly
appreciated.
Thanks
Chris
-
Caching regions on global page zero
Hi all,
I'm trying to understand some use-cases to why caching global page regions wouldn't be viable. The documentation on region caching doesn't help me here, nor does it mention its availability on global pages - not that I saw, anyway.
It was covered here in the forums 2 years ago (https://forums.oracle.com/forums/thread.jspa?threadID=2214451), but two years is a long time.
I've been trying to eke out the best performance in an application, and I'm trying to identify regions that could be cached under certain circumstances.
I was a little surprised to find that global page regions may not be cached, even with bug 14744294 addressed in 4.2.1 for global pages that aren't "0".
Consider a dynamic list in the sidebar acting as a menu, deployed on the global page. Would this be a fair candidate, except perhaps for a "current page" sub-template?
I guess since many/most of these regions would have some sort of APP_PAGE_ID dependency, it's not worth caching?
Anyone have anything of interest to add to this discussion?
Cheers
Scott
Louis-Guillaume,
Good question. We have asked ourselves this and are considering removing the restriction in 4.0. I think it initially had to do with preventing unwanted side effects and anomalies, although none of us can recall the details right now. We will have to carefully look at ways to make this something developers can use easily while preventing strange results.
First of all, caching of page 0 itself doesn't make sense; you want to be able to cache regions on page 0. Now, say your page 0 has two regions, P0_CACHED and P0_DYN (one cached, one dynamic), and your page 10 has region P10_CACHED. When page 10 is rendered, you'll get:
P0_CACHED
P0_DYN
P10_CACHED
That looks okay, you get one cached region from page 0, one dynamic region from page 0, and one cached region from page 10. Of course you have controls with which to purge any individual region from the cache so you can cause P0_CACHED to be refreshed whenever you like.
But say page 20 is a cached page. When it renders you'll get:
P0_CACHED <-- not from the page 0 region cache but from the page 20 page cache, regardless of the "stale-ness" of the page 0 cache
P0_DYN <-- not rendered dynamically from page 0 but retrieved as part of the page 20 page cache. But this content may be different from the content for the same page 0 region displayed on page 10 one second ago or one second from now.
P20_DYN <-- from the page cache as is normal for a dynamic region on a cached page
Hardly a thorough treatment, I realize. Just wanted you to know some of the aspects we have to consider. We will also have to read this thread again to come back up to speed: "V3 Caching - any more info?".
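A toy simulation of the interaction described above (not APEX internals — every name here is invented for illustration): once a whole page is cached, the stored copy is served as-is, so both a fresher region cache and the dynamic region on page 0 are bypassed:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrates why a cached page masks region-level caching: the page cache
// stores the fully rendered output, so region caches and dynamic regions
// are never consulted on a page-cache hit.
public class PageVsRegionCache {
    Map<String, String> regionCache = new HashMap<>(); // per-region cached HTML
    Map<Integer, String> pageCache = new HashMap<>();  // fully rendered pages

    String renderRegion(String name, boolean cached, String freshHtml) {
        if (!cached) return freshHtml;                                // dynamic region: render now
        return regionCache.computeIfAbsent(name, k -> freshHtml);     // cached region: reuse
    }

    String renderPage(int id, boolean pageCached, String p0CachedHtml, String p0DynHtml) {
        if (pageCached && pageCache.containsKey(id)) {
            return pageCache.get(id);       // whole stored page wins, staleness notwithstanding
        }
        String page = renderRegion("P0_CACHED", true, p0CachedHtml)
                    + "|" + renderRegion("P0_DYN", false, p0DynHtml);
        if (pageCached) pageCache.put(id, page);
        return page;
    }
}
```

On a second render of a cached page, the dynamic page-0 region's content is whatever it was when the page was first cached — exactly the P0_DYN anomaly Scott describes.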
Scott
-
ST22 timeout for all LC-related transactions; liveCache start/stop not working from LC10
Hi Team
we are getting an ST22 timeout for all LC-related transactions; liveCache start/stop stopped working from LC10.
LC version 7.9
OS AIX
SAP SCM 7
SDBVERIFY giving the following error
Checking package Redist Python
Checking package Loader
Checking package ODBC
Checking package Messages
Checking package JDBC
Checking package DB Analyzer
ERR: Failed
ERR: Checking installation "Legacy" failed
ERR: Group of directory /sapdb/data/config/install changed [0 =>
sdbregview -l is showing good.
Any idea what might have gone wrong?
I am trying to use sdbverify -repair_permissions, but I am not sure about the exact syntax to use.
It is not related to the timeout parameter; we tested with different timeout values, but still get the same error.
thanks
Kishore Ch
Hello Kishore,
you could check the sizing of the liveCache data.
* Report /SAPAPO/TS_LCM_REORG_SNP checks the SNP planning areas for superfluous objects.
* Delete old/temporary APO data.
* Report /SAPAPO/TS_LCM_REORG checks time series for superfluous objects.
If you didn't create the planning versions, copies of planning versions, or data loads to liveCache, then create an SAP message to have your system checked regarding the data-area usage.
If you have long-running APO transactions, the performance of the SCM system has to be checked.
If you have a bottleneck in liveCache and cannot solve the case by yourself, create an SAP message to BC-DB-LVC and get SAP support.
Best regards, Natalia Khlopina
-
Distributed transactions across RMI-IIOP client to RMI-IIOP server do not work
Hi,
Based on the links below:
http://e-docs.bea.com/wls/docs61/jta/trxrmi.html#1018506
http://e-docs.bea.com/wls/docs61/jta/gstrx.html#1067532
It appears that it is possible to have distributed transactions across RMI-IIOP
clients and RMI-IIOP applications (servers).
I followed up the "Transactions Sample RMI Code" section but it appears that
the transaction context is not propagated from client to server. I am also
surprised by the note:
Note: These code fragments do not derive from any of the sample applications
that ship with WebLogic Server. They merely illustrate the use of the
UserTransaction object within an RMI application.
The above note suggests that there is no sample code available.
Is there anyone who successfully had RMI-IIOP applications (servers)
participating in distributed transactions?
Is there any sample code that illustrates RMI-IIOP applications (servers)
participating in distributed transactions?
If anyone thinks that this should work I will post my code that does not
work.
Regards,
Dan Cimpoesu
But if you look at the diagram:
http://e-docs.bea.com/wls/docs61/jta/gstrx.html#1040200
it suggests that transactional context is passed from clients to RMI-IIOP
servers.
Am I wrong?
Dan
"Andy Piper" <[email protected]> wrote in message
news:[email protected]..
"Dan Cimpoesu" <[email protected]> writes:
Transactions over IIOP are not supported or implemented in WLS 6.1 or
previous. This is a feature of WLS 7.0. In 7.0 we implement OTS.
andy
Hi,
Based on the links below:
http://e-docs.bea.com/wls/docs61/jta/trxrmi.html#1018506
http://e-docs.bea.com/wls/docs61/jta/gstrx.html#1067532
It appears that it is possible to have distributed transactions across
RMI-IIOP
clients and RMI-IIOP applications (servers).
I followed up the "Transactions Sample RMI Code" section but it appears that
the transaction context is not propagated from client to server. I am also
surprised by the note:
Note: These code fragments do not derive from any of the sample applications
that ship with WebLogic Server. They merely illustrate the use of the
UserTransaction object within an RMI application.
The above note suggests that there is no sample code available.
Is there anyone who successfully had RMI-IIOP applications (servers)
participating in distributed transactions?
Is there any sample code that illustrates RMI-IIOP applications (servers)
participating in distributed transactions?
If anyone thinks that this should work I will post my code that does not
work.
Regards,
Dan Cimpoesu
-
CURSOR across multiple Regions
Hello,
I have a page that needs to display around 300 fields with custom HTML mixed in. Instead of creating all the page items, I want to just create a CURSOR and use htp.print. My problem is that it's a LOT of code. I want to break it apart across multiple regions. Is there a way to OPEN a cursor so that multiple regions can use it?
Or, back to the PHP days, is there an ARRAY that this can be loaded into and printed like a cursor when needed?
Is there a way to OPEN a cursor so that multiple regions can use it?
You can have a process run before all regions to insert records into a collection (by a package call or PL/SQL region).
Then refer to this collection in your region(s).
<li>If you want to separate or split the content across multiple "regions", you can generate them in the same PL/SQL code that you posted.
For instance, you can have a single PL/SQL-based region without any template which generates HTML that renders like multiple regions.
In the Source, you can generate the HTML code for the region(s) in addition to printing out the data.
An APEX region normally consists of div or table elements given some styling using built-in classes (matching the theme). So if you generate the same HTML, it will render the same as a region. You can copy the HTML content of any region template (or from the rendered page) and add that within the "htp.p" calls.
<li>A third option would be to enclose the different sections of your data in some identifiable HTML element and use JavaScript to move them under the appropriate region.
Having said this, I have to agree with the replies above: there could be a better way of displaying your data than generating the entire HTML yourself. Since you mention using a cursor, I guess multiple SQL report regions would be a good starting point, as they are used for displaying lots of records. You could tweak the look'n'feel with some kind of customized template, rather than redo the whole thing yourself.
-
Any plan to support tightly coupled transactions across domains?
Hello,
is there any plan to support tightly coupled XA transactions across domains?
Our application has a few global transactions that span multiple domains. One domain updates a record in the Oracle DB. Later on in the same transaction the second domain retrieves the same record. But because of the loose coupling, the second domain cannot see the changes made by the first domain.
Thanks...
Roger
PS: In some cases the second domain is actually a WLS domain. Because the loose coupling is a limitation of the Tuxedo Domain Gateway and WTC uses GWTDOMAIN, one could assume that once Tuxedo supports tightly coupled transactions across domains, WTC would also support it.
Hi Roger,
We don't have plans at the moment to solve this problem, although if it is a major problem for you, I suggest you contact Oracle support and ask them to enter an enhancement request. In general most customers have separate databases for each domain or application, thereby not normally running into this problem. Also, changing this in Tuxedo doesn't necessarily mean it would be changed in WLS, as they use different transaction managers and the problem is more than a TDomain protocol issue. But generally when we make enhancements like this we try to keep GWTDOMAIN and WTC on par with one another.
Regards,
Todd Little
Oracle Tuxedo Chief Architect -
Maintaining Transactions Across JSP Pages
Hi,
I have a multi-page Registration (3 steps). On each step the data submitted is taken
to the database via an EJB component (Session Bean). How do I maintain a transaction
across these JSP pages (i.e. either in the EJB or in the JSP) so that the data
in the database is consistent? So if there is a problem in the 3rd step, the data
submitted in the first two steps should be rolled back.
Can I use a stateful session bean, which will maintain a database connection
created during the first step, so that I can use the same connection for steps
2 & 3? In the first step, after getting the database connection, I will begin a
transaction and insert the first part of the data; then this connection will be
maintained by the stateful session bean and used for steps 2 & 3. At the end I will
commit the transaction. Will this work?
How do I maintain a transaction across multiple pages? Are there any standards
for this scenario where the transaction is maintained across multiple pages? I
cannot carry data across the JSP pages because of the complexity of the data collected.
Any help appreciated.
Regards
-MohanRaj
You cannot and should not do it the way that you are proposing. Keeping a transaction
open across any interaction with the user is a big mistake. Transactions are scarce
resources. They need to be short. You will need to collect the data from the three
pages in the servlet itself. You can use the HTTPSession, or hidden fields in
the forms. Only after all of the data is collected should you begin a transaction
and update the database. Alternatively, you could store the partial data in a
temporary database table, and move it to a permanent table when all of the data
has been provided. -
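The collect-then-commit approach recommended in the reply above can be sketched as follows; `Database` and the method names are stand-ins for illustration, not a real EJB or JDBC API:

```java
import java.util.HashMap;
import java.util.Map;

// Buffer each wizard step in the HTTP session (here just a Map) and open a
// transaction only once, when the final step arrives: the transaction stays
// short, and a failure rolls back all three steps together.
public class RegistrationFlow {
    private final Map<String, String> session = new HashMap<>(); // stands in for HttpSession

    public void submitStep(int step, String data) {
        session.put("step" + step, data);        // no DB work, no open transaction
    }

    // Called on the last step: one short transaction writes everything.
    public boolean finish(Database db) {
        db.begin();
        try {
            for (Map.Entry<String, String> e : session.entrySet()) {
                db.insert(e.getKey(), e.getValue());
            }
            db.commit();
            return true;
        } catch (RuntimeException ex) {
            db.rollback();                       // steps 1-3 roll back as one unit
            return false;
        }
    }

    public interface Database { void begin(); void insert(String k, String v); void commit(); void rollback(); }
}
```

No user think-time ever sits inside the begin/commit window, which is the point of the reply: transactions are scarce and must be short.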
Maintaining Transaction Across Multiple JSP Pages
Hi,
I have a multi-page Registration (3 steps). On each step the data submitted is taken
to the database via an EJB component (Session Bean). How do I maintain a transaction
across these JSP pages so that the data in the database is consistent? If there
is a problem in the 3rd step, the data submitted in the first two steps should
be rolled back.
How do I maintain a transaction across multiple pages?
Regards
-MohanRaj
It will take from several minutes to a long time for a user to complete a multiple page registration process. Do you really have enough database connections that each concurrent user can hold on to one?
Usually you cannot open more than 50-200 connections to a database at any given time.
Remember that some users will abandon the registration process. Can you afford to have their sessions hold a db connection until the session times out?
Consider changing your data model so you can run and commit a transaction at the end of processing the form data from each page. Immediately after the commit, give the db connection back to the pool inside the app server.
It can be as simple as having a column in the database of type enum, with a set of values that shows how far the registration process has progressed.
BTW. if you absolutely have to hold on to the db connection, you can stuff it into a session scoped attribute and it will be available on all pages. -
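The status-column idea in the reply above can be sketched like this; the class name and enum values are illustrative, not from any framework:

```java
// Sketch of the per-page commit approach: each page's data is committed
// immediately in its own short transaction, and a status column records how
// far the registration has progressed, so nothing holds a connection open.
public class RegistrationRecord {
    public enum Status { STEP1_DONE, STEP2_DONE, COMPLETE }

    private Status status;

    public RegistrationRecord() { this.status = Status.STEP1_DONE; } // step 1 committed immediately

    // Advance one step at a time; each advance corresponds to one short,
    // already-committed transaction rather than one long open transaction.
    public void advance() {
        switch (status) {
            case STEP1_DONE: status = Status.STEP2_DONE; break;
            case STEP2_DONE: status = Status.COMPLETE; break;
            default: throw new IllegalStateException("registration already complete");
        }
    }

    public boolean isComplete() { return status == Status.COMPLETE; }
    public Status getStatus() { return status; }
}
```

Abandoned registrations simply stay in an intermediate status and can be cleaned up by a periodic job, instead of pinning a pooled connection.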
Hi,
Is it safe to distribute a single transaction across multiple threads? Currently, I have a single thread performing adds/deletes/updates to the db in a single monolithic transaction. This pretty much tops out at about 4-5k/sec. Using multiple writer threads to perform the same operations by dividing them up as jobs across threads while using the same transaction... is this a feasible idea? If so, what performance improvements can I expect?
Thanks
Nikunj.
Hi Nikunj,
Yes, you may use a transaction across multiple threads. The place where we synchronize is on lock acquisition (and lock release at commit time). Your performance will depend on the number of processors and how distributed the keys are (i.e. if you're not operating on conflicting keys in the same transaction then you'll get better concurrency). If the lock table becomes a bottleneck, we have a parameter, je.lock.nLockTables, that can be used to split the lock table into multiple lock tables. The doc says:
Number of Lock Tables. Set this to a value other than 1 when
an application has multiple threads performing concurrent JE
operations. It should be set to a prime number, and in general
not higher than the number of application threads performing JE
operations.
You may hit serialization issues on the Transaction object before you do on the lock table.
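The lock-table striping Charles mentions (je.lock.nLockTables) can be illustrated with a small sketch; this shows the general idea only, not BDB JE's actual implementation:

```java
// Instead of one global lock table that every thread synchronizes on, keys
// are hashed across several tables, so lock requests for unrelated keys no
// longer serialize on a single monitor.
public class StripedLockTables {
    private final Object[] tables;

    public StripedLockTables(int nLockTables) {   // the doc above suggests a prime number
        tables = new Object[nLockTables];
        for (int i = 0; i < nLockTables; i++) tables[i] = new Object();
    }

    // Two keys contend with each other only if they hash to the same stripe.
    public Object tableFor(Object key) {
        return tables[Math.floorMod(key.hashCode(), tables.length)];
    }

    public void withLock(Object key, Runnable critical) {
        synchronized (tableFor(key)) { critical.run(); }
    }
}
```

This is also why "how distributed the keys are" matters: threads touching keys in different stripes proceed in parallel, while conflicting keys still serialize.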
Charles Lamb
-
Oracle 11g: How to ensure the same transaction across several BPEL calls?
How do we ensure transaction semantics across invocations of several BPEL services with database operations (insert, update)? We are using the transaction REQUIRED property in all of our BPELs. We are using web services and JCA to access and modify the same row. Our code uses a combination of JCA, Spring beans, entity services, and EJBs in these BPELs. The code could be more efficient, but at this point we have no option but to fix the transaction issue in this code. So our question is: how do we ensure the same transaction context is used in all these BPELs to insert/update the same row? We have tried to set GetUnitOfWork in the JCA Adapter but it did not provide any solution. Apart from setting the transaction in BPEL to REQUIRED and the JCA Adapter to use a unit of work, we are out of ideas. Any help is much appreciated. We are using Oracle SOA Suite 11g, version 11.1.1.5. --chary
Hi,
I can help you if you can describe the processes.
There can be some difficulties when you try to use the same transaction especially when you use many DB transactions & BPEL processes.
Using a unit of work only might not be enough.
Thanks
Arik -
Enabling PPR Event across the regions
Hi All
I dearly need all your thoughts on the following requirement.
I have 2 items in 2 different regions referring to different COs. For instance, the first region has a text item and the second region has a messageChoice item. Now if I change the value of the first item, the value in the second region's messageChoice item should be populated. How can I enable a PPR event across 2 items referring to different controller files? Any input on this will help me a lot in proceeding further.
Thanks
Praveen
You can use pageContext.putSessionValue and getSessionValue to get the value.
Yes, you can change the parameters you pass. In the controller you can call the page with the help of pageContext.setForwardUrl.
You can just check a condition: if A, then one piece of code will execute [with 1 parameter], else another piece of code will execute [with 2 parameters].
Thanks
--Anil -
View Material Transactions across several Inventory Organizations
The client has five different inventory orgs onsite (only one OU). There are transactions within and between orgs on a daily basis.
Oracle "Material Transactions" form displays inventory transactions only for a specific inventory organization.
Users have requested that they would like to be able to view Material Transactions for all five inventory orgs without having to change orgs each time.
Would it be possible to use a form personalization to do this? Or a custom report?
Has anyone come across this before? Any help would be appreciated.
Hi,
Form personalization may not be possible for this requirement; please create a custom report.
Thanks
Karthik.