Implement caching across multiple transactions in EJB 3.0

Hi,
I need to implement caching of entities using JPA in EJB 3.0. I am using JBoss 4.2.x. The requirement is that the entities remain cached across different method calls.
Regards,
Deepak Dabas

Good luck with that. There are likely caching solutions that work with JPA; OSCache springs to mind as far as caching is concerned.
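Since JBoss 4.2 bundles Hibernate as its JPA provider, one common way to keep entities cached across separate transactions and method calls is Hibernate's second-level cache. Below is a minimal sketch, assuming EHCache is on the classpath and the persistence.xml properties shown in the comment are added; the Customer entity is a hypothetical example, not from the original question.

// Enable the second-level cache in persistence.xml (Hibernate 3.2 / JBoss 4.2 era):
//   <property name="hibernate.cache.use_second_level_cache" value="true"/>
//   <property name="hibernate.cache.provider_class"
//             value="org.hibernate.cache.EhCacheProvider"/>
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // instances stay cached across transactions
public class Customer {

    @Id
    private Long id;

    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

With this in place, loading the same Customer in different transactions (or session bean method calls) hits the second-level cache instead of the database, until the cached entry is evicted or invalidated.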

Similar Messages

  • Tree must be seen across multiple transactions.

    Hi,
    After considerable time I am able to develop a tree in a docking container and populate it.
    Now the requirement is that this tree must stay alive across multiple transactions.
    Yes, I am calling the docking container across two transactions, so the docking container is present. How can I make it so that the contents of the tree are not changed when jumping from one transaction to the other?
    Is there any way to hold an instance of the object in memory, so I can hold the tree instance? Something like export/import?
    All suggestions are welcome.
    Regards.

    Use SHMA (shared memory areas).

  • Advice needed on designing a schema to accommodate multiple transaction tables.

    Hi,
    The attached image shows my current schema. It consists of three transaction tables, a product table and a calendar table.
    - Background -
    The product table 'Q1 Data Set' contains all unique sales. In addition it contains a number of columns by which I will later filter my pivot tables (e.g. whether the customer of the order is new/returning). This table also contains a column named 'DateOrdered', the date the order was originally placed (but not paid).
    Each sale that is paid can be paid either in a single transaction or across multiple transactions of different transaction types. An example of a sale paid in multiple parts would be an order that has three transactions: one online (table 'trans_sagepay'), one over the phone (table 'trans_epdq') and another by card (table 'trans_manual'). Furthermore, there can be more than one transaction of each type for a sale.
    I have created measures which total the sales in each transaction table. Each transaction has a 'transaction_date', which is the date of that individual transaction.
    The calendar is simply a date table that has some friendly formatted columns for laying out pivot tables. An example column is FiscalMonthAbbrv, which displays months similar to '(04) - January' to accommodate our fiscal year.
    - Problem -
    My problem is that I need the ability to create some tables that have Date Ordered as the rows (listed by Year > Month), and I need to produce other tables that have Transaction Date as the rows.
    Date Ordered works fine; however, the problem comes when I try to create a table based on the transaction date. With the current model seen in the attached image I cannot do it, because the transactions have a relationship to Q1 Data Set and that table has the relationship with the Cal_Trans table. What happens in this scenario is that whenever I set the rows to FiscalMonthAbbr, the values displayed are the transactions attributed not to the transaction date but to the date ordered. To explain further: if I have an order A with a DateOrdered of 01/01/2014, but the transaction of £100 for that order was made later, on 05/01/2014, that £100 is incorrectly attributed to 01/01/2014.
    To clarify the type of table I am aiming for, see the mock-up below; I however NEED the ability to filter this table using columns found in Q1 Data Set.
    How can I make a schema so that I can use both DateOrdered and TransactionDate? I cannot combine all three transaction tables into one, because each transaction type has columns unique to that specific type.

    Thanks for your suggestions. At the moment I don't have time to prepare a non-confidential copy of the data model; however, I've taken one step forward and one step back!
    First, to clarify: to calculate sales of each transaction type I have created the following measures (I've given them friendly names):
    rev_cash
    rev_online
    rev_phone
    I then have a measure called rev_total which sums together the above measures. This allows me to calculate total revenue, but also to break it down by transaction type.
    With this in mind I revised the schema based on Visakh's original suggestion to look like this:
    Using this I was able to produce a table which looked like the one below:
    There were two issues with this:
    1. If I add the individual measures for each transaction type I get no errors, but as soon as I add the 'Total Sales' measure at the end of the table I get an error: "Relationship between tables may be needed". Seemingly, however, the numbers still calculate as expected - what is causing this error and how do I remove it?
    2. I CAN in this scenario filter by 'phd', which is a column in the Q1 Data Set table, and it works as expected. I cannot however filter by all columns in this table; an example would be 'word count'. 'Word Count' is an integer column; each record in the Q1 Data Set table has a value set for this column. I would like to take the column above and add a new measure called 'Total Word Count' (which I have created) which will calculate the total number of words in that monthly period. When I add this, however, I get the same relationship error as above, and it displays the word count total for the entire source table for every row of the pivot table.
    How can I get this schema working so that I can filter by word count and other columns from the product table? It is confusing me how I can filter by one column but not by another in the same table.
    Also, I don't fully understand how I would add a second date table or how it would help my issues.
    Thanks very much for your help.

  • TopLink Essentials Cache and Multiple Application Servers

    Hello,
    I'm developing a new servlets/services application in Java using Tomcat and playing around with TopLink Essentials. Is it possible, when using multiple servers, to expire cached objects? E.g. I update user account info on server 1, but servers 2 and 3 still have old data. The documentation and blogs I have read seem to indicate that you either have to force a refresh of the object or set up readAllQueries to go direct to the db (which rather defeats the purpose of having a cache?) for fresh data. Though I agree there are some places where up-to-the-moment data is not always required, building a system that scales with expiry caching across multiple app servers seems like something TopLink Essentials SHOULD be able to do.
    Also, is there any word on when the new rev of TopLink Essentials will be out? I see posts about the 11g preview, but that's regular TopLink.
    Thanks in advance!

    There are several caching options in TopLink Essentials to handle stale data.
    Some of the settings are available through properties in the persistence.xml, but for most you will need to use a DescriptorCustomizer or SessionCustomizer and use the API of ClassDescriptor (refer to the JavaDocs for additional info).
    Caching options include:
    - Cache type (weak, hard, soft, none): a weak cache will decrease stale data.
    - Isolated (not shared): you can set the descriptor to be isolated, or the cache to be not shared, to avoid caching the class.
    - Refresh: you can enable refreshing at the class or query level.
    A ClassDescriptor does have an invalidation policy, but the policies for invalidating based on a time-to-live or time-of-day were not ported from TopLink to TopLink Essentials; however, you could write your own pretty easily.
    If you upgrade to TopLink 11g (preview), which you can download and use under the Oracle OTN license, then you have support for cache invalidation and cache coordination. This functionality is also available in the Eclipse EclipseLink project, currently in incubation.
    ---
    James Sutherland
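    As a rough illustration of the DescriptorCustomizer approach described above: the sketch below uses EclipseLink package names (the TopLink Essentials equivalents live under the oracle.toplink.essentials packages); the Account entity and the 60-second time-to-live are hypothetical, and per the note above the time-to-live policy itself is only available once you move to TopLink 11g / EclipseLink.

    import org.eclipse.persistence.config.DescriptorCustomizer;
    import org.eclipse.persistence.descriptors.ClassDescriptor;
    import org.eclipse.persistence.descriptors.invalidation.TimeToLiveCacheInvalidationPolicy;

    public class AccountCachingCustomizer implements DescriptorCustomizer {
        public void customize(ClassDescriptor descriptor) throws Exception {
            // Always refresh cached instances when they are queried.
            descriptor.setShouldAlwaysRefreshCache(true);
            // Or expire cached instances 60 seconds after they are read
            // (the invalidation-policy route mentioned in the answer).
            descriptor.setCacheInvalidationPolicy(
                    new TimeToLiveCacheInvalidationPolicy(60000));
        }
    }

    The customizer is registered per entity through a persistence-unit property, e.g. <property name="eclipselink.descriptor.customizer.Account" value="AccountCachingCustomizer"/>.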

  • Clear template cache on multiple CF instances in a single action?

    I'm running several instances of ColdFusion on the same box with template caching turned 'on' for each instance. I have it set up where one instance will replicate file changes to the other instances, but with caching turned on, I currently have to log in to each CF Admin instance and click the "Clear Template Cache Now" button for the file changes to appear. As I add more instances, this will become an increasingly tedious process.
    Does anyone have or know of a way to clear the template cache across multiple CF instances in a single action?

    cfJarrod wrote:
    > I'm assuming that using clearTrustedCache() will only clear the cache for a
    > single CF instance and not all instances on the server. Is that in fact true?
    Yes.
    > Ideally I'd like to run a single script that will clear the cache for all
    > instances.
    The Admin API is an application programming interface: you are supposed to write your own scripts that call its methods.
    Jochem
    Jochem van Dieten
    Adobe Community Expert for ColdFusion

  • Stickiness across multiple VIPs

    I was wondering if anyone knows if it is possible to implement stickiness across multiple VIPs. In other words, if a client hits a specific VIP on a specific TCP port, then hits a different VIP on a different TCP port, can stickiness be configured to stick that client to the same server?
    Thanks!

    Hi,
    You're talking about stickiness on the CSM? It's possible :), but be careful...
    Example:
    vserver1: vip1:port1
    rserver1: rserver_ip1:portX
    vserver2: vip2:port2
    rserver2: rserver_ip1:portY
    and one sticky group for both server farms (one for rserver1, the second for rserver2).
    The first session is established to vserver1:
    - a sticky entry is recorded for rserver1 (rserver_ip1:portX)
    The second session is established to vserver2:
    - the sticky entry exists (the client matched the sticky rule)
    - the session is not load-balanced, but is connected to the rserver by the sticky record (!); in other words, the session is directed to rserver_ip1:portX and not to :portY
    ^^^ The answer to your question: it's possible :), but be careful. In other words, it works if the service on the server side is running on the same real server and port (portX = portY); otherwise, use a different sticky group for the second server farm.
    Does that answer your question?
    Regards,
    Martin

  • I want to use Spiceworks to report on the status of BitLocker across multiple customer sites

    I want to use Spiceworks to report on the status of BitLocker across multiple customer sites and domains. I didn't want to use MBAM, as it is complicated, and I just wanted to report whether BitLocker was enabled or not.
    There have been a lot of discussions, but I was looking for some thoughts and experiences that were concise on the whole process, i.e. how to implement Spiceworks across multiple sites and then into the reporting. Or is that too much to ask in one discussion?
    Thanks!
    This topic first appeared in the Spiceworks Community

    If you are talking about the ID in Settings > iCloud, you will have to get the previous owner to give you the ID and password, or get them to remove the iPod from their account.
    Find My iPhone Activation Lock: Removing a device from a previous owner's account
    Also see:
    iCloud: Find My iPhone Activation Lock in iOS 7

  • Using ATMI and Tuxedo to institute distributed transactions across multiple DBs

    I am creating the framework for a given application that needs to ensure that data integrity is maintained spanning multiple databases, not necessarily within an instance of WebLogic. In other words, I need to basically have two-phase commit "internet transactions" between a given coordinator and n participants, without having any real knowledge of their internal systems.
    Originally I was thinking of using WebLogic, but it appears that I may need to have all my particular data stores registered with my WebLogic instance. This cannot be the case, as I will not have access to that information for the other participating systems.
    I next thought I would write my own TP... ouch. Every time I get through another iteration I keep hitting the same issue of falling into an infinite loop trying to ensure that my coordinator and the set of participants were each able to perform the directed action.
    My next attempt has led me to the world of ATMI. Would ATMI be able to help me here? Granted, I am using Java, so I am assuming that I would have to use CORBA to make the calls, but will ATMI enable me to truly manage and create distributed transactions across multiple databases? Please, any advice at all would be greatly appreciated.
    Thanks
    Chris

    Andy,
    I will not have multiple instances of WebLogic, as I cannot enforce that the other participants involved in the transaction have WebLogic as their application server. That being said, I may not have the choice but to use WTC.
    Does this make more sense?
    Andy Piper <[email protected]> wrote in message news:<[email protected]>...
    "Chris" <[email protected]> writes:
    > I am creating the framework for a given application that needs to ensure that
    > data integrity is maintained spanning multiple databases, not necessarily
    > within an instance of WebLogic. In other words, I need to basically have
    > two-phase commit "internet transactions" between a given coordinator and n
    > participants, without having any real knowledge of their internal systems.
    > Originally I was thinking of using WebLogic, but it appears that I may need
    > to have all my particular data stores registered with my WebLogic instance.
    > This cannot be the case, as I will not have access to that information for
    > the other participating systems.
    I don't really understand this. From 6.0 onwards you can do 2PC between WebLogic instances, so as long as the things you are calling are transactional (EJBs for instance) it should all work out fine.
    > I next thought I would write my own TP... ouch. Every time I get through
    > another iteration I keep hitting the same issue of falling into an infinite
    > loop trying to ensure that my coordinator and the set of participants were
    > each able to perform the directed action.
    > My next attempt has led me to the world of ATMI. Would ATMI be able to help
    > me here? Granted, I am using Java, so I am assuming that I would have to use
    > CORBA to make the calls, but will ATMI enable me to truly manage and create
    > distributed transactions across multiple databases? Please, any advice at
    > all would be greatly appreciated.
    I don't see that ATMI would give you anything different. Transaction management in Tux is fairly similar to WebLogic (it was written by the same people). If you are trying to do interposed transactions (i.e. multiple co-ordinators) then WTC would give you this, but it is only a beta feature in WLS 6.1. Using Tux domain gateways would also give you interposed behaviour, but would require you to write your servers in C or C++...
    andy
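    To make Andy's point concrete, here is a minimal sketch of the container-coordinated two-phase commit he describes, assuming two XA-capable DataSources registered with the coordinating WebLogic instance; the JNDI names, tables and SQL are hypothetical.

    import java.sql.Connection;
    import java.sql.Statement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    public class TwoPhaseCommitSketch {
        public static void recordOrder() throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction utx =
                    (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            DataSource orders = (DataSource) ctx.lookup("jdbc/OrdersXA");
            DataSource inventory = (DataSource) ctx.lookup("jdbc/InventoryXA");
            utx.begin();
            Connection c1 = orders.getConnection();
            Connection c2 = inventory.getConnection();
            try {
                Statement s1 = c1.createStatement();
                s1.executeUpdate("INSERT INTO orders (id, qty) VALUES (1, 100)");
                Statement s2 = c2.createStatement();
                s2.executeUpdate("UPDATE stock SET qty = qty - 100 WHERE item = 1");
                utx.commit(); // transaction manager runs 2PC across both resources
            } catch (Exception e) {
                utx.rollback(); // both updates are undone together
                throw e;
            } finally {
                c1.close();
                c2.close();
            }
        }
    }

    The catch is exactly the one raised in the thread: both resource managers must be registered with (or reachable from) the coordinator, which is why interposed transactions across administrative boundaries need something like WTC or domain gateways.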

  • Using ATMI and Tuxedo for distributed transactions across multiple DBs

    I am creating the framework for a given application that needs to ensure that data integrity is maintained spanning multiple databases, not necessarily within an instance of WebLogic. In other words, I need to basically have two-phase commit "internet transactions" between a given coordinator and n participants, without having any real knowledge of their internal systems.
    Originally I was thinking of using WebLogic, but it appears that I may need to have all my particular data stores registered with my WebLogic instance. This cannot be the case, as I will not have access to that information for the other participating systems.
    I next thought I would write my own TP... ouch. Every time I get through another iteration I keep hitting the same issue of falling into an infinite loop trying to ensure that my coordinator and the set of participants were each able to perform the directed action.
    My next attempt has led me to the world of ATMI. Would ATMI be able to help me here? Granted, I am using Java, so I am assuming that I would have to use CORBA to make the calls, but will ATMI enable me to truly manage and create distributed transactions across multiple databases? Please, any advice at all would be greatly appreciated.
    Thanks
    Chris


  • Maintaining Transaction Across Multiple JSP Pages

    Hi,
    I have a multi-page registration (3 steps). On each step, the data submitted is taken to the database via an EJB component (session bean). How do I maintain a transaction across these JSP pages so that the data in the database is consistent? If there is a problem in the 3rd step, the data submitted in the first two steps should be rolled back.
    How do I maintain a transaction across multiple pages?
    Regards
    -MohanRaj

    It will take from several minutes to a long time for a user to complete a multiple-page registration process. Do you really have enough database connections that each concurrent user can hold on to one? Usually you cannot open more than 50-200 connections to a database at any given time.
    Remember that some users will abandon the registration process. Can you afford for their sessions to hold a db connection until the session times out?
    Consider changing your data model so you can run and commit a transaction at the end of processing the form data from each page. Immediately after the commit, give the db connection back to the pool inside the app server. It can be as simple as having a column in the database of an enum type, with a set of values that shows how far the registration has progressed.
    BTW, if you absolutely have to hold on to the db connection, you can stuff it into a session-scoped attribute and it will be available on all pages.
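    A rough sketch of the per-page commit suggested above, with a status column tracking progress; the table, column and JNDI names are hypothetical. Each step runs in its own short transaction and returns the connection to the pool immediately, so no connection is held across pages.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class RegistrationStep {
        // Called from the JSP (or a session bean) once per page.
        public static void saveContactStep(long userId, String email)
                throws Exception {
            DataSource ds = (DataSource)
                    new InitialContext().lookup("jdbc/RegistrationDS");
            Connection con = ds.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement(
                        "UPDATE registration SET email = ?, " +
                        "status = 'STEP1_DONE' WHERE user_id = ?");
                ps.setString(1, email);
                ps.setLong(2, userId);
                ps.executeUpdate(); // commits immediately: one transaction per page
            } finally {
                con.close(); // back to the pool right away
            }
        }
    }

    A periodic cleanup job can then delete rows whose status never reached the final value, which handles abandoned registrations without tying up connections.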

  • APO gATP vs R/3 ATP - To check sales order ATP across multiple plants

    Hi There,
    I am trying to evaluate gATP functionality for SD sales orders.
    The primary requirement is to have sales order ATP checking take place across multiple plants.
    E.G.
    Sales order line is entered for qty 100
    60 is available in plant A, 40 is available in plant B
    System checks both plants and creates 2 lines - one for delivery from plant A and one for delivery from plant B
    (we are currently heading down the road of writing ABAP to do this 'multi-plant' check in R/3 but the more complex the requirements get the more interested I am in understanding more about APO/gATP)
    I would like to understand the benefit of implementing APO / gATP as opposed to using standard R/3 ATP and perhaps writing custom ABAP code to search for inventory across multiple plants.
    I would appreciate any insight regarding what is required to set up gATP to perform such checking, and any other feedback regarding this issue - especially if you have had to implement something similar at your company.
    I have looked here, but did not find much clear help:
    http://help.sap.com/saphelp_scm50/helpdata/en/26/c2d63b18bc7e7fe10000000a114084/frameset.htm
    Thanks,
    Niall

    Hi Niall
    You are probably looking at RBATP (rule-based ATP). Look at transaction /sapapo/rba04 in APO, where you develop your own location and product substitution rules. Going down the ABAP road in R/3 may work short-term but not long-term, as the requirements may get more complex.
    Regards
    Srinivas

  • Spreading a single form across multiple pages

    I'd like to implement a single-record form that spreads its items across multiple pages for the purpose of grouping related information on each page, such as "Contact Details", "Education Profile", etc. There are way too many columns for a single page.
    In Oracle Forms you can create a form across multiple canvases, or on different tabs of a tabbed canvas; each item has a property to position it on a particular canvas.
    Is this possible in ApEx? I can see that transaction processing could be complex. How do I avoid, for example, inserting the record a second time if I commit from a different page?
    Any comments appreciated.
    Paul P

    Another way to do this without JavaScript and AJAX that works pretty well is to set up a list that represents the logical "sections" and display it as a sidebar list.
    Create a hidden item on the page called PX_ACTIVE_SECTION and set the visibility of each region on the page based on the value of this item. For example: :PX_ACTIVE_SECTION = 'Contact Details'. You can also have multiple regions associated with a single tab. Set the default value to whatever section you wish to display first.
    Next, set each list item to be "current" when :PX_ACTIVE_SECTION is equal to that item ('Contact Details', 'Education Profile', etc.). Also set the URL destination of each item to: javascript:doSubmit('section name');
    Finally, add a branch back to the current page that sets PX_ACTIVE_SECTION to &REQUEST.. This traps the doSubmit call so you can set the hidden item. (Add a condition, if needed, to prevent this branch from executing due to other buttons and requests).
    The result is that the user can switch freely between sections and save after all data is entered. The page does refresh (since it doesn't use AJAX), but if your regions aren't too big, it should be reasonable.

  • Write-through Cache behavior during Transactional Operation

    If a put is called on a write-through cache during a transaction (with optimistic read-committed settings) that involves multiple caches, some set to write-through and others to write-behind, when will the store operation on the corresponding CacheStore be attempted?
         a) Immediately after the put() is called on the cache but before the transaction commit
         or
         b) Immediately after the transaction is committed irrespective of when the put is called

    Hi Abhay,
     The backing map (in this case, com.tangosol.net.cache.ReadWriteBackingMap) is responsible for calling the CacheStore implementation. When "commit" is called, Coherence will synchronously send the data to the backing map; the backing map then determines what to do with the data. In the case of ReadWriteBackingMap, it will either (depending on its configuration) synchronously call the CacheStore (meaning that a store exception will interrupt your transaction) or queue the update for later (meaning that any store exception will occur after the cache transaction has completed).
     In 3.0, the <rollback-cachestore-failures> element under <read-write-backing-map-scheme> controls whether CacheStore exceptions are propagated back to the client. If you are using a release prior to 3.0, please see the FAQ item on CacheStore exceptions.
         Jon Purdy
         Tangosol, Inc.
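     To see this behavior directly, you can drop a logging CacheStore into the read-write backing map and watch when store() fires: synchronously at commit for write-through (write-delay of zero), or later on the write-behind thread otherwise. A minimal sketch, with the actual JDBC persistence omitted:

     import com.tangosol.net.cache.AbstractCacheStore;

     public class LoggingCacheStore extends AbstractCacheStore {
         public Object load(Object key) {
             System.out.println("load(" + key + ") at " + System.currentTimeMillis());
             return null; // nothing persisted in this sketch
         }

         public void store(Object key, Object value) {
             // Under write-through, an exception thrown here propagates back
             // (subject to <rollback-cachestore-failures>) and can interrupt
             // the commit; under write-behind it surfaces after the fact.
             System.out.println("store(" + key + ") at " + System.currentTimeMillis());
         }
     }

     The class is wired in under the cache's <read-write-backing-map-scheme>/<cachestore-scheme> element in the cache configuration.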

  • Singletons across multiple JVMs

    I have an EJB application container which is spread across multiple JVMs (something like a clustered EJB container). I am using a singleton class to hand out 'running serial numbers', so there is a singleton per JVM (which is not very clean, but I chose this path in the absence of anything better). Each singleton comes up when it is first invoked within a JVM and reads a starting running number and a pool of numbers from the database. This way, multiple singletons will not step on each other when doling out the serial numbers, and there is no need to go back to the database for every request for the 'running serial number'.
    Is there some way to do some of the initialization within these singletons just ONCE across the multiple instances of the JVM, i.e. a real singleton across multiple JVMs?
    Thanks
    Ramdas

    Pranav,
    Thanks for your suggestions.
    The idea of using a session bean to access a serialized object was one of my design options when I first implemented this, but I decided against it because of the overhead of having to make an RMI call for every request for the 'running serial number'. The application being written is for a performance benchmark, and the request for the number is made at least once per transaction, so I chose to use singletons.
    It is a clustered EJB container (not like most containers), in the sense that a single container is spread across multiple JVMs.
    > Now if your application is getting clustered then you have a problem. What
    > you can do is put the object you have made into the JNDI context, and every
    > time you want to edit it, pull it out, edit it, and set it back. This will
    > act as your singleton object in a clustered environment across JVMs.
    I did not fully understand the second part of your solution????
    Ramdas
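    For reference, a rough sketch of the block-allocation scheme described above: each JVM-local singleton reserves a block of numbers with a single locked database round trip, then serves requests from memory. The table and JNDI names are hypothetical.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class SerialNumberFactory {
        private static final SerialNumberFactory INSTANCE = new SerialNumberFactory();
        private static final int BLOCK_SIZE = 1000;

        private long next;  // next number to hand out
        private long limit; // first number beyond the reserved block

        public static SerialNumberFactory getInstance() { return INSTANCE; }

        public synchronized long nextSerial() throws Exception {
            if (next >= limit) {
                reserveBlock(); // one DB hit per BLOCK_SIZE requests
            }
            return next++;
        }

        private void reserveBlock() throws Exception {
            DataSource ds = (DataSource)
                    new InitialContext().lookup("jdbc/SerialDS");
            Connection con = ds.getConnection();
            try {
                con.setAutoCommit(false);
                // The row lock serializes concurrent JVMs claiming blocks,
                // so no two JVMs ever hand out the same number.
                PreparedStatement sel = con.prepareStatement(
                        "SELECT next_value FROM serial_block FOR UPDATE");
                ResultSet rs = sel.executeQuery();
                rs.next();
                next = rs.getLong(1);
                limit = next + BLOCK_SIZE;
                PreparedStatement upd = con.prepareStatement(
                        "UPDATE serial_block SET next_value = ?");
                upd.setLong(1, limit);
                upd.executeUpdate();
                con.commit();
            } finally {
                con.close();
            }
        }
    }

    The 'once across all JVMs' initialization then reduces to a single row in the database; each JVM only ever claims disjoint blocks, and numbers left in an abandoned block are simply skipped.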

  • Can Windows Server Backup spread a single backup job across multiple disks if they are not set up as a virtual disk?

    This may be a dumb question, but I can't seem to find any definitive information after having done many, many searches. The short question is: can Windows Server Backup spread a single backup job across multiple disks if they are not in a storage pool or some other RAID/JBOD structure?
    Background:
    I'm running Server 2012 Essentials with all Windows Updates installed. I have been backing up approx 2.8TB of data (Bare Metal Recovery, C:, S: (shared folders), and system reserved) for the past year+ onto a storage pool made up of two 2TB external USB drives. Backup is slow (takes approx 1.5 days to complete), but generally works. Not surprisingly, I was constantly getting capacity-low messages, so I decided to increase my backup storage pool by adding a 3TB drive and another spare 750GB drive for a total of 7.75TB. Instead of having four separate external USB enclosures, I bought a 4-bay enclosure - StarTech.com model #S3540BU33E - to simplify this (or so I thought!).
    The first problem I had was adding the two new drives to the existing storage pool. I think that is because the StarTech uses a JMicron USB controller that reports identical UniqueIds for all drives, so only one shows up in the GUI interface for creating storage pools. After doing research on this, I set up a new storage pool and virtual disk using all four drives via PowerShell and thought I was good. However, when the backup ran, it failed after filling the first drive, saying there was no remaining capacity. In reality there were three remaining empty drives, and the storage pool reported almost 5TB of available capacity. I assumed this was due to the identical UniqueId issue, so I decided to try a different tactic.
    Instead of using a storage pool that combines all four disks into one virtual disk, I just added each of them to Windows Server Backup as individual drives, thinking it would manage them collectively. I.e., when a drive filled up during a particular backup, it would just start using the next drive and so on. Apparently this was a foolish assumption, because the backup failed again as soon as the first disk filled up.
    So now I don't know if this is still an issue with the identical UniqueIds, or if Server Backup actually can't spread a single backup across multiple individual drives that aren't in a pool or other virtual disk implementation. Hence my original question. My guess is that it does *not* spread them across individual disks, but I just wanted to get confirmation.
    Thanks
    Thanks

    Mandy,
    Thank you for following up on my question.
    Unfortunately the article you referenced doesn't address what I am trying to accomplish.
    The article focuses on saving the same backup job to multiple disks and rotating the disks between on and offsite for enhanced protection.  However, it still requires that an individual backup job fits on a single disk.
    What I am trying to determine is if a single backup job can span across more than one physical disk (during the backup process) without those physical disks being in some type of virtual disk implementation (e.g., storage pool, RAID, etc.).
    Thanks,
    Gerry
