Need advice on best practices when using TopLink with external transactions

Hello,
Our project is trying to switch from TopLink-controlled transactions to external transactions, so that we can perform database operations and JMS operations within a single transaction.
Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
Since we are not familiar with external transactions, I would like the members and experts of this forum to comment on whether these recommendations are valid and in line with best practice. And for folks who have done this in their projects, what did you do?
Any help will be most appreciated.
Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It was not designed to execute in the context of a global transaction, nor to read from a unit of work.
public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
    ClientSession clientSession = getClientSession(aUser);
    SomeObject obj = null;
    try {
        ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
        readObjectQuery.setSelectionCriteria(queryExpression);
        obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
    } catch (DatabaseException dbe) {
        // throw an appropriate exception
    } finally {
        clientSession.release();
    }
    if (obj == null) {
        // throw an appropriate exception
    }
    return obj;
}
However, after making the changes shown below, the findSomeObject method will read from a unit of work while executing in the context of a global transaction.
public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
    Session session = getClientSession(aUser);
    SomeObject obj = null;
    try {
        ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
        readObjectQuery.setSelectionCriteria(queryExpression);
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            session = session.getActiveUnitOfWork();
            readObjectQuery.conformResultsInUnitOfWork();
        }
        obj = (SomeObject) session.executeQuery(readObjectQuery);
    } catch (DatabaseException dbe) {
        // throw an appropriate exception
    } finally {
        if (TransactionController.getInstance().notUseExternalTransactionControl()) {
            session.release();
        }
    }
    if (obj == null) {
        // throw an appropriate exception
    }
    return obj;
}
When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
    ClientSession clientSession = getClientSession(aUser);
    UnitOfWork uow = null;
    if (TransactionController.getInstance().useExternalTransactionControl()) {
        uow = clientSession.getActiveUnitOfWork();
        uow.setShouldNewObjectsBeCached(true);
    } else {
        uow = clientSession.acquireUnitOfWork();
    }
    return uow;
}

As is generally the case with this sort of question, there is no exact answer.
The only required change when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); beyond that, the semantics of the calls and when you use a UnitOfWork still depend on the requirements of your application. For instance, I noticed that the original findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, there is still no need to perform a transactional read, and the method would not need to change.
As for the requirement that new objects be cached: this is only needed if you are not conforming your transactional queries, and it adds a slight performance boost for find-by-primary-key queries. To use it, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
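To make that concrete, here is a minimal sketch of what a transactional write could look like under external transaction control. It reuses the getClientSession helper, the TransactionController wrapper and the SomeObject class from the DAO code above; the saveSomeObject method name and newObject parameter are just placeholders. The key point is that getActiveUnitOfWork() joins the JTA transaction, so the container's commit or rollback drives the unit of work instead of an explicit commit() call.

public void saveSomeObject(ILoginUser aUser, SomeObject newObject) {
    Session session = getClientSession(aUser);
    boolean external = TransactionController.getInstance().useExternalTransactionControl();
    // Join the active JTA transaction, or fall back to a locally managed unit of work.
    UnitOfWork uow = external ? session.getActiveUnitOfWork()
                              : session.acquireUnitOfWork();
    uow.registerObject(newObject);
    if (!external) {
        uow.commit();       // locally managed: commit and release explicitly
        session.release();
    }
    // Externally managed: no explicit commit() or release() here; completion of
    // the JTA transaction triggers the unit of work commit or rollback.
}

Whether a read-only finder should also go through the unit of work remains, as noted above, a separate decision driven by the method's requirements.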
--Gordon

Similar Messages

  • Best practice when using Tangosol with an app server

    Hi,
    I'm wondering what is the best practice when using Tangosol with an app server (Websphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions and it appears to work as expected - I've also been able to see cache data from another app server instance.
    However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
    Data in the cache survives the removal of the EJB, but somewhere later down the line it appears to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
    If that would be the explanation, what would be a better way to keep the cache from being subject to GC - to have a "startup class" in the app server that holds on to the cache object, or would there be other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
    Best regards,
    /Per

    Hi Gene,
    I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
    If I put it elsewhere (in the "classpath") - is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with Websphere is that "classpath" is a rather ...vague concept, we use the J2CA adapter which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid to do a lot of trial/error corrections to a file just to find that it's not actually been used.
    Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there has been only one or two objects in the test data. And they both disappear...
    Still confused,
    /Per

  • Best practices when using OEM with Siebel

    Hello,
    I support numerous Oracle databases and have also taken on the task of supporting Enterprise Manager (Grid Control). Currently we have installed the agent (10.2.0.3) on our Oracle database servers, so most of our targets are hosts, databases and listeners. Our company is also using Siebel 7.8, which is supported by the Siebel ops team. They are looking into purchasing the Siebel plugin for OEM. The question I have: is there a general guide or best practice for managing the Siebel plugin? I understand that there will be agents installed on each of the servers that have Siebel components, but what I have not seen documented is who is responsible for installing them. Does the DBA team need an account on the Siebel servers to do the install, or can the Siebel ops team do the install and have permissions set on the agent so that it can communicate with Grid Control? Also, they will want access to Grid Control to see the performance of their components; how do we limit their access to only the Siebel targets, including what is available under the Siebel Services tab? Any help would be appreciated.
    Thanks.

    There is a Getting Started Guide, which explains about installation
    http://download.oracle.com/docs/cd/B16240_01/doc/em.102/b32394/toc.htm
    -- I presume there are two teams in your organization: a DBA team, which is responsible for installing the agent and owns Grid Control, and a Siebel ops team, which is responsible for monitoring the Siebel deployment.
    Following is my opinion based on the above assumption:
    -- DBA team installs agent as a monitoring user
    -- Siebel ops team provides execute permission to the above user for the server manager (srvrmgr.exe) utilities and read permission to all the files under the Siebel installation directory
    -- DBA team provisions a new admin for Siebel ops team and restrict the permissions for this user
    -- Siebel ops team configures the Siebel pack in Grid Control. [Discovery/Configuration etc]
    -- With the above set up Siebel ops team can view only the Siebel specific targets.
    Thanks

  • Advice for Soon-to-be MacPro Owner. Need Recs for Best Practices...

    I'll be getting a Quad Core 3 Ghz with 1GB of RAM, a 250Gig HD, the ATI X1900 card. It will be my first mac after five years (replacing a well-used G4 Tibook 1Ghz).
    First the pressing questions: Thanks to the advice of many on this board, I'll be buying 4GB of RAM from Crucial (and upgrading the HD down the road when needs warrant).
    1) Am I able to add the new RAM with the 1G that the system comes with? Or will they be incompatible, requiring me to uninstall the shipped RAM?
    Another HUGE issue I've been struggling with is whether or not to batch migrate the entire MacPro with everything that's on my TiBook. I have so many legacy apps, fonts that I probably don't use any more and probably have contributed to intermittent crashes and performance issues. I'm leaning towards fresh installs of my most crucial apps: photoshop w/ plugins, lightroom, firefox with extensions and just slowly and systematically re-installing software as the need arises.
    Apart from that...I'd like to get a consensus as to new system best practices. What should I be doing/buying to ensure and establish a clean, maintenance-lite, high-performance running machine?

    I believe you will end up with 2x512MB RAM from the Apple store. If you want to add 4GB more you'll want to get 4x1GB RAM sticks. 5GB is never an "optimal" amount; people talk like it's bad or something, but it's simply that the last gig of RAM isn't accessed quite as fast. You'll want to change the placement so the 4x1GB sticks are "first" and are all paired up nicely, so your other two 512MB sticks only get accessed when needed. A little searching here will turn up explanations for how best to populate the RAM for your situation. It's still better to have 5GB where the 5th gig isn't quite as fast than to have 4GB. They will not be incompatible, but you WILL want to remove the original RAM, put the 4GB into the optimal slots, then add the other two 512MB chips.
    Do fresh installs. Absolutely. Then only add those fonts that you really need. If you use a ton of fonts I'd get some font checking app that will verify them.
    I don't use RAID for my home machine. I use 4 internal 500gig drives. One is my boot, the other is my data (although it is now full and I'll be adding a pair of external FW). Each HD has a mirror backup drive. I use SuperDuper to create a clone of my Boot drive only after a period of a week or two of rock solid performance following any system update. Then I don't touch it till another update or installation of an app followed by a few weeks of solid performance with all of my critical apps. That allows me to update quicktime or a security update without concern...because some of those updates really cause havoc with people. If I have a problem (and it has happened) I just boot from my other drive and clone that known-good drive back to the other. I also backup my data drive "manually" with Superduper.
    You will get higher performance with Raid of course, but doing that requires three drives (two for performance and one for backup) just for data-scratch, as well as two more for boot and backup of boot. Some folks can fit all their boot and data on one drive but photoshop and many other apps (FCP) really prefer data to be on a separate disk. My setup isn't the absolute fastest, but for me it's a very solid, low maintenance,good performing setup.

  • Best practice when using auto complete in view layer

    Hello
    I have a question regarding the best way to store/cache data when using an auto complete function in the view layer.
    I know that there will be a lot of visitors that will use this function and I don't want to kill the application server, so I need some advice.
    It's about 6000 words that should be searchable... my first thought was to create a singleton bean that stores the current items and that I will iterate over, but that's a lot of "waste" doing it that way.
    I would be very glad if anyone could advise me how to do this the best way, and whether there is any de-facto standard to use when using auto completion in the "view layer".
    Thanks!
    Best Regards/D_S

    I dont know what your design is, but here are some ideas:
    To me, autocomplete means you have some user specific data that the user entered prevously such as their home address, and some generic data that is not specific to any particular user. I would store all that in a database. For the user specific data I would store their userID along with the data in the database. Then, when populating a JSP page, I would call up just the data specific to that user and the generic data from the database. I would store it as an array of some type in javascript client--side. When the user clicks the autopopulate button, I would have that button call a javascript fuction that reteives the data from the javascript array and populate the various textfields. All this is done client-side so the form does not have to be re-drawn. I question why you have 6000 items. Normally, autopopulate has at most only a few dozens of items. If you still need 6000 items, I suggest adding a textfield to the form to filter what the data he needs down to a manageable amount. Example: rather than get all names from a telephone book, put a textfield on the form that allowfs an end user to enter a letter a to z such as 'b', then only fetch last names from the phone book that begins with 'b'.

  • Migration Best Practice When Using an Auth Source

    Hi,
    I'm looking for some advice on migration best practices or more specifically, how to choose whether to import/export groups and users or to let the auth source do a sync to bring users and groups into each environment.
    One of our customers is using an LDAP auth source to synchronize users and groups. I'm trying to help them do a migration from a development environment to a test environment. I'd like to export/import security on each object as I migrate it, but does this mean I have to export/import the groups on each object's ACLs before I export/import each object? What about users? I'd like to leave users and groups out of the PTE files and just export/import the auth source and let it run in each environment. But I'm afraid the UUIDs for the newly created groups will be different and they won't match up with object ACLs any more, causing all the objects to lose their security settings.
    If anyone has done this before, any suggestions about best practices and gotchas when using the migration wizard in conjunction with an auth source would be much appreciated.
    Thanks,
    Chris Bucchere
    Bucchere Development Group
    [email protected]
    http://www.bucchere.com

    The best practice here would be to migrate only the auth source through the migration wizard, and then do an LDAP sync on the new system to pull in the users and groups. The migration wizard will then just "do the right thing" in matching up the users and groups on the ACLs of objects between the two systems.
    Users and groups are actually a special case during migration -- they are resolved first by UUID, but if that is not found, then a user with the same auth source UUID and unique auth name is also treated as a match. Since you are importing from the same LDAP auth source, the unique auth name for the user/group should be the same on both systems. The auth source's UUID will also match on the two systems, since you just migrated that over using the migration wizard.

  • BFILE: need advice for best practice

    Hi,
    I'm planning to implement a document management system. These are my requirements:
    (0) Oracle 11gR2 on Windows 2008 server box
    (1) Document can be of type Word, Excel, PDF or plain text file
    (2) Document will get stored in DB as BFILE in a table
    (3) Documents will get stored in a directory structure: action/year/month, i.e. there will be many DB directory objects
    (4) User has read only access to files on DB server that result from BFILE
    (5) User must check out/check in document for updating content
    So my first problem is how to "upload" a user's file into the DB. My idea is as follows (a rough client-side sketch follows these steps):
    - there is a "transfer" directory where the user has read/write access
    - the client program copies the user's file into the transfer directory
    - the client program calls a PL/SQL-procedure to create a new entry in the BFILE table
    - this procedure will run with augmented rights
    - procedure may need to create a new DB directory (depending on action, year and/or month)
    - procedure must copy the file from transfer directory into correct directory (UTL_FILE?)
    - procedure must create new row in BFILE table
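    A rough sketch of the client-side part of these steps in Java/JDBC; the doc_api.add_document procedure name, the transfer path and the connection details are all placeholders, and the actual move into the correct directory (UTL_FILE) and the INSERT would happen inside the PL/SQL procedure:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DocumentUploader {

        // Copy the user's file into the shared transfer directory, then ask the
        // database procedure to move it into place and create the BFILE row.
        public static void upload(Path userFile, String action) throws Exception {
            Path transferDir = Paths.get("//dbserver/transfer");    // placeholder transfer directory
            Path staged = transferDir.resolve(userFile.getFileName());
            Files.copy(userFile, staged, StandardCopyOption.REPLACE_EXISTING);

            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@dbserver:1521:ORCL", "docuser", "secret");
                 CallableStatement cs = conn.prepareCall("{ call doc_api.add_document(?, ?) }")) {
                cs.setString(1, action);                            // e.g. "offers"
                cs.setString(2, staged.getFileName().toString());   // file name inside the transfer directory
                cs.execute();                                       // procedure does the UTL_FILE copy and the INSERT
            }
        }
    }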
    Is this a practicable way? Is there anything that I could do better?
    Thanks in advance for any hints,
    Stefan
    Edited by: Stefan Misch on 06.05.2012 18:42

    Stefan Misch wrote:
    "yes, from a DBA point of view..."
    Not really just from a DBA point of view. If you're a developer and you choose BFILE, and you don't have those BFILEs on the file system being backed up and they subsequently go "missing", I would say you (the developer) are at fault for not understanding the infrastructure you are working within.
    Stefan Misch wrote:
    "But what about the possibility for the users to browse their files? This would mean I had to duplicate the files: one copy that goes into the DB, is stored as a BLOB and can be used to search; another copy would get stored on the file system just to enable the user to browse their files (i.e. what files were created for action "offers" in February 2012; the filenames contain customer id and name as well as user id). In most cases there will be fewer than 100 files in any of those directories. This is why I thought a BFILE might be the best alternative, as I get both: fast index search and browsing capability for users that are used to using Windows Explorer..."
    Sounds like it would be simple enough to add some metadata about the files in a table: a bunch of columns providing things like "action", "date", "customer id", etc., along with the document stored in a BLOB column.
    As for the users browsing the files, you'd need to build an application to interface with the database ... but i don't see how you're going to get away from building an application to interface with the database for this in any event.
    I personally wouldn't be a fan of providing users any sort of access to a production servers file system, but that could just be me.

  • Mobile App Best Practice When Using SQLite Database

    Hello,
    I have a mobile app that has several views.
    Each view calls a different method of a Database custom class that basically returns the array from a synchronous execute call.
    So, each view has a creationComplete handler in which I have something like this:
    var db:Database = new Database();
    var connectResponse:Object = db.connect('path-to-database');
    if (connectResponse.allOK) { // allOK is true if the connection was successful
        // Do stuff with data
    } else {
        // Present error notice
    }
    However, this seems redundant. Is it OK to do this once (connect to the database) in the main application file?
    Then do something like FlexGlobals.topLevelApplication.db?
    And generally speaking, can constants and other things that I would need throughout the app be placed in the main app? As a best practice, that is, not technically, since technically it is possible.
    Thank you.

    No, I only connect once.
    I figured I wanted several views to use it, so I made it static and a singleton, as I only have one database.
    I actually use synchronous calls, but there is a sync-with-remote-MySQL-database function, hence the event dispatcher...
    ...although I am thinking it might be better to go async, dispatch a custom event and have the relevant views subscribe.
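    For what it's worth, the connect-once approach described above is the classic lazily initialized singleton. Here is a minimal sketch of the same shape, written in Java for illustration (the original is ActionScript/AIR, and the Database class below is just a stand-in for the app's own database wrapper):

    public final class DatabaseHolder {

        // Stand-in for the app's custom Database wrapper class.
        public static class Database {
            public boolean connect(String path) { return true; }
        }

        private static DatabaseHolder instance;
        private final Database db = new Database();
        private boolean connected;

        private DatabaseHolder() { }

        public static synchronized DatabaseHolder getInstance() {
            if (instance == null) {
                instance = new DatabaseHolder();
            }
            return instance;
        }

        // Connect lazily on first use; every view then shares the same connection.
        public synchronized Database get() {
            if (!connected) {
                connected = db.connect("path-to-database");
                // a failed connect can be handled here once, instead of in every view
            }
            return db;
        }
    }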

  • Looking for best practices when creating DNS reverse zones for DHCP

    Hello,
    We are migrating from ISC DHCP to Microsoft DHCP. We would like the DHCP server to automatically update DNS A and PTR records for computers when they get an IP. The question is, what is the best practice for creating the reverse lookup zones in DNS? Here is an example:
    10.0.1.0/23
    This would give out IPs from 10.0.1.1-10.0.2.254. So with this in mind, do we then create the following reverse DNS zones?:
    1.0.10.in-addr.arpa AND 2.0.10.in-addr.arpa
    OR do we only create:
    0.10.in-addr.arpa, so that both the 10.0.1.x and 10.0.2.x addresses get stuffed into that one zone.
    Or is there an even better way that I haven't thought about? Thanks in advance.

    Hi,
    Based on your description, both methods are fine: creating two reverse DNS zones (1.0.10.in-addr.arpa and 2.0.10.in-addr.arpa), or creating one reverse DNS zone (0.10.in-addr.arpa).
    Best Regards,
    Tina

  • Any suggestions for best practice when creating assets for tablets?

    General community question,
    I'm about to start creating a tablet demo of a few assets that have already been created for web that need adapting for tablet (namely iOS on iPad). I was wondering if anyone could share some pointers about what to avoid doing or some top tips to ensure that they can be converted as quickly and easily as possible, particularly given the state of the current version of EA.
    Any suggestions or links to relevant websites would be welcome!
    Cheers,
    D

  • Need a Tx aware datasource when using CMT with BMP entity beans?

    When using Container-Managed Transactions with entity beans that have bean-managed persistence, do I need to use a transaction-aware datasource?
    Thanks,
    Ken Gertsen

    Hi ad13217 and thanks for reply.
    I'm sorry but my code is like this:
    javax.naming.Context ctx=new javax.naming.InitialContext();
    arguments was an error on copy, but it doesn't work.
    Thanks
    Fil

  • What's the recommended import encoding for best 'practical' ipod use?

    I just really found out that you can change the import encoding for different quality sound, and I'm wondering what is the best or most practical quality setting for use on my iPod? Also, how about the best, period? It would be nice to have my favorite albums in iTunes in the best quality I can have them.
    Most of my songs are AAC encoded, 128 kbps, 44.1kHz, low-complexity, stereo.
    A while ago I was messing around and changed the preferences settings, so some music is encoded as MPEG-1 Layer 3, 160kbps, joint stereo, ID3-v2.2.
    How would you import?
    Thanks!

    Meg has a good memory. I do rip in AAC@256 VBR. After experimenting around for about 2 years or so, I have found that this setting yields the best audio quality for my tastes. You have to keep in mind what kind of equipment you are using. If you are using Apple earbuds & internal computer speakers, then it's doubtful that you will be able to tell much difference. I use Shure SE310 earphones & Bose Companion 5 speakers with my computer, so I can most definitely tell lower audio quality files. The last thing to remember is that ripping at a higher bitrate or with a Lossless format will increase the size of your files & thus use more room on your iPod. My AAC@256 files are 2x the size of the same files ripped @128. This doesn't matter to me as I recently purchased a 160GB Classic a few months ago. Hope this helps.

  • Best Practices when using MVMC to improve speed of vmdk download

    I've converted a number of machines already from ESXi 4.1 to Hyper-V 2012 successfully and learnt pretty much all the gotchas and potential issues to avoid along the way, but I'm still stuck with extremely slow downloading of the source vmdk files to the host I'm using for the MVMC. This is not so much an issue for my smaller VMs, but it will be once I hit the monster-sized ones.
    To give you an idea, on a 1Gb network it took me 3 hours to download an 80GB VM. Monitoring the network card on the Hyper-V host I have MVMC running on shows that I'm at best getting 30-40Mb/s download, and there are large patches where that falls right down to around 20Kb/s before ramping back up to the Mb/s range again. There are no physical network issues that should be causing this as far as I can see.
    Is there some undocumented trick to get this working at an acceptable speed?
    Copying large files from a Windows guest VM on the ESX infrastructure to the Hyper-V host does not have this issue and I get the full consistent bandwidth.

    It's VMware in general, is why... Ever since I can remember (which was ESX 3.5), if you copy using the web service from the datastore, the speeds are terrible. Back in the 3.5 days the max speed was 10Mbps; FastSCP came around and threaded it to make it fast.
    Backup software like Veeam goes faster only if you have a backup proxy that has access to all the datastores running in VMware. It will then utilize the backend VMware pipe and VM network to move the machines, which is much faster.
    That being said, in theory if you nested a Hyper-V server in a VMware VM just for conversions, it would be fast, provided the VM server has access to all the datastores.
    Oh, and if you look at MAT and MVMC, the reason why it's fast is because NetApp does some SAN offloading to get around VMware and make it array-based. So then it's crazy fast.
    As a side note, that was always one thing that has pissed me off about VMware.

  • Best practices when using collections in LR 2

    How are you using collections and collection sets? I realize that each photographer uses them differently, but I'm looking for some inspiration because I have used them only very little.
    In LR 1 I used collections a bit like virtual folders, but that no longer works as naturally in LR 2.
    In LR 1 it was like:
    Travel (1000 images, all Berlin images)
    Travel / Berlin trip (400 images, all Berlin trip images)
    Travel / Berlin trip / web (200 images)
    Travel / Berlin trip / print (100 images)
    In LR 2 it could be done like this, but this somehow feels unnatural.
    Travel (Collection Set)
    Travel / Berlin trip (CS)
    Travel / Berlin trip / All (collection, 400 images)
    Travel / Berlin trip / web (collection, 200 images)
    Travel / Berlin trip / print (collection, 100 images)
    Or is this kind of use stupid, because the same could be done with keywords and smart collections, and it would be more transferable?
    Also, how heavily are you using collections? I'm kind of on the edge now (should I start using them heavily or not), because I just lost all my collections when I had to rebuild my library from scratch because of weird library/database problems.

    Basically, I suggest not using collections to replicate the physical folder structure, but rather to collect images independent of physical storage to serve a particular purpose. The folder structure is already available as a selection means.
    Collections are used to keep a user-defined selection of images as a persistent collection that can be easily accessed.
    Smart collections are based on criteria that are understood by the application and automatically kept up to date, again as a persistent collection for easy access. If this is based on a single criterion, it can also, and perhaps even more easily, be done with keywords. If however it is a set of criteria with AND/OR combinations, or includes any other metadata field, the smart collection is the better way to do it. So keywords and collections in Lightroom are complementary to each other.
    I use (smart) collections extensively; check my website www.fromklicktokick.com where I have published a paper and an essay on the use of (smart) collections to add controls to the workflow.
    Jan R.

  • Transfer file clean up: What is best practice when creating files with timestamps?

    I created an SSIS package that generates files, stamped with a timestamp, which will then be sent out. What is the best cleanup procedure for these files? I'd like to keep at least a day of files for verification purposes.

    Run the File Properties Task (http://filepropertiestask.codeplex.com) to get the file creation date, or use a Script Task that reads it via the .NET FileInfo CreationTime property, and if the file is older, delete it. You can then use precedence constraints to either skip the file deletion or not.
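    The same keep-at-least-a-day check, sketched outside of SSIS in plain Java for anyone who wants the age logic spelled out (the directory path and the one-day retention are assumptions):

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.BasicFileAttributes;
    import java.time.Duration;
    import java.time.Instant;

    public class TransferFileCleanup {

        public static void main(String[] args) throws IOException {
            Path outbox = Paths.get("C:/transfer/outbox");             // placeholder: where the timestamped files land
            Instant cutoff = Instant.now().minus(Duration.ofDays(1));  // keep at least one day for verification

            try (DirectoryStream<Path> files = Files.newDirectoryStream(outbox)) {
                for (Path file : files) {
                    BasicFileAttributes attrs = Files.readAttributes(file, BasicFileAttributes.class);
                    if (attrs.creationTime().toInstant().isBefore(cutoff)) {
                        Files.delete(file);                            // older than the retention window
                    }
                }
            }
        }
    }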
    Arthur
    MyBlog
    Twitter
