Best practice when using Tangosol with an app server

Hi,
I'm wondering what is the best practice when using Tangosol with an app server (Websphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions and it appears to work as expected - I've also been able to see cache data from another app server instance.
However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
Data in the cache survives the removal of the EJB, but somewhere further down the line it appears to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
If that is the explanation, what would be a better way to keep the cache from being subject to GC: a "startup class" in the app server that holds on to the cache object, or are there other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
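For illustration, this is roughly what I had in mind with the "startup class" idea (just a sketch on my side; the class and cache names are placeholders, and I don't know yet whether this is the recommended pattern):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // Hypothetical server-wide holder: keeps a strong reference to the cache for the
    // lifetime of the server, so removing an individual EJB does not let it be released.
    public class CacheHolder {
        private static NamedCache cache;

        public static synchronized NamedCache getCache() {
            if (cache == null) {
                cache = CacheFactory.getCache("my-cache"); // placeholder cache name
            }
            return cache;
        }
    }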
Best regards,
/Per

Hi Gene,
I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
If I put it elsewhere (in the "classpath") - is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with Websphere is that "classpath" is a rather vague concept: we use the J2CA adapter, which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid doing a lot of trial-and-error corrections to a file only to find that it hasn't actually been used.
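To verify which copy of the configuration is actually picked up, I was thinking of a quick diagnostic along these lines (assuming the default file name inside coherence.jar is coherence-cache-config.xml; this is just a sketch on my side, not anything official):

    // Ask the EJB's context class loader where the cache configuration resolves from.
    java.net.URL url = Thread.currentThread().getContextClassLoader()
            .getResource("coherence-cache-config.xml");
    System.out.println("coherence-cache-config.xml resolved to: " + url);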
Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there have been only one or two objects in the test data. And they both disappear...
Still confused,
/Per

Similar Messages

  • Need advice for best practice when using Toplink with external transaction

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to external transactions so that we can perform database operations and JMS operations within a single transaction.
    Some of our team have tried out the TopLink support for external transactions and come up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like members of this forum, and experts, to help comment on whether these recommendations are indeed valid or in line with best practice. And for folks that have done this in their projects: what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor to read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes (the added external-transaction checks below), the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As is generally the case with this sort of question, there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls and when you use a UnitOfWork still depend on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached: this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use this, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
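    For illustration, a minimal sketch of the two cases side by side, reusing the names from the code above (the method and mutator names are placeholders, not part of the original DAOs):
    public void updateSomeObject(Session session, SomeObject someObject, String newValue) {
        boolean external = TransactionController.getInstance().useExternalTransactionControl();
        UnitOfWork uow = external
                ? session.getActiveUnitOfWork()   // joins the surrounding JTA transaction
                : session.acquireUnitOfWork();    // application-controlled transaction
        SomeObject working = (SomeObject) uow.registerObject(someObject);
        working.setSomeProperty(newValue);        // placeholder mutator
        if (!external) {
            uow.commit();                         // with an external transaction the container commits instead
        }
    }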
    --Gordon

  • Best practices when using OEM with Siebel

    Hello,
    I support numerous Oracle databases and have also taken on the task of supporting Enterprise Manager (GRID Control). Currently we have installed the agent (10.2.0.3) on our Oracle database servers, so most of our targets are hosts, databases and listeners. Our company is also using Siebel 7.8, which is supported by the Siebel ops team. They are looking into purchasing the Siebel plugin for OEM. The question I have is: is there a general guide or best practice for managing the Siebel plugin? I understand that there will be agents installed on each of the servers that have Siebel components, but what I have not seen documented is who is responsible for installing them. Does the DBA team need an account on the Siebel servers to do the install, or can the Siebel ops team do the install and have permissions set on the agent so that it can communicate with GRID Control? Also, they will want access to Grid Control to see the performance of their components; how do we limit their access to see only the Siebel targets, including what is available under the Siebel Services tab? Any help would be appreciated.
    Thanks.

    There is a Getting Started Guide, which explains the installation:
    http://download.oracle.com/docs/cd/B16240_01/doc/em.102/b32394/toc.htm
    -- I presume there are two teams in your organization: the DBA team, which is responsible for installing the agent and owns Grid Control, and the Siebel ops team, which is responsible for monitoring the Siebel deployment.
    Following is my opinion based on the above assumption:
    -- DBA team installs agent as a monitoring user
    -- Siebel ops team provides execute permission to the above user for the server manager (srvrmgr.exe) utilities and read permission to all the files under the Siebel installation directory
    -- DBA team provisions a new admin for the Siebel ops team and restricts the permissions for this user
    -- Siebel ops team configures the Siebel pack in Grid Control [Discovery/Configuration etc.]
    -- With the above setup, the Siebel ops team can view only the Siebel-specific targets.
    Thanks

  • Mobile App Best Practice When Using SQLite Database

    Hello,
    I have a mobile app that has several views.
    Each view calls a different method of a Database custom class that basically returns the array from a synchronous execute call.
    So, each view has a creationComplete handler in which I have something like this:
    var db:Database = new Database();
    var connectResponse:Object = db.connect('path-to-database');
    if (connectResponse.allOK) { // allOK is true if the connection was successful
        // Do stuff with data
    } else {
        // Present error notice
    }
    However, this seems redundant. Is it OK to do this once (connect to the database) in the main application file?
    Then do something like FlexGlobals.topLevelApplication.db?
    And generally speaking, can constants and other things that I need throughout the app be placed in the main app? As a best practice, that is; technically it is possible.
    Thank you.

    No, I only connect once.
    I figured I wanted several views to use it, so I made it static and a singleton, as I only have 1 database.
    I actually use synchronous calls, but there is a sync-with-remote-MySQL-database function, hence the event dispatcher.
    ...although I am thinking it might be better to use async, dispatch a custom event, and have the relevant views subscribe.

  • Migration Best Practice When Using an Auth Source

    Hi,
    I'm looking for some advice on migration best practices or more specifically, how to choose whether to import/export groups and users or to let the auth source do a sync to bring users and groups into each environment.
    One of our customers is using an LDAP auth source to synchronize users and groups. I'm trying to help them do a migration from a development environment to a test environment. I'd like to export/import security on each object as I migrate it, but does this mean I have to export/import the groups on each object's ACLs before I export/import each object? What about users? I'd like to leave users and groups out of the PTE files and just export/import the auth source and let it run in each environment. But I'm afraid the UUIDs for the newly created groups will be different and they won't match up with object ACLs any more, causing all the objects to lose their security settings.
    If anyone has done this before, any suggestions about best practices and gotchas when using the migration wizard in conjunction with an auth source would be much appreciated.
    Thanks,
    Chris Bucchere
    Bucchere Development Group
    [email protected]
    http://www.bucchere.com

    The best practice here would be to migrate only the auth source through the migration wizard, and then do an LDAP sync on the new system to pull in the users and groups. The migration wizard will then just "do the right thing" in matching up the users and groups on the ACLs of objects between the two systems.
    Users and groups are actually a special case during migration -- they are resolved first by UUID, but if that is not found, then a user with the same auth source UUID and unique auth name is also treated as a match. Since you are importing from the same LDAP auth source, the unique auth name for the user/group should be the same on both systems. The auth source's UUID will also match on the two systems, since you just migrated that over using the migration wizard.

  • Best practice when using auto complete in view layer

    Hello
    I have a question regarding the best way to store/cache data when using an auto complete function in the view layer.
    I know that there will be a lot of visitors using this function and I don't want to kill the application server, so I need some advice.
    It's about 6000 words that should be searchable... my first thought was to create a singleton bean that stores the current items and that I will iterate over, but it's a lot of "waste" doing it that way.
    I would be very glad if anyone could advise me on the best way to do this, if there is any de-facto standard to use for auto completion in the "view layer".
    Thanks!
    Best Regards/D_S

    I don't know what your design is, but here are some ideas:
    To me, autocomplete means you have some user-specific data that the user entered previously, such as their home address, and some generic data that is not specific to any particular user. I would store all that in a database. For the user-specific data I would store their userID along with the data in the database. Then, when populating a JSP page, I would call up just the data specific to that user plus the generic data from the database. I would store it as an array of some type in JavaScript client-side. When the user clicks the autopopulate button, I would have that button call a JavaScript function that retrieves the data from the JavaScript array and populates the various text fields. All this is done client-side so the form does not have to be re-drawn. I question why you have 6000 items. Normally, autopopulate has at most only a few dozen items. If you still need 6000 items, I suggest adding a textfield to the form to filter the data down to a manageable amount. Example: rather than get all names from a telephone book, put a textfield on the form that allows an end user to enter a letter a to z such as 'b', then only fetch last names from the phone book that begin with 'b'.
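    One way to picture the "filter it down to a manageable amount" idea on the server side (a plain Java sketch; the class and constant names are invented, and the word list would be loaded once, e.g. at application startup, rather than per request):
    import java.util.ArrayList;
    import java.util.List;

    // Filter a cached word list by the prefix the user has typed so far,
    // returning at most a handful of suggestions instead of all 6000 words.
    public class AutoCompleteSuggester {
        private static final int MAX_SUGGESTIONS = 20;
        private final List<String> words; // loaded once and reused

        public AutoCompleteSuggester(List<String> words) {
            this.words = words;
        }

        public List<String> suggest(String prefix) {
            List<String> matches = new ArrayList<String>();
            String p = prefix.toLowerCase();
            for (String w : words) {
                if (w.toLowerCase().startsWith(p)) {
                    matches.add(w);
                    if (matches.size() >= MAX_SUGGESTIONS) {
                        break;
                    }
                }
            }
            return matches;
        }
    }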

  • Best practice for using Muse with Lightroom?

    I'm creating a photography website. I use Lightroom to manage my photographs, and I keep the pictures for my site in Collections.
    In Muse, I'd like to have ALT text descriptions (or something similar) of each picture, so that search engines can see what's on my site.
    Is there a way to embed the descriptions into the image info in Lightroom, and have it export in a way that Muse picks it up?
    My problem is this: I've been creating ALT tags in my Muse slideshows (by right clicking on each image and selecting "Edit image properties..."). This works, but I lose my ALT tag if I export the images from Lightroom using any naming convention that doesn't maintain a consistent relationship between image and name. This makes it very difficult to manage my exports from Lightroom, especially if I want to add images or change the order of images.
    At the risk of providing too much detail, the reason I rename images on export is this: In Lightroom, my images are named with my client's last name and a sequence number. When I export them for use on my website, I want them to have a different name, primarily to protect my clients' privacy, but also to allow me to organize them easily in my slideshows. So for example Jones-103.dng gets exported as Checkerbox-Wedding-Photography-[nnn].jpg, where [nnn] is a new sequence number.
    Can anyone tell me a better way to manage this workflow?

    publish/subscribe, right?
    lots of subscribers, big messages == lots of network traffic.
    it's a wide open question, no?
    %

  • Transfer file clean up: What is best practice when creating files with timestamps?

    Created an SSIS package to create files that will be sent, which are created with a time stamp. What is the best procedure for cleaning up the files? I'd like to keep at least a day of files for verification purposes.

    Run the http://filepropertiestask.codeplex.com task to get the file creation date and, if it is older, delete it; or use a Script Task that reads the .NET FileInfo CreationTime property to find it. Then you can use Precedence Constraints to either skip the file deletion or not.
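    The same "older than a day" check, sketched in plain Java rather than an SSIS Script Task (the class name and directory handling are placeholders; the Script Task equivalent would use the .NET file APIs mentioned above):
    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.attribute.BasicFileAttributes;
    import java.time.Instant;
    import java.time.temporal.ChronoUnit;

    public class TransferFileCleanup {
        // Delete regular files whose creation time is more than one day old,
        // keeping the most recent day's files for verification.
        public static void cleanUp(Path directory) throws IOException {
            Instant cutoff = Instant.now().minus(1, ChronoUnit.DAYS);
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(directory)) {
                for (Path file : stream) {
                    BasicFileAttributes attrs = Files.readAttributes(file, BasicFileAttributes.class);
                    if (attrs.isRegularFile() && attrs.creationTime().toInstant().isBefore(cutoff)) {
                        Files.delete(file);
                    }
                }
            }
        }
    }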
    Arthur
    MyBlog
    Twitter

  • Best Practices when using MVMC to improve speed of vmdk download

    I've converted a number of machines already from ESXi 4.1 to Hyper-V 2012 successfully and learnt pretty much all the gotchas and potential issues to avoid along the way, but I'm still stuck with extremely slow downloading of the source vmdk files to the host I'm using for the MVMC. This is not so much an issue for my smaller VMs, but it will be once I hit the monster-sized ones.
    To give you an idea, on a 1GB network it took me 3 hours to download an 80GB VM. Monitoring the network card on the Hyper-V host I have MVMC running on shows that I'm at best getting 30-40Mbs download, and there are large patches where that falls right down to 20Kbs or thereabouts before ramping back up again. There are no physical network issues that should be causing this as far as I can see.
    Is there some undocumented trick to get this working at an acceptable speed?
    Copying large files from a Windows guest VM on the ESX infrastructure to the Hyper-V host does not have this issue and I get the full consistent bandwidth.

    It's VMware in general, is why... Ever since I can remember (which was ESX 3.5), if you copy using the web service from the datastore the speeds are terrible. Back in the 3.5 days the max speed was 10Mbps; FastSCP came around and threaded it to make it fast.
    Backup software like Veeam goes faster only if you have a backup proxy that has access to all datastores running in VMware. It will then utilize the backend VMware pipe and VM network to move the machines, which is much faster.
    That being said, in theory if you nested a Hyper-V server in a VMware VM just for conversions it would be fast, provided the VM server has access to all the datastores.
    Oh, and if you look at MAT and MVMC, the reason it's fast is because NetApp does some SAN offloading to get around VMware and make it array-based. So then it's crazy fast.
    As a side note, that was always one thing that has pissed me off about VMware.

  • Best Practices for Using JSF with AJAX - BluePrints OR Ajax4Jsf ?

    I am a newbie to AJAX4JSF. I think it provides Rapid Application Development (RAD) just by using tags like a4j:, without the need to develop complex JSF custom components as shown in the BluePrints Catalog
    https://bpcatalog.dev.java.net/ajax/jsf-ajax/
    I understand the purpose of developing JSF custom components as reusable for use with AJAX. But it's complex and requires a lot of coding, i.e. PhaseListeners and Managed Beans. There should be an easier way to do this, especially since our project needs a RAD tool like AJAX4JSF.
    Any suggestions will be highly appreciated
    Regards
    Bansi

    Bansi, you are trying to compare apples to oranges. The Blueprints catalog is a historical retrospective on what people thought about AJAXifying JSF in the past. Currently, the playground has moved to the jsf-extension project. Look for DynaFaces there.

  • Best practices when using collections in LR 2

    How are you using collections and collection sets? I realize that each photographer uses them differently, but I'm looking for some inspiration because I have used them only very little.
    In LR 1 I used collections a bit like virtual folders, but it doesn't work as naturally anymore in LR 2.
    In LR 1 it was like:
    Travel (1000 images, all Berlin images)
    Travel / Berlin trip (400 images, all Berlin trip images)
    Travel / Berlin trip / web (200 images)
    Travel / Berlin trip / print (100 images)
    In LR 2 it could be done like this, but it somehow feels unnatural.
    Travel (Collection Set)
    Travel / Berlin trip (CS)
    Travel / Berlin trip / All (collection, 400 images)
    Travel / Berlin trip / web (collection, 200 images)
    Travel / Berlin trip / print (collection, 100 images)
    Or is this kind of use stupid, because the same could be done with keywords and smart collections, and it would be more transferable?
    Also, how heavily are you using collections? I'm kind of on the fence now (should I start using them heavily or not), because I just lost all my collections when I had to rebuild my library from scratch due to weird library/database problems.

    Basically, I suggest not using collections to replicate the physical folder structure, but rather to collect images independent of physical storage to serve a particular purpose. The folder structure is already available as a means of selection.
    Collections are used to keep a user-defined selection of images as a persistent collection that can be easily accessed.
    Smart collections are based on criteria that are understood by the application and automatically kept up to date, again as a persistent collection for easy access. If this is based on a single criterion, it can also, and perhaps even more easily, be done with keywords. If however it is a set of criteria with AND/OR combinations, or includes any other metadata field, the smart collection is the better way to do it. So keywords and collections in Lightroom are complementary to each other.
    I use (smart) collections extensively; check my website www.fromklicktokick.com where I have published a paper and an essay on the use of (smart) collections to add controls to the workflow.
    Jan R.

  • Using ThreadLocal with WebLogic App Server

    With WebLogic thread pooling, when I use ThreadLocal variables in my application, how does it work as far as cleaning up those variables after the request is completed?
    Thanks in advance.

    Hi.
    Hmm, since WLS execute threads never die, I don't know that your ThreadLocal variables will get cleaned up or gc'd until the server is shut down.
    Regards,
    Michael
    Kumar Ampani wrote:
    > With weblogic thread pooling, When I use threadlocal variables in my application,
    > how does it work as far as cleaning those variables after the request is completed.
    >
    > Thanks in advance.
    Michael Young
    Developer Relations Engineer
    BEA Support
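    One common workaround is to clear the value yourself when the request completes rather than waiting for GC; a rough servlet-filter sketch (untested, and the class and field names are just placeholders):
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    // Because pooled execute threads are reused across requests, release the ThreadLocal
    // value explicitly when the request completes so the thread does not retain it.
    public class ThreadLocalCleanupFilter implements Filter {

        public static final ThreadLocal CONTEXT = new ThreadLocal(); // hypothetical per-request holder

        public void init(FilterConfig config) throws ServletException {
        }

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            try {
                chain.doFilter(request, response);
            } finally {
                CONTEXT.set(null); // clear the value for the pooled thread
            }
        }

        public void destroy() {
        }
    }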
              

  • Best practice to use Tortoise SVN with LV

    Can anyone recommend the best practice for using and structuring a project with TSVN and LV? I have seen the JKI tool and have also read about some issues of linkage when using TSVN with LV, as posted on the forum here. I suppose these linkage issues still exist? Other than Perforce, is there any suggestion for source control that integrates well with LV?
    TIA
    CLD,CTD

    We use Tortoise SVN with LV and it works very well. It's not integrated, in that I cannot check things in and out from within LV; I have to do that in Explorer. That's not a problem for me.
    SVN is a very good source and version control system regardless.
    One small issue with external handling is if you want to change an already used and active filename. In LV you can save to another filename and references will update, but of course SVN doesn't pick up on that automagically. There are two solutions to this:
    1. When you check in, you'll get 1 added and 1 deleted file; select both, right-click and choose "Repair move".
    2. After changing the filename in LV, change it back in Explorer, then right-click the file for an SVN rename and rename it to the new name.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • What is the best practice for using the Calendar control with the Dispatcher?

    It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a best practice regarding security.  However, the Calendar relies on this endpoint to build the events for the calendar.  On Author / Publish this works fine but once we place the Dispatcher in front, the Calendar no longer works.  We've noticed the same behavior on the Geometrixx site.
    What is the best practice for using the Calendar control with Dispatcher?
    Thanks in advance.
    Scott

    Not sure what exactly you are asking but Muse handles the different orientations nicely without having to do anything.
    Example: http://www.cariboowoodshop.com/wood-shop.html
