Entity cache

Hi all,
I have a question regarding entity cache behaviour: how do I make sure there are no more than N beans in the entity cache? If I set max-beans-in-free-pool to N, the problem seems to be this: when I need to retrieve more than N rows from the DB, WebLogic retrieves only N beans, because it reaches max-beans-in-free-pool and never brings back the (N+1)th row. That behaviour is not desirable, of course.
So, how does one set the entity-cache params to
1. retrieve as many rows from the DB as the query returns, regardless of max-beans-in-free-pool, and
2. let WebLogic grow the entity cache as large as required, but then shrink it back to some specified size after the idle timeout?
Which params in the deployment descriptor do I need to set, and to what values (see the descriptor sketch below)?
Any help in understanding these issues would be appreciated.
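For reference, a minimal sketch of the weblogic-ejb-jar.xml elements in question (element names as in the WLS 7.x/8.x DTDs; the bean name and values are placeholders, and the exact idle-timeout behaviour varies by release):

    <weblogic-enterprise-bean>
      <ejb-name>AccountBean</ejb-name>  <!-- placeholder bean name -->
      <entity-descriptor>
        <pool>
          <!-- the free pool only limits idle, pooled instances -->
          <max-beans-in-free-pool>100</max-beans-in-free-pool>
          <initial-beans-in-free-pool>100</initial-beans-in-free-pool>
          <idle-timeout-seconds>600</idle-timeout-seconds>
        </pool>
        <entity-cache>
          <!-- upper bound on active/cached entity instances -->
          <max-beans-in-cache>1000</max-beans-in-cache>
          <idle-timeout-seconds>600</idle-timeout-seconds>
        </entity-cache>
      </entity-descriptor>
    </weblogic-enterprise-bean>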

Thanks for the reply.
Well, maybe WebLogic is not behaving as documented.
It is supposed to use idle-timeout-seconds to shrink the pool back to initial-beans-in-free-pool. I set initial-beans-in-free-pool to 100, but even after the timeout has passed it keeps a lot more than 100 beans in the pool.
Can anyone think of a reason why it does that?

Similar Messages

  • How to make two Application Modules share the same entity cache?

    Hello everyone, I am using JDeveloper 11.1.2.3.0
    I have a problem in my application because I use two AppModules that contain the same ViewObjects; to be clear, one VO is declared in both AppModules. When I commit the view using AppModule1 and then go to another page that uses the same VO but from AppModule2, I have to commit again, even though the row is already stored in the database.
    I understand this happens because different AppModules use different entity caches for database communication.
    I am asking if anyone knows an option to sync the entity caches of the two AppModules, or how to make them use the same entity cache.
    Thank you

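    One hedged sketch of making the two modules share a single entity cache (module and configuration names are placeholders): nest AppModule2 under AppModule1 so that both run in the root module's transaction, which owns the entity caches.

        import oracle.jbo.ApplicationModule;
        import oracle.jbo.client.Configuration;

        public class NestedAmExample {
            public static void main(String[] args) {
                // Create AppModule1 as the root application module.
                ApplicationModule root = Configuration.createRootApplicationModule(
                        "model.AppModule1", "AppModule1Local");
                // Create AppModule2 nested under the root instead of as a second root;
                // nested modules share the root's DBTransaction and entity caches.
                ApplicationModule nested = root.createApplicationModule(
                        "AppModule2Nested", "model.AppModule2");

                // Changes committed through either module are now visible to the other.
                Configuration.releaseRootApplicationModule(root, true);
            }
        }

    In JDeveloper the same thing is usually done declaratively, by adding an instance of AppModule2 to AppModule1's data model so the page bindings go through the nested instance.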

  • Does entity cache cause high heap usage ? better setClearCacheOnCommit ?

    Hi all,
    During peak load (150-200 users) of our production ADF application (10.1.3.3), heap usage can reach 3 GB, which keeps the JVM very busy doing frequent GC.
    Could this be because of the entity cache, which by default is not cleared?
    What are the implications if I call setClearCacheOnCommit()?
    Thank you for your help,
    xtanto

    The EO cache will be cleared when the AM is released in stateless mode. By default that would occur when your web session times out, but you can eagerly release it in stateless mode (when the user is finished with the task that uses that AM).
    Using setClearCacheOnCommit() will clear the EO cache more eagerly; however, doing so also clears the VO caches for the VOs related to those EOs, so it may end up causing more database requerying than you were doing before. Effectively, after a commit you'll need to requery any data that's needed for the subsequent pages the user visits. If your workflow is such that the user does not commit and then continue processing other rows you've already queried, it might be a slight overall win on memory usage. However, if the user does issue a commit (say, from an edit form) and then returns to a "list" page to process some other record, clearCacheOnCommit=true will force your list page to requery the data (which it is not doing now, since the entity cache isn't being eagerly cleared).
    So, like many performance-related questions, it depends on exactly what your app is doing.
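    For illustration, a hedged sketch of where the flag lives (the application module class name is a placeholder); it is simply a setting on the oracle.jbo.Transaction:

        import oracle.jbo.server.ApplicationModuleImpl;

        public class OrderServiceAMImpl extends ApplicationModuleImpl {
            public void enableAggressiveCacheClearing() {
                // After every commit the entity caches (and the VO caches built on
                // them) are flushed, so subsequent pages requery the database.
                getDBTransaction().setClearCacheOnCommit(true);
            }
        }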

  • How to clear the ADF Entity cache

    Hi All,
    While I'm using an entity-based updatable view object: after deleting a record, I created a new record with the same key, and it displays the deleted record's values instead of the new record.
    Sample steps:
    1. Create an updatable view object with a key attribute named Sequence.
    2. Drop it as a table in the view page.
    3. Create a new record using the CreateInsert action.
    4. Enter the values for the new record.
    5. Delete the new record.
    6. Create a new record again.
    7. If I enter the previous sequence value, it displays the old record's values; if I enter a different value, the deleted record does not affect it.
    So I think I need to clear the entity cache, but I have used clearCache() of ViewObjectImpl and nothing changes.
    If anyone has come across this problem and fixed it, please let me know the solution.
    Regards,
    Felix

    You are wrong here: if you don't commit the delete, the record is still there.
    You should not enter the same sequence again anyway; a sequence should be a one-time number, once used it's gone. Set the new key using a Groovy expression and make it read-only in the UI.
    Timo
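    The same idea in Java, as a hedged sketch, instead of the Groovy default-value expression Timo mentions (sequence name, entity class and attribute name are placeholders):

        import oracle.jbo.AttributeList;
        import oracle.jbo.server.EntityImpl;
        import oracle.jbo.server.SequenceImpl;

        public class ShipmentImpl extends EntityImpl {
            @Override
            protected void create(AttributeList attributeList) {
                super.create(attributeList);
                // Pull the next value from a DB sequence so the key is never typed in the UI.
                SequenceImpl seq = new SequenceImpl("SHIPMENT_SEQ", getDBTransaction());
                setAttribute("Sequence", seq.getSequenceNumber());
            }
        }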

  • How to clear the Entity Cache

    I have an Entity based on a GLOBAL temp table and a ViewObject based on that Entity. A stored procedure is called from the client that populates the GLOBAL temp table and then calls executeQuery on the ViewObject. The first time the user does this, all rows display fine. When the user changes the selection criteria and clicks the button a second time, the same procedure is called, which clears the GLOBAL temp table and repopulates it with the new data. An exception occurs, however, stating that there are too many rows that match the key. The problem seems to be that the Entity has cached the rows from the first execute, and those rows aren't flushed when the second execute is called, which makes sense. It seems that I just need to force the Entity to clear its cache after my procedure is called but before executeQuery is called. I have tried that from the client side by calling clearCache on the view AND clearEntityCache on the transaction, but they don't seem to do anything. I have also called clear cache on the app module side in the method that calls the stored procedure, but it doesn't seem to do anything either. I also turned on the JBO debugging messages and I can't see any message stating that the Entity or View cache was cleared. It appears that these calls do nothing.
    Is there a way to work around this and make the Entity/View caches completely refresh from the database?
    Thanks
    Erik

    Typically the client layer never has access to entities. This API is the one exception. The way to keep it cleaner would be to create a custom Application Module method, and inside that perform the clear cache, rather than burdening the client to have to know about any details of underlying entity objects. Keep that hidden.
    I wouldn't think you should need to clear the entity cache to do what you're doing, though. If you have a reproducible test case -- with a SQL script that creates your example global temporary table -- please create a TAR on Metalink and ask support to file a bug for it, so we can look into it.
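    A hedged sketch of the custom application module method suggested above (the procedure, entity and view object names are placeholders):

        import oracle.jbo.server.ApplicationModuleImpl;
        import oracle.jbo.server.ViewObjectImpl;

        public class ReportServiceAMImpl extends ApplicationModuleImpl {
            public void refreshTempTableRows() {
                // Repopulate the GLOBAL temp table for the new selection criteria.
                getDBTransaction().executeCommand("begin report_pkg.populate_temp_table; end;");
                // Drop the entity rows cached from the previous execute...
                getDBTransaction().clearEntityCache("TempReportRow");
                // ...then clear and requery the view object built on them.
                ViewObjectImpl vo = (ViewObjectImpl) findViewObject("TempReportRowsView");
                vo.clearCache();
                vo.executeQuery();
            }
        }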

  • Flushing Toplink/JPA entity cache following background update of database

    We have a JPA/TopLink web application (essentially read-only at present) and the database tables are to be refreshed from master tables at intervals (probably daily).
    Is there any way (short of bouncing the web app) we can persuade TopLink to dump its entity cache so the web app is guaranteed to pick up the new values promptly?

    I seem to have an answer at last, for the benefit of forum searchers (via http://www.jroller.com/LordFoom/). This does the trick:
    ((oracle.toplink.essentials.ejb.cmp3.EntityManager) em.getDelegate()).getServerSession().getIdentityMapAccessor().invalidateAll();

  • To limit the number of beans pooled in entity-cache

    Here are the issues in the problem:
    1. I would like to make sure there are no more than N beans in the entity cache.
    2. If I set max-beans-in-free-pool to N, the problem seems to be this: when I need to retrieve more than N rows from the DB, WebLogic retrieves only N beans because it has reached max-beans-in-free-pool. That behaviour is not desirable, of course.
    So, how does one set the entity-cache params to
    1. retrieve as many entity beans as required, regardless of max-beans-in-free-pool, and
    2. let WebLogic grow the entity cache as large as required, but then shrink it back to some specified size after the idle timeout? What is the param for that?
    Any help in understanding these issues would be great.


  • Setting the size of a shared entity cache

    Hi,
    I am using WLS 7.1 and have a few entity beans that use a shared
    application-level cache. I would like to set the size of this cache using
    the admin console.
    As this property is specified in the weblogic-application.xml file, I
    attempted to access the page in the Admin console Deployment Descriptor
    Editor that lets you modify this deployment descriptor.
    However, clicking on the Weblogic-application node in the deployment
    descriptor editor gives me an Error 404--Not Found message.
    Any ideas why this is happening?
    Thanks in advance.
    Santosh

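    While the console 404 is being sorted out, one hedged workaround sketch is to edit weblogic-application.xml directly (the cache name and size are placeholders; check the WLS 7.x schema for the exact set of sub-elements):

        <weblogic-application>
          <ejb>
            <entity-cache>
              <entity-cache-name>SharedAppCache</entity-cache-name>
              <max-beans-in-cache>2000</max-beans-in-cache>
            </entity-cache>
          </ejb>
        </weblogic-application>

    The entity beans would then refer to this application-level cache by name from their weblogic-ejb-jar.xml entity-descriptor.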

  • BMP Entity caching

    Hi,
    We are using BMP Entity beans on OC4J 9.0.2 and we have the following problem:
    Every access to an entity bean invokes ejbLoad, even if the bean is already loaded. Moreover, it seems that OC4J re-uses instances of the beans instead of creating new instances. For example, if we have a bean that was loaded with PK=1 and we try to load another bean with PK=2, OC4J invokes ejbLoad on the first bean with the new PK.
    I've tried to increase max-instances and max-instances-per-pk without any success.
    I've changed validity-timeout to a big number, again without success.
    I cannot change exclusive-write-access to true; OC4J always puts "false" there.
    I found a couple of posts in this forum about BMP caching, but there was no solution.
    Is there any solution to this problem?
    Thanks

    Stas -- There were a number of issues with EJB locking strategies in v1022x. These included some scalability limits that we believed needed to be removed for enterprise systems, as well as being able to have true multi-JVM concurrency, which was not easy to do with CMP in v1022x. Much of the exclusive-write-access code for non-read-only beans relied on these old mechanisms. A side effect of these changes is that for a small set of applications they might have some performance impact. We are looking at how we might change this in the future, but for the time being three methods exist to work around these issues. The first is to use TopLink for BMP; TopLink can provide caching that takes the place of the caching you were relying on in v1022x. The second is to use a caching mechanism like JCache, as Avi described. The last, and probably least desirable, is to continue to use Oracle9iAS v1022x for your application.
    Realize that these changes were made to increase enterprise scalability beyond what was available in v1022x, not to negatively impact it.
    Lastly, it would be good to know if you are using multiple VMs in your application, if you manage multi-VM locking, and whether the impact you are seeing is as great when you scale your application beyond a single VM.
    Thanks -- Jeff

  • Query performance tuning beyond entity caching

    Hi,
    We have an extremely large read-only dataset stored using BDB-JE DPL. I'm seeking to tune our use of BDB while querying, and I'm wondering what options we have beyond simply attempting to cache more entities? Our expected cache hit rate is low. Can I tune things to keep more of the btree nodes and other internal structures buffered? What kind of configuration parameters should I be looking at?
    Thanks,
    Brian

    No, you don't have to preload the leaf nodes. But if you don't preload the secondary at all, you'll see more I/O when you read by secondary index.
    If you don't have enough cache to load leaf nodes, you should not call setLoadLNs(true), for primary or secondary DBs. Instead, try to load the internal nodes for all DBs if possible. You can limit the time taken by preload using PreloadConfig.
    I strongly suspect that the primary DB loads faster because it is probably written in key order, while the secondaries are not.
    The LRU-only setting is an environment wide setting and should apply to all databases, that is not a problem. If you are doing random access in general, this is the correct setting.
    Please use preload to reduce the amount of I/O that you see in the environment stats. If performance is still not adequate, you may want to look at the IO subsystem you're using -- do you know what you're getting for seek and read times? Also, you may want to turn on the Java verbose GC option and see if full GCs are occurring -- if so, tuning the Java GC will be necessary.
    --mark
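    A hedged sketch of the preload pattern described above (the environment path and database name are placeholders; with the DPL, the underlying primary/secondary Database handles can also be reached through the index objects):

        import java.io.File;
        import com.sleepycat.je.Database;
        import com.sleepycat.je.DatabaseConfig;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;
        import com.sleepycat.je.PreloadConfig;

        public class PreloadExample {
            public static void main(String[] args) {
                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setCachePercent(70);              // give JE most of the heap
                Environment env = new Environment(new File("/data/bdb-env"), envConfig);

                DatabaseConfig dbConfig = new DatabaseConfig();
                dbConfig.setReadOnly(true);                 // existing, read-only data set
                Database db = env.openDatabase(null, "myPrimaryDb", dbConfig);

                PreloadConfig pc = new PreloadConfig();
                pc.setLoadLNs(false);                       // internal nodes only, per the advice above
                pc.setMaxMillisecs(60000);                  // bound the time spent preloading
                db.preload(pc);

                // ... run queries; repeat the preload for secondary databases if the cache allows.
                db.close();
                env.close();
            }
        }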

  • How to clear entity cache ?

    I am using ADF UIX with Struts as the controller. I have an entity object "Shipments" and two view objects, "ShipmentsOL" and "ShipmentsUL". Different validation logic needs to be applied depending on whether the user is creating records through the ShipmentsOL view or the ShipmentsUL view. I am using the validateEntity method to enforce my business logic. Some fields are common between the two view objects and others are not. When the user commits on one page, they are directed immediately to the second view object.
    When validateEntity runs on the second view object (ShipmentsUL), the values obtained through the getXXX methods for attributes that only appear in the ShipmentsOL view object are still present. What I need is for all the values in the entity to be cleared after processing the first view object. I tried calling getTransaction().clearEntityCache("Shipments") from my commit method within the Struts action, between the first and second view object, but it did not work.
    Any ideas? Please!
    Cheers,
    Brent

    Please ignore. I went back to a prior version of my application and it seemed to now be working as expected. Not enough coffee I think !

  • How can we change the state of records in view cache and entity cache

    Hi everybody,
    I am trying to achieve selective rollback, or selective commit. By this I mean that I am looking for a way to change the state of rows in the view cache, so that selected changed rows can be rolled back at commit time while the rest are committed.
    If anyone has tried anything like this, please help me.

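    A hedged sketch of one way to get selective commit/rollback behaviour: revert the rows you do not want posted with Row.refresh(REFRESH_UNDO_CHANGES) before committing (the view object name and the selection test are placeholders).

        import oracle.jbo.Row;
        import oracle.jbo.RowSetIterator;
        import oracle.jbo.ViewObject;
        import oracle.jbo.server.ApplicationModuleImpl;

        public class SelectiveCommitAMImpl extends ApplicationModuleImpl {
            public void commitAllExceptDiscarded() {
                ViewObject vo = findViewObject("OrdersView1");
                RowSetIterator it = vo.createRowSetIterator(null);
                while (it.hasNext()) {
                    Row r = it.next();
                    // Revert rows flagged by your own selection logic.
                    if (Boolean.TRUE.equals(r.getAttribute("DiscardFlag"))) {
                        r.refresh(Row.REFRESH_UNDO_CHANGES);   // drop this row's pending changes
                    }
                }
                it.closeRowSetIterator();
                getDBTransaction().commit();                   // posts only the remaining changes
            }
        }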

  • I get a NullPointerEx when ADF tries to get Entity Cache

    Hello! I'm using ADF JDeveloper 11.1.2.1.0.
    I'm having a hard time tracing back a bug I'm experiencing. When I try to commit some data that involves setting properties across many view links and entities, I get a weird error near the end:
    <22/05/2012 12:20:49,208> <SEVERE> <BaseLoggerImpl> <logException> java.lang.NullPointerException
         at oracle.jbo.server.EntityImpl.getEntityCache(EntityImpl.java:4665)
         at oracle.jbo.server.EntityImpl.getAttributeInternal(EntityImpl.java:3371)
         at br.com.cds.gtj.gco.model.gco.entity.MovimentoMaterialImpl.getConsultaUltmSeqcMovmMaterialView(MovimentoMaterialImpl.java:647)
         at br.com.cds.gtj.gco.model.gco.entity.MovimentoMaterialImpl.getUltmSeqcMovmMatl(MovimentoMaterialImpl.java:676)
         at br.com.cds.gtj.gco.model.gco.entity.NotaFiscalPedidoImpl.recebeNotaFiscalPedido(NotaFiscalPedidoImpl.java:1208)
    What's really curious about this error is that if I try the commit operation again right after getting the NullPointerException, everything goes smoothly and the process completes. What could cause this kind of error, and why would a second try make it work?
    Thanks in advance,
    Daniel

    It's a very simple VO. It has a query that returns a value I need from the database. I could solve this by making a procedure with this query and programmatically executing that procedure from the entity X, but I'm interested in figuring out why this is happening and what I am doing wrong.
    I will paste the query here, but I suspect my problem is with the entity. I tried creating other view accessors and calling their "get" methods, and none of them can return the RowSet; all end on the same error in the stack trace: EntityCache -> NullPointerException.
    Heres the query:
    SELECT
    MAX(MovimentoMaterial.SEQC_MOVM_MATL) AS UltmSeqc
    FROM
    MOVIMENTO_MATERIAL MovimentoMaterial
    WHERE
    MovimentoMaterial.ALMX_MOVM_MATL = :varAlmxMovmMatl AND
    MovimentoMaterial.CODG_GRPO_MATL_MOVM_MATL = :varCodgGrpoMatlMovmMatl AND
    MovimentoMaterial.CODG_SUBS_MATL_MOVM_MATL = :varCodgSubsMatlMovmMatl AND
    MovimentoMaterial.NUM_SEQC_MATL_MOVM_MATL = :varNumSeqcMatlMovmMatl
    My objective is to set the bind variables in this query and get the value of UltmSeqc, which I will use in later procedures.
    With that in mind, I tried to get the RowSet and, from it, call the setNamedBindVariable-style methods to get the result of this query. Why can't I get a RowSet from this view, given that I created its accessor inside the entity X?
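    For reference, a hedged sketch of how the accessor is typically used inside MovimentoMaterialImpl once the NPE is resolved (the accessor and method names come from the stack trace above; the getters for the four bind values, the UltmSeqc attribute name, and the availability of setNamedWhereClauseParam on the accessor RowSet in 11g are assumptions):

        public oracle.jbo.domain.Number getUltmSeqcMovmMatl() {
            // View accessor defined on the entity; returns a RowSet over the MAX() query.
            oracle.jbo.RowSet rs = getConsultaUltmSeqcMovmMaterialView();
            rs.setNamedWhereClauseParam("varAlmxMovmMatl", getAlmxMovmMatl());
            rs.setNamedWhereClauseParam("varCodgGrpoMatlMovmMatl", getCodgGrpoMatlMovmMatl());
            rs.setNamedWhereClauseParam("varCodgSubsMatlMovmMatl", getCodgSubsMatlMovmMatl());
            rs.setNamedWhereClauseParam("varNumSeqcMatlMovmMatl", getNumSeqcMatlMovmMatl());
            rs.executeQuery();
            oracle.jbo.Row r = rs.first();
            return (r != null) ? (oracle.jbo.domain.Number) r.getAttribute("UltmSeqc") : null;
        }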

  • Transient entity attributes and clearing cache

    I have an entity with both queriable and transient attributes. One of the transient attributes uses the persistent primary key of the entity object, as well as an attribute retrieved from another entity via an association, in order to execute a CallableStatement. This transient attribute uses the following code:
    public Number getReflectallow() {
        if ((Number) getAttributeInternal(REFLECTALLOW) == null) {
            return getReflectAllowFromDB();
        } else {
            return (Number) getAttributeInternal(REFLECTALLOW);
        }
    }
    The getReflectAllowFromDB method also sets the value in the entity cache using the populateAttribute method.
    My problem is that this database value can change, and there are certain points in the application where I would like to clear the cache and rebuild new values for this attribute. I have tried using both getTransaction().clearEntityCache() and clearCache() on the view object, but neither affects this field. I also cannot loop through the view and reset the attribute to null, because that dirties the transaction. I'm assuming that clearing the cache does not work because the attribute is transient. How can I clear these values from the cache?

    Any help with this Shay?
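    A hedged sketch of one way around the "dirtying the transaction" problem: populateAttribute() writes into the entity cache without marking the row as changed, so a small helper on the same EntityImpl subclass can null the transient value and let getReflectallow() re-fetch it lazily for the rows you choose to reset.

        // Inside the same EntityImpl subclass that defines getReflectallow():
        public void resetReflectAllow() {
            // populateAttribute() does not move the entity to a modified state,
            // so calling this on cached rows does not dirty the transaction.
            populateAttribute(REFLECTALLOW, null);
        }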

  • Purpose of Entity / View caches

    From research, articles, testing, and what info I've been able to glean from the forum and Metalink, it appears that entity/view caches are not used to satisfy queries, but to minimize storage of redundant data in memory. In other words, the purpose of the cache is simply to minimize memory use; it does not help query performance in any way.
    Is this accurate? If so, it would seem that minimizing caching might in many cases be better for performance than using a cache.
    I've asked about the nature of entity/view caching in a TAR -- over two weeks in, still no answer....
    B

    Brad,
    A fundamental design point of ADF Business Components is that we let the database be our global, cross-user shared cache. We make no attempt to be the global, shared cache in the middle tier, very much by design. The application module defines the boundaries of the current user's unit of work, and provides facilities for allowing that unit of work to span multiple HTTP requests with failover protection, etc.
    The alternative approach is to try to provide a global, shared cache in the middle tier, require the user to consciously clone objects that need to be modified, and then deal with the issues of keeping the global, shared cache up to date when users make changes. This incurs overhead to maintain the consistency of that global cache when users are pounding lots of changes into it. We consciously opted against this approach after studying the way that Oracle Applications worked with the database in a typical scenario. The AM's transaction, which holds the caches, is a cache of in-progress work by that user, which can optionally have some of its data kept around for the next client that will use that AM from the pool.
    The AM instances are pooled and used in a stateless way, using an algorithm called "stateless with affinity" which attempts to keep an AM instance "sticky" to the client that used it last, if load allows us to do that. This occurs when the user is performing a unit of work that spans pages, so that the AM is being released to the AM pool in the "Managed State" mode instead of the "Stateless" mode.
    During the span of that unit of work, a user might use the same LOV's and visit the same screens over and over in the act of completing the job. The caches allow that user to avoid requerying any of that data during the span of that transaction, and generally the caches will contain only the data that is relevant to that user's task at hand.
    In 10.1.2, you can use the RowQualifier to filter rows in memory for simple kinds of SQL-type predicates. In 10.1.3, we've added a lot more control for querying over the caches, filtering over the caches, and doing both -- under developer control -- over either or both the cache and the database.
    Today, the primary way that the cache comes into play is in avoiding database queries when things like association traversal are performed (typically as part of business logic inside entity objects that need to access related entities to consult properties or methods on them), or when finding an entity by primary key, which is performed for various reasons I've outlined in this blog article...
    http://radio.weblogs.com/0118231/stories/2005/07/28/differenceBetweenViewObjectSelectAndEntityDoselectMethod.html
    ..., as well as avoiding requerying when you re-render view object data that's already been queried during the unit of work.
    The entity cache holds the entity instances that the user has queried into the application module instance's transaction. The entity cache is populated by virtue of a view object's SQL statement, or by a direct or indirect call to EntityDefImpl.findByPrimaryKey().
    That said, we do support the notion of a shared, read-mostly application module as well. To use a shared application module, you need to set the jbo.ampool.isuseexclusive configuration property to the value false.
    Since all users are sharing the same application module, in particular they are sharing the view object instance, and even more in particular, its default rowset's default iterator.
    At the present time, for this feature to work robustly, the client code must ensure that each user iterating the rows of a view object inside a shared AM instance creates his own RowSetIterator and does not rely on the default RowSetIterator. Failure to do this could result in users messing up each other's notion of "current row".
    If you read my article on OTN about VO Performance Tuning...
    http://www.oracle.com/technology/products/jdev/tips/muench/voperftips/index.html
    you'll see that by avoiding caching when not needed, in some situations you can improve performance.
    We hope to make multi-user, shared read-only data access even simpler to take advantage of in the future, but it's possible to achieve today with a little work.
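    A hedged sketch of the shared, read-mostly application module usage described above (the AM definition, configuration and view object names are placeholders; the configuration is assumed to set jbo.ampool.isuseexclusive=false):

        import oracle.jbo.ApplicationModule;
        import oracle.jbo.RowSetIterator;
        import oracle.jbo.ViewObject;
        import oracle.jbo.client.Configuration;

        public class SharedAmExample {
            public static void main(String[] args) {
                ApplicationModule am = Configuration.createRootApplicationModule(
                        "model.LookupServiceAM", "LookupServiceAMShared");
                try {
                    ViewObject vo = am.findViewObject("CountriesLookup");
                    // Each user/thread creates its own secondary iterator instead of
                    // relying on the default RowSetIterator, as noted above.
                    RowSetIterator it = vo.createRowSetIterator(null);
                    while (it.hasNext()) {
                        System.out.println(it.next().getAttribute(0));
                    }
                    it.closeRowSetIterator();
                } finally {
                    Configuration.releaseRootApplicationModule(am, true);
                }
            }
        }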
