Entity beans caching non-persistent data between transactions
Some of the properties in our entity bean implementation classes are not declared
in our descriptor files and are therefore non-persistent (we are using container-managed
persistence); I will refer to these as "non-persistent properties".
In WebLogic 5.1, we've noticed that the non-persistent properties are cached in
between transactions. For instance, I ask for a particular Person (Person(James)),
and I set one of the non-persistent properties (Property(X)) inside Transaction(A).
In Transaction(B) (which is exclusive of Transaction(A)), I access Property(X)
and find that it has the same value I set in Transaction(A). This gives
the appearance that non-persistent entity properties are being cached in between
transactions.
The same appears to hold true in WebLogic 7 SP1; however, we must use the "Exclusive"
concurrency-strategy to maintain this consistency.
I am worried that this assumption we are making about non-persistent properties is
not valid in all cases, and the documentation does not promise anything in the
way of such an assumption. I am worried that the container could kill the Person(James)
entity implementation instance in the pool after Transaction(A) and create a
new Person(James) instance to serve Transaction(B). Once that happens, our assumption
fails.
"Database" concurrency strategy seems to fail our assumption on a regular basis,
but that makes sense, since the documentation states that the "database will maintain
the cache", and the container seems more willing to kill instances when it is
finished with them, or to create new instances for new transactions.
So my question is this: what exactly is guaranteed by the "Exclusive" concurrency-strategy?
Will the assumption that we've made above ever fail under this strategy?
Thanks in advance for any help.
Regards,
James
It simply means that there is only one entity bean instance per PK in the
server, and a transaction which uses it locks it exclusively.
James DeFelice <[email protected]> wrote:
Thank you for the suggestion. I have considered taking this path, but before I
make a final decision, I was hoping to get a clear answer to the question that
I stated below:
What EXACTLY is guaranteed by the "Exclusive" concurrency-strategy? Maybe someone
from BEA knows?
"Cameron Purdy" <[email protected]> wrote:
To be safe: you should clear those values before ejbLoad, or set them after (or both).
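A minimal sketch of that approach, trimmed to plain Java so it stands alone (a real CMP bean would also implement javax.ejb.EntityBean; PersonBean and cachedScore are hypothetical names):

```java
// Sketch only: shows where to clear a non-persistent field so a recycled
// pooled instance never leaks a stale value into a new transaction.
public class PersonBean {
    // non-persistent scratch value; NOT declared in the descriptor,
    // so the container makes no promises about it between transactions
    private Object cachedScore;

    public void setCachedScore(Object v) {
        cachedScore = v;
    }

    public Object getCachedScore() {
        return cachedScore;
    }

    // called by the container when persistent state is (re)loaded;
    // clearing here means Transaction(B) never sees Transaction(A)'s value
    public void ejbLoad() {
        cachedScore = null;
        // ... container populates the CMP fields after this ...
    }
}
```

Either clearing in ejbLoad() or re-deriving the value after load works; doing both is the belt-and-braces version of the suggestion.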
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for Weblogic
"James DeFelice" <[email protected]> wrote in message
news:[email protected]...
[...]
Dimitri
Similar Messages
-
Questions about entity bean caching/pooling
We have a large J2ee app running on weblogic6.1 sp4. We are using entity beans
with cmp/cmr. We have about 200 EntityBeans and accessed quite heavily. We are
struggling with what is the right setting of max-beans-in-cache and idle-time-out.
The current max heap setting is 2GB. With the current setting (a default max-beans-in-cache
of 1000, with a few exceptions to take care of CacheFullExceptions) we run into
extended GC happening after about 4 hours. The memory freed gradually reduces with
time and lurks around the 30% mark after about 4 hours of running at the expected
load. In relation to this, we had the following questions:
1. What does caching mean?
a. If a bean with primary key 100 exists in the cache, and the following is done, what is expected?
i. findByPrimaryKey(100)
ii. findBySomeOtherKey(xyz), which results in loading up the bean with primary key 100
iii. CMR access to the bean with primary key 100
Is the instance in the cache reused at all between transactions?
If there is minimal reuse of the beans in cache, is it fair to assume that caching
can only help loading of beans within a transaction? If this is the case, is there
any driver to increase max-beans-in-cache other than to avoid CacheFullException?
In other words, is it wrong to say that max-beans-in-cache should be set to the
minimum value that avoids CacheFullExceptions?
2. Again, what is the driver for setting idle-time-out to a value? (We currently
have it at 30 secs.) Partly the answer would again go back to how much reuse is
done from the cache. Is it right to say that it should be set to a very low value?
(Why is the default 10 min?)
3. Can you provide any documentation that explains how all this works
in more detail, particularly with regard to entity beans? We have already read
the WebLogic documentation as is. Anything that gives more explicit detail?
Any tools that could be of use?
4. What is the right parameter (from among the things that weblogic console
throws up) to look at for optimizing?
Thanks in advance for your help
Cheers
Arun
The behaviour changes according to these descriptor settings: concurrency-strategy,
db-is-shared and include-updates.
1. If concurrency-strategy is Database, then the database is used to provide locking
and db-is-shared is ignored. A bean's ejbLoad() is called once per transaction,
and the 'cache' is really a per-transaction pool. A findByPrimaryKey() always
initially hits the db, but can use the cache if called again in the same txn (although
you'd just pass a reference around). A findByAnythingElse() always hits
the db.
2. If concurrency-strategy is ReadOnly then the cache is longer-term: ejbLoad()
is only called when the bean is activated; thereafter, the number of times ejbLoad()
is called is influenced by the setting of read-timeout-seconds. A findByPrimaryKey()
can use the cache. A findByAnythingElse() can't.
3. If concurrency-strategy is Exclusive then db-is-shared influences how many
times ejbLoad() is called. If db-is-shared is false (i.e. the container has exclusive
use of the underlying table), then the ejbLoad() behaviour is more like ReadOnly
(2. above), and the cache is longer-term. If db-is-shared is true, then the ejbLoad()
behaviour is like Database (1. above).
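For reference, these settings map onto weblogic-ejb-jar.xml roughly as follows (a sketch from memory of the 6.x/7.x DTD; element placement may vary by version, and the bean name is hypothetical):

```xml
<weblogic-enterprise-bean>
  <ejb-name>PersonBean</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <max-beans-in-cache>1000</max-beans-in-cache>
      <concurrency-strategy>Exclusive</concurrency-strategy>
    </entity-cache>
    <persistence>
      <!-- false = container assumes exclusive use of the table -->
      <db-is-shared>false</db-is-shared>
    </persistence>
  </entity-descriptor>
</weblogic-enterprise-bean>
```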
Exclusive concurrency reduces ejbLoads(), increases the effectiveness of the cache,
but can reduce app concurrency as only one instance of an entity bean can exist
inside the server, and access to it is serialised at the txn level.
You can't use db-is-shared = false in a cluster. So Exclusive mode is less useful.
That's when you think long and hard about Tangosol Coherence (http://www.tangosol.com)
4. If include-updates is true, then the cache is flushed to the db before every
non-findByPrimaryKey() finder call so the finder (which always hits the db) will
get the latest bean values. This overrides a true setting of delay-updates-until-end-of-tx.
The max-beans-in-cache setting refers to the maximum number of active beans (really
beans that have been returned by a finder in a txn that hasn't committed). This
wasn't checked in SP2 (we have an app that accidentally loads 30,000 beans in a
txn with a max-beans-in-cache of 3,000. Slow, but it works, showing 3,000 active
beans, and 27,000 passivated ones...).
This setting is checked in SP5, but I don't know about SP4. So you do need to
size appropriately.
In summary:
- The cache isn't nearly as useful as you'd like. You get far more db activity
with entity beans than you'd like (too many ejbLoads()). This is disappointing.
- findByPrimaryKey() finders can use the cache. How long the cache is kept around
depends on concurrency-strategy.
- findByAnythingElse() finders always hit the db.
WebLogic 8 tidies all this up a bit with a cache-between-transactions setting
and optimistic locking. But I believe findByAnythingElse() finders still have
to hit the db - ejbql is never run against the cache, but is always converted
to SQL and run against the db.
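The WebLogic 8 setting mentioned above is, if memory serves, a per-bean element in weblogic-ejb-jar.xml; a hedged sketch (verify the element name and placement against your 8.1 DTD):

```xml
<entity-descriptor>
  <persistence>
    <!-- allow cached state to survive across transactions
         (combined with Optimistic/ReadOnly concurrency) -->
    <cache-between-transactions>true</cache-between-transactions>
  </persistence>
</entity-descriptor>
```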
Hope this is of some help - feel free to email me at simon-dot-spruzen-at-rbos-dot-com
(you get the idea!)
simon. -
Hi,
We are running our application on WL Platform 7.0. We have a number of EntityBeans (about
30-40) which are container-managed and also use CMR.
The max-beans-in-cache is at its default of 1000. We reach this limit of 1000
for about 10-15 of these beans in a day or two after a restart of the server.
(This is production server, we restart this occasionally for maintenance). The
memory usage for the server keeps increasing and once the entity cache limit is
reached we see that passivation keeps occurring and the heap usage is always at
about 80%-95% of the maximum (Total heap size is 1.5GB). We assume this could
be due to the EntityBeans that are cached by WebLogic. We also see performance
problems occasionally that are probably due to GC or passivation.
We want to lower our memory usage and also get rid of the occasional slow response
times. For doing this, is there any way to flush out those beans from the EntityCache
which are no longer used? WebLogic doesn't seem to flush the cache but only passivates
beans as and when new beans are required. Is there any setting to change this behaviour?
Cheers
Raja V.
Thanks Thorick,
We are using Database concurrency and non-read-only beans, so I believe this
patch should help us.
Secondly, are you aware of any way to find out the memory usage of the default
WLS Entity Bean cache?
Cheers
Raja
"thorick" <[email protected]> wrote:
>
Hi,
If you are using 'Database' concurrency, then support for an idle-timeout-seconds
setting on this cache will be coming in release 7.0sp5. This feature is intended
to ease heap usage for Entity Beans using Database/Optimistic/ReadOnly (but
NOT Exclusive or read-only!). One sets max-beans-in-cache large enough to handle
periodic or occasional peak loads, and idle-timeout-seconds is set to free the
cache of unused beans during periods of low demand.
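The combination described here would look something like this in weblogic-ejb-jar.xml (a sketch; the values are hypothetical, and idle-timeout-seconds for this cache requires 7.0sp5 or the patch mentioned below):

```xml
<entity-descriptor>
  <entity-cache>
    <!-- large enough for peak simultaneous transaction load -->
    <max-beans-in-cache>5000</max-beans-in-cache>
    <!-- free unused cached beans during quiet periods -->
    <idle-timeout-seconds>600</idle-timeout-seconds>
  </entity-cache>
</entity-descriptor>
```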
If you cannot wait for sp5 and are willing to run a patch, there are patches available
for 7.0sp2 and 7.0sp3. You'll have to contact your support representative about these.
Refer to 'CR110440', courtesy of yours truly!
Hope this helps
-thorick -
Entity Bean can only use container-managed transaction demarcation?
In <<Designing Enterprise Applications with J2EE, 2nd ed.>>,
Section 2.3.3.3 Enterprise Bean Transactions, it says: "Entity beans can only use container-managed transaction demarcation."
That means I cannot get a UserTransaction from the EJBContext.
Is that true? Yes, this is a requirement of the spec. Your EJB code generator should give you an error if you use UserTransaction.
--Ashwani -
Programmatic Invalidation of Entity Bean Cache
Hi,
I wonder if there is a way to trigger the invalidation of the WASs Entity Bean cache from a Java program.
Does anybody know anything regarding this issue?
Regards,
Heiko
Hi Heiko,
the SAP EJB container does not support this kind of caching. We support a kind of "read-only" entity bean, but you can mark a bean as "read only" only if it will never be updated; any attempt to update it will produce an exception. By comparison, the JBoss "read only" option can be set for beans that are rarely updated.
This explains why there is no invalidation command in SAP Web AS either.
We plan to implement a similar entity bean cache in the next releases.
I hope this is not a showstopper for your porting project. This kind of caching is usually used for performance reasons, so the fall-back variant would be to define the entity beans as "regular" instead of "read only".
HTH
Regards,
Svetoslav -
Need to cache non-persistent field value
Is it possible to have a non-persistent field in a PC class whose value will be updated by the app, stored in the DataCache, and then used when the PC instance is taken from the DataCache the next time?
I tried to map my field as persistence-modifier="transactional" or "none". I do see in my log that the object is taken from the cache, but that transient field value is always null after object retrieval.
Thanks, Denis.
Thanks for the hint, Abe.
I got what I need via a custom field mapping. If anyone is interested:
public class CacheFieldMapping extends ColumnFieldMapping {
    public CacheFieldMapping(FieldMetaData meta) {
        super(meta);
    }
    public String getMappingType() {
        return getClass().getName();
    }
    protected int getJDBCType() {
        return Types.OTHER;
    }
    public boolean map() {
        return true;
    }
    public void fromMappingInfo(MappingInfo info, boolean adapt) {
        // do nothing...
    }
    public void toMappingInfo(MappingInfo info) {
        // do nothing...
    }
    public void refSchemaComponents() {
        // do nothing...
    }
    public int select(Select sel, KodoStateManager sm, JDBCStoreManager store,
            JDBCFetchConfiguration fetch, int eagerMode) {
        return -1;
    }
    public void load(KodoStateManager sm, JDBCStoreManager store,
            JDBCFetchConfiguration fetch, Result res) throws SQLException {
        // do nothing...
    }
    public void load(KodoStateManager sm, JDBCStoreManager store,
            JDBCFetchConfiguration fetch) throws SQLException {
        sm.storeObject(getIndex(), null);
    }
    public Object loadProjection(JDBCStoreManager store,
            JDBCFetchConfiguration fetch, Result res, Joins joins) throws SQLException {
        return null;
    }
    public Boolean isCustomInsert(KodoStateManager sm) {
        return Boolean.TRUE;
    }
    public Boolean isCustomUpdate(KodoStateManager sm) {
        return Boolean.TRUE;
    }
    protected void update(KodoStateManager sm, Row row) throws SQLException {
        // do nothing...
    }
}
"Abe White" <[email protected]> wrote in message
news:[email protected]..
No, this is not possible. The data cache acts exactly like the
database, only storing persistent values. -
JDBC or Entity Beans for Read-Only Data?
Entity beans are way too slow at pulling the amount of data I need. What are the cons of just using JDBC? Is this bad programming?
One query pulls about 700 rows from 6 different tables, taking up to 20 seconds. The same call using JDBC takes 2 seconds. What is the proper thing to do?
JDBC. Entity beans are not really suited for data that spans multiple tables. This is best done with plain SQL + JDBC. -
How to use same transaction when calling CMP entity beans and DAO (JDBC)
We are currently using Weblogic 8.1 SP2 with an Oracle 10g database (using XA thin and non-XA drivers).
We have a session bean that invokes an entity bean and a DAO (data access object pattern) in order to add data in 2 separate tables (account and history). Rows are added to the first (account) table using a CMP Entity bean while inserts are done in the 2nd (history) table using a DAO. Here is some pseudo code:
addHistorySessionBean (trans-attribute="Required")
begin
Step #1 - call addAccountEntityBean (trans-attribute="Required")
Step #2 - call addHistoryDAO (get datasource, connection)
end
The 2nd table (history) has a foreign key constraint to ensure that the corresponding key exists in the first (account) table. Unfortunately, DAO inserts on the 2nd (history) table fail with a foreign key constraint violation (INTEGRITY CONSTRAINT VIOLATION - PARENT KEY NOT FOUND!) since they cannot see the row added to the 1st (account) table in step #1 by the CMP entity bean.
How does one ensure that all this is done in a single transaction? It appears that the app server creates two separate transactions: one for the session bean facade and the entity bean, and a 2nd transaction (when we retrieve a connection using the same data source JNDI name) for the DAO.
A post on server side suggested using a "<resource-ref>" in the session bean to tie the two potentially separate transactions together, but that does not work for us. However, I am not sure if we are doing that correctly. After we define the resource ref in the session facade bean, do we use the resource ref "name" attribute to lookup the datasource or do we still lookup the datasource by JNDI name ? Do we need to define the resource-ref tag in the entity bean also ?
Does Weblogic allow using a single transaction for this type of a scenario ? How does one specify within Weblogic that the same transaction should be utilized by the entity bean and any subsequent DAOs?
People have also suggested that we defer constraint checking until the transaction(s) are committed, but that sounds like a workaround that doesn't address the issue. Would postponing the constraint checking in Oracle cause any additional overhead?
Any suggestions with specific examples or documentation on how to address this issue will be greatly appreciated.
Thanks for your suggestion. Unfortunately, this does not work, since it appears that there are 2 separate transactions going on here: one, the original, initiated by the session bean and used by the entity bean, and the other initiated by the DAO. Any other ideas appreciated.
Hi,
Try setting the delay-database-inserts tag to ejbCreate in the RDBMS descriptor file.
http://bernal/stage/wls/docs81/ejb/DDreference-cmp-jar.html#1113981
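If memory serves, the corresponding element in the 8.1 weblogic-cmp-rdbms-jar.xml is <delay-database-insert-until>; a sketch (the bean name is hypothetical; check the DDreference page for your service pack):

```xml
<weblogic-rdbms-bean>
  <ejb-name>AccountBean</ejb-name>
  <!-- delay the INSERT until ejbCreate (or ejbPostCreate) completes -->
  <delay-database-insert-until>ejbCreate</delay-database-insert-until>
</weblogic-rdbms-bean>
```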
vasanthi ramesh -
How to disable cache implementation in 8.1 SP6 for entity beans
In our production environment, we want to disable the cache implementation for entity beans. We are using WebLogic 8.1 SP6.
The WebLogic cache implementation has been causing many CacheFullExceptions, so we want it to fetch the data from the database every time instead of from the cache.
Check the "max-beans-in-cache" setting in your weblogic-ejb-jar.xml. The CacheFullException means that a new bean could not be created because this cache is full.
It has nothing to do with fetching from the database directly.
If you really want direct fetches then do not use beans at all. Consider using direct JDBC/SQL calls (without an entity bean) to fetch huge data.
Let's again review your application based on the below explanation before making a decision on which configuration change to perform:
If you truly have many ejbs simultaneously enrolled in transactions, then yes, the cache must be big enough to hold all the instances.
Your cache must have a max size large enough to handle your peak simultaneous transaction load.
Also, ask your application developer the following: does every user actually have to have more than one bean in the cache? Are many of these values read-only reference values that are not updated?
SOLUTION CHOICE:
1. If you are testing with a larger number of simultaneous users, then you will need to increase the "max-beans-in-cache" setting in your weblogic-ejb-jar.xml to match the load.
2. If you have few users but still see a lot of beans, then consult with your application developer, ask the questions above (does every user have more than one bean...), and based on that try to redesign/tune your application into an effective entity bean model.
3. If you have queries where huge result sets are returned, consider using plain/direct JDBC/SQL calls.
Arun -
Using a BLOB data type in a CMP Entity Bean??
Hi all,
I am trying to store BLOB-type data using an entity bean. The corresponding field in the entity bean is defined with data type byte[], and I'm getting an error while creating the entity bean. Could anyone please help me?
This is the error:
[6/7/06 11:19:05:559 EEST] 5b7936b5 ExceptionUtil E CNTR0019E: Non-application exception occurred while processing method "create". Exception data: com.ibm.ejs.container.CreateFailureException: ; nested exception is:
javax.ejb.EJBException: nested exception is: com.ibm.ws.ejbpersistence.utilpm.PersistenceManagerException: PMGR6022E: Error using adapter to create or execute an Interaction. com.ibm.ws.rsadapter.cci.WSInteractionImpl@6593f6a7
javax.ejb.EJBException: nested exception is: com.ibm.ws.ejbpersistence.utilpm.PersistenceManagerException: PMGR6022E: Error using adapter to create or execute an Interaction. com.ibm.ws.rsadapter.cci.WSInteractionImpl@6593f6a7
com.ibm.ws.ejbpersistence.utilpm.PersistenceManagerException: PMGR6022E: Error using adapter to create or execute an Interaction. com.ibm.ws.rsadapter.cci.WSInteractionImpl@6593f6a7
at com.ibm.ws.ejbpersistence.dataaccess.DataAccessRequestImpl.execute(Unknown Source)
at com.ibm.ws.ejbpersistence.beanextensions.ConcreteBeanStatefulInstanceExtensionImpl.executeCreate(Unknown Source)
at com.ibm.ws.ejbpersistence.beanextensions.CBNotExistState.ejbCreate(Unknown Source)
at com.ibm.ws.ejbpersistence.beanextensions.ConcreteBeanStatefulInstanceExtensionImpl.ejbPostCreate(Unknown Source)
at org.epo.dg3.task.ConcreteDocStore_5418fe50.ejbPostCreate(ConcreteDocStore_5418fe50.java:133)
at org.epo.dg3.task.EJSCMPDocStoreHomeBean_5418fe50.create_Local(EJSCMPDocStoreHomeBean_5418fe50.java:29)
at org.epo.dg3.task.EJSLocalCMPDocStoreHome_5418fe50.create(EJSLocalCMPDocStoreHome_5418fe50.java:23)
at java.lang.reflect.Method.invoke(Native Method)
at com.ibm.etools.utc.model.ReflectionMethodModel.invoke(ReflectionMethodModel.java:68)
Try to use the OCI native driver instead of THIN.
-
Caching CMP Entity Beans with 9.0.2
Hi,
my CMP Entity Bean is accessed from a JSP page through a Stateless Session Bean. The Session Bean executes findAll() on the Entity Bean and the JSP page iterates over the Collection and displays the fields through their getter methods. It turns out that every getXXX method results in a complete load of the bean from the database, so for 3 database records with 4 attributes I get 12 SELECT statements and 12 Activate/Load/Passivate cycles of the Entity Bean!
I tried to set up the attributes in orion-ejb-jar.xml correctly, but that didn't show a difference:
-) exclusive-write-access="true" only works in "read-only" mode (see oc4j_ejb_guide_r2.pdf)
-) setting timeouts didn't work
My Entity Bean settings are:
exclusive-write-access="true"
instance-cache-timeout="60"
max-instances="100"
max-tx-retries="0"
pool-cache-timeout="100"
The EJB-Callbacks are:
first the Session-Bean is created, and findAll() is executed - then all Entity-Beans are passivated again:
TestManBean Constructor called 1
TestManBean.setSessionContext() 1
TestManBean.ejbCreate() 1
PersonBean Constructor called 1
PersonBean.setEntityContext() 1
PersonBean.ejbActivate() 1
PersonBean.ejbLoad() 1
PersonBean Constructor called 2
PersonBean.setEntityContext() 2
PersonBean.ejbActivate() 2
PersonBean.ejbLoad() 2
PersonBean Constructor called 3
PersonBean.setEntityContext() 3
PersonBean.ejbActivate() 3
PersonBean.ejbLoad() 3
PersonBean.ejbPassivate() 1
PersonBean.ejbPassivate() 2
PersonBean.ejbPassivate() 3
Now for every getXXX method the following occurs:
PersonBean.ejbActivate() 3
PersonBean.ejbLoad() 3
PersonBean.ejbPassivate() 3
I use OC4J 9.0.2 Production with Oracle 8.1.6.
How can I turn on Entity-Bean Caching?
Thanks,
Gilbert
Hi Gilbert,
Someone recently asked a similar question to yours. Perhaps that thread will
be helpful to you?
Author: Valeri Kireitchik
Subject: how to set exclusive-write-access="true"
Date: May 17, 2002 10:18 PT
URL:
Re: Get the Table name by providing the Sequence name
The same person (Valeri) also asked his question on two other forums:
http://www.elephantwalker.com/rfa?id=549
http://www.mail-archive.com/orion-interest%40orionserver.com/msg19037.html
Good Luck,
Avi. -
We have a rather large system where we get a lot of transactions. To
tune our system, we decided to turn off transaction support for some
beans (by setting the transaction attribute to NotSupported).
This did not decrease the number of transactions run, so we started
wondering what was going on.
To test this we first implemented a small test application and then
also tested some examples provided by weblogic. In both cases we came
to the conclusion that when using entity beans, transactions are always
run. We tested with WL 6.1 and WL 6.1 SP3.
A short example of some code:
Client:
for (int i = 0; i < numBeans; i++) {
    accounts = findOrCreateAccount("ID: " + i, i * 1000);
}
// transfer 1000 from account 10 to account 11
try {
    accounts[10].withdraw(1000);
    accounts[11].deposit(1000);
} catch (ProcessingErrorException e) {
    log(" could not xfer cash");
} catch (RemoteException e) {
    log(" could not xfer cash");
}
AccountBean (Entity Bean):
public double deposit(double amount) {
    log("Depositing $" + amount + " into '" + accountId + "'");
    if (weblogic.transaction.TxHelper.getTransaction() == null) {
        log("dep no transaction");
    } else {
        log("dep tx: " + weblogic.transaction.TxHelper.getTransaction().getXID());
    }
    balance += amount;
    if (true) {
        log("throwing EJB EXCEPTION FOR TESTING");
        throw new EJBException();
    }
    return balance;
}
ejb-jar.xml
<container-transaction>
<method>
<ejb-name>beanManaged</ejb-name>
<method-intf>Remote</method-intf>
<method-name>deposit</method-name>
</method>
<method>
<ejb-name>beanManaged</ejb-name>
<method-intf>Remote</method-intf>
<method-name>withdraw</method-name>
</method>
<trans-attribute>NotSupported</trans-attribute>
</container-transaction>
The first indication that a transaction was running was to check the
log:
deposit transaction: 370:971b99c458e3d95b
Then the transaction was rolled back and we could see that the
account balance was unchanged which proves that a transaction was
running.
We can't be the first ones to discover this, but I can't find this
documented anywhere else.
If an entity bean is called and no transaction is in effect, the container
will always start a transaction for its own use. It's not expensive to start a
transaction in WLS. The best architecture is almost always to begin/commit
transactions before calling entity beans and let the entity beans inherit
the transaction, so that multiple calls to an entity bean are grouped into
a single transaction.
Seth
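In descriptor terms, that advice amounts to marking the facade method Required so that entity bean calls made inside it inherit a single transaction; a sketch for ejb-jar.xml (the bean and method names are hypothetical):

```xml
<container-transaction>
  <method>
    <ejb-name>AccountFacade</ejb-name>
    <method-name>transfer</method-name>
  </method>
  <!-- the facade starts (or joins) the tx; entity beans called
       inside inherit it, so withdraw+deposit commit atomically -->
  <trans-attribute>Required</trans-attribute>
</container-transaction>
```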
Daniel Bernholc wrote:
[...] -
Entity beans(EJB 3.0) replication in IAS 10.1.3
Hi,
I have an application with web services, having a few stateless session beans and a few entity beans. The application has to be deployed in a grid environment, on 2 machines.
My problem is that a change in an entity needs to be replicated on both machines. Right now, with the application deployed on 2 machines in a load-balanced, non-clustered environment, I have stale data. I have reviewed the option of cache coordination, but TopLink Essentials doesn't have this feature.
As a temp solution, one of the machines was stopped. Another option is to disable the entity beans cache completely, but that's not what I'm looking for.
I have done some digging in the documentation, and it's clear that stateful beans can be replicated in a clustered environment. In this configuration, are the entity beans replicated too?
Thanks for your reply. I'm actually interested in the entities, not the web session. I need to make sure that the entity cache stays synchronized between the nodes.
Example:
I have entity "customer" for example.. I do an update from one of the nodes, changing the customer name, and do commit. This happens in node 1. I want to make sure that the node 2 knows this happened. -
Exposing local entity beans to web tier?
Hello all,
I have a question concerning basic design issues with EJB and the JSP presentation
layer. Currently, I have designed a system using all local component interfaces
of EJB 2.0. I currently have the following architecture in place:
EJB --> Servlets (WebWork Actions) --> JSP
I'm utilizing the session facade design pattern where the business logic is encapsulated
in session beans, which internally access the entity beans, etc. However, I am
doing something of the following:
sessionBean.getUserAccounts(), which returns a Collection of actual UserAccountLocal objects.
I then am passing this collection off to the JSP just to cycle through each entity
bean and display the data via its getXXX() methods. No modifications to the entity
beans are being done directly in the Actions/JSPs, rather they are modified through
methods of the session beans. Now I know that there is the concept of DTO (Data
transfer objects), which send the data from the particular entity bean to a regular
java bean for presentation. I know that DTO's increase performance if one is using
remote interfaces, because there is less network traffic that occurs via that
transport method. However, I know that WebLogic performs excellent caching of
entity beans, so that multiple invocations of get() methods on entity beans will
not make a trip to the database each and every time the get() method is called.
So, my question is: Is it "safe" to continue with the current way I am designing/coding
the system? I just find it a bit tedious to create value objects for each and
every entity bean, if I know that I will not be calling setXXX() methods from
within the presentation layer. Also, with EJB 2.0 and the introduction of local
component interfaces, it seems that issues regarding limiting the amount of network
traffic don't seem to be so relevant. Any suggestions/tips are appreciated. :-)
Best regards,
Ryan LeCompte
[email protected]
http://www.louisiana.edu/~rml7669

Use DTOs. The main reason is that if you call a getXXX() method on a local or remote
interface from your servlet, that bean is retrieved again (as the
servlet is outside the transaction involved in the initial retrieval).
For example, if you retrieve 100 users and want to display them in an HTML
table with the user id, first name, and last name, then more than 300 SQL
statements end up being executed (unless your EJBs are read-only).
If you have a tool that traces SQL statements (like SQL Server Profiler), I
recommend you use it to see the staggering number of SQL statements
being executed by your current code; then DTOs will look much more
appealing (it worked for me). :)
I would also recommend using DTOs when performing updates. Basically, work
towards your servlets never directly accessing anything entity-bean related.
Some people extend this further and have the DTO as the single argument to
the create method of an entity bean. I haven't done this myself yet, but it
looks like a good idea to me.
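To make the suggestion concrete, here is a minimal sketch of a DTO for the 100-user example above. The class and field names are made up for illustration, and the session-facade method is shown only as comments, since it depends on the poster's existing UserAccountLocal interface:

```java
import java.io.Serializable;

// Immutable value object carrying only the fields the JSP displays.
// Once built inside the session bean's transaction, it can be handed to
// the web tier with no further entity-bean (and hence SQL) access.
class UserAccountDTO implements Serializable {
    private final String id;
    private final String firstName;
    private final String lastName;

    UserAccountDTO(String id, String firstName, String lastName) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
    }

    String getId()        { return id; }
    String getFirstName() { return firstName; }
    String getLastName()  { return lastName; }
}

// Inside the session facade (sketch only; UserAccountLocal and the finder
// are the poster's existing artifacts, not shown here):
//
// public Collection getUserAccountDTOs() throws FinderException {
//     Collection locals = userAccountHome.findAll();  // one transaction
//     List dtos = new ArrayList();
//     for (Iterator i = locals.iterator(); i.hasNext(); ) {
//         UserAccountLocal ua = (UserAccountLocal) i.next();
//         dtos.add(new UserAccountDTO(ua.getId(),
//                                     ua.getFirstName(),
//                                     ua.getLastName()));
//     }
//     return dtos;  // detached copies, safe to iterate in the JSP
// }
```

The key point is that all getXXX() calls happen inside the facade's single transaction; the JSP only ever touches the detached DTOs.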
Statement: In an entity bean, passivation includes releasing resources as well as pushing
the data it is holding back to the underlying database prior to passivation itself.
Hi, when I read about stateful session beans, the author mentioned passivation because a stateful bean has to store the conversational state of many clients, so the bean pushes that data to secondary storage (disk) for future use; if at some point a client needs it again, the bean can restore the state. That's passivation.
But with entity beans, I see the author also talking about passivation, though without much explanation. The entity bean is already using permanent storage, so why does it need the secondary storage that passivation implies?
After all, activation here means grabbing data from the underlying DB. How does passivation help here?

Just a guess - I have no idea about EJBs:
Could it be that passivation in this case simply means "performing the stuff that's needed for an entity bean to be persistent"? An entity bean, IIRC, represents a persistent set of data, which means that at some point in time the data must be stored (written to the DB) to be accessible later. That would mean "passivation" is simply a name for an entity bean's normal behavior.
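That guess matches the EJB lifecycle ordering: when the container passivates an entity bean, it calls ejbStore() (synchronize state to the database) and then ejbPassivate() (release resources and return the instance to the pool); activation is ejbActivate() followed by ejbLoad(). So, unlike a stateful session bean, nothing is serialized to a disk file; the database itself plays the role of "secondary storage". A plain-Java simulation of that ordering (no real container or javax.ejb here, just an illustration of who calls what, in what order):

```java
import java.util.ArrayList;
import java.util.List;

// Records the lifecycle callbacks in the order the "container" invokes them.
class SimulatedEntityBean {
    final List<String> log = new ArrayList<>();
    void ejbStore()     { log.add("ejbStore");     } // sync state to the DB
    void ejbPassivate() { log.add("ejbPassivate"); } // release resources
    void ejbActivate()  { log.add("ejbActivate");  } // reacquire resources
    void ejbLoad()      { log.add("ejbLoad");      } // re-read state from the DB
}

// Stand-in for the EJB container's passivation/activation protocol.
class Container {
    // Before pooling an instance, the container first synchronizes its
    // state with the database, then lets it release resources.
    static void passivate(SimulatedEntityBean b) {
        b.ejbStore();
        b.ejbPassivate();
    }
    // On activation, state comes back from the database, not from disk.
    static void activate(SimulatedEntityBean b) {
        b.ejbActivate();
        b.ejbLoad();
    }
}
```

Run through once, the log reads ejbStore, ejbPassivate, ejbActivate, ejbLoad: the database round trip is built into the protocol, which is why the book can say passivation "pushes the data back to the underlying database".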