JPA - Shared Entity
Hi All,
I am using JPA (Hibernate), and I am stuck...
I have an Address class and a Customer class. Customer has different types of Address: fixed (one), delivery (multiple) and charge (multiple).
If I try to map delivery and charge each with OneToMany, JPA will create a table with a customer PK, a delivery PK and a charge PK, and every time it tries to store a delivery PK the charge PK will be null, causing an exception.
I could fix that using inheritance and InheritanceType.JOINED, but I would prefer to use a single table, for queries.
Is this a Hibernate/JPA problem or a design problem? How do I fix it?
Thx in advance
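For what it's worth, one common way out (a sketch only; the join-column and join-table names are illustrative assumptions, not from the thread) is to give each relationship its own join column or join table, so the three address roles never share one foreign key:

```java
@Entity
public class Customer {
    @Id
    private Long id;

    @OneToOne
    @JoinColumn(name = "FIXED_ADDRESS_ID")         // single fixed address
    private Address fixedAddress;

    @OneToMany
    @JoinTable(name = "CUSTOMER_DELIVERY_ADDRESS") // its own link table, so no null FKs
    private List<Address> deliveryAddresses;

    @OneToMany
    @JoinTable(name = "CUSTOMER_CHARGE_ADDRESS")   // its own link table, so no null FKs
    private List<Address> chargeAddresses;
}
```

With separate join tables, a single ADDRESS table can serve all three roles without resorting to inheritance.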
Similar Messages
-
EclipseLink + JPA + Generic Entity + SINGLE_TABLE Inheritance
I was wondering if it's possible in JPA to define a generic entity, in my case PropertyBase<T>, derive concrete entity classes like ShortProperty and StringProperty, and use them with the SINGLE_TABLE inheritance mode. If I try to commit newly created ElementModel instances (see ElementModelTest) via the EntityManager, I always get a NumberFormatException saying that "value" can't be properly converted to a Short. Strangely enough, if I define all the classes below as static inner classes of my test case class ElementModelTest, it seems to work. Any ideas what I need to change to make this work?
I'm using EclipseLink eclipselink-2.6.0.v20131019-ef98e5d.
public abstract class PersistableObject implements Serializable {
    private static final long serialVersionUID = 1L;
    private String id = UUID.randomUUID().toString();
    private Long version;

    public PersistableObject() {
        this(serialVersionUID);
    }
    public PersistableObject(final Long paramVersion) {
        version = paramVersion;
    }
    public String getId() {
        return id;
    }
    public void setId(final String paramId) {
        id = paramId;
    }
    public Long getVersion() {
        return version;
    }
    public void setVersion(final Long paramVersion) {
        version = paramVersion;
    }
    public String toString() {
        return this.getClass().getName() + "[id=" + id + "]";
    }
}

public abstract class PropertyBase<T> extends PersistableObject {
    private static final long serialVersionUID = 1L;
    private String name;
    private T value;

    public PropertyBase() {
        this(serialVersionUID);
    }
    public PropertyBase(final Long paramVersion) {
        this(paramVersion, null);
    }
    public PropertyBase(final Long paramVersion, final String paramName) {
        this(paramVersion, paramName, null);
    }
    public PropertyBase(final Long paramVersion, final String paramName, final T paramValue) {
        super(paramVersion);
        name = paramName;
        value = paramValue;
    }
    public String getName() {
        return name;
    }
    public void setName(final String paramName) {
        name = paramName;
    }
    public T getValue() {
        return value;
    }
    public void setValue(final T paramValue) {
        value = paramValue;
    }
}

public class ShortProperty extends PropertyBase<Short> {
    private static final long serialVersionUID = 1L;

    public ShortProperty() {
        this(null, null);
    }
    public ShortProperty(final String paramName) {
        this(paramName, null);
    }
    public ShortProperty(final String paramName, final Short paramValue) {
        super(serialVersionUID, paramName, paramValue);
    }
}

public class StringProperty extends PropertyBase<String> {
    private static final long serialVersionUID = 1L;

    protected StringProperty() {
        this(null, null);
    }
    public StringProperty(final String paramName) {
        this(paramName, null);
    }
    public StringProperty(final String paramName, final String paramValue) {
        super(serialVersionUID, paramName, paramValue);
    }
}

public class ElementModel extends PersistableObject {
    private static final long serialVersionUID = 1L;
    private StringProperty name = new StringProperty("name");
    private ShortProperty number = new ShortProperty("number");

    public ElementModel() {
        this(serialVersionUID);
    }
    public ElementModel(final Long paramVersion) {
        super(paramVersion);
    }
    public String getName() {
        return name.getValue();
    }
    public void setName(final String paramName) {
        name.setValue(paramName);
    }
    public Short getNumber() {
        return number.getValue();
    }
    public void setNumber(final Short paramNumber) {
        number.setValue(paramNumber);
    }
}
<?xml version="1.0" encoding="UTF-8" ?>
<entity-mappings version="2.1"
xmlns="http://xmlns.jcp.org/xml/ns/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence/orm http://xmlns.jcp.org/xml/ns/persistence/orm_2_1.xsd">
<mapped-superclass
class="PersistableObject">
<attributes>
<id name="id">
<column name="id" />
</id>
<version name="version" access="PROPERTY">
<column name="version" />
</version>
</attributes>
</mapped-superclass>
<entity class="PropertyBase">
<table name="PropertyBase" />
<inheritance />
<discriminator-column name="type"/>
<attributes>
<basic name="name">
<column name="name" />
</basic>
<basic name="value">
<column name="value" />
</basic>
</attributes>
</entity>
<entity class="StringProperty">
<discriminator-value>StringProperty</discriminator-value>
</entity>
<entity class="ShortProperty">
<discriminator-value>ShortProperty</discriminator-value>
</entity>
<entity class="ElementModel">
<table name="ElementModel" />
<inheritance />
<discriminator-column name="type"/>
<attributes>
<one-to-one name="name">
<join-column name="name" referenced-column-name="id" />
<cascade>
<cascade-all />
</cascade>
</one-to-one>
<one-to-one name="number">
<join-column name="number" referenced-column-name="id" />
<cascade>
<cascade-all />
</cascade>
</one-to-one>
</attributes>
</entity>
</entity-mappings>
public class ElementModelTest extends ModelTest<ElementModel> {
    public ElementModelTest() {
        super(ElementModel.class);
    }

    @Test
    @SuppressWarnings("unchecked")
    public void testSQLPersistence() {
        final String PERSISTENCE_UNIT_NAME = getClass().getPackage().getName();
        new File("res/db/test/" + PERSISTENCE_UNIT_NAME + ".sqlite").delete();
        EntityManagerFactory factory = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT_NAME);
        EntityManager em = factory.createEntityManager();
        Query q = em.createQuery("select m from ElementModel m");
        List<ElementModel> modelList = q.getResultList();
        int originalSize = modelList.size();
        for (ElementModel model : modelList) {
            System.out.println("name: " + model.getName());
        }
        System.out.println("size before insert: " + modelList.size());
        em.getTransaction().begin();
        for (int i = 0; i < 10; ++i) {
            ElementModel device = new ElementModel();
            device.setName("ElementModel: " + i);
            device.setNumber((short) i);
            em.persist(device);
        }
        em.getTransaction().commit();
        modelList = q.getResultList();
        System.out.println("size after insert: " + modelList.size());
        assertTrue(modelList.size() == (originalSize + 10));
        em.close();
    }
}
This was answered in a cross post here: java - EclipseLink + JPA + Generic Entity + SINGLE_TABLE Inheritance - Stack Overflow
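As a workaround sketch (plain Java, an assumption rather than anything proposed in the thread), the shared single-table "value" column can be kept as a String, with each subclass converting on access, so Short and String instances can coexist in one column type:

```java
// Sketch: the shared column holds a String; subclasses convert on access.
// Class names are illustrative; no JPA annotations, just the conversion idea.
class PropertySketch {
    static abstract class PropertyBase<T> {
        String rawValue;                                   // what the single "value" column would store
        abstract T fromString(String s);
        abstract String asString(T v);
        T getValue() { return rawValue == null ? null : fromString(rawValue); }
        void setValue(T v) { rawValue = (v == null) ? null : asString(v); }
    }
    static class ShortProperty extends PropertyBase<Short> {
        Short fromString(String s) { return Short.valueOf(s); }
        String asString(Short v) { return v.toString(); }
    }
    static class StringProperty extends PropertyBase<String> {
        String fromString(String s) { return s; }
        String asString(String v) { return v; }
    }
    public static void main(String[] args) {
        ShortProperty n = new ShortProperty();
        n.setValue((short) 7);
        System.out.println(n.getValue());                  // prints 7
    }
}
```

Mapping only rawValue to the "value" column would sidestep the Short/String conversion conflict.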
Short answer: no, it shouldn't work, as the underlying database field type would be constant. So either the Short or the String would have problems converting to the database type if they both mapped to the same table field. -
JPA: Error "entity is detached" when executing a query
Hi experts,
I have two database tables with a foreign key constraint and generated JPA-entities for them:
@Entity
public class Verdeck implements Serializable {
    @EmbeddedId
    private VerdeckPK pk;

    @Column(name = "ID_VERDECK")
    private String idVerdeck;

    @OneToMany(mappedBy = "verdeck")
    @PersistenceContext
    private Set<Uzsb> uzsbCollection;
}

@Embeddable
public class UzsbPK implements Serializable {
    @Column(name = "ID_UZSB")
    private String idUzsb;

    @Column(name = "ID_PROJECT")
    private BigDecimal idProject;
}
Furthermore I have a SessionBean implementing a query in one of its business methods:
@WebMethod(operationName = "getVerdeckData", exclude = false)
public List<Verdeck> getVerdeckData(@WebParam(name = "searchkey") BigDecimal searchkey) {
    Query q = em.createQuery("SELECT v FROM Verdeck v WHERE v.pk.idProject = :searchkey")
                .setParameter("searchkey", searchkey);
    return q.getResultList();
}
When calling the method via WebService-Navigator I get this error:
"com.sap.engine.services.webservices.espbase.server.additions.exceptions.ProcessException: The relationship >>uzsbCollection<< of entity (com.karmann.r57schraub.jpa.Verdeck(idProject=57, idIntern=v1))cannot be loaded because the entity is detached"
(idProject / idIntern) is the composed key of "Verdeck" and (57 / v1) is a concrete value for this key in the database table.
If required I could give you classes VerdeckPK and UzsbPK as well.
Could you please explain what I'm doing wrong?
Thanks for each hint,
Christoph
Hi Vladimir,
thank you for this hint! Especially for the article, which provides the necessary background knowledge.
But using FetchType.EAGER does not solve my problem; I get a runtime error. In defaultTrace I get the following message:
SAXException2: A cycle is detected in the object graph. This will cause infinitely deep XML: com.karmann.r57schraub.jpa.Verdeck@f9ac24 -> com.karmann.r57schraub.jpa.Uzsb@1f2a70 -> com.karmann.r57schraub.jpa.Verdeck@f9ac24]->com.sun.istack.SAXException2: A cycle is detected in the object graph. This will cause infinitely deep XML: com.karmann.r57schraub.jpa.Verdeck@f9ac24 -> com.karmann.r57schraub.jpa.Uzsb@1f2a70 -> com.karmann.r57schraub.jpa.Verdeck@f9ac24#
I already checked the records in my tables. There does not seem to be any cycle. Here's my test data:
VERDECK
ID_PROJECT ID_INTERN ID_VERDECK
57 v1 vext1
57 v2 vext2
UZSB
ID_PROJECT ID_UZSB ID_INTERN TYP_UZSB
57 ls1 v1 <null>
57 rs1 v1 <null>
57 sd1 v1 <null>
57 ls2 v2 <null>
57 rs2 v2 <null>
57 sd2 v2 <null>
Do you have any more ideas?
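For reference, a common way to break such a JAXB cycle (a sketch, not something suggested in this thread) is to exclude the back-reference from XML serialization:

```java
public class Uzsb {
    @ManyToOne
    @XmlTransient            // JAXB skips this side, ending the Verdeck <-> Uzsb loop
    private Verdeck verdeck;
}
```

The relationship stays intact for JPA; only the marshalled XML omits the back-pointer.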
Regards,
Christoph -
Setting the size of a shared entity cache
Hi,
I am using WLS 7.1 and have a few entity beans that use a shared
application-level cache. I would like to set the size of this cache using
the admin console.
As this property is specified in the weblogic-application.xml file, I
attempted to access the page in the Admin console Deployment Descriptor
Editor that lets you modify this deployment descriptor.
However, clicking on the Weblogic-application node in the deployment
descriptor editor gives me an Error 404--Not Found message.
Any ideas why this is happening?
Thanks in advance.
Santosh
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.754 / Virus Database: 504 - Release Date: 6/09/2004
-
EJB/JPA Session/Entity Bean: Back Reference Id not set
Hello All,
I am using JPA as a persistence tool. I have an entity Account, and in Account there is a collection of the entity AccountDetail. In Account the relation to AccountDetail is One-To-Many, and in AccountDetail the relation to Account is Many-To-One. When I save the Account, the Account and AccountDetail objects are saved correctly. But the problem is that the Id of the Account is not saved in the AccountDetail database table. I have an AccountId column in the AccountDetail table. Please help!
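A common cause of the missing AccountId (an assumption, sketched here in plain Java rather than JPA code): the Many-To-One side owns the foreign key, so each AccountDetail must reference its Account before saving, typically via a helper that keeps both sides of the bidirectional link in sync:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch; Account/AccountDetail stand in for the JPA entities.
class Account {
    final List<AccountDetail> details = new ArrayList<>();

    void addDetail(AccountDetail d) {
        details.add(d);
        d.account = this;   // without this line, the AccountId column stays NULL
    }
}

class AccountDetail {
    Account account;        // owning side; maps to the AccountId foreign-key column
}
```

JPA writes the foreign key from the owning (Many-To-One) side only, so adding the child to the parent's collection alone is not enough.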
The second thing is: how do I generate primary keys automatically when an entity is saved in the database?
Define the column you want to auto-increment as an Integer with the Identity attribute set to true (using the SQL Server management tool, for example) and then put
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
in front of the ID declaration in the mapping file; it should generate the IDs automatically. -
Hi,
I am looking for help to create a class hierarchy that would use interfaces instead of inheritance.
My example: in a house I have living rooms, kitchens, and bedrooms. So I have a class for each type of room, and these classes extend Construction (another entity).
I also have furniture. I would like to retrieve all the rooms that have a given furniture type.
See below (simple version):
@Entity
class Construction {
    @Id
    private Long id;
}

@Entity
class LivingRoom extends Construction implements Equipable {
    @ManyToMany
    List<Furniture> furnitures;
}

@Entity
class Furniture {
    @ManyToMany
    List<Equipable> equipables;
}
Is it possible to map that with JPA? My final objective is to retrieve a list of the places where a piece of furniture could be, and also all the furniture that a place could have.
I am trying that with JPA/Hibernate but I am not having any success.
And, if it is not possible, what is the best way to implement it?
Thx
Unfortunately you cannot do this. From the persistence section of the Java EE 5 Tutorial:
The persistent state of an entity can be accessed either through the entity's instance variables or through JavaBeans-style properties. The fields or properties must be of the following Java language types:
- Java primitive types
- java.lang.String
- Other serializable types, including:
  - Wrappers of Java primitive types
  - java.math.BigInteger
  - java.math.BigDecimal
  - java.util.Date
  - java.util.Calendar
  - java.sql.Date
  - java.sql.Time
  - java.sql.Timestamp
  - User-defined serializable types
  - byte[]
  - Byte[]
  - char[]
  - Character[]
- Enumerated types
- Other entities and/or collections of entities
- Embeddable classes
As you can see this doesn't mention interfaces, which makes sense since they are abstract and have no actual state to persist - the best way I've thought of to cope would be to have everything you want to implement 'equipable' inherit from a parent class, even if that class is completely anemic.
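A sketch of that suggested parent-class workaround (class names are illustrative assumptions): make the "equipable" things share an abstract entity superclass that the relationship can target instead of the interface:

```java
@Entity
public abstract class EquipableConstruction extends Construction {
    @ManyToMany(mappedBy = "equipables")
    private List<Furniture> furnitures;
}

@Entity
public class LivingRoom extends EquipableConstruction { }

@Entity
public class Furniture {
    @Id
    private Long id;
    @ManyToMany
    private List<EquipableConstruction> equipables;  // concrete superclass, not the interface
}
```

The superclass can stay completely empty of state apart from the relationship; its only job is to give JPA a persistable target type.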
If anyone has any better solutions please reply. -
JPA and Entity manager.
Hi all,
I have a "simple" question:
Should I create a class to manage entity manager and entity manager factory on JPA2?
Why do I ask that? Because I read in the Java EE tutorial:
With a container-managed entity manager, an EntityManager instance's persistence context is automatically propagated by the container to all application components that use the EntityManager instance within a single Java Transaction API (JTA) transaction.
To obtain an EntityManager instance, inject the entity manager into the application component:
@PersistenceContext
EntityManager em;
I'm using JSF2, JPA2, JTA and Glassfish3.1 on the NetBeans 7.0.1.
The wizard that creates the JSF pages from an entity class does a good job, but I have some problems in the other classes.
When I call the EntityManager it returns null!
The strange thing is that it works properly the first time; when I recall the EntityManager from another xhtml page and from another class, the EntityManager returns null!
So I would like to understand whether creating a class to manage the EntityManagerFactory and EntityManager could help me.
Thank you for your future help!
Filippo Tenaglia wrote:
Hi all,
I have a "simple" question:
Should I create a class to manage the entity manager and entity manager factory on JPA2?
That question is far from complete, as the answer to it is "depends on what you want to do!"
Why do I ask that? Because I read on J2EE tutorial:
With a container-managed entity manager, an EntityManager instance's persistence context is automatically propagated by the container to all application components that use the EntityManager instance within a single Java Transaction API (JTA) transaction.
To obtain an EntityManager instance, inject the entity manager into the application component:
@PersistenceContext
EntityManager em;
Given this information...
I'm using JSF2, JPA2, JTA and Glassfish 3.1 on NetBeans 7.0.1.
The answer to your question is "NO", because you have a container available to you that can do the work (Glassfish). It makes no sense to use Glassfish and then purposely ignore the features it has to offer.
Let's get back to basics: you are doing something wrong here. You have to figure out what, and you can only do that if you keep studying and build a more complete understanding. If you are having trouble realizing that, perhaps you should start over without the help of any wizard at all; wizards are only useful when you are already experienced. Right now the fact that code is generated is hindering you a lot, because you will have a strong impulse to believe there can be no mistake; unfortunately there is no such safety net. The fact that you used the word "strange" is proof enough of this, by the way. There is nothing strange here, just a mistake being made that has to be corrected.
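To make the quoted rule concrete (a sketch; class names are illustrative assumptions): @PersistenceContext is honored only on container-managed components, so an instance created by hand never gets an EntityManager:

```java
@Stateless
public class CustomerService {
    @PersistenceContext
    private EntityManager em;   // set only because the container builds this bean
}

// Elsewhere:
// @EJB CustomerService service;              // container-injected -> service.em is set
// CustomerService s = new CustomerService(); // manual new -> s.em stays null
```

A null EntityManager after the first request is the classic symptom of instantiating such a class yourself instead of letting the container inject it.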
And for the future: which IDE you use to develop (NetBeans in your case) really makes no difference at all. What is more interesting is which version of Java you are using, which is likely Java 6 or Java 7 nowadays. -
JPA OC4J Entity Manager query.getResultList
Hello All,
I am using JPA with OC4J version 10.1.3.3. I am trying to query the database using entityManager.createQuery() and requirementQuery.getResultList(). It freezes on executing getResultList(), and I cannot see any errors in the log files. I checked the following log files in OC4J:
1)$ORACLE_HOME/j2ee/RMS/application-deployments/RMS/RMS_default_group_1/application.log
2)$ORACLE_HOME/j2ee/RMS/log/RMS_default_group_1/oc4j/log.xml
3)$ORACLE_HOME/opmn/logs/default_group~RMS~default_group~1.log
Please find the piece of code I am using below. It prints the debug statement log.debug("test 411"); in the log file.
Could somebody please help me resolve this issue?
@TransactionAttribute(TransactionAttributeType.REQUIRED)
private List<Requirement> getLatestRequirementsForRequest(Request request)
        throws RMSApplicationException, RMSSystemException, Exception {
    log.debug("\n\n\n###############################\n\ninside getLatestRequirementsForRequest 1");
    String latestRqmtQry =
        //" and rqmt.reqmt_timestamp = :requestReceived " +
        //" and rqmt.reqmtClientReference is null" +
        //" and req.requestContract in ( " + contractNumbers + " )" +
        //" or req.request_proposal in ( " + contractNumbers + " )" +
        " select rqmt from Requirement rqmt, Request request " +
        " where request.parent.requestId = :requestId" +
        " and rqmt.reqmt_timestamp = request.requestReceived " +
        " and rqmt.requirementStatus.reqmtStatusName = 'Outstanding' " +
        " and rqmt.reqmtType = 1 " +
        " and request.requestStatus.requestStatusName not in ( 'Completed', 'Cancelled' )";
    log.debug("sal: qry for rqmtlist " + latestRqmtQry);
    List<Requirement> requirementList = new ArrayList<Requirement>();
    try {
        log.debug("sTest41");
        Query requirementQuery = entityManager.createQuery(latestRqmtQry);
        log.debug("requesId=" + request.getRequestId());
        requirementQuery.setParameter("requestId", request.getRequestId());
        log.debug("test 411");
        // requirementQuery.setFlushMode()
        requirementList = requirementQuery.getResultList();
        // requirementList = (List) em.createNamedQuery("Requirement.findLatestRqmtsForContracts").setParameter("contractNumbers", contractNumbers);
        log.debug("requirementList.size = " + requirementList.size());
        log.debug("requirementList = " + requirementList);
    } catch (RuntimeException e) {
        // ctx.setRollbackOnly();
        log.error("Runtime Exception while getting the requirements:", e);
        throw e;
    } catch (Exception e) {
        log.error("Exception while getting the requirements:", e);
        throw e;
    }
    log.debug("Test71");
    return requirementList;
}
The SQL query is a select over 4 tables. I checked with the database: when I encounter this problem there are no locks on any of the tables, and the database session completes smoothly, so there is no problem with the database.
Please note that I am seeing this kind of behaviour only in particular cases, around 2 to 5% of cases. I have been using this code for the last 6 months.
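On the stack dump question: besides running jstack <pid> from a shell, a dump can be captured programmatically from inside the JVM (a plain-Java sketch) to see where getResultList() is blocked:

```java
import java.util.Map;

// Dumps the stack of every live thread; call this (or run jstack) while the app hangs.
class StackDump {
    static String dumpAllThreads() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            sb.append('"').append(e.getKey().getName()).append("\"\n");
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(dumpAllThreads());
    }
}
```

Sending the process a SIGQUIT (kill -3 <pid> on Unix) also prints a full thread dump to the JVM's standard output.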
Could you please let me know how to get a stack dump? -
How to create a cache for JPA Entities using an EJB
Hello everybody! I have recently got started with JPA 2.0 (I use EclipseLink) and EJB 3.1 and am having trouble figuring out how best to implement a cache for my JPA entities using an EJB.
In the following I try to describe my problem. I know it is a bit verbose, but I hope somebody will help me. (I highlighted the core of my problem in bold, in case you want to first decide if you can/want to help, and only then spend another couple of minutes to understand the domain.)
I have the following JPA Entities:
@Entity
class Genre {
    private String name;

    @OneToMany(mappedBy = "genre", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
    private Collection<Novel> novels;
}

@Entity
class Novel {
    @ManyToOne(cascade = {CascadeType.MERGE, CascadeType.PERSIST})
    private Genre genre;

    private String titleUnique;

    @OneToMany(mappedBy = "novel", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
    private Collection<NovelEdition> editions;
}

@Entity
class NovelEdition {
    private String publisherNameUnique;
    private String year;

    @ManyToOne(optional = false, cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    private Novel novel;

    @ManyToOne(optional = false, cascade = {CascadeType.MERGE, CascadeType.PERSIST})
    private Catalog appearsInCatalog;
}

@Entity
class Catalog {
    private String name;

    @OneToMany(mappedBy = "appearsInCatalog", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
    private Collection<NovelEdition> novelsInCatalog;
}
The idea is to have several Novels, each belonging to a specific Genre, for which more than one edition can exist (different publisher, year, etc.). For simplicity, a NovelEdition can belong to just one Catalog, such a Catalog being represented by a text file like this:
FILE 1:
Catalog: Name Of Catalog 1
"Title of Novel 1", "Genre1 name","Publisher1 Name", 2009
"Title of Novel 2", "Genre1 name","Pulisher2 Name", 2010
FILE 2:
Catalog: Name Of Catalog 2
"Title of Novel 1", "Genre1 name","Publisher2 Name", 2011
"Title of Novel 2", "Genre1 name","Pulisher1 Name", 2011
Each entity has an associated Stateless EJB that acts as a DAO, using a transaction-scoped EntityManager. For example:
@Stateless
public class NovelDAO extends AbstractDAO<Novel> {
    @PersistenceContext(unitName = "XXX")
    private EntityManager em;

    protected EntityManager getEntityManager() {
        return em;
    }

    public NovelDAO() {
        super(Novel.class);
    }

    // NovelDAO-specific methods
}
I am interested in the step where the catalog files are parsed and the corresponding entities are built (I usually read a whole batch of catalogs at a time).
Since parsing is a String-driven procedure, I don't want to repeat actions like novelDAO.getByName("Title of Novel 1"), so I would like to use a centralized cache for mappings of type String-identifier -> entity object.
Currently I use 3 objects:
1) The file parser, which does something like:
final CatalogBuilder catalogBuilder = //JNDI Lookup
//for each file:
String catalogName = parseCatalogName(file);
catalogBuilder.setCatalogName(catalogName);
//For each novel edition
String title= parseNovelTitle();
String genre= parseGenre();
catalogBuilder.addNovelEdition(title, genre, publisher, year);
//End foreach
catalogBuilder.build();
2) The CatalogBuilder is a Stateful EJB which uses the Cache; it is re-initialized every time a new Catalog file is parsed and is "removed" after a catalog is persisted.
@Stateful
public class CatalogBuilder {
    @PersistenceContext(unitName = "XXX", type = PersistenceContextType.EXTENDED)
    private EntityManager em;

    @EJB
    private Cache cache;

    private Catalog catalog;

    @PostConstruct
    public void initialize() {
        catalog = new Catalog();
        catalog.setNovelsInCatalog(new ArrayList<NovelEdition>());
    }

    public void addNovelEdition(String title, String genreStr, String publisher, String year) {
        Genre genre = cache.findGenreCreateIfAbsent(genreStr);   // ##
        Novel novel = cache.findNovelCreateIfAbsent(title, genre); // ##
        NovelEdition novEd = new NovelEdition();
        novEd.setNovel(novel);
        // novEd.set publisher, year, catalog
        catalog.getNovelsInCatalog().add(novEd);
    }

    public void setCatalogName(String name) {
        catalog.setName(name);
    }

    @Remove
    public void build() {
        em.merge(catalog);
    }
}
3) Finally, the problematic bean: Cache. For CatalogBuilder I used an EXTENDED persistence context (which I need, as the parser executes several successive transactions) together with a Stateful EJB; but in this case I am not really sure what I need. In fact, the cache:
- Should stay in memory until the parser is finished with its job, but not longer (it should not be a singleton), as parsing is a very particular activity which happens rarely.
- Should keep all of the entities in context, and should return managed entities from the methods marked with ##, otherwise the attempt to persist the catalog will fail (duplicated INSERTs).
- Should use the same persistence context as the CatalogBuilder.
What I have now is :
@Stateful
public class Cache {
    @PersistenceContext(unitName = "XXX", type = PersistenceContextType.EXTENDED)
    private EntityManager em;

    @EJB
    private sessionbean.GenreDAO genreDAO;
    // DAOs for other cached entities

    Map<String, Genre> genreName2Object = new TreeMap<String, Genre>();

    @PostConstruct
    public void initialize() {
        for (Genre g : genreDAO.findAll()) {
            genreName2Object.put(g.getName(), em.merge(g));
        }
    }

    public Genre findGenreCreateIfAbsent(String genreName) {
        if (genreName2Object.containsKey(genreName)) {
            return genreName2Object.get(genreName);
        }
        Genre g = new Genre();
        g.setName(genreName);
        g.setNovels(new ArrayList<Novel>());
        genreDAO.persist(g);
        genreName2Object.put(g.getName(), em.merge(g));
        return g;
    }
}
But honestly I couldn't find a solution which satisfies these 3 points at the same time. For example, using another stateful bean with an extended persistence context (PC) would work for the 1st parsed file, but I have no idea what should happen from the 2nd file on. Indeed, for the 1st file the PC will be created and propagated from CatalogBuilder to Cache, which will then use the same PC. But after build() returns, the PC of CatalogBuilder should (I guess) be removed and re-created during the subsequent parsing, while the PC of Cache should stay "alive": shouldn't an exception be thrown in this case? Another problem is what to do when the Cache bean is passivated. Currently I get the exception:
"passivateEJB(), Exception caught ->
java.io.IOException: java.io.IOException
at com.sun.ejb.base.io.IOUtils.serializeObject(IOUtils.java:101)
at com.sun.ejb.containers.util.cache.LruSessionCache.saveStateToStore(LruSessionCache.java:501)"
Hence, I have no idea how to implement my cache. Can you please tell me how you would solve the problem?
Many thanks!
Bye
Hi Chris,
thanks for your reply!
I've tried to add the following to persistence.xml (although I've read that EclipseLink uses the L2 cache by default):
<shared-cache-mode>ALL</shared-cache-mode>
Then I replaced the Cache bean with a stateless bean which has methods like
Genre findGenreCreateIfAbsent(String genreName) {
    Genre genre = genreDAO.findByName(genreName);
    if (genre != null) {
        return genre;
    }
    genre = // build new Genre object
    genreDAO.persist(genre);
    return genre;
}
As far as I understood, the shared cache should automatically store the Genre and avoid querying the DB multiple times for the same genre, but unfortunately this is not the case: if I use a FINE logging level, I really see a lot of SELECT queries, which I didn't see with my "home-made" Cache...
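One likely explanation (an assumption, not confirmed in the thread): the shared (L2) cache is keyed by primary key, so a findByName query still hits the database; EclipseLink caches query results only when asked via a query hint. A sketch, assuming a named query Genre.findByName exists:

```java
// EclipseLink-specific hint (QueryHints.QUERY_RESULTS_CACHE): cache the result
// of this named query so repeated lookups by name skip the database.
TypedQuery<Genre> q = em.createNamedQuery("Genre.findByName", Genre.class);
q.setHint("eclipselink.query-results-cache", "true");
q.setParameter("name", genreName);
Genre genre = q.getSingleResult();
```

With the hint, the first execution per parameter value issues a SELECT and later executions are served from the query result cache.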
I am really confused.. :(
Thanks again for helping + bye -
Kodo 4 seems to have incomplete support for JPA spec
I posted a message previously about the incomplete persistence.xml schema. I have again run into this problem and would appreciate anyone from BEA (if they still read this forum) clearing the matter up. I am trying to evaluate this product for use and need to confirm if some features are available.
I tried using the <jar-file> element in my persistence.xml and got the following error when enhancing:
java.util.MissingResourceException: <0|false|4.0.1> kodo.util.GeneralException: org.xml.sax.SAXException: file:/D:/Code/JPAView/xsm_Config/ssmmodule/META-INF/persistence.xml [Location: Line: 7, C: 19]: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'jar-file'. One of '{"http://java.sun.com/xml/ns/persistence":class, "http://java.sun.com/xml/ns/persistence":exclude-unlisted-classes, "http://java.sun.com/xml/ns/persistence":properties}' is expected.
This implies that Kodo does not even support the vast majority of attributes allowed in persistence.xml, as defined in the JPA spec (page 135).
Perhaps I am approaching the issue in the wrong way, as I am attempting to re-use entities across persistence units. I would prefer that higher-level libraries be able to re-use entity definitions from support libraries by specifying the jar in persistence.xml. I cannot find another way to re-use entity definitions, apart from manually listing every single shared entity in higher-level persistence.xml files.
This implies that Kodo does not even support the vast majority of attributes allowed in persistence.xml, as defined in the JPA spec (page 135).
This is not a correct inference, as Kodo does process <jar-file> and other tags as per the JPA spec (http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd).
It would be useful to analyze persistence.xml and the environment setup to ensure that the correct META-INF/persistence.xml is being picked up by the Kodo runtime.
You can maintain a hierarchy of persistent domain models following the approach you outlined. -
Multiple instances of the same entity bean?
I am designing a J2EE application that is made up of a number of separate components that each have a well-defined responsibility. Each component is made up of one or more J2EE components (web clients and EJBs). I want to design the application such that it is easy in the future to deploy each component (or a group of components) on different servers. In order to do this I need to make sure that the interfaces between each component are exposed as remote interfaces (but I will use local interfaces inside each component). However, there are a number of entity beans that need to be accessed by more than one component. I am wondering how best to expose these entity beans. I believe there are a number of options:
1. Expose remote interfaces on each of the shared entity beans. The disadvantage of this approach is that it is inefficient and that I will not be able to take advantage of container managed relationships. (I am intending to use container managed persistence.)
2. Create a facade object (stateful session bean) for each of the entity beans which exposes a remote interface and in turn accesses the shared entity beans locally. The disadvantage of this approach is that I have to create some extra EJBs and that I cannot directly make use of container managed relationships etc from the client component.
3. I don't know if this is an option, but I am wondering whether I can deploy a copy of each shared entity bean with each application component. The advantage of this approach is that the component would access the entity locally and could make use of container-managed relationships. However, I don't know what the issues are with having more than one instance (per primary key) of an entity in the same application. I don't know whether this would cause errors or whether the instances would get out of sync (because different instances with the same primary key would be updated by different clients). Initially each component would be deployed in the same server, but later they would be deployed on different servers. In both cases, with this option each component (JAR) would have copies of the shared entity bean classes.
Any suggestions as to the best approach and whether the last option is feasible would be much appreciated.
Thanks.
I think 2 beats 1, the main reason being to minimise the number of network calls. You're basically asking "are facades a good idea?", and the answer is yes.
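To make the facade approach (option 2) concrete, here is a minimal plain-Java sketch. The class and method names (CustomerFacade, AddressDTO, getFixedAddress) are made up for illustration, not from the original post: the facade does the fine-grained entity access locally and ships one coarse-grained, serializable DTO back over a single remote call.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Coarse-grained transfer object: one remote call returns all the fields.
class AddressDTO implements Serializable {
    final String street;
    final String city;
    AddressDTO(String street, String city) { this.street = street; this.city = city; }
}

// Stand-in for the session facade: it reads the entity state locally and
// ships a DTO back over a single "remote" call. The counter models network
// round trips -- with fine-grained remote entity getters you'd pay one per field.
class CustomerFacade {
    static int remoteCalls = 0;

    // entity data the facade would reach over local interfaces
    private final Map<String, String[]> fixedAddresses = new HashMap<>();
    { fixedAddresses.put("c1", new String[] {"1 Main St", "Springfield"}); }

    AddressDTO getFixedAddress(String customerId) {
        remoteCalls++;  // a single remote hop, however many fields come back
        String[] row = fixedAddresses.get(customerId);
        return new AddressDTO(row[0], row[1]);
    }
}

public class FacadeSketch {
    public static void main(String[] args) {
        CustomerFacade facade = new CustomerFacade();
        AddressDTO a = facade.getFixedAddress("c1");
        System.out.println(a.street + ", " + a.city
                + " in " + CustomerFacade.remoteCalls + " remote call(s)");
    }
}
```

In a real EJB deployment the facade would be a session bean exposing a remote interface, with the entity access going through local interfaces; the counter just stands in for the network round trips you save.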
You can obviously do 3 in different app servers. However you'll need to configure your app servers so they can handle the fact that they're not the only ones updating the database. This is to handle concurrency as you mentioned. How you do this will depend on your app server and will affect performance, but shouldn't be a problem.
I think you should definitely decide up front what's going in different app servers; I don't know if 3 would work in the same app server.
Why would you want to use multiple app servers?
Why not have everything in the same app?
Is it just the one database?
You can use clustering for scalability. -
JPA: Oracle Sequence Generator not up to date
Hi,
I'm using the JPA Oracle Sequence Generator in one of my JPA classes:
@Entity
@Table(name = "DACC_COST_TYPE")
public class JPACostType implements Serializable {
@SequenceGenerator(name = "CostTypeGenerator", sequenceName = "DACC_COST_TYPE_SEQ")
@Column(name = "ID_COST_TYPE")
@Id @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "CostTypeGenerator")
private Integer idCostType;
In order to persist a new object I perform the following code:
@PersistenceContext
private EntityManager em;
JPACostType myJPA = new JPACostType();
myJPA.setIdCostType(null);
em.merge(myJPA);
em.flush();
Normally this works fine, but after deploying the app an error sometimes occurs:
Caused by: javax.persistence.PersistenceException: SQLException while inserting entity {com.karmann.dacc.ejb.busilog.jpa.JPACostType(idCostType=4)}.
at com.sap.engine.services.orpersistence.core.PersistenceContextImpl.flush(PersistenceContextImpl.java:278)
at com.sap.engine.services.orpersistence.core.PersistenceContextImpl.beforeCompletion(PersistenceContextImpl.java:565)
at com.sap.engine.services.orpersistence.entitymanager.EntityManagerImpl.beforeCompletion(EntityManagerImpl.java:410)
at com.sap.engine.services.orpersistence.environment.AppJTAEnvironmentManager.beforeCompletion(AppJTAEnvironmentManager.java:197)
at com.sap.engine.services.ts.jta.impl.TransactionImpl.commit(TransactionImpl.java:232)
... 52 more
Caused by: java.sql.SQLException: ORA-00001: unique constraint (AEMA.DACC_COST_TYPE_PK) violated
Obviously JPA does not fetch the new key by accessing the Oracle sequence. The sequence's next value is 5. Does JPA fetch the new key from its cache? Is there any possibility to avoid this?
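One common cause of this symptom (an assumption on my part, not something the stack trace confirms) is a mismatch between the generator's allocationSize, which defaults to 50, and the sequence's INCREMENT BY. The plain-Java simulation below (class name AllocationMismatch is made up; the exact block arithmetic varies by provider) shows how mismatched block allocation produces overlapping ids and thus ORA-00001:

```java
import java.util.ArrayList;
import java.util.List;

public class AllocationMismatch {
    static int dbSequence = 0;                 // CREATE SEQUENCE ... INCREMENT BY 1
    static int nextval() { return ++dbSequence; }

    // Provider-side block allocation: treats NEXTVAL as the low end of a
    // block of `allocationSize` ids it may hand out without going back to the DB.
    static List<Integer> allocateBlock(int allocationSize) {
        int lo = nextval();
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < allocationSize; i++) ids.add(lo + i);
        return ids;
    }

    public static void main(String[] args) {
        // allocationSize=3 but the sequence increments by 1: blocks overlap,
        // so two nodes (or two deployments) can hand out the same primary key.
        List<Integer> a = allocateBlock(3);    // [1, 2, 3]
        List<Integer> b = allocateBlock(3);    // [2, 3, 4] -> duplicate keys
        System.out.println(a + " vs " + b);

        // allocationSize matching INCREMENT BY keeps the blocks disjoint.
        dbSequence = 0;
        System.out.println(allocateBlock(1) + " vs " + allocateBlock(1)); // [1] vs [2]
    }
}
```

If the sequence really increments by 1, declaring allocationSize = 1 on the @SequenceGenerator (or recreating the sequence with an INCREMENT BY that matches the allocation size) keeps the id blocks disjoint.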
Thanks for each hint,
Christoph
Hello Christoph Schäfer,
I am stuck with a similar issue. I was able to save multiple entries, and there has not been much change to my JPA; I added new entities and new sequences.
Now I get the error: Caused by: javax.persistence.PersistenceException: java.sql.SQLException: ORA-02289: sequence does not exist.
I have checked the name of the sequence and the sequence nextval on the DB. It works on the DB, but when I execute it from the EJB it gives me this error. Now it gives the error for all previously working JPA entities.
I have also provided allocationSize = 1 for all entities.
Please let me know possible causes/solutions for this issue.
thank you.
Regards,
Sharath -
NON-transactional session bean access entity bean
We are currently profiling our product using the Borland OptimizeIt tool, and we found some interesting issues. Due to our design, we have many session beans which are non-transactional, and these session beans access entity beans to do read operations, such as getWeight and getRate. Since it's read-only, there is no need to do the transaction commit, which really takes time; this can be seen in the profile. I know weblogic supports read-only entity beans, but it seems that only benefits the ejbLoad call. My test program shows that weblogic still creates a local transaction even when I specify transaction-not-supported, and Transaction.commit() is always called in postInvoke(). From the profile, for a single method call such as getRate(), 80% of the time is spent in postInvoke(). Any suggestion on this? BTW, most of our entity beans use an Exclusive lock; that's the reason we use non-transactional session beans, to avoid deadlock problems.
Thanks
Slava,
Thanks for the link. Actually I read it before; here is what I extracted from the doc:
<weblogic-doc>
Do not set db-is-shared to "false" if you set the entity bean's concurrency
strategy to the "Database" option. If you do, WebLogic Server will ignore the
db-is-shared setting.
</weblogic-doc>
Thanks
"Slava Imeshev" <[email protected]> wrote:
Hi Jinsong,
You may want to read this to get more detailed explanation
on db-is-shared (cache-between-transactions for 7.0):
http://e-docs.bea.com/wls/docs61/ejb/EJB_environment.html#1127563
Let me know if you have any questions.
Regards,
Slava Imeshev
"Jinsong HU" <[email protected]> wrote in message
news:[email protected]...
Thanks.
But it's still not clear to me on the db-is-shared setting: if I specify the entity lock as database lock, I assume db-is-shared is useless, because for each new transaction the entity bean will reload data anyway. Correct me if I am wrong.
Jinsong
"Slava Imeshev" <[email protected]> wrote:
Jinsong,
See my answers inline.
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
Hi Slava,
Thanks for your reply. Actually, I agree with you: we need to review our db schema and separate business logic to avoid db locks. I can not say "guys, we need to change this and that", since it's a big application, developed since the EJB 1.0 spec; I think they are afraid to do such a big change.
Total rewrite is the worst thing that can happen to an app. The better approach would be identifying the most critical piece and making a surgery on it.
Following are questions in my mind:
(1) I think there should be many companies using weblogic server to develop large enterprise applications; I am just wondering what the main transaction/lock mechanism used is. Is transactional session / database lock / db-is-shared entity the dominant one? It seems that if you specify database lock, db-is-shared should be true, right?
I can't say for the whole community; in my experience the standard usage pattern is session facades calling entity EJBs with the Required TX attribute, plus plain transacted JDBC calls for bulk reads or inserts. Basically it's not true: one will need db-is-shared only if there are changes to the database done from outside of the app server.
(2) For an RO bean, if I specify read-idle-timeout as 0, it should only load once, at first use, right?
I assume read-timeout-seconds was meant. That's right, but if an application constantly reads new RO data, RO beans will be constantly dropped from the cache and new ones will be loaded. You may want to look at the server console to see if there's a lot of passivation for RO beans.
(3) For the clustering part, has anyone used it in a real enterprise application? My concern: since database lock is the only way to choose, what about the effect of ejbLoad on performance? Most transactions are short-lived, so if high-volume transactions are in processing, I am just scared to death about the ejbLoad overhead.
ejbLoad is a part of the bean's lifecycle; how would you be scared of it? If ejbLoads take too much time, it could be a good idea to profile the SQLs used; the right index optimization can make a huge difference. Also you may want to consider using CMP beans to let weblogic take care of load optimization.
(4) If using optimistic locking, every ejbStore needs to do a version check or timestamp check, right? What about this overhead?
As for optimistic concurrency, it performs quite well, as you can use lighter isolation levels.
HTH,
Slava Imeshev
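For point (4), the version check is just a guarded compare-and-update. Here is a minimal plain-Java sketch of what an optimistic ejbStore effectively does (the names Rate and OptimisticStore are illustrative, not WebLogic internals):

```java
// Row as stored in the database, with a version column.
class Rate {
    int value;
    int version;  // incremented on every successful store
}

// What an optimistic ejbStore boils down to:
//   UPDATE rate SET value = ?, version = version + 1
//   WHERE id = ? AND version = ?
// If the WHERE clause matches no row, the copy was stale and the tx rolls back.
class OptimisticStore {
    static boolean store(Rate dbRow, int newValue, int expectedVersion) {
        if (dbRow.version != expectedVersion) {
            return false;  // someone committed in between: stale write rejected
        }
        dbRow.value = newValue;
        dbRow.version++;
        return true;
    }
}

public class OptimisticSketch {
    public static void main(String[] args) {
        Rate row = new Rate();
        int txAReadVersion = row.version;  // both "transactions" read version 0
        int txBReadVersion = row.version;

        System.out.println(OptimisticStore.store(row, 42, txAReadVersion)); // true
        System.out.println(OptimisticStore.store(row, 7, txBReadVersion));  // false (stale)
        System.out.println(row.value);                                      // 42
    }
}
```

The overhead is one extra comparison in the UPDATE's WHERE clause; there is no lock held between the read and the write, which is why it tolerates lighter isolation levels.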
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
We are using an Exclusive lock for entity beans because we do not want to load data in each new transaction. If we use Database lock, that means we delegate data access calls to the database; if a database deadlock happens, it's hard to detect, while with an Exclusive lock we can detect the deadlock at the container level.
The problem is, using Exclusive concurrency mode you serialize access to the data represented by the bean. This approach has a negative effect on the ability of the application to process concurrent requests. As a result the app may have performance problems under load.
Actually, at the beginning, we did use database lock and transactional session beans, but the database deadlocks and frequent ejbLoads really killed us, so we decided to move to Exclusive lock, and to avoid deadlock we changed some session beans to non-transactional.
The fact that you had database deadlocking issues tells me that the application logic / database schema may need some review. Normally, to avoid deadlocking, it's good to group database operations, mixing updates and inserts into one place, so that the db locking sequence is not spread out in time. Moving to forced serialized data access just hides design/implementation problems.
Making session beans non-transactional makes the container create short-living transactions for each call to entity bean methods. It's a costly process and it puts additional load on both the container and the database.
We could use a ReadOnly lock for some entity beans, but since weblogic server will always create a local transaction for an entity bean, and we found transaction commit is expensive, I am arguing: why do we need to create a container-level transaction for a read-only bean?
First, read-only beans still need to load data. Also, you may see RO beans constantly loading data if db-is-shared is set to true. Another reason can be that RO semantics is not applicable to the data presented by the RO bean (for instance, you have a reporting engine that constantly produces "RO" data, while the application consuming that data retrieves only new data and never asks for "old" data). RO beans are good when there is relatively stable data accessed repeatedly for read-only access.
You may want to tell us more about your app; we may be of help.
Regards,
Slava Imeshev
I will post the performance data; let's see how costly transaction.commit is.
"Cameron Purdy" <[email protected]> wrote:
We are currently profiling our product using Borland
OptmizeIt
tool,
and we
found some interesting issues. Due to our design, we have
many
session
beans which
are non transactional, and these session beans will access
entity
beans
to
do
the reading operations, such as getWeight, getRate, since
it's
read
only,
there
is no need to do transaction commit stuff which really takes
time,
this
could
be seen through the profile. I know weblogic support readonly
entity
bean,
but
it seems that it only has benefit on ejbLoad call, my test
program
shows
that
weblogic still creates local transaction even I specified
it
as
transaction not
supported, and Transaction.commit() will always be called
in
postInvoke(),
from
the profile, we got that for a single method call, such as
getRate(),
80%
time
spent on postInvoke(), any suggestion on this? BTW, most of
our
entity
beans are
using Exclusive lock, that's the reason that we use
non-transactional
session
bean to avoid dead lock problem.I am worried that you have made some decisions based on an improper
understand of what WebLogic is doing.
First, you say "non-transactional", but from your description you should have those marked as tx REQUIRED to avoid multiple transactions (since non-transactional just means that the database operation becomes its own little transaction).
Second, you say you are using Exclusive lock, which you should only use if you are absolutely sure that you need it (and note that it does not work in a cluster).
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for Weblogic
"Jinsong Hu" <[email protected]> wrote in message
news:[email protected]...
-
Hi,
please consider the following code snippet:
@Entity
@Table(name = "MS")
@DiscriminatorColumn(name = "TYPE", discriminatorType = DiscriminatorType.INTEGER)
public abstract class MS<T extends M> implements java.io.Serializable {
    @JoinColumn(name = "M_ID", referencedColumnName = "ID")
    @ManyToOne(fetch = FetchType.LAZY)
    private T m;
}

@Entity
@DiscriminatorValue("0")
public class BS extends MS<B> implements java.io.Serializable {
}

@Entity
@Table(name = "M")
@DiscriminatorColumn(name = "TYPE", discriminatorType = DiscriminatorType.INTEGER)
public abstract class M<T extends MS> implements java.io.Serializable {
    @OneToMany(mappedBy = "m")
    private Set<T> ms = new HashSet<T>();
}

@Entity
@DiscriminatorValue("0")
public class B extends M<BS> implements java.io.Serializable {
}

This compiles and deploys fine. However, at runtime JPA is not able to figure out what m in the mappedBy attribute refers to.
It complains that it has a set of type BS which does not have the field m defined. This is true for BS
but its super class MS has this field defined.
So I am wondering if this is a limitation of JPA, or maybe I configured something wrong? Is a scenario like the above possible with the current JPA?
Thanks for any help.
regards.
No idea!? Let me give an example using Hibernate (you could replace it with JPA though):
B b = session.get(B.class, Long.valueOf(1));
Hibernate.initialize(b);
In the initialize() process you get an exception, complaining that the Set cannot be resolved because the property m does not exist in class B (which is the type at runtime).
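The complaint makes sense once you look at where the field actually lives: m is declared on the superclass MS, and a resolver that only inspects the runtime class's own declared fields will miss it. The small reflection sketch below (entity annotations stripped; it mirrors the class shape from the post, not the exact Hibernate code path) shows the difference:

```java
import java.lang.reflect.Field;

// Same shape as the posted entities, minus the JPA annotations.
abstract class MS<T> {
    protected T m;  // the mappedBy target lives on the superclass
}

class B { }

class BS extends MS<B> { }

public class ErasureDemo {
    // What a resolver must do to find mappedBy="m": walk up the hierarchy.
    static Field findField(Class<?> type, String name) {
        for (Class<?> c = type; c != null; c = c.getSuperclass()) {
            try {
                return c.getDeclaredField(name);
            } catch (NoSuchFieldException ignored) {
                // keep climbing
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Looking only at BS itself fails -- the same shape as the
        // "BS does not have the field m defined" complaint.
        try {
            BS.class.getDeclaredField("m");
            System.out.println("declared on BS");
        } catch (NoSuchFieldException e) {
            System.out.println("not declared on BS");
        }
        // Walking the superclasses finds it on MS.
        System.out.println(findField(BS.class, "m").getDeclaringClass().getSimpleName()); // MS
    }
}
```

So whether this works depends on whether the provider's mappedBy resolution walks superclasses of the generic runtime type; the reported exception suggests this Hibernate version did not.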
So I am wondering if it is currently possible to use generics in this way with JPA/Hibernate Entity Manager? -
JPA - How can i add ON DELETE CASCADE constraint ?
I have three tables:
1. A (name)
2. B (name)
and a relationship (many-to-many) table between A and B:
3. C (a_name,b_name)
I am using 2 JPA entities:
@Entity
public class A {
@Id
String name;
@ManyToMany
@JoinTable(name="C",
joinColumns=@JoinColumn(name="a_name"),
inverseJoinColumns=@JoinColumn(name="b_name"))
private List<B> bs = new ArrayList<B>();
// getter/setter methods
}
and
@Entity
public class B {
@Id
String name;
@ManyToMany(mappedBy="bs")
private List<A> as = new ArrayList<A>();
// getter/setter methods
}
The DDL of table C generated by JPA is something like this:
CREATE TABLE C (
"a_name" VARCHAR,
"b_name" VARCHAR,
CONSTRAINT "c_a_name_fkey" FOREIGN KEY ("a_name")
REFERENCES a(name)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT "c_b_name_fkey" FOREIGN KEY ("b_name")
REFERENCES b(name)
ON DELETE NO ACTION
ON UPDATE NO ACTION
)
but I want to add the constraint ON DELETE CASCADE instead of ON DELETE NO ACTION in the above relationship table C.
How can I do this?
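Note that the JPA cascade settings (cascade=CascadeType.REMOVE etc.) only control what the EntityManager does in memory; they don't change the generated foreign-key constraint. Since you are on Hibernate, one option is Hibernate's own @OnDelete annotation on the collection, which should make the schema exporter emit ON DELETE CASCADE for the join table's foreign key. This is a sketch, not tested against your mapping; the portable alternative is to alter the constraint by hand in DDL.

```java
import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;
import org.hibernate.annotations.OnDelete;
import org.hibernate.annotations.OnDeleteAction;

@Entity
public class A {
    @Id
    String name;

    @ManyToMany
    @JoinTable(name = "C",
        joinColumns = @JoinColumn(name = "a_name"),
        inverseJoinColumns = @JoinColumn(name = "b_name"))
    @OnDelete(action = OnDeleteAction.CASCADE)  // schema export emits ON DELETE CASCADE
    private List<B> bs = new ArrayList<B>();

    // getter/setter methods
}
```

@OnDelete is Hibernate-specific, so this ties the mapping to Hibernate; if you need to stay on portable JPA annotations, run an ALTER TABLE against table C after schema generation instead.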
Thanks in advance.
Right click the message and select Edit as New.