Purpose of toplink

What is the purpose of toplink?
Kindly explain it to me.
Thanks,
Ervan.

Hi Ervan,
TopLink is a modeling and code generation tool that connects to your database, reads its schema, and lets you map objects to database tables and views. You can specify single-row insert, update, load, and delete operations, queries, and stored procedure calls as methods on those objects. It also lets you define one-to-one, one-to-many, many-to-one, and many-to-many relationships between objects based on the relationships between tables in the database. It then generates fully working persistence code for you.
There are many other features that TopLink provides but I'm keeping my description brief so I can talk more about the benefits.
ORM cuts down your development time
A typical application with 15-20 database tables has 30-50 objects (including domain and factory objects), which comes to roughly 5,000 to 10,000 lines of code. It is likely to take you a few weeks to a couple of months to develop and test this code. And if your application has more tables than this (as many do), multiply the numbers above accordingly.
On the other hand, TopLink generates this code much faster and more easily. You need only the 1-2 days it takes to determine your object mappings to the database; the actual code generation is instantaneous. So your time saving is tremendous.
TopLink produces better-designed code
The code that you generate from an ORM is very likely to be better designed than code written by your own development team.
In brief, the model-to-database interaction will be handled completely by TopLink.
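To make the benefit concrete, here is a sketch of the kind of domain class such a tool generates; the class, field, and column names are hypothetical, not actual TopLink output:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of ORM-generated domain classes; names are illustrative.
class Project {
    private int id; // would map to a PROJECT.ID column

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
}

class Employee {
    private int id;      // would map to a primary-key column such as EMP.ID
    private String name; // would map to EMP.NAME
    // one-to-many relationship, backed by a join table in the database
    private final List<Project> projects = new ArrayList<>();

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public List<Project> getProjects() { return projects; }
    public void addProject(Project p) { projects.add(p); }
}
```

The generated code would additionally include the insert, update, load, and delete methods and the relationship maintenance described above.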
Regards,
Vinay

Similar Messages

  • Unable to deploy Web App using JPA TopLink Essentials in Tomcat5.5.17

    Hi All,
    I am trying to deploy a web app (using TopLink Essentials) to Tomcat and I am getting the following error.
    I am starting Tomcat using -javaagent:/Path/To/spring-agent.jar
    Dec 14, 2006 9:52:46 AM org.apache.catalina.loader.WebappClassLoader loadClass
    INFO: Illegal access: this web application instance has been stopped already.  Could not load oracle.toplink.essentials.internal.weaving.ClassDetails.  The eventual following stack trace is caused by an error thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access, and has no functional impact.
    java.lang.IllegalStateException
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1238)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1198)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
            at oracle.toplink.essentials.internal.weaving.TopLinkWeaver.transform(TopLinkWeaver.java:84)
            at org.springframework.orm.jpa.persistenceunit.ClassFileTransformerAdapter.transform(ClassFileTransformerAdapter.java:56)
            at sun.instrument.TransformerManager.transform(TransformerManager.java:122)
            at sun.instrument.InstrumentationImpl.transform(InstrumentationImpl.java:155)
            at java.lang.ClassLoader.defineClass1(Native Method)
            at java.lang.ClassLoader.defineClass(ClassLoader.java:620)
            at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
            at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:1812)
            at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:866)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1319)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1198)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
            at java.lang.Class.getDeclaredConstructors0(Native Method)
            at java.lang.Class.privateGetDeclaredConstructors(Class.java:2357)
            at java.lang.Class.getConstructor0(Class.java:2671)
            at java.lang.Class.newInstance0(Class.java:321)
            at java.lang.Class.newInstance(Class.java:303)
            at org.apache.myfaces.application.ApplicationImpl.createComponent(ApplicationImpl.java:396)
            at com.sun.faces.config.ConfigureListener.verifyObjects(ConfigureListener.java:1438)
            at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:509)
            at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3729)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:4187)
            at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:759)
            at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:739)
            at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:524)
            at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:608)
            at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:535)
            at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:470)
            at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1122)
            at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:310)
            at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1021)
            at org.apache.catalina.core.StandardHost.start(StandardHost.java:718)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1013)
            at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:442)
            at org.apache.catalina.core.StandardService.start(StandardService.java:450)
            at org.apache.catalina.core.StandardServer.start(StandardServer.java:709)
            at org.apache.catalina.startup.Catalina.start(Catalina.java:551)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:294)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:432)
    Thanks
    Sateesh

    Spring 2.0 provides custom support for TopLink Essentials in Tomcat out-of-the-box. You should follow the instructions here: http://static.springframework.org/spring/docs/2.0.x/reference/orm.html#orm-jpa-setup-lcemfb-tomcat
    Essentially, Spring provides a custom class loader for Tomcat and doesn't use an agent.
    --Shaun
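    For reference, the linked instructions essentially amount to a context descriptor entry for the web app; the following is a sketch only (class name per the Spring 2.0 docs; verify it against your Spring version):

```xml
<!-- META-INF/context.xml: lets Tomcat load the web app through a class loader
     that can weave entity classes, instead of using the -javaagent JVM flag -->
<Context path="/myapp">
    <Loader loaderClass="org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader"/>
</Context>
```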

  • What is the best way of returning group-by sql results in Toplink?

    I have a many-to-many relationship between Employee and Project; so,
    an Employee can have many Projects, and a Project can be owned by many Employees.
    I have three tables in the database:
    Employee(id int, name varchar(32)),
    Project(id int, name varchar(32)), and
    Employee_Project(employee_id int, project_id int), which is the join-table between Employee and Project.
    Now, I want to find out, for each employee, how many projects the employee has.
    The sql query that achieves what I want would look like this:
    select e.id, count(*) as numProjects
    from employee e, employee_project ep
    where e.id = ep.employee_id
    group by e.id
    Just for information, currently I am using a named ReadAllQuery and I write my own sql in
    the Workbench rather than using the ExpressionBuilder.
    Now, my two questions are :
    1. Since there is a "group by e.id" in the query, only e.id can appear in the select clause.
    This prevents me from returning the full Employee pojo using ReadAllQuery.
    I can change the query to a nested query like this
    select e.id, e.name, emp.cnt as numProjects
    from employee e,
    (select e_inner.id, count(*) as cnt
    from employee e_inner, employee_project ep_inner
    where e_inner.id = ep_inner.employee_id
    group by e_inner.id) emp
    where e.id = emp.id
    but I don't like the complication of the extra join introduced by the nested query. Is there a
    better way of doing something like this?
    2. The second question is: what is the best way of returning the count(*), i.e. numProjects?
    What I did right now is that I have a ReadAllQuery that returns a List<Employee>; then for
    each returned Employee pojo, I call a method getNumProjects() to get the count(*) information.
    I had an extra column "numProjects" in the Employee table and in the Employee descriptor, and
    I set this attribute to be "ReadOnly" on the Workbench; (the value for this dummy "numProjects"
    column in the database is always 0). So far this works OK. However, since numProjects is
    transient, I need to set the query to refreshIdentityMapResult(), or otherwise the Employee object
    in the cache could contain stale numProjects information. What I worry about is that refreshIdentityMapResult()
    will cause the query to always hit the database and defeat the purpose of having a cache. Also, if
    there are multiple concurrent queries to the database, I worry that there will be a race condition
    when updating this transient "numProjects" attribute. What is a better way of returning this kind
    of transient information, such as count(*)? Can I have the query return something like a tuple
    containing the Employee pojo and an int for the count(*), rather than just an Employee pojo with the
    transient int inside the pojo? Please advise.
    I greatly appreciate any help.
    Thanks,
    Frans

    No, I don't want to modify the set of attributes after TopLink returns it to me. But I don't
    quite understand why this matters.
    I understand that I can use ReportQuery to return all the Employee's attributes plus the int count(*)
    and then I can iterate through the list of ReportQueryResult to construct the Employee pojo myself.
    I was hesitant to do this because I think there will be a performance cost from not being able to
    use lazy fetching. For example, in the case of large result sets where the client only needs a few of them,
    if we use the above approach, we need to iterate through all of them and wastefully create all the Employee
    pojos. On the other hand, if we let TopLink directly return a list of Employee pojos, then we can tell
    TopLink to use a ScrollableCursor and fetch only the first several rows. Please advise.
    Thanks.
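    One way to avoid the transient-column trick is to return the count alongside the entity data in a small holder object built from the raw report rows. A plain-Java sketch (the Object[] row shape here merely simulates what a report-style query might hand back; all names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical holder pairing employee data with its project count,
// instead of storing the count in a transient field on the entity.
class EmployeeCount {
    final int employeeId;
    final String name;
    final long numProjects;

    EmployeeCount(int employeeId, String name, long numProjects) {
        this.employeeId = employeeId;
        this.name = name;
        this.numProjects = numProjects;
    }

    // Builds holders from rows shaped as {id, name, count},
    // simulating report-query results.
    static List<EmployeeCount> fromRows(List<Object[]> rows) {
        List<EmployeeCount> out = new ArrayList<>();
        for (Object[] row : rows) {
            out.add(new EmployeeCount(
                    ((Number) row[0]).intValue(),
                    (String) row[1],
                    ((Number) row[2]).longValue()));
        }
        return out;
    }
}
```

    Because the count lives outside the Employee object, no cache refresh is needed and concurrent queries cannot race on a shared transient field.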

  • Toplink essentials overwhelming bug(s)

    Hello.
    I'm working on an EJB3 project running on the GlassFish application server that uses TopLink Essentials as its persistence engine.
    In the project there is a User entity and some entities inherited from User, such as Operator, Vendor, Client, etc. There is also a StoredMessage entity. StoredMessage has two attributes (sender, recipient) of class User.
    The problem is: when trying to persist a StoredMessage object via the entity manager, I get an error like "Cannot persist detached object Operator"... but that makes no sense. The User objects are not detached (for test purposes I even merged them via the entity manager just before persisting the StoredMessage), and I'm not working with an Operator object there, I'm working with a User object; that's the most interesting part.
    I decided to make a temporary workaround and store the sender and recipient IDs in the StoredMessage class instead of User object references... But I didn't want to change the interfaces in my program, so I left the old getters and setters for sender and recipient, annotated them transient, and changed them to fetch these User objects transparently through the entity manager (I know that's not a good architectural solution).
    After that, TopLink went insane :) I got an error that a java.util.Date property in my StoredMessage class was not annotated with @Temporal, but in the code everything was fine. I removed all the code related to that date field, and after that I got another error: it said I had to annotate the ID field with @Id, but that was also annotated correctly.
    When I set the sender and recipient fields to null, everything works and the entity manager persists my StoredMessage object with no problem.
    These bugs are very weird; I haven't seen anything like this. I've been working on this project for two months, there are about 30 other entities with different relation types in the project, everything works, and nothing else has any problem with the entity manager. Today I hit the first bug.
    Has anybody seen something like this? Do you know how to fix it?
    There is another workaround in mind: I could just use plain JDBC for persisting StoredMessage objects, but that solution has a number of disadvantages, and I have no guarantee that such bugs won't happen with new entities in the project.

    Thank you for reply!
    Do sender and recipient use the same foreign key? Do you have cascade persist set?
    Hmm, yes, they both use the same foreign key. Yes, I have "(cascade = {CascadeType.ALL})" if that's what you mean.
    Can you please post the relevant portions of the Java and annotations and I can take a closer look.
    Sure, http://lab37.com/xchng/javacode.zip -- here is an archive with the User entity, the StoredMessage entity, and, just in case, the Operator (extends User) entity.

  • Convert from Hibernate to TopLink

    Hi,
    I would like to convert from Hibernate and Tomcat to TopLink and iAS with the
    latest version, 10.1.3.1, which includes EJB 3.0 and JPA.
    What effort will be needed for this conversion (mainly the Hibernate-to-TopLink conversion)?
    Do you know about a migration tool for that purpose?
    Thanks
    Yafit

    Hi Yafit,
    For the sake of those who might be interested I'm posting the response I had given to you in email.
    If you are using Hibernate with JPA the migration should not be a large effort. The persistence.xml (deployment configuration) file will need to be updated of course. A question for you is whether you're using many proprietary Hibernate features or have you used only JPA defined mappings. Anything proprietary will require attention. Also, Hibernate's JPQL supports some non-standard features so this question about staying within the spec applies to queries too.
    There are no migration tools for moving from Hibernate JPA to TopLink JPA as the differences are mostly in configuration which is easy enough to adjust manually.
    --Shaun

  • Using Toplink API to persist data to database

    My requirement is to persist data to the database (Oracle) using the TopLink Java API approach.
    I have a basic setup program, but it is not doing the job.
    Kindly let me know where I am going wrong.
    package sample;
    import oracle.toplink.essentials.descriptors.ClassDescriptor;
    import oracle.toplink.essentials.descriptors.RelationalDescriptor;
    import oracle.toplink.essentials.internal.sessions.DatabaseSessionImpl;
    import oracle.toplink.essentials.mappings.DirectToFieldMapping;
    import oracle.toplink.essentials.sessions.Login;
    import oracle.toplink.essentials.sessions.UnitOfWork;

    public class EmployeeProject extends oracle.toplink.essentials.sessions.Project {

        private ClassDescriptor classDescriptor;

        public EmployeeProject() {
            applyPROJECT();
            applyLOGIN();
            classDescriptor = buildEmployeeDescriptor();
            addDescriptor(classDescriptor);
            System.out.println("classDescriptor.getMappings(): " + classDescriptor.getMappings());
        }

        protected void applyPROJECT() {
            setName("Employee");
        }

        protected void applyLOGIN() {
            oracle.toplink.essentials.sessions.DatabaseLogin login = new oracle.toplink.essentials.sessions.DatabaseLogin();
            login.setDriverClassName("oracle.jdbc.OracleDriver");
            login.setConnectionString("jdbc:oracle:thin:ptyagi-pc.idc.oracle.com:1521:orcl");
            login.setUserName("system");
            login.setPassword("orcl");
            // Configuration Properties
            setDatasourceLogin((Login) login);
        }

        // SECTION: DESCRIPTOR
        public ClassDescriptor buildEmployeeDescriptor() {
            RelationalDescriptor descriptor = new RelationalDescriptor();
            // specify the class to be made persistent
            descriptor.setJavaClass(sample.Employee.class);
            // specify the tables to be used and the primary key
            descriptor.addTableName("EMP1");
            descriptor.addPrimaryKeyFieldName("EMP1.ID");
            // Descriptor Properties
            descriptor.useSoftCacheWeakIdentityMap();
            descriptor.setIdentityMapSize(100);
            descriptor.setAlias("Employee");
            // Mappings
            DirectToFieldMapping idMapping = new DirectToFieldMapping();
            idMapping.setAttributeName("id");
            idMapping.setFieldName("EMP1.ID");
            descriptor.addMapping(idMapping);
            DirectToFieldMapping nameMapping = new DirectToFieldMapping();
            nameMapping.setAttributeName("name");
            nameMapping.setFieldName("EMP1.NAME");
            descriptor.addMapping(nameMapping);
            DirectToFieldMapping salMapping = new DirectToFieldMapping();
            salMapping.setAttributeName("salary");
            salMapping.setFieldName("EMP1.SALARY");
            descriptor.addMapping(salMapping);
            return descriptor;
        }

        public static void main(String[] args) {
            EmployeeProject empProj = new EmployeeProject();
            Employee emp = new Employee();
            emp.setID(1);
            emp.setName("Pulkita");
            emp.setSalary(100);
            DatabaseSessionImpl databaseSessionImpl = new DatabaseSessionImpl(empProj);
            databaseSessionImpl.login();
            databaseSessionImpl.beginTransaction();
            UnitOfWork unitOfWork = databaseSessionImpl.acquireUnitOfWork();
            unitOfWork.registerNewObject(emp);
            unitOfWork.commit();
        }
    }

    The issue is with the line:
    databaseSessionImpl.beginTransaction();
    Since you began the transaction yourself you must also commit it. The easiest solution is to remove the above line and allow the UnitOfWork to begin and commit the transaction itself.
    Doug

  • How can we maintain External cache in TopLink JPA

    Hi,
    JPA maintains a shared session cache for multiple users. An individual user cannot use this shared session cache. How can we maintain an individual user cache in TopLink JPA? In our application we are using container-managed transactions.
    Regards
    Sucharitha.

    I'm not sure I understand, perhaps you could clarify your question?
    Do you want to access an object from the shared cache? There is a TopLink JPA query hint "return-shared" to allow for read-only shared objects to be returned on a query.
    If you do not wish to have a shared cache, you can set the cache to be isolated (see @Cache annotation, or persistence property "eclipselink.cache.shared.<class>").
    You can also access the shared cache directly using getServerSession().getIdentityMapAccessor().
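    As a sketch, the isolated-cache option mentioned above is set on the entity roughly like this (illustrative only; the annotation attributes differ between TopLink and EclipseLink releases, so check the documentation for your version):

```java
// Sketch: per-EntityManager (isolated) cache instead of the shared session cache.
// Attribute names vary by release; verify against your TopLink/EclipseLink docs.
@Entity
@Cache(shared = false)
public class Account {
    @Id
    private long id;
}
```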

  • TopLink migration from toplink-9.0.4.3 to 10.1.3.1

    I have migrated from TopLink 9.0.4.3 to 10.1.3.1. When I start the server, the TopLink modules load successfully, but a sql.log file is created unnecessarily. Does anybody have any idea where this sql.log is coming from? What are the new stubs added in version 10.1.3.1 for logging purposes?

    You can configure the TopLink 10.1.3.x logging level and output mechanism on the "logging" tab of your session in the Workbench [1].
    --Shaun
    [1] http://www.oracle.com/technology/products/ias/toplink/doc/10131/main/_html/sescfg004.htm

  • Help on creating and deleting xml child elements using Toplink please.

    Hi there,
    I am trying to build a toplink xml demo illustrating toplink acting as the layer between my java code and an xml datasource.
    After pulling my custom schema into toplink and following the steps in http://www.oracle.com/technology/products/ias/toplink/preview/10.1.3dp3/howto/jaxb/index.htm related to
    Click on Mapping Workbench Project...Click on From XML Schema (JAXB)...
    I am able to set up Java code which can run gets and sets against my XML data source. However, I also want to be able to create and delete elements within the XML data for child elements.
    I.e., in a simple scenario I have an XSD for departments which has an unbounded element of type employee. How does TopLink allow me to add and/or remove employees in a department on the marshalled XML data source? Only gets and sets for the elements seem accessible.
    In my experience with database schema based toplink demos I have seen methods such as:
    public void setEmployeesCollection(Collection employeesCollection) {
         this.employeesCollection = employeesCollection;
    }
    Is this functionality available for XML-backed TopLink projects?
    cheers
    Nick

    Hi Nick,
    Below I'll give an example of using the generated JAXB object model to remove a node and add a new one. The available APIs are defined in the JAXB spec. TopLink also supports mapping your own objects to XML; your own objects could contain more convenient APIs for adding or removing collection members.
    Example Schema
    The following XML Schema will be used to generate a JAXB model.
    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
         elementFormDefault="qualified" attributeFormDefault="unqualified">
         <xs:element name="department">
              <xs:complexType>
                   <xs:sequence>
                        <xs:element ref="employee" maxOccurs="unbounded"/>
                   </xs:sequence>
              </xs:complexType>
         </xs:element>
         <xs:element name="employee">
              <xs:complexType>
                   <xs:sequence>
                        <xs:element name="name" type="xs:string"/>
                   </xs:sequence>
              </xs:complexType>
         </xs:element>
     </xs:schema>
    Example Input
    The following document will be used as input. For the purpose of this example this XML document is saved in a file called "employee-data.xml".
    <department>
         <employee>
              <name>Anne</name>
         </employee>
         <employee>
              <name>Bob</name>
         </employee>
     </department>
    Example Code
    The following code demonstrates how to use the JAXB APIs to remove the object representing the first employee node, and to add a new Employee (with name = "Carol").
    JAXBContext jaxbContext = JAXBContext.newInstance("your_context_path");
    Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
    File file = new File("employee-data.xml");
    Department department = (Department) unmarshaller.unmarshal(file);
    // Remove the first employee in the list
    department.getEmployee().remove(0);
    // Add a new employee
    ObjectFactory objectFactory = new ObjectFactory();
    Employee newEmployee = objectFactory.createEmployee();
    newEmployee.setName("Carol");
    department.getEmployee().add(newEmployee);
    Marshaller marshaller = jaxbContext.createMarshaller();
    marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
     marshaller.marshal(department, System.out);
    Example Output
    The following is the result of running the example code.
    <department>
         <employee>
              <name>Bob</name>
         </employee>
         <employee>
              <name>Carol</name>
         </employee>
    </department>

  • Can I catch the resulting TopLink query in the JSF backing bean?

    Hello!
    I'm using JSF in my application. In my case, the data on the JSF pages is based on TopLink queries, which usually have some parameters. Such a construction works fine except for one thing.
    For data export purposes, I need to capture the generated text of the query, for example in a page's backing bean.
    Is it possible to perform this action, and if so, how?
    Thanks.

    Generally speaking, you can only throw one exception at a time. You can throw your own application exception, but you can't throw both a SQLException and your own.

  • Toplink UpdateAllQuery Issue

    We're experiencing some problems with an UpdateAllQuery when trying to update some fields belonging to an aggregate mapping within the component we're trying to update.
    We want to set these fields to NULL.
    One of the fields in the Aggregate mapping is a OneToOne mapping lookup type component. We want to set this field to NULL. (In a nutshell we want to set the FK to this lookup type object to NULL).
    In order to accomplish this, we have created a DirectQueryKey for the OneToOne mapping inside of the Aggregate mapping and then physically mapped it within the Component that makes use of the Aggregate mapping.
    It appears that this DirectQueryKey is not working properly. Even when we create a ReadAllQuery and use the same selection criteria, TopLink complains with a TOPLINK 6015 "Invalid QueryKey [businessFuncId] in expression" exception when trying to execute the query.
    We have checked to make sure the QueryKey name in our TopLink mapping matches the name used in the query definition... we're stumped... can someone review this and provide some clues as to what we may be doing wrong?
    Here is the UpdateAllQuery definition:
    UpdateAllQuery updateQuery = new UpdateAllQuery(RegisterImpl.class);
    updateQuery.setCacheUsage(UpdateAllQuery.INVALIDATE_CACHE);
    ExpressionBuilder registerBuilder = updateQuery.getExpressionBuilder();
    updateQuery.addArgument("userId");
    updateQuery.addArgument("channel");
    updateQuery.addArgument("token");
    updateQuery.addArgument("date");
    updateQuery.addArgument("businessFuncId");
    Expression reservedDetailReg = registerBuilder.get("reservedDetail");
    Expression userIdExp = reservedDetailReg.get("userId").equal(registerBuilder.getParameter("userId"));
    Expression channelExp = reservedDetailReg.get("channel").equal(registerBuilder.getParameter("channel"));
    Expression tokenExp = reservedDetailReg.get("token").equal(registerBuilder.getParameter("token"));
    Expression dateExp = reservedDetailReg.get("date").equal(registerBuilder.getParameter("date"));
    Expression busFuncExp = reservedDetailReg.get("businessFuncId").equal(registerBuilder.getParameter("businessFuncId"));
    updateQuery.setSelectionCriteria(userIdExp.and(channelExp.and(tokenExp.and(dateExp.and(busFuncExp)))));
    updateQuery.addUpdate(reservedDetailReg.get("userId"), "");
    updateQuery.addUpdate(reservedDetailReg.get("channel"), "");
    updateQuery.addUpdate(reservedDetailReg.get("token"), "");
    updateQuery.addUpdate(reservedDetailReg.get("date"), "");
    updateQuery.addUpdate(reservedDetailReg.get("businessFuncId"), "");
    descriptor.getQueryManager().addQuery(QueryConstants.UNFREEZE_REGISTER_BY_RESERVED_DETAIL, updateQuery);
    Thanks in advance.

    In answer to your posted questions:
    1) Are you mapping a query key for the lookup object(between an attribute of the object and a field of the table) or are you adding a query key for the relationship mapping?
    Answer: We are mapping a DirectQueryKey (see "businessFuncId" above) to a field on the table directly related to the component being updated. However, the query key represents the foreign key (primary key) for a OneToOne mapping contained within an aggregate descriptor.
    Here's an explanation of the OR mapping/object relationship (reference the query definition in the original posting for more details):
    RegisterImpl (target of updateAllQuery) contains a ReservedDetail (Aggregate mapping) which in turn contains a OneToOne mapping to a BusinessFunction (read-only lookup type object).
    We created a DirectQueryKey: businessFuncId which represents the FK on the RegisterImpl table that is used within the ReservedDetail aggregate mapping for the OneToOne mapping to the BusinessFunction component.
    This query key is for query purposes only. As you can see from the query definition, it is used to avoid a join with the BusinessFunction table, because, as we understand it, an UpdateAllQuery can only update ONE target component (table). The problem we're having is: how do you set a OneToOne FK reference field on the target component to NULL?
    2) To which descriptor are you adding the query key?
    Answer: The query key ("businessFuncId") must be, by convention, defined within the ReservedDetail aggregate mapping. When we map the ReservedDetail aggregate mapping within the RegisterImpl component mapping, we map the query key to the businessFunction FK field contained with the RegisterImpl table, (the target table we're trying to update).

  • Lazy loading differences Toplink vs. Hibernate - plz. explain

    I'm in the process of evaluating both TopLink and Hibernate as potential ORM frameworks for a project I'm currently involved in.
    My question is about how TopLink resolves lazily loaded associations. In Hibernate I have to perform a query inside a transactional context boundary, like:
    Session s = SessionFactory.getSession();
    s.beginTransaction();
    ... your query logic here
    s.getTransaction().commit();
    When the query involves associations which are declared as lazily loadable, trying to invoke these associations after the transaction boundary has been closed results in an exception. This differs from TopLink (my JUnit test case breaks for Hibernate if I set the DAOFactory to return Hibernate-enabled DAOs) and I'm wondering why.
    I'm guessing this has something to do with how TopLink manages its client session, but I would like to get some confirmation about this. It looks like, as long as the thread of execution is running, I can resolve associations using TopLink, but not when I use Hibernate.
    This brings me to yet another question: what is considered best practice in TopLink regarding session disposal? Should I do something myself, or let the garbage collector take care of it?

    I'm not too sure here, but I think it's because TopLink has a "long running" ServerSession. When you do lazy initialization outside a client session it is for read-only purposes, and it will use the centrally managed ServerSession (and its cache). I'm still trying to figure out the exact relationships here, so I'm not too sure. :) Hibernate does not have a centrally shared cache and will not be able to instantiate objects once the session is closed (for each session, objects are instantiated from its data cache).
    As for handling resources and opening/closing, use the Spring/TopLink integration. It will handle this for you and gives you heaps of convenience methods that use some clever tricks to decide whether to fetch objects with a Session or a UnitOfWork. It also benefits from the good exception handling built into Spring.
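    The behavioral difference described above can be pictured with a stripped-down lazy-loading proxy in plain Java. This is only a sketch of the general mechanism, not the real Hibernate or TopLink API; all class names here are illustrative. The association holds a loader instead of the data, and whether dereferencing it after the transaction works depends entirely on whether the loader's backing session is still usable:

    ```java
    import java.util.List;
    import java.util.function.Supplier;

    // Minimal stand-in for an ORM session that can be closed.
    class FakeSession {
        private boolean open = true;
        void close() { open = false; }
        List<String> fetchCalendars() {
            if (!open) {
                // Mimics the spirit of Hibernate's LazyInitializationException
                throw new IllegalStateException("session closed");
            }
            return List.of("cal-1", "cal-2");
        }
    }

    // An association that is resolved on first access via the session.
    class LazyAssociation {
        private final Supplier<List<String>> loader;
        private List<String> value;
        LazyAssociation(Supplier<List<String>> loader) { this.loader = loader; }
        List<String> get() {
            if (value == null) {
                value = loader.get(); // hits the session the first time only
            }
            return value;
        }
    }

    public class LazyDemo {
        public static void main(String[] args) {
            FakeSession session = new FakeSession();
            LazyAssociation calendars = new LazyAssociation(session::fetchCalendars);
            session.close(); // "transaction boundary" closed before first access
            try {
                calendars.get();
            } catch (IllegalStateException e) {
                System.out.println("lazy load failed: " + e.getMessage());
            }
        }
    }
    ```

    In this picture, TopLink's long-lived ServerSession corresponds to the loader still being usable after the client's work ends, so the same dereference succeeds instead of throwing.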

  • Query vs Toplink managed collection and cascade persist

    A fairly simple situation: I have a 1-N relation which is managed by Toplink: 1 relation can have N calendars (I know, badly chosen class name, but alas).
    If I access the collection through the Toplink managed collection, make a change to one of the calendars and then merge the relation, the change in the calendar instance automatically is detected and also persisted.
    However, if I use a query (because I do not need all calendars) to find the same instance and make the same change, then it is not persisted. Apparently the "cascade persist" is not done here.
    There are a few ways around this:
    1. fetch the original collection and by compare-and-remove emulate the query
    2. do a setRelation(null) and then setRelation(xxx) of the relation
    3. do a merge inside the transaction (a merge outside does not work)
    The funny thing is, workaround #2 really sets the same relation again!
    Is there a way to have the result of a query also cascade persist?

    Well, I do not want to do it in a transaction, because then the changes are written to the database immediately and that will result in all kinds of locking problems (this is a fat-client situation). What I want is fairly simple: the user modifies entities in an object graph in memory and at the end of his work either presses "cancel" and discards all changes, or presses "save" and stores all changes. When he presses "save" I expect the EM to persist every changed entity.
    This approach works fine for all scenarios I have implemented up until now. The current one is different in that I get related entities not by traversing the object graph (so via Cascade.PERSIST collections), but via a query. There is one major difference between these two: the entities from the collections are automatically persisted, while the ones from a query are not. BUT they are, for all intents and purposes, identical. Specifically: the collection gives me ALL calendars associated with the relation, the query only those within a timespan but still associated with the relation.
    For some reason I expected the entities to also auto-persist, BECAUSE they also are present in the collection.
    Ok then, so I understand that entities fetched through a query are unrelated to any other entity, even though they also exist in a Cascade.PERSIST collection. (I still have to test what happens if I, after the query, also access the collection: will the same object be present?)
    That being as it is, I need to merge each query's entities separately, and thus I expect the EM to remember any entities merged outside a transaction, but it does not. That I do not understand.
    Now, I already have a patched / extended EM because of a strange behavior in the remove vs clear dynamics, so this was a minor add-on and works perfectly (so far ;-). But if you have a better idea how to remember changes to entities, which are to be merged upon transaction start... Please!
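    The "will the same object be present?" question above comes down to the identity map: a session-level cache keyed by primary key, so a query and a collection traversal that resolve the same row hand back the same instance. A minimal sketch in plain Java (all names are illustrative, not TopLink API):

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    // A session-level identity map: at most one instance per primary key.
    class IdentityMap<T> {
        private final Map<Long, T> cache = new HashMap<>();
        // Return the cached instance for this key, building it only on a miss.
        T resolve(long pk, Function<Long, T> rowLoader) {
            return cache.computeIfAbsent(pk, rowLoader);
        }
    }

    public class IdentityMapDemo {
        static class Calendar {
            final long id;
            Calendar(long id) { this.id = id; }
        }

        public static void main(String[] args) {
            IdentityMap<Calendar> session = new IdentityMap<>();
            // First access: via the relation's managed collection.
            Calendar fromCollection = session.resolve(42L, Calendar::new);
            // Second access: via a separate query hitting the same row.
            Calendar fromQuery = session.resolve(42L, Calendar::new);
            // Same identity, so a change made through one is visible through the other.
            System.out.println(fromCollection == fromQuery); // prints "true"
        }
    }
    ```

    Object identity within a session is separate from change tracking, though, which is why the entities can be the same instance and still not be cascade-persisted the way the collection's contents are.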

  • ObjectRelational Mapping with Toplink

    Hi,
    I'm looking for any tutorial or example about ObjectRelational mappings, Object-types, Varrays, Nested tables, ObjectRelationalDescriptor. I tried with code examples from b100063,b100064. but they are too short and not clear.
    thanks,

    Hi,
    I found this example on metalink.oracle.com; it gives a good example of a VARRAY. I tried the example and it works. You said that there were code examples b100063 and b100064; where are they located?
    Anyway here is the example:
    Doc ID: Note:224177.1
    Subject: Mapping to a VARRAY object type defined in an Oracle database
    Type: SAMPLE CODE
    Status: PUBLISHED
    Content Type: TEXT/PLAIN
    Creation Date: 20-DEC-2002
    Last Revision Date: 08-APR-2003
    Overview
    Oracle9iAS TopLink supports object-relational mapping. This means that custom datatypes created by the developer in the database can be used as mapping targets for attributes on the Java application side. One such object type is VARRAY. The Mapping Workbench does not support object-relational mapping (at the time this article was written), but if this feature is needed, the project file can be edited to accomplish it.
    If the object type (VARRAY) already exists in the database, all the developer has to do is make TopLink aware of it and map an attribute to it.
    Program Notes
    This example uses varray type courses_list_type defined in oracle database as:
    create or replace type courses_list_type as varray(5) of varchar2(25);
    The courses_list_type type is used in table test_student (used in this example)
    defined as:
    create TABLE test_student (
    student_id number(3) PRIMARY KEY,
    student_name varchar2(30),
    student_courses courses_list_type);
    Since we need some data in table test_student, the following statement has
    been executed:
    INSERT INTO test_student VALUES(1, 'Ron More',
    courses_list_type ('Visual Basic', 'Java', 'C++', 'UML', 'SQL'));
    Caution
    The sample program in this article is provided for educational purposes only
    and is NOT supported by Oracle Support Services. It has been tested
    internally, however, and works as documented. We do not guarantee that it
    will work for you, so be sure to test it in your environment before relying
    on it.
    Program
    A SessionEventAdapter class must be set up and its preLogin() method implemented. The session can be retrieved from the SessionEvent (getSession() method) and the descriptor with the VARRAY mapping (the new ObjectRelationalDescriptor) added to the project. This must be done in code, since it cannot be done from the Mapping Workbench. This is the recommended way of dealing with the issue, since it avoids modifying the project.java file generated by the Mapping Workbench. This way, the project itself can be modified many times without changing the object-relational descriptor.
    A) class Student is created in Jdeveloper
    package oracle.toplink.demos.employee.domain;
    import java.util.*;
    import java.io.*;
    public class Student {
    private int id;
    private String name;
    // a collection of courses stored in the Oracle database as a VARRAY type
    private Vector courses;
    public Student ( ) {
    this.id = 0;
    this.name = "";
    this.courses = new Vector(5);
    }
    public void addCourse(String courseName){getCourses().addElement(courseName);}
    public void removeCourse(String courseName){getCourses().removeElement(courseName);}
    public int getId ( ) {return id;}
    public String getName ( ) {return name;}
    public Vector getCourses(){ return courses;}
    public void setId(int studentId) {this.id = studentId;}
    public void setName(String studentName) {this.name = studentName;}
    public void setCourses(Vector studentCourses){this.courses = studentCourses;}
    } // end of class Student
    B) implementation of SessionEventAdapter class
    package oracle.toplink.demos.employee.domain;
    import oracle.toplink.sessions.*;
    import oracle.toplink.demos.employee.relational.*;
    public class MySessionEventAdapter extends SessionEventAdapter{
    /* This Event is raised before the session logs in. */
    public void preLogin(SessionEvent event){
    System.out.println("In preLogin()");
    // need empProject variable since it owns the buildStudentDescriptor() method
    EmployeeProject empProject = new EmployeeProject();
    event.getSession().getProject().addDescriptor(empProject.buildStudentDescriptor());
    } // end of preLogin
    } // end of MySessionEventAdapter
    Before login to database the following two lines must be executed:
    MySessionEventAdapter myAdapter = new MySessionEventAdapter();
    session.getEventManager().addListener((SessionEventListener)myAdapter);
    C) method that builds the ObjectRelationaDescriptor descriptor on student class
    public static Descriptor buildStudentDescriptor(){
    ObjectRelationalDescriptor descriptor = new ObjectRelationalDescriptor();
    descriptor.setJavaClass(oracle.toplink.demos.employee.domain.Student.class);
    descriptor.setTableName("TEST_STUDENT");
    descriptor.setPrimaryKeyFieldName("TEST_STUDENT.STUDENT_ID");
    // Mappings.
    DirectToFieldMapping idMapping = new DirectToFieldMapping();
    idMapping.setAttributeName("id");
    idMapping.setFieldName("TEST_STUDENT.STUDENT_ID");
    descriptor.addMapping(idMapping);
    DirectToFieldMapping nameMapping = new DirectToFieldMapping();
    nameMapping.setAttributeName("name");
    nameMapping.setFieldName("TEST_STUDENT.STUDENT_NAME");
    descriptor.addMapping(nameMapping);
    ArrayMapping coursesMapping = new ArrayMapping(); // here we do not use ObjectArrayMapping
    coursesMapping.setAttributeName("courses");
    coursesMapping.setGetMethodName("getCourses");
    coursesMapping.setSetMethodName("setCourses");
    coursesMapping.setStructureName("COURSES_LIST_TYPE");
    coursesMapping.setFieldName("TEST_STUDENT.STUDENT_COURSES");
    descriptor.addMapping(coursesMapping);
    return descriptor;
    } // end of buildStudentDescriptor()
    With the student descriptor defined this way, TopLink has all the information it needs about the VARRAY type. Please note that the VARRAY type already exists in the database, so ArrayMapping
    is used instead of ObjectArrayMapping.
    D) method that inserts object of type student into database
    public void callObjectRelationalVArray(){
    session.initializeIdentityMaps();
    System.out.println("\n\n\nSTART: callObjectRelationalVArray()");
    /* ************* callObjectRelationalVArray; insert student - start *******/
    UnitOfWork uowInsertStudent = session.acquireUnitOfWork();
    Student newStudent = new Student();
    Student cloneStudent = (Student)uowInsertStudent.registerObject(newStudent);
    cloneStudent.setId(6);
    cloneStudent.setName("Ron Moore");
    cloneStudent.addCourse("Java");
    cloneStudent.addCourse("JBuilder");
    cloneStudent.addCourse("HTML");
    cloneStudent.addCourse("Rational Rose");
    cloneStudent.addCourse("Visual Basic");
    uowInsertStudent.commit();
    /* ************* callObjectRelationalVArray; insert student - end *********/
    System.out.println("END: callObjectRelationalVArray()");
    } // end of callObjectRelationalVArray
    Sample Output
    By executing callObjectRelationalVarray() method JDeveloper produces the
    following output:
    START: callObjectRelationalVArray()
    DatabaseSession(11)--Connection(12)--delete test_student where STUDENT_ID = 6
    student with STUDENT_ID = 6 is deleted
    DatabaseSession(11)--acquire unit of work:28
    UnitOfWork(28)--begin unit of work commit
    DatabaseSession(11)--Connection(12)--begin transaction
    UnitOfWork(28)--Connection(12)--INSERT INTO TEST_STUDENT
    (STUDENT_ID, STUDENT_NAME, STUDENT_COURSES) VALUES (?, ?, ?)
    bind => [6, Ron Moore, oracle.sql.ARRAY@20]
    DatabaseSession(11)--Connection(12)--commit transaction
    UnitOfWork(28)--end unit of work commit
    UnitOfWork(28)--release unit of work
    Copyright (c) 1995,2000 Oracle Corporation. All Rights Reserved. Legal Notices and Terms of Use.

  • Toplink 9.0.4.6, Batching and CLOBS

    Hello,
    New to this forum, new to Java and very new to Toplink. So, please forgive me if I ask a stupid question or two.
    I have written a Java program whose purpose is to migrate data from one schema to another. For the most part it works well, except for a few issues.
    The legacy table contains a number of VARCHAR(4000) fields (Oracle 9i), and the equivalent fields in the new schema (Oracle 10g) are CLOB fields. I ran my migration code a few times and found it a bit too slow for my liking. In an effort to speed it up, I did the following:
    1. Turned on batching in the connection to the database
    2. Refactored java code to allow for batching commits to the database
    After doing this, the migration code runs a lot faster, but now I have an issue with writing from the VARCHAR(4000) fields to the CLOB fields. To be specific, about half the migrated records wind up with either null values in the corresponding CLOB fields or a series of spaces, but not the text that should be there. To reiterate: in some cases the data seems to migrate fine; in others, not so fine, or not at all.
    My question is this: is this a known issue? If so, how do I fix it? Before I started batching my commits, things seemed to be going well.
    Not sure what else to include here, other than the following, which you may find useful:
    Sessions.xml settings:
    UseNativeSequencing = True
    ShouldBindAllParamters = True
    UsesStringBinding = True
    UsesStreamsForBinding = True
    UsesBatchWriting = True
    UsesJDBCBatchWriting = True
    Actually, I am not sure if you need this information or not. Should you need further details, please do not hesitate to ask. Apologies again for my glaring lack of expertise in these matters; any help would be greatly appreciated.
    Thanks!

    Hi,
    I am going to use the following code, which I read from the ADF Developer Guide for SRDemo (with a little bit of adjustment):
    public void fileUploaded(ValueChangeEvent event) {
        InputStream in;
        FileOutputStream out;
        // Set fileUploadLoc to "SRDemo.FILE_UPLOADS_DIR" context init parameter
        String fileUploadLoc =
            FacesContext.getCurrentInstance().getExternalContext().getInitParameter("SRDemo.FILE_UPLOADS_DIR");
        if (fileUploadLoc == null) {
            // Backup value if context init parameter not set.
            fileUploadLoc = "/tmp/srdemo_fileuploads";
        }
        // get svrId and append to file upload location
        Integer svrId =
            (Integer)JSFUtils.getManagedBeanValue("userState.currentSvrId");
        fileUploadLoc += "/sr_" + svrId + "_uploadedfiles";
        // Create upload directory if it does not exist.
        boolean exists = (new File(fileUploadLoc)).exists();
        if (!exists) {
            (new File(fileUploadLoc)).mkdirs();
        }
        UploadedFile file = (UploadedFile)event.getNewValue();
        if (file != null && file.getLength() > 0) {
            FacesContext context = FacesContext.getCurrentInstance();
            FacesMessage message =
                new FacesMessage(JSFUtils.getStringFromBundle("srmain.srfileupload.success") +
                                 " " + file.getFilename() + " (" +
                                 file.getLength() + " bytes)");
            context.addMessage(event.getComponent().getClientId(context),
                               message);
            try {
                out = new FileOutputStream(fileUploadLoc + "/" + file.getFilename());
                in = file.getInputStream();
                for (int bytes = 0; bytes < file.getLength(); bytes++) {
                    out.write(in.read());
                }
                in.close();
                out.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        } else {
            // need to check for null value here as otherwise closing
            // the dialog after a failed upload attempt will lead to
            // a NullPointerException
            String filename = file != null ? file.getFilename() : null;
            String byteLength = file != null ? "" + file.getLength() : "0";
            FacesContext context = FacesContext.getCurrentInstance();
            FacesMessage message =
                new FacesMessage(FacesMessage.SEVERITY_WARN,
                                 JSFUtils.getStringFromBundle("srmain.srfileupload.error") +
                                 " " + filename + " (" + byteLength +
                                 " bytes)", null);
            context.addMessage(event.getComponent().getClientId(context),
                               message);
        }
    }
    Two questions:
    One, I am going to read in the file as a byte[]. From my reading, I know that I would have to use a custom UploadedFileProcessor to save files to the database. How can I save that file as a BLOB in the database? Can you point me to some code samples? I am working with ADF Faces and TopLink, not Business Components.
    Two, regarding your reply on 1/16, could you explain how I map my byte[] to a BLOB?
    Appreciate your assistance.
    Lin
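    As a side note on the upload code above: the byte-at-a-time copy loop is one reason large uploads feel slow, since each iteration is a separate read and write call. A buffered copy using only java.io is usually preferable. A minimal standalone sketch (the class and method names are my own, not from SRDemo):

    ```java
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class StreamCopy {
        // Copy in 8 KB chunks instead of one byte per call.
        static long copy(InputStream in, OutputStream out) throws IOException {
            byte[] buffer = new byte[8192];
            long total = 0;
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
                total += read;
            }
            return total;
        }

        public static void main(String[] args) throws IOException {
            InputStream in = new ByteArrayInputStream("uploaded file contents".getBytes());
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            long copied = copy(in, out);
            System.out.println(copied + " bytes: " + out); // 22 bytes: uploaded file contents
        }
    }
    ```

    In the real handler you would also want try-with-resources (or a finally block on older JDKs) so the streams are closed even when an exception is thrown; the original code leaks both streams on an IOException.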
