RegisterObject(Object) vs. registerNewObject(Object)

Hi all,
I need to decide the approach for my new application when creating new objects using Unit of Work. What do you recommend for registering the object? I have to confess that I’m a little confused. Each article or topic that I read makes me think differently from the previous one.
Should I use the registerObject(Object) call or the alternate method registerNewObject(Object)?
What I am planning to do is use registerNewObject(Object), but first check whether the object is already in the cache.
I read that registerNewObject(Object) allows the user to perform primary key queries and has performance improvements since it does not clone the object.
Is this approach correct? What do you recommend for this case?
Here is sample code of what I'm planning to do:
public Object insertObject(Object anObject, UnitOfWork uow) throws XxxException, TopLinkException {
    // Check whether the object is already in the cache
    Vector key = uow.keyFromObject(anObject);
    Object obj = uow.getFromIdentityMap(key, anObject.getClass());
    if (obj != null) {
        throw new XxxException("", key.toString());
    }
    return uow.registerNewObject(anObject);
}
Thanks in advance,
Ana Paula Carvalho

>
Q1) Please also tell me whether a collection would be better as a nested table, an associative array, or a varray,
instead of an object and a record to make the table. I personally find object and record types easy to understand. Please suggest, as I will use them in many functions,
and in which situation each one would be better with respect to performance and maintainability.
>
Use a nested table. You don't need an associative array.
>
And please consider that I want to use it like a table, e.g. in a JOIN, in a subquery, on the right side of IN, EXISTS, etc.
>
A pipelined function is your only choice if you want to query it like a table. Use object types:
-- type to match emp record
create or replace type emp_scalar_type as object
  (EMPNO NUMBER(4) ,
   ENAME VARCHAR2(10),
   JOB VARCHAR2(9),
   MGR NUMBER(4),
   HIREDATE DATE,
   SAL NUMBER(7, 2),
   COMM NUMBER(7, 2),
   DEPTNO NUMBER(2)
  )
/
-- table of emp records
create or replace type emp_table_type as table of emp_scalar_type
/
-- pipelined function
create or replace function get_emp( p_deptno in number )
  return emp_table_type
  PIPELINED
  as
   TYPE EmpCurTyp IS REF CURSOR RETURN emp%ROWTYPE;
    emp_cv EmpCurTyp;
    l_rec  emp%rowtype;
  begin
    open emp_cv for select * from emp where deptno = p_deptno;
    loop
      fetch emp_cv into l_rec;
      exit when (emp_cv%notfound);
      pipe row( emp_scalar_type( l_rec.empno, LOWER(l_rec.ename),
          l_rec.job, l_rec.mgr, l_rec.hiredate, l_rec.sal, l_rec.comm, l_rec.deptno ) );
    end loop;
    close emp_cv;
    return;
  end;
/
select * from table(get_emp(20))

Similar Messages

  • Refresh has effect on clone but not on object itself

    Hi,
    (Using TopLink 9.0.3 with an Oracle Database)
    I see some mystifying behaviour when I insert an object, refresh it (because I know the database changes some of the data during insert via a trigger), and then inspect the refreshed object.
    In short my code is as follows:
    UnitOfWork unitOfWork = session.acquireUnitOfWork();
    Vak vak = new Vak();
    unitOfWork.registerNewObject(vak);
    vak.setTitel("New Title");
    unitOfWork.commit();
    // the database will turn the Title to all UpperCase
    session.refreshObject(vak);
    // the following call prints out: New Title
    // !!! not in uppercase!!
    System.out.println(vak.getTitel());
    unitOfWork = session.acquireUnitOfWork();
    Vak vakClone = (Vak)unitOfWork.registerObject(vak);
    // the following call prints out: NEWTITLE
    // !!! Now it is in uppercase!!
    System.out.println(vakClone.getTitel());
    vakClone.setTitel("++"+vakClone.getTitel());
    unitOfWork.commit();
    Can someone explain this behavior? Why does the refresh have an effect on the clone of vak but not on vak itself? Is it because vak is newly created in this piece of code?
    How can I ensure that the vak object itself has the changes made in the database?
    thanks for your help,
    Lucas Jellema (AMIS)

    The problem is that registerNewObject has semantics beyond the obvious use of registering new objects... Time for me to lobby for a name change :)
    Search this forum for "registerNewObject" and I think you'll see some rants from me on this topic. Ultimately, in the first example in your first post, if you had just done registerObject instead of registerNewObject, everything would have behaved as you hoped.
    You do NOT have to use registerNewObject for new objects... registerObject is preferred.
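    For example, here is a rough (untested) sketch of your first snippet using registerObject instead, editing the working clone returned by the UnitOfWork:
    UnitOfWork unitOfWork = session.acquireUnitOfWork();
    Vak vak = new Vak();
    Vak vakClone = (Vak) unitOfWork.registerObject(vak); // edit the returned working clone
    vakClone.setTitel("New Title");
    unitOfWork.commit();
    // the trigger has upper-cased the title in the database; refresh the cached object
    session.refreshObject(vak);
    System.out.println(vak.getTitel()); // now shows the upper-cased title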
    Send me an email (donald.smith @ oracle.com) and I'll send you a UOW primer I have; it's being incorporated into the docs for an upcoming release, but it should help clarify this in the meantime.
    - Don

  • Question/issue regarding querying for uncommited objects in Toplink...

    Hi, I was hoping to get some insight into a problem we are encountering.
    We have a scenario where we are creating a folder hierarchy (using TopLink):
    1. a parent folder is created
    2. child elements are created (in the same transaction as step 1),
    3. we need to lookup the parent folder and assign it as the parent
    of these child elements
    4. end the transaction and commit all data
    In our system we control access to objects by appending a filter to the selection criteria, so we end up with SQL like this example
    (The t2 stuff is the authorization lookup part of the query.)
    SELECT t0.ID, t0.CLASS_NAME, t0.DESCRIPTION, t0.EDITABLE,
    t0.DATE_MODIFIED, t0.DATE_CREATED,
    t0.MODIFIED_BY, t0.ACL_ID, t0.NAME, t0.CREATED_BY,
    t0.TYPE_ID, t0.WKSP_ID, t1.ID, t1.LINK_SRC_PATH,
    t1.ABSOLUTE_PATH, t1.MIME_TYPE, t1.FSIZE,
    t1.CONTENT_PATH, t1.PARENT_ID
    FROM XDOOBJECT t0, ALL_OBJECT_PRIVILEGES t2,
    ARCHIVEOBJECT t1
    WHERE ((((t1.ABSOLUTE_PATH = '/favorites/twatson2')
    AND ((t1.ID = t2.xdoobject_id)
    AND ((t2.user_id = 'twatson2')
    AND (bitand(t2.privilege, 2) = 2))))
    AND (t1.ID = t0.ID))
    AND (t0.CLASS_NAME = 'oracle.xdo.server.repository.model.Archivable'))
    When creating new objects we also create the authorization lookup record (which is inserted into a different table.) I can see all the objects are registered in the UOW identity map.
    Basically, the issue is that this scenario all occurs in a single transaction and when querying for the newly created parent folder, if the authorization filter is appended to the query, the parent is not found. If I remove the authorization filter then the parent is found correctly. Or if I break this up into separate transactions and commit after each insert, then the parent is found correctly.
    I use the conformResultsInUnitOfWork attribute on the queries.
    This is related to an earlier thread I have in this discussion forum;
    Nested UnitOfWork and reading newly created objects...
    Thanks for any help you can provide,
    -Tim

    Hi Doug, we add the authorization filter directly in the application code as the query is getting set up.
    Here are some code examples: 1) the code to create a new object in the system, followed by 2) the code to create a new authorization lookup record (which also uses the first snippet to do the actual TopLink insert), then 3) an example of a read query where the authorization filter is appended to the Expression, and 4) several helper methods.
    I hope this is of some use as it's difficult to show the complete flow in a simple example.
    1)
    // create new object example
    public Object createObject(....) { // DataAccess method; full signature elided in the original post
        Object result = null;
        boolean inTx = true;
        UnitOfWork uow = null;
        try {
            SessionContext sc = mScm.getCurrentSessionContext();
            uow = TLTransactionManager.getActiveTransaction(sc.getUserId());
            if (uow == null) {
                Session session = TLSessionFactory.getSession();
                uow = session.acquireUnitOfWork();
                inTx = false;
            }
            Object oclone = (Object) uow.registerObject(object);
            uow.assignSequenceNumbers();
            if (oclone instanceof BaseObject) {
                BaseObject boclone = (BaseObject) oclone;
                Date now = new Date();
                boclone.setCreated(now);
                boclone.setModified(now);
                boclone.setModifiedBy(sc.getUserId());
                boclone.setCreatedBy(sc.getUserId());
            }
            uow.printRegisteredObjects();
            uow.validateObjectSpace();
            if (inTx == false) uow.commit();
            // just temp, see above
            if (true == authorizer.requiresCheck(oclone)) {
                authorizer.grantPrivilege(oclone);
            }
            result = oclone;
        } // catch block and return of 'result' not shown in the original post
    2)
    // Authorizer.grantPrivilege method
    public void grantPrivilege(Object object) throws DataAccessException {
        if (requiresCheck(object) == false) {
            throw new DataAccessException(
                "Object does not implement Securable interface.");
        }
        Securable so = (Securable) object;
        ModulePrivilege[] privs = so.getDefinedPrivileges();
        BigInteger pmask = new BigInteger("0");
        for (int i = 0; i < privs.length; i++) {
            BigInteger pv = PrivilegeManagerFactory.getPrivilegeValue(privs[i]);
            pmask = pmask.add(pv);
        }
        SessionContext sc = mScm.getCurrentSessionContext();
        // the authorization lookup record
        ObjectUserPrivilege oup = new ObjectUserPrivilege();
        oup.setAclId(so.getAclId());
        oup.setPrivileges(pmask);
        oup.setUserId(sc.getUserId());
        oup.setXdoObjectId(so.getId());
        try {
            // this recurses back to the code snippet from above
            mDataAccess.createObject(oup, this);
        } catch (DataAccessException dae) {
            Object[] args = { dae.getClass().toString(), dae.getMessage() };
            logger.severe(MessageFormat.format(EXCEPTION_MESSAGE, args));
            throw new DataAccessException("Failed to grant object privilege.", dae);
        }
    }
    3)
    // example Query code
    Object object = null;
    ExpressionBuilder eb = new ExpressionBuilder();
    Expression exp = eb.get(queryKeys[0]).equal(keyValues[0]);
    for (int i = 1; i < queryKeys.length; i++) {
        exp = exp.and(eb.get(queryKeys[i]).equal(keyValues[i]));
    }
    // check if we need to add the authorization filter
    if (authorizer.requiresCheck(domainClass) == true) {
        // this is where the authorization filter is appended to the query
        exp = exp.and(appendReadFilter());
    }
    ReadObjectQuery query = new ReadObjectQuery(domainClass, exp);
    SessionContext sc = mScm.getCurrentSessionContext();
    if (TLTransactionManager.isInTransaction(sc.getUserId())) {
        // part of a larger transaction scenario
        query.conformResultsInUnitOfWork();
    } else {
        // not part of a transaction
        query.refreshIdentityMapResult();
        query.cascadePrivateParts();
    }
    Session session = getSession();
    object = session.executeQuery(query);
    4)
    // builds the authorization filter
    private Expression appendReadFilter() {
        ExpressionBuilder eb = new ExpressionBuilder();
        Expression exp1 = eb.getTable("ALL_OBJECT_PRIVILEGES").getField("xdoobject_id");
        Expression exp2 = eb.getTable("ALL_OBJECT_PRIVILEGES").getField("user_id");
        Expression exp3 = eb.getTable("ALL_OBJECT_PRIVILEGES").getField("privilege");
        Vector args = new Vector();
        args.add(READ_PRIVILEGE_VALUE);
        Expression exp4 =
            exp3.getFunctionWithArguments("bitand", args).equal(READ_PRIVILEGE_VALUE);
        SessionContext sc = mScm.getCurrentSessionContext();
        return eb.get("ID").equal(exp1).and(exp2.equal(sc.getUserId()).and(exp4));
    }
    // helper to get the TopLink Session
    private Session getSession() throws DataAccessException {
        SessionContext sc = mScm.getCurrentSessionContext();
        Session session = TLTransactionManager.getActiveTransaction(sc.getUserId());
        if (session == null) {
            session = TLSessionFactory.getSession();
        }
        return session;
    }
    // method of TLTransactionManager, provides easy access to TLSession,
    // which handles TopLink Sessions and is a singleton
    public static UnitOfWork getActiveTransaction(String userId)
            throws DataAccessException {
        TLSession tls = TLSession.getInstance();
        return tls.getTransaction(userId);
    }
    // the TLSession method, returns the active transaction (UOW) or null if none
    public UnitOfWork getTransaction(String uid) {
        UnitOfWork uow = null;
        UowWrapper uw = (UowWrapper) mTransactions.get(uid);
        if (uw != null) {
            uow = uw.getUow();
        }
        return uow;
    }
    Thanks!
    -Tim

  • Update an object

    Post Author: kctraceys
    CA Forum: Administration
    I have two copies of a report.  One that is on the server and another that has been modified and it is now ready to be put into production.  I have two issues.
    1)  I don't see anywhere that I can just update the object to include the modified report.
    2)  If I remove the object and try to create a new one it tells me I can't because it already exists?

    There is a bit of code missing from your example that would help me determine what you are trying to do. I suspect you are trying to do something like this:
    List objects=sess.readAllObjects(clazz);
    o = objects.get(i); // where i is the index of the object I am interested in
    o.setBlah(...);
    update(o, sess);
    The object must be modified after it has been registered with the Unit of Work. Registering the object with the Unit of Work is like beginning an object-level transaction, giving you a transactional copy of the object to work on. Something like:
    List objects=sess.readAllObjects(clazz);
    unitOfWork = session.acquireUnitOfWork();
    o = (Cast)unitOfWork.registerObject(objects.get(i));
    o.setBlah(...);
    unitOfWork.commit();
    Note how I am not using the mergeClone(..) api that you were using. That API is for use when the object is serialized (or otherwise copied) between the session read and the registration with the unit of work.
    Peter

  • Update object problem

    Hi!
    I'm having some problems updating an object using TopLink with Oracle XE.
    I have been trying different things with the merge method in the UnitOfWork API. The code executes without problems, but the rows in the database remain unchanged.
    Any idea?
    Thanks,
    Claudio
    This is my code:
    Session sess = DBManager.getInstance().getSession();
    List objects = sess.readAllObjects(Sector.class);
    Sector sec = (Sector) objects.get(0);
    sec.setNombre("Contenedores"); // Changing name to "Contenedores"
    GrupoDeTrabajo grupo = (GrupoDeTrabajo) sec.getGruposDeTrabajo().get(0);
    sec.removeFromGruposDeTrabajo(grupo); // removing the first child
    grupo = (GrupoDeTrabajo) sec.getGruposDeTrabajo().get(0); // Retrieving the next child
    grupo.setNombre("Other group"); // changing name of one child of sec
    UnitOfWork uow = DBManager.getInstance().getSession().acquireUnitOfWork();
    Object workingCopy = uow.readObject(Sector.class);
    if (workingCopy == null || workingCopy == sec) {
        throw new DBException();
    }
    uow.validateObjectSpace();
    uow.printRegisteredObjects();
    uow.mergeCloneWithReferences(sec);
    uow.writeChanges();
    uow.commit();
    uow.release();

    You cannot change objects you read through the TopLink Session. These objects must be read-only. You can only modify registered clones in a UnitOfWork.
    i.e.
    Session sess=DBManager.getInstance().getSession();
    List objects=sess.readAllObjects(Sector.class);
    UnitOfWork uow=DBManager.getInstance().getSession().acquireUnitOfWork();
    Sector sec=(Sector)uow.registerObject(objects.get(0));
    sec.setNombre("Contenedores"); //Changing name to "Contenedores"
    GrupoDeTrabajo grupo = (GrupoDeTrabajo) sec.getGruposDeTrabajo()
    .get(0);
    sec.removeFromGruposDeTrabajo(grupo); //removing the first child
    grupo = (GrupoDeTrabajo) sec.getGruposDeTrabajo().get(0); // Retrieving the next child
    grupo.setNombre("Other group"); //changing name of one child of sec
    uow.commit();
    The merge API is for objects that are serialized to a remote client and back.

  • Overhead of registerObject vs. registerNewObject/registerExistingObject

    Hello,
    All our descriptors are declared with SoftWeakIdentityMap.
    1- We use registerNewObject on the UOW to register newly created objects.
    2- We use registerExistingObject on the UOW to register existing objects.
    Can one be satisfied with these two methods, knowing that the registerObject method is more expensive performance-wise?
    Regards

    The performance difference between the two depends on your existence check option. Typically check-cache is used for existence, so the performance difference is very minor; if check-database were used, the performance difference would be larger. In general, using just registerObject is sufficient.
    One important difference between registerNewObject and registerObject is that registerNewObject does not make a clone, but registers the new object itself as the working clone. If you use registerObject, you must ensure that you edit the working clone returned from registerObject.
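    To illustrate the difference, here is a minimal sketch (Employee and setName are just illustrative names, not from your project):
    UnitOfWork uow = session.acquireUnitOfWork();
    // registerNewObject: no clone is made; the object passed in is itself the working copy
    Employee newEmp = new Employee();
    uow.registerNewObject(newEmp);
    newEmp.setName("new name"); // edit the object directly
    // registerObject: a working clone is returned; edits must go on that clone
    Employee existing = (Employee) session.readObject(Employee.class);
    Employee clone = (Employee) uow.registerObject(existing);
    clone.setName("updated name"); // edits made on 'existing' itself would not be picked up
    uow.commit();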

  • Write/Read Synchronization Problem

    Hi there,
    we have a little problem reading objects from the DB shortly after updating them.
    After we commit the UnitOfWork we do a ReadObjectQuery. Sometimes (about every 6th to 10th time) the ReadObjectQuery returns an object with outdated data (the state from before the commit).
    I inserted an initializeAllIdentityMaps() between the write and the read, but that didn't help.
    I inserted a Thread.sleep() after the write, which fixes the problem, but I'm not willing to accept that as a solution. There must be a better way than actively waiting for the changes to complete.
    I'm looking for a notification mechanism to find out when the commit is complete and the read cache is ready to use.
    Any suggestions?
    Thanks,
    Tassilo.

    Thanks for your help, Don.
    I think we can exclude race conditions. This happens in a single thread in a single Method.
    But I forgot to post one important piece of information (I'm sorry for that): we are still using TOPLink 3.6.
    I tried to boil the problem down to a minimal test that reproduces the effect: the following code reads an object from the DB, sets one of its references to null, commits, sets the reference back to the original value, and commits again.
    Then it reads the same object again from the DB and in 4 out of 20 tries the reference is still null. A second read always brings up the right reference. The ratio 4 of 20 is not constant, sometimes it's over 10 fails in 20 tries, in some rare cases all 20 tries work out fine.
    I have no clue what is going on here. I'll appreciate any help.
    public void testMethod() {
        for (int i = 0; i < 20; i++) {
            long myOid = 19618;
            ClientSession clientSession = ServerSession.getClientSession();
            UnitOfWork uow = clientSession.acquireUnitOfWork();
            CoObject object = CoObject.findWhereObjectIdExists(clientSession, myOid); /* this is a ReadObjectQuery */
            CoContent content = object.getCoContent();
            CoObject objectClone = (CoObject) uow.registerObject(object);
            CoContent contentClone = (CoContent) uow.registerObject(content);
            objectClone.setCoContent(null);
            uow.commitAndResume();
            objectClone.setCoContent(contentClone);
            uow.commit();
            // uow.initializeAllIdentityMaps(); /* doesn't make a difference */
            /* now we read the same object from the DB again */
            object = CoObject.findWhereObjectIdExists(clientSession, myOid);
            System.out.println(i + " object.getCoContent(): " + object.getCoContent()); /* ouch! */
            try {
                Thread.sleep(500); /* this helps, but hurts! */
            } catch (Exception e) {
            }
            object = CoObject.findWhereObjectIdExists(clientSession, myOid);
            System.out.println(i + " neu object.getCoContent(): " + object.getCoContent());
        }
    }
    0 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    0 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    1 object.getCoContent(): null
    1 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    2 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    2 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    3 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    3 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    4 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    4 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    5 object.getCoContent(): null
    5 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    6 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    6 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    7 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    7 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    8 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    8 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    9 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    9 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    10 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    10 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    11 object.getCoContent(): null
    11 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    12 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    12 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    13 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    13 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    14 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    14 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    15 object.getCoContent(): null
    15 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    16 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    16 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    17 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    17 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    18 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    18 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    19 object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]
    19 neu object.getCoContent(): CoFlexTyp[ID=16226.0 Name=Checkthisout, Object-Id=19618.0]

  • Issues when adding new objects

    A few questions regarding changes in Toplink from 2.5 to the current version.
    Say I had object A which contains a collection of Bs and a collection of Cs.
    Now I want to save A and the Bs and Cs to the database as an insert.
    Is the following code correct?
    A a = new A();
    A cloneA = (A)uow.registerObject(a);
    Vector bCollectionClone = uow.registerAllObjects(new Vector(bCollection));
    Vector cCollectionClone = uow.registerAllObjects(new Vector(cCollection));
    uow.commit();
    This seemed to work in 2.5. According to the API java docs this should work even today.
    The reason I am asking is that A, B and C are all new objects. Should they be registered with registerNewObject()?
    Is it incorrect to register a new object with cloning?

    I forgot to add a couple of lines of code to that snippet
    A a = new A();
    A cloneA = (A)uow.registerObject(a);
    Vector bCollectionClone = uow.registerAllObjects(new Vector(bCollection));
    Vector cCollectionClone = uow.registerAllObjects(new Vector(cCollection));
    cloneA.setBCollection(bCollectionClone);
    cloneA.setCCollection(cCollectionClone);
    uow.commit();

  • How to copy an Object with sequencing primary key?

    Hi, I have a use case where I need to copy all the information from an existing object into a new object. The draft process I am using is:
    obj original = session.readObject;
    obj target = uow.readObject;
    if(target is not there) {
    target = uow.registerObject(new target())
    target.attrA = original.attrA
    target.attrZ = original.attrZ}
    uow.commit;
    It works fine, but I don't like repeating the boring attribute copying. So I changed my code to:
    obj original = session.readObject;
    obj target = uow.readObject;
    if(target is not there) {
    original.pk = null;
    uow.registerNewObject(original);
    uow.commit;
    I tried to set the pk of the original to null and let it use a sequence-assigned one. However, it fails with an exception saying the primary key cannot be null. Can anybody help me simplify the process? Any concerns or comments are really appreciated.
    Message was edited by:
    juwen

    Hello Juwen,
    The problem is that you are registering an object, assigning it a null pk, and then committing the uow/transaction. TopLink uses registered objects to keep track of the changes you make, in order to persist those changes on commit. So the simple fix is to not commit the UnitOfWork - call release() on it instead.
    Another solution is to use the session copyObject api. The simple form that only takes an object works similarly to registering the object: it copies all persistent attributes but leaves the primary key null. You can also use this method with a copyPolicy to customize the process. Using this method will be a bit more efficient, since a UOW makes both a working copy and a backup copy of registered objects in order to track changes; the copyObject api makes only a single copy.
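    A minimal sketch of the copyObject approach (the class name and the way the Session is obtained are placeholders, not from your project):
    Session session = ...; // however you obtain your TopLink Session
    Object original = session.readObject(SomeClass.class);
    Object copy = session.copyObject(original); // persistent attributes copied, primary key left null
    UnitOfWork uow = session.acquireUnitOfWork();
    uow.registerObject(copy); // registered as a new object, so sequencing assigns the pk
    uow.commit();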
    Best Regards,
    Chris

  • UnitOfWork/Cache/ClientSession Object Update Problem

    I have a test case that does a create, read, update, read. This is done through methods on a stateless session bean with all methods requiring a transaction.
    I am using TopLink 9.0.3, JBoss 3.0.7 and JTS. Basically I have a very simple object.
    1. I create the object then call my bean; my bean calls TopLink, gets a ClientSession from a ServerSession, and then gets a UnitOfWork from the ClientSession. I then register the object as new and commit, and the object gets written to the database. My bean method returns the pk.
    BeanInterface.createObject(Object obj) {
        UnitOfWork uow = Singleton.getServerSession().getClientSession().getUnitOfWork();
        uow.registerNewObject(obj);
        uow.commit();
    }
    2. I then read the object via my bean, by getting the ClientSession from the ServerSession and calling readObject on it.
    BeanInterface.readObject(Object obj) {
        ClientSession session = Singleton.getServerSession().getClientSession();
        return Utility.cloneObject(session.readObject(obj)); // Ensures I am getting my own copy, not a ref.
    }
    So far so good.
    3. I then modify the object in my client and call an update method on my bean. This method does a SessionServer.getClientSession().getUnitOfWork(). I then register my object as existing, deep merge it, and then commit the unit of work.
    BeanInterface.updateObject(Object obj) {
        UnitOfWork uow = Singleton.getServerSession().getClientSession().getUnitOfWork();
        Object clone = uow.registerExistingObject(obj);
        uow.deepMergeClone(obj);
        uow.commit();
    }
    I check the database and the changes are there.
    4. Now the kicker. I read the object again as in step 2 and I get the original object, not the changes. So what is in the cache is the original object, while what is in the database is the modified object.
    BeanInterface.readObject(Object obj) {
        ClientSession session = Singleton.getServerSession().getClientSession();
        return Utility.cloneObject(session.readObject(obj)); // Ensures I am getting my own copy, not a ref.
    }
    At this point I am stumped, because the changes are in the database but not in the cache, and I get no exceptions.
    Any thoughts?
    Thanks
    Mike H. SR.

    Donald,
    There are three methods you can use: registerExistingObject(), registerNewObject(), and registerObject(). If you know that the object is new, then using registerNewObject() is better because it does not do the existence check. If you know that the object exists, then using registerExistingObject() saves you the overhead of having TopLink determine whether the object exists or not. registerObject() is really only useful if you do not know whether the object is new or existing. So registerNewObject and registerExistingObject are optimizations. In my case we are using registerNewObject() when a consumer says create the object, and registerExistingObject() when they are doing an update. There is also a store operation, which uses registerObject(); in that case the consumer does not know whether the object is new or existing.
    I have been tracking the register method in my debugger and the problem may be in this method. Basically, registerObject or registerExistingObject is supposed to load the original object from the database if it is not in the cache. If it is in the cache, it should clone it, giving back a reference to the clone. In the end there will be two copies of the object: the clone on the cloned-object list and the original on the original-object list. deepMergeClone should update the clone with the latest changes, and commit should then compute the delta between the clone and the original to determine what goes to the db. So far that part is working OK, because the data does make it to the database. If I turn caching off there are no problems.
    Additional information: I did some testing and watching in the debugger. If I do a serverSession.getClientSession().readObject() or any select or read on the ClientSession I get back the same reference every time, but serverSession.getClientSession().acquireUnitOfWork().readObject() returns a different object reference than the ClientSession does. This may be by design. I also believe the problem is in the descriptor, but I cannot figure out what it is about the descriptor, because this only happens on a few descriptors.
    By the way, I tried registerObject prior to posting, to no avail; had it worked, that would have meant there was a problem with registerExistingObject, because this is an existing object.
    Thanks for the input.

  • Conforming UnitOfWork New Object w/assignSequenceNumber

    If I:
    1. Create an instance of a mapped object.
    2. Assign the appropriate values.
    3. Call UnitOfWork.registerNewObject();
    4. Call assignSequenceNumbers();
    5. And, using the same UnitOfWork but before committing to the database, create a ReadObjectQuery with an Expression to search on the newly-assigned Sequence Number.
    Then, is it possible for a ReadObjectQuery to find this value?
    I have tried:
    1. On the descriptor, setting setShouldAlwaysConformResultsInUnitOfWork(true);
    2. On the query, checkCacheByPrimaryKey();
    Nothing works.
    Everything in the docs suggests I should be able to do this, but I always get null. Is there any way to get this newly registered object out of the UnitOfWork?

    Nate,
    Here is a chunk of code I just wrote to perform what I believe you want. I use registerObject instead of registerNewObject and I set conforming on the query.
    UnitOfWork uow = getSession().acquireUnitOfWork();
    Employee newEmp = new Employee(); // Original
    Employee wcEmp = (Employee) uow.registerObject(newEmp); // Register and receive the working copy
    uow.assignSequenceNumbers();
    System.out.println("Original id: " + newEmp.getId() + " - " + newEmp.hashCode());
    System.out.println("WC id: " + wcEmp.getId() + " - " + wcEmp.hashCode());
    // Now we'll search for it
    ReadObjectQuery roq = new ReadObjectQuery(Employee.class);
    roq.setSelectionCriteria(roq.getExpressionBuilder().get("id").equal(wcEmp.getId()));
    roq.conformResultsInUnitOfWork();
    Employee emp = (Employee) uow.executeQuery(roq);
    System.out.println("Found: " + emp.getId() + " - " + emp.hashCode());
    The output from the run gives me:
    Original id: 0 - 33459432
    WC id: 471 - 29345020
    Found: 471 - 29345020
    As you can see the working copy returned from the register is the one that has the PK assigned and is also returned from the query.
    Doug

  • TopLink 9.0.4.5 and aggregate object mappings

    We use TopLink version 9.0.3 to access Oracle database 9i with our application.
    I am currently evaluating TopLink version 9.0.4.5 with Oracle database 10g (10.1.0.3.0).
    The problem we have is with aggregate object mappings.
    After instantiation of the owning and the owned object, we set a back reference from the owned object to the owning object.
    We then register the owning object at the unit of work with registerObject() and work on the returned clone.
    After the commit we read the created data again.
    Using TopLink version 9.0.3:
    The owned object has the back reference to the owning object.
    Using TopLink version 9.0.4.5:
    The owning object is valid but at the owned object the back reference is missing (null).
    That's strange because we had no problems in the past (since TOPLink 2.1 up to 9.0.3).
    What is going wrong here? Any idea?

    How was the back pointer mapped in the TopLink project?
    If you check your project from 9.0.3.X, is there a Descriptor event listener with postBuild and postClone implemented?
    --Gordon

  • Adding object to collection outside of unit of work?

    Hi,
    I tried to find the answer in this forum, but it's difficult to know what keywords to use... anyway:
    What I am trying to do is so simple I can only believe I am missing the point somewhat ;-)
    I have an object Licence that has a Set of LicenceHolders. I have a Licence already saved in the db and the application now has cause to add a LicenceHolder. I create the LicenceHolder as a new object, then get the Licence out of the db, add the LicenceHolder to the Set, and then call my data persistence business class to make the save to the database. I.e. I have a separate layer to do the persistence from my business logic: the latter being the specific method to say 'add a licence holder to the licence', the former being 'save the licenceHolder, and update the link table between the licence and licenceHolder' (m - m relationship).
    The problem I have is that in my business method licence.addLicencee(licenceHolder) doesn't save my licenceHolder and the link table unless I include the highlighted registerNewObject line below. That does work, but see below for more comments.
    code snippet for business method....
    // lm is the persistence method that gets the licence from the db
    PremisesLicence licence = (PremisesLicence) lm.getLicence(premisesLicenceId);
    // licenceePerson - new object not yet persisted
    licence.addLicenceHolder(licenceePerson);
    // lhm is another persistence method to save licenceePerson and update the licence_licenceHolder link table
    lhm.addLicenceHolder(licenceePerson, licence);
    code for lhm...
    public void addLicenceHolder(ILicenceHolder licenceHolder, Licence licence) throws DataAccessException {
        UnitOfWork uow = aSession.acquireUnitOfWork();
        try {
            Licence licenceClone = (Licence) uow.readObject(licence);
            licenceClone.addLicenceHolder(licenceHolder);
            uow.registerNewObject(licenceHolder); // <-- the line in question
            uow.commit();
        } catch (Exception e) {
            e.printStackTrace();
            throw new DataAccessException(e);
        }
    }
    I don't believe I should have to include that registerNewObject line, as it is business logic in the persistence layer. I have already (in the business method) said 'this licence has this licenceHolder'; why should I have to repeat it in the persistence layer too? This can only lead to bugs in my software - I need all the help I can get not to introduce any more ;-)
    Comments please?
    Thanks
    Conrad

    We've been working with TopLink for a while now, and had some issues with what you are doing. The following way is how we interact with TopLink to enable modifications outside the UoW:
    - When returning objects from your DAO layer, always return a copy of the retrieved objects (try Session.copyObject()), not the object returned by TopLink.
    - Do modifications to the copied objects
    - Pass the modified object graph to the DAO layer and use uow.mergeCloneWithReferences() to merge in differences
    - Save to database with TopLink
    From our experience, the reason you have to copy things is that what TopLink returns is the cached objects. Because you would then be modifying the cached objects directly, TopLink will not discover the changes (comparing an object to itself will not give any differences). At least that was the only explanation we could find for our strange behaviour.
    This is not a very efficient way to do things, and I think that if you are able to have a UoW open during this whole process it would be much more efficient. For an easy way to do this with web applications check out the TopLink integration with Springframework.
    Please let me know if anyone has any feedback to this way of doing it. I know it's not ideal, but it is the best solution we found for detaching the objects.
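    In code, the flow looks roughly like this (a sketch only; the method names are made up, while Session.copyObject(), UnitOfWork.mergeCloneWithReferences() and commit() are the TopLink calls we rely on):
    // DAO read: hand out a copy of what TopLink returns, never the cached instance itself
    public Licence getDetachedLicence(Session session, Licence fromCache) {
        return (Licence) session.copyObject(fromCache);
    }
    // DAO save: merge the detached, modified graph back in and commit
    public void saveLicence(Session session, Licence detached) {
        UnitOfWork uow = session.acquireUnitOfWork();
        uow.mergeCloneWithReferences(detached); // TopLink works out the differences against the cache
        uow.commit();
    }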

  • RE: Named anchored objects

    Albert,
    In my case I was using a named anchored object to get a handle to an actual
    service object. My named object that I registered in the name service was
    an intermediary to which I did not maintain a connection. So I have not
    explicitly tested what you are asking.
    However, I too was not using a hard coded reference to the SO, and fail over
    and load balancing worked fine. The functions of fail over and load
    balancing are not done by the service object but by the name service, proxy
    and router. Since you are getting a proxy back any time you do a lookup in
    the name service I would think that fail over should work with any anchored
    object that is registered in the name service. When you do a RegisterObject
    call you will notice that one of the arguments is the session duration,
    which implies to me that fail over will be handled the same as for service
    objects.
    Load balancing adds another wrinkle. Load balancing is handled by a router.
    You must get a proxy to the router and not a proxy to an instance of the
    object that the router is doing the load balancing for. In the latter
    scenario you will be bypassing the router. If you are creating, anchoring
    and registering your objects dynamically you will not have a router so you
    will not be able to load balance! This applies even if the objects are
    instantiated within partitions that are load balanced because you will still
    be getting proxies back to a particular instance of the anchored objects.
    There are ways to accomplish load balancing between objects that you
    register yourself. However, the best solution will vary depending on the
    actual problem trying to be solved. If you would like to discuss this
    further, include a little more detail about the scenario you need to
    implement and I will give you what I know.
    BTW, what I have outlined above also applies to getting references via a
    system agent.
    Sean
    Cornice Consulting, Inc.
    -----Original Message-----
    From: [email protected]
    [mailto:[email protected]] On Behalf Of Albert Dijk
    Sent: Friday, July 03, 1998 11:01 AM
    To: [email protected]
    Subject:
    Alex, David, Jez, Sean,...
    My question about both solutions (using Nameservice and agents) is:
    If I reach a remote service object using either a BindObject or an agent, do
    fail-over and load-balancing work the same way as they normally do when
    using a hard coded reference to the SO.
    Albert Dijk
    From: Sean Brown[SMTP:[email protected]]
    Reply To: [email protected]
    Sent: Thursday, June 25, 1998 6:55 AM
    To: Ananiev, Alex; [email protected]
    Subject: RE: multiple named objects with the same name and
    interface
    Alexander,
    I can not comment on the speed difference because I never tested it. But I will say that we looked at the agent solution at a client site before. I will give the same warning I gave them. If you go the agent direction you are now using agents for a purpose for which they were not intended. Even though it technically works, as soon as you start using a piece of functionality in a way the developer did not intend it to be used, you run the risk of forward compatibility problems. By this I mean, since agents were not originally intended to be used to look up service / anchored object references, it may not work in the future because it is not likely to be given consideration in any future design.
    As we all know, programmers are always stretching the bounds of the tools they use, and you may have a good reason (i.e. performance). I just wanted to let you know the possible risk.
    One final note on a limitation of using system agents to obtain references to anchored objects: you can not access agents across environments. So, if you have connected environments and need to get references to services in another environment for fail-over or whatever, you will not be able to do it with agents.
    Just some thoughts!
    Sean
    -----Original Message-----
    From: [email protected]
    [mailto:[email protected]] On Behalf Of Ananiev, Alex
    Sent: Wednesday, June 24, 1998 12:14 PM
    To: '[email protected]'
    Subject: RE: multiple named objects with the same name and interface
    David,
    The problem with dynamic binding is that in this case you have to keep the reference to the service object somewhere. You don't want to call "bindObject" every time you need to use this service object; "bind" is a time-consuming operation, even on the same partition. Keeping a reference could be undesirable if your object could be moved across partitions (e.g. a business object).
    The alternative solution is to use agents. You can create a custom agent, make it a subagent of an active partition agent and use it as a placeholder for whatever service you need. "FindSubAgent" works much faster than "bindObject" (we verified that), and an agent is "user-visible" by its nature.
    Alexander
    From: "Sean Brown" <[email protected]>
    Date: Wed, 24 Jun 1998 09:12:55 -0500
    Subject: RE: multiple named objects with the same name and interface
    David,
    I actually determined it through testing. In my case I did not want this to happen and was trying to determine why it was happening. It makes sense if you think about it. Forte is trying to avoid making a remote method invocation if it can.
    Now, for anything more complex than "look locally first and, if none is found, give me any remote instance you can find", you will need to do more work. Using a naming scheme like Jez suggests below works well.
    Sean
    - -----Original Message-----
    From: Jez Sygrove [mailto:[email protected]]
    Sent: Wednesday, June 24, 1998 4:34 AM
    To: [email protected]; 'David Foote'
    Cc: [email protected]
    Subject: RE: multiple named objects with the same name and interface
    David,
    there's a mechanism used within SCAFFOLDS that allows the location of the 'nearest' SO when more than one is available.
    It involves registering each duplicated SO under three dynamically built names. The names include the partition, the node or the environment name.
    When wishing to locate the nearest SO, the BO builds an SO name using its own partition and asks the name service for that. If there is an SO registered under that name then it must be in the same partition and all is well. No cross-partition calls.
    If not, then the BO builds the name using its node and asks the name service for that. This means that if there is an SO outside the BO partition but still on the same node then this can be used. Again, relatively 'local'.
    If neither of these work then the BO has to resort to an environment-wide search.
    It may be that this approach could be adapted / adopted; I like its ingenuity.
    Cheers,
    Jez
    From: David Foote[SMTP:[email protected]]
    Reply To: David Foote
    Sent: 24 June 1998 03:17
    To: [email protected]
    Cc: [email protected]
    Subject: RE: multiple named objects with the same name and
    interface
    Sean,
    First, thank you for your response. I have wondered about this for a long time.
    I looked at the documentation for ObjectLocationManager, and on page 327 of the Framework Library and Applet Support Library Guide, in describing the BindObject method, Forte says:
    "The name service allows more than one anchored object (from different partitions) to be registered in the name service under the same registration name. When you invoke the BindObject method with a request for a name that has duplicate registration entries, the BindObject method finds an entry corresponding to an active partition, skipping any entries that do not. If no such active partition is found, or if the requested name is not found in the name service registry, a RemoteAccessException will be raised when the BindObject method is invoked."
    My question is: how did you discover that in the case of duplicate registrations the naming service will return the local object if one exists? This is not apparent from the documentation I have quoted. Is it documented elsewhere? Or did you determine it empirically?
    David N. Foote,
    Consultant
    ----Original Message Follows----
    David,
    First I will start by saying that this can be done by using named anchored objects and registering them yourself in the name service. There is documentation on how to do this, and by default you will get most of the behavior you desire. When you do a lookup in the name service (BindObject method) it will first look in the local partition, see if there is a local copy, and give you that copy. By anchoring the object and manually registering it in the name service you are programmatically creating your own SO without defining it as such in the development environment. BTW, in response to your item number 1: this should be the case there as well. If your "mobile" object is in the same partition where the service object it is calling resides, you should get a handle to the local instance of the service object.
    Here is the catch: if you make a bind object call and there is no local copy, you will get a handle to a remote copy, but you can not be sure which one! It ends up as more or less a random selection. Off the top of my head, and without going to the doc, I am pretty sure that when you register an anchored object you can not limit its visibility to "User".
    Sean
    -----Original Message-----
    From: [email protected]
    [mailto:[email protected]] On Behalf Of David Foote
    Sent: Monday, June 22, 1998 4:51 PM
    To: [email protected]
    Subject: multiple named objects with the same name and interface
    All,
    More than once, I have wished that Forte allowed you to place named objects with the same name in more than one partition. There are two situations in which this seems desirable:
    1) Objects that are not distributed, but are mobile (passed by value to remote objects), cannot safely reference a Service Object unless it has environment visibility, but this forces the overhead of a remote method call when it might not otherwise be necessary. If it were possible to place a copy of the same Service Object (with user visibility) in each partition, the overhead of a remote method call could be avoided. This would only be useful for a service object whose state could be safely replicated.
    2) My second scenario also involves mobile objects referencing a Service Object, but this time I would like the behavior of the called Service Object to differ with the partition from which it is called. This could be accomplished by placing Service Objects with the same name and the same interface in each partition, but varying the implementation with the partition.
    Does anyone have any thoughts about why this would be a good thing or a bad thing?
    David N. Foote
    Consultant

    Albert,
    In my case I was using a named anchored object to get a handle to an actual
    service object. My named object that I registered in the name service was
    an intermediary to which I did not maintain a connection. So I have not
    explicitly tested what you are asking.
    However, I too was not using a hard coded reference to the SO, and fail over
    and load balancing worked fine. The functions of fail over and load
    balancing are not done by the service object but by the name service, proxy
    and router. Since you are getting a proxy back any time you do a lookup in
    the name service I would think that fail over should work with any anchored
    object that is registered in the name service. When you do a RegisterObject
    call you will notice that one of the arguments is the session duration,
    which implies to me that fail over will be handled the same as for service
    objects.
    Load balancing adds another wrinkle. Load balancing is handled by a router.
    You must get a proxy to the router and not a proxy to an instance of the
    object that the router is doing the load balancing for. In the latter
    scenario you will be bypassing the router. If you are creating, anchoring
    and registering your objects dynamically you will not have a router so you
    will not be able to load balance! This applies even if the objects are
    instantiated within partitions that are load balanced because you will still
    be getting proxies back to a particular instance of the anchored objects.
    There are ways to accomplish load balancing between objects that you
    register yourself. However, the best solution will vary depending on the
    actual problem trying to be solved. If you would like to discuss this
    further, include a little more detail about the scenario you need to
    implement and I will give you what I know.
    BTY what I have outlined above also applies to getting references via a
    system agent.
    Sean
    Cornice Consulting, Inc.
    -----Original Message-----
    From: [email protected]
    [<a href="mailto:[email protected]">mailto:[email protected]]On</a> Behalf Of Albert Dijk
    Sent: Friday, July 03, 1998 11:01 AM
    To: [email protected]
    Subject:
    Alex, David, Jez, Sean,...
    My question about both solutions (using Nameservice and agents) is:
    If I reach a remote service object using either a BindObject or an agent, do
    fail-over and load-balancing work the same way as they normally do when
    using a hard coded reference to the SO.
    Albert Dijk
    From: Sean Brown[SMTP:[email protected]]
    Reply To: [email protected]
    Sent: Thursday, June 25, 1998 6:55 AM
    To: Ananiev, Alex; [email protected]
    Subject: RE: multiple named objects with the same name and
    interface
    Alexander,
    I can not comment on the speed difference because I never tested it.
    But, I
    will say that we looked at the agent solution at a client sight
    before. I
    will give the same warning I gave them. If you go the agent direction
    you
    are now using agents for a purpose that they were not intended. Even
    though
    it technically works, as soon as you start using a piece of
    functionality in
    a way the developer did not intend it to be used you run the risk of
    forward
    compatibility problems. By this I mean, since agents were not
    originally
    intended to be used to look up service / anchored object references,
    it may
    not work in the future because it is not likely to be given
    consideration in
    any future design.
    As we all know, programmers are always stretching the bounds of the
    tools
    they use and you may have a good reason (i.e. performance). I just
    wanted to
    let you know the possible risk.
    One final note on a limitation of using system agents to obtain
    references
    to anchored objects. You can not access agents across environments.
    So, if
    you have connected environments and need to get references to services
    in
    another environment for fail-over or whatever, you will not be able to
    do it
    with agents.
    Just some thoughts!
    Sean
    -----Original Message-----
    From: [email protected]
    [<a href="mailto:[email protected]]On">mailto:[email protected]]On</a> Behalf Of Ananiev, Alex
    Sent: Wednesday, June 24, 1998 12:14 PM
    To: '[email protected]'
    Subject: RE: multiple named objects with the same name and interface
    David,
    The problem with dynamic binding is that in this case you have to keep
    the reference to the service object somewhere. You don't want to call
    "bindObject" every time you need to use this service object, "bind" is
    a
    time-consuming operation, even on the same partition. Keeping
    reference
    could be undesirable if your object could be moved across partitions
    (e.g. business object).
    The alternative solution is to use agents. You can create custom
    agent,
    make it a subagent of an active partition agent and use it as a
    placeholder for whatever service you need. "FindSubAgent" works much
    faster than "bindObject", we verified that and agent is "user-visible"
    by its nature.
    Alexander
    From: "Sean Brown" <[email protected]>
    Date: Wed, 24 Jun 1998 09:12:55 -0500
    Subject: RE: multiple named objects with the same name and interface
    David,
    I actually determined it through testing. In my case I did not want
    this to
happen and was trying to determine why it was happening. It makes sense
    if
    you think about it. Forte is trying to avoid making a remote method
    invocation if it can.
    Now, for anything more complex than looking locally first and if none
    is
    found give me any remote instance you can find, you will need to do
    more
    work. Using a naming scheme like Jez suggests below works well.
    Sean
-----Original Message-----
From: Jez Sygrove [mailto:[email protected]]
    Sent: Wednesday, June 24, 1998 4:34 AM
    To: [email protected]; 'David Foote'
    Cc: [email protected]
    Subject: RE: multiple named objects with the same name and interface
    David,
    there's a mechanism used within SCAFFOLDS that allows the
    location of the 'nearest' SO when more than one is available.
    It involves registering each duplicated SO under three dynamically
    built
    names. The names include the partition, the node or the environment
    name.
    When wishing to locate the nearest SO the BO builds a SO name using
    its
    own partition and asks the name service for that.
    If there is an SO registered under that name then it must be in the
    same
    partition and all is well. No cross partition calls.
    If not, then the BO builds the name using its node and asks the name
    service for that.
    This means that if there is an SO outside the BO partition but still
    on
    the same node then this can be used. Again, relatively 'local'.
    If neither of these work then the BO has to resort to an environment
    wide search.
It may be that this approach could be adapted / adopted; I like its
    ingenuity.
    Cheers,
    Jez
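[Editor's note: for concreteness, here is a minimal sketch of the three-level lookup Jez describes. It is not part of the original message and is written in Java against a hypothetical NameService interface with an illustrative name-building convention; it is not the SCAFFOLDS or Forte API.]
interface NameService {
    // Returns the registered object, or null if nothing is registered under that name.
    Object lookup(String name);
}

class NearestServiceLocator {
    static Object findNearest(NameService ns, String soName,
                              String partition, String node, String environment) {
        // 1. Same partition: no cross-partition call needed.
        Object so = ns.lookup(soName + "." + partition);
        if (so != null) return so;
        // 2. Same node: still relatively local.
        so = ns.lookup(soName + "." + node);
        if (so != null) return so;
        // 3. Fall back to an environment-wide search.
        return ns.lookup(soName + "." + environment);
    }
}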
    From: David Foote[SMTP:[email protected]]
    Reply To: David Foote
    Sent: 24 June 1998 03:17
    To: [email protected]
    Cc: [email protected]
    Subject: RE: multiple named objects with the same name and
    interface
    Sean,
First, thank you for your response. I have wondered about this for a
    long time.
I looked at the documentation for ObjectLocationManager; on page 327
of the Framework Library and AppletSupport Library Guide, in describing
the BindObject method, Forte says:
    "The name service allows more than one anchored object (from
    different
    partitions) to be registered in the name service under the same
    registration name. When you invoke the BindObject method with a
    request
    for a name that has duplicate registration entries, the BindObject
    method finds an entry corresponding to an active partition, skipping
    any
    entries that do not. If no such active partition is found, or if the
    requested name is not found in the name service registry, a
    RemoteAccessException will be raised when the BindObject method is
    invoked."
    My question is: How did you discover that in the case of duplicate
    registrations the naming service will return the local object if one
    exists? This is not apparent from the documentation I have quoted.
    Is
    it documented elsewhere? Or did you determine it empirically?
    David N. Foote,
    Consultant
    ----Original Message Follows----
    David,
    First I will start by saying that this can be done by using named
    anchored
    objects and registering them yourself in the name service. There is
documentation on how to do this. And by default you will get most of
    the
    behavior you desire. When you do a lookup in the name service
    (BindObject
method) it will first look in the local partition and see if there is
    a
    local copy and give you that copy. By anchoring the object and
    manually
    registering it in the name service you are programmatically creating
    your
    own SO without defining it as such in the development environment.
    BTW
    in
    response to your item number 1. This should be the case there as
    well.
    If
    your "mobile" object is in the same partition where the serviceobject
    he is
    calling resides, you should get a handle to the local instance ofthe
    service object.
    Here is the catch, if you make a bind object call and there is no
    local
    copy
    you will get a handle to a remote copy but you can not be sure which
    one!
It ends up as more or less a random selection. Off the top of my head
    and
    without going to the doc, I am pretty sure that when you register an
anchored object you cannot limit its visibility to "User".
    Sean
    -----Original Message-----
    From: [email protected]
[mailto:[email protected]] On Behalf Of David Foote
    Sent: Monday, June 22, 1998 4:51 PM
    To: [email protected]
    Subject: multiple named objects with the same name and interface
    All,
    More than once, I have wished that Forte allowed you to place named
objects with the same name in more than one partition. There are two
    situations in which this seems desirable:
    1) Objects that are not distributed, but are mobile (passed by value
    to
    remote objects), cannot safely reference a Service Object unless it
    has
    environment visibility, but this forces the overhead of a remote
    method
call when it might not otherwise be necessary. If it were possible to
place a copy of the same Service Object (with user visibility) in each
    partition, the overhead of a remote method call could be avoided.
    This
would only be useful for a service object whose state could be safely
    replicated.
    2) My second scenario also involves mobile objects referencing a
    Service
Object, but this time I would like the behavior of the called Service
    Object to differ with the partition from which it is called.
    This could be accomplished by placing Service Objects with the same
    name
    and the same interface in each partition, but varying the
    implementation
    with the partition.
Does anyone have any thoughts about why this would be a good thing or
    a
    bad thing?
    David N. Foote
    Consultant
    Alexander Ananiev
    Claremont Technology Group
    916-558-4127
    To unsubscribe, email '[email protected]' with
    'unsubscribe forte-users' as the body of the message.
Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>

  • Best Practice for caching global list of objects

Here's my situation (I'm guessing this is mostly a question about cache synchronization):
    I have a database with several tables that contain between 10-50 rows of information. The values in these tables CAN be added/edited/deleted, but this happens VERY RARELY. I have to retrieve a list of these objects VERY FREQUENTLY (sometimes all, sometimes with a simple filter) throughout the application.
    What I would like to do is to load these up at startup time and then only query the cache from then on out, managing the cache manually when necessary.
    My questions are:
    What's the best way to guarantee that I can load a list of objects into the cache and always have them there?
    In the above scenario, would I only need to synchronize the cache on add and delete? Would edits be handled automatically?
    Is it better to ditch this approach and to just cache them myself (this doesn't sound great for deploying in a cluster)?
    Ideas?

The cache synch feature as it exists today is an "all-or-nothing" thing: you either synch everything in your app, or nothing. There is no mechanism within TopLink cache synch that you can exploit for more app-specific synchronization.
    Keeping in mind that I haven't spent much time looking at your app and use cases, I still think that the helper class is the way to go, because it sounds like your need for refreshing is rather infrequent and very specific. I would just make use of JMS and have your app send updates.
    I.e., in some node in the cluster:
Vector changed = new Vector();
UnitOfWork uow = session.acquireUnitOfWork();
MyObject mo = (MyObject) uow.registerObject(someObject); // registerObject returns Object, so cast
// user updates mo in a GUI
changed.addElement(mo);
uow.commit();
MoHelper.broadcastChange(changed);
    Then in MoHelper:
public static void broadcastChange(Vector changed) {
    // Group the changed objects' ids by class name.
    Hashtable classnameAndIds = new Hashtable();
    for (Enumeration e = changed.elements(); e.hasMoreElements();) {
        MyObject i = (MyObject) e.nextElement();
        Vector ids = (Vector) classnameAndIds.get(i.getClassname());
        if (ids == null) {
            ids = new Vector();
            classnameAndIds.put(i.getClassname(), ids);
        }
        ids.addElement(i.getId());
    }
    // Publish the map to the JMS topic (pseudocode; see the publish sketch below).
    jmsTopic.send(classnameAndIds);
}
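[Editor's note: the jmsTopic.send(...) call above is pseudocode. As one possibility (this sketch is not from the original post; the factory and topic are assumed to have been looked up from JNDI elsewhere, and the names here are illustrative only), the Hashtable could be published as a javax.jms.ObjectMessage:]
import java.util.Hashtable;
import javax.jms.*;

public class MoPublisher {
    // Publishes the class-name-to-ids map to the given topic.
    public static void publish(TopicConnectionFactory factory, Topic topic,
                               Hashtable classnameAndIds) throws JMSException {
        TopicConnection connection = factory.createTopicConnection();
        try {
            TopicSession session =
                connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            TopicPublisher publisher = session.createPublisher(topic);
            // Hashtable is Serializable, so it can travel as an ObjectMessage payload.
            publisher.publish(session.createObjectMessage(classnameAndIds));
        } finally {
            connection.close();
        }
    }
}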
    Then in each node in the cluster you have a listener to the topic/queue:
public void processJMSMessage(Hashtable classnameAndIds) throws ClassNotFoundException {
    for (Enumeration e = classnameAndIds.keys(); e.hasMoreElements();) {
        String classname = (String) e.nextElement();
        Vector idsVector = (Vector) classnameAndIds.get(classname);
        Class c = Class.forName(classname);
        // Re-read the changed rows and force the results into this node's identity map.
        ReadAllQuery raq = new ReadAllQuery(c);
        raq.refreshIdentityMapResult();
        ExpressionBuilder b = new ExpressionBuilder();
        Expression exp = b.get("id").in(idsVector);
        raq.setSelectionCriteria(exp);
        session.executeQuery(raq);
    }
}
    - Don
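[Editor's note: to receive those messages, each node needs a javax.jms.MessageListener registered on its topic subscriber. The following sketch is not from the original post; the CacheRefresher interface simply stands in for whatever class defines processJMSMessage above.]
import java.util.Hashtable;
import javax.jms.*;

interface CacheRefresher {
    void processJMSMessage(Hashtable classnameAndIds) throws Exception;
}

public class CacheRefreshListener implements MessageListener {
    private final CacheRefresher refresher;

    public CacheRefreshListener(CacheRefresher refresher) {
        this.refresher = refresher;
    }

    public void onMessage(Message message) {
        try {
            // The publisher sent the class-name-to-ids map as an ObjectMessage payload.
            Hashtable classnameAndIds = (Hashtable) ((ObjectMessage) message).getObject();
            refresher.processJMSMessage(classnameAndIds);
        } catch (Exception e) {
            e.printStackTrace(); // in real code, log and handle the failure
        }
    }
}
The listener would be attached with something like topicSession.createSubscriber(topic).setMessageListener(new CacheRefreshListener(refresher)).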
