Native TopLink to EclipseLink to JPA - Migration Best Practice

I am currently looking at the future technical stack of our developments and would appreciate any advice concerning best-practice migration paths.
Our current platform is as follows:
Oracle 10g AS -> TopLink 10g -> Spring 2.5.x
We have approximately 100 separate TopLink Mapping Workbench projects (one per DDD Aggregate object, in effect) and therefore 100 Repositories (or DAOs), some using TopLink code (e.g. Expression Builder, Class Extractors, etc.) on top of the mappings to handle the object-to-RDB (legacy) mismatch.
Future platform is:
Oracle 11g AS -> EclipseLink -> Spring 3.x
Migration issues are as follows:
Spring 3.x does not provide any native TopLink ORM support
Spring 2.5.x requires TopLink 10g to provide native TopLink ORM support
My current plan is as follows:
1. Migrate code and mappings to use EclipseLink (as per http://wiki.eclipse.org/EclipseLink/Examples/MigratingFromOracleTopLink)
2. Temporarily re-implement the Spring 2.5.x -> TopLink 10g support code to use EclipseLink (e.g. TopLinkDaoSupport etc.) to enable testing of this step.
3. Refactor all Repositories/DAOs and support code to use the JPA engine (i.e. EntityManager etc.); a rough sketch of such a refactored finder follows the quote below.
4. Move to Spring 3.x
5. Move to 11g (when available!)
Step 2 is only required to enable testing of the mapping changes, without changing to use the JPA engine.
Step 3 will only work if my understanding of the following statement is correct (i.e. I can use the JPA engine to run native TopLink mappings and associated code):
Quote:"Deployment XML files from Oracle TopLink 10.1.3 and above can be read by EclipseLink."
Specific questions are:
Is my understanding correct regarding the above?
Is there any other path to achieve the goal of using 11g, EclipseLink (and Spring 3.x)?
Is this achievable without refactoring all XML mappings from native -> JPA?
Many thanks for any assistance.
Marc

It is possible to use the native/Mapping Workbench TopLink/EclipseLink deployment XML files with JPA in EclipseLink; this is correct. You just need to pass a persistence property giving your sessions.xml file location (a sketch follows below). The native API is also still supported in EclipseLink.
James : http://www.eclipselink.org
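A minimal sketch of that approach, assuming a persistence unit named "migrated-pu" and a sessions.xml on the classpath (both names are placeholders; the property keys eclipselink.sessions-xml and eclipselink.session-name are EclipseLink's documented persistence unit properties):
    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class NativeMappingsBootstrap {

        public static EntityManagerFactory createFactory() {
            Map<String, String> props = new HashMap<String, String>();
            // Point the JPA persistence unit at the existing native deployment XML
            props.put("eclipselink.sessions-xml", "META-INF/sessions.xml");
            // Name of the session defined inside sessions.xml (placeholder value)
            props.put("eclipselink.session-name", "default");
            return Persistence.createEntityManagerFactory("migrated-pu", props);
        }

        public static void main(String[] args) {
            EntityManagerFactory emf = createFactory();
            EntityManager em = emf.createEntityManager();
            try {
                // Entities mapped in the native project XML are now usable through JPA,
                // e.g. em.find(SomeEntity.class, someId);
            } finally {
                em.close();
                emf.close();
            }
        }
    }
The same two properties can equally be declared in persistence.xml; either way the persistence unit still needs to name EclipseLink's JPA provider (org.eclipse.persistence.jpa.PersistenceProvider).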

Similar Messages

  • Migration Best Practice When Using an Auth Source

    Hi,
    I'm looking for some advice on migration best practices or more specifically, how to choose whether to import/export groups and users or to let the auth source do a sync to bring users and groups into each environment.
    One of our customers is using an LDAP auth source to synchronize users and groups. I'm trying to help them do a migration from a development environment to a test environment. I'd like to export/import security on each object as I migrate it, but does this mean I have to export/import the groups on each object's ACLs before I export/import each object? What about users? I'd like to leave users and groups out of the PTE files and just export/import the auth source and let it run in each environment. But I'm afraid the UUIDs for the newly created groups will be different and they won't match up with object ACLs any more, causing all the objects to lose their security settings.
    If anyone has done this before, any suggestions about best practices and gotchas when using the migration wizard in conjunction with an auth source would be much appreciated.
    Thanks,
    Chris Bucchere
    Bucchere Development Group
    [email protected]
    http://www.bucchere.com

    The best practice here would be to migrate only the auth source through the migration wizard, and then do an LDAP sync on the new system to pull in the users and groups. The migration wizard will then just "do the right thing" in matching up the users and groups on the ACLs of objects between the two systems.
    Users and groups are actually a special case during migration -- they are resolved first by UUID, but if that is not found, then a user with the same auth source UUID and unique auth name is also treated as a match. Since you are importing from the same LDAP auth source, the unique auth name for the user/group should be the same on both systems. The auth source's UUID will also match on the two systems, since you just migrated that over using the migration wizard.

  • Data Migration Best Practice

    Is there a clear-cut best practice procedure for conducting data migration from one company to a new one?

    I don't think there is a clear-cut answer for that. Best practice would always be relative. It varies dramatically depending on many factors. There is no magic bullet here.
    One exception to the above: you should always use tab-delimited text format. It is a DTW-friendly format.
    Thanks,
    Gordon

  • New white paper: Character Set Migration Best Practices

    This paper can be found on the Globalization Home Page at:
    http://technet.oracle.com/tech/globalization/pdf/mwp.pdf
    This paper outlines the best practices for database character set
    migration that have been utilized successfully on behalf of hundreds of
    customers. Following these methods will help determine which
    strategies are best suited for your environment and will help minimize
    risk and downtime. This paper also highlights migration to Unicode.
    Many customers today are finding Unicode to be essential to supporting
    their global businesses.

    Sorry about that. I posted that too soon. It should become available today (Monday Aug 22nd).
    Doug

  • Win xp to Win 7 migration -- best practice

    Hi , 
    What are the best practices that need to be followed when we migrate xp to win 7 using configuration manager?
    - like computer name (should we rename it during migration or keep it as is?)
    - USMT: what should be migrated?
    Please share any pointers/suggestions. Thanks in advance.
    Regards,

    First determine your needs... do you really need to capture the user data or not? Perhaps the users can store their precious data themselves before the OS upgrade, so you don't need to worry about it. If you can make this kind of political decision, then
    you don't need USMT. The same goes for the computer name: you can keep the old names if you please. It's a political decision; you should use a unique name for each of your computers. Some prefer PC serial numbers, some prefer something else;
    it's really up to you to decide.
    Some technical pointers to consider:
    Clients should have ConfigMgr client installed on them before the migration (so that they appear in the console and can be instructed to do things, like run task sequence...)
    If the clients use static IP addresses, you need to configure your TS to capture those settings and use them during the upgrade process...

  • GRC AACG/TCG and CCG control migration best practice.

    Are there any best practice documents which illustrate the step-by-step migration of AACG/TCG and CCG controls from the development instance to production? Also, how should one take a backup of the same?
    Thanks,
    Arka

    There are no automated out-of-the-box tools to migrate anything from CCG. In AACG/TCG you can export and import Access Models (which include the Entitlements) and Global Conditions. You will have to manually set up roles, users, path conditions, etc.
    You can't clone AACG/TCG or CCG.
    Regards,
    Roger Drolet
    OIC

  • EIS Migration Best Practice

    Hello, All
    I have a couple of questions on EIS upgrade and migrations. Appreciate any useful inputs:
    1. What is the best way to automate the EIS application migration from one environment to another? I doubt whether LCM can be used. On the other hand, I think there may be export/import options available, but is there a command line utility to trigger this export/import so that we can put it into a batch job? If that works, is there anything else that needs to be migrated?
    2. What about upgrading from 7 to the latest version of EIS? Can we simply restore the EIS catalog and let the system automatically convert/upgrade the catalog?
    Thanks a lot!

    That is correct, there is no command line migration utility.
    Supposedly, you can install a 32 bit version of EIS on a 32 bit environment and do the XML import on that, but I haven't had any success with it.
    Yes, you should just be able to use the backup/restore of the EIS repository for EIS migration. We did a simple "lift & shift" of the repository database when we upgraded. We didn't rebuild or migrate anything.
    We could have just left the catalog in place and pointed the new EIS at it, but our SQLServer2005 group wanted to keep the naming conventions consistent across all the Oracle Hyperion databases.
    Tim Young

  • Best Practice for migration to Exadata2

    Hi Guru,
    I'm planning to migrate an Oracle RAC 11g (11.2.0.2) cluster on HP-UX Itanium machines to a new Exadata 2 system.
    Are there best practices? Where can I find documentation about the migration?
    Thanks very much
    Regards
    Gio
    Edited by: ggiulian on 18-ago-2011 7.39

    There are several docs available on MOS
    HP Oracle Exadata Migration Best Practices [ID 760390.1]
    Oracle Exadata Best Practices [ID 757552.1]
    Oracle Sun Database Machine X2-2/X2-8 Migration Best Practices [ID 1312308.1]
    If you already have Exadata, I recommend opening an SR with Oracle and engaging with ACS.
    - Wilson
    www.michaelwilsondba.info

  • Best practices TopLink Mapping Workbench multi-user + CVS?

    This might be a very important issue in our decision whether or not to choose TopLink --
    How well are multi-user development and CVS supported when using the TopLink Mapping Workbench? Are there best practices regarding this use case?
    Thanks.

    We have no problem with the Workbench and CVS. Only a couple of our developers are responsible for the mappings, so we haven't really run into concurrent edits. It's pure XML, so a decent merge tool with XML support should let you resolve conflicts pretty easily.

  • Native TopLink named query with named parameters

    Hello,
    Defining my metadata in native TopLink XML and using the native TopLink Session interface, I can access and successfully execute a named query using positional parameters (parameters passed to match ?1, ?2, etc.). For this I used the Session.executeQuery(String, Class, List) method, e.g.
    select p from Person p where p.name = ?1
    Now, how can I get the same Session to execute named queries using named parameters? None of the Session.executeQuery methods seem suitable ... Am I missing anything here? e.g.
    select p from Person p where p.age = :age
    I can't find a good match for this use case in the Session API (http://www.oracle.com/technology/products/ias/toplink/doc/1013/main/b13698/oracle/toplink/sessions/Session.html). I would expect something like:
    Session.executeQuery(String queryName, Class target, List argNames, List argValues)
    or
    Session.executeQuery(String queryName, Class target, Map argsKeyedByName)
    but I can't find any good match. Can anyone please enlighten me?
    Thanks in advance,
    Best regards,
    Giovanni

    Hello Chris,
    Many thanks for your response. I am sorry if I did not explain my problem properly.
    Suppose I have already defined a named query in the metadata XXXProject.xml using the <opm:querying ... element; this JPQL named query "customFinder" already exists and would look something like:
    select p from Person p where p.firstname=:firstname and p.lastname=:lastname and p.birthdate=:birthdate
    now say you want to execute this query from the Session:
    Vector args = new Vector();
    // how do you know the order? you shouldn't know the order!
    // you know only the parameter names and that's what I mean
    // about named parameters
    // This args setup is wrong ... I need a way to specify to which
    // parameter name each argument corresponds to. In other words
    // if the named query where criteria order of parameters is modified
    // perhaps because of pruning composite keys etc you won't break the
    // existing code ...
    args.add(new Date());
    args.add("Azua");
    args.add("Giovanni");
    Session session = ...
    session.executeQuery("customFinder", Person.class, args);
    JPA supports both types of queries, positional parameters and named parameters. Your explanation above only covers the former; my question refers to the latter.
    I have not yet found the API for this ... though I am investigating along the lines of:
    DatabaseQuery query = session.getQuery("customFinder");
    and then trying to assign the arguments in the same order that the parameters are defined in the query? (A sketch of this idea follows at the end of this message.)
    Thanks in advance,
    Best regards,
    Giovanni
    Edited by: bravegag on 29.05.2009 08:06
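    A rough sketch of that idea (TopLink 10.1.3 package names; the Person class and field names are the placeholders from the example above): Session.getQuery(String) returns the named DatabaseQuery, and DatabaseQuery.getArguments() lists the declared argument names, so the values can be ordered by name before calling executeQuery:
    import java.util.Date;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Vector;
    import oracle.toplink.queryframework.DatabaseQuery;
    import oracle.toplink.sessions.Session;

    public class PersonFinder {

        public Person findByName(Session session, String first, String last, Date birth) {
            // Values keyed by parameter name, independent of declaration order
            Map<String, Object> byName = new HashMap<String, Object>();
            byName.put("firstname", first);
            byName.put("lastname", last);
            byName.put("birthdate", birth);

            DatabaseQuery query = session.getQuery("customFinder");
            // getArguments() returns the argument names in the order the query expects
            Vector argumentValues = new Vector();
            for (Object argName : query.getArguments()) {
                argumentValues.add(byName.get(argName));
            }
            return (Person) session.executeQuery(query, argumentValues);
        }
    }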

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they insist that I save and provide scripts for every single commit, in proper order, necessary both to build the tables and to insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they insist that I save and provide scripts for every single commit, in proper order, necessary both to build the tables and to insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse system generated by an ETL process might be exempt; but the process that creates that data is not exempt - that process and ultimately the data - must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business user, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed there is a documented procedure for restoring the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say, it simply cannot be that easy, for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention what changes are being made to actually USE what you are adding.

  • What are the best practices to migrate VPN users for inter-forest migration?

    What are the best practices to migrate VPN users for inter-forest migration?

    It depends on various factors. There is no "generic" solution or best practice recommendation. Which migration tool are you planning to use?
    Quest (QMM) has a VPN migration solution/tool.
    ADMT - you can develop your own service based solution if required. I believe it was mentioned in my blog post.
    Santhosh Sivarajan | Houston, TX | www.sivarajan.com
    ITIL,MCITP,MCTS,MCSE (W2K3/W2K/NT4),MCSA(W2K3/W2K/MSG),Network+,CCNA
    Windows Server 2012 Book - Migrating from 2008 to Windows Server 2012
    This posting is provided AS IS with no warranties, and confers no rights.

  • Best practice for migrating IDOCs?

    Hi,
    I need to migrate some IDocs to another system for 'historical reference'.
    However, I don't want to move them using the regular setup, as I don't want the inbound processing to be triggered.
    The data that was created in the original system by the processed IDocs will be migrated to the new system using the migration workbench. I only need to migrate the IDocs as-is due to legal requirements.
    What is the best way to do this? I can see three solutions:
    A) Download the IDoc table contents to a local file and upload them in the new system. A quick and dirty approach, but it might also be a bit risky.
    B) Use LSMW. However, I'm not sure whether this is feasible for IDocs.
    C) Use ALE and set up a custom partner profile where inbound processing only writes the IDocs to the database. Send the IDocs from the legacy system to the new system. Using standard functionality in this way seems to me to be the best solution, but I need to make sure that the IDocs, once migrated, get the same status as they had in the old system.
    Any help/input will be appreciated
    Regards
    Karl Johan
    PS. For anyone interested in the business case: within the EU the utility market was deregulated a few years ago, so that any customer can buy electricity from any supplier. When a customer switches supplier this is handled via EDI, in SAP using ALE and IDocs. I'm working on a merger between two utility companies, and for legal reasons we need to move the IDocs. All other data is migrated using the migration workbench for IS-U.

    Hi Daniele
    I am not entirely sure what you are asking. Could you please provide additional information?
    Are you looking for best practice recommendations for governance, for example: change transports between DEV, QA and PRD in BPC 7.0?
    What is the best method? Server Manager backup and restore, etc.?
    And
    Best Practice recommendations on how to upgrade to a different version of BPC, for example: Upgrading from BPC 7.0 to 7.5 or 10.0 ?
    Kind Regards
    Daniel

  • Need advice on best practice when using TopLink with external transactions

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to external transactions so we can make database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like members of this forum, and experts, to help comment on whether these recommendations are indeed valid or in line with best practice. And for folks that have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes, the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                // read through the active unit of work so uncommitted changes are seen
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As is generally the case with this sort of question, there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls and when you use a UnitOfWork are still dependent on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached: this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use this, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork (a small sketch follows below).
    --Gordon
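    A tiny sketch of that last point (the Person class and the key generator are placeholders): when setShouldNewObjectsBeCached(true) is used, the application assigns the primary key before registering the new object in the UnitOfWork:
    Person person = new Person();
    // Key must be assigned by the application before registration
    person.setId(applicationSequence.nextId()); // placeholder key generator
    Person working = (Person) uow.registerObject(person);
    // make further changes on 'working'; the external transaction commit flushes them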

  • Best practice for database migration in 11g

    Hello,
    Database migration is required due to an OS change. Here, I have two database instances, say A and B, on the old server where the RDBMS version is 11.1.0.7.0. They need to be migrated to a new OS where Oracle has been installed with version 11.2.0.2.0.
    Since all data + objects need to be migrated into the new server, I want to know what the best practice is and how to do that. Thanks in advance for your necessary guidance.
    Thanks and Regards,
    Prosenjit

    Hi Prosenjit,
    you have some options.
    1. RMAN restore: you can restore your database via RMAN to the new host, and then upgrade it.
        Please follow the instructions in MOS Note: RMAN Restore of Backups as Part of a Database Upgrade (Doc ID 790559.1)
    2. Data Guard: check the MOS Note: Mixed Oracle Version support with Data Guard Redo Transport Services (Doc ID 785347.1)
    3. Full Export / Import (DataPump)
    Borys
