My experience migrating Kodo 3.4 to 4.1

Hello Stefan,
I struggled with Kodo 4.0 and gave it up. Kodo 4.1 seems to be a much better
release. I migrated my app in a day. First I managed to run it against 3.4
metadata with some property file changes (the migration docs are not very good
and miss a few things, but if you use Kodo's automated migration tools they may
do for you what I was doing manually). If you use lots of custom
field mappings (especially non-trivial mappings), allocate much more time for
the conversion - the whole thing has changed. I have not had a chance to convert
my mappings and had to band-aid them with externalizers and other things for
now. One thing you lose in kodo3.4 mode is the ability to query by interface,
since it must now be declared explicitly.
A couple of tips here:
- kodo.Log: kodo(DefaultLevel=INFO, SQL=TRACE...) - "kodo" is no longer a valid
logger name
- kodo.jdbc.DBDictionary: oracle(BatchLimit=30) - BatchLimit=30 is no longer a
valid option; use
kodo.jdbc.SQLFactory: (BatchLimit=50) instead
- kodo.PersistentClasses=... no longer works; use
kodo.MetaDataFactory: kodo3(Types=...) in kodo3 mode or
kodo.MetaDataFactory: (Types=...) in jdo2 mode
- Any SQL with a DATE column is no longer batched, leading to a 2-3x
performance drop. The decision was made based on bugs in the Oracle 9 drivers in
batching mode. If you have the latest drivers (and database), in my experience
you will not have any problems. So to re-enable it you can register your own
instance of AdvancedSQL (now a factored-out part of DBDictionary):
kodo.jdbc.SQLFactory: (BatchLimit=50,
AdvancedSQL=com.peacetech.jdo.kodo.kodo4.patch.OracleAdvancedSQL)
where OracleAdvancedSQL could look like:
// (imports for java.sql.Types and Kodo's Column class omitted)
public class OracleAdvancedSQL extends kodo.jdbc.sql.OracleAdvancedSQL {
  @Override public boolean canBatch(Column col) {
    switch (col.getType()) {
      case Types.DATE:
        return true;   // re-enable batching for DATE columns
      default:
        return super.canBatch(col);
    }
  }
}
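To pull those property tips together, here is a minimal kodo.properties sketch for jdo2 mode. The class name in Types is a placeholder for your own persistent class, and the kodo.Log line is only my guess at the new syntax, so double-check it against the 4.1 docs:

kodo.Log: DefaultLevel=INFO, SQL=TRACE
kodo.MetaDataFactory: (Types=com.example.Foo)
kodo.jdbc.SQLFactory: (BatchLimit=50,
AdvancedSQL=com.peacetech.jdo.kodo.kodo4.patch.OracleAdvancedSQL)

In kodo3 compatibility mode the MetaDataFactory line would use the kodo3(...) form instead.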
- I have not tested read performance much since I was concentrating on
writes. But write performance, even with batching enabled, does not seem to be
up to the 3.4 level. I ran the 3.4 and 4.1 versions side by side against a
dedicated Oracle 10 server and observed a consistent 30-40% decrease in
performance while persisting a large graph of fairly small objects.
The SQL generated by both versions was very close if not identical (I only did a
spot-check), but in the case of INSERTs you would not expect it to be any different anyway.
I tried profiling the 4.1 version and found some significant hot spots but could
not decipher them in any reasonable time because of the huge depth of the stacks
and the lack of source code. I might try it again if I have time because
performance is critical for us.
- I have not tried any new/advanced features yet, including the new mappings,
detachment, data cache, or quality of eager fetch, so I cannot give you any
good feedback on that. At least I can say that this release is worth trying -
after migration my application worked as expected except for the lower
performance.
I also have to say I do not know how well automated migration of Kodo 3.4
metadata to JDO2 metadata works (if it exists), because I use my model-driven
code generator: I just developed a JDO2 plugin for it and regenerated my
metadata from the model (I did not have to regenerate my Java classes, of
course).
Alex
Then I created native JDO2 mappings and everything worked.

Denis,
Could you email it to me please shurik at peacetech dot com
Thanks
Alex
"Denis Sukhoroslov" <[email protected]> wrote in message
news:[email protected]...
Alex,
The issue was in version 3.4.1. BEA has provided a patch, no new version.
Denis.
"Alex Roytman" <[email protected]> wrote in message
news:[email protected]...
Denis,
In which version did you observe it and which version fixed it?
Thank you
Alex
"Denis Sukhoroslov" <[email protected]> wrote in message
news:[email protected]...
I don't know, I didn't try 4.1 yet. It is possible that this issue
doesn't exist in Kodo 4.x at all.
"Christiaan" <[email protected]> wrote in message
news:[email protected]...
Nice! Is it also solved for 4.1?
regards,
Christiaan
"Denis Sukhoroslov" <[email protected]> wrote in message
news:[email protected]...
Finally, BEA has solved the issue I mentioned. Reading cached PCs
which have embedded objects has become much faster (about 10 times in my
tests).
Thank you very much to all who were involved in this job.
Denis.
"Denis Sukhoroslov" <[email protected]> wrote in message
news:[email protected]...
Hi Alex,
I know about default-fetch-group, of course I marked these embedded
fields properly. You're right, it is not a cache miss but an
unnecessary fetch from the DB. It's strange that nobody has found this
before. I managed to create a standalone test case and sent it to BEA
support. They agree that it is a bug, but still can't fix the issue.
The test is quite small, so if anyone is interested I can send it here.
Denis.
"Alex Roytman" <[email protected]> wrote in message
news:[email protected]...
Hi Denis,
That's very strange. All custom fields such as enums etc. are
essentially mapped onto regular JDO-mandated types. I use it all the
time and have not observed this behavior, but I might have missed it
of course. I have a suspicion that what you are seeing is not cache
misses but rather fetches outside of the default fetch group. Keep in
mind that Kodo does not fetch any custom field as part of the default
fetch group unless you explicitly specify it in your package.jdo.
So, try to mark all your custom-mapped fields with
default-fetch-group="true" and I suspect all your extra database
selects will disappear.
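For example, something along these lines in package.jdo (the class and field names here are just placeholders):

<jdo>
  <package name="com.example.model">
    <class name="Order">
      <!-- custom-mapped field pulled into the default fetch group -->
      <field name="status" default-fetch-group="true"/>
    </class>
  </package>
</jdo>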
Read speed, indeed, is always the critical part. I just have not had a
chance to play with 4.1 reads enough to say if it is faster or
slower. There are more ways to optimize reads (various flavors of
eager fetches, custom optimized mapping of collections including
embedding ...) but very few optimizations for updates.
Alex
"Denis Sukhoroslov" <[email protected]> wrote in message
news:[email protected]...
Hi Alex.
My question is off topic, but it looks like you may have an
answer. BEA support did nothing for the last few months.
We still use Kodo 3.4.1. The DB is Sybase 12.5.x. In our app we're very
concerned about performance as well. But we do have many more reads
than writes. So we're trying to cache as much as possible. The Kodo
distributed cache works quite well. At least, it gives better
performance than Gemfire and Tangosol on the same use cases. But
we found a bug in its caching mechanism: when you have a persistent
class and this class has an embedded attribute of some
non-primitive type (like an enum or just a simple complex type with
one or two attributes in it), Kodo bypasses the cache and performs a
select against the DB each time. Have you seen this? Is it possible to solve
via custom mapping, what do you think?
Thanks. Denis.
"Alex Roytman" <[email protected]> wrote in message
news:[email protected]...

Similar Messages

  • Experiences Migrating Tools Forms to J2EE : EVO + Exodus ADF

    Hi,
    has anyone used the tools
    -EVO from NEOS Software
    -Exodus ADF from Cipher Software?
    We are planning to migrate our 500 forms into J2EE (ADF Faces), so we would be very pleased if anyone has experience with these tools.
    Thanks and Regards
    Wolfgang Brandt

    In J2EE it is much easier to get the 3rd value of record 5 (multi row).
    I've not had much exposure to web-based development, but I can see a number of ways that a Forms multi-record block has an advantage over a tabular form on a webpage:
    For data entry:
    o You don't need to click a button and refresh the page when you run out of rows
    o You can restrict the user to a field/record until it's in a valid state
    o Records are locked instantly
    o You can validate without refreshing the page
    o You can commit each record automatically as it's entered/deleted/updated
    For viewing data:
    o You can have multiple MRBs on a page (very hard if not impossible in App Express)
    o You can move between master records and show their children just by using the cursor keys
    o You can access all the records at once (scrolling in a block is much friendlier than page refreshes and much more efficient than a huge iframe)
    In general, navigating between records is much easier in Forms, and there's less need to reach for the mouse. Having to do a go_record to get a field value is annoying, but on the whole I don't think it's a big deal. At least in Forms we don't have to loop through a collection of records just to do a delete!
    Another advantage is that Forms development requires skills only in pl/sql and the very similar forms built-ins. For an ADF application that does anything more than the wizards can create, you'd need skills in one or more of java, sql, html, css, and javascript.
    IMHO the only reason to migrate away from forms is if you can't find the developers (unlikely!) or you want to open the application up to the general web without java/jinitiator in between.

  • Experiences migrating from Sybase to Oracle

    Hi all,
    I have been asked to look into migrating a database from Sybase to Oracle.
    I know there is lots of info on Oracle's website, which I have downloaded, but I was just hoping for some info about some of your experiences regarding your processes. What are the gotchas. I suspect there will be a few.
    Anything that will help would be of interest.
    Many thanks
    VicC

    Hi VicC,
    I work within the SQL Developer team and am not a customer as such, but the following may be of use.
    SQL Developer is a migration aid; it's not a 100% magic bullet.
    SQL Developer does a lot of the manual conversion for you quickly, but you may find that testing/tuning and the application migration (if any) take the most time.
    SQL Developer aims to migrate your tables, indexes, and data without issue. But SQL objects like procedures, triggers, views, and functions will likely require manual work on top of the automatic conversion provided by SQL Developer.
    SQL Developer does not yet automatically handle things like the tablespace structure or the logins, although these things can be scripted and managed more easily within SQL Developer.
    I would recommend downloading it and performing the migration on a non-production instance ASAP.
    SQL Developer can perform a migration quickly, and you will then be able to assess what SQL Developer can and can't do for you.
    I would recommend creating a migration repository on an Oracle database local to the SQL Developer instance you are using (install Oracle XE if you don't have a local database).
    I would also recommend downloading SQL Developer 2.1
    Here are some key sites regarding Sybase migrations.
    http://www.oracle.com/technology/tech/migration/workbench/viewlets/sqlserver.html . This viewlet for SQL Server is the same process for Sybase.
    http://www.oracle.com/technology/obe/hol08/sqldev_migration/sybase/migrate_sybase_otn.htm . This Oracle By Example takes you step by step through a Sybase migration including some gotchas
    http://dermotoneill.blogspot.com/2008/06/sql-developer-migration-workbench-151_11.html . This is my blog outlining the steps to perform a Sybase migration.
    Regards,
    Dermot
    SQL Developer Team.

  • CAD experience migrating from 8.6 to 10.1

    Hello,
    We are upgrading our UCCX to version 10.0(1).  For now, we are keeping our Cisco Agent Desktop application installed instead of having the Cisco Finesse Desktop.  After the upgrade takes place, will the clients experience anything with the Cisco Agent Desktop application (like a notification pop up or anything like that)? If so, they should be able to download the latest update directly from the application right?  Also, their IP phones should receive an update as well right?
    Thanks in advance
    First time poster

    The Agent experience is basically two pop ups upon launching CAD after the upgrade, which they will have to click to continue.  They will need appropriate permissions to install software on their PC for the upgrade to work.  All three apps: CDA, CAD and CSD are upgraded at the same time.  Then CAD launches after a few minutes and then they login.  The upgrade can sometimes hide the window and it will appear like nothing is happening.  Just keep waiting.  Shouldn't take more than 10 minutes.
    Test CAD out on the night of the upgrade.  I've seen some settings fail to come over as well as macros and the like not work post upgrade.

  • Big performance drop migrating to Kodo 4.1 (Why would Kodo refuse to batch inserts in certain tables)

    Hello,
    While migrating from Kodo 3.4 to Kodo 4.1, I noticed a significant drop in
    insert performance. I traced it down to some strange batching behavior.
    While most of the PCs were committed with batched inserts, one particular
    class refused to batch and would insert row by row, resulting in a 10x
    performance drop.
    There is nothing special about the class. Its hierarchy is mapped onto its
    base table, except for one of the lowest members which is mapped vertically.
    Thank you very much
    Alex Roytman
    Peace Technology, Inc.
    301-206-9696x103

    See my post "My experience migrating Kodo 3.4 to 4.1"
    "Stefan Hansel" <[email protected]> wrote in message
    news:[email protected]..
    Abe White wrote:
    This case is being handled elsewhere, but for the benefit of others:
    Kodo 4 works around a date/timestamp batching bug in some Oracle drivers
    by not batching rows with date/timestamp columns.
    -------------
    Abe, could you go a bit deeper into it?
    It sounds like kodo 3 was batching those rows and had no problem with it?
    We only ship with the suggested oracle driver 10.2.0.1.0.
    50% of our customers use oracle, so it would be a pity if performance
    drops.
    Alex ... it sounds like you successfully migrated to kodo4.
    Could you give a short summary on how 'difficult' it was in the end?
    From the docs it sounds very easy (except the sentence 'Many configuration
    properties and defaults have changed.') but from the newsgroups it sounds as if
    you struggled a bit.
    I'm about to schedule our migration for the first quarter of 2007 and any hint
    helping us to estimate the time will help us.
    If you compare it with the migration from kodo 2 to 3 ... is it easier or worse?

  • What is the best way to migrate mail from OSX 10.4.11 (Tiger) to OSX 10.7 (Lion)

    Greetings,
    Does anyone have experience migrating mail boxes from OSX 10.4 to OSX 10.7?
    I have an old 1.5 Ghz PowerPC G4 laptop with a lot of email (about 8 years worth). I just purchased a new Mac Mini with OSX 10.7 Lion and I'd like to migrate 2 groups of files, mail and photos. I figure the mail might be the most difficult, hence this request for information.
    I have read where you can run a migration assistant for OS 10.5 and 10.6 to OSX 10.7 but there are only a few articles about the earlier OS.
    Also, I would like to migrate ONLY the mail box into an existing user account. I would rather not migrate older files and applications that I have on the older PC.
    Can anyone point me to articles about doing this?
    Thank you,
    Tim Kern

    Hi Linc,
    Thank you for your suggestion.
    Export then import makes sense. Do you know of any "How to..." articles for exporting from Tiger?
    Thanks again,
    Tim Kern

  • How to migrate when Migration Assistant isn't helping

    I'm helping a friend move from her old MacBook (OS X 10.4.11) to a new MacBook Air (OS X 10.7.1). Her home directory on the MacBook is Filevault-protected.
    The first thing that I tried was using Migration Assistant to migrate via wi-fi. However, the two machines just stall at the "searching for other computers" stage. Neither computer is ever able to see the other (I had a similar experience migrating a Snow Leopard MacBook Pro to a newer machine, making me think that wireless migration just doesn't work).
    I tried on a non-Apple wireless network, and on a network using an Airport Express. I also tried having the Air create an ad hoc network, and connecting the MacBook to that. In all cases, Migration Assistant on the Air was not able to detect the Macbook and just stayed at "searching for other computers" indefinitely.
    The Air doesn't have a Firewire port, so a Firewire connection is not an option. I believe that it's not possible to connect two machines using USB and have Migration Assistant work over the USB connection.
    I don't have the USB-to-Ethernet adaptor, so Ethernet isn't an option. The MacBook is running OS X 10.4.11, so making a TimeMachine backup is not an option.
    I used Carbon Copy Cloner to create a disk image of the MacBook's disk on an external drive, in the hope that Migration Assistant could import from that. However, Migration Assistant on the Air won't migrate user data from a Filevault-protected home directory.
    It looks to me as if the best route is for me to remove Filevault protection from her home directory, image the disk to the external drive, and then migrate from that image.
    However, my understanding is that disabling Filevault protection requires free space on the hard drive equal to the size of the protected user home directory (at least temporarily). There simply isn't that much space available.
    Is there a way that I can duplicate the user's home directory to the external drive, removing FileVault encryption as I go? Or can I clone the user's home to the external drive, and then strip FileVault from the copy there?
    Any suggestions or tips would be gratefully received.

    Also, I want to add that in the accounts section I DO NOT see two accounts there. I have been reading the forums and some suggested that after using MA it creates another user profile and all the files should be there. However, I do not see another profile. Also, I think it is strange that it took less than 1 minute to transfer my files! Help!

  • Kodo 4.1.2 in Weblogic 10 Problem

    I was told by BEA that Kodo/openJPA is included in Weblogic 10. However, now I have Weblogic 10 but I could not locate many Kodo classes in the Weblogic libraries. I searched all the JARs under BEA_HOME\wlserver_10.0\server\lib.
    I also tried to migrate a Kodo/JPA application from Weblogic 9.2 to Weblogic 10. My application depends on a Kodo JCA deployment in a managed environment. The application and Kodo JCA deployed fine into Weblogic 10. But when I tested the application, the test failed when I tried to create an EntityManager from the EntityManagerFactory:
    Caused by: <4|false|0.9.7> org.apache.openjpa.persistence.ArgumentException: config-error
         at weblogic.kodo.event.ClusterRemoteCommitProvider.endConfiguration(ClusterRemoteCommitProvider.java:112)
         at org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:447)
         at org.apache.openjpa.conf.RemoteCommitProviderValue.instantiate(RemoteCommitProviderValue.java:122)
         at org.apache.openjpa.conf.RemoteCommitProviderValue.instantiateProvider(RemoteCommitProviderValue.java:103)
         at org.apache.openjpa.conf.RemoteCommitProviderValue.instantiateProvider(RemoteCommitProviderValue.java:95)
         at org.apache.openjpa.conf.OpenJPAConfigurationImpl.newRemoteCommitProviderInstance(OpenJPAConfigurationImpl.java:708)
         at org.apache.openjpa.event.RemoteCommitEventManager.(RemoteCommitEventManager.java:56)
         at org.apache.openjpa.conf.OpenJPAConfigurationImpl.getRemoteCommitEventManager(OpenJPAConfigurationImpl.java:720)
         at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:177)
         at org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:139)
         at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:187)
         at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:140)
         at kodo.persistence.jdbc.JPAConnectionFactory.createEntityManager(JPAConnectionFactory.java:144)
         at kodo.persistence.jdbc.JPAConnectionFactory.createEntityManager(JPAConnectionFactory.java:23)
         at com.psi.vida.ejb.JPASessionBean.list(JPASessionBean.java:165)
         at com.psi.vida.ejb.JPASessionEJB_lvtqkz_EOImpl.list(JPASessionEJB_lvtqkz_EOImpl.java:134)
         at com.psi.vida.ejb.JPASessionEJB_lvtqkz_EOImpl_WLSkel.invoke(Unknown Source)
         at weblogic.rmi.internal.ServerRequest.sendReceive(ServerRequest.java:174)
         ... 17 more
    Caused by: java.lang.Exception: <0|true|0.9.7> org.apache.openjpa.persistence.PersistenceException: no-trasport
         at org.apache.openjpa.util.Exceptions.replaceNestedThrowables(Exceptions.java:230)
         at org.apache.openjpa.persistence.ArgumentException.writeObject(ArgumentException.java:104)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:890)
         at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1333)
         at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1284)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1073)
         at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1369)
         at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1341)
         at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1284)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1073)
         at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:291)
         at weblogic.rmi.extensions.server.CBVOutputStream.writeObject(CBVOutputStream.java:84)
         at weblogic.rmi.internal.ServerRequest.unmarshalThrowable(ServerRequest.java:349)
         at weblogic.rmi.internal.ServerRequest.getThrowable(ServerRequest.java:62)
         at weblogic.rmi.internal.ServerRequest.sendReceive(ServerRequest.java:203)
         ... 17 more

    I was told by BEA that Kodo/openJPA is included in
    Weblogic 10. However, now I have Weblogic 10 but I
    could not locate many Kodo classes in the Weblogic
    libraries. I searched all the JARs under
    BEA_HOME\wlserver_10.0\server\lib.
    They're in the (new) modules directory. weblogic.jar refers to stuff in the modules directory via its manifest classpath.
    I also tried to migrate Kodo/JPA application from
    Weblogic 9.2 to Weblogic 10. My application depends
    on Kodo JCA deployment in managed environment. The
    application and Kodo JCA deployed fine into Weblogic
    10. But when I test any application, the test failed
    when I tried to create an EntityManager from the
    EntityManagerFactory:
    Interesting. I do not know what the status of Kodo JCA testing is in WebLogic 10, but it sounds like something is a bit wonky.
    Basically, in a WLS environment, the default remote commit provider is automatically set to the new weblogic.kodo.event.ClusterRemoteCommitProvider, which uses the WLS clustering protocol to communicate cache notifications. The error that you're seeing indicates that cluster services are not available in the execution context. You can probably get around this by explicitly setting the 'kodo.RemoteCommitProvider' option to 'sjvm' (if you're not running in a cluster), or to whatever you had it set to in the past. (I'm guessing that it was unset in the past, as otherwise, the configuration should be picking up that instead of the new default.)
    However, personally, I much prefer the new persistence.xml configuration file format, compared to JCA configuration. (You can trivially use the persistence.xml format with Kodo JDO, even though it's a JPA-specified feature.) You might want to look into moving away from JCA and to the persistence.xml style instead.
    If you do this, you'll end up putting a META-INF/persistence.xml file in your EAR (and possibly a META-INF/persistence-configuration.xml file, if you want to use the strongly-typed Kodo XML configuration format), and replacing your JNDI lookups with java:comp/env/persistence/<persistence-unit-name>. (I think that's the right location. I might be mistaken, though.)
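    As a rough sketch (the unit name and entity class below are placeholders, not anything from your app), a minimal META-INF/persistence.xml that also pins the remote commit provider could look like:
    <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
      <persistence-unit name="myUnit">
        <class>com.example.MyEntity</class>
        <properties>
          <!-- 'sjvm' = single-JVM provider; avoids the cluster-based default -->
          <property name="kodo.RemoteCommitProvider" value="sjvm"/>
        </properties>
      </persistence-unit>
    </persistence>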
    Also, I can't guarantee that WebLogic 10 really handles JCA configuration all that well; some bits of that exception make it look like maybe some resources are not available in the classloader, which is surprising. So, it's possible that there is some sort of more fundamental JCA problem here (and not just a problem with the new remote commit provider).
    -Patrick

  • Migrate Library to External HD

    I've been trying to move my entire library off of my 17" MBP to an external HD (including ratings, playlists, play counts, etc.) so that I can more easily move between that machine and my 15" MBP. I followed Apple's directions a few times and tried those on Lifehacker as well. I've had moderate success, but I've never managed to get my full iTunes experience migrated.
    Even worse, the last time I tried, I ended up with duplicate entries for a ton of items in my source library. The duplicates aren't in the file system, only in the library metadata. Is there a way to remove those duplicates from the library without creating a new library? And, hopefully, without having to go through every duplicate and deleting manually. If I can get my source library - the one on my 17" machine - back to square one, I can continue trying to migrate everything.
    Thanks.

    Quit iTunes.
    Drag \Music\iTunes\ folder to the external.
    Hold Option and launch iTunes.
    Select *Choose library* and select the *iTunes library* file in the \iTunes\ folder you copied to the external.
    Make sure the external is on and mounted before you launch iTunes on your laptop.

  • Anyone successfully migrated an existing DSfW server to an OES11SP1 server using the new migration wizard ?

    See above - I do not want to be the first one.
    W. Prindl

    I had a very bad experience migrating a simple file server from OES2SP3
    32-bit to OES11 (without SP).
    So I am a little bit cautious.
    I want to migrate a single DC DSfW server (OES2SP3 32-bit) in a multi
    server edir tree to a new OES11SP1 server. The only thing which runs in
    addition to DSfW on this server is an iprint server, which shall be
    migrated, too.
    W. Prindl
    hargagan wrote:
    >
    >Hi ,
    >
    >This is just to understand the status as of now :
    >
    >1. Are you able to migrate any DSfW successfully ?
    >
    >1. What is the source you are going to migrate ?
    >
    >2. How many servers are you planning to migrate ?
    >
    >Do you want us to address some specific issues like in documentation
    >etc, for increasing your confidence ?

  • Need help in RFP - Migration Estimates for Crystal Reports from BI 3.1 to 4.1

    Hi,
    I have to migrate 100 Crystal Reports from BI 3.1 to 4.1. The reports vary from simple to complex.
    I want to know the time estimates (no. of hours/report) for the above upgrade for a client proposal.
    What parameters should I mention in the proposal?
    Thanks

    You don't actually have to upgrade the reports to a newer version of Crystal in order to migrate them from BO 3.1 to BI 4.1 and have them work.  BI 4.1 will work correctly with Crystal Reports that have been created in Crystal XI r2 and higher.  The inner structure of the .rpt file is essentially the same, with new things being added when new features from the newer versions are used.
    Some things to remember when migrating Crystal reports using the Upgrade Management Tool (based on my experience migrating a number of clients from 3.1 to 4.x):
    1.  If the reports were saved with a specific printer, make sure that the printer has been configured on the BI 4.1 server.  In 3.1 this was not an issue, but there's a change in printer processing in the newer versions of Windows Server where BI 4.1 will run that causes errors/warnings in the migration of these reports.
    2.  If your reports use ODBC connections, make sure that ALL of the required connections have been created prior to migration.  Even though BI 4.1 is 64-bit software, the Crystal "servers" are still 32-bit, so they require 32-bit connections.  If you're connecting to MS SQL Server, make sure you use SQL Server Native Connection 10.0 or 11.0 for these connections.
    3.  If you have Dynamic Parameters and you've had to make any changes to which DB drivers are used - even if it's in an ODBC connection that has the same name as the old system, be aware that you'll have to create new data connections that use the new ODBC connections or drivers and then point the data foundations to the new data connections - otherwise your dynamic parameters won't work.
    -Dell

  • 9i and 10g migration/interoperability

    Does anyone have any experience migrating 9i Lite to 10g Lite?
    Particularly:
    Can 9i clients sync with a 10g server?
    Can a 9i Mobile Server and a 10g Mobile Server both be running replication on the same sets of tables in the enterprise db?
    Thanks for any insights!

    Hi ppl, at my work we were doing a migration from 9i Lite to 10g - not only Lite but the Mobile Server too; the only thing that did not change was the enterprise server (it remains 9i).
    We found some problems with the migration (although it isn't a real migration, because we did not export the 9i database and import it into 10g afterwards).
    That's because we didn't find out how (or it may not exist).
    I'll describe the problems and incompatibilities we ran into, to make your migration a little sweeter.
    9i clients can't sync with a 10g Mobile Server. ODBC, msync.exe and other files are different in these two versions of Lite and Mobile Server; in a few words, they are different versions and incompatible with each other.
    Can a 9i Mobile Server and a 10g Mobile Server run replication on the same set of tables in the enterprise DB? Yes, they can, but we ran them on different physical machines (the mobile servers, I mean); I don't know if they can co-exist on the same machine.
    --Another question: the migration.
    It isn't really a migration, because you can't do it with import/export tools - there aren't any, it's that simple.
    We had to reconstruct the same things that were done in the past with the 9i Mobile Server (create the project and publications, publish the applications), plus the users and groups for the Mobile Server.
    Take a look at the question I posted:
    Exportation / Importation of Mobile Database Workbench information
    No one has replied to me yet, so I suppose there isn't a way to automate the mobile server implementation.
    --Other things we suffered through
    Don't install 9i Lite and 10g Lite on the same client; we had problems and needed to uninstall everything and install 10g alone.
    The ODBC entries supplied with the 10g Lite installation are not configurable; you can't modify a single option in Control Panel > Administrative Tools > ODBC Origins. I suppose Oracle did this to automate the Lite deployment.
    Install 10g as the user who needs Lite, because the ODBC connection can only exist for this user and you CAN'T create an ODBC connection to 10g manually (YES, THIS IS TOO BAD, but it's true).
    I don't know what more to tell you now; those were the most interesting two weeks of my life involving Oracle.
    And, good luck!
    Joan

  • Migrating MySQL database to Oracle 11g RAC

    Hi,
    I would like to migrate an entire database from MySQL to my new Oracle 11g RAC. How do I do it?

    Hi Turloch,
    I was able to configure SQLDeveloper to connect to Oracle and MySQL.
    I am having a hard time migrating data from MySQL, maybe due to table size. I have MySQL tables with more than 20M, and the PC I am running SQL Developer on runs out of memory every time I try to migrate the tables.
    Can you share some experience migrating from MySQL to Oracle?
    Thank You.

  • SQL Developer migration tool

    Does anyone have any experience migrating just one schema from MySQL to Oracle?
    I've specified my database that I want the migration tool to use when I do the migration, but it goes ahead and tries to grab every single function, procedure, table for every single user in the database.
    I'm basically trying to migrate a customers' mysql dataset to our oracle db.
    Any thoughts?
    Thanks in advance all!
    ---Patrick

    Patrick,
    did you already read the Oracle® Database SQL Developer Supplementary Information for MySQL Migrations manual, available at:
    http://download.oracle.com/docs/cd/E12151_01/doc.150/e12155/toc.htm
    Chapter 2.2 deals with schema migrations. Perhaps that is useful for you. If something goes wrong for you, or works differently than described in that manual, then please let me know more details.
    Best regards
    Wolfgang

  • Migrating to G6 from V5.0.4

    Hi all,
    We are about to start a migration from V5.0.4 Java to G6. Before we do so, I'd like to talk to other Plumtree customers about their issues, experiences, migration problems etc.
    Are there any volunteers out there who would be prepared to talk to me about their site? I'm in the UK (South Coast) so would be happy to visit or talk about things over the phone. Any takers?
    Many thanks, Adam

    Hi Adam,
    I have been working on Plumtree with Birlasoft. It's a software services firm that provides Plumtree services, including upgrades, 24*7 support and portal administration.
    Migration from 5.0.4 ideally should not be as tough as migrating from 4.5. You could perhaps mail me at [email protected] with details such as the current architecture, user size, number of communities, portlets etc.; the components involved (Plumtree content server, collaboration server, studio server etc.); and also the security mode (SSLs implemented), custom portlets etc.
    It's only after we have an idea about the current architecture that a staged and planned migration can be suggested.
