Oracle 9 with Solaris 10 and VCS 5.0

Hi All,
I've looked around for documentation on the following combination, but I usually only find Solaris 9 with VCS 4.0:
Oracle 9i on Solaris 10 with Veritas Cluster Server 5.0.
VCS 5.0 is installed and running on two nodes. I want to install Oracle 9i on each node.
Do I need any special Oracle 9i RAC?
Any steps, suggestions, tips, or even script to automate would be greatly appreciated.
I was looking at the OraToolKit.ch site, but it looks like they don't cover Oracle 9 anymore.
Thank you,
Nitin

Oracle 9 is in extended support, which ends July 31 of this year.
Why would you want to build something new on an antique foundation?
If you can explain it, we may be able to help you further.

Similar Messages

  • Question for using ORACLE with SOLARIS

    Hi Experts,
I worked with Oracle on Linux in one of my projects two years back, but only as a developer, writing SQL queries and creating tables and objects.
Now I got a question from my TL, which is:
Tell me the consequences of using Oracle with Solaris.
I have not worked with Oracle on Solaris. Can someone answer this question, covering:
1. Differences between Oracle on Linux and Oracle on Solaris.
2. Advantages and disadvantages of Oracle on Linux versus Oracle on Solaris.
    Thanks,
    MuraliDharan V

    Hi MuraliDharan V,
It would have been better if you had searched first.
Here is one:
Advantage for Linux64-bit Versus Solaris-x86_64 OS in RAC
Besides that, your question is incomplete:
- Which Oracle product? Database, etc.
- Which version? 9i, 10g, etc.
A simple search on Google might have answered your question as well, but there seems to be a new trend of dumping questions here before searching.
    Ex Senior DBA

  • Oracle 10, Solaris and ZFS

    Hello,
I'm planning to run Oracle 10 under Solaris 10 on a ZFS filesystem. Is Oracle 10 compatible with ZFS? The Solaris ARC uses most of the available memory (RAM) for caching, and releases it as other processes demand more memory. Is such dynamic memory allocation compatible with Oracle, or does Oracle need fixed memory allocations?
    Thanks,
    - Karl-Josef

In principle all should be fine. ZFS obeys all filesystem semantics, and Oracle will access it through the normal filesystem APIs. I'm not sure Oracle needs to officially state compatibility with ZFS; I would have thought it was the other way around - ZFS needs to state that it is a fully compatible file system, so that any application will work on it.
ZFS has many neat design features. But be aware - it is a copy-on-write file system: it never updates an existing block on disk. Instead it writes the updated data to a new block in a new location, then writes new parent metadata blocks that point to that block, and so on. This has benefits for snapshotting a file system and for fast, reliable recovery after a system crash. However, one update to one data block can cause a cascaded series of writes of many blocks to disk.
    This can have a major impact if you put your redo logs on ZFS. You need to consider this, and if possible do some comparison tests between ZFS and UFS with logging and direct I/O. Redo log writes on COMMIT are synchronous and must go all the way to the disk device itself. This could cause ZFS to have to do many physical disk writes, just for writing one redo log block.
    Oracle needs its SGA memory up front, permanently allocated. Solaris should handle this properly, and release as much filesystem cache memory as needed when the Oracle shared memory is allocated. If it doesn't then Sun have messed up big time. But I cannot imagine this, so I am sure your Oracle SGA will be created fine.
    I like the design of ZFS a lot. It has similarities with Oracle's ASM - a built in volume manager that abstracts underlying raw disks to a pool of directly useful storage. ASM abstracts to pools for database storage objects, ZFS abstracts to pools for filesystems. Much better than simple volume managers that abstract raw disks to just logical disks. You still end up with disks, and other management issues. I'm still undecided as to whether it makes sense to store an OLTP database on it that needs to process a high transaction rate, given the extra writes incurred by ZFS.
    I also assume you are going to use an 8 KB database block size to match the filesystem block size? You don't want small database writes leading to bigger ZFS writes, and vice versa.
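If you do go with ZFS for datafiles, matching the dataset's recordsize to the database block size is the usual tuning step. A minimal sketch, assuming a hypothetical dataset name (set it before creating the datafiles, since recordsize only affects newly written files):

```shell
# Align ZFS recordsize with an 8 KB Oracle block size.
# "dbpool/oradata" is a made-up dataset name - substitute your own.
zfs set recordsize=8k dbpool/oradata
zfs get recordsize dbpool/oradata   # verify the setting
```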
    John

  • Oracle on Solaris and JAVA app on Windows clients

Hi, I'm currently developing a Java app that accesses data from an Oracle database; the app runs on Windows clients. I'm wondering about the best way to access the database - is JDBC the only way to do this? Thanks in advance.

    Yes, you can download a JDBC driver from Oracle. I think this site might do it:
    http://otn.oracle.com/software/tech/java/sqlj_jdbc/content.html
    I'm assuming that you're using the JDBC-ODBC bridge driver now to get into Access. You'll just have to change the name of the JDBC driver class and the database URL to connect to Oracle.
    Once you have it, you'll have to make sure you put it in the right place for your app. If it's a Web-based app, it'll depend on the container you're running.
    - MOD

  • [SOLVED] SGA_MAX_SIZE pre-allocated with Solaris 10?

    Hi all,
    I'm about to build a new production database to migrate an existing 8.1.7 database to 10.2.0.3. I'm in the enviable position of having a good chunk of memory to play with on the new system (compared with the existing one) so was looking at a suitable size for the SGA... when something pinged in my memory about SGA_MAX_SIZE and memory allocation in the OS where some platforms will allocate the entire amount of SGA_MAX_SIZE rather than just SGA_TARGET.
    So I did a little test. Using Solaris 10 and Oracle 10.2.0.3 I've created a basic database with SGA_MAX_SIZE set to 400MB and SGA_TARGET 280MB
    $ sqlplus
    SQL*Plus: Release 10.2.0.3.0 - Production on Wed Jan 30 18:31:21 2008
    Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.
    Enter user-name: / as sysdba
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    SQL> show parameter sga
    NAME                                 TYPE        VALUE
    lock_sga                             boolean     FALSE
    pre_page_sga                         boolean     FALSE
    sga_max_size                         big integer 400M
sga_target                           big integer 280M
So I was expecting to see the OS pre-allocate 280MB of memory, but when I checked, the segment is actually the full 400MB (i.e. SGA_MAX_SIZE) (my database owner is 'ora10g'):
$ ipcs -a
IPC status from <running system> as of Wed Jan 30 18:31:36 GMT 2008
T         ID      KEY        MODE        OWNER    GROUP  CREATOR   CGROUP CBYTES  QNUM QBYTES LSPID LRPID   STIME    RTIME    CTIME
Message Queues:
T         ID      KEY        MODE        OWNER    GROUP  CREATOR   CGROUP NATTCH      SEGSZ  CPID  LPID   ATIME    DTIME    CTIME
Shared Memory:
m         22   0x2394e4   rw-r---   ora10g   10gdba   ora10g   10gdba     20  419438592  2386  2542 18:31:22 18:31:28 18:28:18
T         ID      KEY        MODE        OWNER    GROUP  CREATOR   CGROUP NSEMS   OTIME    CTIME
Semaphores:
s         23   0x89a070e8 ra-r---   ora10g   10gdba   ora10g   10gdba   154 18:31:31 18:28:18
$
I wasn't sure whether Solaris 10 was one of the OSs with truly dynamic memory for the SGA, but had hoped it was... this seems to say different. Really I'm just after confirmation that I'm reading this correctly.
    Thanks.
    Joseph

    I don't want to get bogged down in too many details, as the links provided in previous posts have many details of SGA tests and the results of what happened. I just want to add a bit of explanation about the Oracle SGA and shared memory on UNIX and Solaris in particular.
    As you know Oracle's SGA is generally a single segment of shared memory. Historically this was 'normal' memory and could be paged out to the swap device. So a 500 MB SGA on a 1 GB physical memory system, would allocate 500 MB from the swap device for paging purposes, but might not use 500 MB of physical memory i.e. free memory might not decrease by 500 MB. How much physical memory depended on what pages in the SGA were accessed, and how frequently.
    At some point some people realised that this paging of the SGA was actually slowing performance of Oracle, as now some 'memory' accesses by Oracle could actually cause 'disk' accesses by paging in saved pages from the swap device. So some operating systems introduced a 'lock' option when creating a shared memory segment (shmat system call if memory serves me). And this was often enabled by a corresponding Oracle initialisation parameter, such as lock_sga.
    Now a 'locked' SGA did use up the full physical memory, and was guaranteed not to be paged out to disk. So Oracle SGA access was now always at memory speed, and consistent.
    Some operating systems took advantage of this 'lock' flag to shared memory segment creation to implement some other performance optimisations. One is not to allocate paging storage from swap space anyway, as it cannot be used by this shared memory segment. Another is to share the secondary page tables within the virtual memory sub-system for this segment over all processes attached to it i.e. one shared page table for the segment, not one page table per process. This can lead to massive memory savings on large SGAs with many attached shadow server processes. Another optimisation on this non-paged, contiguous memory segment is to use large memory pages instead of standard small ones. On Solaris instead of one page entry covering 8 KB of physical memory, it covers 8 MB of physical memory. This reduces the size of the virtual memory page table by a factor of 1,000 - another major memory saving.
    These were some of the optimisations that the original Red Hat Enterprise Linux had to introduce, to play catch up with Solaris, and to not waste memory on large page tables.
Due to these extra optimisations, Solaris chose to call this 'locking' of shared memory segments 'intimate shared memory', or ISM for short. And I think there was a corresponding Oracle parameter, 'use_ism'. This is now the default setting in Oracle ports to Solaris.
    As a result, this is why when Oracle grabs its shared memory segment up front (SGA_MAX_SIZE), it results in that amount of real physical memory being allocated and used.
    With Oracle 9i and 10g when Oracle introduced the SGA_TARGET and other settings and could dynamically resize the SGA, this messed things up for Solaris. Because the shared memory segment was 'Intimate' by default, and was not backed up by paging space on the swap device, it could never shrink in size, or release memory as it could not be paged out.
Eventually Sun wrote a workaround for this problem and called it Dynamic Intimate Shared Memory (DISM). It is not on by default in Oracle, hence you are seeing your shared memory segment allocated at the full SGA_MAX_SIZE of physical memory. DISM allows the 'lock' flag to be turned on and off on a shared memory segment, and to be applied over various memory ranges.
    I am not sure of the details, and so am beginning to get vague here. But I remember that this was a workaround on Sun's part to still get the benefits of ISM and the memory savings from large virtual memory pages and shared secondary page tables, while allowing Oracle to manage the SGA size dynamically and be able to release memory back for use by other things. I'm not sure if DISM allows Oracle to mark memory areas as pageable or locked, or whether it allows Oracle to really grow and shrink the size of a single shared memory segment. I presumed it added yet more flags to the various shared memory system calls.
    Although DISM should work on normal, single Solaris systems, as you know it is not enabled by default, and requires a special initialisation parameter. Also be aware that there are issues with DISM on high end Solaris systems that support Domains (F15K, F25K, etc.) and in Solaris Zones or Containers. Domains have problems when you want to dynamically remove a CPU/Memory board from the system, and the allocations of memory on that board must be reallocated to other memory boards. This can break the rule that a locked shared memory segment must occupy contiguous physical memory. It took Sun another couple of releases of Solaris (or patches or quarterly releases) before they got DISM to work properly in a system with domains.
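As a rough way to see whether a given instance's SGA segment is ISM or DISM, its mappings can be inspected from the OS - a sketch, where the process ID is a placeholder for any Oracle shadow or background process:

```shell
# List shared memory segments (note the SGA segment's ID and size)
ipcs -m
# Inspect one Oracle process's mappings; ISM/DISM segments are labelled
# "[ ism shmid=... ]" or "[ dism shmid=... ]", with their page sizes shown.
pmap -xs $ORACLE_PID | grep -i 'ism shmid'
```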
    I hope I am not trying to teach my granny to suck eggs, if you know what I mean. I just thought I'd provide a bit more background details.
    John

  • Bandwidth manager with Solaris 10

    Hi all
I'm a newbie with Solaris and would like to test the IPQoS feature for bandwidth limiting at an ISP.
I just read the official Sun IPQoS tutorial, but I'm looking for simple examples of limiting bandwidth per client IP.
Does anyone know where I can find more examples?
    Thanks in advance.
    roberto

Did you check this one out already?
    http://docs.sun.com/app/docs/doc/816-4554/6maoq028g?a=view
    //M.

  • Integration with ERP and other systems

    We have integrated iP Oracle with OEX and we are now trying to
    integrate with BBP/SAP, Maconomy Procurement systems. Has anyone
    tried to integrate OEX with other systems?

We are now running a project to integrate OEX with SAP R/3 4.6c (MM module). Oracle's recommendation includes a semiautomatic process associated with the RFQ in SAP and P.O. placement in SAP.
This semiautomatic process is a consequence of poor XML integration with some transactions in OEX, in which the buyer needs to operate in both systems: SAP and OEX. Nevertheless, we are trying to request the incorporation of these changes in OEX's new version.
It's a matter of time before Oracle understands that full integration is needed to satisfy users; SAP has done it, and most ERP transactions can be taken to XML format through their product Business Connector (originally a webMethods product bought by SAP).
    Juan Carlos Blesa

  • Solaris 10 and Hitachi LUN mapping with Oracle 10g RAC and ASM?

    Hi all,
I am working on an Oracle 10g RAC and ASM installation with Sun E6900 servers attached to a Hitachi SAN for shared storage, with Sun Solaris 10 as the server OS. We are using Oracle 10g Release 2 (10.2.0.3) Clusterware for the clustering software, raw devices for shared storage, and the Veritas VxFS 4.1 filesystem.
    My question is this:
    How do I map the raw devices and LUNs on the Hitachi SAN to Solaris 10 OS and Oracle 10g RAC ASM?
    I am aware that with an Oracle 10g RAC and ASM instance, one needs to configure the ASM instance initialization parameter file to set the asm_diskstring setting to recognize the LUNs that are presented to the host.
    I know that Sun Solaris 10 uses /dev/rdsk/CwTxDySz naming convention at the OS level for disks. However, how would I map this to Oracle 10g ASM settings?
    I cannot find this critical piece of information ANYWHERE!!!!
    Thanks for your help!

    You don't seem to state categorically that you are using Solaris Cluster, so I'll assume it since this is mainly a forum about Solaris Cluster (and IMHO, Solaris Cluster with Clusterware is better than Clusterware on its own).
    Clusterware has to see the same device names from all cluster nodes. This is why Solaris Cluster (SC) is a positive benefit over Clusterware because SC provides an automatically managed, consistent name space. Clusterware on its own forces you to manage either the symbolic links (or worse mknods) to create a consistent namespace!
So, given the SC consistent namespace, you simply add the raw devices into the ASM configuration, i.e. /dev/did/rdsk/dXsY. If you are using Solaris Volume Manager, you would use /dev/md/<setname>/rdsk/dXXX, and if you were using CVM/VxVM you would use /dev/vx/rdsk/<dg_name>/<dev_name>.
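To make that concrete, pointing ASM at the DID devices might look like the following sketch; the device names, slice, and ownership are assumptions to adjust for your own layout:

```shell
# Hand the raw DID slices to the oracle user (hypothetical device names)
chown oracle:dba /dev/did/rdsk/d4s6 /dev/did/rdsk/d5s6
chmod 660 /dev/did/rdsk/d4s6 /dev/did/rdsk/d5s6

# Tell the ASM instance where to discover them
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET asm_diskstring = '/dev/did/rdsk/d*s6' SCOPE=SPFILE;
EOF
```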
    Of course, if you genuinely are using Clusterware on its own, then you have somewhat of a management issue! ... time to think about installing SC?
    Tim
    ---

  • Oracle 9i Database and Solaris 10 Zones

Can an existing Oracle 9i database be moved into a new zone? The database resides on its own filesystem. The server is running Solaris 10; the zones are not set up yet, but Oracle is installed and the 2 databases are up and running.
    Basically there are 2 existing oracle 9i databases, and I want to setup 2 zones, where none other than the default global exist right now, and have each database in a zone.
    Thanks in advance.

    You need to do the following -
Configure loopback mount points from the global zone into the local zone through zonecfg (one for the Oracle binaries, another for the Oracle data). I am assuming that you want to share the same Oracle binary location between all the zones. The Oracle database mounts must be separate; make sure that you put each one in the respective zone's config only.
    Create an oracle user with dba group in both the zones. It's best if the user IDs & group IDs across all the zones & global zone match.
    Stop both the database instances in the global zone.
    zlogin to a zone, su as oracle and startup the instances.
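The zonecfg part of the first step might look like this sketch (the zone name and mount points are made up for illustration; the binaries are shared read-only, and each database filesystem goes into its own zone's config):

```shell
# Hypothetical zone "dbzone1": loopback-mount the shared Oracle binaries
# and this zone's database filesystem from the global zone.
zonecfg -z dbzone1 <<'EOF'
add fs
set dir=/u01/app/oracle
set special=/u01/app/oracle
set type=lofs
add options [ro,nodevices]
end
add fs
set dir=/u02/oradata/db1
set special=/u02/oradata/db1
set type=lofs
end
EOF
```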
    Hope that works!

  • Problems with oracle10 and solaris 10

    Hello,
I've been trying for 3 days to install and make Oracle 10g work with Solaris 10. I've downloaded Oracle 10.1.0.2 (it's the only one available for Solaris 64-bit). I've followed every bit of documentation I found in this forum, but still no go. I've read this thread and did what it says, but still no go.
    Oracle Database10g on Solaris 10
    My questions are:
1. I've set the semsys and shmsys parameters as instructed in the documentation, but when the installer reaches the kernel parameter check, it says that I need to update the kernel parameters BIT_SIZE and noexec_user_stack. According to the docs that I've read, noexec_user_stack is only for Solaris versions up to 5.8, but just to test I added it to the kernel, and I still get the same error. What gives? Where do I change BIT_SIZE and what value should I put?
    2. After ignoring these errors, and the installation has finished, I get this error on the console
    Aug 1 18:19:18 db1 root: [ID 702911 user.alert] (Oracle CSSD will be run out of init)
    Any assistance will be very much appreciated.
    Thanks

I've downloaded oracle 10.1.0.2 (it's the only one available for solaris 64 bitS).
I hope you are installing this on Solaris SPARC 64-bit.
According to the docs that I've read, the noexec_user_stack is only for soalris versions up to 5.8
noexec_user_stack was introduced in Solaris 7 and is valid for all later versions (9 and 10 included).
Aug 1 18:19:18 db1 root: [ID 702911 user.alert] (Oracle CSSD will be run out of init)
You can safely ignore this message.
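For completeness, if you do want noexec_user_stack on (it is a security hardening setting, not an installer requirement), it is still set in /etc/system on Solaris 10 - a sketch:

```shell
# Append the settings to /etc/system; a reboot is needed for them to take effect
echo 'set noexec_user_stack=1' >> /etc/system
echo 'set noexec_user_stack_log=1' >> /etc/system
```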

  • Oracle 9.2 i/o activity slow in v880 with solaris 9

v880 with 8 x 900MHz CPUs, 32GB RAM, and internal HDDs (10 x 146GB) without RAID.
Oracle data uploading and other I/O tasks take longer to perform than on another Intel server.
Does this server or application require any kind of tune-up or parameter changes?

Yes, and yesterday we allocated 8 GB of RAM for Oracle; still the problem is the same.
Oracle on Solaris with 8 x 900MHz CPUs and 32 GB RAM is running slower than Oracle on Windows 2003 with 2 x 3GHz CPUs and 8 GB RAM.

  • Make and dmake included with Solaris 10/Studio 11

    I've been working on a project where I'm converting makefiles that were originally developed for GNU make to utilize dmake.
    The one issue I've been having is that one dynamic macro "$<" (insert dependency) doesn't seem to work as documented in the make man page either with dmake and make (in /usr/ccs/bin, or /usr/xpg4/bin). Here's a small snippet of what I'm trying to figure out:
#### Targets
oraSQL.c: oraSQL.pc
	$(ORACLE_HOME)/bin/proc iname=$< dbms=v8 HOLD_CURSOR=NO RELEASE_CURSOR=YES ireclen=132 oreclen=132 select_error=no lines=yes sqlcheck=semantics xref=yes CODE=CPP ORACA=YES MODE=ORACLE MAXOPENCURSORS=50 userid=$(CONNSTR)
When make executes this, it should replace the "$<" with "oraSQL.pc". This does work with GNU make. Now, I know that Solaris make and GNU make are definitely not the same, but the man page for Solaris make, in the section titled "Dynamic Macros", clearly states that the same behavior should exist. Unfortunately, with Solaris make the macro is simply replaced with nothing.
Am I being a blockhead, or is there more to this that someone here might know about?
    Thanks in advance;

First of all, this is a well known problem. It is not a bug, but a difference in behavior between Sun "make" and GNU "make". The behavior of Sun "make" is documented in the man page, and we cannot change the default behavior because this would break existing builds.
Here is how it is documented in the make.1 man page:
$< The name of a dependency file, derived as if selected for use with an implicit rule.
Second, this problem has been reported several times (see CR 6593262 for example), and we addressed it - we provided an environment variable to specify the compatibility mode:
SUN_MAKE_COMPAT_MODE=GNU - compatibility with GNU "make"
Compatibility with GNU "make" is partially implemented, and will be improved in future releases.
Here is an example of how to set SUN_MAKE_COMPAT_MODE to avoid the problem mentioned in CR 6593262:
    1. This is what happens if SUN_MAKE_COMPAT_MODE is not set.
    % unsetenv SUN_MAKE_COMPAT_MODE
    % cat Makefile
    ## -- Makefile --
    ECHO=/usr/bin/echo
    all: this
    that:
    $(ECHO) "soccer"
    this: that
    $(ECHO) $<
    % dmake -m parallel this
    /usr/bin/echo "soccer"
    soccer
    /usr/bin/echo
    2. Here is what happens if SUN_MAKE_COMPAT_MODE=GNU
    % setenv SUN_MAKE_COMPAT_MODE GNU
    % env | grep SUN_MAKE_COMPAT_MODE
    SUN_MAKE_COMPAT_MODE=GNU
    % dmake -m parallel this
    /usr/bin/echo "soccer"
    soccer
    /usr/bin/echo that
    that
    So, in this case the output is identical to GNU make, correct?
    Please, try to set the environment variable:
    SUN_MAKE_COMPAT_MODE=GNU
    and restart your build.
    Thanks,
    Nik

  • Trying to simply connect to Oracle with VBscript/ASP - and I cannot.

    This is rather embarrassing. I am pretty fluent with ASP and VBscript, and I have written many a web application connecting to Microsoft SQL Server. Now I have a need to connect to an Oracle database, and I'm beating my head against the wall.
    1) Web server is Windows Server 2003 SP1
    2) Using ASP (not ASP.NET) & VBscript
    3) I have installed the Oracle drivers on the server - it is version 10g
    4) The administrator of the Oracle database to which I want to connect has created a username and password for me to use from within my code
    5) Here is the code I am trying to run:
    Set objConn = Server.CreateObject("ADODB.Connection")
    objConn.Open "Provider=MSDAORA;Data Source=XXXXXXX;User Id=YYYYYYY;Password=ZZZZZZZ;"
    That's it. 2 lines of code just trying to establish a connection. Using the user name and password provided to me by the administrator, and for Data Source I am using the IP address of the Oracle server (like I have done in the past when connecting to SQL Server). I receive the following error message when viewing this in a browser:
    Microsoft OLE DB Provider for Oracle error '80004005'
    ORA-12154: TNS:could not resolve the connect identifier specified
    Evidently, the Data Source I am using is not correct, but I was provided no other information from the admin. This is the first time any of us have tried to connect to Oracle using ASP/VBScript, so the administrator isn't sure what I need to do ... any help would be so appreciated.

    Hello,
    I got mine to work by setting up an ODBC System DSN and connecting to it. I think this bypasses the Microsoft driver, which might be what's causing the problem.
    Set Db = Server.CreateObject("ADODB.Connection")
    Db.Open "DSN=TEST;User ID=userid;Password=password;"
    Good luck, I've found it requires a lot of persistence...
    Al
    Springfield, MO
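A note on the original ORA-12154: with Provider=MSDAORA, the Data Source is resolved through the Oracle client's tnsnames.ora, so a bare IP address generally won't work. A hedged sketch of an entry to add in tnsnames.ora on the web server (the alias, host, port, and service name are all placeholders to get from your DBA):

```
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.0.2.10)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
```

Then use Data Source=MYDB in the connection string.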

  • Two EX90 devices can make video calls over Internet with No VCS-C and VCS-E

    Dear Experts;
I just started with TelePresence and VCS 2 weeks ago by going through Cisco docs and videos, and I have taken on the risk of implementing the infrastructure elements.
We are now implementing Cisco TelePresence with VCS-C, VCS-E, TMS, TCS, MCU, and endpoints with Jabber in one setup,
and in another setup CUCM 10.5, UCCX 10.5, IM&P, and Jabber with some 10 agents.
Now the question: in our building we have one EX90 on the 2nd floor and one EX90 on the 5th floor, and over the LAN we can make video calls between them using IP addresses.
In the same way, is it possible to make a video call between 2 EX90 devices in different locations in the same city over the Internet, without the involvement of VCS-C and VCS-E?
    Appreciate your valuable reply.
    Regards
    Debashis

The simple answer to that is NO! Where is the EX90 registered at the moment - on the VCS, or on CUCM?

  • Installation and creation of first database in Oracle on Solaris 5.10

    hi all,
I have dug around and found that /etc/system is no longer used for this in Solaris 10; the prctl command is now used to set the kernel parameters.
    I have solaris 10 - oracle 10g - Intel Processor.
Can anyone point me in the right direction, or provide a link to the doc I need, to set the correct kernel parameters in my version of Solaris and to get dbca to create an initial database? I keep getting the "Oracle not available" shared-memory error messages on the initial dbca creation of the database, and I know it is because the kernel parameters are not set right. I have followed the instructions found under the documentation here, but it gives me argument errors when issuing:
prctl -n project.max-sem-nsems -v 512 -r -i project default
All other parameters work fine, and when I check them they show up as correct. The only doc I could find was for SPARC. I didn't know if this made a difference in setting the parameters, but I assume it does.
    Thanks.

    Try this link
    http://www.dbspecialists.com/presentations/oracle10gsolaris.html
    Regards
    Raman
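For the record, on Solaris 10 the persistent way to set this is usually through the project database with projmod rather than a one-off prctl. A sketch against the 'default' project from the question (the 512 value is the question's, not a recommendation):

```shell
# Persist the semaphore limit in /etc/project (takes effect for new
# processes joining the project; no reboot needed)
projmod -s -K "project.max-sem-nsems=(priv,512,deny)" default
# Check the active value for the project
prctl -n project.max-sem-nsems -i project default
```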

  • Hibernate OR EclipseLink...Which is best with Weblogic and Oracle DB?

    Hi All,
In my company we are using Oracle DB and the Weblogic application server, so in the process of upgrading or switching to a new ORM, we shortlisted two options: Hibernate and EclipseLink.
I gathered the following summary regarding both ORMs:
    Hibernate:
1. When you need to train people, as we are going to do next week - most companies have Hibernate experts.
2. When you hire new developers, most of them come with specific Hibernate experience.
3. When you need to consult with experts, on the internet or as consultants, you have LOTS of options: endless forums and communities around Hibernate.
4. Hibernate is open source with a huge community. This means it will be improved all the time and will push the ORM market forward.
5. Hibernate is open source, which means you have the code in hand and, if needed, can fit it to your needs.
6. There are lots of plugins for Hibernate, such as validation tools, audit tools, etc. These have become standards as well and save you from implementing such things yourself.
7. One of the most important things with an ORM tool is to configure it according to your application's needs; the default settings usually don't fit. For that, when the market has huge experience with the tool's configuration and lots of experts (see points 1 and 3), chances are you will find similar cases and lots of knowledge about how to configure the tool, and thus your application.
    EclipseLink:
1. Fully supported by Oracle; Hibernate is not. In case of a problem, it could be cumbersome to prove that it is purely a Weblogic one. Concretely, we would have to prove it (waste of time and complexity).
2. EclipseLink is developed by Oracle and is the preferred ORM in the Weblogic/Oracle DB world.
3. Even if at a certain time EclipseLink was a bit behind Hibernate in features, EclipseLink evolved very fast, and we can consider that they have now closed the gap.
4. No additional fee as soon as you have a Weblogic license. You will need to pay an additional fee if you want professional support for Hibernate.
5. We are currently relying on Hibernate for our legacy offering and are facing problems with the second-level cache (JGroups). Today we are ripping out this part! The consequence is a limitation in our clustering approach (performance).
6. On the EclipseLink side, we do succeed in managing the first- and second-level caches in a clustered setup.
7. Indeed, Hibernate is open source, so you can imagine modifying it. In reality, the code is so complex that it is nearly impossible to modify. Moreover, as it is LGPL, you need to feed back all modified sources to the community systematically.
8. All tests Oracle performs on Weblogic use EclipseLink. Moreover, Oracle says that some specific optimizations are done for Oracle DB.
9. Hibernate comes from the JBoss community.
Right now we are preferring Hibernate, but there are concerns - EclipseLink is developed by Oracle and is the preferred ORM in the Weblogic/Oracle DB world (compatibility of the ORM with the DB and app server), plus the support comparison between the two ORMs - which are preventing us from finalizing the decision.
    If you want you can also reply to me @ [email protected].
    Thanks.

The way the ORMs are designed, integration with application servers is relatively simple, and they all provide the same features. Also, since WebLogic has been around for a while, all ORMs are well tested in this configuration.
Hibernate has a lot more users and is very often used with Oracle DB, so you can expect few bugs against Oracle DB - maybe even fewer than with EclipseLink, which is not much used. EclipseLink does provide support for some esoteric Oracle DB features like hierarchical and flashback queries.
OpenJPA and DataNucleus are also JPA compliant. It's likely that OpenJPA has a higher user base than EclipseLink, so fewer unknown bugs.
Oracle paid support is well known to be a bad joke; it's a negative return to use this channel, even if it were free. So in reality, you end up using the open (free) forums to get support.
What was lacking in Hibernate before was dynamic fetch planning, but they now have some support, see http://opensource.atlassian.com/projects/hibernate/browse/HHH-3414. OpenJPA was the first to implement this must-have.
EclipseLink has in-memory querying, which can be used, but the API does not help you leverage it, and EclipseLink's leadership has made it clear that they are not going to improve it; instead they want to push the Coherence cache.
Hibernate has an open API for the second-level cache, which means you can get out of a problem by using another implementation; for example, EHCache seems to be professionally tested, so I would be surprised if you found obvious bugs.
I cannot comment on Hibernate's source code quality, but I can tell you that the locking mechanism in EclipseLink used to be very fragile, and many concepts are dispersed across the code base.
The runtime monitoring of Hibernate has always been great, because JBoss has always been strong on JMX; EclipseLink has few usable features here.
If I were you, I would consider OpenJPA or Hibernate instead of EclipseLink. The main reason is that, because EclipseLink has such a low user base, I have found lots of obvious bugs in production, as if I were its only user. And when I submitted bugs to the small development team, which does not encourage user-base contributions, they were too busy trying to keep up adding the JPA interfaces on top of their existing proprietary APIs.
