Standby problem - log numbers out of sequence

Hi everyone! I am on 10.2.0.3.
My physical standby stopped applying logs because, all of a sudden, the logs went out of sequence. See the errors below: after seq 7975 and 7976, seq# 7974 showed up again:
Recovery of Online Redo Log: Thread 1 Group 200 Seq 7974 Reading mem 0
Mem# 0: /db/data3/FSPRD/stby_redo02.log
Mem# 1: /db/data4/FSPRD/stby_redo02_02.log
Thu Dec 18 16:14:58 2008
RFS[23]: Successfully opened standby log 100: '/db/data1/FSPRD/stby_redo01.log'
Thu Dec 18 16:15:01 2008
RFS[25]: Successfully opened standby log 300: '/db/data5/FSPRD/stby_redo03.log'
Thu Dec 18 16:15:01 2008
Primary database is in MAXIMUM PERFORMANCE mode
RFS[37]: Successfully opened standby log 400: '/db/data7/FSPRD/stby_redo04.log'
Thu Dec 18 16:15:02 2008
RFS[24]: Successfully opened standby log 200: '/db/data3/FSPRD/stby_redo02.log'
Thu Dec 18 16:15:03 2008
Media Recovery Waiting for thread 1 sequence 7975 (in transit)
Thu Dec 18 16:15:03 2008
Recovery of Online Redo Log: Thread 1 Group 100 Seq 7975 Reading mem 0
Mem# 0: /db/data1/FSPRD/stby_redo01.log
Mem# 1: /db/data2/FSPRD/stby_redo01_02.log
Media Recovery Waiting for thread 1 sequence 7976 (in transit)
Thu Dec 18 16:15:07 2008
Recovery of Online Redo Log: Thread 1 Group 200 Seq 7974 Reading mem 0
Mem# 0: /db/data3/FSPRD/stby_redo02.log
Mem# 1: /db/data4/FSPRD/stby_redo02_02.log
MRP0: Background Media Recovery terminated with error 355
Thu Dec 18 16:15:12 2008
Errors in file /db/dbdump/FSPRD/bdump/fsprd_mrp0_11351.trc:
ORA-00355: change numbers out of order
ORA-00353: log corruption near block 2 change 4994299896 time 12/18/2008 16:14:57
ORA-00312: online log 200 thread 1: '/db/data4/FSPRD/stby_redo02_02.log'
ORA-00312: online log 200 thread 1: '/db/data3/FSPRD/stby_redo02.log'
Managed Standby Recovery not using Real Time Apply
Thu Dec 18 16:15:12 2008
Primary database is in MAXIMUM PERFORMANCE mode
RFS[37]: Successfully opened standby log 100: '/db/data1/FSPRD/stby_redo01.log'
Thu Dec 18 16:15:13 2008
Recovery interrupted!
Thu Dec 18 16:15:13 2008
Errors in file /db/dbdump/FSPRD/bdump/fsprd_mrp0_11351.trc:
ORA-00355: change numbers out of order
ORA-00353: log corruption near block 2 change 4994299896 time 12/18/2008 16:14:57
ORA-00312: online log 200 thread 1: '/db/data4/FSPRD/stby_redo02_02.log'
ORA-00312: online log 200 thread 1: '/db/data3/FSPRD/stby_redo02.log'
Thu Dec 18 16:15:13 2008
MRP0: Background Media Recovery process shutdown (FSPRD)
Thu Dec 18 16:15:19 2008
I restarted the managed recovery, of course, and everything went fine from that point on, but I could not find the root cause of this problem.
Does anybody have any ideas, or has anyone experienced this before?
TIA.

I'm afraid you're hitting bug 6039415, "ORA-355 can occur during real time apply". The good news: there's a patch available.
HTH
Enrique
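
For reference, the restart the original poster describes is typically done on the standby with statements like the following (a sketch only; since the bug involves real-time apply, some sites drop back to archived-log-only apply until the patch is installed):
alter database recover managed standby database cancel;
-- real-time apply, reading the standby redo logs as they arrive:
alter database recover managed standby database using current logfile disconnect from session;
-- or plain managed recovery from archived logs only:
-- alter database recover managed standby database disconnect from session;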

Similar Messages

  • Line Item numbers out of sequence in C_T_DATA

    We are using several different data sources within include ZXRSAU01.
    For one of them, we have assigned a custom structure ZOXNRD0112 to C_T_DATA which contains two different fields for Line Item number. One of these fields is based on FAGLFLEXA-DOCLN and the other is based on BSEG-BUZEI.
    The values in these two fields should agree on each record contained within C_T_DATA, but occasionally we get a situation where they do not. BUZEI is always correct when compared to the Billing Document, but DOCLN will be out of sequence.
    For example:
    DOCLN BUZEI
    000001 001
    000003 002
    000002 003
    DOCLN & BUZEI both arrive already populated in the C_T_DATA table.
    We simply use the values as keys to read FAGLFLEXA and BSEG, the values in DOCLN & BUZEI themselves are not derived.
    They are there when entering the user exit ZXRSAU01, and they are still there when processing leaves ZXRSAU01.
    I don't have a lot of experience in BI, and am not familiar with how C_T_DATA receives its data.
    I'm trying to look back further in the processing to find where C_T_DATA is populated and draw some conclusions from that. I have asked a couple of our FI people to examine the documents involved to see what could be going on there.
    We are on ECC 6.0, and BI version 7.0.
    Has anyone out there run into this kind of problem before?

    Hello Dan,
    Welcome to SDN!
    The program ZXRSAU01 is an exit program for transactional DataSource extraction. Whenever the DataSource pulls data from ECC, it usually divides the data into several data packages, and for every package it calls that program. The internal table C_T_DATA contains all records of the current data package.
    We don't use a sequence to link data together; we use field mappings. For example, if I want to read FAGLFLEXA-DOCLN, I first need to find a key field, say XXXX, which exists in both C_T_DATA and FAGLFLEXA. Then use a statement like the one below to read FAGLFLEXA:
    DATA ls_structurename TYPE zoxnrd0112.  " work area for the extract structure assigned to C_T_DATA
    LOOP AT c_t_data INTO ls_structurename.
      SELECT SINGLE docln
        FROM faglflexa
        INTO ls_structurename-docln
        WHERE xxxx = ls_structurename-xxxx.  " XXXX = the shared key field
      MODIFY c_t_data FROM ls_structurename.
    ENDLOOP.
    With this code we don't need to worry about sequence, and it works for both full and delta loading.
    Please let us know if you have any questions.
    Regards,
    Frank

  • Unassigned Status Of Standby Redo Log Files

    I created 2 standby redo log groups and use LGWR on the primary site to
    transfer redo data; all is good. But when I query the V$STANDBY_LOG
    view, I find that the STATUS column of both of my standby redo logs is UNASSIGNED,
    and SEQUENCE#, THREAD#, and all the other columns are 0.
    Any explanation?

    Thanks for the reply, Sophie. I did perform a log switch at my primary site, but the status of the standby redo log files remained UNASSIGNED. I am pasting the messages from my alert log file here; maybe that can help you diagnose the problem.
    ALTER DATABASE SET STANDBY DATABASE PROTECTED
    Tue Jul 26 15:35:18 2005
    Completed: ALTER DATABASE SET STANDBY DATABASE PROTECTED
    Tue Jul 26 15:35:22 2005
    ALTER DATABASE OPEN
    Tue Jul 26 15:35:23 2005
    LGWR: Primary database is in CLUSTER CONSISTENT mode
    LGWR: Primary database is in MAXIMUM PROTECTION mode
    LGWR: Destination LOG_ARCHIVE_DEST_1 is not serviced by LGWR
    LNS0 started with pid=18
    Tue Jul 26 15:35:28 2005
    LGWR: Error 16086 verifying archivelog destination LOG_ARCHIVE_DEST_2
    LGWR: Continuing...
    Tue Jul 26 15:35:28 2005
    Errors in file e:\oracle\admin\test\bdump\test_lgwr_1864.trc:
    ORA-16086: standby database does not contain available standby log files
    LGWR: Error 16086 disconnecting from destination LOG_ARCHIVE_DEST_2 standby host 'TESTstdb'
    LGWR: Minimum of 1 applicable standby database required
    Tue Jul 26 15:35:28 2005
    Errors in file e:\oracle\admin\test\bdump\test_lgwr_1864.trc:
    ORA-16072: a minimum of one standby database destination is required
    LGWR: terminating instance due to error 16072
    Instance terminated by LGWR, pid = 1864
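    For what it's worth, ORA-16086 means the destination could not find a usable standby redo log for the incoming redo. A minimal sketch of the usual check and fix on the standby follows (the group number, file path, and size are illustrative; the size must match the primary's online redo log size, and the usual guideline is at least one more standby group per thread than there are online groups):
    -- compare online and standby redo logs on the standby
    select group#, thread#, bytes from v$log;
    select group#, thread#, bytes, status from v$standby_log;
    -- add standby redo log groups sized like the online redo logs
    alter database add standby logfile thread 1 group 10
      ('/some/path/stby_redo10.log') size 100M;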

  • Physical standby database standby redo log problem

    Hello
    We have a physical standby database. I've created some standby redo log files, but my problem is that they aren't used;
    their status in the v$standby_log view is UNASSIGNED,
    and I see this message (ORA-16086: standby database does not contain available standby log files) in the primary database alert_log file,
    even though when I run "alter system switch logfile" on the primary database it transfers redo to the physical standby database
    and an archive log file is created on the standby database.
    I've even recreated the standby redo log files and added new ones, but the problem wasn't solved.
    Do you know what the problem is?
    select group#, thread#, bytes, status from v$standby_log;
    GROUP#   THREAD#   BYTES       STATUS
    1        0         524288000   UNASSIGNED
    2        0         524288000   UNASSIGNED
    3        0         524288000   UNASSIGNED
    8        0         524288000   UNASSIGNED
    9        0         524288000   UNASSIGNED
    10       0         524288000   UNASSIGNED
    select group#, thread#, bytes, members, status from v$log;
    GROUP#   THREAD#   BYTES       MEMBERS   STATUS
    4        1         524288000   2         CLEARING
    7        1         524288000   2         CLEARING_CURRENT
    6        1         524288000   2         CLEARING
    5        1         524288000   2         CLEARING
    thanks

    Hello Anurag
    Thank you for your reply
    I have found something in the standby database alert_log too; it says:
    RFS[782]: Assigned to RFS process 3919
    RFS[782]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM AVAILABILITY mode
    Standby controlfile consistent with primary
    Primary database is in MAXIMUM AVAILABILITY mode
    Standby controlfile consistent with primary
    RFS[782]: No standby redo logfiles selected (reason:6)
    Sun Jan 31 13:59:43 2010
    Errors in file /u01/app/oracle/admin/tehrep/udump/tehrep_rfs_3919.trc:
    ORA-16086: standby database does not contain available standby log files
    Sun Jan 31 13:59:48 2010
    RFS[781]: Archived Log: '/disks/sda/tehrep/archivelogs/1_6516_670414641.dbf'
    Sun Jan 31 13:59:50 2010
    and the context "/u01/app/oracle/admin/tehrep/udump/tehrep_rfs_3919.trc"  is below :
    /u01/app/oracle/admin/tehrep/udump/tehrep_rfs_3919.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
    System name:    Linux
    Node name:      linserver2.com
    Release:        2.6.9-42.ELsmp
    Version:        #1 SMP Wed Jul 12 23:27:17 EDT 2006
    Machine:        i686
    Instance name: tehrep
    Redo thread mounted by this instance: 1
    Oracle process number: 58
    Unix process pid: 3919, image: [email protected]
    *** SERVICE NAME:() 2010-01-31 13:59:43.865
    *** SESSION ID:(109.1225) 2010-01-31 13:59:43.865
    KCRRFLAS
    KCRRSNPS
    No space in recovery area for active standby redo logs
    The primary database is operating in MAXIMUM PROTECTION
    or MAXIMUM AVAILABILITY mode, and the standby database
    does not contain adequate disk space in the recovery area
    to safely archive the contents of the standby redo logfiles.
    ORA-16086: standby database does not contain available standby log files
    When I saw the line "No space in recovery area for active standby redo logs", I thought the STANDBY_ARCHIVE_DEST parameter pointed somewhere without enough space, but when I checked I found that it points to a directory on disk "sda" that has enough space, so I don't know what that message means.
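    A quick way to confirm whether the recovery area on the standby really is full (a sketch, assuming the standby uses db_recovery_file_dest for its recovery area):
    select name, space_limit, space_used, space_reclaimable, number_of_files from v$recovery_file_dest;
    select file_type, percent_space_used, percent_space_reclaimable from v$flash_recovery_area_usage;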
    By the way, below is a section of the primary database alert_log and of the "lgwr" trace file from around Sun Jan 31 13:30:34 2010.
    alert_log :
    ORA-16086: standby database does not contain available standby log files
    Sun Jan 31 13:30:34 2010
    LGWR: Failed to archive log 7 thread 1 sequence 6512 (16086)
    Thread 1 advanced to log sequence 6512
    Current log# 7 seq# 6512 mem# 0: /disks/sdb/tehrep/redo71.log
    Current log# 7 seq# 6512 mem# 1: /disks/sdd/tehrep/redo72.log
    LNSc started with pid=53, OS id=11451
    Sun Jan 31 13:36:34 2010
    Errors in file /u01/app/oracle/admin/tehrep/bdump/tehrep_lgwr_3692.trc:
    ORA-16086: standby database does not contain available standby log files
    Sun Jan 31 13:36:34 2010
    LGWR: Failed to archive log 5 thread 1 sequence 6513 (16086)
    Thread 1 advanced to log sequence 6513
    Current log# 5 seq# 6513 mem# 0: /disks/sdb/tehrep/redo51.log
    Current log# 5 seq# 6513 mem# 1: /disks/sdd/tehrep/redo52.log
    /u01/app/oracle/admin/tehrep/bdump/tehrep_lgwr_3692.trc file:
    Error 16086 creating standby archive log file at host '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=linserver2.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=tehrep_XPT.com)(INSTANCE_NAME=tehrep)(SERVER=dedicated)))'
    *** 2010-01-31 13:30:34.712 60679 kcrr.c
    LGWR: Attempting destination LOG_ARCHIVE_DEST_3 network reconnect (16086)
    *** 2010-01-31 13:30:34.712 60679 kcrr.c
    LGWR: Destination LOG_ARCHIVE_DEST_3 network reconnect abandoned
    ORA-16086: standby database does not contain available standby log files
    *** 2010-01-31 13:30:34.712 60679 kcrr.c
    LGWR: Error 16086 creating archivelog file '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=linserver2.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=tehrep_XPT.com)(INSTANCE_NAME=tehrep)(SERVER=dedicated)))'
    *** 2010-01-31 13:30:34.712 58941 kcrr.c
    kcrrfail: dest:3 err:16086 force:0 blast:1
    Receiving message from LNSc
    *** 2010-01-31 13:30:34.718 55444 kcrr.c
    Making upidhs request to LNSc (ocis 0x0xb648db48). Begin time is <01/31/2010 13:30:30> and NET_TIMEOUT <180> seconds
    NetServer pid:11196
    *** 2010-01-31 13:30:38.718 55616 kcrr.c
    upidhs done status 0
    *** 2010-01-31 13:36:31.062
    LGWR: Archivelog for thread 1 sequence 6513 will NOT be compressed
    *** 2010-01-31 13:36:31.062 53681 kcrr.c
    Initializing NetServer[LNSc] for dest=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=linserver2.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=tehrep_XPT.com)(INSTANCE_NAME=tehrep)(SERVER=dedicated))) mode SYNC
    LNSc is not running anymore.
    New SYNC LNSc needs to be started
    Waiting for subscriber count on LGWR-LNSc channel to go to zero
    Subscriber count went to zero - time now is <01/31/2010 13:36:31>
    Starting LNSc ...
    Waiting for LNSc to initialize itself
    *** 2010-01-31 13:36:34.116 53972 kcrr.c
    Netserver LNSc [pid 11451] for mode SYNC has been initialized
    Performing a channel reset to ignore previous responses
    Successfully started LNSc [pid 11451] for dest (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=linserver2.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=tehrep_XPT.com)(INSTANCE_NAME=tehrep)(SERVER=dedicated))) mode SYNC ocis=0x0xb648db48
    *** 2010-01-31 13:36:34.116 54475 kcrr.c
    Making upiahm request to LNSc [pid 11451]: Begin Time is <01/31/2010 13:36:31>. NET_TIMEOUT = <180> seconds
    Waiting for LNSc to respond to upiahm
    *** 2010-01-31 13:36:34.266 54639 kcrr.c
    upiahm connect done status is 0
    Receiving message from LNSc
    Receiving message from LNSc
    Destination LOG_ARCHIVE_DEST_3 is in STANDBY RESYNCHRONIZATION mode
    Receiving message from LNSc

  • Indexing - Out of Sequence and Missing Page Numbers

    I have a 296 page book. This is an on-going project and has been updated many times.
    Two new sections have been added to this book. I have marked words for indexing.
    My problem is that many of the page numbers are repeated or out-of-sequence.
    For example: Intake Manifolds 66, 67, 68, 69, 68, 87, 68, 69, 186, 189, 203
    This is one item that is not in the new sections, so this should not have changed.
    I also have noticed missing page numbers. I verified a 'Helmets & Accessories' marker on page 248 of the new section, but it doesn't show up in the index at all.
    Does anyone have any idea what could cause this? I have searched the net for others having the same issues, but I haven't found many.
    IDCS4 on Mac Pro.
    Thanks.
    Randy

    We are having the same problem! I'm using ID CS4 6.0.1 on a 1.8 GHz PowerPC G5 running OS X 10.5.6. For example, see below. The * represents a missing page number.
    Axle, Rear
    Alignment 163
    Fastener Torque 15
    Magnetic Drain Plugs 28, 51, 27, 45 [note how out of sequence these are!!]
    Preventive Maintenance *, 145
    CS3 was finally fixed by an Adobe patch, and as far as indexing big books went, became as solid as a rock. But now they have broken it again!! It is killing us, because we've switched 20 computers to CS4 only and updated all of our files, so we can't go back!
    The only thing that seems to work is to find the problem files, open the Index palette, and click the Update Preview button. THEN you run your index, and sometimes it's fixed. A couple of "bad" files can spoil the whole barrel, but it's impossible to figure out which ones are bad, or even why they're bad. And when you finally fix one problem, by using the Update Preview button or by blowing away index markers and re-doing them and getting rid of unused topics, the SAME PROBLEM will appear in other places, related to other files in your book that were "good" before!!!
    Jeff Syrop
    Technical Writer

  • iCloud Numbers won't load and keeps logging me out of iCloud

    I'm trying to load the Numbers app on www.icloud.com using a laptop. I can log in to my account, but as soon as I try to load the app, it crashes, logs me out, and takes me back to the main screen to log back in again. I've tried to load other apps and nothing else seems to have this problem. Occasionally it won't log on to the iCloud website at all and kicks me straight back out, but if I try to log in again straight away, it works and gets me to the home screen.
    This is on a Toshiba laptop running Windows 7.

    1. try to sign in once on iTunes
    2. try to get a new password for your apple id
    3. try to restore your iPhone using iTunes
    http://support.apple.com/kb/ht5570
    http://support.apple.com/kb/ht4137

  • ORACLE 11g + PHP5 problem: "fetch out of sequence" on remote database link

    Hi!
    I have a new server with an Oracle 11g (11.1.0.7) database and Apache/PHP5 (current build) on Linux.
    connection type is dedicated.
    when doing a connection to a remote AS/400 database I always get this error:
    "Warning: oci_fetch_array() http://function.oci-fetch-array: ORA-01002: fetch out of sequence ORA-02063: preceding line from..."
    When doing the same simple "select * from database link" on my old server with Oracle 9i and Apache/PHP5, no problem occurs.
    This happens with the AS/400 database link only; normal Oracle database links work fine.
    Does anyone have an idea how to solve this problem, or where the problem is?
    further info:
    when doing " select * from [ database link ] where rownum < 11 " it works, but when doing the query with more than 10 results I get the error. Any idea?
    bye,
    Oliver
    Edited by: user501548 on 10.10.2008 02:20

    Well, a fetch out of sequence usually indicates that a cursor has been closed before the process has finished fetching, or some such thing.
    I'm not aware of a specific problem with the situation you outline, but if you could provide the connection setup and how you are trying to connect, that may give a better idea.
    Also, you may be better off asking on the General Database Discussions forum, as this isn't really a SQL or PL/SQL problem.
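    For comparison, the textbook way to raise ORA-01002 in plain PL/SQL is fetching across a commit from a FOR UPDATE cursor; a minimal sketch (the table t and column id are hypothetical):
    declare
      cursor c is select id from t for update;  -- t and id are illustrative names
      v t.id%type;
    begin
      open c;
      loop
        fetch c into v;
        exit when c%notfound;
        commit;  -- the commit invalidates the FOR UPDATE cursor; the next fetch raises ORA-01002
      end loop;
      close c;
    end;
    /
    If the client layer commits after each execute (OCI_COMMIT_ON_SUCCESS) while the gateway still has an open cursor, the effect can be similar, which would fit the rownum observation above.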

  • My MBP recently started logging itself out.  I narrowed the problem to the external monitor which is connected with a Thunderbolt adapter. When I disconnected it, the problem went away.

    My early 2011 MBP laptop (OS X Yosemite 10.10.2) recently started logging itself out. I narrowed the problem down to the external monitor, which is connected with a Thunderbolt adapter. I installed Thunderbolt firmware update 1.2, but that did not correct the problem. When I disconnected the external monitor, the problem went away. Is there something I can do to be able to use the external monitor again?

    Try Function F1

  • ORACLE 11g + PHP5 problem: "fetch out of sequence" on i5/OS AS/400

    Hi!
    I have a big problem using a database connection to an i5 AS/400 with the PHP5 OCI8 interface through Oracle 11g:
    when doing a simple "select * from (as400_database_link)" I get this error: "fetch out of sequence... preceding line..."
    I never had this problem with 9i and the old transparent gateway.
    BUT:
    when doing, for example, a "select * from (as400_database_link) where rownum < 1000000", it works fine without problems.
    Does anyone have an idea how to eliminate this annoying "where rownum < 100000" behaviour?
    I would have to add this in many select statements... ;(
    bye,
    Oliver

    Hi,
    how do you call oci_execute?
    Try oci_execute($query, OCI_DEFAULT);
    I had the same problem with ADOdb:
    $q = "select * from elephant@africa";
    $db->GetAll($q);
    failed with the same error, because ADOdb called oci_execute with OCI_COMMIT_ON_SUCCESS.
    $q = "select * from elephant@africa";
    $db->BeginTrans();
    $db->GetAll($q);
    $db->CommitTrans(); // or $db->RollbackTrans();
    did the job, because BeginTrans changes oci_execute to run with OCI_DEFAULT.
    Hope this helps,
    gw
    Edited by: unficyp123 on Oct 20, 2008 2:51 PM

  • Problems with CC logging me out and not syncing

    Every morning now when I come into work, I find I've been logged out of CC.
    This has two effects:
    No. 1, it didn't sync my files last night, so I couldn't do the work I wanted to, and
    No. 2, I have to agree to the licence agreement for each app in turn every morning.
    I've seen sync problems almost every day, but Dropbox just works: quietly, in the background, it gets it right every time.
    Adobe CC file syncing, in one word: "unreliable". Why, when Dropbox works perfectly on the exact same computer?
    Any ideas what's going on here?
    I understand it can't sync if it's logged me out, but the point is I didn't even know it had logged me out. I just shut the computer down after about an hour of inactivity in the evening, went home to do the work I'd put in the CC folder, and no, it wasn't there.
    Do I really have to go back to thumb drives? 20 gig of space is no good if it isn't reliable.

    Thank you. That was it. The settings must have reverted back be the iOS 7 update. I NEVER would have found that. Thank you.

  • My ipad logs me out of every site if my screen turns off. It also clears all history. Even some of my email accounts won't let me send mail. Ever since the ios7 update, I am frustrated daily! How do I fix these problems?

    My ipad logs me out of every site if my screen turns off. It also clears all history. Even some of my email accounts won't let me send mail. Ever since the ios7 update, I am frustrated daily! How do I fix these problems?


  • Lookout Warning - logged data out of time sequence

    Hello,
     What does this warning mean: "You have logged data out of time sequence. Repeated instances of logging "backwards" in time can result in overly large database files, and even data corruption."?
    We see it when we use Lookout 6.1 and the Logger object to write data to the Citadel DB.
    Pavel Rucka. 

    For example, if you last logged a value at 10:00:00 and now log a new value with a timestamp of 9:58:00, you will get the warning.
    If the time data member of the Logger is not set, it uses the system time when it logs; the warning will also happen when your system time is set back.
    This is just a warning. The data will still be logged.
    Ryan Shi
    National Instruments

  • ORA-01002/ Fetch out of sequence on lazy loading

    Hello,
    We are facing an Oracle SQLException (ORA-01002: fetch out of sequence)
    while trying to get a field (retrieved via lazy loading) from an
    object that was fetched with a Kodo query.
    This error only occurs during performance testing under heavy load
    (100 concurrent users). For each thread we get a new PersistenceManager
    from the factory, and we have set the Multithreaded option to true.
    Can anyone help us with this problem?
    Thanks in advance,
    Kind Regards,
    Niels Soeffers
    Caused by: java.sql.SQLException: ORA-01002: fetch out of sequence
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
         at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1027)
         at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:291)
         at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:213)
         at com.solarmetric.jdbc.DelegatingResultSet.next(DelegatingResultSet.java:97)
         at kodo.jdbc.sql.ResultSetResult.nextInternal(ResultSetResult.java:151)
         at kodo.jdbc.sql.AbstractResult.next(AbstractResult.java:123)
         at kodo.jdbc.sql.Select$SelectResult.next(Select.java:2236)
         at kodo.jdbc.meta.AbstractCollectionFieldMapping.load(AbstractCollectionFieldMapping.java:592)
         at kodo.jdbc.runtime.JDBCStoreManager.load(JDBCStoreManager.java:521)
         at kodo.runtime.DelegatingStoreManager.load(DelegatingStoreManager.java:133)
         at kodo.runtime.ROPStoreManager.load(ROPStoreManager.java:79)
         at kodo.runtime.StateManagerImpl.loadFields(StateManagerImpl.java:3166)
         at kodo.runtime.StateManagerImpl.loadField(StateManagerImpl.java:3265)
         at kodo.runtime.StateManagerImpl.isLoaded(StateManagerImpl.java:1386)
         at com.ardatis.ventouris.domain.OntvangstReeks.jdoGetontvangstTransacties(OntvangstReeks.java)
         at com.ardatis.ventouris.domain.OntvangstReeks.getStatus(OntvangstReeks.java:72)
         at com.ardatis.ventouris.service.financien.transfer.FinancienTOAssembler.getOntvangstReeksBaseTO(FinancienTOAssembler.java:71)
         at com.ardatis.ventouris.service.financien.transfer.FinancienTOAssembler.getOntvangstReeksBaseTOs(FinancienTOAssembler.java:84)
         at com.ardatis.ventouris.service.financien.FinancienManagerImpl.getOntvangstReeksBaseTOs(FinancienManagerImpl.java:241)
         at com.ardatis.ventouris.service.financien.ejb.FinancienManagerBean.getOntvangstReeksBaseTOs(FinancienManagerBean.java:62)
         at sun.reflect.GeneratedMethodAccessor181.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at com.sun.enterprise.security.SecurityUtil$2.run(SecurityUtil.java:153)
         at java.security.AccessController.doPrivileged(Native Method)
         at com.sun.enterprise.security.application.EJBSecurityManager.doAsPrivileged(EJBSecurityManager.java:950)
         at com.sun.enterprise.security.SecurityUtil.invoke(SecurityUtil.java:158)
         at com.sun.ejb.containers.EJBObjectInvocationHandler.invoke(EJBObjectInvocationHandler.java:128)
         at $Proxy31.getOntvangstReeksBaseTOs(Unknown Source)
         at sun.reflect.GeneratedMethodAccessor155.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at com.sun.corba.ee.impl.presentation.rmi.ReflectiveTie._invoke(ReflectiveTie.java:123)

    We are using Oracle 10g and have tried multiple drivers (classes12.jar, ojdbc14.jar).
    Our kodo.properties (actually the ra.xml we supply with the kodo.rar):
    <connector>
    <display-name>KodoJDO</display-name>
    <description>Resource Adapter for integration of the Kodo Java Data
    Objects (JDO) implementation with J2EE 1.3 compliant managed
    environments</description>
    <vendor-name>Solarmetric, Inc.</vendor-name>
    <spec-version>1.0</spec-version>
    <eis-type>jdo</eis-type>
    <version>1.0</version>
    <license>
    <description>
    See http://www.solarmetric.com for terms and license conditions.
    </description>
    <license-required>true</license-required>
    </license>
    <resourceadapter>
         <managedconnectionfactory-class>kodo.jdbc.ee.JDBCManagedConnectionFactory</managedconnectionfactory-class>
    <connectionfactory-interface>javax.resource.cci.ConnectionFactory</connectionfactory-interface>
    <connectionfactory-impl-class>kodo.jdbc.ee.JDBCConnectionFactory</connectionfactory-impl-class>
    <connection-interface>javax.resource.cci.Connection</connection-interface>
    <connection-impl-class>kodo.runtime.PersistenceManagerImpl</connection-impl-class>
    <transaction-support>XATransaction</transaction-support>
    <config-property>
    <description>A comma-separated list of query aggregate listeners
    to add to the default list of extensions. Each listener must implement
    the kodo.jdbc.query.JDBCAggregateListener interface.</description>
    <config-property-name>AggregateListeners</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>The kodo.jdbc.meta.ClassIndicator to use by default
    for new mappings. The class indicator is responsible for tracking the
    concrete class or subclass implemented by the object stored in each row of
    a table.</description>
    <config-property-name>ClassIndicator</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>in-class-name</config-property-value>
    </config-property>
    <config-property>
    <description>The kodo.util.ClassResolver implementation that
    should be used for JDO class resolution. Defaults to a JDO spec-compliant
    resolver.</description>
    <config-property-name>ClassResolver</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>spec</config-property-value>
    </config-property>
    <config-property>
    <description>Details about various compatibiity levels for the
    current environment.</description>
    <config-property-name>Compatibility</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>true</config-property-value>
    </config-property>
    <config-property>
    <description>The class name of either the JDBC java.sql.Driver, or
    an instance of a javax.sql.DataSource to use to connect to the non-XA data
    source.</description>
    <config-property-name>Connection2DriverName</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>The password for the user specified in
    Connection2UserName</description>
    <config-property-name>Connection2Password</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>A comma-separated list of properties to be passed to
    the non-XA JDBC Driver when obtaining a Connection. Properties are of the
    form "key=value". If the given JDBC Driver class is a DataSource, these
    properties will be used to configure the bean properties of the
    DataSource. </description>
    <config-property-name>Connection2Properties</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>The URL for the non-XA data source.</description>
    <config-property-name>Connection2URL</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>The username for the connection listed in
    Connection2URL.</description>
    <config-property-name>Connection2UserName</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>A comma-separated list of
    com.solarmetric.jdbc.ConnectionDecorator implementations to install on all
    connection pools.</description>
    <config-property-name>ConnectionDecorators</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>The class name of either the JDBC java.sql.Driver, or
    an instance of a javax.sql.DataSource to use to connect to the data
    source.</description>
    <config-property-name>ConnectionDriverName</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>The JNDI name of the connection factory to use for
    finding non-XA connections. If specified, this is the connection that
    will be used for obtaining sequence numbers.</description>
    <config-property-name>ConnectionFactory2Name</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
         <config-property-value>jdbc/VentourisNonXA</config-property-value>
    </config-property>
    <config-property>
    <description>A comma-separated list of properties used to
    configure the javax.sql.DataSource used as the non-XA ConnectionFactory.
    Each property should be of the form "key=value", where "key" is the name
    of some bean-like property of the data source.</description>
    <config-property-name>ConnectionFactory2Properties</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>The JNDI name of the connection factory to use for
    obtaining connections.</description>
    <config-property-name>ConnectionFactoryName</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
         <config-property-value>jdbc/Ventouris</config-property-value>
    </config-property>
    <config-property>
    <description>A comma-separated list of properties used to
    configure the javax.sql.DataSource used as the ConnectionFactory. Each
    property should be of the form "key=value", where "key" is the name of
    some bean-like property of the data source.</description>
    <config-property-name>ConnectionFactoryProperties</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>The password for the user specified in
    ConnectionUserName</description>
    <config-property-name>ConnectionPassword</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>A comma-separated list of properties to be passed to
    the JDBC Driver when obtaining a Connection. Properties are of the form
    "key=value". If the given JDBC Driver class is a DataSource, these
    properties will be used to configure the bean properties of the
    DataSource. </description>
    <config-property-name>ConnectionProperties</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>This property dictates when PersistenceManagers will
    retain and release data store connections. Available options are
    "on-demand" for retaining a connection only during pessimistic
    transactions and data store operations, "transaction" for retaining a
    connection for the life of each transaction, or "persistence-manager" to
    indicate that a persistence manager should retain and reuse a single
    connection for its entire lifespan.</description>
    <config-property-name>ConnectionRetainMode</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>transaction</config-property-value>
    </config-property>
    <config-property>
    <description>The URL for the data source.</description>
    <config-property-name>ConnectionURL</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>The username for the connection listed in
    ConnectionURL.</description>
    <config-property-name>ConnectionUserName</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>Set to true if you'd like Kodo to copy all object
    ids before returning them to your code. If you do not plan on modifying
    identity objects, you can set this property to false to avoid the copying
    overhead.</description>
    <config-property-name>CopyObjectIds</config-property-name>
    <config-property-type>java.lang.Boolean</config-property-type>
    <config-property-value>false</config-property-value>
    </config-property>
    <config-property>
    <description>Plugin used to cache data loaded from the data store.
    Must implement kodo.datacache.DataCache.</description>
    <config-property-name>DataCache</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>The number of milliseconds that data in the data
    cache is valid for. A value of 0 or less means that by default, cached
    data does not time out.</description>
    <config-property-name>DataCacheTimeout</config-property-name>
    <config-property-type>java.lang.Integer</config-property-type>
    <config-property-value>-1</config-property-value>
    </config-property>
    <config-property>
    <description>The type of data source in use. Available options
    are "local" for a standard data source under Kodo''s control, or
    "enlisted" for a data source managed by an application server and
    automatically enlisted in global transactions.</description>
    <config-property-name>DataSourceMode</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>enlisted</config-property-value>
    </config-property>
    <config-property>
    <description>The kodo.jdbc.sql.DBDictionary to use for database
    interaction. This is auto-detected based on the setting of
    javax.jdo.option.ConnectionURL, so you need only set this to override the
    default with your own custom dictionary or if you are using an
    unrecognized driver.</description>
    <config-property-name>DBDictionary</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>Whether to dynamically create custom structs to hold
    and transfer persistent state in the Kodo data cache and remote
    persistence manager frameworks. Dynamic structs can reduce data cache
    memory consumption, reduce the amount of data serialized back and forth
    under remote persistence managers, and improve the overall performance of
    these systems. However, they increase application warm-up time while the
    custom classes are generated and loaded into the JVM. Set to true to
    enable dynamic data structs.</description>
    <config-property-name>DynamicDataStructs</config-property-name>
    <config-property-type>java.lang.Boolean</config-property-type>
    <config-property-value>false</config-property-value>
    </config-property>
    <config-property>
    <description>Specifies the default eager fetch mode to use.
    Either "none" to never eagerly-load relations, "join" for selecting 1-1
    relations along with the target object using inner or outer joins, or
    "parallel" for selecting 1-1 relations via joins, and collections
    (including to-many relations) along with the target object using separate
    select statements executed in parallel.</description>
    <config-property-name>EagerFetchMode</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>parallel</config-property-value>
    </config-property>
    <config-property>
    <description>The number of rows that will be pre-fetched when an
    element in a Query result is accessed. Use -1 to pre-fetch all
    results.</description>
    <config-property-name>FetchBatchSize</config-property-name>
    <config-property-type>java.lang.Integer</config-property-type>
    <config-property-value>-1</config-property-value>
    </config-property>
    <config-property>
    <description>The name of the JDBC fetch direction to use.
    Standard values are "forward", "reverse", and "unknown".</description>
    <config-property-name>FetchDirection</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>forward</config-property-value>
    </config-property>
    <config-property>
    <description>A comma-separated list of fetch group names that
    PersistenceManagers will load by default when loading data from the data
    store.</description>
    <config-property-name>FetchGroups</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>A comma-separated list of query filter listeners to
    add to the default list of extensions. Each listener must implement the
    kodo.jdbc.query.JDBCFilterListener interface.</description>
    <config-property-name>FilterListeners</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>Whether or not Kodo should automatically flush
    modifications to the data store before executing queries.</description>
    <config-property-name>FlushBeforeQueries</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>with-connection</config-property-value>
    </config-property>
    <config-property>
    <description>If true, Kodo will order all SQL inserts, updates,
    and deletes to meet your schema's foreign key constraints. Defaults to
    false.</description>
    <config-property-name>ForeignKeyConstraints</config-property-name>
    <config-property-type>java.lang.Boolean</config-property-type>
    <config-property-value>false</config-property-value>
    </config-property>
    <config-property>
    <description>If false, then the JDO implementation must consider
    modifications, deletions, and additions in the PersistenceManager
    transaction cache when executing a query inside a transaction. Else, the
    implementation is free to ignore the cache and execute the query directly
    against the data store.</description>
    <config-property-name>IgnoreCache</config-property-name>
    <config-property-type>java.lang.Boolean</config-property-type>
    <config-property-value>true</config-property-value>
    </config-property>
    <config-property>
    <description>Plugin used to manage inverse relations during flush.
    Set to true to use the default inverse manager. Custom inverse managers
    must extend kodo.runtime.InverseManager.</description>
    <config-property-name>InverseManager</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>false</config-property-value>
    </config-property>
    <config-property>
    <description>A comma-separated list of
    com.solarmetric.jdbc.JDBCListener implementations to install on all
    connection pools.</description>
    <config-property-name>JDBCListeners</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>The license key provided to you by SolarMetric. Keys
    are available at www.solarmetric.com</description>
    <config-property-name>LicenseKey</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value><KEY-REMOVED></config-property-value>
    </config-property>
    <config-property>
    <description>Plugin used to handle acquiring locks on persistent
    instances. Must implement kodo.runtime.LockManager.</description>
    <config-property-name>LockManager</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>pessimistic</config-property-value>
    </config-property>
    <config-property>
    <description>The number of milliseconds to wait for an object lock
    before throwing an exception, or -1 for no limit.</description>
    <config-property-name>LockTimeout</config-property-name>
    <config-property-type>java.lang.Integer</config-property-type>
    <config-property-value>-1</config-property-value>
    </config-property>
    <config-property>
    <description>LogFactory and configuration for Kodo's logging
    needs.</description>
    <config-property-name>Log</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>kodo(DefaultLevel=WARN, Tool=WARN,
    Runtime=WARN, SQL=WARN)</config-property-value>
    </config-property>
    <config-property>
    <description>The mode to use for calculating the size of large
    result sets. Legal values are "unknown", "last", and "query".</description>
    <config-property-name>LRSSize</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>query</config-property-value>
    </config-property>
    <config-property>
    <description>Plugin used to integrate with an external transaction
    manager. Must implement kodo.runtime.ManagedRuntime.</description>
    <config-property-name>ManagedRuntime</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>auto</config-property-value>
    </config-property>
    <config-property>
    <description>Plugin used to configure management and profiling
    capabilities.</description>
    <config-property-name>ManagementConfiguration</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>none</config-property-value>
    </config-property>
    <config-property>
    <description>The kodo.jdbc.meta.MappingFactory that will provide
    the object-relational mapping information needed to map each persistent
    class to the database.</description>
    <config-property-name>MappingFactory</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>file</config-property-value>
    </config-property>
    <config-property>
    <description>Plugin used to create metadata about persistent
    types. Must implement kodo.meta.MetaDataLoader</description>
    <config-property-name>MetaDataLoader</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>jdo</config-property-value>
    </config-property>
    <config-property>
    <description>If true, then the application plans to have multiple
    threads simultaneously accessing a single PersistenceManager, so measures
    must be taken to ensure that the implementation is thread-safe. Otherwise,
    the implementation need not address thread safety.</description>
    <config-property-name>Multithreaded</config-property-name>
    <config-property-type>java.lang.Boolean</config-property-type>
    <config-property-value>true</config-property-value>
    </config-property>
    <config-property>
    <description>If true, then it is possible to read persistent data
    outside the context of a transaction. Otherwise, a transaction must be in
    progress in order read data.</description>
    <config-property-name>NontransactionalRead</config-property-name>
    <config-property-type>java.lang.Boolean</config-property-type>
    <config-property-value>true</config-property-value>
    </config-property>
    <config-property>
    <description>If true, then it is possible to write to fields of a
    persistent-nontransactional object when a transaction is not in progress.
    If false, such a write will result in a JDOUserException.</description>
    <config-property-name>NontransactionalWrite</config-property-name>
    <config-property-type>java.lang.Boolean</config-property-type>
    <config-property-value>false</config-property-value>
    </config-property>
    <config-property>
    <description>Determines the persistence manager's behavior in
    calls to getObjectById with a validate parameter of false. Use "check" to
    check that a database record exists for the object and load its fetch
    group fields. Use "hollow" to return a hollow instance.</description>
    <config-property-name>ObjectLookupMode</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>check</config-property-value>
    </config-property>
    <config-property>
    <description>Selects between optimistic and pessimistic (data
    store) transactional modes.</description>
    <config-property-name>Optimistic</config-property-name>
    <config-property-type>java.lang.Boolean</config-property-type>
    <config-property-value>true</config-property-value>
    </config-property>
    <config-property>
    <description>Action to take when Kodo discovers an orphaned key in
    the database. May be a custom action implementing
    kodo.event.OrphanedKeyAction.</description>
    <config-property-name>OrphanedKeyAction</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>log</config-property-value>
    </config-property>
    <config-property>
    <description>The name of the concrete implementation of
    javax.jdo.PersistenceManagerFactory that
    javax.jdo.JDOHelper.getPersistenceManagerFactory () should create. For
    Kodo JDO, this should be kodo.jdbc.runtime.JDBCPersistenceManagerFactory
    or a custom extension of this type.</description>
    <config-property-name>PersistenceManagerFactoryClass</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>kodo.jdbc.runtime.JDBCPersistenceManagerFactory</config-property-value>
    </config-property>
    <config-property>
    <description>Persistence manager plugin and properties. If you
    use a custom class, it must extend
    kodo.runtime.PersistenceManagerImpl.</description>
    <config-property-name>PersistenceManagerImpl</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>default</config-property-value>
    </config-property>
    <config-property>
    <description>Configure this persistence manager factory to service
    remote persistence managers.</description>
    <config-property-name>PersistenceManagerServer</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>false</config-property-value>
    </config-property>
    <config-property>
    <description>A comma-separated list of the class names of all
    persistent classes to register whenever a persistence manager is
    obtained.</description>
    <config-property-name>PersistentClasses</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>com.ardatis.ventouris.domain.KwartaalVerhoging,
    com.ardatis.ventouris.domain.Land, com.ardatis.ventouris.domain.Gemeente,
    com.ardatis.ventouris.domain.Adres,
    com.ardatis.ventouris.domain.ContactGegeven,
    com.ardatis.ventouris.domain.AdresContactGegeven,
    com.ardatis.ventouris.common.type.AdresType,
    com.ardatis.ventouris.domain.OntvangstTransactie,
    com.ardatis.ventouris.domain.OntvangstReeks,
    com.ardatis.ventouris.domain.Ontvangst,
    com.ardatis.ventouris.domain.Inkomen,
    com.ardatis.ventouris.domain.loopbaan.LoopbaanPeriode,
    com.ardatis.ventouris.common.type.InkomenSoort,
    com.ardatis.ventouris.common.type.INSZ,
    com.ardatis.ventouris.domain.Aangeslotene,
    com.ardatis.ventouris.domain.NatuurlijkePersoon,
    com.ardatis.ventouris.domain.Persoon, com.ardatis.ventouris.domain.Rol,
    com.ardatis.ventouris.domain.Dossier,
    com.ardatis.ventouris.common.type.Geslacht,
    com.ardatis.ventouris.common.type.Taal,
    com.ardatis.ventouris.common.type.Nationaliteit,
    com.ardatis.ventouris.common.type.KostType,
    com.ardatis.ventouris.domain.Verhoging,
    com.ardatis.ventouris.domain.Aanvraag,
    com.ardatis.ventouris.domain.AanvraagAansluitingSS,
    com.ardatis.ventouris.domain.AanvraagToestand,
    com.ardatis.ventouris.common.type.AanvraagToestandType,
    com.ardatis.ventouris.domain.Factuur,
    com.ardatis.ventouris.domain.TaakType, com.ardatis.ventouris.domain.Taak,
    com.ardatis.ventouris.domain.BijdrageBerekening,
    com.ardatis.ventouris.domain.calculationparameters.HerwaarderingsIndex,
    com.ardatis.ventouris.domain.integration.asis.TempInterneOntvangst,
    com.ardatis.ventouris.domain.calculationparameters.InkomenGrens,
    com.ardatis.ventouris.domain.calculationparameters.BijdrageCategorie,
    com.ardatis.ventouris.domain.calculationparameters.BijdrageCategorieGroep,
    com.ardatis.ventouris.domain.integration.asis.TempOntvangstKinderbijslag,
    com.ardatis.ventouris.domain.calculationparameters.JaarVerhogingParameter,
    com.ardatis.ventouris.domain.calculationparameters.KwartaalVerhogingParameter,
    com.ardatis.ventouris.domain.UitgaveReeks,
    com.ardatis.ventouris.domain.Uitgave,
    com.ardatis.ventouris.domain.Terugbetaling,
    com.ardatis.ventouris.domain.BedragTerugbetaald,
    com.ardatis.ventouris.domain.BijdrageSS</config-property-value>
    </config-property>
    <config-property>
    <description>Plugin used to proxy second class object fields of
    managed instances. Must implement kodo.util.ProxyManager.</description>
    <config-property-name>ProxyManager</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>default</config-property-value>
    </config-property>
    <config-property>
    <description>Plugin used to cache query results loaded from the
    data store. Must implement kodo.datacache.QueryCache.</description>
    <config-property-name>QueryCache</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>true</config-property-value>
    </config-property>
    <config-property>
    <description>Plugin used to cache query compilation data. Must
    implement java.util.Map. Does not need to be thread-safe -- it will be
    wrapped via the Collections.synchronizedMap() method if it does not extend
    kodo.util.CacheMap.</description>
    <config-property-name>QueryCompilationCache</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>true</config-property-value>
    </config-property>
    <config-property>
    <description>The default lock level to use when loading objects
    within non-optimistic transactions. Set to none, read, write, or the
    numeric value of the desired lock level for your lock
    manager.</description>
    <config-property-name>ReadLockLevel</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value>read</config-property-value>
    </config-property>
    <config-property>
    <description>Plugin used to communicate commit information among
    JVMs. Must implement kodo.event.RemoteCommitProvider.</description>
    <config-property-name>RemoteCommitProvider</config-property-name>
    <config-property-type>java.lang.String</config-property-type>
    <config-property-value></config-property-value>
    </config-property>
    <config-property>
    <description>Whether or not RemoteCommitEvents will include the
    object Ids of objects added during the transaction.</description>
    <config-property-name>RemoteCommitTransmitAddObjectIds</config-property-na

  • Repeated Opening of  database in a Txn  causes Logging region out of memory

    Hi
    BDB 4.6.21
    When I open and close a single database file repeatedly, I get the error message "Logging region out of memory; you may need to increase its size". I have set set_lg_regionmax to its default size of 65KB. Is there any workaround for this issue other than increasing set_lg_regionmax? Even if we set it to a higher value, we cannot predict how often the clients of BDB will open and close a database file. Following is a stand-alone program with which the scenario can be reproduced.
    #include <windows.h>
    #include <db_cxx.h>

    int main()
    {
        const int SUCCESS = 0;
        ULONG uEnvFlags = DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOG | DB_INIT_TXN |
                          DB_INIT_LOCK | DB_THREAD; // | DB_RECOVER;
        LPCSTR lpctszHome = "D:\\Nisam\\Temp";
        int nReturn = 0;

        DbEnv* pEnv = new DbEnv( DB_CXX_NO_EXCEPTIONS );
        nReturn = pEnv->set_thread_count( 20 );
        nReturn = pEnv->open( lpctszHome, uEnvFlags, 0 );
        if( SUCCESS != nReturn )
            return 0;

        DbTxn* pTxn = 0;
        char szBuff[MAX_PATH];
        UINT uDbFlags = DB_CREATE | DB_THREAD;
        lstrcpy( szBuff, "DBbbbbbbbbbbbbbbbbbbbbbbbbbb________0" ); // some long name

        // First create the database
        {
            Db Database( pEnv, 0 );
            nReturn = Database.open( pTxn, szBuff, 0, DB_BTREE, uDbFlags, 0 );
            nReturn = Database.close( 0 );
        }

        // Now repeatedly open and close the database created above inside a
        // transaction that is always aborted
        for( int nCounter = 0; 10000 > nCounter; ++nCounter )
        {
            pEnv->txn_begin( 0, &pTxn, 0 );
            Db Database( pEnv, 0 );
            nReturn = Database.open( pTxn, szBuff, 0, DB_BTREE, uDbFlags, 0 );
            if( SUCCESS != nReturn )
            {
                // when the count reaches 435, the open fails with
                // "Logging region out of memory; you may need to increase its size"
                pTxn->abort();
                Database.close( 0 );
                pEnv->close( 0 );
                return 0;
            }
            pTxn->abort();
            pTxn = 0;
            Database.close( 0 );
        }

        pEnv->close( 0 );
        return 0;
    }
    By the way, following is the content of my DB_CONFIG file
    set_tx_max 1000
    set_lk_max_lockers 10000
    set_lk_max_locks 100000
    set_lk_max_objects 100000
    set_lock_timeout 20000
    set_lg_bsize 1048576
    set_lg_max 10485760
    #log region: 66KB
    set_lg_regionmax 67584
    set_cachesize 0 8388608 1
    Thanks and Regards
    Nisam

    Hi Nisam,
    I was able to reproduce the problem using Berkeley DB 4.6.21. The problem is with releasing the FNAME structure in certain cases involving aborted transactions. In a situation where you continuously (in a loop) open, abort and close databases transactionally, you will notice (as you did) that the log region size (set_lg_regionmax) needs to be increased.
    This problem was identified and reproduced yesterday (thanks for letting us know about this) and is reported as SR #15953. It will be fixed in the next release of Berkeley DB and is currently in code review/regression testing. I have a patch that you can apply to Berkeley DB 4.6 and have confirmed that your test program runs with the patch applied. If you send me email at (Ron dot Cohen at Oracle) I’ll send the patch to you.
    As you noticed, committing the transaction runs cleanly without error. You could do that (with the DB_TXN_NOSYNC suggestion below), but you may not even need transactions for this.
    I want to expand a bit on my recommendation that you not abort transactions in the manner you are doing (though with the patch you can certainly do that). First, opening and closing a database is a heavyweight operation. Typically you create/open your databases and keep them open for the life of the application (or at least a long time).
    You also mentioned that you noticed commits may take a longer time. We can talk about that (if you email me), but you could consider using the DB_TXN_NOSYNC flag, at the cost of losing durability. Make sure that this suggestion will work with your application requirements.
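    For illustration, here is a minimal sketch (not from the original reply) of how that flag could be enabled on the environment; it assumes the same uEnvFlags and lpctszHome as in the test program above:
        // Sketch only: with DB_TXN_NOSYNC, commits no longer force the log to
        // disk, so they are faster, at the cost of possibly losing the most
        // recently committed transactions after a crash.
        DbEnv* pEnv = new DbEnv( DB_CXX_NO_EXCEPTIONS );
        pEnv->set_flags( DB_TXN_NOSYNC, 1 );    // 1 = turn the flag on
        pEnv->open( lpctszHome, uEnvFlags, 0 );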
    Even if your sequence is create/open/get/commit/abort, you should not need transactions for a single get operation. In that case there would be no logging for the open and close, so the sequence would be faster. What you posted was a code snippet, so what you have in your application may be a lot more complicated and justify what you have done. But the simple test case above should not require a transaction, since you are doing a single atomic get.
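    To make that concrete, here is a minimal sketch (again not from the original reply) of the keep-the-handle-open, non-transactional pattern; it assumes an already-opened DbEnv* pEnv as above, and the database name and key are just placeholders:
        // Sketch only: open the database once, keep the handle for the life of
        // the application, and do plain non-transactional gets.
        Db db( pEnv, 0 );
        db.open( 0 /* no txn */, "mydb.db", 0, DB_BTREE, DB_CREATE | DB_THREAD, 0 );

        Dbt key( (void*)"some-key", 8 );
        Dbt data;
        data.set_flags( DB_DBT_MALLOC );    // Berkeley DB allocates the result buffer
        if( 0 == db.get( 0 /* no txn */, &key, &data, 0 ) )
            free( data.get_data() );        // caller frees DB_DBT_MALLOC buffers (free() from <cstdlib>)

        // ... much later, at application shutdown:
        db.close( 0 );
    Whether skipping transactions is acceptable depends, as noted above, on the application's durability and recovery requirements.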
    I hope this helps!
    Ron Cohen
    Oracle Corporation

  • File open - images display out of sequence

    Hi. I am new to this forum so please bear with me.
    I installed CS5 a few months back. All was okay except for things being a little slower than I hoped. The only thing I noticed right away was that the top toolbar wasn't very clear, which I thought most unusual for Photoshop, having used numerous previous versions. I also noticed that various other operations on my computer were running slower.
    Then a week ago things started to get much worse. Programs (not just CS5, but also email, the Internet, etc.) were running very slowly, freezing, or not responding at all. As long as I didn't start CS5 it was okay. I also noticed that when I opened a folder of JPEG images they were separated by orientation by default, with the verticals first and then the horizontals, each group in its own numerical sequence. I could select display by name and they would go into the correct order. If I opened an image and then went back to open some more, they were out of sequence again. This happens with any folder of JPEGs and only applies to JPEGs. PSD and Nikon RAW files are fine. The JPEGs also display correctly in other software such as My Computer, Lightroom 3 and Capture One.
    After much research I have realised that my dual core processor with XP just doesn't have the power to run something as memory intensive as CS5. A massive upgrade is on the agenda but will take a little while to get set up. So, in the meantime I have uninstalled CS5 and reinstalled CS3. It is much improved, with nothing freezing or crashing, and I am able to do other things while Photoshop runs in the background. However, the sequencing problem still exists. I wouldn't mind, as it is easy to right click and resequence. But each time I open a new folder of JPEGs it takes minutes to open (the time depends on how many images are in the folder). Once open, I can return to that folder and it opens in a reasonable time, but still out of sequence. Sometimes they aren't just separated into verticals and horizontals; the image numbers are all over the place. Then I go to a different folder and it is another long wait with the CPU groaning away.
    I am a full time professional photographer and this sort of delay is more than annoying and taking up way too much time - time that I can't afford. Has anyone come across this issue before, and is there a solution? Your help would be very much appreciated.
    Thanks Carol

    Hi Mylenium, or anyone else who ever has this problem.
    It turned out it wasn't a Photoshop problem causing the JPEGs to be out of sequence; it was a setting in Windows XP. The following are the instructions to fix the problem. I have done it and it instantly fixed my problem. Cheers Carol
    Apply a Specified View to All Folders
    Windows XP allows you to choose how to view the contents of individual folders. By clicking "View" on a folder's menu bar, you can choose to view the folder's contents as thumbnails, tiles, icons, a plain list or a list with details. (Image folders will have an extra view, "Filmstrip").
    Changing the view will only apply to the folder you are currently working in. However, with just a few more clicks, you can change the view for all the folders on your computer.
    Here's how:
    1. Open any folder, click "View" and choose the view you would most like to use.
    2. Click "Tools" and then click "Folder Options".
    3. In the "Folder Options" window, click the "View" tab.
    4. Click "Apply to All Folders".
    5. A confirmation message should appear. Click "Yes".
    6. Click "OK" to exit the "Folder Options" window.
