Committing transactions in a screenflow

Hi,
we are working with a screenflow which calls other screenflows.
Some of these sub-screenflows perform operations and save data to a table.
From the main screenflow I want to be able to access and update the saved data.
Is there any way to force a commit in the transaction, so we can access and change that data?
Thanks

A commit is performed just before showing an interactive activity. A
call to a sub-screenflow inherits the transaction from the calling
screenflow.
Hope this helps,
Juan
On Fri, 07 Dec 2007 12:05:41 -0300, Ezequiel Calderara wrote:
Some of this sub-screenflows performs some operations and saves data on a table.
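Juan's point above — that other sessions only see the sub-screenflow's data once the engine commits, e.g. just before an interactive activity — can be sketched with two database sessions. This is an illustrative sqlite3 model, not Oracle BPM itself:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "flow.db")

# "Sub-screenflow" session writes; the "main screenflow" reads through
# its own session (a fresh connection per read, so no stale snapshot).
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE saved_data (id INTEGER, payload TEXT)")

def count_rows():
    c = sqlite3.connect(path)
    try:
        return c.execute("SELECT COUNT(*) FROM saved_data").fetchone()[0]
    finally:
        c.close()

writer.execute("INSERT INTO saved_data VALUES (1, 'from sub-flow')")
before = count_rows()   # uncommitted: invisible to other sessions

writer.commit()
after = count_rows()    # committed: now visible

print(before, after)    # -> 0 1
```

The same isolation rule is why the main screenflow cannot see the sub-screenflow's rows until the shared transaction commits.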

Similar Messages

  • Error committing transaction in Stored Proc call - prev solns not working

    Hi All,
    Our process invokes a DB adapter to fetch the response from a table for our request via a stored procedure call, but we are facing the issue below. It's a synchronous process. The stored procedure is inside a package, and we call it through that package.
    What we did was create a DB datasource of XA type and try to call the stored proc, but it gave the error “ORA-24777: use of non-migratable database link not allowed”; hence, following the thread Using DB links in Stored proc call in DB adapter 11G SOA, we modified the datasource to non-XA type.
    After doing that, we can see that the stored proc is called and the response is present in the reply payload in the flow trace. But the instance gets faulted with the error: “Error committing transaction:; nested exception is: javax.transaction.xa.XAException: JDBC driver does not support XA, hence cannot be a participant in two-phase commit. To force this participation, set the GlobalTransactionsProtocol attribute to LoggingLastResource (recommended) or EmulateTwoPhaseCommit for the Data Source.”
    We have tried the global transaction support properties One Phase Commit, Emulate Two-Phase Commit, and Logging Last Resource, but the error remains the same.
    The database from which we are getting the response is "Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production". Will the database link error arise even if we connect to an Oracle database?
    Could you please advise on how to resolve this issue?
    Thanks in advance.

    You are using non-XA because it means (among other things) that the commit can be handled by the DB as well.
    The Emulate Two-Phase Commit property imitates an XA transaction in that it allows you to manage a local DB transaction.
    You can stay with an XA connection, but then you will have to use the AUTONOMOUS_TRANSACTION pragma in your procedure.
    The following link gives a good explanation of all of your questions:
    http://docs.oracle.com/cd/E15523_01/integration.1111/e10231/adptr_db.htm#BGBIHCIJ
    Arik
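As a concept sketch of the AUTONOMOUS_TRANSACTION suggestion above: the autonomous unit commits on its own connection, so its work survives even when the caller rolls back. This Python/sqlite3 analogue uses a second database file purely to keep the sketch self-contained (a real autonomous transaction runs inside the same database):

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
main = sqlite3.connect(os.path.join(d, "main.db"))    # caller's transaction
audit = sqlite3.connect(os.path.join(d, "audit.db"))  # "autonomous" unit
main.execute("CREATE TABLE orders (id INTEGER)")
audit.execute("CREATE TABLE audit_log (msg TEXT)")

def log_autonomously(msg):
    # Commits on its own connection, independently of the caller.
    audit.execute("INSERT INTO audit_log VALUES (?)", (msg,))
    audit.commit()

main.execute("INSERT INTO orders VALUES (1)")
log_autonomously("attempted insert of order 1")
main.rollback()   # the caller's work is undone...

orders = main.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
logs = audit.execute("SELECT COUNT(*) FROM audit_log").fetchone()[0]
print(orders, logs)   # -> 0 1 : ...but the autonomous record survives
```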

  • OCITransCommit() returns ORA-01013 for correctly committed transaction

    I ran into the following issue, which seems to me a critical bug in Oracle 11g:
    Breaking an Oracle transaction asynchronously with OCIBreak() while the transaction was being committed with OCITransCommit() resulted in a correctly committed transaction on the database server. However, OCITransCommit() returned ORA-01013 (user requested cancel of current operation), which is inconsistent. It should never happen that the transaction is correctly committed and OCITransCommit() returns anything other than OCI_SUCCESS.
    My assumption is that the transaction is only committed on the database server if OCITransCommit() returns OCI_SUCCESS. Or is this assumption not always correct?
    Oracle version 11.2.0.3.0 64bit (Linux)

    As Karthick says, perhaps the Call Interface forum is a better place to ask.
    However, as a guess on my part (I've rarely had a need to go directly into OCI calls), from what I know of the internal workings of transactions in Oracle, the COMMIT operation is essentially atomic. When you issue a COMMIT from your code, the database does not do all of the work of committing your data and writing it to disk before execution returns to your code. It is a very basic instruction to the database to commit the data; the database then does that work in the background while execution returns immediately to the calling code. Oracle can take its time getting the data written, and can present the data to your session and others as if it were actually in the tables. This is all handled internally using the SCN and the logs, and users don't (usually) have to worry about it, because on the front end it appears as though the data is already written to the tables.
    So, I'm curious as to how you (or the OCI calls) are managing to issue a "break" to try and break a commit from happening. Without seeing code it's hard to see how you are testing this.
    I've just looked up the documentation for TransCommit...
    http://docs.oracle.com/cd/E11882_01/appdev.112/e10646/oci17msc006.htm#LNOCI13112
    and I see you have the option of "Waiting" for the LGWR to write the transaction to the online redo logs, so that's a possible scenario for breaking, though I imagine you'd have to get in quickly with the break from another thread if the one thread is waiting for the commit.
    Interesting part of the docs...
    >
    Under normal circumstances, OCITransCommit() returns with a status indicating that the transaction has either been committed or rolled back. With global transactions, it is possible that the transaction is now in doubt, meaning that it is neither committed nor terminated. In this case, OCITransCommit() attempts to retrieve the status of the transaction from the server. The status is returned.
    >
    Still, it would be interesting to see the test code to reproduce this.... just my morbid curiosity for low level coding.... ;)
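Since this thread shows a commit that actually succeeded even though OCITransCommit() returned an error, one defensive client-side pattern (illustrative sqlite3 sketch, not OCI code) is to treat a failed commit as *unknown* and resolve the outcome by re-reading a marker row:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "t.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE payments (txn_id TEXT PRIMARY KEY)")

def commit_and_verify(conn, txn_id):
    """Treat a failed commit() as *unknown*, then resolve the outcome
    by checking whether the transaction's marker row is present."""
    try:
        conn.commit()
        return "committed"
    except sqlite3.Error:
        row = conn.execute(
            "SELECT 1 FROM payments WHERE txn_id = ?", (txn_id,)
        ).fetchone()
        return "committed" if row else "rolled back"

conn.execute("INSERT INTO payments VALUES ('p-42')")
status = commit_and_verify(conn, "p-42")
print(status)   # -> committed
```

The `payments`/`txn_id` names are hypothetical; the point is only the verify-after-ambiguous-commit pattern.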

  • Ora9i physical stby db failover-recover to the last committed transaction

    Example: The primary db is not accessible. But we can still access the archived and online redo log files (physically in OS).
    How can we recover to the last committed transaction (on the primary db) while we fail over to the standby?
    I am only able to recover to the last transaction in the last archived redo log.

    912030 wrote:
    The database is not available, but I can still copy all archived redo log and online redo log files from the primary db to the standby db at the OS level.
    I register and apply all archived log files to my standby db before performing the failover.
    SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE 'filespec1';
    But I was only able to recover to the last transaction in the last archived redo log, not the last committed transaction in the (current) online redo logs on the primary db before it broke down.

    You can copy and apply all the archive log files. If you want to apply the current redo log file as well, use the command below:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;
    Check in detail http://docs.oracle.com/cd/B28359_01/server.111/b28294/log_apply.htm
    Hopefully that file is not corrupted. But before that, create a restore point to be safe; if there are any recovery issues you can flash back to that point.
    As long as you have applied the archive logs safely, you can open the database by performing a failover.
    If you apply using the online redo log files, it may cause some inconsistency.
    Or perform a TIME-based recovery, as in:
    SQL> recover automatic standby database until time '2011-11-17 14:00:00';
    Check this link as well
    http://laurentschneider.com/wordpress/2011/11/failover-to-standby-with-a-delay-until-time.html
    HTH.

  • Ensure 15 minutes of committed transaction restore

    Hi!
    In our SLA we have the following sentence and obligation:
    "... will ensure that maximum loss of data (RPO) at any circumstance is 20 minutes for the closed transactions, provided that Customer has saved the transaction."
    This means that we have to be able to restore the database to within at most 20 minutes of the last committed transaction.
    1) We are taking online backups with RMAN.
    2) We are taking online backups of the archive logs every 19 minutes to tape (Legato backup, RMAN with catalog).
    3) We have placed in the init.ora file: *.log_checkpoint_timeout=1200 # Checkpoint at least every 20 mins. But this will not cover creation of archive logs every 20 minutes (especially during night hours).
    Wed Sep 19 18:38:08 2007
    Incremental checkpoint up to RBA [0x28e5.6a60.0], current log tail at RBA [0x28e5.828d.0]
    Wed Sep 19 18:58:12 2007
    Incremental checkpoint up to RBA [0x28e5.92dd.0], current log tail at RBA [0x28e5.984a.0]
    Wed Sep 19 19:18:17 2007
    Incremental checkpoint up to RBA [0x28e5.b2fb.0], current log tail at RBA [0x28e5.b7a5.0]
    Wed Sep 19 19:38:21 2007
    Incremental checkpoint up to RBA [0x28e5.c1e6.0], current log tail at RBA [0x28e5.c651.0]
    Wed Sep 19 19:58:26 2007
    Incremental checkpoint up to RBA [0x28e5.d099.0], current log tail at RBA [0x28e5.d3a3.0]
    Wed Sep 19 20:18:30 2007
    Incremental checkpoint up to RBA [0x28e5.f68f.0], current log tail at RBA [0x28e5.1053e.0]
    Wed Sep 19 20:38:35 2007
    Incremental checkpoint up to RBA [0x28e5.1128d.0], current log tail at RBA [0x28e5.1197e.0]
    Wed Sep 19 20:58:40 2007
    Incremental checkpoint up to RBA [0x28e5.1257c.0], current log tail at RBA [0x28e5.12e60.0]
    Wed Sep 19 21:00:49 2007
    But this doesn't create an archive log, only an incremental checkpoint.
    My question is: how can we ensure the SLA by having archive logs created every 20 minutes regardless of the time of day? Or is there something else we can do...
    THX

    log_checkpoint_timeout implies an incremental checkpoint, not a log switch. If you want a log switch after every nth interval of time, use the ARCHIVE_LAG_TARGET parameter.
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams009.htm
    Khurram
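The parameter mentioned above can be set dynamically; a hedged sketch (1200 seconds matches the 20-minute SLA above — adjust to your environment):

```sql
-- Force a log switch (and hence an archived log) at least every
-- 1200 seconds, independent of incremental checkpoint activity.
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1200 SCOPE = BOTH;
```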

  • Recovery silently truncates committed transactions on corrupted last log??

    It looks like LastFileReader.readNextEntry() stops at the first corruption and then traces an error? I came across this while stepping through RecoveryEdgeTest.testNoCheckpointStart(), and it seems to be the same from 3.2 through the latest 4.0:
    } catch (ChecksumException e) {
        LoggerUtils.fine
            (logger, envImpl,
             "Found checksum exception while searching for end of log. " +
             "Last valid entry is at " +
             DbLsn.toString(DbLsn.makeLsn(window.currentFileNum(), lastValidOffset)) +
             " Bad entry is at " +
             DbLsn.makeLsn(window.currentFileNum(), nextUnprovenOffset));
    If the last log file has multiple committed transactions, say T1, T2, and T3, and I manually corrupt the file (as the test case does) at T1, the file is then truncated after this point in RecoveryManager.findEndOfLog(), which seems like it will silently lose the changes from T2 and T3.
    /* Now truncate if necessary. */
    if (!readOnly) {
        reader.setEndOfFile();
    Edited by: nick___ on Feb 6, 2010 11:16 AM
    NOTE: post modified to clarify question and hopefully get attention of bdbje developers.
    Edited by: nick___ on Feb 6, 2010 11:21 AM
    (clarified that question regards corrupted last log)
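A minimal model (not BDB JE's actual log format) of why truncating at the first bad checksum silently discards later, still-valid records:

```python
# Minimal model of a checksummed append-only log: each record is
# [4-byte length][4-byte crc32][payload]. Recovery that truncates at
# the FIRST bad checksum also discards every valid record written
# after the corrupted one.
import struct
import zlib

def append(log, payload):
    log += struct.pack(">II", len(payload), zlib.crc32(payload)) + payload
    return log

def find_end_of_log(log):
    """Return the offset of the first invalid record (the scan stops there)."""
    off = 0
    while off + 8 <= len(log):
        length, crc = struct.unpack_from(">II", log, off)
        payload = log[off + 8 : off + 8 + length]
        if len(payload) != length or zlib.crc32(payload) != crc:
            break
        off += 8 + length
    return off

log = b""
for rec in (b"T1-commit", b"T2-commit", b"T3-commit"):
    log = append(log, rec)

corrupted = bytearray(log)
corrupted[9] ^= 0xFF          # flip one byte inside T1's record
end = find_end_of_log(bytes(corrupted))
print(end)   # -> 0 : T2 and T3, though intact, fall after the cut
```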

    Thanks for your reply Nick.
    1) We do some locking before opening BDB in one attempt to prevent dual access.
    2) FileManager.lockEnvironment() will do a file-system lock on je.lck, which was being used as a fallback in case #1 failed. In general this has worked, but it has a few bad properties:
    - the lock is held if the process becomes defunct.
    - the lock is held if the machine goes away (this doesn't typically happen).
    - odd performance problems acquiring the lock due to the NFS/IO system in certain scenarios.
    Because of the last one, we've changed lockEnvironment to lock via methods other than file-system locking.
    I understand that file locking over NFS may be problematic, so it doesn't surprise me that you've implemented other locking mechanisms. But I'm curious about why you changed JE's lockEnvironment, rather than performing the locking prior to opening the environment? Did that have some particular benefit, or was it just a convenient place to put it?
    I ask because I'm trying to determine the benefits of providing a customizable hook for doing the locking in JE. (Not promising anything, just exploring.)
    Currently we always open in read/write mode but are looking at opening read-only if we know the request is only a read. The issue with this is that, given the way JE implemented locking (not MVCC), we have to be careful about multiple requests to the same JVM (when it would be fine for multiple JVMs).

    By that do you mean that you don't want to handle deadlocks? If you have an algorithm you can describe, we may be able to help figure out a way to reduce or eliminate the deadlocks, if you haven't already explored that in depth.
    To get this working I'm planning on disabling the env cache so each read-only env will not potentially conflict with the write env (either totally separate or by r/w mode within the VM).

    I think I understand. I assume by "env cache" you mean the internal pooling of Environment handles, in DbEnvPool -- correct?
    Thanks again,
    --mark

  • AS 2000 Full Cube Process Committing transaction Error

    Hi,
    I have a problem with an AS 2000 cube which fails when I run a full process. The full process gets 99% of the way through, then at the final step - Committing transaction in database DBName - it fails, stating that the connection to the server is lost.
    At the same time I see an event ID 132 in my application event log stating:
    There was a fatal error during transaction commit in the database DBName. Server will be restarted to preserve the data consistency.
    Does anyone have any idea what might be causing this and how to resolve it?
    Thanks in advance,
    Phil

    Hi Philip,
    Since your version is 8.00.2249, it's already fully patched. I found a thread about the same issue with no solution:
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/146d7571-8786-462a-9f6f-f74b024132d4/mssqlserverolapservice-there-was-a-fatal-error-during-transaction-commit-in-the-database-server?forum=sqlanalysisservices
    I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay might be expected from the job transfer. Your patience is greatly appreciated.
    Thank you for your understanding and support. 
    Regards,
    Simon Hou
    TechNet Community Support

  • Commited transaction durability and DB_REP_HANDLE_DEAD

    From the BDB 4.8 manual "Dead replication handles happen whenever a replication election results in a previously committed transaction becoming invalid. This is an error scenario caused by a new master having a slightly older version of the data than the original master and so all replicas must modify their database(s) to reflect that of the new master. In this situation, some number of previously committed transactions may have to be unrolled."
    In the application I am working on, I can't afford to have committed transactions "unrolled". Suppose I set my application to commit transactions only when a majority of electable peers acknowledges the transaction, and to stop the application on a DB_EVENT_REP_PERM_FAILED event. Will that guarantee the durability of committed transactions and (equivalently) guarantee that no replica will ever see the DB_REP_HANDLE_DEAD error (assuming the absence of bugs)?
    Also, as I understand it, DB_REP_HANDLE_DEAD errors should never be seen on the current master; is this correct? Is there a way to register a callback with the Replication Manager
    -Sanjit

    I think it is important to separate the txn commit guarantees from the
    HANDLE_DEAD error return. What you are describing mitigates the
    chance of getting that error, but you can never eliminate it 100%.
    Your app description for your group (all electable, quorum ACKs)
    uses the best scenario for providing the guarantees for txn commit.
    Of course the caveats still remain that you run a risk if you use TXN_NOSYNC
    and if you have total group failure and things in memory are lost.
    Also, it is important to separate making a txn guarantee at the master site
    with getting the HANDLE_DEAD return value at a client site. The
    client can get that error even with all these safeguards in place.
    But, let's assume you have a running group, as you described, and
    you have only the occasional failure of a single site. I will describe
    at least 2 ways a client can get HANDLE_DEAD while your txn integrity
    is still maintained.
    Both examples assume a group of 5 sites, call them A, B, C, D, E
    and site A is the master. You have all sites electable and quorum
    policy.
    In the first example, site E is slower and more remote than the other 4
    sites. So, when A commits a txn, sites B, C, and D quickly apply that
    txn and send an ack. They meet the quorum policy and processing
    on A continues. Meanwhile, E is slow and slowly gets further and
    further behind the rest of the group. At some point, the master runs
    log_archive and removes most of its log files because it has sufficient
    checkpoint history. Then, site E requests a log record from the master
    that is now archived. The master sends a message to E saying it has
    to perform an internal initialization because it is impossible to
    provide that old log record. Site E performs this initialization (under the
    covers and not directly involving the application) but any
    DB handles that were open prior to the initialization will now get
    HANDLE_DEAD because the state of the world has changed and
    they need to be closed and reopened.
    Technically, no txns were lost, the group has still maintained its
    txn integrity because all the other sites have all the txns. But E cannot
    know what may or may not exist as a result of this initialization so
    it must return HANDLE_DEAD.
    In the second example, consider that a network partition has happened
    that leaves A and B running on one side, and C, D, and E on the other.
    A commits a txn. B receives the txn and applies it, and sends an ack.
    Site A never hears from C, D, E and quorum is not met and PERM_FAILED
    is returned. In the meantime, C, D, and E notice that they no longer can
    communicate with the master and hold an election. Since they have a
    majority of the sites, they elect one, say C to be a new master. Now,
    since A received PERM_FAILED, it stops. If the network partition
    is resolved, B will find the new master C. However, B still has the
    txn that was not sufficiently ack'ed. So, when B sync's up with C, it
    will unroll that txn. And then HANDLE_DEAD will be returned on B.
    In this case, the unrolled txn was never confirmed as durable by A to
    any application, but B can get the HANDLE_DEAD return. Again, B
    should close and reopen the database.
    I think what you are describing provides the best guarantees,
    but I don't think you can eliminate the possibility of getting that error
    return on a client. But you can know about your txn durability on the
    master.
    You might also consider master leases. You can find a description of
    them in the Reference Guide. Leases provide additional guarantees
    for replication.
    Sue LoVerso
    Oracle
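The quorum acknowledgement policy described above can be reduced to a toy model (an illustration, not Berkeley DB's implementation): a transaction is only reported durable if a majority of electable sites, master included, acknowledged it; otherwise the application sees a PERM_FAILED event.

```python
def commit_outcome(acks_received, electable_sites):
    """Quorum policy: durable only if a majority of electable sites
    (the master counts as one implicit ack) acknowledged the txn."""
    majority = electable_sites // 2 + 1
    acked = acks_received + 1   # the master itself counts as an ack
    return "DURABLE" if acked >= majority else "PERM_FAILED"

# Five sites A..E with master A, as in the examples above.
# Slow-site scenario: B, C, and D ack while E lags -> still durable.
print(commit_outcome(3, 5))   # -> DURABLE
# Partition scenario: only B acks -> quorum is not met.
print(commit_outcome(1, 5))   # -> PERM_FAILED
```

Note that, as the reply explains, this only bounds durability at the master; a client can still see HANDLE_DEAD locally.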

  • Occasional Latency when committing transactions using implicit transactions

    Hello,
    We have a server application that uses ODBC and ODP.NET. We've noticed an intermittent problem at runtime after we have called TransactionScope.Complete().
    We perform the following steps:
    1) Call several stored procedures via ODP.NET. These are all nested within a single ‘using TransactionScope’ statement.
    For the purposes of this example the final stored procedure populates a table named ODPTestTable.
    2) Call Complete() on the TransactionScope object.
    3) Subsequently make a call to the ODPTestTable via a stored procedure using ODBC (on a different thread).
    We have an occasional problem where data that was inserted in the ODPTestTable in steps 1) and 2) does not appear in that table until after the ODBC call in step 3) has completed.
    We are using Oracle 10.2.0.4.0, Oracle.DataAccess.dll version 2.102.4.0.
    Is anybody aware of any latency problems with ODP.NET when using the TransactionScope.Complete() method?
    Any help would be greatly appreciated.
    Thanks very much and best wishes,
    Louise.

    Hi Louise,
    TransactionScope gives you a distributed transaction with Oracle, even if that is not your intent - ie, even if there's only a single database/connection - since promoting a transaction from local to distributed is not currently supported. As such, this may be expected behavior. At least, a similar complaint was examined a while back in bug 1541648 (closed as "not a bug") and here's the resulting explanation:
    This may be due to the way the OLETx ITransaction::Commit() method behaves. After
    phase 1 of the 2PC (i.e. the prepare phase) if all is successful, commit can
    return even if the resource managers haven't actually committed. After all the
    successful "prepare" is a guarantee that the resource managers cannot
    arbitrarily abort after this point. Thus even though a resource manager
    couldn't commit because it didn't receive a "commit" notification from the
    MSDTC (due to say a communication failure), the component's commit request
    returns successfully. If you select rows from the table(s) immediately you may
    sometimes see the actual commit occur in the database after you have already
    executed your select. Your select will not therefore see the new rows due to
    consistent read semantics. There is nothing we can do about this in Oracle as
    the "commit success after successful phase 1" optimization is part of the
    MSDTC's implementation.
    ODP/ORAMTS does support promotable transactions with the current 11g beta provider, and your database needs to be 11g or higher for that to work.
    Hope it helps,
    Greg
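Given the explanation above — the distributed commit can return before the resource manager has actually committed — a common client-side mitigation (illustrative, not ODP.NET-specific) is to poll briefly until the expected row becomes visible instead of assuming it is immediately readable:

```python
import time

def wait_until_visible(fetch, timeout=2.0, interval=0.05):
    """Poll fetch() until it returns a row or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        row = fetch()
        if row is not None:
            return row
        time.sleep(interval)
    raise TimeoutError("row did not become visible in time")

# Hypothetical stand-in for the ODBC query against ODPTestTable: it
# misses twice (the commit has not landed yet) and then sees the row.
calls = {"n": 0}
def fetch():
    calls["n"] += 1
    return ("ODPTestTable-row",) if calls["n"] >= 3 else None

row = wait_until_visible(fetch)
print(row)   # -> ('ODPTestTable-row',)
```

The `fetch` function here is a fake; in the real application it would be the step-3 stored-procedure call over ODBC.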

  • Committing transaction problem

    Hi,
    I have a JDev 9.0.3/WLS 6.1 SP1/SQL Server 2000 scenario with BC4J deployed as an EJB session bean (BMT), but when I make an update or insert statement the transaction is not committed. Why? Here is the SQL trace:
    exec sp_cursoropen @P1 output, N'SELECT ID_Medico, Nombre, A_Paterno, A_Materno, Ced_Profesional, Calle, Numero, Cod_Postal, Colonia, Del_Municipio, Ent_Federal, Pager, Tel_Celular, Univer_Egreso, RFC, CURP, Certificación, Consejo, Otras_activ_en, Fecha_Ingreso, Fecha_Nacimiento, Tel_particular, Id_Tipo, Id_Status, FechaReg FROM dbo.Medico WHERE ID_Medico=@P1', @P2 output, @P3 output, @P4 output, N'@P1 int ', 46
    select @P1, @P2, @P3, @P4
    go
    exec sp_cursorfetch 180150002, 2, 1, 256
    go
    exec sp_cursorfetch 180150002, 2, 1, 256
    go
    exec sp_cursorclose 180150002
    go
    exec sp_executesql N' DELETE FROM dbo.Medico WHERE ID_Medico=@P1', N'@P1 int ', 46
    go
    IF @@TRANCOUNT > 0 ROLLBACK TRAN
    go
    SET IMPLICIT_TRANSACTIONS OFF
    go
    SET IMPLICIT_TRANSACTIONS ON
    go
    SELECT N'Testing Connection...'
    go
    EXECUTE msdb.dbo.sp_sqlagent_get_perf_counters
    go
    IF @@TRANCOUNT > 0 ROLLBACK TRAN
    go
    SET IMPLICIT_TRANSACTIONS OFF
    go
    SET IMPLICIT_TRANSACTIONS ON
    Thanks for reply.


  • Connection is closed after transaction commit

    WebLogic 10.3.0.0, Oracle 10gXE, JPA is provided by EclipseLink v1.1.2.v20090612-r4475.
    Transactions are managed by WLS.
    There is a stateless bean
    @Stateless()
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    @TransactionManagement(value = TransactionManagementType.CONTAINER)
    public class ServiceFacadeBean {
        @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
        public void processResponse(....) {
            // operations with DB
        }
    }
    which is instantiated by two concurrent threads.
    After one of the threads commits its transaction, the other finds the connection (or just the statement) closed. For instance:
    ####<03.02.2010 10:10:04 MSK> <Notice> <Stdout> <spbnb-prc32> <AdminServer> <[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1265181004591> <BEA-000000> <[EL Finer]: 2010-02-03 10:10:04.591--UnitOfWork(10135841)--Thread(Thread[[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads])--TX afterCompletion callback, status=COMMITTED>
    ####<03.02.2010 10:10:04 MSK> <Notice> <Stdout> <spbnb-prc32> <AdminServer> <[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1265181004591> <BEA-000000> <[EL Finer]: 2010-02-03 10:10:04.591--UnitOfWork(10135841)--Thread(Thread[[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads])--end unit of work commit>
    ####<03.02.2010 10:10:04 MSK> <Notice> <Stdout> <spbnb-prc32> <AdminServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1265181004607> <BEA-000000> <[EL Warning]: 2010-02-03 10:10:04.591--UnitOfWork(10135841)--Thread(Thread[[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads])--Local Exception Stack:
    Exception [EclipseLink-4002] (Eclipse Persistence Services - 1.1.2.v20090612-r4475): org.eclipse.persistence.exceptions.DatabaseException
    Internal Exception: java.sql.SQLException: Statement has already been closed
    Error Code: 0
    Call: UPDATE TASK SET TSK_RESULT = ?, TSK_PROCESS_STATE = ?, TSK_END_TS = ?, TSK_CREATE_TS = ?, TSK_CHANGE_TS = ? WHERE (TSK_ID = ?)
         bind => [<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    , 4, 2010-02-03 10:10:04.435, 2010-02-03 10:10:04.435, 2010-02-03 10:10:04.435, 41]
    Query: UpdateObjectQuery(com.tsystems.tenergy.smp.mds.persistence.domain.Task@7bc43e)
         at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:332)
         at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:656)
         at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:501)
         at org.eclipse.persistence.internal.sessions.AbstractSession.executeCall(AbstractSession.java:872)
         at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:205)
         at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:191)
         at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.updateObject(DatasourceCallQueryMechanism.java:686)
         at org.eclipse.persistence.internal.queries.StatementQueryMechanism.updateObject(StatementQueryMechanism.java:430)
         at org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.updateObjectForWriteWithChangeSet(DatabaseQueryMechanism.java:1135)
         at org.eclipse.persistence.queries.UpdateObjectQuery.executeCommitWithChangeSet(UpdateObjectQuery.java:84)
         at org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:286)
         at org.eclipse.persistence.queries.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:58)
         at org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:664)
         at org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:583)
         at org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:109)
         at org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:86)
         at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2756)
         at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1181)
         at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1165)
         at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1125)
         at org.eclipse.persistence.internal.sessions.CommitManager.commitChangedObjectsForClassWithChangeSet(CommitManager.java:232)
         at org.eclipse.persistence.internal.sessions.CommitManager.commitAllObjectsForClassWithChangeSet(CommitManager.java:163)
         at org.eclipse.persistence.internal.sessions.CommitManager.commitAllObjectsWithChangeSet(CommitManager.java:116)
         at org.eclipse.persistence.internal.sessions.AbstractSession.writeAllObjectsWithChangeSet(AbstractSession.java:3175)
         at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitToDatabase(UnitOfWorkImpl.java:1299)
         at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitToDatabase(RepeatableWriteUnitOfWork.java:469)
         at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitToDatabaseWithChangeSet(UnitOfWorkImpl.java:1399)
         at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.issueSQLbeforeCompletion(UnitOfWorkImpl.java:3023)
         at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.issueSQLbeforeCompletion(RepeatableWriteUnitOfWork.java:224)
         at org.eclipse.persistence.transaction.AbstractSynchronizationListener.beforeCompletion(AbstractSynchronizationListener.java:157)
         at org.eclipse.persistence.transaction.JTASynchronizationListener.beforeCompletion(JTASynchronizationListener.java:68)
         at weblogic.transaction.internal.ServerSCInfo.doBeforeCompletion(ServerSCInfo.java:1217)
         at weblogic.transaction.internal.ServerSCInfo.callBeforeCompletions(ServerSCInfo.java:1195)
         at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:118)
         at weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAndChain(ServerTransactionImpl.java:1302)
         at weblogic.transaction.internal.ServerTransactionImpl.globalPrePrepare(ServerTransactionImpl.java:2114)
         at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:263)
         at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:230)
         at weblogic.ejb.container.internal.BaseRemoteObject.postInvoke1(BaseRemoteObject.java:621)
         at weblogic.ejb.container.internal.StatelessRemoteObject.postInvoke1(StatelessRemoteObject.java:60)
         at weblogic.ejb.container.internal.BaseRemoteObject.postInvokeTxRetry(BaseRemoteObject.java:441)
         at com.tsystems.tenergy.smp.mds.service.impl.MDSServiceFacadeBean_dv9pfe_MDSServiceFacadeImpl.processMCSResponse(MDSServiceFacadeBean_dv9pfe_MDSServiceFacadeImpl.java:243)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at weblogic.ejb.container.internal.RemoteBusinessIntfProxy.invoke(RemoteBusinessIntfProxy.java:69)
         at $Proxy174.processMCSResponse(Unknown Source)
         at com.tsystems.tenergy.smp.mds.access.impl.MCSResponseProcessor.processMessage(MCSResponseProcessor.java:44)
         at com.tsystems.tenergy.smp.mds.access.MDSJMSServiceAdaptor.handleMessage(MDSJMSServiceAdaptor.java:41)
         at com.tsystems.tenergy.smp.common.access.impl.AbstractJMSServiceAdaptor.onMessage(AbstractJMSServiceAdaptor.java:55)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:281)
         at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:187)
         at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:154)
         at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:126)
         at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:114)
         at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:176)
         at com.bea.core.repackaged.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
         at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:176)
         at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:126)
         at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:114)
         at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:176)
         at com.bea.core.repackaged.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:210)
         at $Proxy194.onMessage(Unknown Source)
         at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:466)
         at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:371)
         at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:327)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4547)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:4233)
         at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3709)
         at weblogic.jms.client.JMSSession.access$000(JMSSession.java:114)
         at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5058)
         at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused by: java.sql.SQLException: Statement has already been closed
         at weblogic.jdbc.wrapper.Statement.checkStatement(Statement.java:305)
         at weblogic.jdbc.wrapper.Statement.preInvocationHandler(Statement.java:116)
         at weblogic.jdbc.wrapper.PreparedStatement_weblogic_jdbc_base_BasePreparedStatement.getWarnings(Unknown Source)
         at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:638)
         ... 77 more>
    I would be very thankful for any help... I've been fighting this bug for four days now and still have no result.

    You need to make sure all your JDBC objects are method-level variables, not class or instance variables.
    Otherwise two threads may end up using the same connection or statement, and the
    first one to complete a transaction can cause the connection and its subobjects to
    be closed.
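    The point can be sketched with a small stand-in class (FakeStatement below is hypothetical, not the real java.sql.Statement): a statement held in a shared field can be closed underneath one code path by another, which is exactly what happens when two threads share it, while a method-local statement is owned by a single call.

```java
// Sketch only: FakeStatement is a stand-in for java.sql.Statement, used to
// show why sharing a statement across callers/threads leads to
// "Statement has already been closed" errors.
class FakeStatement {
    private boolean closed = false;

    void execute() {
        if (closed) {
            throw new IllegalStateException("Statement has already been closed");
        }
    }

    void close() {
        closed = true;
    }
}

public class ScopeDemo {
    // BAD: a statement shared via a field - whoever finishes first closes it
    // for everyone else.
    static FakeStatement shared = new FakeStatement();

    static boolean sharedPatternWorks() {
        shared.close();          // the "first" caller completes and cleans up
        try {
            shared.execute();    // the "second" caller now fails
            return true;
        } catch (IllegalStateException e) {
            return false;
        }
    }

    // GOOD: a method-local statement - each call owns and closes its own.
    static boolean localPatternWorks() {
        FakeStatement local = new FakeStatement();
        local.execute();
        local.close();
        return true;
    }

    public static void main(String[] args) {
        System.out.println(sharedPatternWorks()); // false
        System.out.println(localPatternWorks());  // true
    }
}
```

    With real JDBC the same rule applies: obtain the Connection, Statement and ResultSet inside the method that uses them, and close them there.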

  • Confused about transaction, checkpoint, normal recovery.

    After reading the documentation PDF, I started getting confused about its description.
    Rephrased from the paragraph on the transaction pdf:
    "When database records are created, modified, or deleted, the modifications are represented in the BTree's leaf nodes. Beyond leaf node changes, database record modifications can also cause changes to other BTree nodes and structures"
    "if your writes are transaction-protected, then every time a transaction is committed the leaf nodes(and only leaf nodes) modified by that transaction are written to JE logfiles on disk."
    "Normal recovery, then is the process of recreating the entire BTree from the information available in the leaf nodes."
    According to the above description, I have following concerns:
    1. if I open a new environment and db, insert/modify/delete several million records, and without reopening the environment, then normal recovery is not run. That means that, so far, the BTree is not complete? Will that affect the query efficiency? Or even worse, will that output incorrect results?
    2. if my above thinking is correct, then every time I finish committing transactions, I need to let the checkpoint run in order to recreate the whole BTree. If my above thinking is not correct, then that means I don't need to care about anything: just call transaction.commit(), or db.sync(), and let JE take care of all the details. (I hope this is true :>)
    michael.

    http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/chkpoint.html
    Checkpoints are normally performed by the checkpointer background thread, which is always running. Like all background threads, it is managed using the je.properties file. Currently, the only checkpointer property that you may want to manage is je.checkpointer.bytesInterval. This property identifies how much JE's log files can grow before a checkpoint is run. Its value is specified in bytes. Decreasing this value causes the checkpointer thread to run checkpoints more frequently. This will improve the time that it takes to run recovery, but it also increases the system resources (notably, I/O) required by JE.
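    For reference, the bytesInterval knob quoted above is set in je.properties in the environment home directory; a minimal example (the 20 MB value below is arbitrary, for illustration only, not a recommendation):

```properties
# je.properties in the JE environment home directory
# Run a checkpoint after roughly every 20 MB of log growth (example value).
# Smaller values = faster recovery, more checkpoint I/O.
je.checkpointer.bytesInterval=20000000
```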

  • Single-statement 'write consistency' on read committed?

    Please note that in the following I'm only concerned about single-statement read committed transactions. I do realize that for a multi-statement read committed transaction Oracle does not guarantee transaction set consistency without techniques like select for update or explicit hand-coded locking.
    According to the documentation Oracle guarantees 'statement-level transaction set consistency' for queries in read committed transactions. In many cases, Oracle also provides single-statement write consistency. However, when an update based on a consistent read tries to overwrite changes committed by other transactions after the statement started, it creates a write conflict. Oracle never reports write conflicts on read committed. Instead, it automatically handles them based on the new values for the target table columns referenced by the update.
    Let's consider a simple example. Again, I do realize that the following design might look strange or even sloppy, but the ability to produce a quality design when needed is not an issue here. I'm simply trying to understand the Oracle's behavior on write conflicts in a single-statement read committed transaction.
    A valid business case behind the example is rather common - a financial institution with two-stage funds transfer processing. First, you submit a transfer (put the transfer amounts in the 'pending' column of the accounts) while the whole financial transaction is in doubt. Second, after you have got all the necessary confirmations, you clear all the pending transfers, making the corresponding account balance changes, resetting the pending amounts and marking the accounts cleared by setting the cleared date. Neither stage should leave the data in an inconsistent state: sum(amount) over all rows should not change, and sum(pending) over all rows should always be 0 after either stage:
    Setup:
    create table accounts (
      acc int primary key,
      amount int,
      pending int,
      cleared date
    );
    Initially the table contains the following:
    ACC AMOUNT PENDING CLEARED
    1 10 -2
    2 0 2
    3 0 0 26-NOV-03
    So, there is a committed database state with a pending funds transfer of 2 dollars from acc 1 to acc 2. Let's submit another transfer of 1 dollar from acc 1 to acc 3 but do not commit it yet in SQL*Plus Session 1:
    update accounts
    set pending = pending - 1, cleared = null where acc = 1;
    update accounts
    set pending = pending + 1, cleared = null where acc = 3;
    ACC AMOUNT PENDING CLEARED
    1 10 -3
    2 0 2
    3 0 1
    And now let's clear all the pending transfers in SQL*Plus Session 2 in a single-statement read-committed transaction:
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null;
    Session 2 naturally blocks. Now commit the transaction in session 1. Session 2 readily unblocks:
    ACC AMOUNT PENDING CLEARED
    1 7 0 26-NOV-03
    2 2 0 26-NOV-03
    3 0 1
    Here we go - the results produced by the single-statement read committed transaction in session 2 are inconsistent: the second funds transfer has not completed in full. Session 2 should have produced the following instead:
    ACC AMOUNT PENDING CLEARED
    1 7 0 26-NOV-03
    2 2 0 26-NOV-03
    3 1 0 26-NOV-03
    Please note that we would have gotten the correct results if we ran the transactions in session 1 and session 2 serially. Please also note that no update has been lost. The type of isolation anomaly observed is usually referred to as a 'read skew', which is a variation of 'fuzzy read' a.k.a. 'non-repeatable read'.
    But if in the session 2 instead of:
    -- scenario 1
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null;
    we issued:
    -- scenario 2
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null and pending <> 0;
    or even:
    -- scenario 3
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null and (pending * 0) = 0;
    We'd have gotten what we really wanted.
    I'm very well aware of the 'select for update' or serializable isolation level solution for the problem. Also, I could present a working example for precisely the above scenario for a major database product, producing the results that I would consider to be correct. That is, the interleaved execution of the transactions has the same effect as if they completed serially. Naturally, no extra hand-coded locking techniques like select for update or explicit locking are involved.
    And now let's try to understand what just has happened. Playing around with similar trivial scenarios one could easily figure out that Oracle clearly employs different strategies when handling update conflicts based on the new values for the target table columns, referenced by the update. I have observed the following cases:
    A. The column values have not changed: Oracle simply resumes using the current version of the row. It's perfectly fine because the database view presented to the statement (and hence the final state of the database after the update) is no different from what would have been presented if there had been no conflict at all.
    B. The row (including the columns being updated) has changed, but the predicate columns haven't (see scenario 1): Oracle resumes using the current version of the row. Formally, this is acceptable too as the ANSI read committed by definition is prone to certain anomalies anyway (including the instance of a 'read skew' we've just observed) and leaving behind somewhat inconsistent data can be tolerated as long as the isolation level permits it. But please note - this is not a 'single-statement write consistent' behavior.
    C. Predicate columns have changed (see scenario 2 or 3): Oracle rolls back and then restarts the statement, making it look as if it did indeed present a consistent view of the database to the update statement. However, what seems confusing is that sometimes Oracle restarts when it isn't necessary, e.g. when new values for predicate columns don't change the predicate itself (scenario 3). In fact, it's a bit more complicated - I have also observed restarts on some index column changes, and triggers and constraints change things a bit too - but for the sake of simplicity let's not go there yet.
    And here come the questions, assuming that (B) is not a bug, but the expected behavior:
    1. Does anybody know why it has never been documented in detail when exactly Oracle restarts automatically on write conflicts, given that there are cases when it should restart but won't? Many developers would hesitate to depend on the feature as long as it's not 'official'. Hence, the lack of information makes it virtually useless for critical database applications, and a careful app developer would be forced to use either the serializable isolation level or hand-coded locking for a single-statement update transaction.
    If, on the other hand, it's been documented, could anybody please point me to the bit in the documentation that:
    a) Clearly states that Oracle might restart an update statement in a read committed transaction because otherwise it would produce inconsistent results.
    b) Unambiguously explains the circumstances when Oracle does restart.
    c) Gives clear and unambiguous guidelines on when Oracle doesn't restart and therefore when to use techniques like select for update or the serializable isolation level in a single-statement read committed transaction.
    2. Does anybody have a clue what the motivation was for this peculiar design choice of restarting for only a certain subset of write conflicts? What was so special about them? Since (B) is acceptable for read committed, why does Oracle bother with automatic restarts in (C) at all?
    3. If, on the other hand, Oracle envisions the statement-level write consistency as an important advantage over other mainstream DBMSs as it clear from the handling of (C), does anybody have any idea why Oracle wouldn't fix (B) using well-known techniques and always produce consistent results?

    I'm intrigued that this posting has attracted so little interest. The behaviour described is not intuitive and seems to be undocumented in Oracle's manuals.
    Does the lack of response indicate:
    (1) Nobody thinks this is important
    (2) Everybody (except me) already knew this
    (3) Nobody understands the posting
    For the record, I think it is interesting. Having spent some time investigating this, I believe the behaviour described is correct, consistent and understandable. But I would be happier if Oracle documented it in the transaction sections of the manual.
    Cheers, APC

  • Having a problem with my Oracle transaction. Error: InvalidOperationException.

    I use the same OracleCommand for two different SQL statements in one transaction. One is a simple select, the other is an insert. What I want to do is select the relevant data from the related table (by the select statement) and insert it into another one (by the insert statement). When executing these commands (the select via cmd.ExecuteReader(CommandBehavior.CloseConnection) and the insert via cmd.ExecuteNonQuery()) no exception is thrown; however, when committing the transaction it throws an exception that says "InvalidOperationException was caught". When I try two different commands, it still gives the same error. Could someone please help me? Thanks in advance...
    Here is my code:
    db = new Database(); // here is my database class.
    db.Connect();
    OracleCommand cmd = new OracleCommand();
    cmd.Connection = db.Connection;
    cmd.CommandType = CommandType.Text;
    OracleDataReader dr = null;
    try
    {
        MiktarHesapla();
        cmd.CommandText = "SELECT t2.bursno, t2.adi, t2.soyadi, t2.bokod, t2.okulu, t2.sinifi, t2.bakod, t2.bursyerikodu, t1.bursyeriadi, t2.bturkod, t2.babaadi, t2.anneadi, t1.basvurutarihi " +
            "FROM DONEMLER t1 INNER JOIN BURSLAR t2 ON t1.bursno = t2.bursno " +
            "WHERE t1.yil = :YIL and t1.donem = :DONEM ORDER BY t2.bursno ";
        cmd.Prepare();
        cmd.Parameters.Clear();
        cmd.Parameters.Add(":YIL", fYIL);
        cmd.Parameters.Add(":DONEM", fDONEM);
        dr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
        while (dr.Read())
        {
            fBURSNO = dr["BURSNO"].ToString();
            fADI = dr["ADI"].ToString();
            fSOYADI = dr["SOYADI"].ToString();
            fOGRENIMI = Convert.ToInt32(dr["BOKOD"].ToString());
            fOKULU = dr["OKULU"].ToString();
            fSINIFI = dr["SINIFI"].ToString();
            fBURSALAN = Convert.ToInt32(dr["BAKOD"].ToString());
            fBURSYERI = Convert.ToInt32(dr["BURSYERIKODU"].ToString());
            fBURSYERIADI = dr["BURSYERIADI"].ToString();
            fBURSTURU = Convert.ToInt32(dr["BTURKOD"].ToString());
            fBABAADI = dr["BABAADI"].ToString();
            fANAADI = dr["ANNEADI"].ToString();
            fBASVURUTARIHI = Convert.ToDateTime(dr["BASVURUTARIHI"].ToString());
            cmd.CommandText = "INSERT INTO CIKTI1(BURSNO,ADI,SOYADI,OGRENIMI,OKULU,SINIFI,BURSALAN,MIKTAR,BURSYERI,BURSYERIADI,BURSTURU,BABAADI,ANAADI,BASVURUTARIHI)" +
                " VALUES (:BURSNO,:ADI,:SOYADI,:OGRENIMI,:OKULU,:SINIFI,:BURSALAN,:MIKTAR,:BURSYERI,:BURSYERIADI,:BURSTURU,:BABAADI,:ANAADI,:BASVURUTARIHI)";
            cmd.Prepare();
            cmd.Parameters.Clear();
            cmd.Parameters.Add(":BURSNO", fBURSNO);
            cmd.Parameters.Add(":ADI", fADI);
            cmd.Parameters.Add(":SOYADI", fSOYADI);
            cmd.Parameters.Add(":OGRENIMI", fOGRENIMI);
            cmd.Parameters.Add(":OKULU", fOKULU);
            cmd.Parameters.Add(":SINIFI", fSINIFI);
            cmd.Parameters.Add(":BURSALAN", fBURSALAN);
            cmd.Parameters.Add(":MIKTAR", fMIKTAR);
            cmd.Parameters.Add(":BURSYERI", fBURSYERI);
            cmd.Parameters.Add(":BURSYERIADI", fBURSYERIADI);
            cmd.Parameters.Add(":BURSTURU", fBURSTURU);
            cmd.Parameters.Add(":BABAADI", fBABAADI);
            cmd.Parameters.Add(":ANAADI", fANAADI);
            if (fBASVURUTARIHI.ToShortDateString() != "01.01.0001")
            { cmd.Parameters.Add(":BASVURUTARIHI", fBASVURUTARIHI.ToShortDateString()); }
            else { cmd.Parameters.Add(":BASVURUTARIHI", DBNull.Value); }
            cmd.ExecuteNonQuery();
        }
        db.Transaction.Commit();
        return true;
    }
    catch
    {
        return false;
    }
    finally
    {
        db.Disconnect();
        dr.Close();
    }

    hello,
    there are two ways to do this :
    1) submit two jobs, one that prints and one that creates the PDF
    2) use reports advanced distribution to print and create a PDF file on the server as part of the same job, and then use web.show_document to bring up the PDF in the browser.
    however, this brings up the question of why exactly you want to print and display at the same time. why not display it and let the user print from Acrobat Reader?
    thanks,
    ph.

  • Local transaction support when BPEL invokes JCA adapter

    Hi all,
    I've implemented a BPEL process consisting of multiple invoke activities to my (custom) JCA Resource Adapter which connects to an EIS.
    My concern is to support local transactions. Here are some code snippets describing what I've done so far.
    Declare the transaction support at deployment time (ra.xml)
    <transaction-support>LocalTransaction</transaction-support>
    Implementer class of the ManagedConnection interface:
    public class MyManagedConnection implements ManagedConnection {
         public XAResource getXAResource() throws ResourceException {
             throw new NotSupportedException("XA Transactions not supported");
         }
         public LocalTransaction getLocalTransaction() throws ResourceException {
             return new MyLocalTransaction(this);
         }
         public void sendTheEvent(int eventType, Object connectionHandle) {
              ConnectionEvent event = new ConnectionEvent(this, eventType);
              if (connectionHandle != null) {
                 event.setConnectionHandle(connectionHandle);
              }
              ConnectionEventListener listener = getEventListener();
              switch (eventType) {
               case ConnectionEvent.CONNECTION_CLOSED:
                    listener.connectionClosed(event); break;
               case ConnectionEvent.LOCAL_TRANSACTION_STARTED:
                    listener.localTransactionStarted(event); break;
               case ConnectionEvent.LOCAL_TRANSACTION_COMMITTED:
                    listener.localTransactionCommitted(event); break;
               case ConnectionEvent.LOCAL_TRANSACTION_ROLLEDBACK:
                    listener.localTransactionRolledback(event); break;
               case ConnectionEvent.CONNECTION_ERROR_OCCURRED:
                    listener.connectionErrorOccurred(event); break;
               default: break;
              }
         }
    }
    Implementer class of the LocalTransaction interface:
    public class MyLocalTransaction implements javax.resource.spi.LocalTransaction {
         private MyManagedConnection mc = null;
         public MyLocalTransaction(MyManagedConnection mc) {
             this.mc = mc;
         }
         @Override
         public void begin() throws ResourceException {
             mc.sendTheEvent(ConnectionEvent.LOCAL_TRANSACTION_STARTED, mc);
         }
         @Override
         public void commit() throws ResourceException {
             eis.commit(); // eis-specific method
             mc.sendTheEvent(ConnectionEvent.LOCAL_TRANSACTION_COMMITTED, mc);
         }
         @Override
         public void rollback() throws ResourceException {
             eis.rollback(); // eis-specific method
             mc.sendTheEvent(ConnectionEvent.LOCAL_TRANSACTION_ROLLEDBACK, mc);
         }
    }
    Upon BPEL process completion, MyLocalTransaction.commit() is called. However, localTransactionCommitted(event) fails and I get the following error:
    Error committing transaction:; nested exception is: weblogic.transaction.nonxa.NonXAException: java.lang.IllegalStateException:
    [Connector:199175]This ManagedConnection is managed by container for its transactional behavior and has been enlisted to JTA transaction by container;
    application/adapter must not call the local transaction begin/commit/rollback API. Reject event LOCAL_TRANSACTION_COMMITTED from adapter.
    Could someone give me some directions on how to proceed?
    My current installation consists of:
    1. Oracle SOA Suite / JDeveoper 11g (11.1.1.4.0),
    2. WebLogic Server 10.3.4
    Thank you for your time,
    George
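    For what it's worth, the error text matches the JCA contract as I understand it: once the container has enlisted the connection in the transaction, the container itself drives the spi.LocalTransaction, and the LOCAL_TRANSACTION_* connection events are meant only for application-level (cci.LocalTransaction) demarcation. A rough sketch of the two demarcation paths, using plain stand-in classes (these are hypothetical, not the real javax.resource API):

```java
import java.util.ArrayList;
import java.util.List;

// Records what "the adapter" did, so the two paths can be compared.
class EventLog {
    static final List<String> events = new ArrayList<>();
}

// Stand-in for javax.resource.spi.LocalTransaction: when the CONTAINER
// calls this, the adapter commits the EIS work but sends no events.
class SpiLocalTransaction {
    void commit() {
        EventLog.events.add("eis.commit");
    }
}

// Stand-in for the application-level (cci) local transaction: only this
// path notifies listeners, so the container can track app demarcation.
class CciLocalTransaction {
    private final SpiLocalTransaction spi;
    CciLocalTransaction(SpiLocalTransaction spi) { this.spi = spi; }
    void commit() {
        spi.commit();
        EventLog.events.add("LOCAL_TRANSACTION_COMMITTED");
    }
}

public class DemarcationDemo {
    public static void main(String[] args) {
        // Container-managed path (what happens under BPEL): no event.
        new SpiLocalTransaction().commit();

        // Application-managed path: the event is sent.
        new CciLocalTransaction(new SpiLocalTransaction()).commit();

        System.out.println(EventLog.events);
        // [eis.commit, eis.commit, LOCAL_TRANSACTION_COMMITTED]
    }
}
```

    Under this reading, the fix would be for MyLocalTransaction.commit()/rollback() to perform the EIS commit/rollback but not send LOCAL_TRANSACTION_COMMITTED/ROLLEDBACK when it is the container calling them, reserving those events for application-driven demarcation.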

    Hi Vlad, thank you again for your immediate response.
    With regards to your first comment. I already have been using logs, so I confirm that neither javax.resource.spi.LocalTransaction#begin() nor javax.resource.spi.LocalTransaction#commit()
    is called in the 2nd run.
    I think it might be helpful for our discussion if I give you the call trace for a successful (the first one) run.
    After I deploy my custom JCA Resource Adapter, I create a javax.resource.cci.ConnectionFactory through Oracle EM web application and the following methods are called:
    -- MyManagedConnectionFactory()
    (Constructor of the implementer class of the javax.resource.spi.ManagedConnectionFactory interface)
    -- javax.resource.spi.ManagedConnectionFactory#createManagedConnection(javax.security.auth.Subject, javax.resource.spi.ConnectionRequestInfo)
    -- MyManagedConnection()
    (Constructor of the implementer class of the javax.resource.spi.ManagedConnection interface)
    -- javax.resource.spi.ManagedConnection#addConnectionEventListener(javax.resource.spi.ConnectionEventListener)
    -- javax.resource.spi.ManagedConnection#getLocalTransaction()
    -- MySpiLocalTransaction(MyManagedConnection)
    (Constructor of the implementer class of the javax.resource.spi.LocalTransaction interface)
    -- javax.resource.spi.ManagedConnectionFactory#createConnectionFactory(javax.resource.spi.ConnectionManager)
    -- MyConnectionFactory(javax.resource.spi.ManagedConnectionFactory, javax.resource.spi.ConnectionManager)
    (Constructor of the implementer class of the javax.resource.cci.ConnectionFactory interface)
    The BPEL process consists of multiple invoke activities to my (custom) JCA Resource Adapter which connects to an EIS. The client tester invokes the BPEL process, and execution starts.
    Here is the method call trace for the last invoke (after which, commit is executed). The logs for all the rest invocations are identical:
    -- javax.resource.cci.ConnectionFactory#getConnection()
    -- javax.resource.spi.ManagedConnection#getConnection(javax.security.auth.Subject, javax.resource.spi.ConnectionRequestInfo)
    -- MyConnection(MyManagedConnection)
    (Constructor of the implementer class of the javax.resource.cci.Connection interface)
    -- javax.resource.cci.Connection#close()
    (I don't understand why close() is called here, any idea ?)
    -- javax.resource.cci.ConnectionFactory#getConnection()
    -- javax.resource.spi.ManagedConnection#getConnection(javax.security.auth.Subject, javax.resource.spi.ConnectionRequestInfo)
    -- MyConnection(MyManagedConnection)
    (Constructor of the implementer class of the javax.resource.cci.Connection interface)
    -- javax.resource.cci.Connection#createInteraction()
    -- MyInteraction(javax.resource.cci.Connection)
    (Constructor of the implementer class of the javax.resource.cci.Interaction interface)
    -- javax.resource.cci.Interaction#execute(javax.resource.cci.InteractionSpec, javax.resource.cci.Record, javax.resource.cci.Record)
    -- javax.resource.spi.LocalTransaction#commit()
    I would expect that after the last commit() - meaning that the BPEL process is done, and its state is "Completed" - WebLogic Server would call the following:
    javax.resource.cci.Connection#close()
    However it doesn't. Am I missing something?
