Transactional/Consistent properties

I'm thinking that it would be very nice to have related properties updated all at the same time, only firing their events once they are all in a consistent state. So for example, if I have a widthProperty and a heightProperty, and a third property that depends on them (areaProperty), I would like to make it so that areaProperty is always in a consistent state.
However, as soon as I change either widthProperty or heightProperty, areaProperty will get recalculated, and since it queries the other width/height property it will calculate a non-existent area as an intermediate value.
So, I'm wondering if there is something I can do about it. This example is simplified; it might be about a dozen related properties, not just two.
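For concreteness, here is a minimal sketch of the problem (JavaFX; the class name and the println are just illustration):

import javafx.beans.binding.DoubleBinding;
import javafx.beans.property.SimpleDoubleProperty;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;

public class AreaExample {
    public static void main(String[] args) {
        SimpleDoubleProperty width  = new SimpleDoubleProperty(2);
        SimpleDoubleProperty height = new SimpleDoubleProperty(3);
        DoubleBinding area = width.multiply(height);

        // Fires on every dependency change, so it also observes the
        // intermediate 4x3 state that never "really" existed.
        area.addListener(new ChangeListener<Number>() {
            @Override
            public void changed(ObservableValue<? extends Number> obs,
                                Number oldV, Number newV) {
                System.out.println("area = " + newV);
            }
        });

        width.set(4);   // prints "area = 12.0" -- the non-existent intermediate area
        height.set(5);  // prints "area = 20.0" -- the intended final value
    }
}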
Anyway, I thought of a few solutions:
1) have a versionProperty that is only updated after all the other properties have been updated -- areaProperty can then trigger off that and only calculate the value when a new version is available. This kind of sucks because direct binding to the "real" properties would then be discouraged.
2) have a containing object, like Dimension in this simplified case, and then do bindings that start with the containing property (like Bindings.select(dimensionProperty, "width")). This way I could create a new Dimension object (certain that it has no listeners), set its width/height and then update the dimensionProperty -- listeners will see all changes at once.
I don't like either of these solutions: the first because it discourages direct bindings, and the second because I'd like to avoid bindings that are only checked at runtime (and that involve creating new objects, which makes it hard to do direct bindings without monitoring the containing object as well).
Are there any other solutions?

Yes, I have noticed the same problem while implementing the same scenario I detailed in my response. Luckily for me, all the values finally converged to the right value; I think the ChangeListener was called multiple times, with the final call having access to the final values. But in the end it was a non-issue for me, as I only ended up relying on the height property and did not have to worry about the width. Still, I see the point.
For 1), does this assume that both dependent properties will change at the same time? I imagine there are three scenarios: one of the dependent properties changes, the other one changes, or both change.
A simple solution for the case where both dependent properties must change together is to use two boolean flags. Since the ChangeListener is an anonymous class, the two booleans can be member variables. The same ChangeListener is applied to both the height and width properties. When changed() is called, set the flag for the ObservableValue that fired (either height or width); once both flags are set, recalculate the area and then reset both flags to false. This still will not take care of the situation where only one dependent property changes and not the other.
If the solution must allow for the scenario of only one dependency being changed, as well as both, then I just do not see how to avoid the intermediate value... though maybe my imagination is lacking today.
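A rough sketch of that flag-based listener (property names follow the earlier example; areaProperty is assumed to be a plain writable property rather than a binding):

import javafx.beans.property.SimpleDoubleProperty;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;

public class FlaggedArea {
    final SimpleDoubleProperty widthProperty  = new SimpleDoubleProperty();
    final SimpleDoubleProperty heightProperty = new SimpleDoubleProperty();
    final SimpleDoubleProperty areaProperty   = new SimpleDoubleProperty();

    public FlaggedArea() {
        ChangeListener<Number> bothChanged = new ChangeListener<Number>() {
            private boolean widthSeen;
            private boolean heightSeen;

            @Override
            public void changed(ObservableValue<? extends Number> obs,
                                Number oldV, Number newV) {
                if (obs == widthProperty)  widthSeen  = true;
                if (obs == heightProperty) heightSeen = true;
                // Recalculate only once both halves of the update have arrived,
                // then reset the flags for the next update.
                if (widthSeen && heightSeen) {
                    areaProperty.set(widthProperty.get() * heightProperty.get());
                    widthSeen  = false;
                    heightSeen = false;
                }
            }
        };
        widthProperty.addListener(bothChanged);
        heightProperty.addListener(bothChanged);
    }
}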
thanks
jose
Edited by: jmart on Sep 10, 2012 1:36 AM

Similar Messages

  • A question about transaction consistency between multiple target tables

    Dear community,
    My replication is ORACLE 11.2.3.7 -> ORACLE 11.2.3.7, both running on Linux x64, and the GG version is 11.2.3.0.
    I'm recovering from an error that occurred when the trail file was moved away while dpump was writing to it.
    After moving the file back, dpump abended with the error:
    2013-12-17 11:45:06  ERROR   OGG-01031  There is a problem in network communication, a remote file problem, encryption keys for target and source do
    not match (if using ENCRYPT) or an unknown error. (Reply received is Expected 4 bytes, but got 0 bytes, in trail /u01/app/ggate/dirdat/RI002496, seqno 2496,
    reading record trailer token at RBA 12999993).
    I googled for it and found no suitable solution except to try "alter extract <dpump>, etrollover".
    After rolling over the trail file, the replicat ended, as expected, with:
    REPLICAT START 1
    2013-12-17 17:56:03  WARNING OGG-01519  Waiting at EOF on input trail file /u01/app/ggate/dirdat/RI002496, which is not marked as complete;
    but succeeding trail file /u01/app/ggate/dirdat/RI002497 exists. If ALTER ETROLLOVER has been performed on source extract,
    ALTER EXTSEQNO must be performed on each corresponding downstream reader.
    So I've issued "alter replicat <repname>, extseqno 2497, extrba 0" but got the following error:
    REPLICAT START 2
    2013-12-17 18:02:48 WARNING OGG-00869 Aborting BATCHSQL transaction. Detected inconsistent result:
    executed 50 operations in batch, resulting in 47 affected rows.
    2013-12-17 18:02:48  WARNING OGG-01137  BATCHSQL suspended, continuing in normal mode.
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:02:48 WARNING OGG-01004 Aborted grouped transaction on 'M.CLIENT_REG', Database error
    1403 (OCI Error ORA-01403: no data found, SQL <UPDATE "M"."CLIENT_REG" SET "CLIENT_CODE" =
    :a1,"CORE_CODE" = :a2,"CP_CODE" = :a3,"IS_LOCKED" = :a4,"BUY_SUMMA" = :a5,"BUY_CHECK_CNT" =
    :a6,"BUY_CHECK_LIST_CNT" = :a7,"BUY_LAST_DATE" = :a8 WHERE "CODE" = :b0>).
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:02:48 WARNING OGG-01154 SQL error 1 mapping LS.CHECK to M.CHECK OCI Error ORA-00001:
    unique constraint (M.CHECK_PK) violated (status = 1). INSERT INTO "M"."CHECK"
    ("CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20).
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    The report stated the following:
    Reading /u01/app/ggate/dirdat/RI002497, current RBA 1149, 0 records
    Report at 2013-12-17 18:02:48 (activity since 2013-12-17 18:02:46)
    From Table LS.MK_CHECK to LSGG.MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    From Table LS.MK_CHECK to LSGG.TL_MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    At that point I came to the conclusion that using etrollover was not a good idea. Nevertheless, I had to upload my data to perform a consistency check.
    My mapping templates are set up as follows:
    LS.CHECK->M.CHECK
    LS.CHECK->M.TL_CHECK
    (such a mapping is set up for every table that is replicated).
    TL_CHECK is a transaction log, as I call it,
    and its peculiar mapping is as follows:
    ignoreupdatebefores
    map LS.CHECK, target M.CHECK, nohandlecollisions;
    ignoreupdatebefores
    map LS.CHECK, target M.TL_CHECK ,colmap(USEDEFAULTS,
    FILESEQNO = @GETENV ("RECORD", "FILESEQNO"),
    FILERBA = @GETENV ("RECORD", "FILERBA"),
    COMMIT_TS = @GETENV( "GGHEADER", "COMMITTIMESTAMP" ),
    FILEOP = @GETENV ("GGHEADER","OPTYPE"), CSCN = @TOKEN("TKN-CSN"),
    RSID = @TOKEN("TKN-RSN"),
    OLD_CODE = before.CODE
    , OLD_STATE = before.STATE
    , OLD_IDENT_TYPE = before.IDENT_TYPE
    , OLD_IDENT = before.IDENT
    , OLD_CLIENT_REG_CODE = before.CLIENT_REG_CODE
    , OLD_SHOP = before.SHOP
    , OLD_BOX = before.BOX
    , OLD_NUM = before.NUM
    , OLD_NUM_VIRT = before.NUM_VIRT
    , OLD_KIND = before.KIND
    , OLD_KIND_ORDER = before.KIND_ORDER
    , OLD_DAT = before.DAT
    , OLD_SUMMA = before.SUMMA
    , OLD_LIST_COUNT = before.LIST_COUNT
    , OLD_RETURN_SELL_CHECK_CODE = before.RETURN_SELL_CHECK_CODE
    , OLD_RETURN_SELL_SHOP = before.RETURN_SELL_SHOP
    , OLD_RETURN_SELL_BOX = before.RETURN_SELL_BOX
    , OLD_RETURN_SELL_NUM = before.RETURN_SELL_NUM
    , OLD_RETURN_SELL_KIND = before.RETURN_SELL_KIND
    , OLD_INSERTED = before.INSERTED
    , OLD_UPDATED = before.UPDATED
    , OLD_REMARKS = before.REMARKS), nohandlecollisions, insertallrecords;
    As the PK violation fired for CHECK, I changed nohandlecollisions to handlecollisions for the LS.CHECK->M.CHECK mapping and restarted the replicat.
    To my surprise, it ended with the following error:
    REPLICAT START 3
    2013-12-17 18:05:55 WARNING OGG-00869 Aborting BATCHSQL transaction. Database error 1 (ORA-00001:
    unique constraint (M.CHECK_PK) violated).
    2013-12-17 18:05:55 WARNING OGG-01137 BATCHSQL suspended, continuing in normal mode.
    2013-12-17 18:05:55 WARNING OGG-01003 Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:05:55 WARNING OGG-00869 OCI Error ORA-00001: unique constraint (M.PK_TL_CHECK)
    violated (status = 1). INSERT INTO "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26).
    2013-12-17 18:05:55 WARNING OGG-01004 Aborted grouped transaction on 'M.TL_CHECK', Database error 1
    (OCI Error ORA-00001: unique constraint (M.PK_TL_CHECK) violated (status = 1). INSERT INTO
    "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26)).
    2013-12-17 18:05:55  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:05:55 WARNING OGG-01154 SQL error 1 mapping LS.CHECK to M.TL_CHECK OCI Error
    ORA-00001: unique constraint (M.PK_TL_CHECK) violated (status = 1). INSERT INTO "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26).
    2013-12-17 18:05:55  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    I expected that BATCHSQL would fail, since it does not support handlecollisions, but I really don't understand why any record was inserted into TL_CHECK and caused a PK violation: I thought that GG guarantees transactional consistency, and that any transaction that causes an error in _ANY_ of the target tables will be rolled back for _EVERY_ target table.
    TL_CHECK has its PK set to (FILESEQNO, FILERBA), plus I have a special column that captures the replication run number, and it clearly shows that the record causing the PK violation was inserted during the previous run (REPLICAT START 2).
    BTW, the report for the last run shows:
    Reading /u01/app/ggate/dirdat/RI002497, current RBA 1149, 1 records
    Report at 2013-12-17 18:05:55 (activity since 2013-12-17 18:05:54)
    From Table LS.MK_CHECK to LSGG.MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    From Table LS.MK_CHECK to LSGG.TL_MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    So can somebody explain how that could happen?

    Write the query of the existing table in the form of a function with PRAGMA AUTONOMOUS_TRANSACTION.
    Examples here:
    http://www.morganslibrary.org/library.html

  • CDC Subscription question concerning transactional consistency

    Hi
    Hopefully a quick question: can anyone confirm whether or not it is the subscription that controls transactional consistency when accessing change records in the change views?
    For example, suppose you capture changes for 20 source tables and create one change source, one change set containing 20 change tables, and one subscription with 20 change views. Since it is the subscription that you specify when performing the extend and purge operations, it is that one subscription that ensures that, when an extend is issued, all change records across the 20 change views are transactionally consistent.
    I have had an alternative design proposed to me: use 20 separate subscriptions, one for each source table and change table. My concern is that this will not ensure transactional consistency across the 20 tables, and that any ETL design (for example, 20 separate threads running in parallel, each doing an extend, process, purge sequence) cannot ensure that the change records in the change views correspond to the same transaction across the tables in the source database.
    I hope that this is clear - any views and opinions on this will be very gratefully received.
    Many thanks
    Pete

    >
    Apologies if this appears to be belabouring the point - but it is an important bit of understanding for me.
    >
    The issue is not that you are belabouring the point but that you are not reading the doc quote I cited or the last paragraph of my last reply.
    Creating a consistent set of data and USING (querying) a consistent subset of that data are two different things. The publisher is responsible for creating a change set that includes the data you will want and the change set will make sure that a consistent set of data is available.
    Whether a subscriber makes proper use of that change set data or not is another thing altogether.
    If you create 20 subscriptions then those are totally independent of one another, just like 20 people subscribing to the Wall Street Journal; those subscribers and subscriptions have NOTHING to do with one another. If you want to try to synchronize the 20 yourself, have at it, but as far as Oracle is concerned each subscriber is unique and independent.
    If you want to subscribe to, and use, a consistent subset of the change set then, as the doc quote said, you have to JOIN the tables together.
    Read the documentation - you can't understand it if you don't read the doc and go through the examples in it.
    Step 4, "Subscribe to a source table and the columns in the source table", shows you exactly how you have to do the join.
    The second step of that example says this
    >
    Therefore, if the subscriber wants to subscribe to columns in both publications, using EMPLOYEE_ID to join across the subscriber views, then the subscriber must use two calls, each specifying a different publication ID:
    >
    '. . . join across the subscriber views . . .'
    Don't be afraid of breaking Oracle by trying things!
    The SCOTT EMP and DEPT tables might currently have a consistent set of data in the two tables. But if I query EMP from one session (subscription) and query DEPT from another session (subscription), the results I get might not be consistent between them, because I used two independent operations to get the data and the data might have changed in between the two steps.
    However if I query DEPT and EMP using a JOIN then Oracle guarantees that the data from both tables will reflect the same instant in time.
    Not a perfect analogy, but close to the subscription use. If you want to subscribe to data from multiple tables/views in the change set AND get a consistent set of data, you need to join the tables/views together. The mechanism for doing this is not the same as SQL, but it is shown in the above example in the doc.

  • DB links and transaction consistency

    db version 9.2.0.7 (both)
    DB A has a table tab; DB B has a db link pointing to A and can select from tab.
    On DB A I do an update on the table, then commit it.
    Immediately after the commit on DB A, if I do a select on tab@A from DB B I get wrong results, but if I wait what seems to be a short time, like 5 seconds, and run the query again, I get the correct results.
    Why is this, and how can I tell if DB B is transactionally consistent with DB A?
    Thanks
    P

    I am not convinced you are looking at the same table, or that it is in fact a table.
    SELECT owner, object_type
      FROM all_objects
     WHERE object_name = '<table_name_here>';
    Run this both for the local and the remote object. Are the results the same? Is it a table?

  • Multiple Replicats - Transaction consistency

    When using the @RANGE function to divide the processing workload among multiple Replicats, are the transaction commit orders preserved? If one Replicat is ahead of the other, could it cause data inconsistency?
    For example, the following splits the replication workload into two ranges (between two Replicat processes) based on the ID column of the source account table:
    MAP Source.Account, TARGET Target.account, FILTER (@RANGE (1, 2, ID));
    On the source we have the following order:
    1)Update accounts set balance='NEGATIVE';
    2)Update accounts set balance='ZERO';
    3)Update accounts set balance='NEGATIVE';
    4)Update accounts set balance='POSITIVE';
    When we split the transactions based on the hash value of the primary key, and 1 and 2 are assigned to Replicat1 while 3 and 4 are assigned to Replicat2, then if Replicat2 finishes before Replicat1 there will be data inconsistency.
    Can we preserve the commit order when using multiple Replicats?

    hi,
    When using @RANGE to split up transactions it is always possible that one replicat is quicker than the other one(s).
    This can result in the operations being applied in a different order than in the original transaction.
    But this "inconsistency" will only exist for a very short moment (unless one of the replicats has a huge delay, or is stopped).
    In your example you are using the ID field to calculate the hash value for the @RANGE function.
    As long as this ID field stays the same, the same record gets processed every time by the same replicat, so when the replicats are finished the data is the same as on the source -- no inconsistencies (see the sketch below).
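    To make the routing idea concrete, here it is in Java form (illustrative only; GoldenGate's actual @RANGE hash function differs):

    class RangeRouting {
        // Every operation on a given key always lands on the same replicat,
        // so the per-key apply order is preserved even though ordering
        // between replicats is not. Once all replicats drain their queues,
        // the target matches the source.
        static int replicatFor(long id, int numberOfRanges) {
            return (int) Math.floorMod(id, numberOfRanges) + 1;  // ranges are 1-based
        }
    }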
    regards,
    Eric

  • Transaction and Weblogic MDB

    Hi,
    I am using Coherence as a cache for my WebLogic application. I use a Message Driven Bean that receives a message, writes something to Coherence and then writes another message to a result queue.
    I want all of these operations to be fully transactional. To do this I am trying to use the Coherence container integration with JCA (see http://wiki.tangosol.com/display/COH33UG/Transactions%2C+Locks+and+Concurrency).
    My first problem is installing the rar file in WebLogic. I tried WebLogic versions 10 and 8.1 (my Coherence version is 3.2) and got the following errors:
    in version 8.1
    <13 juin 2007 15 h 51 CEST> <Error> <Deployer> <BEA-149201> <Failed to complete the deployment task with ID 1 for the application coherence-tx.
    java.lang.NoClassDefFoundError: com/tangosol/util/WrapperException
    in version 10 :
    weblogic.connector.exception.RAConfigurationException: There are 1 nested errors: weblogic.descriptor.DescriptorException: VALIDATION PROBLEMS WERE FOUND /mnt/appli/bestofbreed/bea/user_projects/domains/bob_domain/servers/srv1/stage/coherence-tx/coherence-tx.rar/META-INF/ra.xml:36:4:36:4: problem: cvc-enumeration-valid: string value 'boolean' is not a valid enumeration value for config-property-typeType in namespace http://java.sun.com/xml/ns/j2ee: at weblogic.descriptor.internal.MarshallerFactory$1.evaluateResults(MarshallerFactory.java:234) at weblogic.descriptor.internal.MarshallerFactory$1.evaluateResults(MarshallerFactory.java:208) at weblogic.descriptor.internal.MarshallerFactory$1.createDescriptor(MarshallerFactory.java:146) at weblogic.descriptor.BasicDescriptorManager.createDescriptor(BasicDescriptorManager.java:292) at weblogic.descriptor.BasicDescriptorManager.createDescriptor(BasicDescriptorManager.java:260) at weblogic.descriptor.BasicDescriptorManager.createDescriptor(BasicDescriptorManager.java:322) at weblogic.application.descriptor.AbstractDescriptorLoader.createDescriptor(AbstractDescriptorLoader.java:347) at weblogic.application.descriptor.AbstractDescriptorLoader.createDescriptor(AbstractDescriptorLoader.java:331) at weblogic.application.descriptor.AbstractDescriptorLoader.getDescriptor(AbstractDescriptorLoader.java:240) at weblogic.application.descriptor.AbstractDescriptorLoader.getRootDescriptorBean(AbstractDescriptorLoader.java:220) at weblogic.connector.configuration.ConnectorDescriptor.getConnectorBean(ConnectorDescriptor.java:287) at weblogic.connector.configuration.DDUtil.getRAInfo(DDUtil.java:121) at weblogic.connector.deploy.ConnectorModule.loadDescriptors(ConnectorModule.java:747) at weblogic.connector.deploy.ConnectorModule.prepare(ConnectorModule.java:165) at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:93) at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:360) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:26) at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:56) at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:46) at weblogic.application.internal.BaseDeployment$1.next(BaseDeployment.java:615) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:26) at weblogic.application.internal.BaseDeployment.prepare(BaseDeployment.java:191) at weblogic.application.internal.DeploymentStateChecker.prepare(DeploymentStateChecker.java:147) at weblogic.deploy.internal.targetserver.AppContainerInvoker.prepare(AppContainerInvoker.java:61) at weblogic.deploy.internal.targetserver.operations.ActivateOperation.createAndPrepareContainer(ActivateOperation.java:189) at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:87) at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217) at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:719) at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1186) at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:248) at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:157) at 
weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:157) at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:12) at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:45) at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:464) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:200) at weblogic.work.ExecuteThread.run(ExecuteThread.java:172)
    Thanks,
    Luc

    Hi William,
    > Are you sure this is correct? During the prepare phase I would have expected the
    > changes to have been made persistent (durable within the grid) but not immediately
    > visible on at least another node within the cluster, assuming Coherence is using the
    > grid itself as a transaction log service.
    What I wrote is how the TransactionMap API documentation describes it.
    I believe the idea behind it is that the commit phase writes data to the underlying cache with putAll and removeAll operations, which are supposed to be fail-safe and do not wait for any other threads if the client owns the locks for the entries, even in case of cluster node failures.
    With the transaction consistency and isolation verified in prepare() and all relevant locks owned, there is no transactional reason why the commit could fail. The only possible causes of failure are disastrous conditions or errors in write-through cache store operations preventing success of the putAll/removeAll operations (or coding errors in serialization/deserialization/indexed methods).
    > If not, then how would Coherence ensure the commit would be successfully executed
    > after voting to commit during the prepare phase, even in the event of a failure
    > occurring before commit?
    The TransactionMap 2PC is not supposed to be interleaved with other 2PC operations. It is supposed to work only above the Coherence caches (actually, you can add one 1PC operation between the TransactionMap prepare()-s and TransactionMap commit()-s, if you implement CacheFactory.commitTransactionCollection manually).
    Full XA is not supported over the caches by Coherence.
    The XA-related mechanism I mentioned applies when you use the Coherence CacheAdapter to enlist Coherence caches under a JTA transaction. However, in this case the caches together act as a 1PC resource (JCA LocalTransaction mode) from the JTA TransactionManager's point of view, and you see nothing of it being internally 2PC.
    In this case, the JTA transaction 2PC operation proceeds as follows:
    1. All real XA resources enlisted in the JTA transaction are prepared. After this point all JDBC changes made over an XA-driver JDBC connection are flushed to the database, so all locks to be acquired are acquired.
    2. If all were prepared successfully, then the transactional caches enlisted under the JCA adapter are committed together, with code equivalent to CacheFactory.commitTransactionCollection(). The transactional caches are practically TransactionMap-s wrapped in two or three layers of wrapper objects.
    3. If CacheFactory.commitTransactionCollection() succeeded, then all the XA resources enlisted in the JTA transaction are committed. All JDBC locks are released only at this point.
    The reason I mentioned XA, locks and TRANSACTION_EXTERNAL in this thread at all is that if you modified entries in Coherence equivalent to those you modified via XA JDBC, then you don't in fact need to lock those entries in Coherence, because equivalent locks with a broader lifetime already exist in the database. TRANSACTION_EXTERNAL allows you to do just that.
    Hope this clears this up, but feel free to ask if it does not.
    Best regards,
    Robert

  • Multiple records for a single transaction - Issue in LSMW

    Hi,
    I'm facing an issue in LSMW.
    I have the data coming in a flat file. The data that constitutes a single transaction consists of data from multiple records of the flat file.
    Suppose we have 10 records in the flat file, and all 10 records relate to only 2 transactions, i.e. say 6 records to the first transaction and the next 4 records to the second transaction.
    We have a direct input method to handle this data, like field1 for the first record, field2 for the 2nd record and so on.
    While uploading, we will get all the records one by one into our source structure. My question is: can we handle this scenario in LSMW? If yes, please suggest how.
    Thanks in advance
    Shekhar

    Hi Kris,
    this is regarding asset creation via AS01.
    We are getting the flat file in that way. Let's assume the following:
    suppose for one transaction we may need to fill 5 depreciation keys, and for another only 3 depreciation keys.
    Then in the flat file we can get, say, 5 records for the first transaction (i.e. asset) and 3 records for the second transaction.
    Can we handle this via LSMW?
    Regards
    shekhar

  • Apply slowdown on transaction with ROLLBACK TO UBA LCRs

    I have a Streams configuration consisting of several captures and propagations on four servers and one destination db with several apply processes.
    Each capture has its own propagation and its own apply on the destination server.
    All works almost fine.
    But sometimes one apply process becomes slow.
    As each stream has a heartbeat table with a 30-second insert/delete cycle, the normal delay between message capture time and apply time is 1-3 minutes at most.
    But sometimes it became 3-4 hours.
    I started to search for the root of the problem and found the following:
    When the problem exists, the apply reader process is in "latch: shared pool" contention. The OS process of the apply reader is at 100% CPU usage.
    DBA_APPLY_SPILL_TXN shows one transaction for that apply.
    I used the following script to see the contents of that transaction:
    DECLARE
      TYPENM   VARCHAR2(61);
      DDLLCR   SYS.LCR$_DDL_RECORD;
      PROCLCR  SYS.LCR$_PROCEDURE_RECORD;
      ROWLCR   SYS.LCR$_ROW_RECORD;
      RES      NUMBER;
      NEWLIST  SYS.LCR$_ROW_LIST;
      OLDLIST  SYS.LCR$_ROW_LIST;
      DDL_TEXT CLOB;
      EXT_ATTR ANYDATA;
      I        NUMBER;
    BEGIN
      -- Count the messages queued for this stream.
      I := 0;
      SELECT COUNT(*)
        INTO I
        FROM aq$qta_strm1 T
       WHERE T.QUEUE = 'QA_STRM1';
      DBMS_OUTPUT.PUT_LINE('### CNT: ' || I);
      -- Walk the queue in message order and print every row LCR
      -- (PRINT_LCR is a separately defined helper procedure).
      I := 0;
      FOR C IN (SELECT *
                  FROM aq$qta_strm1 T
                 WHERE T.QUEUE = 'QA_STRM1'
                 ORDER BY T.MSG_ID ASC) LOOP
        IF (C.USER_DATA IS NOT NULL) THEN
          TYPENM := C.USER_DATA.GETTYPENAME();
          IF (TYPENM = 'SYS.LCR$_ROW_RECORD') THEN
            RES := C.USER_DATA.GETOBJECT(ROWLCR);
            DBMS_OUTPUT.PUT_LINE('MSG_ID: ' || C.MSG_ID);
            PRINT_LCR(C.USER_DATA);
            I := I + 1;
          END IF;
        END IF;
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('### NN: ' || I);
    END;
    And it gives me the following:
    ### CNT: 3150
    MSG_ID: 03690459
    source database: SOURCEDB.MYDOMAIN.COM
    owner:
    object:
    is tag null: Y
    command_type: ROLLBACK TO UBA
    transaction_id: 9.46.1687138
    MSG_ID: 03690460
    source database: SOURCEDB.MYDOMAIN.COM
    owner:
    object:
    is tag null: Y
    command_type: ROLLBACK TO UBA
    transaction_id: 9.46.1687138
    MSG_ID: 03690461
    source database: SOURCEDB.MYDOMAIN.COM
    owner:
    object:
    is tag null: Y
    command_type: ROLLBACK TO UBA
    transaction_id: 9.46.1687138
    MSG_ID: 03690462
    source database: SOURCEDB.MYDOMAIN.COM
    owner:
    object:
    is tag null: Y
    command_type: ROLLBACK TO UBA
    transaction_id: 9.46.1687138
    ... and so on ...
    I didn't find any description of the LCR type ROLLBACK TO UBA.
    The whole transaction consists of messages of this type.
    So I just put the transaction into the IGNORE_TRANSACTION parameter of the apply, and the problem has disappeared for now.
    But I believe it will come back.
    So, can anyone explain what the LCR type ROLLBACK TO UBA means, where it comes from, and how to avoid this problem in the future?
    PS:
    Source database: 10.2.0.4, 64-bit Linux
    Destination db: 11.2.0.3, 64-bit Linux
    Edited by: user5464827 on 02.04.2013 0:05

    Hi Onno,
    If you have enabled "auto commit", then you cannot call methods like "commit()" or "rollback()". After you have executed your INSERT, a "commit" automatically happens (if the INSERT succeeds) or a "rollback" automatically happens (if the INSERT fails). You are calling the "commit()" method when there is no open transaction, which is wrong, and therefore you get an error (SQLException).
    The same problem occurs when you catch the error caused by your "commit" and try to "rollback": you cannot "rollback" when there is no open transaction.
    As I see it, you have two choices:
    1. disable auto commit (a minimal sketch follows below), or
    2. remove the calls to "commit()" and "rollback()".
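    A minimal sketch of choice 1, assuming plain JDBC (the table name and the inserted value are made up):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    class ExplicitTx {
        static void insertWithExplicitTransaction(Connection con) throws SQLException {
            con.setAutoCommit(false);      // choice 1: manage the transaction yourself
            try (PreparedStatement ps =
                     con.prepareStatement("INSERT INTO t (id) VALUES (?)")) {
                ps.setInt(1, 42);
                ps.executeUpdate();
                con.commit();              // legal now: a transaction is open
            } catch (SQLException e) {
                con.rollback();            // likewise legal only with auto commit off
                throw e;
            }
        }
    }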
    Hope this helps you.
    Good Luck,
    Avi.

  • XML Payload to Transaction Input

    Hi,
    For clarification: it used to be (and still appears to be) that transaction input properties needed a 'simple' data type (i.e. string etc.), and although we can have transaction input properties of type XML, they still appear in the WSDL as string. I can see that this works fine even if the string content is actually XML, and the inbound data (which is in fact XML as a string) still looks like XML in the transaction, but is this still the recommended approach? Or is there another recommended way to get an XML payload passed into a transaction using Runner / SOAPRunner from an external application?
    Additionally, when calling the Runner (or SOAPRunner) from an external application, is there any limit to the length of the URI? I know that browsers have limits on the path, but here we are dealing with a path + query string outside of a browser.
    Are there any limitations on the NetWeaver side?
    By the way, we are currently using MII V12.2.2.
    Regards
    Kevin.

    Hi Kevin,
    You can pass XML as a string to a transaction and make the input parameter of type XML in the transaction.
    That is perfectly fine.
    As far as I know, the URL length limit depends on the server.
    However, there are two methods to call an HTTP web service: GET and POST.
    Use the POST method, which allows a much larger payload than GET. I have faced issues with GET on high volumes of data, so I replaced it with POST. I have used URLs with query parameters of a few MB (approx. 4-5) and never hit the limit with POST. You can check the exact limit with some load testing on the server. A rough sketch follows below.
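    For illustration, a POST call using java.net.HttpURLConnection (the Runner URL and the "InputXML" parameter name are placeholders; use whatever your transaction input is actually named):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    class PostToRunner {
        static void postXml(String runnerUrl, String xmlPayload) throws Exception {
            HttpURLConnection con = (HttpURLConnection) new URL(runnerUrl).openConnection();
            con.setRequestMethod("POST");
            con.setDoOutput(true);  // parameters go in the body, not the length-limited query string
            con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            // "InputXML" is a hypothetical transaction input name.
            byte[] body = ("InputXML=" + URLEncoder.encode(xmlPayload, "UTF-8")).getBytes("UTF-8");
            try (OutputStream out = con.getOutputStream()) {
                out.write(body);
            }
            System.out.println("HTTP " + con.getResponseCode());
        }
    }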
    Thanks
    Anshul

  • IDOC Listener not calling BLS Transaction

    All,
    I am running xMII 11.5.4. I have a single IDOC Listener configured with a Routing Rule for Message Type = "PROCESS_MESS_DOWNLOAD". The Routing Rule is defined to trigger a BLS transaction called "TEST_BLS". The Listener is configured with the BLS input parameter.
    The Listener processes the incoming message by writing the XML file to the defined path with no problems.
    The issue is that the BLS Transaction will not trigger. The Transaction has a single Transaction Property of data type XML with no default value defined.
    I have tried using a simple Transaction with nothing but an Event Logger or XML Trace action to verify that the Transaction is being executed, but nothing occurs.
    I have viewed the General Log and see no error, warning or fatal log entries.
    The Transaction works fine when run manually from inside the Logic Editor, and it even runs from a URL invocation.
    Any ideas of what to look for or try would be greatly appreciated.
    Thanks,
    Chuck

    Udayan,
    The Transaction consists of only an Event Logger action and an XML Tracer action. Neither of these is apparently executing: no event is logged and no entry is made in the XML Tracer file. The Transaction has a Transaction Property defined as stated in my first post.
    I have noticed through trial and error that the IDOC Listener only works with the asterisk message type. The actual message type of the IDOC is evidently not being read by the Listener. The IDOC being created comes from a PPPI Process Message. The first few lines are:
    <?xml version="1.0" encoding="UTF-8"?>
    <PROCESS_MESS_DOWNLOAD>
    <INPUT>
    <CLIENT>200</CLIENT>
    </INPUT>
    <TABLES>
    <MSEL>
    <item>
    <MSID>100000000000000537</MSID>
    Thanks,
    Chuck

  • JTA transaction is not present or the transaction is not in active state

    Hi,
    I am trying to execute an asynchronous BPEL process. The BPEL process makes 5 OSB calls and takes approximately 100 seconds to complete; the OSB calls account for 90 seconds of that. At the end of the BPEL process, after completion, I get the following error:
    [2010/07/22 01:56:44] BPEL process instance "1220007" completed
    [2010/07/22 01:56:44] There is a system exception while performing the BPEL instance, the reason is "JTA transaction is not present or the transaction is not in active state. The current JTA transaction is not present or it is not in active state when processing activity or instance "1,220,007". The reason is The execution of this instance "1220007" for process "BPELProcess1" is supposed to be in a jta transaction, but the transaction is not present or in active state, please turn on the application server transaction debug logs to get more information.. Please consult your administrator regarding this error. ". Please check the error log file for more infromation. Please try to use bpel fault handlers to catch the faults in your bpel process. If this is a system exception, please report this to your system administrator. Administrator could perform manual recovery of the instance from last non-idempotent activity or dehydration point.
    We do not want to increase the transaction-timeout properties of the server in transaction-manager.xml or in orion-ejb-jar.xml, since we have other projects with synchronous processes running on the same server.
    Can anybody please suggest a workaround to overcome this issue, apart from increasing the transaction-timeout?

    Hi 783703,
    As Sridhar suggested, for your problem you have to set the transaction timeout in j2ee/home/config/transaction-manager.xml.
    If you set Idempotent to false for your partner links, BPEL PM will store the status up to that invoke (proof that the invoke was executed).
    So it is better to increase the timeout rather than change idempotency, as the latter has side effects.
    As for dehydration: ideally, performance is better when the process has few dehydration points, but for some scenarios it is better to have dehydration (e.g. so you can know the status of the process).
    The dehydration store is not cleared after completion of the process. Dehydration here means that these details are stored in tables (like CUBE_INSTANCE, CUBE_SCOPE, etc.).
    Regards
    PavanKumar.M

  • What kind of transaction does JTA handle?

    I use JTA with JPA. In my opinion, JTA is something like a delegate for the database transaction. What happens if the database doesn't support transactions, such as the MySQL MyISAM engine? And can I impose a transaction on the properties of a bean?

    Hello,
    Thank you for your kindness.
    I heard this patch number from coworkers, and I also investigated where the details of this patch were documented.
    I think my coworker's information may have been wrong...
    I am not sure, and will verify the actual information...
    Thanks a lot again.

  • Table for Screen Field Properties

    Hi Guys,
    I have a strange requirement.
    Let's say I have designed 5 fields in a dialog program: 2 fields belong to Group1, another 2 belong to Group2, and the remaining field belongs to Group3.
    Where can I find those values (I mean, in which table)? I believe the key should be program, screen number and data type.
    Note: groups are set in the attributes of a field in the dialog program.
    Thanks
    Poorna

    Setting Screen Field Attributes
    Every screen field has attributes that you set in the Screen Painter when you define the
    screen. At runtime, you may want to change these attributes, depending on what
    functions the user has requested in the previous screen. At runtime, attributes for each
    screen field are stored in a memory table called SCREEN. You do not need to declare
    this table in your program. The system maintains the table for you internally and updates
    it with every screen change.
    The memory table SCREEN contains the following fields:
    Name Length Description
    NAME 30 Name of the screen field
    GROUP1 3 Field belongs to field group 1
    GROUP2 3 Field belongs to field group 2
    GROUP3 3 Field belongs to field group 3
    GROUP4 3 Field belongs to field group 4
    ACTIVE 1 Field is visible and ready for input
    REQUIRED 1 Field input is mandatory
    INPUT 1 Field is ready for input
    OUTPUT 1 Field is for display only
    INTENSIFIED 1 Field is highlighted
    INVISIBLE 1 Field is suppressed
    LENGTH 1 Field output length is reduced
    DISPLAY_3D 1 Field is displayed with 3D frames
    VALUE_HELP 1 Field is displayed with value help
    To activate a field attribute, set its value to 1. To deactivate it, set it to 0. When you set
    the ACTIVE attribute to 0, the system suppresses the field and turns off the ready for
    input attribute. The user can neither see the field nor enter values into it.
    Note
    You can define values for each of these attributes in the Attribs. for 1 field section in
    the field list of the Screen Painter. If you need more information about attribute
    meanings, see BC ABAP/4 Workbench Tools.
    As an example of modifying the screen dynamically, start with transaction tz50
    (development class SDWA).
    The transaction consists of two screens. In the first screen the user can enter flight
    identifiers and either request flight details (by pressing a Display pushbutton) or press the
    Change pushbutton to change the data of screen 200.
    The field attributes are now set dynamically, according to whether the Display button or
    the Change button was selected. In both cases the same screen is now called, but with
    different field attributes.
    If the same attributes need to be changed for several fields at the same time, these fields
    can be grouped together. For example, in order to change the fields in screen 200
    dynamically, we assign these fields in the Screen Painter to the group MOD. You can
    specify up to four modification groups for each field. The contents of the Groups field
    are stored in the SCREEN table.
    The changes to the attributes of the fields in this group can be implemented in a PBO
    module:
    MODULE MODIFY_SCREEN OUTPUT.
      CHECK MODE = CON_SHOW.
      LOOP AT SCREEN.
        CHECK SCREEN-GROUP1 = 'MOD'.
        SCREEN-INPUT = '0'.
        MODIFY SCREEN.
      ENDLOOP.
    ENDMODULE.
    The memory table SCREEN contains each field of the current screen together with its
    attributes.
    The LOOP AT SCREEN statement puts this information in the header line of this system
    table.
    In this example, taken from transaction tz50, if the user chooses Display then SCREEN-INPUT is set to '0' and all fields belonging to the MOD group thus become display-only fields.
    Because attributes have been changed, the MODIFY SCREEN statement is used to write
    the header line back to the table.

  • SQL Connection - EJB Transaction Integration

    I am currently running Kodo 2.3.2 and JBoss 3.0, and I'm having the following problem:
    The SQL connection returned from the PersistenceManager isn't always transactionally consistent when run within EJBs (BMT or CMT).
    For example, I have BeanA (BMT) that starts a transaction, retrieves a PM->SQL connection and performs an update using direct SQL (non-JDO). Then BeanA calls BeanB (CMT), which retrieves a PM and performs a JDO update. Once BeanB completes, BeanA rolls back the entire transaction. The problem is that the (non-JDO) SQL connection update isn't rolled back.
    I only see this behavior the very first time I run the scenario. If I run the scenario 2 -> N times it is transactionally consistent.
    Does anyone have any idea what is going wrong the first time I run this scenario?
    I have multiple scenarios with BMT/CMT beans using JDO and non-JDO updates where the first iteration of the update causes the non-JDO updates to not roll back.
    Thanks in advance.

    Are you using optimistic transactions? If you're using optimistic transactions with JDO, then an actual database transaction isn't started until you call commit(). At that time Kodo starts a transaction, flushes all the changes that have been made, and ends the transaction all at once. That's why optimistic transactions are so nice: they don't consume any DB resources during the length of the transaction.
    Of course, that also means that any connection you retrieve won't be transactionally consistent, because there's no transaction!
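    In code terms, a minimal sketch of that timeline (using the javax.jdo API; the method body is illustrative):

    import javax.jdo.PersistenceManager;
    import javax.jdo.Transaction;

    class OptimisticDemo {
        static void update(PersistenceManager pm) {
            Transaction tx = pm.currentTransaction();
            tx.setOptimistic(true);
            tx.begin();        // no database transaction is opened here
            // ... modify persistent objects in memory ...
            // A JDBC connection obtained from the PM at this point is NOT
            // part of the eventual database transaction.
            tx.commit();       // DB transaction opened, changes flushed, committed
        }
    }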

  • Client transactions.

    I'm trying to use client demarcation of transactions.

    Properties env = new Properties();
    env.put(Context.INITIAL_CONTEXT_FACTORY,
            "weblogic.jndi.WLInitialContextFactory");
    env.put(Context.PROVIDER_URL, "t3://AppServer:7001");
    Context initial = new InitialContext(env);
    UserTransaction ut = (UserTransaction)
            initial.lookup("javax.transaction.UserTransaction");
    ut.begin();
    // call EJB beans on AppServer
    ut.commit();

    This one works fine. But I don't know how to do the same scenario for two servers. Can I start a transaction on the client side, access two EJB servers and commit them from the client side in the same transaction?
    Thanks for the help,
    Igor

    Hi, I used the same transaction approach but it did not work. It called rollback(); no error or transaction exception happened, but it did not really roll back. I am using JDK 1.2.1, WebLogic 4.5.1, Oracle 8i, SunOS 5.6. Any response will be appreciated.
    Jingfang
