Multiple Replicats - Transaction consistency

When using the @RANGE function to divide the processing workload among multiple Replicats, is the transaction commit order preserved? If one Replicat is ahead of the other, could it cause data inconsistency?
For example, the following splits the replication workload into two ranges (between two Replicat processes) based on the ID
column of the source account table.
MAP Source.Account, TARGET Target.account, FILTER (@RANGE (1, 2, ID));
On the source we have the following order of operations.
1) UPDATE accounts SET balance='NEGATIVE';
2) UPDATE accounts SET balance='ZERO';
3) UPDATE accounts SET balance='NEGATIVE';
4) UPDATE accounts SET balance='POSITIVE';
When we split the transactions based on the hash value of the primary key, with statements 1 and 2 assigned to Replicat 1 and statements 3 and 4 assigned to Replicat 2, then if Replicat 2 finishes before Replicat 1 there will be data inconsistency.
Can we preserve the commit order when using multiple Replicats?

Hi,
When using @RANGE to split up transactions it is always possible that one Replicat is quicker than the other one(s).
This can result in the operations being applied in a different order than in the original transaction.
But this "inconsistency" will only exist for a very short moment (unless one of the Replicats has a huge delay, or is stopped).
In your example you are using the ID field to calculate the hash value for the @RANGE function.
As long as this ID field stays the same, the same record gets processed every time by the same Replicat. So when the Replicats are finished, the data is the same as on the source: no inconsistencies.
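For illustration, the MAP statements in the two Replicat parameter files might look like this (a minimal sketch; the group names REP1/REP2 are hypothetical and the surrounding parameters are omitted):
-- Parameter file for Replicat REP1: processes hash range 1 of 2
MAP Source.Account, TARGET Target.Account, FILTER (@RANGE (1, 2, ID));
-- Parameter file for Replicat REP2: processes hash range 2 of 2
MAP Source.Account, TARGET Target.Account, FILTER (@RANGE (2, 2, ID));
Because @RANGE hashes the ID column, every row with a given ID is always applied by the same Replicat, so per-row ordering is preserved even though cross-Replicat commit order is not.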
regards,
Eric

Similar Messages

  • A question about transaction consistency between multiple target tables

    Dear community,
    My replication is ORACLE 11.2.3.7 -> ORACLE 11.2.3.7, both running on Linux x64, and the GG version is 11.2.3.0.
    I'm recovering from an error that occurred when a trail file was moved away while the data pump was writing to it.
    After moving the file back, the data pump abended with the error
    2013-12-17 11:45:06  ERROR   OGG-01031  There is a problem in network communication, a remote file problem, encryption keys for target and source do
    not match (if using ENCRYPT) or an unknown error. (Reply received is Expected 4 bytes, but got 0 bytes, in trail /u01/app/ggate/dirdat/RI002496, seqno 2496,
    reading record trailer token at RBA 12999993).
    I googled for it and found no suitable solution except to try "alter extract <dpump>, etrollover".
    After rolling over the trail file, the Replicat, as expected, ended with
    REPLICAT START 1
    2013-12-17 17:56:03  WARNING OGG-01519  Waiting at EOF on input trail file /u01/app/ggate/dirdat/RI002496, which is not marked as complete;
    but succeeding trail file /u01/app/ggate/dirdat/RI002497 exists. If ALTER ETROLLOVER has been performed on source extract,
    ALTER EXTSEQNO must be performed on each corresponding downstream reader.
    So I've issued "alter replicat <repname>, extseqno 2497, extrba 0" but got the following error:
    REPLICAT START 2
    2013-12-17 18:02:48 WARNING OGG-00869 Aborting BATCHSQL transaction. Detected inconsistent result:
    executed 50 operations in batch, resulting in 47 affected rows.
    2013-12-17 18:02:48  WARNING OGG-01137  BATCHSQL suspended, continuing in normal mode.
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:02:48 WARNING OGG-01004 Aborted grouped transaction on 'M.CLIENT_REG', Database error
    1403 (OCI Error ORA-01403: no data found, SQL <UPDATE "M"."CLIENT_REG" SET "CLIENT_CODE" =
    :a1,"CORE_CODE" = :a2,"CP_CODE" = :a3,"IS_LOCKED" = :a4,"BUY_SUMMA" = :a5,"BUY_CHECK_CNT" =
    :a6,"BUY_CHECK_LIST_CNT" = :a7,"BUY_LAST_DATE" = :a8 WHERE "CODE" = :b0>).
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:02:48 WARNING OGG-01154 SQL error 1 mapping LS.CHECK to M.CHECK OCI Error ORA-00001:
    unique constraint (M.CHECK_PK) violated (status = 1). INSERT INTO "M"."CHECK"
    ("CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20).
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    The report stated the following:
    Reading /u01/app/ggate/dirdat/RI002497, current RBA 1149, 0 records
    Report at 2013-12-17 18:02:48 (activity since 2013-12-17 18:02:46)
    From Table LS.MK_CHECK to LSGG.MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    From Table LS.MK_CHECK to LSGG.TL_MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    At that time I came to the conclusion that using etrollover was not a good idea. Nevertheless, I had to upload my data to perform a consistency check.
    My mapping templates are set up as follows:
    LS.CHECK->M.CHECK
    LS.CHECK->M.TL_CHECK
    (such mapping is set up for every table that is replicated).
    TL_CHECK is what I call a transaction log,
    and this peculiar mapping is as follows:
    ignoreupdatebefores
    map LS.CHECK, target M.CHECK, nohandlecollisions;
    ignoreupdatebefores
    map LS.CHECK, target M.TL_CHECK ,colmap(USEDEFAULTS,
    FILESEQNO = @GETENV ("RECORD", "FILESEQNO"),
    FILERBA = @GETENV ("RECORD", "FILERBA"),
    COMMIT_TS = @GETENV( "GGHEADER", "COMMITTIMESTAMP" ),
    FILEOP = @GETENV ("GGHEADER","OPTYPE"), CSCN = @TOKEN("TKN-CSN"),
    RSID = @TOKEN("TKN-RSN"),
    OLD_CODE = before.CODE
    , OLD_STATE = before.STATE
    , OLD_IDENT_TYPE = before.IDENT_TYPE
    , OLD_IDENT = before.IDENT
    , OLD_CLIENT_REG_CODE = before.CLIENT_REG_CODE
    , OLD_SHOP = before.SHOP
    , OLD_BOX = before.BOX
    , OLD_NUM = before.NUM
    , OLD_NUM_VIRT = before.NUM_VIRT
    , OLD_KIND = before.KIND
    , OLD_KIND_ORDER = before.KIND_ORDER
    , OLD_DAT = before.DAT
    , OLD_SUMMA = before.SUMMA
    , OLD_LIST_COUNT = before.LIST_COUNT
    , OLD_RETURN_SELL_CHECK_CODE = before.RETURN_SELL_CHECK_CODE
    , OLD_RETURN_SELL_SHOP = before.RETURN_SELL_SHOP
    , OLD_RETURN_SELL_BOX = before.RETURN_SELL_BOX
    , OLD_RETURN_SELL_NUM = before.RETURN_SELL_NUM
    , OLD_RETURN_SELL_KIND = before.RETURN_SELL_KIND
    , OLD_INSERTED = before.INSERTED
    , OLD_UPDATED = before.UPDATED
    , OLD_REMARKS = before.REMARKS), nohandlecollisions, insertallrecords;
    As the PK violation fired for CHECK, I changed nohandlecollisions to handlecollisions for the LS.CHECK->M.CHECK mapping and restarted the Replicat.
    To my surprise it ended with the following error:
    REPLICAT START 3
    2013-12-17 18:05:55 WARNING OGG-00869 Aborting BATCHSQL transaction. Database error 1 (ORA-00001:
    unique constraint (M.CHECK_PK) violated).
    2013-12-17 18:05:55 WARNING OGG-01137 BATCHSQL suspended, continuing in normal mode.
    2013-12-17 18:05:55 WARNING OGG-01003 Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:05:55 WARNING OGG-00869 OCI Error ORA-00001: unique constraint (M.PK_TL_CHECK)
    violated (status = 1). INSERT INTO "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26).
    2013-12-17 18:05:55 WARNING OGG-01004 Aborted grouped transaction on 'M.TL_CHECK', Database error 1
    (OCI Error ORA-00001: unique constraint (M.PK_TL_CHECK) violated (status = 1). INSERT INTO
    "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26)).
    2013-12-17 18:05:55  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:05:55 WARNING OGG-01154 SQL error 1 mapping LS.CHECK to M.TL_CHECK OCI Error
    ORA-00001: unique constraint (M.PK_TL_CHECK) violated (status = 1). INSERT INTO "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26).
    2013-12-17 18:05:55  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    I expected BATCHSQL to fail, since it does not support HANDLECOLLISIONS, but I really don't understand why any record was inserted into TL_CHECK and caused a PK violation: I thought that GG guarantees transactional consistency, and that any transaction that causes an error in _ANY_ of the target tables will be rolled back for _EVERY_ target table.
    TL_CHECK has its PK set to (FILESEQNO, FILERBA), plus I have a special column that captures the replication run number, and it clearly shows that the record causing the PK violation was inserted during the previous run (REPLICAT START 2).
    BTW, the report for the last run shows
    Reading /u01/app/ggate/dirdat/RI002497, current RBA 1149, 1 records
    Report at 2013-12-17 18:05:55 (activity since 2013-12-17 18:05:54)
    From Table LS.MK_CHECK to LSGG.MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    From Table LS.MK_CHECK to LSGG.TL_MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    So can somebody explain how that could happen?

    Write the query of the existing table in the form of a function with PRAGMA AUTONOMOUS_TRANSACTION.
    examples here:
    http://www.morganslilbrary.org/library.html
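    As a rough illustration of that suggestion (a minimal sketch; the table, column, and function names are hypothetical):
    CREATE OR REPLACE FUNCTION get_balance (p_id IN NUMBER)
      RETURN VARCHAR2
    IS
      PRAGMA AUTONOMOUS_TRANSACTION;  -- the function runs in its own independent transaction
      v_balance VARCHAR2(30);
    BEGIN
      SELECT balance INTO v_balance FROM accounts WHERE id = p_id;
      COMMIT;  -- an autonomous transaction must be ended before returning
      RETURN v_balance;
    END get_balance;
    /
    Such a function can then be called from another session or statement, e.g. SELECT get_balance(1) FROM dual;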

  • Handling multiple EDI transactions

    Hi,
    We are getting multiple EDI transactions (940, 850, etc.) from our customer. I have defined one sender agreement with the service interface for the functional acknowledgment, and one each for the EDI transactions (with service interface 940, service interface 850, etc.) in the same configuration scenario.
    There will be only one communication channel to receive all the EDI files. My problem is that whenever I send the EDI 850 transaction, it picks up the service interface of 940. Because of that it executes the 940 interface mapping instead of the 850 one!
    Am I doing anything wrong in my configuration? When I test the data in the test tab of the configuration scenario, it picks the correct service interface, but that is not happening at run time... I cleared the cache also... still no luck...
    Can someone please help me in this? Please guide me if I am doing any wrong.
    Regards,
    Vas

    Hi,
    I assume that you are using Seeburger for EDI data handling. If yes,
    the sender interface details for the payload (850, 940) will be determined from the Seeburger Workbench configuration.
    After the BIC mapping execution, it results in two documents, one with the FunctionalAck and the other with an attachment (the payload, which can be 850 or 940). Based on the attachment name, it is compared with the entries of the Seeburger Workbench, the sender details are fetched, and based on this the receiver is identified.
    Hope this gives you the needed info.
    Regards
    Rajesh

  • Multiple insert transaction for image uploads

    Hello !
    I can't figure out how to combine multiple insert transactions on one page.
    For example, I want to upload images from a page with different categories (a menu list) and insert all the images at once with one button.
    image 1, cat 1
    image 2, cat 2
    image 3, cat 3
    button to insert
    Any Ideas? Where to start from?
    Thanks, Nick.
    Please help! Maybe a link to some tutorials?

    Hi Nick,
    What you are asking for is very possible to do with ADDT, but if you are totally unfamiliar with ADDT you should probably go through the manual to learn the many functions that ADDT offers. You can find the manual two ways.
    1. After you have created your site and one blank page, you can open the page so that the Developer Toolbox tab is un-greyed out. Then you can click on the last icon (Control Panel) and press the Help button.
    That will open up the manual.
    2. Go to: http://help.adobe.com/en_US/Dreamweaver/10.0_ADDT/help.html?content=MXK3_052000_MX_K3_control_panel.htm
    Same place the help button takes you.
    Yes, Waleed is having hosting issues that I think he is sorting out.

  • CDC Subscription question concerning transactional consistency

    Hi
    Hopefully a quick question: can anyone confirm whether or not it is the subscription that controls transactional consistency when accessing change records in the change views?
    For example, suppose you have 20 source tables that you are capturing changes for, and you create one change source, one change set containing 20 change tables, and one subscription for which you have 20 change views. Since it is the subscription that you specify when performing the extend and purge operations, is it that one subscription that ensures that, when an extend is issued, all change records across the 20 change views will be transactionally consistent?
    I have had an alternative design proposed to me, which is to use 20 separate subscriptions, one for each source table and change table. My concern is that this will not ensure transactional consistency across the 20 tables, and that any ETL design (for example, 20 separate threads running in parallel, each doing an extend, process, purge sequence) cannot ensure that the change records in the change views correspond to the same transaction across the tables in the source database.
    I hope that this is clear - any views and opinions on this will be very gratefully received.
    Many thanks
    Pete

    >
    Apologies if this appears to be belabouring the point - but it is an important bit of understanding for me.
    >
    The issue is not that you are belabouring the point, but that you are not reading the doc quote I cited or the last paragraph of my last reply.
    Creating a consistent set of data and USING (querying) a consistent subset of that data are two different things. The publisher is responsible for creating a change set that includes the data you will want and the change set will make sure that a consistent set of data is available.
    Whether a subscriber makes proper use of that change set data or not is another thing altogether.
    If you create 20 subscriptions then those are totally independent of one another, just like 20 people subscribing to the Wall Street Journal; those subscribers and subscriptions have NOTHING to do with one another. If you want to try to synchronize the 20 yourself, have at it, but as far as Oracle is concerned each subscriber is unique and independent.
    If you want to subscribe to, and use, a consistent subset of the change set then, as the doc quote said, you have to JOIN the tables together.
    Read the documentation - you can't understand it if you don't read the doc and go through the examples in it.
    Step 4 Subscribe to a source table and the columns in the source table shows you exactly how you have to do the join.
    The second step of that example says this
    >
    Therefore, if the subscriber wants to subscribe to columns in both publications, using EMPLOYEE_ID to join across the subscriber views, then the subscriber must use two calls, each specifying a different publication ID:
    >
    '. . . join across the subscriber views . . .'
    Don't be afraid of breaking Oracle by trying things!
    The SCOTT EMP and DEPT tables might currently have a consistent set of data in the two tables. But if I query EMP from one session (subscription) and query DEPT from another session (subscription), the results I get might not be consistent between them, because I used two independent operations to get the data and the data might have changed in between the two steps.
    However if I query DEPT and EMP using a JOIN then Oracle guarantees that the data from both tables will reflect the same instant in time.
    Not a perfect analogy, but close to the subscription use. If you want to subscribe to data from multiple tables/views in the change set AND get a consistent set of data, you need to join the tables/views together. The mechanism for doing this is not the same as SQL but is shown in the above example in the doc.
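    For what it's worth, a single-subscription setup along those lines might be sketched like this (the change set, subscription, and view names are made up; check the DBMS_CDC_SUBSCRIBE documentation for your release before relying on the exact signatures):
    BEGIN
      -- One subscription covering both source tables
      DBMS_CDC_SUBSCRIBE.CREATE_SUBSCRIPTION(
        change_set_name   => 'MY_CHANGE_SET',
        description       => 'Single subscription for consistent windows',
        subscription_name => 'MY_SUB');
      DBMS_CDC_SUBSCRIBE.SUBSCRIBE(
        subscription_name => 'MY_SUB',
        source_schema     => 'SCOTT',
        source_table      => 'EMP',
        column_list       => 'EMPNO, DEPTNO, SAL',
        subscriber_view   => 'EMP_CHG_VIEW');
      DBMS_CDC_SUBSCRIBE.SUBSCRIBE(
        subscription_name => 'MY_SUB',
        source_schema     => 'SCOTT',
        source_table      => 'DEPT',
        column_list       => 'DEPTNO, DNAME',
        subscriber_view   => 'DEPT_CHG_VIEW');
      DBMS_CDC_SUBSCRIBE.ACTIVATE_SUBSCRIPTION(subscription_name => 'MY_SUB');
      -- One EXTEND_WINDOW per cycle gives one consistent window across both views;
      -- query EMP_CHG_VIEW and DEPT_CHG_VIEW with a join, then purge once when done.
      DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW(subscription_name => 'MY_SUB');
    END;
    /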

  • Multiple currency transaction with DI not possible

    Hi all!
    I'm trying to make a journal entry with multiple currencies. For example, I want to book the debit side in EUR and the credit side in USD. When I want to add the journal entry I get an error:
    Transaction includes more than one currency.
    I can reproduce the error in Business One, making a journal entry in the form. But there I have a form setting called "Allow multiple currency transactions". When I check it I can add my journal entry without any problem.
    Can anyone help me find a solution to this?
    As the add-on I'm developing is a stock-exchange reporting tool and it passes journal entries to B1, multi-currency transactions are a regular task.
    Thanks!
    Jörg

    OK, I found a solution myself.
    In the AdminInfo object there is a property called MultiCurrencyCheck. When this is set to cc_NoMessage, I can save the journal entry.
    If you want to change this setting in Business One:
    Select Administration --> System Initialization --> Document Settings --> Per Document tab --> Journal Entry tab
    There is a check box "Allow multiple currency transactions". The description is wrong and it should say "Block multiple currency transactions", like the others. When it is unchecked, I can make multiple currency transactions; otherwise not.

  • Db links and transaction consistance

    DB version 9.2.0.7 (both).
    DB A has a table tab; DB B has a db link pointing to A and can select from tab.
    On DB A I do an update on the table, then commit it.
    Immediately after the commit on DB A, if I do a select on tab@A from DB B I get wrong results, but if I wait what seems to be a short time, like 5 seconds, and run the query again, I get the correct results.
    Why is this, and how can I tell if DB B is transactionally consistent with DB A?
    Thanks
    P

    I am not convinced you are looking at the same table or that it is in fact a table.
    SELECT owner, object_type
    FROM all_objects
    WHERE object_name = <table_name_here>;
    Run this both on the local and the remote object.
    Are they the same? Is it a table?

  • Multiple DB transaction

    Hi
    Weblogic 8.1sp2
    I am planning to have multiple DB (A and B) updates in a single transaction in a stateless bean. I know that for this scenario I need to use XA drivers or two-phase commit.
    1) Say I don't use either, and the insert to database B fails. Will both inserts roll back, and if not, why?
    2) Say I use an XA driver and the prepare phase is successful. As soon as database A is committed, and before database B is committed, database B goes down. How is this scenario handled? There will be inconsistent data.
    Please explain
    Thanks
    cw

    Cool Water wrote:
    > Weblogic 8.1sp2
    > I am planning to have multiple DB (A and B) updates in a single transaction in a stateless bean. I know that for this scenario I need to use XA drivers or two-phase commit.
    > 1) Say I don't use either, and the insert to database B fails. Will both inserts roll back, and if not, why?
    If you don't use XA, our product will stop you when you try to get a connection to that second DBMS. If you get non-transactional connections, then you are responsible for your transactional integrity. If the insert to DBMS B fails, you would roll back the first connection too. The problem is later: when you're done, and commit on A, but your commit on B fails. What do you do?
    > 2) Say I use an XA driver and the prepare phase is successful. As soon as database A is committed, and before database B is committed, database B goes down. How is this scenario handled? There will be inconsistent data.
    Yep. There is no protocol that can guarantee consistency across all failures. In this case XA will log the facts, and when DBMS B is back online, this in-doubt transaction will be recovered if possible. In some cases XA will resort to logging unrecoverable transactions so that human intervention can be applied.
    Joe Weinstein
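    To make the XA case concrete, here is a minimal JTA sketch of one transaction spanning two XA data sources (the JNDI names, tables, and SQL are hypothetical; a stateless session bean would typically use container-managed transactions instead):
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    public class TwoDbUpdate {
        public void insertBoth() throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            DataSource dsA = (DataSource) ctx.lookup("jdbc/XADataSourceA"); // XA-enabled pool
            DataSource dsB = (DataSource) ctx.lookup("jdbc/XADataSourceB"); // XA-enabled pool
            utx.begin();
            Connection ca = null, cb = null;
            try {
                ca = dsA.getConnection();
                cb = dsB.getConnection();
                PreparedStatement pa = ca.prepareStatement("INSERT INTO t_a (id) VALUES (?)");
                pa.setInt(1, 1);
                pa.executeUpdate();
                pa.close();
                PreparedStatement pb = cb.prepareStatement("INSERT INTO t_b (id) VALUES (?)");
                pb.setInt(1, 1);
                pb.executeUpdate();
                pb.close();
                utx.commit(); // the transaction manager runs two-phase commit across both resources
            } catch (Exception e) {
                utx.rollback(); // both inserts are undone together
                throw e;
            } finally {
                if (ca != null) ca.close();
                if (cb != null) cb.close();
            }
        }
    }
    If the commit fails after prepare (Joe's scenario 2), it is the transaction manager, not this code, that recovers the in-doubt branch when database B comes back.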

  • Outbound EDI X12 document with multiple ST transactions segments

    Hello all.
    I am STILL using Oracle B2B 10.1.2.3 MLR 16 and need the ability to send outbound transactions (EDI X12 856 4010) with one ISA envelope and multiple ST segments. When I enqueue the 856 to the IP Out queue with multiple STs, B2B does not generate unique ST control numbers, and the segment count is doubled for the second ST loop. Is it possible to get an accurate segment count with the MACROs?
    If this is not possible, will someone please help me understand the batching process? I have followed the instructions in the B2B_TN_012_EDI_OutBound_Batching.pdf file, but I am getting very generic null-pointer errors in B2B.
    Any help will be greatly appreciated.
    Thank you.
    Nick Graves

    Hello All.
    I am desperate for some help with batching. I cannot get the count/ID-based batching or the time-interval batching to work. I receive the following error message with batch-ID-based batching:
    Error -: AIP-50014: General Error: java.lang.NullPointerException
         at oracle.tip.adapter.b2b.engine.Engine.processOutgoingMessage(Engine.java:1260)
         at oracle.tip.adapter.b2b.engine.Engine.handleMessageEvent(Engine.java:2549)
         at oracle.tip.adapter.b2b.engine.Engine.processEvents(Engine.java:2482)
         at oracle.tip.adapter.b2b.data.MsgListener.onMessage(MsgListener.java:530)
         at oracle.tip.adapter.b2b.data.MsgListener.run(MsgListener.java:376)
         at java.lang.Thread.run(Thread.java:534)
    Caused by: java.lang.NullPointerException
         at oracle.tip.adapter.b2b.packaging.mime.MimePackaging.pack(MimePackaging.java:107)
         at oracle.tip.adapter.b2b.msgproc.Request.outgoingBatchRequest(Request.java:1445)
         at oracle.tip.adapter.b2b.engine.Engine.processOutgoingMessage(Engine.java:1239)
         ... 5 more
    Time interval batching produces this error:
    Error Brief:
    Duplicate transaction node GUID encountered.
    This error was detected at:
         Segment Count: (N/A)
         Composite Position: 0
         Sub-Element Position: (N/A)
         Characters: 0 through 0

  • Multiple Start Transactions

    This may be a long shot, but I'm aware that you can set your start transaction on the first screen (Extras >> Set start transaction) so that when you log on you are automatically in the required screen instead of the SAP Easy Access screen.
    But if there are two transactions you always use, is it possible to get SAP to load one transaction and open a new session in the other transaction as soon as you log on?
    Any feedback would be greatly appreciated.

    Hi Kellis,
    We can't have multiple sessions triggered during login. I would suggest you set one of the transactions as the 'Start transaction' and add the other one to your Favourites (right-click on the Favourites icon, choose Insert transaction).
    Once you log in to the start transaction, press Ctrl+ and a new session will be created. Double-click on the other tcode in Favourites.
    Sorry, couldn't think of any other options with fewer keystrokes and clicks.
    Best Regards
    Sathees Gopalan

  • Common follow-up task for multiple lead transaction types

    Hi Gurus,
    Happy New Year.
    I am creating 3 lead transaction types as per the business requirement, and I want to use a common lead follow-up task for all three lead transaction types. My question here is: what are the problems if I use a common follow-up task for multiple lead types?
    Thanks and Regards,
    Arun

    Hi DJ,
    Thanks for the quick response...
    Coming to your questions:
    1. As of now I am planning to use the same number ranges for the 3 lead transaction types.
    2. Yes, these lead types are user-specific. The first two lead transaction types will be seen by User A, and lead type 3 will be seen by User B.
    Please suggest how to proceed.
    Awaiting your response.
    Thanks and Regards,
    Arun

  • Need Help in generating unique ST02&SE02 values in Multiple PO Transaction

    Hi All,
    We have a requirement where we need to send multiple POs in a single transaction using EDI X12 over the Generic Exchange protocol. We could successfully validate and generate an EDI flat file which contains multiple POs. But the ST02 and corresponding SE02 values which are generated for the multiple POs are not unique; they are the same for all POs.
    I want the ST02 and SE02 values of all the different POs to be unique. Currently, I am using #ControlNumber# in my EDI XML file to generate these values in the EDI flat file.
    Please do let me know how to achieve this.
    Thanks in Advance.
    Regards,
    Kaavya

    Hi Kaavya,
    To address this usecase, please use EDI batching.
    Please refer to http://www.oracle.com/technology/products/integration/b2b/pdf/B2B_TN_012_EDI_OutBound_Batching.pdf
    Regards,
    Dheeraj

  • Transactional/Consistent properties

    I'm thinking that it would be very nice to have related properties updated all at the same time, only firing their events when they are all in a consistent state. So for example, if I have a widthProperty and a heightProperty, and a third property that depends on them (areaProperty), I would like to make it so that areaProperty is always in a consistent state.
    However, as soon as I change either widthProperty or heightProperty, the areaProperty will get recalculated, and since it queries the other property it will calculate a non-existent area value as an intermediary value.
    So I'm wondering if there is something I can do about it. This example is simplified; it might be about a dozen related properties, not just two.
    Anyway, I thought of a few solutions:
    1) have a versionProperty that is only updated after all the other properties have been updated -- areaProperty can then trigger off that and only calculate the value when a new version is available. This kind of sucks because direct binding to the "real" properties would then be discouraged.
    2) have a containing property, like Dimension in this simplified case, and then do bindings that start with the containing property (like Bindings.select(dimensionProperty, "x")). This way I could create a new Dimension object (certain that it has no listeners), set its x/y and then update the dimensionProperty -- listeners will see all changes at once.
    I don't like either of these solutions: the first one because it makes direct bindings impossible, and the second one because I'd like to avoid bindings that are only checked at runtime (and involve creating new objects, which makes it hard to do direct bindings without monitoring the containing object as well).
    Are there any other solutions?

    Yes, I have noticed the same problem while implementing the scenario I detailed in my response. Luckily for me all the values finally converged to the right value; I think the ChangeListener was called multiple times, with the final call having access to the final values. But in the end it was a non-issue for me, as I only ended up relying on the height property and did not have to worry about the width. But still, I see the point.
    For 1), does this assume that both dependent properties will change at the same time? I imagine there are three scenarios: one of the dependent properties changes, the other one changes, or both change.
    A simple solution for the case where both dependent properties must change together is to use two boolean flags, as sketched below. Since the ChangeListener is an anonymous class, the two booleans can be member variables. The same ChangeListener is applied to both the height and width properties. When changed() is called, set the flag for the ObservableValue that fired (either height or width); then change the area if both flags are set, and after calculating the new area set the flags back to false. Though this still will not take care of the situation where only one dependent property might change and not the other.
    If the solution must allow the scenario of only one dependency changing, as well as both, then I just do not see how to avoid having the intermediary value... though maybe my imagination is lacking today.
    thanks
    jose
    Edited by: jmart on Sep 10, 2012 1:36 AM
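    A minimal sketch of that two-flag listener, using the thread's width/height/area example (the class and field names are made up; this assumes width and height are always updated as a pair):
    import javafx.beans.property.DoubleProperty;
    import javafx.beans.property.ReadOnlyDoubleProperty;
    import javafx.beans.property.ReadOnlyDoubleWrapper;
    import javafx.beans.property.SimpleDoubleProperty;
    import javafx.beans.value.ChangeListener;
    import javafx.beans.value.ObservableValue;

    public class AreaModel {
        public final DoubleProperty widthProperty  = new SimpleDoubleProperty();
        public final DoubleProperty heightProperty = new SimpleDoubleProperty();
        private final ReadOnlyDoubleWrapper areaProperty = new ReadOnlyDoubleWrapper();

        public AreaModel() {
            // One listener instance is shared by both properties, so its two
            // boolean members act as the "both have changed" bookkeeping.
            ChangeListener<Number> bothChanged = new ChangeListener<Number>() {
                private boolean widthChanged;
                private boolean heightChanged;

                @Override
                public void changed(ObservableValue<? extends Number> src,
                                    Number oldValue, Number newValue) {
                    if (src == widthProperty)  widthChanged  = true;
                    if (src == heightProperty) heightChanged = true;
                    if (widthChanged && heightChanged) {
                        // Fires area listeners once, with both final values in place.
                        areaProperty.set(widthProperty.get() * heightProperty.get());
                        widthChanged  = false;
                        heightChanged = false;
                    }
                }
            };
            widthProperty.addListener(bothChanged);
            heightProperty.addListener(bothChanged);
        }

        public ReadOnlyDoubleProperty areaProperty() {
            return areaProperty.getReadOnlyProperty();
        }
    }
    As jose notes, this only works when both properties really do change together; a lone width change leaves the flags half-set and the area stale.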

  • Inserting multiple payment transactions using BDC

    Hi all,
    Using BDC I displayed 5 payment transactions (XK01)
    for banking details.
    But I am not able to insert more than 5 payment transactions,
    i.e. if I enter the 6th and 7th details, then the 4th and 5th
    payment transactions are overwritten.
    So tell me how to display all of the more than 5 details...

    cranjith kumar wrote:
    > Hi all,
    > Using BDC I displayed 5 payment transactions (XK01)
    > for banking details.
    > But I am not able to insert more than 5 payment transactions,
    > i.e. if I enter the 6th and 7th details, then the 4th and 5th
    > payment transactions are overwritten.
    > So tell me how to display all of the more than 5 details...
    This is not a Web Dynpro ABAP-related question. Please post it to the correct forum.

  • Multiple Row Transaction and prepareForDML method

    Hi
    I want to create dept and employee records on the same page. For each employee record I create, I want to update the average salary field on the dept table and create some employee-related records. For this I override the prepareForDML method on the EmployeeImpl class and do the updates and creations depending on the post state of the employee object. Say I create one department record and 10 employee records. If an employee record passes validation, its prepareForDML method is executed; if not, it is not executed and the entire transaction fails. If I correct the invalid employee records and press commit, the prepareForDML method is executed again for all employees, all with post state STATUS_NEW, even the ones that passed validation and executed prepareForDML in the first trial. So my update regarding average salary would be wrong, and I would be creating more than one employee-related record, which is also wrong. How can I undo the changes made in prepareForDML? Is prepareForDML the wrong method to implement such things? I tried the setClearCacheOnRollback(false|true) and jbo.txn.handleafterpostexc(true|false) parameters to no avail. I appreciate your help!
    Best Regards,
    Salim

