Question reg. transaction

Can you tell me how I can generate an IDoc from transaction FBE3?
For example, in VA02 you go to Header -> Output and adjust the output options there...
How do you do that with transaction FBE3?
Thanks for the help.

Hi Krishen,
I don't think you can generate a payment IDoc from either transaction FBE2 or FBE3.
Please try the standard program RFFOALEI instead.
Regards,
Ferry Lianto

Similar Messages

  • Unicode Conversion Project - Question on transaction UCCHECK

    Hello,
    We are about to start a Unicode conversion project. We have SAP 4.7C and we are going to make it Unicode enabled. In order to plan the ABAP/4 resource requirements precisely, I ran transaction UCCHECK to get the list of development objects with Unicode errors. I have an urgent question about the way the transaction is executed:
    When I run UCCHECK with the selection screen option "Display lines that cannot be analyzed statically", I get far more errors, and the majority of them say that "the system couldn't perform the check on the current statement and it can only be carried out at runtime". When I ran UCCHECK without the option "Display lines that cannot be analyzed statically", the total error count was much lower...
    Can someone please explain to me the correct way to use transaction UCCHECK?
    Thanks in advance,
    Umang

    Please see this help text. You can access it from UCCHECK's selection screen.
    ABAP Unicode Scan Tool UCCHECK
    You can use transaction UCCHECK  to examine a Unicode program set for syntax errors without having to set the program attribute "Unicode checks active" for every individual program. From the list of Unicode syntax errors, you can go directly to the affected programs and remove the errors. It is also possible to automatically create transport requests and set the Unicode program attribute for a program set.
    Some application-specific checks, which draw your attention to program points that are not Unicode-compatible, are also integrated.
    Selection of Objects:
    The program objects can be selected according to object name, object type, author (TADIR), package, and original system. For the Unicode syntax check, only object types for which an independent syntax check can be carried out are appropriate. The following object types are possibilities:
    PROG Report
    CLAS Class
    FUGR Function groups
    FUGX Function group (with customer include, customer area)
    FUGS Function group (with customer include, SAP area)
    LDBA Logical Database
    CNTX Context
    TYPE Type pool
    INTF Interface
    Only Examine Programs with Non-Activated Unicode Flag
    By default, the system only displays program objects that have not yet set the Unicode attribute. If you want to use UCCHECK to process program objects that have already set the attribute, you can deactivate this option.
    Only Objects with TADIR Entry
    By default, the system only displays program objects with a TADIR entry. If you want to examine programs that don't have a TADIR entry, for example locally generated programs without a package, you can deactivate this option.
    Exclude Packages $*
    By default, the system does not display program objects that are in a local, non-transportable package. If you want to examine programs that are in such a package, you can deactivate this option.
    Also Display Modified SAP Objects
    By default, SAP programs are not checked in customer systems. If you also want to check SAP programs that were modified in a customer system (see transaction SE95), you can activate this option.
    Maximum Number of Programs:
    To avoid timeouts or unexpectedly long waiting times, the maximum number of program objects is preset to 50. If you want to examine more objects, you must increase the maximum number or run a SAMT scan (general program set processing). The latter also has the advantage that the data is stored persistently. Proceed as follows:
    - Call transaction SAMT
    - Create task with program RSUNISCAN_FINAL, subroutine SAMT_SEARCH
    For further information refer to documentation for transaction SAMT.
    Displaying Points that Cannot Be Analyzed Statically
    If you choose this option, you get an overview of the program points, where a static check for Unicode syntax errors is not possible. This can be the case if, for example, parameters or field symbols are not typed or you are accessing a field or structure with variable length/offset. At these points the system only tests at runtime whether the code is sufficient for the stricter Unicode tests. If possible, you should assign types to the variables used, otherwise you must check runtime behavior after the Unicode attribute has been set.
    To be able to differentiate between your own and foreign code (for example when using standard includes or generated includes), there is a selection option for the includes to be displayed. By default, the system excludes the standard includes of the view maintenance LSVIM* from the display, because they cause a large number of messages that are not relevant for the Unicode conversion. It is recommended that you also exclude the generated function group-specific includes of the view maintenance (usually L<function group name>F00 and L<function group name>I00) from the display.
    Similarly to the process in the extended syntax check, you can hide the warning using the pseudo comment ("#EC *).
    Application-Specific Checks
    These checks indicate program points that represent a public interface but are not Unicode-compatible. Under Unicode, the corresponding interfaces change according to the referenced documentation and must be adapted appropriately.
    View Maintenance
    Parts of the view maintenance generated in older releases are not Unicode-compatible. The relevant parts can be regenerated with a service report.
    UPLOAD/DOWNLOAD
    The function modules UPLOAD, DOWNLOAD or WS_UPLOAD and WS_DOWNLOAD are obsolete and cannot run under Unicode. Refer to the documentation for these modules to find out which routines serve as replacements.

  • CDC Subscription question concerning transactional consistency

    Hi
    Hopefully a quick question: can anyone confirm whether or not it is the subscription that controls transactional consistency when accessing change records in the change views?
    For example, suppose you have 20 source tables that you are capturing changes for, and you create one change source, one change set containing 20 change tables, and one subscription for which you have 20 change views. Because it is the subscription that you specify when performing the extend and purge operations, is it that single subscription which ensures that, when an extend is issued, all change records across the 20 change views will be transactionally consistent?
    I have had an alternative design proposed to me that is to use 20 separate subscriptions - one for each source table and change table. My concern is that this will not ensure transactional consistency across the 20 tables and that any ETL design (for example 20 separate threads running in parallel and each doing an extend, process, purge sequence) cannot ensure that the change records in the change views correspond to the same transaction across the tables in the source database.
    I hope that this is clear - any views and opinions on this will be very gratefully received.
    Many thanks
    Pete

    >
    Apologies if this appears to be belabouring the point - but it is an important bit of understanding for me.
    >
    The issue is not that you are belabouring the point but that you are not reading the doc quote I cited or the last paragraph on my last reply.
    Creating a consistent set of data and USING (querying) a consistent subset of that data are two different things. The publisher is responsible for creating a change set that includes the data you will want and the change set will make sure that a consistent set of data is available.
    Whether a subscriber makes proper use of that change set data or not is another thing altogether.
    If you create 20 subscriptions then those are totally independent of one another, just like 20 people subscribing to the Wall Street Journal; those subscribers and subscriptions have NOTHING to do with one another. If you want to try to synchronize the 20 yourself, have at it, but as far as Oracle is concerned each subscriber is unique and independent.
    If you want to subscribe to, and use, a consistent subset of the change set then, as the doc quote said, you have to JOIN the tables together.
    Read the documentation - you can't understand it if you don't read the doc and go through the examples in it.
    Step 4, "Subscribe to a source table and the columns in the source table", shows you exactly how you have to do the join.
    The second step of that example says this
    >
    Therefore, if the subscriber wants to subscribe to columns in both publications, using EMPLOYEE_ID to join across the subscriber views, then the subscriber must use two calls, each specifying a different publication ID:
    >
    '. . . join across the subscriber views . . .'
    Don't be afraid of breaking Oracle by trying things!
    The SCOTT EMP and DEPT tables might currently have a consistent set of data in the two tables. But if I query EMP from one session (subscription) and query DEPT from another session (subscription), the results I get might not be consistent between them, because I used two independent operations to get the data and the data might have changed between the two steps.
    However if I query DEPT and EMP using a JOIN then Oracle guarantees that the data from both tables will reflect the same instant in time.
    Not a perfect analogy but close to the subscription use. If you want to subscribe to data from multiple tables/views in the change set AND get a consistent set of data, you need to join the tables/views together. The mechanism for doing this is not the same as SQL but is shown in the above example in the doc.
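    To make the point concrete, here is a minimal sketch (mine, not from the original thread) of what "joining across the subscriber views" might look like. The view names EMP_VIEW and SAL_VIEW and the join column EMPLOYEE_ID are hypothetical, standing in for two subscriber views created under the same subscription:
    -- Hypothetical subscriber views belonging to one subscription window.
    -- Reading them in a single joined query returns rows that correspond to the
    -- same consistent extend of the change set, instead of two independent reads.
    SELECT e.employee_id,
           e.operation$,        -- CDC control column (if subscribed) describing the DML type
           e.last_name,
           s.salary
      FROM emp_view e
      JOIN sal_view s
        ON s.employee_id = e.employee_id;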

  • A question about transactions and point-of-time

    We have an operation which we want to serialize, since it can be called concurrently by the same user from different web servers, which would cause duplicates.
    We have a Table 'Search'
    UserID
    <other fields>
    The Search table is updated in a stored proc (INUserID, INSearchString), where INSearchString is a complete SELECT statement that can encompass many tables. Currently this is what the proc does:
    DELETE FROM Search WHERE UserID = INUserID;
    EXECUTE IMMEDIATE 'INSERT INTO Search SELECT INUserID,RowNum FROM (' || INSearchString || ')';
    I want to add a new table SearchLock (UserID)
    and add a SELECT * FROM SearchLock WHERE UserID = INUserID FOR UPDATE
    to this stored proc
    My question is: where exactly should I put the transaction BEGIN and COMMIT to ensure that a second call, after waiting if needed, will see the contents of Search AFTER the first call has committed? Would the following sequence work every time?
    BEGIN TRANSACTION ...
    SELECT ... FOR UPDATE
    DELETE...
    EXECUTE ...
    COMMIT
    (obviously I will add EXCEPTION handling)
    My worry, and what I'm trying to prevent, is that the second call, after waiting, will see the contents of Search BEFORE the first call has committed it.

    To avoid this, just create a primary key for your table. If a row is currently inserted in a non-committed transaction, all other transactions that want to insert the same primary key into the same table will wait on the first transaction to complete.
    If the first transaction commits, all the other transactions will get the ORA-00001 error: unique constraint ... violated.
    If the first transaction rolls back, one of them will be able to insert the primary key and the same process applies to the remaining transactions.
    Oracle automatically locks the row with the primary key to be inserted for you.
    Intriguing, but I am trying to protect several statements; I don't want the second session to have any chance of running the DELETE.
    I am thinking the SELECT ... FOR UPDATE will have a NOWAIT option so that the second session will get immediate error notification.
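    For what it's worth, here is a minimal sketch (my own, not the poster's final code) of the serialized procedure, assuming a SearchLock row already exists for each UserID. The lock taken by SELECT ... FOR UPDATE NOWAIT is released only by the COMMIT, so a caller that retries after ORA-00054 (or, without NOWAIT, one that was blocked) sees the contents of Search only after the first call has committed:
    CREATE OR REPLACE PROCEDURE refresh_search (
        INUserID       IN NUMBER,
        INSearchString IN VARCHAR2
    ) AS
        l_dummy SearchLock.UserID%TYPE;
    BEGIN
        -- Take the per-user lock; NOWAIT raises ORA-00054 immediately if another
        -- session already holds it, instead of queueing behind that session.
        SELECT UserID INTO l_dummy
          FROM SearchLock
         WHERE UserID = INUserID
           FOR UPDATE NOWAIT;

        DELETE FROM Search WHERE UserID = INUserID;

        -- Same dynamic INSERT as in the original proc, with the user id concatenated in.
        EXECUTE IMMEDIATE
            'INSERT INTO Search SELECT ' || INUserID ||
            ', ROWNUM FROM (' || INSearchString || ')';

        COMMIT;   -- releases the SearchLock row lock; later callers now see the new rows
    EXCEPTION
        WHEN OTHERS THEN
            ROLLBACK;
            RAISE;
    END refresh_search;
    /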

  • Help, a question about transaction...

    sorry for my english:(
    my application env:
    kodo 3.2.3 / Oracle 9i /JBoss 3.2.5
    Application logic is written in EJB (CMP) and a stored procedure; they must
    be finished in one transaction:
    ejb business method begin....
    get PM;
    do kodo database operate ....;
    get Connection from PM;
    get stored procedure from connection;
    (1) execute stored procedure;
    close stored procedure;
    close connection;
    close PM
    ejb biz method finished.
    It's such a simple function.
    My question is:
    when I reach line (1), the new data generated by the stored procedure has already been committed to the database!! Why? Is the Connection obtained from the PM outside the transaction??? In fact, I can't roll back the modifications made by the stored procedure when an exception occurs :(
    What can I do?

    Are you using managed datasource? Are you using pessimistic or
    optimistic transactions?
    In the first case, Kodo will defer to the managed datasource so if your
    transaction is not rolled back, the stored procedure will run through.
    If using optimistic transactions, you should trigger some other
    transactional change so that Kodo will know to start a datastore
    transaction. Otherwise, depending on how your stored procedure is
    written, it may change the datastore.
    Kodo 4.0 includes a KodoPersistenceManager.beginStore() to ensure that the connection is transactional.
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com

  • Question about transactions...

    Halo...
    I am still in the process of understanding Oracle transactions, and this is the question:
    If I do in store procedure this:
    PROCEDURE SomeProc....
    varField NUMBER(15,2);
    BEGIN
    SAVEPOINT UndoAll;
    UPDATE aTable SET aField = 1000, bField = 'YES' WHERE cField = 'id10';
    SELECT aField INTO varField FROM aTable WHERE bField = 'YES' AND cField = 'id10';
    EXCEPTION
    WHEN OTHERS THEN ROLLBACK TO UndoAll;
    END SomeProc;
    Should I do a COMMIT in between the UPDATE and the SELECT?
    Or, because it is a single transaction, does the SELECT statement fetch the changes made by the previous UPDATE statement?

    >
    Should I do a COMMIT in between the UPDATE and the SELECT? Or, because it is a single transaction, does the SELECT statement fetch the changes made by the previous UPDATE statement?
    >
    When you do the UPDATE, you lock those rows. If you do not COMMIT and then do a SELECT on those same rows, you'll see your updates.
    If you do the UPDATE and then COMMIT, you may or may not see your changes. As soon as you commit, your locked rows become unlocked. Then anyone else may update those rows. Some of your rows may have been updated in the (very brief) time between your COMMIT and your SELECT.
    Of course, in your example that's not likely to happen. But in a highly concurrent environment...who knows?
    The reason you do a COMMIT is to make permanent a logical unit of work.
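    A minimal sketch using the poster's names (assuming aTable with columns aField, bField, cField exists): no COMMIT is needed between the UPDATE and the SELECT, because within the same transaction the SELECT already sees the uncommitted change made by the UPDATE:
    DECLARE
        varField NUMBER(15,2);
    BEGIN
        SAVEPOINT UndoAll;

        UPDATE aTable
           SET aField = 1000,
               bField = 'YES'
         WHERE cField = 'id10';

        -- Sees the row as just updated by this transaction (aField = 1000),
        -- even though nothing has been committed yet.
        SELECT aField
          INTO varField
          FROM aTable
         WHERE bField = 'YES'
           AND cField = 'id10';

        COMMIT;   -- make the logical unit of work permanent at the end
    EXCEPTION
        WHEN OTHERS THEN
            ROLLBACK TO UndoAll;
            RAISE;
    END;
    /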

  • Lock creditcard due to a questionable Itunes transaction

    I've called my bank because my credit card was locked. They told me that it was locked due to a questionable transaction from iTunes in the amount of 1 USD. I hadn't made any purchase that day; however, I may have tried to activate my new credit card (the one that is now locked). What is the procedure from iTunes; how do they check whether new credit cards are real and working? I know PayPal does such a transaction: they send a small amount of money and then take it back to check that everything is OK with your bank data. However, I wasn't informed by iTunes of any such thing. Is there such a check?

    So I've got the response from iTunes. On that particular day there was no transaction from iTunes; my credit card was not connected with the account. It was fraud. I've locked the credit card and ordered a new one.

  • Newbie question: distributed transaction

    Hi,
    I am totally new to JMS, and have a question about distributed transactions. Can the sender and receiver both be involved in the same distributed transaction? I don't think it's possible, but I want to confirm here.
    The reason I ask is that I wonder if I can make use of JMS to allow Ruby on Rails app to participate in a distributed transaction.
    Any help or tips will be appreciated.
    Thanks,
    Ykng

    JMS is built around a messaging paradigm. The large majority of JMS providers will be using a broker to decouple the actions of message sending and of message receiving.
    As such, the only transactional contract that is possible is between the broker and the client (the message consumer or producer).
    When the client resides in a container (like a java app server), the client container can take the role of the transaction manager and extend the transactional aspect of the application to other participants, like a database for example.
    But with most JMS providers, which use the broker as a "man in the middle" architectural decoupler, transactional behavior between the sender and the receiver is not possible.
    If a transactional behavior is necessary for this, then JMS is most probably the wrong solution for you.
    TE

  • RE : BI APO Question Reg Data feeding from cube to Planning area.

    Hi BW Experts,
    I am working on an implementation project for SCM in BW, specifically with APO BW.
    I have taken historical data as a flat file and loaded it into the external BW InfoCube, and that is fine.
    As a second step I created a generated export DataSource on top of the BW InfoCube, replicated it into BW, and used this export DataSource as the DataSource for the APO BW InfoCube (the BW system built into APO) fed from the external BW.
    I have also created transformations, and the data is loaded into the BW cube in the APO system. I also included the version characteristic.
    When I try to feed the APO cube data to the planning area, I get the following warnings (they are not errors):
    1. Key figure copy: InfoCube - planning area (DP) 01.01.2010 to 31.12.2010 -- Successful
    2. No data exists for the selection made (see long text)
       Diagnosis: Data could not be loaded from the cube for the selection you made. Check whether the cube actually contains data that is relevant for your selection.
    For the second point, the time characteristics filled in the InfoCube that I am feeding to the planning area are 0CALMONTH, 0CALWEEK and the fiscal year variant.
    3. Characteristic assignment: No data copied --- Message
    Can you please help me with your thoughts so that I can try to corner the issue? I will be highly obliged.

    Hi,
    As I understand it, you have loaded data from the external BW cube into the APO BW cube and are now loading the planning area from the APO BW cube.
    I hope your settings in transaction /SAPAPO/TSCUBE are correct and that you have selected the correct planning version with the correct cube.
    Check whether the data in the APO BW cube is available for reporting and whether data is available for the given selection (if any; I guess you are not specifying one).
    Thanks,
    S

  • A question about transaction consistency between multible target tables

    Dear community,
    My replication is ORACLE 11.2.3.7->ORACLE 11.2.3.7 both running on linux x64 and GG version is 11.2.3.0.
    I'm recovering from an error that occurred when the trail file was moved away while the data pump was writing to it.
    After moving the file back dpump abended with an error
    2013-12-17 11:45:06  ERROR   OGG-01031  There is a problem in network communication, a remote file problem, encryption keys for target and source do
    not match (if using ENCRYPT) or an unknown error. (Reply received is Expected 4 bytes, but got 0 bytes, in trail /u01/app/ggate/dirdat/RI002496, seqno 2496,
    reading record trailer token at RBA 12999993).
    I googled it and found no suitable solution except to try "alter extract <dpump>, etrollover".
    After rolling over the trail file, the replicat ended as expected with
    REPLICAT START 1
    2013-12-17 17:56:03  WARNING OGG-01519  Waiting at EOF on input trail file /u01/app/ggate/dirdat/RI002496, which is not marked as complete;
    but succeeding trail file /u01/app/ggate/dirdat/RI002497 exists. If ALTER ETROLLOVER has been performed on source extract,
    ALTER EXTSEQNO must be performed on each corresponding downstream reader.
    So I've issued "alter replicat <repname>, extseqno 2497, extrba 0" but got the following error:
    REPLICAT START 2
    2013-12-17 18:02:48 WARNING OGG-00869 Aborting BATCHSQL transaction. Detected inconsistent result:
    executed 50 operations in batch, resulting in 47 affected rows.
    2013-12-17 18:02:48  WARNING OGG-01137  BATCHSQL suspended, continuing in normal mode.
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:02:48 WARNING OGG-01004 Aborted grouped transaction on 'M.CLIENT_REG', Database error
    1403 (OCI Error ORA-01403: no data found, SQL <UPDATE "M"."CLIENT_REG" SET "CLIENT_CODE" =
    :a1,"CORE_CODE" = :a2,"CP_CODE" = :a3,"IS_LOCKED" = :a4,"BUY_SUMMA" = :a5,"BUY_CHECK_CNT" =
    :a6,"BUY_CHECK_LIST_CNT" = :a7,"BUY_LAST_DATE" = :a8 WHERE "CODE" = :b0>).
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:02:48 WARNING OGG-01154 SQL error 1 mapping LS.CHECK to M.CHECK OCI Error ORA-00001:
    unique constraint (M.CHECK_PK) violated (status = 1). INSERT INTO "M"."CHECK"
    ("CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20).
    2013-12-17 18:02:48  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    The report stated the following:
    Reading /u01/app/ggate/dirdat/RI002497, current RBA 1149, 0 records
    Report at 2013-12-17 18:02:48 (activity since 2013-12-17 18:02:46)
    From Table LS.MK_CHECK to LSGG.MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    From Table LS.MK_CHECK to LSGG.TL_MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    At that time I came to the conclusion that using etrollover was not a good idea. Nevertheless, I had to upload my data to perform a consistency check.
    My mapping templates are set up as the following:
    LS.CHECK->M.CHECK
    LS.CHECK->M.TL_CHECK
    (such mapping is set up for every table that is replicated).
    TL_CHECK is what I call a transaction log,
    and its mapping is as follows:
    ignoreupdatebefores
    map LS.CHECK, target M.CHECK, nohandlecollisions;
    ignoreupdatebefores
    map LS.CHECK, target M.TL_CHECK ,colmap(USEDEFAULTS,
    FILESEQNO = @GETENV ("RECORD", "FILESEQNO"),
    FILERBA = @GETENV ("RECORD", "FILERBA"),
    COMMIT_TS = @GETENV( "GGHEADER", "COMMITTIMESTAMP" ),
    FILEOP = @GETENV ("GGHEADER","OPTYPE"), CSCN = @TOKEN("TKN-CSN"),
    RSID = @TOKEN("TKN-RSN"),
    OLD_CODE = before.CODE
    , OLD_STATE = before.STATE
    , OLD_IDENT_TYPE = before.IDENT_TYPE
    , OLD_IDENT = before.IDENT
    , OLD_CLIENT_REG_CODE = before.CLIENT_REG_CODE
    , OLD_SHOP = before.SHOP
    , OLD_BOX = before.BOX
    , OLD_NUM = before.NUM
    , OLD_NUM_VIRT = before.NUM_VIRT
    , OLD_KIND = before.KIND
    , OLD_KIND_ORDER = before.KIND_ORDER
    , OLD_DAT = before.DAT
    , OLD_SUMMA = before.SUMMA
    , OLD_LIST_COUNT = before.LIST_COUNT
    , OLD_RETURN_SELL_CHECK_CODE = before.RETURN_SELL_CHECK_CODE
    , OLD_RETURN_SELL_SHOP = before.RETURN_SELL_SHOP
    , OLD_RETURN_SELL_BOX = before.RETURN_SELL_BOX
    , OLD_RETURN_SELL_NUM = before.RETURN_SELL_NUM
    , OLD_RETURN_SELL_KIND = before.RETURN_SELL_KIND
    , OLD_INSERTED = before.INSERTED
    , OLD_UPDATED = before.UPDATED
    , OLD_REMARKS = before.REMARKS), nohandlecollisions, insertallrecords;
    As the PK violation fired for CHECK, I changed nohandlecollisions to handlecollisions for the LS.CHECK->M.CHECK mapping and restarted the replicat.
    To my surprise it ended with the following error:
    REPLICAT START 3
    2013-12-17 18:05:55 WARNING OGG-00869 Aborting BATCHSQL transaction. Database error 1 (ORA-00001:
    unique constraint (M.CHECK_PK) violated).
    2013-12-17 18:05:55 WARNING OGG-01137 BATCHSQL suspended, continuing in normal mode.
    2013-12-17 18:05:55 WARNING OGG-01003 Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:05:55 WARNING OGG-00869 OCI Error ORA-00001: unique constraint (M.PK_TL_CHECK)
    violated (status = 1). INSERT INTO "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26).
    2013-12-17 18:05:55 WARNING OGG-01004 Aborted grouped transaction on 'M.TL_CHECK', Database error 1
    (OCI Error ORA-00001: unique constraint (M.PK_TL_CHECK) violated (status = 1). INSERT INTO
    "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26)).
    2013-12-17 18:05:55  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    2013-12-17 18:05:55 WARNING OGG-01154 SQL error 1 mapping LS.CHECK to M.TL_CHECK OCI Error
    ORA-00001: unique constraint (M.PK_TL_CHECK) violated (status = 1). INSERT INTO "M"."TL_CHECK"
    ("FILESEQNO","FILERBA","FILEOP","COMMIT_TS","CSCN","RSID","CODE","STATE","IDENT_TYPE","IDENT","CLIENT_REG_CODE","SHOP","BOX","NUM","KIND","KIND_ORDER","DAT","SUMMA","LIST_COUNT","RETURN_SELL_CHECK_CODE","RETURN_SELL_SHOP","RETURN_SELL_BOX","RETURN_SELL_NUM","RETURN_SELL_KIND","INSERTED","UPDATED","REMARKS")
    VALUES
    (:a0,:a1,:a2,:a3,:a4,:a5,:a6,:a7,:a8,:a9,:a10,:a11,:a12,:a13,:a14,:a15,:a16,:a17,:a18,:a19,:a20,:a21,:a22,:a23,:a24,:a25,:a26).
    2013-12-17 18:05:55  WARNING OGG-01003  Repositioning to rba 1149 in seqno 2497.
    I expected that BATCHSQL would fail because it does not support HANDLECOLLISIONS, but I really don't understand why any record was inserted into TL_CHECK and caused a PK violation, because I thought that GG guarantees transactional consistency and that any transaction that causes an error in _ANY_ of the target tables will be rolled back for _EVERY_ target table.
    TL_CHECK has its PK set to (FILESEQNO, FILERBA), plus I have a special column that captures the replication run number, and it clearly shows that the record causing the PK violation was inserted during the previous run (REPLICAT START 2).
    BTW, the report for the last run shows
    Reading /u01/app/ggate/dirdat/RI002497, current RBA 1149, 1 records
    Report at 2013-12-17 18:05:55 (activity since 2013-12-17 18:05:54)
    From Table LS.MK_CHECK to LSGG.MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    From Table LS.MK_CHECK to LSGG.TL_MK_CHECK:
           #                   inserts:         0
           #                   updates:         0
           #                   deletes:         0
           #                  discards:         1
    So can somebody explain how that could happen?

    Write the query of the existing table in the form of a function with PRAGMA AUTONOMOUS_TRANSACTION.
    examples here:
    http://www.morganslilbrary.org/library.html
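    For illustration only, here is a minimal sketch of the suggestion above, with a hypothetical function name. PRAGMA AUTONOMOUS_TRANSACTION makes the function run in its own transaction, so it sees only data that has actually been committed, independent of the caller's pending changes:
    CREATE OR REPLACE FUNCTION committed_row_count (p_code IN NUMBER)
        RETURN NUMBER
    AS
        PRAGMA AUTONOMOUS_TRANSACTION;
        l_cnt NUMBER;
    BEGIN
        -- M.CLIENT_REG and its CODE column are taken from the replicat log above.
        SELECT COUNT(*)
          INTO l_cnt
          FROM m.client_reg
         WHERE code = p_code;

        COMMIT;   -- end the autonomous transaction explicitly before returning
        RETURN l_cnt;
    END committed_row_count;
    /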

  • Question: No transaction type listed during creation new transaction

    Per the screen below (screenshot not included): when I try to create a new transaction via CRMD_ORDER, the transaction type list appears blank.
    Any advice on what the issue is? Thanks.

    Hi Peter ,
    The new transaction type should first be created via the SPRO settings; it can then be added in the CRMD_ORDER transaction by navigating to Settings and maintaining the newly created transaction type on the "Specific" tab. Once that is done, the button is available for the newly added transaction type and the transaction can now be created.
    Also make sure that, in customizing, the transaction type is not set to inactive when you define it (the flag should not be marked). In addition, the channel "GUI CRM WebClient UI" should be maintained for the transaction type in customizing so that it is available in CRMD_ORDER.
    Hope this helps.
    Regards
    Shweta

  • HT5312 How do I get support to send me the rescue e-mail in order for me to answer the security questions and transact?

    Cannot recall the answers to my security questions. How do I reset these or get the answers?

    You need to ask Apple to reset your security questions; ways of doing so include clicking here and picking a method for your country, and filling out and submitting this form.

  • Basic question reg. distributed installation

    Hi everybody,
    I have a very basic question, for which I wasn't able to find a simple answer or solution.
    I am planning to set up BEA in a distributed environment. The idea is to have a physical machine for the presentation layer, meaning web server/JSP/servlets, in DMZ 1, and a machine with the application server holding the EJBs in a different DMZ.
    This results in an architecture where the presentation layer can only be contacted by the users via HTTP/HTTPS, and the logic layer communicates with the presentation layer via RMI/T3.
    Is there any documentation on such a setup? Any hints?
    Thanks in advance, I'll keep on searching the docs.
    Berthold Krumeich

  • Question Reg. Queue Process

    Hi All,
    I configured queue processing in my ECC 5.0 system. Once I sent an IDoc from transaction WE19 for outbound processing, it went to the queue (T-code WEOUTQUEUE). I checked T-code SMQ1, where I can also see the queue names. If I double-click a queue name I can see details like user, queue name, FM, etc. I want to know how to change that FM; it always shows IDOC_INBOUND_IN_QUEUE, and I want to process with some other function module. Could you help me out?
    Thanks
    Krishnan.

    Hi
    Every additional work process will consume a small amount of memory (~10 MB). You can estimate it in transaction RZ10 (choose your instance profile) -> Basic Maintenance -> Change.
    Here you can increase the number of processes, and you will see the memory requirement.
    Regards Michael

  • Question reg. output type

    I copied and modified the SAPscript form MEDRUCK for three forms: RFQ, Scheduling Agreement and Contract (all in the MM module). Now I am trying to set up output types for these forms in the NACE transaction. Can you tell me which applications and output types to choose there?
    (I know for RFQ, it is application EA and output type NEU, what about for the other two?)
    I appreciate your input.
    Thanks in advance.

    Thanks Anurag. I too think it is EV. What about the output type?
    The forms I modified for Scheduling Agreement and Contract are almost the same. I think that just by assigning them to different output types, the data flow can be different for these two forms? Please correct me if I am wrong.
    Thanks a lot.
