Duplicate Payment item

Hi,
When posting an electronic bank statement in FEBA, a duplicate payment item is generated for the same reference. Can anyone tell me the root cause?
Regards
MRS

Check the input file, and also check the posting rules assigned to that external transaction in the EBS customization.
Regards,
SDNer

Similar Messages

  • Duplicate payment

    Hi,
    I have the following case:
    Two duplicate payment lots were created: X and Y.
    Then all items in the Y lot were reversed.
    But in FBL3N, the duplicate entries can still be seen. Is there a process step missing?
    Thanks.
    Nachiket

    On the selection screen, check your open items per date and ensure you only display those.
    Otherwise check the document itself: do you see the clearing information (date and number) in the reversed document?
    If so, it should not be shown under the Open Items section.
    Cheers
    Hein

  • IHC Duplicate Payment Request

    Dear All,
    I have an interesting scenario for IHC. Is there a way of checking for and stopping duplicate payment requests?
    My scenario is as follows:
    Step 1. Execute the payment run for the subsidiary for external vendor payments (F110). This creates an IDoc, which is sent to the clearing partner.
    Step 2. IHC: post the payment order in IHC. This creates the payment request for the specific vendor. All nice and good.
    Step 3. Now someone resets and reverses the document posted at the subsidiary (FBRA). This creates open items on the vendor again.
    Step 4. Execute the payment run for the subsidiary for external vendor payments again.
    Step 5. IHC: post the payment order in IHC, creating duplicate payment requests.
    Step 6. Execute the payment run for external vendors at IHC (F111).
    Steps 3 and 4 should not be done under normal circumstances.
    How do I make sure that, should steps 3 and 4 be executed by the subsidiary, I can detect the duplicate payment request and stop the second payment request from being created?
    Please see attached for this example of duplicate payment requests.
    Your suggestions are most welcome.
    Regards,
    Godhelp

    You can add a custom validation to fail payment request creation in the head office if there is already an existing payment request with the same reference number (XBLNR), transaction amount (PAMTF), transaction currency (PACUR) and payee which is still not reversed (XREVE is blank).
    Alternatively, while reversing the payment document in the subsidiary, check the status of the payment order it originally created in IHC. You can find the subsidiary company code + payment document number in AWKEY in table IHC_DB_PN and then read the status. If the status is "D4", allow the reset and reversal of the payment document; otherwise do not.
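    A minimal ABAP sketch of the first option, using the fields named above. Note this is only a sketch: the table name PAYRQ, the payee field EMPFG, the IV_* inputs and the message class ZIHC are assumptions to adapt to your release:

         " Sketch only: block creation of a second, un-reversed payment
         " request with the same reference, amount, currency and payee.
         " PAYRQ / EMPFG / ZIHC and the IV_* parameters are assumptions.
         DATA lv_count TYPE i.

         SELECT COUNT( * ) INTO lv_count
           FROM payrq
           WHERE xblnr = iv_xblnr        " reference number
             AND pamtf = iv_pamtf        " amount in transaction currency
             AND pacur = iv_pacur        " transaction currency
             AND empfg = iv_payee        " payee (assumed field name)
             AND xreve = space.          " not yet reversed
         IF lv_count > 0.
           " Error message aborts creation of the second payment request
           MESSAGE e001(zihc) WITH iv_xblnr.
         ENDIF.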

  • Duplicate payments in Payment list of F110 payment run

    Hello all,
    We ran the APP through F110. The payment run was as expected, with an amount of 100,000 EUR, but when we checked the payment list (Edit > Payments > Payment list) there are duplicate entries and the total shows as 200,000 EUR.
    Following are our observations:
    1) In SAP, postings were made correctly to all vendors with a total of 100,000 EUR, so no double payments were posted to the vendor accounts.
    2) The payment list shows amounts with document numbers. The duplicate payments also have document numbers, but these do not exist in SAP.
    3) Documents posted with doc type ZP in SAP are not sequential, i.e. not 1, 2, 3, 4 but 1, 3, 5, 7... The missing document numbers 2, 4 can be seen in the payment list.
    4) I also checked the log for the payment run and found one warning message: "Check whether a duplicate payment medium has been created".
    5) The status of the payment run is "Posting orders: 1,174 generated, 605 completed".
    6) I also checked the settings in FBZP, which are correct.
    We need to correct the DME file with the correct postings.
    If anyone has faced this issue, please share your inputs.
    Thanks & regards

    Dear Rajan,
    the payment document validation works as follows:
    if you select this parameter, a form is only printed if the related payment document has already been posted.
    Note that it is not advisable to schedule the payment program and the data medium programs to run at the same time if you want the system to be able to validate the payment documents: posting of the documents does not start at the same moment the payment program runs, so to ensure the payment program generally finishes its run before all the payment documents have been posted, the payment medium program (started after the payment program) would display in the error list any documents that have not (yet) been found.
    As a result, double payments are not possible, as no payment media are created if the payment document is not posted. The items remain open and are selected again in the next payment run.
    If the payment document validation is not used, the payment media are created but the open item is not cleared, because the payment document is not posted. In this case you have to clear the open items manually (if a repeat update is not possible) to avoid double payments.
    Dear Prashant,
    it is normal that, if the automatic payment run does not pay all the items, you can find them hanging in SM13.
    However, in general, when this happens you could try to use the Edit > Payments > After termination > Draw up again option, if it is available. If there is an entry in SM13, as in your case, you should process it.
    But sometimes the system does not allow you to do so.
    Anyway, the only problem I can see is the one reported in note 545340:
    when the payment program is terminated, it may be the case that not all payment documents exist on the updated database while the entries already exist in tables REGUH and REGUP. However, this basic procedure has the advantage that the payment media can already be created, for fast forwarding to the bank, when for example the update of the payment documents is delayed.
    For this problem, please refer to note 545340, point [4], which answers it.
    Furthermore, please be aware that, as I said in the beginning, if the payment program does not pay all the invoices contained in the payment proposal, it is because something changed in one of the selected invoices between the proposal run and the payment run. This means that a document number was NOT posted even though it is contained in tables REGUH and REGUP.
    Please be informed that the payment data tables REGU* are used only by the payment program; there is no need to take any corrective action, and this should not be a problem with auditors.
    So you can pay the invoices manually, or in the next automatic payment run, without any problem.
    I hope the system behaviour is clearer now.
    Mauri

  • Vendor duplicate payment - Report

    Dear Guru
    I have a requirement to develop a report that can detect duplicate vendor payments, taking partial, residual, and full payments into account. Is there any standard report available, or do I have to develop a new custom report? If a custom report, how can I do this?

    Hi,
    A standard report is available to get partial/residual/full payment details: T.Code FBL1N.
    I think it will not satisfy your requirement; it is better to develop a customized report using tables BSIK, BSAK, and BKPF.
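    For example, a minimal sketch of such a report. The report name is illustrative, and a real report would also need to handle partial/residual payments (e.g. via the invoice reference REBZG) and group by currency:

         REPORT zfi_vendor_dup_check.

         " Sketch only: flag vendor/reference/amount combinations that
         " occur more than once among cleared items (BSAK) - these are
         " duplicate payment candidates.
         TYPES: BEGIN OF ty_dup,
                  lifnr TYPE bsak-lifnr,   " vendor
                  xblnr TYPE bsak-xblnr,   " invoice reference
                  wrbtr TYPE bsak-wrbtr,   " amount in document currency
                  cnt   TYPE i,
                END OF ty_dup.
         DATA: lt_dup TYPE STANDARD TABLE OF ty_dup,
               ls_dup TYPE ty_dup.

         SELECT lifnr xblnr wrbtr COUNT( * )
           INTO TABLE lt_dup
           FROM bsak
           GROUP BY lifnr xblnr wrbtr
           HAVING COUNT( * ) > 1.

         LOOP AT lt_dup INTO ls_dup.
           WRITE: / ls_dup-lifnr, ls_dup-xblnr, ls_dup-wrbtr, ls_dup-cnt.
         ENDLOOP.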
    Regards
    Viswa

  • BTREE and duplicate data items: over 300 people read this, nobody answers?

    I have a btree consisting of keys (a 4-byte integer) and data (an 8-byte integer).
    Both integral values are "most significant byte (MSB) first", since BDB does key compression, though I doubt there is much to compress with such a small key size. But MSB also allows me to use the default lexical order for comparison, and I'm cool with that.
    The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with an 8192-byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
    While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
    I wonder if in my case it would be more efficient to have a b-tree with the combined (4-byte integer, 8-byte integer) as the key and zero-length or 1-byte dummy data (in case zero-length is not an option).
    I would lose the ability to iterate with a cursor using DB_NEXT_DUP, but I could simulate it using DB_SET_RANGE and DB_NEXT, checking whether my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there's no other solution.
    Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages"
    Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
         while (i < hcp->dup_tlen) {
              memcpy(&len, data, sizeof(db_indx_t));
              data += sizeof(db_indx_t);
              DB_SET_DBT(cur, data, len);
              /*
               * If we find an exact match, we're done. If in a sorted
               * duplicate set and the item is larger than our test item,
               * we're done. In the latter case, if permitting partial
               * matches, it's not a failure.
               */
              *cmpp = func(dbp, dbt, &cur);
              if (*cmpp == 0)
                   break;
              if (*cmpp < 0 && dbp->dup_compare != NULL) {
                   if (flags == DB_GET_BOTH_RANGE)
                        *cmpp = 0;
                   break;
              }
              /* ... excerpt truncated ... */
    What's the expert opinion on this subject?
    Vincent

    Hi,
    The special thing about it is that with a given key,
    there can be a LOT of associated data, thousands to
    tens of thousands. To illustrate, a btree with a 8192
    byte page size has 3 levels, 0 overflow pages and
    35208 duplicate pages!
    In other words, my keys have a large "fan-out". Note
    that I wrote "can", since some keys only have a few
    dozen or so associated data items.
    So I configure the b-tree for DB_DUPSORT. The default
    lexical ordering with set_dup_compare is OK, so I
    don't touch that. I'm getting the data items sorted
    as a bonus, but I don't need that in my application.
    However, I'm seeing very poor "put (DB_NODUPDATA)
    performance", due to a lot of disk read operations.In general, the performance would slowly decreases when there are a lot of duplicates associated with a key. For the Btree access method lookups and inserts have a O(log n) complexity (which implies that the search time is dependent on the number of keys stored in the underlying db tree). When doing put's with DB_NODUPDATA leaf pages have to be searched in order to determine whether the data is not a duplicate. Thus, giving the fact that for each given key (in most of the cases) there is a large number of data items associated (up to thousands, tens of thousands) an impressive amount of pages have to be brought into the cache to check against the duplicate criteria.
    Of course, the problem of sizing the cache and the database's pages arises here. These settings should tend towards large values, so that the cache can accommodate large pages (each hosting hundreds of records).
    Setting the cache and the page size to their ideal values is a process of experimentation.
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
    While there may be a lot of reasons for this anomaly,
    I suspect BDB spends a lot of time tracking down
    duplicate data items.
    I wonder if in my case it would be more efficient to
    have a b-tree with as key the combined (4 byte
    integer, 8 byte integer) and a zero-length or
    1-length dummy data (in case zero-length is not an
    option).

    Indeed, this could be the best alternative, but testing must be done first. Try this approach and provide us with feedback.
    You can have records with a zero-length data portion.
    Also, you could provide more information on whether or not you're using an environment, if so, how did you configure it etc. Have you thought of using multiple threads to load the data ?
    Another possibility would be to just add all the
    data integers as a single big giant data blob item
    associated with a single (unique) key. But maybe this
    is just doing what BDB does... and would probably
    exchange "duplicate pages" for "overflow pages"This is a terrible approach since bringing an overflow page into the cache is more time consuming than bringing a regular page, and thus performance penalty results. Also, processing the entire collection of keys and data implies more work from a programming point of view.
    Or, the slowdown is a BTREE thing and I could use a
    hash table instead. In fact, what I don't know is how
    duplicate pages influence insertion speed. But the
    BDB source code indicates that in contrast to BTREE
    the duplicate search in a hash table is LINEAR (!!!)
    which is a no-no (from hash_dup.c):

    The Hash access method does, as you observed, use a linear search within a duplicate set, so the search time is proportional to the number of items in the bucket (even though locating the bucket itself is O(1)). Combined with the fact that you don't want duplicate data items, using the Hash access method may not improve performance.
    This is a performance/tuning problem, and investigating it involves a lot of resources on our part. If you have a support contract with Oracle, please don't hesitate to raise your issue on Metalink, or indicate that you want this issue to be taken private, and we will create an SR for you.
    Regards,
    Andrei

  • Duplicate Payment Report - AP

    hi
    I want to run a report on duplicate payments. Which table contains this information? The report should return different and/or the same vendors with the same invoice numbers, and/or identical payment amounts with different invoices and the same vendor.
    Does any table contain this information?

    The standard extractor 0FI_AP_4 brings data from BSIK and BSAK.
    You should be able to query data from InfoProviders based on this DataSource.
    You might be able to use the standard DSO 0FIAP_O03 for this.
    If you have two or more clearing documents with the same invoice number, you might have some partial payments or duplicate payments.
    You might want to involve your business users to find out the exact criteria for identifying duplicate payments. It will be a lot easier to meet the requirements if you first understand the functional requirements from the business, then locate the fields needed for this purpose and come up with the correct logic.
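    As a starting point on the source-table side, a small sketch of the clearing-document check mentioned above (BSAK and its fields are standard; the filtering and output are illustrative):

         " Sketch only: vendor/reference combinations cleared by more
         " than one clearing document - possible partial or duplicate
         " payments, to be refined with the business criteria.
         TYPES: BEGIN OF ty_clr,
                  lifnr TYPE bsak-lifnr,   " vendor
                  xblnr TYPE bsak-xblnr,   " invoice reference
                  cnt   TYPE i,            " number of clearing docs
                END OF ty_clr.
         DATA: lt_clr TYPE STANDARD TABLE OF ty_clr,
               ls_clr TYPE ty_clr.

         SELECT lifnr xblnr COUNT( DISTINCT augbl )
           INTO TABLE lt_clr
           FROM bsak
           GROUP BY lifnr xblnr.

         " Keep only invoices with more than one clearing document.
         DELETE lt_clr WHERE cnt <= 1.

         LOOP AT lt_clr INTO ls_clr.
           WRITE: / ls_clr-lifnr, ls_clr-xblnr, ls_clr-cnt.
         ENDLOOP.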
    Good Luck.
    MP.

  • Duplicate Payment Order

    Hi,
    After a successful payment run, a duplicate payment order is generated in the background. Can anyone advise on the reason?
    Regards
    MRS.

    Hi,
    Select the checkbox "Payment Document Check" in the printout program variant used in F110.
    Regards,
    SDNer

  • Duplicate payments and general error advice

    In EBS 11, what is the best way to identify duplicate payments?
    I am not from an accountancy background, but what happens if someone identifies a duplicate payment? Is there any data in the database that may show "it was dealt with"?
    Also, aside from genuine-error duplicate payments, what other checks and areas should be analyzed to verify that the business hasn't lost money through error? Any tips?

    Hello.
    Can you please explain your idea in more detail?
    EBS does not allow you to enter duplicate payments.
    Octavio

  • EBP PO: Unable to duplicate/copy item, GR_NON_VAL issue

    Hello,
    I am using SRM 5.0.
    In Process PO, when I create a PO with more than one line item, the following issue occurs:
    when I enter one line item and check, it is OK; when I click on the Duplicate Selected Item or Copy push button and then check, the following error appears:
    Flag 'Automatic Settlement' at item level is different; Change not possible
    Flag 'Invoice Expected' at item level is different; Change not possible
    Thanks ,
    Sachin

    Hello,
    I have debugged the whole program and found that when there is a single line item everything is fine, but when I click on Duplicate Selected Item, the GR_NON_VAL indicator is set on the first line item while the second item's indicator remains blank (with a single line item the indicator was blank).
    When I copy the line item, it works OK.
    Due to the mismatch between the items, the following program raises the error message:
    PERFORM downward_inheritance USING    p_hgp_ecom
                                          p_hgp_icom
                                          p_guid
                                          p_object_type
                                          p_itm_icom
                                          ls_igp_icom
                                          p_changed
                                 CHANGING ls_header.
    Does anyone have an idea why the system behaves like this?
    Thanks,
    Sachin

  • Is it possible to duplicate an item in the project panel with scripts?

    Hey all, I'm trying to make a script that will duplicate an item in the project panel. I know you can duplicate an item in a comp, but I'd like to duplicate a project item, something like: app.project.item(2).duplicate();
    Is it possible with some other coding to do that?
    Thanks

    Dave, I'm trying to duplicate in a script running inside AE.  I guess I could try to do a system command to duplicate, but I'd really like to do it inside AE so I can keep track of the new layer.

  • New error message: duplicate line item

    Hi,
    We have to add a new error message so that when we try to add a duplicate line item to our quantity contract:
    1. we should not be allowed to;
    2. we should get an error message "duplicate line item not allowed".
    How do we achieve this?
    Thanks
    Arun

    Search this forum for "USEREXIT_SAVE_DOCUMENT" and "USEREXIT_SAVE_DOCUMENT_PREPARE" (both in include MV45AFZZ).
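    For example, a minimal sketch of such a check in USEREXIT_SAVE_DOCUMENT_PREPARE. The duplicate criterion (material number) and the message class ZSD are assumptions to adapt:

         " Inside FORM userexit_save_document_prepare (include MV45AFZZ).
         " Sketch only: abort the save if the same material appears on
         " two non-deleted items; message class ZSD is a placeholder.
         DATA: ls_vbap TYPE vbapvb,
               lt_seen TYPE STANDARD TABLE OF vbapvb.

         LOOP AT xvbap INTO ls_vbap WHERE updkz <> 'D'.
           READ TABLE lt_seen TRANSPORTING NO FIELDS
                WITH KEY matnr = ls_vbap-matnr.
           IF sy-subrc = 0.
             " Type E message stops the save:
             " 'Duplicate line item not allowed'
             MESSAGE e001(zsd) WITH ls_vbap-matnr.
           ENDIF.
           APPEND ls_vbap TO lt_seen.
         ENDLOOP.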

  • How to avoid duplicate BOM Item Numbers?

    Hello,
    is there a way to avoid duplicate BOM item numbers (STPO-POSNR) within one BOM?
    For routings I could avoid duplicate operation/activity numbers with transaction OP46 by setting T412-FLG_CHK = 'X' for the task list check. Is there an equivalent for BOMs?
    Regards,
    Helmut Gante


  • R1: tcAPIException: Duplicate schedule item for a task that does not allow multiples.

    Hi,
    I'm struggling with the following task:
    I have to ensure an account exists for a given resource. I provision it with tcUserOperationsIntf.provisionObject().
    I've created a createUser task to create the account.
    The task code checks if there is already a matching account.
    If no account exists, it is created in the disabled state, and the object state of the OIM account is set to 'Disabled' by means of task return code mapping.
    If it exists, it is 'linked' to the OIM account.
    The problem is that if the existing account is enabled, I have to change the OIM account state to 'Enabled' as well.
    To implement this (thanks, Kevin Pinski: https://forums.oracle.com/thread/2564011) I've created an additional task 'Switch Enable', which is triggered by a special task return code. This task always succeeds, and its only side effect is switching the object status to 'Enabled'.
    But I've been getting the 'Duplicate schedule item for a task that does not allow multiples' exception constantly.
    This is the stack trace:
    Thor.API.Exceptions.tcAPIException: Duplicate schedule item for a task that does not allow multiples.
      at com.thortech.xl.ejb.beansimpl.tcUserOperationsBean.provisionObject(tcUserOperationsBean.java:2925)
      at com.thortech.xl.ejb.beansimpl.tcUserOperationsBean.provisionObject(tcUserOperationsBean.java:2666)
      at Thor.API.Operations.tcUserOperationsIntfEJB.provisionObjectx(Unknown Source)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:601)
      at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:310)
      at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
      at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
      ... skipped
      at Thor.API.Operations.tcUserOperationsIntfDelegate.provisionObject(Unknown Source)
      ... skipped
    What did I do wrong?
    Regards,
    Vladimir

    Hi Vladimir,
    Please select the 'Allow Multiple Instances' checkbox for the process task.
    Thanks,
    Pallavi

  • Is terms of delivery and payment item data or header data

    Hello Gurus,
    are terms of delivery and payment item data or header data? Where are they configured?
    Many thanks,
    Frank

    Dear Frank
    They are at header level only. Whatever data you maintain in the customer master will, by default, flow to both header and item level. However, if you wish to change them at item level, depending upon the requirement, you may do so, in which case there will be a document split.
    thanks
    G. Lakshmipathi
