Duplicate transactions in GL tables

Hi
We have noticed that some of the transactions in our GL tables are duplicated. We get the data from third parties, which insert directly into gl_interface; from there the data flows to the GL base tables.
The transactions are spread across different batches, so it is very difficult to find and remove them. Could anyone suggest a simple way to remove these entries from the GL tables, or is there any other way out?
Thanks
Rishabh

Rishabh,
First, let us focus on the issue here.
You say a single line appears three times in the system, i.e. one line sent by the third-party application to the GL interface appears as three lines in Oracle General Ledger when imported? You also say these lines are in different journal batches, not in one and the same batch.
From my perspective, this is caused by the absence of validation checks when the data from the third-party system is interfaced to the GL interface. The ideal action plan is to take the journal batches that have duplicates, look carefully to see whether you can spot a trend in them, and revisit the validations in the scripts on the pre-interface or staging tables.
Also make sure to purge the GL interface in a test run.
For the issue at hand, i.e. correcting the incorrect journal lines: deleting them will not be the right thing to do, since if there are duplicate lines causing an excess debit, there would be a suspense posting offsetting it on the credit side. Hence, run the Account Analysis and Trial Balance reports, find the amount that needs to be reversed, and pass a single manual journal entry. (I agree it is not an easy exercise, and highly time-consuming as well.)
The above is my view; I anticipate better suggestions from other members as well.
Regards,
Ivruksha

Similar Messages

  • Duplicate transaction in interface table

    Hi Consultants,
    We need to delete the duplicate transactions from the pa_transaction_interface_all table. We are getting duplicate transactions from OTL; some of them are processed into PA, while others are rejected and stay in the interface table. Now we want to identify those duplicate transactions in the interface table and delete them.
    Could anyone have a look at my query below for one project, 'MA0304007', and correct me? Thanks in advance.
    SELECT *
    FROM pa_transaction_interface_all int
    WHERE 1 = 1
    AND EXISTS
        (SELECT 1
         FROM apps.pa_projects_all pa,
              apps.pa_project_statuses pst
         WHERE pa.project_status_code = pst.project_status_code
         AND pst.project_status_name NOT IN ('Abandoned', 'Closed', 'Rejected')
         AND int.project_number = 'MA0304007'
         AND int.transaction_status_code = 'R'
         AND EXISTS
             (SELECT 1
              FROM apps.pa_expenditure_items_all xa,
                   apps.pa_expenditures_all xe
              WHERE xa.expenditure_id = xe.expenditure_id
              AND xa.transaction_source = 'GOLD'
              AND xa.project_id = (SELECT project_id
                                   FROM pa_projects_all
                                   WHERE segment1 = int.project_number)
              AND xa.task_id = (SELECT task_id
                                FROM pa_tasks
                                WHERE task_number = int.task_number
                                AND project_id = (SELECT project_id
                                                  FROM pa_projects_all
                                                  WHERE segment1 = int.project_number))
              AND TRUNC(xa.expenditure_item_date) = TRUNC(int.expenditure_item_date)
              AND xa.expenditure_type = int.expenditure_type
              AND xa.quantity = int.quantity
              AND xa.orig_transaction_reference = int.orig_transaction_reference));
    Thanks,
    Ashok.

    Hi
    The PA_TRANSACTION_INTERFACE table should contain only unique records. I would recommend that you extract all transactions for that particular project into a spreadsheet, identify the duplicate ones, note their transaction references, and then use a join condition on the transaction-reference field to get rid of them.
    Thanks
    Krishna
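    Krishna's spreadsheet approach can also be done directly in SQL with a GROUP BY / HAVING query on the reference columns. A minimal sketch in Python with an in-memory SQLite database (the table and column names below are invented for illustration, not the real PA interface schema):

```python
import sqlite3

# In-memory stand-in for an interface table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE txn_interface (
                    orig_transaction_reference TEXT,
                    project_number TEXT,
                    quantity REAL)""")
conn.executemany(
    "INSERT INTO txn_interface VALUES (?, ?, ?)",
    [("REF-1", "MA0304007", 8.0),
     ("REF-2", "MA0304007", 4.0),
     ("REF-1", "MA0304007", 8.0)])   # REF-1 was loaded twice

# Group on the reference column(s); any group with count > 1 is a duplicate.
dups = conn.execute("""SELECT orig_transaction_reference, COUNT(*)
                       FROM txn_interface
                       GROUP BY orig_transaction_reference
                       HAVING COUNT(*) > 1""").fetchall()
print(dups)   # [('REF-1', 2)]
```

    Once the duplicate references are known, the same reference list can drive the join condition for the delete, as Krishna suggests.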

  • Multiple transactions on same table at the same time causing deadlocks

    Hi, I have a transaction within which there are multiple SQL statements. When multiple users start using the application, multiple transactions begin, and this leads to a deadlock. How can I avoid this? If anyone has come across the same situation, please
    post the solution.
    thanks in advance.
    Shilpa

    This is normal behaviour: once a transaction has started, until it is committed or rolled back, another transaction cannot modify the same rows.
    Are you looking for a solution, or asking for more background? Maybe you are looking for a singleton pattern: you can check whether the same transaction is already running on that table; if it is, prompt the user, otherwise go ahead.
    http://blogs.msdn.com/b/developingfordynamicsgp/archive/2008/12/05/identifying-duplicate-transactions.aspx
    http://social.msdn.microsoft.com/Forums/en/transactsql/thread/74837a9f-3be6-446a-84c7-cbdfc93a24f3
    regards
    joon
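    A common general remedy for the cycle joon describes is to make every transaction acquire its locks in one fixed global order, so no two transactions can ever wait on each other in a circle. A language-agnostic sketch of the idea in Python, with plain threading locks standing in for row locks (the account structure is invented for illustration):

```python
import threading

# Two "rows", each guarded by its own lock, keyed by id.
locks = {1: threading.Lock(), 2: threading.Lock()}
balances = {1: 100, 2: 100}

def transfer(src, dst, amount):
    # Always lock the lower id first: a fixed global order prevents the
    # lock-A-then-wait-for-B / lock-B-then-wait-for-A cycle that deadlocks.
    first, second = sorted((src, dst))
    with locks[first], locks[second]:
        balances[src] -= amount
        balances[dst] += amount

# Opposite-direction transfers running concurrently cannot deadlock.
t1 = threading.Thread(target=lambda: [transfer(1, 2, 1) for _ in range(1000)])
t2 = threading.Thread(target=lambda: [transfer(2, 1, 1) for _ in range(1000)])
t1.start(); t2.start(); t1.join(); t2.join()
print(sum(balances.values()))   # 200 — money conserved, no deadlock
```

    In a database the same ordering discipline applies to the statements inside each transaction (e.g. always update tables, or rows, in the same order).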

  • Creation and checkout errors, duplicate transactions using iPhoto print projects?

    My fiance and I have been using iPhoto to create calendars to give away as gifts for a couple years. I have had multiple problems with the latest iPhoto. They occurred on two Macbooks on two different wireless networks (an old white MB from 2006 running Snow Leopard, and the other an MBP running Lion), both with the latest iPhoto. Apple Support told me I should have used Time Capsule to avoid the first two of these errors (which, I was told, are documented bugs in their software). I was bummed because I initially spent around over 8 hours building the print book project!
    - Fonts changing when opening journal print book projects (fixed by completely restarting an 80 page book :-/ )
    - Pages cannot be rearranged in print books - they actually swap rather than drag and drop (fixed by again, completely restarting the project!)
    - When checking out, calendars have an unspecified error and I have to cancel and fix individual photos (fixed by canceling)
    - Photos randomly become blank icons and I have to reupload the photos
    - Photos that are dragged and dropped into a calendar don't show up in a random spot in the project and I have to go find them
    - When ordering calendars for this holiday season, errors when uploading have resulted in 16 duplicate transactions on my debit card resulting in over $400 of pending transactions that have been unable to be resolved by Apple Support
    Has anyone else experienced these problems? Any real fixes?

    If you're using iPhoto 5 as you say then, yes, 24k photos is heading close to the Limit of 25k. iPhoto 6 is good for 250,000 images, iPhoto 11 is good for 1,000,000.
    I'm going to guess that's a typo and that you have iPhoto 9.5.
    I've simply never heard of "too many photos" causing the problem you describe. You don't need more memory. You don't need to break up the Library. "Flushing memory" is a non-issue. One of the big improvements in 10.9 is in memory management.
    There are a couple of ways to break up a Library while preserving everything. It's dead easy with Aperture - export projects as Library - you can do it with iPhoto simply by duplicating the Library and deleting from the two libraries or use iPhoto Library Manager.
    But, as I say, 24k is a small Library. My main library is more than twice the size of that.
    Are Aperture and iPhoto using the same Library?
    As a Test:
    Hold down the option (or alt) key and launch iPhoto. From the resulting menu select 'Create Library'
    Import a few pics into this new, blank library. Is the Problem repeated there?
    Post back with the result.

  • How to delete duplicate records in a table without a primary key

    I have a table that contains around 1 million records, and there is no primary key or auto-number column. I need to delete the duplicate records from this table. What is a simple, effective way to do this?

    Please see this link:
    Remove duplicate records ...
    sqldevelop.wordpress.com
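    The linked article isn't quoted here, but one widely used technique for a table with no primary key is to fall back on the database's internal row identifier (ROWID in Oracle, rowid in SQLite) together with a window function, keeping only the first row of each duplicate group. A sketch in Python with SQLite (requires SQLite 3.25+ for window functions; table and columns are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 INTEGER)")   # no primary key
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", 1), ("a", 1), ("b", 2), ("a", 1), ("b", 2)])

# Number the rows inside each (col1, col2) group and delete everything
# after the first one, addressing rows by their internal rowid.
conn.execute("""DELETE FROM t WHERE rowid IN (
                    SELECT rowid FROM (
                        SELECT rowid,
                               ROW_NUMBER() OVER (PARTITION BY col1, col2
                                                  ORDER BY rowid) AS rn
                        FROM t)
                    WHERE rn > 1)""")
rows = conn.execute("SELECT col1, col2 FROM t ORDER BY col1").fetchall()
print(rows)   # [('a', 1), ('b', 2)]
```

    The Oracle equivalent replaces `rowid` with `ROWID` and works the same way; on very large tables, a `CREATE TABLE AS SELECT DISTINCT` plus rename can be cheaper than a mass delete.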

  • How to delete duplicate data from the PSA table

    Dear All,
    How do I delete duplicate data from the PSA table? I have the purchase cube and I am getting the data from the item DataSource.
    In the PSA table I found some cancellation records: for those records the quantity is negative, while for the same record the value is positive.
    Because of this, the quantity updates to the target correctly, but the values are summarized, so I get the summarized value of all the normal and cancellation records.
    Please let me know how to delete this data while updating to the target.
    Thanks
    Regards,
    Sai

    Hi,
    Deleting records directly in the PSA table is difficult, and how many would you delete?
    You can achieve this in different ways:
    1. Create a DSO and maintain key fields; it will overwrite records based on those key fields.
    2. Write ABAP logic to delete the duplicate records at the InfoPackage level; check with your ABAPer.
    3. Restrict the cancellation records at the query level.
    Thanks,
    Phani.

  • How to delete duplicate records in all tables of the database

    I would like to be able to delete all duplicate records in all the tables of the database. Many of the tables have LONG columns.
    Thanks.

    Hello
    To delete duplicates from an individual table you can use a construct like:
    DELETE FROM
        table_a del_tab
    WHERE
        del_tab.ROWID <> (SELECT
                                MAX(dups_tab.ROWID)
                          FROM
                                table_a dups_tab
                          WHERE
                                dups_tab.col1 = del_tab.col1
                          AND
                                dups_tab.col2 = del_tab.col2
                          )
    You can then apply this to any table you want. The only differences will be the columns you join on in the subquery. If you want to look for duplicated data in the LONG columns themselves, I'm pretty sure you're going to need some PL/SQL coding, or maybe to convert them to BLOBs or something.
    HTH
    David

  • Find duplicate values in a table based on combination of two columns.

    I have a table with millions of records: Table B under schema A, and I have to find the duplicate records in it. Table B uses a combination of columns to identify unique values.
    The scenario is something like this:
    one SV_Id can have multiple SV_Details_Id values, and I have to find out whether there are any duplicates in the table. (I have to use a combination of columns to identify unique values; there is no primary key, foreign key, or unique key on the table.)
    I then wrote the following query:
    select sv_id, sv_details_id, count(*) from SchemaA.TableB group by sv_id, sv_details_id having count(*) > 1;
    Is it correct? After running the above query, it is returning rows that do not match the duplicates.
    kindly help me.
    Looking forward for some guidance -
    Thanks in advance.

    What is the desired output?
    Do you want to see only the unique records, or do you want to see the duplicate records?
    Giving an example will make it easier to provide the required query.
    BTW, here is a simple query which deletes the duplicate records and keeps the unique records in the table.
    DELETE
      FROM table_name     a
    WHERE EXISTS (SELECT 1
                     FROM table_name      b
                    WHERE a.sv_id         = b.sv_id
                      AND a.sv_detail_id  = b.sv_detail_id
                      AND a.ROWID         > b.ROWID
                   );
    Regards
    Arun
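    Arun's DELETE can be tried out on a small scale: SQLite's rowid behaves like Oracle's ROWID for this purpose (the subquery here correlates on the table name rather than an alias). A sketch with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sv (sv_id INTEGER, sv_detail_id INTEGER)")
conn.executemany("INSERT INTO sv VALUES (?, ?)",
                 [(1, 10), (1, 10), (1, 11), (2, 10), (2, 10)])

# Same idea as the Oracle statement: within each duplicate group, keep
# the row with the smallest rowid and delete the rest.
conn.execute("""DELETE FROM sv
                WHERE rowid > (SELECT MIN(b.rowid)
                               FROM sv b
                               WHERE b.sv_id = sv.sv_id
                                 AND b.sv_detail_id = sv.sv_detail_id)""")
rows = conn.execute(
    "SELECT sv_id, sv_detail_id FROM sv ORDER BY sv_id, sv_detail_id").fetchall()
print(rows)   # [(1, 10), (1, 11), (2, 10)]
```

    Note the `>` against MIN(rowid) keeps the oldest copy; flipping to `<` against MAX(rowid), as in Arun's version, keeps the newest instead.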

  • Transaction code for table maintenance

    Hi all,
    I have created table maintenance for a table, and I also need to create a transaction code
    for the table maintenance.
    I created the transaction code with type "Transaction with parameters" and "Skip first screen".
    I can see all the records of the table in the table maintenance.
    Is there any possibility to restrict the records on the key fields (like a selection screen)?
    Could anybody let me know how to go ahead with this requirement?
    Regards,
    Madhavi

    You can build a small report that calls the maintenance view. In the report, convert the SELECT-OPTIONS input to the [DBA_SELLIST|https://www.sdn.sap.com/irj/sdn/advancedsearch?cat=sdn_all&query=dba_sellist&adv=false&sortby=cm_rnd_rankvalue] parameter of function module [VIEW_MAINTENANCE_CALL|https://www.sdn.sap.com/irj/sdn/advancedsearch?cat=sdn_all&query=view_maintenance_call&adv=false&sortby=cm_rnd_rankvalue].
    If you have "pertinent" keys to filter the data on, you may define these as sub-keys in a maintenance view; those fields will be asked for when entering the maintenance dialog. Or you can build a [view cluster|https://www.sdn.sap.com/irj/sdn/advancedsearch?cat=sdn_all&query=se55&adv=false&sortby=cm_rnd_rankvalue] using these sub-set keys.
    Regards

  • Best way to remove duplicates based on multiple tables

    Hi,
    I have a mechanism which loads flat files into multiple tables (up to 6 different tables) using external tables.
    Whenever a new file arrives, I need to insert duplicate rows into a side table, but the duplicates have to be searched for across all 6 tables, according to a given set of columns that exists in all of them.
    In the SQL Server version of the same mechanism (which I'm migrating to Oracle), an additional "unique" table with only 2 columns (Checksum1, Checksum2) holds the checksum values of 2 different sets of columns per inserted record. When a new file arrives, the mechanism computes these 2 checksums for every record and looks them up in the unique table, to avoid searching all the different tables.
    We know that working with checksums is not bulletproof, but with those sets of fields it seems to work.
    My questions are:
    Should I use the same checksum mechanism? If so, should I use the owa_opt_lock.checksum function to calculate the checksums?
    Or should I look for duplicates in all the tables one after the other (indexing some of the columns we check for duplicates on)?
    Note:
    These tables are partitioned by day and can be very large.
    Any advice would be welcome.
    Thanks.

    >
    I need to keep duplicate rows in a side table and not load them into table1...table6
    >
    Does that mean that you don't want ANY row if it has a duplicate on your 6 columns?
    Let's say I have six records that have identical values for your 6 columns. One record meets the condition for table1, one for table2 and so on.
    Do you want to keep one of these records and put the other 5 in the side table? If so, which one should be kept?
    Or do you want all 6 records put in the side table?
    You could delete the duplicates from the temp table as the first step. Or better
    1. add a new column WHICH_TABLE NUMBER to the temp table
    2. update the new column to -1 for records that are dups.
    3. update the new column (might be done with one query) to set the table number based on the conditions for each table
    4. INSERT INTO TABLE1 SELECT * FROM TEMP_TABLE WHERE WHICH_TABLE = 1
    INSERT INTO TABLE6 SELECT * FROM TEMP_TABLE WHERE WHICH_TABLE = 6
    When you are done the WHICH_TABLE will be flagged with
    1. NULL if a record was not a DUP but was not inserted into any of your tables - possible error record to examine
    2. -1 if a record was a DUP
    3. 1 - if the record went to table 1 (2 for table 2 and so on)
    This 'flag and then select' approach is more performant than deleting records after each select. Especially if the flagging can be done in one pass (full table scan).
    See this other thread (or many, many others on the net) from today for how to find and remove duplicates
    Best way of removing duplicates
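    The flag-and-select steps above can be sketched end to end. This toy version uses Python and SQLite with invented names (k1, k2 as the duplicate key, val as the per-table routing condition) rather than the poster's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_table (k1 TEXT, k2 TEXT, val INTEGER)")
conn.executemany("INSERT INTO temp_table VALUES (?, ?, ?)",
                 [("a", "x", 1), ("a", "x", 1),   # duplicates on (k1, k2)
                  ("b", "y", 1), ("c", "z", 2)])

# Step 1: add the routing column.
conn.execute("ALTER TABLE temp_table ADD COLUMN which_table INTEGER")
# Step 2: flag every row that has a duplicate with -1.
conn.execute("""UPDATE temp_table SET which_table = -1
                WHERE EXISTS (SELECT 1 FROM temp_table b
                              WHERE b.k1 = temp_table.k1
                                AND b.k2 = temp_table.k2
                                AND b.rowid <> temp_table.rowid)""")
# Step 3: route the remaining rows by each target table's condition
# (here the condition is simply the val column).
conn.execute("UPDATE temp_table SET which_table = val WHERE which_table IS NULL")
# Step 4: one INSERT ... SELECT (here CREATE ... AS SELECT) per target table.
conn.execute("CREATE TABLE table1 AS "
             "SELECT k1, k2, val FROM temp_table WHERE which_table = 1")
conn.execute("CREATE TABLE table2 AS "
             "SELECT k1, k2, val FROM temp_table WHERE which_table = 2")
n_dups = conn.execute(
    "SELECT COUNT(*) FROM temp_table WHERE which_table = -1").fetchone()[0]
print(n_dups)   # 2 — both copies of the duplicate pair went to the side bucket
```

    As the answer notes, the win is that the duplicate detection and the routing are each a single pass over the staging table, instead of a delete after every per-table select.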

  • Identifying duplicate records in a table

    I am trying to identify duplicate records in a table. They are broadly duplicated, but some of the fields change on each insert whilst others are always the same.
    I can't work out the logic and it is driving me #$%$#^@ crazy!

    Here are a couple of other examples:
    Method 1: -- Makes use of the uniqueness of Oracle ROWIDs to identify duplicates.
    =========
    To check for single column duplicates:
    select rowid, deptno
    from dept outer
    where
    outer.rowid >
    (select min(rowid) from dept inner
    where inner.deptno=outer.deptno)
    order by deptno;
    To check for multi-column (key) duplicates:
    select rowid, deptno, dname
    from dept outer
    where
    outer.rowid >
    (select min(rowid) from dept inner
    where inner.deptno&#0124; &#0124;inner.dname=outer.deptno&#0124; &#0124;outer.deptno)
    order by deptno;
    Method 2: -- Makes use of resultset groups to identify uniqueness
    =========
    To check for single column duplicates:
    select rowid, deptno
    from dept
    where
    deptno in
    (select deptno from dept group by deptno having count(*) > 1)
    order by deptno;
    To check for multi-column (key) duplicates:
    select rowid, deptno, dname
    from dept
    where
    deptno&#0124; &#0124;dname in
    (select deptno&#0124; &#0124;dname from dept group by deptno&#0124; &#0124;dname having count(*) > 1)
    order by deptno;

  • Counting duplicates in a big table

    Dear all,
    We are counting the number of duplicate records in a table as follows:
    SELECT COUNT (*)
    FROM semateam b
    WHERE cdate < '02-apr-09'
    AND ROWID IN (
        SELECT MAX (ROWID)
        FROM incalls a
        WHERE a.cdate = b.cdate
        AND a.bno = b.bno
        AND a.b_subno = b.b_subno
        AND a.c_type = b.c_type
        AND a.calldn = b.calldn
        AND a.bamt = b.bamt
        AND a.preb = b.preb
        AND a.postb = b.postb
        GROUP BY a.cdate,
        a.bno,
        a.b_subno,
        a.calldn,
        a.c_type,
        a.bamt,
        a.preb,
        a.postb
        HAVING COUNT (*) > 1)
    The table semateam has about 8 million rows and is properly indexed. Please let me have your suggestions on how to tune this.
    OS : solaris 5.10
    DB : 10.2.0.4.
    Kai

    Hi,
    What is the explain plan for the query? To check for duplicate rows you can refer to the links below:
    [http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:15258974323143]
    [http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1224636375004]
    Anand

  • Is it possible to find the transaction code via table name ??

    Hi
    Can anybody please let me know whether it is possible to find the transaction code via the table name?
    Thanks in advance
    Sesh
    Edited by: seshu_sapfico on Dec 8, 2009 12:21 PM

    Please specify your requirement... A table can be modified by various programs, which are called by numerous transactions.

  • Short dump in J2I5 (duplicate entry in a standard table)

    Hello experts,
    We have a problem in transaction J2I5 (Excise Register Extraction). The input is: select excise group 20 and a date from 04.08.09 onwards, and select the register RG23D. It then shows a runtime error (e.g. "The ABAP/4 Open SQL array insert results in duplicate database record"). But is it really possible for the standard transaction to produce a duplicate entry in a standard table?
    Thanks & regards
    Aditya Kr Tripathi

    Runtime Errors         SAPSQL_ARRAY_INSERT_DUPREC
    Except.                CX_SY_OPEN_SQL_DB
    Date and Time          29.01.2010 10:57:09
    problem occurs in this code :
       assign I_RG23D_TAB-I_RG23D_TYP to <x_rg23dtyp> casting.
       <x_extrctdata> = <x_rg23dtyp>.
        class CL_ABAP_CONTAINER_UTILITIES definition load.
        call method CL_ABAP_CONTAINER_UTILITIES=>FILL_CONTAINER_C
          EXPORTING
            IM_VALUE               = i_rg23d_tab-i_rg23d_typ
          IMPORTING
            EX_CONTAINER           = i_report_tab-extrctdata
          EXCEPTIONS
            ILLEGAL_PARAMETER_TYPE = 1
            others                 = 2.
       I_REPORT_TAB-EXTRCTDATA = I_RG23D_TAB-I_RG23D_TYP.
        COMPUTE I_REPORT_TAB-EXTRCTLNGT = STRLEN( I_REPORT_TAB-EXTRCTDATA ).
        APPEND I_REPORT_TAB.
      ENDLOOP.
      IF M_EXTRACTED = 'X'.
        LOOP AT I_RG23D_KEY.
          DELETE
          FROM  J_2IEXTRCT
          WHERE BUDAT    = I_RG23D_KEY-BUDAT
          AND   SERIALNO = I_RG23D_KEY-SERIALNO
          AND   REGISTER = I_RG23D_KEY-REGISTER
          AND   EXGRP    = I_RG23D_KEY-EXGRP.
        ENDLOOP.
      ENDIF.
    * Control table check here for data extraction
      INSERT J_2IEXTRCT FROM TABLE I_REPORT_TAB.
    * If the insertion into the extract table is successful then the table
    * for extraction is inserted
      IF SY-SUBRC EQ 0.
        PERFORM FILL_EXTDT USING C_RG23D M_EXTRACTED.
      ENDIF.
    ENDFORM.                                                    " RG23D
    *&      Form  RG23CPART1
    * Purpose : RG23C Part I extraction logic
    FORM RG23CPART1.
      DATA: $PART1      TYPE PART1_TYP,
            $LINCNT     LIKE SY-LINCT,
            M_EXTRACTED VALUE '',
            $RC         LIKE SY-SUBRC.

  • Removing duplicates in the Internal Table

    Dear friends,
    Could any one of you kindly help me with code to delete the duplicates in an internal table? Each duplicate should be counted (how many times it appeared), and the result should be displayed again as a report with the messages and the number of times each message appeared.
    Thank you,
    Best Regards,
    subramanyeshwer

    You can try something like this.
    report zrich_0001.
    data: begin of itab1 occurs 0,
          fld1 type c,
          fld2 type c,
          fld3 type c,
          end of itab1.
    data: begin of itab2 occurs 0,
          fld1 type c,
          fld2 type c,
          fld3 type c,
          end of itab2.
    data: counter type i.
    itab1 = 'ABC'.  append itab1.
    itab1 = 'DEF'.  append itab1.
    itab1 = 'GHI'.  append itab1.
    itab1 = 'DEF'.  append itab1.
    itab1 = 'GHI'.  append itab1.
    itab1 = 'DEF'.  append itab1.
    itab2[] = itab1[].
    sort itab1 ascending.
    delete adjacent duplicates from itab1.
    loop at itab1.
    clear counter.
      loop at itab2 where fld1 = itab1-fld1
                     and  fld2 = itab1-fld2
                     and  fld3 = itab1-fld3.
        counter = counter + 1.
      endloop.
    write:/ itab1-fld1, itab1-fld2, itab1-fld3,
             'Number of occurances:', counter.
    endloop.
    Regards,
    Rich Heilman
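    Rich's two-table ABAP approach (dedupe one copy, then count matches in the other) can be expressed compactly in other languages; in Python, for example, the standard library's collections.Counter does both steps in a single pass:

```python
from collections import Counter

# Same data as the ABAP example: one unique line and two repeated ones.
messages = ["ABC", "DEF", "GHI", "DEF", "GHI", "DEF"]

# Counter collapses duplicates and tallies occurrences at the same time.
counts = Counter(messages)
for message, n in counts.items():
    print(message, "Number of occurrences:", n)
# ABC appears once, DEF three times, GHI twice
```

    The nested-loop version above is O(n²) in the number of lines; a hash-based tally like this (or, in ABAP, a COLLECT into a table with a count field) is linear.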
