Duplicated records

Dear friends,
One question for you from a PL/SQL beginner:
I have to identify duplicated records (and flag them as errors).
With the code below I retrieve only one record per duplicate group, but I need ALL the duplicated records (for example, if I have 2 equal records, I need both of them, not only one):
SELECT
ID_AMBS
, ASL
, CD_PRESIDIO
, GGMM_CONTATTO
, NR_RICETTA
, CD_CONT_PRESCR
, NR_PROG_INT
FROM NOC_AMBS_WORK
WHERE ID_AMBS IN (
    SELECT
    min(ID_AMBS)
    FROM
    NOC_AMBS_WORK
    GROUP BY
    ASL, CD_PRESIDIO, GGMM_CONTATTO, NR_RICETTA, CD_CONT_PRESCR, NR_PROG_INT
    HAVING
    COUNT (*) > 1
)
With the code below I retrieve all the records of the table (not only the duplicated ones):
       SELECT *
        FROM noc_ambs_work a,
             noc_ambs_work b
       WHERE NVL(a.asl,'x') = NVL(b.asl,'x')
         AND NVL(a.cd_presidio,'x') = NVL(b.cd_presidio,'x')
         AND NVL(a.ggmm_contatto,'x') = NVL(b.ggmm_contatto,'x')
         AND NVL(a.nr_ricetta,'x') = NVL(b.nr_ricetta,'x')
         AND NVL(a.cd_cont_prescr,'x') = NVL(b.cd_cont_prescr,'x')
         AND NVL(a.nr_prog_int,'x') = NVL(b.nr_prog_int,'x')
Can you help me?
Tks
leo

Since you have not mentioned which columns are duplicated, what the table structure is, or which column is the PK, I think the sample below can help:
SQL> ed
Wrote file afiedt.buf
  1  with c as
  2  (
  3  select 1 id, 'A' n from dual union all
  4  select 1,'A' from dual union all
  5  select 1,'A' from dual union all
  6  select 2,'e' from dual union all
  7  select 3,'r' from dual)
  8  select *
  9  from c
10  where id in (select t1.id
11         from c t1
12         group by id
13*        having count(*) >1)
SQL> /
        ID N
         1 A
         1 A
         1 A
SQL>
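
For the original table, an analytic count is another way to return every member of each duplicate group (just a sketch, assuming the six columns listed are what define a duplicate and that ID_AMBS identifies each row):

SELECT id_ambs, asl, cd_presidio, ggmm_contatto, nr_ricetta, cd_cont_prescr, nr_prog_int
  FROM (SELECT w.*,
               -- every member of a duplicate group gets cnt > 1
               COUNT(*) OVER (PARTITION BY asl, cd_presidio, ggmm_contatto,
                                           nr_ricetta, cd_cont_prescr, nr_prog_int) AS cnt
          FROM noc_ambs_work w)
 WHERE cnt > 1;

Incidentally, the self-join version returns the whole table because every row also matches itself; adding something like a.id_ambs <> b.id_ambs (or comparing ROWIDs) would restrict it to genuine duplicates.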

Similar Messages

  • How to delete the duplicated records, not just suppress the records?

    I am new to CR. Right now I am doing a project in which CR gets its query from Oracle. I got the query from Oracle, but there are records with duplicated fields. For example (the following shows only some of the fields):
    ID    body_code
    1     10
    2     10
    3     15
    4     15
    5     15
    6     16
    I need to select only records (not suppress them, because I will do some calculations later) like the following:
    ID    body_code
    1     10
    2     15
    3     16
    I tried to create a selection formula in the formula workshop, shown as follows:
    onlastrecord;
    <>next
    but CR said 'next' cannot be evaluated. I think it must have something to do with print-time evaluation. So what should I do to delete the duplicated records? Thank you very much.

    Ting,
    Try this:
    Insert a group on body_code.  Then create a running total called Distinct Count.  Field to summarize -> ID, Evaluate on change of group Group # 1 Body Code, and Never Reset.
    Then insert a chart in the report header:
    In the advanced layout, select body_code on change of and select Distinct Count running total in the Show values.
    I hope I understood what you're looking to accomplish.
    Z
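    If the de-duplication can instead be pushed down into the Oracle query that CR runs, a sketch is below (my_table is only a placeholder name, since the real table was not given):
    SELECT ROW_NUMBER() OVER (ORDER BY body_code) AS id,   -- renumber the kept rows 1, 2, 3 ...
           body_code
      FROM (SELECT DISTINCT body_code FROM my_table);      -- one row per body_code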

  • Deleting duplicated records by date

    Hi,
    what could I do to delete duplicated records from test1 table
    the test1 table have the next columns that I describe below:
    code
    operator
    phone
    init_date
    end_date
    The records are duplicated by code, operator, phone, and init_date,
    and I need to delete the records with min(end_date).
    Thanks in advance...

    /* Formatted on 1/12/2012 7:28:44 AM (QP5 v5.149.1003.31008) */
    CREATE TABLE data
    AS
       (SELECT 'A' code,
               'Bob' operator,
               '111-2222' phone,
               ADD_MONTHS (SYSDATE, -1) init_date,
               SYSDATE end_date
          FROM DUAL
        UNION ALL
        SELECT 'A',
               'Bob',
               '111-2222',
               ADD_MONTHS (SYSDATE, -1) init_date,
               SYSDATE + 1 end_date
          FROM DUAL);
    DELETE FROM data
          WHERE (code, operator, phone, init_date, end_date) IN
                   (SELECT code,
                           operator,
                           phone,
                           init_date,
                           end_date
                      FROM (SELECT data.*,
                                    COUNT (*)
                                   OVER (
                                      PARTITION BY code,
                                                   operator,
                                                   phone,
                                                   init_date)
                                      cnt,
                                   ROW_NUMBER ()
                                   OVER (
                                      PARTITION BY code,
                                                   operator,
                                                   phone,
                                                   init_date
                                      ORDER BY end_date)
                                      rn
                              FROM data)
                     WHERE cnt > 1 AND rn = 1);
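    A shorter alternative sketch, assuming end_date is unique within each duplicate group, deletes only the row with the earliest end_date:
    DELETE FROM data
          WHERE (code, operator, phone, init_date, end_date) IN
                   (SELECT code, operator, phone, init_date, MIN (end_date)
                      FROM data
                     GROUP BY code, operator, phone, init_date
                    HAVING COUNT (*) > 1);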

  • Duplicated records on infoobect data load

    Hi,
    I have a problem when loading dato to 0UCINSTALLA infoobject.
    It goes to a red flag and reports duplicated records in the /BI0/QUCINSTALLA and /BI0/YUCINSTALLA tables.
    I checked the InfoPackage: the "PSA only" checkbox is selected, and the "Continuing..." and "Ignore dup records" checkboxes are selected, too.
    If "Ignore duplicated records" is selected, why is the error reported?
    I don't know what to do with this problem.
    any ideas?
    thanks for the help.
    Mauricio.

    In the transfer structure, write a start routine that deletes duplicate records, like this:
    sort DATAPAK by /BIC/field1 descending /BIC/field2
    /BIC/field3.
    delete adjacent duplicates from DATAPAK comparing /BIC/field1 /BIC/field2.
    Hope it helps.
    Regards

  • LSO: Duplicated Records Created For Each Person During Booking For Course

    Dear All,
    For the past 2 months, I've been hitting the following issue.
    When I book an employee to a course through ESS, duplicated course participation records are created in the system.
    The duplicated records are having different booking priorities.
    And I am aware that this happens to courses which have a workflow tagged to them.
    Sometimes, instead of duplicated records being created, it hits a dump with the following message:
    "The ABAP/4 Open SQL array insert results in duplicate database records."
    Does anyone know what problem is this?
    Thanks.

    Did you solve this problem? I'm facing the same problem.
    My HRP1001 table keeps growing.

  • How to connect COEP-ESLH tables - duplicated records

    Dear All,
    I'd like to ask you to please give a hand in this issue:
    I connected the COEP and ESLH tables in SQVI. The tables are connected through the fields EBELN & EBELP. The problem is that I get duplicated records, and until now I haven't been able to figure out what the reason might be...
    How can I solve this? Please...
    Thanks,
    Csaba

    Hi Christine,
    Thanks for your help.
    To tell the truth, I've got a task to check why someone else's query doesn't work... after a while I found out that the problem is this table connection...
    I'm not (fully) aware of the business purpose of this query; I'm working in the logistics field...
    Example of duplicated records:
    ESLH
    packno_ebeln_ebelp
    000001_0011_0010
    000002_0011_0010
    COEP
    belnr_buzei_ebeln_ebelp
    000A_0001_0011_0010
    000A_0002_0011_0010
    As a result of the COEP-ESLH query I get four records instead of 2 (I guess the first record in COEP belongs to the first record in ESLH, etc., but of course that's just my opinion and SAP cannot take it into account...)
    BR
    Csaba
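    A minimal sketch of why the row count multiplies (the values are taken from the example above; only the join condition itself is assumed): joining on ebeln and ebelp alone pairs each of the two ESLH rows with each of the two COEP rows, so 2 x 2 = 4 rows come back. An extra join condition on a field that really links the two tables would be needed to get 2.
    WITH eslh AS (SELECT '000001' packno, '0011' ebeln, '0010' ebelp FROM dual UNION ALL
                  SELECT '000002', '0011', '0010' FROM dual),
         coep AS (SELECT '000A' belnr, '0001' buzei, '0011' ebeln, '0010' ebelp FROM dual UNION ALL
                  SELECT '000A', '0002', '0011', '0010' FROM dual)
    SELECT e.packno, c.belnr, c.buzei, e.ebeln, e.ebelp
      FROM eslh e, coep c
     WHERE c.ebeln = e.ebeln
       AND c.ebelp = e.ebelp;   -- returns 4 rows, not 2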

  • Select duplicated records using SE16 ?

    Is it possible to discover the duplicated records using SE16?
    Thanks!

    Yes it is!
    You display the whole table (all lines and all fields), then you compare it line by line. Some sorting might be useful.
    On the other hand, if there is at least one key field in the table (which is always the case), there should not be many duplicate entries...

  • Outbound: Duplicated records, Scripting tool, Campaign name

    Hi All;
    Is there an option to remove duplicated records in the outbound? This can happen when importing two files that contain some of the same numbers, so we need the new record to be used while the old one is removed (this option is called "remove the duplication" or something like that).
    Also, I am looking for a scripting tool, so that when the agent is talking with the customer, there is a script that helps the agent know how to deal with the customer; this script needs to be created per campaign. Is there something like this?
    Another thing: when the call reaches the agent, is it possible for the name of the campaign to be shown in the CTI client?
    Regards
    Bilal

    Dear Pallavi,
    Very useful post!
    I am looking for similar accelerators for
    Software Inventory Accelerator
    Hardware Inventory Accelerator
    Interfaces Inventory
    Customization Assessment Accelerator
    Sizing Tool
    Which helps us to come up with the relevant Bill of Materials for every area mentioned above, and the ones which I don't know...
    Request help on such accelerators... Any clues?
    Any reply, help is highly appreciated.
    Regards
    Manish Madhav

  • Duplicated records for DTP

    Hello Gurus,
    I have a failed process chain. The data flow is from an SAP DataSource to a write-optimized DSO. Due to duplicated records in the DataSource, the DTP failed. How can I resolve this issue, in detailed steps?
    Many thanks

    Hi,
    There is a setting in the DTP that handles duplicate keys. Just check that and try to load it again.
    It will resolve the issue.
    Cheers,
    Reena

  • Handling Duplicated Records in DTP

    Dear Experts,
    I am trying to load to the master data of 0PLANT using datasource 0BBP_PLANT_LOCMAP_ATTR (Location/Plant Mapping) using DTP.
    This standard datasource is neither language- nor time-dependent. Also, in the source system, it is not marked to handle duplicate records.
    I have also referred to OSS Note 1147563 - Consulting note: Indicator "Duplicate records". One of the key highlight is "If you use a DTP to transfer master data or texts, you can set the indicator 'Duplicate records' in the DTP tab page 'Data transfer'." I would suppose that this means the "Handle Duplicated Record Keys" option under tab "Update" of the respective DTP.
    In this OSS Note, it was also mentioned that
    "You must not set the indicator if the following prerequisites apply:
    The indicator 'DataSource delivers duplicate records' is not set in the DataSource."
    >> which is currently the case of datasource 0BBP_PLANT_LOCMAP_ATTR
    Checked in SAP Help [link|http://help.sap.com/saphelp_nw04s/helpdata/en/42/fbd598481e1a61e10000000a422035/frameset.htm]:
    You can specify how duplicate data records within a request are handled, independently of whether the setting that allows DataSources to deliver potentially duplicate data records has been made. This is useful if the setting was not made to the DataSource, but the system knows from other sources that duplicate data records are transferred (for example, when flat files are loaded).
    My question is, I can't load the master data, mainly because of these duplicated record key errors, and when I checked the indicator to handle duplicated record keys, I was given the error message "Enter a valid value". Thereafter, I couldn't do anything at all - activate the DTP, click on other tabs - it just got stuck at this point, and I could only choose to exit the transaction.
    Can anyone advise if I have basically missed anything?
    Thank you in advance.
    Regards,
    Adelynn

    Hi,
    Handling Duplicate Data Records 
    Use
    DataSources for texts or attributes can transfer data records with the same key into BI in one request. Whether the DataSource transfers multiple data records with the same key in one request is a property of the DataSource. There may be cases in which you want to transfer multiple data records with the same key (referred to as duplicate data records below) to BI more than once within a request; this is not always an error. BI provides functions to handle duplicate data records so that you can accommodate this.
    Features
    In a dataflow that is modeled using a transformation, you can work with duplicate data records for time-dependent and time-independent attributes and texts.
    If you are updating attributes or texts from a DataSource to an InfoObject using a data transfer process (DTP), you can specify the number of data records with the same record key within a request that the system can process. In DTP maintenance on the Update tab page, you set the Handle Duplicate Record Keys indicator to specify the number of data records.
    This indicator is not set by default.
    If you set the indicator, duplicate data records (multiple records with identical key values) are handled as follows:
    ●      Time-independent data:
    If data records have the same key, the last data record in the data package is interpreted as being valid and is updated to the target.
    ●      Time-Dependent Data
    If data records have the same key, the system calculates new time intervals for the data record values. The system calculates new time intervals on the basis of the intersecting time intervals and the sequence of the data records.
    Data record 1 is valid from 01.01.2006 to 31.12.2006
    Data record 2 has the same key but is valid from 01.07.2006 to 31.12.2007
    The system corrects the time interval for data record 1 to 01.01.2006 to 30.06.2006. As of 01.07.2006, the next data record in the data package (data record 2) is valid.
    If you set the indicator for time-dependent data, note the following:
    You cannot include the DataSource field that contains the DATETO information in the semantic key of the DTP; otherwise, duplicate data records may be sorted incorrectly and time intervals may be calculated incorrectly.
    The semantic key specifies the structure of the data packages that are read from the source.
    Example
    You have two data records with the same key within one data package.
    In the following graphic, DATETO is not an element of the key:
    In the data package, the data records are in sequence DS2, DS1. In this case, the time interval for data record 1 is corrected:
    Data record 1 is valid from 1.1.2002 to 31.12.2006.
    Data record 2 is valid from 1.1.2000 to 31.12.2001.
    In the following graphic, DATETO is an element of the key:
    If DATETO is an element of the key, the records are sorted by DATETO. In this case, the data record with the earliest date is put before the data record with the most recent date. In the data package, the data records are in sequence DS2, DS1. In this case, the time interval for data record 2 is corrected:
    Data record 2 is valid from 1.1.2000 to 31.12.2000.
    Data record 1 is valid from 1.1.2001 to 31.12.2006.
    If you do not set this indicator, data records that have the same key are written to the error stack of the DTP.
    Note
    You can specify how duplicate data records within a request are handled, independently of whether the setting that allows DataSources to deliver potentially duplicate data records has been made. This is useful if the setting was not made to the DataSource, but the system knows from other sources that duplicate data records are transferred (for example, when flat files are loaded).

  • The load from ODS - 0FIAP_O03 to cube 0FIAP_C03  is duplicating records

    Hello Everyone
    I need some help/advice for the following issue
    SAP BW version 3.5
    The Delta load for ODS - 0FIAP_O03 works correctly
    The load from ODS - 0FIAP_O03 to cube 0FIAP_C03  is duplicating records
    NB: I noticed one other forum user who has raised the same issue, but the question was not answered.
    My questions are 
    1. Is this a known problem?
    2. Is there a fix available from SAP?
    3. If anyone has had this issue and fixed it, could you please share how you achieved this
    I have possible solutions, but need to know if there is a standard solution.
    Thankyou
    Pushpa

    Hello Pushpa,
    I assume that you are using the Delta load to the initial ODS and then to the CUBE as well.
    If the delta is in place in both spots, then there should not be any issue when sending the data to the cube.
    If you are using the full load, then normally the data gets aggregated in the cube, as the objects use the addition mode in the cube.
    Can you post the exact error that you are facing here, as this can also be a design issue?
    Murali

  • Master Data load duplicated records - 0ART_SALES_ATTR

    Hi All,
    I'm trying to load master data in the test system and I'm getting an error that says there are duplicates of the data record 70114 with the key '10 30 000000000001034170 ' for characteristic 0MAT_SALES.
    This is wrong; no data is duplicated. I already checked the PSA and it's fine.
    The dataflow is the 3.5.
    The Infopackage has been set up with Update Initialization with Data Transfer.
    The PSA and the Infoobjects are empty, they have never been loaded.
    The Processing in the Infopackage has been set as PSA and Infoobject (Package by package).
    I can't modify the Infopackage as I am in a Test System.
    Any Idea on how to solve this?

    The problem was that the person who designed the dataflow made a mistake and swapped two fields.

  • Duplicated records - continued

    Hi All,
    I am using the following script to find records having two columns with duplicated values in one or more records. It works OK, but the columns used to find the records are not keys and therefore can't precisely identify them, so I need to print the column WORKORDER_NUMBER to identify each record.
    Environment: Oracle 10g.
    SELECT compid, compjobid, count(*)
    FROM Workorder
    WHERE deptid = 221000001 AND wostatus NOT IN (7)
    GROUP BY compid, compjobid
    HAVING count(*) > 1
    ORDER BY compid, compjobid
    Please advise on a solution. Many thanks in advance.
    Aleks

    You want something like:
    SQL> ed
    Wrote file afiedt.buf
      1  WITH t AS (select 1 AS PID, 123 as PAKey, 1 as SG from dual union all
      2             select 2, 123, 1 from dual union all
      3             select 3, 123, 1 from dual union all
      4             select 4, 123, 2 from dual union all
      5             select 5, 234, 1 from dual union all
      6             select 6, 234, 2 from dual union all
      7             select 7, 234, 2 from dual)
      8  -- END OF TEST DATA
      9  select t.pid, t.pakey, t.sg
    10  from t, (select pakey, sg
    11           from t
    12           group by pakey, sg
    13           having count(*) > 1
    14          ) tx
    15  where t.pakey = tx.pakey
    16* and   t.sg = tx.sg
    SQL> /
           PID      PAKEY         SG
             1        123          1
             2        123          1
             3        123          1
             6        234          2
             7        234          2
    SQL>
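    Applied to the original WORKORDER query (a sketch; the column names and filter values are taken from the post above):
    SELECT w.workorder_number, w.compid, w.compjobid
      FROM workorder w,
           (SELECT compid, compjobid
              FROM workorder
             WHERE deptid = 221000001 AND wostatus NOT IN (7)
             GROUP BY compid, compjobid
            HAVING COUNT(*) > 1) dup
     WHERE w.compid = dup.compid
       AND w.compjobid = dup.compjobid
       AND w.deptid = 221000001
       AND w.wostatus NOT IN (7)
     ORDER BY w.compid, w.compjobid;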

  • Help: Finding duplicated records

    I'd like to single out the records in a table where these fields have duplicated values, which defines them as duplicate records.
    ID        CATEGORY    SERVICE_FROM_DATE    SERVICE_TO_DATE
    249407    000055      08/28/1996           08/28/1996
    249869    000055      08/28/1996           08/28/1996
    268167    30          07/08/1996           07/25/1996
    268394    30          07/08/1996           07/25/1996
    Using the GROUP BY method does not show the list of records. I need the list of records that have these fields with duplicated values.
    Any suggestions are greatly appreciated.
    Jimmy

    Hi,
    This forum is dedicated to Oracle SQL Developer Data Modeler, so it is possible to get an answer here, but it is not very likely.
    Try to get help in in more appropriate forum from here
    https://forums.oracle.com/forums/main.jspa?categoryID=84
    Regards
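    For what it's worth, an analytic count would also list every duplicated row here (a sketch; your_table is a placeholder name, and CATEGORY plus the two service dates are assumed to define a duplicate):
    SELECT id, category, service_from_date, service_to_date
      FROM (SELECT t.*,
                   COUNT(*) OVER (PARTITION BY category, service_from_date, service_to_date) AS cnt
              FROM your_table t)
     WHERE cnt > 1
     ORDER BY category, service_from_date, service_to_date;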

  • Query Help: Duplicated Records

    Hi,
    I want to display the duplicated customers' names in my report; those records may have a 'null' value for the address or phone#.
    For Example:
    Last_Name    First_Name    Address     Phone#
    smith        john          125 ave.    123321
    smith        john                      123321
    lee          mary          245 st.     135425
    lee          mary          245 st.
    Can anyone help?
    Thank u,
    Jun

    Hi Jun
    I did not quite understand what you meant.
    But if your database table HAS duplicate values for Name, Reports will simply display them without culling duplicate values.
    I suggest you examine your query and ensure that you are not fetching DISTINCT columns.
    Regards
    Sripathy
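    At the query level, one option is to join back to a grouped subquery, so every duplicated name row comes back, including the ones where address or phone# is null (a sketch; customers is a placeholder table name):
    SELECT c.last_name, c.first_name, c.address, c.phone#
      FROM customers c,
           (SELECT last_name, first_name
              FROM customers
             GROUP BY last_name, first_name
            HAVING COUNT(*) > 1) dup
     WHERE c.last_name = dup.last_name
       AND c.first_name = dup.first_name;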
