Identify Duplicate Records

Post Author: chrise
CA Forum: Crystal Reports
I know that Crystal has the ability to identify all distinct records, thereby getting rid of all duplicate records. However, I need to do the exact opposite and so far have been unsuccessful. Can anyone help me create a report that identifies only duplicate records in Crystal?
Thanks.

Post Author: SKodidine
CA Forum: Crystal Reports
Check out this KBase article.
Retrieving duplicate records

Similar Messages

  • Identifying duplicate records in a table

    I am trying to identify duplicate records in a table; they are broadly duplicated, but some of the fields change on each insert while others are always the same.
    I can't work out the logic and it is driving me #$%$#^@ crazy!

    Here are a couple of other examples:
    Method 1: -- Makes use of the uniqueness of Oracle ROWIDs to identify duplicates.
    =========
    To check for single column duplicates:
    select rowid, deptno
    from dept outer
    where
    outer.rowid >
    (select min(rowid) from dept inner
    where inner.deptno=outer.deptno)
    order by deptno;
    To check for multi-column (key) duplicates:
    select rowid, deptno, dname
    from dept outer
    where
    outer.rowid >
    (select min(rowid) from dept inner
    where inner.deptno||inner.dname = outer.deptno||outer.dname)
    order by deptno;
    Method 2: -- Makes use of resultset groups to identify uniqueness
    =========
    To check for single column duplicates:
    select rowid, deptno
    from dept
    where
    deptno in
    (select deptno from dept group by deptno having count(*) > 1)
    order by deptno;
    To check for multi-column (key) duplicates:
    select rowid, deptno, dname
    from dept
    where
    deptno||dname in
    (select deptno||dname from dept group by deptno||dname having count(*) > 1)
    order by deptno;
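The two methods above can also be combined into a single analytic query, which avoids the correlated subquery entirely; a sketch, assuming the same standard `dept` demo table:

```sql
-- List every row beyond the first within each (deptno, dname) group;
-- ROW_NUMBER assigns 1 to the row we keep and 2+ to its duplicates.
SELECT rid, deptno, dname
FROM  (SELECT rowid AS rid,
              deptno,
              dname,
              ROW_NUMBER() OVER (PARTITION BY deptno, dname
                                 ORDER BY rowid) AS rn
       FROM   dept)
WHERE  rn > 1
ORDER  BY deptno;
```

Unlike the concatenation in Method 2, partitioning on the columns themselves cannot produce false matches (e.g. 'AB'||'C' colliding with 'A'||'BC').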

  • Identify duplicate records in a table

    I have this situation in the same table
    GOREMAL_PIDM     GOREMAL_EMAL_CODE     GOREMAL_EMAIL_ADDRESS     GOREMAL_STATUS_IND     GOREMAL_PREFERRED_IND
    2238954     REC1     [email protected]     A     Y
    2238954     REC1     [email protected]     A     N
    I need to identify those records (note: they have the same e-mail address, just in different upper and lower case), then I need to delete the one with GOREMAL_STATUS_IND = N.
    I am running this query
    select
    a.goremal_pidm    pidm_a,
    b.goremal_pidm    pidmb,
    a.goremal_emal_code emal_codea,
    b.goremal_emal_code emal_codeb,
    a.goremal_email_address email_addressa,
    b.goremal_email_address email_addressb,
    a.goremal_status_ind    status_inda,
    b.goremal_status_ind    status_indb,
    a.goremal_preferred_ind preferred_inda,
    b.goremal_preferred_ind preferred_indb,
    a.goremal_activity_date activity_datea,
    b.goremal_activity_date activity_dateb,
    a.goremal_user_id        user_ida,
    b.goremal_user_id        user_idb,
    a.goremal_comment        commenta,
    b.goremal_comment        commentb,
    a.goremal_disp_web_ind   web_inda,
    b.goremal_disp_web_ind   web_indb,
    a.goremal_data_origin    origina,
    b.goremal_data_origin    originb
    FROM
    goremal a, goremal b, saradap
    WHERE
    --b.goremal_pidm = 2216086
    b.goremal_preferred_ind = 'N'
    AND a.goremal_preferred_ind = 'Y' 
    AND  a.goremal_emal_code = 'REC1'
    AND B.goremal_emal_code = 'REC1'
    and a.goremal_email_address =  b.goremal_email_address
    and a.goremal_pidm = b.goremal_pidm
    and a.goremal_pidm = saradap_pidm
    AND Saradap_term_code_entry = 200990
    AND Saradap_program_1 = 'UBA'
    to identify the records, but it is not working; sometimes it gives me records that only have one row in the table (goremal).
    What would be a good way to do this? Again, I need to identify all the records that have 2 rows where the e-mail is the same, and then delete the ones with GOREMAL_STATUS_IND = N.
    Thank you.

    Hi,
    user648177 wrote:
    Sorry but the previous answer won't work
    I need to be able to identify (before the delete) all the records
    like this one
    GOREMAL_PIDM     GOREMAL_EMAL_CODE     GOREMAL_EMAIL_ADDRESS     GOREMAL_STATUS_IND     GOREMAL_PREFERRED_IND
    2238954     REC1     [email protected]     A     Y
    2238954     REC1     [email protected]     A     N
    where the GOREMAL_EMAL_CODE are equal and the GOREMAL_PREFERRED_IND is Y in one record and N in the other record; then I will delete the records with N, but I want to do a select before I do the delete.

    What is wrong with Mathiasm's suggestion? What exactly "won't work" when you try it?
    It would help if you formatted the data so that the columns lined up and posted it between {code} tags.
    It could be that there was some confusion about what the column names were, and you only have to substitute the right names for it to work.
    If you want to see the rows that will be deleted, rather than actually deleting them, change "DELETE" to "SELECT *".
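To make that concrete, here is a sketch of the preview query the reply describes, using the column names from the post (note the posted sample rows differ in GOREMAL_PREFERRED_IND, not GOREMAL_STATUS_IND, so that is the flag assumed here):

```sql
-- Preview the 'N' rows that have a case-insensitive e-mail twin flagged 'Y';
-- once verified, the same logic in a DELETE removes them.
SELECT n.*
FROM   goremal n
WHERE  n.goremal_preferred_ind = 'N'
AND    EXISTS (SELECT 1
               FROM   goremal y
               WHERE  y.goremal_pidm = n.goremal_pidm
               AND    y.goremal_emal_code = n.goremal_emal_code
               AND    UPPER(y.goremal_email_address) = UPPER(n.goremal_email_address)
               AND    y.goremal_preferred_ind = 'Y');
```

The self-join in the original query compares the e-mail addresses with `=`, which is case-sensitive; wrapping both sides in UPPER() is what catches the upper/lower-case duplicates.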

  • Problem in identifying unique records from different tables

    Hello gurus,
    I am on E-Recruitment module.
    In order to get more fields in the CANDIDATE_ATTR datasource I have enhanced it. I am getting additional fields from HRP5103 (Employer, Employment Start Date & Employment End Date) and from HRP5104 (Institute, City, Region). In both of the tables there are 9 primary keys, of which only two differ and can be used to identify duplicate records: OBJID (denotes the Candidate ID in both tables) and SEQNR (number of the infotype record with the same key).
    I know that compounding InfoObjects is one way (since I need to pull duplicate records from the table, as one candidate can have many institutes and employers but they will always be referred to by the same OBJID), but how do I manage the compounded data coming from the 2 tables?
    Also I do not want to create new objects as it will affect the whole setup.
    Is there any other way?
    Can anyone give an idea as to how to get the records into BW?
    Thanks in advance.

    Hi Sundar,
    Thanks for your help. I went through the two discussions and found the following:
    1. I cannot include the primary/unique keys in the selection as it would make everything distinct. This prevents me from pointing to the exact record that has changed.
    2. The columns would be dynamic. I want a config kind of solution where i can define the tables, columns on both the sides. And add/remove compare fields in the setup tables.
    Thanks,
    Faruq.

  • In SAP is it possible to identify duplicate BP master record?

    hi,
    In SAP is it possible to identify duplicate BP master record?
    Regards,
    babu

    Hi,
    You can use the BP duplicate check. See the link:
    http://help.sap.com/saphelp_crm50/helpdata/en/9a/6f9a3d13ce0450e10000000a114084/frameset.htm
    Regards
    Srinu

  • Duplicate record identifier and update

    My records look like 
    Name City Duplicateindicator 
    SAM   NYC   0
    SAM   NYC1 0
    SAM    ORD  0
    TAM   NYC  0
    TAM   NYC1  0 
    DAM   NYC  0  
    For some reason numeric characters were inserted into City, which duplicated my records.
    I need to:
    Check for duplicate records by name (if the name repeats), then check the city; records with the same base city name (NYC and NYC1) are considered the same city here. I am OK doing this for one city at a time.
    SAM has duplicate records as NYC and NYC1; the record SAM   NYC1   0 must be updated to SAM   NYC1   1.

    Good day tatva
    Since the city names are not exactly the same, you will need to parse the text to clean the numbers out of the name. This is best done with SQLCLR using a regular expression (if this fits your need, I can post the CLR code for you). In this case you would use a simple regular-expression replace function.
    On the result of that function you use a simple query with ROW_NUMBER() OVER (PARTITION BY RegularExpressionReplace(ColumnName, '[0-9]') ORDER BY ColumnName); in that result, every row with ROW_NUMBER greater than 1 is a duplicate.
    I hope this is useful :-)
      Ronen Ariely
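Without SQLCLR, the digit-stripping can also be approximated in plain T-SQL; a sketch, assuming SQL Server 2017+ for TRANSLATE and a hypothetical table name `CityRecords`:

```sql
-- Strip digits from City, then number the rows within each (Name, cleaned city)
-- group; every row after the first is flagged as a duplicate.
WITH cleaned AS (
    SELECT Name,
           City,
           Duplicateindicator,
           ROW_NUMBER() OVER (
               PARTITION BY Name,
                            REPLACE(TRANSLATE(City, '0123456789', '##########'), '#', '')
               ORDER BY City
           ) AS rn
    FROM   CityRecords
)
UPDATE cleaned
SET    Duplicateindicator = 1
WHERE  rn > 1;
```

Ordering by City makes the plain name (NYC) row 1, so only the numbered variant (NYC1) gets flagged.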

  • How to delete Duplicate records in IT2006

    Dear Experts
    We have a situation where we have duplicate records with the same start and end dates in IT2006. This is because of an incorrect configuration, which we have now corrected, but we need to do a clean-up of the existing duplicate records. Any idea how to clean it? I ran report RPTKOK00 to find these duplicates, but I could not delete the duplicate/inconsistent records using report RPTBPC10 or HNZUPTC0; I could only delete the deductions that happened in the record.
    Is there any standard report/any other means of deleting the duplicate records created in IT2006?
    Thanks in advance for all your help.
    Regards
    Vignesh.

    You could probably use se16n to identify the duplicates and create the list of quotas to delete, and you could probably use t-code lsmw to write up a script to delete them, but be aware that you can't delete a Quota if it's been deducted from.
    You'd have to delete the Absence/Attendance first, then delete the Quota, then recreate the Absence/Attendance.

  • Avoiding duplicate records while inserting into the table

    Hi
    I tried the following insert statement, where I want to avoid duplicate records during the insert itself,
    but it gives me an error like "invalid identifier", though the column exists in the table.
    Please let me know where I'm making the mistake.
    INSERT INTO t_map tm(sn_id,o_id,txt,typ,sn_time)
       SELECT 100,
              sk.obj_id,
              sk.key_txt,
              sk.obj_typ,
              sysdate,
              FROM S_KEY sk
        WHERE     sk.obj_typ = 'AY'
              AND SYSDATE BETWEEN sk.start_date AND sk.end_date
              AND sk.obj_id IN (100170,1001054)
               and   not exists  (select 1
                                                                   FROM t_map tm1 where tm1.O_ID=tm.o_id
                                                                        and tm1.sn_id=tm.sn_id
                                                                        and tm1.txt=tm.txt
                                                                        and tm1.typ=tm.typ
                                                                        and tm1.sn_time=tm.sn_time )

    Then you have to join the table with the alias tm1; where is that? Do you want it like this?
    INSERT INTO t_map tm(sn_id,o_id,txt,typ,sn_time)
       SELECT 100,
              sk.obj_id,
              sk.key_txt,
              sk.obj_typ,
          sysdate
              FROM S_KEY sk
        WHERE     sk.obj_typ = 'AY'
              AND SYSDATE BETWEEN sk.start_date AND sk.end_date
              AND sk.obj_id IN (100170,1001054)
               and   not exists  (select 1
                                                                   FROM t_map tm where sk.obj_ID=tm.o_id
                                                                        and 100=tm.sn_id
                                                                        and sk.key_txt=tm.txt
                                                                        and sk.obj_typ=tm.typ
                                                                        and sysdate=tm.sn_time )
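One caveat with both versions: the NOT EXISTS compares sn_time against SYSDATE, which changes every run, so existing rows will rarely match on it. A sketch of an alternative that screens duplicates on the stable columns only (column names as posted; the choice of match columns is an assumption):

```sql
-- MERGE inserts only rows whose (o_id, sn_id, txt, typ) are not already
-- present; sn_time is deliberately left out of the match, since it is SYSDATE.
MERGE INTO t_map tm
USING (SELECT 100        AS sn_id,
              sk.obj_id  AS o_id,
              sk.key_txt AS txt,
              sk.obj_typ AS typ,
              SYSDATE    AS sn_time
       FROM   s_key sk
       WHERE  sk.obj_typ = 'AY'
       AND    SYSDATE BETWEEN sk.start_date AND sk.end_date
       AND    sk.obj_id IN (100170, 1001054)) src
ON (    tm.o_id  = src.o_id
    AND tm.sn_id = src.sn_id
    AND tm.txt   = src.txt
    AND tm.typ   = src.typ)
WHEN NOT MATCHED THEN
    INSERT (sn_id, o_id, txt, typ, sn_time)
    VALUES (src.sn_id, src.o_id, src.txt, src.typ, src.sn_time);
```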

  • ABAP / Query to Identify Duplicate Rows in Cube

    Dear Experts,
    We have a situation where some of our Cubes (due to compression and varying levels of forceful reloads) now contain duplicate rows.
    What I need to know is :-
    1) Is there a way to identify duplicate rows where one of the characteristics are different but all key figures are identical.
    2) If so what is easier to achieve, ABAP routine/program or Query
    3) If ABAP suggestions on how to code such
    4) If query same.
    What I need it to do is tell me which ClaimNo record (Primary Key) has duplicates and what characteristic has caused it.
    I know I am asking for a lot but I really need to get this resolved as it's causing mayhem and trying to pinpoint these records is both time consuming and painful.  What we are looking to do with the records is establish how they became duplicated so we can prevent this happening in the future.
    Your help as always much appreciated.
    Regards
    Craig
    Message was edited by: Craig Armstead

    Hi Craig,
    My previous answer can find out what all cubes and data targets have been loaded based on a request.
    Actually for your query. The following information will be surely useful.
    tables: /BIC/**(source ) , /BIC**(target)
    parameter : fieldname like /BIC/****-fieldname ( in your case it can be the primary key or the duplicate entry )
    data: itab_source  like /BIC/*** occurs 0 with header line,
          itab_destination like /BIC/*** occurs 0 with header line.
    data: wa_itab_destination like line of itab_destination.
    select *
      from /BIC/*****
      into corresponding fields of table itab_source
      where fieldname = fieldname.
    ******Include your piece of code which is for deleting records
    Delete adjacent duplicates from itab_source comparing characteristic ( i.e  duplicate characteristic you specified)
    ****Use this to delete the ODS Data before writing into it
    call function 'RSDRI_ODSO_DELETE_RFC'
      exporting
        i_odsobject  = 'ODS Name'
        i_delete_all = 'X'.
    if sy-subrc = 0.
      loop at itab_source.
        move-corresponding itab_source to itab_destination.
        append itab_destination.
      endloop.
      modify /BIC/*** from table itab_destination[].   " target being written from itab
      commit work.
    endif.
    Please reward points if this really helps.
    Thanks,
    Srinivas.

  • Delete duplicate records based on condition

    Hi Friends,
    I am scratching my head as how to select one record from a group of duplicate records based upon column condition.
    Let's say I have a table with following data :
    ID   START_DATE   END_DATE    ITEM_ID     MULT    RETAIL            |                      RETAIL / MULT
    1     10/17/2008   1/1/2009     83     3     7                 |                            2.3333
    2     10/17/2008   1/1/2009     83     2     4                 |                            2
    3     10/17/2008   1/1/2009     83     2     4                 |                            2
    4     10/31/2008   1/1/2009     89     3     6                 |                            2
    5     10/31/2008   1/1/2009     89     4     10                |                            2.5
    6     10/31/2008   1/1/2009     89     4     10                |                            2.5
    7     10/31/2008   1/1/2009     89     6     6                 |                            1
    8     10/17/2008   10/23/2008     124     3     6                 |                            2

    From the above records, the rule to identify duplicates is based on START_DATE, END_DATE, ITEM_ID.
    Hence the duplicate sets are {1,2,3} and {4,5,6,7}.
    Now I want to keep one record from each duplicate set which has lowest value for retail/mult(retail divided by mult) and delete rest.
    So from the above table data, for duplicate set {1,2,3}, the min(retail/mult) is 2. But records 2 & 3 have same value i.e. 2
    In that case pick either of those records and delete the records 1,2 (or 3).
    All this while it was pretty straight forward for which I was using the below delete statement.
    DELETE FROM table_x a
          WHERE ROWID >
                   (SELECT MIN (ROWID)
                      FROM table_x b
                     WHERE a.ID = b.ID
                       AND a.start_date = b.start_date
                       AND a.end_date = b.end_date
                        AND a.item_id = b.item_id);

    Due to sudden requirement changes I need to change my SQL.
    So, experts please throw some light on how to get away from this hurdle.
    Thanks,
    Raj.
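One way to adapt the DELETE so it keeps the lowest retail/mult per group is Oracle's KEEP (DENSE_RANK FIRST) aggregate; a sketch, assuming mult is never zero:

```sql
-- For each (start_date, end_date, item_id) group, MIN(rowid) KEEP ... FIRST
-- picks one rowid among the rows with the lowest retail/mult; every other
-- row in the group is deleted.
DELETE FROM table_x
WHERE  rowid NOT IN (
         SELECT MIN(rowid) KEEP (DENSE_RANK FIRST ORDER BY retail / mult)
         FROM   table_x
         GROUP  BY start_date, end_date, item_id);
```

Ties (records 2 and 3 in the example) are broken arbitrarily by MIN(rowid), matching the "pick either" requirement, and single-row groups keep their only row.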

    Well, it was my mistake that I forgot to mention one more point in my earlier post.
    Sentinel,
    Your UPDATE perfectly works if I am updating only NEW_ID column.
    But I have to update the STATUS_ID as well for these duplicate records.
    ID   START_DATE   END_DATE    ITEM_ID     MULT    RETAIL    NEW_ID   STATUS_ID |   RETAIL / MULT
    1     10/17/2008   1/1/2009     83     3     7         2         1      |     2.3333
    2     10/17/2008   1/1/2009     83     2     4                                |     2
    3     10/17/2008   1/1/2009     83     2     4           2         1      |     2
    4     10/31/2008   1/1/2009     89     3     6           7         1      |     2
    5     10/31/2008   1/1/2009     89     4     10          7         1      |     2.5
    6     10/31/2008   1/1/2009     89     4     10          7         1      |     2.5
    7     10/31/2008   1/1/2009     89     6     6                            |     1
    8     10/17/2008   10/23/2008     124     3     6                            |     2

    So if I have to update the status_id then there must be a where clause in the update statement.
    WHERE ROW_NUM = 1
      AND t2.id != t1.id
      AND t2.START_DATE = t1.START_DATE
      AND t2.END_DATE = t1.END_DATE
      AND t2.ITEM_ID = t1.ITEM_ID

    In fact the entire WHERE clause in the inner select statement must be in the update's WHERE clause, which makes it totally impossible as T2 is visible only within the first select statement.
    Any thoughts please ?
    I appreciate your efforts.
    Definitely this is a very good learning curve. In all my experience I was always writing straight forward Update statements but not like this one. Very interesting.
    Thanks,
    Raj.

  • Duplicate Records in DTP, but not in PSA

    Hi,
    I'm facing a strange behavior of the DTP while trying to load master data: it is detecting duplicate records where there are none.
    For example:
    ID 'cours000000000001000'
    In the source system: 1 record
    In the PSA: 1 record
    In the DTP Temporary Storage, 2 identical lines are identified.
    In fact, in this Temporary Storage, all the PSA records are duplicated... but only 101 are displayed as erroneous in the DTP...
    Here is my question: How to get rid of this duplication in the temporary storage?
    Thanks for your help
    Sylvain

    The semantic key selection could cause the duplicate issue in master data: if similar values are found in the keys, they will be treated as duplicates.
    On the second tab of the DTP you have the 'Handle Duplicate Record Keys' option; choose that and load.
    Ramesh

  • Duplicate records in material master

    Hi All
    I am trying to init material master and I am getting this error message
    "281 duplicate record found. 0 recordings used in table /BI0/XMATERIAL"
    This is not the first time I am initializing. Deltas were running for a month, then I had to run a repair request to capture some changes, and when I ran the delta from then onwards I started getting this message.
    I started by deleting all the records and running this init packet.
    I have tried error handling, and I do not see any option for "ignore duplicate records" in the packet.
    I also cannot see any error (red) records in the PSA, even though the message says there are errors.
    Please advise.
    Thanks

    Hi,
    The duplicate record check is in the extraction program. I would suggest you do not deactivate/comment it out.
    What you should do is to go back to your material master records from the source system and sort out the materials not having unique identifier. Once this is sorted out, you can then re-run your delta. You shouldn't have the problem again. I once had the same problem from an HR extraction and i had to go back to the source data and ask the business to correct the duplication. A record for an employee was changed and there was overlap of dates in the employees record. The BW extraction program saw this as a duplicate record.
    I hope this help.
    Do not forget to award the points please.
    Regards,
    Jacob

  • Duplicate records in a collection

    Hi Experts,
    Just now I've seen a thread about finding duplicate records in a collection. I understand that it is not advisable to sort/filter data in a collection.
    (https://forums.oracle.com/thread/2584168)
    Just out of curiosity I tried to display the duplicate records in a collection. Please note this is for practice purposes only. Below is the rough code I wrote.
    I'm aware of one way: it can be handled effectively by passing the data into a global temporary table and displaying the duplicate/unique records.
    Can you please let me know if there is any other efficient way to do this.
    declare
      type emp_rec is record ( ename varchar2(40), empno number);
      l_emp_rec emp_rec; 
      type emp_tab is table of l_emp_rec%type index by binary_integer;
      l_emp_tab emp_tab;
      l_dup_tab emp_tab;
      l_cnt number;
      n number :=1;
    begin
    -- Assigning values to Associative array
      l_emp_tab(1).ename := 'suri';
      l_emp_tab(1).empno := 1;
      l_emp_tab(2).ename := 'surya';
      l_emp_tab(2).empno := 2;
      l_emp_tab(3).ename := 'suri';
      l_emp_tab(3).empno := 1;
    -- Comparing collection for duplicate records
    for i in l_emp_tab.first..l_emp_tab.last
    loop
        l_cnt :=0;  
    for j in l_emp_tab.first..l_emp_tab.last 
        loop      
           if l_emp_tab(i).empno = l_emp_tab(j).empno and l_emp_tab(i).ename = l_emp_tab(j).ename then
               l_cnt := l_cnt + 1;
               if l_cnt >= 2 then
                  l_dup_tab(n) := l_emp_tab(i);
                  n := n + 1;  -- advance the index, otherwise each duplicate overwrites the last
               end if;
           end if;
        end loop;  
    end loop;
    -- Displaying duplicate records (guard against an empty collection)
    if l_dup_tab.count > 0 then
      for i in l_dup_tab.first..l_dup_tab.last
      loop
         dbms_output.put_line(l_dup_tab(i).ename||'  '||l_dup_tab(i).empno);
      end loop;
    end if;
    end;
    Cheers,
    Suri

    Dunno if this is either easier or more efficient but it is different.  The biggest disadvantage to this technique is that you have extraneous database objects (a table) to keep track of.  The advantage is that you can use SQL to perform the difference checks easily.
    Create 2 global temporary tables with the structure you need, load them, and use set operators (UNION [ALL], INTERSECT, MINUS) to find the differences.  Or, create 1 GTT with an extra column identifying the set and use the extra column to identify the set records you need.
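The single-GTT variant mentioned above could look like this (table and column names are illustrative):

```sql
-- One scratch table; load the collection into it, then let SQL do the counting.
CREATE GLOBAL TEMPORARY TABLE emp_gtt (
    ename VARCHAR2(40),
    empno NUMBER
) ON COMMIT PRESERVE ROWS;

-- After inserting each collection element as a row:
SELECT ename, empno, COUNT(*) AS occurrences
FROM   emp_gtt
GROUP  BY ename, empno
HAVING COUNT(*) > 1;   -- only the duplicated (ename, empno) pairs
```

This trades the nested-loop comparison in the PL/SQL block for a single GROUP BY, at the cost of the extra database object.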

  • Duplicate records in PSA

    Hi all,
    how to identify & eliminate the duplicate records in the PSA??

    Hi,
    Here is the FI Help for the 'Handle Duplicate Record Keys' option in the Update tab:
    "Indicator: Handling Duplicate Data Records
    If this indicator is set, duplicate data records are handled during an update in the order in which they occur in a data package.
    For time-independent attributes of a characteristic, the last data record with the corresponding key within a data package defines the valid attribute value for the update for a given data record key.
    For time-dependent attributes, the validity ranges of the data record values are calculated according to their order (see example).
    If during your data quality measures you want to make sure that the data packages delivered by the DTP are not modified by the master data update, you must not set this indicator!
    Use:
    Note that for time-dependent master data, the semantic key of the DTP may not contain the field of the data source containing the DATETO information. When you set this indicator, error handling must be activated for the DTP because correcting duplicate data records is an error correction. The error correction must be "Update valid records, no reporting" or "Update valid records, reporting possible".
    Example:
    Handling of time-dependent data records
    - Data record 1 is valid from 01.01.2006 to 31.12.2006
    - Data record 2 has the same key but is valid from 01.07.2006 to 31.12.2007
    - The system corrects the time interval for data record 1 to 01.01.2006 to 30.06.2006. As of 01.07.2006, the next data record in the data package (data record 2) is valid."
    By flagging this option in the DTP, you are allowing it to take the latest value.
    There is further information at this SAP Help Portal link:
    http://help.sap.com/saphelp_nw04s/helpdata/en/d0/538f3b294a7f2de10000000a11402f/content.htm
    Rgds,
    Colum

  • Query to find duplicate records (Urgent!!!!)

    Hi,
    I have to load data from a staging table to a base table, but I don't want to load data already present in the base table. The criterion to identify duplicate data is through two fields, say v_id and status_flag: if these two are the same in both the staging and base tables, then that record must be rejected as a duplicate.
    Kindly help me with the SQL I need to use in a procedure.
    Thanks

    Hello
    Another alternative would be to use MINUS if the table structures match:
    --Source rows the first 5 are in the destination table
    SQL> select * from dt_test_src;
    OBJECT_ID OBJECT_NAME
    101081 /1005bd30_LnkdConstant
    90723 /10076b23_OraCustomDatumClosur
    97393 /103a2e73_DefaultEditorKitEndP
    106075 /1048734f_DefaultFolder
    93337 /10501902_BasicFileChooserUINe
         93013 /106faabc_BasicTreeUIKeyHandle
         94929 /10744837_ObjectStreamClass2
        100681 /1079c94d_NumberConstantData
         90909 /10804ae7_Constants
        102543 /108343f6_MultiColorChooserUI
         92413 /10845320_TypeMapImpl
         89593 /10948dc3_PermissionImpl
        102545 /1095ce9b_MultiComboBoxUI
         98065 /109cbb8e_SpanShapeRendererSim
        103855 /10a45bfe_ProfilePrinterErrors
        102145 /10a793fd_LocaleElements_iw
         98955 /10b74838_SecurityManagerImpl
        103841 /10c906a0_ProfilePrinterErrors
         90259 /10dcd7b1_ProducerConsumerProd
        100671 /10e48aa3_StringExpressionCons
    20 rows selected.
    Elapsed: 00:00:00.00
    --Destination table contents
    SQL> select * from dt_test_dest
      2  /
    OBJECT_ID OBJECT_NAME
        101081 /1005bd30_LnkdConstant
         90723 /10076b23_OraCustomDatumClosur
         97393 /103a2e73_DefaultEditorKitEndP
        106075 /1048734f_DefaultFolder
         93337 /10501902_BasicFileChooserUINe
    Elapsed: 00:00:00.00
    --try inserting everything which will fail because of the duplicates
    SQL> insert into dt_test_dest select * from dt_test_src;
    insert into dt_test_dest select * from dt_test_src
    ERROR at line 1:
    ORA-00001: unique constraint (CHIPSDEVDL1.DT_TEST_PK) violated
    Elapsed: 00:00:00.00
    --now use the minus operator to "subtract" rows from the source set that are already in the destination set
    SQL> insert into dt_test_dest select * from dt_test_src MINUS select * from dt_test_dest;
    15 rows created.
    Elapsed: 00:00:00.00
    SQL> select * from dt_test_dest;
    OBJECT_ID OBJECT_NAME
        101081 /1005bd30_LnkdConstant
         90723 /10076b23_OraCustomDatumClosur
         97393 /103a2e73_DefaultEditorKitEndP
        106075 /1048734f_DefaultFolder
         93337 /10501902_BasicFileChooserUINe
         89593 /10948dc3_PermissionImpl
         90259 /10dcd7b1_ProducerConsumerProd
         90909 /10804ae7_Constants
         92413 /10845320_TypeMapImpl
         93013 /106faabc_BasicTreeUIKeyHandle
         94929 /10744837_ObjectStreamClass2
         98065 /109cbb8e_SpanShapeRendererSim
         98955 /10b74838_SecurityManagerImpl
        100671 /10e48aa3_StringExpressionCons
        100681 /1079c94d_NumberConstantData
        102145 /10a793fd_LocaleElements_iw
        102543 /108343f6_MultiColorChooserUI
        102545 /1095ce9b_MultiComboBoxUI
        103841 /10c906a0_ProfilePrinterErrors
        103855 /10a45bfe_ProfilePrinterErrors
    20 rows selected.

    You could use that in conjunction with the MERGE statement to exclude all truly duplicated rows and then update any rows that match on the id but have different statuses:
    MERGE INTO dest_table dst
    USING(     SELECT
              v_id,
              col1,
              col2 etc
         FROM
              staging_table
         MINUS
         SELECT
              v_id,
              col1,
              col2 etc
         FROM
              destination_table
         ) stg
    ON
         (dst.v_id = stg.v_id)
    WHEN MATCHED THEN ...

    HTH

    Hi, Does JAXB let us create common Java Objects for common XMLs. Consider the following scenirio: XML1.xml <xsd:schema> <xsd:import namespace="Commons" schemaLocation="Commons.xsd"/> <xsd:complexType name="ComplexElementType"> <xsd:sequence> <xsd:ele