Returning Duplicate Rows

Hi. I'm working on this query, and I'm having problems. When I add an inner join, I start getting multiple rows returned.
What I want returned is ONLY the first row below, but I'm instead getting both rows (which are identical).
SALESMAN_NO | CUSTOMER_NO | ORDER | SALES | COGS | MARGIN
97 | 1306969000 | 00477023 | 517.40 | 298.20 | 219.18
97 | 1306969000 | 00477023 | 517.40 | 298.20 | 219.18
Here's my query. When I add the inner joins, I get the 2nd row above. If I run the SAME query without the inner joins, I get one and only one row, which is what I'm trying to get. This 2nd row is throwing off the SUM functions I'm ultimately trying to use.
SELECT
SOE.SALESMAN_NO AS [SALESMAN],
SOE.CUSTOMER_NO AS [CUSTOMER #],
(SOE.SALE_ORDER_NO) AS [ORDERS],
(SOE.SALES_AMT/100) AS [SALES],
(SOE.COST_GOODS_SOLD/100) AS [COGS],
((SOE.SALES_AMT/100) - (SOE.COST_GOODS_SOLD/100)) AS [MARGIN]
FROM
SOE_HEADER SOE
INNER JOIN CUST_NAME CUST
ON SOE.CUSTOMER_NO = CUST.CUSTOMER_NO
INNER JOIN OUTSIDE_SALES OUTSIDE
ON CUST.SALESMAN_NO = OUTSIDE.SALESMAN_NO
WHERE
SOE.SALESMAN_NO <> '83'
AND SOE.TYPE_OF_ORDER = '00'
AND SOE.ORDERS_STATUS = '06'
AND SOE.DATE_OF_ORDER > 20080101
AND SOE.TYPE_OF_FRGHT IN ('13','14')
AND SOE.CUSTOMER_NO = '1306969000'
I've looked at the data and at this query for about an hour now, and need some help. Thanks.

If you want to get only the distinct rows in this SELECT then you may use DISTINCT, i.e. replace the table with a derived query:
(Select Distinct SALESMAN_NO, CUSTOMER_NO, [ORDER], SALES, COGS, MARGIN From YourTableName) AS AliasName
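Applied to the query above, a minimal sketch (assuming the joins to CUST_NAME and OUTSIDE_SALES are only there to restrict the result to customers handled by an outside salesman, and that the extra row comes from more than one matching OUTSIDE_SALES row) keeps that filter with EXISTS instead of the row-multiplying joins:
SELECT
SOE.SALESMAN_NO AS [SALESMAN],
SOE.CUSTOMER_NO AS [CUSTOMER #],
SOE.SALE_ORDER_NO AS [ORDERS],
(SOE.SALES_AMT/100) AS [SALES],
(SOE.COST_GOODS_SOLD/100) AS [COGS],
((SOE.SALES_AMT/100) - (SOE.COST_GOODS_SOLD/100)) AS [MARGIN]
FROM
SOE_HEADER SOE
WHERE
SOE.SALESMAN_NO <> '83'
AND SOE.TYPE_OF_ORDER = '00'
AND SOE.ORDERS_STATUS = '06'
AND SOE.DATE_OF_ORDER > 20080101
AND SOE.TYPE_OF_FRGHT IN ('13','14')
AND SOE.CUSTOMER_NO = '1306969000'
-- keep the restriction to customers with an outside salesman,
-- without multiplying SOE_HEADER rows when several matches exist
AND EXISTS (SELECT 1
            FROM CUST_NAME CUST
            INNER JOIN OUTSIDE_SALES OUTSIDE
                    ON CUST.SALESMAN_NO = OUTSIDE.SALESMAN_NO
            WHERE CUST.CUSTOMER_NO = SOE.CUSTOMER_NO)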
Check this too
Find and/or Delete Duplicate Rows
http://code.msdn.microsoft.com/SQLExamples/Wiki/View.aspx?title=DuplicateRows&referringTitle=Home
Madhu
SQL Server Blog
SQL Server 2008 Blog

Similar Messages

  • Crystal report returning duplicate rows - Linking issue?

    Hello,
    I know this is a commonly brought up issue - that duplicate rows are returned in Crystal reports for various reasons.  In a lot of instances, where it's only a single row of data per case that I'm looking for, I'll move them to their own group.  But that solution doesn't work here.
    The tables and links used in the simple report I have are illustrated in the attached "Report Database Expert Links.jpg" file, and the conditions and report fields required are as follows:
    Select Expert conditions:
    {CLAIM_PERIODS.CPE_START_DATE} < Today and
    {CLAIM_PERIODS.CPE_END_DATE} > Today and
    {CLAIM_ROLES.CRO_START_DATE} < Today and
    IsNull({CLAIM_ROLES.CRO_END_DATE})
    Report Design fields (in Details section):
    CLAIMS.CLA_REFNO, CLAIM_ROLES.CRT_CODE, PARTIES.PAR_PER_FORENAME, PARTIES.PAR_PER_SURNAME
    So what this report is to do:
    It looks for benefit claims which are live (have a CLAIM_PERIODS.CPE_START_DATE prior to today, and CLAIM_PERIODS.CPE_END_DATE after today), and for these claims gives a breakdown of all people in the household (everyone associated with that claim, where CLAIM_ROLES.START DATE is prior to today, and end date is null - to pick out those who are currently active in the household).
    This works fine otherwise, but the issue is that each live claim can have either one or two rows present in CLAIM_PARTS and CLAIM_PERIODS (so it could satisfy my first two conditions twice).  For claims with only a single claim part active, I get each household member listed once.  But when they have two claim parts active, I get everyone listed twice.
    Is there a way - either in how I'm linking the tables up, or how I'm arranging the report design, that I can have every household member only appear once no matter how many related rows there are in CLAIM_PARTS and CLAIM_PERIODS?
    Many thanks in advance,
    Sami
    P.S. I don't seem to be allowed to attach image files to my post, but have embedded the content instead.

    Thank you very much Don - this is a feature I wasn't aware of previously, and is a big step in the right direction for me.  Really useful.
    In the report mentioned, this sort of works.  It works very well on individual fields - so I could just "Suppress If Duplicated" on the forename and then remove duplicates on that column when exported to Excel (or simply bring in a unique field such as the party reference number and do likewise).
    However, is there a way to get Crystal to do this for me?  So to suppress the whole line based on whether or not I'm suppressing this one field?
    Just applying "Suppress If Duplicated" to all fields at once seems to have slightly unpredictable behaviour - not suppressing every field in the row.  So I experimented a bit with using the "Suppress" tick to do this, then applying a formula to it.  But the closest I can get for the formula is something like:
    {PARTIES.PAR_PER_FORENAME} = previous({PARTIES.PAR_PER_FORENAME}) and
    {PARTIES.PAR_PER_SURNAME} = previous({PARTIES.PAR_PER_SURNAME})
    which will only compare the current row with the previous one.  Is there a way to do the same comparison but against all previous rows?
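    If the report can be pointed at a SQL command instead of linked tables, a hedged sketch of deduplicating at the source (the table and field names are taken from the post, but the join columns and the GETDATE() date syntax are assumptions, since the actual link diagram and database were not given):
    SELECT DISTINCT C.CLA_REFNO, R.CRT_CODE, P.PAR_PER_FORENAME, P.PAR_PER_SURNAME
    FROM CLAIMS C
    INNER JOIN CLAIM_PERIODS CP ON CP.CLA_REFNO = C.CLA_REFNO   -- assumed join key
    INNER JOIN CLAIM_ROLES R ON R.CLA_REFNO = C.CLA_REFNO       -- assumed join key
    INNER JOIN PARTIES P ON P.PAR_REFNO = R.PAR_REFNO           -- assumed join key
    WHERE CP.CPE_START_DATE < GETDATE()
      AND CP.CPE_END_DATE > GETDATE()
      AND R.CRO_START_DATE < GETDATE()
      AND R.CRO_END_DATE IS NULL
    With DISTINCT over only the fields shown in the Details section, the one-or-two CLAIM_PARTS/CLAIM_PERIODS rows per claim collapse back to one line per household member.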

  • Report Builder 2.0 returning duplicate rows in query designer

    Hi,
    I have a query running off a model. When I explore the data in the model and in a standard SQL table, it returns one row for each record, including a unique ID for each, which is correct.
    When I go into the Report Builder query designer, it shows duplicate rows for each record. I have removed and added fields to try to pinpoint why, and the only conclusion I can come to is that it runs fine until I add a field that's varchar(255) or varchar(max). That is when it starts to duplicate the records.
    Can anyone tell me why it does this, or point me in the direction of how to stop it? I can't edit the query as text to add DISTINCT, so that's not an option.
    Many thanks,
    JJSJ

    OK - I have found a partial answer.  By Googling Report Builder and VARCHAR, I found another post which reported problems with the semantic query builder when the underlying table/view was returning columns of type VARCHAR(MAX).
    On looking at the length setting for the data column in the DSV, this reported a length of 2,147,483,647 for the field.  This is very odd, because on querying the underlying table the largest length I can find for this column is 191,086 (which is far larger than I would have expected - I am investigating this separately).
    However, why should the Report Model think that this field contains such a large value?
    Anyway, the other post I found reported that they had solved their problem by converting the field to VARCHAR(255).  I tried this (by casting the column in my view to VARCHAR(255)) and it resolved the problem - no more duplicate rows when adding this field to the query!
    I also tried casting to TEXT and to VARCHAR(8000).  The former did not resolve the problem; the latter did.
    So I have a workaround, but I don't understand why.
    Can anybody explain why having a VARCHAR(MAX) column in my entity causes duplicate rows?
    I suspect it is to do with the fact that, for some reason, the Report Model seems to think there is an exceptionally long text string stored in this column in one of the rows, but that again is a puzzle.
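    For reference, a minimal sketch of the workaround described above (T-SQL; the view, table, and column names here are placeholders, not from the original post):
    -- Shorten the VARCHAR(MAX) column so the report model no longer sees an unbounded text field
    CREATE VIEW dbo.vw_ReportSource
    AS
    SELECT RecordId,
           CAST(LongDescription AS VARCHAR(255)) AS LongDescription
    FROM dbo.SourceTable;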
    Thanks
    Richard

  • Returning duplicate rows only

    Hi All,
    I need a query to get only the duplicate rows from a table. It should print each duplicate row the number of times it exists in the table.

    Added some duplicate records in emp table
    select count(*),empno,ename,job,mgr,hiredate,deptno from emp group by empno,ename,job,mgr,hiredate,deptno
      COUNT(*)  EMPNO ENAME      JOB              MGR HIREDATE      DEPTNO
         2       7782 CLARK      MANAGER         7839 09-JUN-81         10
         5       7844 TURNER     SALESMAN        7698 08-SEP-81         30
         3       7698 BLAKE      MANAGER         7839 01-MAY-81         30
         1       7900 JAMES      CLERK           7698 03-DEC-81         30
         1       7654 MARTIN     SALESMAN        7698 28-SEP-81         30
         1       7788 SCOTT      ANALYST         7566 19-APR-87         20
         1       7566 JONES      MANAGER         7839 02-APR-81         20
         6       7521 WARD       SALESMAN        7698 22-FEB-81         30
         5       7369 SMITH      CLERK           7902 17-DEC-80         20
         4       7934 MILLER     CLERK           7782 23-JAN-82         10
         1       7499 ALLEN      SALESMAN        7698 20-FEB-81         30
    Rows with Count(*) > 1 are the ones having duplicates.
    Filter this with condition Count(*)>1
    select count(*),empno,ename,job,mgr,hiredate,deptno from emp group by empno,ename,job,mgr,hiredate,deptno having count(*)>1
      COUNT(*)  EMPNO ENAME      JOB              MGR HIREDATE      DEPTNO
         2       7782 CLARK      MANAGER         7839 09-JUN-81         10
         5       7844 TURNER     SALESMAN        7698 08-SEP-81         30
         3       7698 BLAKE      MANAGER         7839 01-MAY-81         30
         6       7521 WARD       SALESMAN        7698 22-FEB-81         30
         5       7369 SMITH      CLERK           7902 17-DEC-80         20
         4       7934 MILLER     CLERK           7782 23-JAN-82         10
    Edited by: Lokanath Giri on 24 January 2012, 10:46 AM
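    A hedged follow-up sketch (Oracle syntax, same emp columns as above): to actually print every duplicate row as many times as it occurs, an analytic count can be used instead of GROUP BY:
    select empno, ename, job, mgr, hiredate, deptno
    from (select e.*,
                 -- count how many rows share all six column values
                 count(*) over (partition by empno, ename, job, mgr, hiredate, deptno) cnt
          from emp e)
    where cnt > 1;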

  • DB / Integration adapter returning duplicate rows on select [not all rows]

    I am seeing some unusual behavior. We have an Application adapter invoking BPEL, which in turn invokes a DB adapter to make a select call to the ERP database. The select should return 2 different records, whereas in the response XML the same record [only the first record] is listed twice. On the DB adapter WSDL, "Return Single Resultset" is unchecked. Should this be checked? Or is there any other reason why the first record may be cached? I have not seen this with other connectors, and restarting the server did not help. The connector is in a prod environment. Any help/ideas are welcome.
    Edited by: user3622460 on Aug 5, 2009 10:23 PM

    Can you tell us more about the options you have selected in the Adapter to select the records ?

  • Join returns duplicate rows

    Hi All,
    I am joining 3 tables: OWOR, WOR1 and IBT1. Following is the query:
    SELECT OWOR.DOCNUM, OWOR.ITEMCODE, OWOR.CLOSEDATE, OWOR.DOCENTRY,
    WOR1.ISSUEDQTY, WOR1.DOCENTRY,
    IBT1.QUANTITY, IBT1.BATCHNUM
    FROM OWOR INNER JOIN WOR1 ON OWOR.DOCENTRY = WOR1.DOCENTRY
    INNER JOIN IBT1 ON OWOR.ITEMCODE = IBT1.ITEMCODE
    WHERE OWOR.CLOSEDATE ='[%0]' AND OWOR.DOCNUM = CONVERT(INT,IBT1.BATCHNUM)
    O/P:
    10  100  19.03.11  20   15   30    10
    10  100  19.03.11  20   15   14    10
    20  121  19.03.11  25   31     5    25
    20  121  19.03.11  25   31   10    25
    20  121  19.03.11  25   28     5    25
    20  121  19.03.11  25   28   10    25
    Note: I have tried grouping on OWOR.DOCNUM and WOR1.DOCENTRY, and also tried Suppress with the combination of Previous and Next, but I could not succeed - the running totals of WOR1.ISSUEDQTY and IBT1.QUANTITY always get messed up.
    Please let me know if you have any suggestions.
    Thanks,
    Vineela.
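    A hedged alternative sketch (SQL Server syntax as used by SAP Business One), assuming the duplication comes purely from multiple WOR1 rows per DOCENTRY and multiple IBT1 rows per item/batch: pre-aggregate both child tables before joining so neither can fan out the order rows.
    SELECT OWOR.DOCNUM, OWOR.ITEMCODE, OWOR.CLOSEDATE, OWOR.DOCENTRY,
           W.ISSUEDQTY, B.QUANTITY, B.BATCHNUM
    FROM OWOR
    INNER JOIN (SELECT DOCENTRY, SUM(ISSUEDQTY) AS ISSUEDQTY
                FROM WOR1 GROUP BY DOCENTRY) W
            ON OWOR.DOCENTRY = W.DOCENTRY
    INNER JOIN (SELECT ITEMCODE, BATCHNUM, SUM(QUANTITY) AS QUANTITY
                FROM IBT1 GROUP BY ITEMCODE, BATCHNUM) B
            ON OWOR.ITEMCODE = B.ITEMCODE
           AND OWOR.DOCNUM = CONVERT(INT, B.BATCHNUM)
    WHERE OWOR.CLOSEDATE = '[%0]'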

    Hi Ian,
    Sorry for the delay in replying - I was stuck due to some Internet connectivity issues. Coming to the problem, I solved it myself by changing the Running Total conditions for when to evaluate and when to reset. My required O/P was:
    Required O/P:
    10 100 19.03.11 20 15 44 10
    20 121 19.03.11 25 59 15 25
    Thanx for your reply,
    Vineela.

  • Advanced Table Duplicate Rows

    I have an advancedTable (master-detail). In one instance the "advancedTable" object suppresses duplicate master records, but in another instance, duplicate rows are not suppressed. If I run the VO query associated with the advancedTable object, the query returns duplicate rows in both instances. Has anyone seen this before? I have also run the pages from JDev connecting to each instance and the same result is exhibited.
    Thanks,
    LC

    -- application release 11.5.10.2 (both environments)
    -- database version 10.2.0.3 (stage) and 11.1.0.7 (dev)
    -- OS - RHEL 5 (both environments)
    The two instances are not identical: the stage environment is RAC, whereas dev is not.
    Steps to reproduce:
    Create an advancedTable within an advancedTable in JDev, using a viewLink to establish the master-detail relationship. Run the page from JDev, connecting to the dev and stage environments directly. In addition to running locally, I have also migrated all source to both the dev and stage instances. Run the page - in the dev instance, the results table displays only a subset of the rows (duplicate rows are removed; i.e. instead of the 12 rows the query returns, both from executing the select statement in SQL*Plus and from SOP vo.getRowCount(), only 6 rows are displayed in the master table). However, when the same page is executed against the stage environment, all applicable rows are displayed - duplicates are not removed. This happens both when running the page locally and when logging into each respective application server.
    Thanks,
    LC

  • Duplicate rows returned by contex index

    Hi
    I have a context index - locally partitioned with concatenated datastore.
    When I run a query on this it gives me duplicate rows (but only in some rare cases).
    Is this a bug in Oracle Text?
    The table is partitioned on column norm_state_query. This query gives a duplicate:
    select rowid
    from mv_borrower_branch_details
    where contains ( norm_state_query, '( ( (fuzzy(${TATA},60,20,n) OR fuzzy(${PGIMENTS},60,20,n) OR fuzzy(${TATAPGIMENTS},60,20,n)) within norm_concat_name ) ) and ( ( ({43} OR fuzzy(${CHOWRINGHEE},60,20,n) ) within norm_concat_address ) )', 1 ) > 0
    and norm_state_query = 'WEST BENGAL'
    AAAKOGAAPAAAETMAAQ
    AAAKOGAAPAAAETMAAQ
    Thanks and regards
    Pratap

    Have you changed the partition definitions since you first created the index?
    Maybe that could cause the problem.
    Otherwise, it does sound like a bug. Please contact support so they can work
    through it with you.
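    A hedged sketch of the kind of check that suggestion implies (standard Oracle syntax; the index and partition names here are hypothetical, not from the post):
    -- Rebuild the local text index partition covering the affected rows,
    -- then re-run the CONTAINS query to see whether the duplicate rowid is gone.
    ALTER INDEX borrower_ctx_idx REBUILD PARTITION p_west_bengal;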

  • Order by results in duplicates rows returned

    Hi All,
    Just got the following question that I could not understand at all. Let's assume we have a query like this:
    select d.d_name, e.e_name
    from dept d, emp e
    where d.d_id = e.d_id
    And this query returns 3 rows only:
    IT JOHN
    IT JAMES
    ADMIN BILL
    It was found that if the query was changed as:
    select d.d_name, e.e_name
    from dept d, emp e
    where d.d_id = e.d_id
    ORDER BY d.d_name, e.e_name
    sometimes it returns as 4 rows instead of 3:
    ADMIN BILL
    IT JAMES
    IT JAMES
    IT JOHN
    How can this be possible?
    This happens sometimes on the Windows platform.

  • Duplicate rows showing up in a table on the UI

    Hi All,
    We have a VO that returns 6 rows in a table on the UI. The rows become editable when a checkbox(a column in the table) is selected.
    On running it, sometimes we are getting 12 rows on the UI such that the first 6 rows are shown again in the table in the same order.
    When changes are made to this table and saved, only 6 rows are saved in the database table. Also, if 1 row is selected and the duplicate row of this row is not selected, then if we go to some other page and come back, the row that was selected earlier, will be unselected. If both the rows of same values are selected, only then the changes will be saved.
    The VO query has 2 tables. We checked the data in both of these tables and executed the query too, but did not find any data issue. Also, we have an order by clause in the query, yet the data in the table is not showing up in order.
    This does not occur consistently, but when it does, it persists for long durations.
    Can anybody fix this?
    Thanks,
    Sakshi

    Do you still have the "Music Video" Smart Playlist in iTunes? If you right click on a video file, which normally shows up under "Movies" you can pick the video type of Movie, Music Video, or TV Show. When you select "Music Video" the video file drops out of the Movies section and if the Music Video playlist is present will show up there.
    You might have deleted that playlist so it wouldn't be on the iPod either.
    So create a Smart Playlist where the rule is Video Kind IS Music Video...
    Patrick

  • First attempt to remove duplicate rows from a table...

    I have seen many people asking for a way to remove duplicate rows from data. I made up a fairly simple script. It adds a column to the table with the cell selected in it, and adds the concatenation of the data to the left into that new column. then it reads that into a list, and walks through that list to find any that are listed twice. Any that are it marks for DELETE.
    It then walks through to find each one marked for delete and removes them (you must go from bottom to top to do this, otherwise your row markings for delete don't match up to the original rows anymore). Last is to delete the column we added.
    tell application "Numbers"
    activate
    tell document 1
    -- DETERMINE THE CURRENT SHEET
    set currentsheetindex to 0
    repeat with i from 1 to the count of sheets
    tell sheet i
    set x to the count of (tables whose selection range is not missing value)
    end tell
    if x is not 0 then
    set the currentsheetindex to i
    exit repeat
    end if
    end repeat
    if the currentsheetindex is 0 then error "No sheet has a selected table."
    -- GET THE TABLE WITH CELLS
    tell sheet currentsheetindex
    set the current_table to the first table whose selection range is not missing value
    end tell
    end tell
    log current_table
    tell current_table
    set list1 to {}
    add column after column (count of columns)
    set z to (count of columns)
    repeat with j from 1 to (count of rows)
    set m to ""
    repeat with i from 1 to (z - 1)
    set m to m & value of (cell i of row j)
    end repeat
    set value of cell z of row j to m
    end repeat
    set MyRange to value of every cell of column z
    repeat with i from 1 to (count of items of MyRange)
    set n to item i of MyRange
    if n is in list1 then
    set end of list1 to "Delete"
    else
    set end of list1 to n
    end if
    end repeat
    repeat with i from (count of items of list1) to 1 by -1
    set n to item i of list1
    if n = "Delete" then remove row i
    end repeat
    remove column z
    end tell
    end tell
    Let me know how it works for y'all, it worked good on my machine, but I know localization is causing errors sometimes when I post things.
    Thanks,
    Jason
    Message was edited by: jaxjason

    Hi jason
    I hope that with the added comments it will be clear.
    Ask if something is still opaque.
    set {current_Range, current_table, current_Sheet, current_Doc} to my getSelection()
    tell application "Numbers09"
    tell document current_Doc to tell sheet current_Sheet to tell table current_table
    set list1 to {}
    add column after column (count of columns)
    set z to (count of columns)
    repeat with j from 1 to (count of rows)
    set m to ""
    tell row j
    repeat with i from 1 to (z - 1)
    set m to m & value of cell i
    end repeat
    set value of cell z to m
    end tell
    end repeat
    set theRange to value of every cell of column z
    repeat with i from (count of items of theRange) to 1 by -1
    (* As I scan the table backwards (starting from the bottom row),
    I may remove a row immediately when I discover that it is a duplicate *)
    set n to item i of theRange
    if n is in list1 then
    remove row i
    else
    set end of list1 to n
    end if
    end repeat
    remove column z
    end tell
    end tell
    --=====
    on getSelection()
    local _, theRange, theTable, theSheet, theDoc, errMsg, errNum
    tell application "Numbers09" to tell document 1
    set theSheet to ""
    repeat with i from 1 to the count of sheets
    tell sheet i
    set x to the count of tables
    if x > 0 then
    repeat with y from 1 to x
    (* Open a trap to catch the selection range.
    The structure of this item - a «class ...» reference - can't be coerced as text.
    So, when the instruction (selection range of table y) as text
    receives 'missing value', it behaves correctly and the loop continues.
    But when it receives THE true selection range, it generates an error
    whose message is errMsg and number is errNum.
    We grab them just after the on error instruction *)
    try
    (selection range of table y) as text
    on error errMsg number errNum (*
    As we reached THE selection range, we are here.
    We grab the errMsg here. In French it looks like:
    "Impossible de transformer «class » "A2:M25" of «class NmTb» "Tableau 1" of «class NmSh» "Feuille 1" of document "Sans titre" of application "Numbers" en type string."
    The handler cuts it in pieces using quotes as delimiters.
    item 1 (_) "Impossible de transformer «class » "
    item 2 (theRange) "A2:M25"
    item 3 (_) " of «class NmTb» "
    item 4 (theTable) "Tableau 1"
    item 5 (_) " of «class NmSh» "
    item 6 (theSheet) "Feuille 1"
    item 7 (_) " of document "
    item 8 (theDoc) "Sans titre"
    item 9 ( I drop it ) " of application "
    item 10 ( I drop it ) "Numbers"
    item 11 (I drop it ) " en type string."
    I grab these items in the list
    {_, theRange, _, theTable, _, theSheet, _, theDoc}
    Yes, underscore is a valid variable name.
    I often use it when I want to drop something.
    An alternate way would be to code:
    set ll to my decoupe(errMsg, quote)
    set theRange to item 2 of ll
    set theTable to item 4 of ll
    set theSheet to item 6 of ll
    set theDoc to item 8 of ll
    It works exactly the same but it's not so elegant. *)
    set {_, theRange, _, theTable, _, theSheet, _, theDoc} to my decoupe(errMsg, quote)
    exit repeat (*
    as we grabbed the interesting data, we exit the loop indexed by y. *)
    end try
    end repeat -- y
    if theSheet > "" then exit repeat (*
    If we are here after grabbing the data, theSheet is not "" so we exit the loop indexed by i *)
    end if
    end tell -- sheet
    end repeat -- i
    (* We may arrive here with two kinds of results.
    if we grabbed a selection, theSheet is something like "Feuille 1"
    if we didn't grab a selection, theSheet is the "" defined on entry
    and we generate an error which is not trapped so it stops the program *)
    if theSheet = "" then error "No sheet has a selected table."
    end tell -- document
    (* Now, we send the interesting data to the caller:
    theRange "A2:M25"
    theTable "Tableau 1"
    theSheet "Feuille 1"
    theDoc "Sans titre" *)
    return {theRange, theTable, theSheet, theDoc}
    end getSelection
    --=====
    on decoupe(t, d)
    local l
    set AppleScript's text item delimiters to d (*
    Cut the text t in pieces using d as delimiter *)
    set l to text items of t
    set AppleScript's text item delimiters to "" (*
    Resets the delimiters to the standard value. *)
    (* Send the list to the caller *)
    return l
    end decoupe
    --=====
    Have fun
    And if it's not clear enough, you may ask for more explanations.
    Yvan KOENIG (from FRANCE, Tuesday, January 27, 2009 21:49:19)

  • How to Create primary key index with duplicate rows.

    Hi All,
    While rebuilding an index on a table, I am getting an error that there are duplicate rows in the table.
    Searching out the reason led me to an interesting observation.
    Please follow.
    SELECT * FROM user_ind_columns WHERE table_name='SERVICE_STATUS';
    INDEX_NAME     TABLE_NAME     COLUMN_NAME     COLUMN_POSITION     COLUMN_LENGTH     CHAR_LENGTH     DESCEND
    SERVICE_STATUS_PK     SERVICE_STATUS     SUBSCR_NO_RESETS     2     22     0      ASC
    SERVICE_STATUS_PK     SERVICE_STATUS     STATUS_TYPE_ID     3     22     0     ASC
    SERVICE_STATUS_PK     SERVICE_STATUS     ACTIVE_DT     4     7     0     ASC
    SERVICE_STATUS_PK     SERVICE_STATUS     SUBSCR_NO     1     22     0     ASC
    SELECT index_name,index_type,table_name,table_type,uniqueness, status,partitioned FROM user_indexes WHERE index_name='SERVICE_STATUS_PK';
    INDEX_NAME     INDEX_TYPE      TABLE_NAME     TABLE_TYPE     UNIQUENESS     STATUS     PARTITIONED
    SERVICE_STATUS_PK     NORMAL     SERVICE_STATUS     TABLE     UNIQUE     VALID     NO
    SELECT constraint_name ,constraint_type,table_name,status,DEFERRABLE,DEFERRED,validated,index_name
    FROM user_constraints WHERE constraint_name='SERVICE_STATUS_PK';
    CONSTRAINT_NAME     CONSTRAINT_TYPE     TABLE_NAME      STATUS     DEFERRABLE     DEFERRED     VALIDATED     INDEX_NAME
    SERVICE_STATUS_PK     P     SERVICE_STATUS     ENABLED     NOT DEFERRABLE     IMMEDIATE VALIDATED     SERVICE_STATUS_PK
    1. Using index scan:
    SELECT COUNT (*)
    FROM (SELECT subscr_no, active_dt, status_type_id, subscr_no_resets
    FROM service_status
    GROUP BY subscr_no, active_dt, status_type_id, subscr_no_resets
    HAVING COUNT (*) > 1) ;
    no rows returned
    Explain plan:
    Operation     OBJECT Name     ROWS     Bytes     Cost     OBJECT Node     IN/OUT     PStart     PStop
    SELECT STATEMENT Optimizer MODE=CHOOSE          519 K          14756                     
    FILTER                                        
    SORT GROUP BY NOSORT          519 K     7 M     14756                     
    INDEX FULL SCAN     ARBOR.SERVICE_STATUS_PK     10 M     158 M     49184                     
    2. Using Full scan:
    SELECT COUNT (*)
    FROM (SELECT /*+ full(s) */ subscr_no, active_dt, status_type_id, subscr_no_resets
    FROM service_status s
    GROUP BY subscr_no, active_dt, status_type_id, subscr_no_resets
    HAVING COUNT (*) > 1) ;
    71054 rows returned.
    Explain Plan:
    Operation     Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    SELECT STATEMENT Optimizer Mode=CHOOSE          1           24123                     
    SORT AGGREGATE          1                               
    VIEW          519 K          24123                     
    FILTER                                        
    SORT GROUP BY          519 K     7 M     24123                     
    TABLE ACCESS FULL     ARBOR.SERVICE_STATUS     10 M     158 M     4234                     
    Index SERVICE_STATUS_PK is a unique, composite primary key index with VALID status, and the constraint is ENABLED and VALIDATED, yet the table still has duplicate rows.
    How is this possible?
    Is it an Oracle software bug?
    Regards,
    Saket Bansal

    saket bansal wrote:
    Values are inserted as single-row inserts through a GUI interface.
    And you still claim to have over 71K duplicate records, without the GUI getting any kind of errors?
    That does not add up and can only be explained by a "bug".
    I tried inserting a duplicate record but failed.
    SQL> insert into service_status (select * from service_status where rownum <2);
    insert into service_status (select * from service_status where rownum <2)
    ERROR at line 1:
    ORA-00001: unique constraint (ARBOR.SERVICE_STATUS_PK) violated
    Are you really sure there is no other way data in this table is populated/manipulated in bulk?
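    A hedged diagnostic sketch (standard Oracle syntax) for this kind of disagreement between an index scan and a full scan: validate the structure so Oracle cross-checks the table against its indexes.
    -- Cross-checks every table row against its indexes; raises ORA-01499
    -- (with details in a trace file) if the table and an index disagree.
    ANALYZE TABLE service_status VALIDATE STRUCTURE CASCADE;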

  • Duplicate Rows In Oracle Pipelined Table Functions

    Hi fellow oracle users,
    I am trying to create an Oracle pipelined table function that contains duplicate records. Whenever I try to pipe the same record twice, the duplicate record does not show up in the resulting pipelined table.
    Here's a sample piece of SQL:
    /* Type declarations */
    TYPE MY_RECORD IS RECORD(
    MY_NUM INTEGER
    );
    TYPE MY_TABLE IS TABLE OF MY_RECORD;
    /* Pipelined function declaration */
    FUNCTION MY_FUNCTION RETURN MY_TABLE PIPELINED IS
    V_RECORD MY_RECORD;
    BEGIN
    -- insert first record
    V_RECORD.MY_NUM := 1;
    PIPE ROW (V_RECORD);
    -- insert second duplicate record
    V_RECORD.MY_NUM := 1;
    PIPE ROW (V_RECORD);
    -- return pipelined table
    RETURN;
    END;
    /* Statement to query pipelined function */
    SELECT * FROM TABLE( MY_FUNCTION ); -- for some reason this only returns one record instead of two
    I am trying to get the duplicate row to show up in the select statement. Any help would be greatly appreciated.

    Can you provide actual output from an SQL*Plus prompt trying this? I don't see the same behavior
    SQL> SELECT * FROM V$VERSION;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL> CREATE TYPE MY_RECORD IS OBJECT(MY_NUM INTEGER);
      2  /
    Type created.
    SQL> CREATE TYPE MY_TABLE IS TABLE OF MY_RECORD;
      2  /
    Type created.
    SQL> CREATE OR REPLACE FUNCTION MY_FUNCTION
      2  RETURN MY_TABLE
      3  PIPELINED
      4          AS
      5                  V_RECORD        MY_RECORD;
      6          BEGIN
      7                  V_RECORD.MY_NUM := 1;
      8                  PIPE ROW(V_RECORD);
      9
    10                  V_RECORD.MY_NUM := 1;
    11                  PIPE ROW(V_RECORD);
    12
    13                  RETURN;
    14          END;
    15  /
    Function created.
    SQL> SELECT * FROM TABLE(MY_FUNCTION);
                  MY_NUM
                       1
                       1

  • Deleting Duplicate Rows in a list

    Hey folks, I've scoured around a bit for the answer to this and can't for the life of me figure it out.
    I've got a list of ~2,000-3,000 words in the following format:
    Fact
    Fiction
    Funny
    Funny
    Funny
    Funky
    etc etc. I am looking to make Numbers delete all the duplicate rows such that the above list would become:
    Fact
    Fiction
    Funny
    Funky
    All of these words are in column A on a separate sheet in a numbers document I'm using to run an experiment. Is there a built in command or something that would do this? (I'm a COMPLETE beginner at using Numbers as a heads up)
    Any help with this would be GREATLY appreciated.

    Teaghue wrote:
    Never mind! I found an old old post explaining that this can't be done in numbers, so I just did it in excel.
    Perfectly wrong!
    The way to achieve the described goal was described several times in this forum.
    *_You just didn't search carefully!_*
    Searching for delet AND duplicate returns several threads.
    Here are two of them.
    I didn't make a typo. I used delet so that it retrieves delete as well as deleting.
    http://discussions.apple.com/thread.jspa?messageID=12992492
    http://discussions.apple.com/thread.jspa?messageID=11559125
    Yvan KOENIG (VALLAURIS, France) Wednesday, March 2, 2011 17:59:40

  • Duplicate rows in 5 table

    I have five tables (A, B, C, D, E), and I am trying to check for duplicate rows in all the tables. I tried using an inner join, but the first three joins did not return anything. Is there another way? The tables only have two columns: the email (different values)
    and author (which is the same person in the table).
    When I tried
    select * from TABLEA
    INNER JOIN TABLEB
    ON TABLEA.EMAIL = TABLEB.EMAIL
    INNER JOIN TABLEC
    ON TABLEA.EMAIL = TABLEC.EMAIL
    it came back with no result. The other way I am thinking of is to union all the tables and try to use count and group by, but this will only show me the duplicates and not the authors.
    Is there any other way?

    it came back with no result, the other way I am thinking of is to union all the tables and try to use count and group by (but I could not insert my result in a new table)
    please, any other way?
    I don't understand your point here... Are you trying the below?
    Create Table T1(name varchar(50),Email Varchar(50))
    Insert into T1 Values('SQL','[email protected]'),('.NET','[email protected]')
    Create Table T2(name varchar(50),Email Varchar(50))
    Insert into T2 Values('Server','[email protected]'),('BizTalk','[email protected]')
    Create Table T3(name varchar(50),Email Varchar(50))
    Insert into T3 Values('Sql','[email protected]'),('server','[email protected]')
    ;With cte as
    (Select * From T1
    Union All
    Select * From T2
    Union All
    Select * From T3)
    Select name,email, count(1) From cte Group by name , email having count(1)>1
    Drop table T1,T2,T3
    I was able to insert the union result into a table, and I used this:
    SELECT EMAIL, AUTHOR, COUNT(EMAIL) AS AMOUNT FROM ALLEMAIL
    GROUP BY AUTHOR, EMAIL
    ORDER BY AMOUNT DESC, EMAIL DESC
    but it's showing the email and the count, and it appears as if each duplicate is only associated with one author.
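    A hedged sketch building on that (T-SQL, assuming the combined ALLEMAIL table above): count the duplicates per email only, then join back so every author sharing that email is listed next to the count.
    SELECT A.EMAIL, A.AUTHOR, D.AMOUNT
    FROM ALLEMAIL A
    INNER JOIN (SELECT EMAIL, COUNT(*) AS AMOUNT
                FROM ALLEMAIL
                GROUP BY EMAIL
                HAVING COUNT(*) > 1) D
            ON D.EMAIL = A.EMAIL
    ORDER BY D.AMOUNT DESC, A.EMAIL DESC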
