Duplicated values

I want to display employee last names, department numbers, and all the employees who work in the same department as a given employee. The tables are like this:
desc departments
Name Null? Type
DEPARTMENT_ID NOT NULL NUMBER(4)
DEPARTMENT_NAME NOT NULL VARCHAR2(30)
MANAGER_ID NUMBER(6)
LOCATION_ID NUMBER(4)
desc employees
Name Null? Type
EMPLOYEE_ID NOT NULL NUMBER(6)
FIRST_NAME VARCHAR2(20)
LAST_NAME NOT NULL VARCHAR2(25)
EMAIL NOT NULL VARCHAR2(25)
PHONE_NUMBER VARCHAR2(20)
HIRE_DATE NOT NULL DATE
JOB_ID NOT NULL VARCHAR2(10)
SALARY NUMBER(8,2)
COMMISSION_PCT NUMBER(2,2)
MANAGER_ID NUMBER(6)
DEPARTMENT_ID NUMBER(4)
The output format is something like this:
department  employee   colleague
        20  fay        hartstein
        20  hartstein  fay
        50  davies     matos
and so on...
I am getting duplicated values: I got 3530 rows, but I have only 107 rows in employees and 33 rows in departments.

You have what is known as a Cartesian product.
SQL> select 107 * 33 from dual;

    107*33
----------
      3531

You need to join the tables on DEPARTMENT_ID:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/queries7.htm#2054014
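A minimal sketch of the kind of self-join that produces the requested output (the column aliases and the filter that keeps an employee from pairing with themselves are my assumptions, not part of the original question):

-- Pair each employee with every colleague in the same department.
-- Joining EMPLOYEES to itself on DEPARTMENT_ID avoids the Cartesian
-- product; the <> condition stops an employee matching themselves.
SELECT e.department_id   AS department,
       e.last_name       AS employee,
       c.last_name       AS colleague
FROM   employees e
JOIN   employees c  ON  c.department_id = e.department_id
                    AND c.employee_id  <> e.employee_id
ORDER  BY e.department_id, e.last_name;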

Similar Messages

  • How eliminating duplicated values from selecItem's List...

Hi guys,
I have a problem. I have a list of SelectItem elements:
private List<SelectItem> names;
filled with some values. Some of these values are duplicated, so I need to eliminate them. How can I do it?
Please help me, thanks.

Thanks very much, but I have a problem: I need to use an ArrayList. What I want now is to create a Set and copy the ArrayList into it; should that work?
I've tried it this way. Once I've created the ArrayList names, I've defined
private Set<SelectItem> names2;
and I've done
names2 = new HashSet();
names2.addAll(names);
calling names2 in my page.
The problem is that it shows me the same list, with a changed order, but without eliminating the duplicated values.
Can you help me?
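This is expected: a HashSet can only drop duplicates when the element type defines value equality, and javax.faces.model.SelectItem does not override equals() and hashCode(), so two SelectItem objects with identical labels and values are still distinct to a Set. A minimal sketch of one workaround, deduplicating by the item's value (the helper method and the choice of LinkedHashMap are mine, not from this thread):

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import javax.faces.model.SelectItem;

public class SelectItemDedup {

    // Keeps the first SelectItem seen for each distinct value, preserving
    // insertion order; a LinkedHashMap keyed on getValue() does the work
    // that HashSet<SelectItem> cannot, because SelectItem lacks equals().
    public static List<SelectItem> withoutDuplicates(List<SelectItem> items) {
        Map<Object, SelectItem> byValue = new LinkedHashMap<Object, SelectItem>();
        for (SelectItem item : items) {
            if (!byValue.containsKey(item.getValue())) {
                byValue.put(item.getValue(), item);
            }
        }
        return new ArrayList<SelectItem>(byValue.values());
    }
}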

  • Duplicated values when added to list of list

    Hey all,
I'm making a list of lists. When I print the values just after adding to the list of lists, everything seems fine, but when I print that list where I call it, it has duplicate values instead of the two different ones I see when I first print it. I really don't know what's wrong.
I tried two different declarations for the list of lists, but got the same result; I also tried using an index to add the list, but no change either.
What is wrong, or how should I do it instead?
    Thanks in advance
    The code is this
public class ReadFile extends Metrics {
    public List loadFile(String filename) throws IOException { //void
        FileInputStream fstream = null;
        ///TRACE POINTS
        List<TracePoint> CharPoints = new ArrayList<TracePoint>();
        List<List> ListaChars = new ArrayList<List>();
        //List<List<TracePoint>> ListaChars = new ArrayList<List<TracePoint>>();
        try {
            fstream = new FileInputStream(filename); //"xy.dat");
            DataInputStream in = new DataInputStream(fstream);
            BufferedReader br = new BufferedReader(new InputStreamReader(in));
            String strLine;
            Normalize norm = new Normalize();
            //Read file line by line
            while ((strLine = br.readLine()) != null) {
                int INDEX = 0;
                ///READING VALUES FROM XY FILE and splitting
                String[] temp = strLine.split(" ");
                ///clear list before reusing it
                CharPoints.clear();
                String[] tempNorm = norm.normalize(temp, 0, 1);
                for (int i = 1; i < tempNorm.length; i = i + 2) {
                    CharPoints.add(new TracePoint(Float.parseFloat(tempNorm[i]),
                                                  Float.parseFloat(tempNorm[i + 1])));
                }
                ListaChars.add(INDEX, CharPoints);
                System.out.println(" Charpoints is " + CharPoints);
                System.out.println(" LISTofCHARS is " + ListaChars);
                INDEX++;
            }
            //Close the input stream
            in.close();
        } catch (FileNotFoundException ex) {
            Logger.getLogger(ReadFile.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
            try {
                fstream.close();
            } catch (IOException ex) {
                Logger.getLogger(ReadFile.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
        //return CharPoints;
        return ListaChars;
    }

    public static void main(String args[]) throws IOException {
        ReadFile test2 = new ReadFile();
        List<List> characters = new ArrayList<List>();
        characters = test2.loadFile("xy.dat");
        //System.out.println("\ncharacters = " + " " + characters.get(0));
    }
}
The result is this. The first line looks like it's copied OK, but when the second is added it is duplicated and the first value is not there anymore:
Charpoints is [TracePoint: 0.395683, 0.913669, 0.000000, TracePoint: 0.136691, 0.928058, 0.000000, TracePoint: 0.000000, 1.000000, 0.000000]
LISTofCHARS is [[TracePoint: 0.395683, 0.913669, 0.000000, TracePoint: 0.136691, 0.928058, 0.000000, TracePoint: 0.000000, 1.000000, 0.000000]]
Charpoints is [TracePoint: 1.000000, 0.004096, 0.000000, TracePoint: 0.999937, 0.003151, 0.000000, TracePoint: 0.999811, 0.001765, 0.000000, TracePoint: 0.998866, 0.000000, 0.000000]
LISTofCHARS is [[TracePoint: 1.000000, 0.004096, 0.000000, TracePoint: 0.999937, 0.003151, 0.000000, TracePoint: 0.999811, 0.001765, 0.000000, TracePoint: 0.998866, 0.000000, 0.000000], [TracePoint: 1.000000, 0.004096, 0.000000, TracePoint: 0.999937, 0.003151, 0.000000, TracePoint: 0.999811, 0.001765, 0.000000, TracePoint: 0.998866, 0.000000, 0.000000]]

while ((strLine = br.readLine()) != null) {
    int INDEX = 0;
    INDEX++;
}
On each iteration of the while loop, INDEX equals 0. Try moving the variable INDEX outside of the while loop.
List<TracePoint> CharPoints = new ArrayList<TracePoint>();
By convention, variable names begin with a lower-case letter. It is an important practice to live by; I was led to believe CharPoints was a class at first.
    Mel
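The printed output also points at a second problem: the same CharPoints list object is cleared and refilled for every line, so every element of ListaChars is a reference to that one list, and each new line overwrites what the earlier entries appear to contain. A minimal sketch of the fix, reusing the thread's variable names (lower-cased per the advice above; the surrounding declarations and try/catch are omitted):

while ((strLine = br.readLine()) != null) {
    // Allocate a fresh list per input line instead of reusing (and
    // clearing) one shared list; clearing the shared list is what
    // makes every entry of the outer list show the latest contents.
    List<TracePoint> charPoints = new ArrayList<TracePoint>();
    String[] tempNorm = norm.normalize(strLine.split(" "), 0, 1);
    for (int i = 1; i < tempNorm.length; i = i + 2) {
        charPoints.add(new TracePoint(Float.parseFloat(tempNorm[i]),
                                      Float.parseFloat(tempNorm[i + 1])));
    }
    listaChars.add(charPoints); // add() appends, so no index bookkeeping
}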

  • Eliminating the duplicated values

    Hi,
I have an array and I am filling this array with values which come from a defined function, and these values are sometimes duplicated.
Can anyone tell me how I can eliminate them?
Hope to hear from you.
    Best regards
    Yasir Noman

    Hi Yasir,
What about using a Hashtable instead?
    Regards,
    Marc
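For elements whose class defines value equality (strings, numbers, and so on), a Set is the usual tool. A short sketch, assuming an array of strings (I've used a LinkedHashSet rather than the suggested Hashtable so the original order is kept):

import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class DedupArray {
    public static void main(String[] args) {
        String[] values = {"a", "b", "a", "c", "b"};

        // A LinkedHashSet silently ignores re-added elements, so the
        // result holds each value once, in first-seen order.
        Set<String> unique = new LinkedHashSet<String>(Arrays.asList(values));

        String[] deduped = unique.toArray(new String[0]);
        System.out.println(Arrays.toString(deduped)); // [a, b, c]
    }
}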

  • How to find duplicated values in rows obtained from multiple tables

    Hi.
    I need to find the duplicates stored in different tables of my database. I have some tables like the following model (I know it could be nonsense, but that's because it's simplified):
    table person { id, name, surname }.
    table zoo {id, owner, name, city} (zoo.owner -> person.id)
    table area {id, zoo, type, name} (area.zoo -> zoo.id).
    table dog {id, area, name, colour} (dog.area -> area.id)
    table elephant {id, area, name, height} (elephant.area -> area.id)
As the ids are auto-incremented, it could happen that a person has two zoos with identical areas (meaning these areas have the same type and name, the same dogs (same name and colour), and the same elephants (same name and height)). An example with data:
    person
    id     name     surname
    p1     john     doe
    zoo
    id     owner     name          city
    z1     p1          central     NY
    z2     p1          central     NY
    area
    id     zoo     type     name
    a1     z1     open     main
    a2     z2     open     main
    dog
    id     area     name          colour
    d1     a1     jaro          brown
    d2     a1     chispa     white
    d3     a2     jaro          brown
    d4     a2     chispa     white
    elephant
    id     area     name          height
    e1     a1     dumbo     5
    e2     a1     elphy          4
    e3     a2     dumbo     5
    e4     a2     elphy          4
That is: John Doe has two zoos in the same city and with the same name. These two zoos each have one open main area. Each of these areas has two dogs with the same names and colours, and two elephants with the same names and heights. So these zoos would be identical. What I want is to delete zoo z2.
I'd like to find a SQL function which returns the id of one of these zoos, so it can answer the question: does the person called "John Doe" have more than one area with the same type, name, dogs and elephants?
    Is it possible?

    Hi,
    Interesting problem!
    Two distinct zoos are duplicates of each other if they have the same owner, the same name, the same areas, and the same animals in each of those areas.
    It's pretty easy to tell if two zoos are distinct, but have the same owner and name. We could do that with a self-join:
SELECT  z1.*
,       z2.id  AS id_2
FROM    zoo  z1
JOIN    zoo  z2  ON   z1.owner = z2.owner
                 AND  z1.name  = z2.name
                 AND  z1.id    < z2.id

But how can we tell if two zoos that meet these criteria have the same animals in the same areas? (It amuses me to call these the "critter criteria".)
    One way is to make a list, for each zoo, of all the animals, with all their attributes (that is, the attributes that matter for determining if they are the same), in each area, and then see if the lists for those two zoos is identical. Like this:
WITH all_animals AS
(
     SELECT  d.id, d.area
     ,       d.name          -- || '~' || d.colour || ...
                 AS attributes
     ,       'DOG' AS animal
     FROM    dog d
    UNION ALL
     SELECT  e.id, e.area
     ,       e.name          -- || '?`' || e.height || ...
                 AS attributes
     ,       'ELEPHANT' AS animal
     FROM    elephant e
--  UNION ALL   ... cat ... monkey ...
)
, got_analytics AS
(
     SELECT  z.id AS zoo_id
     ,       ar.type          -- , ar.name, ...
     ,       an.animal
     ,       an.attributes
     ,       COUNT (*)     OVER ( PARTITION BY z.id ) AS cnt
     ,       ROW_NUMBER () OVER ( PARTITION BY z.id
                                  ORDER BY     ar.type     -- , ar.name, ...
                                  ,            an.animal
                                  ,            an.attributes
                                ) AS r_num
     FROM             zoo         z
     LEFT OUTER JOIN  area        ar ON ar.zoo  = z.id
     LEFT OUTER JOIN  all_animals an ON an.area = ar.id
)
SELECT    z1.*
,         z2.id AS id_2
FROM      zoo z1
JOIN      zoo z2  ON  z1.owner = z2.owner
                  AND z1.name  = z2.name
                  AND z1.id    < z2.id
WHERE     NOT EXISTS
          (
              SELECT  type     -- , name, ...
              ,       animal, attributes
              ,       cnt, r_num
              FROM    got_analytics
              WHERE   zoo_id = z1.id
             MINUS
              SELECT  type     -- , name, ...
              ,       animal, attributes
              ,       cnt, r_num
              FROM    got_analytics
              WHERE   zoo_id = z2.id
          )
ORDER BY  z1.id
,         z2.id
;

The first sub-query, all_animals, combines all the animals from all of the separate species tables. For each one, it makes a delimited list (the attributes column) of the columns that count in determining if two animals are the same. Different species may have different attributes and different numbers of attributes. The lists have to be delimited by some sub-string (not necessarily a single character) that can never occur in any of the attributes for that species. For example, dogs may never have '~' in their names (or their colours, if you expand the example to include colour). Elephants might have any character, but never a '?' followed immediately by a grave accent '`'.
    The next sub-query, got_analytics, joins the zoo and area tables to this combined list of animals. The joins are outer joins, so we can detect as duplicates two zoos that have no areas, or two areas that have no animals.
    In the main query, I used MINUS to tell if the lists for two given zoos were identical. If x MINUS y produces no results, then every row in x has a matching row in y. To say that x and y are identical, we also have to know that every row in y has a matching row in x. Rather than do another MINUS, I counted the rows (got_analytics.cnt), so that if y has more rows than x, no rows will match when we do the MINUS.
    Here's the output I got from your sample data:
        ID      OWNER NAME             ID_2
         1          1 CENTRAL NY          2

This shows the distinct ids, and the common attributes, of all distinct pairs of zoos that match. You can join this to the person table if you want to see details about the owner.
If a query is correct, it will not only find all the results it is supposed to find; it will also avoid finding results it is not supposed to find, for all the various reasons. I added some more sample data to test one such reason:
    INSERT INTO ZOO (ID, OWNER, NAME) VALUES (91, 1, 'CENTRAL NY');
    INSERT INTO AREA (ID, ZOO, TYPE) VALUES (92, 91, 'MAIN');
    INSERT INTO DOG (ID, AREA, NAME) VALUES (93, 92, 'JARO');
INSERT INTO DOG (ID, AREA, NAME) VALUES (94, 92, 'CHISPA');

Zoo 91 is identical to zoos 1 and 2, except that 91 has no elephants. To thoroughly test this solution, you need to add some more test data, especially zoos that will not be selected for various reasons: for example, two zoos that have all the same animals but different numbers of areas, or that both have two areas but with the animals distributed differently between the areas.

  • External Hier - Duplicated Values

Hi Gurus,
I have created an external hierarchy with Cost Center (as the external characteristic) and Cost Element (as the leaves of the hierarchy). The problem is that in BEx the values are replicated for every node, not considering the Cost Center detail. Example:
    CC: 1000    Amount = 6000
       CE: ABCD Amount = 1000
       CE: EFGH Amount = 5000
    CC: 1100    Amount = 6000
       CE: ABCD Amount = 1000
       CE: EFGH Amount = 5000
    Instead of:
    CC: 1000    Amount = 3400
       CE: ABCD Amount =  400
       CE: EFGH Amount = 3000
    CC: 1100    Amount = 2600
       CE: ABCD Amount =  600
       CE: EFGH Amount = 2000
How can I solve this problem?
Thank you in advance; I will reward points.
If you need more info, please send me an e-mail and I will send you screenshots of the BEx.
Ciao.
Riccardo.

    Hi Riccardo,
defining an external hierarchy would take very high effort... the only possibility I see is to create a custom characteristic, e.g. ZCOST, which consists of both cost center and cost element values. On this characteristic ZCOST you can then apply a hierarchy:
    -OVHD
    1100
    -HR Cost
    1000
    1100
This solution of course implies you would have to reload the cube data and change the cube design from cost center and cost element to ZCOST... I do not think this solution is feasible for you, but I do not know any better suggestion. An external hierarchy is not possible as described above, and a hierarchy on attributes is also not practicable because of the n:m relation between cost element and cost center. Maybe someone else can propose an easier solution!
    Best regards,
    Björn
EDIT: I forgot the most important point! If you create an external hierarchy, each node must be unique. So in the above example hierarchy it is necessary to create unique keys for ZCOST (for example by concatenating the keys of cost element and cost center). For example, the keys might be:
    -OVHD
    OVHD1100
    -HR Cost
    HRCOST1000
    HRCOST1100

  • Duplicating values

I created a very simple car reservation system in CF. Basically, the customer enters their account number, the start date (pickup) and end date (return). At the end of the month, I simply count (sum) the number of occurrences of each account number, calculate a percentage, and bill according to the percentage. Now a new requirement has come up and I have to bill according to the number of days they have the car. I know I have to do a datediff between end date and start date to figure out the number of days. If it is 0 or 1, then it is one day and I only count the account number once. If it is 2 days, then I have to count it twice, even though it is only entered once. Based on the datediff value, how do I add and/or count the account number that number of times?

The syntax for datediff varies from one db to another, but this should show you the logic:
select accountnum,
       sum(case when returndate <> pickupdate
                then datediff(day, pickupdate, returndate)
                else 1 end) as days
from   yourtables
where  whatever
group by accountnum
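In Oracle, by contrast, there is no datediff function; subtracting one DATE from another yields the number of days directly. A sketch of the equivalent, reusing the hypothetical table and column names from the reply above:

select accountnum,
       sum(case when returndate <> pickupdate
                then returndate - pickupdate  -- DATE arithmetic returns days
                else 1 end) as days
from   yourtables
group by accountnum;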

  • Move row based on one duplicated value

Is it possible to move entire rows based on just the value in the first column?
1    a
1    b
2    c
2    d
The result should be:
1    a
2    c

Yes. Double the header row, then use a formula like the one that I posted, but start in row 1 with the formula
=COUNTIF($A$1:$A1,$A1)
Then copy it down, copy that column, paste as values, and finally sort the entire table based on that value. Then List1 will all have 1s in that column, List2 will have 2s, etc.

  • Why does this keep duplicating values?

Hi, I have this sub-VI that reads OPC values (addresses) from a server across a local network. It uses a DataSocket read within a for loop to pass these values into an array. The problem I have is that every so often it duplicates the same value rather than just reading the exact address in the OPC array, e.g.:
123      123
103      103
110      103
121      110
which means the values that I save in a spreadsheet are out of sequence. Does anyone know what's wrong here?
    Stuart
    Attachments:
    RandD Cell room Software Project.llb ‏415 KB

    Your code has a lot of issues.
    First do a block diagram cleanup, you have a lot of wires with unnecessary bends and sections of code scattered all over the place.
Why do you have 13(?) stop buttons? Then you proceed to crash your code by wiring the real Stop button to a Stop LabVIEW function (the stop sign)? That is like hitting the abort button on the toolbar; not a good way to stop a program.
    I think you are seeing the effects of a race condition caused by the use of local variables.  You are writing to an indicator on one loop and reading from its local variable in another loop.  Depending on where each independent parallel loop is in its execution, you could read duplicate values (i.e., the indicator was not updated in between), or miss a value (indicator updated twice without being read.)  You should be using a producer/consumer architecture with queues to pass data if you want to avoid this problem.
    I would suggest spending more time on the forums reading up on race conditions, queues, and the LabVIEW style guide.

  • Duplicating values in Qualified Tables.

    Hi all
We're using a vendor repository that contains a qualified table with company code information. We use the standard XSD file (CREMDM) to export the master data to R3, but we want to duplicate the information (segment E1LFB1M) as many times as necessary according to a relation between company and purchasing organization:
E.g.
Purchasing Organization  Company
ABMX                     126
ABMX                     127
ABMX                     128
ABMX                     129
When the record is created, the user only creates it for one purchasing organization and one company (qualified), and we need the same information entered by the user to be repeated for all the companies. Is there any way to do this through a syndication map or a Data Manager formula?
The IDoc looks like:
    <?xml version="1.0" encoding="UTF-8" ?>
    - <CREMDM04>
    - <IDOC BEGIN="1">
    - <E1LFA1M SEGMENT="1">
      <MSGFN>005</MSGFN>
      <LIFNR>0010001000</LIFNR>
      <ANRED>/</ANRED>
      <BRSCH>SEJ</BRSCH>
      <DATLT>/</DATLT>
      <DTAWS>/</DTAWS>
      <ERDAT>/</ERDAT>
      <ERNAM>/</ERNAM>
      <KTOKK>VPRN</KTOKK>
      <KUNNR>/</KUNNR>
      <LAND1>/</LAND1>
      <LNRZA>/</LNRZA>
      <NAME1>/</NAME1>
      <NAME2>/</NAME2>
      <NAME3>/</NAME3>
      <NAME4>/</NAME4>
      <ORT01>/</ORT01>
      <ORT02>/</ORT02>
      <PFACH>/</PFACH>
      <PSTL2>/</PSTL2>
      <PSTLZ>/</PSTLZ>
      <REGIO>/</REGIO>
      <SORTL>/</SORTL>
      <SPRAS>/</SPRAS>
      <STCD1>BBQ030122SEO</STCD1>
      <STKZU>X</STKZU>
      <STRAS>/</STRAS>
      <TELBX>/</TELBX>
      <TELF1>/</TELF1>
      <TELF2>/</TELF2>
      <TELFX>/</TELFX>
      <TELTX>/</TELTX>
      <TELX1>/</TELX1>
      <XCPDK>/</XCPDK>
      <VBUND>/</VBUND>
      <FISKN>/</FISKN>
      <ADRNR>/</ADRNR>
      <MCOD1>/</MCOD1>
      <MCOD2>/</MCOD2>
      <MCOD3>/</MCOD3>
      <REVDB>/</REVDB>
      <KTOCK>/</KTOCK>
      <PFORT>/</PFORT>
      <WERKS>/</WERKS>
      <LTSNA>/</LTSNA>
      <WERKR>/</WERKR>
      <PLKAL>/</PLKAL>
      <DUEFL>/</DUEFL>
      <TXJCD>/</TXJCD>
      <FITYP>03</FITYP>
      <STCDT>04</STCDT>
      <ACTSS>PG</ACTSS>
    + <E1LFA1A SEGMENT="">
    + <E1ADRMAS SEGMENT="1">
    + <E1BPAD1VL SEGMENT="1">
    + <E1BPAD1VL1 SEGMENT="1">
    + <E1BPADTEL SEGMENT="1">
    - <E1LFB1M SEGMENT="">
      <MSGFN>005</MSGFN>
      <LIFNR>0010001000</LIFNR>
      <BUKRS>102</BUKRS>
      <AKONT>0021051001</AKONT>
      <ZWELS>C</ZWELS>
      <ZTERM>Z000</ZTERM>
      <FDGRV>A1</FDGRV>
      <REPRF>X</REPRF>
      <HBKID>BNMX</HBKID>
      <ALTKN>62058</ALTKN>
  </E1LFB1M>
+ <E1LFM1M SEGMENT="">
    + <E1WYT3M SEGMENT="">
      <MSGFN>005</MSGFN>
      <LIFNR>0010001000</LIFNR>
      <EKORG>ABMX</EKORG>
      <PARVW>PR</PARVW>
      </E1WYT3M>
      </E1LFM1M>
    + <E1LFBKM SEGMENT="1">
    + <E1LFASM SEGMENT="1">
      </E1LFA1M>
      </IDOC>
      </CREMDM04>

    Hi Jose,
In the Syndicator, after opening the XSD, go to the destination properties,
select the segment E1LFB1M, and in the properties pane tick the property "Repeatable XML Node" in the second column.
I hope this will solve your problem.
    Thanks and Regards
    Nitin Jain

  • Output Text values getting duplicated when UDF attached to view form

    Hi Experts,
I am getting an error where the output text has duplicated values when I try to attach an existing UDF to the view form. For instance, as mentioned below:
Phone Number - 23456789
State - CACA
Country - UnitedStatesUnitedStatesUnitedStates
When I added the Country UDF, it first printed UnitedStates once. When I added the State UDF, the Country UDF value came out twice (UnitedStatesUnitedStates). When I added the Phone Number UDF, the phone number came once, but State came twice (CACA) and Country came three times (UnitedStatesUnitedStatesUnitedStates), and so on.
The following are the two contrasting scenarios:
1. First, I used existing UDFs which we created long back and tried attaching them to the view form; I get the error I mentioned.
2. Second, I created two new UDFs and attached them to the view form; I do not get any error. It prints all the values once.
It happened that I had already customized the view form using Data Component - Catalog instead of Data Component - Manage User, and I published that sandbox. Then I realized that it was a mistake, so I created a new sandbox, removed all the fields, and published that sandbox. The view form came back to how it was before the modification. After that, I created one more sandbox and tried attaching UDFs to the view form using Data Component - Manage User ---> UserV01; I get the error of multiple values as I mentioned above.
Please guide me on what I can do further to solve the error. Thank you for your time in advance.
    Regards

    Hi Nishith,
I think it's only a display issue. We have contacted Oracle and raised an SR; they say it's fine when they recreate the same scenario. As I said, this issue occurs only when we use existing UDFs and the sandbox is already published. That's the complication now.

  • When I use Merge duplicate Values

    Hi
When I use MERGE to INSERT, it is duplicating values even though I put the join condition in the ON clause:
MERGE /*+ append nologging */ INTO sysadm.ps_loc_item_sn_zz2 t3
       USING (WITH tmp_ps_loc_item AS
                   (SELECT loc_cntr_id_sn, setid, companyid, effdt, setid_product, loc_product_sn,
                           ROW_NUMBER () OVER (PARTITION BY loc_cntr_id_sn, setid, companyid, effdt, setid_product, loc_product_sn ORDER BY linha)
                                                                                          seqno_item_sn,
                           loc_item_status, NULL ken_data_ativ_sn, NULL inactive_date_sn,
                           'LO' cntrct_origin_sn, SYSDATE row_added_dttm,
                           'CARGA PS 29052007' row_added_oprid, SYSDATE row_lastmant_dttm,
                           'CARGA PS 29052007' row_lastmant_oprid, 0 syncid, NULL syncdttm,
                           ken_component_sn
                      FROM (SELECT t1.*, ROWNUM linha
                              FROM sysadm.tmp_equip_crm t1,
                                   (SELECT     LEVEL l
                                          FROM DUAL
                                    CONNECT BY LEVEL <= 100)
                             WHERE l <= t1.qtd) )
              SELECT t2.loc_cntr_id_sn, t2.setid, companyid, effdt, setid_product, loc_product_sn,
                     t2.seqno_item_sn, loc_item_status, t5.ken_data_ativ_sn, inactive_date_sn,
                     cntrct_origin_sn, row_added_dttm, row_added_oprid, row_lastmant_dttm, syncid,
                     syncdttm, t2.ken_component_sn
                FROM tmp_ps_loc_item t2, sysadm.tmp_data_ativa t5
               WHERE t2.loc_cntr_id_sn = t5.loc_cntr_id_sn(+)
                 AND t2.ken_component_sn = t5.ken_component_sn(+)
                 AND t2.loc_item_status = t5.loc_item_status_sn(+)
                 AND t2.seqno_item_sn = t5.seqno_item_sn(+)) t4
       ON (    t3.loc_cntr_id_sn = t4.loc_cntr_id_sn
           AND t3.setid = t4.setid
           AND t3.companyid = t4.companyid
           AND t3.effdt = t4.effdt
           AND t3.setid_product = t4.setid_product
           AND t3.loc_product_sn = t4.loc_product_sn
           AND t3.seqno_item_sn = t4.seqno_item_sn)
       WHEN MATCHED THEN
          UPDATE
             SET t3.syncid = 0
       WHEN NOT MATCHED THEN
          INSERT (loc_cntr_id_sn, setid, companyid, effdt, setid_product, loc_product_sn, seqno_item_sn,
                  loc_item_status_sn, ken_data_ativ_sn, inactive_date_sn, cntrct_origin_sn,
                  row_added_dttm, row_added_oprid, row_lastmant_dttm, row_lastmant_oprid, syncid,
                  syncdttm)
          VALUES (t4.loc_cntr_id_sn, t4.setid, t4.companyid, t4.effdt, t4.setid_product,
                  t4.loc_product_sn, t4.seqno_item_sn, loc_item_status, t4.ken_data_ativ_sn,
                  t4.inactive_date_sn, t4.cntrct_origin_sn, t4.row_added_dttm, t4.row_added_oprid,
                  t4.row_lastmant_dttm, t4.ken_component_sn, t4.syncid, t4.syncdttm);

I don't understand what you mean, exactly.
When an SQL statement encounters an error, all of its work is rolled back...
So why do you expect that it works otherwise? Simply:
WHEN MATCHED THEN
   UPDATE
      SET t3.syncid = 0
Only
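One common cause of unexpected rows from a MERGE insert is a USING source that returns more than one row per ON-clause key, so each unmatched source row inserts separately. A generic sketch of guarding against that (the table and column names here are placeholders, not taken from the posted statement):

MERGE INTO target t
USING (
        SELECT *
        FROM  (SELECT s.*,
                      ROW_NUMBER() OVER (PARTITION BY s.key_col
                                         ORDER BY     s.key_col) AS rn
               FROM   source_table s)
        WHERE  rn = 1   -- keep exactly one source row per ON-clause key
      ) s
ON (t.key_col = s.key_col)
WHEN MATCHED THEN
   UPDATE SET t.payload = s.payload
WHEN NOT MATCHED THEN
   INSERT (key_col, payload)
   VALUES (s.key_col, s.payload);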

  • Compare two columns and match ALL recurring values, not just the first instance

    Hi everybody...
I was looking for a way to compare values in two columns, identifying every duplicate value instance in a third column.
Searching around the forums, I found a solution, albeit a partial one. I am using this formula: =IFERROR("Duplicate in row "&MATCH($A,$B,0),"") along column C to compare values between columns A and B. When applied, the formula will flag the first row where there is a duplicate; unfortunately MATCH will only register the first instance of the duplicated values.
For example: the first value in column A is 'Apple'. In column B there are three instances of the value 'Apple'; the formula identifies the first of these values, but not the remaining two.
I am not an advanced Numbers or Excel user, and the answer to this problem eludes me. I am attempting to compare columns that have no less than 1000 rows each, so you can imagine how finding a solution to my problem would be really great.
    Thanks in advance,
    Pablo

    Unfortunately I can't see your screenshot, but supposing you have a table like this:
Col1   Col2
1      3      Dupe
2      4      Dupe
3      5      Dupe
4      6
5      7
    Then here is one way to flag the duplicates.
    The formula in C2, copied down, is:
    =IF(COUNTIF($A,$B2)≥1,"Dupe","")
    Then filter on column C for 'Dupe', and copy the values in column B to wherever you need them.
    SG

  • Absurd values in BEx Report on Multicube

    Hi,
when I run my query on the MultiCube (containing two cubes, C1 and C2), I get a result with three rows, and a fourth row that says # and "not assigned" in all columns except the key figure column.
When I run my query only on cube C1, it shows me the correct result, i.e. all the rows.
I suspect there is something absurd in the MultiCube but cannot detect the problem. Please help!
I am working on BW 2.0B.
    Thanks,
    SD

    Hi,
    In a MultiProvider, every characteristic in each of the InfoProviders involved must correspond to exactly one characteristic or navigation attribute (wherever these are available). If it is not clear, at the MultiProvider definition stage, you have to specify to which InfoObject you want to assign the characteristic of the MultiProvider.
    The MultiProvider contains the characteristic 0COUNTRY and an InfoProvider contains the characteristic 0COUNTRY as well as the navigation attribute 0CUSTOMER__0COUNTRY. In this case, select exactly one of these InfoObjects in the assignment table.
    Select a key figure contained in a MultiProvider from at least one of the InfoProviders involved. In general, exactly one of the InfoProviders provides the key figure. However, there are cases where it is better to select from more than one InfoProvider:
    If the 0SALES key figure is stored redundantly in more than one InfoProvider (meaning that it is contained completely in all value combinations of the characteristics) you must select from exactly one of the InfoProviders involved (otherwise the duplicated value is added incorrectly in the MultiProvider).
    However, if 0SALES is stored as an actual value in one InfoProvider and as a planned value in another InfoProvider so that there is no overlap between the data records (in other words, sales are divided separately between several InfoProviders) then it makes sense to make a selection across several InfoProviders.
Identification defines where the object gets its data from.
When you create a query against the MultiProvider, each object gets its data only from the InfoProviders for which the identification is checked.
    http://help.sap.com/saphelp_nw04/helpdata/en/52/1ddc37a3f57a07e10000009b38f889/frameset.htm
Hope this helps you.
    Best Regards,
    VVenkat..

  • Does pivotTable support rows with duplicated row names?

    I am developing an application using JDeveloper 11.1.1.6.0.
A pivot table can display attribute values as row or column names. If the "row name" attribute for two records has the same/duplicated values, does the pivot table show the two records in two rows with the same row name?
My experiment shows only one row, with a blank value for the measure, in that case. I just want to confirm this is the expected behavior.

    Hi,
I don't quite understand the use case, so answering in general: no data should be lost when using a pivot table.
    Frank
