TOP N analysis with same values

Dear Members,
Suppose we have the following data in the table Student.
Sname   GPA
Jack    4.0
Smith   3.7
Rose    3.5
Rachel  3.5
Ram     2.8
I have seen many questions in this forum with good queries for TOP N analysis, but in my case those are not working.
There are 5 students in total. I need to write a query that takes a number N as input and returns the students with the top GPAs in descending order.
If I give 4 as input I must get the GPAs 4.0, 3.7, 3.5, 3.5 and 2.8, since two GPAs are the same. If I give 3 as input I must get 4.0, 3.7, 3.5 and 3.5.
The query must treat equal GPAs as a single rank, not separate ones. How can we achieve this? For example, the top three students (with input 3) must be
Jack 4.0
Smith 3.7
Rose 3.5
Rachel 3.5
It must also include Rachel.
Any help is greatly appreciated.
Thanks
Sandeep

SQL> select * from test;
NAME                        GPA
Jack                          4
Smith                       3.7
Rose                        3.5
Rachel                      3.5
Ram                         2.8
SQL> select name,gpa
  2  from
  3    (select name,gpa,dense_rank() over(order by gpa desc) rn
  4     from test)
  5  where rn <= 3
  6  order by rn;
NAME                        GPA
Jack                          4
Smith                       3.7
Rose                        3.5
Rachel                      3.5
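
If the cut-off has to come in as an input, a bind variable can supply it; a minimal sketch (the name :n is just a placeholder):

SQL> variable n number
SQL> exec :n := 3

select name, gpa
from   (select name, gpa,
               dense_rank() over (order by gpa desc) rn
        from test)
where  rn <= :n
order by rn;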

Similar Messages

  • Clearing WHT on advance & Invoice with same value

    Hi SAP Gurus
    Could anybody suggest which transaction code to use, or whether any configuration change is needed, for clearing both documents, i.e. the WHT deducted on the advance payment against the WHT deducted on the invoice with the same value?
    In this scenario no further payment should be made.
    I have linked both documents through TCode F-54 and tried to clear them with TCode F-44, but in that transaction no TDS is reversed by the system and it shows a difference.
    With a partial advance payment the system reversed the TDS automatically; no problem occurred in that scenario.
    Regards
    Aman
    Edited by: Amandeep Garg on Mar 17, 2008 10:43 AM

    Hi Ahmed
    Thanks for your response.
    But I have already used that. Is there any transaction other than F-53 in which the bank is not involved?
    Regards
    Aman

  • Updating PK with same value - effect on CASCADE UPDATE

    Hello,
    I would like to understand how sql server 2008 deals with cascade updates
    For example I have
    Parent table: Employee with column Id as varchar(20) primary key
    Child table with IdEmployee as varchar(20) foreign key
    I set up Cascade Update for those two tables, meaning any change to the primary key in the Employee table will update the matching rows in the child table.
    Scenario 1:
    Update Employee
    set Id = 'ABC',
    Name = 'something new'
    where Id = 'CCC'
    Result of child table: all rows with foreign key IdEmployee and value of 'CCC' are updated. Expected behavior.
    Scenario 2:
    Update Employee
    set Id = 'ABC',
    Name = 'something new 2'
    where Id = 'ABC'
    This time I am doing something different: besides updating column Name with a new value, I also update the primary key, but
    with the SAME value.
    The question is: what is going to happen to the child rows? Are they ALL going to be updated due to CASCADE UPDATE?
    So far, what I did in order to find a solution is:
    1. I put a timestamp column in the child table that should update each time a row gets updated
    2. I put a trigger for the update event on the child table that writes something to a log table
    After I set up those two, I ran an example like the one above, just to be sure the timestamp changes and the trigger fires.
    Results of updating the PK with the same value:
    1. Timestamp didn't change
    2. Trigger didn't fire
    Is this enough to conclude that updating the primary key with the same value, ALONG with updating some other columns, won't
    affect child tables with UPDATE CASCADE on?
    Update:
    Database is CI AS collation
    If I do the following:
    Update Employee
    set Id = 'abc',
    Name = 'something new'
    where Id = 'ABC'
    1. Timestamp will change
    2. Trigger will fire
    Conclusion: Case sensitive is important here!
    Thank you very much in advance
    Milos

    >>  would like to understand how sql server 2008 deals with cascade updates <<
    Your posting has a number of conceptual errors. 
    1. The terms “parent” and “child” are not RDBMS; they are used in network databases. We have “referenced” and “referencing” tables; they can be the same table.
    2. A table models a SET of things, so there is no “Employee” table unless you truly have a one-man company. We want a collective or plural name for the SET/table. A better name is “Personnel” for this table. 
    3. There is no such thing as a generic “id” in RDBMS; it has to be “<something in particular>_id” to be valid. Identifiers are usually fixed length.
    4. It is very, very rude not to post DDL on a forum. You also do not know the ISO-11179 Rules for data element names. They do not change names from table to table! Does your name change whenever you use it in a new place?? NO! Same principle with data. 
    5. The ISO standard uses “<property>_<attribute property>” syntax, not the old PascalCase.
    6. Why did you post a useless narrative? How do we compile “I SET up Cascade UPDATE for those two tables,..” to test it?? 
    CREATE TABLE Personnel
    (emp_id CHAR(20) NOT NULL PRIMARY KEY,
     emp_name VARCHAR(25) NOT NULL);

    CREATE TABLE Health_Plan
    (health_plan_acct CHAR(20) NOT NULL PRIMARY KEY,
     emp_id CHAR(20) NOT NULL
     REFERENCES Personnel(emp_id)
     ON UPDATE CASCADE
     ON DELETE CASCADE);
    Scenario 1:
    UPDATE Personnel
       SET emp_id = 'ABC',
           emp_name = 'something new'
     WHERE emp_id = 'CCC';
    Result of child table: all rows with foreign key emp_id and value of 'CCC' are updated. Expected behavior.
    Scenario 2:
    UPDATE Personnel
       SET emp_id = 'ABC',
           emp_name = 'something new 2'
     WHERE emp_id = 'ABC';
    This time I am doing something different: besides updating column emp_name with a new value, I also update the PRIMARY KEY, but
    with the SAME value.
    >> Question is: what is going to happen to child [sic: referencing]  rows? Are they ALL going to UPDATE due to CASCADE UPDATE. <<
    SQL uses a set-oriented model, so the whole table is updated as a unit of work in theory. 
    So far, what I did in order to find solution is:
    >> I put an timestamp column in child [sic: referencing] table that should UPDATE each time row gets updated <<
    Why? It is not in the SET clause list; it cannot change. As an aside, the T-SQL TIMESTAMP is not the ANSI/ISO TIMESTAMP; the latter is DATETIME2(n) in T-SQL. The old TIMESTAMP is being deprecated because it stinks both in concept and implementation.
    >> I put a trigger for UPDATE event on child [sic: referencing] table that will write something to some log table.<<
    TRIGGERs are fired by what is called a “database event” shown in the ON [DELETE | UPDATE] clause. T-SQL adds INSERT as an event. An update to any value or to no value at all is still an update. Depending on the collation, case may or may not matter in the final
    outcome. 
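    To make this concrete, here is a rough T-SQL sketch of the test described above; the audit table, trigger and extra referencing table are made up for illustration and are not from the original post:
    -- Sketch only: Update_Log, Health_Plan_Audit_Demo and trg_audit_demo_upd are illustrative names.
    CREATE TABLE Update_Log
    (logged_at DATETIME2 NOT NULL);

    CREATE TABLE Health_Plan_Audit_Demo
    (acct_id CHAR(20) NOT NULL PRIMARY KEY,
     emp_id CHAR(20) NOT NULL
     REFERENCES Personnel(emp_id)
     ON UPDATE CASCADE,
     row_ver ROWVERSION); -- changes whenever the row is physically rewritten
    GO
    CREATE TRIGGER trg_audit_demo_upd ON Health_Plan_Audit_Demo
    AFTER UPDATE
    AS INSERT INTO Update_Log (logged_at) VALUES (SYSDATETIME());
    GO
    -- Same-value key update: per the findings reported above, the cascade does not
    -- rewrite the referencing rows, so row_ver stays unchanged and the trigger does not fire.
    UPDATE Personnel
       SET emp_id = 'ABC', emp_name = 'something new 2'
     WHERE emp_id = 'ABC';
    -- Changing only the case of the key: per the findings reported above, the referencing
    -- rows were rewritten (row_ver changed, the trigger fired), even on a CI AS database.
    UPDATE Personnel
       SET emp_id = 'abc', emp_name = 'something new'
     WHERE emp_id = 'ABC';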
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • JTree + FK with same value causing problems

    Hi
    I can't figure this out. If I create biz components for 2 tables having a parent-child relationship and a JTree with appropriate rules, things are OK only if the parent's IDs have different values than the child's. When parent.id and child.id have the same values, the JTree seems to recursively fire valueChanged() at strange times.
    Example:
    CREATE TABLE PARENT (
    PARENT_ID NUMBER CONSTRAINT PARENT_ID_NN NOT NULL,
    PARENT_NAME VARCHAR2(40 BYTE),
    CONSTRAINT PARENT_C_ID_PK
    PRIMARY KEY
    (PARENT_ID));

    CREATE TABLE CHILD (
    CHILD_ID NUMBER CONSTRAINT CHILD_ID_NN NOT NULL,
    CHILD_NAME VARCHAR2(40 BYTE),
    PARENT_ID NUMBER,
    CONSTRAINT CHILD_C_ID_PK
    PRIMARY KEY
    (CHILD_ID));

    ALTER TABLE CHILD ADD (
    CONSTRAINT PARENT_FK
    FOREIGN KEY (PARENT_ID)
    REFERENCES PARENT (PARENT_ID));
    INSERT INTO PARENT VALUES (1, 'Parent 1');
    INSERT INTO PARENT VALUES (2, 'Parent 2');
    INSERT INTO PARENT VALUES (3, 'Parent 3');
    INSERT INTO CHILD VALUES (100, 'Child A', 1);
    INSERT INTO CHILD VALUES (200, 'Child B', 2);
    INSERT INTO CHILD VALUES (300, 'Child C', 3);
    I use the JDev 10.1.2 wizard to create biz components and test the AppMod to make sure the link works. Now I create a blank panel and drag over the ParentView data control. Using the jTree tree binding editor I create two rules:
    1) DataCollectionDef.ParentView - DisplayAttribute.ParentName - BranchRuleAccessor.ChildView
    2) DataCollectionDef.ChildView - DisplayAttribute.ChildName
    Now I add a tree selection listener:
    jTree1.addTreeSelectionListener(new TreeSelectionListener() {
        public void valueChanged(TreeSelectionEvent e) {
            DefaultMutableTreeNode selectedNode = (DefaultMutableTreeNode) jTree1.getLastSelectedPathComponent();
            if (selectedNode != null)
                System.out.println(selectedNode.getUserObject().toString());
        }
    });
    When I run my panel and watch in JDev everything is fine, i.e. nothing is printed to the screen when it first loads and when I click a node, the correct UserObject prints.
    Here's the rub, now I close the panel and update my child table as follows:
    UPDATE pdssuser.child SET child_id = 1 WHERE child_id = 100;
    UPDATE pdssuser.child SET child_id = 2 WHERE child_id = 200;
    UPDATE pdssuser.child SET child_id = 3 WHERE child_id = 300;
    This time, when I run my panel, the console shows that valueChanged() has been fired 3 times on load:
    Parent 1
    Parent 2
    Parent 3
    This behavior is causing problems with my real tree.
    I haven't had any luck finding threads about this. Any ideas?
    Thanks
    John

    ok, last one (i hope)
    The same issue occurs with 10.1.2.1
    ...but changing the location where I add the treeSelectionListener to after setBindingContext() seems to fix the problem.
    The second call to panelBinding.refreshControl() in setBindingContext() (the one after the call to jbInit()) is what fires the valueChanged events. Somewhere in there, in DCBindingContainer.java or DCIteratorBinding.java, the treeSelectionListener is hearing that a value changed (maybe due to a query execution?). I guess the jTree1.setModel(...) call in jbInit() simply sets up the tree, but the VO queries described in the branch rule accessors are executed on refreshControl(). I'm out of my comfort zone here and may be confusing you with my "troubleshooting", so I'll just tell you my work-around.
    So to fix it I add the treeSelectionListener in my main method (not jbinit()) after the setBindingContext() has been called.
    I still have no idea why the FK triggers this behavior...but at least it's working.
    // the main method
    public static void main(String[] args) {
        try {
            UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
        } catch (Exception exemp) {
            exemp.printStackTrace();
        }
        Panel1 panel = new Panel1();
        panel.setBindingContext(JUTestFrame.startTestFrame("DataBindings.cpx", "null", panel, panel.getPanelBinding(), new Dimension(400, 300)));
        panel.revalidate();
        // Now add the treeSelectionListener
        panel.addSelectionListener();
    }

    // the jbInit method
    public void jbInit() throws Exception {
        this.setLayout(borderLayout1);
        this.add(jTree1, BorderLayout.CENTER);
        jTree1.setModel((TreeModel) panelBinding.bindUIControl("ParentView1", jTree1));
        // DON'T ADD the treeSelectionListener here
    }

    // Add the selection listener to the tree
    public void addSelectionListener() {
        jTree1.addTreeSelectionListener(new TreeSelectionListener() {
            public void valueChanged(TreeSelectionEvent e) {
                DefaultMutableTreeNode selectedNode = (DefaultMutableTreeNode) jTree1.getLastSelectedPathComponent();
                if (selectedNode != null)
                    System.out.println(selectedNode.getUserObject().toString());
            }
        });
    }

  • Top N Analysis with multiple columns

    Hi
    I am using Oracle 9i. I have a table which contains date-wise promotional material types for an organisation.
    The structure is as follows:
    CREATE TABLE TEST
    (CDATE DATE,
    BROCHURE VARCHAR2(1),
    WEBSITE VARCHAR2(1),
    DIRECT_MAIL VARCHAR2(1),
    PRESS_RELEASE VARCHAR2(1),
    JOURNAL_AD VARCHAR2(1));
    and the sample data is as follows:
    CDate        Brochure  Website  Direct_Mail  Press_Release  Journal_Ad
    01/04/1996   Y         Y        Y            N              N
    02/04/1996   Y         Y        N            N              N
    23/06/1996   Y         N        Y            Y              N
    13/09/1996   Y         Y        N            N              N
    01/04/1997   Y         Y        N            N              N
    02/04/1997   Y         Y        Y            N              Y
    23/06/1997   N         Y        N            N              Y
    13/09/1997   Y         Y        N            N              N
    01/04/1998   Y         Y        Y            N              N
    02/04/1998   Y         N        N            Y              N
    23/06/1998   N         Y        N            N              Y
    13/09/1998   Y         Y        N            N              Y
    01/04/1999   Y         Y        Y            N              Y
    02/04/1999   Y         N        N            Y              N
    23/06/1999   N         Y        N            N              N
    13/09/1999   Y         Y        Y            N              N
    I want the year-wise top 4 promotional types in terms of the count of 'Y' values. The result should be as follows:
    YEAR:1996
    TYPE     COUNT
    BROCHURE 4
    WEBSITE 3
    DIRECT_MAIL 2
    PRESS_RELEASE 1
    JOURNAL_AD 0
    YEAR:1997
    TYPE     COUNT
    WEBSITE 4
    BROCHURE 3
    JOURNAL_AD 2
    DIRECT_MAIL 1
    PRESS_RELEASE 1
    Please suggest a solution. I am not able to work out the ranking across multiple columns.
    Regards
    MS

    One of the questions that must be asked when you have a requirement to only show the top N ranked items in a list, is "what about a tie in the ranking?".
    Oracle has two ranking functions that allow you to deal with either requirement - RANK and DENSE_RANK. Both operate as either analytic or aggregate functions, so either will work for your requirements. The previous posting by Miguel demonstrated how to decode your Y/N flags and pivot the data.
    In this example, I've taken the liberty of adding some data to year 2000 that will show the difference between RANK and DENSE_RANK as well as how to use them to filter your results.
    First, here's the decoded/pivoted data:
    SQL>WITH test AS
      2  (         SELECT TO_DATE('01/04/1996','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
      3  UNION ALL SELECT TO_DATE('02/04/1996','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
      4  UNION ALL SELECT TO_DATE('23/06/1996','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'N' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'Y' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
      5  UNION ALL SELECT TO_DATE('13/09/1996','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
      6  UNION ALL SELECT TO_DATE('01/04/1997','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
      7  UNION ALL SELECT TO_DATE('02/04/1997','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'Y' AS JOURNAL_AD FROM DUAL
      8  UNION ALL SELECT TO_DATE('23/06/1997','dd/mm/yyyy') AS CDATE, 'N' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'Y' AS JOURNAL_AD FROM DUAL
      9  UNION ALL SELECT TO_DATE('13/09/1997','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    10  UNION ALL SELECT TO_DATE('01/04/1998','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    11  UNION ALL SELECT TO_DATE('02/04/1998','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'N' AS WEBSITE, 'N' AS DIRECT_MAIL, 'Y' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    12  UNION ALL SELECT TO_DATE('23/06/1998','dd/mm/yyyy') AS CDATE, 'N' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'Y' AS JOURNAL_AD FROM DUAL
    13  UNION ALL SELECT TO_DATE('13/09/1998','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'Y' AS JOURNAL_AD FROM DUAL
    14  UNION ALL SELECT TO_DATE('01/04/1999','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'Y' AS JOURNAL_AD FROM DUAL
    15  UNION ALL SELECT TO_DATE('02/04/1999','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'N' AS WEBSITE, 'N' AS DIRECT_MAIL, 'Y' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    16  UNION ALL SELECT TO_DATE('23/06/1999','dd/mm/yyyy') AS CDATE, 'N' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    17  UNION ALL SELECT TO_DATE('13/09/1999','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    18  UNION ALL SELECT TO_DATE('01/04/2000','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'Y' AS JOURNAL_AD FROM DUAL
    19  UNION ALL SELECT TO_DATE('02/04/2000','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'Y' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    20  UNION ALL SELECT TO_DATE('23/06/2000','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    21  UNION ALL SELECT TO_DATE('13/09/2000','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'Y' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    22  )
    23  SELECT cyear
    24        ,ctype
    25        ,RANK()       OVER (PARTITION BY cyear ORDER BY num_media DESC) ranking
    26        ,DENSE_RANK() OVER (PARTITION BY cyear ORDER BY num_media DESC) dense_ranking
    27  FROM (SELECT TRUNC(CDATE,'Y') CYEAR
    28             ,'BROCHURE' CTYPE, SUM(DECODE(BROCHURE, 'Y', 1, 0)) NUM_MEDIA
    29        FROM test
    30        GROUP BY TRUNC(CDATE,'Y')
    31        UNION ALL
    32        SELECT TRUNC(CDATE,'Y') CYEAR
    33             ,'WEBSITE' CTYPE, SUM(DECODE(WEBSITE, 'Y', 1, 0))
    34        FROM test
    35        GROUP BY TRUNC(CDATE,'Y')
    36        UNION ALL
    37        SELECT TRUNC(CDATE,'Y') CYEAR
    38             ,'DIRECT_MAIL' CTYPE, SUM(DECODE(DIRECT_MAIL, 'Y', 1, 0))
    39        FROM test
    40        GROUP BY TRUNC(CDATE,'Y')
    41        UNION ALL
    42        SELECT TRUNC(CDATE,'Y') CYEAR
    43             ,'PRESS_RELEASE' CTYPE, SUM(DECODE(PRESS_RELEASE, 'Y', 1, 0))
    44        FROM test
    45        GROUP BY TRUNC(CDATE,'Y')
    46        UNION ALL
    47        SELECT TRUNC(CDATE,'Y') CYEAR
    48             ,'JOURNAL_AD' CTYPE, SUM(DECODE(JOURNAL_AD, 'Y', 1, 0))
    49        FROM test
    50        GROUP BY TRUNC(CDATE,'Y')
    51       )
    52* order by cyear desc, ranking
    53  /
    CYEAR                         CTYPE             RANKING DENSE_RANKING
    01-Jan-2000 00:00:00          BROCHURE                1             1
    01-Jan-2000 00:00:00          WEBSITE                 1             1
    01-Jan-2000 00:00:00          DIRECT_MAIL             3             2
    01-Jan-2000 00:00:00          PRESS_RELEASE           4             3
    01-Jan-2000 00:00:00          JOURNAL_AD              5             4
    01-Jan-1999 00:00:00          BROCHURE                1             1
    01-Jan-1999 00:00:00          WEBSITE                 1             1
    01-Jan-1999 00:00:00          DIRECT_MAIL             3             2
    01-Jan-1999 00:00:00          PRESS_RELEASE           4             3
    01-Jan-1999 00:00:00          JOURNAL_AD              4             3
    01-Jan-1998 00:00:00          BROCHURE                1             1
    01-Jan-1998 00:00:00          WEBSITE                 1             1
    01-Jan-1998 00:00:00          JOURNAL_AD              3             2
    01-Jan-1998 00:00:00          DIRECT_MAIL             4             3
    01-Jan-1998 00:00:00          PRESS_RELEASE           4             3
    01-Jan-1997 00:00:00          WEBSITE                 1             1
    01-Jan-1997 00:00:00          BROCHURE                2             2
    01-Jan-1997 00:00:00          JOURNAL_AD              3             3
    01-Jan-1997 00:00:00          DIRECT_MAIL             4             4
    01-Jan-1997 00:00:00          PRESS_RELEASE           5             5
    01-Jan-1996 00:00:00          BROCHURE                1             1
    01-Jan-1996 00:00:00          WEBSITE                 2             2
    01-Jan-1996 00:00:00          DIRECT_MAIL             3             3
    01-Jan-1996 00:00:00          PRESS_RELEASE           4             4
    01-Jan-1996 00:00:00          JOURNAL_AD              5             5

    You can see that in year 2000 there is a tie for first place (ranking #1). The RANK function gives the second highest count a rank of 3 (skipping rank 2 due to the tie), while the DENSE_RANK function does not skip a ranking.
    Now, to filter on the ranking, wrap your query in another in-line view like this - but use whichever ranking function YOUR requirements call for:
    SQL>WITH test AS
      2  (         SELECT TO_DATE('01/04/1996','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
      3  UNION ALL SELECT TO_DATE('02/04/1996','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
      4  UNION ALL SELECT TO_DATE('23/06/1996','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'N' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'Y' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
      5  UNION ALL SELECT TO_DATE('13/09/1996','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
      6  UNION ALL SELECT TO_DATE('01/04/1997','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
      7  UNION ALL SELECT TO_DATE('02/04/1997','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'Y' AS JOURNAL_AD FROM DUAL
      8  UNION ALL SELECT TO_DATE('23/06/1997','dd/mm/yyyy') AS CDATE, 'N' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'Y' AS JOURNAL_AD FROM DUAL
      9  UNION ALL SELECT TO_DATE('13/09/1997','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    10  UNION ALL SELECT TO_DATE('01/04/1998','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    11  UNION ALL SELECT TO_DATE('02/04/1998','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'N' AS WEBSITE, 'N' AS DIRECT_MAIL, 'Y' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    12  UNION ALL SELECT TO_DATE('23/06/1998','dd/mm/yyyy') AS CDATE, 'N' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'Y' AS JOURNAL_AD FROM DUAL
    13  UNION ALL SELECT TO_DATE('13/09/1998','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'Y' AS JOURNAL_AD FROM DUAL
    14  UNION ALL SELECT TO_DATE('01/04/1999','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'Y' AS JOURNAL_AD FROM DUAL
    15  UNION ALL SELECT TO_DATE('02/04/1999','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'N' AS WEBSITE, 'N' AS DIRECT_MAIL, 'Y' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    16  UNION ALL SELECT TO_DATE('23/06/1999','dd/mm/yyyy') AS CDATE, 'N' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    17  UNION ALL SELECT TO_DATE('13/09/1999','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    18  UNION ALL SELECT TO_DATE('01/04/2000','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'Y' AS JOURNAL_AD FROM DUAL
    19  UNION ALL SELECT TO_DATE('02/04/2000','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'Y' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    20  UNION ALL SELECT TO_DATE('23/06/2000','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'N' AS DIRECT_MAIL, 'N' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    21  UNION ALL SELECT TO_DATE('13/09/2000','dd/mm/yyyy') AS CDATE, 'Y' AS BROCHURE, 'Y' AS WEBSITE, 'Y' AS DIRECT_MAIL, 'Y' AS PRESS_RELEASE, 'N' AS JOURNAL_AD FROM DUAL
    22  )
    23  SELECT * FROM (
    24      SELECT cyear
    25            ,ctype
    26            ,RANK()       OVER (PARTITION BY cyear ORDER BY num_media DESC) ranking
    27            ,DENSE_RANK() OVER (PARTITION BY cyear ORDER BY num_media DESC) dense_ranking
    28      FROM (SELECT TRUNC(CDATE,'Y') CYEAR
    29                 ,'BROCHURE' CTYPE, SUM(DECODE(BROCHURE, 'Y', 1, 0)) NUM_MEDIA
    30            FROM test
    31            GROUP BY TRUNC(CDATE,'Y')
    32            UNION ALL
    33            SELECT TRUNC(CDATE,'Y') CYEAR
    34                 ,'WEBSITE' CTYPE, SUM(DECODE(WEBSITE, 'Y', 1, 0))
    35            FROM test
    36            GROUP BY TRUNC(CDATE,'Y')
    37            UNION ALL
    38            SELECT TRUNC(CDATE,'Y') CYEAR
    39                 ,'DIRECT_MAIL' CTYPE, SUM(DECODE(DIRECT_MAIL, 'Y', 1, 0))
    40            FROM test
    41            GROUP BY TRUNC(CDATE,'Y')
    42            UNION ALL
    43            SELECT TRUNC(CDATE,'Y') CYEAR
    44                 ,'PRESS_RELEASE' CTYPE, SUM(DECODE(PRESS_RELEASE, 'Y', 1, 0))
    45            FROM test
    46            GROUP BY TRUNC(CDATE,'Y')
    47            UNION ALL
    48            SELECT TRUNC(CDATE,'Y') CYEAR
    49                 ,'JOURNAL_AD' CTYPE, SUM(DECODE(JOURNAL_AD, 'Y', 1, 0))
    50            FROM test
    51            GROUP BY TRUNC(CDATE,'Y')
    52           )
    53      )
    54  where RANKING <= 4
    55* order by cyear desc, ranking
    56  /
    CYEAR                         CTYPE             RANKING DENSE_RANKING
    01-Jan-2000 00:00:00          WEBSITE                 1             1
    01-Jan-2000 00:00:00          BROCHURE                1             1
    01-Jan-2000 00:00:00          DIRECT_MAIL             3             2
    01-Jan-2000 00:00:00          PRESS_RELEASE           4             3
    01-Jan-1999 00:00:00          BROCHURE                1             1
    01-Jan-1999 00:00:00          WEBSITE                 1             1
    01-Jan-1999 00:00:00          DIRECT_MAIL             3             2
    01-Jan-1999 00:00:00          JOURNAL_AD              4             3
    01-Jan-1999 00:00:00          PRESS_RELEASE           4             3
    01-Jan-1998 00:00:00          BROCHURE                1             1
    01-Jan-1998 00:00:00          WEBSITE                 1             1
    01-Jan-1998 00:00:00          JOURNAL_AD              3             2
    01-Jan-1998 00:00:00          PRESS_RELEASE           4             3
    01-Jan-1998 00:00:00          DIRECT_MAIL             4             3
    01-Jan-1997 00:00:00          WEBSITE                 1             1
    01-Jan-1997 00:00:00          BROCHURE                2             2
    01-Jan-1997 00:00:00          JOURNAL_AD              3             3
    01-Jan-1997 00:00:00          DIRECT_MAIL             4             4
    01-Jan-1996 00:00:00          BROCHURE                1             1
    01-Jan-1996 00:00:00          WEBSITE                 2             2
    01-Jan-1996 00:00:00          DIRECT_MAIL             3             3
    01-Jan-1996 00:00:00          PRESS_RELEASE           4             4

  • Dimension foreign key on Fact gets updated with same value

    Hi All,
    I'm hoping someone has at least seen this problem and can provide insight.
    Background:
    - We are on OWB 10.2
    - Fact table is Billing_F
    - Fact table has foreign keys to Customer_Dim, SalesOrg_Dim, Time_Dim, etc.
    - Fact table has update mode Update/Insert
    Once in a while, every row in the Billing Fact gets updated with the SAME SalesOrg_Dim foreign key column value. In other words, the data would look like:
    Before:
    Line ID  Amt  SalesOrg_Key
    1        100  19241
    2        200  21925
    3        140  08585
    After:
    Line ID  Amt  SalesOrg_Key
    1        100  12345
    2        200  12345
    3        140  12345
    *** The Source tables contain values in the Before table. ***
    We can't reproduce this on a consistent basis. The only way to "correct" the data is to rerun the mapping from source -> staging -> fact again. No other change.
    Please, has anyone else at least experienced this problem? If so, how were you able to fix it permanently?
    Any help is greatly appreciated!
    irene

    Hi Irene
    Difficult to help here without reviewing your mappings/schedule, but does SalesOrg_Key 12345 represent a particular value, e.g. Unknown or Default?
    I suspect it is something to do with your load order: the data in the dimension is not available/ready when the fact is loaded, so a default value is being inserted. When the fact load is rerun, I'm guessing the load of the dimension has finished, and therefore the key lookup returns the proper keys from the dimension. Is the dimension truncated and reloaded?
    Regards
    Si

  • Grouping consecutive rows with same value?

    Hi
    I have a table in which three columns are:
    ROWID  SHOPNAME
    1      SHOP_C
    2      SHOP_A
    3      SHOP_C
    4      SHOP_C
    5      SHOP_A
    6      SHOP_A
    7      SHOP_C
    8      SHOP_B
    9      SHOP_B
    10     SHOP_E
    11     SHOP_A
    12     SHOP_D
    13     SHOP_D
    14     SHOP_F
    15     SHOP_G
    16     SHOP_F
    17     SHOP_C
    18     SHOP_C
    19     SHOP_C
    20     SHOP_C

    Rowid is a primary key column with unique ids.
    Shopname is a varchar2 column.
    I want to make groupings of the "shopname" values, ordered by ROWID, such that consecutive rows with the same shopname stay in the same group, and as soon as the next row has a different shopname the group changes.
    Below is my desired output, with the groupings as a third column:
    ROWID  SHOPNAME    Groups
    1      SHOP_C      1
    2      SHOP_A      2
    3      SHOP_C      3 ------> same grouping because Shop name is same
    4      SHOP_C      3 ------> same grouping because shop name is same
    5      SHOP_A      4 ----> different shopname so group changed.
    6      SHOP_A      4 ----> same group as above because shopname is same
    7      SHOP_C      5
    8      SHOP_B      6
    9      SHOP_B      6
    10     SHOP_E      7
    11     SHOP_A      8
    12     SHOP_D      9
    13     SHOP_D      9
    14     SHOP_F      10
    15     SHOP_G      11
    16     SHOP_F      12
    17     SHOP_C      13
    18     SHOP_C      13
    19     SHOP_C      13
    20     SHOP_C      13

    I want this done via analytics, so that I can use a partition by clause for different shops over different cities.
    regards
    Ramis.

    with data(row_id, shop) as
    (
    select 1,      'SHOP_C' from dual union all
    select 2,      'SHOP_A' from dual union all     
    select 3,      'SHOP_C' from dual union all
    select 4,      'SHOP_C' from dual union all    
    select 5,      'SHOP_A' from dual union all     
    select 6,      'SHOP_A' from dual union all
    select 7,      'SHOP_C' from dual union all
    select 8,      'SHOP_B' from dual union all
    select 9,      'SHOP_B' from dual union all
    select 10,     'SHOP_E' from dual union all
    select 11,     'SHOP_A' from dual union all
    select 12,     'SHOP_D' from dual union all
    select 13,     'SHOP_D' from dual union all
    select 14,     'SHOP_F' from dual union all
    select 15,     'SHOP_G' from dual union all
    select 16,     'SHOP_F' from dual union all
    select 17,     'SHOP_C' from dual union all
    select 18,     'SHOP_C' from dual union all
    select 19,     'SHOP_C' from dual union all
    select 20,     'SHOP_C' from dual
    ),
    get_data as
    (
      select row_id, shop,
             lag(shop) over (order by row_id) prev_shop
        from data
    )
    select row_id, shop,
           sum(case when shop = prev_shop
                      then  0
                      else 1
                      end
                ) over (order by row_id) grps
      from get_data;
    ROW_ID SHOP   GRPS
         1 SHOP_C    1
         2 SHOP_A    2
         3 SHOP_C    3
         4 SHOP_C    3
         5 SHOP_A    4
         6 SHOP_A    4
         7 SHOP_C    5
         8 SHOP_B    6
         9 SHOP_B    6
        10 SHOP_E    7
        11 SHOP_A    8
        12 SHOP_D    9
        13 SHOP_D    9
        14 SHOP_F   10
        15 SHOP_G   11
        16 SHOP_F   12
        17 SHOP_C   13
        18 SHOP_C   13
        19 SHOP_C   13
        20 SHOP_C   13
    20 rows selected
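
    If the same grouping is needed per city, as hinted at in the question, one possible sketch (this assumes a CITY column exists in the real table; the table name SHOPS is made up for illustration):
    select row_id, city, shop,
           sum(case when shop = prev_shop then 0 else 1 end)
               over (partition by city order by row_id) grps
      from (select row_id, city, shop,
                   lag(shop) over (partition by city order by row_id) prev_shop
              from shops) t;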

  • Query to insert a row in a table with same values except 1 field.

    Suppose I have a table with 100 columns (fields).
    I want to insert a row with 99 fields being the same as an existing row and one field being different.
    The table doesn't have any constraints.
    Kindly suggest a query for this purpose.

    And for much lazier people, a DESC of the table is shorter to write. :-)
    Then copy & paste into any editor and use column mode to add a comma after every column name in one shot. Or have the system do it for you. Be lazy.
    SELECT Column_Name || ',' FROM User_Tab_Columns WHERE Table_Name = '';
    Indeed, it can be converted to a script called desc(.sql):
    SELECT Column_Name || ',' FROM User_Tab_Columns WHERE Table_Name = '&1' ORDER BY Column_Id;
    Then desc or @desc.
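
    Once the column list has been generated, the statement itself is just an INSERT ... SELECT; a minimal sketch, with a made-up table and made-up column names:
    -- Sketch only: MY_TABLE, COL1..COL3 and the literal values are illustrative.
    INSERT INTO my_table (col1, col2, col3)
    SELECT col1, 'NEW_VALUE', col3   -- col2 replaced, everything else copied as-is
    FROM   my_table
    WHERE  col1 = 'OLD_KEY';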

  • Purchase info record with same values

    Hi,
    I want to create purchase info records with the same material, vendor, purchasing org. and plant. How can it be done?
    This is required because it is possible for the same vendor to deliver material for the same plant and purchasing organisation. Is there a workaround, or any reason for this restriction?
    Regards

    In SAP it is not possible to have 2 different info records for the combination of material / vendor / purchasing org / plant / info record type.
    Could you please explain more regarding the requirement?
    Regards
    Mani

  • PinListener triggers many times with same value, then segfaults

    Hi.
    Trying a simple button listener with an RPi connected to a Gertboard: whenever I press the button connected to the configured GPIO (GPIO3 in this case, since only GPIO2 and GPIO3 seem suitable for using the internal pull-ups) I get many notifications. Keeping the button pressed raises a segmentation fault very, very soon.
    This is my PinListener implementation:
    public class ButtonListener implements PinListener {
        @Override
        public void valueChanged(PinEvent event) {
            System.out.println(String.format("Pin %s value changed, now it is %b",
                    event.getDevice().toString(),
                    event.getValue()));
        }
    }
    Sadly, the log cannot be captured when there is a segmentation fault, but after many lines like:
    Pin com.oracle.deviceaccess.gpio.impl.GPIOPinImpl@b4900588 value changed, now it is true
    Pin com.oracle.deviceaccess.gpio.impl.GPIOPinImpl@b4900588 value changed, now it is true
    Pin com.oracle.deviceaccess.gpio.impl.GPIOPinImpl@b4900588 value changed, now it is true
    Pin com.oracle.deviceaccess.gpio.impl.GPIOPinImpl@b4900588 value changed, now it is true
    ...I can see on the RPi monitor errors like:
    javacall_event_send: write of message size failed : Resource temporarily unavailable
    Any suggestions?
    Thanks!

    Connection is made through gertboard: you can find schematics here: https://dl.dropboxusercontent.com/u/7414592/JavaME/Assembled%20Gertboard%20Schematics.pdf
    Specifically, I left the P1/CON2 jumper unconnected (i.e. open) and connected RPi Pin 3 (marked as GPIO2 on the J2 connector of my board, while it is marked as GPIO0 on the schematics, due to pinout revision #2 versus #1).
    It seems that the problem comes from the pull-up setting, which could cause flickering when floating...

  • Material code line repeats with same values in a Query

    Hi Experts,
    I have created a small query (SQVI) to fetch the material master details of a plant. I see a lot of material codes repeating with the same data in each report I extract, so every time I have to download the report to Excel, filter it and then submit it to the users.
    Please advise your thoughts
    Tables used (in order) MARC > MBEW > MARA > MAKT
    Primary input fields are Plant & Material code
    other input fields are Material group and Material type
    Output fields are
    Plant
    Material code
    UOM
    Mat Type
    Description
    Old Material number
    Mat group
    MAP
    Std Price
    Val class
    Proc type
    Special Proc type
    Please help with your valuable inputs
    Thanks,
    Rajoo

    If a material has descriptions in several languages, then you get one record per language.
    If your material has split valuation active, then you get one record per valuation type.
    Add the language to your selection criteria to select only the description in a certain language (do the same for the valuation type if your material is split valuated).
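
    Expressed in plain SQL terms, just to show where the duplicates come from (SQVI builds the join itself; SPRAS is the language key on MAKT, and 'E' and the MBEW join are only illustrative):
    select marc.werks, mara.matnr, makt.maktx, mbew.verpr, mbew.stprs
      from marc
      join mara on mara.matnr = marc.matnr
      join makt on makt.matnr = mara.matnr
      join mbew on mbew.matnr = marc.matnr
               and mbew.bwkey = marc.werks
     where makt.spras = 'E'   -- one description language only
       and mbew.bwtar = ' '   -- header valuation record only, if split valuation is active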

  • Group Record Number for consecutive Rows with same value.

    I have following Data in TEST table
    END_DATE      HOURS
    ====================
    8/30/2012     20
    7/30/2012     30
    7/1/2012      30
    6/30/2012     20
    5/30/2012     20
    5/1/2012      20
    I would like to get the following result (GRP_REC_NUM column)
    END_DATE      HOURS  GRP_REC_NUM
    ================================
    8/30/2012     20     1
    7/30/2012     30     1
    7/1/2012      30     2
    6/30/2012     20     1   <<----- This should reset back to 1
    5/30/2012     20     2
    5/1/2012      20     3
    Thanks
    slokam

    slokam wrote:
    I would like to get the following result (GRP_REC_NUM column)
    END_DATE      HOURS  GRP_REC_NUM
    ================================
    8/30/2012     20     1
    7/30/2012     30     1
    7/1/2012      30     2
    6/30/2012     20     1   <<----- This should reset back to 1
    5/30/2012     20     2
    5/1/2012      20     3
    I could not understand why 5/30/2012 has GRP_REC_NUM as 2. According to my understanding, you are trying to assign ranks to each of the dates, and if so, then 7/1/2012 should have GRP_REC_NUM as 1 and 7/30/2012 as 2.
    Since the logic for assigning the ranks is not clear, it would be helpful (for you as well as us) if you could elaborate on it.
    Below is probably what you need.
    with data as
    (
    select to_date('08/30/2012', 'MM/DD/YYYY') dt, 20 hrs from dual union all
    select to_date('07/30/2012', 'MM/DD/YYYY') dt, 30 hrs from dual union all
    select to_date('07/1/2012', 'MM/DD/YYYY') dt, 30 hrs from dual union all
    select to_date('06/30/2012', 'MM/DD/YYYY') dt, 20 hrs from dual union all
    select to_date('05/30/2012', 'MM/DD/YYYY') dt, 20 hrs from dual union all
    select to_date('05/1/2012', 'MM/DD/YYYY') dt, 20 hrs from dual
    )
    select dt, hrs,
           dense_rank() over (partition by to_char(dt, 'MM') order by dt) rn
      from data
    order by dt;
    DT                        HRS                    RN                    
    01-MAY-12                 20                     1                     
    30-MAY-12                 20                     2                     
    30-JUN-12                 20                     1                     
    01-JUL-12                 30                     1                     
    30-JUL-12                 30                     2                     
    30-AUG-12                 20                     1
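
    If the intent is instead a counter that restarts at 1 for every run of consecutive equal HOURS values when the dates are read in descending order (which is what the sample output shows), a sketch using the row-number-difference trick:
    with data as
    (
    select to_date('08/30/2012', 'MM/DD/YYYY') dt, 20 hrs from dual union all
    select to_date('07/30/2012', 'MM/DD/YYYY') dt, 30 hrs from dual union all
    select to_date('07/01/2012', 'MM/DD/YYYY') dt, 30 hrs from dual union all
    select to_date('06/30/2012', 'MM/DD/YYYY') dt, 20 hrs from dual union all
    select to_date('05/30/2012', 'MM/DD/YYYY') dt, 20 hrs from dual union all
    select to_date('05/01/2012', 'MM/DD/YYYY') dt, 20 hrs from dual
    ),
    runs as
    (
    select dt, hrs,
           row_number() over (order by dt desc)
           - row_number() over (partition by hrs order by dt desc) grp
      from data
    )
    select dt, hrs,
           row_number() over (partition by hrs, grp order by dt desc) grp_rec_num
      from runs
     order by dt desc;
    For the sample rows this returns GRP_REC_NUM values 1, 1, 2, 1, 2, 3 in END_DATE descending order.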

  • Populating internal table field with same value

    Is there syntax that will fill a field value in every record in an internal table without looping?

    Hi Rob,
    I didn't know this. After studying the online help on MODIFY itab (ABAP statement):
    MODIFY itab - itab_lines
    Syntax
    ... itab FROM wa TRANSPORTING comp1 comp2 ... WHERE log_exp.
    I tried
    DATA:
        ls_t100 TYPE t100,
        lt_t100 TYPE TABLE OF t100.
      SELECT * INTO TABLE lt_t100 FROM t100 UP TO 20 ROWS.
      MODIFY lt_t100 FROM ls_t100 TRANSPORTING text WHERE text <> ls_t100-text.
    This clears the field TEXT in all rows of lt_t100.
    After many years of field-symbols, finally a reason to use MODIFY again.
    Regards,
    Clemens

  • Macro/VBA script to merge rows with same values in another column

    Hi.
    I'm developing a dance competition application, using Excel 2010, and have so far managed to put judges' marked scores into a worksheet through a userform.
    Now I would like to make the worksheet more presentable as a scoreboard, as you would manually, but via VBA scripts.
    Exhibit numbers (column G) are unique identifiers (per competition) for contestants, and each contestant is typically judged by three judges. The scores in three separate categories from one judge go into one row, so each contestant has three
    rows.
    I would like to merge the rows in the columns where the totals from the judges go (columns P to U) for each contestant.
    I've considered using some kind of loop, but I don't have enough experience in VBA scripting and it gets overcomplicated. Could someone please help?
    Many thanks.
    Maki Koyama (Canberra, AUS)

    Hi,
    You cannot add a static "Y" inside a loop over "i". You need to ensure that "Y" changes along with X. Try the modified code below.
    List elements = new ArrayList();
    for (int i = 0; i < XXX; i++) {
        IZZZ.IVisibilityElement el = wdContext.createVisibilityElement();
        el.setVisAttr(i); // Change Y to i
        elements.add(el);
    }
    wdContext.nodeVisibility().bind(elements);
    That should give you an idea of what is erroneous in the code.
    Thanks.
    HTH.
    p256960

  • Counting one of multiple rows with same value for same ID

    Hello,
    I have this query:
    SELECT
    count( case when EXISTS (
                SELECT *
                FROM  business_log bl, subject su
                WHERE  su.ID_SUBJECT = s.ID_SUBJECT
                and bl.id_subject = su.id_subject
                AND    bl.value = 'Solved')
    then 1 else null end) num_solved
    FROM subject s
    It is possible that one subject is 'Solved' more than once in the table BUSINESS_LOG.
    I want to count only one solved row per subject.
    I must use only the SUBJECT table in the main query because of other counts.
    Thank You very much.
    Regards
    Milos
    Message was edited by: 2796614

    So in case you have no solved entry, you get NULL back instead of 0...
    And there is no need for the group by, as you are only counting per subject:
    SELECT COALESCE((SELECT max(1)
                      FROM  business_log bl
                     WHERE bl.id_subject = s.id_subject
                       AND bl.value = 'Solved'),0) num_solved
      FROM subject s
    But I would like to have proof that this one is faster than the outer-join solution.
    So perhaps we'll see an execution plan for both of them.
    Corrected: missing bracket.
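
    For comparison, a sketch of the outer-join variant mentioned above, using the same table and column names as in the thread:
    select count(distinct case when bl.value = 'Solved'
                                then s.id_subject end) num_solved
      from subject s
      left join business_log bl
        on bl.id_subject = s.id_subject;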
