Need help with group by query with condition

Name  ID
A1    1
A2    1
A3    1
A1    1
A2    2
A2    3
A3    3
A1    4
A2    4
A3    5
I want to get the total count for each Name where ID is less than 4:
A1 - 2
A2 - 3
A3 - 2

Hi,
If you want to get 0 counts for names that don't have any low ids, then use a CASE expression instead of a WHERE clause.
You didn't post CREATE TABLE and INSERT statements for your sample data, so I'll use scott.emp to illustrate.
Say you want to see how many sals in each job are under 3000, like this:
JOB       UNDER_3_CNT
CLERK               4
SALESMAN            4
PRESIDENT           0
MANAGER             3
ANALYST             0
Here's one way to do that:
SELECT    job
,         COUNT (CASE WHEN sal < 3000 THEN 1 END)  AS under_3_cnt
FROM      scott.emp
GROUP BY  job
This shows that there are ANALYSTs and PRESIDENTs in the scott.emp table; there just don't happen to be any with sals under 3000.
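Applying the same idea to your posted data (a sketch only; you didn't post CREATE TABLE or INSERT statements, so I'm assuming a table T with columns NAME and ID):
-- WHERE version: a name with no id below 4 would drop out of the results.
SELECT    name, COUNT(*)  AS cnt
FROM      t
WHERE     id < 4
GROUP BY  name;
-- CASE version: every name appears, with 0 when it has no id below 4.
SELECT    name
,         COUNT (CASE WHEN id < 4 THEN 1 END)  AS cnt
FROM      t
GROUP BY  name;
Either should return A1 - 2, A2 - 3, A3 - 2 for the sample data above.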

Similar Messages

  • Need help in optimizing the query with joins and group by clause

    I am having a problem executing the query below; it is taking a lot of time. To simplify, I have added the two tables: FILE_STATUS, which stores the file load details, and COMM, the actual business commission table showing which records were successfully processed and which were transmitted to the other system. Records with status = 'T' have been transmitted to the other system, and transactions with 'P' are pending.
    CREATE TABLE FILE_STATUS
    (FILE_ID VARCHAR2(14),
    FILE_NAME VARCHAR2(20),
    CARR_CD VARCHAR2(5),
    TOT_REC NUMBER,
    TOT_SUCC NUMBER);
    CREATE TABLE COMM
    (SRC_FILE_ID VARCHAR2(14),
    REC_ID NUMBER,
    STATUS CHAR(1));
    INSERT INTO FILE_STATUS VALUES ('12345678', 'CM_LIBM.TXT', 'LIBM', 5, 4);
    INSERT INTO FILE_STATUS VALUES ('12345679', 'CM_HIPNT.TXT', 'HIPNT', 4, 0);
    INSERT INTO COMM VALUES ('12345678', 1, 'T');
    INSERT INTO COMM VALUES ('12345678', 3, 'T');
    INSERT INTO COMM VALUES ('12345678', 4, 'P');
    INSERT INTO COMM VALUES ('12345678', 5, 'P');
    COMMIT;
    Here is the query that I wrote to give me the details of the file that has been loaded into the system. It reads the file status and commission tables to show the file name, total records loaded, total records successfully loaded to the commission table, and the number of records that have finally been transmitted (status = 'T') to other systems.
    SELECT
        FS.CARR_CD
        ,FS.FILE_NAME
        ,FS.FILE_ID
        ,FS.TOT_REC
        ,FS.TOT_SUCC
        ,NVL(C.TOT_TRANS, 0) TOT_TRANS
    FROM FILE_STATUS FS
    LEFT JOIN (
        SELECT SRC_FILE_ID, COUNT(*) TOT_TRANS
        FROM COMM
        WHERE STATUS = 'T'
        GROUP BY SRC_FILE_ID
    ) C ON C.SRC_FILE_ID = FS.FILE_ID
    WHERE FILE_ID = '12345678';
    In production this query has more joins and is taking a lot of time to process. The main culprit for me is the join on the COMM table to get the count of transactions transmitted. Can you please give me tips to optimize this query to get results faster? Do I need to remove the GROUP BY and use PARTITION BY, or something else? Please help!

    I get 2 rows if I use my query with your new criteria. Did you commit the record if you are using a second connection to query? Did you remove the criteria for file_id?
    select carr_cd, file_name, file_id, tot_rec, tot_succ, tot_trans
      from (select fs.carr_cd,
                   fs.file_name,
                   fs.file_id,
                   fs.tot_rec,
                   fs.tot_succ,
                   count(case
                            when c.status = 'T' then
                             1
                            else
                             null
                          end) over(partition by c.src_file_id) tot_trans,
                   row_number() over(partition by c.src_file_id order by null) rn
              from file_status fs
              left join comm c
                on c.src_file_id = fs.file_id
             where carr_cd = 'LIBM')
    where rn = 1;
    CARR_CD FILE_NAME            FILE_ID           TOT_REC   TOT_SUCC  TOT_TRANS
    LIBM    CM_LIBM.TXT          12345678                5          4          2
    LIBM    CM_LIBM.TXT          12345677               10          0          0
    Using RANK can potentially cause multiple rows to be returned, though your data may prevent this. ROW_NUMBER will always prevent duplicates. The ordering of the analytic function is irrelevant in your query if you use ROW_NUMBER. You can remove the outermost query and inspect the data returned by the inner query:
    select fs.carr_cd,
           fs.file_name,
           fs.file_id,
           fs.tot_rec,
           fs.tot_succ,
           count(case
                    when c.status = 'T' then
                     1
                    else
                     null
                  end) over(partition by c.src_file_id) tot_trans,
           row_number() over(partition by c.src_file_id order by null) rn
    from file_status fs
    left join comm c
    on c.src_file_id = fs.file_id
    where carr_cd = 'LIBM';
    CARR_CD FILE_NAME            FILE_ID           TOT_REC   TOT_SUCC  TOT_TRANS         RN
    LIBM    CM_LIBM.TXT          12345678                5          4          2          1
    LIBM    CM_LIBM.TXT          12345678                5          4          2          2
    LIBM    CM_LIBM.TXT          12345678                5          4          2          3
    LIBM    CM_LIBM.TXT          12345678                5          4          2          4
    LIBM    CM_LIBM.TXT          12345677               10          0          0          1
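    For what it's worth, another common rewrite is to replace the join with a scalar subquery, so COMM is only counted for the files you actually return. This is just a sketch; whether it is faster than the analytic version depends on your data volumes and indexing (an index on COMM (SRC_FILE_ID, STATUS) would typically help the subquery, but that is an assumption about your schema):
    SELECT fs.carr_cd,
           fs.file_name,
           fs.file_id,
           fs.tot_rec,
           fs.tot_succ,
           (SELECT COUNT(*)
              FROM comm c
             WHERE c.src_file_id = fs.file_id
               AND c.status      = 'T') AS tot_trans
      FROM file_status fs
     WHERE fs.file_id = '12345678';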

  • Need help on group by query problem.

    Hi,
    I've a table like..
         CREATE TABLE DPT(DEPTNO NUMBER(4),HIREDATE DATE);
         INSERT INTO DPT VALUES(10,SYSDATE);
         INSERT INTO DPT VALUES(10,SYSDATE+1);
         INSERT INTO DPT VALUES(10,SYSDATE+2);
         INSERT INTO DPT VALUES(10,SYSDATE+3);
         INSERT INTO DPT VALUES(10,SYSDATE+4);
         INSERT INTO DPT VALUES(10,SYSDATE+5);
         INSERT INTO DPT VALUES(20,SYSDATE);
         INSERT INTO DPT VALUES(20,SYSDATE+1);
         INSERT INTO DPT VALUES(20,SYSDATE+2);
         INSERT INTO DPT VALUES(20,SYSDATE+3);
         INSERT INTO DPT VALUES(30,SYSDATE);
         INSERT INTO DPT VALUES(30,SYSDATE+1);
         INSERT INTO DPT VALUES(30,SYSDATE+2);
         INSERT INTO DPT VALUES(30,SYSDATE+3);
         INSERT INTO DPT VALUES(30,SYSDATE+4);
    SELECT DEPTNO,HIREDATE FROM DPT;
    DEPTNO     HIREDATE
    10     30-OCT-09
    10     31-OCT-09
    10     1-NOV-09
    10     2-NOV-09     
    10     3-NOV-09
    10     4-NOV-09
    20     30-OCT-09
    20     31-OCT-09
    20     1-NOV-09
    AND SO ON...
    Actually the DPT table has 11 lakh (1.1 million) records, and every deptno has different hiredates.
    I want to retrieve each deptno with its minimum hiredate, i.e. the first date for each deptno.
    So I've written a query like this using the GROUP BY clause, but when I use it,
    it takes almost 15 seconds to give the result.
    SELECT DEPTNO,MIN(HIREDATE) FROM DPT GROUP BY DEPTNO;
    DEPTNO     HIREDATE
    10     30-OCT-09
    20     30-OCT-09
    30     30-OCT-09
    So is there any other way of achieving the above result without using GROUP BY, since it is taking so much time?
    I am using Oracle 10g and TOAD version 8.6.1.
    I've analyzed the table, and there are indexes on both columns.
    Explain plan result below:
    PLAN_TABLE_OUTPUT
    Plan hash value: 682192801
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 231K| 2938K| | 3804 (4)| 00:00:46 |
    | 1 | SORT GROUP BY | | 231K| 2938K| 32M| 3804 (4)| 00:00:46 |
    | 2 | INDEX FAST FULL SCAN| DPT_NDX1 | 1180K| 14M| | 1019 (1)| 00:00:13 |
    Please give the other ways of writing the same query for improving the performance.
    Regards,
    Arjun.
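    (A sketch of the kind of analytic rewrite that the follow-up reply and its explain plan appear to refer to, assuming the same DPT table; it keeps only the first row per deptno:)
    SELECT deptno, hiredate
      FROM (SELECT deptno,
                   hiredate,
                   ROW_NUMBER() OVER (PARTITION BY deptno
                                      ORDER BY hiredate) AS rno
              FROM dpt)
     WHERE rno = 1;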

    Hi Karhick,
    Thanks for the reply. It is giving the expected result, but performance has not improved. Please find the explain plan below:
    PLAN_TABLE_OUTPUT
    Plan hash value: 1880836313
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1180K| 32M| | 6714 (2)| 00:01:21 |
    |* 1 | VIEW | | 1180K| 32M| | 6714 (2)| 00:01:21 |
    |* 2 | WINDOW SORT PUSHED RANK| | 1180K| 14M| 54M| 6714 (2)| 00:01:21 |
    | 3 | INDEX FAST FULL SCAN | DPT_NDX1 | 1180K| 14M| | 1019 (1)| 00:00:13 |
    Predicate Information (identified by operation id):
    1 - filter("RNO"=1)
    2 - filter(ROW_NUMBER() OVER ( PARTITION BY "DEPTNO" ORDER BY "HIREDATE")<=1)
    What do I have to do now? Please let me know.
    Regards,
    Arjun.
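    (Not from the thread, but one more thing worth testing: if the goal is only MIN(hiredate) per deptno, a composite index sometimes helps more than rewriting the query, since the optimizer can read deptno/hiredate pairs together without a separate sort. A sketch, assuming you are free to add indexes; whether the optimizer actually uses it depends on your statistics:)
    CREATE INDEX dpt_deptno_hiredate_ix ON dpt (deptno, hiredate);
    SELECT deptno, MIN(hiredate)
      FROM dpt
     GROUP BY deptno;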

  • Need help in Group by query

    Hi All,
    I am using Oracle 11 g.
    I have a query which is giving output as
    SELECT o.campgn_no , m.lang , COUNT(1)
    FROM tg_main m, sp_offer o
    WHERE m.offer_id = o.offer_id AND o.campgn_no IN (SELECT campgn_no FROM sp_campgn_cntrl WHERE product_code = 'ONB')
    GROUP BY o.campgn_no, m.lang;
    Campgn Lang Count
    CNB26 F 4
    ONB26 E 1065
    CNB26 E 316
    ONB26 F 96
    But I need the output as
    Campgn     E     F
    ONB26     1065     96
    CNB26     316     4
    Let me know how to do this.
    Thanks in Advance.
    Thanks,
    lony

    Maybe (seems I'd better leave the Forum for a while)
    select campgn_no,
           max(e) e,
           max(f) f
      from (SELECT o.campgn_no,
                   sum(case when m.lang = 'E' then 1 end) e,
                   sum(case when m.lang = 'F' then 1 end) f
              FROM tg_main m,
                   sp_offer o
             WHERE m.offer_id = o.offer_id
               AND o.campgn_no IN (SELECT campgn_no
                                     FROM sp_campgn_cntrl
                                    WHERE product_code = 'ONB')
             GROUP BY o.campgn_no, m.lang)
     GROUP BY campgn_no;
    Regards
    Etbin
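    Since you are on 11g, the PIVOT clause is another option; a sketch only, assuming the same tables and the two known lang values:
    SELECT *
      FROM (SELECT o.campgn_no, m.lang
              FROM tg_main m
              JOIN sp_offer o
                ON m.offer_id = o.offer_id
             WHERE o.campgn_no IN (SELECT campgn_no
                                     FROM sp_campgn_cntrl
                                    WHERE product_code = 'ONB'))
     PIVOT (COUNT(*) FOR lang IN ('E' AS e, 'F' AS f))
     ORDER BY campgn_no;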

  • Need help in refining the query

    Hello Experts,
    Need your help in refining the query further.
    table structure 
    Mskey  Col A Col B
    1   empno [20141127-20151128]1234
    1   empno [20151201-99991231]232544
    1   salutation [20141127-99991231]Mrs
    1   salutation [20151127-99991231]Mr
    2   empno [20141127-20151128]1234
    2   empno [20151201-99991231]232544
    2   salutation [20141127-99991231]Mrs
    2   salutation [20151127-99991231]Mr
    My requirement is to find the list of overlapping records based on the dates.
    User details may vary from time to time, as new data is pushed from the HR systems to the identity store via an interface.
    The job fails whenever there are overlapping dates. So we decided to proactively schedule a weekly job that tells us which users have overlapping dates, so that we can send the list to the HR team to push new data.
    Overlapping Issue Example:
    The user's employee id is 1234 for a year; later he moves to another department, his employee id changes, and it becomes 2345, while all remaining details stay the same. So the HR systems send the data for this user as empno - [20141127-20151128]1234 and empno - [20151201-99991231]232544
    It means that from 20141127 to 20151128 his employee no is 1234, and from 20151201 to 99991231 his employee no would be 2345.
    This is a correct case and the tool would accept this data.
    The below cases are invalid:
    Case 1: 1 salutation [20141127-99991231]Mrs 1 salutation [20151127-99991231]Mr
    Case 2: 2 salutation [20141127-99991231]Mrs 2 salutation [20141127-99991231]Mr
    So we wanted to find these overlapping records from tables.
    My Query:
    I am able to successfully write a query for the Case 2 type, but unable to write one for Case 1.
    select id, colA,
    count(left(ColB,CHARINDEX(']',ColB))) as 'Count'
    from tblA with (nolock)
    where id in (Select distinct id from tblb with (nolock))
    group by id, cola,left(ColB,CHARINDEX(']',ColB))
    having count(left(ColB,CHARINDEX(']',ColB)))>1

    Finally got the required answer with the below query
    WITH Cte AS
    (
    SELECT ID, ColA, ColB, ROW_NUMBER() OVER(ORDER BY (SELECT 1)) AS RN,
    CAST(SUBSTRING(ColB,2,CHARINDEX('-',ColB)-2) AS DATE) AS StartDT,
    CAST(SUBSTRING(ColB,CHARINDEX('-',ColB)+1,8) AS DATE)  AS EndDT
    FROM TblA
    )
    SELECT c1.ID, c1.ColA, c1.ColB
    FROM Cte c1 JOIN Cte c2
    ON c1.RN != c2.RN
    AND c1.ID = c2.ID
    AND c1.ColA = c2.ColA
    AND (c1.StartDT BETWEEN c2.StartDT AND c2.EndDT OR c2.StartDT BETWEEN c1.StartDT AND c1.EndDT);
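    (If you are on SQL Server 2012 or later, a LAG-based check is another way to flag overlaps; a sketch only, assuming the same TblA layout, that reports any row whose period starts before the previous period for the same ID and ColA has ended:)
    WITH Ordered AS
    (
    SELECT ID, ColA, ColB,
           CAST(SUBSTRING(ColB, 2, CHARINDEX('-', ColB) - 2) AS DATE) AS StartDT,
           LAG(CAST(SUBSTRING(ColB, CHARINDEX('-', ColB) + 1, 8) AS DATE))
               OVER (PARTITION BY ID, ColA
                     ORDER BY CAST(SUBSTRING(ColB, 2, CHARINDEX('-', ColB) - 2) AS DATE)) AS PrevEndDT
      FROM TblA
    )
    SELECT ID, ColA, ColB
      FROM Ordered
     WHERE StartDT <= PrevEndDT;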

  • Need help in group left report

    i need help in group left report
    I am making a group left report in which all the group data on top is repeated as many times as there are detail records.
    For example, if the output looks like this:
    group record:
    fielda fieldb field c
    fielda fieldb field c
    fielda fieldb field c
    fielda fieldb field c
    detail record :
    fieldd fielde feildf
    fieldd fielde feild
    fieldd fielde feild
    fieldd fielde feildf
    How can I show the group record on top only once, and under each group record show the detail records that belong to that master record?
    Please help me.
    A.R

    I think you should go for a group above report...
    And if you want group left only, and also want to display the master once, then
    put all master fields above, but take care that its master repeating frame
    does not change...
    If you want, I can send you an example...
    Enjoy Oracle...

  • Need help with writing a query with dynamic FROM clause

    Hi Folks,
    I need help with a query that should generate the "FROM" clause dynamically.
    My main query is as follows
    select DT_SKEY, count(*)
    from *???*
    where DT_SKEY between 20110601 and 20110719
    group by DT_SKEY
    having count(*) = 0
    order by 1;
    The "from" clause of the above query should be generated as below:
    select 'Schema_Name'||'.'||TABLE_NAME
    from dba_tables
    where OWNER = 'Schema_Name'
    Simply sticking the latter query in the first query does not work.
    Any pointers will be appreciated.
    Thanks
    rogers42

    Hi,
    rogers42 wrote:
    Hi Folks,
    I need help with an query that should generate the "FROM" clause dynamically.
    My main query is as follows
    select DT_SKEY, count(*)
    from *???*
    where DT_SKEY between 20110601 and 20110719
    group by DT_SKEY
    having count(*) = 0
    order by 1;
    The "from" clause of the above query should be generated as below
    select 'Schema_Name'||'.'||TABLE_NAME
    from dba_tables
    where OWNER = 'Schema_Name'
    Remember that anything inside quotes is case-sensitive. Is the owner really "Schema_Name" with a capital S and a capital N, and 8 lower-case letters?
    Simply sticking the latter query in the first query does not work.
    Right; the table name must be given when you compile the query. It's not an expression that you can generate in the query itself.
    Any pointers will be appreciated.
    In SQL*Plus, you can do something like the query below.
    Say you want to count the rows in scott.emp, but you're not certain that the name is emp; it could be emp_2011 or emp_august, or anything else that starts with e. (And the name could change every day, so you can't just look it up now and hard-code it in a query that you want to run in the future.)
    Typically, how dynamic SQL works is that some code (such as a preliminary query) gets some of the information you need to write the query first, and you use that information in a SQL statement that is compiled and run after that. For example:
    -- Preliminary Query:
    COLUMN     my_table_name_col     NEW_VALUE my_table_name
    SELECT     table_name     AS my_table_name_col
    FROM     all_tables
    WHERE     owner          = 'SCOTT'
    AND     table_name     LIKE 'E%';
    -- Main Query:
    SELECT     COUNT (*)     AS cnt
    FROM     scott.&my_table_name
    ;
    This assumes that the preliminary query will find exactly one row; that is, it assumes that SCOTT has exactly one table whose name starts with E. Could you have 0 such tables in the schema, or more than 1? If so, what results would you want? Give a concrete example, preferably using commonly available tables (like those in the SCOTT schema) so that the people who want to help you can re-create the problem and test their ideas.
    Edited by: Frank Kulash on Aug 11, 2011 2:30 PM
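    (For completeness, the same two-step idea in PL/SQL with native dynamic SQL; a minimal sketch only, assuming exactly one matching table and that you just want the count:)
    DECLARE
        l_table_name  all_tables.table_name%TYPE;
        l_cnt         NUMBER;
    BEGIN
        -- Preliminary query: find the table name at run time.
        SELECT  table_name
        INTO    l_table_name
        FROM    all_tables
        WHERE   owner      = 'SCOTT'
        AND     table_name LIKE 'E%';
        -- Main query: built and executed only after the name is known.
        EXECUTE IMMEDIATE
            'SELECT COUNT(*) FROM scott.' || DBMS_ASSERT.SIMPLE_SQL_NAME(l_table_name)
            INTO l_cnt;
        DBMS_OUTPUT.PUT_LINE(l_table_name || ': ' || l_cnt);
    END;
    /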

  • Performance of APD with source as 'Query with conditions'

    Hi,
    I have a question regarding the performance of an APD query with conditions when it is run in the background.
    I have an InfoCube with 80 million records. I have a query to find the Top 15 (condition in query) customers per territory.
    With the help of an APD query (source) I wanted to populate the details (customer category, etc.) of the Top 15 customers into a DSO (target).
    Right now it takes 6 minutes to run the query on the web.
    I wanted to know how feasible it is to use an APD (run in background) to dump the results of the BEx query (which has the Top 15 condition) into the DSO.
    Also , what other options do i have ?
    Appreciate your answers.
    Regards

    Thomas thanks for the response.  I have checked the file on the app server and found that the lines with more than 512 characters are being truncated after 512 characters.  Basis team is not aware of any such limits on the app server.  I am sure few on the forum would have extracted data to a file on the application server using APD.  Please note that the file extraction to the client workstation is working as expected (posts lines more than 512 characters).

  • Need help in Creating DB manually with OFA

    Hi,
    I want to create database in 10g manually and for this purpose
    I need help.
    We can use OFA for storing control files, redo log files,
    and datafiles by describing them in the CREATE DATABASE command, but
    how do we store the instance-related files like the spfile, pfile, or network-related
    files?
    Once the Oracle 10g database software is installed and we want
    to create a database, how can we separate the database files and
    all other instance files from the software binaries? I've noted
    that creating a database with DBCA creates all instance files along with
    the binary files of the software, although DBCA gives you the option of creating
    the database-related files in some other location/directory.
    Kindly help me with this; I want to achieve the above task in compliance with
    OFA.
    Thanks and Regards,
    D.Abbasi

    Abbasi wrote:
    Hi,
    I want to create database in 10g manually and for this purpose
    I need help.
    We can use OFA for storing control files, redo log files,
    and datafiles by describing them in the CREATE DATABASE command, but
    how do we store the instance-related files like the spfile, pfile, or network-related
    files?
    Once the Oracle 10g database software is installed and we want
    to create a database, how can we separate the database files and
    all other instance files from the software binaries? I've noted
    that creating a database with DBCA creates all instance files along with
    the binary files of the software, although DBCA gives you the option of creating
    the database-related files in some other location/directory.
    Kindly help me with this; I want to achieve the above task in compliance with
    OFA.
    Thanks and Regards,
    D.Abbasi
    The OFA spec (as best I recall) states that one of the directory levels for all of your data files will be the db name. So if you name your database 'orcl', every file system in which you keep any files related to 'orcl' will have them in a directory named 'orcl'.
    As for what happened when you created your db: Oracle will default to putting things in the Oracle home structure, because that's the only location it can guarantee will be there. But you had the option (indeed, the obligation) to override these defaults.
    BTW, when using dbca, if you choose any of the pre-canned database types, the database will be created by doing a restore from a pre-canned backup. If you choose 'custom' database, the database will be created by way of a CREATE DATABASE script. It would be instructive, time well spent, for you to go through dbca and select 'custom database', pay attention to all of the options as you work through, then at the end deselect "create database" and select "generate scripts". Do the same for one of the pre-defined database types. Then study the scripts generated by each method.
    Edited by: EdStevens on Aug 16, 2009 8:46 PM
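    (Not part of the original reply, but for reference, a minimal sketch of an OFA-style CREATE DATABASE: the database files sit under $ORACLE_BASE/oradata/<db_name>/, while the pfile/spfile conventionally live under $ORACLE_BASE/admin/<db_name>/pfile and $ORACLE_HOME/dbs, and network files under $ORACLE_HOME/network/admin. All names, paths, sizes, and passwords below are placeholder assumptions:)
    CREATE DATABASE orcl
       USER SYS    IDENTIFIED BY change_me
       USER SYSTEM IDENTIFIED BY change_me
       LOGFILE GROUP 1 ('/u01/app/oracle/oradata/orcl/redo01.log') SIZE 100M,
               GROUP 2 ('/u02/app/oracle/oradata/orcl/redo02.log') SIZE 100M
       DATAFILE '/u01/app/oracle/oradata/orcl/system01.dbf' SIZE 500M REUSE
       SYSAUX DATAFILE '/u01/app/oracle/oradata/orcl/sysaux01.dbf' SIZE 300M REUSE
       DEFAULT TEMPORARY TABLESPACE temp
          TEMPFILE '/u01/app/oracle/oradata/orcl/temp01.dbf' SIZE 100M REUSE
       UNDO TABLESPACE undotbs1
          DATAFILE '/u01/app/oracle/oradata/orcl/undotbs01.dbf' SIZE 200M REUSE;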

  • Need help. importing pictures to timeline with various length.

    Need help; can't find an old tutorial.
    I am trying to make a template for a picture movie where I have everything ready, so I just need to add 250 pictures and then it will automatically shift the pictures to the beat of the music. I once saw a tutorial on how to do this but can't find it. It was something with manually setting markers at the music beats and then just adding all the pictures, but I must have missed something. Please help...
    Bo

    It's called Automate to Sequence with unnumbered markers.
    http://tv.adobe.com/watch/learn-premiere-pro-cs5/gs04-making-a-rough-cut-in-adobe-premiere-pro/
    http://help.adobe.com/en_US/premierepro/cs/using/WS1c9bc5c2e465a58a91cf0b1038518aef7-7d1ba.html

  • Need help in installing JBOSS along with Jdk version

    Hi,
    I am using Linux 64 bit Machine.
    I have Jdk1.5.0_18 and jboss-4.2.3.GA installed in my machine.
    But the JBOSS Server is failing to boot.
    What version of the JBOSS Server is needed for 64-bit along with Jdk1.5.0_18?
    Any help would be appreciated.
    Thanks and Regards

    The name of this forum is "Database - General" not JBOSS, JDK, or "anything goes."
    Please delete this post and post your question in the appropriate forum.
    Thank you.

  • Need Help Please: Audigy2 Value conflict with onboard sound

    System info: Windows XP Home Edition, 2.4 GHz Intel Pent4, 52MB of RAM, DDR 200 (33MHz), MSI MS-6547 (645 Ultra) motherboard. I need help! Sigh... well, I am trying to put a Soundblaster Audigy2 into my computer. I am under the impression that I have to disable and remove all onboard sound b/c it conflicts with the new sound card. I currently have onboard sound (AC97 Audio Controller). I went to add/remove programs and removed the AC97 from there. I also went to MyComputer/Properties/Hardware/DeviceManager/Sound,Video,andGameControllers/AC97AudioController and deleted it.
    1st Question: Under DeviceManager/Sound,Video,GameControllers... do I need to remove the Legacy Audio Drivers? I know that I have to delete the rest of the drivers for AC97, and by any chance would Legacy be those drivers? I am guessing no... Also, is there anything else that I have to delete for the onboard sound program to be gone?
    2nd Question: I went to the BIOS, and since I have Windows XP Home Edition I think that the configuration is a little different from what I read (I won't bore you with that). So, where do I go to "disable" the onboard sound in the BIOS for Windows XP Home Edition? Also, I read something about making sure the "Chip" was disabled, but I could be paranoid. Do I need to worry about that?
    If you made it through all that text I thank you dearly. I hope that I can get some help from someone b/c I know nothing. I am afraid of making the wrong choice in the BIOS and messing everything up. Thank You so Much! Ross
    Message Edited by RossRude3 on 04-06-2005 0:48 AM

    Ross,
    To fully disable the onboard audio you would need to go into the BIOS and disable the onboard audio chip there. The manual for your motherboard or PC should have more detail on where exactly to find this setting, as it will differ per BIOS type. Generally it should be under a heading like Integrated Peripherals, and should be marked as the Onboard Audio chip, or AC97 Audio.
    Cat

  • Need help in Custom component creation with cq

    Hi,
    I have created my own component in Adobe CQ and am able to drag and drop it on any page. For each instance I am seeing a copy of the
    component under the node /content/mypage/commoncomp. If I edit the properties using the design dialog, I am able to store values as
    expected.
    I need help with:
    1) When I drag and drop the component, it shows null for the property by default; if I click Save on the edit dialog, then the
    value shows. The default value is not shown when I drag and drop my component; it requires clicking Save from the dialog in
    design mode at least once.
    2) I need to use the default properties of the original (root) component, and use the duplicated properties only if the user edits the properties of the duplicated component. This may be too messy... To be more clear:
    I want to use a common component on many pages, but the properties of the component should be updated only when the user edits it in the design dialog; until then I need the page to use the properties of the root component.
    Thanks

    You just need to create your own version of PlumtreeTopBarView (original in com.plumtree.common.uiparts), and use CustomActivitySpace.xml to override that view with your own view. Your version would use some kind of logic to determine whether or not to display certain elements. The UI Customization Guide should help quite a bit, and we have several quickstarts for overriding views you can start from.
    "Is there some Help file in PT that I can reference that will enable me to find which namespace GetRequestURL() belongs to?"
    I would simply do a string compare. Tokenize the URL into the base address / control arguments, perhaps splitting on the "?" symbol; then tokenize the base address on the ".". You'll end up with three strings. For portal.plumtree.com, the three strings would be portal, plumtree, and com. You can then use those strings in the logic you add to PlumtreeTopBarView.
    David Phipps
    Plumtree Software

  • Need help on the below query or Pl-SQL

    Hello Every one,
    Please let me know if someone can help me out with the below scenario, either in PL/SQL or SQL.
    Source Data:
    0000253800     0.25          0845A     0900A
    0000253800     1          0900A     1000A
    0000253800     1          1300P     1400P
    0000253800     1          1500P     1600P
    0000253800     1          1600P     1700P
    Output needed:
    0000253800     1.25          0845A     1000A
    0000253800     1          1300P     1400P
    0000253800     2          1500P     1700P
    Thanks in Advance....
    Edited by: user12564103 on Dec 11, 2011 5:54 PM

    Hi,
    Welcome to the forum!
    Depending on your data and your requirements:
    WITH     got_times     AS
    (
         SELECT     column_1, column_2, column_3, column_4
         ,     TO_DATE ( SUBSTR (column_3, 1, 4)
                   , 'HH24MI'
                   )     AS time_3
         ,     TO_DATE ( SUBSTR (column_4, 1, 4)
                   , 'HH24MI'
                   )     AS time_4
         FROM     table_x
    )
    ,     got_grp_id     AS
    (
         SELECT     column_1, column_2, column_3, column_4
         ,     time_3, time_4
         ,     time_4 - SUM (time_4 - time_3) OVER ( PARTITION BY  column_1
                                        ORDER BY         time_3
                                      )     AS grp_id
         FROM     got_times
    )
    SELECT       column_1
    ,       SUM (column_2)     AS sum_2
    ,       MIN (column_3) KEEP (DENSE_RANK FIRST ORDER BY time_3)
                        AS min_3
    ,       MAX (column_4) KEEP (DENSE_RANK LAST  ORDER BY time_4)
                        AS max_4
    FROM       got_grp_id
    GROUP BY  column_1
    ,       grp_id
    ORDER BY  column_1
    ,       grp_id
    ;
    Whenever you have a problem, please post CREATE TABLE and INSERT statements for your sample data, as well as the results you want from that data. Explain, with specific examples, how you get the results you want from that data.
    Always say which version of Oracle you're using. The query above will work in Oracle 9.1 (and higher).
    Since this is your first thread, I'll do this for you:
    CREATE TABLE     table_x
    (     column_1     NUMBER
    ,     column_2     NUMBER
    ,     column_3     VARCHAR2 (5)
    ,     column_4     VARCHAR2 (5)
    );
    INSERT INTO table_x (column_1, column_2, column_3, column_4) VALUES (253800, .25, '0845A', '0900A');
    INSERT INTO table_x (column_1, column_2, column_3, column_4) VALUES (253800, 1,   '0900A', '1000A');
    INSERT INTO table_x (column_1, column_2, column_3, column_4) VALUES (253800, 1,   '1300P', '1400P');
    INSERT INTO table_x (column_1, column_2, column_3, column_4) VALUES (253800, 1,   '1500P', '1600P');
    INSERT INTO table_x (column_1, column_2, column_3, column_4) VALUES (253800, 1,   '1600P', '1700P');
    Column_1 identifies a day.
    Column_2 is an amount that I need to total.
    Column_3 and column_4 are starting and ending times. We can assume that they are all valid times (in 'HH24MI' format, plus a redundant 'A' or 'P') on the same day, that column_3 is always less than column_4, and that the ranges of two rows for the same value of column_1 never overlap. Column_4 of one row may be equal to column_3 of another row with the same column_1, but it will never be greater.
    Each row of the output represents a contiguous group of rows (each ending exactly when the next one begins) with the same column_1, showing the common column_1, the total of column_2, and the range of the group.
    For example, the first two rows form a single group, because they have the same value for column_1, and one ends exactly when the other begins (at 9:00 AM). This group represents day 253800, from 8:45 AM to 10:00 AM. The total of column_2 for this group is .25 + 1 = 1.25.
    The next row (from 1:00 PM to 2:00 PM) is a group all by itself, because there is a gap on either side of it separating it from its nearest neighbor on the same day.
    Of course, I'm guessing at lots of things.
    Edited by: Frank Kulash on Dec 11, 2011 9:44 PM
    Changed TO_DATE calls.
    Edited by: Frank Kulash on Dec 11, 2011 9:52 PM
    Added sample question.

  • Need help in building search query

    Guys ..
    Problem Description:
    I have a huge table that is indexed using CONTEXT.
    I want to write a search query that considers the following:
    1. number of keywords match
    2. takes care of spelling mistakes, synonyms and acronyms
    3. proximity - the keywords should not be too far from each other.
    e.g. I have this phrase: "Horizontal Stabilizer Trim Brake"
    I was thinking of writing a query like:
    SELECT SCORE(1) SCORE,
    TEXT text
    FROM MY_TABLE
    WHERE CONTAINS(TEXT, '(Horz | Horizontal) ACCUM (Stab | Stabilier) ACCUM Trim ACCUM (Brk | Break)', 1) >= 0
    ORDER BY SCORE DESC
    The results don't look satisfactory. I have not used the NEAR operator as I don't know how to use it.
    Please help me as I am very much new to Oracle Text.
    -G

    Well, I'm not going to write the function for you, but we can at least talk through a general strategy.
    A lot depends on how you help your users on the front end -- for example, if they're searching a technical document, you may want to return results that aren't perfect matches but you do want to make sure the user picks 'mandatory' and 'useful' keywords in a way that lets you figure out which ones are really important. On the other hand, if you're google and have to handle queries like 'horizontal stabilizer trim brake' and 'were Pete and Jenny in the break room' then you run the risk of spending too much time looking for interesting words, almost doing a full-text search on the query trying to derive meaning.
    So I'm going to presume that you have some control over what/how the users generate their searches so that finding keywords isn't the issue.
    The plan will be to parse the query a bit to find the interesting words, clean them up, and weigh their importance, then use transformed data to build the query template to score various combinations.
    So here's some pseudocode for the function:
    function parse_query(pQueryWords in clob) returns clob as
    begin
        generate_token_list (); -- split the query into a set of individual tokens/words
        for each token in token_list
            if it's a mandatory word then accumtokenlist := accumtokenlist || ' ' || token ||'*10' -- weigh the presence of the token strongly
            if it's a useful word then accumtokenlist := accumtokenlist || ' ' || token ||'*5' -- domain-specific words are also important
            if it's a stopword or reserved word, then do not add it to the list
            if it's not on my lists, then accumtokenlist := accumtokenlist || ' ' || token
                                         and normaltokenlist := normaltokenlist ||' ' || token
        end;
        --so now, we have two lists, one for NEAR and one for ACCUM
        now build the guts of the template
            querytemplate := querytemplate || '<seq>'  || normaltokenlist || '</seq>';
            querytemplate := querytemplate || '<seq>'  || replace (accumtokenlist, ' ', ' ACCUM ') || '</seq>';
            querytemplate := querytemplate || '<seq>$' || replace (normaltokenlist, ' ', ' $') || '</seq>';
            querytemplate := querytemplate || '<seq>?' || replace (replace (accumtokenlist, ' ', ' ?'), ' ?', ' accum ?') || '</seq>';  -- first fuzzy the words, then accum
            querytemplate := querytemplate || '<seq>?' || replace (replace (normaltokenlist, ' ', ' ?'), ' ?', ' near ?') || '</seq>';  -- first fuzzy the words, then near
        return querytemplate
    end;
    So, with a 'cooked' query text that is template-friendly, all we need to do is apply a template that is aware of your inputs:
    query_Template_string := '
    <query>
       <textquery lang="ENGLISH" grammar="CONTEXT"> horizontal stabilizer*5 trim brake*10
         <progression> '
    || parse_query('horizontal stabilizer trim brake')  ||
    '     </progression>
       </textquery>
      <score datatype="INTEGER" algorithm="COUNT"/>
    </query>';
    So that's an example of one approach.
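    (Since you asked about NEAR: a minimal sketch of the NEAR operator, not from the original reply; it matches documents where all four words occur within 20 words of each other, in any order:)
    SELECT SCORE(1) score, text
    FROM   my_table
    WHERE  CONTAINS(text, 'NEAR((horizontal, stabilizer, trim, brake), 20, FALSE)', 1) > 0
    ORDER BY score DESC;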
