Add a table to this query?

This summarizes the daily values into weekly values from a table.
create table tblBill (
mtrl   varchar2(10),
sales_quantity number,
posting_date date
);
insert into tblBill values('Label1',3100,'2012-05-01');
insert into tblBill values('Label1',1984,'2012-05-02');
insert into tblBill values('Label1',5670,'2012-05-03');
insert into tblBill values('Label1',30,'2012-05-04');
insert into tblBill values('Label1',3888,'2012-05-05');
insert into tblBill values('Label1',1651,'2012-05-06');
insert into tblBill values('Label1',1881,'2012-05-07');
insert into tblBill values('Label1',1985,'2012-05-08');
insert into tblBill values('Label1',3240,'2012-05-09');
insert into tblBill values('Label1',980,'2012-05-10');
insert into tblBill values('Label1',13165,'2012-05-17');
insert into tblBill values('Label1',1265,'2012-05-19');
insert into tblBill values('Label1',1165,'2012-05-23');
insert into tblBill values('Label1',3125,'2012-05-24');
insert into tblBill values('Label1',2311,'2012-05-29');
create table tblCon (
mtrl   varchar2(10),
con_quantity number,
posting_date date
);
insert into tblCon values('Label1',100,'2012-05-07');
insert into tblCon values('Label1',184,'2012-05-09');
insert into tblCon values('Label1',570,'2012-05-10');
insert into tblCon values('Label1',770,'2012-05-11');
insert into tblCon values('Label1',888,'2012-05-16');
insert into tblCon values('Label1',651,'2012-05-17');
insert into tblCon values('Label1',1081,'2012-05-18');
insert into tblCon values('Label1',1085,'2012-05-19');
insert into tblCon values('Label1',3240,'2012-05-20');
insert into tblCon values('Label1',990,'2012-05-24');
insert into tblCon values('Label1',1165,'2012-05-26');
insert into tblCon values('Label1',105,'2012-05-27');
insert into tblCon values('Label1',1165,'2012-05-28');
insert into tblCon values('Label1',2125,'2012-05-29');
insert into tblCon values('Label1',5311,'2012-05-30');
commit;
           SELECT  Material, qty as Quantity_Sales, fill_year_week as YearWeek
            from (
            WITH      weeks_vw AS
            (SELECT  DISTINCT TO_CHAR(fill_year_day,'IYYYIW') AS fill_year_week
                    FROM
                         (SELECT  to_date('2012-05-01','yyyy-mm-dd')  + (ROWNUM-1) AS fill_year_day FROM  DUAL
                            CONNECT BY      LEVEL <= TRUNC(to_date('2012-05-30','yyyy-mm-dd') - to_date('2012-05-01','yyyy-mm-dd') ) + 1))
                SELECT  a.material ,fill_year_week ,SUM(NVL(sales_quantity,0)) AS QTY
                FROM
                 (   SELECT  t.material ,t.sales_quantity, TO_CHAR(t.posting_date,'IYYYIW') AS posting_week
                         FROM    tblBill t
                         WHERE   t.material = 'Label1'
                        AND     t.Posting_date >= to_date('2012-05-01', 'yyyy-mm-dd')
                        AND     t.Posting_date <= to_date('2012-05-30','yyyy-mm-dd') ) A
               PARTITION BY (a.material) RIGHT OUTER JOIN weeks_vw  ON weeks_vw.fill_year_week = A.posting_week
               GROUP BY     a.material ,fill_year_week
               Order by fill_year_week desc)

Now I want to add a second table that looks similar. Today, the answer looks like this:
MATERIALS QUANTITY YEAR_WEEK
Label1       8 086         1219

I want to add a column from the second table (tblCon), so that the answer looks like this:
MATERIAL     SALES_QUANTITY     CON_QUANTITY     YEAR_WEEK
Label1             8 086             1 624             1219

How do I proceed?

Hi,
user570142 wrote:
This summarizes the daily values into weekly values from a table.
create table tblBill (
mtrl   varchar2(10),
sales_quantity number,
posting_date date
insert into tblBill values('Label1',3100,'2012-05-01'); ...
Thanks for posting the CREATE TABLE and INSERT statements. Remember why you go to all that trouble: it's to let the people who want to help you re-create the problem and test their ideas. If you post statements that don't work, it's not very helpful.
None of your INSERT statements work on my system, because you're trying to insert VARCHAR2s (such as '2012-05-01') into a DATE column. You should insert DATEs into DATE columns: use TO_DATE or DATE literals.
SELECT  Material, qty as Quantity_Sales, fill_year_week as YearWeek
from (
WITH      weeks_vw AS
(SELECT  DISTINCT TO_CHAR(fill_year_day,'IYYYIW') AS fill_year_week
FROM
(SELECT  to_date('2012-05-01','yyyy-mm-dd')  + (ROWNUM-1) AS fill_year_day FROM  DUAL
CONNECT BY      LEVEL <= TRUNC(to_date('2012-05-30','yyyy-mm-dd') - to_date('2012-05-01','yyyy-mm-dd') ) + 1))
SELECT  a.material ,fill_year_week ,SUM(NVL(sales_quantity,0)) AS QTY
FROM
(   SELECT  t.material ,t.sales_quantity, TO_CHAR(t.posting_date,'IYYYIW') AS posting_week
FROM    tblBill t
WHERE   t.material = 'Label1'
AND     t.Posting_date >= to_date('2012-05-01', 'yyyy-mm-dd')
AND     t.Posting_date <= to_date('2012-05-30','yyyy-mm-dd') ) A
PARTITION BY (a.material) RIGHT OUTER JOIN weeks_vw  ON weeks_vw.fill_year_week = A.posting_week
GROUP BY     a.material ,fill_year_week
Order by fill_year_week desc)

I don't believe this is the code you're actually running. It references a column called material, but there is no such column in the tblBill table.
Now I want to add a second table that looks similar. Today, the answer looks like this:
MATERIALS QUANTITY YEAR_WEEK
Label1       8 086         1219
Again, this indicates you haven't posted your real query. The query above, if it could run, would produce 4 or 5 rows of output, one for each week. Post your actual code, and your actual, complete results.
I want to add a column from Table 2 (tblCon), so that the answer looks like this:
MATERIAL     SALES_QUANTITY     CON_QUANTITY     YEAR_WEEK
Label1             8 086             1 624             1219

How do I proceed?

It looks like a job for JOIN. Compute the weekly sum for one table in a separate sub-query, before joining the other table, something like this:
WITH     params          AS
(
     SELECT     TO_DATE ('2012-05-01', 'yyyy-mm-dd') AS first_posting_date
     ,     TO_DATE ('2012-05-30', 'yyyy-mm-dd') AS last_posting_date
     FROM     dual
)
,     weeks_vw     AS
(
     SELECT     TRUNC (first_posting_date, 'IW')
              + (7 * (LEVEL - 1))              AS a_monday
     ,     TRUNC (first_posting_date, 'IW')
              + (7 *  LEVEL)              AS next_monday
     FROM    params
     CONNECT BY     LEVEL <= 1 + ( ( TRUNC (last_posting_date,  'IW')
                                   - TRUNC (first_posting_date, 'IW')
                                   ) / 7
                                 )
)
,     tblbill_agg     AS
(
     SELECT       w.a_monday
     ,       b.mtrl
     ,       SUM (sales_quantity)     AS total_sales_quantity
     FROM          weeks_vw  w
     JOIN       tblbill   b  ON  b.posting_date >= w.a_monday
                        AND b.posting_date <  w.next_monday
     WHERE     b.mtrl   IN ('Label1')
     GROUP BY  w.a_monday
     ,            b.mtrl
)
SELECT       tc.mtrl
,       NVL ( MIN (ba.total_sales_quantity)
           , 0
           )                         AS total_sales_quantity
,       NVL ( SUM (tc.con_quantity)
           , 0
           )                         AS total_con_quantity
,       TO_CHAR (wv.a_monday, 'IYYY-IW')     AS year_week
FROM                 weeks_vw     wv
LEFT OUTER JOIN  tblbill_agg    ba  ON  ba.a_monday     =  wv.a_monday
LEFT OUTER JOIN  tblcon          tc  PARTITION BY  (tc.mtrl)
                                 ON  tc.posting_date >= wv.a_monday
                        AND     tc.posting_date <  wv.next_monday
GROUP BY  tc.mtrl
,            wv.a_monday
ORDER BY  tc.mtrl
,            wv.a_monday
;
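For readers who want to run the idea end to end, here is a minimal sketch of the same aggregate-per-week-then-join approach, using Python's sqlite3 as a stand-in for Oracle. Note the assumptions: SQLite's strftime('%Y-%W') week numbering is not identical to Oracle's IYYYIW, and the week spine plus partitioned outer join from the answer above are simplified to a plain LEFT JOIN (only weeks present in tblBill appear). The data is a small subset of the thread's sample rows.

```python
# Illustrative sketch (SQLite, not Oracle): weekly sums from two tables,
# joined on material and week key. Table/column names follow the thread
# (tblBill / tblCon with mtrl); week key is SQLite's %Y-%W, an
# approximation of Oracle's IYYYIW.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tblBill (mtrl TEXT, sales_quantity INTEGER, posting_date TEXT);
CREATE TABLE tblCon  (mtrl TEXT, con_quantity   INTEGER, posting_date TEXT);
INSERT INTO tblBill VALUES ('Label1', 3100, '2012-05-01'),
                           ('Label1', 1984, '2012-05-02'),
                           ('Label1', 1881, '2012-05-07');
INSERT INTO tblCon  VALUES ('Label1',  100, '2012-05-07'),
                           ('Label1',  184, '2012-05-09');
""")

rows = con.execute("""
WITH bill_agg AS (
    SELECT mtrl, strftime('%Y-%W', posting_date) AS yw,
           SUM(sales_quantity) AS sales_qty
    FROM tblBill GROUP BY mtrl, yw
),
con_agg AS (
    SELECT mtrl, strftime('%Y-%W', posting_date) AS yw,
           SUM(con_quantity) AS con_qty
    FROM tblCon GROUP BY mtrl, yw
)
SELECT b.mtrl,
       b.yw                     AS year_week,
       b.sales_qty,
       COALESCE(c.con_qty, 0)   AS con_qty
FROM bill_agg b
LEFT JOIN con_agg c ON c.mtrl = b.mtrl AND c.yw = b.yw
ORDER BY year_week
""").fetchall()

for r in rows:
    print(r)
```

Each output row carries both quantities for one material and week, which is the shape the original poster asked for.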

Similar Messages

  • Need to Add a Table in Existing Query(SQVI)

    Hello All,
    I have a query zqry (in T-Code SQVI) using 2 tables mkpf & mseg, with some List Fields(Result) & some selection Fields(Select options).
    I need to add a new table makt into this existing query.
    Help is highly appreciated.
    Regards
    Arun.

    Hi Arun,
    look here: QuickViewer (http://help.sap.com/saphelp_47x200/helpdata/en/b7/26dde8b1f311d295f40000e82de14a/frameset.htm)
    and note: ...
    "Whenever you define a QuickView, you can specify its data source explicitly. Tables, database views, table joins, logical databases, and even InfoSets, can all serve as data sources for a QuickView. You can only use additional tables and additional fields if you use an InfoSet as a data source."...
    regards Andreas

  • How can I add reconciled payments to this query

    Select T4.[SlpName] as 'Sales Employee', T0.CardCode, T0.cardname as 'Customer',T0.Docdate as 'Invoice Date', T0.docnum as 'Invoice Number', T0.Taxdate as 'Month Of Service',isnull(T0.U_AIS_DVISFSO,T3.U_AIS_DVISFSO) as 'SO#',T0.NumAtCard as 'PO#', isnull(T0.U_AIS_DVIAdvNm,T3.U_AIS_DVIAdvNm) as 'Advertiser', isnull(T0.U_AIS_DVIOpptyNm,T3.U_AIS_DVIOpptyNm) as 'Campaign',T1.Dscription,-T1.Quantity as 'Impressions',T1.Pricebefdi as 'CPM', Case T0.CurSource When 'C' Then T0.DocCur When 'L' Then T5.MainCurncy When 'S' Then T6.SysCurrncy End As 'Currency DC', Case T0.CurSource When 'C' Then -T1.TotalFrgn When 'L' Then -T1.LineTotal When 'S' Then -T1.TotalSumSy End As 'Total Bef Discount DC', Case T0.CurSource When 'L' Then -T1.[LineVat] When 'C' Then -T1.[LineVatlF] When 'S' Then -T1.[LineVatS] END As 'Vat Tax DC', Case when t1.visorder = 0 Then (Cast(Round((Case T0.CurSource When 'L' Then -T0.DocTotal When 'C' Then -T0.DocTotalFC When 'S' Then -T0.DocTotalSy End), 2) As DECIMAL(18,2))) ELSE 0 END As 'Doc Total DC', Case when t1.visorder = 0 Then (Cast(Round((Case T0.CurSource When 'L' Then -(T0.DocTotal - T0.PaidToDate) When 'C' Then -(T0.DocTotalFC - T0.PaidFC) When 'S' Then -(T0.DocTotalSy - T0.PaidSys) End), 2) As DECIMAL(18,2))) ELSE 0 END As 'Balance after Payment DC', ISNULL(T6.SysCurrncy,'USD') As 'Currency SC', -T1.TotalSumSy As 'Total Bef Discount SC', -T1.[LineVatS] As 'Vat Tax SC', Case when t1.visorder = 0 Then (Cast(Round(-(T0.DocTotalSy), 2) As DECIMAL(18,2))) ELSE 0 END As 'Doc Total SC', Case when t1.visorder = 0 Then (Cast(Round(-(T0.DocTotalSy - T0.PaidSys), 2) As DECIMAL(18,2))) ELSE 0 END As 'Balance after Payment SC', T0.docstatus, 'Credit Memo' AS TransactionType, T0.CurSource,T0.[Comments],T0.[U_InvoiceAdj], T0.[U_DV_AdjustInvReason],T1.[U_DVIInvName], T1.[U_InvoiceAdj], T1.[U_DV_AdjustInvReason] from ORIN T0 left outer join RIN1 T1 on T0.docentry = T1.docentry left outer join RDR1 T2 on T1.Baseentry = T2.docentry and T1.baseline = T2.linenum 
left outer join ORDR T3 on T2.docentry = T3.docentry left outer join OSLP T4 ON T0.Slpcode= T4.Slpcode Left Join OADM T5 On T0.CurSource = 'L' Left Join OADM T6 On T0.CurSource = 'S' Where T0.cardname  Like '%[%0]%' and T0.docstatus ='O'

    Hi,
    Please check OITR,ITR1 Tables.
    Thanks,
    Nithi

  • 3rd Table to Union Query?

    Is it possible to add a 3rd table to this query?
    Active is a text field, and tblOwnerInfo has the same [OwnerID] number field, so I am only trying to get OwnerIDs that are Active... Thanks for any help. Bob
    tblOwnerInfo.Status = "Active" Same OwnerID
    SELECT tblInvoice.OwnerID, tblInvoice.OwnerName, tblInvoice.InvoiceDate AS OnDate,iif(tblInvoice.ClientInvoice=true,tblInvoice.OwnerName,funGetHorse(tblInvoice.InvoiceID) & "  @ " & Format(tblInvoice.OwnerPercent,"0.0%")) 
    AS Description, tblInvoice.OwnerPercentAmount AS AmountSummary, tblInvoice.InvoiceID, tblInvoice.InvoiceNo,0 AS Flag
    FROM tblInvoice
    UNION SELECT tblAccountStatus.OwnerID, NULL, tblAccountStatus.BillDate  AS OnDate, tblAccountStatus.ModeOfPayment  AS Description,(tblAccountStatus.PaidAmount * -1) AS Credit, NULL,BillID,-1 AS Flag
    FROM tblAccountStatus;
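One way to apply the tblOwnerInfo "Active" filter to both legs of a UNION like the one above can be sketched as follows, using Python's sqlite3 as a stand-in for Access/Jet SQL. The schema is trimmed to the relevant columns and the sample values are invented for illustration.

```python
# Hedged sketch: restrict each leg of a UNION to owners whose
# tblOwnerInfo.Status is 'Active', via an IN subquery.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tblOwnerInfo     (OwnerID INTEGER, Status TEXT);
CREATE TABLE tblInvoice       (OwnerID INTEGER, OwnerName TEXT);
CREATE TABLE tblAccountStatus (OwnerID INTEGER, PaidAmount REAL);
INSERT INTO tblOwnerInfo     VALUES (1, 'Active'), (2, 'Inactive');
INSERT INTO tblInvoice       VALUES (1, 'Bob'), (2, 'Sue');
INSERT INTO tblAccountStatus VALUES (1, 50.0), (2, 75.0);
""")

rows = con.execute("""
SELECT OwnerID, OwnerName, 0 AS Flag
FROM tblInvoice
WHERE OwnerID IN (SELECT OwnerID FROM tblOwnerInfo WHERE Status = 'Active')
UNION
SELECT OwnerID, NULL, -1 AS Flag
FROM tblAccountStatus
WHERE OwnerID IN (SELECT OwnerID FROM tblOwnerInfo WHERE Status = 'Active')
""").fetchall()

print(rows)  # only the Active owner (OwnerID 1) survives in both legs
```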

    hi,
      use t-code RSRTQ, give the required query name, and then execute; there you will get detailed info.
    give the query technical name and you will get the desired info
    hope it helps
    regards
    laksh

  • MASS - how to add more tables to object types

    I am setting up a variant in the MASS transaction and I need to add the VBUP / order line item status table to the Object Type  BUS2032 - sales orders.   How can I add another table to this?   Currently VBAK, VBKD and VBAP are available.
    Thank you,
    Lisa

    > Hi Lisa,
    > I feel you are trying out something which cannot be
    > done.
    > MASS will allow you to make a large number of changes
    > at one go by creating a BDC.
    > Now if you cannot do the change manually then you
    > can't do it using MASS.
    >
    > So trying to maintain VBUP thru MASS will not work.
    > You may need to find out the field in VBAP which
    > would trigger the VBUP update in your scenario.
    >
    > Reward points if this clarifies your question.
    > regards
    > Biju
    Hello Biju,
    MASS is doing what I want it to do.  I can upload a file of sales orders that have not been delivered.  I can then change the line item pricing date using MASS, and that will trigger the Carry Out New Pricing function. 
    I just wanted to add the delivery-status field to the select options screen in MASS. I don't want to change the delivery-status field value. I know that BUS2032 is SAP-delivered, but is it possible to copy that object to ZBUS2032 and then add the VBUP table for my selection criteria?
    Thank you,
    Lisa

  • Why is DBXML doing a table scan on this query?

    I loaded a database with about 610 documents, each contains about 5000 elements of the form:
    <locations><location><id>100</id> ... </location> <location><id>200</id> ... </location> ... </locations>
    The size of my dbxml file is about 16G. I create this with all default settings, except that I set auto-indexing off, and added 3 indexes, listed here:
    dbxml> listIndexes
    Index: unique-edge-element-equality-string for node {}:id
    Index: edge-element-presence-none for node {}:location
    Index: node-element-presence-none for node {}:locations
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    4 indexes found.
    I am performing the following query:
    dbxml> query 'for $location in (collection("CitySearch.dbxml")/locations/location[id = 41400]) return $location'
    This has the following query plan:
    dbxml> queryPlan 'for $location in (collection("CitySearch.dbxml")/locations/location[id = 41400]) return $location'
    <XQuery>
    <Return>
    <ForTuple uri="" name="location">
    <ContextTuple/>
    <QueryPlanToAST>
    <ParentOfChildJoinQP>
    <ValueFilterQP comparison="eq" general="true">
    <PresenceQP container="CitySearch.dbxml" index="unique-edge-element-equality-string" operation="prefix" child="id"/>
    <NumericLiteral value="4.140E4" typeuri="http://www.w3.org/2001/XMLSchema" typename="integer"/>
    </ValueFilterQP>
    <ChildJoinQP>
    <NodePredicateFilterQP uri="" name="#tmp0">
    <PresenceQP container="CitySearch.dbxml" index="node-element-presence-none" operation="eq" child="locations"/>
    <LevelFilterQP>
    <VariableQP name="#tmp0"/>
    </LevelFilterQP>
    </NodePredicateFilterQP>
    <PresenceQP container="CitySearch.dbxml" index="edge-element-presence-none" operation="eq" parent="locations" child="location"/>
    </ChildJoinQP>
    </ParentOfChildJoinQP>
    </QueryPlanToAST>
    </ForTuple>
    <QueryPlanToAST>
    <VariableQP name="location"/>
    </QueryPlanToAST>
    </Return>
    </XQuery>
    When I run the query, it is very clearly performing a table scan, the query takes about 10 minutes to run (argh!!) and the disk is read for the length of the query. Why is this doing a table scan, and what can I do to make this a simple, direct node access?
    Andrew

    Hi George,
    I took a subset of my data set and left auto-indexing on to see what the query plan would be, then I duplicated the index being used in my larger data set with auto-indexing off. The problem with leaving auto-indexing on for the entire data set was the resulting size of the file: with just the single index, the file was about 17G; with auto-indexing on, it was climbing over 30G (with 40 indices; I didn't include all of the tags in my original post) when I killed it. Further data loading was taking forever; it is much faster to load with auto-indexing off and then add the single index.

  • How Can i add "DateDiff(day, T0.DueDate" as a column in this query?

    How Can i add "DateDiff(day, T0.DueDate" as a column in this query?
    SELECT T1.CardCode, T1.CardName, T1.CreditLine, T0.RefDate, T0.Ref1 'Document Number',
          CASE  WHEN T0.TransType=13 THEN 'Invoice'
               WHEN T0.TransType=14 THEN 'Credit Note'
               WHEN T0.TransType=30 THEN 'Journal'
               WHEN T0.TransType=24 THEN 'Receipt'
               END AS 'Document Type',
          T0.DueDate, (T0.Debit- T0.Credit) 'Balance'
          ,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')<=-1),0) 'Future'
          ,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>=0 and DateDiff(day, T0.DueDate,'[%1]')<=30),0) 'Current'
          ,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>30 and DateDiff(day, T0.DueDate,'[%1]')<=60),0) '31-60 Days'
          ,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>60 and DateDiff(day, T0.DueDate,'[%1]')<=90),0) '61-90 Days'
          ,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>90 and DateDiff(day, T0.DueDate,'[%1]')<=120),0) '91-120 Days'
          ,ISNULL((SELECT T0.Debit-T0.Credit WHERE DateDiff(day, T0.DueDate,'[%1]')>=121),0) '121+ Days'
    FROM JDT1 T0 INNER JOIN OCRD T1 ON T0.ShortName = T1.CardCode
    WHERE (T0.MthDate IS NULL OR T0.MthDate > [%1]) AND T0.RefDate <= [%1] AND T1.CardType = 'C'
    ORDER BY T1.CardCode, T0.DueDate, T0.Ref1

    Hi,
    As you mentioned, it is not possible to assign a dynamic column in a plain query.
    Here is an example of generating dynamic column names in a SQL query; using this example you can achieve your requirement.
    DECLARE @cols AS NVARCHAR(MAX),
        @query  AS NVARCHAR(MAX)
    select @cols = STUFF((SELECT distinct ',' + QUOTENAME(C.Name) 
                        from [History] C
                FOR XML PATH(''), TYPE
                ).value('.', 'NVARCHAR(MAX)')
            ,1,1,'')
    set @query = 'SELECT [Date],' + @cols + '
                 from
                 (
                    select [Date], Name, Value
                    from [History]
                 ) x
                pivot
                (
                    max(value)
                    for Name in (' + @cols + ')
                ) p '
    execute(@query)
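To make the mechanics of that dynamic pivot concrete in a language-neutral way, here is a small Python sketch of the same idea: discover the distinct Name values at run time, then spread Value out into one column per Name with MAX semantics. The History rows here are invented for illustration.

```python
# Dynamic pivot in plain Python: the column list is computed from the
# data, just as @cols is built at run time in the T-SQL above.
from collections import defaultdict

history = [  # (Date, Name, Value) rows, as in the [History] table
    ("2024-01-01", "temp", 20),
    ("2024-01-01", "rain", 5),
    ("2024-01-02", "temp", 22),
]

cols = sorted({name for _, name, _ in history})   # dynamic column list
pivot = defaultdict(dict)
for date, name, value in history:
    # MAX(value) semantics: keep the largest value per (date, name)
    pivot[date][name] = max(value, pivot[date].get(name, value))

table = [[d] + [pivot[d].get(c) for c in cols] for d in sorted(pivot)]
print(["Date"] + cols)
for row in table:
    print(row)
```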

  • How can this query avoid full table scans?

    It is difficult to avoid full table scans in the following query because the STATUS column holds highly repetitive values. There are only 10 distinct values for the STATUS column (1..10).
    But the table is very large; there are more than 1 million rows in it. A full table scan consumes too much time.
    How can this query avoid full table scans?
    Thank you
    SELECT SYNC,CUS_ID INTO V_SYNC,V_CUS_ID FROM CONSUMER_MSG_IDX
                      WHERE CUS_ID = V_TYPE_CUS_HEADER.CUS_ID AND
                            ADDRESS_ID = V_TYPE_CUS_HEADER.ADDRESS_ID AND
                            STATUS =! 8;
    Edited by: junez on Jul 23, 2009 7:30 PM

    Your code had an extra AND. I also replaced the "not equal" operator, which has display problems with the forum software
    SELECT SYNC,CUS_ID
       INTO V_SYNC,V_CUS_ID
      FROM CONSUMER_MSG_IDX
    WHERE CUS_ID = V_TYPE_CUS_HEADER.CUS_ID AND
           ADDRESS_ID = V_TYPE_CUS_HEADER.ADDRESS_ID AND
           STATUS != 8;
    Are you sure this query is doing a table scan? Is there an index on (CUS_ID, ADDRESS_ID)? I would think that would be mostly unique, so I'm not sure why you think the STATUS column is causing problems. It would seem to be just a non-selective additional filter.
    Justin
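Justin's point, that the selective leading columns can serve the query even though STATUS is unselective, can be illustrated outside Oracle. A minimal sketch using Python's sqlite3 (SQLite's planner rather than Oracle's, and invented data, but the principle carries over):

```python
# With an index on the selective columns (cus_id, address_id), the
# unselective status != 8 predicate is just a residual filter after an
# index lookup, so no full table scan is needed.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE consumer_msg_idx (cus_id INTEGER, address_id INTEGER,
                               status INTEGER, sync TEXT);
CREATE INDEX idx_cus_addr ON consumer_msg_idx (cus_id, address_id);
""")
con.executemany("INSERT INTO consumer_msg_idx VALUES (?,?,?,?)",
                [(i, i % 10, i % 10 + 1, "y") for i in range(1000)])

plan = con.execute("""
EXPLAIN QUERY PLAN
SELECT sync, cus_id FROM consumer_msg_idx
WHERE cus_id = 42 AND address_id = 2 AND status != 8
""").fetchall()
print(plan)  # the plan detail should mention the index, not a full SCAN
```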

  • Urgent: how to add two table regions to one query region

    hello
    My page has a search region, and for that region I need to add two tables based on two different view objects. How can I implement this? Please let me know.
    Thanks in advance

    Hi Wei Fang,
    You can try creating a 2-line template (1 template, 2 line types) under a loop node.
    So your smartform tree structure will be shown like this:
    LOOP
        TEMPLATE1.
    On the LOOP part, pass the internal table of your data to the working areas.
    On the template put all the data of the summary on your first linetype, and put
    the detail data on your second linetype.
    Good luck and hopefully this will solve the problem
    Edited by: Prawira Fadjar on Oct 22, 2008 10:04 AM

  • Reason behind this query: SELECT * FROM table WHERE 1 < 0

    Hello,
    I would like to know the reason behind using this query,
    SELECT * FROM <table> WHERE 1 < 0
    before executing the actual SQL query.
    Is there any special reason or the JDBC receiver side is configured like that.
    Is there any option to overcome this process like, can we remove this option or stop using this.
    Why the JDBC adapter basically sending this query on the DB?
    Thanks,
    Soorya,

    Hi,
    if you run this query, you won't be able to see any records of the table.
    SELECT * FROM <table> WHERE 1 < 0
    if you run this query you will see all records
    SELECT * FROM <table> WHERE 0 < 1
    same with SELECT * FROM <table> WHERE 1=1
    So check what's happening in your code before the actual query executes; just try to correlate.
    regards
    Aashish Sinha
    PS : reward points if helpful
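The always-false versus always-true predicates discussed in this reply can be checked directly. A minimal sketch with Python's sqlite3 (invented table and values):

```python
# WHERE 1 < 0 returns no rows (handy as a connectivity/metadata probe,
# which is why JDBC adapters send it); WHERE 1 = 1 returns everything.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

none_rows = con.execute("SELECT * FROM t WHERE 1 < 0").fetchall()
all_rows  = con.execute("SELECT * FROM t WHERE 1 = 1").fetchall()
print(len(none_rows), len(all_rows))
```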

  • How I can change this query, so I can display the name and scores in one r

    How I can change this query, so I can add the ID from the table SPRIDEN
    as of now is giving me what I want:
    1,543     A05     24     A01     24     BAC     24     BAE     24     A02     20     BAM     20
    in one line, but I would like to add the id and name that are stored in the table SPRIDEN
    SELECT sortest_pidm,
           max(decode(rn,1,sortest_tesc_code)) tesc_code1,
           max(decode(rn,1,score)) score1,
           max(decode(rn,2,sortest_tesc_code)) tesc_code2,
           max(decode(rn,2,score)) score2,
           max(decode(rn,3,sortest_tesc_code)) tesc_code3,
           max(decode(rn,3,score))  score3,
           max(decode(rn,4,sortest_tesc_code)) tesc_code4,
           max(decode(rn,4,score))  score4,
           max(decode(rn,5,sortest_tesc_code)) tesc_code5,
           max(decode(rn,5,score))  score5,
           max(decode(rn,6,sortest_tesc_code)) tesc_code6,
           max(decode(rn,6,score))  score6        
      FROM (select sortest_pidm,
                   sortest_tesc_code,
                   score,
                  row_number() over (partition by sortest_pidm order by score desc) rn
              FROM (select sortest_pidm,
                           sortest_tesc_code,
                           max(sortest_test_score) score
                      from sortest,SPRIDEN
                      where
                      SPRIDEN_pidm =SORTEST_PIDM
                    AND   sortest_tesc_code in ('A01','BAE','A02','BAM','A05','BAC')
                     and  sortest_pidm is not null 
                    GROUP BY sortest_pidm, sortest_tesc_code))
                    GROUP BY sortest_pidm;
                   

    Hi,
    That depends on whether spriden_pidm is unique, and on what you want for results.
    Whenever you have a problem, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all tables, and the results you want from that data.
    If you can illustrate your problem using commonly available tables (such as those in the scott or hr schemas) then you don't have to post any sample data; just post the results you want.
    Either way, explain how you get those results from that data.
    Always say which version of Oracle you're using.
    It looks like you're doing something similar to the following.
    Using the emp and dept tables in the scott schema, produce one row of output per department showing the highest salary in each job, for a given set of jobs:
    DEPTNO DNAME          LOC           JOB_1   SAL_1 JOB_2   SAL_2 JOB_3   SAL_3
        20 RESEARCH       DALLAS        ANALYST  3000 MANAGER  2975 CLERK    1100
        10 ACCOUNTING     NEW YORK      MANAGER  2450 CLERK    1300
        30 SALES          CHICAGO       MANAGER  2850 CLERK     950
    On each row, the jobs are listed in order by the highest salary.
    This seems to be analogous to what you're doing. The roles played by sortest_pidm, sortest_tesc_code and sortest_test_score in your sortest table are played by deptno, job and sal in the emp table. The roles played by spriden_pidm, id and name in your spriden table are played by deptno, dname and loc in the dept table.
    It sounds like you already have something like the query below, that produces the correct output, except that it does not include the dname and loc columns from the dept table.
    SELECT    deptno
    ,       MAX (DECODE (rn, 1, job))     AS job_1
    ,       MAX (DECODE (rn, 1, max_sal))     AS sal_1
    ,       MAX (DECODE (rn, 2, job))     AS job_2
    ,       MAX (DECODE (rn, 2, max_sal))     AS sal_2
    ,       MAX (DECODE (rn, 3, job))     AS job_3
    ,       MAX (DECODE (rn, 3, max_sal))     AS sal_3
    FROM       (
               SELECT    deptno
               ,          job
               ,          max_sal
               ,          ROW_NUMBER () OVER ( PARTITION BY  deptno
                                              ORDER BY          max_sal     DESC
                                )         AS rn
               FROM     (
                             SELECT    e.deptno
                       ,           e.job
                       ,           MAX (e.sal)     AS max_sal
                       FROM      scott.emp        e
                       ,           scott.dept   d
                       WHERE     e.deptno        = d.deptno
                       AND           e.job        IN ('ANALYST', 'CLERK', 'MANAGER')
                        GROUP BY  e.deptno
                        ,           e.job
                        )
               )
     GROUP BY  deptno
     ;

     Since dept.deptno is unique, there will only be one dname and one loc for each deptno, so we can change the query by replacing "deptno" with "deptno, dname, loc" throughout the query (except in the join condition, of course):
    SELECT    deptno, dname, loc                    -- Changed
    ,       MAX (DECODE (rn, 1, job))     AS job_1
    ,       MAX (DECODE (rn, 1, max_sal))     AS sal_1
    ,       MAX (DECODE (rn, 2, job))     AS job_2
    ,       MAX (DECODE (rn, 2, max_sal))     AS sal_2
    ,       MAX (DECODE (rn, 3, job))     AS job_3
    ,       MAX (DECODE (rn, 3, max_sal))     AS sal_3
    FROM       (
               SELECT    deptno, dname, loc          -- Changed
               ,          job
               ,          max_sal
               ,          ROW_NUMBER () OVER ( PARTITION BY  deptno      -- , dname, loc     -- Changed
                                              ORDER BY          max_sal      DESC
                                )         AS rn
               FROM     (
                             SELECT    e.deptno, d.dname, d.loc                    -- Changed
                       ,           e.job
                       ,           MAX (e.sal)     AS max_sal
                       FROM      scott.emp        e
                       ,           scott.dept   d
                       WHERE     e.deptno        = d.deptno
                       AND           e.job        IN ('ANALYST', 'CLERK', 'MANAGER')
                        GROUP BY  e.deptno, d.dname, d.loc                    -- Changed
                        ,           e.job
                        )
               )
     GROUP BY  deptno, dname, loc                    -- Changed
     ;

     Actually, you can keep using just deptno in the analytic PARTITION BY clause. It might be a little more efficient to use just deptno, like I did above, but it won't change the results if you use all 3, as long as there is only 1 dname and 1 loc per deptno.
    By the way, you don't need so many sub-queries. You're using the inner sub-query to compute the MAX, and the outer sub-query to compute rn. Analytic functions are computed after aggregate functions, so you can do both in the same sub-query, like this:
    SELECT    deptno, dname, loc
    ,       MAX (DECODE (rn, 1, job))     AS job_1
    ,       MAX (DECODE (rn, 1, max_sal))     AS sal_1
    ,       MAX (DECODE (rn, 2, job))     AS job_2
    ,       MAX (DECODE (rn, 2, max_sal))     AS sal_2
    ,       MAX (DECODE (rn, 3, job))     AS job_3
    ,       MAX (DECODE (rn, 3, max_sal))     AS sal_3
    FROM       (
                   SELECT    e.deptno, d.dname, d.loc
              ,       e.job
              ,       MAX (e.sal)     AS max_sal
              ,       ROW_NUMBER () OVER ( PARTITION BY  e.deptno
                                           ORDER BY       MAX (sal)     DESC
                                          )       AS rn
              FROM      scott.emp    e
              ,       scott.dept   d
              WHERE     e.deptno        = d.deptno
              AND       e.job                IN ('ANALYST', 'CLERK', 'MANAGER')
                   GROUP BY  e.deptno, d.dname, d.loc
              ,       e.job
              )
     GROUP BY  deptno, dname, loc
     ;

     This will work in Oracle 8.1 and up. In Oracle 11, however, it's better to use the SELECT ... PIVOT feature.
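The "analytics are computed after aggregates" point above can be demonstrated outside Oracle as well. A minimal sketch using SQLite's window functions (available in SQLite 3.25+, bundled with current Python) on a tiny emp-like table, combining MAX(sal) and a ROW_NUMBER over those maxima in one sub-query:

```python
# MAX(sal) is an aggregate; ROW_NUMBER() OVER (... ORDER BY MAX(sal))
# is an analytic ranking those aggregated maxima in the same SELECT.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (deptno INTEGER, job TEXT, sal INTEGER)")
con.executemany("INSERT INTO emp VALUES (?,?,?)", [
    (20, "ANALYST", 3000), (20, "MANAGER", 2975), (20, "CLERK", 1100),
    (10, "MANAGER", 2450), (10, "CLERK", 1300),
])

rows = con.execute("""
SELECT deptno, job, MAX(sal) AS max_sal,
       ROW_NUMBER() OVER (PARTITION BY deptno
                          ORDER BY MAX(sal) DESC) AS rn
FROM emp
GROUP BY deptno, job
ORDER BY deptno, rn
""").fetchall()
for r in rows:
    print(r)
```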

  • Adding a folder to an existing disco report adds unnecessary table joins

    Hello,
    I have recently created a new Discoverer folder, and adding it to existing reports causes the reports to error out after running forever. When I had a look at the SQL inspector, what is happening is that a table (which was already on the report prior to my amendments, and is the table my new folder joins to) is being queried twice. When I build the query from scratch using exactly the same folders, this issue does not happen. The users are used to the already existing reports, and all they want to do is add extra information from this new folder. It would be a big ask to expect them to build new reports from scratch. Kindly get back to me with any ideas. I am going to try to change my registry settings and see if it makes a difference.

    No, the new folder is made up of only one table. The table that is being duplicated is the underlying table of a folder my new folder joins to, and it already exists in my report. This is the scenario: my current report has the personal details and address tables, and I am now adding a folder that brings the job details into the report.

  • Is there any way to tune this query? EXPLAIN PLAN included

    DB version: 10gR2
    The below query takes more than 3 seconds. The statistics are up to date for these tables. Is there any other way I could tune this query?
    SELECT COUNT(1)
    FROM
    INVN_SCOPE_DTL, ship_dtl WHERE ship_dtl.WHSE = INVN_SCOPE_DTL.WHSE (+)
    AND 'QC' = INVN_SCOPE_DTL.FROM_WORK_GRP (+)
    AND  'MQN' = INVN_SCOPE_DTL.FROM_WORK_AREA (+)
    AND  ship_dtl.START_CURR_WORK_GRP = INVN_SCOPE_DTL.TO_WORK_GRP (+)
    AND  ship_dtl.START_CURR_WORK_AREA = INVN_SCOPE_DTL.TO_WORK_AREA (+)
    AND  ship_dtl.WHSE = '930' AND  ship_dtl.OWNER_USER_ID = 'CTZDM'
    OR ship_dtl.OWNER_USER_ID = '*'
    AND ship_dtl.STAT_CODE >= '10'
    AND ship_dtl.STAT_CODE <= '20'
    ORDER BY ship_dtl.OWNER_USER_ID DESC,
    ship_dtl.CURR_TASK_PRTY ASC, INVN_SCOPE_DTL.DISTANCE ASC, ship_dtl.RLS_DATE_TIME ASC, ship_dtl.TASK_ID ASC;
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id  | Operation                      |  Name               | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT               |                     |     1 |    86 |    86   (2)|
    |   1 |  SORT AGGREGATE                |                     |     1 |    86 |            |
    |   2 |   NESTED LOOPS OUTER           |                     |   898 | 77228 |    86   (2)|
    |   3 |    INLIST ITERATOR             |                     |       |       |            |
    |*  4 |     TABLE ACCESS BY INDEX ROWID| ship_dtl            |   898 | 31430 |    85   (2)|
    |*  5 |      INDEX RANGE SCAN          | ship_dtl_IND_4      |  2876 |       |     1   (0)|
    |   6 |    TABLE ACCESS BY INDEX ROWID | INVN_SCOPE_DTL     |     1 |    51 |     2  (50)|
    PLAN_TABLE_OUTPUT
    |*  7 |     INDEX UNIQUE SCAN          | PK_INVN_SCOPE_DTL  |     1 |       |            |
    Predicate Information (identified by operation id):
       4 - filter("ship_dtl"."WHSE"='930' AND "ship_dtl"."STAT_CODE">=10 AND
                  "ship_dtl"."STAT_CODE"<=20)
       5 - access("ship_dtl"."OWNER_USER_ID"='*' OR "ship_dtl"."OWNER_USER_ID"='CTZDM')
       7 - access("INVN_SCOPE_DTL"."WHSE"(+)='930' AND
                  "INVN_SCOPE_DTL"."FROM_WORK_GRP"(+)='QC' AND "INVN_SCOPE_DTL"."FROM_WORK_AREA"(+)='MQN'
    PLAN_TABLE_OUTPUT
                  AND "ship_dtl"."START_CURR_WORK_GRP"="INVN_SCOPE_DTL"."TO_WORK_GRP"(+) AND
                  "ship_dtl"."START_CURR_WORK_AREA"="INVN_SCOPE_DTL"."TO_WORK_AREA"(+))
           filter("ship_dtl"."WHSE"="INVN_SCOPE_DTL"."WHSE"(+))
    25 rows selected.

    William Robertson wrote:
    I notice an OR predicate in the middle of some AND predicates without explicit bracketing. Are you sure it does what you think it does?
    I second this point. A conjunction (AND expression) has higher priority and is evaluated (logically) before the disjunction (OR expression). So your SELECT effectively looks like this:
    SELECT COUNT(1)
    FROM INVN_SCOPE_DTL, ship_dtl
    WHERE
          ( ship_dtl.WHSE = INVN_SCOPE_DTL.WHSE (+)
          AND 'QC'  = INVN_SCOPE_DTL.FROM_WORK_GRP (+)
          AND 'MQN' = INVN_SCOPE_DTL.FROM_WORK_AREA (+)
          AND ship_dtl.START_CURR_WORK_GRP  = INVN_SCOPE_DTL.TO_WORK_GRP (+)
          AND ship_dtl.START_CURR_WORK_AREA = INVN_SCOPE_DTL.TO_WORK_AREA (+)
          AND ship_dtl.WHSE = '930'
          AND ship_dtl.OWNER_USER_ID = 'CTZDM' )
    OR    ( ship_dtl.OWNER_USER_ID = '*'
          AND ship_dtl.STAT_CODE >= '10'
          AND ship_dtl.STAT_CODE <= '20' )
    ;This might be what you want, but I doubt it very much. Please add parentheses to make the query do what you intend.
    Edited by: Sven W. on Oct 16, 2008 3:25 PM
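    The precedence point above is easy to demonstrate. Below is a minimal sketch using SQLite via Python as a stand-in for Oracle (AND/OR precedence is the same in both); the table and column names are illustrative, not the poster's actual schema:

    ```python
    # Demonstrates that AND binds tighter than OR in SQL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE ship (owner TEXT, whse TEXT, stat INTEGER)")
    conn.executemany("INSERT INTO ship VALUES (?,?,?)",
                     [("CTZDM", "930", 5),    # matches whse/owner, stat out of range
                      ("*",     "111", 15),   # wrong whse, but stat in range
                      ("CTZDM", "930", 15)])  # matches everything

    # Without parentheses, AND groups first, so this actually means:
    #   (whse='930' AND owner='CTZDM') OR (owner='*' AND stat BETWEEN 10 AND 20)
    unbracketed = conn.execute(
        "SELECT COUNT(*) FROM ship "
        "WHERE whse='930' AND owner='CTZDM' "
        "OR owner='*' AND stat >= 10 AND stat <= 20").fetchone()[0]

    # With explicit parentheses forcing the (probably intended) reading:
    #   whse='930' AND (owner='CTZDM' OR owner='*') AND stat BETWEEN 10 AND 20
    bracketed = conn.execute(
        "SELECT COUNT(*) FROM ship "
        "WHERE whse='930' AND (owner='CTZDM' OR owner='*') "
        "AND stat >= 10 AND stat <= 20").fetchone()[0]

    print(unbracketed, bracketed)  # the two forms count different rows
    ```

    The unbracketed form matches all three rows, while the bracketed form matches only the last one, so the missing parentheses change both the result and the rows the optimizer must visit.
    
    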

  • Help need in this query

    Hi All,
    I created the following query, which returns my required results, but when I insert a new entry into the table it returns a new group for the same month instead of adding to the existing group. Where am I making a mistake? Any ideas?
    My required results
    SQL> SELECT DISTINCT TO_CHAR(T.POS_DATE, 'Month') Month,count(*) Date_rec,sum(t.amount) amount,
      2   t.Asset, avg(t.amount) avg_ora_col,Sum(t.amount)/ count(*) "Normal Average"
      3  from pos t
      4  group by t.pos_date,t.asset;
    MONTH      DATE_REC    AMOUNT ASSET      AVG_ORA_COL Normal Average
    November          3       750 Loans              250            250
    December          2       900 Loans              450            450
    SQL> select * from pos;
    POS_DATE  ASSET         AMOUNT
    01-NOV-07 Loans            100
    01-NOV-07 Loans            250
    01-NOV-07 Loans            400
    02-NOV-07 Loans            100
    02-NOV-07 Loans            250
    02-NOV-07 Loans            400
    03-NOV-07 Loans            100
    03-NOV-07 Loans            250
    03-NOV-07 Loans            400
    09-DEC-07 Loans            500
    09-DEC-07 Loans            400
    11 rows selected.
    After insertion records in table it looks like following:
    SQL> SELECT DISTINCT TO_CHAR(T.POS_DATE, 'Month') Month,count(*) Date_rec,sum(t.amount) amount,
      2   t.Asset, avg(t.amount) avg_ora_col,Sum(t.amount)/ count(*) "Normal Average"
      3  from pos t
      4  group by t.pos_date,t.asset;
    MONTH      DATE_REC    AMOUNT ASSET      AVG_ORA_COL Normal Average
    November          3       750 Loans              250            250
    December          2       900 Loans              450            450
    November          1       300 Loans              300            300
    December          1       300 Loans              300            300
    SQL> select * from pos;
    POS_DATE  ASSET         AMOUNT
    01-NOV-07 Loans            100
    01-NOV-07 Loans            250
    01-NOV-07 Loans            400
    02-NOV-07 Loans            100
    02-NOV-07 Loans            250
    02-NOV-07 Loans            400
    03-NOV-07 Loans            100
    03-NOV-07 Loans            250
    03-NOV-07 Loans            400
    09-DEC-07 Loans            500
    09-DEC-07 Loans            400
    10-DEC-07 Loans            300
    27-NOV-07 Loans            300
    13 rows selected.
    My requirment is following
    SQL> SELECT DISTINCT TO_CHAR(T.POS_DATE, 'Month') Month,count(*) Date_rec,sum(t.amount) amount,
      2   t.Asset, avg(t.amount) avg_ora_col,Sum(t.amount)/ count(*) "Normal Average"
      3  from pos t
      4  group by t.pos_date,t.asset;
    MONTH      DATE_REC    AMOUNT ASSET
    November          4       637 Loans
    December          2       600 Loans
    I want to divide by the number of occurrences of dates, which is 4 in November and 2 in December.
    Message was edited by:
    53637

    Hi,
    Try this query. Instead of grouping by t.pos_date, group by TO_CHAR(T.POS_DATE, 'Month'):
    SELECT DISTINCT TO_CHAR(T.POS_DATE, 'Month') Month,count(*) Date_rec,sum(t.amount) amount,
    t.Asset, avg(t.amount) avg_ora_col,Sum(t.amount)/ count(*) "Normal Average"
    from pos t
    group by TO_CHAR(T.POS_DATE, 'Month'),t.asset
    Hope this works.
    Regards,
    Sridhar
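    Note that grouping by the month alone makes COUNT(*) count rows, not dates; since the requirement is to divide by the number of distinct dates per month, COUNT(DISTINCT pos_date) is what's needed (in Oracle, COUNT(DISTINCT TRUNC(pos_date)) grouped by TO_CHAR(pos_date, 'Month')). A minimal sketch of the idea, using SQLite via Python with the poster's sample data:

    ```python
    # Group by month and divide the month's total by the number of
    # DISTINCT dates in that month, reproducing the poster's desired
    # November=637.5 and December=600 averages.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE pos (pos_date TEXT, asset TEXT, amount INTEGER)")
    rows = [("2007-11-01", "Loans", 100), ("2007-11-01", "Loans", 250),
            ("2007-11-01", "Loans", 400), ("2007-11-02", "Loans", 100),
            ("2007-11-02", "Loans", 250), ("2007-11-02", "Loans", 400),
            ("2007-11-03", "Loans", 100), ("2007-11-03", "Loans", 250),
            ("2007-11-03", "Loans", 400), ("2007-11-27", "Loans", 300),
            ("2007-12-09", "Loans", 500), ("2007-12-09", "Loans", 400),
            ("2007-12-10", "Loans", 300)]
    conn.executemany("INSERT INTO pos VALUES (?,?,?)", rows)

    result = conn.execute(
        "SELECT strftime('%m', pos_date) AS month, asset, "
        "       COUNT(DISTINCT pos_date) AS date_rec, "
        "       SUM(amount) AS total, "
        "       SUM(amount) * 1.0 / COUNT(DISTINCT pos_date) AS avg_per_date "
        "FROM pos "
        "GROUP BY strftime('%m', pos_date), asset "
        "ORDER BY month").fetchall()
    for row in result:
        print(row)
    ```

    November has 4 distinct dates (01, 02, 03, 27) totalling 2550, giving 637.5, and December has 2 distinct dates totalling 1200, giving 600, which matches the requested output.
    
    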

  • Logical Table source source query

    In OBIEE 10g we can have multiple logical table sources, and we can also add multiple tables into a single logical table source. I wanted to know the difference between doing so and having a separate logical table source for each physical source.
    Hope I made myself clear.
    Cheers
    Rem

    Hi Rem,
    When data is duplicated across different physical tables, add them as separate LTSs with column mappings pointing to the most economical sources. Specifying the most economical source matters when a single column exists in more than one table: based on the column mappings, the BI Server picks the LTSs that can satisfy the request with minimal joins.
    When the data is not duplicated, add the tables to a single LTS. When the physical sources are in a single LTS, you have the flexibility of using outer joins, but specifying a join as an outer join makes the BI Server include that source even when it is not otherwise required, whereas with an inner join the source is not included if it is not needed to satisfy the query.
    Hope this helps.
    Thanks!
