Optimizing the query containing 7 table joins

Hi, I have a query that is taking almost 20 minutes to retrieve data from the DB. Please let me know how I can further optimize the query. The tables contain huge amounts of data:
Table1 a -> 1040131 rows
Table2 b -> 1040131 rows
Table3 c -> 2080262 rows
Table4 d -> 2749 rows
Table5 e -> 1040131 rows
Table6 f -> 93819 rows
Table7 g -> 99203 rows
My query is
SELECT a.lid, g.image, f.product, d.manufacturer, b.desc, c.price, c.abbr, c.currency, c.class
FROM Table1 a,
     Table2 b,
     Table3 c,
     Table4 d,
     Table5 e,
     Table6 f,
     Table7 g
WHERE (UPPER(b.desc) LIKE '%TEST%' OR UPPER(b.desc) LIKE '%BEST%')
  AND a.line = b.line
  AND a.line = c.line
  AND c.subset = 576
  AND a.manufacturer = d.manufacturer
  AND a.line = e.line
  AND a.product = f.product
  AND e.image = g.image
Please tell me how I can optimize this query further so it runs faster.

user1708333 wrote:
WHERE (UPPER(b.desc) like '%TEST%' OR UPPER(b.desc) like '%BEST%')
I would imagine that that line is the main culprit.
You are doing a free-text search, which, written that way, will always result in a full table scan.
If you need free-text searching then you should consider using Oracle Text (http://www.oracle.com/technology/products/text/index.html)
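A minimal sketch of the Oracle Text approach (the index name and the stand-in column name "description" are assumptions here, since the post's column is called desc, a reserved word):

-- Create a text index once; it has to be kept in sync after DML on Table2.
CREATE INDEX table2_desc_ctx ON Table2 (description)
  INDEXTYPE IS CTXSYS.CONTEXT;

-- CONTAINS is resolved through the text index instead of a full scan.
-- Note it matches whole words, not arbitrary substrings like LIKE '%TEST%'.
SELECT b.line
FROM   Table2 b
WHERE  CONTAINS(b.description, 'TEST OR BEST') > 0;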

Similar Messages

  • Optimizing the Query.

    Hello All,
    I need to optimize the query below. I am using Oracle 9i and rule-based optimization.
    "SELECT sp.store_cd wr_store_cd, l.store_cd, l.loc_cd, l.loc_tp_cd,
    sp.pd_cd, MIN(sp.priority), MIN(sp.ck_showroom)
    FROM store$pri sp, loc l
    WHERE l.store_cd >= nvl(sp.beg_store_cd,' ')
         AND l.store_cd <= nvl(sp.end_store_cd,' ')
    AND l.loc_cd >= nvl(sp.beg_loc_cd,' ')
         AND l.loc_cd <= nvl(sp.end_loc_cd,' ')
    AND l.pick = 'Y'
    GROUP BY sp.store_cd, sp.pd_cd, l.store_cd, l.loc_cd, l.loc_tp_cd
    ORDER BY wr_store_cd, pd_cd, MIN(priority), l.store_cd;"
    In the above query the LOC table has less data and the STORE$PRI table has more data. Also, in the LOC table there is a composite index on the LOC_CD and STORE_CD columns. In the STORE$PRI table there are no indexes on the beg_store_cd, end_store_cd, beg_loc_cd and end_loc_cd columns. Also, if I create an index on them, it is not used for an index scan because I am using the NVL function.
    Any suggestion on how to optimize this query? It is taking 250 minutes to get 396,750 records. The LOC table has 28,859 records and the STORE$PRI table has 465 records.
    I am putting the explain plan data below.
    <pre>
    Operation                          Object Name
    SELECT STATEMENT Optimizer Mode=RULE
    SORT ORDER BY
    SORT GROUP BY
    TABLE ACCESS BY INDEX ROWID        INV.LOC
    NESTED LOOPS
    TABLE ACCESS FULL                  SALES.STORE$PRI
    INDEX RANGE SCAN                   INV.LOC_PK
    </pre>
    Trace File Data:
    <pre>
    recursive calls     793
    db block gets     113
    consistent gets     32377
    physical reads     211
    redo size     15044
    bytes sent via SQL*Net to client     1431124
    bytes received via SQL*Net from client     49632
    SQL*Net roundtrips to/from client     1357
    sorts (memory)     4
    sorts (disk)     0
    </pre>
    Regards
    SUN
    Edited by: User SUN@ on Sep 17, 2008 6:51 PM
    Edited by: User SUN@ on Sep 17, 2008 9:09 PM

    could you please tell me how to use hash joins in the query?
    Hash joins are not a feature of the rule-based optimizer. Normally you would keep your stats up to date and the cost-based optimizer would choose a hash join automatically where appropriate. Why are you using the RBO?
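    For the optimizer to make that choice you first need representative statistics; a minimal sketch (owner and table names are taken from the plan above, the call itself is standard DBMS_STATS usage):
    BEGIN
      -- Gather stats so the cost-based optimizer can cost the range join properly.
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SALES', tabname => 'STORE$PRI', cascade => TRUE);
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'INV',   tabname => 'LOC',       cascade => TRUE);
    END;
    /
    One caveat: a hash join needs at least one equality join condition, and this query joins purely on ranges, so even the CBO may stay with nested loops or pick a sort-merge join here.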

  • The Connection String for the Query Log table is automatically encrypted.

    When I try to use the Usage Based Optimization to apply Aggregation Design to my measure group, it shows me the following
    error message.
    The connection string cannot be found. Open Microsoft SQL Server Management Studio and, in the Analysis Server Properties
    page, check the value of the Log\QueryLog\QueryLogConnectionString
    property.
    I encountered this error about two weeks ago.  At that time I just reset the connection string and everything seemed to be fine.  A week ago, I successfully applied Usage Based Optimization for one of my cubes.  However, when I tried to apply UBO to my other cubes today, I encountered the same issue again!  I believe no one has changed the property of
    the connection string.
    Also, if I query the Query Log table, I can see the latest queries made by the users.  I'm sure the queries are still
    being logged into this table.
    This is really strange.  Anyone else has encountered the same issue?  Thanks.

    Hello Thomas,
    I encountered this issue and am struggling to solve it. If you have resolved it (and I guess you must have, because this post is two years old), could you kindly post how you resolved it?
    Thanks in advance
    Best Regards,
    Neeraja

  • The query contains a formula with the operator SUMCT

    I'm trying to get a query ready for WebI and am getting the error 'The query contains a formula with the operator %RT. Therefore, the query cannot be released for OLE DB for OLAP'.
    I have not used any of %GT, %RT, %CT, SUMGT, SUMRT, SUMCT or LEAF. I have also suppressed results/overall results in the query, and on top of that I have removed the 'Calculate Results As' summation setting and set it to 'Not defined' for all of my key figures.
    Still getting the error...
    Please suggest.
    Niraj

    hi,
    Ensure that all Link IDs are visible in the lsdal.ini file located in the \Windows\system directory of the PAS server, as those are then available to all users. The lsdal.ini file may not be correctly updated if:
    - you have created the BI Link IDs while using PAS in client/server mode, or
    - you have enabled multi-user access for remote desktop. In this case an additional lsdal.ini file may appear in the user-specific Windows directory of someone who connected remotely to create the Link ID; that additional lsdal.ini file is only valid for that specific user.
    Also try adding an "ODBOSECURITY=0" entry to the end of the c:\windows\lsserver.ini file on the server.
    hope it helps
    regards
    laksh

  • Optimizing the Query  joining two tables multiple times

    Hi all,
    I need to formulate a query where I want to get data from two tables. Here are the table structures and sample data.
    Table1
    id firstname lastname accountnumber
    1 Sridh Peter SP456
    2 Gane San SS667
    3 Sway patel PP345
    Table 2
    id attributename attributevalue
    1 Manager Mike
    1 Lawyer Schwa
    1 Server maneka
    1 location langur
    1 System Novel
    2 Manager kane
    2 lawyer endun
    2 location colrado
    3 server queen
    3 system elanda
    The requirement is that I need to generate a report like the following:
    Accountnumber  firstname  lastname  manager  lawyer  System  Server  location
    SP456          Sridh      Peter     Mike     schwa   Novel   maneka  langur
    SS667          Gane       San       kane     endun                   colrado
    Now I have built this report using a query where I join table1 and table2 multiple times to get the report's data. That query only works if the user has all attributes; if any one attribute is missing it won't work. Can someone help me with this?
    The query I am using looks like this:
    select a.accountnumber, a.firstname, a.lastname, b.attributevalue, c.attributevalue, d.attributevalue, e.attributevalue, f.attributevalue
    from table1 a, table2 b, table2 c, table2 d, table2 e, table2 f
    where a.id = b.id and a.id = c.id and a.id = d.id and a.id = e.id and a.id = f.id
    and b.attributename = 'manager' and c.attributename = 'lawyer' and d.attributename = 'system' and e.attributename = 'server' and f.attributename = 'location'
    This query works well if a user has all attributes; if any one is missing, that user is not shown in the report. Can someone suggest a better way of querying than this?
    The query I am using is also taking a lot of time. I think I have explained my question well; please reply if you have questions.
    Thanks for reading till here patiently,
    Pandu

    ....if this .....
    select Accountnumber || ' ' || firstname || ' ' || lastname || ' ' || manager || ' ' || System || ' ' || Server || ' ' || location
    from (select *
          from (select '1' id, 'Sridh' firstname, 'Peter' lastname, 'SP456' accountnumber from dual union
                select '2' id, 'Gane' firstname, 'San' lastname, 'SS667' accountnumber from dual union
                select '3' id, 'Sway' firstname, 'patel' lastname, 'PP345' accountnumber from dual) x,
               (select *
                from (select id,
                             attributename,
                             lead(attributevalue, 0) over (partition by id order by id) as Manager,
                             lead(attributevalue, 1) over (partition by id order by id) as Lawyer,
                             lead(attributevalue, 2) over (partition by id order by id) as System,
                             lead(attributevalue, 3) over (partition by id order by id) as Server,
                             lead(attributevalue, 4) over (partition by id order by id) as Location
                      from (select *
                            from (select '1' id, 'Manager' attributename, 'Mike' attributevalue from dual union
                                  select '1' id, 'Lawyer' attributename, 'Schwa' attributevalue from dual union
                                  select '1' id, 'Server' attributename, 'maneka' attributevalue from dual union
                                  select '1' id, 'location' attributename, 'langur' attributevalue from dual union
                                  select '1' id, 'System' attributename, 'Novel' attributevalue from dual union
                                  select '2' id, 'Manager' attributename, 'kane' attributevalue from dual union
                                  select '2' id, 'lawyer' attributename, 'endun' attributevalue from dual union
                                  select '2' id, 'location' attributename, 'colrado' attributevalue from dual union
                                  select '3' id, 'server' attributename, 'queen' attributevalue from dual union
                                  select '3' id, 'system' attributename, 'elanda' attributevalue from dual)
                            order by id, (case when attributename = 'Manager' then 1
                                               when attributename = 'Lawyer' then 2
                                               when attributename = 'System' then 3
                                               when attributename = 'Server' then 4
                                               when attributename = 'Location' then 5 end) asc))
                where attributename = 'Manager') y
          where x.id(+) = y.id)
    < Jonel
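    A note on the approach above: the LEAD offsets rely on every id having the same number of attribute rows in a fixed order, which is exactly what breaks when an attribute is missing. A sketch of an alternative that tolerates missing attributes (table and column names are the poster's sample ones) is an outer join with conditional aggregation:
    select a.accountnumber,
           a.firstname,
           a.lastname,
           max(case when lower(b.attributename) = 'manager'  then b.attributevalue end) as manager,
           max(case when lower(b.attributename) = 'lawyer'   then b.attributevalue end) as lawyer,
           max(case when lower(b.attributename) = 'system'   then b.attributevalue end) as system_val,
           max(case when lower(b.attributename) = 'server'   then b.attributevalue end) as server_val,
           max(case when lower(b.attributename) = 'location' then b.attributevalue end) as location
    from table1 a
    left join table2 b on b.id = a.id
    group by a.accountnumber, a.firstname, a.lastname;
    -- LOWER() copes with the mixed-case attribute names in the sample data;
    -- users missing an attribute simply get NULL in that column.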

  • Need help in optimizing the query with joins and group by clause

    I am having a problem executing the query below; it is taking a lot of time. To simplify, I have included the two tables: FILE_STATUS, which stores the file load details, and COMM, the actual business commission table showing which records were successfully processed and which were transmitted to the other system. Records with status 'T' have been transmitted to the other system, and transactions with 'P' are pending.
    CREATE TABLE FILE_STATUS
    (FILE_ID VARCHAR2(14),
    FILE_NAME VARCHAR2(20),
    CARR_CD VARCHAR2(5),
    TOT_REC NUMBER,
    TOT_SUCC NUMBER);
    CREATE TABLE COMM
    (SRC_FILE_ID VARCHAR2(14),
    REC_ID NUMBER,
    STATUS CHAR(1));
    INSERT INTO FILE_STATUS VALUES ('12345678', 'CM_LIBM.TXT', 'LIBM', 5, 4);
    INSERT INTO FILE_STATUS VALUES ('12345679', 'CM_HIPNT.TXT', 'HIPNT', 4, 0);
    INSERT INTO COMM VALUES ('12345678', 1, 'T');
    INSERT INTO COMM VALUES ('12345678', 3, 'T');
    INSERT INTO COMM VALUES ('12345678', 4, 'P');
    INSERT INTO COMM VALUES ('12345678', 5, 'P');
    COMMIT;
    Here is the query that I wrote to give me the details of the file that has been loaded into the system. It reads the file status and commission tables to show the file name, total records loaded, total records successfully loaded into the commission table, and the number of records finally transmitted (status = 'T') to other systems.
    SELECT
        FS.CARR_CD
        ,FS.FILE_NAME
        ,FS.FILE_ID
        ,FS.TOT_REC
        ,FS.TOT_SUCC
        ,NVL(C.TOT_TRANS, 0) TOT_TRANS
    FROM FILE_STATUS FS
    LEFT JOIN (
        SELECT SRC_FILE_ID, COUNT(*) TOT_TRANS
        FROM COMM
        WHERE STATUS = 'T'
        GROUP BY SRC_FILE_ID
    ) C ON C.SRC_FILE_ID = FS.FILE_ID
    WHERE FILE_ID = '12345678';
    In production this query has more joins and is taking a lot of time to process. The main culprit for me is the join on the COMM table to get the count of transmitted transactions. Can you please give me tips to optimize this query to get results faster? Do I need to remove the GROUP BY and use a PARTITION or something else? Please help!

    I get 2 rows if I use my query with your new criteria. Did you commit the record if you are using a second connection to query? Did you remove the criteria for file_id?
    select carr_cd, file_name, file_id, tot_rec, tot_succ, tot_trans
      from (select fs.carr_cd,
                   fs.file_name,
                   fs.file_id,
                   fs.tot_rec,
                   fs.tot_succ,
                   count(case
                            when c.status = 'T' then
                             1
                            else
                             null
                          end) over(partition by c.src_file_id) tot_trans,
                   row_number() over(partition by c.src_file_id order by null) rn
              from file_status fs
              left join comm c
                on c.src_file_id = fs.file_id
             where carr_cd = 'LIBM')
    where rn = 1;
    CARR_CD FILE_NAME            FILE_ID           TOT_REC   TOT_SUCC  TOT_TRANS
    LIBM    CM_LIBM.TXT          12345678                5          4          2
    LIBM    CM_LIBM.TXT          12345677               10          0          0
    Using RANK can potentially produce multiple rows though your data may prevent this. ROW_NUMBER will always prevent duplicates. The ordering of the analytic function is irrelevant in your query if you use ROW_NUMBER. You can remove the outermost query and inspect the data returned by the inner query:
    select fs.carr_cd,
           fs.file_name,
           fs.file_id,
           fs.tot_rec,
           fs.tot_succ,
           count(case
                    when c.status = 'T' then
                     1
                    else
                     null
                  end) over(partition by c.src_file_id) tot_trans,
           row_number() over(partition by c.src_file_id order by null) rn
    from file_status fs
    left join comm c
    on c.src_file_id = fs.file_id
    where carr_cd = 'LIBM';
    CARR_CD FILE_NAME            FILE_ID           TOT_REC   TOT_SUCC  TOT_TRANS         RN
    LIBM    CM_LIBM.TXT          12345678                5          4          2          1
    LIBM    CM_LIBM.TXT          12345678                5          4          2          2
    LIBM    CM_LIBM.TXT          12345678                5          4          2          3
    LIBM    CM_LIBM.TXT          12345678                5          4          2          4
    LIBM    CM_LIBM.TXT          12345677               10          0          0          1
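    If only the per-file transmit count is needed, a plain outer join with GROUP BY (a sketch against the DDL posted above) gives the same result without the analytic COUNT and the ROW_NUMBER filter:
    SELECT fs.carr_cd,
           fs.file_name,
           fs.file_id,
           fs.tot_rec,
           fs.tot_succ,
           -- Counts only transmitted rows; the outer join keeps files with none.
           COUNT(CASE WHEN c.status = 'T' THEN 1 END) AS tot_trans
    FROM   file_status fs
    LEFT JOIN comm c
      ON   c.src_file_id = fs.file_id
    WHERE  fs.file_id = '12345678'
    GROUP BY fs.carr_cd, fs.file_name, fs.file_id, fs.tot_rec, fs.tot_succ;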

  • Query contain inner join to external DB

    Hi every body,
    I have created a new SQL query that contains an inner join to an external DB of the production software. The query also uses parameters to filter data by date.
    I succeeded in creating a query that works, including the inner join to the external DB and the parameters.
    Here is the query:
    SELECT T0.DocNum, T0.DocDate, T0.CardCode, T0.CardName, T1.ItemCode, T1.Dscription, T1.Quantity, T1.TotalSumSy, T1.LineTotal, T0.Canceled, T2.Family
    FROM [XXX].dbo.AllItems T2
      RIGHT OUTER JOIN INV1 T1 ON T1.ItemCode = T2.ItemPN collate SQL_Latin1_General_CP1_CI_AS
      INNER JOIN OINV T0 ON T1.DocEntry = T0.DocEntry
      INNER JOIN OITM T3 ON T1.ItemCode = T3.ItemCode
      INNER JOIN OITB T4 ON T3.ItmsGrpCod = T4.ItmsGrpCod
      INNER JOIN OCRD T5 ON T0.CardCode = T5.CardCode
      INNER JOIN OACT T6 ON T1.AcctCode = T6.AcctCode
    WHERE T0.DocDate BETWEEN [%0] AND [%1]
    But when I add a UNION statement, the query no longer works.
    If I remove the parameters, everything works fine.
    What is the problem with my query?
    Thank you all very much.

    SBO does not correctly manage variables in complicated queries.  There is a note about it:
    Note 730960 - SAP Business One does not identify variables in long queries
    https://websmp130.sap-ag.de/sap/bc/bsp/spn/sapnotes/index2.htm?numm=730960
    This note describes the method of getting the parameters into variables and their usage.
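    A commonly used workaround (a sketch of the pattern, not quoted from the note) is to let SAP Business One resolve the [%0]/[%1] tokens in a small commented-out SELECT, copy them into T-SQL variables, and then reference only the variables in the long UNION query:
    /* SELECT FROM OINV T0 */
    DECLARE @FromDate AS DATETIME
    DECLARE @ToDate   AS DATETIME
    /* WHERE */
    SET @FromDate = /* T0.DocDate */ '[%0]'
    SET @ToDate   = /* T0.DocDate */ '[%1]'
    SELECT T0.DocNum, T0.DocDate, T0.CardCode
    FROM   OINV T0
    WHERE  T0.DocDate BETWEEN @FromDate AND @ToDate
    -- the other branches of the UNION would likewise use @FromDate/@ToDate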

  • Rewrite the query without joins and group by

    Hi,
    This was an interview question.
    Table Names: bookshelf_checkout
    bookshelf
    And the join condition between these two tables is title
    We need to rewrite the query below without using a join condition or a GROUP BY clause:
    SELECT b.title,max(bc.returned_date - bc.checkout_date) "Most Days Out"
               FROM bookshelf_checkout bc,bookshelf b
               WHERE bc.title(+)=b.title
               GROUP BY b.title;
    When I was in college, I read that most SELECT statements can be replaced by basic SQL operations (set operators). Now I am trying to rewrite the query with set operators but am not able to get the exact result.
    Kindly help me on this.
    Thanks,
    Suri

    Something like this?
      1  WITH books AS (
      2  SELECT 'title 1' title FROM dual UNION ALL
      3  SELECT 'title 2' FROM dual UNION ALL
      4  SELECT 'title 3' FROM dual ),
      5  bookshelf AS (
      6  SELECT 'title 1' title, DATE '2012-05-01' checkout_date, DATE '2012-05-15' returned_date FROM dual UNION ALL
      7  SELECT 'title 1' title, DATE '2012-05-16' checkout_date, DATE '2012-05-20' returned_date FROM dual UNION ALL
      8  SELECT 'title 2' title, DATE '2012-04-01' checkout_date, DATE '2012-05-15' returned_date FROM dual )
      9  SELECT bs.title, MAX(bs.returned_date - bs.checkout_date) OVER (PARTITION BY title) FROM bookshelf bs
    10  UNION
    11  (SELECT b.title, NULL FROM books b
    12  MINUS
    13* SELECT bs.title, NULL FROM bookshelf bs)
    SQL> /
    TITLE   MAX(BS.RETURNED_DATE-BS.CHECKOUT_DATE)OVER(PARTITIONBYTITLE)
    title 1                                                           14
    title 2                                                           44
    title 3
    Lukasz

  • Help required in optimizing the query response time

    Hi,
    I am working on an application which uses a JDBC thin client. My requirement is to select all the rows in one table and use the column values to select data in another table in another database.
    The first table can have a maximum of 6 million rows, but the second table will only have around 9,000 rows.
    My first query returns within 30-40 milliseconds when the table has 200,000 rows. But when I iterate over the result set and query the second table, each of those queries takes around 4 milliseconds.
    The second query's selection criterion is to find the value within a range,
    for example my_table (column1 VARCHAR2, start_range VARCHAR2, end_range VARCHAR2).
    My first query returns a result which is then used to select with the following query:
    select column1 from my_table where start_range < my_value and end_range> my_value;
    I have created an index on start_range and end_range. This query takes around 4 milliseconds, which I think is too much.
    I am using a preparedStatement for the second query loop.
    Can someone suggest how I can improve the query response time?
    Regards,
    Shyam

    Try the code below.
    Prerequisite: you should know how to pass ARRAY objects to Oracle and receive result sets back in Java. There are thousands of samples available on the net.
    I have written sample DB code for the same interaction.
    Procedure get_list takes an array input from Java and returns the record set back to Java. You can change the table names and the criteria.
    Good luck.
    DROP TYPE idlist;
    CREATE OR REPLACE TYPE idlist AS TABLE OF NUMBER;
    CREATE OR REPLACE PACKAGE mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor);
    END mypkg1;
    CREATE OR REPLACE PACKAGE BODY mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor)
       AS
          ctr   NUMBER;
       BEGIN
          DBMS_OUTPUT.put_line (myval_list.COUNT);
          FOR x IN (SELECT object_name, object_id, myvalue
                      FROM user_objects a,
                           (SELECT myval_list (ROWNUM + 1) myvalue
                              FROM TABLE (myval_list)) b
                     WHERE a.object_id < b.myvalue)
          LOOP
             DBMS_OUTPUT.put_line (   x.object_name
                                   || ' - '
                                   || x.object_id
                                   || ' - '
                                    || x.myvalue);
           END LOOP;
       END;
    END mypkg1;
    Testing the code above. Make sure DBMS_OUTPUT is ON.
    DECLARE
       a      idlist;
       refc   sys_refcursor;
       c number;
    BEGIN
       SELECT x.nu
       BULK COLLECT INTO a
         FROM (SELECT 5000 nu
                 FROM DUAL) x;
       mypkg1.get_list (a, refc);
    END;
    Vishal V.
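    Depending on the setup, another option is to avoid the per-row round trip altogether and let the database resolve the range lookup as one set-based join. A sketch only: driving_table and my_value stand in for whatever the first query selects, and the join assumes both tables are reachable from one session (for example over a database link):
    SELECT d.my_value,
           m.column1
    FROM   driving_table d
           JOIN my_table m
             ON  d.my_value > m.start_range
             AND d.my_value < m.end_range;
    -- one statement instead of ~200,000 separate 4 ms lookups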

  • Please help in optimizing the query

    I have a query written to check that an entered start and end date do not overlap with the start and end dates already present in the database records. Can someone help in optimizing this query? All inclusion and exclusion scenarios have to be taken care of.
    the query is as follows:
    SELECT COUNT(*) FROM CLAS WHERE TRIM(UPPER(CLAS_CDE)) =UPPER('timecheck') AND TRIM(UPPER(CLAS_TYPE_CDE))=UPPER('TEST_3')
    AND TRIM(UPPER(LANG_CDE))=UPPER('en')
    AND (
    (END_DT BETWEEN TO_DATE('09/13/2007','MM/DD/YYYY') AND TO_DATE('09/15/2007','MM/DD/YYYY'))
    OR
    (START_DT BETWEEN TO_DATE('09/13/2007','MM/DD/YYYY') AND TO_DATE('09/15/2007','MM/DD/YYYY'))
    OR (
    (START_DT BETWEEN TO_DATE('09/13/2007','MM/DD/YYYY') AND TO_DATE('09/15/2007','MM/DD/YYYY'))
    AND
    (END_DT BETWEEN TO_DATE('09/13/2007','MM/DD/YYYY') AND TO_DATE('09/15/2007','MM/DD/YYYY'))
    OR(('09/13/2007' BETWEEN TO_CHAR(START_DT,'MM/DD/YYYY') AND TO_CHAR(END_DT,'MM/DD/YYYY'))
    AND ('09/15/2007' BETWEEN TO_CHAR(START_DT,'MM/DD/YYYY') AND TO_CHAR(END_DT,'MM/DD/YYYY'))
    );

    I formatted your code in a different way:
    SELECT COUNT(*)
    FROM CLAS
    WHERE TRIM(UPPER(CLAS_CDE)) = UPPER('timecheck')
    AND TRIM(UPPER(CLAS_TYPE_CDE))=UPPER('TEST_3')
      AND TRIM(UPPER(LANG_CDE))=UPPER('en')
       AND (
         (END_DT BETWEEN TO_DATE('09/13/2007','MM/DD/YYYY') AND TO_DATE('09/15/2007','MM/DD/YYYY'))
            OR
         (START_DT BETWEEN TO_DATE('09/13/2007','MM/DD/YYYY') AND TO_DATE('09/15/2007','MM/DD/YYYY'))
            OR (
              (START_DT BETWEEN TO_DATE('09/13/2007','MM/DD/YYYY') AND TO_DATE('09/15/2007','MM/DD/YYYY'))
              AND
              (END_DT BETWEEN TO_DATE('09/13/2007','MM/DD/YYYY') AND TO_DATE('09/15/2007','MM/DD/YYYY'))
         OR (
              ('09/13/2007' BETWEEN TO_CHAR(START_DT,'MM/DD/YYYY') AND TO_CHAR(END_DT,'MM/DD/YYYY'))
              AND
              ('09/15/2007' BETWEEN TO_CHAR(START_DT,'MM/DD/YYYY') AND TO_CHAR(END_DT,'MM/DD/YYYY'))
           );
    First thought that crossed my mind is to check the effect of changing UPPER('timecheck') to simply 'TIMECHECK', and doing the same with the other expressions that use the UPPER function.
    Peter D.
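    On the overlap test itself: two date ranges overlap exactly when each one starts before the other one ends, so the whole OR chain can usually be collapsed to a single pair of comparisons. A sketch with the same literal dates (check the intended inclusive/exclusive boundaries before relying on it):
    SELECT COUNT(*)
    FROM   clas
    WHERE  TRIM(UPPER(clas_cde))      = 'TIMECHECK'
    AND    TRIM(UPPER(clas_type_cde)) = 'TEST_3'
    AND    TRIM(UPPER(lang_cde))      = 'EN'
    -- existing [start_dt, end_dt] overlaps [13-Sep-2007, 15-Sep-2007]
    AND    start_dt <= TO_DATE('09/15/2007', 'MM/DD/YYYY')
    AND    end_dt   >= TO_DATE('09/13/2007', 'MM/DD/YYYY');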

  • Optimizing the query - which takes more time

    Hi,
    I have a query which was returning results pretty fast a week ago, but now the same query takes much longer to respond; nothing much has changed in the table data, so what could be the problem? I am using IN in the WHERE clause; could that be an issue? If so, what is the best way to rewrite the query?
    SELECT RI.RESOURCE_NAME, TR.MSISDN,
           MAX(TR.ADDRESS1_GOOGLE) KEEP (DENSE_RANK LAST ORDER BY TR.MSG_DATE_INFO) ADDRESS1_GOOGLE,
           MAX(TR.TIME_STAMP) MSG_DATE_INFO
    FROM   TRACKING_REPORT TR, RESOURCE_INFO RI
    WHERE  TR.MSISDN IN (SELECT MSISDN FROM RESOURCE_INFO WHERE GROUP_ID = '4' AND COM_ID = '12')
    AND    RI.MSISDN = TR.MSISDN
    GROUP BY RI.RESOURCE_NAME, TR.MSISDN
    ORDER BY MSG_DATE_INFO DESC

    Hi
    I have followed this link http://www.lorentzcenter.nl/awcourse/oracle/server.920/a96533/sqltrace.htm to enable tracing and got the following trace output. Can you explain the problem here and its remedial action, please?
    SELECT RI.RESOURCE_NAME, TR.MSISDN,
           MAX(TR.ADDRESS1_GOOGLE) KEEP (DENSE_RANK LAST ORDER BY TR.MSG_DATE_INFO) ADDRESS1_GOOGLE,
           MAX(TR.TIME_STAMP) MSG_DATE_INFO
    FROM   TRACKING_REPORT TR, RESOURCE_INFO RI
    WHERE  RI.GROUP_ID = '426'
    AND    RI.COM_ID = '122'
    AND    RI.MSISDN = TR.MSISDN
    GROUP BY RI.RESOURCE_NAME, TR.MSISDN
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.02          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        6     13.69     389.03      81747     280722          0          72
    total        8     13.70     389.05      81747     280722          0          72
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 281 
    Rows     Row Source Operation
         72  SORT GROUP BY
    276558   NESTED LOOPS 
         79    TABLE ACCESS FULL RESOURCE_INFO
    276558    TABLE ACCESS BY INDEX ROWID TRACKING_REPORT
    276558     INDEX RANGE SCAN TR_INDX_ON_MSISDN_TIME (object id 60507)
    ********************************************************************************
    and the plan_table output is:
    ID  Operation                    Options          Object Name                        Cost  Rows  Bytes  Predicates
     0  SELECT STATEMENT (CHOOSE)                                                         115  1058  111090
     1    SORT                      GROUP BY                                              115  1058  111090
     2      NESTED LOOPS                                                                    9  4603  483315
     3        TABLE ACCESS          FULL             BSNL_RTMS.RESOURCE_INFO                8     1      30  "RI"."GROUP_ID"=426 AND "RI"."COM_ID"='122'
     4        TABLE ACCESS          BY INDEX ROWID   BSNL_RTMS.TRACKING_REPORT              1  3293  246975
     5          INDEX               RANGE SCAN       BSNL_RTMS.TR_INDX_ON_MSISDN_TIME       1  3293          "RI"."MSISDN"="TR"."MSISDN"

  • SQ01 Query issue - Table join

    Hi Gurus,
    I need to make a change to an already created query that generates a report on billing documents. The tables being used in it are VBRK, VBRP, VBPA & KNA1.
    Now I would like to add the purchase order number as a field in the field group of the related InfoSet in SQ02.
    The issue is that when I include table VBKD (field VBKD-BKSTD) to fetch the PO number and join it in the InfoSet, the join is created, but the actual query no longer works in SQ00 or SQ01 after this change to the InfoSet.
    Can you please suggest whether I am doing something wrong in joining the tables, or whether I should try joining some other table to meet this requirement?
    Kindly help and thanks in advance for your suggestions!!
    Regards
    Bawa

    Hi Shiva,
    The issue is that I cannot use the field VBRP-AUBEL, as it contains the document number of the referenced sales document, which in my case is a sales-order document number.
    Kindly suggest whether I am doing something wrong in the join, or whether there are steps involving the table join that I still need to take care of?
    Thanks
    Bawa
    Edited by: Bawa Bawa on Sep 23, 2009 9:44 PM
    Edited by: Bawa Bawa on Sep 24, 2009 9:32 AM

  • How to query a table join in SAP?

    Hi,
    We are rolling out SAP and our ABAPers are wondering how to debug production problems.  From our experience with PeopleSoft, there is often a need to query tables directly in production.  These queries are often joins.  SAP provides SE16 to query a single table, but how do SDN'ers query a join in PROD?
    For a basic example, how would you join a sales item and sales header?  We are thinking of using Oracle OpenSQL to have direct table access.  Solutions that somehow use SAP security for developers to access production are good!
    Any suggestions are welcome and points will be rewarded!!
    Peter

    Peter,
    Be careful if your company is large enough to have been impacted by Sarbanes-Oxley 404.  Your external auditing firm should be engaged in whatever approach you take.
    This type of information/querying is now much more monitored/controlled for SOX 404-impacted organizations.
    Example - SQVI has been removed from all PRD instances in our company because of SOX 404.

  • Optimizing the query performance?

    When I run this query it takes 2-3 hours to return the data. It pulls data starting from December 2008. How can I optimize the performance of the query below?
    SELECT e1.Eventdate,
    TO_CHAR(TO_DATE(e1.eventdate, 'mmddyyyy'), 'MM/DD/YYYY HH24:MI:SS'),
    e1.USERID,
    Vu.display_name_FMLS as displayname,
         e1.eventdetail1,
    e1.eventtype
    FROM event e1,
    View_User VU
    WHERE ((Vu.USERid = e1.USERID) or (Vu.UserLanid = e1.Userid))
    AND Vu.Useractive ='1' and
    e1.eventtype = 'SRCH'
    AND e1.eventresults = '1'
    AND NOT EXISTS (
    SELECT 1
    FROM event e2
    WHERE e2.userid = e1.userid
    AND e2.eventtype IN ('OPENTPC', 'OPENFAQ', 'KEYFACT')
    AND e2.eventrecordid =
    (SELECT MIN (eventrecordid)
    FROM event e3
    WHERE e3.userid = e2.userid
    AND e3.eventrecordid > e1.eventrecordid))

    Not quite sure what your requirements are, but just looking at this you are selecting from the same table (EVENT) 3 times. Couldn't you just get everything you need from the EVENT table up front and then use CASE statements or aggregate functions to pull just the rows you want?
    Maybe if you gave a bit of sample data and what the output should look like, someone might come up with an approach that doesn't involve multiple selects from the same table.
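    Not the poster's code, but one way to express the "no follow-up event" rule in a single pass over EVENT is an analytic LEAD of each user's next event (a sketch of that idea only; the join to VIEW_USER and the date formatting from the original query would still need to be added):
    SELECT eventdate, userid, eventdetail1, eventtype
    FROM  (SELECT e.*,
                  -- event type of the immediately following event for the same user
                  LEAD(e.eventtype) OVER (PARTITION BY e.userid
                                          ORDER BY e.eventrecordid) AS next_eventtype
           FROM   event e)
    WHERE  eventtype    = 'SRCH'
    AND    eventresults = '1'
    AND   (next_eventtype IS NULL
           OR next_eventtype NOT IN ('OPENTPC', 'OPENFAQ', 'KEYFACT'));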

  • Remove Power Query queries while keeping the query output tables

    I use Power Query to load data from external sources into several Excel tables. Before sending this Excel file to a client, I would like to remove all Power Query queries (M code) while keeping the output/query tables in place.
    My current workaround is:
    unload Power Query
    convert each table to range
    load Power Query
    delete queries (M code)
    Is there a better/faster way to achieve what I want?

    You can unlink the existing tables and remove custom XML data in Document Inspector. No need to duplicate the worksheet.
    Exactly. And even unlinking the tables is not necessary for the Document Inspector to remove the custom XML of PQ.
    Do you have any suggestion on how to trigger the cleaning of PQ XML code via VBA? 
    the following code does not work:
    ActiveWorkbook.RemoveDocumentInformation (xlRDIAll)
