Query runs fast in DB but slow in OBIEE

Hello guys,
I am running some reports on the dashboard and they take forever to come up. I captured the generated SQL and ran it in Toad, and it took less than 2 minutes to return the data.
What area of OBIEE configuration should I check to see what's happening?
Many thanks

Thanks. Let me show you the query I am executing:
select T1011554.PECODE as c1,
T1011581.DATE as c2,
T1011581.DAYOFWEEK as c3
from
RD_TOT.TRANSACTIONTYPE T1011554 /*Transaction */ ,
RD_TOT.V_FACTS_SNAPSHOT_12 T1011580 /* Facts */ ,
RD_TOT.DATES T1011581 /* Date */
where T1011554.TRANSACTIONTYPEID = T1011580.TRANSACTIONTYPEID
  and T1011554.TRANSACTIONTYPECODE = '2'
  and T1011581.DAYOFWEEK = 'Monday'
  and T1011580.DOCUMENTCOMPANY = T1011581.COMPANYNUMBER
  and T1011580.SNAPSHOTDATE = T1011581.DATES
  and T1011581.DATES < TO_DATE('2009-10-28', 'YYYY-MM-DD')
When I run this query in Toad it returns in under 30 seconds, but from OBIEE it takes forever. I am using Oracle 10g as the database. I am sure a lot of rows are being fetched, because if I take the result I get in Toad and do a record count, that process also takes forever.
There might be some configuration I need to change in OBIEE or in a config file. This is just my feeling, but I'd like to hear your thoughts and suggestions.
Much appreciated!
Thanks
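
One thing worth checking before digging into OBIEE configuration: Toad normally fetches only the first batch of rows, so the 30-second figure may not include the full fetch that OBIEE has to perform. A fair comparison is to time the complete fetch of the same generated SQL in SQL*Plus; a minimal sketch follows (the date column is referenced as DATES, as in the WHERE clause above; the arraysize value is only illustrative, and autotrace may require the PLUSTRACE role):
set timing on
set arraysize 500
set autotrace traceonly statistics
select T1011554.PECODE as c1, T1011581.DATES as c2, T1011581.DAYOFWEEK as c3
from RD_TOT.TRANSACTIONTYPE T1011554,
     RD_TOT.V_FACTS_SNAPSHOT_12 T1011580,
     RD_TOT.DATES T1011581
where T1011554.TRANSACTIONTYPEID = T1011580.TRANSACTIONTYPEID
  and T1011554.TRANSACTIONTYPECODE = '2'
  and T1011581.DAYOFWEEK = 'Monday'
  and T1011580.DOCUMENTCOMPANY = T1011581.COMPANYNUMBER
  and T1011580.SNAPSHOTDATE = T1011581.DATES
  and T1011581.DATES < TO_DATE('2009-10-28', 'YYYY-MM-DD');
If the full fetch is slow here as well, the bottleneck is the volume of rows being returned rather than anything specific to OBIEE.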

Similar Messages

  • Strange problem... Query runs faster, but report runs slow...

    Hi Gurus,
    We are using Reports 10g on a 10g Application Server on Solaris. We created a report on a table which has 10,000 rows; the report has 25 columns. When we run the query in Toad it takes 12 seconds to fetch all 10,000 rows.
    But when we run the report with Destype=FILE and Desformat=DELIMITEDDATA, it takes 5 to 8 minutes to open in Excel (we concatenate mimetype=vnd-msexcel at the end of the URL when Destype=FILE). We removed the layout from the report because it was taking 10 to 15 minutes to run to screen with Desformat=HTML/PDF (formatting the pages takes more time). We are wondering why the DELIMITEDDATA format takes so long when it only runs the query.
    Does RWSERVLET take more time writing the data to the physical file in the cache directory? Our cache size is 1 GB. We have 2 report servers clustered. Tracing is off.
    Please advise me if there are any report server settings to boost performance.
    Thanks a lot,
    Ram.
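    For reference, a typical rwservlet URL with the parameters mentioned above might look like the line below; the server, port, report name and output path are placeholders rather than values from this thread, and the mimetype value is simply the one quoted above:
    http://myserver:7778/reports/rwservlet?report=myreport.rdf&destype=FILE&desformat=DELIMITEDDATA&desname=/tmp/myreport.out&mimetype=vnd-msexcel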

    Duplicate of Strange problem... Query runs faster, but report runs slow... in the Reports forum.
    [Thread closed]

  • What are the ways to make Query run fast?

    Hi Experts,
    When a query runs slow, we generally go for creating an aggregate. My doubt is: what else can be done to make a query run faster before creating an aggregate? And what is the rule of thumb for creating an aggregate?
    Regards,
    Shreeem

    Hi Shreem,
    If you keep the query simple and do not complicate it with runtime calculations, it will run smoothly. However, per business requirements we will mostly have to go for them anyway.
    Regarding aggregates:
    Please do not use the standard proposal; it will give you hundreds of aggregates based on standard rules, which consume a lot of space and add to load times. If you have users already using the query and you are planning to tune it, then go to the statistics tables:
    1. RSDDSTAT_OLAP - find the query with long runtimes and get the STEPUID (a rough SQL sketch of this lookup is below).
    2. RSDDSTAT_DM
    3. RSDDSTATAGGRDEF - use the STEPUID from step 1 to see which aggregate is necessary for which cube.
    Another way to check: find the highest-runtime users as in step 1, look up the last-used bookmarks for those users through RSZWBOOKMARK for this query, check whether the times match, and create the aggregates as in step 3 above.
    You can also use transaction RSRT > Execute + Debug (Display Statistics) to create generic aggregates that support navigation for new queries, and refine them later as above.
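    As a rough illustration of the step 1 lookup, it could also be done with a plain SQL query against the statistics table. This is only a sketch - the column names (STEPUID, UNAME, RUNTIME) are assumptions, so check the actual table definition in SE11 before using it:
    SELECT STEPUID, UNAME, RUNTIME
      FROM RSDDSTAT_OLAP
     WHERE RUNTIME > 60          -- runtime threshold in seconds, illustrative only
     ORDER BY RUNTIME DESC;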
    Hope it helps.
    Thanks,
    Ram

  • Does SQL Query run faster with/without Conditions....

    Hi All, forgive my novice question.
    I was just wondering: in general, if we run a SQL query on a single table, does the query run faster with multiple WHERE conditions, or without? What happens as the conditions increase? My table is a big one with 5 million rows and some bitmap indexes defined on it.
    Thanks,
    Kon

    I think it's difficult to give a general rule because there are too many dependencies: whether the columns are indexed or not, how table and index statistics are computed (or not), the session or instance parameters the optimizer may use, the Oracle version, etc.
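    One practical way to see the effect of adding or removing conditions is to compare the optimizer's plan for each variant of the statement; a minimal sketch (the table and column names here are made up for illustration, not taken from this thread):
    EXPLAIN PLAN FOR
      SELECT * FROM big_table WHERE status = 'OPEN' AND region_id = 7;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    With bitmap indexes on low-cardinality columns, additional equality conditions often allow the optimizer to combine the bitmaps (BITMAP AND), so more conditions can make a query faster rather than slower.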

  • Query runs fine in 9i but results to ORA-01652 unable to extend temp in 10g

    Hi,
    We are having issues running a SQL query in 10g. In 9i it runs fine with no problems, but when run in 10g it takes forever and the temp tablespace grows very large, up to 60 GB, until we get the ORA-01652 error due to lack of disk space. This does not occur in 9i, where the query runs in only 20 minutes and does not take up that much temp. The 9i version is 9.2.0.8; the 10g version is 10.2.0.3.

    Here's the SQL query:
    SELECT
    J2.EMPLID,
    TO_CHAR(J2.EFFDT,'YYYY-MM-DD'),
    J2.EFFSEQ,
    J2."ACTION",
    J2.ACTION_REASON,
    TO_CHAR(J2.GRADE_ENTRY_DT,'YYYY-MM-DD'),
    J2.COMPRATE,
    J2.CHANGE_AMT,
    J2.COMP_FREQUENCY,
    J2.STD_HOURS,
    J2.JOBCODE,
    J2.GRADE,
    J2.PAYGROUP,
    PN2.NATIONAL_ID,
    TO_CHAR(PC.CHECK_DT,'YYYY-MM-DD'),
    SUM(PO.OTH_EARNS),
    To_CHAR(SUM(PO.OTH_EARNS)),
    PO.ERNCD,
    '3',
    TO_CHAR(PC.PAY_END_DT,'YYYY-MM-DD'),
    PC.PAYCHECK_NBR
    FROM PS_JOB J2,
    PS_PERS_NID PN2,
    PS_PAY_OTH_EARNS PO,
    PS_PAY_CHECK PC
    WHERE J2.EMPL_RCD = 0
    AND PN2.EMPLID = J2.EMPLID
    AND PN2.COUNTRY = 'USA'
    AND PN2.NATIONAL_ID_TYPE = 'PR'
    AND J2.COMPANY <> '900'
    AND J2.EFFDT <= SYSDATE
    AND PC.EMPLID = J2.EMPLID
    AND PC.COMPANY = PO.COMPANY
    AND PC.PAYGROUP = PO.PAYGROUP
    AND PC.PAY_END_DT = PO.PAY_END_DT
    AND PC.OFF_CYCLE = PO.OFF_CYCLE
    AND PC.PAGE_NUM = PO.PAGE_NUM
    AND PC.LINE_NUM = PO.LINE_NUM
    AND PC.SEPCHK = PO.SEPCHK
    AND EXISTS (SELECT ERNCD
    FROM PS_P1_CMP_ERNCD P1_CMP
    WHERE P1_CMP.ERNCD = PO.ERNCD AND EFF_STATUS = 'A')
    GROUP BY J2.EMPLID,
    J2.EFFDT,
    J2.EFFSEQ,
    J2.ACTION,
    J2.ACTION_REASON,
    J2.GRADE_ENTRY_DT,
    J2.COMPRATE,
    J2.CHANGE_AMT,
    J2.COMP_FREQUENCY,
    J2.STD_HOURS,
    J2.JOBCODE,
    J2.GRADE,
    J2.PAYGROUP,
    PN2.NATIONAL_ID,
    PC.CHECK_DT,
    PO.ERNCD,
    '3',
    PC.PAY_END_DT,
    PC.PAYCHECK_NBR
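    While the statement runs in 10g, temp consumption can be watched from another session. A minimal sketch (this assumes an 8 KB block size; adjust the factor for your database):
    SELECT s.sid, s.username, u.tablespace, u.segtype,
           u.blocks * 8192 / 1024 / 1024 AS mb_used   -- 8K blocks assumed
      FROM v$tempseg_usage u
      JOIN v$session s ON s.saddr = u.session_addr
     ORDER BY u.blocks DESC;
    This shows how much temp the session has allocated, and for what (SORT, HASH, etc.), before ORA-01652 is finally raised.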

  • Named Query (JOIN query) runs in Toplink JPA but not in EclipseLink

    I have a named query in my JPA entities like (the entities do not include JOIN columns specifically; there is no many-to-many or one-to-many relation in the entities):
    select a from table1 a, table2 b where a.id=b.id
    This named query runs on TopLink Essentials without any problem.
    When I run this query using EclipseLink,
    EclipseLink generates the query below and gives an error:
    select table1.id, table1.name, table1.surname from table1 t0, table2 t1 where t0.id=t1.id
    The error is that "table1.surname" is an invalid identifier, because table1 is not defined as an alias; it should be "t0.surname".
    How can I solve this problem?
    The code runs on TopLink Essentials but not in EclipseLink.


  • Windows Time Running fast, can't seem to slow it down again.

    For some odd reason my system time has started to run fast, about 5 seconds too fast every minute. It started some days ago, and I've tried lots of things, even a fresh Windows SP2 install.
    I read somewhere on the web that it might be a bad BIOS battery, so I changed that, but without any luck.
    I've checked that nothing else is interfering with the RTC on IRQ 8.
    I've checked the BIOS clock and there's nothing wrong with it; it's when it enters Windows XP that things start to go wrong.
    I've tried underclocking my CPU, but that didn't change anything either.
    I'm running out of ideas here, and I hope there's someone out there who might be able to help me, because it annoys the hell out of me.
    Thanks in advance.
    Looking forward to your replies.
    //Keba - system specs are in the signature

    Great description!
    After a BIOS update, APIC (the interrupt controller) can be taken over by the BIOS. It should normally be handled by Windows, which does a lot of its system management through it, for setting priorities etc.
    Updating the BIOS is only necessary when a problem is explicitly said to be fixed that way, and it is recommended to do a repair reinstall of Windows afterwards. Maybe other things and question marks will straighten out as well.
    One should not update the BIOS the way one updates an antivirus program, as often as possible.

  • Query running ok in SQL but giving error in form trigger

    Hi guys,
    here is the query:
    SELECT NVL(BGM_PERAMT,0) INTO PERAMT FROM BROKERAGE_MASTER
    WHERE BGM_BROKERAGETYPE = 'BR01' AND BGM_PERAMT <> 0
    The above query works fine in SQL but gives the following error in a WHEN-BUTTON-PRESSED trigger:
    [In a host language program, all records have been fetched. The return code from the fetch was +4, indicating that all records have been returned from the SQL query.]
    Please give me the solution for this problem.
    It is very urgent.
    Regards,
    asha

    OK, below is the code:
    CURSOR RESALE_BROKERAGE IS
    SELECT am_brokerCd resalebrcd, '' resalesbrcd, NVL(SUM(AM_AMT),0) RESALE_AMT, COUNT(AM_RESALENO) TOTCNT, NVL(SUM(AM_UNITSAPPLD),0) RESALE_UNITS
    FROM RNT_RESALE_MASTER
    WHERE am_brokercd IS NOT NULL AND AM_PROCTAG = 'Y' AND
    (to_date(to_char(AM_PROCDT,'DD/MON/YYYY')) BETWEEN
    to_date(to_char(:rnt_broker_date.fromdt,'DD/MON/YYYY')) AND
    to_date(to_char(:rnt_broker_date.todate,'DD/MON/YYYY')))
    GROUP BY am_brokerCd
    UNION
    SELECT AM_BROKERCD resalebrcd,
    am_subbrokercd resalesbrcd,
    NVL(SUM(AM_AMT),0) RESALE_AMT, COUNT(AM_RESALENO) TOTCNT,
    NVL(SUM(AM_UNITSAPPLD),0) RESALE_UNITS
    FROM RNT_RESALE_MASTER
    WHERE am_brokercd IS NOT NULL AND AM_PROCTAG = 'Y' AND AM_BROKERCD = 'ARN-9760' AND
    (to_date(to_char(AM_PROCDT,'DD/MON/YYYY')) BETWEEN
    to_date(to_char(:rnt_broker_date.fromdt,'DD/MON/YYYY')) AND
    to_date(to_char(:rnt_broker_date.todate,'DD/MON/YYYY')))
    GROUP BY AM_BROKERCD, am_subbrokercd;
    Asha
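    The message quoted in the first post appears to be the long text of ORA-01403 (no data found), which a SELECT ... INTO raises when no row matches. A hedged sketch of how that lookup could be protected inside the trigger (the fallback values are illustrative, not from this thread):
    DECLARE
      peramt BROKERAGE_MASTER.BGM_PERAMT%TYPE;
    BEGIN
      SELECT NVL(BGM_PERAMT, 0)
        INTO peramt
        FROM BROKERAGE_MASTER
       WHERE BGM_BROKERAGETYPE = 'BR01'
         AND BGM_PERAMT <> 0;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        peramt := 0;    -- no matching row: fall back to zero
      WHEN TOO_MANY_ROWS THEN
        peramt := 0;    -- more than one row matches: handle as appropriate
    END;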

  • SQL query is running fast in SQL*Plus but too slow in Oracle - why?

    Hi,
    I am executing one query in SQL*Plus; it gives output in two minutes. When I run that query in Oracle it takes over 7 hours. Why?
    What is the root cause of this problem?

    SQL*Plus is a part of Oracle :) Do you mean SQL Server?

  • First Query Runs Fast.  Subsequent Queries Get Slower

    I am using JDeveloper 11.1.1.6.
    I have a SelectOneChoice.
    I have 2 tables that get updated when the SelectOneChoice changes.
    There are only a few records displayed for each selection.
    When the table initially loads, it loads quickly.
    Each time I change the SelectOneChoice, the table load gets slower and slower.
    Could this be a memory issue?

    Frank,
    I couldn't find any tables in the HR schema alone that I could set up this way.
    I needed a table where each record had multiple records in 2 different tables.
    What I did was use the Employees table from the HR schema and the Orders and Customers tables from the OE schema.
    My goal was to create a test where I would select an employee from a selectOneChoice and have the Orders and Customers table populate based on the Employee selection.
    I created 3 Entity objects (Employees, Customers, Orders).
    This automatically created the appropriate Associations and Links.
    I added an LOV for the EmployeeId field on the Employees table.
    I dragged the EmployeeId field from DataControls to my page as a SelectOneChoice.
    I dragged Orders and Customers from DataControls to my page as tables.
    I setup the properties for each control (AutoSubmit and PartialTriggers).
    I debugged my page.
    As soon as I attempted to change my Employee, I get an error "Too many objects match the primary key oracle.jbo.key[200]".
    Apparently, my goal was not satisfied.
    Any thoughts?

  • Query with same explain-plan but slower in one env

    Hi there,
    I have a stored procedure which is executed from a web application. It contains a query (an insert-select-from statement). When this stored procedure is called by the web application in PROD it takes 13 seconds, but it takes 19 seconds in the TEST environment. I checked the explain plan for this insert statement in both instances and it is the same (see below). Actually, the cost is lower in the TEST environment.
    ENV: Oracle 10gR2 EE, on ASM - RHEL 64-bit
    The TEST server is on better/faster hardware and will become the new PROD in the near future (faster, with 16 CPUs vs 8 in PROD, a high-performance SAN, 132 GB RAM vs 96 GB in PROD, etc.). The TEST database has exactly the same init parameters and version/patch level as the current PROD, so the application is being tested against it at the moment.
    Here are the explain-plans from both environments:
    From PROD Server
    Plan
    INSERT STATEMENT ALL_ROWS Cost: 143 Bytes: 696 Cardinality: 3
    18 SORT ORDER BY Cost: 143 Bytes: 696 Cardinality: 3
    17 HASH UNIQUE Cost: 142 Bytes: 696 Cardinality: 3
    16 WINDOW SORT Cost: 143 Bytes: 696 Cardinality: 3
    15 HASH JOIN Cost: 141 Bytes: 696 Cardinality: 3
    13 HASH JOIN Cost: 128 Bytes: 519 Cardinality: 3
    11 TABLE ACCESS BY INDEX ROWID TABLE MKTG.SATDATAIMPORT Cost: 125 Bytes: 1,728 Cardinality: 12
    10 NESTED LOOPS Cost: 125 Bytes: 1,992 Cardinality: 12
    3 HASH JOIN Cost: 5 Bytes: 22 Cardinality: 1
    1 TABLE ACCESS FULL TABLE MKTG.TMPG_CLICKS_HDGS Cost: 2 Bytes: 12 Cardinality: 1
    2 TABLE ACCESS FULL TABLE MKTG.TMPG_CLICKS_DIRS Cost: 2 Bytes: 10 Cardinality: 1
    9 BITMAP CONVERSION TO ROWIDS
    8 BITMAP AND
    5 BITMAP CONVERSION FROM ROWIDS
    4 INDEX RANGE SCAN INDEX MKTG.SATDATAIMPORT_HEADINGNO Cost: 19 Cardinality: 4,920
    7 BITMAP CONVERSION FROM ROWIDS
    6 INDEX RANGE SCAN INDEX MKTG.SATDATAIMPORT_DIRNO Cost: 89 Cardinality: 4,920
    12 TABLE ACCESS FULL TABLE MKTG.MONTHS12 Cost: 2 Bytes: 84 Cardinality: 12
    14 TABLE ACCESS FULL TABLE MKTG.REF_WEST_CATEGORY Cost: 12 Bytes: 191,809 Cardinality: 3,251
    From TEST Server
    Plan
    INSERT STATEMENT ALL_ROWS Cost: 107 Bytes: 232 Cardinality: 1
    18 SORT ORDER BY Cost: 107 Bytes: 232 Cardinality: 1
    17 HASH UNIQUE Cost: 106 Bytes: 232 Cardinality: 1
    16 WINDOW SORT Cost: 107 Bytes: 232 Cardinality: 1
    15 HASH JOIN Cost: 105 Bytes: 232 Cardinality: 1
    13 HASH JOIN Cost: 93 Bytes: 173 Cardinality: 1
    11 TABLE ACCESS BY INDEX ROWID TABLE MKTG.SATDATAIMPORT Cost: 89 Bytes: 864 Cardinality: 6
    10 NESTED LOOPS Cost: 89 Bytes: 996 Cardinality: 6
    3 HASH JOIN Cost: 7 Bytes: 22 Cardinality: 1
    1 TABLE ACCESS FULL TABLE MKTG.TMPG_CLICKS_HDGS Cost: 3 Bytes: 12 Cardinality: 1
    2 TABLE ACCESS FULL TABLE MKTG.TMPG_CLICKS_DIRS Cost: 3 Bytes: 10 Cardinality: 1
    9 BITMAP CONVERSION TO ROWIDS
    8 BITMAP AND
    5 BITMAP CONVERSION FROM ROWIDS
    4 INDEX RANGE SCAN INDEX MKTG.SATDATAIMPORT_HEADINGNO Cost: 9 Cardinality: 2,977
    7 BITMAP CONVERSION FROM ROWIDS
    6 INDEX RANGE SCAN INDEX MKTG.SATDATAIMPORT_DIRNO Cost: 59 Cardinality: 2,977
    12 TABLE ACCESS FULL TABLE MKTG.MONTHS12 Cost: 3 Bytes: 84 Cardinality: 12
    14 TABLE ACCESS FULL TABLE MKTG.REF_WEST_CATEGORY Cost: 12 Bytes: 191,868 Cardinality: 3,252
    What else can I check to find out why the query is slower in TEST env?
    Please advise.
    Best regards

    Here is some more info. The query is below:
    select distinct dr.line_num 
                     ,row_number() over (partition by di.HEADINGNO,di.DIRECTORYNO order by reportyear,to_number(di.monthno)) monthposition
                     ,di.SATID,di.REPORTYEAR,di.MONTHNO,di.MONTHEN,di.MONTHFR,di.HEADINGNO,hn.NAME_EN,hn.NAME_FR,di.DIRECTORYNO
                     ,di.SUPERDIRECTORYNO,di.PRINTDIRCODE,di.DIRECTORYNAME,round(to_number(di.IMPTTOTAL)) imptotal
                     ,round(to_number(di.IMPBEST)) impbest ,round(to_number(di.IMPTAVERAGE)) imptaverage
                     ,round(to_number(di.CLICKTOTAL)) clicktotal,round(to_number(di.CLICKBEST)) clickbest
                     ,round(to_number(di.CLICKAVERAGE)) clickaverage
                     ,round(avg(to_number(impttotal)) over(partition by di.HEADINGNO,di.DIRECTORYNO)) avgimp
               from satdataimport di,tmpg_clicks_hdgs hd,tmpg_clicks_dirs dr, months12 m12, ref_west_category hn
               where di.headingno   = hd.id
                 and di.directoryno = dr.id
                 and dr.line_num=hd.line_num
                 and di.reportyear  = m12.year
                 and di.monthno     = m12.month
                 and hn.CATEGORY_CODE = di.headingno
               order by di.headingno, di.directoryno,di.reportyear,to_number(di.monthno)
    The largest table in the query is "satdataimport", which has 12,274,818 rows. The rest of the tables are very small, containing from a few rows up to less than 4,000 rows.
    I have refreshed the statistics on the larger table, but this did not help either. Even a simple query like "select count(*) from satdataimport" takes 15 seconds in TEST while it takes 4 seconds in PROD when I run it from TOAD.
    The other strange thing is that when I run this stored procedure from TOAD, it takes 200 milliseconds to complete. (There is a logging table to which the stored procedure records the elapsed time taken by this INSERT statement.)
    Since this query is in a stored procedure called from the web app, the QA team wants a quicker response. The current PROD is faster.
    The tables have the same indexes, etc., and contain data identical to PROD (they were refreshed from PROD yesterday).
    What else can I check?
    Best regards
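    Since the explain plans are identical, one way to see where the time actually goes is to capture runtime row-source statistics in both environments and compare them step by step; a sketch using the simple test case already mentioned above:
    -- run once in each environment with runtime statistics enabled
    SELECT /*+ gather_plan_statistics */ COUNT(*) FROM satdataimport;
    -- then, in the same session, show actual rows, buffers and time per plan step
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    Differences in Buffers or A-Time for the same step between PROD and TEST would point at I/O or storage rather than the optimizer; a 10046 trace from each environment would give the same picture with wait events included.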

  • Query runs in management studio but not in SQLAgent job

    I have the following query which runs fine in Management Studio, but when I put it in a SQL Agent job it fails saying:
    Error formatting query, probably invalid parameters (SQLState 42000, Error 22050)
    I have tried changing the quote characters, but to no avail.
    Does anybody have any idea why this would be happening?
    Regards,
    Ron
    declare @servername nvarchar(150)
     set @servername = @@servername
     declare @mysubject nvarchar(200)
     set @mysubject = 'Toners adjusted out '+@servername+'.'
     EXEC msdb.dbo.sp_send_dbmail @recipients='[email protected]',
     @subject = @mysubject,
     @body = 'Toners were adjusted out. View attachment to see the details',
     @query = 'use livedatabase;select trc_part, trc_job, trc_qty, trc_inits from livedatabase.dbo.Traces
    where trc_part like "TONER%"
    and CAST(trc_date as date) = CAST(getdate() as date)
    and trc_typ = "O"',
     @query_result_width = 600,
     @attach_query_result_as_file = 1

    I have another SQL Agent job that is almost identical in what it does, i.e. sending an email with a query result, and it works fine; see below (one difference between the two is noted after the script).
    Therefore it can't be permissions or the Database Mail setup, but I cannot see what it is.
    --== This is for SQL 2005 and higher. ==--
    --== We will create a temporary table to hold the error log detail. ==--
    --== Before we create the temporary table, we make sure it does not already exist. ==--
     IF OBJECT_ID('tempdb.dbo.ErrorLog') IS Not Null
     BEGIN
     DROP TABLE tempdb.dbo.ErrorLog
     END
     --== We have checked for the existence of the temporary table and dropped it if it was there. ==--
     --== Now, we can create the table called tempdb.dbo.ErrorLog ==--
    CREATE TABLE tempdb.dbo.ErrorLog (Id int IDENTITY (1, 1) NOT NULL,
    logdate DATETIME, procInfo VARCHAR(10), ERRORLOG VARCHAR(MAX))
    --== We create a 3 column table to hold the contents of the SQL Server Error log. ==--
    --== Then we insert the actual data from the Error log into our newly created table. ==--
     INSERT INTO tempdb.dbo.ErrorLog
     EXEC master.dbo.sp_readerrorlog
    --== With our table created and populated, we can now use the info inside of it. ==--
     BEGIN
    --== Set a variable to get our instance name. ==--
    --== We do this so the email we receive makes more sense. ==--
     declare @servername nvarchar(150)
     set @servername = @@servername
    --== We set another variable to create a subject line for the email. ==--
     declare @mysubject nvarchar(200)
     set @mysubject = 'Deadlock event notification on server
    '+@servername+'.'
     --== Now we will prepare and send the email. Change the email address to suite your environment. ==--
     EXEC msdb.dbo.sp_send_dbmail @recipients='[email protected]',
     @subject = @mysubject,
     @body = 'Deadlock has occurred. View attachment to see the deadlock info',
     @query = 'select logdate, procInfo, ERRORLOG from tempdb.dbo.ErrorLog where Id >= (select TOP 1 Id from tempdb.dbo.ErrorLog WHERE ERRORLOG Like ''%Deadlock encountered%'' order by Id DESC)',
     @query_result_width = 600,
     @attach_query_result_as_file = 1
     END
     --== Clean up our process by dropping our temporary table. ==--
     DROP TABLE tempdb.dbo.ErrorLog
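    One difference worth noting (an observation, not a confirmed fix): the working job above uses doubled single quotes inside @query, while the failing job uses double-quoted literals ("TONER%", "O"), which SQL Server only treats as strings when QUOTED_IDENTIFIER is off. A sketch of the failing call rewritten with escaped single quotes, reusing the @mysubject variable declared in the original job:
     EXEC msdb.dbo.sp_send_dbmail @recipients='[email protected]',
     @subject = @mysubject,
     @body = 'Toners were adjusted out. View attachment to see the details',
     @query = 'use livedatabase;select trc_part, trc_job, trc_qty, trc_inits from livedatabase.dbo.Traces
    where trc_part like ''TONER%''
    and CAST(trc_date as date) = CAST(getdate() as date)
    and trc_typ = ''O''',
     @query_result_width = 600,
     @attach_query_result_as_file = 1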

  • Query runs faster when there is no statistics

    Hi Gurus,
    I have an Oracle 10.2.0.1 instance and I'm seeing the following weird behavior. I'd appreciate any pointers or references to resolve this problem.
    Thank you very much.
    1. Run the query below 10 times continuously from SQL*Plus. Elapsed time is around 115 seconds (around 2 minutes) for each execution. Elapsed time is constant, with no increase or decrease. The tables involved in the query have statistics gathered with 100% sampling.
    2. Delete the statistics on the 2 tables involved in the query. Flush the shared pool (alter system flush shared_pool).
    3. Run the query 10 times. Elapsed time is less than 2 seconds for each execution. Elapsed time is constant, with no increase or decrease.
    The query (it is a generated query, so there is no option to modify it):
    select count(distinct itm1.itm_id) FROM ita ita1, ita ita2, itm itm1, itm itm2, itm itm3
    where itm1.itm_container_id = 2812
    and itm1.itm_version_id <= 999999999
    and itm1.itm_next_version_id >= 999999999
    and itm2.itm_primary_key = 'RAYBESTOS'
    and itm3.itm_primary_key = '1'
    and ita1.ita_node_id = 3111
    and itm2.itm_container_id = 2020
    and ita1.ita_item_id = itm1.itm_id
    and ita1.ita_version_id <= 999999999
    and ita1.ita_next_version_id >= 999999999
    and itm2.itm_id = ita1.ita_value_numeric
    and itm2.itm_version_id <= 999999999
    and itm2.itm_next_version_id >= 999999999
    and ita2.ita_node_id = 3118
    and itm3.itm_container_id = 2025
    and ita2.ita_item_id = itm1.itm_id
    and ita2.ita_version_id <= 999999999
    and ita2.ita_next_version_id >= 999999999
    and itm3.itm_id = ita2.ita_value_numeric
    and itm3.itm_version_id <= 999999999
    and itm3.itm_next_version_id >= 999999999;
    The query uses dynamic sampling when there are no statistics.
    The tkprof report shows a small difference in execution plan between the two cases. When there are no statistics there is a table access by index rowid, which may be the reason for the faster response time.
    Rows Row Source Operation
    1 SORT GROUP BY (cr=47235 pr=0 pw=0 time=919461 us)
    758 TABLE ACCESS BY INDEX ROWID TCTG_ITA_ITEM_ATTRIBUTES (cr=47235 pr=0 pw=0 time=600473 us)
    14163 NESTED LOOPS (cr=40652 pr=0 pw=0 time=299694 us)
    7081 NESTED LOOPS (cr=25708 pr=0 pw=0 time=538463 us)
    12771 NESTED LOOPS (cr=90 pr=0 pw=0 time=255699 us)
    1 MERGE JOIN CARTESIAN (cr=6 pr=0 pw=0 time=271 us)
    1 INDEX RANGE SCAN ICTG_ITM_1 (cr=3 pr=0 pw=0 time=74 us)(object id 105409)
    1 BUFFER SORT (cr=3 pr=0 pw=0 time=112 us)
    1 INDEX RANGE SCAN ICTG_ITM_1 (cr=3 pr=0 pw=0 time=43 us)(object id 105409)
    12771 INDEX RANGE SCAN ICTG_ITA_1 (cr=84 pr=0 pw=0 time=102210 us)(object id 105399)
    7081 INDEX RANGE SCAN ICTG_ITM_0 (cr=25618 pr=0 pw=0 time=363715 us)(object id 105408)
    7081 INDEX RANGE SCAN ICTG_ITA_0 (cr=14944 pr=0 pw=0 time=239803 us)(object id 105398)

    Hi Jonathan,
    Thanks again for your response. Yes, you are correct. Most of the rows have all 9s, and a small percentage have something else - and there are only a small number of distinct values.
    Here is the histogram info when there are statistics (some old data has been trimmed for this test).
    TABLE_NAME                            COLUMN_NAME                    NUM_DISTINCT  NUM_BUCKETS  HISTOGRAM
    TCTG_ITA_ITEM_ATTRIBUTES              ITA_COMPANY_ID                            2            2  FREQUENCY
    TCTG_ITA_ITEM_ATTRIBUTES              ITA_CATALOG_ID                           62           62  FREQUENCY
    TCTG_ITA_ITEM_ATTRIBUTES              ITA_ITEM_ID                          720867            1  NONE
    TCTG_ITA_ITEM_ATTRIBUTES              ITA_NODE_ID                             118          118  FREQUENCY
    TCTG_ITA_ITEM_ATTRIBUTES              ITA_VALUE_NUMERIC                    587504          254  HEIGHT BALANCED
    TCTG_ITA_ITEM_ATTRIBUTES              ITA_VALUE_STRING                    1060930          254  HEIGHT BALANCED
    TCTG_ITA_ITEM_ATTRIBUTES              ITA_VERSION_ID                           48           48  FREQUENCY
    TCTG_ITA_ITEM_ATTRIBUTES              ITA_NEXT_VERSION_ID                       1            1  FREQUENCY
    TCTG_ITA_ITEM_ATTRIBUTES              ITA_OCCURRENCE_ID                     16250          254  HEIGHT BALANCED
    TCTG_ITA_ITEM_ATTRIBUTES              ITA_VALUE_STRING_IGNORECASE         1498257          254  HEIGHT BALANCED
    TCTG_ITM_ITEM                         ITM_COMPANY_ID                            2            2  FREQUENCY
    TCTG_ITM_ITEM                         ITM_ID                               720867            1  NONE
    TCTG_ITM_ITEM                         ITM_CONTAINER_ID                         62           62  FREQUENCY
    TCTG_ITM_ITEM                         ITM_PRIMARY_KEY                      531960            1  NONE
    TCTG_ITM_ITEM                         ITM_VERSION_ID                           48           48  FREQUENCY
    TCTG_ITM_ITEM                         ITM_NEXT_VERSION_ID                       1            1  FREQUENCY
    TCTG_ITM_ITEM                         ITM_STATUS                                3            1  NONE
    TCTG_ITM_ITEM                         ITM_COLLAB_INFO                           7            1  NONE
    TCTG_ITM_ITEM                         ITM_LAST_MODIFIED                    717098            1  NONE
    Display_cursor without statistics:
    SQL> @disp-cursor
                           756
    Elapsed: 00:00:03.81
    SQL_ID  d9q6j48ns19zv, child number 0
    select /*+ gather_plan_statistics */ count(distinct itm1.itm_id)   FROM ita ita1, ita ita2, itm itm1, itm itm2, itm itm3 where
    itm1.itm_container_id = 2812      and itm1.itm_version_id <= 999999999      and itm1.itm_next_version_id >= 999999999      and
    itm2.itm_primary_key = 'RAYBESTOS'      and itm3.itm_primary_key = '1'      and ita1.ita_node_id = 3111      and
    itm2.itm_container_id = 2020      and ita1.ita_item_id = itm1.itm_id      and ita1.ita_version_id <= 999999999      and
    ita1.ita_next_version_id >= 999999999      and itm2.itm_id = ita1.ita_value_numeric      and itm2.itm_version_id <= 999999999
    and itm2.itm_next_version_id >= 999999999      and ita2.ita_node_id = 3118      and itm3.itm_container_id = 2025      and
    ita2.ita_item_id = itm1.itm_id      and ita2.ita_version_id <= 999999999      and ita2.ita_next_version_id >= 999999999      and
    itm3.itm_id = ita2.ita_value_numeric      and itm3.itm_version_id <= 999999999      and itm3.itm_next_version_id >= 999999999
    Plan hash value: 2184662757
    | Id  | Operation                    | Name                     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    |   1 |  SORT GROUP BY               |                          |      1 |      1 |      1 |00:00:03.61 |     178K| 73728 | 73728 |      |
    |*  2 |   TABLE ACCESS BY INDEX ROWID| TCTG_ITA_ITEM_ATTRIBUTES |      1 |      1 |    756 |00:00:00.96 |     178K|       |  |           |
    |   3 |    NESTED LOOPS              |                          |      1 |      1 |  69879 |00:00:01.27 |     145K|       |  |           |
    |   4 |     NESTED LOOPS             |                          |      1 |      1 |  34939 |00:00:02.66 |   71815 |       |  |           |
    |   5 |      NESTED LOOPS            |                          |      1 |      1 |  35695 |00:00:00.71 |     229 |       |  |           |
    |   6 |       MERGE JOIN CARTESIAN   |                          |      1 |      1 |      1 |00:00:00.01 |       6 |       |  |           |
    |*  7 |        INDEX RANGE SCAN      | ICTG_ITM_1               |      1 |      1 |      1 |00:00:00.01 |       3 |       |  |           |
    |   8 |        BUFFER SORT           |                          |      1 |      1 |      1 |00:00:00.01 |       3 | 73728 | 73728 |      |
    |*  9 |         INDEX RANGE SCAN     | ICTG_ITM_1               |      1 |      1 |      1 |00:00:00.01 |       3 |       |  |           |
    |* 10 |       INDEX RANGE SCAN       | ICTG_ITA_1               |      1 |      1 |  35695 |00:00:00.29 |     223 |       |  |           |
    |* 11 |      INDEX RANGE SCAN        | ICTG_ITM_0               |  35695 |      1 |  34939 |00:00:01.14 |   71586 |       |  |           |
    |* 12 |     INDEX RANGE SCAN         | ICTG_ITA_0               |  34939 |      1 |  34939 |00:00:01.20 |   73590 |       |  |           |
    Predicate Information (identified by operation id):
       2 - filter("ITM3"."ITM_ID"="ITA2"."ITA_VALUE_NUMERIC")
       7 - access("ITM2"."ITM_PRIMARY_KEY"='RAYBESTOS' AND "ITM2"."ITM_CONTAINER_ID"=2020 AND "ITM2"."ITM_NEXT_VERSION_ID">=999999999 AND
                  "ITM2"."ITM_VERSION_ID"<=999999999)
           filter("ITM2"."ITM_VERSION_ID"<=999999999)
       9 - access("ITM3"."ITM_PRIMARY_KEY"='1' AND "ITM3"."ITM_CONTAINER_ID"=2025 AND "ITM3"."ITM_NEXT_VERSION_ID">=999999999 AND
                  "ITM3"."ITM_VERSION_ID"<=999999999)
           filter("ITM3"."ITM_VERSION_ID"<=999999999)
      10 - access("ITA1"."ITA_NODE_ID"=3111 AND "ITM2"."ITM_ID"="ITA1"."ITA_VALUE_NUMERIC" AND "ITA1"."ITA_NEXT_VERSION_ID">=999999999
                  AND "ITA1"."ITA_VERSION_ID"<=999999999)
           filter("ITA1"."ITA_VERSION_ID"<=999999999)
      11 - access("ITA1"."ITA_ITEM_ID"="ITM1"."ITM_ID" AND "ITM1"."ITM_NEXT_VERSION_ID">=999999999 AND "ITM1"."ITM_CONTAINER_ID"=2812 AND
                  "ITM1"."ITM_VERSION_ID"<=999999999)
           filter(("ITM1"."ITM_CONTAINER_ID"=2812 AND "ITM1"."ITM_VERSION_ID"<=999999999))
      12 - access("ITA2"."ITA_ITEM_ID"="ITM1"."ITM_ID" AND "ITA2"."ITA_NEXT_VERSION_ID">=999999999 AND "ITA2"."ITA_NODE_ID"=3118 AND
                  "ITA2"."ITA_VERSION_ID"<=999999999)
           filter(("ITA2"."ITA_NODE_ID"=3118 AND "ITA2"."ITA_VERSION_ID"<=999999999))
    Note
       - dynamic sampling used for this statement
    54 rows selected.
    Elapsed: 00:00:00.04
    Display_cursor with statistics:
    SQL> @disp-cursor
                           756
    Elapsed: 00:01:57.53
    SQL_ID  d9q6j48ns19zv, child number 0
    select /*+ gather_plan_statistics */ count(distinct itm1.itm_id)   FROM ita ita1, ita ita2, itm itm1, itm itm2, itm
    itm3 where itm1.itm_container_id = 2812      and itm1.itm_version_id <= 999999999      and itm1.itm_next_version_id
    >= 999999999      and itm2.itm_primary_key = 'RAYBESTOS'      and itm3.itm_primary_key = '1'      and ita1.ita_node_id = 3111      and itm2.itm_container_id = 2020      and ita1.ita_item_id = itm1.itm_id      and
    ita1.ita_version_id <= 999999999      and ita1.ita_next_version_id >= 999999999      and itm2.itm_id =
    ita1.ita_value_numeric      and itm2.itm_version_id <= 999999999      and itm2.itm_next_version_id >= 999999999
    and ita2.ita_node_id = 3118      and itm3.itm_container_id = 2025      and ita2.ita_item_id = itm1.itm_id      and
    ita2.ita_version_id <= 999999999      and ita2.ita_next_version_id >= 999999999      and itm3.itm_id =
    ita2.ita_value_numeric      and itm3.itm_version_id <= 999999999      and itm3.itm_next_version_id >= 999999999
    Plan hash value: 332134648
    | Id  | Operation                | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    |   1 |  SORT GROUP BY           |            |      1 |      1 |      1 |00:01:54.75 |    3041K| 73728 | 73728 |          |
    |   2 |   NESTED LOOPS           |            |      1 |      1 |    756 |00:00:30.95 |    3041K|       |       |          |
    |   3 |    NESTED LOOPS          |            |      1 |      1 |  34939 |00:00:02.73 |   71818 |       |       |          |
    |   4 |     NESTED LOOPS         |            |      1 |      1 |  35695 |00:00:00.75 |     230 |       |       |          |
    |   5 |      MERGE JOIN CARTESIAN|            |      1 |      1 |      1 |00:00:00.01 |       6 |       |       |          |
    |*  6 |       INDEX RANGE SCAN   | ICTG_ITM_1 |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
    |   7 |       BUFFER SORT        |            |      1 |      1 |      1 |00:00:00.01 |       3 | 73728 | 73728 |          |
    |*  8 |        INDEX RANGE SCAN  | ICTG_ITM_1 |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
    |*  9 |      INDEX RANGE SCAN    | ICTG_ITA_1 |      1 |      1 |  35695 |00:00:00.32 |     224 |       |       |          |
    |* 10 |     INDEX RANGE SCAN     | ICTG_ITM_0 |  35695 |      1 |  34939 |00:00:01.19 |   71588 |       |       |          |
    |* 11 |    INDEX RANGE SCAN      | ICTG_ITA_1 |  34939 |      1 |    756 |00:01:52.76 |    2969K|       |       |          |
    Predicate Information (identified by operation id):
       6 - access("ITM3"."ITM_PRIMARY_KEY"='1' AND "ITM3"."ITM_CONTAINER_ID"=2025 AND
                  "ITM3"."ITM_NEXT_VERSION_ID">=999999999 AND "ITM3"."ITM_VERSION_ID"<=999999999)
           filter("ITM3"."ITM_VERSION_ID"<=999999999)
       8 - access("ITM2"."ITM_PRIMARY_KEY"='RAYBESTOS' AND "ITM2"."ITM_CONTAINER_ID"=2020 AND
                  "ITM2"."ITM_NEXT_VERSION_ID">=999999999 AND "ITM2"."ITM_VERSION_ID"<=999999999)
           filter("ITM2"."ITM_VERSION_ID"<=999999999)
       9 - access("ITA1"."ITA_NODE_ID"=3111 AND "ITM2"."ITM_ID"="ITA1"."ITA_VALUE_NUMERIC" AND
                  "ITA1"."ITA_NEXT_VERSION_ID">=999999999 AND "ITA1"."ITA_VERSION_ID"<=999999999)
           filter(("ITA1"."ITA_VALUE_NUMERIC" IS NOT NULL AND "ITA1"."ITA_VERSION_ID"<=999999999))
      10 - access("ITA1"."ITA_ITEM_ID"="ITM1"."ITM_ID" AND "ITM1"."ITM_NEXT_VERSION_ID">=999999999 AND
                  "ITM1"."ITM_CONTAINER_ID"=2812 AND "ITM1"."ITM_VERSION_ID"<=999999999)
           filter(("ITM1"."ITM_CONTAINER_ID"=2812 AND "ITM1"."ITM_VERSION_ID"<=999999999))
      11 - access("ITA2"."ITA_NODE_ID"=3118 AND "ITM3"."ITM_ID"="ITA2"."ITA_VALUE_NUMERIC" AND
                  "ITA2"."ITA_NEXT_VERSION_ID">=999999999 AND "ITA2"."ITA_ITEM_ID"="ITM1"."ITM_ID" AND
                  "ITA2"."ITA_VERSION_ID"<=999999999)
           filter(("ITA2"."ITA_VALUE_NUMERIC" IS NOT NULL AND "ITA2"."ITA_VERSION_ID"<=999999999 AND
                  "ITA2"."ITA_ITEM_ID"="ITM1"."ITM_ID"))
    51 rows selected.
    Elapsed: 00:00:00.28
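    For reference, steps 2 and 3 from the first post can be reproduced with a short script. This only restates what was already done above; the schema owner is a placeholder, and the two table names are taken from the histogram listing:
    -- delete optimizer statistics on the two tables, then flush the shared pool
    EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'TCTG_ITA_ITEM_ATTRIBUTES');
    EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'TCTG_ITM_ITEM');
    ALTER SYSTEM FLUSH SHARED_POOL;
    -- optionally lock the statistics so a maintenance job does not regather them
    EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'TCTG_ITA_ITEM_ATTRIBUTES');
    EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'TCTG_ITM_ITEM');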

  • BEX WAD 7.0:  Chart Takes Long Time to Display in Portal - Query runs FAST

    I have a BEx WAD 7.0 template which contains 3 column charts (each with its own separate DataProvider/Query). When the page loads, two of the charts show up right away, but one of them takes almost a minute to display on the screen (I thought it was missing at first).
    I ran all three queries in the BEx Query Analyzer (including the one for the chart that takes forever to load) and they all complete within 3 seconds of hitting "Execute." So I don't believe it is the query causing this issue.
    The chart that doesn't show up right away does have more data to display than the other two, but I have queries/charts on other web templates that contain three times the data of this one and they show up fine when executed in the portal.
    Anyone else having this issue, or have an idea of how I can optimize the WAD charts and/or find out what is causing it? Again, the query that feeds this chart completes its execution in about 3-4 seconds.
    Thank you for your time and of course points will be assigned accordingly.
    Kevin

    Hi,
    have you already checked how much time the IGS consumes when creating the charts?
    Run transaction SIGS and check the statistics values.
    Regards, Kai

  • Query runs from command line, but not from scheduler

    We use Control-M to schedule shell scripts to be run on a Solaris server. Some of the scripts have to access an Oracle database, and in that case our security team includes the DB user and password in the script, then encrypts it, and the sysadmin team schedules the encrypted shell script with Control-M. That works fine, but we've been trying to keep the DB user and password in a separate encrypted file so that we don't have to ask for file encryption every time a script needs to be modified (this is a test environment).
    We have the script at ~/system_name/scripts, the query at ~/system_name/sql and the encrypted file and key at ~/system_name/keys. The SQLPlus call in the script is:
    ${ORACLE_HOME}/bin/sqlplus "`decrypt -a 3des -k ./../keys/key.3des.system -i ./../keys/login.system`"@instance_name <<EOF
    @${DIR_SQL}/TEST_QUERY.SQL
    quit
    EOF
    The security analyst has tested it successfully from the command line, but when we schedule it with Control-M the job abends and we get the following in the sysout:
    + decrypt -a 3des -k ./../keys/key.3des.system -i ./../keys/login.system
    decrypt: cannot open ./../keys/key.3des.system
    decrypt: invalid key.
    + /u00/app/oracle/product/11.1.0/db_1/bin/sqlplus @instance_name
    + 0<<
    @/sistemas/hmp/system_name/sql/TEST_QUERY.SQL
    quit
    SQL*Plus: Release 11.1.0.6.0 - Production on Mon May 3 09:41:55 2010
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    SP2-0310: unable to open file "instance_name.sql"
    Enter user-name: SP2-0306: Invalid option.
    Usage: CONN[ECT] [logon] [AS {SYSDBA|SYSOPER|SYSASM}]
    where <logon> ::= <username>[<password>][@<connect_identifier>] [edition=valu\
    e] | /
    SP2-0306: Invalid option.
    Usage: CONN[ECT] [logon] [AS {SYSDBA|SYSOPER|SYSASM}]
    where <logon> ::= <username>[<password>][@<connect_identifier>] [edition=valu\
    e] | /
    Enter password:
    ERROR:
    ORA-12545: Connect failed because target host or object does not exist
    SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus
    0000000080
    Any ideas?

    It looks like the command is being split in some way - the connection to SQL*Plus is being made before the whole string has been built.
    It appears to be treating @instance_name as a script to execute rather than a database to connect to.
    Is the database on the same server as the script?
    If so, try setting your environment to the correct database so that you can omit the @instance_name part of the syntax, and see if that helps.
    Also, I just noticed the failure to open the decrypt key file. It would appear you are not using a full path name. Have you checked which directory the scheduled job starts in? You may need to look at running some environment-specific scripts first.
