SQL query performance between TOAD and APEX

Hi Guys,
I would like to know if there is any performance difference between a simple query run in TOAD and in APEX (classic report).
The reason I ask: I have a query based on a single table (containing 15,000 rows) which takes almost 30 seconds in APEX, whereas it takes just 2-3 seconds in TOAD.
Thanks,
Raj.

Varad,
Thanks for your suggestion.
I tried changing the pagination, but it did not help much.
Basically I have 5 reports on the same page.
When the user first navigates to this page, Report-1 is generated first, with its data rendered as links to the other reports.
So I guess when I click one of the column links on Report-1, the page is refreshed, and this time it takes the total time for Report-1 plus Report-2.
Is there a way to skip the execution of the first query, or to cache the results of Report-1, so that when the page is refreshed it displays Report-1 from the cache and only executes the query for Report-2?
-Raj

Similar Messages

  • How does Index fragmentation and statistics affect the sql query performance

    Hi,
    How does Index fragmentation and statistics affect the sql query performance
    Thanks
    Shashikala

    How does Index fragmentation and statistics affect the sql query performance
    Very simply: outdated statistics will lead the optimizer to create bad plans, which in turn require more resources, and this will impact performance. If an index is fragmented (mainly the clustered index, though this holds true for non-clustered indexes as well), the time spent finding
    a value will be greater, as the query has to search the fragmented index to locate the data, and the additional empty space increases search time.
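    For concreteness, a minimal T-SQL sketch of the usual checks and fixes (the table name dbo.Orders is only an example):
    -- 1. Check fragmentation of the table's indexes
    SELECT ips.index_id, ips.avg_fragmentation_in_percent, ips.page_count
    FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED') AS ips;
    -- 2. Rebuild (or, for light fragmentation, reorganize) the indexes
    ALTER INDEX ALL ON dbo.Orders REBUILD;
    -- ALTER INDEX ALL ON dbo.Orders REORGANIZE;
    -- 3. Refresh statistics so the optimizer sees the current data distribution
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;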
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • SQL query performance issues.

    Hi All,
    I worked on the query a month ago and the fix worked for me in the test instance but failed in production. Following is the URL for the previous thread.
    SQL query performance issues.
    Following is the tkprof file.
    CURSOR_ID:76  LENGTH:2383  ADDRESS:f6b40ab0  HASH_VALUE:2459471753  OPTIMIZER_GOAL:ALL_ROWS  USER_ID:443 (APPS)
    insert into cos_temp(
    TRX_DATE, DEPT, PRODUCT_LINE, PART_NUMBER,
    CUSTOMER_NUMBER, QUANTITY_SOLD, ORDER_NUMBER,
    INVOICE_NUMBER, EXT_SALES, EXT_COS,
    GROSS_PROFIT, ACCT_DATE,
    SHIPMENT_TYPE,
    FROM_ORGANIZATION_ID,
    FROM_ORGANIZATION_CODE)
    select a.trx_date,
    g.segment5 dept,
    g.segment4 prd,
    m.segment1 part,
    d.customer_number customer,
    b.quantity_invoiced units,
    --       substr(a.sales_order,1,6) order#,
    substr(ltrim(b.interface_line_attribute1),1,10) order#,
    a.trx_number invoice,
    (b.quantity_invoiced * b.unit_selling_price) sales,
    (b.quantity_invoiced * nvl(price.operand,0)) cos,
    (b.quantity_invoiced * b.unit_selling_price) -
    (b.quantity_invoiced * nvl(price.operand,0)) profit,
    to_char(to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS'),'DD-MON-RR') acct_date,
    'DRP',
    l.ship_from_org_id,
    p.organization_code
    from   ra_customers d,
    gl_code_combinations g,
    mtl_system_items m,
    ra_cust_trx_line_gl_dist c,
    ra_customer_trx_lines b,
    ra_customer_trx_all a,
    apps.oe_order_lines l,
    apps.HR_ORGANIZATION_INFORMATION i,
    apps.MTL_INTERCOMPANY_PARAMETERS inter,
    apps.HZ_CUST_SITE_USES_ALL site,
    apps.qp_list_lines_v price,
    apps.mtl_parameters p
    where a.trx_date between to_date('2010/02/01 00:00:00','yyyy/mm/dd HH24:MI:SS')
    and to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS')+0.9999
    and   a.batch_source_id = 1001     -- Sales order shipped other OU
    and   a.complete_flag = 'Y'
    and   a.customer_trx_id = b.customer_trx_id
    and   b.customer_trx_line_id = c.customer_trx_line_id
    and   a.sold_to_customer_id = d.customer_id
    and   b.inventory_item_id = m.inventory_item_id
    and   m.organization_id
         = decode(substr(g.segment4,1,2),'01',5004,'03',5004,
         '02',5003,'00',5001,5002)
    and   nvl(m.item_type,'0') <> '111'
    and   c.code_combination_id = g.code_combination_id+0
    and   l.line_id = b.interface_line_attribute6
    and   i.organization_id = l.ship_from_org_id
    and   p.organization_id = l.ship_from_org_id
    and   i.org_information3 <> '5108'
    and   inter.ship_organization_id = i.org_information3
    and   inter.sell_organization_id = '5108'
    and   inter.customer_site_id = site.site_use_id
    and   site.price_list_id = price.list_header_id
    and   product_attr_value = to_char(m.inventory_item_id)
    call        count       cpu   elapsed         disk        query      current         rows    misses
    Parse           1      0.47      0.56           11          197            0            0         1
    Execute         1   3733.40   3739.40        34893    519962154           11          188         0
    total           2   3733.87   3739.97        34904    519962351           11          188         1
    |         Rows Row Source Operation
    | ------------ ---------------------------------------------------
    |          188 HASH JOIN (cr=519962149 pr=34889 pw=0 time=2607.35)
    |          741 .TABLE ACCESS BY INDEX ROWID QP_PRICING_ATTRIBUTES (cr=519939426 pr=34889 pw=0 time=2457.32)
    |    254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
    |    254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
    |          741 ....NESTED LOOPS (cr=50042 pr=7230 pw=0 time=11.37)
    |          741 .....NESTED LOOPS (cr=48558 pr=7229 pw=0 time=11.35)
    |          741 ......NESTED LOOPS (cr=47815 pr=7223 pw=0 time=11.32)
    |         3237 .......NESTED LOOPS (cr=41339 pr=7223 pw=0 time=12.42)
    |         3237 ........NESTED LOOPS (cr=38100 pr=7223 pw=0 time=12.39)
    |         3237 .........NESTED LOOPS (cr=28296 pr=7139 pw=0 time=12.29)
    |         1027 ..........NESTED LOOPS (cr=17656 pr=4471 pw=0 time=3.81)
    |         1027 ...........NESTED LOOPS (cr=13537 pr=4404 pw=0 time=3.30)
    |          486 ............NESTED LOOPS (cr=10873 pr=4240 pw=0 time=0.04)
    |          486 .............NESTED LOOPS (cr=10385 pr=4240 pw=0 time=0.03)
    |          486 ..............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_ALL (cr=9411 pr=4240 pw=0 time=0.02)
    |        75253 ...............INDEX RANGE SCAN RA_CUSTOMER_TRX_N5 (cr=403 pr=285 pw=0 time=0.38)
    |          486 ..............TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS (cr=974 pr=0 pw=0 time=0.01)
    |          486 ...............INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 (cr=488 pr=0 pw=0 time=0.01)
    |          486 .............INDEX UNIQUE SCAN HZ_PARTIES_U1 (cr=488 pr=0 pw=0 time=0.01)
    |         1027 ............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_LINES_ALL (cr=2664 pr=164 pw=0 time=1.95)
    |         2063 .............INDEX RANGE SCAN RA_CUSTOMER_TRX_LINES_N2 (cr=1474 pr=28 pw=0 time=0.22)
    |         1027 ...........TABLE ACCESS BY INDEX ROWID RA_CUST_TRX_LINE_GL_DIST_ALL (cr=4119 pr=67 pw=0 time=0.54)
    |         1027 ............INDEX RANGE SCAN RA_CUST_TRX_LINE_GL_DIST_N1 (cr=3092 pr=31 pw=0 time=0.20)
    |         3237 ..........TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS_B (cr=10640 pr=2668 pw=0 time=15.35)
    |         3237 ...........INDEX RANGE SCAN MTL_SYSTEM_ITEMS_B_U1 (cr=2062 pr=40 pw=0 time=0.33)
    |         3237 .........TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL (cr=9804 pr=84 pw=0 time=0.77)
    |         3237 ..........INDEX UNIQUE SCAN OE_ORDER_LINES_U1 (cr=6476 pr=47 pw=0 time=0.43)
    |         3237 ........TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=3239 pr=0 pw=0 time=0.04)
    |         3237 .........INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=2 pr=0 pw=0 time=0.01)
    |          741 .......TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=6476 pr=0 pw=0 time=0.10)
    |         6474 ........INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=3239 pr=0 pw=0 time=0.03)
    Please help.
    Regards
    Ashish

    |    254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
    |    254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
    There is no way the optimizer should choose to process that many rows using nested loops.
    Either the statistics are not up to date, the data values are skewed, or you have some optimizer parameter set to a non-default value to force index access.
    Please post explain plan and optimizer* parameter settings.
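    If it helps, a minimal sketch of how to collect that information in Oracle (run from SQL*Plus; the schema/table names in the stats call are examples taken from your query - repeat for each table involved):
    -- Refresh statistics on one of the tables involved
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APPS', tabname => 'RA_CUSTOMER_TRX_ALL', cascade => TRUE);
    -- List optimizer parameter settings (look for non-default values)
    SELECT name, value, isdefault FROM v$parameter WHERE name LIKE 'optimizer%';
    -- Produce the plan for the problem statement (substitute your full insert/select)
    EXPLAIN PLAN FOR
    SELECT COUNT(*) FROM ra_customer_trx_all;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);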

  • Required info on SQL Server Performance Issue Analysis and Troubleshoot way

    Dear All,
    I am going to prepare simple documentation on SQL Server performance issue analysis and troubleshooting methods. I am struggling to put this documentation together, since we have several different checklists (network latency, disk latency, memory/processor pressure, SQL query tuning, etc.) to validate once an application performance issue is reported by the customer. So I am looking for expert documents or links you can share.
    Your input will help me prepare the document in a better way.
    Thanks in advance.

    Hi,
    Recommendations and Guidelines on configuring disk partitions for SQL Server
    http://support.microsoft.com/kb/2023571
    Disk and File Layout for SQL Server
    https://blogs.technet.com/b/dataplatforminsider/archive/2012/12/19/disk-and-file-layout-for-sql-server.aspx
    Microsoft SQL Server 2012 Performance Tuning: Implementing Physical Database Structure
    http://www.packtpub.com/article/sql-server-2012-implementing-physical-database-strusture
    Database Mirroring Best Practices and Performance Considerations
    http://technet.microsoft.com/en-us/library/cc917681.aspx
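    Since the checklist mentions disk, memory and processor pressure, one concrete first check is the server's cumulative wait statistics; a minimal T-SQL sketch (the excluded wait types are just examples of benign waits):
    -- Top waits since the last restart; high I/O or memory waits point at the bottleneck
    SELECT TOP (10)
           wait_type,
           wait_time_ms / 1000.0 AS wait_time_s,
           signal_wait_time_ms / 1000.0 AS signal_wait_s,
           waiting_tasks_count
    FROM   sys.dm_os_wait_stats
    WHERE  wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
    ORDER BY wait_time_ms DESC;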
    Hope the information helps.
    Tracy Cai
    TechNet Community Support

  • Need to run a sql query in the background and display the output in HTML

    Hi Guys,
    I have a link on my iProcurement web page. When I click this link, a SQL query should run and the output should be displayed in an HTML page. Any ideas on how this can be done? Help appreciated.
    We don't have OA Framework and we are using 11.5.7.
    Help needed.
    Thanks

    Read Metalink Note 275880.1 which has the link to developer guide and personalization guide.

  • What is the diff in term of fastness and performance between retina and last mbp version in percentage ?

    What is the difference in terms of speed and performance between the Retina and the last MBP version, in PERCENTAGE?
    Please don't just give links; I want an answer here and now from someone who knows what he's talking about.

    But the old MBP is not available with an SSD; you have to buy it as an option, right?
    Anyway, outside of reboot speed, is there a speed difference between the Retina and the old version in terms of general processing?
    I have a 2009 MBP that originally came with a SATA drive. Recently, I swapped drives; now, it has 512GB SSD. In other words, you can easily increase the speed of a standard MBP. There is going to be an increase in general processing with the Retina MBPs, but the significance depends on the application. For example, booting Photoshop used to take just over 30 seconds with a SATA drive; with an SSD, it's just under 10 seconds. But controlling for the SATA vs SSD, the differences are smaller and incremental.
    OK, you speak about a difference with the Retina in terms of better graphics when zooming, but will this advantage also be relevant on my giant external monitors?
    I can't answer that, so I'll leave that to someone with more expertise in this area.

  • SQL Query Performance

    Hi There,
    We have a SQL query that runs between 2 databases on the same machine; it takes about 2 minutes and returns about 6,400 rows. When the process first started running we used to see results in about 13 seconds; now it's taking almost 2 minutes for the same data set. We have updated the stats (table and index), but to no avail. I've been trying to get the execution plan to see if there is anything abnormal going on, but as the core of the SQL is done remotely, we haven't been able to get much out of it.
    Here is the sql:
    SELECT  
    --/*+ DRIVING_SITE(var) ALL_ROWS */
             ventity_id, ar_action_performed, action_date,
             'ventity_ar' ar_tab
        FROM (SELECT var.ventity_id, var.ar_action_performed, var.action_date,
                     var.familyname_id, var.status, var.isprotected,
                     var.dateofbirth, var.gender, var.sindigits,
                     LAG (var.familyname_id) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
                                                                lag_familyname_id,
                     LAG (var.status) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
                                                                       lag_status,
                     LAG (var.isprotected) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
                                                                  lag_isprotected,
                     LAG (var.dateofbirth) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
                                                                  lag_dateofbirth,
                     LAG (var.gender) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
                                                                       lag_gender,
                     LAG (var.sindigits) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
                                                                    lag_sindigits
                FROM cpp_schema.ventity_ar@CdpP var,
                     -- reduce the set to ventity_id that had a change within the time frame,
                     -- and filter out RETRIEVEs as they do not signal change
                     (SELECT DISTINCT ventity_id
                                 FROM cpp_schema.ventity_ar@CdpP
                                WHERE action_date BETWEEN '01-MAR-10' AND '10-APR-10'
                                  AND ar_action_performed <> 'RTRV') m
               WHERE var.action_date <= '10-APR-10'
                 AND var.ventity_id = m.ventity_id
                 AND var.ar_action_performed <> 'RTRV') mm
       WHERE action_date BETWEEN '01-MAR-10' AND '10-APR-10'
         -- most of the columns from the data table allow nulls
         AND (   (NVL (familyname_id, 0) <> NVL (lag_familyname_id, 0))
              OR (NVL (status, 'x') <> NVL (lag_status, 'x'))
              OR (NVL (isprotected, 2) <> NVL (lag_isprotected, 2))
               OR (NVL (dateofbirth, TO_DATE ('15000101', 'yyyymmdd')) <>
                            NVL (lag_dateofbirth, TO_DATE ('15000101', 'yyyymmdd')))
               OR (NVL (gender, 'x') <> NVL (lag_gender, 'x'))
               OR (NVL (sindigits, 'x') <> NVL (lag_sindigits, 'x')))
    ORDER BY ventity_id, action_date DESC
    6401 rows selected.
    Elapsed: 00:01:47.03
    Execution Plan
    Plan hash value: 3953446945
    | Id  | Operation        | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Inst   |IN-OUT|
    |   0 | SELECT STATEMENT |      |    12M|  1575M|       |   661K  (1)| 02:12:22 |        |      |
    |   1 |  SORT ORDER BY   |      |    12M|  1575M|  2041M|   661K  (1)| 02:12:22 |        |      |
    |*  2 |   VIEW           |      |    12M|  1575M|       |   291K  (2)| 00:58:13 |        |      |
    |   3 |    REMOTE        |      |       |       |       |            |          | CCP01  | R->S |
       2 - filter("action_date">='01_MAR-10' AND "action_date"<='10-APR-10' AND
                  (NVL("FAMILYNAME_id",0)<>NVL("LAG_FAMILYNAME_id",0) OR
                  NVL("STATUS",'x')<>NVL("LAG_STATUS",'x') OR NVL("ISPROTECTED",2)<>NVL("LAG_ISPROTECTED",2
                  ) OR NVL("DATEOFBIRTH",TO_DATE(' 1500-01-01 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss'))<>NVL("LAG_DATEOFBIRTH",TO_DATE(' 1500-01-01 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss')) OR NVL("GENDER",'x')<>NVL("LAG_GENDER",'x') OR
                  NVL("SINDIGITS",'x')<>NVL("LAG_SINDIGITS",'x')))
    Remote SQL Information (identified by operation id):
       3 - EXPLAIN PLAN SET STATEMENT_ID='PLUS4294967295' INTO PLAN_TABLE@! FOR SELECT
           "A2"."ventity_id","A2"."AR_ACTION_PERFORMED","A2"."action_date","A2"."FAMILYNAME_id","A2"
           ."STATUS","A2"."ISPROTECTED","A2"."DATEOFBIRTH","A2"."GENDER","A2"."SINDIGITS",DECODE(COU
           NT(*) OVER ( PARTITION BY "A2"."ventity_id" ORDER BY "A2"."action_date" ROWS  BETWEEN 1
           PRECEDING  AND 1 PRECEDING ),1,FIRST_VALUE("A2"."FAMILYNAME_id") OVER ( PARTITION BY
           "A2"."ventity_id" ORDER BY "A2"."action_date" ROWS  BETWEEN 1 PRECEDING  AND 1 PRECEDING
           ),NULL),DECODE(COUNT(*) OVER ( PARTITION BY "A2"."ventity_id" ORDER BY
           "A2"."action_date" ROWS  BETWEEN 1 PRECEDING  AND 1 PRECEDING ),1,FIRST_VALUE("A2"."STATUS")
           OVER ( PARTITION BY "A2"."ventity_id" ORDER BY "A2"."action_date" ROWS  BETWEEN 1
           PRECEDING  AND 1 PRECEDING ),NULL),DECODE(COUNT(*) OVER ( PARTITION BY
           "A2"."ventity_id" ORDER BY "A2"."action_date" ROWS  BETWEEN 1 PRECEDING  AND 1 PRECEDING
           ),1,FIRST_VALUE("A2"."ISPROTECTED") OVER ( PARTITION BY "A2"."ventity_id" ORDER BY
           "A2"."action_date" ROWS  BETWEEN 1 PRECEDING  AND 1 PRECEDING ),NULL),DECODE(COUNT(*) OVER (
           PARTITION BY "A2"."ventity_id" ORDER BY "A2"."action_date" ROWS  BETWEEN 1 PRECEDING
           AND 1 PRECEDING ),1,FIRST_VALUE("A2"."DATEOFBIRTH") OVER ( PARTITION BY
           "A2"."ventity_id" ORDER BY "A2"."action_date" ROWS  BETWEEN 1 PRECEDING  AND 1 PRECEDING
           ),NULL),DECODE(COUNT(*) OVER ( PARTITION BY "A2"."ventity_id" ORDER BY
           "A2"."action_date" ROWS  BETWEEN 1 PRECEDING  AND 1 PRECEDING ),1,FIRST_VALUE("A2"."GENDER")
           OVER ( PARTITION BY "A2"."ventity_id" ORDER BY "A2"."action_date" ROWS  BETWEEN 1
           PRECEDING  AND 1 PRECEDING ),NULL),DECODE(COUNT(*) OVER ( PARTITION BY
           "A2"."ventity_id" ORDER BY "A2"."action_date" ROWS  BETWEEN 1 PRECEDING  AND 1 PRECEDING
           ),1,FIRST_VALUE("A2"."SINDIGITS") OVER ( PARTITION BY "A2"."ventity_id" ORDER BY
           "A2"."action_date" ROWS  BETWEEN 1 PRECEDING  AND 1 PRECEDING ),NULL) FROM
           "CPP_SCHEMA"."ventity_AR" "A2", (SELECT DISTINCT "A3"."ventity_id"
           "ventity_id" FROM "CPP_SCHEMA"."ventity_AR" "A3" WHERE
           "A3"."action_date">='01_MAR-10' AND "A3"."action_date"<='10-APR-10' AND
           "A3"."AR_ACTION_PERFORMED"<>'RETRIEVE' AND TO_DATE('01_MAR-10')<=TO_DATE('10-APR-10'))
           "A1" WHERE "A2"."action_date"<='10-APR-10' AND "A2"."ventity_id"="A1"."ventity_id"
           AND "A2"."AR_ACTION_PERFORMED"<>'RETRIEVE' (accessing 'EBCP01.EBC.GOV.BC.CA' )Your advise and/or help is highly appreciated.
    THanks
    Edited by: rsar001 on Apr 20, 2010 6:57 AM

    Maybe I'm missing something but this subquery seems inefficient:
    SELECT var.ventity_id, var.ar_action_performed, var.action_date,
                     var.familyname_id, var.status, var.isprotected,
                     var.dateofbirth, var.gender, var.sindigits,
                     LAG (var.familyname_id) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
                                                                lag_familyname_id,
                     LAG (var.status) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
                                                                       lag_status,
                     LAG (var.isprotected) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
                                                                  lag_isprotected,
                     LAG (var.dateofbirth) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
                                                                  lag_dateofbirth,
                     LAG (var.gender) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
                                                                       lag_gender,
                     LAG (var.sindigits) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
                                                                    lag_sindigits
                FROM cpp_schema.ventity_ar@CdpP var,
                     -- reduce the set to ventity_id that had a change within the time frame,
                     -- and filter out RETRIEVEs as they do not signal change
                     (SELECT DISTINCT ventity_id
                                 FROM cpp_schema.ventity_ar@CdpP
                                WHERE action_date BETWEEN '01-MAR-10' AND '10-APR-10'
                                  AND ar_action_performed <> 'RTRV') m
               WHERE var.action_date <= '10-APR-10'
                 AND var.ventity_id = m.ventity_id
                  AND var.ar_action_performed != 'RTRV'
    I don't think accessing the VENTITY_AR table twice is helping you here. The comments look like you want to restrict the set of VENTITY_IDs, but if you look at the plan that is not happening. The plan is reading them from the index and joining against the full VENTITY_AR table anyway. I recommend you consolidate it into something like this:
    SELECT  var.ventity_id
    ,       var.ar_action_performed
    ,       var.action_date
    ,       var.familyname_id
    ,       var.status
    ,       var.isprotected
    ,       var.dateofbirth
    ,       var.gender
    ,       var.sindigits
    ,       LAG (var.familyname_id) OVER (PARTITION BY var.ventity_id ORDER BY action_date)         AS lag_familyname_id
    ,       LAG (var.status) OVER (PARTITION BY var.ventity_id ORDER BY action_date)                AS lag_status
    ,       LAG (var.isprotected) OVER (PARTITION BY var.ventity_id ORDER BY action_date)           AS lag_isprotected
    ,       LAG (var.dateofbirth) OVER (PARTITION BY var.ventity_id ORDER BY action_date)           AS lag_dateofbirth
    ,       LAG (var.gender) OVER (PARTITION BY var.ventity_id ORDER BY action_date)                AS lag_gender
    ,       LAG (var.sindigits) OVER (PARTITION BY var.ventity_id ORDER BY action_date)             AS lag_sindigits
    FROM    cpp_schema.ventity_ar@CdpP var
    WHERE   var.action_date BETWEEN TO_DATE('01-MAR-10','DD-MON-YY') AND TO_DATE('10-APR-10','DD-MON-YY')
    AND     var.ar_action_performed != 'RTRV'
    It may then be useful to put an index on (ACTION_DATE, AR_ACTION_PERFORMED) if one doesn't already exist.
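    For illustration, the suggested index might look like this (the index name is made up, and it would be created on the remote database that the CdpP link points at, where ventity_ar actually lives):
    CREATE INDEX ventity_ar_dt_act_ix
        ON cpp_schema.ventity_ar (action_date, ar_action_performed);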
    *::EDIT::*
    I noticed the large number of NVL calls in your outer query. These NVLs could possibly be eliminated if you use the optional second and third arguments of the LAG analytic function, as sketched below. I'm not sure whether this would improve performance, but it may make the query more readable and maintainable.
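    A minimal sketch of that idea, using LAG's offset and default arguments to supply the substitute values directly (only a couple of the columns are shown):
    SELECT ventity_id,
           familyname_id,
           LAG(familyname_id, 1, 0) OVER (PARTITION BY ventity_id ORDER BY action_date) AS lag_familyname_id,
           status,
           LAG(status, 1, 'x') OVER (PARTITION BY ventity_id ORDER BY action_date)      AS lag_status
    FROM   cpp_schema.ventity_ar@CdpP
    WHERE  ar_action_performed != 'RTRV';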
    HTH!
    Edited by: Centinul on Apr 20, 2010 10:50 AM

  • Sql query performance need to get improved

    hi all..
    I have a performance issue with my stored procedure, in which I use 3 CTEs. I'm posting my code; please help me improve the performance of the query.
    I created non-clustered indexes on the tables, based on the keys with which the tables are joined.
    USE [OPTM]
    GO
    /****** Object: StoredProcedure [dbo].[GetSample] Script Date: 01/07/2014 10:29:32 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    ALTER PROCEDURE [dbo].[GetSample]
    @StartDate DateTime,
    @EndDate DateTime,
    @Portfolio int,
    @Program int,
    @Project int
    AS
    /*
    Date       Author    Purpose
    06/11/2012 Ajeesh.C  To get the Workitem details for the Scope Workitem Green chart Report.
    06/11/2012 Shinoj.P  T-SQL re-structuring.
    Testing:
    exec [dbo].[GetSample] '01/01/2013','12/31/2013',-1,-1,-1
    exec [dbo].[GetSample] '01/01/2013','12/31/2013',16,24,199
    exec [dbo].[GetSample] '11/01/2013','12/31/2013',-1,-1,703
    exec [dbo].[GetSample] '11/01/2012','11/30/2012',8,-1,-1
    select * from tb_Portfolio
    */
    BEGIN
    DECLARE @Scope nvarchar(250),@ScopeID int,@ProjectID int,@WorkItem nvarchar(250),@ProgramID int, @PortfolioID int;
    -------Added 3 columns(StatusID,Status,TaskID)--------
    CREATE TABLE #GrnChartTempTable
    (
    AllocationDate datetime NULL,
    Division nvarchar(50) NULL,
    DivisionID int NULL,
    ResourceName nvarchar(250) NULL,
    ResourceEmailID nvarchar(max) NULL,
    ResourceID int NULL,
    Project nvarchar(250) NULL,
    ProjectID int NULL,
    Scope nvarchar(MAX) NULL,
    ScopeID int NULL,
    WorkItem nvarchar(MAX) NULL,
    TaskStartDate datetime NULL,
    TaskEndDate datetime NULL,
    ProgramID int NULL,
    Program nvarchar(250) NULL,
    PortfolioID int NULL,
    Portfolio nvarchar(250) NULL,
    StatusID int NULL,
    Status nvarchar(50) NULL,
    TaskID int null,
    EstimateHrs nvarchar(250) NULL,
    ScopeEstimateHrs int NULL,
    Allocated int NOT NULL
    );
    WITH Datematrix(AllocationDate)
    AS
    (
    SELECT @StartDate AS AllocationDate
    UNION ALL
    SELECT DATEADD(D,1,AllocationDate) AS AllocationDate
    FROM Datematrix WHERE AllocationDate < @EndDate
    ),
    Allocation (Division,DivisionID,ResourceName,ResourceEmailID,ResourceID,Project
    ,ProjectID,Scope,ScopeID,WorkItem,TaskStartDate,TaskEndDate
    ,ProgramID ,Program,PortfolioID ,Portfolio,StatusID,Status,TaskID,EstimateHrs,ScopeEstimateHrs)
    AS
    (
    SELECT
    DIV.Division
    ,RES.DivisionID
    ,RES.ResourceName
    ,ResourceEmailID = STUFF((
    SELECT COALESCE( ', ' + CONVERT(VARCHAR,RES.Email1), '')
    FROM dbo.TasksResource TSKRES WITH(NOLOCK) LEFT OUTER JOIN
    dbo.tb_Resource RES WITH(NOLOCK) ON RES.UID = TSKRES.ResourceID
    WHERE TSKRES.TaskID = TSK.TaskID
    FOR XML PATH('')), 1, 1, '')
    ,RES.UID ResourceID
    ,PRJ.Project + ' (' + CONVERT(VARCHAR(15),PRJ.StartDate,101) +' - ' + CONVERT(VARCHAR(15),PRJ.EndDate,101) + ')' as Project
    ,PRJ.UID ProjectID
    ,SCP.Title Scope
    ,SCP.ScopeID
    ,TSK.Title WorkItem
    ,TSK.StartDate TaskStartDate
    ,TSK.EndDate TaskEndDate
    ,PRJ.ProgramID
    ,PR.Program
    ,PR.PortfolioID
    ,PF.Portfolio
    ,TSK.StatusID
    ,ST.Status
    ,TSK.TaskID
    ,TSK.EstimateHrs
    ,(isnull(SCP.EstimateARCH,0) + isnull(SCP.EstimateBA,0) + isnull(SCP.EstimateDev,0) + isnull(SCP.EstimatePM,0) + isnull(SCP.EstimateQA,0) + isnull(SCP.EstimateRM,0)) as ScopeEstimateHrs
    --SCP.EstimateARCH + SCP.EstimateBA +SCP.EstimateDev +SCP.EstimatePM +SCP.EstimateQA +SCP.EstimateRM as ScopeEstimateHrs
    FROM Tasks TSK WITH(NOLOCK)
    INNER JOIN dbo.Scope SCP WITH(NOLOCK) ON TSK.ScopeID = SCP.ScopeID
    INNER JOIN dbo.tb_Project PRJ WITH(NOLOCK)ON TSK.ProjectID = PRJ.UID
    INNER JOIN dbo.tb_Program PR WITH(NOLOCK) ON PR.UID=PRJ.ProgramID
    INNER JOIN dbo.tb_Portfolio PF WITH(NOLOCK)ON PF.UID=PR.PortfolioID
    LEFT OUTER JOIN dbo.TasksResource TSKRES WITH(NOLOCK)ON TSKRES.TaskID = TSK.TaskID
    LEFT OUTER JOIN dbo.tb_Resource RES WITH(NOLOCK) ON RES.UID = TSKRES.ResourceID
    LEFT JOIN dbo.tb_Division DIV WITH(NOLOCK) ON RES.DivisionID = DIV.UID
    LEFT JOIN dbo.tb_Status ST WITH(NOLOCK) ON TSK.StatusID=ST.UID /*relating with the high level work items */
    WHERE (PRJ.UID = @Project OR @Project = -1)
    AND (PRJ.ProgramID = @Program OR @Program = -1)
    AND (PRJ.PortfolioID = @Portfolio OR @Portfolio = -1)
    ),
    MainData (AllocationDate,Division,DivisionID,ResourceName,ResourceEmailID,ResourceID,Project,ProjectID
    ,Scope,ScopeID,WorkItem,TaskStartDate,TaskEndDate
    ,ProgramID ,Program,PortfolioID ,Portfolio,StatusID,Status,TaskID,EstimateHrs,ScopeEstimateHrs,Allocated)
    AS
    ( SELECT
    Datematrix.*
    ,Allocation.*
    ,CASE WHEN ISDATE(TaskStartDate)=1 THEN 1 ELSE 0 END AS Allocated
    FROM Datematrix FULL OUTER JOIN Allocation
    ON ( Datematrix.AllocationDate >= Allocation.TaskStartDate
    AND Datematrix.AllocationDate <= Allocation.TaskEndDate )
    )
    INSERT INTO #GrnChartTempTable
    SELECT * FROM MainData
    OPTION (MAXRECURSION 0);
    SELECT TOP 1 @Scope=Scope,@ScopeID=ScopeID, @ProjectID=ProjectID
    ,@WorkItem=WorkItem,@ProgramID=ProgramID,@PortfolioID=PortfolioID
    FROM #GrnChartTempTable WHERE Scope IS NOT NULL AND ISDATE(AllocationDate)=1 ORDER BY Scope ;
    SELECT AllocationDate,Division,DivisionID
    ,ResourceName,ResourceEmailID,ResourceID,Project
    ,ISNULL(ProjectID,@ProjectID) ProjectID
    ,ISNULL(Scope,@Scope) Scope
    ,ISNULL(ScopeID,@ScopeID) ScopeID
    ,ISNULL(WorkItem,@WorkItem) WorkItem
    ,TaskStartDate,TaskEndDate
    ,ISNULL(ProgramID ,@ProgramID) ProgramID
    ,Program
    ,ISNULL(PortfolioID,@PortfolioID) PortfolioID
    ,Portfolio,StatusID,Status,TaskID,EstimateHrs,isnull(ScopeEstimateHrs,0)ScopeEstimateHrs,Allocated
    FROM #GrnChartTempTable MainData
    WHERE ISDATE(MainData.AllocationDate)=1
    AND ISNULL(Scope,@Scope) IS NOT NULL
    --WHERE FinalData.Scope IS NOT NULL
    END
    this is my code pls help..
    lucky

    You need to focus on optimizing your code by looking at the logic and removing any extraneous rows that you do not need - stop depending on the optimizer to do your work.
    You have the following on multiple lines: ISDATE(AllocationDate) = 1
    Look at your final resultset. You do not want any other rows (where ISDATE() <> 1), so stop selecting them in the first place. In addition, you are using a full outer join in the first query that uses this logic. Since you do not qualify
    your columns with the table name (or alias - which is a best practice), I cannot say whether the ISDATE logic negates the full outer join - but I suspect it might. I also question the logic behind the assignment of the local variables and their use in the
    final query. You could remove the separate assignment query (and the variables) by simply moving that query into a derived table (or CTE) of the final query; see the sketch below. That might not be a significant improvement (you did not give any indication of the size
    of the various queries), but I think it is simpler, more resilient, and more obvious. I also question the reasoning behind the use of a full outer join.
    Why do you left join to dbo.tb_Status? Does not every task have a status?
    You join to the TasksResource and tb_Resource tables multiple times (in the Allocation CTE), and it appears that there is a 1/m relationship between task and this joined resultset, yet the primary query of the Allocation CTE does no aggregation.
    That is concerning, but I don't know your data, so perhaps this is correct. On the other hand, perhaps it depends on an assumption that your existing data has not yet violated.
    But these are only guesses. As Erland indicates, optimizing requires knowledge of the tables, the data, your business logic, etc.
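    A rough sketch of the derived-table idea (column names are taken from the posted procedure; OUTER APPLY stands in for the separate TOP 1 assignment query, which is one way to fold it into the final statement):
    SELECT  t.AllocationDate,
            ISNULL(t.ProjectID,   f.ProjectID)   AS ProjectID,
            ISNULL(t.Scope,       f.Scope)       AS Scope,
            ISNULL(t.ScopeID,     f.ScopeID)     AS ScopeID,
            ISNULL(t.WorkItem,    f.WorkItem)    AS WorkItem,
            ISNULL(t.ProgramID,   f.ProgramID)   AS ProgramID,
            ISNULL(t.PortfolioID, f.PortfolioID) AS PortfolioID
            -- remaining columns exactly as in the original final SELECT
    FROM    #GrnChartTempTable AS t
    OUTER APPLY
    (
        SELECT TOP (1) Scope, ScopeID, ProjectID, WorkItem, ProgramID, PortfolioID
        FROM   #GrnChartTempTable
        WHERE  Scope IS NOT NULL
        ORDER BY Scope
    ) AS f;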

  • Which SQL query performance is better

    Putting this in a new thread so that I don't confuse and mix the query with my similar threads.
    Based on all of your suggestions for the query below, I finally came up with 2 possible RESOLUTIONS for it.
    So, which of the 2 resolutions pasted below would be best in PERFORMANCE? I mean, which will PERFORM FASTER and is more efficient?
    ***The original QUERY is at the bottom.
    Resolution 1: /*** Divided into 2 separate queries using UNION ALL ***/ Is UNION ALL costly?
    SELECT null, null, null, null, null,null, null, null, null,
    null,null, null, null, null, null,null,null, count(*) as total_results
    FROM
    test_person p,
    test_contact c1,
    test_org_person porg
    WHERE p.CLM_ID ='11' and
    p.person_id = c1.ref_id(+)
    AND p.person_id = porg.o_person_id
    and porg.O_ORG_ID ='11'
    UNION ALL
    SELECT lastname, firstname,person_id, middlename,socsecnumber,
    birthday, U_NAME,
    U_ID,
    PERSON_XML_DATA,
    BUSPHONE,
    EMLNAME,
    ORG_NAME,
    EMPID,
    EMPSTATUS,
    DEPARTMENT,
    org_relationship,
    enterprise_name,
    null
    FROM
    (
    SELECT
    beta.*, rownum as alpha
    FROM
    (
    SELECT
    p.lastname, p.firstname, p.person_id, p.middlename, p.socsecnumber,
    to_char(p.birthday,'mm-dd-yyyy') as birthday, p.username as U_NAME,
    p.clm_id as U_ID,
    p.PERSON_XML_DATA.extract('/').getStringVal() AS PERSON_XML_DATA,
    c1.CONTACT_DATA.extract('//phone[1]/number/text()').getStringVal() AS BUSPHONE,
    c1.CONTACT_DATA.extract('//email[2]/address/text()').getStringVal() AS EMLNAME,
    c1.CONTACT_DATA.extract('//company/text()').getStringVal() AS ORG_NAME,
    porg.emplid as EMPID, porg.empl_status as EMPSTATUS, porg.DEPARTMENT,
    porg.org_relationship,
    porg.enterprise_name
    FROM
    test_person p,
    test_contact c1,
    test_org_person porg
    WHERE p.CLM_ID ='11' and
    p.person_id = c1.ref_id(+)
    AND p.person_id = porg.o_person_id
    and porg.O_ORG_ID ='11'
    ORDER BY
    upper(p.lastname), upper(p.firstname)
    ) beta
    )
    WHERE
    alpha BETWEEN 1 AND 100
    Resolution 2:
    /**** here, the innermost count query is removed ****/
    select *
    FROM
    (
    SELECT
    beta.*, rownum as alpha
    FROM
    (
    SELECT
    p.lastname, p.firstname, p.person_id, p.middlename, p.socsecnumber,
    to_char(p.birthday,'mm-dd-yyyy') as birthday, p.username as U_NAME,
    p.clm_id as U_ID,
    p.PERSON_XML_DATA.extract('/').getStringVal() AS PERSON_XML_DATA,
    c1.CONTACT_DATA.extract('//phone[1]/number/text()').getStringVal() AS BUSPHONE,
    c1.CONTACT_DATA.extract('//email[2]/address/text()').getStringVal() AS EMLNAME,
    c1.CONTACT_DATA.extract('//company/text()').getStringVal() AS ORG_NAME,
    porg.emplid as EMPID, porg.empl_status as EMPSTATUS, porg.DEPARTMENT,
    porg.org_relationship,
    porg.enterprise_name,
    COUNT(*) OVER () cnt -----This is the function
    FROM
    test_person p,
    test_contact c1,
    test_org_person porg
    WHERE p.CLM_ID ='11' and
    p.person_id = c1.ref_id(+)
    AND p.person_id = porg.o_person_id
    and porg.O_ORG_ID ='11'
    ORDER BY upper(p.lastname), upper(p.firstname)
    ) beta
    )
    WHERE
    alpha BETWEEN 1 AND 100
    ORIGINAL QUERY
    SELECT
    FROM
    (
    SELECT
    beta.*, rownum as alpha
    FROM
    (
    SELECT
    p.lastname, p.firstname, porg.DEPARTMENT,
    porg.org_relationship,
    porg.enterprise_name,
    (
    SELECT
    count(*)
    FROM
    test_person p, test_contact c1, test_org_person porg
    WHERE
    p.p_id = c1.ref_id(+)
    AND p.p_id = porg.o_p_id
    $where_clause$
    ) AS results
    FROM
    test_person p, test_contact c1, test_org_person porg
    WHERE
    p.p_id = c1.ref_id(+)
    AND p.p_id = porg.o_p_id
    $where_clause$
    ORDER BY
    upper(p.lastname), upper(p.firstname)
    ) beta
    )
    WHERE
    alpha BETWEEN #startRec# AND #endRec#

    I have now run the explain plans and put them below separately for each SQL. The SQL queries for each of the items are posted in the 1st post of this thread.
    ***The original QUERY is at the bottom.
    Resolution 1: /*** Divided into 2 separate queries using UNION ALL ***/ Is UNION ALL costly?
    EXPLAIN PLANS SECTION
    1- Original
    Plan hash value: 1981931315
    | Id  | Operation                           | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                    | 22859 |   187M|       | 26722  (81)| 00:05:21 |
    |   1 |  UNION-ALL                          |                    |       |       |       |            |          |
    |   2 |   SORT AGGREGATE                    |                    |     1 |    68 |       |            |          |
    |   3 |    MERGE JOIN OUTER                 |                    | 22858 |  1517K|       |  5290   (1)| 00:01:04 |
    |   4 |     MERGE JOIN                      |                    | 22858 |   982K|       |  4304   (1)| 00:00:52 |
    |*  5 |      INDEX RANGE SCAN               | test_org_person_I3 | 24155 |   542K|       |   363   (1)| 00:00:05 |
    |*  6 |      SORT JOIN                      |                    | 22858 |   468K|  1448K|  3941   (1)| 00:00:48 |
    |*  7 |       TABLE ACCESS FULL             | test_PERSON        | 22858 |   468K|       |  3716   (1)| 00:00:45 |
    |*  8 |     SORT JOIN                       |                    | 68472 |  1604K|  4312K|   985   (2)| 00:00:12 |
    |   9 |      INDEX FAST FULL SCAN           | test_CONTACT_FK1   | 68472 |  1604K|       |   113   (1)| 00:00:02 |
    |* 10 |   VIEW                              |                    | 22858 |   187M|       | 21433   (1)| 00:04:18 |
    |  11 |    COUNT                            |                    |       |       |       |            |          |
    |  12 |     VIEW                            |                    | 22858 |   187M|       | 21433   (1)| 00:04:18 |
    |  13 |      SORT ORDER BY                  |                    | 22858 |  6875K|    14M| 21433   (1)| 00:04:18 |
    |  14 |       MERGE JOIN OUTER              |                    | 22858 |  6875K|       | 18304   (1)| 00:03:40 |
    |  15 |        MERGE JOIN                   |                    | 22858 |  4397K|       | 11337   (1)| 00:02:17 |
    |  16 |         SORT JOIN                   |                    | 22858 |  3013K|  7192K|  5148   (1)| 00:01:02 |
    |* 17 |          TABLE ACCESS FULL          | test_PERSON        | 22858 |  3013K|       |  3716   (1)| 00:00:45 |
    |* 18 |         SORT JOIN                   |                    | 24155 |  1462K|  3800K|  6189   (1)| 00:01:15 |
    |  19 |          TABLE ACCESS BY INDEX ROWID| test_ORG_PERSON    | 24155 |  1462K|       |  5535   (1)| 00:01:07 |
    |* 20 |           INDEX RANGE SCAN          | test_ORG_PERSON_FK1| 24155 |       |       |   102   (1)| 00:00:02 |
    |* 21 |        SORT JOIN                    |                    | 68472 |  7422K|    15M|  6968   (1)| 00:01:24 |
    |  22 |         TABLE ACCESS FULL           | test_CONTACT       | 68472 |  7422K|       |  2895   (1)| 00:00:35 |
    Predicate Information (identified by operation id):
       5 - access("PORG"."O_ORG_ID"='11')
       6 - access("P"."PERSON_ID"="PORG"."O_PERSON_ID")
           filter("P"."PERSON_ID"="PORG"."O_PERSON_ID")
       7 - filter("P"."CLM_ID"='11')
       8 - access("P"."PERSON_ID"="C1"."REF_ID"(+))
           filter("P"."PERSON_ID"="C1"."REF_ID"(+))
      10 - filter("ALPHA"<=25 AND "ALPHA">=1)
      17 - filter("P"."CLM_ID"='11')
      18 - access("P"."PERSON_ID"="PORG"."O_PERSON_ID")
           filter("P"."PERSON_ID"="PORG"."O_PERSON_ID")
      20 - access("PORG"."O_ORG_ID"='11')
      21 - access("P"."PERSON_ID"="C1"."REF_ID"(+))
           filter("P"."PERSON_ID"="C1"."REF_ID"(+))
    -------------------------------------------------------------------------------
    Resolution 2:-
    EXPLAIN PLANS SECTION
    1- Original
    Plan hash value: 1720299348
    | Id  | Operation                          | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                   |                    | 23518 |    13M|       | 11545   (1)| 00:02:19 |
    |*  1 |  VIEW                              |                    | 23518 |    13M|       | 11545   (1)| 00:02:19 |
    |   2 |   COUNT                            |                    |       |       |       |            |          |
    |   3 |    VIEW                            |                    | 23518 |    13M|       | 11545   (1)| 00:02:19 |
    |   4 |     WINDOW SORT                    |                    | 23518 |  3536K|       | 11545   (1)| 00:02:19 |
    |   5 |      MERGE JOIN OUTER              |                    | 23518 |  3536K|       | 11545   (1)| 00:02:19 |
    |   6 |       MERGE JOIN                   |                    | 23518 |  2985K|       | 10587   (1)| 00:02:08 |
    |   7 |        SORT JOIN                   |                    | 23518 |  1561K|  4104K|  4397   (1)| 00:00:53 |
    |*  8 |         TABLE ACCESS FULL          | test_PERSON        | 23518 |  1561K|       |  3716   (1)| 00:00:45 |
    |*  9 |        SORT JOIN                   |                    | 24155 |  1462K|  3800K|  6189   (1)| 00:01:15 |
    |  10 |         TABLE ACCESS BY INDEX ROWID| test_ORG_PERSON    | 24155 |  1462K|       |  5535   (1)| 00:01:07 |
    |* 11 |          INDEX RANGE SCAN          | test_ORG_PERSON_FK1| 24155 |       |       |   102   (1)| 00:00:02 |
    |* 12 |       SORT JOIN                    |                    | 66873 |  1567K|  4216K|   958   (2)| 00:00:12 |
    |  13 |        INDEX FAST FULL SCAN        | test_CONTACT_FK1   | 66873 |  1567K|       |   110   (1)| 00:00:02 |
    Predicate Information (identified by operation id):
       1 - filter("ALPHA"<=25 AND "ALPHA">=1)
       8 - filter("P"."CLM_ID"='11')
       9 - access("P"."PERSON_ID"="PORG"."O_PERSON_ID")
           filter("P"."PERSON_ID"="PORG"."O_PERSON_ID")
      11 - access("PORG"."O_ORG_ID"='11')
      12 - access("P"."PERSON_ID"="C1"."REF_ID"(+))
           filter("P"."PERSON_ID"="C1"."REF_ID"(+))
    ORIGINAL QUERY
    EXPLAIN PLANS SECTION
    1- Original
    Plan hash value: 319284042
    | Id  | Operation                          | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                   |                    | 22858 |   187M|       | 21433   (1)| 00:04:18 |
    |*  1 |  VIEW                              |                    | 22858 |   187M|       | 21433   (1)| 00:04:18 |
    |   2 |   COUNT                            |                    |       |       |       |            |          |
    |   3 |    VIEW                            |                    | 22858 |   187M|       | 21433   (1)| 00:04:18 |
    |   4 |     SORT ORDER BY                  |                    | 22858 |  6875K|    14M| 21433   (1)| 00:04:18 |
    |   5 |      MERGE JOIN OUTER              |                    | 22858 |  6875K|       | 18304   (1)| 00:03:40 |
    |   6 |       MERGE JOIN                   |                    | 22858 |  4397K|       | 11337   (1)| 00:02:17 |
    |   7 |        SORT JOIN                   |                    | 22858 |  3013K|  7192K|  5148   (1)| 00:01:02 |
    |*  8 |         TABLE ACCESS FULL          | test_PERSON        | 22858 |  3013K|       |  3716   (1)| 00:00:45 |
    |*  9 |        SORT JOIN                   |                    | 24155 |  1462K|  3800K|  6189   (1)| 00:01:15 |
    |  10 |         TABLE ACCESS BY INDEX ROWID| test_ORG_PERSON    | 24155 |  1462K|       |  5535   (1)| 00:01:07 |
    |* 11 |          INDEX RANGE SCAN          | test_ORG_PERSON_FK1| 24155 |       |       |   102   (1)| 00:00:02 |
    |* 12 |       SORT JOIN                    |                    | 68472 |  7422K|    15M|  6968   (1)| 00:01:24 |
    |  13 |        TABLE ACCESS FULL           | test_CONTACT       | 68472 |  7422K|       |  2895   (1)| 00:00:35 |
    Predicate Information (identified by operation id):
       1 - filter("ALPHA"<=25 AND "ALPHA">=1)
       8 - filter("P"."CLM_ID"='1862')
       9 - access("P"."PERSON_ID"="PORG"."O_PERSON_ID")
           filter("P"."PERSON_ID"="PORG"."O_PERSON_ID")
      11 - access("PORG"."O_ORG_ID"='1862')
      12 - access("P"."PERSON_ID"="C1"."REF_ID"(+))
           filter("P"."PERSON_ID"="C1"."REF_ID"(+))
    -------------------------------------------------------------------------------
    Edited by: user10817659 on Feb 19, 2009 11:47 PM
    Edited by: user10817659 on Feb 21, 2009 12:23 AM

  • How to run a sql query from a button in apex 3.0

    Hi,
    I am brand new and went through/installed the OBE Project Tracker. I need to create a simple application that displays a result (2 fields: name and license number) based on two parameters (dob and login id), which are all stored in 1 table in the database. I could do this very simply in VB or VB.NET but have no idea how to do it in APEX.
    Please provide guidance,
    Thank you,
    Tom

    Hi Tom,
    Sounds like a report region will satisfy your requirements.
    Create a new report region on one of your pages.
    Choose SQL Report and give the region a title.
    When you get to the "Enter SQL Query or PL/SQL function returning a SQL Query:" step, type:
    SELECT name, license_number
    FROM   <insert_your_table_name_here>
    WHERE  dob = :P<n>_dob
    AND    login_id = :P<n>_login_id
    (replace <n> with the page number that the region is on and use your own table name).
    Don't try to run the page yet - it will give 'No data found'
    Now, go back to the Page Definition screen and add two items in the region you just created - call them P<n>_dob and P<n>_login_id.
    Then, create a button in the same region (to be displayed amongst the region's items) - call it P<n>_GO and click 'Create' (take all the other defaults).
    Now you can run the page, put some values into the fields and click go.
    If you want to get fancier, you can change the text items to select lists etc. - let us know if you need help with that.
    Hope this helps,
    Bryan.

  • SQL Query Performance needed.

    Hi All,
       I am having a performance issue with my SQL query below. When I run the whole thing it takes 823.438 seconds, but the main query without the IN clause takes 8.578 seconds on its own, and the subquery after the IN takes 7.579 seconds.
    SELECT BAL.L_ID, BAL.L_TYPE, BAL.L_NAME, BAL.NATURAL_ACCOUNT,
     BAL.LOCATION, BAL.PRODUCT, BAL.INTERCOMPANY, BAL.FUTURE1, BAL.FUTURE2, BAL.CURRENCY, BAL.AMOUNT_PTD, BAL.AMOUNT_YTD, BAL.CREATION_DATE,
     BAL.CREATED_BY, BAL.LAST_UPDATE_DATE, BAL.LAST_UPDATED_BY, BAL.STATUS, BAL.ANET_STATUS, BAL.COG_STATUS, BAL.comb_id, BAL.MESSAGE,
     SEG.SEGMENT_DESCRIPTION  FROM ACC_SEGMENTS_V_TST SEG , ACC_BALANCE_STG BAL where BAL.NATURAL_ACCOUNT = SEG.SEGMENT_VALUE AND SEG.SEGMENT_COLUMN = 'SEGMENT99' AND BAL.ACCOUNTING_PERIOD = 'MAY-10' and BAL.comb_id
    in  
    (select comb_id from
     (select comb_id, rownum r from
      (select distinct(comb_id),LAST_UPDATE_DATE from ACC_BALANCE_STG where accounting_period='MAY-10' order by LAST_UPDATE_DATE )    
         where rownum <=100)  where r >0)
    Please help me fine-tune the above. I am using an Oracle 10g database. There are a total of 8,000 records. Let me know if any other info is required.
    Thanks in advance.

    In recent versions of Oracle an EXISTS predicate should produce the same execution plan as the corresponding IN clause.
    Follow the advice in the tuning threads as suggested by SomeoneElse.
    It looks to me like you could avoid the double pass on ACC_BALANCE_STG by using an analytical function like ROW_NUMBER() and then joining to ACC_SEGMENTS_V_TST SEG, maybe using subquery refactoring to make it look nicer.
    e.g. something like (untested)
    WITH subq_bal as
        ((SELECT *
          FROM (SELECT BAL.L_ID, BAL.L_TYPE, BAL.L_NAME, BAL.NATURAL_ACCOUNT,
                 BAL.LOCATION, BAL.PRODUCT, BAL.INTERCOMPANY, BAL.FUTURE1, BAL.FUTURE2,
                 BAL.CURRENCY, BAL.AMOUNT_PTD, BAL.AMOUNT_YTD, BAL.CREATION_DATE,
                 BAL.CREATED_BY, BAL.LAST_UPDATE_DATE, BAL.LAST_UPDATED_BY, BAL.STATUS, BAL.ANET_STATUS,
                 BAL.COG_STATUS, BAL.comb_id, BAL.MESSAGE,
                 ROW_NUMBER() OVER (ORDER BY LAST_UPDATE_DATE) rn
                 FROM   acc_balance_stg
                 WHERE  accounting_period='MAY-10')
          WHERE rn <= 100))
    SELECT *
    FROM   subq_bal bal
    ,      acc_Segments_v_tst seg
    where  BAL.NATURAL_ACCOUNT = SEG.SEGMENT_VALUE
    AND    SEG.SEGMENT_COLUMN = 'SEGMENT99';
    However, the parentheses you use around comb_id make me question what your intention is here in the subquery.
    Do you have multiple rows in ACC_BALANCE_STG for the same comb_id and last_update_date?
    If so you may want to do a MAX on last_update_date, group by comb_id before doing the analytic restriction.
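    A rough sketch of that variant (names taken from your query; untested):
    WITH per_comb AS
    (
        SELECT comb_id,
               MAX(last_update_date) AS last_update_date,
               ROW_NUMBER() OVER (ORDER BY MAX(last_update_date)) AS rn
        FROM   acc_balance_stg
        WHERE  accounting_period = 'MAY-10'
        GROUP BY comb_id
    )
    SELECT bal.*, seg.segment_description
    FROM   acc_balance_stg bal
    JOIN   per_comb pc ON pc.comb_id = bal.comb_id AND pc.rn <= 100
    JOIN   acc_segments_v_tst seg
           ON  bal.natural_account = seg.segment_value
           AND seg.segment_column = 'SEGMENT99'
    WHERE  bal.accounting_period = 'MAY-10';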
    Edited by: DomBrooks on Jun 16, 2010 5:56 PM

  • How to replace a dates on a SQL query on Visual Studio (and get the query to work in there in the first place)?

    Morning all,
    I've just been assigned a report-related project but I have not created much of anything in C# or .Net before!
    I was wondering if someone could help me get started. Here are the specifications:
    Basically, I am to create an automated report application. I have the query, and I will include it further down in this post. The page is to have a couple of blanks to specify the Start Date and End Date, replace those dates in the query, and generate the report. What I need some help with is how to make the SQL query work in the application, which I will connect to the intended database to generate the report (basic, I know, but I'm new at this), in Visual Studio 2010. I also need some help programming the Start Date and End Date blanks, so that what the user types into those blanks replaces the date fields in
    the SQL query, and the report is then generated with the new dates.
    I appreciate the help!
    The SQL query and what the dates are replacing:
    select 
    PTH.INST_ID ,
    PTH.EMPLOYEE_ID,
    DBH.HR_DEDUCTION_AND_BENEFITS_CODE,
    replace(DB.DESCRIPTION,',',''),
    DB.WITHHOLDING_LIABILITY_ACCOUNT_MASK,
    DBH.HR_DEDUCTION_AND_BENEFITS_ID,
    DBH.CHECK_DATE,
    DBH.CHECK_NO,
    DBH.FIN_INST_ACCT_ID,
    replace(replace (DBH.COMMENT,CHAR(10),' '),CHAR(13),' '),
    DBH.HR_DEDUCTION_AND_BENEFIT_CYCLE_CODE,
    DBH.LENGTH,
    DBH.EMPLOYEE_COMPUTED_AMOUNT,
    DBH.EMPLOYEE_BANK_ROUTING_NUMBER,
    DBH.EMPLOYEE_ACCOUNT_TYPE,
    DBH.EMPLOYEE_ACCOUNT_NUMBER,
    DBH.EMPLOYER_COMPUTED_AMOUNT,
    DBH.EMPLOYEE_GROSS_AMOUNT,
    DBH.EMPLOYER_GROSS_AMOUNT,
    DBH.PAYROLL_EXCLUDE,
    PTH.VOID_DATE,
    PTH.BATCH_QUEUE_ID,
    B.BATCH_CODE,
    BQ.FY,
    BQ.END_DATE,
    BQ.COMMENTS,
    BQ.BATCH_CRITERIA_USED,
    BP.COLUMN_VALUE,
    PTH.REPLACEMENT,
    P.LAST_NAME,
    P.FIRST_NAME,
    P.MIDDLE_NAME
    from PY_EMPLOYEE_TAX_HISTORY PTH
    INNER JOIN PERSON_EMPLOYEE PE ON
    PE.INST_ID=PTH.INST_ID AND
    PE.EMPLOYEE_ID=PTH.EMPLOYEE_ID
    INNER JOIN PERSON P ON
    PE.INST_ID=P.INST_ID AND
    PE.PERSON_ID=P.PERSON_ID
    LEFT JOIN HR_EMPLOYEE_DEDUCTIONS_AND_BENEFITS_HISTORY DBH ON
    PTH.INST_ID=DBH.INST_ID AND
    PTH.CHECK_DATE=DBH.CHECK_DATE AND
    PTH.CHECK_NO=DBH.CHECK_NO AND
    PTH.EMPLOYEE_ID=DBH.EMPLOYEE_ID
    LEFT JOIN HR_DEDUCTION_AND_BENEFITS DB ON
    DB.INST_ID=DBH.INST_ID AND
    DB.HR_DEDUCTION_AND_BENEFITS_CODE=DBH.HR_DEDUCTION_AND_BENEFITS_CODE
    LEFT JOIN BATCH_QUEUE BQ ON
    PTH.BATCH_QUEUE_ID=BQ.BATCH_QUEUE_ID
    LEFT JOIN BATCH B ON
    B.BATCH_CODE=BQ.BATCH_CODE 
    LEFT JOIN BATCH_PARAMETER BP ON
    BQ.BATCH_QUEUE_ID=BP.BATCH_QUEUE_ID
    AND BP.COLUMN_NAME = 'SUPPRESS_DIRECT_DEPOSIT'
    ------Please change the WHERE condition for date range of the month you need to run this for.
    WHERE PTH.CHECK_DATE >='07/01/2013'
    AND PTH.CHECK_DATE <='07/31/2013'
    and BQ.BATCH_CODE='BAT_PY_PAYCALC'
    and bq.fy=2014
    ORDER BY PTH.INST_ID ,
    PTH.EMPLOYEE_ID,
    DBH.HR_DEDUCTION_AND_BENEFITS_CODE,
    DBH.CHECK_DATE

    Try this code. The Server name will be the same name you use when you log in with SQL Server Management Studio (SSMS); it is in the SSMS login window. I assume you are using SQLSTANDARD (not SQLEXPRESS), which is in the connection string in the code
    below. I also assume you have remote connections allowed on the database.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Data;
    using System.Data.SqlClient;
    namespace ConsoleApplication1
    {
    class Program
    {
    const string DATABASE = "Enter Database Name Here";
    const string SERVER = "Enter Server Name Here";
    static void Main(string[] args)
    {
    DateTime startDate = DateTime.Parse("07/01/2013");
    string startDateStr = startDate.ToString("MM/dd/yyyy");
    DateTime endDate = new DateTime(startDate.Year, startDate.Month, 1).AddMonths(1).AddDays(-1);
    string endDateStr = endDate.ToString("MM/dd/yyyy");
    string connStr = string.Format("Server={0}\\SQLSTANDARD;Database={1};Trusted_Connection= True;", SERVER,DATABASE);
    string SQL = string.Format(
    "select\n" +
    " PTH.INST_ID\n" +
    ",PTH.EMPLOYEE_ID\n" +
    ",DBH.HR_DEDUCTION_AND_BENEFITS_CODE,\n" +
    ",replace(DB.DESCRIPTION,',','')\n" +
    ",DB.WITHHOLDING_LIABILITY_ACCOUNT_MASK\n" +
    ",DBH.HR_DEDUCTION_AND_BENEFITS_ID\n" +
    ",DBH.CHECK_DATE\n" +
    ",DBH.CHECK_NO\n" +
    ",DBH.FIN_INST_ACCT_ID\n" +
    ",replace(replace (DBH.COMMENT,CHAR(10),' '),CHAR(13),' ')\n" +
    ",DBH.HR_DEDUCTION_AND_BENEFIT_CYCLE_CODE\n" +
    ",DBH.LENGTH\n" +
    ",DBH.EMPLOYEE_COMPUTED_AMOUNT\n" +
    ",DBH.EMPLOYEE_BANK_ROUTING_NUMBER\n" +
    ",DBH.EMPLOYEE_ACCOUNT_TYPE\n" +
    ",DBH.EMPLOYEE_ACCOUNT_NUMBER\n" +
    ",DBH.EMPLOYER_COMPUTED_AMOUNT\n" +
    ",DBH.EMPLOYEE_GROSS_AMOUNT\n" +
    ",DBH.EMPLOYER_GROSS_AMOUNT\n" +
    ",DBH.PAYROLL_EXCLUDE\n" +
    ",PTH.VOID_DATE\n" +
    ",PTH.BATCH_QUEUE_ID\n" +
    ",B.BATCH_CODE\n" +
    ",BQ.FY\n" +
    ",BQ.END_DATE\n" +
    ",BQ.COMMENTS\n" +
    ",BQ.BATCH_CRITERIA_USED\n" +
    ",BP.COLUMN_VALUE\n" +
    ",PTH.REPLACEMENT\n" +
    ",P.LAST_NAME\n" +
    ",P.FIRST_NAME\n" +
    ",P.MIDDLE_NAME\n" +
    " from PY_EMPLOYEE_TAX_HISTORY PTH\n" +
    " INNER JOIN PERSON_EMPLOYEE PE ON\n" +
    " PE.INST_ID=PTH.INST_ID AND\n" +
    " PE.EMPLOYEE_ID=PTH.EMPLOYEE_ID\n" +
    " INNER JOIN PERSON P ON\n" +
    " PE.INST_ID=P.INST_ID AND\n" +
    " PE.PERSON_ID=P.PERSON_ID\n" +
    " LEFT JOIN HR_EMPLOYEE_DEDUCTIONS_AND_BENEFITS_HISTORY DBH ON\n" +
    " PTH.INST_ID=DBH.INST_ID AND\n" +
    " PTH.CHECK_DATE=DBH.CHECK_DATE AND\n" +
    " PTH.CHECK_NO=DBH.CHECK_NO AND\n" +
    " PTH.EMPLOYEE_ID=DBH.EMPLOYEE_ID\n" +
    " LEFT JOIN HR_DEDUCTION_AND_BENEFITS DB ON\n" +
    " DB.INST_ID=DBH.INST_ID AND\n" +
    " DB.HR_DEDUCTION_AND_BENEFITS_CODE=DBH.HR_DEDUCTION_AND_BENEFITS_CODE\n" +
    " LEFT JOIN BATCH_QUEUE BQ ON\n" +
    " PTH.BATCH_QUEUE_ID=BQ.BATCH_QUEUE_ID\n" +
    " LEFT JOIN BATCH B ON\n" +
    " B.BATCH_CODE=BQ.BATCH_CODE\n" +
    " LEFT JOIN BATCH_PARAMETER BP ON\n" +
    " BQ.BATCH_QUEUE_ID=BP.BATCH_QUEUE_ID\n" +
    " AND BP.COLUMN_NAME = 'SUPPRESS_DIRECT_DEPOSIT'\n" +
    " WHERE PTH.CHECK_DATE >='{0}'\n" +
    " AND PTH.CHECK_DATE <='{1}'\n" +
    " and BQ.BATCH_CODE='BAT_PY_PAYCALC'\n" +
    " and bq.fy=2014\n" +
    " ORDER BY PTH.INST_ID\n" +
    ",PTH.EMPLOYEE_ID\n" +
    ",DBH.HR_DEDUCTION_AND_BENEFITS_CODE\n" +
    ",DBH.CHECK_DATE", startDateStr, endDateStr);
    SqlDataAdapter adapter = new SqlDataAdapter(SQL, connStr);
    DataTable dt = new DataTable();
    adapter.Fill(dt);
    }
    }
    }
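    As a side note, the dates do not have to be spliced into the SQL text; they can be passed as parameters instead. A minimal T-SQL sketch of the idea, using a trimmed-down version of the query (parameter names @from and @to are illustrative; from C# you would bind them with SqlParameter objects):
    DECLARE @StartDate date = '2013-07-01', @EndDate date = '2013-07-31';
    DECLARE @sql nvarchar(max) = N'
    SELECT PTH.INST_ID, PTH.EMPLOYEE_ID, PTH.CHECK_DATE, PTH.CHECK_NO
    FROM   PY_EMPLOYEE_TAX_HISTORY PTH
    WHERE  PTH.CHECK_DATE >= @from
    AND    PTH.CHECK_DATE <= @to';
    -- The date values are bound at execution time, not concatenated into the text
    EXEC sp_executesql @sql, N'@from date, @to date', @from = @StartDate, @to = @EndDate;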
    jdweng
    Could you elaborate more on what this code does in general?
    Does it generate a table with the data between the specified dates? If so, where is the table shown?
    Where does one enter the specified start and end dates in the web application? Do I have to create start and end date blanks and link them to the code for it to work?
    Sorry for the inconvenience - I'm just really new at this. Thanks!

  • SQL query using Group by and Aggregate function

    Hi All,
    I need your help in writing an SQL query to achieve the following.
    Scenario:
    I have table with 3 Columns. There are 3 possible values for col3 - Success, Failure & Error.
    Now I need a query that gives me summary counts of the distinct values of col3 for each GROUP BY combination of col1 and col2. Where there is no value for col3, it should return a ZERO count.
    Example Data:
    Col1 Col2 Col3
    abc 01 success
    abc 02 success
    abc 01 success
    abc 01 Failure
    abc 01 Error
    abc 02 Failure
    abc 03 Error
    xyz 07 Failure
    Required Output:
    c1 c2 s_cnt F_cnt E_cnt (Heading)
    abc 01 2 1 1
    abc 02 1 1 0
    abc 03 0 0 1
    xyz 07 0 1 0
    s_cnt = Success count; F_cnt = Failure count; E_cnt = Error count
    Please note that the output should have 5 columns: col1, col2, and the counts of success, failure and error grouped by (col1, col2),
    and wherever there are NO ROWS it should return ZERO.
    Thanks in advance.
    Regards,
    Shiva

    Hi,
    user13015050 wrote:
    Thanks TTT. Unfortunately I cannot use this solution because I have huge data for this.
    T's solution is basically the same as mine. The first 23 lines just simulate your table. Since you actually have a table, you would start with T's line 24:
    SELECT col1 c1, col2 c2, SUM(decode(col3, 'success', 1, 0)) s_cnt, ...
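    For illustration, the full DECODE version over your sample data would look something like this (using the table name t1 from your later post; note the col3 values are matched case-sensitively, as you posted them):
    SELECT col1 AS c1,
           col2 AS c2,
           SUM(DECODE(col3, 'success', 1, 0)) AS s_cnt,
           SUM(DECODE(col3, 'Failure', 1, 0)) AS f_cnt,
           SUM(DECODE(col3, 'Error',   1, 0)) AS e_cnt
    FROM   t1
    GROUP BY col1, col2
    ORDER BY col1, col2;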
    user13015050 wrote:
    Thanks a lot Frank. It helped me out. I just made some changes to it, as below, and have no issues.
    SELECT     col1
    ,     col2
    ,     COUNT ( CASE
              WHEN col3 = 'SUCCESS'
              THEN 1
              END
         )          AS s_cnt
    ,     COUNT ( CASE
              WHEN col3 = 'FAILED'
              THEN 1
              END
         )          AS f_cnt
    ,     COUNT ( CASE
              WHEN col3 = 'ERROR'
              THEN 1
              END
         )          AS e_cnt
    FROM     t1
    WHERE c2 in ('PURCHASE','REFUND')
    and c4 between to_date('20091031000000','YYYYMMDDHH24MISS') AND to_date('20100131235959','YYYYMMDDHH24MISS')
    GROUP BY c1, c2
    ORDER BY c1, c2;
    Please let me know if you see any issues in this query.
    It's very hard to read.
    This site normally compresses spaces. Whenever you post formatted text (such as queries or results) on this site, type these 6 characters:
    {code}
    (small letters only, inside curly brackets) before and after each section of formatted text, to preserve spacing.
    Also, post exactly what you're using.  The code above is SELECTing col1 and col2, but there's no mention of either in the GROUP BY clause, so I don't believe it's really what you're using.
    Other than that, I don't see anything wrong or suspicious in the query.

  • How to test SQL query performance - reliably?

    I have certain queries and I want to test which one is faster, and how big is the difference.
    How can I do this reliably?
    The problem is, when I execute the queries, Oracle does its caching and execution planning and whatnot, and the results of the queries depend on the order in which I execute them.
    Example: query A and query B, supposed to return same data.
    query A, run 1: 587 seconds
    query A, run 2: 509 seconds
    query B, run 1: 474 seconds
    query B, run 2: 451 seconds
    It would seem that A is somewhat faster than B, but if I change the order and execute B before A, results are different.
    Also, I'm running the queries in SQL Developer, and it only returns the first 100 rows; how can I remove this effect and simulate the real scenario where all rows are fetched?
    I can also use EXPLAIN PLANs and look at the costs, but I'm not sure how much I can trust those either. I understand they are only estimates, and even if cost(a) = 1.5 * cost(b), b could still end up executing faster in practice due to inaccuracies in the cost calculation... right? EDIT: actually even if cost(a) = 5000 * cost(b), b can still execute faster... it seems that query A's cost is 15836 and B's cost is 3, while A seems to be faster in practice.
    Edited by: user620914 on 19-Jan-2010 01:42
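
    One way to take the display and fetch-limit effects out of the picture is to run each candidate from SQL*Plus with autotrace, so that every row is fetched but nothing is rendered, and to compare logical I/O as well as elapsed time. A minimal sketch, assuming the two queries are saved as query_a.sql and query_b.sql (placeholder script names):

    REM execute and fetch every row, suppress the output, report consistent gets etc.
    SET TIMING ON
    SET AUTOTRACE TRACEONLY STATISTICS
    @query_a.sql
    @query_b.sql
    SET AUTOTRACE OFF

    Consistent gets and physical reads are far less sensitive than wall-clock time to which query happened to warm the buffer cache first, which makes run-order effects easier to reason about.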

    user620914 wrote:
    I have to say I don't understand your point either :)
    What are you saying, that people should not test their SQL performance? That tools such as autotrace are useless?
    No.. what I'm saying is that you need a baseline to make an informed decision about SQL performance.
    What does a 4 second response time for query foo mean? Nothing really.. wearing my dba cap I would point out that this is actually utterly useless for me to determine the impact of your query on production, or to use it to determine how to scale it.
    If instead you tell me that it hits that table using an index range scan.. I know what it is doing and have a far better idea what it will do to the production instance.
    Thus my questioning this "elapsed time" measurement approach. As a dba I cannot use it... and I'm not sure what benefit (wearing my developer hat) you will find in it either.
    You can form your SQL queries better or worse, or select your table structure / indexes better or worse. Some choices may end up executing orders of magnitude slower than others. Obviously you can't get exact measurements ("this query executes in 43123 ns") and there are a lot of unpredictable variables that affect the end performance. Still, it's often better to test your queries' / tables' performance before implementing them in the application than not.
    Exactly. I'm not questioning the fact that optimising your code (and ALL your code, not just SQL) is a Good Thing (tm) - but how you go about that optimisation process.
    For example, your PL/SQL code fires off a query. It returns on average 10,000 rows, hits a single partition (the SQL enables partition pruning) and then uses a local bitmap index to identify the rows.
    An optimal query by the sounds of it, and one that will perform and scale well.. even when the database instance needs to service a 100 clients using your code and running this query.
    Only, the code does a single bulk collect of all the rows and stuffs them into dedicated process memory (PGA). Servicing a 100 clients means that dedicated server memory is now needed for 100x10000 rows - there's insufficient free memory, causing the kernel to start swapping pages in and out of memory heavily as all 100 client sessions are active and want to process the rows returned by the optimal query.
    What happens to scalability and performance now?
    Testing for performance is not simply measuring a query and then trying to use that or extrapolate that to determine application performance and the impact on production.
    It starts with the design of the tables, the design of the application, the writing of the code (application and SQL). It is not something that should be done after the fact as in "okay, application all done, let's see how she performs!".. and especially not using time as the baseline for performance measurement.
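
    To make the memory point concrete, here is a minimal PL/SQL sketch contrasting an unbounded bulk collect with a LIMIT-ed one (my_table, its part_key filter and the batch size of 500 are illustrative, not from the post):

    DECLARE
      TYPE t_rows IS TABLE OF my_table%ROWTYPE;
      l_rows t_rows;
      CURSOR c IS SELECT * FROM my_table WHERE part_key = 42;
    BEGIN
      OPEN c;
      LOOP
        -- Bounded: at most 500 rows are held in this session's PGA per batch.
        FETCH c BULK COLLECT INTO l_rows LIMIT 500;
        EXIT WHEN l_rows.COUNT = 0;
        FOR i IN 1 .. l_rows.COUNT LOOP
          NULL;  -- process l_rows(i) here
        END LOOP;
      END LOOP;
      CLOSE c;
      -- The unbounded form below would instead pull the whole result set into PGA at once:
      -- OPEN c; FETCH c BULK COLLECT INTO l_rows; CLOSE c;
    END;
    /

    The unbounded fetch holds the entire result set in the session's PGA at once; with LIMIT the per-session footprint stays bounded no matter how many rows the optimal query returns.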

  • Speed up SQL Query performance

    Hi,
    I have a SQL query with some inner joins between tables.
    In this query I select values from a set of values obtained by going through all rows in a table.
    I am using an inner join between two tables to achieve this.
    But, as the table whose rows I go through is extremely big, it takes a lot of time to go through all the rows and the query slows down.
    Is there any other way I can speed up the query?
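
    A common first step with a query like this is to capture the execution plan rather than guess, which is what the follow-up below does. A minimal sketch of how one might generate it (the dual-to-dual join is only a stand-in for the real query):

    EXPLAIN PLAN FOR
      SELECT d1.dummy
      FROM   dual d1
             JOIN dual d2 ON d2.dummy = d1.dummy;   -- substitute the slow join query here

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);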

    This is the output of my explain plan.
    Please suggest which part needs to be improved.
    PLAN_TABLE_OUTPUT
    Plan hash value: 3453987661
    | Id  | Operation                               | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                        |                                |     3 |  1002 |  3920   (1)| 00:00:48 |
    |   1 |  SORT ORDER BY                          |                                |     3 |  1002 |  3920   (1)| 00:00:48 |
    |*  2 |   TABLE ACCESS BY INDEX ROWID           | AS_EVENT_CHR_DATA              |     1 |    17 |     4   (0)| 00:00:01 |
    |   3 |    NESTED LOOPS                         |                                |     3 |  1002 |  3919   (1)| 00:00:48 |
    |*  4 |     HASH JOIN                           |                                |     3 |   951 |  3907   (1)| 00:00:47 |
    |*  5 |      TABLE ACCESS FULL                  | EV_CHR_DATA_TYPE               |     1 |    46 |     2   (0)| 00:00:01 |
    |   6 |      TABLE ACCESS BY INDEX ROWID        | AS_EVENT_CHR_DATA              |   702 | 50544 |  3883   (1)| 00:00:47 |
    |   7 |       NESTED LOOPS                      |                                |   348 | 94308 |  3904   (1)| 00:00:47 |
    |   8 |        NESTED LOOPS                     |                                |     1 |   199 |    21   (5)| 00:00:01 |
    |   9 |         NESTED LOOPS                    |                                |     1 |   174 |    20   (5)| 00:00:01 |
    |* 10 |          HASH JOIN                      |                                |     1 |   127 |    18   (6)| 00:00:01 |
    |  11 |           NESTED LOOPS                  |                                |     1 |    95 |    13   (0)| 00:00:01 |
    |  12 |            NESTED LOOPS                 |                                |     1 |    60 |    12   (0)| 00:00:01 |
    |  13 |             NESTED LOOPS                |                                |     1 |    33 |    10   (0)| 00:00:01 |
    |  14 |              TABLE ACCESS BY INDEX ROWID| ASSET                          |     1 |    21 |     2   (0)| 00:00:01 |
    |* 15 |               INDEX UNIQUE SCAN         | SERIAL_NUMBER_K3               |     1 |       |     1   (0)| 00:00:01 |
    |* 16 |              INDEX FAST FULL SCAN       | SYS_C0053318                   |     1 |    12 |     8   (0)| 00:00:01 |
    |  17 |             TABLE ACCESS BY INDEX ROWID | SEGMENT_CHILD                  |     1 |    27 |     2   (0)| 00:00:01 |
    |* 18 |              INDEX RANGE SCAN           | SYS_C0053319                   |    12 |       |     1   (0)| 00:00:01 |
    |  19 |            TABLE ACCESS BY INDEX ROWID  | SEGMENT                        |     1 |    35 |     1   (0)| 00:00:01 |
    |* 20 |             INDEX UNIQUE SCAN           | SYS_C0053318                   |     1 |       |     0   (0)| 00:00:01 |
    |* 21 |           TABLE ACCESS FULL             | SEGMENT_TYPE                   |     1 |    32 |     4   (0)| 00:00:01 |
    |  22 |          TABLE ACCESS BY INDEX ROWID    | ASSET_ON_SEGMENT               |     1 |    47 |     2   (0)| 00:00:01 |
    |* 23 |           INDEX RANGE SCAN              | ASSET_ON_SEGME_UK8115533871153 |     1 |       |     1   (0)| 00:00:01 |
    |  24 |         TABLE ACCESS BY INDEX ROWID     | ASSET                          |     1 |    25 |     1   (0)| 00:00:01 |
    |* 25 |          INDEX UNIQUE SCAN              | SYS_C0053240                   |     1 |       |     0   (0)| 00:00:01 |
    |* 26 |        INDEX RANGE SCAN                 | AS_EV_CHR_DATA_ASSETPK         |  4673 |       |    28   (4)| 00:00:01 |
    |* 27 |     INDEX RANGE SCAN                    | SYS_C0053249                   |     5 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("PARAMETRIC_TAG_NAME"."DATA_VALUE"='EngineOilConsumption')
       4 - access("AS_EVENT_CHR_DATA"."EC_DB_SITE"="EV_CHR_DATA_TYPE"."EC_DB_SITE" AND
                  "AS_EVENT_CHR_DATA"."EC_DB_ID"="EV_CHR_DATA_TYPE"."EC_DB_ID" AND
                  "AS_EVENT_CHR_DATA"."EC_TYPE_CODE"="EV_CHR_DATA_TYPE"."EC_TYPE_CODE")
       5 - filter("EV_CHR_DATA_TYPE"."NAME"='servicing ptric time unit')
      10 - access("OILSEG"."SG_TYPE_CODE"="SEGMENT_TYPE"."SG_TYPE_CODE")
      15 - access("ASSET"."SERIAL_NUMBER"='30870')
      16 - filter("ASSET"."ASSET_ID"="SEGMENT"."SEGMENT_ID")
      18 - access("SEGMENT"."SEGMENT_SITE"="SEGMENT_CHILD"."SEGMENT_SITE" AND
                  "SEGMENT"."SEGMENT_ID"="SEGMENT_CHILD"."SEGMENT_ID")
      20 - access("SEGMENT_CHILD"."CHILD_SG_SITE"="OILSEG"."SEGMENT_SITE" AND
                  "SEGMENT_CHILD"."CHILD_SG_ID"="OILSEG"."SEGMENT_ID")
      21 - filter("SEGMENT_TYPE"."NAME"='Aircraft Equipment Engine Holder')
      23 - access("OILSEG"."SEGMENT_ID"="ASSET_ON_SEGMENT"."SEGMENT_ID")
      25 - access("ASSET_ON_SEGMENT"."ASSET_ORG_SITE"="OILASSET"."ASSET_ORG_SITE" AND
                  "ASSET_ON_SEGMENT"."ASSET_ID"="OILASSET"."ASSET_ID")
      26 - access("ASSET_ON_SEGMENT"."ASSET_ORG_SITE"="AS_EVENT_CHR_DATA"."ASSET_ORG_SITE" AND
                  "ASSET_ON_SEGMENT"."ASSET_ID"="AS_EVENT_CHR_DATA"."ASSET_ID")
      27 - access("AS_EVENT_CHR_DATA"."AS_EV_ID"="PARAMETRIC_TAG_NAME"."AS_EV_ID")
