Decode affecting query performance

Hi. We have a system that holds two account numbers, in 8-digit and 12-digit formats. I have a screen with an input box where the user can enter either style of account number.
So the select I need to do must be conditional on the type of number entered. I've got it working by checking the length of the account number entered:
select column1 from table1
where decode(length(:P1_ACCNUM),12,accountno,accno_8digit) = upper(v('P1_ACCNUM'))
but the performance has gone from instant to about 3 seconds. In TOAD, the query is unaffected, so maybe it's the substitution of the variable. I've tried the bind variable syntax coded above and also v('P1_ACCNUM'), but there is no difference in Apex.
Does anyone have any ideas on this?
Cheers
Carlton
(NR - Business as Usual.)

Hi George. Initially, the code only supported the 12-digit account number, but you know what users are like! So the original query was coded to say
accountno = :p1_accnum
The decode was necessary because I had to do the comparison on another column. Both columns are indexed.
As I say, putting the length() function (and an UPPER conversion) into hidden fields and using those has made it perform perfectly again. If I had more time, I wouldn't have coded it like this, but it's an urgent throw-away application to perform quick account queries.
Thanks for your response.
Carlton
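For what it's worth, an alternative that often keeps both indexes usable is to split the comparison into two plain predicates instead of wrapping the columns in DECODE. A minimal sketch, reusing the table and column names from the query above (how well it works depends on your data and indexes):

select column1
from   table1
where  (length(:P1_ACCNUM) =  12 and accountno    = upper(:P1_ACCNUM))
or     (length(:P1_ACCNUM) <> 12 and accno_8digit = upper(:P1_ACCNUM));

Because neither indexed column is inside a function, the optimizer can still pick up the index on accountno or on accno_8digit (typically via OR-expansion), and LENGTH() is only applied to the bind variable.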

Similar Messages

  • Query performance time

    Is it true that a query running on a table with default values of NULL is slower than a query running on a table with default values of zeros and blank spaces? If it's true, why does this happen, and if not, what is the main factor that affects query performance time?
    If anyone can answer me, I will be grateful!

    Hi,
    It happens because columns that are nullable, NOT NULL, or without any constraint influence the CBO (Cost Based Optimizer) when it chooses an execution plan.
    For a complete explanation, check the CBO documentation on technet.
    Good Luck,
    Fred
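    A concrete example of one such effect (a minimal sketch; the table is invented purely for illustration): an Oracle B-tree index does not store entries whose key columns are all NULL, so a predicate on the NULL default cannot use the index, while the same predicate on a zero default can.
    -- hypothetical table, for illustration only
    create table t_default_demo (id number primary key, flag number);   -- flag defaults to NULL
    create index t_default_demo_ix on t_default_demo (flag);
    -- cannot use t_default_demo_ix: all-NULL keys are not stored in the B-tree
    select count(*) from t_default_demo where flag is null;
    -- can use the index
    select count(*) from t_default_demo where flag = 0;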

  • How does Index fragmentation and statistics affect the sql query performance

    Hi,
    How does Index fragmentation and statistics affect the sql query performance
    Thanks
    Shashikala

    How does Index fragmentation and statistics affect the sql query performance
    Very simple answer: outdated statistics lead the optimizer to create bad plans, which in turn require more resources, and this impacts performance. If an index is fragmented (mainly the clustered index, though it holds true for non-clustered indexes as well), the time spent finding
    a value will be greater, because the query has to search a fragmented index to locate the data, and the additional empty space increases search time.
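    In practical terms, the usual checks look something like this (a sketch; the table and index names are placeholders):
    -- check fragmentation of the indexes on a table
    SELECT i.name, ips.avg_fragmentation_in_percent, ips.page_count
    FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') ips
    JOIN   sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id;
    -- light fragmentation: reorganize; heavy fragmentation: rebuild
    ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable REORGANIZE;
    -- ALTER INDEX IX_MyTable_Col1 ON dbo.MyTable REBUILD;
    -- refresh statistics so the optimizer costs plans from current data
    UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;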

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes. It's urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes. It's urgent.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Oracle Query Performance While calling a function in a View

    Hi,
    We have a performance issue in one of our Oracle queries.
    Here is the scenario
    We use a hard-coded value (which is the maximum value from a table) in a couple of DECODE statements in our query. We would like to remove this hard-coded value from the query, so we wrote a function that returns the maximum value from the table. Now when we execute the query after replacing the hard-coded value with the function, the function is called four times, which hampers the query performance.
    Please find below the DECODE statements in the query. This query is part of a main VIEW.
    Using Hardcoded values
    =================
    DECODE(pro_risk_weighted_ctrl_scr, 10, 9.9, pro_risk_weighted_ctrl_scr)
    DECODE(pro_risk_score, 46619750, 46619749, pro_risk_score)
    Using Functions
    ============
    DECODE (pro_risk_weighted_ctrl_scr, rprowbproc.fn_max_rcsa_range_values ('CSR'), rprowbproc.fn_max_rcsa_range_values('CSR')- 0.1, pro_risk_weighted_ctrl_scr)
    DECODE (pro_risk_score, rprowbproc.fn_max_rcsa_range_values ('RSR'), rprowbproc.fn_max_rcsa_range_values ('RSR') - 1, pro_risk_score)
    Can anyone suggest a way to improve the performance of the query?
    Thanks & Regards,
    Raji

    drop table max_demo;
    create table max_demo
    (rcsa   varchar2(10)
    ,value  number);
    insert into max_demo
    select case when mod(rownum,2) = 0
                then 'CSR'
                else 'RSR'
           end
    ,      rownum
    from   dual
    connect by rownum <= 10000;   
    create or replace function f_max (
      i_rcsa    in   max_demo.rcsa%TYPE
    ) return number
    as
      l_max number;
    begin
       select max(value)
       into   l_max
       from   max_demo
       where  rcsa = i_rcsa;
       return l_max;
    end;
    /
    -- slooooooooooooowwwwww
    select m.*
    ,      f_max(rcsa)
    ,      decode(rcsa,'CSR',decode(value,f_max('CSR'),'Y - max is '||f_max('CSR'),'N - max is '||f_max('CSR'))) is_max_csr
    ,      decode(rcsa,'RSR',decode(value,f_max('RSR'),'Y - max is '||f_max('RSR'),'N - max is '||f_max('RSR'))) is_max_rsr
    from   max_demo m
    order by value desc;
    -- ssllooooowwwww
    with subq_max as
         (select f_max('CSR') max_csr,
                 f_max('RSR') max_rsr
          from   dual)
    select m.*
    ,      decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
    ,      decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
    ,      decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
    from   max_demo m
    ,      subq_max s
    order by value desc;
    -- faster
    with subq_max as
         (select /*+materialize */
                 f_max('CSR') max_csr,
                 f_max('RSR') max_rsr
          from   dual)
    select m.*
    ,      decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
    ,      decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
    ,      decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
    from   max_demo m
    ,      subq_max s
    order by value desc;
    -- faster
    with subq_max as
         (select f_max('CSR') max_csr,
                 f_max('RSR') max_rsr,
                 rownum
          from   dual)
    select m.*
    ,      decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
    ,      decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
    ,      decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
    from   max_demo m
    ,      subq_max s
    order by value desc;
    -- sloooooowwwwww
    select m.*
    ,      decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
    ,      decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
    ,      decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
    from   max_demo m
    ,      (select /*+ materialize */
                 f_max('CSR') max_csr,
                 f_max('RSR') max_rsr
          from   dual) s
    order by value desc;
    -- faster
    select m.*
    ,      decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
    ,      decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
    ,      decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
    from   max_demo m
    ,      (select f_max('CSR') max_csr,
                   f_max('RSR') max_rsr,
                   rownum
            from   dual) s
    order by value desc;
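    Applying the faster pattern above to the original view would look something like this (a sketch only; the base table name is a placeholder because the full view text was not posted, and declaring fn_max_rcsa_range_values as DETERMINISTIC is another option worth testing if its result does not change during a query):
    with max_vals as
         (select /*+ materialize */
                 rprowbproc.fn_max_rcsa_range_values('CSR') max_csr,
                 rprowbproc.fn_max_rcsa_range_values('RSR') max_rsr
          from   dual)
    select t.*
    ,      decode(t.pro_risk_weighted_ctrl_scr, v.max_csr, v.max_csr - 0.1, t.pro_risk_weighted_ctrl_scr) adj_ctrl_scr
    ,      decode(t.pro_risk_score,             v.max_rsr, v.max_rsr - 1,   t.pro_risk_score)             adj_risk_score
    from   your_base_table t     -- placeholder for the view's base table(s)
    ,      max_vals v;
    This way each function is called once per query rather than once (or more) per row.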

  • Can an index that is not being used still affect query performance?

    Hi, I have a query with a high cost, so I created two indexes, A and B, to improve its performance.
    After creating the indexes, I reviewed the execution plan of the query and the cost had been reduced, but I noticed that index B was not being used,
    and if I tried to force the query to use index B with a hint the cost increased, so I decided to drop index B.
    Once I dropped index B I checked the execution plan again and noticed that the cost of the query had increased; if I recreate index B the explain plan
    shows a lower cost even though it is not being used by the execution plan.
    Does anyone know why this is happening?
    Can an index that is not used by the execution plan still affect query performance?

    user11173393 wrote:
    Can an index that is not used by the execution plan still affect query performance?
    You said that is what is happening, and I believe you.

  • SQL query performance issues.

    Hi All,
    I worked on the query a month ago and the fix worked for me in the test instance but failed in production. Following is the URL for the previous thread.
    SQL query performance issues.
    Following is the tkprof file.
    CURSOR_ID:76  LENGTH:2383  ADDRESS:f6b40ab0  HASH_VALUE:2459471753  OPTIMIZER_GOAL:ALL_ROWS  USER_ID:443 (APPS)
    insert into cos_temp(
    TRX_DATE, DEPT, PRODUCT_LINE, PART_NUMBER,
    CUSTOMER_NUMBER, QUANTITY_SOLD, ORDER_NUMBER,
    INVOICE_NUMBER, EXT_SALES, EXT_COS,
    GROSS_PROFIT, ACCT_DATE,
    SHIPMENT_TYPE,
    FROM_ORGANIZATION_ID,
    FROM_ORGANIZATION_CODE)
    select a.trx_date,
    g.segment5 dept,
    g.segment4 prd,
    m.segment1 part,
    d.customer_number customer,
    b.quantity_invoiced units,
    --       substr(a.sales_order,1,6) order#,
    substr(ltrim(b.interface_line_attribute1),1,10) order#,
    a.trx_number invoice,
    (b.quantity_invoiced * b.unit_selling_price) sales,
    (b.quantity_invoiced * nvl(price.operand,0)) cos,
    (b.quantity_invoiced * b.unit_selling_price) -
    (b.quantity_invoiced * nvl(price.operand,0)) profit,
    to_char(to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS'),'DD-MON-RR') acct_date,
    'DRP',
    l.ship_from_org_id,
    p.organization_code
    from   ra_customers d,
    gl_code_combinations g,
    mtl_system_items m,
    ra_cust_trx_line_gl_dist c,
    ra_customer_trx_lines b,
    ra_customer_trx_all a,
    apps.oe_order_lines l,
    apps.HR_ORGANIZATION_INFORMATION i,
    apps.MTL_INTERCOMPANY_PARAMETERS inter,
    apps.HZ_CUST_SITE_USES_ALL site,
    apps.qp_list_lines_v price,
    apps.mtl_parameters p
    where a.trx_date between to_date('2010/02/01 00:00:00','yyyy/mm/dd HH24:MI:SS')
    and to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS')+0.9999
    and   a.batch_source_id = 1001     -- Sales order shipped other OU
    and   a.complete_flag = 'Y'
    and   a.customer_trx_id = b.customer_trx_id
    and   b.customer_trx_line_id = c.customer_trx_line_id
    and   a.sold_to_customer_id = d.customer_id
    and   b.inventory_item_id = m.inventory_item_id
    and   m.organization_id
         = decode(substr(g.segment4,1,2),'01',5004,'03',5004,
         '02',5003,'00',5001,5002)
    and   nvl(m.item_type,'0') <> '111'
    and   c.code_combination_id = g.code_combination_id+0
    and   l.line_id = b.interface_line_attribute6
    and   i.organization_id = l.ship_from_org_id
    and   p.organization_id = l.ship_from_org_id
    and   i.org_information3 <> '5108'
    and   inter.ship_organization_id = i.org_information3
    and   inter.sell_organization_id = '5108'
    and   inter.customer_site_id = site.site_use_id
    and   site.price_list_id = price.list_header_id
    and   product_attr_value = to_char(m.inventory_item_id)
    call        count       cpu   elapsed         disk        query      current         rows    misses
    Parse           1      0.47      0.56           11          197            0            0         1
    Execute         1   3733.40   3739.40        34893    519962154           11          188         0
    total           2   3733.87   3739.97        34904    519962351           11          188         1
    |         Rows Row Source Operation
    | ------------ ---------------------------------------------------
    |          188 HASH JOIN (cr=519962149 pr=34889 pw=0 time=2607.35)
    |          741 .TABLE ACCESS BY INDEX ROWID QP_PRICING_ATTRIBUTES (cr=519939426 pr=34889 pw=0 time=2457.32)
    |    254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
    |    254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
    |          741 ....NESTED LOOPS (cr=50042 pr=7230 pw=0 time=11.37)
    |          741 .....NESTED LOOPS (cr=48558 pr=7229 pw=0 time=11.35)
    |          741 ......NESTED LOOPS (cr=47815 pr=7223 pw=0 time=11.32)
    |         3237 .......NESTED LOOPS (cr=41339 pr=7223 pw=0 time=12.42)
    |         3237 ........NESTED LOOPS (cr=38100 pr=7223 pw=0 time=12.39)
    |         3237 .........NESTED LOOPS (cr=28296 pr=7139 pw=0 time=12.29)
    |         1027 ..........NESTED LOOPS (cr=17656 pr=4471 pw=0 time=3.81)
    |         1027 ...........NESTED LOOPS (cr=13537 pr=4404 pw=0 time=3.30)
    |          486 ............NESTED LOOPS (cr=10873 pr=4240 pw=0 time=0.04)
    |          486 .............NESTED LOOPS (cr=10385 pr=4240 pw=0 time=0.03)
    |          486 ..............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_ALL (cr=9411 pr=4240 pw=0 time=0.02)
    |        75253 ...............INDEX RANGE SCAN RA_CUSTOMER_TRX_N5 (cr=403 pr=285 pw=0 time=0.38)
    |          486 ..............TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS (cr=974 pr=0 pw=0 time=0.01)
    |          486 ...............INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 (cr=488 pr=0 pw=0 time=0.01)
    |          486 .............INDEX UNIQUE SCAN HZ_PARTIES_U1 (cr=488 pr=0 pw=0 time=0.01)
    |         1027 ............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_LINES_ALL (cr=2664 pr=164 pw=0 time=1.95)
    |         2063 .............INDEX RANGE SCAN RA_CUSTOMER_TRX_LINES_N2 (cr=1474 pr=28 pw=0 time=0.22)
    |         1027 ...........TABLE ACCESS BY INDEX ROWID RA_CUST_TRX_LINE_GL_DIST_ALL (cr=4119 pr=67 pw=0 time=0.54)
    |         1027 ............INDEX RANGE SCAN RA_CUST_TRX_LINE_GL_DIST_N1 (cr=3092 pr=31 pw=0 time=0.20)
    |         3237 ..........TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS_B (cr=10640 pr=2668 pw=0 time=15.35)
    |         3237 ...........INDEX RANGE SCAN MTL_SYSTEM_ITEMS_B_U1 (cr=2062 pr=40 pw=0 time=0.33)
    |         3237 .........TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL (cr=9804 pr=84 pw=0 time=0.77)
    |         3237 ..........INDEX UNIQUE SCAN OE_ORDER_LINES_U1 (cr=6476 pr=47 pw=0 time=0.43)
    |         3237 ........TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=3239 pr=0 pw=0 time=0.04)
    |         3237 .........INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=2 pr=0 pw=0 time=0.01)
    |          741 .......TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=6476 pr=0 pw=0 time=0.10)
    |         6474 ........INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=3239 pr=0 pw=0 time=0.03)
    Please help.
    Regards
    Ashish

    |    254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
    |    254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
    There is no way the optimizer should choose to process that many rows using nested loops.
    Either the statistics are not up to date, the data values are skewed, or you have some optimizer parameter set to a non-default value to force index access.
    Please post explain plan and optimizer* parameter settings.
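    For reference, something like the following produces both pieces of information (a sketch; substitute the real insert ... select for the placeholder statement):
    -- generate and display the execution plan
    explain plan for
    select 1 from dual;   -- placeholder: put the full statement here
    select * from table(dbms_xplan.display);
    -- list optimizer-related parameters and whether they are at their defaults
    select name, value, isdefault
    from   v$parameter
    where  name like 'optimizer%';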

  • Tuning query performance

    Dear experts,
    I have a question regarding as the performance of a BW query.
    It takes 10 minutes to display about 23 thousand lines.
    This query read the data from an ODS object.
    According to the "where" clause in the "select" statement monitored via Oracle session when the query was running, I created an index for this ODS object.
    After rerunning the query, I found that the index was used by Oracle when reading this table (the estimated cost dropped to 2 from about 3000).
    However, it takes the same time as before.
    Is there any other reason, or other factors that I should consider, in tuning the performance of this query?
    Thanks in advance

    Hi David,
    Query performance when reporting on an ODS object is slower compared to InfoCubes, InfoSets, MultiProviders, etc., because a DSO has no aggregates or the other performance techniques those objects offer.
    Basically, for a DSO/ODS you need to turn on the BEx reporting flag, which again is an overhead for query execution and affects performance.
    To improve performance when reporting on an ODS you can create secondary indexes from the BW workbench, for example as sketched below.
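    At the database level the generated index is simply a secondary index on the DSO's active table (a sketch; the DSO and field names are hypothetical, real names follow the /BIC/A<dsoname>00 pattern):
    -- hypothetical DSO ZSALES with selective field /BIC/ZCUSTNO
    create index "/BIC/AZSALES00~Z01"
        on "/BIC/AZSALES00" ("/BIC/ZCUSTNO");
    In practice you would define the index in the DSO maintenance screen so that BW creates and manages it, rather than issuing the DDL directly.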
    Please check the below links.
    [Re: performance issues of ODS]
    [Which criteria to follow to pick InfoObj. as secondary index of ODS?]
    Hope this helps.
    Regards,
    Haritha.

  • Building a new Cube Vs Restricted Key figure in Query - Performance issue

    Hi,
    I have a requirement to create an OPEX restricted key figure in a query. The problem is that the key figure should be restricted to about 30 GL Accounts and almost 300 cost centers.
    I do not know if this might cause a performance issue in the query. At the moment, I am thinking of creating a new OPEX cube, loading only those 30 GL Accounts, 300 cost centers and the Amount, and including OPEX in the MultiProvider in order to get the OPEX
    amount in the report.
    What's the best solution - creating an OPEX restricted key figure or an OPEX cube?
    thanks,
    Bhat

    I think you should go for the cube, as all restricted key figures are calculated at OLAP runtime, so it will definitely affect query performance. There are a lot of cost centers on which you have to restrict, so during query runtime it will take a lot of time to fetch the data from the InfoProvider. It is better to create a cube with the restrictions and include it in the MultiProvider; it will definitely save a lot of time during query execution.

  • What happens to unused common table expressions? Does this affect performance?

    If I write a query with one or more common table expressions to which I
    don't actually refer in the query, do they just get pruned off or do
    they get executed regardless? How does it affect performance?
    Prem Shah

    Try the test below.
    It seems that when a CTE is not referenced in the query, the statement inside the CTE is not executed at all, even if it is a nested CTE. See for yourself:
    Create table UserInfo
    (
    UserId int primary key,
    UserName varchar(30)
    )
    GO
    Create table UserInfo1
    (
    UserId int primary key,
    UserName varchar(30)
    )
    GO
    insert into UserInfo
    select 1001,'X1' union all
    select 1002,'X2' union all
    select 1009 ,'X9'
    GO
    insert into UserInfo1
    select 1001,'X1' union all
    select 1002,'X2' union all
    select 1009 ,'X9'
    GO
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    GO
    Begin tran
    select * from UserInfo1 where UserId between 1001 and 1009
    and UserName = 'XXXX'
    --Commit
    PRINT 'WITH out CTE access in select'
    SET STATISTICS IO ON
    ;WITH CTE1 AS
    (Select * From UserInfo1)
    select * From UserInfo
    PRINT 'WITH CTE access in select'
    ;WITH CTE1 AS
    (Select * From UserInfo1)
    select * From UserInfo a inner join CTE1 b on a.UserId=b.UserId
    Stats IO
        WITH out CTE access in select
        (3 row(s) affected)
        Table 'UserInfo'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        (1 row(s) affected)
        WITH CTE access in select
        (3 row(s) affected)
        Table 'UserInfo1'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'UserInfo'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        (1 row(s) affected)
    Thanks
    Saravana Kumar C

  • Query Performance - Free charactristics

    Hi Gurus,
    Does the number of free characteristics in a query influence the performance of the query, especially in the case of an input-ready query?
    Thanks,
    Magnus

    Free characteristics will not affect query performance.
    SAP BW suggests putting characteristics as free characteristics to avoid a large volume of data when the query is opened.
    If end users need to input lots of parameters when opening the query, I think you'd better check the OLAP cache.
    If the data for the query is not in the cache, BW has to fetch the data from the cube, and that definitely affects performance.
    You can test the query in RSRT to find the result.

  • Table size effect on query performance

    I know this sounds like a very generic question, but, how much does table size affect the performance of a query?
    This is a rather unusual case actually. I am running a query on two tables, say, Table1 and Table2. Table1 has roughly 1 million records. For Table2, I tried using different numbers of records.
    The resultant query returns 150,000 records. If I keep Table2 to 500 records, the query execution time takes 2 minutes. But, if I increase Table2 to 8,000 records, it would take close to 20 minutes!
    I have checked the "Explain plan" statement and note that the indexes for the columns used for joining the two tables are being used.
    Is it normal for table size to have such a big effect on performance time, even when the number of records is under the 10,000 range?
    Really appreciate your inputs. Thanks in advance.

    Did you update your statistics when you changed the size of Table2? The CBO will probably choose different plans as the size of Table2 changes. If it thinks there are many more or fewer rows, you're likely to have performance issues.
    Justin
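    If the statistics were not refreshed after resizing Table2, something like this (a sketch, assuming an Oracle database given the explain plan/CBO context) brings them up to date:
    -- regather optimizer statistics for the resized table and its indexes
    begin
      dbms_stats.gather_table_stats(
        ownname => user,
        tabname => 'TABLE2',
        cascade => true);
    end;
    /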

  • Query performance data

    Hi,
    I have archived our dev database and reduced the table size from one million to 45,000 records. The same table has 1 million records in PROD. Before implementing the archival job in PROD I want to see the difference in query performance. When I ran
    the same query in DEV and PROD, both took 1 second to execute, so in basic terms there is not much performance improvement. But I turned on IO statistics and time statistics for the queries in DEV and PROD.
    DEV - clustered index lookup: 88%, nonclustered index lookup: 2%
    PROD - clustered index lookup: 18%, nonclustered index lookup: 67%
    I am not sure why there is this difference in query plan. The logical reads in DEV were 1657 with CPU time = 12609 ms,
    while PROD shows 19434 logical reads and CPU time = 13625 ms.
    Can anyone help me analyse the key differences and come to a conclusion on whether there is some performance improvement?
    Thanks
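    (For anyone repeating the comparison, the IO and timing figures quoted here were presumably captured with something along these lines; this is only a sketch and the query shown is a placeholder.)
    -- capture logical reads and CPU/elapsed time for the statement under test
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    SELECT COUNT(*) FROM dbo.FLIGHTDETAILS;   -- placeholder for the real query
    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;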

    Hi
    The statistics on DEV:
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 20, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'AirlineGroupMaster'. Scan count 0, logical reads 158, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#2F8E53E2'. Scan count 1, logical reads 79, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'TimefenceSubGroup'. Scan count 1, logical reads 9, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#3082781B'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 16 ms,  elapsed time = 10 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 15 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
    (492 row(s) affected)
    Table 'AirlineGroupMaster'. Scan count 0, logical reads 1024, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#2F8E53E2'. Scan count 1, logical reads 512, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'FLIGHTDETAILS'. Scan count 5, logical reads 1607, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#3082781B'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 13000 ms,  elapsed time = 13373 ms.
     SQL Server Execution Times:
       CPU time = 13000 ms,  elapsed time = 13373 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 13031 ms,  elapsed time = 13392 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    IN PROD:
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 20, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'AirlineGroupMaster'. Scan count 0, logical reads 158, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#58010028'. Scan count 1, logical reads 79, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'TimefenceSubGroup'. Scan count 1, logical reads 9, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#58F52461'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 96 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    Table 'Worktable'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
    (518 row(s) affected)
    Table 'AirlineGroupMaster'. Scan count 0, logical reads 2196, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'FLIGHTDETAILS'. Scan count 20, logical reads 19445, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#58010028'. Scan count 5, logical reads 5, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table '#58F52461'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    (1 row(s) affected)
     SQL Server Execution Times:
       CPU time = 13859 ms,  elapsed time = 16827 ms.
     SQL Server Execution Times:
       CPU time = 13859 ms,  elapsed time = 16827 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 13859 ms,  elapsed time = 16972 ms.
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 0 ms.
     SQL Server Execution Times:
       CPU time = 0 ms,  elapsed time = 0 ms.

  • OLAP Query performance

    Hi,
    Does using compression & partitioning (by time) affect the Reporting performance adversely? I have a 8GB Cube with 13 dimensions built in 10.1.0.4. Cube was defined with 1 dense dimension and other 12 as sparse in a compressed composite. It was also partitioned by Years. It takes close to 1 hour to build the cube. Since it is compressed, fully aggregated, I would assume. However, performance of discoverer queries on this cube has been pathetic! Any drill downs or slice/dice takes a long time to return if there are multiple dimensions in either edges of the Crosstab. Also, when scrolling down, it freezes for a while and then brings the data. Sometimes it takes couple of minutes!
    What are the things I need to check to speed this up? I think I have already checked things like sparsity, SGA/PGA sizes, the OLAP page pool, etc.
    Regards
    Suresh

    Hi Suresh,
    Before you can implement changes to improve performance, you need to understand the causes of the performance problems. Discoverer for OLAP uses the OLAP API for queries, and the OLAP API generates SQL to query an analytic workspace. There are a few broad possible causes of poor query performance:
    - retrieving data from the AW is slow
    - SQL execution is slow, perhaps because the SQL is inefficient
    - SQL execution is fast, but the OLAP API is slow to fetch data
    Each of these causes demands a different approach. I'd suggest that you enable configuration parameters SQL_TRACE and TIMED_STATISTICS, generate some trace files, and use the tkprof utility to try to narrow down the cause of the trouble.
    Geof
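    A rough outline of that tracing approach (a sketch; run it in the session that executes the slow Discoverer/OLAP API query, and note that trace file names are system-generated):
    -- enable timed SQL tracing in the session
    alter session set timed_statistics = true;
    alter session set sql_trace = true;
    -- ... run the slow query here ...
    alter session set sql_trace = false;
    The resulting trace file (written to user_dump_dest) can then be summarised with the tkprof command-line utility, e.g. tkprof <tracefile>.trc <output>.txt.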

  • Query performance.

    Hi
    I have created a procedure that accepts two bind variables from a report. The user will select one or the other, both, or neither of the variables. To return the appropriate results I have created a view with the entire result set, and in the procedure a number of IF statements determine what to place in the WHERE clause when selecting from the view, depending on which variables are populated.
    My concern is that the query that generates the view includes several joins, outputs around 150,000 records in total, and seems rather slow to run.
    Would you recommend another solution, such as placing the query in the procedure itself, repeated for every IF statement?
    Or should I work on the query performance?
    What would be the most efficient solution for my problem?
    Any advice would be greatly appreciated.
    Thanks

    When your query takes too long: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
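    One common alternative to branching IF statements is a single statement that treats NULL parameters as "no filter" (a sketch; the view and column names are hypothetical, and whether it performs well enough depends on your data and indexes):
    select r.*
    from   report_view r                                   -- hypothetical view
    where  (:p_customer is null or r.customer_id = :p_customer)
    and    (:p_status   is null or r.status      = :p_status);
    For very selective parameters, dynamic SQL that includes only the predicates actually supplied (EXECUTE IMMEDIATE with binds) can optimize better; both approaches are worth testing.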
