Query Performance Related Issue

Hi all,
We have a problem when executing a query in BI 7.0:
when we execute the query the first time, it takes 2 minutes.
When we execute the same query a second time (without logging off the Query Designer), it takes 5 seconds.
Please give me some input on why it takes more time the first time.
How can we reduce the query execution time for the first run?
We have already tried all the query performance options.
Thanks & regards,
Kiran Manyam.

Hi Kiran,
As the others pointed out, when you executed the query the second time it probably read the result from the cache.
Do the following to investigate the performance issue:
1) Go to RSRT and run your query in debug mode ("Execute + Debug").
2) Select "Display Statistics Data" and check "Do not use cache".
3) After running the query, press the Back button. The system will show the statistics summary.
4) Analyze which activity has the highest runtime.
5) If the Data Manager time is 30% or more of the total query runtime, and the ratio of records selected to records transferred is greater than 10 (for example, 1,000,000 records read from the database but only 50,000 passed on is a ratio of 20), build an aggregate to help the query runtime. This sacrifices some data load time (the aggregate must be rolled up after each load) but will improve query execution time.
To know which aggregate will help, run the query in debug mode again and choose the option "Display aggregates found". If the query did not hit an aggregate, the names of the cubes will appear. Use the data in the suggestions column (marked with S:) as the basis for your aggregate. A small relational sketch of the idea follows.
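Conceptually, an aggregate is a pre-summarized copy of the cube. Below is a minimal sketch in plain SQL; the table and column names are invented for illustration, and real BW aggregates are generated and rolled up by the system rather than created by hand.

-- Stand-in for an InfoCube's fact (F) table
CREATE TABLE sales_fact (
  calmonth CHAR(6),
  material VARCHAR(18),
  customer VARCHAR(10),
  amount   DECIMAL(17,2)
);

-- An aggregate is essentially a summary table over the characteristics
-- the query actually uses (here: month and material)
CREATE TABLE sales_agg AS
SELECT calmonth, material, SUM(amount) AS amount
FROM   sales_fact
GROUP  BY calmonth, material;

-- A query on month and material can now be answered from sales_agg,
-- which holds far fewer records than sales_fact
SELECT calmonth, SUM(amount)
FROM   sales_agg
WHERE  material = 'M-01'
GROUP  BY calmonth;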
Hope this helps.

Similar Messages

  • How to take care of Performance related issue in DRM?

    How can we optimize, or take care of, performance-related issues when we have huge data volumes in a DRM application, especially when it comes to imports and exports?
    Thanks
    -Ramesh

    Hi Ramesh,
    Below are some suggestions from my side.
    1. Try to keep the current version open at the time of exports and automator runs.
    2. Use derived properties only when absolutely necessary; otherwise manage without them if you can.
    3. If you create a derived property, make sure you restrict it to the related hierarchy.
    4. When you create a derived property, check performance against a big hierarchy with derived node names and descriptions; if the performance does not look satisfactory, go with some other logic.
    5. Don't overuse lookup properties like Nodeinhier, hiernodepropvalue, etc.; use them only when needed.
    6. Take care with validations: if you can do better with a verification, don't put a validation in that case.
    7. I have seen problems while loading data from DRM to the database; try to avoid it and use a workaround if possible (like SQL*Loader).
    I am not sure how useful these points will be for you.
    Thanks!
    Sandeep

  • Performance related issues in design studio 1.2

    Hi All,
    Is there a best-practices guide for development in Design Studio?
    I am facing performance issues when I increase the count of data sources (alias in my case) in my application merely from 2 to 3.
    There is a considerable time lag in launching the application both locally and on the BI server.
    My application is on HANA analytic views and going by its performance it should be fast.
    Each data source I created contains no more than 1 or 2 dimensions and a single measure.
    I am not able to understand why this is happening. Is it preferable to bring in all the data as one big data source? But in that case there is a limitation in selecting specific dimensions and measures for different components. The tool forces us to use multiple data source aliases!
    Any views will be highly appreciated.
    Best Regards,
    Tanisha

    Have you tried using background processing when adding the 3rd data source? See How to set up Background Processing in Design Studio
    Design Studio 1.3 is coming out in Q2 2014 with planned native HANA connectivity (see Figure 2 in BI2014 Design Studio 1.3 Planned Updates | SCN); that sounds like it might help performance.

  • Performance related issue in Production Server

    Hi,
    When I deploy the whole application on the development server (by logging into the client's server remotely through VPN), it works fine.
    But when it is run at the client side for system testing, user acceptance testing, and
    integration testing, it becomes very slow.
    Because of this performance issue, we are not able to transport the DCs to the production server.
    We are sure it is not coming from the R/3 side (BAPIs etc.); it is coming from either the Web Dynpro or the Portal side.
    Can anybody help with how to solve this issue?
    Thanks.

    Hi Narendra,
    Please elaborate on what the QAS customization and the production customization are.
    The TR has been done properly.
    Let me give you an overview of the project.
    In Organizational Assignment, a Z-program has been written which captures additional information like Grade Code, Designation Code, and Pay Sheet Code.
    It's a government client. The combination of Grade Code and Designation Code is the Pay Scale Grouping;
    e.g. if the Grade Code is A01 and the Designation Code is A005,
    the Pay Scale Grouping is A01-A005.
    If we put the values in 0001 (Org Assignment) for Grade Code (A01) and Designation Code (A005), A01-A005 comes automatically into Pay Scale Grouping for a particular Pay Scale Structure.
    But now the problem is that the Pay Scale Type does not get defaulted in 0008 for a Personnel Area and Personnel Subarea.
    I have maintained the PST in Tariff and in Check Assignment of Pay Scale Structure in the Enterprise Structure.
    But in 0008 we have to enter the PST manually. Since the client has only one Pay Scale Area for all Personnel Areas and Personnel Subareas, we don't have to bother about that.
    Please tell me where the error could be.
    Regards,
    Usha

  • Query Performance and Data Loading Performance Issues

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary to improve selection performance (see the sketch after this list).
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
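    As a concrete illustration of points 6 and 9, the secondary index sketched below covers an extractor's selection fields; the table and column names are invented for the example, and the real DataSource tables in your system would differ.

    -- Hypothetical custom table read by a generic extractor
    CREATE TABLE zsales_doc (
      doc_number VARCHAR(10) PRIMARY KEY,
      created_on DATE,
      sales_org  VARCHAR(4),
      amount     DECIMAL(15,2)
    );

    -- The extractor selects by created_on and sales_org, which are not
    -- key fields, so a secondary index on them cuts the table read time
    CREATE INDEX zsales_doc_sel ON zsales_doc (created_on, sales_org);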
    Hope it Helps
    Chetan
    @CP..

  • Query Performance Issues

    Hi Gurus,
    I am working on performance tuning at the moment and want some tips
    on query performance tuning; I would appreciate any help in that
    regard.
    The thing is that I have got an idea about the system, and the
    issues are with DB space, ABAP dumps, free space in the table
    space, no number range buffering, cubes using too many aggregates,
    large InfoCubes, large ODS objects, and many others.
    So my question is: can anyone tell me how to resolve the
    issues with the large master data tables and the large ODS objects? One more
    important issue is KPIs exceeding their reference values, so any idea
    how to deal with them?
    Waiting for your valuable responses.
    Thanks in advance
    Regards
    Amit

    Hi Amit
    For query performance issues you can go for the following:
    Aggregates: they will help a lot to make your query faster, because the query does not hit your cube; it hits the aggregates, which have far fewer records compared to your cube.
    Secondly, I would suggest using CKFs in place of formulas, if there are any in the query.
    Another thing: avoid, to the extent possible, the use of navigational attributes. If you must use them, keep it to a minimum; the reason I say so is that during query execution every navigational attribute adds an unnecessary join to your master data and thus decreases query performance.
    Be specific about rows and columns; if you are not sure about a key figure or a characteristic, better to put it in the free characteristics.
    Use filters if possible.
    If you follow these, I am sure your query performance will increase.
    Assign points if applicable
    Thanks
    puneet

  • Query Performance Issues on a cube sized 64GB.

    Hi,
    We have a non-time based cube whose size is 64 GB . Effectively, I can't use time dimension for partitioning. The transaction table has ~ 850 million records. We have 20+ dimensions among which 2 of the dimensions have 50 million records.
    I have equally distributed the fact table records among 60 partitions. Each partition size is around 900 MB.
    The processing of the cube is not an issue as it completes in 3.5 hours. The issue is with the query performance of the cube.
    When an MDX query is submitted, unfortunately, in the majority of cases the storage engine has to scan all the partitions (as our cube is not time dependent and we can't find a suitable dimension that fits the bill for partitioning the measure group on it).
    I'm aware of the cache warming and usage-based optimization (UBO) techniques.
    However, the cube is available for users to perform ad hoc queries, and hence the benefits of cache warming and UBO may cease to contribute to the performance gain, as there is a high probability that each user may look at the data from a different perspective (especially when we have 20+ dimensions) as the days progress.
    Also, we have 15+ average calculations (calculated measures) in the cube. So the storage engine sends all the granular data the formula engine requests (which could be millions of rows), and the formula engine then performs the average calculation.
    A look at the profiler suggested that a considerable amount of time is spent by the storage engine gathering the records (from 60 partitions).
    FYI - our server has 32 GB RAM and 8 cores, and it is exclusive to Analysis Services.
    I would appreciate comments from anyone who has worked on a large cube that is not time dependent and the steps they took to improve the adhoc query performance for the users.
    Thanks
    CoolP

    Hello CoolP,
    Here is a good article regarding how to tune query performance in SSAS, please see:
    Analysis Services Query Performance Top 10 Best Practices:
    http://technet.microsoft.com/en-us/library/cc966527.aspx
    Hope you can find some helpful clues for tuning your SSAS server query performance. Moreover, there are two ways to improve the query response time for an increasing number of end users:
    Adding more power to the existing server (scale up)
    Distributing the load among several small servers (scale out)
    For detailed information, please see:
    http://technet.microsoft.com/en-us/library/cc966449.aspx
    Regards,
    Elvis Long
    TechNet Community Support

  • SQL query performance issues.

    Hi All,
    I worked on the query a month ago and the fix worked for me in a test instance but failed in production. Following is the URL for the previous thread.
    SQL query performance issues.
    Following is the tkprof output.
    CURSOR_ID:76  LENGTH:2383  ADDRESS:f6b40ab0  HASH_VALUE:2459471753  OPTIMIZER_GOAL:ALL_ROWS  USER_ID:443 (APPS)
    insert into cos_temp(
    TRX_DATE, DEPT, PRODUCT_LINE, PART_NUMBER,
    CUSTOMER_NUMBER, QUANTITY_SOLD, ORDER_NUMBER,
    INVOICE_NUMBER, EXT_SALES, EXT_COS,
    GROSS_PROFIT, ACCT_DATE,
    SHIPMENT_TYPE,
    FROM_ORGANIZATION_ID,
    FROM_ORGANIZATION_CODE)
    select a.trx_date,
    g.segment5 dept,
    g.segment4 prd,
    m.segment1 part,
    d.customer_number customer,
    b.quantity_invoiced units,
    --       substr(a.sales_order,1,6) order#,
    substr(ltrim(b.interface_line_attribute1),1,10) order#,
    a.trx_number invoice,
    (b.quantity_invoiced * b.unit_selling_price) sales,
    (b.quantity_invoiced * nvl(price.operand,0)) cos,
    (b.quantity_invoiced * b.unit_selling_price) -
    (b.quantity_invoiced * nvl(price.operand,0)) profit,
    to_char(to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS'),'DD-MON-RR') acct_date,
    'DRP',
    l.ship_from_org_id,
    p.organization_code
    from   ra_customers d,
    gl_code_combinations g,
    mtl_system_items m,
    ra_cust_trx_line_gl_dist c,
    ra_customer_trx_lines b,
    ra_customer_trx_all a,
    apps.oe_order_lines l,
    apps.HR_ORGANIZATION_INFORMATION i,
    apps.MTL_INTERCOMPANY_PARAMETERS inter,
    apps.HZ_CUST_SITE_USES_ALL site,
    apps.qp_list_lines_v price,
    apps.mtl_parameters p
    where a.trx_date between to_date('2010/02/01 00:00:00','yyyy/mm/dd HH24:MI:SS')
    and to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS')+0.9999
    and   a.batch_source_id = 1001     -- Sales order shipped other OU
    and   a.complete_flag = 'Y'
    and   a.customer_trx_id = b.customer_trx_id
    and   b.customer_trx_line_id = c.customer_trx_line_id
    and   a.sold_to_customer_id = d.customer_id
    and   b.inventory_item_id = m.inventory_item_id
    and   m.organization_id
         = decode(substr(g.segment4,1,2),'01',5004,'03',5004,
         '02',5003,'00',5001,5002)
    and   nvl(m.item_type,'0') <> '111'
    and   c.code_combination_id = g.code_combination_id+0
    and   l.line_id = b.interface_line_attribute6
    and   i.organization_id = l.ship_from_org_id
    and   p.organization_id = l.ship_from_org_id
    and   i.org_information3 <> '5108'
    and   inter.ship_organization_id = i.org_information3
    and   inter.sell_organization_id = '5108'
    and   inter.customer_site_id = site.site_use_id
    and   site.price_list_id = price.list_header_id
    and   product_attr_value = to_char(m.inventory_item_id)
    call        count       cpu   elapsed         disk        query      current         rows    misses
    Parse           1      0.47      0.56           11          197            0            0         1
    Execute         1   3733.40   3739.40        34893    519962154           11          188         0
    total           2   3733.87   3739.97        34904    519962351           11          188         1
    |         Rows Row Source Operation
    | ------------ ---------------------------------------------------
    |          188 HASH JOIN (cr=519962149 pr=34889 pw=0 time=2607.35)
    |          741 .TABLE ACCESS BY INDEX ROWID QP_PRICING_ATTRIBUTES (cr=519939426 pr=34889 pw=0 time=2457.32)
    |    254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
    |    254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
    |          741 ....NESTED LOOPS (cr=50042 pr=7230 pw=0 time=11.37)
    |          741 .....NESTED LOOPS (cr=48558 pr=7229 pw=0 time=11.35)
    |          741 ......NESTED LOOPS (cr=47815 pr=7223 pw=0 time=11.32)
    |         3237 .......NESTED LOOPS (cr=41339 pr=7223 pw=0 time=12.42)
    |         3237 ........NESTED LOOPS (cr=38100 pr=7223 pw=0 time=12.39)
    |         3237 .........NESTED LOOPS (cr=28296 pr=7139 pw=0 time=12.29)
    |         1027 ..........NESTED LOOPS (cr=17656 pr=4471 pw=0 time=3.81)
    |         1027 ...........NESTED LOOPS (cr=13537 pr=4404 pw=0 time=3.30)
    |          486 ............NESTED LOOPS (cr=10873 pr=4240 pw=0 time=0.04)
    |          486 .............NESTED LOOPS (cr=10385 pr=4240 pw=0 time=0.03)
    |          486 ..............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_ALL (cr=9411 pr=4240 pw=0 time=0.02)
    |        75253 ...............INDEX RANGE SCAN RA_CUSTOMER_TRX_N5 (cr=403 pr=285 pw=0 time=0.38)
    |          486 ..............TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS (cr=974 pr=0 pw=0 time=0.01)
    |          486 ...............INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 (cr=488 pr=0 pw=0 time=0.01)
    |          486 .............INDEX UNIQUE SCAN HZ_PARTIES_U1 (cr=488 pr=0 pw=0 time=0.01)
    |         1027 ............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_LINES_ALL (cr=2664 pr=164 pw=0 time=1.95)
    |         2063 .............INDEX RANGE SCAN RA_CUSTOMER_TRX_LINES_N2 (cr=1474 pr=28 pw=0 time=0.22)
    |         1027 ...........TABLE ACCESS BY INDEX ROWID RA_CUST_TRX_LINE_GL_DIST_ALL (cr=4119 pr=67 pw=0 time=0.54)
    |         1027 ............INDEX RANGE SCAN RA_CUST_TRX_LINE_GL_DIST_N1 (cr=3092 pr=31 pw=0 time=0.20)
    |         3237 ..........TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS_B (cr=10640 pr=2668 pw=0 time=15.35)
    |         3237 ...........INDEX RANGE SCAN MTL_SYSTEM_ITEMS_B_U1 (cr=2062 pr=40 pw=0 time=0.33)
    |         3237 .........TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL (cr=9804 pr=84 pw=0 time=0.77)
    |         3237 ..........INDEX UNIQUE SCAN OE_ORDER_LINES_U1 (cr=6476 pr=47 pw=0 time=0.43)
    |         3237 ........TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=3239 pr=0 pw=0 time=0.04)
    |         3237 .........INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=2 pr=0 pw=0 time=0.01)
    |          741 .......TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=6476 pr=0 pw=0 time=0.10)
    |         6474 ........INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=3239 pr=0 pw=0 time=0.03)
    Please help.
    Regards
    Ashish

    |    254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
    |    254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
    There is no way the optimizer should choose to process that many rows using nested loops.
    Either the statistics are not up to date, the data values are skewed, or you have some optimizer parameter set to a non-default value that forces index access.
    Please post explain plan and optimizer* parameter settings.
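    For reference, gathering fresh statistics and checking the optimizer parameters might look like the sketch below; the owner and table name mirror the query above and should be adjusted to your environment.

    -- Refresh optimizer statistics on one of the tables in the query;
    -- repeat for the other tables involved
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'APPS',
        tabname          => 'OE_ORDER_LINES_ALL',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);  -- also gathers index statistics
    END;
    /

    -- List optimizer-related parameters to spot non-default settings
    SELECT name, value, isdefault
    FROM   v$parameter
    WHERE  name LIKE 'optimizer%';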

  • Building a new Cube Vs Restricted Key figure in Query - Performance issue

    Hi,
    I have a requirement to create an OPEX restricted key figure in a query. The problem is that the key figure has to be restricted to about 30 GL accounts and almost 300 cost centers.
    I do not know whether this might cause a performance issue in the query. At the moment I am thinking of creating a new OPEX cube, loading only those 30 GL accounts, 300 cost centers, and the amount, and including the OPEX cube in the MultiProvider in order to get the OPEX amount in the report.
    What's the best solution: creating an OPEX restricted key figure or an OPEX cube?
    thanks,
    Bhat

    I think you should go for the cube, as restricted key figures are calculated at OLAP runtime, so they will definitely affect query performance. There are a lot of cost centers to restrict on, so at query runtime it will take a lot of time to fetch the data from the InfoProvider. It is better to create a cube with the restrictions and include it in the MultiProvider; it will definitely save a lot of time during query execution. A sketch of the difference follows.
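    A minimal relational sketch of the difference (table and column names invented for illustration): the restricted key figure applies its filter to the full data set every time the query runs, while the dedicated cube materializes the filtered subset once at load time.

    -- Stand-in for the underlying InfoProvider
    CREATE TABLE gl_fact (
      gl_account  VARCHAR(10),
      cost_center VARCHAR(10),
      amount      DECIMAL(17,2)
    );

    -- Restricted-key-figure style: the ~30 account and ~300 cost
    -- center restrictions are evaluated at every query execution
    SELECT SUM(amount) AS opex
    FROM   gl_fact
    WHERE  gl_account  IN ('400100', '400200' /* ...more accounts... */)
    AND    cost_center IN ('CC1001', 'CC1002' /* ...more centers... */);

    -- Dedicated-cube style: materialize the filtered subset once
    -- during data load, then let queries read the much smaller table
    CREATE TABLE opex_cube AS
    SELECT gl_account, cost_center, SUM(amount) AS amount
    FROM   gl_fact
    WHERE  gl_account  IN ('400100', '400200' /* ...more accounts... */)
    AND    cost_center IN ('CC1001', 'CC1002' /* ...more centers... */)
    GROUP  BY gl_account, cost_center;

    SELECT SUM(amount) AS opex FROM opex_cube;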

  • Issue with query performance

    Hi,
    I have a MultiProvider with 2 cubes, 3 ODS objects, and 1 InfoObject.
    Queries are built on top of this MultiProvider. All the queries are in 3.5.
    It takes more than 30 minutes to execute each query.
    Now we are planning to rebuild all the 3.5 queries in 7.0.
    We will build a cube on top of the 3 ODS objects. The new MultiProvider will have 3 cubes and 1 InfoObject.
    My issue is that the other 2 cubes are too large; we have around 4 million records in each.
    And I need only a few fields from these cubes.
    Is there any way I can put less load on the MultiProvider?
    What would be the best approach to improve query performance?
    Thanks,
    Gowri

    Hello Gowri,
    I think you should take the help of Aggregates.
    You may create aggregates on the 2 large cubes, using the characteristics
    that you are using in the query.
    Since those cubes contain a large amount of data, the use of aggregates will considerably
    reduce the Data Manager time, i.e. the time the query spends retrieving data from the InfoProvider.
    For more details on aggregates please refer to the following link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7d/eb683cc5e8ca68e10000000a114084/content.htm
    Thanx and regards
    Priyanka

  • Query Performance issue in Oracle Forms

    Hi All,
    I am using oracle 9i DB and forms 6i.
    In the query form, the query takes a long time to load the data into the form.
    There are two tables used here.
    One table (A) contains 5 crore records; the other table (B) has 2 crore records.
    The fetch range is 1-500 records.
    Table A had no index on the main columns; after creating an index on the main columns of table A, the query fetched the data quickly.
    But the DBA team doesn't want to create the index on table A because of a tablespace problem.
    If we create the index on the main table (A), then there is performance overhead in production.
    The concurrent user capacity is 1500.
    Are there any alternative methods to handle this problem?
    Regards,
    RS

    1) What is a crore? Wikipedia indicates that it's 10,000,000.
    http://en.wikipedia.org/wiki/Crore
    I'll assume that we're talking about tables with 50 million and 20 million rows, respectively.
    2) Large tables with no indexes are definitely going to be slow. If you don't have the disk space to create an appropriate index, surely the right answer is to throw a bit of disk into the system.
    3) I don't understand the comment "If create the index on main table (A) ,then performance overhead in production." That seems to contradict the comment you made earlier that the query performs well when you add the index. Are you talking about some other performance overhead?
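    If disk space is genuinely the blocker, one option worth testing is index key compression, which can shrink an index on repetitive columns considerably. A sketch under assumed names (the table, columns, and tablespace are placeholders, not from the original post):

    -- Key-compressed b-tree index on the columns the form queries by;
    -- COMPRESS 1 stores each distinct value of the leading column once
    CREATE INDEX big_table_main_idx
      ON big_table (main_col1, main_col2)
      TABLESPACE idx_ts
      COMPRESS 1;

    -- Compare the space used against an uncompressed build
    SELECT segment_name, bytes / 1024 / 1024 AS mb
    FROM   user_segments
    WHERE  segment_name = 'BIG_TABLE_MAIN_IDX';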
    Justin

  • Question related to query performance

    Hi Experts,
    Could anyone please explain: when BWA indexes or aggregates are created, query performance is high because the query hits the aggregate tables, which we can see in the statistics table RSDDSTAT_DM.
    But now the problem is that, although aggregates are created, sometimes the query hits the 'F' table, and in that case the read time is very high.
    In which cases will the query hit the 'F' table even though aggregates or BWA indexes are present?
    Please explain

    The query will read the data from BWA or aggregates only if the data required by the query definition is present there. If it doesn't find the data in BWA or the aggregates, it will go and read from the InfoCube.
    Example: let's say you have an aggregate that holds data only for company code 1234.
    However, a user runs the query seeking the details of company code 4567. In this case the query reads the data from the F table, even though the aggregate is present.
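    In relational terms (invented table and column names), a fixed-value aggregate is a summary table built under a filter; a query whose selection falls outside that filter cannot be answered from it:

    -- Stand-in for the InfoCube's fact (F) table
    CREATE TABLE sales_fact (
      comp_code VARCHAR(4),
      calmonth  CHAR(6),
      amount    DECIMAL(17,2)
    );

    -- Aggregate built with a fixed value: company code 1234 only
    CREATE TABLE sales_agg_1234 AS
    SELECT calmonth, SUM(amount) AS amount
    FROM   sales_fact
    WHERE  comp_code = '1234'
    GROUP  BY calmonth;

    -- Served by the aggregate: the selection lies inside its filter,
    -- so the small summary table can answer it
    SELECT calmonth, amount FROM sales_agg_1234;

    -- Not served: comp code 4567 is absent from the aggregate, so the
    -- OLAP processor must fall back to the full fact (F) table
    SELECT calmonth, SUM(amount)
    FROM   sales_fact
    WHERE  comp_code = '4567'
    GROUP  BY calmonth;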
    Rgds..
    Shambhu

  • Disappointing query performance with object-relational storage

    Hello,
    after some frustrating days trying to improve query performance on an XMLType table I'm at my wits' end. I have tried all possible combinations of indexes, added scopes, tried out-of-line and inline storage, removed the recursive type definition from the schema, and tried the examples from the forum thread Setting Attribute SQLInline to false for Out-of-Line Storage (having the same problems), and I still have no clue. I have prepared a stripped-down example of my schema which shows the same problems as the real one. I'm using 10.2.0.4.0:
    SQL> select * from v$version;
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    You can find the script at http://www.grmblfrz.de/xmldbproblem.sql (I tried including it here but got an internal server error). The results are at http://www.grmblfrz.de/xmldbtest.lst . I have no idea how to improve the performance (and if query rewrite does not work even with this simple schema, how can Oracle XML DB be feasible for more complex structures?). I must have made a mistake somewhere; hopefully someone can spot it.
    Thanks in advance.
    --Swen

    Marc,
    thanks, I did not know that it is possible to use "varray store as table" for the reference tables. I have tried your example. I can create the nested table, the scope, and the indexes, but I get a different result: a full table scan on t_element. With the original table I get an index scan. On the original table there is a trigger (t_element$xd) which is missing on the new table. I have tried the same with an xmltype table (drop table t_element; create table t_element of xmltype ...) with the same result. My script ... is on [google groups|http://groups.google.com/group/oracle-xmldb-temporary-group/browse_thread/thread/f30c3cf0f3dbcafc] (internal server error while trying to include it here). Here is the plan of the query:
    select rt.object_value
    from t_element rt
    where existsnode(rt.object_value,'/mddbelement/group[attribute[@name="an27"]="99"]') = 1;
    Execution Plan
    Plan hash value: 4104484998
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 40 | 2505 (1)| 00:00:38 |
    | 1 | TABLE ACCESS BY INDEX ROWID | NT_GROUP | 1 | 20 | 3 (0)| 00:00:01 |
    |* 2 | INDEX RANGE SCAN | SYS_C0082879 | 1 | | 2 (0)| 00:00:01 |
    |* 3 | FILTER | | | | | |
    | 4 | TABLE ACCESS FULL | T_ELEMENT | 1000 | 40000 | 4 (0)| 00:00:01 |
    | 5 | NESTED LOOPS SEMI | | 1 | 88 | 5 (0)| 00:00:01 |
    | 6 | NESTED LOOPS | | 1 | 59 | 4 (0)| 00:00:01 |
    | 7 | TABLE ACCESS BY INDEX ROWID| NT_GROUP | 1 | 20 | 3 (0)| 00:00:01 |
    |* 8 | INDEX RANGE SCAN | SYS_C0082879 | 1 | | 2 (0)| 00:00:01 |
    |* 9 | TABLE ACCESS BY INDEX ROWID| T_GROUP | 1 | 39 | 1 (0)| 00:00:01 |
    |* 10 | INDEX UNIQUE SCAN | SYS_C0082878 | 1 | | 0 (0)| 00:00:01 |
    |* 11 | INDEX RANGE SCAN | SYS_IOT_TOP_184789 | 1 | 29 | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("NESTED_TABLE_ID"=:B1)
    3 - filter( EXISTS (SELECT /*+ ???)
    8 - access("NESTED_TABLE_ID"=:B1)
    9 - filter("T_GROUP"."SYS_NC0001300014$" IS NOT NULL AND
    SYS_CHECKACL("ACLOID","OWNERID",xmltype('<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-insta
    nce" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-properties
    /><read-contents/></privilege>'))=1)
    10 - access("SYS_ALIAS_3"."COLUMN_VALUE"="T_GROUP"."SYS_NC_OID$")
    11 - access("NESTED_TABLE_ID"="T_GROUP"."SYS_NC0001300014$")
    filter("SYS_XDBBODY$"='99' AND "NAME"='an27')

  • Query performance vs MySQL

    I have a table with about 500 million records in it stored within both Oracle and MySQL (MyISAM). I have a column (say phone_number) with a standard b-tree index on it in both places. Without data or indexes cached (not enough memory to fully cache the index), I run about 10,000 queries using randomly generated phone numbers as criteria in 10 concurrent threads. It seems that the average time to retrieve a record in MySQL is about 200 milliseconds whereas in Oracle it is about 400 milliseconds. I'm just wondering if MyISAM/MySQL is inherently faster for a basic index search than Oracle is, or should I be able to tune Oracle to get comparable performance.
    Of course the hardware configurations and storage configurations are the same. It's not the absolute time I'm concerned about here but the relative time. Twice as long to perform basically the same query seems concerning. I enabled tracing and it seems like some of the problem may be the recursive calls Oracle is making. Is there some way to optimize this a bit further?
    Realize, I just want to look at query performance right now...ignoring all of the issues (locking, transactional integrity, etc.)
    Thanks,
    Greg

    In Oracle, a standard table is heap-organized. A b-tree index then contains index keys and ROWIDs, so if you need to read a particular row in the table, you first do a few I/O's on the index to get the ROWIDs and then look up the ROWIDs in the table. For any given key, ROWIDs are likely to be scattered throughout the table, so this latter step generally involves multiple scattered I/O's.
    You can create an index-organized table or a hash cluster in Oracle in order to minimize the cost of this particular sort of lookup by clustering data with the same key physically near each other and, in the case of IOTs, potentially eliminating the need to store the index and table separately. Of course, there are costs to doing this in that inserts are now more expensive and secondary indexes are likely to be less useful. That probably gets you closer to what MySQL is doing if, as ajallen indicates, a MySQL table is generally stored more like an IOT than a heap-organized table.
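    For illustration, here is a minimal sketch of the two layouts (hypothetical table and column names; a hash cluster would be declared differently):

    -- Heap table plus secondary index: a lookup does a few index I/Os
    -- to collect ROWIDs, then scattered table I/Os to fetch the rows
    CREATE TABLE phone_heap (
      phone_number VARCHAR2(15),
      subscriber   VARCHAR2(40)
    );
    CREATE INDEX phone_heap_idx ON phone_heap (phone_number);

    -- Index-organized table: the rows live inside the b-tree itself,
    -- clustered by key, so a lookup by phone_number touches only the
    -- index structure
    CREATE TABLE phone_iot (
      phone_number VARCHAR2(15),
      subscriber   VARCHAR2(40),
      CONSTRAINT phone_iot_pk PRIMARY KEY (phone_number, subscriber)
    ) ORGANIZATION INDEX;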
    If you get really creative, you could even partition this single table to potentially improve performance further.
    Of course, if you're only storing one table, I'm not sure that you could really justify the cost of an Oracle license. This may well be the case where MySQL is more than sufficient for what this particular customer needs (knowing nothing, of course, about the other requirements for the system).
    Justin

  • How to improve query performance using infoset

    I created one InfoSet that includes 4 characteristics and 3 DSOs, all of which are time-dependent. When the query runs, the system shows very poor performance; sometimes no data shows up in BEx Analyzer at all. In that case I have to close BEx Analyzer and open it again, after which it shows the real results. It seems very strange. Does anybody have experience with InfoSet performance improvement? Please share, thanks!

    Hi
    As an InfoSet itself doesn't hold any data, performance largely depends on the underlying InfoProviders.
    Also go through the tips below.
    Find the query runtime. Where to find the query runtime:
    SAP Note 557870 - FAQ: BW Query Performance
    SAP Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips
    Using aggregates and compression.
    Using fewer and less complex cell definitions if possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs where possible.
    3. Avoid many characteristics in the rows.
    By using T-codes ST03 or ST03N
    Go to transaction ST03 > switch to expert mode > in the left-side menu, under the system load history and distribution for a particular day, check the query execution time.
    Statistical Records Part 4: How to read ST03N datasets from DB in NW2004
    How to read ST03N datasets from DB
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run the Reporting Agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the results faster from the OLAP cache.
    Also try
    1. Use the different views in ST03 to see two important figures: the aggregation ratio, and the records transferred to the front end versus the records selected from the database.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the cube's performance metric, measure query runtime.
    3. To check the performance of the aggregates, see the Valuation and Usage columns.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The valuation is displayed as a string of plus or minus signs. The more plus signs, the better the evaluation of the aggregate: it is useful and satisfies many queries. The more minus signs, the worse the evaluation. A valuation of "-----" means the aggregate is pure overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    The Usage column shows how often the aggregate has actually been used by queries.
    Together, these two columns let you judge the performance of an aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X, or H); the advisable read mode is X.
    5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
    Implement the BW Statistics Business Content: you need to install it, feed it data, and then analyze it through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to transaction DB20, which gives you performance-related information like:
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using the aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless by checking the tables RSDDSTATAGGRDEF*.
    Run the query in RSRT with statistics switched on, execute, and go back; you will get a STATUID. Copy this and look it up in the table.
    This shows exactly which InfoObjects the query hits; if any one of those objects is missing from the aggregate, the aggregate is useless for that query.
    6. Check table RSDDAGGRDIR in SE11. You can find the last call-up of the aggregate in that table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM
