Performance of Query Execution

Hi All,
I am curious to know how execution will happen for the scenario below, and which query would be faster.
Scenario: I have one table X with 50 columns and another table Y with 4 columns. Both tables contain the same number of records (1 million). Say I try to retrieve the data from the tables with one of the queries below:
"Select Col1,Col2,Col3,Col4 From Table X" or "Select Col1,Col2,Col3,Col4 From Table Y"
Will there be any difference in execution time?
Regards, Suraj Fegade (Microsoft Dynamics CRM)

If you do not have a WHERE condition, I doubt you will see a big difference. One thing to remember: do not use SELECT *.
Best Regards, Uri Dimant SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence
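Whether you see a difference on a full scan comes down mostly to row width: the scan reads whole pages, and the 50-column table packs fewer rows per page than the 4-column one. A quick way to check the effect is to time both scans yourself. The sketch below uses SQLite from Python purely as an illustration (hypothetical tables, a scaled-down row count, and a different engine than SQL Server, so treat the numbers as indicative only):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical stand-ins for table X (50 columns) and table Y (4 columns).
cur.execute("CREATE TABLE X (%s)" % ", ".join(f"Col{i} TEXT" for i in range(1, 51)))
cur.execute("CREATE TABLE Y (Col1 TEXT, Col2 TEXT, Col3 TEXT, Col4 TEXT)")

rows = 100_000  # scaled down from 1 million so the demo runs quickly
wide_row = tuple(f"value{i}" for i in range(1, 51))
cur.executemany("INSERT INTO X VALUES (%s)" % ",".join(["?"] * 50),
                (wide_row for _ in range(rows)))
cur.executemany("INSERT INTO Y VALUES (?,?,?,?)",
                (wide_row[:4] for _ in range(rows)))
conn.commit()

def scan_seconds(table):
    """Time a 4-column full scan of the given table."""
    start = time.perf_counter()
    result = cur.execute(f"SELECT Col1, Col2, Col3, Col4 FROM {table}").fetchall()
    assert len(result) == rows
    return time.perf_counter() - start

print(f"X (50 cols): {scan_seconds('X'):.3f}s  Y (4 cols): {scan_seconds('Y'):.3f}s")
```

On most systems the narrow table scans noticeably faster; a covering index on the four columns of X would largely close the gap.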

Similar Messages

  • VC 7.0 - Performance during query execution

    Hi Experts,
    we have built a VC cockpit which includes ~35 queries.
    When the user opens the cockpit, the VC model starts all 35 queries.
    When we start the queries in the SAP BI system (RSRT), they each need less than 1 sec.
    Our BI system is able to handle 40 queries simultaneously.
    Our problem is that the cockpit needs ~40 sec until all queries have finished.
    We suppose that VC starts all queries serially instead of in parallel.
    Is there any configuration where I can switch between parallel and serial mode?
    Thanks for your help
    Regards
    Florian

    Hello,
    Using the dedicated connection for nested iViews feature was good thinking.
    But since the execution time of your queries (~1 second) is relatively short compared to the overall time of the "running a query" process, i.e. the HTTP request for execution, creating the connection on the portal side, executing the query, returning the result, and displaying it in the Flex runtime,
    all the other factors in this equation take more time, relatively, than the query execution itself.
    That is the reason why you don't observe major changes between running your 40 queries on a single connection or on multiple connections.
    This feature was intended for queries that run for a long period of time (tens of seconds or minutes);
    with such queries, you will see the difference.
    Mark,
    Visual Composer 7.0 development and maintenance team.
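    The serial-versus-parallel dispatch suspected above is exactly the kind of difference that turns 35 short queries into a ~40-second wait. A minimal sketch (plain Python with a simulated fixed-cost query, not the VC or BI APIs) shows why serial dispatch costs roughly the sum of all runtimes while parallel dispatch costs roughly the longest single one:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_query(i):
    """Simulated back-end query: each takes ~0.05s (stand-in for ~1s in RSRT)."""
    time.sleep(0.05)
    return f"result-{i}"

queries = range(35)

start = time.perf_counter()
serial_results = [run_query(i) for i in queries]   # one after another
serial_time = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=40) as pool:   # backend can handle 40 at once
    parallel_results = list(pool.map(run_query, queries))
parallel_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```

With 0.05s per query, the serial loop takes ~1.75s while the pooled version finishes close to the time of one query.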

  • Asset query execution performance after upgrade from 4.6C to ECC 6.0+EHP4

    Hi, guys
    I have encountered a weird problem with asset query execution performance after upgrading to ECC 6.0.
    Our client migrated their SAP system from 4.6C to ECC 6.0. We tested all transaction codes and the related standard reports and queries.
    Everything is working normally except this asset depreciation query report. It is created on the ANLP, ANLZ, ANLA, ANLB, and ANLC tables; there is also some ABAP code for an additional field.
    This report took about 6 minutes to execute in the 4.6C system; however, it takes 25 minutes in ECC 6.0 with the same selection parameters.
    At first I tried to find some difference in table indexes or structures between 4.6C and ECC 6.0, but there is none.
    I am wondering why the other query reports run normally but only this report runs with excessively long execution times and dump messages, even though we did not make any changes to it.
    your reply is very appreciated
    Regards
    Brian

    Thanks for your replies.
    I checked these notes; unfortunately they cover a situation different from ours.
    Our situation is that all standard asset reports and queries (SQ01) run normally except this query report.
    I executed SE30 for this query (SQ01) on both 4.6C and ECC 6.0.
    I found some differences in the select sequence logic, even though it is the same query without any changes.
    I list them here for your reference.
    4.6C
    AQA0FI==========S2============
    Open Cursor ANLP                                    38,702  39,329,356  = 39,329,356      34.6     AQA0FI==========S2============   DB     Opens
    Fetch ANLP                                         292,177  30,378,351  = 30,378,351      26.7    26.7  AQA0FI==========S2============   DB     OpenS
    Select Single ANLC                                  15,012  19,965,172  = 19,965,172      17.5    17.5  AQA0FI==========S2============   DB     OpenS
    Select Single ANLA                                  13,721  11,754,305  = 11,754,305      10.3    10.3  AQA0FI==========S2============   DB     OpenS
    Select Single ANLZ                                   3,753   3,259,308  =  3,259,308       2.9     2.9  AQA0FI==========S2============   DB     OpenS
    Select Single ANLB                                   3,753   3,069,119  =  3,069,119       2.7     2.7  AQA0FI==========S2============   DB     OpenS
    ECC 6.0
    Perform FUNKTION_AUSFUEHREN     2     358,620,931          355
    Perform COMMAND_QSUB     1     358,620,062          68
    Call Func. RSAQ_SUBMIT_QUERY_REPORT     1     358,569,656          88
    Program AQIWFI==========S2============     2     358,558,488          1,350
    Select Single ANLA     160,306     75,576,052     =     75,576,052
    Open Cursor ANLP     71,136     42,096,314     =     42,096,314
    Select Single ANLC     71,134     38,799,393     =     38,799,393
    Select Single ANLB     61,888     26,007,721     =     26,007,721
    Select Single ANLZ     61,888     24,072,111     =     24,072,111
    Fetch ANLP     234,524     13,510,646     =     13,510,646
    Close Cursor ANLP     71,136     2,017,654     =     2,017,654
    We can see that 4.6C first opens the cursor on ANLP and fetches ANLP, then selects from ANLC, ANLA, ANLZ, ANLB.
    But in ECC 6.0 it changed to first selecting ANLA, then opening the cursor on ANLP, then selecting ANLC, ANLB, ANLZ, and only at the end fetching ANLP.
    Probably this is the real reason why it runs so long in ECC 6.0.
    Were there any changes to the query selection logic (table join handling) in ECC 6.0?

  • Query execution time

    Dear SCN,
    I am new to the BOBJ environment. I have created a Webi report on top of a BEx query using a BICS connection. The BEx query is built for vendor ageing analysis. The BEx query takes very little time to execute the report (max 1 min), but the Webi report takes around 5 min when I click refresh. I have not applied any conditions, filters, or restrictions at the Webi level; all are done at the BEx level only.
    Please let me know techniques to optimize query execution time in Webi. Currently we are on BO 4.0.
    Regards,
    PRK

    Hi Praveen
    Go through this document for performance optimization using BICS connection
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0e3c552-e419-3010-1298-b32e6210b58d?QuickLink=index&…

  • Query execution time estimation....

    Hi All,
    Is it possible to estimate query execution time using explain plan?
    Thanks in advance,
    Santosh.

    The cost estimated by the cost-based optimizer actually represents the time it takes to process the statement, expressed in units of the single-block read time. This means that if you know the estimated time a single-block read request requires, you can translate the cost into an actual time.
    Starting with Oracle 9i this information (the time to perform single block/multi block read requests) is actually available if you gather system statistics.
    And this is what 10g actually does, as it shows an estimated TIME in the explain plan output based on these assumptions. Note that 10g by default uses system statistics, even if they are not explicitly gathered. In this case Oracle 10g uses the NOWORKLOAD statistics generated on the fly at instance startup.
    Of course the time estimates shown by Oracle 10g may not even be close to the actual execution time as it is only an estimate based on a model and input values (statistics) and therefore might be way off due to several reasons, the same applies in principle to the cost shown.
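    The arithmetic behind that translation can be sketched as follows. This is a simplification of what 10g's TIME column does, and the 12 ms SREADTIM is an assumed value for illustration (in a real system it would come from gathered system statistics, e.g. SYS.AUX_STATS$):

```python
import math

# Assumed system statistic: average single-block read time in milliseconds.
sreadtim_ms = 12.0

def estimated_time_seconds(plan_cost):
    """Translate an optimizer cost (expressed in units of single-block reads)
    into a rough elapsed-time estimate."""
    return plan_cost * sreadtim_ms / 1000.0

# Example: a plan step with cost 2500
cost = 2500
secs = estimated_time_seconds(cost)
print(f"cost {cost} ~ {secs:.0f} seconds (~{math.ceil(secs / 60)} min)")
```

As Randolf notes, since both the cost and SREADTIM are model estimates, the resulting time can be far from the actual runtime.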
    Regards,
    Randolf
    Oracle related stuff:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Vendor Code 1317 Query execution was interrupted MySQL

    An Error was encountered performing the requested operation:
    Query execution was interrupted
    Vendor Code 1317
    Is this a network issue?
    A database issue?
    This is intermittent, for a period I can query tables then I try a new query or click on a different table, and the error appears.
    Anyone else experience this?
    Version 2.1.1.64.45
    Java Platform 1.6.0_11

    Hi Guys,
    I couldn't initially replicate.
    But when I downloaded the driver version you are using (mysql-connector-java-5.1.13-bin.jar) it happened straight away.
    Example: select * from information_schema.tables
    second execution > Query execution was interrupted
    I have no issues (with my limited testing) using the documented JDBC driver (mysql-connector-java-5.0.4-bin.jar)
    http://downloads.mysql.com/archives/mysql-connector-java-5.0/mysql-connector-java-5.0.4.zip
    We don't upgrade/test/support the latest version of each JDBC driver, only when we see a benefit.
    This goes for JTDS for SQL Server and Sybase and the other third party JDBC drivers.
    Appreciate that this is not easy to find or obvious.
    Here's the list of JDBC versions.
    http://download.oracle.com/docs/cd/E15846_01/doc.21/e15222/intro.htm#CHDIEGDD
    Hope this helps.
    Dermot
    SQL Developer Team.

  • How can I reduce BEx Query execution time

    Hi,
    I have a question regarding query execution time in BEx.
    I have a query that takes 45 mins to 1 hour to execute in BEx analyser. This query is run on a daily basis and hence I am keen to reduce the execution time.  Are there any programs or function modules that can help in reducing query execution time?
    Thanks and Regards!

    Hi Sriprakash,
    1. Check that your cube is performance-tuned: in the cube's Manage screen from RSA1 / Performance tab, check that all indexes and statistics are green. Aggregate indexes should be as well.
    2. Condense your cubes regularly.
    3. Evaluate the creation of an aggregate with all characteristics used in the query (RSDDV).
    4. Evaluate the creation of a "change run aggregate": based on a standalone navigational attribute (without its basic characteristic in the aggregate), but pay attention to the consequent change run when loading master data.
    5. Partition (physically) your cubes systematically when possible (RSDCUBE, menu, partitioning).
    6. Consider logical partitioning (by year or comp_code or ...) and make use of MultiProviders in order to keep targets from getting too big.
    7. Consider creating secondary indexes when reporting on an ODS (RSDODS).
    8. Check in tx ST03N whether the query runtime is due to the master data read, the InfoProvider itself, the OLAP processor, and/or any other cause.
    9. Consider improving your master data reads by creating customized indexes (BITMAP if possible, depending on your data) on master data tables and/or attribute SIDs when using navigational attributes.
    10. Check that your Basis team did a good job and applied the proper DB parameters.
    11. Last but not least: fine-tune your data model precisely.
    Hope this gives you an idea.
    Cheers
    Sunil

  • Performance of query , What if db_recycle_cache_size is zero

    Hi
    in our 11g database, we have some objects for which the default buffer pool is the recycle pool. But I observed that the recycle pool size is zero (db_recycle_cache_size = 0).
    Now if we issue a SQL statement that needs to access these objects, what happens? I strongly think that, as there is no recycle bin, it will go into the traditional default buffer cache and obey the normal LRU algorithm. Am I missing something here?
    The issue we face is that we have a query which picks the correct index but takes around 3 minutes. I see that the step which takes the most time is an index range scan which fetches around 50k records and takes 95% of the whole query execution time. Then I observed that the index is configured to have the recycle pool as its default pool. If I rerun the same query again and again, execution times are close to zero (no wonder: no physical reads in subsequent executions).
    I am thinking of setting up the recycle pool. What else might I need to consider in tuning this query?
    Thanks and Regards
    Pram

    >
    Now if we issue a SQL statement that needs to access these objects, what happens? I strongly think that, as there is no recycle bin, it will go into the traditional default buffer cache and obey the normal LRU algorithm. Am I missing something here?
    >
    Recycle bin? What does that have to do with anything?
    You are correct - with no keep or recycle cache the default buffer cache and LRU are used for aging.
    >
    I am thinking of setting up recycel pool .
    >
    Why - it doesn't sound like you know the purpose of the recycle pool. See '7.2.4 Considering Multiple Buffer Pools' in the Performance Tuning Guide
    http://docs.oracle.com/cd/B28359_01/server.111/b28274/memory.htm
    >
    With segments that have atypical access patterns, store blocks from those segments in two different buffer pools: the KEEP pool and the RECYCLE pool. A segment's access pattern may be atypical if it is constantly accessed (that is, hot) or infrequently accessed (for example, a large segment accessed by a batch job only once a day).
    Multiple buffer pools let you address these differences. You can use a KEEP buffer pool to maintain frequently accessed segments in the buffer cache, and a RECYCLE buffer pool to prevent objects from consuming unnecessary space in the cache. When an object is associated with a cache, all blocks from that object are placed in that cache. Oracle maintains a DEFAULT buffer pool for objects that have not been assigned to a specific buffer pool. The default buffer pool is of size DB_CACHE_SIZE. Each buffer pool uses the same LRU replacement policy (for example, if the KEEP pool is not large enough to store all of the segments allocated to it, then the oldest blocks age out of the cache).
    By allocating objects to appropriate buffer pools, you can:
    •Reduce or eliminate I/Os
    •Isolate or limit an object to a separate cache
    >
    Using a recycle pool isn't going to affect that initial 3-minute time. It would only keep things from being aged out of the default cache when the index blocks are loaded.
    Further, using a recycle pool could cause those 'close to zero' times for the second and third accesses to increase, if the index blocks from the first query were 'recycled' so another query could use the buffers. Recycle means: throw it away when you are done, if need be.
    >
    What else i may need to consider tuning this query ?
    >
    If it ain't broke, don't fix it. You haven't shown that there is anything wrong with the query you are talking about. How could we possibly know whether 3 minutes is really slow or really fast? You haven't posted the query, an execution plan, row counts for the tables, or counts for the filter predicates.
    See this AskTom article for his take on the RECYCLE and other pools.
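    The KEEP/RECYCLE behavior quoted above can be illustrated with a toy LRU cache. This is plain Python, nothing Oracle-specific, and the pool sizes are made up: the point is only that segments assigned to a small recycle pool age out quickly without pushing hot blocks out of the default pool.

```python
from collections import OrderedDict

class LRUPool:
    """Toy buffer pool with LRU replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)       # touched again: now most recent
        else:
            self.blocks[block] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # age out the oldest block

default_pool = LRUPool(capacity=100)   # hot, frequently reused blocks
recycle_pool = LRUPool(capacity=4)     # small pool for one-shot scans

hot_blocks = [f"hot-{i}" for i in range(50)]
for b in hot_blocks:
    default_pool.access(b)

# A big one-off index range scan goes to the recycle pool,
# so it cannot push the hot blocks out of the default cache.
for i in range(1000):
    recycle_pool.access(f"scan-{i}")

print("hot blocks still cached:",
      all(b in default_pool.blocks for b in hot_blocks))
print("recycle pool kept only", len(recycle_pool.blocks), "of 1000 scan blocks")
```

This also shows the downside raised above: if the 50k-row index scan lives in a tiny recycle pool, its blocks are gone by the next execution, so the 'close to zero' repeat timings would disappear.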

  • Order of Query Execution in Workbook

    I have a workbook with 3 queries.  I have some macro code that moves results around and does formatting.  The code is dependent on the order in which the queries are executed.  I had assumed that the queries are executed in the order they are listed as data providers.
    For most of my users, the workbook works fine.  But for some, the queries execute in a different order.  Is there some way to force the order of query execution?
    Thanks,
    Dave Paz

    TopLink uses a deferred transaction model implemented through its UnitOfWork. This means all changes (creation, modify/ removal) are tracked and during the transaction commit phase we perform these operations ordered based on your referential integrity rules (FK constraints).
    If you are using TopLink Essentials (JPA) you have the option to issue flush on your entity-manager after the persist call to force the insert to occur prior to your next read.
    In Oracle TopLink the only way to do this is to use a DatabaseSession which offers additional calls controlling the transaction directly as well as support insert/update/remove operations without the UnitOfWork. This approach supports these additional calls because it uses a single connection and does not have a multi-client shared cache. This is much closer to a pure data access style versus the rich multi-client persistence infrastructure most commonly used.
    Doug

  • Improving performance of query with View

    Hi ,
    I'm working on a stored procedure where certain records have to be eliminated; unfortunately, the tables involved in this exception query are in a different database, which will lead to a performance issue. Is there any way in SQL Server to store this query in a view, store its execution plan, and make it work like a stored procedure? I believe it's kind of a crazy thought, but is there any better way to improve the performance of a query when it accesses data across databases?
    Thanks,
    Vishal.

    Do not try to solve problems that you have not yet confirmed to exist. There is no general reason why a query (regardless of whether it involves a view) that refers to a table in a different database (NB: DATABASE, not INSTANCE) will perform poorly.
    As a suggestion, write a working query using a duplicate of the table in the current database. Once it is working, then worry about performance. Once it is working as efficiently as it can, change the query to use the "remote" table rather than the duplicate. Then determine whether you have an issue. If you cannot get the level of performance you desire with a local table, then you most likely have a much larger issue to address. In that case, perhaps you need to change your perspective and approach to accomplishing your goal.

  • 0BWTC_C02 - Viewing number of query executions and refreshes

    Hi,
    I have been testing some queries I created for InfoCube 0BWTC_C02. Is there any way to limit the key figure 0TCTNAVCTR to only show navigations that were query refreshes (e.g. ignoring navigations such as removing/adding a drilldown, results row, etc.)? Do any of the available characteristics show what type of navigation was being performed, so that I could restrict the query to only look at certain types of navigations?
    Thanks,
    Andy

    Hi Pcrao,
    I am using the BW statistics infocube 0BWTC_C02.
    But as far as I can see, the key figure 0TCTNAVCTR (number of navigation steps) not only includes query executions but also all other navigation steps like drill-downs etc.
    Can we apply some restriction on navigation type etc. to find just the number of times query execution took place?
    Thanks,
    James

  • Methods to reduce query execution time

    hi experts
    Can anybody suggest the steps/methods to reduce the time taken for query execution?
    Thanks and regards
    Pradeep

    Hi Pradeep,
    I think you have already posted a similar thread:
    query and load performance steps
    Anyway, also check these notes:
    SAP Note 557870: 'FAQ BW Query Performance'
    SAP Note 567746: 'Composite note BW 3.x performance Query and Web'
    How to design a good query:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    Also check this:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    Business Intelligence Performance Tuning [original link is broken]
    Query Performance Improvement Tools
    Regards,
    Debjani........

  • LIKE causing delay in query execution

    Hi,
    I have this query that takes about 20 mins to execute. Is there a way to tune it, especially the LIKE clauses in the WHERE conditions?
    SELECT product_id, territory_code, global_target_id,
           DECODE (service_type, 'SERVICE', 'Y', 'N') service_type,
           equipment_deployment, created_date, updated_date,
           fiscal_period_id, customer_id,
           SUM (bookings_amount) bookings_amount, ship_to_country,
           alliance_id
      FROM sp_refresh_incre_vw2@cdw
     WHERE fiscal_period_id = in_chr_fiscal_period_id
       AND (ec_name LIKE in_chr_search_string           --- culprit
            AND NOT st_name LIKE in_chr_search_string   --- culprit
            AND NOT bt_name LIKE in_chr_search_string)  --- culprit
     GROUP BY fiscal_period_id, product_id, territory_code, ship_to_country,
              customer_id, global_target_id, service_type, equipment_deployment
    HAVING SUM (bookings_amount) <> 0
    UNION
    SELECT product_id, territory_code, global_target_id,
           DECODE (service_type, 'SERVICE', 'Y', 'N') service_type,
           equipment_deployment, created_date, updated_date,
           fiscal_period_id, customer_id,
           SUM (bookings_amount) bookings_amount, ship_to_country,
           alliance_id
      FROM sp_refresh_incre_vw2@cdw
     WHERE fiscal_period_id = in_chr_fiscal_period_id
       AND bt_name LIKE in_chr_search_string            --- culprit
     GROUP BY fiscal_period_id, product_id, territory_code, ship_to_country,
              customer_id, global_target_id, service_type, equipment_deployment
    HAVING SUM (bookings_amount) <> 0
    UNION
    SELECT product_id, territory_code, global_target_id,
           DECODE (service_type, 'SERVICE', 'Y', 'N') service_type,
           equipment_deployment, created_date, updated_date,
           fiscal_period_id, customer_id,
           SUM (bookings_amount) bookings_amount, ship_to_country,
           alliance_id
      FROM sp_refresh_incre_vw2@cdw
     WHERE fiscal_period_id = in_chr_fiscal_period_id
       AND (st_name LIKE in_chr_search_string           --- culprit
            AND NOT bt_name LIKE in_chr_search_string)  --- culprit
     GROUP BY fiscal_period_id, product_id, territory_code, ship_to_country,
              customer_id, global_target_id, service_type, equipment_deployment
    HAVING SUM (bookings_amount) <> 0;
    After some investigation, I have noted that it is the LIKE clauses causing the delay. The three branches are exactly alike EXCEPT for the LIKE clauses, which I have marked with "--- culprit".
    Any alternative ideas to achieve a similar result while avoiding the LIKE clauses will be very much appreciated.
    Thanks and regards,
    Ambili
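    One common reason LIKE predicates get expensive is that a pattern with a leading wildcard cannot use a normal B-tree index and forces a full scan, while a prefix pattern can range-scan the index. The sketch below demonstrates this with SQLite from Python (hypothetical table and column names; SQLite's planner, not Oracle's, but the principle carries over):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("PRAGMA case_sensitive_like = ON")  # lets the index serve LIKE prefixes
cur.execute("CREATE TABLE bookings (ec_name TEXT, bookings_amount REAL)")
cur.executemany("INSERT INTO bookings VALUES (?, ?)",
                ((f"name{i:06d}", float(i)) for i in range(10_000)))
cur.execute("CREATE INDEX ix_ec_name ON bookings (ec_name)")

def plan(pattern):
    """Return SQLite's query-plan detail for a LIKE filter on ec_name."""
    rows = cur.execute(
        f"EXPLAIN QUERY PLAN SELECT * FROM bookings WHERE ec_name LIKE '{pattern}'"
    ).fetchall()
    return rows[0][-1]   # human-readable plan text

print(plan("name0001%"))  # prefix pattern: index range scan
print(plan("%0001%"))     # leading wildcard: full table scan
```

If in_chr_search_string usually has a fixed prefix, an index on the name columns may help; separately, note that the three UNION branches each pull from sp_refresh_incre_vw2@cdw over the database link, so collapsing them into a single pass is another avenue worth testing.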

    Hi,
    BI Accelerator: the BI Accelerator is separate hardware which, when attached to the BW server, creates an index on the data of an InfoCube and stores it in compressed format. Different technologies are used for compression and horizontal partitioning of the data. The BI Accelerator can benefit businesses that have high volumes of data.
    This link explains the BI Accelerator better:
    What is BI Accelerator
    Coming to query performance: using the BI Accelerator, query performance is highly improved.
    There is no single ideal query execution time.
    The query execution time depends on various factors; to list some of them:
    1) Complexity of the query (more complex, more execution time)
    2) Object on which the query is built (if it is built on a cube, performance will be better; on a DSO, performance is poorer)
    3) Building indexes on the cube increases performance
    4) Data volume in the report
    But yes, we can look at the stats of query execution using ST03 and ST03N.
    Hope the above answer helps you.
    Please let me know if you need any more clarification.

  • Query execution not traced in ST03N

    Dear performance gurus,
    we want to perform some query performance checks in our BW 3.5 system. To measure the impact of aggregates and eliminate the OLAP cache effect on runtime, we execute the queries from the query monitor (RSRT). With "Execute and Debug" we can decide for each execution whether we want to use aggregates, and we can switch off OLAP cache usage.
    For whatever reason, these executions are not shown in ST03N (expert mode). If we run the same queries from BEx Analyzer, the runtime can be analyzed, but in BEx we don't have the necessary options regarding aggregates and the OLAP cache.
    Are you aware of any settings in RSRT or ST03N to get the executions from the query monitor also displayed in ST03N?
    Thanks in advance
    Thomas

    While running the report, the user is facing the below error:
    "query execution was not successful"
    This is a generic error which is reported in a few docs:
    Discoverer Viewer fails Moving Pivot Table Columns: Query Execution was not Successful [ID 948027.1]
    Query Execution Not Successful Error In Viewer, Runs In Desktop And Plus [ID 404974.1]
    Discoverer 10g (10.1.2.3) Plus/Viewer Cumulative Patch 7 (9112482) Readme For Linux/Unix [ID 821844.1]
    Launching A Worksheet In Discoverer 10g / 11g Plus/Viewer On Linux Fails With 'Contact with backend server lost' [ID 871012.1]
    Discoverer 10g (10.1.2.3) Plus/Viewer Cumulative Patch 4 (7595032) Readme For Windows [ID 822183.1]
    Discoverer 10g (10.1.2.3) Plus/Viewer Cumulative Patch 3 (7319096) Readme For Linux/Unix [ID 761997.1]
    Running Some Reports In Discoverer Plus/Viewer 10.1.2.2 Are Failing With "An error occurred while attempting to perform the operation. The operation did not complete successfully." [ID 733603.1]
    Discoverer Viewer 10g (10.1.2.3) Passes A Blank Parameter As 'NULL' When Using A 'Drill to Link' [ID 820003.1]
    Query Execution was not Successful Error When Running a Workbook [ID 550684.1]
    - ORA-01722 Invalid number. Please check the data type of the column or the data type of the records in the table/view and make sure it matches the data type of the column.
    Thanks,
    Hussein

  • Invisible index getting accessed during query execution

    Hello Guys,
    There is a strange problem I am encountering. I am working on tuning the performance of one of the concurrent requests in our 11i ERP system, which runs on database 11.1.0.7.
    I had enabled an oradebug trace for the request and generated tkprof output from it. For the query below, which is taking time, I found that in the generated trace the wait event is "db file sequential read" on the PO_LINES_N10 index, but in the generated tkprof, for the same query, a full table scan of PO_LINES_ALL is happening, and that table is 600 MB in size.
    Below is the query ,
    ===============
    UPDATE PO_LINES_ALL A
    SET A.VENDOR_PRODUCT_NUM = (SELECT SUPPLIER_ITEM FROM APPS.IRPO_IN_BPAUPDATE_TMP C WHERE BATCH_ID = :B1 AND PROCESSED_FLAG = 'P' AND ACTION = 'UPDATE' AND C.LINE_ID =A.PO_LINE_ID AND ROWNUM = 1 AND SUPPLIER_ITEM IS NOT NULL),
    LAST_UPDATE_DATE = SYSDATE
    ===============
    Index PO_LINES_N10 is on the column LAST_UPDATE_DATE; logically, for such a query, the index should not be used, as that indexed column is not in the select / where clause.
    Also, why is there a discrepancy between the tkprof and the trace generated for the same query?
    So I decided to make the index PO_LINES_N10 INVISIBLE, but that index is still being accessed in the trace file.
    I have also checked the parameter below, which is false, so the optimizer should not make use of invisible indexes during query execution.
    SQL> show parameter invisible
    NAME TYPE VALUE
    optimizer_use_invisible_indexes boolean FALSE
    Any clue regarding this .
    Thanks and Regards,
    Prasad
    Edited by: Prasad on Jun 15, 2011 4:39 AM

    Hi Dom,
    Sorry for the late reply, but yes, an update statement maintains that index even if it's invisible.
    Also, it seems the performance issue started appearing when this index was created, so I have now dropped the index in the test environment and ran the concurrent program again with an oradebug level 12 trace enabled, and found a bit of improvement in the results:
    With the index dropped -> 24 records/min processed
    With the index -> 14 records/min processed
    So I am looking forward to going without this index in production too, but before that I have concerns regarding the tkprof output. Can we further improve the performance of this query?
    Please find the below tkprof with and without index .
    ====================
    Sql statement
    ====================
    UPDATE PO_LINES_ALL A SET A.VENDOR_PRODUCT_NUM = (SELECT SUPPLIER_ITEM FROM
    APPS.IRPO_IN_BPAUPDATE_TMP C
    WHERE
    BATCH_ID = :B1 AND PROCESSED_FLAG = 'P' AND ACTION = 'UPDATE' AND C.LINE_ID =
    A.PO_LINE_ID AND ROWNUM = 1 AND SUPPLIER_ITEM IS NOT NULL),
    LAST_UPDATE_DATE = SYSDATE
    =========================
    TKPROF with Index for the above query ( processed 643 records )
    =========================
    call         count        cpu    elapsed       disk       query    current       rows
    Parse            1       0.00       0.00          0           0          0          0
    Execute          1    2499.64    2511.99      98158   645561632   13105579    1812777
    Fetch            0       0.00       0.00          0           0          0          0
    total            2    2499.64    2511.99      98158   645561632   13105579    1812777
    =============================
    TKPROF without Index for the above query ( processed 4452 records )
    =============================
    call         count        cpu    elapsed       disk       query    current       rows
    Parse            1       0.00       0.00          0           0          0          0
    Execute          1   10746.96   10544.13      84125  3079376156    1870058    1816289
    Fetch            0       0.00       0.00          0           0          0          0
    total            2   10746.96   10544.13      84125  3079376156    1870058    1816289
    =============================
    Explain plan which is same in both the cases
    =============================
    Rows Row Source Operation
    0 UPDATE PO_LINES_ALL (cr=3079377095 pr=84127 pw=0 time=0 us)
    1816289 TABLE ACCESS FULL PO_LINES_ALL (cr=83175 pr=83026 pw=0 time=117690 us cost=11151 size=29060624 card=1816289)
    0 COUNT STOPKEY (cr=3079292918 pr=20 pw=0 time=0 us)
    0 TABLE ACCESS BY INDEX ROWID IRPO_IN_BPAUPDATE_TMP (cr=3079292918 pr=20 pw=0 time=0 us cost=4 size=22 card=1)
    180368800 INDEX RANGE SCAN IRPO_IN_BPAUPDATE_N1 (cr=51539155 pr=3 pw=0 time=16090005 us cost=3 size=0 card=1)(object id 372721)
    There is a large increase in CPU, so I would like to tune this query further. I have run a SQL Tuning task but didn't get any recommendations for it.
    Since in the trace I got "db file scattered read" wait events for the table PO_LINES_ALL, but the disk reads are not high, I am not sure of the performance improvement even if I pin this table (620 MB in size; is it feasible to pin it, with a 5 GB SGA and sga_target set?) in the shared pool.
    I have already gathered stats for the concerned tables and rebuilt the indexes.
    Is there anything else that can be done to tune this query further and bring down the CPU and the time taken to execute?
    Thanks a lot for your reply.
    Thanks and Regards,
    Prasad
    Edited by: Prasad on Jun 28, 2011 3:52 AM
    Edited by: Prasad on Jun 28, 2011 3:54 AM
    Edited by: Prasad on Jun 28, 2011 3:56 AM
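    The behavior confirmed in this thread, that DML still maintains an index even when it is invisible, is expected: invisibility only hides the index from the optimizer, not from index maintenance. The maintenance overhead itself is easy to see with a toy SQLite sketch (a hypothetical table standing in for PO_LINES_ALL, not the e-Business Suite schema; SQLite has no invisible indexes, so this only illustrates the update-cost side):

```python
import sqlite3
import time

def timed_update(with_index):
    """Time an UPDATE of every row, with or without an index on the updated column."""
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE po_lines (id INTEGER PRIMARY KEY, last_update_date REAL)")
    cur.executemany("INSERT INTO po_lines VALUES (?, ?)",
                    ((i, 0.0) for i in range(50_000)))
    if with_index:
        # Analogous to PO_LINES_N10: an index on the very column being updated,
        # so every row change also rewrites an index entry.
        cur.execute("CREATE INDEX po_lines_n10 ON po_lines (last_update_date)")
    conn.commit()
    start = time.perf_counter()
    cur.execute("UPDATE po_lines SET last_update_date = 1.0")
    conn.commit()
    return time.perf_counter() - start

t_with = timed_update(True)
t_without = timed_update(False)
print(f"update with index: {t_with:.3f}s, without: {t_without:.3f}s")
```

This mirrors the 14 vs 24 records/min observation above: dropping the index removes the per-row index maintenance from the update.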
