Query plan shows large amount of time

I ran EXPLAIN PLAN for the query below, and it shows a large amount of time:
update docs d
     set d.mismatch = 'Y'
     where exists (select 1 from diff a where a.versions > 1 and a.ed_id = d.ed_id)
diff is a view with 1,069,493 rows.
docs is a table with 1,527,012 rows.
Any ideas on improving performance? Please!

I'm running 10g. Please see the plan output below:
PLAN_TABLE_OUTPUT
Plan hash value: 2669996443
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 76351 | 3131K| | 115K (2)| 00:39:49 |
| 1 | UPDATE | DOC | | | | | |
|* 2 | HASH JOIN | | 76351 | 3131K| | 115K (2)| 00:39:49 |
| 3 | SORT UNIQUE | | 53475 | 1201K| | 80731 (1)| 00:27:49 |
| 4 | VIEW | DIFF | 53475 | 1201K| | 80731 (1)| 00:27:49 |
|* 5 | FILTER | | | | | | |
| 6 | SORT GROUP BY | | 53475 | 1566K| | 80731 (1)| 00:27:49 |
| 7 | VIEW | | 1527K| 43M| | 80731 (1)| 00:27:49 |
| 8 | SORT GROUP BY | | 1527K| 249M| 581M| 80731 (1)| 00:27:49 |
| 9 | TABLE ACCESS FULL| DOC | 1527K| 249M| | 34009 (1)| 00:11:44 |
| 10 | TABLE ACCESS FULL | DOC | 1527K| 27M| | 34494 (3)| 00:11:54 |
Predicate Information (identified by operation id):
2 - access("A"."EDV_ED_ID"="D"."EDV_ED_ID")
5 - filter(COUNT(*)>1)
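One idea worth testing (an editorial sketch, not from the thread): most of this plan's cost sits in the SORT UNIQUE feeding the hash join. Rewriting the UPDATE ... WHERE EXISTS as a MERGE with an explicitly deduplicated driving set sometimes yields a cheaper plan; the table, view, and column names are taken from the query above, but any benefit is an assumption to verify with EXPLAIN PLAN.
-- Hedged alternative (Oracle 10g syntax): deduplicate diff once, then hash-join
-- it to docs; compare the resulting plan against the one above before adopting.
merge into docs d
using (select distinct ed_id from diff where versions > 1) a
on (d.ed_id = a.ed_id)
when matched then
    update set d.mismatch = 'Y';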

Similar Messages

  • On iOS 7.0.2, every time I unlock my phone while my music is playing, it skips. Forward, backward, and by small and large amounts of time.


    Hi cowboyincognito,
    Thanks for visiting Apple Support Communities.
    If you don't have the option to play music by genre on your iPhone, try this step to find the Genre option:
    Browse your music library.
    You can browse your music by playlist, artist, or other category. For other browse options, tap More. Tap any song to play it.
    To rearrange the tabs in the Music app, tap More, then tap Edit and drag a button onto the one you want to replace.
    Best Regards,
    Jeremy

  • How to set up a query plan that is in effect at any time for an SP or SQL query?

    I have an SP which includes a GROUP BY SQL statement. It retrieves data from a couple of tables which are over 1 GB in size.
    When I run this SP the first time, it takes more than 5 minutes to get the result. Then I run it again and again; finally, it becomes very quick and I can get the result within a second.
    Not sure why. I guess it is because of the query plan.
    How do I make it return the result within a second on the first run? How do I force the best query plan to be in effect the first time the query runs?
    If the engine has a better plan in memory, could it be lost at some point? I ask because I have complaints from end users saying sometimes it is fast, sometimes it is very slow.
    How do I resolve this problem?

    Thanks, Kevin. Here is the pseudo-query (I modified the table names per my company's business rules). You are right, mytab3 is a lookup table.
    Select d.stock,i.description,c.categoryname,
    Round(IsNull(Sum(d.qty),0),2) AS qty
    From mytab1 d,mytab2 s,invent i,mytab3 c
    Where
    d.stock != 'param1'
    And d.id1 = s.id1    --id1: univarchar(11)        
    And i.code = c.code   --code:univarchar(2)         
    And d.stock = i.stock  --stock: univarchar(12)           
    And i.code2 = d.code2  --code2: univarchar(2)
    And d.code2 = 'param2'
    And s.id2 = 'param3'   --id2: univarchar(6)
    Group By  c.categoryname,d.stock,i.description
    Order By d.stock
    Here is the query plan when running this query:
    The command completed with no results returned
    QUERY PLAN FOR STATEMENT 1 (at line 1).
    Executed in parallel by coordinating process and 4 worker processes.
        STEP 1
            The type of query is SELECT (into Worktable1).
            GROUP BY
            Evaluate Grouped SUM OR AVERAGE AGGREGATE.
            Evaluate Grouped SUM OR AVERAGE AGGREGATE.
            Evaluate Grouped SUM OR AVERAGE AGGREGATE.
            Executed in parallel by coordinating process and 4 worker processes.
            FROM TABLE
                mytab2
                s
            Nested iteration.
            Index : ind_mytab2_id2
            Forward scan.
            Positioning by key.
            Keys are:
                id2  ASC
            Executed in parallel with a 4-way hash scan.
            Using I/O Size 16 Kbytes for index leaf pages.
            With LRU Buffer Replacement Strategy for index leaf pages.
            Using I/O Size 16 Kbytes for data pages.
            With LRU Buffer Replacement Strategy for data pages.
            FROM TABLE
                mytab1
                d
            Nested iteration.
            Index : ind_det_inv
            Forward scan.
            Positioning by key.
            Keys are:
                id1  ASC
            Using I/O Size 16 Kbytes for index leaf pages.
            With LRU Buffer Replacement Strategy for index leaf pages.
            Using I/O Size 16 Kbytes for data pages.
            With LRU Buffer Replacement Strategy for data pages.
            FROM TABLE
                invent
                i
            Nested iteration.
            Using Clustered Index.
            Index : invent_pk
            Forward scan.
            Positioning by key.
            Keys are:
                stock  ASC
                code2  ASC
            Using I/O Size 2 Kbytes for data pages.
            With LRU Buffer Replacement Strategy for data pages.
            FROM TABLE
                mytab3
                c
            Nested iteration.
            Table Scan.
            Forward scan.
            Positioning at start of table.
            Using I/O Size 2 Kbytes for data pages.
            With LRU Buffer Replacement Strategy for data pages.
            TO TABLE
                Worktable1.
            Parallel work table merge.
        STEP 2
            The type of query is INSERT.
            The update mode is direct.
            Executed by coordinating process.
            Worktable2 created, in allpages locking mode, for ORDER BY.
            FROM TABLE
                Worktable1.
            Nested iteration.
            Table Scan.
            Forward scan.
            Positioning at start of table.
            Using I/O Size 8 Kbytes for data pages.
            With MRU Buffer Replacement Strategy for data pages.
            TO TABLE
                Worktable2.
        STEP 3
            The type of query is SELECT.
            Executed by coordinating process.
            This step involves sorting.
            FROM TABLE
                Worktable2.
            Using GETSORTED
            Table Scan.
            Forward scan.
            Positioning at start of table.
            Using I/O Size 8 Kbytes for data pages.
            With MRU Buffer Replacement Strategy for data pages.
    Total estimated I/O cost for statement 1 (at line 1): 1409882.
    The sort for Worktable2 is done in Serial
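    One observation to add (editorial, not from the thread): the plan shows mytab3, the lookup table, being read with a table scan inside the nested iteration, i.e. once per qualifying outer row. Below is a hedged sketch of an index that would let ASE position by key instead; the column names come from the query above, and whether the optimizer actually uses it is an assumption to verify with showplan.
    -- Hypothetical index (name assumed); covers the join column and the grouped
    -- column so the lookup can be satisfied from the index leaf pages.
    create index ind_mytab3_code on mytab3 (code, categoryname)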

  • Query plan shows Cost: 0 Bytes: 851 Cardinality: 1

    Hi,
    Customer is running 10.2.0.4 on Linux 64-bit.
    Explain plan shows:
    SELECT STATEMENT ALL_ROWS  Cost: 0  Bytes: 851  Cardinality: 1
      8 FILTER
        7 SORT ORDER BY  Bytes: 851  Cardinality: 1
          6 HASH JOIN OUTER  Cost: 2,873,137  Bytes: 5,047,622,251  Cardinality: 5,931,401
            4 HASH JOIN  Cost: 1,696,501  Bytes: 4,567,178,770  Cardinality: 5,931,401
              1 TABLE ACCESS FULL TABLE LIVE.INSTRUMENT  Cost: 212,073  Bytes: 679,589,280  Cardinality: 1,595,280
              3 PARTITION RANGE ALL  Cost: 764,856  Bytes: 2,040,401,944  Cardinality: 5,931,401  Partition #: 6  Partitions accessed #1 - #27
                2 TABLE ACCESS FULL TABLE LIVE.DEALTRANS  Cost: 764,856  Bytes: 2,040,401,944  Cardinality: 5,931,401  Partition #: 6  Partitions accessed #1 - #27
            5 TABLE ACCESS FULL TABLE LIVE.SMDBINSTRUMENT  Cost: 1,169  Bytes: 4,958,172  Cardinality: 61,212
    I understand that explain plans can be unreliable, but:
    1) Why does the cost show as 0? It's obviously much higher. Is there just not enough room?
    2) Why a cardinality of 1?
    Thanks in advance,
    Steve

    Hi Someoneelse,
    Not sure; I have a query in to the client.
    Thanks for responding.
    Steve
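    A hedged aside (editorial, not part of the original exchange): on 10.2, one way to cross-check an unreliable EXPLAIN PLAN is to capture the plan actually used at run time, with real row counts, via DBMS_XPLAN.DISPLAY_CURSOR. The first statement below is only a stand-in for the real query; the GATHER_PLAN_STATISTICS hint is what makes the actual-rows columns available.
    -- Run the real query with runtime statistics enabled, then pull the plan
    -- for the last statement executed in this session:
    select /*+ gather_plan_statistics */ count(*) from dual;
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));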

  • ERROR MESSAGE WHEN DOING SIMPLE QUERY TO RETRIEVE LARGE AMOUNT OF DATA

    Hello,
    I am querying my database (MySQL) and displaying my data in a
    DataGrid (note that I am using Flex 2.0).
    It works fine when the amount of data populating the grid is
    not much. But when I have a large amount of data, I get the following
    error messages and the grid is not populated.
    ERROR 1
    faultCode:Server.Acknowledge.Failed
    faultString:'Didn't receive an acknowledge message'
    faultDetail: 'Was expecting
    mx.messaging.messages.AcknowledgeMessage, but receive Null'
    ERROR 2
    faultCode:Client.Error.DeliveryInDoubt
    faultString:'Channel disconnected'
    faultDetail: 'Channel disconnected before and acknowledge was
    received'
    Note that my DataGrid is populated when I run the query on my
    server but does not work on my client PCs.
    Your help would be greatly appreciated here.
    Awaiting a reply.
    Regards

    Hello,
    I am using remote object services,
    using a component (ColdFusion as the destination).
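    A hedged sketch (editorial, not from this thread): if the errors only appear with large result sets, paging the query server-side keeps each response to the Flex client small. The table and column names below are hypothetical.
    -- Fetch the grid rows in fixed-size chunks (MySQL); the client requests the
    -- next chunk by increasing the offset as the user pages through the grid.
    SELECT id, name, amount
    FROM orders
    ORDER BY id
    LIMIT 500 OFFSET 0;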

  • Chart not showing large amount of data properly

    Hi,
    I have got 3 charts on a page. 2 are Bar chart and 3rd is a
    Column Chart. All one below another.
    The first two charts on the y axis do not have fixed items.
    These items are created at run time. If the no. of items reaches
    more than 15, the 1st two charts do not show them properly
    formatted.
    i.e It does show the correct data, but the bars are very
    tiny. Of course this would be because of the space it has been
    allocated.
    But can somebody suggest any nice solution to this? As, to
    how do I get the charts to display a large no. of items nicely,
    even thought it has a restricted small space allocated to it. I am
    not able to find any examples on this.
    Thanks

    There is no auto-resize feature.
    I have never tried this, but what if you put your chart in the report more than once and conditionally suppress according to how much data there is?
    Debi
    Edited by: Debi Herbert on Mar 29, 2011 6:33 AM

  • Exchange 2010: ActiveSync Report shows a large number of hits. So what should I do?

    Hello,
    We currently use the ActiveSync Report to run a daily report of Hits on the Exchange 2010 SP3 RU5 CAS servers (http://blogs.technet.com/b/exchange/archive/2012/01/31/a-script-to-troubleshoot-issues-with-exchange-activesync.aspx)
    The script will report a large number of hits for several users, for different reasons. The question I have is: what do we do about it? It's great to know we have a large number of hits, but I'm not sure why, or what to do about it.
    Examples:
    1) No User, DeviceID, or DeviceType (the columns are blank), and it shows 19,000 hits. Almost all are from IIS_4xx (what does that mean?)
    2) A user with 16,200 hits, and it looks like 15,661 were from PING
    3) A user with 3,493 hits, and it looks like 3,224 were from SYNC
    Any advice on how to investigate and resolve the data that the daily ActiveSync hits script reports?
    Thanks!

    WHY are there so many PINGs for some devices, and so many SYNCs for others? And why is one DeviceID/UserID blank?
    Because that is how the ActiveSync client is set up. You will need to have the users upgrade their client. For example, an iOS version (I don't remember the version number) had a severe issue in the past that brought down the CAS server due to so much log generation.
    If your users still have the old OS/iOS on their devices, then you have to have them upgrade.
    As to why the UserID or DeviceID is blank -- I have no idea, but it may be that there are some Android devices.
    Where Technology Meets Talent

  • Large amount of time spent in postInvoke()

    Hello,
    I have a stateless session bean with a method that performs
    quite a few SQL queries against a Sybase RDBMS. It gets
    one connection from the JTS pool, creates a bunch of prepared
    statements, and executes them. After executing a statement, the result set
    and the statement are closed.
    The whole method takes something like 900 msecs, but I noticed
    that the round trip from the client took way longer (around 3300 msecs).
    Adding some timing statements to the EOImpl class showed that it was
    the postInvoke() in there that took most of the time: 1950 msecs.
    Any idea what might cause this?
    Thanks,
    Francois Staes.

    Hello.
    Your transaction will be committed in the postInvoke method if you are
    using container-managed transactions. You could prove or disprove that
    this is taking the majority of the time by trying bean-managed transactions
    and measuring the method time.
    Note, I'm not recommending that you switch your EJBs to bean-managed
    txs; this is just for an experiment.
    -- Rob
    Rob Woollen
    Software Engineer
    BEA WebLogic
    [email protected]

  • CheckTLFFontsLoaded taking large amount of time every frame

    After ages spent trying to figure out why converting all our text fields to TLF is hammering performance, I've finally discovered the culprit function that is taking so long on every single frame:
    RuntimeManager$.checkTLFFontsLoaded (fl.text)
    This function often spends more than 10 ms running each frame, and is by far the longest-running function in our application's ActionScript every single frame.
    Can anyone give me a rough idea of what calls this function so I know what to start looking for in my code?
    I posted this question on the TLF forums but was told to come here because fl.text.RuntimeManager is a class of Flash Pro, not TLF.

    fl.text.RuntimeManager is not a class of TLF but of Flash Pro. Here we mainly focus on TLF, which includes the classes in textLayout.swc. So you may want to duplicate your question at http://forums.adobe.com/community/flash. Thanks!

  • Query is not executed (taking a long time)

    Hi
    We have a query (stock statement) on 0ic_c03. When I execute the query, it shows an error after some time.
    Error:
    1) Error while reading data; navigation is possible
    2) query0: runtime error "time limit exceeded" with parallel processing via RFC
    What is the reason, and what can I do?
    If anyone has faced something like this, please suggest a fix.
    Thanks in advance,
    Ogeti

    Hi
    First try to execute the report using the RSRT tcode; then you will know whether there is any problem in the query itself.
    Also check whether there are any failed requests in the data target.
    Regards,
    Chandra.

  • Planning query doesn't show current data

    Hello,
    I have an input-ready query over an aggregation level of a real-time cube. Whenever the yellow request is closed and a new request is opened, the input-ready query does not show the old data, and sometimes it shows incorrect data. We found that the issue is with the cache.
    In RSRT, when opening the input-ready query in debug mode with the "Do not use cache" setting, the query returns correct data. But the surprising thing is that the input-ready query has its cache setting as inactive (0) in RSRT. So we had to generate the delta buffer query <infoprovider>/!!1<infoprovider> in RSRT, where <infoprovider> is the name of the real-time cube.
    This solved our problem and the query brought in correct data. But again, when I close the second request, the input-ready query again shows me no data or shows me wrong data. So again we need to generate the delta buffer query <infoprovider>/!!1<infoprovider> in RSRT.
    This is very annoying considering the fact that you have to generate the delta buffer query every time the request is closed. This could be an overhead in maintenance and will not go over well with people.
    Does anybody have any solutions for this issue? Is there any setting by which we can turn off the cache altogether or delete the cache when a request is closed, etc.? Or, in the worst case, how can we automate the generation of the delta buffer queries every time the request is closed?
    Any help is really appreciated.
    Regards,
    Anand

    Please check the below:
    Delta buffer query in RSRT for BI-IP ("<infoprovider>/!!1<infoprovider>")
    Planning query doesn't show the current data
    Edited by: Hymavathi Yanamadala on Sep 9, 2009 5:47 AM

  • Pulling large amounts of data using OData and the client API takes a long time in Project Server 2013

    We are trying to pull large amounts of data in Project Server 2013 using both client API and OData calls, but it seems to take a long time. How is this done?
    In Project Server 2010 we did this by creating SQL views in the reporting database and, for lists, creating a view in the content database. Our IT dept is saying we can't do this anymore. How does a view in the Project database or content database create issues,
    as long as we don't add a field to a table? So how is one to do this, if not by creating a view?

    Hello,
    If you are using Project Server 2013 on premises, I would recommend using T-SQL against the dbo schema in the Project Web Database for your reports; this will be far quicker than the APIs. You can create custom objects in the dbo schema; see the link below:
    https://msdn.microsoft.com/en-us/library/office/ee767687.aspx#pj15_Architecture_DAL
    It is not supported to query the SharePoint content database directly with T-SQL or to add any custom objects to the content database.
    Paul
    Paul Mather | Twitter | http://pwmather.wordpress.com | CPS | MVP | Downloads
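    A hedged illustration of the suggestion above (editorial; the source view and column names are assumptions based on the standard Project Server reporting schema, not taken from this thread):
    -- Hypothetical custom reporting object in the Project Web Database's dbo schema.
    -- Verify that dbo.MSP_EpmProject_UserView and these columns exist in your deployment.
    CREATE VIEW dbo.vw_MyProjectSummary
    AS
    SELECT ProjectUID, ProjectName, ProjectStartDate, ProjectFinishDate
    FROM dbo.MSP_EpmProject_UserView;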

  • Querying large amounts of data

    Suppose I want to store an amount of data that is too large to store in memory only (several GBs or TBs). The data needs to be highly available and fault-tolerant, and a user would need to query the data based on some criteria. Coherence would fit these requirements perfectly except for the fact that you need to keep all the data in memory before being able to query it using the Filter API.
    As I see it, the only possible way to make this happen is to store all this data in a database via a write-behind scheme and query the DB instead of the cache. This would make the DB the single source of data instead of the cache. However, this would also seriously diminish the usefulness of the cache, as you would now need to make sure the DB is replicated and fault-tolerant. You wouldn't even need the cache anymore, as its only purpose would be to pass the data to the DB via the write-behind scheme.
    Is there another way to query the data, not using a DB query but using the Coherence Filter API, and still store the data somewhere outside the JVM's memory?
    Best regards
    Jan

    Hi Jan,
    Three things come to my mind regarding your post:
    1. Write-behind is not usable if you want the DB to be the system of record. With write-behind, the cache is the system of record, not the backing storage. Will the data change? If yes, how frequently? What system changes the data?
    2. To query the cache with such a high amount of data with the Filter API, you practically have to have all your data indexed, as otherwise query response times would be too high if you needed to deserialize even a subset of your data. But indexes always have to reside on the heap. So the question is, how complex would your queries be? The more complex the queries, the larger the number of indexes you would likely need.
    3. You can query only a single cache with a Coherence filter. Would this be sufficient for your querying needs?
    Best regards,
    Robert

  • Real-time application and large amounts of data

    Hi,
    I have a real-time application that needs to allocate a large amount of data in memory (more than 5 GB). I'm using the 1.6 version of the JRE and I'm planning to migrate to 1.8 because the application experiences a lot of "stop the world" pauses every day. I spent a lot of time analysing and testing all the parameters and policies of the GC.
    The question is: with version 1.8, do the pauses in the application caused by the GC cleaning process disappear?
    Thanks.

    Just noting that the GC only really needs to do something when objects are created and then are no longer in use. So if your application needs a large amount of memory, then keeping it rather than discarding it might be a better solution.

  • Firefox is using large amounts of CPU time and disk access, and I need to know how to shut down most of this so I can actually use the browser.

    Firefox is a very busy piece of software. It uses large amounts of CPU time and disk access. It puts my usage at low priority, so I have to wait for some time to be able to use my pointer or keyboard. I don't know what it uses all that CPU and disk access time for, but it's of no use to me. It often takes off with massive use of resources when I'm not doing anything, and I may not have use of my pointer for several minutes. How can I shut down most of this so I can use the browser to get my work done? I just want to use the website-access part of the software and drop all the extra. I don't want Firefox to be able to recover after a crash. I just want to browse with a minimum of interference from Firefox. I would think that this is the most commonly asked question.

    Firefox consumes a lot of CPU resources
    * https://support.mozilla.com/en-US/kb/Firefox%20consumes%20a%20lot%20of%20CPU%20resources
    High memory usage
    * https://support.mozilla.com/en-US/kb/High%20memory%20usage
    Check and tell if it's working.
