Wait events indicating Interconnect Hardware issues

Version: 10.2, 11.2
Which wait events appearing in an AWR report could indicate that the high-speed interconnect is not functioning well?

What are the "expected standards"?
There are numerous factors that determine performance at this level: latency, packet size, collisions, bandwidth, and so on.
The critical bit is that your Interconnect is not of much use if it is the same speed as, or slower than, your storage fabric layer. For example, if you have a dual-port HBA running 2 x 2Gb fibre channels into the storage system switch, it makes little sense to have an Interconnect running at 1Gb (the minimum Oracle recommendation).
Cache Fusion runs over the Interconnect. A cluster cache that is slower (higher latency) than the physical storage layer will be a major performance drawback.
The "standard" for the RAC Interconnect, if one wants to call it that, would seem to be RDS (an InfiniBand protocol) over QDR/Quad Data Rate (40Gb) InfiniBand - as that is what Oracle's Database Machine and Exadata servers use.

Similar Messages

  • What is the wait period after an event before vendor can issue a refund?

    I paid for an event through PayPal. At the event I found out that I did not need to pay, since I was listed as a vendor assistant. The vendor organizer said she would refund my money, but it is saying there is a hold on the account after an event before they can issue a refund. Is this true, and if so, how long is the wait period?

    She can refund at any time.

  • Performance Issue: Wait event "log file sync" and "Execute to Parse %"

    In one of our test environments users are complaining about slow response.
    In the statspack report the following are the top 5 wait events:
    Event                          Waits   Time (cs)   % Wt Time
    log file parallel write        1,046         988       37.71
    log file sync                    775         774       29.54
    db file scattered read         4,946         248        9.47
    db file parallel write            66         248        9.47
    control file parallel write      188         152        5.80
    And after running the same application 4 times, we are getting Execute to Parse % = 0.10. Cursor sharing is forced and query rewrite is enabled.
    When I view v$sql, the following statement is parsed frequently:
    EXECUTIONS PARSE_CALLS
    SQL_TEXT
    93380 93380
    select SEQ_ORDO_PRC.nextval from DUAL
    Please suggest how to troubleshoot this and whether I need to check any more information.
    Regards,
    Sudhanshu Bhandari

    Well, of course, you probably can't eliminate this sort of thing entirely: a setup such as yours is inevitably a compromise. What you can do is make sure your log buffer is a good size (say 10MB or so); that your redo logs are large (at least 100MB each, and preferably large enough to hold one hour or so of redo produced at the busiest time for your database without filling up); and finally set ARCHIVE_LAG_TARGET to something like 1800 seconds or more to ensure a regular, routine, predictable log switch.
    It won't cure every ill, but that sort of setup often means the redo subsystem ceases to be a regular driver of foreground waits.
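    A hedged sketch of the settings described above (values and file paths are illustrative, not recommendations; test before applying in production):

    alter system set log_buffer = 10485760 scope=spfile;     -- ~10MB, static parameter, needs a restart
    alter system set archive_lag_target = 1800 scope=both;   -- regular, predictable log switch
    -- redo logs cannot be resized in place: add larger groups, then drop the old ones
    alter database add logfile group 4 ('/u01/oradata/TEST/redo04a.log') size 512m;
    alter database drop logfile group 1;                     -- only once group 1 is INACTIVE

    Separately, for the heavy parsing of SEQ_ORDO_PRC.nextval shown in the question, a commonly suggested step is to increase the sequence cache (e.g. alter sequence SEQ_ORDO_PRC cache 1000;) and to reference .nextval directly in the INSERT instead of a standalone SELECT FROM DUAL.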

  • What do the wait events 'gc cr failure' and 'cr request retry' mean?

    I'm trying to troubleshoot an issue for a customer. Environment is Oracle 10.2.0.4 (64-bit) on Redhat 5. Two node RAC cluster. The 10046 trace file shows lots of 'gc current block 2-way' waits but also a few 'gc cr failure' and 'cr request retry' waits. The 'cr request retry' waits take about 0.9 seconds each. I cannot find much if any information on these two wait events. Any help is much appreciated.
    Thanks!

    Hi,
    Also, you might need to check which protocol is being used for the interconnect communication.
    Here are the steps, just in case:
    $ORACLE_HOME/bin/sqlplus / as sysdba
    oradebug setmypid
    oradebug unlimit
    oradebug ipc
    oradebug TRACEFILE_NAME
    Review the trace file whose name the last oradebug command returned.
    Metalink note 181489.1 provides some handy steps to analyze your situation (it also contains the latest supported protocols for IPC).
    Hope this helps
    Regards,
    Jozsef

  • What is ges reusing os pid wait event

    What is the wait event "ges reusing os pid"? In our RAC environment it is one of the top wait events. How can we minimize it?

    This is a wait event in Oracle 10g for Global Enqueue Services (ges) waiting on an operating system process id (os pid).
    How to resolve this issue? I checked the bug list on Metalink and there is a patch set for the issue that may help.
    Question: what version and patch release are you running for Oracle RAC?
    Also, you probably want to tune your public network and private interconnects between the nodes in your Oracle RAC cluster.
    Regards,
    Ben Prusinski
    http://oracle-magician.blogspot.com/
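    To quantify how much this event actually contributes per instance before tuning anything, a hedged sketch against the 10g dynamic views (time_waited is in centiseconds):

    select inst_id, event, total_waits, time_waited
    from   gv$system_event
    where  event = 'ges reusing os pid'
    order  by inst_id;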

  • Oracle RAC Wait events

    Sun OS 10
    Oracle 10.2.0.5
    We are running a 2-node RAC and we frequently see the following waits in the top 5 wait events:
    cr request retry
    gcs log flush sync
    We couldn't locate these events in the Database Reference:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents.htm
    Thanks
    Saravanan

    gcs log flush sync is similar to log file sync in a standalone instance:
    from - http://orainternals.files.wordpress.com/2010/02/riyaj_advanced_rac_troubleshooting_rmoug_2010_ppt.pdf (you might have more luck opening this one)
    Gcs log flush sync
    - But, if the instance crashes right after the block is transferred to the other node, how does RAC maintain consistency?
    - Actually, before sending a current mode block, the LMS process will request LGWR for a log flush.
    - Until LGWR sends a signal back to the LMS process, the LMS process will wait on the 'gcs log flush' event.
    - A CR block transfer might also need a log flush if the block was considered "busy".
    - One of the busy conditions is that the block was constructed by applying undo records.
    cr request retry in some cases means that the message was lost and re-requested. This is tied to the interconnect - either UDP issues (like truncated UDP packets or packets sent out of order), the session was lost on the other node, or the node restarted quickly. It could also mean your NIC is flaky or something is happening on the switch. If this is a big concern then you'll need to have someone look at the flow on the interconnects, as this is specific to cache fusion.
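    Since 'cr request retry' often points at lost or damaged interconnect messages, one quick, hedged check is the block-loss statistics (statistic names are as in 10.2; some releases use the older 'global cache blocks lost' wording):

    select inst_id, name, value
    from   gv$sysstat
    where  name in ('gc blocks lost', 'gc blocks corrupt',
                    'gc cr blocks received', 'gc current blocks received')
    order  by inst_id, name;

    A steadily growing 'gc blocks lost' count tends to support the NIC/switch theory above; also check netstat -s on each node for UDP receive errors and packet reassembly failures.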

  • Wait Event "gc buffer busy"

    We have a transaction stuck on this event at a particular index block# for a long time. We even tried to restart many times. Is this related to an interconnect issue?

    Global Cache Buffer Busy wait is similar to the 'Buffer Busy' wait in a single instance environment. When more than one process is looking for a current buffer (exclusive access) you may see this wait event. If you know what kind of block it is (I guess you already know), you can easily address the problem.
    Also, you have had some good inputs so far. You may want to consider partitioning the objects if you have not already done so. Also investigate the reverse key index option.
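    If the hot block turns out to be the right-hand edge of an index on a sequence-generated key, the reverse-key option mentioned above can be applied to an existing index; a hedged sketch (the index name is hypothetical):

    alter index app_owner.orders_pk rebuild reverse;
    -- trade-off: reverse-key indexes spread inserts across leaf blocks,
    -- but range scans on the indexed column can no longer use the index efficiently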
    -Gopal

  • "lms flush message acks" wait event

    Dear All,
    We are load testing our application on a 10.2.0.1 database with a 2-node RAC; in the AWR report top 5 wait events, "lms flush message acks" accounts for 90%.
    I searched Google and Metalink, but I could not find any related notes.
    Please help..
    Thanks,
    Anand.

    Something is waiting for the "other" node to acknowledge a "flush" message - so you need to look at the other node to see if you can find anything that might cause the flush message to get a slow response.
    Of course, there may be congestion on the interconnect - but then various other RAC communications would also be slow - so it's more likely that the "flushing" is slow.
    Reasons for flushing - we are telling the other node to clear part of its buffer cache, this might be related to frequent truncate commands (as the top of a shortlist). If you truncate an object, any dirty blocks for that object have to be written to disc, and any clean blocks have to be flushed from the cache; in a RAC environment the other nodes have to be told to do the same and your session has to wait for them to complete the write and flush.
    In your case, you might check the code for frequent truncates - and check to see if you can find evidence of frequent slow writes from dbwr (and also from lgwr) on the remote node.
    Since you're running an early version of 10.2, I think problems of this type can even be related to truncates on global temporary tables due to some bugs that weren't fixed until 10.2.0.3. (And I think there were some problems with dynamic remastering in that version too, which caused similar flushing issues).
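    A rough, hedged way to test the "frequent truncates" theory on each node (DDL does appear in the shared pool views, though it can age out quickly):

    select sql_text, executions, last_active_time
    from   v$sqlarea
    where  upper(sql_text) like 'TRUNCATE TABLE%'
    order  by executions desc;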
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • Why does my iPhone 5 take very grainy flash photographs in the dark will a hard reboot be safe to do to try to fix or is it a hardware issue?

    Why does my iPhone 5 take very grainy flash photographs in the dark? Will a hard reboot be safe to try as a fix, or is it a hardware issue?
    And by hard reboot I mean holding the home and power buttons down at the same time for 10 seconds or so.
    I see other people have these grainy flash photographs: if you try to take a picture in darkness, the picture comes out very dark and grainy. I thought the software would ultimately fix this, but it's been going on for over a year. I usually take my photographs in daylight, but I need this to be fixed.
    If it is a hardware issue, I hope Apple will replace my iPhone 5, because I was just waiting for one of the iOS updates to fix it. I bought my phone in November 2012; it's the iPhone 5.
    Please help or advise if you can: was this a software issue or a hardware issue? I know Apple was aware of it, and I was advised that an iOS update would fix it, but obviously it did not.
    Is it safe to do a hard reboot on my iPhone 5 by holding the power and home buttons at the same time for 10 or 20 seconds? I saw someone said that fixed theirs, at least temporarily.
    I'm typing this on my iPhone 5 right now. Why do the words go off to the left and right no matter whether I hold the phone vertically or horizontally? The words bleed off to the left and right, so to proofread I have to move my text left and right.
    Thank you

    The flash on a smartphone camera is tiny. It only lights up a very small area right in front of the camera. It won't light up the whole scene.
    Have you seen the size of the flash on a professional camera? Even they can't light a whole scene in the dark.
    Fact is, a smartphone camera is not capable of taking good photos in the dark, even with the flash.
    You'll need to buy and learn how to use a professional, dedicated SLR camera to be able to take good photos in the dark. It is not easy.

  • Server 2012 fresh install - Running at a crawl. Possible hardware issue?

    Hello All,
    I have recently purchased a used server and just did a fresh install of Server 2012 to evaluate it, and I am experiencing an issue with it running very slowly. So far nothing has been set up or installed besides the OS, hard drives, and an external USB drive, so it is pushing me to believe my issue is hardware related.
    First let me describe the issue more. Basically everything is slow to load. Control Panel takes 20 seconds... trying to load Disk Management and other snap-ins from Administrative Tools all take 15-20 secs+ and first come up in a greyed-out window, or the Administrative Tools window goes to "Not Responding" for a few seconds and then loads it.
    I do not think it is due to the fact that the server is used, with slightly dated hardware, as it has dual quad-core 2.66 Xeons & 16GB ECC RAM, so I was thinking there must be a bad piece of hardware in there? Maybe bad RAM?
    Anyway I plan to start hardware diagnostics, but I just wanted to see if anyone had any insight or suggestions for me, and to verify whether this sounds like a hardware issue to you as well.
    Thanks!

    Windows 2008 R2 is at the top of the list of supported operating systems. It is a good choice for some testing with the M350 G5. Then you can test the in-place upgrade, and the application compatibility test will tell you which component causes the problem.
    If, for example, the onboard NIC is not compatible, then you can use another NIC card that is compatible. When it is the RAID controller (very often), then either use another RAID controller or a single HDD (for testing only, not for a production environment).
    You can also explore the virtualization path. Find out whether the appropriate hypervisor from VMware is compatible with this hardware and also compatible with Windows Server 2012 R2. The VMware forum may answer your questions on virtualization. Working in a virtual environment will provide some experience too.
    The majority of current desktop equipment allows you to test Windows Server 2012 R2. There are some exceptions, namely "high-end game machines" that force you to find a remedy for drivers (for example, the MAXIMUS IV GENE-Z needs an adapted inf file for the NIC, because there is a low-end NIC on the motherboard).
    For low performance issues use native tools like Performance Monitor and diagnostics - Event Logs and Device Manager. For detailed analysis, consider the Sysinternals tools.
    HTH
    Milos

  • Wait events - how to read it

    Hi friends,
    As I'm a beginner at performance tuning, I don't know what action I need to take.
    I mean, how do I read the output given below? This is the output from a system suffering buffer busy waits.
    Could anyone please tell me?
    CLASS TOTAL_WAITS TOTAL_TIME
    data block 93303 58711
    unused 0 0
    system undo header 12 232
    undo header 7847 6636
    3rd level bmb 0 0
    save undo header 0 0
    bitmap index block 0 0
    file header block 0 0
    free list 0 0
    undo block 68 207
    segment header 422 399
    extent map 0 0
    2nd level bmb 0 0
    system undo block 0 0
    sort block 0 0
    save undo block 0 0
    1st level bmb 1 17
    bitmap block 0 0
    Thanks, Muhammed Thameem. S

    Hello,
    "Buffer busy waits" is contention for a buffer (representing a specific
    version of a database block) within the Buffer Cache. So, in essence
    it is block contention and thus it is most likely something to do with
    the design of the tables and indexes supporting the application. A
    built-in bottleneck. On indexes, it could be the age-old problem of
    insertions into an index on a column with a monotonically-ascending
    data value (i.e. timestamps or sequence numbers) which tends to cause
    contention on the highest leaf node of the index. On tables, it might
    have to do with many concurrent insertions into a table in a
    freelist-managed tablespace where the table has only one freelist. It
    could also be due to a home-grown implementation of sequence-number
    generators (i.e. a small table with one row, one column which contains
    the "last value" of a sequence, etc.) which lots of people use in order
    to stay "portable across databases", which they think means not using
    Oracle sequences (yadda yadda yadda).
    I'd look for any SQL statement in the "SQL sorted by Elapsed Time"
    section of the AWR report which exhibits high elapsed time but
    relatively low CPU time, indicating a lot of wait time. Of course,
    there are something like 800 possible wait events in current releases
    of Oracle, of which "buffer busy waits" is only one, so this is just
    inference and not a direct causal connection to your problem. But,
    once I find such statements I'd check to see if they are
    accessing/manipulating tables within the CUBS_DATA tablespace, and then
    use "select * from table(dbms_xplan.display_awr('sql-id'))" to
    get the execution plan(s), and then look for something ineffective
    within the execution plan. You might find the script "sqlhistory.sql" helpful
    here as well, to get a "historical perspective" on the execution of the
    SQL statements over time, in case the buffer busy waits peaked at some
    point in the past.
    Please refer to:
    http://www.pubbs.net/201003/oracle/51925-understanding-awr-buffer-waits.html
    Also
    http://www.remote-dba.net/oracle_10g_tuning/t_buffer_busy_waits.htm
    kind regards
    Mohamed
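    To reproduce the class breakdown shown in the question and then tie an active wait back to a specific object, a hedged sketch against the 10g views (in 10g, P1/P2/P3 for 'buffer busy waits' are file#, block# and class#; the dba_extents lookup can be slow on large databases):

    select class, "COUNT" total_waits, time total_time
    from   v$waitstat
    order  by time desc;

    select sid, p1 file#, p2 block#, p3 class#
    from   v$session_wait
    where  event = 'buffer busy waits';

    select owner, segment_name, segment_type
    from   dba_extents
    where  file_id = &file_id
    and    &block_id between block_id and block_id + blocks - 1;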

  • Wait Events "log file parallel write" / "log file sync" during CREATE INDEX

    Hello guys,
    At my current project I am performing some performance tests for Oracle Data Guard. The question is "How does a LGWR SYNC transfer influence the system performance?"
    To get some performance values that I can compare, I first built up a normal Oracle database.
    Now I am performing different tests like creating "large" indexes, massive parallel inserts/commits, etc. to get the benchmark.
    My database is an Oracle 10.2.0.4 with multiplexed redo log files on AIX.
    I am creating an index on a "normal" table. I execute "dbms_workload_repository.create_snapshot()" before and after the CREATE INDEX to get an equivalent timeframe for the AWR report.
    After the index is built (roughly 9 GB) I run awrrpt.sql to get the AWR report.
    And now take a look at these values from the AWR report:
                                                                       Avg
                                                 %Time  Total Wait    wait     Waits
    Event                                 Waits  -outs    Time (s)    (ms)      /txn
    log file parallel write              10,019     .0         132      13      33.5
    log file sync                           293     .7           4      15       1.0
    ...... How can this be possible?
    According to the documentation:
    -> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
    Wait Time: The wait time includes the writing of the log buffer and the post.
    -> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
    Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
    This was also my understanding: the "log file sync" wait time should be higher than the "log file parallel write" wait time, because it includes the I/O and the response time to the user session.
    I could accept it if the values were close to each other (maybe around 1 second in total), but the difference between 132 seconds and 4 seconds is too noticeable.
    Is the behavior of the log file sync/write different when performing DDL like CREATE INDEX (maybe async, like you can influence it with the initialization parameter COMMIT_WRITE)?
    Do you have any idea how these values come about?
    Any thoughts/ideas are welcome.
    Thanks and Regards

    Surachart Opun (HunterX) wrote:
    Thank you for the nice idea.
    In this case, how can we reduce the "log file parallel write" and "log file sync" wait time? CREATE INDEX with NOLOGGING can help, can't it?
    Yes - if you create the index nologging then you wouldn't be generating that 10GB of redo log, so the waits would disappear.
    Two points on nologging, though:
    - it's "only" an index, so you could always rebuild it in the event of media corruption; but if you had lots of indexes created nologging this might cause an unreasonable delay before the system was usable again - so you should decide on a fallback option, such as taking a new backup of the tablespace as soon as all the nologging operations had completed.
    - If the database, or that tablespace, is in "force logging" mode, the nologging will not work.
    Don't get too alarmed by the waits, though. My guess is that the "log file sync" waits are mostly from other sessions, and since there aren't many of them the other sessions are probably not seeing a performance issue. The "log file parallel write" waits are caused by your create index, but they are happening to lgwr in the background, which is running concurrently with your session - so your session is not (directly) affected by them and may not be seeing a performance issue.
    The other sessions are seeing relatively high sync times because their log file syncs have to wait for one of the large writes that you have triggered to complete, and then the log writer includes their (little) writes with your next (large) write.
    There may be a performance impact, though, from the pure volume of I/O. Apart from the I/O to write the index, you have LGWR writing (N copies of) the redo for the index, and ARCH is reading and writing the completed log files caused by the index build. So the 9GB of index could easily be responsible for vastly more I/O than the initial 9GB.
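    A minimal sketch of the nologging approach discussed above, assuming the tablespace is not in FORCE LOGGING mode (object names and the parallel degree are illustrative):

    create index app_owner.big_tab_ix on app_owner.big_tab (created_date)
      nologging parallel 4;
    -- optionally reset the attributes afterwards:
    alter index app_owner.big_tab_ix logging noparallel;
    -- ...and take a fresh backup of the affected tablespace once the nologging work is done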
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • Wait events 'direct path write'  and 'direct path read'

    Hi,
    We have a query which is taking more than 2 minutes. It's a 9.2.0.7 database. We took the trace/tkprof of the query and identified that there are many 'direct path write' and 'direct path read' wait events in the trace file.
    WAIT #3: nam='direct path write' ela= 5 p1=201 p2=70710 p3=15
    WAIT #3: nam='direct path read' ela= 170 p1=201 p2=71719 p3=15
    In the above, "p1=201" is a file_id, but we could not find any data file, temp file, control file with that id# 201.
    Can you please let us know what's "p1=201" here, how to identify the file which is causing the issue.
    Thanks
    Sravan

    What does:
    show parameter db_files
    return? My guess is that it returns 200.
    The direct path read and direct path write events are reads and writes to the TEMP tablespace. In those wait events, the file# is reported as db_files + temp file id. So, 201 means temp file #1.
    Now, as to your actual performance problem.
    Without seeing the SQL and the corresponding execution plan, it's impossible to be sure. However, the most common causes of temp writes are sort operations and group by operations.
    If you decide to post your SQL and execution plan, please be sure to make it readable by formatting it. Information on how to do so can be found here.
    Hope that helps,
    -Mark
    Edited by: mbobak on May 1, 2011 1:50 AM
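    A hedged sketch of that arithmetic (assuming db_files really is 200, as guessed above):

    show parameter db_files
    select file#, name, bytes/1024/1024 mb
    from   v$tempfile
    where  file# = 201 - 200;   -- p1 minus db_files = temp file number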

  • My 5s's data has been running dirt slow! Phone company says it's a hardware issue. Has anyone else had the same problem?

    I've had my iPhone 5s since October. Since I've had it, it will go through these periods where the data just slows to a crawl - so slow that I can't even access the App Store until I get back around Wi-Fi. And this is in areas where I am receiving full signal and it indicates that I have access to 4G. My wife has the same company, but has an iPhone 5 and has not been having this problem. I called the phone company and they advised me that it's likely a hardware issue. Has anyone had a similar problem with the 5s? If so, what can I do?

    As silly as it may seem, I have had success simply turning my phone completely off and then back on again. I was using an iPhone 5 on Sprint and would run into this issue a couple of times a month. I would see the data indicator on my phone switching from 3G to 1x and back again several times over a few minutes while just sitting still. This would happen regardless of whether or not I had a full-strength signal. (Remember that if your signal strength is below 2 bars, dots, etc., you will not get great data speeds.)
    Try rebooting the phone. If this does not help, pay close attention to the signal strength meter at the top of the phone. If you and your wife are on the same carrier and it turns out you get different signal strengths while standing in the same location, you may have a problem with your phone's antenna.
    If these suggestions do not help, post back to the discussion.

  • Hash join ending up in huge wait events

    Hi,
    We are experiencing huge wait events (direct path read temp, direct path write temp) on our materialized view refresh in Oracle 10g 10.2.0.4 in a Linux RHEL 5 environment, while monitoring the refresh session from DB Console. When checking the explain plan of the MV query, a huge hash join (due to the self-join nature of the query) is shown. As advised in some DBA forums, I have increased pga_aggregate_target to 4 GB from 1800 MB. The PGA hit % rose to 60% from 58% (just a 2% improvement). But my direct path read temp and direct path write temp waits have still not reduced, and a huge amount of temp space is used for the hash join.
    Since there is a per-session usage limit on pga_aggregate_target set by some hidden parameters, increasing the size did not help me much. The MV refresh is taking more than 5 hours (sometimes it exceeds 5 hrs) to complete, whereas the same query on the Windows (production) environment completes in less than two hours. A month ago the refresh times in both environments were nearly the same, but now they have diverged and I am not able to figure out why.
    Statistics have been collected regularly using dbms_stats in both environments. Both MV refreshes are scheduled to run using dbms_scheduler (manual refresh). SGA_TARGET and other memory parameters are almost the same.
    Environment : Dataware house
    O/s : RHEL 5
    Oracle version : 10.2.0.4
    workarea_size_policy = auto
    Is there any possibility of reducing this wait event and thereby reducing the elapsed time? I am also interested to know whether changing the plan to use a different sort/join method will help. I don't know whether these details are sufficient to analyze this issue; if you need more details, just let me know.
    I really appreciate your help and thanks in advance to all.

    Thanks for your comments. Here is the code, explain plan and autotrace statistics output.
    SELECT lasg.employee_number "EMPLOYEE_NUM",
    lasg.full_name "FULL_NAME",
    lasg.person_id "PERSON_ID",
    SUBSTR (lasg.organization, 1, 4) "DEPT",
    casg.assign_start_date "EFFECTIVE_START_DATE",
    casg.assign_end_date "EFFECTIVE_END_DATE",
    hasg.organization "PRIOR_ORG",
    casg.organization organization,
    hasg.supervisor "PRIOR_SUPERVISOR",
    casg.supervisor "SUPERVISOR_NAME",
    hasg.location "PRIOR_LOCATION",
    casg.location location,
    hasg.job_title "PRIOR_TITLE",
    casg.job_title job_name,
    CASE
    WHEN hasg.organization = casg.organization THEN 'No Change'
    ELSE 'Change'
    END
    org_change,
    CASE
    WHEN hasg.location = casg.location THEN 'No Change'
    ELSE 'Change'
    END
    loc_change,
    CASE
    WHEN hasg.supervisor = casg.supervisor THEN 'No Change'
    ELSE 'Change'
    END
    sup_change,
    CASE
    WHEN hasg.job_title = casg.job_title THEN 'No Change'
    ELSE 'Change'
    END
    job_change
    FROM panad.data_employ_details lasg,
    panad.data_employ_details casg,
    panad.data_employ_details hasg
    WHERE lasg.person_id = casg.person_id(+)
    AND lasg.assign_end_date = (SELECT MAX (lasg2.assign_end_date)
    FROM panad.data_employ_details lasg2
    WHERE lasg.person_id = lasg2.person_id)
    AND casg.person_id = hasg.person_id(+)
    AND hasg.assign_start_date =
    (SELECT MAX (hasg2.assign_start_date)
    FROM panad.data_employ_details hasg2
    WHERE hasg2.person_id = lasg.person_id
    AND hasg2.assign_end_date < casg.assign_start_date)
    | Id  | Operation                 | Name                       | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT          |                            |     1 |   303 |       | 10261  (91)| 00:02:04 |
    |*  1 |  FILTER                   |                            |       |       |       |            |          |
    |*  2 |   HASH JOIN               |                            |     1 |   303 |       | 10179  (91)| 00:02:03 |
    |*  3 |    HASH JOIN              |                            |     5 |  1060 |       | 10095  (92)| 00:02:02 |
    |*  4 |     HASH JOIN             |                            |  6786 |   960K|       | 10011  (93)| 00:02:01 |
    |   5 |      VIEW                 | VW_SQ_1                    |  6786 |   225K|       |  9927  (94)| 00:02:00 |
    |   6 |       HASH GROUP BY       |                            |  6786 |   384K|       |  9927  (94)| 00:02:00 |
    |   7 |        MERGE JOIN         |                            |    50M|  2820M|       |  1427  (53)| 00:00:18 |
    |   8 |         SORT JOIN         |                            | 31937 |   998K|  2776K|   367   (2)| 00:00:05 |
    |   9 |          TABLE ACCESS FULL| DATA_EMPLOY_DETAILS            | 31937 |   998K|       |    82   (2)| 00:00:01 |
    |* 10 |         SORT JOIN         |                            | 31937 |   810K|  2520K|   324   (2)| 00:00:04 |
    |  11 |          TABLE ACCESS FULL| DATA_EMPLOY_DETAILS        | 31937 |   810K|       |    82   (2)| 00:00:01 |
    |  12 |      TABLE ACCESS FULL    | DATA_EMPLOY_DETAILS        | 31937 |  3461K|       |    83   (3)| 00:00:01 |
    |  13 |     TABLE ACCESS FULL     | DATA_EMPLOY_DETAILS        | 31937 |  2089K|       |    83   (3)| 00:00:01 |
    |  14 |    TABLE ACCESS FULL      | DATA_EMPLOY_DETAILS        | 31937 |  2838K|       |    83   (3)| 00:00:01 |
    |  15 |   SORT AGGREGATE          |                            |     1 |    13 |       |            |          |
    |* 16 |    TABLE ACCESS FULL      | DATA_EMPLOY_DETAILS        |     5 |    65 |       |    82   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("LASG"."ASSIGN_END_DATE"= (SELECT MAX("LASG2"."ASSIGN_END_DATE") FROM
                  "PANAD"."DATA_EMPLOY_DETAILS" "LASG2" WHERE "LASG2"."PERSON_ID"=:B1))
       2 - access("CASG"."PERSON_ID"="HASG"."PERSON_ID" AND "HASG"."ASSIGN_START_DATE"="VW_COL_1")
       3 - access("LASG"."PERSON_ID"="CASG"."PERSON_ID" AND "PERSON_ID"="LASG"."PERSON_ID")
       4 - access("ROWID"=ROWID)
      10 - access(INTERNAL_FUNCTION("HASG2"."ASSIGN_END_DATE")<INTERNAL_FUNCTION("CASG"."ASSIGN_START_DATE")
           filter(INTERNAL_FUNCTION("HASG2"."ASSIGN_END_DATE")<INTERNAL_FUNCTION("CASG"."ASSIGN_START_DATE")
      16 - filter("LASG2"."PERSON_ID"=:B1)
    37 rows selected.
    - autot trace stat output -
    5070 rows selected.
    Statistics
          35203  recursive calls
              0  db block gets
        3675913  consistent gets
        4269882  physical reads
              0  redo size
        1046781  bytes sent via SQL*Net to client
           4107  bytes received via SQL*Net from client
            339  SQL*Net roundtrips to/from client
             69  sorts (memory)
              0  sorts (disk)
            5070  rows processed
    I have tried running this query in parallel but it did not help.
    I have read the links provided by both of you. Dictionary and fixed table stats are collected as a routine.
    From the link given by Taral, Greg Rahn has suggested that it is a bug, as below:
    It's bug 9041800 and there is a 10.2.0.4 backport available as of 01/29/10.
    How can I get this bug fixed, since there is no explanation of what needs to be done? Do I need to contact Oracle Support for the 10.2.0.4 backport for RHEL 5?
    Thanks in advance
    Edited by: Karthikambalav on Mar 9, 2010 2:43 AM
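    Before chasing the backport, it may be worth confirming how badly the hash join is spilling; a hedged sketch against the 10g work-area views (run during or shortly after the refresh):

    select low_optimal_size/1024        low_kb,
           (high_optimal_size + 1)/1024 high_kb,
           optimal_executions, onepass_executions, multipasses_executions
    from   v$sql_workarea_histogram
    where  total_executions <> 0;

    select name, value
    from   v$pgastat
    where  name in ('aggregate PGA target parameter',
                    'total PGA allocated', 'cache hit percentage');

    Multi-pass executions are the ones that drive heavy direct path read temp / direct path write temp activity; if they dominate, the per-session work-area limits mentioned above (rather than pga_aggregate_target itself) are usually the constraint.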
