WeakReference impact on GC times

We are using JDK 5.0 on a 4-CPU SPARC Solaris box, in both 32-bit and 64-bit mode, with CMS for the old generation and the parallel collector (ParNew) for the new generation.
In our application, due to a specific design decision, we ended up with a large number of weak references. We have a max heap of 2GB, with almost-permanent data taking around 0.5GB and the rest of the heap filled with semi-transient data. New-generation GC is actually doing a very good job - sometimes we don't have to run a CMS collection for an entire day under small load. Unfortunately, when CMS does run, we hit a big pause during a stop-the-world phase in the middle of the collection. The 'weak refs processing' part takes between 7 and 30 seconds for us, with the rest of this blocking phase taking maybe 0.3-0.5s. The funny thing is that it is often the second CMS which takes so long - the first one manages to finish weak ref processing in under 1s.
I would like to understand what exactly happens during the 'weak refs processing' phase. Is the time it takes proportional to the number of weak references which have to be cleared and enqueued, to the number of still-alive weak references, or to the total number of them? Is there any chance of processing them in non-blocking mode (we can afford one CPU for GC running in the background, but we cannot afford pause times bigger than 2s)?
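For reference, the clear-and-enqueue work the collector performs per dead referent during 'weak refs processing' can be mimicked by hand through the java.lang.ref API. This is a minimal sketch of the semantics, not the VM's internal code (the helper name is ours):

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    // Mimics what the collector does for each weakly-reachable referent:
    // clear the reference, then enqueue it on its registered ReferenceQueue.
    static boolean clearAndEnqueue(Reference<?> ref) {
        ref.clear();          // get() returns null from here on
        return ref.enqueue(); // true if the ref was registered with a queue
    }

    public static void main(String[] args) {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object referent = new Object();
        WeakReference<Object> ref = new WeakReference<>(referent, queue);

        System.out.println(ref.get() != null);   // still strongly referenced

        boolean enqueued = clearAndEnqueue(ref);
        System.out.println(enqueued);            // true
        System.out.println(ref.get() == null);   // true
        System.out.println(queue.poll() == ref); // true
    }
}
```

The cost the collector pays per reference covers exactly these two steps plus the reachability discovery that precedes them, which is why the answer to "proportional to what?" matters so much here.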

We are no longer experiencing 2-minute pauses. I have enabled parallel weak ref processing (-XX:+ParallelRefProcEnabled) and currently 2-3 second pauses are the biggest we see. As the production machine will be 50%+ more powerful, we could probably live with that - as long as it does not get worse. Unfortunately we are not able to reproduce it easily - it happens only once per few CMS collections, plus there is no easy way to force a CMS collection at a given time (a full GC, which can be forced through jconsole, takes 1 minute+ and is probably very different from the normal case).
As for the weak references, I know how they are allocated. We were allocating weak references in bursts (1,000-10,000 at a time). The objects they pointed to were strongly referenced for many minutes, then the strong references were dropped. Early on (most probably while the referents were still mostly strongly reachable) there was a CMS run, but its blocking phase took only around 0.3s. A few hours later, after 80%+ of the weak references had become collectable, another CMS ran. Neither the weak references nor the objects they pointed to were touched by anything during this time - and it could easily have been 100-200k objects (with probably 3-4 unique objects referenced from each of them) plus the same number of weak references.
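The allocation pattern described above can be sketched as a rough harness - the class name, payload type, and sizes here are ours, not from the production code:

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

public class BurstAllocator {
    // Allocate 'count' weak references in one burst, each pointing at a
    // small payload that is also kept strongly reachable for a while.
    static List<WeakReference<byte[]>> allocateBurst(int count,
                                                     List<byte[]> strongRefs) {
        List<WeakReference<byte[]>> weakRefs = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            byte[] payload = new byte[64];  // stand-in for the real object graph
            strongRefs.add(payload);        // strongly referenced for some minutes
            weakRefs.add(new WeakReference<>(payload));
        }
        return weakRefs;
    }

    public static void main(String[] args) {
        List<byte[]> strong = new ArrayList<>();
        List<WeakReference<byte[]>> weak = allocateBurst(10_000, strong);
        System.out.println(weak.size());

        // Hours later the strong references are dropped; from then on the
        // referents are only weakly reachable, and the next CMS remark pause
        // has to discover, clear, and enqueue most of these references.
        strong.clear();
    }
}
```

A harness like this, run in a loop with varying burst sizes, is one way to try to correlate the remark pause with the number of dead versus live weak references.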
Below I have attached the GC log around the biggest CMS pause. Our VM flags are:
-d32 -server -Xmx3000m -Xms2048m -Xloggc:somefile/somewhere
-XX:+UseConcMarkSweepGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseParNewGC
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:PermSize=128m -XX:+CMSParallelRemarkEnabled
-XX:+PrintTenuringDistribution -XX:MaxTenuringThreshold=8 -XX:NewSize=28m -XX:MaxNewSize=28m
-XX:SurvivorRatio=5 -XX:+ParallelRefProcEnabled -XX:+TraceClassUnloading -XX:+PrintHeapAtGC
22427.997: [GC  {Heap before GC invocations=4978:
Heap
par new generation   total 24576K, used 22658K [0x34c00000, 0x36800000, 0x36800000)
  eden space 20480K, 100% used [0x34c00000, 0x36000000, 0x36000000)
  from space 4096K,  53% used [0x36000000, 0x36220b50, 0x36400000)
  to   space 4096K,   0% used [0x36400000, 0x36400000, 0x36800000)
concurrent mark-sweep generation total 2068480K, used 1912742K [0x36800000, 0xb4c00000, 0xf0400000)
concurrent-mark-sweep perm gen total 131072K, used 17676K [0xf0400000, 0xf8400000, 0xf8400000)
22427.998: [ParNew
Desired survivor size 2097152 bytes, new threshold 3 (max 8)
- age   1:     927464 bytes,     927464 total
- age   2:     787824 bytes,    1715288 total
- age   3:     439992 bytes,    2155280 total
: 22658K->2121K(24576K), 0.1776179 secs] 1935401K->1915402K(2093056K) Heap after GC invocations=4979:
Heap
par new generation   total 24576K, used 2121K [0x34c00000, 0x36800000, 0x36800000)
  eden space 20480K,   0% used [0x34c00000, 0x34c00000, 0x36000000)
  from space 4096K,  51% used [0x36400000, 0x36612788, 0x36800000)
  to   space 4096K,   0% used [0x36000000, 0x36000000, 0x36400000)
concurrent mark-sweep generation total 2068480K, used 1913280K [0x36800000, 0xb4c00000, 0xf0400000)
concurrent-mark-sweep perm gen total 131072K, used 17676K [0xf0400000, 0xf8400000, 0xf8400000)
} , 0.1788122 secs]
22428.177: [CMS-concurrent-preclean-start]
22430.743: [CMS-concurrent-preclean: 2.340/2.566 secs]
22430.743: [CMS-concurrent-abortable-preclean-start]
22430.827: [CMS-concurrent-abortable-preclean: 0.080/0.084 secs]
22430.835: [GC[YG occupancy: 16724 K (24576 K)]22430.835: [Rescan (parallel) , 0.0838017 secs]22430.919: [weak refs processing, 2.4906309 secs]22433.410: [class unloading, 0.1508142 secs]22433.561: [scrub symbol & string tables, 0.0329090 secs] [1 CMS-remark: 1913280K(2068480K)] 1930005K(2093056K), 3.0276614 secs]
22433.865: [CMS-concurrent-sweep-start]
22434.079: [GC  {Heap before GC invocations=4979:
Heap
par new generation   total 24576K, used 22601K [0x34c00000, 0x36800000, 0x36800000)
  eden space 20480K, 100% used [0x34c00000, 0x36000000, 0x36000000)
  from space 4096K,  51% used [0x36400000, 0x36612788, 0x36800000)
  to   space 4096K,   0% used [0x36000000, 0x36000000, 0x36400000)
concurrent mark-sweep generation total 2068480K, used 1912786K [0x36800000, 0xb4c00000, 0xf0400000)
concurrent-mark-sweep perm gen total 131072K, used 17677K [0xf0400000, 0xf8400000, 0xf8400000)
22434.080: [ParNew
Desired survivor size 2097152 bytes, new threshold 3 (max 8)
- age   1:    1295336 bytes,    1295336 total
- age   2:     489792 bytes,    1785128 total
- age   3:     602880 bytes,    2388008 total
: 22601K->2345K(24576K), 0.4696339 secs] 1935388K->1915406K(2093056K) Heap after GC invocations=4980:
Heap
par new generation   total 24576K, used 2345K [0x34c00000, 0x36800000, 0x36800000)
  eden space 20480K,   0% used [0x34c00000, 0x34c00000, 0x36000000)
  from space 4096K,  57% used [0x36000000, 0x3624a400, 0x36400000)
  to   space 4096K,   0% used [0x36400000, 0x36400000, 0x36800000)
concurrent mark-sweep generation total 2068480K, used 1913061K [0x36800000, 0xb4c00000, 0xf0400000)
concurrent-mark-sweep perm gen total 131072K, used 17677K [0xf0400000, 0xf8400000, 0xf8400000)
} , 0.4708485 secs]
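To keep an eye on this phase over time, the remark lines can be scraped for the 'weak refs processing' duration. A small hypothetical parser - the regex assumes the JDK 5 -XX:+PrintGCDetails log format shown above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WeakRefTimeParser {
    private static final Pattern WEAK_REFS =
        Pattern.compile("\\[weak refs processing, ([0-9.]+) secs\\]");

    // Returns the weak-refs-processing time in seconds, or -1 if the
    // line does not contain that phase.
    static double weakRefSeconds(String gcLogLine) {
        Matcher m = WEAK_REFS.matcher(gcLogLine);
        return m.find() ? Double.parseDouble(m.group(1)) : -1.0;
    }

    public static void main(String[] args) {
        String line = "22430.919: [weak refs processing, 2.4906309 secs]";
        System.out.println(weakRefSeconds(line));
    }
}
```

Feeding every remark line through this and plotting the values would show whether the slow phase really correlates with the CMS cycles that follow the bulk drop of strong references.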

          1     FAST DUAL  (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      log file switch completion                      2        0.07          0.07
    ********************************************************************************

    In the above, note that the "INSERT INTO T11" is reported as completing in 0 seconds, but the statement actually required roughly 42 seconds once the recursive (trigger-driven) inserts are included - and that would be visible only by manually reviewing the resulting trace file. Also note that the log file switch completion wait is not reported under "INSERT INTO T11" even though it impacted that statement's execution time.
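    A quick arithmetic check makes the lost time visible. Summing the Execute elapsed column that TKPROF reports at each recursion depth (the depth labels in the report suggest a trigger cascade T11 -> T12 -> T13 -> T14, which is an inference from the output above, not something shown in this excerpt):

```python
# Sum the "Execute elapsed" values TKPROF reports at each recursion
# depth; figures are copied from the TKPROF output above.
elapsed_by_depth = {
    0: 0.00,   # INSERT INTO T11 (top-level statement)
    1: 0.09,   # INSERT INTO T12 (recursive depth: 1)
    2: 1.27,   # INSERT INTO T13 (recursive depth: 2)
    3: 41.84,  # INSERT INTO T14 (recursive depth: 3)
}

total = sum(elapsed_by_depth.values())
print(f"total elapsed across all depths: {total:.2f}s")   # ~43.2s
print(f"reported for INSERT INTO T11:   {elapsed_by_depth[0]:.2f}s")
```

    Almost all of the wall-clock time is attributed to the depth-3 statement, which TKPROF lists as a separate SQL statement rather than charging it back to the top-level insert.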
    Back to the possibility of CPU starvation causing lost time: here is another test on an otherwise idle server, followed by the same test with 240 other processes fighting for CPU resources (a simulated load).
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_QUERY_NO_LOAD';
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
    SET TIMING ON
    SELECT
      COUNT(*)
    FROM
      T14;
    SELECT
      SYSDATE
    FROM
      DUAL;
    SQL> SELECT
      2    COUNT(*)
      3  FROM
      4    T14;
      COUNT(*)
       1000000
    Elapsed: 00:00:01.37

    With no load the COUNT(*) completed in 1.37 seconds. The TKPROF output looks like this:
    SQL ID : gy8nw9xzyg3bj
    SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
      NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
      NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_0"),
      NVL(SUM(C2),:"SYS_B_1")
    FROM
    (SELECT /*+ NO_PARALLEL("T14") FULL("T14") NO_PARALLEL_INDEX("T14") */
      :"SYS_B_2" AS C1, :"SYS_B_3" AS C2 FROM "T14" SAMPLE BLOCK (:"SYS_B_4" ,
      :"SYS_B_5") SEED (:"SYS_B_6") "T14") SAMPLESUB
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.01       0.84        523        172          1           1
    total        3      0.01       0.84        523        172          1           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=172 pr=523 pw=0 time=0 us)
       8733   TABLE ACCESS SAMPLE T14 (cr=172 pr=523 pw=0 time=0 us cost=2 size=12 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         3        0.02          0.04
      db file parallel read                           1        0.31          0.31
      db file scattered read                         52        0.03          0.47
    SQL ID : 96g93hntrzjtr
    select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#,
      sample_size, minimum, maximum, distcnt, lowval, hival, density, col#,
      spare1, spare2, avgcln
    from
    hist_head$ where obj#=:1 and intcol#=:2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.06          2          2          0           0
    total        3      0.00       0.06          2          2          0           0
    Misses in library cache during parse: 0
    Optimizer mode: RULE
    Parsing user id: SYS   (recursive depth: 2)
    Rows     Row Source Operation
          0  TABLE ACCESS BY INDEX ROWID HIST_HEAD$ (cr=2 pr=2 pw=0 time=0 us)
          0   INDEX RANGE SCAN I_HH_OBJ#_INTCOL# (cr=2 pr=2 pw=0 time=0 us)(object id 413)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         2        0.02          0.04
    SELECT
      COUNT(*)
    FROM
      T14
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          1          1          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.03       0.43       6558       6983          0           1
    total        4      0.03       0.44       6559       6984          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56 
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=6983 pr=6558 pw=0 time=0 us)
    1000000   TABLE ACCESS FULL T14 (cr=6983 pr=6558 pw=0 time=0 us cost=1916 size=0 card=976987)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.02          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file scattered read                        111        0.02          0.38
      SQL*Net message from client                     2        0.00          0.00

    Note that TKPROF reported only 0.44 seconds of elapsed time for the query, while the SQL*Plus timing indicates that the SQL statement required 1.37 seconds. The SQL optimization (hard parse) with its dynamic sampling query accounted for most of the remaining time, yet TKPROF provided no indication that this was the case.
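    The gap can be reconciled by adding up the recursive statements that TKPROF listed separately (a back-of-the-envelope sketch using the figures from the report above; the small remainder is presumably trace-file writing and client round-trip overhead):

```python
# Reconcile the SQL*Plus wall-clock time with the per-statement
# elapsed times from the no-load TKPROF report above.
count_query  = 0.44  # SELECT COUNT(*) FROM T14, total elapsed
dyn_sampling = 0.84  # dynamic sampling query, recursive depth 1
hist_head    = 0.06  # hist_head$ lookup, recursive depth 2

accounted = count_query + dyn_sampling + hist_head
print(f"accounted for: {accounted:.2f}s of the 1.37s wall clock")
```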
    Now the query with 240 other processes competing for CPU time:
    ALTER SYSTEM FLUSH BUFFER_CACHE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_QUERY_WITH_LOAD';
    SELECT COUNT(*) FROM T14;
    SELECT
      SYSDATE
    FROM
      DUAL;
    SQL> SELECT COUNT(*) FROM T14;
      COUNT(*)
       1000000
    Elapsed: 00:00:59.03

    The query this time required just over 59 seconds. The TKPROF output:
    SQL ID : gy8nw9xzyg3bj
    SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
      NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
      NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_0"),
      NVL(SUM(C2),:"SYS_B_1")
    FROM
    (SELECT /*+ NO_PARALLEL("T14") FULL("T14") NO_PARALLEL_INDEX("T14") */
      :"SYS_B_2" AS C1, :"SYS_B_3" AS C2 FROM "T14" SAMPLE BLOCK (:"SYS_B_4" ,
      :"SYS_B_5") SEED (:"SYS_B_6") "T14") SAMPLESUB
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.28        423         69          0           1
    total        3      0.00       0.28        423         69          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 56     (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=69 pr=423 pw=0 time=0 us)
       8733   TABLE ACCESS SAMPLE T14 (cr=69 pr=423 pw=0 time=0 us cost=2 size=12 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                         54        0.01          0.27
      db file sequential read                         2        0.00          0.00
    SQL ID : 7h04kxpa13w1x
    SELECT COUNT(*)
    FROM
    T14
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.03          1          1          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.06      58.71       6551       6983          0           1
    total        4      0.06      58.74       6552       6984          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 56 
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=6983 pr=6551 pw=0 time=0 us)
    1000000   TABLE ACCESS FULL T14 (cr=6983 pr=6551 pw=0 time=0 us cost=1916 size=0 card=976987)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.02          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file scattered read                        110        1.54         58.59
      SQL*Net message from client                     1        0.00          0.00

    Note in the above that the max wait for the db file scattered read is 1.54 seconds due to the extra CPU competition - about 3 times longer than your max wait for a single block read. On your database platform with single block reads, it might be possible that time spent in the CPU run queue is not always counted in the db file sequential read wait time or the CPU time. What if your operating system is slow at returning timing information to the database instance due to CPU saturation? That might explain the 74 (or 88) lost seconds.
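    In this loaded run, unlike the untraced lost-time cases, the wait interface does account for nearly everything: the elapsed time is almost entirely inflated multiblock read waits (a sketch using the figures from the loaded-run report above):

```python
# Break down where the 58.71s of Fetch elapsed time went under CPU
# competition; figures are from the loaded-run TKPROF report above.
fetch_elapsed  = 58.71  # Fetch elapsed for SELECT COUNT(*) FROM T14
fetch_cpu      = 0.06   # Fetch CPU time
scattered_wait = 58.59  # total "db file scattered read" wait time

unaccounted = fetch_elapsed - fetch_cpu - scattered_wait
print(f"unaccounted time: {unaccounted:.2f}s")  # nearly zero
```

    When the run queue delay instead vanishes from both the CPU and wait columns, as hypothesized for the original poster's platform, the unaccounted remainder would be large rather than near zero.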
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
    Edited by: Charles Hooper on Aug 28, 2009 10:26 AM
    Fixing formatting problems

  • Migration to new 3TB Time Machine/Time Capsule?

    From an earlier posting, I think A Silverstone asked for a step-by-step procedure.
    I have a similar issue I would like a procedure for.  Actually, two identical issues.
    1. I wish to purchase Apple's new 3TB AirPort Time Capsule to replace a 1TB AirPort Extreme/Time Capsule.  I want to migrate the old to the new using Apple processes/procedures without a lot of workarounds.  How can this be done?  Does Apple have a procedure?
    2. I have a desktop with four drives (three 2TB & one 1TB) and wish at least two of them to be Time Machine capable, i.e., backed up every hour like the Time Capsule does.  Can this be done?  Does Apple have a procedure?
    I want to migrate the old Time Capsule data to the new Time Capsule and to two HDs in the desktop.
    The idea is to have backups of a Time Machine backup in the event of a power disruption or a hacker that might impact the Time Capsule.  There are other reasons, such as storage capacity using a daisy-chain process.
    One main problem is capturing the old "sparsebundle" in such a way as to get it into the new unit and onto internal HDs.  How to do this is the question.

    1. Time Machine – Transfer Backup to a New Drive
    Time Machine – Transfer Backup to a New Drive (2)
    You can also use Disk Utility's Restore tab, which is often faster and less prone to errors. Note that it will format the new drive.
    2. Yes. Go to System Preferences > Time Machine, select the option to add a drive, and you can have two; they will alternate with each new backup.
    OS X Mavericks: Use multiple backup disks
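    If you prefer the command line, macOS's tmutil can inspect and add backup destinations. This is a command fragment, not a runnable script: the volume name is a placeholder, and setdestination requires root.

```shell
# List the currently configured Time Machine destinations.
tmutil destinationinfo

# Append a second backup destination (-a adds rather than replaces);
# "/Volumes/Backup2" is a placeholder volume name.
sudo tmutil setdestination -a /Volumes/Backup2
```

    With two destinations configured this way, Time Machine alternates between them just as it does when the second disk is added through System Preferences.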

Maybe you are looking for

  • "Ringtone Can no longer be created..."

    I understand that there is a certain amount of time after purchasing a song or album from the iTunes Store where I can create a ringtone from a track. Tonight I downloaded the remastered Centerfield Album by John Fogarty using my iPhone 4. I went to

  • Installation for Mac OS 10.8.3 Mountain Lion

    After following all the steps I reached this point in Troubleshoot: Progress bar hangs while installing Flash Player. I have uninstalled Flash Player.  Downloaded Flash Player installer and double-clicked the installer.  The Install Adobe Flash Playe

  • I cannot re-open Firefox, or open another window, without rebooting

    I get a message indicating that Firefox is already operating and to either close the program or reboot. I cannot access the supposedly open program, nor can I open another window. I am using Windows 7. The problem repeats itself even after rebooting.

  • Display selection screen details in the header of ALV

    hi.. can someone please help me on how to display the data in the selection screen in the header in ALV? i hope someone can help me..thanks very much..

  • Photo sizes for upload

    Stupid question of the day: If I just drag and drop photos from either iPhoto or a camera memory card directly to a page in iWeb, does iWeb 'downsize' the photo so I'm not really posting a photo a few meg. in size? Or do I have to rescale/resize to r