Cache-flush VM-related performance issue

Dear forum,
I've got a peculiar performance issue going on with the BDB pagecache being flushed to disk. I've managed to reproduce the issue perfectly on three out of three quite different systems that I've tried on, so it is at least quite well-defined.
My usage pattern for the database in question is such that I periodically (perhaps once every 10-60 seconds or so) need to read through a number of values (around 500-2000 or so) from a database containing a rather large number (in the millions, at least) of keys. There are a few writes with every such batch, but not very many (a couple of tens). The keys read in each batch are quite random, and very likely to be completely different from batch to batch. The database is a DB_HASH.
When I do that, BDB seems to dirty a lot of pages in the page cache (which I currently have sized at 512 MB so that pages don't have to be forced out of it), presumably by updating refcounts and other buffer metadata even for reads; all in all, a single batch seems to dirty some 10-40 MB or so of the mmapped cache region. (I check this using pmap -x on Linux.) Note that when I speak of pages and the dirtying of them here, I mean at the VM level, not the BDB level.
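For illustration, here is a minimal sketch (not my actual code; the environment path, key format and batch size are placeholders) of the kind of setup and batch access I'm describing:

#include <stdlib.h>
#include <string.h>
#include <db.h>

/* One periodic batch: ~500-2000 gets of effectively random keys. */
static void run_batch(DB *db, int nkeys)
{
    DBT key, data;
    for (int i = 0; i < nkeys; i++) {
        unsigned long k = (unsigned long)random();
        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        key.data = &k;
        key.size = sizeof(k);
        /* Even read-only gets touch buffer headers scattered across the
         * mpool region, which is what dirties the mmapped pages at the
         * VM level. */
        (void)db->get(db, NULL, &key, &data, 0);
    }
}

int main(void)
{
    DB_ENV *env;
    DB *db;

    db_env_create(&env, 0);
    env->set_cachesize(env, 0, 512 * 1024 * 1024, 1);     /* 512 MB cache */
    env->open(env, "/path/to/env", DB_CREATE | DB_INIT_MPOOL, 0);
    db_create(&db, env, 0);
    db->open(db, NULL, "data.db", NULL, DB_HASH, DB_CREATE, 0);

    run_batch(db, 1000);

    db->close(db, 0);
    env->close(env, 0);
    return 0;
}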
A while after this has happened, the VM comes around and wants to flush the dirty pages to disk, so it batches up writes of a large portion of the dirtied pages (often the entire set, though sometimes only 10-20 MB or so at a time; this detail shouldn't matter) to the backing block device. Since the dirty pages are usually rather interspersed in the region file, such a flush typically requires a couple of thousand write ops, so it can sometimes take 10-20 seconds for the requests to complete.
If the program then tries to dirty any of those pages again while they are waiting to be flushed, which is often the case, the VM blocks it until the page in question has been written out. This means the thread in question may well be blocked for up to 20 seconds, causing quite annoying wait times.
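To make the mechanism concrete, here is a small self-contained test (Linux-only; whether it shows a measurable stall depends on the kernel and the device, so treat it as an illustration rather than a reliable reproducer) that dirties a mapped page, kicks off writeback on it, and then times a second store to the same page:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    const size_t len = 4096;
    int fd = open("stall-demo.dat", O_RDWR | O_CREAT, 0644);
    ftruncate(fd, len);
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    p[0] = 1;                                            /* dirty the page */
    sync_file_range(fd, 0, len, SYNC_FILE_RANGE_WRITE);  /* start async writeback */

    double t0 = now();
    p[1] = 2;   /* re-dirty while the page may still be under writeback */
    printf("re-dirtying took %.6f s\n", now() - t0);

    munmap(p, len);
    close(fd);
    return 0;
}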
How to deal with this problem? I've considered trying to put the region files on tmpfs or so, but that seems like such an excessive measure for a problem which, from what I can tell, should be commonplace.
On a very related note, I've noticed a large discrepancy in the I/O performance between the systems I've tried this on. Two of the systems in question manage to carry out some 200-500 write ops per second on my test load, while the third manages closer to 2000-3000 write ops per second, which makes quite a difference. What makes it very weird is that the faster system uses the exact same hard drive as one of the slower systems. I know this isn't exactly a BDB-specific question, but I thought someone around here might have experience in the matter. All three systems use Linux and S-ATA hard disks (not SSDs), but they use different S-ATA host adapters, different kernel versions and are configured in quite different ways.
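In case it helps anyone compare the systems independently of BDB, a trivial benchmark along these lines (file name, size and op count are arbitrary placeholders, not my actual test load) of scattered, synchronous 4 KiB writes gives a rough write-ops-per-second figure:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const off_t file_size = 512L * 1024 * 1024;   /* same order as the region file */
    const int nops = 2000;
    char buf[4096];
    memset(buf, 0xAB, sizeof(buf));

    int fd = open("writetest.dat", O_RDWR | O_CREAT, 0644);
    ftruncate(fd, file_size);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < nops; i++) {
        off_t off = ((off_t)random() % (file_size / 4096)) * 4096;
        pwrite(fd, buf, sizeof(buf), off);   /* scattered 4 KiB writes */
        fdatasync(fd);                       /* make each one hit the disk */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.0f synchronous random write ops/s\n", nops / secs);
    close(fd);
    return 0;
}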
Thanks for reading my wall of text! I'm sorry for dragging on so long, but I didn't know how to describe the situation more briefly.
Edited by: Dolda2000 on Mar 23, 2013 8:08 AM

As a follow-up on this, it appears that the blocking behavior was introduced in Linux 3.0 to stabilize pages under writeback:
http://lwn.net/Articles/486311/
It seems that the commits that introduced the behavior can be safely patched out, and also that the behavior is due to change in 3.9, but for now, this is not the route I took to solve it.
Rather, I wrote a patch to Berkeley DB that allows me to store the region files in a directory other than the environment root directory, and used it to store them in /dev/shm -- that is, on tmpfs, which avoids writeback of the region files altogether.
If you want the patch, it is here for db4.8 (which is what Debian Stable uses), and here for 5.1, which is what Debian Testing uses.
(For some reason, the hyperlink format suggested by the forum doesn't seem to be working?)
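For completeness: as far as I know, BDB's stock DB_PRIVATE flag is another way to keep the regions out of file-backed memory entirely (they are allocated from process-private memory instead), at the cost of the environment only being usable from a single process. Roughly:

#include <db.h>

int open_private_env(DB_ENV **envp, const char *home)
{
    int ret;

    if ((ret = db_env_create(envp, 0)) != 0)
        return ret;
    (*envp)->set_cachesize(*envp, 0, 512 * 1024 * 1024, 1);
    /* DB_PRIVATE: regions live in per-process memory rather than in
     * mmapped region files, so there is nothing for the VM to write back. */
    return (*envp)->open(*envp, home,
                         DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0);
}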

Similar Messages

  • Language related performance issues in PS HRMS

    Dear experts,
    I was just verifying the performance of a PS HRMS application (Enterprise 9.1) and found that the language I select while logging in drastically affects the end-user performance.
    The verification for all languages was done from the same location (I suspected the issue could be location/network specific).
    Any suggestions/recommendations/insights that would help me understand this issue better would be welcome!
    Best Regards,
    Arun

    I've heard of this issue before. If you check the query being executed, you'll see that when you're working in a non-base language, the related-language records are joined in as well. If you're searching in a big record (e.g. PS_JOB), you can thus end up with a much more costly query. I don't have a solution right away for how to force lookups etc. to skip the unneeded language checks, but knowing the base issue might point you in a good direction to look further.

  • BW database related performance issue..

    hi...
    can anyone tell me how to improve the database performance??

    Hi Nisha,
    Welcome to SDN :)
    Database performance can be improved by doing the following:
    1. Partitioning
    2. Indexing
    3. Archiving unwanted data
    4. Creating Aggregates (consider the DB ratio and KPI ratio)
    5. Checking for parallelization options
    6. Deleting PSA requests
    Please check the below links:
    http://help.sap.com/bp_biv235/BI_EN/documentation/Multi-dimensional_modeling_EN.doc
    and
    Business Intelligence Performance Tuning [original link is broken]
    Thanks,
    Sudhakar.
    Saying thanks in SDN == Assigning Points.

  • BW  Performance issue

    Hi,
    We have done a reorg for four tablespaces (PSAPBTABI, PSAPBTABD, PSAPODSD, PSAPODSI) in our BW 7.0 system. We created new tablespaces for each of the above, like PSAPODSINEW, and took the old tablespaces offline. After doing this activity we are facing a lot of performance problems.
    For example, while our BW guys are running one report for the portal, it executes, but after a long time (2+ hours) the page comes back empty. The report is CL_SQL_RESULT_SET=============CP, and it is not just this report but a lot of reports. We have updated statistics for the tables the report uses, using BRCONNECT, and also rebuilt the indexes for all the tables. But still no luck.
    << Moderator message - The answers given in this forum are by volunteers. Everyone's problem is important. Please do not ask for help quickly. >>
    Regards,
    Balaji Vedagiri
    Edited by: Rob Burbank on Jan 9, 2011 10:40 PM

    Hi,
    For the BASIS related performance issues :
    1. This is mostly DBA related : Keeping an eye on the DB space to monitor the growth of the database and look at space available and needed to prevent shutting down the BW server.
    2. Making sure that backups don't consume a lot of time to interfere with daily operations.
    3. Doing load balancing
    4. Making sure IDocs are flowing freely.
    5. Making sure that shared drives and servers don't overshoot the capacity.
    6. Patching servers and server instances to improve performance.
    The BW consultants can monitor the space, etc. using tcode DB02. The DBAs have their own tools to monitor these parameters.
    Cheers,
    Kedar

  • Performance Issue with PAY_BALANCE_VALUES_V View in Oracle R12

    Dear all,
    We have recently upgraded from 11i (11.5.10.2) to R12 (12.1.3). We are facing an issue with the slow performance of queries where PAY_BALANCE_VALUES_V is used. We have many reports and a lot of logic in Payroll that use this view.
    In 11i this works fine; however, in R12 it takes a very long time. We have not made any configuration changes from 11i to R12.
    Is there any way to optimize the performance, or an alternate way to retrieve the balances data in Payroll?
    Any heads-up would be highly appreciated.
    Thanks,
    Razi

    Hi Razi,
    The balance-related performance issue is described in the following note.
    Note:1494344.1 UK Payslip Generation - Self Service Program Takes Much Time To Complete (Performance Issue)
    This issue was fixed in HR_PF.B RUP6 or patch:14376786. Did you apply this patch? If not, I suggest you apply it.
    Also, HR_PF.B RUP6 has some balance related performance issues.
    If you already have applied HR_PF.B RUP6, I suggest you log a SR with SQL trace.
    Thanks,
    Hideki

  • Strange performance issue with 3510/3511 SAM-FS disk cache

    Hi there!
    I'm running a small SAM-QFS environment and have a strange performance issue on the disk storage part, which somebody here might be able to explain.
    Configuration: one 3510, dual controller, RAID-5 9+1, one hot spare and one disk not configured for whatever reason. The R5 logical drive hosts a 150GB LUN for SAM-QFS metadata (mm in SAM-FS speak) and a 1TB LUN for data (mr in SAM-FS speak). Further, there are two small LUNs (2GB, 100GB) for some other purpose. Those two LUNs have nearly no I/O. All disks are SUN146G. Host connection is 2GBit, multipathing enabled and working.
    Then the disk cache became too small, and the customer added a 3511 expansion unit with SUN300G disks. One logical drive is a RAID-1, 1+1, used for NetBackup catalog. The other is a RAID-5, 8+1, providing two LUNs: 260GB SAM-FS metadata (mm) and 1.999TB SAM-FS data (mr).
    For SAM-FS, the LUNs form two file systems: one "residing" in the 3510, the other "residing" in the 3511 expansion. Cabling is according to the manual and has been checked several times by several independent people. The operating system is Solaris 10, and the hardware is a V880.
    The problem we observe: SAM-FS I/O on LUNs on disks inside the 3510 is fine. With iostat, I see 100MB/s read and 50MB/s write at the same time. On the SAM-FS file system running on the two LUNs in the 3511, the limit seems to be about 40MB/s read/write. Both SAM-FS file systems are configured the same with regard to block size.
    In case I have activity on both SAM-FS file systems, I see 100MB/s+ on the LUN running inside the controller shelf and another 40MB/s on the disks running in the 3511 expansion chassis. So the controller is easily capable of handling 150MB/s.
    Cache settings in the 3510 controller are default I think (wasn't installed by me), batteries are fine.
    Is this 40MB/s we experience a limitation by the expansion shelf? Don't think so. Anybody has any ideas on this? What parameters to check or to change? Any hint appreciated. I can also provide further details if needed. Thank you.
    wolfgang

    SUN300G disks sound like 300GB FC disks.
    Depending on how many files are in the SAMFS file system, sharing the mm and mr devices on the same RAID array can be a pretty horrible idea. In my opinion and experience, it's almost always better to NEVER put more than one LUN on a RAID array. Period. Putting more than one LUN on an array results in IO contention on that array. And large, unnaturally configured (9+1? Why?) RAID arrays will have problems from the start.
    What are the block sizes used on the RAID arrays? It wouldn't surprise me to see that the RAID array on the expansion tray has a very large block size. Larger block sizes are, in general, not better. Especially for SAMFS metadata - which IIRC is something like 8k or 16k blocks.
    I suspect what is happening is most of the metadata updates are going to the mm device on the new array, contending with the IO operations on the file data.
    How much space is left on each mm device? What does "iostat -sndxz 2" show when you're having the IO problems?

  • Cache and performance issue in browsing SSAS cube using Excel for first time

    Hello Group Members,
    I am facing a cache and performance issue for the first time when I try to open a SSAS cube connection using Excel (using the Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On end users' systems (8 GB RAM), the first time, it takes 10 minutes to open the cube. From the next run onwards, it opens up quickly, within 10 secs.
    We have a daily ETL process running on high-end servers. The configuration of the dedicated SSAS cube server is 8 cores, 64GB RAM. In total we have 4 cubes - 3 of which get a full cube refresh and 1 an incremental refresh. We have seen that after the daily cube refresh, it takes 10-odd minutes to open the cube on end users' systems. From the next time onwards, it opens up really fast, within 10 secs. After the cube refresh, on server systems (16 GB RAM), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    Best Regards, Arka Mitra.

    Thanks Richard and Charlie,
    We have implemented the solution/suggestions in our DEV environment and we have seen a definite improvement. We are waiting for this to be deployed in the UAT environment to note down the actual performance and time improvement while browsing the cube for the first time after the daily cube refresh.
    Guys,
    This is what we have done:
    We have 4 cube databases and each cube db has 1-8 cubes.
    1. We are doing daily cube refresh using SQL jobs as follows:
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
    <Object>
    <DatabaseID>FINANCE CUBES</DatabaseID>
    </Object>
    <Type>ProcessFull</Type>
    <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
    </Parallel>
    </Batch>
    2. Next we are creating a separate SQL job (Cache Warming - Profitability Analysis) for cube cache warming for each single cube in each cube db like:
    CREATE CACHE FOR [Profit Analysis] AS
    {[Measures].members}
    *[TIME].[FINANCIAL QUARTER].[FINANCIAL QUARTER]
    3. Finally after each cube refresh step, we are creating a new step of type T-SQL where we are calling these individual steps:
    EXEC dbo.sp_start_job N'Cache Warming - Profit Analysis';
    GO
    I will update the post after I receive the actual improvement from the UAT/Production environment.
    Best Regards, Arka Mitra.

  • Performance issues related to logging (ForceSingleTraceFile option)

    Dear SDN members,
    I have a question about logging.
    I would like to place my logs/traces for every application in different log files. To do this you have to set the ForceSingleTraceFile option to NO (in the config tool).
    But a presentation from SAP, named "SAP Web Application Server 6.40: SAP Logging and Tracing API", states:
    - All traces by default go to the default trace file.
         - Good for performance
              - On production systems, this is a must!!!
    - Hard to find your trace messages
    - Solution: Configure development systems to pipe traces and logs for applications to their own specific trace file
    But I also want the logs/traces at our customers' (production) systems in separate files. So my question is:
    What performance issues do we face if we set the ForceSingleTraceFile option to NO at our customers?
    and
    If we turn the ForceSingleTraceFile to NO will the logs/traces of the SAP applications also go to different files? If so, then I can imagine that it will be difficult to find the logs of the different SAP applications.
    I hope that someone can clarify the working of the ForceSingleTraceFile setting.
    Kind regards,
    Marinus Geuze

    Dear Marinus,
    The performance issues with extensive logging are related to high memory usage (for the concatenation/generation of the messages that are written to the log files) and, as a result, increased garbage collection frequency, as well as high disk I/O and CPU overhead for the actual logging.
    Writing to the same trace file can become a bottleneck if logging is extensive.
    Anyway, it is not related to whether you write the logs to the default trace or a separate location. I believe the recommendation in the documentation is just about using the standard logging APIs of the SAP Java Server, because they are well optimized.
    Best regards,
    Sylvia

  • Performance issues -- related to printing

    Hi All,
    I am having production system performance issues related to printing. End users are reporting that printing is slow for almost all printers. We have more than 40 to 50 printers in the landscape.
    As per my primary investigation I didn't find any issues in the TSP01 & TSP02 tables. But I can see that the TST01 and TST03 tables have a very large number of entries (more than a lakh). I don't have any idea about these tables. Is there anything related to these tables that could cause the printing slowness, or are there other factors contributing to this printing issue? Please advise.
    thanks in advance

    Hi,
    Check the below link...
    http://help.sap.com/saphelp_nw70/helpdata/en/c1/1cca3bdcd73743e10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/fc/04ca3bb6f8c21de10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/86/1ccb3b560f194ce10000000a114084/content.htm
    TemSe cannot administer objects that require more than two gigabytes of storage space, regardless of whether the objects are stored in the database or in the file system. Spool requests of a size greater than two gigabytes must therefore be split into several smaller requests.
    It is enough if you perform the regular background jobs and Temse consistency checks for the tables.
    This will help in controlling the capacity problems.
    If you change the profile parameter rspo/store_location to the value 'G', this will make the performance better. The disadvantages are that TemSe data must be backed up and restored separately from the database using operating system tools, and in the event of problems, it can be difficult to restore consistency between the data held in files and the TemSe's object management in the database. Also, you have to take care of the hard disk requirements, because in some cases spool data may occupy several hundred megabytes of disk storage. If you use the G option, you must ensure that enough free disk space is available for spool data.
    Regards,
    Yoganand.V

  • Performance issues related to an advanced table inside an advanced table

    Hi,
    Can anybody let me know what the performance issues are with an advanced table inside an advanced table? I am having a big performance issue while implementing an advanced table inside an advanced table; my inner table is rendering very slowly.
    Thanks

    A table inside a table is a performance-eating structure :), because your VL will cache both parent and child VO rows in the JVM. The only way to improve the performance is to tune your SQL queries.
    --Mukul

  • Performance issue related to OWM? Oracle version is 10.2.0.4

    The optimizer picks a hash join instead of a nested loop for queries with OWM tables, which causes full table scans everywhere. I wonder if this happens in your databases as well, or just ours. If you have seen this and know what to do to solve it, that would be greatly appreciated! I did log an SR with Oracle, but it usually takes months to reach a solution.
    Thanks for any possible answers!

    Ha, sounded like you knew what I was talking about :)
    I thought the issue must have had something to do with OWM, because some complicated queries have no performance issue when they use regular tables. There's a batch job which used to take an hour to run and now takes 4.5 hours; I rewrote the job to move the queries from OWM to regular tables, and it takes 20 minutes. However, today when I tried to get explain plans for some queries involving regular tables with a large amount of data, I got the same full-table-scan problem with a hash join. So I'm convinced it probably is not OWM. But the patch for removing the bug fix didn't help with the situation here either.
    I was hoping that other companies might have this problem and had a way to work around. If it's not OWM, I'm surprised that this only happens in our system.
    Thanks for the reply anyway!

  • Concurrent program performance Issue

    Hi,
    We are currently experiencing a performance issue in one of the concurrent programs related to the HR module. The concurrent request is currently completing in about 3 hours.
    We have obtained a trace for the concurrent program.
    Please help me analyze the cause of the performance issue from the trace file.
    Trace file below:
    BEGIN SLC_PYINF_USMONACCROH_PKG.SLC_421_HANDLE_OUTBOUND(:errbuf,:rc,:A0,:A1,
    :A2,:A3,:A4,:A5,:A6,:A7,:A8,:A9,:A10,:A11); END;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 76.08 9602.16 700828 1330818 663813 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 76.08 9602.16 700828 1330818 663813 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 3 0.00 0.00
    SQL*Net message from client 3 0.00 0.00
    PL/SQL lock timer 969 9.83 9485.16
    UPDATE HRAPPS.SLC_PYINF_USMONACCRO_STG SET PROCESS_STATUS = 2
    WHERE
    CONC_REQUEST_ID = :B2 AND SET_SEQUENCE_NUM = :B1 AND PROCESS_STATUS = 1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 24.83 45.67 145127 695479 602714 560730
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 24.83 45.67 145127 695479 602714 560730
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    0 UPDATE SLC_PYINF_USMONACCRO_STG (cr=684898 pr=134556 pw=0 time=44759708 us)
    1135266 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=694708 pr=124937 pw=0 time=6874212 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 15622 1.43 13.94
    db file sequential read 25578 0.52 14.30
    latch: cache buffers lru chain 3 0.00 0.00
    DELETE FROM SLC_PYINF_USMONACCRO_ARC
    WHERE
    EXTRACT_DATE<TRUNC(SYSDATE)-60
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 7.41 15.05 87598 87668 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 7.41 15.06 87598 87668 0 0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    0 DELETE SLC_PYINF_USMONACCRO_ARC (cr=87668 pr=87598 pw=0 time=15053606 us)
    0 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_ARC (cr=87668 pr=87598 pw=0 time=15053595 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 3 0.00 0.00
    db file scattered read 11025 0.61 13.21
    SELECT COUNT(*)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B1
    call count cpu elapsed disk query current rows
    Parse 2 0.00 0.00 0 0 0 0
    Execute 2 0.00 0.00 0 0 0 0
    Fetch 2 10.14 10.23 116633 123540 0 2
    total 6 10.14 10.23 116633 123540 0 2
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=61770 pr=58317 pw=0 time=5290475 us)
    560730 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=61770 pr=58317 pw=0 time=1689204 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 15646 0.27 6.24
    db file sequential read 625 0.00 0.01
    SELECT COUNT(*)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B1 AND
    PROCESS_STATUS = 2
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 5.20 8.32 51482 69842 0 1
    total 3 5.20 8.32 51482 69842 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=69842 pr=51482 pw=0 time=8323369 us)
    560730 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=69842 pr=51482 pw=0 time=2811304 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 6514 0.30 6.09
    db file sequential read 114 0.00 0.02
    SELECT MAX(SET_SEQUENCE_NUM)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 5.34 6.63 58318 61770 0 1
    total 3 5.34 6.63 58318 61770 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=61770 pr=58318 pw=0 time=6639527 us)
    560730 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=61770 pr=58318 pw=0 time=2250410 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 7820 0.30 4.46
    db file sequential read 313 0.00 0.05
    SELECT COUNT(*)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B2 AND
    SET_SEQUENCE_NUM = :B1 AND PROCESS_STATUS = 1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 4.99 4.88 58315 61770 0 1
    total 3 4.99 4.88 58315 61770 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=61770 pr=58315 pw=0 time=4887337 us)
    560730 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=61770 pr=58315 pw=0 time=1688451 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 7824 0.00 3.02
    db file sequential read 313 0.00 0.00
    SELECT COUNT(*)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B1 AND
    PROCESS_STATUS = 1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 4.98 4.87 58318 61770 0 1
    total 3 4.98 4.87 58318 61770 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=61770 pr=58318 pw=0 time=4872548 us)
    560730 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=61770 pr=58318 pw=0 time=1688407 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 7821 0.00 2.98
    db file sequential read 312 0.00 0.00
    SELECT COUNT(*)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B1 AND
    PROCESS_STATUS = -1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 4.45 4.36 58317 61770 0 1
    total 3 4.45 4.36 58317 61770 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=61770 pr=58317 pw=0 time=4369473 us)
    0 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=61770 pr=58317 pw=0 time=4369425 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 7823 0.00 2.98
    db file sequential read 312 0.00 0.00
    SELECT COUNT(*)
    FROM
    HRAPPS.SLC_PYINF_USMONACCRO_STG WHERE CONC_REQUEST_ID = :B1 AND
    PROCESS_STATUS < 0
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 4.14 4.24 51481 61770 0 1
    total 3 4.14 4.24 51481 61770 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1 SORT AGGREGATE (cr=61770 pr=51481 pw=0 time=4243020 us)
    0 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_STG (cr=61770 pr=51481 pw=0 time=4242968 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 6537 0.06 2.90
    db file sequential read 104 0.00 0.00
    DELETE FROM SLC_PYINF_USMONACCRO_GLI_ARC
    WHERE
    EXTRACT_DATE<TRUNC(SYSDATE)-60
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.63 2.52 7681 7689 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.63 2.52 7681 7689 0 0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    0 DELETE SLC_PYINF_USMONACCRO_GLI_ARC (cr=7689 pr=7681 pw=0 time=2521592 us)
    0 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_GLI_ARC (cr=7689 pr=7681 pw=0 time=2521583 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 1 0.00 0.00
    db file scattered read 976 1.00 2.36
    UPDATE HRAPPS.SLC_PYINF_USMONACCRO_GLI_STG SET PROCESS_STATUS = 2
    WHERE
    CONC_REQUEST_ID = :B1 AND PROCESS_STATUS = 1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 1.89 2.25 5863 16125 60963 52309
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 1.89 2.25 5863 16125 60963 52309
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    0 UPDATE SLC_PYINF_USMONACCRO_GLI_STG (cr=11787 pr=1273 pw=0 time=1332023 us)
    122679 TABLE ACCESS FULL SLC_PYINF_USMONACCRO_GLI_STG (cr=16291 pr=5859 pw=0 time=48501241 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file scattered read 745 0.01 0.76
    db file parallel read 1 0.00 0.00
    db file sequential read 5 0.00 0.00
    SELECT B.ATTRIBUTE1 ,B.ATTRIBUTE2 ,B.ATTRIBUTE3 ,T.FLEX_VALUE_MEANING ,
    T.DESCRIPTION
    FROM
    FND_FLEX_VALUES_TL T ,FND_FLEX_VALUES B WHERE B.FLEX_VALUE_ID =
    T.FLEX_VALUE_ID AND T.LANGUAGE = USERENV ('LANG') AND TRIM(UPPER
    (B.FLEX_VALUE)) = TRIM(UPPER (:B1 )) AND B.ENABLED_FLAG = 'Y' AND UPPER
    (B.VALUE_CATEGORY) = UPPER ('SLCHR_INTERFACE_CLEANUP')
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 2 0.00 0.00 0 0 0 0
    Fetch 2 0.25 0.86 1640 3286 0 2
    total 5 0.25 0.86 1640 3286 0 2
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    2 NESTED LOOPS (cr=3286 pr=1640 pw=0 time=866461 us)
    2 TABLE ACCESS FULL FND_FLEX_VALUES (cr=3280 pr=1637 pw=0 time=848331 us)
    2 TABLE ACCESS BY INDEX ROWID FND_FLEX_VALUES_TL (cr=6 pr=3 pw=0 time=18101 us)
    2 INDEX UNIQUE SCAN FND_FLEX_VALUES_TL_U1 (cr=4 pr=2 pw=0 time=9705 us)(object id 849241)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 4 0.00 0.02
    db file scattered read 208 0.30 0.71
    SELECT PHASE_CODE, STATUS_CODE, COMPLETION_TEXT, PHASE.LOOKUP_CODE,
    STATUS.LOOKUP_CODE, PHASE.MEANING, STATUS.MEANING
    FROM
    FND_CONCURRENT_REQUESTS R, FND_CONCURRENT_PROGRAMS P, FND_LOOKUPS PHASE,
    FND_LOOKUPS STATUS WHERE PHASE.LOOKUP_TYPE = :B3 AND PHASE.LOOKUP_CODE =
    DECODE(STATUS.LOOKUP_CODE, 'H', 'I', 'S', 'I', 'U', 'I', 'M', 'I',
    R.PHASE_CODE) AND STATUS.LOOKUP_TYPE = :B2 AND STATUS.LOOKUP_CODE =
    DECODE(R.PHASE_CODE, 'P', DECODE(R.HOLD_FLAG, 'Y', 'H',
    DECODE(P.ENABLED_FLAG, 'N', 'U', DECODE(SIGN(R.REQUESTED_START_DATE -
    SYSDATE),1,'P', R.STATUS_CODE))), 'R', DECODE(R.HOLD_FLAG, 'Y', 'S',
    DECODE(R.STATUS_CODE, 'Q', 'B', 'I', 'B', R.STATUS_CODE)), R.STATUS_CODE)
    AND (R.CONCURRENT_PROGRAM_ID = P.CONCURRENT_PROGRAM_ID AND
    R.PROGRAM_APPLICATION_ID= P.APPLICATION_ID ) AND REQUEST_ID = :B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 971 0.25 0.16 0 0 0 0
    Fetch 971 0.53 0.65 0 13605 0 971
    total 1943 0.78 0.81 0 13605 0 971
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    971 TABLE ACCESS BY INDEX ROWID FND_LOOKUP_VALUES (cr=17489 pr=0 pw=0 time=877481 us)
    2913 NESTED LOOPS (cr=16518 pr=0 pw=0 time=1643550 us)
    971 NESTED LOOPS (cr=11663 pr=0 pw=0 time=658551 us)
    971 NESTED LOOPS (cr=5837 pr=0 pw=0 time=95374 us)
    971 TABLE ACCESS BY INDEX ROWID FND_CONCURRENT_REQUESTS (cr=2924 pr=0 pw=0 time=63054 us)
    971 INDEX UNIQUE SCAN FND_CONCURRENT_REQUESTS_U1 (cr=1953 pr=0 pw=0 time=43874 us)(object id 240792)
    971 TABLE ACCESS BY INDEX ROWID FND_CONCURRENT_PROGRAMS (cr=2913 pr=0 pw=0 time=28198 us)
    971 INDEX UNIQUE SCAN FND_CONCURRENT_PROGRAMS_U1 (cr=1942 pr=0 pw=0 time=17956 us)(object id 849182)
    971 TABLE ACCESS BY INDEX ROWID FND_LOOKUP_VALUES (cr=5826 pr=0 pw=0 time=558105 us)
    971 INDEX RANGE SCAN FND_LOOKUP_VALUES_U1 (cr=4855 pr=0 pw=0 time=539171 us)(object id 906518)
    971 INDEX RANGE SCAN FND_LOOKUP_VALUES_U1 (cr=4855 pr=0 pw=0 time=172115 us)(object id 906518)
    SELECT MAX(LT.SECURITY_GROUP_ID)
    FROM
    FND_LOOKUP_TYPES LT WHERE LT.VIEW_APPLICATION_ID = :B2 AND LT.LOOKUP_TYPE =
    :B1 AND LT.SECURITY_GROUP_ID IN (0,
    TO_NUMBER(DECODE(SUBSTRB(USERENV('CLIENT_INFO'),55,1), ' ', '0', NULL, '0',
    SUBSTRB(USERENV('CLIENT_INFO'),55,10))))
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1945 0.11 0.11 0 0 0 0
    Fetch 1945 0.18 0.10 0 3890 0 1945
    total 3891 0.29 0.21 0 3890 0 1945
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Rows Row Source Operation
    1945 SORT AGGREGATE (cr=3890 pr=0 pw=0 time=142954 us)
    1945 FIRST ROW (cr=3890 pr=0 pw=0 time=96520 us)
    1945 INDEX RANGE SCAN (MIN/MAX) FND_LOOKUP_TYPES_U1 (cr=3890 pr=0 pw=0 time=89938 us)(object id 906517)
    INSERT INTO HRAPPS.SLC_HRINF_INT_SUMMARY (INT_SUMMARY_ID,
    INT_SUMMARY_CREATE_DATE ,INT_SUMMARY_LAST_UPDATE_DATE, INTERFACE_NAME ,
    HANDLER_CONC_REQUEST_ID, INT_CONC_REQUEST_ID ,SET_SEQUENCE_NUMBER,
    SET_RECORD_COUNT, INT_FROM_DATE ,INT_TO_DATE, INT_STATUS_1_STATE,
    INT_STATUS_1_MESSAGE ,INT_STATUS_1_STARTED, INT_STATUS_1_COMPLETED ,
    INT_STATUS_1_SUCCESS_COUNT, INT_STATUS_1_ERROR_COUNT ,INT_STATUS_2_STATE,
    INT_STATUS_2_MESSAGE ,INT_STATUS_2_STARTED, INT_STATUS_2_COMPLETED ,
    INT_STATUS_2_SUCCESS_COUNT, INT_STATUS_2_ERROR_COUNT ,INT_STATUS_3_STATE,
    INT_STATUS_3_MESSAGE ,INT_STATUS_3_STARTED, INT_STATUS_3_COMPLETED ,
    INT_STATUS_3_SUCCESS_COUNT, INT_STATUS_3_ERROR_COUNT ,INT_STATUS_4_STATE,
    INT_STATUS_4_MESSAGE ,INT_STATUS_4_STARTED, INT_STATUS_4_COMPLETED ,
    INT_STATUS_4_SUCCESS_COUNT, INT_STATUS_4_ERROR_COUNT ,INT_STATUS_5_STATE,
    INT_STATUS_5_MESSAGE ,INT_STATUS_5_STARTED, INT_STATUS_5_COMPLETED ,
    INT_STATUS_5_SUCCESS_COUNT, INT_STATUS_5_ERROR_COUNT )
    VALUES
    (:B7 , :B6 , :B6 , :B5 , :B4 , NULL , NULL, NULL, :B3 , :B2 , :B1 , NULL ,
    NULL, NULL , NULL, NULL , :B1 , NULL , NULL, NULL , NULL, NULL , :B1 , NULL
    , NULL, NULL , NULL, NULL , :B1 , NULL , NULL, NULL , NULL, NULL , :B1 ,
    NULL , NULL, NULL , NULL, NULL )
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.01 0.12 12 1 12 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.01 0.12 12 1 12 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 70 (recursive depth: 1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 12 0.02 0.12
    Thanks & Regards,
    Rup

    Hi;
    Please check our previous topic
    Concurrent manager real time tune
    Oracle apps database
    tune concurrent manager
    Oracle apps database
    Concurrent Manager very slow
    Concurrent Manager very slow........
    Regards,
    Helios

  • Captivate 6 Output intermittent performance issues

    I am getting feedback from users that some of my Captivate e-learning has intermittent performance issues when being run via our LMS. I am hearing of slow-downs and pausing between slides and questions in quizzes. I have seen this in action and I cannot pinpoint what is causing it. The problem doesn't affect most users, but some report these extreme slow-downs.
    I have ruled out any specific browser or OS. Does anyone have similar experiences and any ideas about what may be causing it? I want to rule out Captivate if possible and potentially point the finger at our LMS. However, I don't know how the content within Captivate is cached/downloaded when being played.
    Does anyone know how Captivate operates when run via an LMS? Does it download in full, or buffer content and download progressively as the person moves through the content? If I can work this out I may be able to pinpoint the problem.
    Sorry if this is a bit vague, any help or additional experiences would be greatly appreciated.
    Jay

    There is also network latency to consider, in addition to server latency. In our building, network bandwidth availability can surge or slow down depending on which subnetwork we're on, and on whether there are several VoIP or web video conferencing sessions taking up bandwidth. If you're distributing to geographically separated organizations, then you may also experience latency issues related to the local infrastructure.
    Just a thought...

  • Anyone using a 12 Core Mac Pro? I have HORRIBLE performance issues .. Help!

    After the latest 10.7.4 Mac OS X update I have extremely horrible performance issues with AE ... and they were not so great before the update.
    It is still stabilizing... but a 1:19 clip in SD is taking 12 HOURS to analyze and stabilize!!!
    The 12 cores are barely being used and this problem has been an issue since I purchased the suite over a year ago.
    Does anyone else have problems using AE on their 12 CORE MAC PRO?
    REPLY ONLY IF YOU HAVE A 12 CORE MAC PRO PLEASE.
    There must be a problem because since the update ... Adobe Encore is PERFECT .. and ALL 12 CORES MAX OUT and the encoding is quick!!
    I also have major problems between PP and AE using Dynamic link .. and slow renders in PP.
    Everything else works fine .. other apps / other vendors.
    I am calling Adobe today.

    Thank you for your time.
    I am using a 12 Core 2.93 GHz with 32 GB RAM and an NVidia GTX 285.
    I also have a Areca 1212 PCI RAID Card. ( NOTE: I HAVE A SERIOUS RAID DRIVER PROBLEM NOW. THE RAID IS DISCONNECTED )
    Mac OS X 10.7.4
    Adobe Master Suite 5.5
    The Apple "Console" app logs a whole lot of these three Adobe related errors:
    1. 5/18/12 2:10:16.260 AM aeselflink: CFURLCreateWithString was passed this invalid URL string: '/System/Library/Frameworks/System.framework' (a file system path instead of an URL string). The URL created will not work with most file URL functions. CFURLCreateWithFileSystemPath or CFURLCreateWithFileSystemPathRelativeToBase should be used instead.
    2.) 5/18/12 2:10:16.331 AM aeselflink: -[NSMenu menuID]: unrecognized selector sent to instance 0x1183100e0
    5/18/12 2:10:17.333 AM aeselflink: -[NSMenu menuID]: unrecognized selector sent to instance 0x115625740
    3.) 5/18/12 2:11:18.596 AM [0x0-0x9b09b].com.adobe.aerendercore: You have at least one output module template that refers to a missing output plug-in.  Please check your Output Module Templates.
    Only half of my hyperthreaded processors are active when using the AE "Warp Stabilizer". This issue was addressed before in the forums. There was no solution. I don't know if it is any better in CS6.
    Also, the automatic saving of all linked compositions while using the Dynamic Link feature between PP and AE causes huge, unworkable waiting times... unbearable. My guess is that the new "Global Performance Cache" fixes this, which I consider a fix for a terrible problem, but they sell it as a feature (I'll go into that later).
    Question:
    What RAID card are you using?
    Do you use the Warp Stabilizer?

  • Performance issue when a Direct I/O option is selected

    Hello Experts,
    One of my customers has a performance issue when the Direct I/O option is selected. They report an increase in memory usage when the Direct I/O storage option is selected compared to the Buffered I/O option.
    There are two applications on the server of type BSO. When using Buffered I/O, they experienced a high level of read and write I/Os. Using Direct I/O reduces the read and write I/Os, but dramatically increases memory usage.
    Other Information -
    a) Environment Details
    HSS - 9.3.1.0.45, AAS - 9.3.1.0.0.135, Essbase - 9.3.1.2.00 (64-bit)
    OS: Microsoft Windows x64 (64-bit) 2003 R2
    b) What is the memory usage when Buffered I/O and Direct I/O is used? How about running calculations, database restructures, and database queries? Do these processes take much time for execution?
    Application 1: Buffered 700MB, Direct 5GB
    Application 2: Buffered 600MB to 1.5GB, Direct 2GB
    Calculation times may increase from 15 minutes to 4 hours. Same with restructure.
    c) What is the current Database Data cache; Data file cache and Index cache values?
    Application 1: Buffered (Index 80MB, Data 400MB), Direct (Index 120MB; Data File 4GB, Data 480MB).
    Application 2: Buffered (Index 100MB, Data 300MB), Direct (Index 700MB, Data File 1.5GB, Data 300MB)
    d) What is the total size of the ess0000x.pag files and ess0000x.ind files?
    Application 1: Page File 20GB, Index 1.7GB.
    Application 2: Page 3GB, index 700MB.
    Any suggestions on how to improve the performance when Direct I/O is selected? Any performance documents relating to above scenario would be of great help.
    Thanks in advance.
    Regards,
    Sudhir

    Sudhir,
    Do you work at a help desk, or are you a consultant? You ask such a varied range of questions that I think the former. If you do work at a help desk, don't you have a next level of support that could help you? If you are a consultant, I suggest getting together with another consultant who actually knows more. You might also want to close some of your questions; you have 24 open, and perhaps give points to those who helped you.
