Poor performance of the BDB cache

I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
Overview
Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (e.g. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
The Database
Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
sequences (maintains record IDs for all other tables)
urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as the number of hits, transfer size, etc.
urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups
urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID linking it to the primary database; this database is used to order URLs in the report by the number of hits
The remaining databases are listed here just to show the database structure; they are the same in nature as those described above. The legend is as follows: (s) indicates a secondary database, (p) a primary database, and (sf) a filtered secondary database (using DB_DONOTINDEX).
urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
downloads (p), downloads.values (s), downloads.xfer (s)
agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
search (p), search.values (s), search.hits (s)
users (p), users.values (s), users.hits (s), users.groups.hits (sf)
errors (p), errors.values (s), errors.hits (s)
dhosts (p), dhosts.values (s)
statuscodes (HTTP status codes)
totals.daily (31 days)
totals.hourly (24 hours)
totals (one record)
countries (a couple of hundred countries)
system (one record)
visits.active (active visits - variable length)
downloads.active (active downloads - variable length)
All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
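For readers who have not used multiple databases in one file before, here is a minimal sketch of how a primary and one of its secondaries can be opened in a single physical file with the BDB C++ API. The file name, record layout and key-extraction logic are illustrative assumptions, not the actual SSW code:
#include <db_cxx.h>

// Secondary key extractor: derive the hit-count key from a primary record.
// Assumes (for illustration only) that the hit counter is the first field.
static int extract_hits(Db*, const Dbt*, const Dbt* pdata, Dbt* skey)
{
    skey->set_data(pdata->get_data());
    skey->set_size(sizeof(u_int32_t));
    return 0;
}

// Open the "urls" primary and the "urls.hits" secondary, both stored in the
// same physical file, and link them so BDB maintains the secondary index.
static void open_url_databases(DbEnv& env, Db& urls, Db& urls_hits)
{
    urls.open(NULL, "webalizer.db", "urls", DB_BTREE, DB_CREATE, 0);

    urls_hits.set_flags(DB_DUPSORT);   // many URLs share the same hit count
    urls_hits.open(NULL, "webalizer.db", "urls.hits", DB_BTREE, DB_CREATE, 0);

    urls.associate(NULL, &urls_hits, extract_hits, 0);
}
Both Db objects are assumed to have been constructed with the environment handle (e.g. Db urls(&env, 0)). Because everything lives in one file, renaming or backing up the month's data only involves that single file.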
Database Size
One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
urls (p):
8192 Underlying database page size
2031 Overflow key/data size
1471636 Number of unique keys in the tree
1471636 Number of data items in the tree
193 Number of tree internal pages
577738 Number of bytes free in tree internal pages (63% ff)
55312 Number of tree leaf pages
145M Number of bytes free in tree leaf pages (67% ff)
2620 Number of tree overflow pages
16M Number of bytes free in tree overflow pages (25% ff)
urls.hits (s):
8192 Underlying database page size
2031 Overflow key/data size
2 Number of levels in the tree
823 Number of unique keys in the tree
1471636 Number of data items in the tree
31 Number of tree internal pages
201970 Number of bytes free in tree internal pages (20% ff)
45 Number of tree leaf pages
243550 Number of bytes free in tree leaf pages (33% ff)
2814 Number of tree duplicate pages
8360024 Number of bytes free in tree duplicate pages (63% ff)
0 Number of tree overflow pages
The Testbed
I'm running all these tests using the latest BDB (v4.6), built from source, on a Win2K3 server (release build). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like the one described above at a speed of 20K records/sec.
BDB is configured as a single file in a BDB environment, using private memory, since only one process ever has access to the database.
I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
I also used a code profiler to analyze SSW and BDB performance.
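For reference, this is roughly what such a private, single-process environment setup looks like in the C++ API; the cache size and home directory are illustrative, not the values SSW uses:
#include <db_cxx.h>

// Open a process-private environment: only the memory pool (cache) is
// initialized - no locking or transactions - and the cache lives in
// process-private memory rather than shared memory.
static void open_private_env(DbEnv& env, const char* home)
{
    env.set_cachesize(0, 64 * 1024 * 1024, 1);   // 64MB cache, for example
    env.open(home, DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0);
}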
The Problem
Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
Overall, the 20K rec/sec quoted above drops to 2K rec/sec. And that happens after most of the analysis has already been done, while just trying to save the database.
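For context, the traversal mentioned above is a plain cursor walk; a minimal sketch follows (error handling omitted). Once the database is much larger than the cache, nearly every DB_NEXT step below turns into a random read from disk, which is consistent with the slowdown described here:
#include <db_cxx.h>

// Walk every key/data pair in one database, e.g. to generate a report.
static void traverse(Db& db)
{
    Dbc* cursor = NULL;
    db.cursor(NULL, &cursor, 0);

    Dbt key, data;
    while (cursor->get(&key, &data, DB_NEXT) == 0) {
        // process one record here
    }
    cursor->close();
}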
The Tests
SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
In memory mode, the entire BDB database is dumped to disk at the end of the run. At first, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see that the OS cache was left alone, but once the BDB cache was filled up, processing all but stopped.
Then I flipped the options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance: even though the OS cache was filling up, it was also being flushed, and eventually SSW finished processing this log at about 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
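For reference, both flag combinations mentioned above are set on the environment before it is opened; a sketch of what that looks like in the 4.6 C++ API (which combination helps clearly depends on the workload):
#include <db_cxx.h>

// DB_DIRECT_DB/DB_DIRECT_LOG bypass the OS file system cache;
// DB_DSYNC_DB/DB_DSYNC_LOG open the files for synchronous (O_DSYNC-style) writes.
static void set_io_flags(DbEnv& env, bool direct, bool dsync)
{
    if (direct)
        env.set_flags(DB_DIRECT_DB | DB_DIRECT_LOG, 1);
    if (dsync)
        env.set_flags(DB_DSYNC_DB | DB_DSYNC_LOG, 1);
}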
In database mode, stale data is written to BDB after every N records processed (e.g. 300K records). In this mode, BDB behaves similarly - until the cache is filled up, performance is somewhat decent, but then the story repeats.
Some of the other things I tried/observed:
* I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
* I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
* I experimented with page sizes ranging from the default 8K to 64K. Using larger pages helped a bit, but as soon as the BDB cache filled up, the story repeated (see the sketch after this list).
* The Db::put method, which was called 73557 times while profiling the database save at the end, took 281 seconds. Interestingly enough, this method called the Win32 ReadFile function 20000 times, which took 258 seconds. The majority of the Db::put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
* I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.
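The page-size experiment mentioned in the list above boils down to two calls that must happen before the respective open calls; a sketch with example values from the ranges tried here (not a recommendation):
#include <db_cxx.h>

// Cache size is fixed before DbEnv::open, page size before Db::open.
static void tune_sizes(DbEnv& env, Db& db)
{
    env.set_cachesize(0, 256 * 1024 * 1024, 1);   // e.g. 256MB cache
    db.set_pagesize(32 * 1024);                   // e.g. 32K pages (8K..64K tried)
}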

I have been able to improve processing speed up to 6-8 times with these two techniques:
1. A separate trickle thread was created that periodically calls DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use these secondary databases are generated. This improved speed from 4K rec/sec to 14K rec/sec.

Hello Stone,
I am facing a similar problem, and I too hope to resolve it with memp_trickle. I have these queries:
1. What percentage of clean pages did you specify?
2. At what interval were you calling memp_trickle from this thread?
This would give me a rough idea about how to tune my app. I would really appreciate it if you could answer these queries.
Regards,
Nishith.
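To make the two techniques above concrete, here is a minimal sketch. The clean-page percentage and the wakeup interval are illustrative only - the thread does not state the values actually used, which is exactly what Nishith asks about - and the thread plumbing is just one possible way to do it:
#include <db_cxx.h>
#include <atomic>
#include <chrono>
#include <thread>

// Technique 1: background thread that periodically asks the memory pool to
// write dirty pages, so Db::put rarely has to evict/flush pages itself.
static void trickle_loop(DbEnv& env, std::atomic<bool>& stop)
{
    while (!stop) {
        int written = 0;
        env.memp_trickle(20, &written);   // keep ~20% of cache pages clean (example value)
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

// Technique 2: defer secondary-index maintenance. Associating with DB_CREATE
// right before reporting makes BDB build the secondary by scanning the
// primary once, instead of updating it on every Db::put during the run.
static void build_secondary_late(Db& primary, Db& secondary,
                                 int (*key_extractor)(Db*, const Dbt*, const Dbt*, Dbt*))
{
    primary.associate(NULL, &secondary, key_extractor, DB_CREATE);
}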

Similar Messages

  • Poor performance for the 1st select everyday for AFRU table

    Hello everyone, I have performance problems with the AFRU table. Every day, the first time I run a "Z" transaction, it takes around 100-120 seconds, but the second and subsequent times it only takes four seconds. What could I do in order to reduce the first execution time?
    This is the select:
    SELECT * FROM AFRU WHERE MANDT = :A0 AND CATSBELNR = :A1 AND BUDAT = :A2 AND PERNR = :A3 AND STOKZ <> :A4 AND STZHL = :A5
    The execution plan for this select uses index AFRU~ZCA with an acceptable cost of 6.319. Index AFRU~ZCA is a nonunique index with these columns: MANDT + CATSBELNR + BUDAT + PERNR
    I'll appreciate any ideas.
    Thanks in advance,
    Santi.

    What database system are you using (ASE, Oracle, etc?).
    If ASE, for the general issue of the first execution of a query taking longer, the two most likely reasons would be
    a)  the table's data has aged out of cache so the query has to do a lot of physical i/o to read the data back into cache
    or
    b) the query plan for the query has aged out of statement cache and needs to be recompiled.
    This query looks pretty simple, so the data cache seems much more likely.
    To get a better feel, some morning run the query with
    set statistics io on
    set statistics time on
    then run it again and look for differences in the physical vs logical i/o numbers and compile vs execution times.
    You could use a scheduled event (Job Scheduler, cron job) to run the query or some query very like it a little earlier in the day to prime the data cache with the table data.

  • Poor Reorganization of the Thumbnail Cache

    After the repeated requests, I foolishly discovered why I had been avoiding the prompt to reorganize my thumbnail cache. It placed the pictures in folders according to the Year shot and the Roll imported, when they had been well organized by Years, Months and Days.
    With the new file organization, finding the right photo through Finder to export for other applications and emails is now more time consuming because the Rolls are not organized by months and some contain multiple days.
    Can anybody help me to reset the file organization of the thumbnail cache back to Months and Days without having to reinstall all of the photos? Shouldn't there be an option for import organization?
    Many thanks, B
    Powerbook G4   Mac OS X (10.3.9)  
    Powerbook G4   Mac OS X (10.3.9)  

    Esheki:
    Welcome to the Apple Discussions. To email a photo(s) all you need to do is select them in iPhoto and then click on the Email button at the bottom. You do not have to find the file via the Finder to add to an email. If you want to use a photo in another application you can either drag the photo from the iPhoto window into the open window of the other application, or drag it to the desktop from iPhoto and then add it to the other application from there. iPhoto is designed so the user does not, and should not, have to go rummaging around in the iPhoto Library folder for a file via the Finder. See Don't tamper with files in the iPhoto Library folder from the Finder.
    There is no way to reset the folder system back to the previous iPhoto 5 system unless you want to start over with iPhoto 5 and re-import your photos back in.

  • Poor Performance of the WebLogic Portal System

    Hi,
    I am facing one issue which has become bottleneck over the time as far as the development of my application is concerned.
    My problem is that when I run and wish to see my portal page (web page) in Internet Explorer/Mozilla Firefox, it takes a very long time to get rendered (approx. 10 mins). This is affecting productivity, as rendering the page is a frequent step when checking the output of your work/changes.
    I would be very thankful if anyone can guide me as to what is wrong. Is this problem with me only? Why is the WebLogic Portal system so slow compared to other portal systems like Microsoft's SharePoint and IBM's WebSphere Portal?
    I am using Weblogic Portal v10.
    CPU is 3.2 GHz, 4 GB RAM, 3 MB cache.
    Please guide. I would appreciate if one can provide some way out to speed up the page rendering. I have tried changing the Heap Size etc but failed.
    Thank You all. Have a great Day.

    10 minutes?!
    We need to narrow that down, it may be something in your portlet implementation. An easy way to get an idea would be to take a series of Java thread dumps of the WLP server instance while it is processing that portlet. On Windows, press CTRL-Break or Google for the way to do it for your platform.
    It will print out what each thread is working on - if you see your code in there over a period of time, you've got a problem in your portlet. If it is stuck in WLP code, let us know.
    I also did a blog entry about performance improvement tips during iterative development, some might apply for you:
    http://peterlaird.blogspot.com/2007/05/optimized-development-for-weblogic.html
    Peter

  • Observing poor performance on the execution of the queries

    I am executing a relatively simple query which is roughly taking about 48-50 seconds to execute. Can someone suggest an alternate way to query the semantic model where we can achieve a response time of a second or under? Here is the query
    PREFIX bp:<http://www.biopax.org/release/biopax-level3.owl#>
    PREFIX rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
    PREFIX ORACLE_SEM_FS_NS:<http://oracle.com/semtech#dop=24,RESULT_CACHE,leading(t0,t1,t2)>
    SELECT distinct ?entityId ?predicate ?object
    WHERE {
    ?entityId rdf:type bp:Gene .
    ?entityId bp:name ?x .
    ?entityId bp:displayName ?y .
    ?entityId ?predicate ?object .
    FILTER(regex(?x, "GB035698", "i")||regex(?y, "GB035698", "i"))
    }
    Same query executed from sqldeveloper takes about as long as well
    SELECT distinct /*+ parallel(24) */ subject, p, o
    FROM TABLE
    (sem_match ( '{?subject rdf:type bp:Gene .
    ?subject bp:name ?x .
    ?subject bp:displayName ?y .
    ?subject ?p ?o
    filter (regex(?x, "GB035698", "i")||regex(?y, "GB035698", "i")) }',
    sem_models ('biopek'),
    null,
    sem_aliases
    ( sem_alias
    ('bp',
    'http://www.biopax.org/release/biopax-level3.owl#')),
    NULL,
    null, null ))
    Is there anything I am missing, can we do anything to optimize our data retrieval?
    Best Regards,
    Ami

    For better performance when using FILTER involving regular expression, you may want to create a full-text index on MDSYS.RDF_VALUE$ table as described in:
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e11828/sdo_rdf_concepts.htm#CIHJCHBJ
    I am assuming that you are checking for case-insensitive occurrence of the string GB035698 in ?x or ?y. (On the other hand if you are checking if ?x or ?y is equal to a case-insensitive form of the string GB035698, then the filter could be written in an expanded form involving just value-equality checks and would not need a full-text index for performance.)
    Thanks.

  • Poor Performance of the query.

    Hi all,
    i am using this query
    select address1,address2,address3,city,place,pincode,siteid,bpcnum_0, contactname,fax,mobile,phone,website
    from (select address1,address2,address3,city,place,pincode,siteid,bpcnum_0, contactname,fax,mobile,phone,website,
                 row_number() over (partition by contactname, address1
                                   order by contactname, address1) as rn
            from vw_sub_cl_add1 where siteid=10 and bpcnum_0 = '0063') emp where rn =1
    I used explain plan for the query; the result is
    Plan hash value: 3976107967
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Inst |IN-OUT|
    | 0 | SELECT STATEMENT | | | | 0 (0)| | |
    | 1 | REMOTE | | | | | INFO | R->S |
    8 rows returned in 0.04 seconds - but actually the query returns 10 rows.
    the view "vw_sub_cl_add1" is created using database links(remote database server).
    this query i am using in for loop to retrieve the records and print it one by one.
    The problem is: the performance of the query is poor. It takes 1.08 sec to display all the records.
    What steps should I take to minimize the retrieval time?
    Thanks in advance
    bye
    Srikavi

    Since this is query that is processed completely on the remote site, there are at least two potential issues that you should check if you don't want to use the "materialized view" approach:
    1. The time it takes to transport the result set to your local database, i.e. potential network issues
    2. The time it takes to process the query on the remote site
    Since you're only fetching 10 rows - if I understand you correctly - the first point shouldn't be an issue in your case.
    If you have suitable access to the remote site you would need to generate an execution plan of the "local" version of the query by logging directly into the remote site to find out why it takes longer than you expect. Probably it's missing some indexes if the number of rows to process should be only a few and you expect it to return more quickly.
    Here are simple instructions how to generate a meaningful execution plan if you want to post it here:
    Could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please use the \[code\] and \[/code\] tags to enhance readability of the output provided:
    In SQL*Plus:
    SET LINESIZE 130
    EXPLAIN PLAN FOR <your statement>;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Note that the package DBMS_XPLAN.DISPLAY is only available from 9i on.
    In previous versions you could run the following in SQL*Plus (on the server) instead:
    @?/rdbms/admin/utlxpls
    A different approach in SQL*Plus:
    SET AUTOTRACE ON EXPLAIN
    <run your statement>;
    will also show the execution plan.
    In order to get a better understanding where your statement spends the time you might want to turn on SQL trace as described here:
    [When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
    and post the "tkprof" output here, too.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Macbook (mid 2012, 9,2) has very poor performance since the day I bought it (less than 2 months ago)

    Problem description:
    My MacBook has been extremely slow since the day I bought it (which was less than 2 months ago)… It came preinstalled with Mavericks and 4GB of RAM. I upgraded it to 16GB hoping that would help, but that did very little… then upgraded to Yosemite. That just made it worse. I formatted the whole thing and installed a clean Yosemite. Still, every time I right-click, I get the rainbow loading icon and it hangs. Every time I open any file, or click any app, it hangs for a few seconds, again with a loading icon. Every time I play a song on iTunes or even VLC for that matter, it stops multiple times, every few seconds. It feels like it's "buffering" while I'm playing a song off my hard drive. It's been bothering me for too long now… Any idea what the issue is exactly?
    EtreCheck version: 2.0.11 (98)
    Report generated 12 November 2014 23:55:47 GMT-3
    Hardware Information: ℹ️
      MacBook Pro (13-inch, Mid 2012) (Verified)
      MacBook Pro - model: MacBookPro9,2
      1 2.5 GHz Intel Core i5 CPU: 2-core
      16 GB RAM Upgradeable
      BANK 0/DIMM0
      8 GB DDR3 1600 MHz ok
      BANK 1/DIMM0
      8 GB DDR3 1600 MHz ok
      Bluetooth: Good - Handoff/Airdrop2 supported
      Wireless:  en1: 802.11 a/b/g/n
    Video Information: ℹ️
      Intel HD Graphics 4000 -
      Color LCD 1920 x 1080
      LCD TV spdisplays_1080p
    System Software: ℹ️
      OS X 10.10 (14A389) - Uptime: 2 days 10:0:44
    Disk Information: ℹ️
      APPLE HDD TOSHIBA MK5065GSXF disk0 : (500.11 GB)
      S.M.A.R.T. Status: Verified
      EFI (disk0s1) <not mounted> : 210 MB
      Recovery HD (disk0s3) <not mounted>  [Recovery]: 650 MB
      Macintosh HD (disk1) /  [Startup]: 498.88 GB (374.54 GB free)
      Core Storage: disk0s2 499.25 GB Online
      MATSHITADVD-R   UJ-8A8 
    USB Information: ℹ️
      SIGMACHIP USB Keyboard
      Apple Inc. FaceTime HD Camera (Built-in)
      Apple Inc. BRCM20702 Hub
      Apple Inc. Bluetooth USB Host Controller
      Apple Inc. Apple Internal Keyboard / Trackpad
      Apple Computer, Inc. IR Receiver
    Thunderbolt Information: ℹ️
      Apple Inc. thunderbolt_bus
    Configuration files: ℹ️
      /etc/hosts - Count: 6
    Gatekeeper: ℹ️
      Anywhere
    Kernel Extensions: ℹ️
      /Library/Application Support/VirtualBox
      [loaded] org.virtualbox.kext.VBoxDrv (4.3.18) Support
      [loaded] org.virtualbox.kext.VBoxNetAdp (4.3.18) Support
      [loaded] org.virtualbox.kext.VBoxNetFlt (4.3.18) Support
      [loaded] org.virtualbox.kext.VBoxUSB (4.3.18) Support
    Startup Items: ℹ️
      TuxeraNTFSUnmountHelper: Path: /Library/StartupItems/TuxeraNTFSUnmountHelper
      Startup items are obsolete and will not work in future versions of OS X
    Launch Agents: ℹ️
      [not loaded] com.adobe.AAM.Updater-1.0.plist Support
      [running] com.teamviewer.teamviewer.plist Support
      [running] com.teamviewer.teamviewer_desktop.plist Support
    Launch Daemons: ℹ️
      [loaded] com.macpaw.CleanMyMac2.Agent.plist Support
      [loaded] com.microsoft.office.licensing.helper.plist Support
      [loaded] com.teamviewer.Helper.plist Support
      [running] com.teamviewer.teamviewer_service.plist Support
      [not loaded] org.virtualbox.startup.plist Support
    User Launch Agents: ℹ️
      [loaded] com.adobe.ARM.[...].plist Support
      [loaded] com.google.keystone.agent.plist Support
      [loaded] com.macpaw.CleanMyMac2Helper.diskSpaceWatcher.plist Support
      [loaded] com.macpaw.CleanMyMac2Helper.scheduledScan.plist Support
      [loaded] com.macpaw.CleanMyMac2Helper.trashWatcher.plist Support
      [not loaded] org.virtualbox.vboxwebsrv.plist Support
    User Login Items: ℹ️
      iTunesHelper Application (/Applications/iTunes.app/Contents/MacOS/iTunesHelper.app)
      Dropbox Application (/Applications/Dropbox.app)
    Internet Plug-ins: ℹ️
      AdobePDFViewerNPAPI: Version: 11.0.0 - SDK 10.6 Support
      SharePointBrowserPlugin: Version: 14.4.5 - SDK 10.6 Support
      AdobePDFViewer: Version: 11.0.0 - SDK 10.6 Support
      QuickTime Plugin: Version: 7.7.3
      AdobeAAMDetect: Version: AdobeAAMDetect 1.0.0.0 - SDK 10.6 Support
      Default Browser: Version: 600 - SDK 10.10
    3rd Party Preference Panes: ℹ️
      Tuxera NTFS  Support
    Time Machine: ℹ️
      Time Machine not configured!
    Top Processes by CPU: ℹ️
          20% mds
          6% Google Chrome
          4% WindowServer
          1% hidd
          0% Dropbox
    Top Processes by Memory: ℹ️
      481 MB Coda 2
      258 MB Google Chrome
      189 MB softwareupdated
      189 MB loginwindow
      189 MB Finder
    Virtual Memory Information: ℹ️
      5.71 GB Free RAM
      7.22 GB Active RAM
      2.63 GB Inactive RAM
      1.62 GB Wired RAM
      25.24 GB Page-ins
      0 B Page-outs

    If it has been slow since it came out of the box, it's probably defective. The warranty entitles you to complimentary phone support for the first 90 days of ownership.

  • Poor Performance of the Demos - help please

    I finally got the demos to run after figuring out that they had the wrong database port (19092 instead of 29092) configured by default.
    The Vehicle Incident Report application runs fine, except for responsiveness.
    Should I expect over a second of delay between clicking a button and something displaying on the screen when it is running on localhost, with a P4 2.6GHz machine running Linux with 1.5GB of memory, in Firefox 1.0.7?
    On the first impression, I don't mind (initial compilation, I expect), but on subsequent impressions I would expect things to be lightning fast.
    Sometimes they are lightning fast, if, for example, I swap between two pages in quick succession but if I click on the tabs from left to right then towards the left again, zzzz.
    I have tried setting up the HTTP monitor but the timestamps recorded are pretty useless for this sort of thing, having a resolution of minutes (microseconds or even seconds would have been handy)
    I have captured the TCP packets on local host using
    tcpdump -A -s 256 -i lo 'tcp port 28080 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
    and filtered them a bit.
    For example, in the VIR app, I clicked on each of the tabs one after the other and then generated the following output by looking for POST then looking for the next GET (on the assumption that the first communication is the POST and the last communication is a GET - not entirely true, I am sure, but good enough for illustration purposes).
    As you can see, the delays between the post and the GET can be quite long.
    18:04:29.288087     POST /VehicleIncidentReportApplication/faces/Profile.jsp HTTP/1.1
    18:04:29.820572     GET /favicon.ico HTTP/1.1
    18:04:30.849785     POST /VehicleIncidentReportApplication/faces/SearchAVehicle.jsp HTTP/1.1
    18:04:31.269366     GET /favicon.ico HTTP/1.1
    18:04:32.381863     POST /VehicleIncidentReportApplication/faces/Profile.jsp HTTP/1.1
    18:04:38.014451     GET /VehicleIncidentReportApplication/theme/com/sun/rave/web/ui/defaulttheme-gray/javascript/table.js HTTP/1.1
    18:04:40.852971     POST /VehicleIncidentReportApplication/faces/Vehicles.jsp HTTP/1.1
    18:04:49.233314     GET /favicon.ico HTTP/1.1
    18:04:53.156757     POST /VehicleIncidentReportApplication/faces/Help.jsp HTTP/1.1
    18:04:58.777153     GET /favicon.ico HTTP/1.1
    18:05:00.288716     POST /VehicleIncidentReportApplication/faces/Vehicles.jsp HTTP/1.1
    18:05:05.938047     GET /favicon.ico HTTP/1.1
    18:05:06.934621     POST /VehicleIncidentReportApplication/faces/Profile.jsp HTTP/1.1
    18:05:07.320218     GET /favicon.ico HTTP/1.1
    There is no way I am going to proceed with this app using jsCreator if this is how fast they run.
    Where would I start looking?
    Any help would be appreciated.

    I am not entirely sure it's the database's fault.
    I have MySQL setup as my data source for my app in jsCreator2 and it is still glacial in speed.
    Runs fine in Tomcat though.
    Thanks for the reply though,
    ...Lyall

  • Poor performance and high number of gets on seemingly simple insert/select

    Versions & config:
    Database : 10.2.0.4.0
    Application : Oracle E-Business Suite 11.5.10.2
    2 node RAC, IBM AIX 5.3
    Here's the insert/select; I'm struggling to explain why it's taking 6 seconds, and why it needs to get > 24,000 blocks:
    INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
      NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
      WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
      WIA.ITEM_TYPE = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          4           0
    Execute      2      3.44       6.36          2      24297        198          36
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.44       6.36          2      24297        202          36
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Also from the tkprof output, the explain plan and waits - virtually zero waits:
    Rows     Execution Plan
          0  INSERT STATEMENT   MODE: ALL_ROWS
          0   TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
          0    INDEX   MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      library cache lock                             12        0.00          0.00
      gc current block 2-way                         14        0.00          0.00
      db file sequential read                         2        0.01          0.01
      row cache lock                                 24        0.00          0.01
      library cache pin                               2        0.00          0.00
      rdbms ipc reply                                 1        0.00          0.00
      gc cr block 2-way                               4        0.00          0.00
      gc current grant busy                           1        0.00          0.00
    ********************************************************************************
    The statement was executed 2 times. I know from slicing up the trc file that:
    exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
    exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
    If I run just the select portion of the statement, using bind values from exe #2, I get small number of gets (< 10), and < 0.1 secs elapsed.
    If I make the insert into an empty, non-partitioned table, I get :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.01       0.08          0        137         53          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.01       0.08          0        137         53          25
    and same explain plan - using index range scan on WF_Item_Attributes_PK.
    This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.10         10         27        136          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.10         10         27        136        25
    So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
    I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
    further info on the objects concerned:
    query source table :
    WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
    WF_Item_Attributes tbl : non-partitioned, 160 blocks
    insert destination table:
    WF_Item_Attribute_Values:
    range partitioned on Item_Type, and hash sub-partitioned on Item_Key
    both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
    WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
    Bind values:
    exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
    exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
    The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
    thanks and regards
    Ivan

    hi Sven,
    Thanks for your input.
    1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
    2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
    3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
    ============= From DBA_Part_Tables : Partition Type / Count =============
    PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
    RANGE   HASH                 77 APPS_TS_TX_DATA
    1 row selected.
    ============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
    Partition Name       TS Name         High Value           High Val Len
    WF_ITEM1             APPS_TS_TX_DATA 'A1'                            4
    WF_ITEM2             APPS_TS_TX_DATA 'AM'                            4
    WF_ITEM3             APPS_TS_TX_DATA 'AP'                            4
    WF_ITEM47            APPS_TS_TX_DATA 'OB'                            4
    WF_ITEM48            APPS_TS_TX_DATA 'OE'                            4
    WF_ITEM49            APPS_TS_TX_DATA 'OF'                            4
    WF_ITEM50            APPS_TS_TX_DATA 'OK'                            4
    WF_ITEM75            APPS_TS_TX_DATA 'WI'                            4
    WF_ITEM76            APPS_TS_TX_DATA 'WS'                            4
    WF_ITEM77            APPS_TS_TX_DATA MAXVALUE                        8
    77 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_TYPE                                    1
    1 row selected.
    PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
    ============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
    Partition Name       SUBPARTITION_NAME              TS Name         High Value           High Val Len
    WF_ITEM49            SYS_SUBP3326                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3328                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3332                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3331                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3330                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3329                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3327                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3325                   APPS_TS_TX_DATA                                 0
    8 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_KEY                                     1
    1 row selected.
    from DBA_Segments - just for partition WF_ITEM49  :
    Segment Name                        TSname       Partition Name       Segment Type     BLOCKS     Mbytes    EXTENTS Next Ext(Mb)
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3332         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3331         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3330         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3329         TblSubPart        16112    125.875       1007         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3328         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3327         TblSubPart        16224     126.75       1014         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3326         TblSubPart        16208    126.625       1013         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3325         TblSubPart        16128        126       1008         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3332         IdxSubPart        59424     464.25       3714         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3331         IdxSubPart        59296     463.25       3706         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3330         IdxSubPart        59520        465       3720         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3329         IdxSubPart        59104     461.75       3694         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3328         IdxSubPart        59456      464.5       3716         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3327         IdxSubPart        60016    468.875       3751         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3326         IdxSubPart        59616     465.75       3726         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3325         IdxSubPart        59376    463.875       3711         .125
    sum                                                                                               4726.5
    [the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
    The tablespaces used for all subpartitions are UNIFORM extent mgmt, AUTO segment_space_management; LOCAL extent mgmt.
    regards
    Ivan

  • Project Server 2013 Poor Performance

    Hi Guys
    On a new build of Project Server, we are experiencing poor performance across the board, Site Collection, Central Admin, everything is slow.
    We have allocated 12GB and a quad core (lower end of the recommended scale). However, as there are no users, I would expect it to be fine at the moment (Search is not set up or anything, just a simple service app with a PWA site provisioned).
    What things can we check? From looking on the internet, Distributed Cache crops up a lot. Can this bring performance right down?
    For the record, we have March 2013 PU and Oct 2013 CU applied.

    Hello,
    There could be a number of causes, it is very difficult to diagnose without understanding the farm infrastructure. Can you give details of:
    Number of servers in the farm + Specs
    If virtual, is the host server over committed? Have you set up reservations for memory and CPU?
    What is the storage?
    What is the network speed between servers / storage etc?
    Do you have any Anti-Virus software installed on the servers? If so, have you tested with it disabled on all servers in the farm?
    Paul
    Paul Mather | Twitter |
    http://pwmather.wordpress.com | CPS

  • BI4 - Poor Performance

    Hi,
    I have a problem updating several reports which either time out or take ages to respond when making simple changes. It doesn't seem to matter which mode you are in while making a change (whether data or structure mode) or which tool is used (Webi or Rich Client).
    I have read in other forums of users reporting similar problems, which result in an "unable to get the first page of the report" error.
    To ensure it wasn't an issue inherited from a previous version of BO (as these reports were originally written in BOXI BI 3.1), I recreated the report from scratch, only to hit the same issue when populating the report with various formulas.
    When this occurs (i.e. when the "unable to get the first page of the report" error appears), I am forced to save and then close the Rich Client, and then have to re-open the file each and every time.
    We are currently using Business Objects BI4.0 SP6 Edge. These reports consist of some 600+ variables; however, they never caused this issue in the older version of BO.
    Please can someone suggest a solution to this issue, as it is taking me much longer to make simple updates to these reports when it ought to be straightforward.
    Cheers
    Ian

    Hi Farzana,
    Thanks for your response. Yes, I had read this on a variety of forums and, due to the poor performance, wrote the report from scratch again.
    Firstly, I built the structure of the report and saved it as a template. No issues. Then I added in the queries and variables. No issues. It was only when I had populated the report with the formulas/calculations (at about the halfway point) that I started to detect performance issues again.
    This forced my hand and I used RESTful Web Service calls to complete the rest, otherwise it would have been a painful exercise. The report contains some 600+ variables and 750+ cells populated with formula calculations, so it is a busy report.
    I would have thought others with complex reports might have reported similar performance issues, as this worked fine in our old BOXI v3.1 environment.

  • Poor Performance of Blackberry 9860-Torch Model RDP71UW

    This is to inform you regarding my sincere concern over the BlackBerry product. I bought a new BlackBerry Torch 9860 11 months back and since then have been facing problems because of the poor performance of the handset. During this period I have taken this handset 4 times for servicing to your authorized service center. My data and, of course, my time are lost each time. It is too pathetic that I have to take the phone for service in the very first year of purchase and despite that it still has problems. I am really sorry to say the performance of this reputed brand is far below expectations and is poorer than any cheap-quality phone available in the market.
    I am least satisfied with the quality of phone and also doubtful about the other blackberry models available in the market.
    Finding no other alternative I am forced to forward this expecting a sincere reply and a solution to resolve the issues with my handset.
    Regards
    Sunil Kumar

    I'm sorry to hear of your phone issues.  Just to be clear, we're volunteers here and not actual RIM employees so any response you receive here is not directly from RIM.
    We're very good at solving issues in these forums.  If I could ask you to list any issues you're having, including what error message(s) you're seeing, what actions you performed when seeing such message(s), etc., we can tackle the issues one at a time.  Hopefully with our help, you can cancel any future trips to the service depots.
    We await your reply. 
    - If my response has helped you, please click "Options" beside my post and mark it as solved. Clicking the "thumbs up" icon near the bottom of my response would also be appreciated.

  • Drop-Down WebDynPro components - Poor Performance ( MSS )

    Hello everyone,
    I am currently working with a customer who is implementing ESS/MSS on NW2004s with an ECC 6.0 backend.
    We are seeing pretty poor performance with the DropDown UI components in the DynPro iViews. Sometimes they take up to 5 seconds to appear (which is annoying). I have verified that the performance impact is due to machine performance (too much JavaScript going on) and the normal calls to the backend to populate the dropdowns.
    My question is whether there is any way to 'pre-populate' the dropdowns when the iView is loaded and turn off the live calls to the backend when they are clicked. (We cannot do anything about the machine performance.)
    Thanks

    Why don't you use a node and save all the drop-down attributes in it before loading that screen?
    This may help you avoid spending so much time populating the dropdowns.
    Create a node.
    Save all the drop-down elements to it in an attribute, and it will not take so much time to populate, as the data will already be present before runtime.

  • Apple maps has received a poor performance rating just after introduction of the iPhone 5. I am running google maps app on the phone. Siri cannot seem to get me to a specific address. Where does the problem lie? Thanks.

    Apple maps has received a poor performance rating just after introduction of the iPhone 5. I am running Google Maps app on the phone. SIRI cannot seem to get me to a specific address. Where does the problem lie? Also can anyone tell me the hierarchy of use between the Apple Maps, SIRI, and Google maps when the app is on the phone? How do you choose one over the other as the default map usage? Or better still how do you suppress SIRI from using the Apple maps app when requesting a "go to"?
    I have placed an address location into the CONTACTS list and when I ask SIRI to "take me there" it found a TOTALLY different location in the metro area with the same street name. I have included the address, the quadrant (NE), and the ZIP code into the CONTACTS list. As it turns out, no amount of canceling the trip or relocating the address in the CONTACTS list line would prevent SIRI from taking me to this bogus location. FINALLY I typed in Northeast for NE in the CONTACTS list (NE being the accepted method of defining the USPS location quadrant), canceled the current map route and it finally found the correct address. This problem would normally not demand such a response from me to have it fixed but the address is one of a hospital in the center of town and this hospital HAS a branch location in a similar part of town (NOT the original address SIRI was trying to take me to). This screw-up could be dangerous if not catastrophic to someone who was looking for a hospital location fast and did not know of these two similar locations. After all, the whole POINT of directions is not just whimsical pastime or convenience. In a pinch people need to rely on this function. OR, are my expectations set too high?
    How does the iPhone select between one app or the other (Apple Maps or Google Maps) as it relates to SIRI finding and showing a map route?
    Why does SIRI return an address that is NOT the correct address nor is the returned location in the requested ZIP code?
    Is there a known bug in the CONTACTS list that demands the USPS quadrant ID be spelled out, as opposed to abbreviated, to permit SIRI to do its routing?
    Thanks for any clarification on these matters.

    Siri will only use Apple Maps; this cannot be changed. You could try voice search in the Google app.

  • Performance of the query is poor

    Hi All,
    This is Prasad. I have a problem with a query: it is taking a long time to retrieve the data from the cube. In the query they are using a variable of type Customer Exit. The cube is not compressed. I think the issue with the F fact table is due to the high number of table partitions (requests) that it has to select from. If I compress the cube, will the performance of the query increase or not? Is there any alternative for improving the performance of the query? Somebody suggested a result set query; I am not aware of this technique, so if you know it, please let me know.
    Thanks in advance

    Hi Prasad,
    Query performance will depend on many factors like
    1. Aggregates
    2. Compression of requests
    3. Query read mode setting
    4. Cache memory setting
    5. By Creating BI Accelerator Indexes on Infocubes
    6. Indexes
    Proposing aggregates to improve query performance:
    First, try to execute the query in RSRT for which you need to build aggregates. Check how much time it takes to execute... and whether it is necessary to build an aggregate for this query. To get this information, go to SE11 > give table name RSDDSTAT_DM in BI 7.0 or RSDDSTAT in BW 3.x > Display > Contents > give from-date and to-date values as today, user name as your user name, and give the query name
    --> execute.
    Now you'll get a list with fields like Object name (report name), Time read, InfoProvider name (MultiProvider), PartProvider name (cube), Aggregate name, etc. If the time read is less than 100,000,000 (100 sec), it is acceptable. If the time read is more than 100 sec, then it is recommended to create aggregates for that query to increase performance. Keep this time read in mind.
    Again go to RSRT > give the query name > Execute+Debug -->
    A popup will appear; in it, select the check box "display aggregates found" --> continue. If any aggregates exist for that
    query, they will be displayed first. If you press the continue button, it will show from which cube which fields are coming... try to copy this list of objects on which an aggregate can be created into one text file...
    Then select that particular cube in RSA1 > context menu > Maintain Aggregates > Create by own > click on the create aggregate button on the top left side > give a description of the aggregate > continue > take the first object from the list and click on the find button in the aggregate creation screen > give the object name and search... drag and drop that object into the aggregate name area on the right side (drag and drop all the fields like this into the aggregate). ---->
    Activate the aggregate --> it will take some time; once the activation finishes --> make sure that the aggregate is in switched-on mode.
    Try to execute the query from RSRT again, find out the time read, and compare this with the first time read. If it is less than the first time read, then you can propose this aggregate to increase the performance of the query.
    I hope this will help you... go through the links below to learn about aggregates in more detail.
    http://help.sap.com/saphelp_nw04s/helpdata/en/10/244538780fc80de10000009b38f842/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    Follow this thread for creation of BIA Indexes:
    Re: BIA Creation
    Hope this helps...
    Regards,
    Ramki.
