Poor performance of update

Hi,
I'm having a performance issue with the updates that run on a huge partitioned table. Here are the details -
Table_A has 80 million records; its columns are pk_col (the primary key), col1 and col2.
Table_B is a partitioned table with a total of 100 million records (each partition has about 9 million records). Its columns are pk_b_col (the primary key), new_col and some other columns.
Here is a snippet of my PL/SQL procedure that does the update on Table_B. I'm using the primary key to do the update, and I was hoping the update would take at most 1 minute per 60,000 records (which is my fetch limit). Unfortunately, it's taking about 20 minutes to update 60,000 records. Can someone let me know if there's anything I can try to improve the performance of these queries?
Also, how does the fetch cursor work? Does Oracle run the cursor query once for each fetch, or is the result of the cursor query stored in memory, with each fetch just getting a subset of that data?
CURSOR get_data_cur IS
  SELECT pk_col, MIN(col1 - col2)
  FROM Table_A
  GROUP BY pk_col;
-- This query would return about 40 million records
FETCH get_data_cur BULK COLLECT INTO v_col1, v_col2 LIMIT 60000;
FORALL i IN v_col1.FIRST .. v_col1.LAST
  UPDATE Table_B
  SET new_col = v_col2(i)
  WHERE pk_b_col = v_col1(i)
  AND (new_col IS NULL OR new_col > v_col2(i));
Thanks

Some corrections below... ignore my earlier post...
I'm having a performance issue with the updates that run on a huge partitioned table. Here are the details -
Table_A has 80 million records; its columns are main_col (this column has an index), col1 and col2.
Table_B is a partitioned table with a total of 100 million records (each partition has about 9 million records). Its columns are pk_b_col (the primary key), new_col and some other columns.
Here is a snippet of my PL/SQL procedure that does the update on Table_B. I'm using the primary key to do the update, and I was hoping the update would take at most 1 minute per 60,000 records (which is my fetch limit). Unfortunately, it's taking about 20 minutes to update 60,000 records. Can someone let me know if there's anything I can try to improve the performance of these queries?
Also, how does the fetch cursor work? Does Oracle run the cursor query once for each fetch, or is the result of the cursor query stored in memory, with each fetch just getting a subset of that data?
CURSOR get_data_cur IS
  SELECT main_col, MIN(col1 - col2)
  FROM Table_A
  GROUP BY main_col;
-- This query would return about 40 million records
FETCH get_data_cur BULK COLLECT INTO v_col1, v_col2 LIMIT 60000;
FORALL i IN v_col1.FIRST .. v_col1.LAST
  UPDATE Table_B
  SET new_col = v_col2(i)
  WHERE pk_b_col = v_col1(i)
  AND (new_col IS NULL OR new_col > v_col2(i));
Thanks
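
For reference, a self-contained version of this batching pattern looks like the sketch below; the collection types, exit test and per-batch commit are assumptions, since the snippet above is only a fragment:

DECLARE
  TYPE t_key_tab IS TABLE OF Table_A.main_col%TYPE;
  TYPE t_val_tab IS TABLE OF NUMBER;
  v_col1 t_key_tab;
  v_col2 t_val_tab;

  CURSOR get_data_cur IS
    SELECT main_col, MIN(col1 - col2)
    FROM Table_A
    GROUP BY main_col;
BEGIN
  OPEN get_data_cur;
  LOOP
    -- Each FETCH pulls the next batch of up to 60,000 rows from the open cursor.
    FETCH get_data_cur BULK COLLECT INTO v_col1, v_col2 LIMIT 60000;
    EXIT WHEN v_col1.COUNT = 0;  -- stop when the final fetch returns no rows

    FORALL i IN 1 .. v_col1.COUNT
      UPDATE Table_B
      SET new_col = v_col2(i)
      WHERE pk_b_col = v_col1(i)
      AND (new_col IS NULL OR new_col > v_col2(i));

    COMMIT;  -- per-batch commit is an assumption; adjust to your restartability needs
  END LOOP;
  CLOSE get_data_cur;
END;

A set-based alternative worth testing is a single MERGE, which avoids the PL/SQL round trips entirely; again, only a sketch built from the tables and columns named above:

MERGE INTO Table_B b
USING (
  SELECT main_col, MIN(col1 - col2) AS min_diff
  FROM Table_A
  GROUP BY main_col
) a
ON (b.pk_b_col = a.main_col)
WHEN MATCHED THEN
  UPDATE SET b.new_col = a.min_diff
  WHERE b.new_col IS NULL OR b.new_col > a.min_diff;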

Similar Messages

  • IPhone 3GS poor performance following update

I had a perfectly good iPhone 3GS which is now performing very poorly after the iOS 6.1.6 update. Please advise how to return it to its former abilities.

Thank you, I will try this and post the result, as this seems like a simple solution which would be fantastic if it works!

  • Extremely poor performance after update to 1.6

    Hi,
I've recently updated to 1.6 from 1.5, and now any webpage that contains Java applets takes anywhere from 20 seconds up to 1 minute to open and start. Is there a problem, or should I just uninstall Java completely?

    Can you give an example of the applet that you are using?
    Also, what OS are you using on the computer having this problem?
    -Roger
    Sun Microsystems

  • General poor performance of my Mac mini

I have been using my Mac mini for over a year, and it continues to frustrate me how slow and unresponsive it can be, even though I do very little with it, i.e. organise photos and browse the web - usually looking for solutions to the poor performance problem!
I came across this app that has given me lots of information; unfortunately, with my limited IT know-how, there is nothing here that gives me any clues as to the cause of the problem. I guess the red font is not good?
I'm sorry I can't be more specific about the problem other than to say that iPhoto and Safari, the two programmes I use way more often than any other, keep giving me the beach ball. My internet speed is OK: 25 Mbps down, 4.5 Mbps up.
Is there anything here that stands out as being wrong? If so, any suggestions/instructions for what I should do?
P.S. I had to quit Safari while writing this and have a report for this if it helps.
    Any help much appreciated.
    Problem description:
    My mac is just generally very slow. I only use it to look at the internet and manage photos.
    EtreCheck version: 2.1.5 (108)
    Report generated 3 January 2015 16:03:59 GMT
    Click the [Support] links for help with non-Apple products.
    Click the [Details] links for more information about that line.
    Click the [Adware] links for help removing adware.
    Hardware Information: ℹ️
      Mac mini (Late 2012) (Verified)
      Mac mini - model: Macmini6,2
      1 2.3 GHz Intel Core i7 CPU: 4-core
      4 GB RAM Upgradeable
      BANK 0/DIMM0
      2 GB DDR3 1600 MHz ok
      BANK 1/DIMM0
      2 GB DDR3 1600 MHz ok
      Bluetooth: Good - Handoff/Airdrop2 supported
      Wireless:  en1: 802.11 a/b/g/n
    Video Information: ℹ️
      Intel HD Graphics 4000
      DELL U2412M 1920 x 1200
    System Software: ℹ️
      OS X 10.10.1 (14B25) - Uptime: 5 days 22:14:3
    Disk Information: ℹ️
      APPLE HDD HTS541010A9E662 disk0 : (1 TB)
      EFI (disk0s1) <not mounted> : 210 MB
      Macintosh HD (disk0s2) / : 999.35 GB (759.01 GB free)
      Recovery HD (disk0s3) <not mounted>  [Recovery]: 650 MB
    USB Information: ℹ️
      HP Photosmart B110 series
      Apple Inc. BRCM20702 Hub
      Apple Inc. Bluetooth USB Host Controller
      Apple, Inc. IR Receiver
    Thunderbolt Information: ℹ️
      Apple Inc. thunderbolt_bus
    Gatekeeper: ℹ️
      Mac App Store and identified developers
    Problem System Launch Daemons: ℹ️
      [killed] com.apple.AssetCacheLocatorService.plist
      [killed] com.apple.coreservices.appleid.passwordcheck.plist
      [killed] com.apple.ctkd.plist
      [killed] com.apple.wdhelper.plist
      [killed] com.apple.xpc.smd.plist
      5 processes killed due to memory pressure
    Launch Daemons: ℹ️
      [loaded] com.adobe.fpsaud.plist [Support]
    User Login Items: ℹ️
      iTunesHelper Application (/Applications/iTunes.app/Contents/MacOS/iTunesHelper.app)
      Dropbox Application (/Applications/Dropbox.app)
      Wondershare Helper Compact Application (/Users/[redacted]/Library/Application Support/Helper/Wondershare Helper Compact.app)
    Internet Plug-ins: ℹ️
      Silverlight: Version: 5.1.30514.0 - SDK 10.6 [Support]
      FlashPlayer-10.6: Version: 16.0.0.235 - SDK 10.6 [Support]
      Flash Player: Version: 16.0.0.235 - SDK 10.6 [Support]
      QuickTime Plugin: Version: 7.7.3
      Unity Web Player: Version: UnityPlayer version 4.5.5f1 - SDK 10.6 [Support]
      Default Browser: Version: 600 - SDK 10.10
    3rd Party Preference Panes: ℹ️
      Flash Player  [Support]
    Time Machine: ℹ️
      Auto backup: YES
      Volumes being backed up:
      Macintosh HD: Disk size: 999.35 GB Disk used: 240.33 GB
      Destinations:
      Time Capsule [Network]
      Total size: 2.00 TB
      Total number of backups: 56
      Oldest backup: 2014-09-30 04:29:13 +0000
      Last backup: 2015-01-03 15:37:35 +0000
      Size of backup disk: Adequate
      Backup size 2.00 TB > (Disk used 240.33 GB X 3)
    Top Processes by CPU: ℹ️
        110% com.apple.WebKit.Plugin.64
          6% WindowServer
          1% Activity Monitor
          1% coreaudiod
          1% sysmond
    Top Processes by Memory: ℹ️
      1.11 GB com.apple.WebKit.Plugin.64
      73 MB iTunes
      49 MB com.apple.WebKit.WebContent
      39 MB mds
      39 MB WindowServer
    Virtual Memory Information: ℹ️
      68 MB Free RAM
      1.07 GB Active RAM
      1.02 GB Inactive RAM
      893 MB Wired RAM
      63.93 GB Page-ins
      2.15 GB Page-outs
    Diagnostics Information: ℹ️
Jan 1, 2015, 11:43:39 AM /Library/Logs/DiagnosticReports/com.apple.WebKit.Plugin.64_2015-01-01-114339_[redacted].cpu_resource.diag [Details]
Jan 1, 2015, 11:32:46 AM /Library/Logs/DiagnosticReports/WindowServer_2015-01-01-113246_[redacted].crash
Jan 1, 2015, 11:00:30 AM /Library/Logs/DiagnosticReports/com.apple.WebKit.Plugin.64_2015-01-01-110030_[redacted].cpu_resource.diag [Details]
Jan 1, 2015, 10:19:03 AM /Library/Logs/DiagnosticReports/iBooks_2015-01-01-101903_[redacted].cpu_resource.diag [Details]

    Hi Linc Davis
    Thank you for taking the time to respond.
I got this information from a time when I was opening up iPhoto and then going to the iCloud folder. To be honest, it wasn't the most disastrous of events; I did get a few beachballs, and when I opened a shared folder the photos weren't loaded as normal. As I get lengthier delays in carrying out any activities, I will post back the console results.
Just as a bit more info: I don't consider my problem a Yosemite-update problem; if anything, performance has improved since updating from Mavericks.
    Do you have any comments about upgrading RAM from the 4GB I have?
    Should I act on any of the messages in red from the EtreCheck report?
    Many Thanks
    Duncan
03/01/2015 19:30:18.331 com.apple.iCloudHelper[12481]: objc[12481]: Class FALogging is implemented in both /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/FamilyCircle and /System/Library/PrivateFrameworks/FamilyNotification.framework/Versions/A/FamilyNotification. One of the two will be used. Which one is undefined.
    03/01/2015 19:30:18.391 com.apple.xpc.launchd[1]: (com.apple.imfoundation.IMRemoteURLConnectionAgent) The _DirtyJetsamMemoryLimit key is not available on this platform.
    03/01/2015 19:30:19.162 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:30:19.162 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:30:19.163 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:30:19.163 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:30:19.186 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:30:19.187 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:30:19.188 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:30:19.188 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:30:19.188 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:30:19.188 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:30:25.446 WindowServer[7810]: common_reenable_update: UI updates were finally reenabled by application "iPhoto" after 10.70 seconds (server forcibly re-enabled them after 1.00 seconds)
    03/01/2015 19:30:33.548 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:30:33.919 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:30:34.077 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:30:34.605 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:30:34.774 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:30:34.853 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:30:47.562 pkd[12123]: FCIsAppAllowedToLaunchExt [343] -- *** _FCMIGAppCanLaunch timed out. Returning false.
    03/01/2015 19:31:10.695 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:31:10.807 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:31:16.961 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:31:18.514 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:31:18.593 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:31:22.538 discoveryd[49]: Basic Sockets SetDelegatePID() failed for PID[3491] errno[3] result[-1]
    03/01/2015 19:31:27.912 mds[32]: (DiskStore.Normal:2376) 6052001 1.780097
    03/01/2015 19:31:56.831 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:31:57.224 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:31:57.281 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:31:57.405 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:31:58.507 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:31:58.541 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:31:59.160 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
03/01/2015 19:32:09.891 com.apple.InputMethodKit.UserDictionary[12456]: -[PFUbiquitySetupAssistant canReadFromUbiquityRootLocation:](1492): CoreData: Ubiquity:  Error attempting to read ubiquity root url: file:///Users/Duncan/Library/Mobile%20Documents/com~apple~TextInput/Dictionaries/.
    Error: Error Domain=NSCocoaErrorDomain Code=134323 "The operation couldn’t be completed. (Cocoa error 134323.)" UserInfo=0x7f90e152a380 {NSAffectedObjectsErrorKey=<PFUbiquityLocation: 0x7f90e152a2e0>: /Users/Duncan/Library/Mobile Documents/com~apple~TextInput}
    userInfo: {
        NSAffectedObjectsErrorKey = "<PFUbiquityLocation: 0x7f90e152a2e0>: /Users/Duncan/Library/Mobile Documents/com~apple~TextInput";
    03/01/2015 19:32:12.862 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:32:20.818 com.apple.SecurityServer[53]: Killing auth hosts
    03/01/2015 19:32:20.819 com.apple.SecurityServer[53]: Session 100281 destroyed
    03/01/2015 19:32:20.974 com.apple.SecurityServer[53]: Session 100577 created
    03/01/2015 19:32:34.833 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:32:35.767 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:32:35.789 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:32:35.980 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:32:36.273 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:32:36.487 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:32:47.035 WindowServer[7810]: disable_update_timeout: UI updates were forcibly disabled by application "iPhoto" for over 1.00 seconds. Server has re-enabled them.
    03/01/2015 19:32:50.555 WindowServer[7810]: common_reenable_update: UI updates were finally reenabled by application "iPhoto" after 4.52 seconds (server forcibly re-enabled them after 1.00 seconds)
    03/01/2015 19:32:51.578 WindowServer[7810]: disable_update_timeout: UI updates were forcibly disabled by application "iPhoto" for over 1.00 seconds. Server has re-enabled them.
    03/01/2015 19:32:52.745 WindowServer[7810]: common_reenable_update: UI updates were finally reenabled by application "iPhoto" after 2.17 seconds (server forcibly re-enabled them after 1.00 seconds)
    03/01/2015 19:32:53.098 mds[32]: (DiskStore.Normal:2376) 6052001 1.279589
    03/01/2015 19:33:03.332 BezelServices 245.23[7816]: ASSERTION FAILED: dvcAddrRef != ((void *)0) -[DriverServices getDeviceAddress:] line: 2602
    03/01/2015 19:33:03.332 BezelServices 245.23[7816]: ASSERTION FAILED: dvcAddrRef != ((void *)0) -[DriverServices getDeviceAddress:] line: 2602
    03/01/2015 19:33:11.961 mds[32]: (DiskStore.Normal:2376) 6052001 1.939804
    03/01/2015 19:33:29.541 WindowServer[7810]: disable_update_timeout: UI updates were forcibly disabled by application "iPhoto" for over 1.00 seconds. Server has re-enabled them.
    03/01/2015 19:33:32.354 WindowServer[7810]: common_reenable_update: UI updates were finally reenabled by application "iPhoto" after 3.81 seconds (server forcibly re-enabled them after 1.00 seconds)
    03/01/2015 19:33:44.549 discoveryd[49]: Basic DNSResolver  dropping message because it doesn't match the one sent Port:53 MsgID:20602
    03/01/2015 19:33:54.055 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:34:05.143 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.143 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.165 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.165 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.165 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.499 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.499 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:07.001 discoveryd[49]: Basic Sockets SetDelegatePID() failed for PID[3491] errno[3] result[-1]
    03/01/2015 19:34:13.809 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:34:13.821 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:34:14.124 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:34:14.766 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:34:14.867 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:34:17.769 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:34:18.186 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:34:18.186 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain analyzed data sets on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
Note that the version of SSW available for download does not support BDB in the way described below. However, I can make the source available to you if you can find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID linking it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
I'm running all these tests using the latest BDB (v4.6) built from source on a Win2K3 server (release version). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like the one described above at a speed of 20K records/sec.
BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drop down to 2K rec/sec. And that's all after most of the analysis has been done, just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
In memory mode, the entire BDB is dumped to disk at the end of the run. At first it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) slows to a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed all but stopped.
Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance, and even though the OS cache was filling up, it was being flushed as well; eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
* I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

I have been able to improve processing speed up to 6-8 times with these two techniques:
1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use them are generated. This improved speed from 4K rec/sec to 14K rec/sec.

Hello Stone,
I am facing a similar problem, and I too hope to resolve it with memp_trickle. I had these queries:
1. What percentage of clean pages did you specify?
2. At what interval was your thread calling memp_trickle?
This would give me a rough idea of how to tune my app. I would really appreciate it if you could answer these queries.
Regards,
Nishith.

  • Skype crashing and poor performance

    Hello!
I have a Lumia 625 with WP8.1. My problem is that Skype has really poor performance on my phone. It crashes 6 times out of 10 on startup, and even if I manage to start it, the whole app is slow and laggy. Sometimes it's so laggy I can't even write a message. Video calling is absolutely out of the question: it crashes my whole phone. I have no similar problems with other instant messaging apps, nor with high-end games. Something in the Skype app is obviously using way more resources than it's supposed to. It's a simple chat program; why would it need so much resource?
The problem seems to originate from the smaller (512 MB) RAM of my phone model, because I have experienced the same effect with poorly written apps that forget there are 512 MB devices, not only 1 GB+ ones, and use too much resource.
    Please don't try to suggest to restart/reset the phone, and reinstall the app. Those are already behind me, and they did NOT help the problem. I'm not searching for temporary workarounds.
    Please find a solution for this problem, because it is super annoying, and I can't use Skype, which will eventually result in me leaving Skype.

When it crashes on startup, it goes like this:
I tap the Skype tile.
The black screen with the "Loading....." appears (the default WP loading screen). Usually this takes longer than it would on any other app.
For a blink of an eye the Skype GUI appears, but it instantly crashes.
If I can successfully start the app, it just keeps lagging. I start to write a message to a contact, and sometimes the letters don't appear as I touch them but show up much later, all together. If I tap the send button, the whole GUI freezes (it seems to stay frozen until the contact gets my message). Sometimes the lag gets stronger, and sometimes it almost vanishes, but if I keep making inputs while the lag is strong, it sometimes crashes the whole app.
When I first installed the app, everything was fine, but after a while this behavior appeared. I reinstalled the app, which solved the problem temporarily, but after some time the problem reappeared. I don't know if it's relevant, but there was a time when I couldn't make myself appear online all the time (when the app was not started). During that time I didn't experience the lags and crashes. Anyway, what I'm sure about is that the lags get worse with time. I don't know if it's because of use of the app (caching?) or the updates the phone makes to itself (a conflict?).
I will try to reinstall Skype. It will probably fix things for now. I hope the problem won't appear again.

  • Poor Performance in ETL SCD Load

    Hi gurus,
    We are facing some serious performance problems during an UPDATE step, which is part of a SCD type 2 process for Assets (SIL_Vert/SIL_AssetDimension_SCDUpdate). The source system is Siebel CRM. The tools for ETL processing are listed below:
    Informatica PowerCenter 9.1.0 HotFix2 0902 357 (R181 D90)
    Oracle BI Data Warehouse Administration Console (Dac Build AN 10.1.3.4.1.patch.20120711.0516)
The OOTB mapping for this step is a simple SELECT command - which retrieves historical records from the dimension to be updated - and the target table (W_ASSET_D), with no UPDATE strategy. The session is configured to always perform UPDATEs. We have also set $$UDATE_ALL_HISTORY to "N" in DAC: this way we only select the most recent records from the dimension history, and the only columns effectively updated are the SCD system columns (EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, CURRENT_FLG, ...).
The problem is that Informatica PowerCenter executes the UPDATE command individually for each record in W_ASSET_D. For 2,486,000 UPDATEs, we had ~2h of processing - very poor performance for a single ETL step. Our W_ASSET_D has ~150M records today.
    Some questions for the above:
    - is this an expected average execution duration for this number of records?
- record-by-record updates are not optimal; this could easily be overcome with a BULK COLLECT/FORALL method. Is there a way to optimize the method used by Informatica, or do we need to write our own PL/SQL script and run it in DAC?
    Thanks in advance,
    Guilherme
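
For illustration, the BULK COLLECT/FORALL method mentioned above would look roughly like the sketch below. The cursor filter and the close-out values are hypothetical, based only on the SCD system columns named in the post:

DECLARE
  TYPE t_rowid_tab IS TABLE OF ROWID;
  v_rowids t_rowid_tab;

  CURSOR stale_cur IS
    SELECT ROWID
    FROM W_ASSET_D
    WHERE CURRENT_FLG = 'Y';  -- hypothetical filter for the history rows to close
BEGIN
  OPEN stale_cur;
  LOOP
    FETCH stale_cur BULK COLLECT INTO v_rowids LIMIT 50000;
    EXIT WHEN v_rowids.COUNT = 0;

    -- One round trip per 50,000-row batch instead of one per row.
    FORALL i IN 1 .. v_rowids.COUNT
      UPDATE W_ASSET_D
      SET EFFECTIVE_TO_DT = SYSDATE,  -- assumed close-out values
          CURRENT_FLG     = 'N'
      WHERE ROWID = v_rowids(i);

    COMMIT;
  END LOOP;
  CLOSE stale_cur;
END;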

  • Mbpr poor performance in bootcamp, return?

I'm still getting poor performance in Boot Camp after the updates, especially when using an external monitor. Is this a hardware or software issue?

Does it work OK in Mac OS X?
--Yes: then it's not a hardware or Mac OS X software problem.
Does it work OK in Windows?
--Yes: all is well.
--No: it's a Windows software problem, since the hardware worked fine in Mac OS X.
Have you re-installed Windows?

  • Poor Performance using Generic Connectivity for ODBC

Hi my friends.
I have a problem using Generic Connectivity: I need to update 500,000 records in MS SQL Server 2005, and I'm using Generic Connectivity for ODBC. In my Oracle database, I have created a DB_LINK called TEST that points to the MS SQL Server database.
Oracle Database: 10.2.0.4
MS SQL Server version: 2005
The time to update 1,000 records in MS SQL Server using DBMS_HS_PASSTHROUGH is ten minutes. This is poor performance.
This is the PL/SQL:
DECLARE
  c  INTEGER;
  nr INTEGER;
  CURSOR c_TEST IS
    SELECT "x", "y"
    FROM TEST_cab
    WHERE ROWNUM <= 1000
    ORDER BY "y";
BEGIN
  FOR cur IN c_TEST
  LOOP
    c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@TEST;
    DBMS_HS_PASSTHROUGH.PARSE@TEST(
      c,
      'UPDATE sf_TEST_sql
       SET observation = ?
       WHERE department_id = ?
       AND employee_id = ?');
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST(c, 1, 'S');
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST(c, 2, 'N');
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST(c, 3, 'ELABORADO');
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST(c, 4, 'PENDIENTE');
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST(c, 5, cur."x");
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST(c, 6, cur."y");
    nr := DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@TEST(c);
    DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@TEST(c);
    COMMIT;
  END LOOP;
END;
Can you help me improve the performance of updating 500,000 records one by one in MS SQL Server? I would also like to know the advantages of using Oracle Transparent Gateway for Microsoft SQL Server.
Or can you suggest another solution?
    Thanks.

    Hi,
There are no real parameters to tune the gateways. You should turn on gateway debug tracing and check the SQL that is being sent to SQL*Server, to make sure it is update statements and nothing else.
    If this is the case then the time taken will be down to various factors, such as network delays, processing on SQL*Server and so on.
How long does it take to update the same number of records directly on SQL*Server, without Oracle or the gateway involved, or if you use another non-Oracle client to do the same updates across a network (if a network is involved)?
    It may be possible to improve performance by changing the HS_RPC_FETCH_REBLOCKING parameter, so have a look at this note in My Oracle Support -
    Tuning Generic Connectivity And Gateways (Doc ID 230543.1)
    Regards,
    Mike
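
Independent of the gateway parameters, note that the posted loop opens, parses, and closes the passthrough cursor once per row. A sketch of the same loop that parses once and only re-binds and executes per row is below; it assumes the three placeholders map to observation, department_id and employee_id, and that the extra bind calls in the original were pasted in error, so the exact bind values are assumptions:

DECLARE
  c  INTEGER;
  nr INTEGER;
BEGIN
  c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@TEST;
  -- Parse once; the statement stays prepared for every execution below.
  DBMS_HS_PASSTHROUGH.PARSE@TEST(
    c,
    'UPDATE sf_TEST_sql
     SET observation = ?
     WHERE department_id = ?
     AND employee_id = ?');
  FOR cur IN (SELECT "x", "y"
              FROM TEST_cab
              WHERE ROWNUM <= 1000
              ORDER BY "y")
  LOOP
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST(c, 1, 'S');      -- assumed new value
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST(c, 2, cur."x");  -- assumed key mapping
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@TEST(c, 3, cur."y");
    nr := DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@TEST(c);
  END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@TEST(c);
  COMMIT;  -- one commit for the batch instead of one per row
END;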

  • Poor performance of new SQL Azure Standard database

This is not new information. The revised SQL models (basic|standard|premium) perform poorly compared to the earlier web|business databases. We are talking orders of magnitude - over 4 minutes to perform an update vs. 19 seconds (UPDATE Invoice SET SalesOrderID = O.SalesOrderID FROM Invoice INNER JOIN SalesOrder AS O ON Invoice.InvoiceID = O.InvoiceID, for 196,043 rows).
Microsoft is saying we can only use the web database until September 2015. Moving to the new model (we tried Standard S2) would cause the project to fail.
There are numerous posts on the Internet identifying this problem. How do we get Microsoft's attention? This is an Azure killer. Fortunately for us, there are a number of other hosting solutions available.
If this problem is not resolved in the next few months, we will be forced to abandon Microsoft Azure!
    Jim Rand

Our application is a desktop application that communicates with the web role using a single WCF call. In the server pipeline, a call is made to a method that looks like this (all the error trapping is removed here for brevity):
public static Response Process(Request request)
{
    DateTime startDate = DateTime.UtcNow;
    Agents.Agent agent = Agents.AgentFactory.GetAgent(request);
    Response response = agent.ProcessRequest();
    response.ServiceTime = DateTime.UtcNow - startDate;
    return response;
}
While building this application over the last year, we did occasional performance testing, with the Windows client reporting to the server on logout the mean service time for a complete session. Quite frankly, I was amazed at the performance. While slightly slower than the development machine, the performance was acceptable from the user perspective over the Internet.
Not so anymore. The mean service time on the Azure server has increased dramatically, resulting in timeouts.
We will be sticking with the Web edition for one more month during development. At that time, we will switch to Premium (P1) for user acceptance testing. It should be interesting to see what the mean, median and standard deviation session server statistics are.
    The performance of SqlAzure web edition is no longer acceptable.  I sure hope Premium(P1) makes it.
    Jim Rand

  • Poor performance of WD Abap/ Adobe

    Dear sirs,
I would like to know if any of you have experienced very poor performance of WD ABAP with Adobe interactive forms. Our client has paid for a 2-3 page interactive form in WDA and is complaining about very poor performance. As a result, no users are using this application.
Can anybody point out what the problem could be? A development problem? A Basis issue? Any experience related to WDA Adobe performance? Thank you all, Otto

Update: an SAP OSS message was opened regarding this problem.
We got a list of patches to update, notes to apply, etc. All was done, applied, patched. The performance didn't get better; if anything it gained an extra percent or two, but nothing that would make the customer less angry.
The result: this technology is promising, but a) it needs a strong client PC and b) it will get better (I hope it gets better soon).
Our Basis team checked all the timings (of the actions that have to be done to load/use the app) and the memory needed, both on server and client. On some client PCs, Adobe Reader alone took half a minute (and more, not less) just to start. If you add the time for WD, for WD/Adobe communication and for the data transfer, the time to start working with a WD ABAP Adobe app can be more than a minute. That is not very usable.
    Otto

  • Illustrator CS6 12.0.1 Winx64: Crashing regularly while scrolling, general poor performance

    Hey,
I am creating a simple reader app for Windows 8. I use very simple shapes, no appearance effects, many text styles, a few artboards and a few slices. The overall performance is very poor when scrolling around the document and dragging objects around. Changing text (typing) also performs poorly.
On top of the poor performance, Illustrator regularly crashes when scrolling the document using the scroll bars or hand tool. It just dies, with a crash report window popping up. This happens several times a day.
I never had problems with CS5. All Mercury-run apps suffer from instability; Fireworks is dead due to out of memory... I mean, come on Adobe!

    Uninstall everything again, run the cleaner tool and then reinstall. Before trying to open the applications, update all apps to their most recent version.

  • CS6 poor performance

    Hi.
    I have a workflow that involves opening several raw files into Camera Raw via Bridge, adjusting the settings, then from Bridge I use the command to open files into Photoshop layers. While that is happening in Photoshop, I go back to Bridge and open the next several files in Camera Raw and make adjustments. When done, the first lot of files will be in layers in Photoshop ready for me to start layer blending.
This worked fine in CS5, but in CS6 it all goes wrong after a few batches of files. Camera Raw is slow to respond to adjustment of sliders and slow to update to changes in settings. Then I find that Photoshop has aborted the loading of files into layers: I end up with one layer with an empty layer above it. I close this file, but when I quit Photoshop I'm prompted to save a file that I cannot see and that is not listed as an open document under Photoshop > Window.
    This is a show-stopper for me, and have reverted to CS5 to get work done. Hoping to find a solution.
    Running a MacPro, with 2 x 2.66 GHz Quad Core Intel Xeon processors, 24GB RAM, standard ATI Radeon HD 4870 card that came with the Mac.

A lot of the poor performance of CS6 is linked to the video card and driver. CS6 relies a lot more on the video card than CS5 and also has a new graphics engine. It may take a little longer for ATI and NVIDIA to get the drivers configured correctly. The latest 12.4 from ATI is worse than the 12.2. Perhaps the next update will get it right.

  • BI4 - Poor Performance

    Hi,
I have a problem updating several reports, which either time out or take ages to respond when making simple changes. It doesn't seem to matter which mode you are in while making a change (data or structure mode), nor the tool used (Webi or Rich Client).
I have read of users in other forums reporting similar problems, which result in an "unable to get the first page of the report" error.
To ensure it wasn't an issue inherited from a previous version of BO (these reports were originally written in BOXI 3.1), I recreated the report from scratch, only to hit the same issue when populating the report with various formulas.
When this occurs (i.e. the "unable to get the first page of the report" error), I am forced to save and close the Rich Client and then have to re-open the file each and every time.
We are currently using Business Objects BI4.0 SP6 Edge. These reports contain some 600+ variables, yet they never caused this issue in the older version of BO.
Please can someone suggest a solution to this issue, as it is taking me much longer to make simple updates to these reports than it ought to.
    Cheers
    Ian

    Hi Farzana,
Thanks for your response. Yes, I had read this on a variety of forums and, due to the poor performance, wrote the report from scratch again.
First, I built the structure of the report and saved it as a template. No issues. Then I added the queries and variables. No issues. It was only once I had populated the report with the formulas/calculations (after about the halfway point) that I started to detect performance issues again.
This forced my hand, and I used RESTful Web Service calls to complete the rest; otherwise it would have been a painful exercise. The report contains some 600+ variables and 750+ cells populated with formula calculations, so it is a busy report.
I would have thought others with complex reports would have reported similar performance issues, as this worked fine in our old BOXI 3.1 environment.

  • Adobe Reader XI Shows Poor Performance.

    Hi there!
I've been using Acrobat for a few years now, but Adobe Reader XI is showing poor performance rendering a big PDF file of about 2.8 MB (it's an MS Project exported document). I have tested it with other PDF viewers and it works fine. An Adobe Reader bug, maybe? FYI, it completely freezes the computer. (I'm using a Core i3, Windows 7 Pro x64, 4 GB RAM, so the PC is not the problem.) I have tested on 2 PCs and got the same results.

I just posted the below on another discussion, but read it and give it a try.
I am running Windows 7 64-bit with Adobe Reader 11.0.3 and am experiencing the same issue. Opening PDFs takes a long time; there is a delay and it freezes for a period of time, then it unfreezes and becomes available.
What I have found to be the issue in my case is that Protected Mode is enabled. If you open Adobe Reader, click Edit > Preferences, and go to the Security (Enhanced) section, you can turn off "Enable Protected Mode at Startup" by unchecking the check box.
After that, give it a try and see if that doesn't clear up the issue.
I hope Adobe provides an update or new release to address this issue, as it seems to be a problem for quite a lot of folks. Not sure what is causing it, but it shouldn't be that way. Our users are chomping at the bit to turn it off, but I am telling them not to for now, as I hope there is an update soon to fix and address the problem.
    And don't tell me to turn it off and keep it off either as that is NOT a solution.
    Anyway - hope it helps.
