Why such poor performance?

Hi. New Arch Linux user here, but not new to Linux (I'm no expert, though; I've merely tinkered with it here and there over the past 7 years or so).
I have this old laptop that I've installed Arch on in order to give it some new life. While it's old, it's not exactly ancient either, and it's a system I thought Linux would run well on, at least for modest web surfing, paper writing and the like.
Dell Inspiron 8200
Pentium 4, 1.6 GHz
1 GB RAM
Nvidia GeForce4 Go 440 graphics (64MB) -- using Nvidia 96xx driver, because based on what I read it seems open-source 3D drivers aren't usable/stable yet
Using XFCE4 for a desktop environment.
I'm having an issue with performance on the system.  I check with top and it shows Xorg using about 20% of the processor while idle.  That goes up when I'm moving windows or if I make top refresh faster than the default.  It seems like pretty much anything having to do with X redrawing hammers the CPU.  The biggest offender would be AbiWord.  I was working on a document and scrolling takes the CPU usage all the way to 100%.  I thought it might be compositing, so I turned it off and performance is even worse.  Xorg uses about 30% CPU idle and there is about a 2 second delay when I try to minimize/maximize a window.  I have no idea why it would actually run better with compositing on.  Of course, the performance with compositing isn't impressive either.
Also, although this may be expected since general 2D performance seems bad... 3D/OpenGL performance is terrible.  I installed a few games and they run like crap.  For instance, SuperTux runs at about half speed at best.  I am almost certain that it isn't that the computer's too weak for it, because I remember running it a few years ago on my Pentium II/450.  I even tried some games in WINE and they lag as well (I know WINE imposes a bit of a performance penalty but these games never really pushed the CPU too badly under Windows XP)
I'm not sure if I misconfigured something or if this laptop just hates me.  Any suggestions?
EDIT 4/1/2011 - I'm not bumping this thread because of its age, but if any searchers in the future come across this, I'd like to note that I discovered this issue is not Linux-related.  My laptop has a faulty heat sensor that underclocks the processor to roughly the equivalent of a 233 MHz Pentium when the temperature rises even a tiny bit.  However, when it does this the operating system will still report that it is running at the full 1.6 GHz, as it's basically just overheating protection kicking in.  The system thinks it is going to catch fire or something and freaks out.  It is apparently a common flaw with these machines as they age.  I am able to maintain decent performance by keeping the temperature low (of course this means running the fans all the time on low as a minimum, but I'm finally replacing this laptop in a few months, so I am not concerned about its well-being).  If it does underclock the processor for some reason, pressing Fn+Z will temporarily return it to the original speed, and it should stay there provided the temperature doesn't rise again.
Last edited by MrKsoft (2011-04-01 18:00:05)

lagagnon wrote:
MrKsoft wrote:I've run a very usable Ubuntu/GNOME/Compiz based system on my P2/450, 320MB RAM, with a Radeon 7500 before, and that's even older hardware, with bulkier software on top of it.
I'm sorry but I find that very hard to believe. I work with older computers all the time - I volunteer with a charity that gets donated computers and we install either Puppy or Ubuntu on them, depending on vintage. On a P450 with only 320MB almost any machine of that vintage will run like a dog with Ubuntu and Compiz would be a no go. It would be using the swap partition all the time and the graphics will be pretty slow going.
Hey, believe it: http://www.youtube.com/watch?v=vXwGMf141VQ
Of course, this was three years ago.  Probably wouldn't go so well now.
To start helping you diagnose your problems please reboot your computer and before you start loading software show us the output to "ps aux", "free", "df -h", "lspci" and "lsmod" so we can check your system basics. You could paste all those over to pastebin.ca if you wish.
Here's everything over at pastebin: http://pastebin.ca/2005110

Similar Messages

  • AE CS4 - why such poor RAM preview on my FAST machine?

    Hi all, I just installed After Effects CS4 - I have a smoking fast machine: quad-core Xeon 3.20 GHz, hyperthreading turned on, Nvidia Quadro FX 3800 with the current driver, 12 GB DDR3 RAM, Windows 7 64-bit ... can you PLEASE help me troubleshoot a problem: even on the simplest compositions, a moving picture or a moving piece of type, the RAM preview stutters along, the current time indicator hops along at a choppy pace, images in the frame don't look like they're moving smoothly, and all I get is about 6 seconds of RAM preview ... it seems to me that I used to get better performance on a MUCH MUCH slower machine under Windows XP ... I have looked around the forums and seen the advice on multiprocessor options and the like, but nothing I do seems to help ... do any of you have any advice for me to improve my performance???  If you have any questions about my setup or preferences, let me know; I will be on this until I get what I expect out of my (otherwise incredibly fast) machine

    Hi Todd, thanks again for your reply ... FYI, I bought the FX3800 for Avid editing, for which it is the qualified video card, and it supports distributed processing in the Avid along with my DX HD hardware BOB - I had hoped it would work well with After Effects too, but I do understand that Adobe is very transparent about the fact that, as long as your card has OpenGL support, it's probably best to sink your $$$ into more RAM or a faster CPU ... but to stay on topic, I am just very stunned that I am not getting better playback with my system ... I have the most recent Nvidia drivers ... I have 2 NEC 2490WUXi2 monitors, each connected through DVI-D to my video card, each running at its native 1920x1200 res, 59 Hz ... the programs I have installed are Adobe CS4 Production Premium, Win7 64-bit, MS Office 2007 Enterprise, the latest version of QuickTime, Media Player version 11, and NERO 9 ... you don't think that one of these video player apps (like Media Player or QT) is messing with my screen performance, do you?  What about Nero 9 - have you heard of any compatibility or display problems associated with this app?  Would uninstalling any of these programs be a troubleshooting technique that you'd attempt?
    Finally, when you say "an issue with a bottleneck of drawing to the screen", what do you mean exactly? 'Cause this kind of sounds like a good description of what I am seeing ... what might be the cause of such a bottleneck? I am willing to try anything ...
    Thanks, sir, for your reply!
    Derek Reid
    Trader Multimedia Inc.
    www.tradermm.com
    Date: Tue, 30 Mar 2010 21:46:43 -0600
    From: [email protected]
    To: [email protected]
    Subject: AE CS4 - why such poor RAM preview on my FAST machine?
    Just one thing to cut down a frequent misconception:
    Your nice graphics card doesn't do much for After Effects performance. In fact, trying to use OpenGL for rendering (including rendering of previews) can cause problems.
    I recommend turning OpenGL rendering off for all but interactions. I.e., choose the http://help.adobe.com/en_US/AfterEffects/9.0/WSAF696587-2D81-42e2-B248-4C5C2B7D3614a.html and turn off http://help.adobe.com/en_US/AfterEffects/9.0/WS3878526689cb91655866c1103a4f2dff7-79e8a.html. OpenGL is not compatible with Render Multiple Frames Simultaneously multiprocessing.
    I'm not saying that OpenGL is your problem. Rather, because you mentioned your nice graphics card, I just had to stress that you need to forget about that with regard to After Effects. It's not really relevant.
    It may be that the jerkiness is an issue with a bottleneck of drawing to the screen. (I don't know, but it's possible.)
    Your hard disk being too slow wouldn't be an issue with RAM previews, since the frames are all cached in RAM.

  • Why such poor support for iCal?

    On the first two pages of this forum, only TWO questions are marked as answered (My own question has been viewed 36 times - no responses at all). Many of the questions refer to fairly serious 'buggy' behaviour by the iCal application. By doing a little Googling, I have found many similar complaints elsewhere, going back several years.
    It seems that Apple has a rather unsatisfactory application here. Why is this not being fixed? Does anybody know of any impending major updates that will produce a calendar app without the kind of problems encountered on this forum?

    These forums are user-to-user. Apple's involvement is limited to providing the platform and monitoring for adherence to the terms of use (which your post probably violates!).
    Marking of questions as answered is something that has to be done by the question poster. One of the great frustrations of answering questions here is that original posters frequently fail to give any feedback at all on answers, so other visitors don't know if the answer could help them, and the helper doesn't learn how to give better help.
    As for unanswered questions, you must remember that the answerers are just ordinary users. If I, for example, had seen your original question, I would probably have decided not to answer it (I have now answered it with a request for further info) as I don't use invitations myself and so have limited knowledge with which to help you. If someone answers a question, and is not able to help, there is a smaller chance that someone who might know the answer will look at the question.
    As for bugs, two points. First, there are millions of users for whom iCal performs satisfactorily. Second, this is not a bug reporting site, so if you don't use the OS X Feedback site, or better the developer site, it doesn't register on the developer's radar.
    AK
    Message was edited by: Austin Kinsella1

  • Why such horrible performance and unwillingness for Verizon to help?

    I have FiOS Internet, TV, five cell phones and terrible performance. I can barely connect from one room to another using wireless. Occasionally it works, often at substandard strength. When I come home from people's houses, the connections there are great, even when the connection is from the neighbor next door! I have complained several times. I have BEGGED to have a modern router put in. The router is the original antiquated machine they originally put in. I think it runs on vacuum tubes. Tech support will not swap it out. Meanwhile my neighbor across the street mentions a little problem and, boom, they get a new router. It is still an 802.11g. They want you to pay for an n-level router. I pay $600 a MONTH for crummy service with no willingness to help out.
    Can anyone tell me if Cablevision has better service? I just got back from my cousin's, and one of their service guys came over, spent two hours checking things out and went out of his way to verify the service was up to or better than standard. I was very impressed. The Verizon guys seem not the slightest bit interested in helping, but they did want to prove they were smarter than anybody else.
    Any advice or recommendations would be very welcome. Please help. I give up on these corporate thieves. Please don't hesitate to advise. Thanks.
    Bill
    Email info removed as required by the Terms of Service.
    Message was edited by: Admin Moderator


  • Why such poor streaming on ATV3 with Netflix?

    I have the latest ATV3 hooked up via Ethernet to my AirPort Extreme with 60 Mbps download speeds. There is a lot of banding and compression artifacts when streaming 1080p material on Netflix. iTunes HD and Vimeo HD material, along with Trailer app HD material, look fine with no such problems. When watching the same Netflix HD material on my Boxee Box, I see no such banding or artifacts. There seem to be issues still ongoing with streaming Netflix material on the ATV.


  • Why such poor quality saturation and contrast with NEFs?

    Having used Lightroom since the beginning, I'm at my wits end.
    Ever since Lightroom 1, I've noticed that ACR does a really, really bad job handling NEFs when you push contrast and saturation, especially in regard to certain colors (reds, yellows, and blues).
    I thought for a long time it was my side, originally using a consumer DSLR (D50), but it affects every NEF I've ever imported, from any Nikon body up to D4.
    The saturation gets chunky with little effort (even after attempting to correct via the Calibration panel), the same when pushing contrast. Not pushing it beyond the files' limits, just a simple s-curve.
    When importing other RAW formats, this is less often the case.
    It's really frustrating to only be able to rely on Lightroom doing an adequate job 75% of the time, and it makes me look bad in front of clients.
    Has/ does anyone else run into this? Does anyone have any ideas, or tips as to avoiding this?
    I really hate the UI of C1, but since it has started implementing cataloging I am getting more and more enticed to switch over completely, as even when I push things waaay too contrasty or overly saturated the results are still miles better.
    I'm an unabashed Adobe fanboy with fond memories of the introduction of Levels (Oooooo) back in the day, but I'm feeling I've been pushed to a point where I have to default to whatever program does a better job handling RAW info, and ACR is not cutting it at a professional level.

    Well, I don't know how to explain it any better, I feel I put it pretty straight forward, however here's an example from a small lookbook. The red is not accurate, everything else is.
    The only thing I've ever come up with is possibly some 'Out of Gamut Warning' option that I haven't found to disable, twenty_one. However, no, I do know Lightroom VERY well, just not 100%, maybe 95%. When Soft proofing for monitor, all colors in the example are within gamut in sRGB, AdobeRGB, and my calibrated monitor color space.
    I've almost exclusively run into this with NEFs; I've used Lightroom with Canon, PhaseOne backs, Leaf backs, Fujis, and more. Oh, and I am a pro, and have worked with a wide range of clients from Harper's, Elle, and BEBE, to RED, Playboy and more - not to toot my own horn, but to establish that I'm not green.
    I'm on Lightroom 5, current build.
    Typically I stay with the Adobe Standard Profile or Portrait, but none seem to help this particular issue.
    Not sure what sort of NEFs Nikon produced in '76, but I too have been with Adobe since Photoshop 2. I'm viewing on a calibrated monitor in good viewing conditions. I don't print; this is most commonly an issue with lookbooks and catalogs when they need perfect color, and the issue remains when viewed on other monitors. I've also tested images on my studio's Eizo CG276 and run into nearly exactly the same results (nearly, in that there are pixels of difference; however, it's a different size than my normal editing monitor).

  • Apple maps has received a poor performance rating just after introduction of the iPhone 5. I am running google maps app on the phone. Siri cannot seem to get me to a specific address. Where does the problem lie? Thanks.

    Apple maps has received a poor performance rating just after introduction of the iPhone 5. I am running Google Maps app on the phone. SIRI cannot seem to get me to a specific address. Where does the problem lie? Also can anyone tell me the hierarchy of use between the Apple Maps, SIRI, and Google maps when the app is on the phone? How do you choose one over the other as the default map usage? Or better still how do you suppress SIRI from using the Apple maps app when requesting a "go to"?
    I have placed an address location into the CONTACTS list and when I ask SIRI to "take me there" it found a TOTALLY different location in the metro area with the same street name. I have included the address, the quadrant (NE), and the ZIP code in the CONTACTS list. As it turns out, no amount of canceling the trip or relocating the address in the CONTACTS list would prevent SIRI from taking me to this bogus location. FINALLY I typed in Northeast for NE in the CONTACTS list (NE being the accepted method of defining the USPS location quadrant), canceled the current map route, and it finally found the correct address. This problem would normally not demand such a response from me to have it fixed, but the address is one of a hospital in the center of town, and this hospital HAS a branch location in a similar part of town (NOT the original address SIRI was trying to take me to). This screw-up could be dangerous if not catastrophic to someone who was looking for a hospital location fast and did not know of these two similar locations. After all, the whole POINT of directions is not just a whimsical pastime or convenience. In a pinch people need to rely on this function. OR, are my expectations set too high?
    How does the iPhone select between one app or the other (Apple Maps or Google Maps) as it relates to SIRI finding and showing a map route?
    Why does SIRI return an address that is NOT the correct address nor is the returned location in the requested ZIP code?
    Is there a known bug in the CONTACTS list that demands the USPS quadrant ID be spelled out, as opposed to abbreviated, to permit SIRI to do its routing?
    Thanks for any clarification on these matters.

    Siri will only use Apple Maps; this cannot be changed. You could try Google voice in the Google app.

  • CRIO Poor Performance - Where have my MIPS gone?

    I have a cRIO based system that is used to control a motor for a particular application. The application has been developed and enhanced over the years and is currently using about 50% of the CPU. The RT Controller is a cRIO-9012. I have recently been asked to add a 1 kHz (or more) loop function to the cRIO application. I can only achieve a maximum loop rate of about 200 Hz. When I told the customer this, he asked how fast my controller was, to which I replied 400 MHz. “Where is all the CPU power going?”, he asked. He's now thinking of replacing the cRIO Controller with an mbed with C code to get the performance he requires, which is a pity since I'd like to continue developing the application in LabVIEW.
    Following on from his question, “where is all the CPU power going?”, I decided to write a simple application to test the cRIO 9012's performance. Below is the code I used to perform the evaluation:
    With just the bottom loop running, which reports CPU load over the cRIO Controller's serial port, I have a CPU load of 7.0%. This is the baseline.
    I then added the "execution" loops shown above the bottom loop, one at a time, and recorded the CPU load. Here are the results:
    1 Loops - 18.3% load (11.3% extra)
    2 Loops - 29.4% load
    3 Loops - 45.5% load
    4 Loops - will not run!
    I have two problems/concerns.
    Concern 1
    The cRIO 9012 has a 400 MHz processor, which has 760 MIPS of processing power. The rate of the simple loop is 2 kHz and each loop takes about 11% of the CPU power. That is, each loop uses up 83.6 MIPS and each loop iteration uses up 41,800 instruction cycles. Where are the 41,800 instructions going? Even if there was a context switch after each loop iteration, this would account for 150 to 200 instruction cycles. Each loop is only doing an integer increment, timing check, compare and branch. These should only take up about 4 instruction cycles (8 if you want to be generous). If this was programmed in C, you could get bare metal performance that allows a single loop rate of something like 40 MHz or with an RTOS something like 2 MHz. Instead, my maximum loop rate is something like 20 kHz.
    Where are the "wasted" 41,600 instructions per loop iteration going? This is only 0.5% efficient!
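    For anyone who wants to sanity-check that arithmetic, here is a small standalone C++ sketch of the cycle-budget calculation using the figures quoted above; the 200-cycle budget is my rough assumption for an increment/timing-check/compare/branch loop body, not a measured number.
    #include <cstdio>

    int main() {
        // Figures taken from the measurements above (cRIO-9012, one simple 2 kHz loop).
        const double mips          = 760.0;   // rated processing power, million instructions/s
        const double cpu_fraction  = 0.11;    // extra CPU load added by one loop (~11%)
        const double loop_rate_hz  = 2000.0;  // loop iterations per second

        const double loop_mips      = mips * cpu_fraction;            // ~83.6 MIPS consumed
        const double instr_per_iter = loop_mips * 1e6 / loop_rate_hz; // ~41,800 instructions

        // Assumed budget for the loop body itself (increment, timing check, compare, branch).
        const double budget = 200.0;

        std::printf("instructions per iteration: %.0f\n", instr_per_iter);
        std::printf("efficiency vs %.0f-cycle budget: %.2f%%\n",
                    budget, 100.0 * budget / instr_per_iter);
        return 0;
    }
    Running it prints roughly 41,800 instructions per iteration and about 0.5% efficiency, matching the numbers above.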
    Concern 2
    Why does adding the 4th "execution" loop cause the application to halt (or at least not send data over the serial port)?
    I like programming on the desktop using LabVIEW and I like programming the FPGA using LabVIEW. The RT Controller is, however, becoming an embarrassment. Is it really the case that the best additional loop rate I can add to an existing application that already uses 50% of the cRIO Controller's CPU is only 200 Hz?

    Thanks for all the feedback.
    MajorTom,
    Changing to timed loops instead of while loops makes the performance worse. For 2 Loops, rather than a CPU load figure of 29.4% (22.4% after removing base load) it shoots up to 78.4% (71.4% after removing base load). That is, it runs about 3 times slower, which takes the "efficiency" down from 0.5% to 0.15% efficient.
    TimothyA,
    I tried making the "execution" loops subVIs (with Preferred Execution System = other 1, with the top level = other 2) and that solved the four "execution" loops problem. Thanks, one of my concerns is now resolved (I'll mark it as such once the conversation quietens down).
    The execution time is still large with the 2 loops taking 29.1% of the CPU, which is the same as before.
    I tried using the "reduced" us wait next multiple CLN.vi, but it appears to be in LabVIEW 2014 and I'm using LabVIEW 2013. Any chance of resaving it as LabVIEW 2013?
    crossrulz,
    Thank you for pointing me to the table in the CompactRIO Developer's Guide. I assume you’re talking about Figure 3.5, Priorities and Execution Systems available in LabVIEW Real-Time. I didn't realise that Execution Systems are limited in the number of threads (it would be nice to get a warning when this happens). This will make for interesting reading and experimentation.
    All,
    I’ve tried various loop rate limiting methods, including Timed Loops and While Loops with RT Wait Until Next Multiple, RT Wait, Wait Until Next Multiple and Wait, and the best performance is from the two RT waits. The worst performance was from the Timed Loops.
    In summary, I’ve solved the problem regarding how to run more parallel loops, but I still get very poor performance, with each loop iteration taking about 41,800 instruction cycles when it “should” take more like 200. All I want is to be able to run a loop at 1 kHz or more on my cRIO-9012 when I already have an application that takes up about 50% of the CPU. I have the threat of the code being moved to an mbed using C, which I’m trying to resist. Surely a 400 MHz controller can run a 1 kHz loop and not take up more than 10% of the CPU.

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
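    To make that layout concrete, here is a minimal, hedged C++ sketch of how one primary database and one of its secondary indexes might be opened inside a single database file with the Berkeley DB C++ API. The file name, the record layout assumed by the key extractor, and the callback itself are illustrative; they are not the actual SSW source.
    #include <db_cxx.h>
    #include <cstdint>

    // Hypothetical record layout: the first 4 bytes of each primary data item
    // hold the hit count that the urls.hits secondary index is keyed on.
    static int extract_hits(Db *, const Dbt *, const Dbt *data, Dbt *result) {
        result->set_data(data->get_data());
        result->set_size(sizeof(uint32_t));
        return 0;
    }

    int main() {
        // Private, single-process environment with a memory pool (the BDB cache).
        DbEnv env(0);
        env.open("./state", DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0);

        // Primary database: record ID -> URL record, stored inside one file.
        Db urls(&env, 0);
        urls.open(nullptr, "webalizer.db", "urls", DB_BTREE, DB_CREATE, 0);

        // Secondary database: hit count -> record ID, kept in the same file.
        // Sorted duplicates are needed because many URLs share a hit count.
        Db urls_hits(&env, 0);
        urls_hits.set_flags(DB_DUPSORT);
        urls_hits.open(nullptr, "webalizer.db", "urls.hits", DB_BTREE, DB_CREATE, 0);

        // Keep the secondary index up to date automatically on every put().
        urls.associate(nullptr, &urls_hits, extract_hits, 0);

        urls_hits.close(0);
        urls.close(0);
        env.close(0);
        return 0;
    }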
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from source on Win2K3 Server (release version). The test machine is a 1.7 GHz P4 with 1 GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like the one described above at a speed of 20K records/sec.
    BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And that's all after most of the analysis has been done, just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. First, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing all but stopped.
    Then I flipped options and used DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance and even though OS cache was filling up, it was being flushed as well and, eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options lead to never-ending tests.
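    For reference, here is a hedged sketch of how the cache size and the DB_DIRECT_DB/LOG and DB_DSYNC_DB/LOG options mentioned above are applied to the environment through the BDB 4.6 C++ API; the 256 MB figure is only an example, not the cache size SSW actually uses.
    #include <db_cxx.h>

    // Illustrative only: apply the buffering-related options discussed above
    // to a Berkeley DB 4.6 environment before it is opened.
    void configure_env(DbEnv &env) {
        env.set_cachesize(0, 256 * 1024 * 1024, 1);      // e.g. 256 MB BDB cache, one region
        env.set_flags(DB_DIRECT_DB | DB_DIRECT_LOG, 1);  // bypass the OS buffer cache
        env.set_flags(DB_DSYNC_DB | DB_DSYNC_LOG, 1);    // O_DSYNC-style writes to db/log files
    }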
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging them from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed up to 6-8 times with these two techniques:
    1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use these secondary databases are generated. This improved speed from 4K rec/sec to 14K rec/sec.
    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I had these queries:
    1. What was the % of clean pages that you specified?
    2. At what interval were you calling this thread to call memp_trickle?
    This would give me a rough idea about how to tune my app. Would really appreciate it if you can answer these queries.
    Regards,
    Nishith.
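    For readers who want to see roughly what those two changes look like in code, here is a minimal, hedged sketch against the Berkeley DB 4.x C++ API. The 10% clean-page target and the one-second interval are placeholders (the thread above doesn't state the values actually used), and the thread wiring is simplified.
    #include <db_cxx.h>
    #include <atomic>
    #include <chrono>
    #include <thread>

    // Technique 1: a background thread periodically asks the memory pool to keep
    // a percentage of the cache pages clean, so the main thread's Db::put() calls
    // don't stall flushing dirty pages themselves.
    void trickle_loop(DbEnv *env, std::atomic<bool> *stop) {
        while (!stop->load()) {
            int nwrote = 0;
            env->memp_trickle(10, &nwrote);  // placeholder: aim for >= 10% clean pages
            std::this_thread::sleep_for(std::chrono::seconds(1));  // placeholder interval
        }
    }

    // Technique 2: instead of associating a secondary index up front, call
    // associate with DB_CREATE just before report generation, so the index is
    // bulk-built once rather than maintained on every insert during the run.
    void build_secondary_late(Db &primary, Db &secondary,
                              int (*extract)(Db *, const Dbt *, const Dbt *, Dbt *)) {
        primary.associate(nullptr, &secondary, extract, DB_CREATE);
    }
    The trickle thread would be started once after the environment is opened and stopped (and joined) before the environment is closed.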

  • Poor performance and high number of gets on seemingly simple insert/select

    Versions & config:
    Database : 10.2.0.4.0
    Application : Oracle E-Business Suite 11.5.10.2
    2 node RAC, IBM AIX 5.3
    Here's the insert/select; I'm struggling to explain why it's taking 6 seconds, and why it needs to get > 24,000 blocks:
    INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
      NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
      WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
      WIA.ITEM_TYPE = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          4           0
    Execute      2      3.44       6.36          2      24297        198          36
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.44       6.36          2      24297        202          36
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Also from the tkprof output, the explain plan and waits - virtually zero waits:
    Rows     Execution Plan
          0  INSERT STATEMENT   MODE: ALL_ROWS
          0   TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
          0    INDEX   MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      library cache lock                             12        0.00          0.00
      gc current block 2-way                         14        0.00          0.00
      db file sequential read                         2        0.01          0.01
      row cache lock                                 24        0.00          0.01
      library cache pin                               2        0.00          0.00
      rdbms ipc reply                                 1        0.00          0.00
      gc cr block 2-way                               4        0.00          0.00
      gc current grant busy                           1        0.00          0.00
    ********************************************************************************
    The statement was executed 2 times. I know from slicing up the trc file that:
    exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
    exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
    If I run just the select portion of the statement, using bind values from exe #2, I get small number of gets (< 10), and < 0.1 secs elapsed.
    If I make the insert into an empty, non-partitioned table, I get :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.01       0.08          0        137         53          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.01       0.08          0        137         53          25
    and same explain plan - using index range scan on WF_Item_Attributes_PK.
    This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.10         10         27        136          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.10         10         27        136          25
    So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
    I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
    further info on the objects concerned:
    query source table :
    WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
    WF_Item_Attributes tbl : non-partitioned, 160 blocks
    insert destination table:
    WF_Item_Attribute_Values:
    range partitioned on Item_Type, and hash sub-partitioned on Item_Key
    both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
    WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
    Bind values:
    exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
    exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
    The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
    thanks and regards
    Ivan

    hi Sven,
    Thanks for your input.
    1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
    2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
    3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
    ============= From DBA_Part_Tables : Partition Type / Count =============
    PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
    RANGE   HASH                 77 APPS_TS_TX_DATA
    1 row selected.
    ============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
    Partition Name       TS Name         High Value           High Val Len
    WF_ITEM1             APPS_TS_TX_DATA 'A1'                            4
    WF_ITEM2             APPS_TS_TX_DATA 'AM'                            4
    WF_ITEM3             APPS_TS_TX_DATA 'AP'                            4
    WF_ITEM47            APPS_TS_TX_DATA 'OB'                            4
    WF_ITEM48            APPS_TS_TX_DATA 'OE'                            4
    WF_ITEM49            APPS_TS_TX_DATA 'OF'                            4
    WF_ITEM50            APPS_TS_TX_DATA 'OK'                            4
    WF_ITEM75            APPS_TS_TX_DATA 'WI'                            4
    WF_ITEM76            APPS_TS_TX_DATA 'WS'                            4
    WF_ITEM77            APPS_TS_TX_DATA MAXVALUE                        8
    77 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_TYPE                                    1
    1 row selected.
    PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
    ============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
    Partition Name       SUBPARTITION_NAME              TS Name         High Value           High Val Len
    WF_ITEM49            SYS_SUBP3326                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3328                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3332                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3331                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3330                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3329                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3327                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3325                   APPS_TS_TX_DATA                                 0
    8 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_KEY                                     1
    1 row selected.
    from DBA_Segments - just for partition WF_ITEM49  :
    Segment Name                        TSname       Partition Name       Segment Type     BLOCKS     Mbytes    EXTENTS Next Ext(Mb)
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3332         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3331         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3330         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3329         TblSubPart        16112    125.875       1007         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3328         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3327         TblSubPart        16224     126.75       1014         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3326         TblSubPart        16208    126.625       1013         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3325         TblSubPart        16128        126       1008         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3332         IdxSubPart        59424     464.25       3714         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3331         IdxSubPart        59296     463.25       3706         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3330         IdxSubPart        59520        465       3720         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3329         IdxSubPart        59104     461.75       3694         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3328         IdxSubPart        59456      464.5       3716         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3327         IdxSubPart        60016    468.875       3751         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3326         IdxSubPart        59616     465.75       3726         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3325         IdxSubPart        59376    463.875       3711         .125
    sum                                                                                               4726.5
    [the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
    The Tablespaces used for all subpartitions are UNIFORM extent mgmt, AUTO segment_space_management; LOCAL extent mgmt.
    regards
    Ivan

  • Invalid block count and poor performance

    Hi - Can anyone help?
    Running an early 2011 13" MBP on ML. For a year to 18 months I've noticed poor performance at times, consistently slow boot (sometimes 5 minutes to usable), occasional lockups, and Spotlight often reindexes itself and slows the system (much more than on my previous MBP); I can't see anything that triggers it. I'm also getting recurring errors in Disk Utility of 'Invalid block count'; every time I'll repair the disk (booting into recovery), but after a month of use the error will reappear. About three months ago I erased and reinstalled ML; some of my data was moved over in Time Machine but I didn't restore from it. The erase and reinstall hasn't made any difference and I'm still seeing the same issues. Hard disk SMART status appears to be fine.
    Can anyone recommend any troubleshooting steps? I'm not sure what else to do, still under Applecare and was thinking maybe the HD isn't working correctly?
    Cheers!

    Performance.
    Activity Monitor – Monitor Performance Problems          
    Performance Guide
    Why is my computer slow
    Why your Mac runs slower than it should
    Slow boot.
    Startup - Slow Boot
    Startup - Slow Boot (2)
    Startup - Slow Boot (3)
    Startup Issues - Resolve
    Startup Issues - Resolve (2)

  • Safari hangs and poor performance in MBPR (Mid 2012)

    Safari hangs and poor performance in MBPR (Mid 2012)? OS X 10.10.2 is up to date

    Please answer as many of the following questions as you can. You may already have answered some of them. In that case, there's no need to repeat the answers.
    Back up all data before making any changes.
    Have you restarted your router and your broadband device (if they're separate) since you first noticed the problem? If not, do that now and see whether there's any change.
    If your browser is Safari, then from the Safari menu bar, select
              Safari ▹ Preferences... ▹ Privacy ▹ Remove All Website Data
    and confirm. If the Downloads button (with the icon of a downward-pointing arrow) is showing in the toolbar, click it and then click Clear in the box that appears. The download history will be removed. Any change?
    If you're running OS X 10.9 or later, select the Advanced tab in the Preferences window and uncheck the box marked
              Stop plug-ins to save power
    Any change?
    Quit and relaunch the browser. Any change?
    Enable guest logins* and log in as Guest. Don't use the Safari-only “Guest User” login created by “Find My Mac.”
    While logged in as Guest, you won’t have access to any of your documents or settings. Applications will behave as if you were running them for the first time. Don’t be alarmed by this behavior; it’s normal. If you need any passwords or other personal data in order to complete the test, memorize, print, or write them down before you begin.
    Test while logged in as Guest. Same problem?
    After testing, log out of the guest account and, in your own account, disable it if you wish. Any files you created in the guest account will be deleted automatically when you log out of it.
    *Note: If you’ve activated “Find My Mac” or FileVault, then you can’t enable the Guest account. The “Guest User” login created by “Find My Mac” is not the same. Create a new account in which to test, and delete it, including its home folder, after testing.
    Are any other web browsers installed, and are they the same? What about other Internet applications, such as iTunes and the App Store?
    If other browsers and Internet applications are also affected, follow these instructions and test. Any change?
    If Parental Controls is active for any user, please turn it off and test. Any change?
    If only Safari is affected, launch the Activity Monitor application and enter "web" (without the quotes) in the search box. If a process named "Safari Web Content" is shown in red or is using more than about 5% of a CPU, select it and force it to quit by clicking the X or Quit Process button in the toolbar of the window. There may be more than one such process. Any improvement?
    Follow the instructions in this support article. Any change?
    Open the iCloud preference pane and uncheck the box marked Photos, if it's checked. Any change?
    Are there any other devices on the same network that can browse the Web, and are they affected?
    If you can test Safari on another network, is it the same there?
    If you connect to your router with Wi-Fi and you can also connect with Ethernet, do that and turn off Wi-Fi. Any difference?

  • Skype crashing and poor performance

    Hello!
    I have a Lumia 625 with WP 8.1. My problem is that Skype has really poor performance on my phone. It crashes 6 times out of 10 on startup, and even if I manage to start it, the whole app is slow and laggy. Sometimes I can't even write a message, it's so laggy. Video calling is absolutely out of the question; it crashes my whole phone. I have no similar problems with other instant messaging apps nor with high-end games. There is something obviously using way more resources in the Skype app than it's supposed to. It's a simple chat program, why would it need so much resource?
    The problem seems to originate from the lower (512 MB) RAM size of my phone model, because I've experienced the same effect with poorly written apps that don't keep in mind that there are 512 MB RAM devices, not only 1 GB+ ones, and use too many resources.
    Please don't try to suggest to restart/reset the phone, and reinstall the app. Those are already behind me, and they did NOT help the problem. I'm not searching for temporary workarounds.
    Please find a solution for this problem, because it is super annoying, and I can't use Skype, which will eventually result in me leaving Skype.
    Solved!
    Go to Solution.

    When it crashes on startup it goes like:
    I tap the skype tile
    The black screen with the "Loading....." appears (default WP loading screen). Usually this takes longer than it would normally take on any other app.
    For a blink of an eye the Skype gui appears, but it instantly crashes.
    If I can successfully start up the app, it just keeps lagging. I start to write a message to a contact, and sometimes even the letters don't appear as I touch them, but appear much later altogether. If I tap the send message button the whole GUI freezes (it seems like it freezes until the contact gets my message). Sometimes the lag gets stronger, and sometimes it almost vanishes, but if I keep making inputs when the lag is strong, sometimes it crashes the whole app.
    When I first installed the app, everything was fine. But after a while this behavior appeared. I reinstalled the app, and it solved the problem temporarily, but after some time the problem re-appeared. I don't know if it's relevant, but there was a time when I couldn't make myself appear online all the time (when the app was not started). In that time I didn't experience the lags and crashes. Anyway, what I'm sure about is that the lags get worse with time. I don't know if it's because of use of the app (caching?), or the updates the phone makes to itself (a conflict?).
    I will try to reinstall Skype. Probably it will fix it for now. I hope the problem won't appear again.

  • Poor performance with WebI and BW hierarchy drill-down...

    Hi
    We are currently implementing a large HR solution with BW as backend
    and WebI and Xcelcius as frontend. As part of this we are experiencing
    very poor performance when doing drill-down in WebI on a BW hierarchy.
    In general we are experiencing ok performance during selection of data
    and traditional WebI filtering - however when using the BW hierarchy
    for navigation within WebI, response times are significantly increasing.
    The general solution setup are as follows:
    1) Business Content version of the personnel administration
    infoprovider - 0PA_C01. The Infoprovider contains 30.000 records
    2) Multiprovider to act as semantic Data Mart layer in BW.
    3) Bex Query to act as Data Mart Query and metadata exchange for BOE.
    All key figure restrictions and calculations are done in this Data Mart
    Query.
    4) Traditionel BO OLAP universe 1:1 mapped to Bex Data Mart query. No
    calculations etc. are done in the universe.
    5) WebI report with limited objects included in the WebI query.
    As we are aware that performance is a very subjective issue, we have
    created several case scenarios with different dataset sizes, various
    filter criteria and modeling techniques in BW.
    Furthermore we have tried to apply various traditional BW performance
    tuning techniques including aggregates, physical partitioning and pre-
    calculation - all without any luck (pre-calculation doesn't seem to
    work at all as WebI apparently isn't using the BW OLAP cache).
    In general the best result we can get is with a completely stripped WebI report without any variables etc.
    and a total dataset of 1000 records transferred to WebI. Even in this scenario we can't get
    each navigational step (when using drill-down on the Organizational Unit
    hierarchy - 0ORGUNIT) to perform faster than a minimum of 15-20 seconds
    per navigational step.
    That is, each navigational step takes 15-20 seconds
    with only 1000 records in the WebI cache when using drill-down on the org.
    unit hierarchy!
    Running the same Bex query from Bex Analyzer with a full dataset of
    30.000 records on the lowest level of detail returns 1-2 seconds per
    navigational step, thus ruling out that this is a BW modeling issue.
    As our productive scenario obviously involves a far larger dataset, as
    well as separate data from CATS and PT infoproviders, we are very
    worried whether we will ever be able to utilize hierarchy drill-down from
    WebI.
    The question is as such whether there are any known performance issues
    related to the use of BW hierarchy drill-down from WebI and, if so,
    whether there are any ways to get around them.
    As an alternative we are currently considering changing our reporting
    strategy by creating several higher aggregated reports to avoid
    hierarchy navigation at all. However we still need to support specific
    division and their need to navigate the WebI dataset without
    limitations which makes this issue critical.
    Hope that you are able to help.
    Thanks in advance
    /Frank
    Edited by: Mads Frank on Feb 1, 2010 9:41 PM

    Hi Henry, thank you for your suggestions, although I don't agree with you that 20 seconds is pretty good for that navigation step. The same query executed with BEx Analyzer takes only 1-2 seconds to do the drill-down.
    Actions
    suppress unassigned nodes in RSH1: Magic!! This was the main problem!!
    tick use structure elements in RSRT: Done it.
    enable query stripping in WebI: Done it.
    upgrade your BW to SP09: Does SP09 have some improvements in relation to this point?
    use more runtime query filters. : Not possible. Very simple query.
    Others:
    RSRT combination H-1-3-3-1 (Expand nodes/Permanent Cache BLOB)
    Uncheck preliminary Hierarchy presentation in the Query; only selected.
    Check "Use query drill" in webi properties.
    Sorry for this mixed message, but while I was answering I tried what you suggested in relation to suppressing unassigned nodes, and it works perfectly. This is what was causing the bottleneck!! Incredible...
    Thanks a lot
    J.Casas

  • URGENT: Migrating from SQL to Oracle results in very poor performance!

    *** IMPORTANT, NEED YOUR HELP ***
    Dear all, I have migrated a banking business solution from Windows/SQL Server 2000 to Sun Solaris/Oracle 10g. In the test environment everything was working fine. On the production system we have very poor DB performance. About 100 times slower than SQL Server 2000!
    Environment at Customer Server Side:
    Hardware: Sun Fire, 4 CPUs; OS: Solaris 5.8; DB: Oracle 8 and 10
    Data Storage: Em2
    DB access thru OCCI [Environment:OBJECT, Connection Pool, Create Connection]
    Because of dependencies on older applications it's necessary to run Oracle 8 as well on the same server. Since we have been running the new solution, which uses Oracle 10, the listener for Oracle 8 is frequently gone (or killed by someone?). The performance of the whole Oracle 10 environment is very poor. As a result of my analysis I figured out that the process to create a connection in the connection pool takes up to 14 seconds. Now I am wondering if it is a problem to run different Oracle versions on the same server? The customer has installed/created the new Oracle 10 DB with the same user account (oracle) as the older version. To run the new solution we have to change the Oracle environment settings manually. All hints/suggestions to solve this problem are welcome. Thanks in advance.
    Anton

    On the production system we have very poor DB performance
    Have you verified that the cause of the poor performance is not the queries and their plans being generated by the database?
    Do you know if some of the queries appear to take more time than what it used to be on old system? Did you analyze such queries to see what might be the problem?
    Are you running RBO or CBO?
    if stats are generated, how are they generated and how often?
    Did you see what autotrace and tkprof has to tell you about problem queries (if in fact such queries have been identified)?
    http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10752/sqltrace.htm#1052
