Performance problem !! CPU 100% !!

Hi all,
We have a WebLogic application accessing an Oracle Database 10gR2.
We are seeing high CPU usage, sometimes 100% for a long time (an hour or more).
The query that consumes the most CPU runs in 2 seconds. I have 40 simultaneous connections on this database. How can I solve this?
I'm using MTS (shared server) connections, not dedicated connections.
Tks,
Paulo.

Hi Paulo,
Hope you are doing well, and hope to see you next spring on the Oracle Cruise . . .
> high CPU Usage sometimes 100% for a long time (1 hour or more).
Remember, all virtual memory servers are designed to drive CPU to 100%. See my notes here:
http://www.dba-oracle.com/t_high_cpu.htm
You only have CPU enqueues when there are more tasks waiting for the CPU than you have CPUs.
As to MTS and high CPU consumption, I have some notes here:
http://www.dba-oracle.com/t_mts_multithreaded_servers_shared.htm
Keating also notes cases where using the multithreaded server (MTS) can cause a degradation in performance:
I encountered one situation in which a database server’s CPUs were constantly pegged at 100% usage, and the CPU queue length (the number of processes waiting for CPU time) was typically 6 or 7 during peak periods. That database had been using MTS for several years, even though there was more than enough physical memory on the system to support dedicated connections.
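A quick way to see how the 40 connections are actually being serviced, and whether the dispatchers or shared servers themselves are saturated, is something along these lines (a sketch; run from a DBA account):

-- how many sessions come in via shared servers vs. dedicated servers
SELECT server, COUNT(*) AS sessions
FROM   v$session
GROUP  BY server;

-- how busy the dispatchers and shared servers are (BUSY/IDLE are in hundredths of a second)
SELECT name, status,
       ROUND(busy / NULLIF(busy + idle, 0) * 100, 1) AS pct_busy
FROM   v$dispatcher;

SELECT name, status,
       ROUND(busy / NULLIF(busy + idle, 0) * 100, 1) AS pct_busy
FROM   v$shared_server;

If the shared servers are pegged while the host still has idle CPU, switching the busiest connections to dedicated servers is usually the first thing to try.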
Hope this helps. . .
Donald K. Burleson
Oracle Press author

Similar Messages

  • Performance Problems - CPU

    Hi all,
    I'm having some performance problems and i have generated an AWR of a day and i have seen this following things:
    Top 5 Timed Events                                          Avg %Total
    ~~~~~~~~~~~~~~~~~~                                         wait   Call
    Event                              Waits     Time (s)      (ms)   Time  Wait Class
    CPU time                                       50,318             41.7
    db file sequential read        6,688,472       32,711         5   27.1  User I/O
    Backup: sbtwrite2              1,068,309        7,903         7    6.6  Administrative
    db file scattered read         1,012,065        6,999         7    5.8  User I/O
    PX Deq Credit: send blkd         231,401        4,989        22    4.1  Other
    Operating System Statistics DB/Inst: CAPDB14P/capdb14p1 Snaps: 15710-15778
    Statistic Total
    AVG_BUSY_TIME 3,221,704
    AVG_IDLE_TIME 4,923,831
    AVG_IOWAIT_TIME 2,302,776
    AVG_SYS_TIME 537,429
    AVG_USER_TIME 2,682,900
    BUSY_TIME 6,446,121
    IDLE_TIME 9,850,381
    IOWAIT_TIME 4,608,322
    SYS_TIME 1,077,598
    USER_TIME 5,368,523
    LOAD 0
    OS_CPU_WAIT_TIME 1,999,898,469,700
    RSRC_MGR_CPU_WAIT_TIME 0
    VM_IN_BYTES 12,201,893,888
    VM_OUT_BYTES 476,655,616
    PHYSICAL_MEMORY_BYTES 8,568,512,512
    NUM_CPUS 2
    NUM_CPU_SOCKETS 2
    ###########################
    I think that we are having CPU problems here !!
    All my memory caches are good, 99% hit.
    Anybody agree with me???
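    For reference, host CPU utilization over the interval can be computed from the OS statistics above as BUSY_TIME / (BUSY_TIME + IDLE_TIME), which here works out to 6,446,121 / (6,446,121 + 9,850,381), roughly 40% busy across the 2 CPUs. The same counters can be pulled from the workload repository with something like this (a sketch; the snap IDs are the ones from the report header, the interval values are the differences between the two snapshots, and DBID/INSTANCE_NUMBER filters should be added on RAC):
    SELECT snap_id, stat_name, value
    FROM   dba_hist_osstat
    WHERE  snap_id IN (15710, 15778)
    AND    stat_name IN ('NUM_CPUS', 'BUSY_TIME', 'IDLE_TIME', 'IOWAIT_TIME')
    ORDER  BY stat_name, snap_id;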
    Tks,
    Paulo

    I have problems with some queries that have another wait event, related to RAC.
    "gc cr multi block request" is taking a lot of time on some queries. These queries run very fast on another database that isn't a RAC database.
    Example:
    1 - Tables have the same number of rows!
    2 - Both tables and indexes are analyzed using the same tool (DBMS_STATS)
    ####RAC DATABASE####
    SELECT 1 from dual
    WHERE NOT EXISTS (SELECT 1
    FROM mensalidade a
    WHERE data_vencimento >= CHAR_TO_DATE('20070201'));
    ----Explain
    SELECT STATEMENT, GOAL = ALL_ROWS               4     1     
    FILTER                         
    FAST DUAL               2     1     
    PX COORDINATOR FORCED SERIAL                         
    PX SEND QC (RANDOM)     SYS     :TQ10000     2     1     7
    PX BLOCK ITERATOR               2     1     7
    INDEX FAST FULL SCAN     BRCAPDB2     IMENSALIDADE1     2     1     7
    ----It takes more than 500 seconds to run
    ####STANDALONE DATABASE####
    SELECT 1 from dual
    WHERE NOT EXISTS (SELECT 1
    FROM mensalidade a
    WHERE data_vencimento >= CHAR_TO_DATE('20070201'));
    ----Explain
    SELECT STATEMENT, GOAL = ALL_ROWS               4     1     
    FILTER                         
    FAST DUAL               2     1     
    PX COORDINATOR FORCED SERIAL                         
    PX SEND QC (RANDOM)     SYS     :TQ10000     2     2     16
    PX BLOCK ITERATOR               2     2     16
    TABLE ACCESS FULL     BRCAPDB2     MENSALIDADE     2     2     16
    ----It takes 0.1 seconds to run
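    To see where the 500 seconds actually go on the RAC instance, the session-level wait events can be checked while the statement runs, along these lines (a sketch; :sid is the SID of the session running the query, taken from V$SESSION):
    SELECT inst_id, event, total_waits,
           ROUND(time_waited / 100) AS seconds_waited
    FROM   gv$session_event
    WHERE  sid = :sid
    ORDER  BY time_waited DESC;
    If "gc cr multi block request" dominates, the next step is usually to look at why the RAC plan reads so many blocks across the interconnect (here an INDEX FAST FULL SCAN versus the single-instance TABLE ACCESS FULL).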

  • ITunes performance problems CPU specific?

    one question to anyone who is still experiencing performance and playback problems with version 7.0.2. I discussed the situation with a few friends and found out that some still have all these performance and playback quality problems and some don't. So I had the following idea: could it be that all the people who use iTunes for Windows on machines with AMD CPUs have the problems, while those who use it on Intel CPUs don't?
    Please give some feedback to me and, most importantly, to Apple.
    If so, please fix it soon, Apple.

    I'm using a Sony desktop PC with an Intel processor, and have had numerous problems with the performance of the latest version(s) of iTunes (7.0 and 7.0.2).

  • Performance problem while CPU is 80% idle?

    Hi,
    My end users are complaining about a performance problem during execution of a batch process.
    As you can see, there are 1,745 statements executing each second.
    The AWR report shows 98.1% of the time spent on CPU.
    The AWR report also shows that the host CPU is 79.9% idle.
    The second wait event shows only 212 seconds of waits on db file sequential read.
    Yet 4 minutes in a 1-hour period does not seem to be an issue.
    Please advise
    DB Name         DB Id    Instance     Inst Num Startup Time    Release     RAC
    QERP          xxx        erp                 1 21-Jan-13 15:40 11.2.0.2.0  NO
    Host Name        Platform                         CPUs Cores Sockets Memory(GB)
    erptst           HP-UX IA (64-bit)                  16    16       4     127.83
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:     40066 22-Jan-13 20:00:52       207       9.6
      End Snap:     40067 22-Jan-13 21:00:05       210       9.6
       Elapsed:               59.21 (mins)
       DB Time:              189.24 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     8,800M     8,800M  Std Block Size:         8K
               Shared Pool Size:     1,056M     1,056M      Log Buffer:    49,344K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                3.2                0.1       0.00       0.05
           DB CPU(s):                3.1                0.1       0.00       0.05
           Redo size:          604,285.1           27,271.3
       Logical reads:          364,792.3           16,463.0
       Block changes:            3,629.5              163.8
      Physical reads:               21.5                1.0
     Physical writes:               95.3                4.3
          User calls:               68.7                3.1
              Parses:              212.9                9.6
         Hard parses:                0.3                0.0
    W/A MB processed:                1.2                0.1
              Logons:                0.3                0.0
            Executes:            1,745.2               78.8
           Rollbacks:                1.2                0.1
        Transactions:               22.2
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                Buffer  Hit   %:   99.99    In-memory Sort %:  100.00
                Library Hit   %:   99.95        Soft Parse %:   99.85
             Execute to Parse %:   87.80         Latch Hit %:   99.99
    Parse CPU to Parse Elapsd %:   74.76     % Non-Parse CPU:   99.89
    Shared Pool Statistics        Begin    End
                 Memory Usage %:   75.37   76.85
        % SQL with executions>1:   95.31   85.98
      % Memory for SQL w/exec>1:   90.33   82.84
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    DB CPU                                           11,144          98.1
    db file sequential read              52,714         214      4    1.9 User I/O
    SQL*Net break/reset to client        29,050           6      0     .1 Applicatio
    log file sync                         2,536           6      2     .0 Commit
    buffer busy waits                     4,338           2      1     .0 Concurrenc
    Host CPU (CPUs:   16 Cores:   16 Sockets:    4)
    ~~~~~~~~         Load Average
                   Begin       End     %User   %System      %WIO     %Idle
                    0.34       0.33      19.7       0.4       1.8      79.9

    Nikolay Savvinov wrote:
    if the users are complaining about performance of the batch process, then that's what you should be looking at, not the entire system.
    I find it strange to see "end users" and "the batch process" in the same sentence (as it was in the first post). "End users" gives me the feeling of a significant number of concurrent sessions with people waiting for results in real time at the far end, while "batch process" carries the image of a small number of large-scale processes running overnight to prepare the data for the following morning.
    I mention this because my first view of the AWR output was: you've got 16 CPUs, only three in use, virtually no users, and doing very little work, how can the users complain. (One answer, of course, is that the 13 CPUs could be locked out of use as far as Oracle is concerned). On the second read I decided that the "users" had gone home, and the complaint was simply that the batch process wasn't completing in time.
    In this case I think "the entire system" IS "the batch process"
    Determine which stored procedures and/or SQL statements took longer than usual and then find out why. Most likely you'll be able to find
    everything you need in AWR views (DBA_HIST_SQL%) and ASH archive (DBA_HIST_ACTIVE_SESS_HISTORY).
    If the batch process has changed dramatically and recently, then a simple first step might be to look at the current AWR report, find the few most time-consuming SQL statements, and use the awrsqrpt.sql script to find their history of execution plans.
    But I'd also just look at the expensive SQL - bearing in mind, particularly, that there are very few user calls per second, yet many hundred executions per second: it strikes me that there could be quite a lot of PL/SQL going on doing something a little bit expensive many times or some PL/SQL function that calls some SQL that used to be called rarely from an SQL statement but is now (due, perhaps to a change in plan) being called much more frequently - so check SQL Ordered by Executions.
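    As a starting point, something along these lines against DBA_HIST_SQLSTAT (a sketch; the snapshot range here is the one from the report header, so adjust it to the batch window) lists the heaviest statements by elapsed time alongside their execution counts:
    SELECT   sql_id,
             SUM(executions_delta)                AS execs,
             ROUND(SUM(elapsed_time_delta) / 1e6) AS elapsed_s,
             ROUND(SUM(cpu_time_delta) / 1e6)     AS cpu_s
    FROM     dba_hist_sqlstat
    WHERE    snap_id = 40067   -- deltas for the 20:00-21:00 interval are stored at the end snap
    GROUP BY sql_id
    ORDER BY SUM(elapsed_time_delta) DESC;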
    Regards
    Jonathan Lewis

  • Performance problem, 100% swap used, but vmstat sr = 0

    Hi,
    I have a performance problem on a server. Performance is sometimes very poor for several hours at a time.
    Context: V890, 32 GB RAM, 8 SPARC IV+, Solaris 10 release 03/05, Veritas Volume Manager, containers, several Oracle databases, applications...
    with iostat, swap partition: %b -> 100% !!!!
    with vmstat: r -> 0, b -> 0, w -> 29, free memory: 600 MB, sr -> 0, idle: more than 50%
    uptime, load average: 6
    vmstat -S : si -> 0, so -> 0
    vmstat -p : api -> 45126682863 (probably a bug), apo -> 0, fpi -> 1895320681342 (probably a bug), fpo -> 0
    It's difficult for me to find the problem. Is it paging activity? Can someone tell me at what free-memory threshold paging activity starts?
    If you think I'm on the wrong track, thanks for any ideas :)
    Julien
    Edited by: Wylem on Feb 28, 2008 6:11 PM

    Does seem a bit odd.
    The 'w' column doesn't necessarily mean that anything bad is happening now, but it does mean that the system was severely memory limited at some point in the past at least.
    Paging should occur when free memory drops below LOTSFREE. I don't remember if swapping happens at a particular point, but probably wouldn't happen above DESFREE. The page scanner should become active (non-zero 'sr' numbers) any time the memory is below LOTSFREE.
    Since you have Solaris 10, you might want to grab the dtrace toolkit and see if some of the tools in there show you anything more useful (some of the I/O ones might break down the access further).
    So it really doesn't look like you're swapping/paging out anything now, but you almost certainly did in the past. It could be that you're using an app that has paged out a lot of stuff to disk, so that the I/O you're seeing is it bringing stuff back now that RAM is available.
    Darren

  • Performance Problem : High CPU on a simple Insert

    I have a simple INSERT ( INSERT into TABLE values ( :1, :2 ) etc. )
    But this statement is exhibiting following characteristics
    Optimizer Cost : 1
    Buffer Gets : 26 million !!!
    Rows : 750,000
    Gets/Execution : 35,000
    This is causing performance problem on our production.
    Any ideas will be appreciated ?
    thanks

    897208 wrote:
    I have a simple INSERT ( INSERT into TABLE values ( :1, :2 ) etc. )
    But this statement is exhibiting following characteristics
    Optimizer Cost : 1
    Buffer Gets : 26 million !!!
    Rows : 750,000
    Gets/Execution : 35,000
    This is causing performance problem on our production.
    Any ideas will be appreciated ?
    thanks
    Thread: HOW TO: Post a SQL statement tuning request - template posting
    post the EXPLAIN PLAN & the results from SQL_TRACE for this statement
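    For reference, something like this produces both (a sketch; the table name and bind values are placeholders for the real INSERT, and SID/serial# come from V$SESSION):
    EXPLAIN PLAN FOR
      INSERT INTO some_table VALUES (:1, :2);   -- placeholder for the real statement
    SELECT * FROM TABLE(dbms_xplan.display);
    -- trace the session that runs the INSERT, then stop the trace and format the file with tkprof
    EXEC dbms_monitor.session_trace_enable(session_id => :sid, serial_num => :serial, waits => TRUE, binds => TRUE);
    EXEC dbms_monitor.session_trace_disable(session_id => :sid, serial_num => :serial);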

  • Performance problems on non-Intel CPUs

    Hello,
    Recently I have been looking into tracking down a performance problem with Adobe Flash on a couple of different Linux platforms.  I understand that there is limited hardware acceleration, and that is actually important for my question.  On multiple netbooks I tested similar Flash based content and found that content that my EeePC 701 with a 600 MHz Intel Celeron could play, a netbook with a VIA processor running at 1 GHz couldn't even come close to playing.  Running sysprof I see that 80+% of the time is spent within the libflashplayer library.
    To get to my point, which you may or may not want to answer: is it possible that Adobe uses Intel's compiler to produce their binaries?  I only ask this because it has been documented that Intel's compiler explicitly disables certain optimizations for non-GenuineIntel processors, http://www.theinquirer.net/inquirer/news/1567108/intel-compiler-cripples-code-amd-via-chips.  I have also examined the code paths and see that libflashplayer.so does check /proc/cpuinfo for the vendor_id string in its initialization logic.
    I don't want to start conspiracy theories, but if any of this is valid at all, could it please be fixed? Thanks.

    http://www.apple.com/support/bootcamp/
    Only Intel when talking Macs, no PowerMacs or G4 etc.

  • Performance problem with WPF Viewer CRVS2010

    Hi,
    We are using Crystal Reports 2010 and the new WPF Viewer. Last week when we set up a test machine to run our integration tests (several hundred) all report tests failed (about 30 tests) with a timeout exception.
    The test machine setup:
    HP DL 580 G5
    VMware ESXi 4.0
    Guest OS: Windows 7 Enterprise 64-bit
    Memory (guest OS): 3GB
    CPU: 1
    Visual Studio 2010
    Crystal Reports for Visual Studio 2010 with 64 bit runtime installed
    Visual Studio 2008 installed
    Microsoft Office 2010 installed
    McAfee antivirus
    There are about 10 other virtual machines on the same HW.
    I think the performance problem is related to text objects on a report document viewed in the WPF Viewer. I made a simple WPF GUI with 2 buttons; the first button executes a very simple report that only has a text object with a few words in it, and the other button is also a simple report with only 1 text object with approx. 100 words (about 800 characters).
    The first report executes and displays almost instantly, and the second report executes instantly but displays after approx. 1 min 30 sec.
    And "execute" in this context means that all the VB.Net code runs without any exception or performance problem. The performance problem seems to come after viewer.Show() (in the code below) has executed.
    I did another test on the second report and replaced the text object with a formula field containing the same text as the text object, and this test executed and displayed the report instantly.
    So the performance problem seems to have something to do with rendering of text objects in the WPF Viewer on a virtual machine with the above setup.
    I've made several tests on local machines with Windows XP (32-bit) or Windows 7 (64-bit) installed and none of them has this performance problem. It's not a critical issue for us because our users will run this application on their local PCs with Windows 7 64-bit, but it's a bit problematic for our project not being able to run all of our integration tests. I will probably solve this by using a local PC instead.
    Here is the VB.Net code I'm using to view the reports:
    Private Sub LightWeight_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
            Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
            lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeight.rpt")
            ' Initialize Viewer
            Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
            viewer.Owner = Me
            viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
            viewer.Show()
        End Sub
        Private Sub LightWeightSlow_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs)
            Dim lightWeightReport As New CrystalDecisions.CrystalReports.Engine.ReportDocument
            lightWeightReport.Load(Environment.CurrentDirectory & "\LightWeightSlow.rpt")
            ' Initialize Viewer
            Dim viewer As LF.LIV.PEAAT.Crystal.Views.ReportViewer = New LF.LIV.PEAAT.Crystal.Views.ReportViewer()
            viewer.Owner = Me
            viewer.reportViewer.ViewerCore.ReportSource = lightWeightReport
            viewer.Show()
        End Sub
    The reports are 2 empty default reports with only 1 textobject on the details section.
    // Thomas

    See if the KB [
    [1448013  - Connecting to Oracle database. Error; Failed to load database information|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433343338333033313333%7D.do] helps.
    Also the following may not hurt to have a look at (if only for ideas):
    [1217021 - Err Msg: "Unable to connect invalid log on parameters" using Oracle in VS .NET|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333233313337333033323331%7D.do]
    [1471508 - Logon error when connecting to Oracle database in a VS .NET application|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333433373331333533303338%7D.do]
    [1196712 - Error: "Failed to load the oci.dll" in ASP.NET application against an Oracle database|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_bi/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes%7B6163636573733d36393736354636443646363436353344333933393338323636393736354637333631373036453646373436353733354636453735364436323635373233443330333033303331333133393336333733313332%7D.do]
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
    Within a non-global zone I'm running a tomcat application (an institutional repository) connecting via localhost to a PostgreSQL database (the OS provided version). The processor load is typically not very high, as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU                            
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens we experience that the application slows down and swaps out (see below). When it settles, everything seems to go back to normal. When the problem is acute, the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
    My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance evaluation on the database with "vacuum analyze" took 19 minutes on the server and only 1 minute on a desktop PC. This is horrific when taking the hardware into consideration.
    The short story:
    I'm trying different steps but running out of ideas. We've read that the database block size and file system block size should match: PostgreSQL uses 8 KB and ZFS 128 KB. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change…
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.

  • Performance problems with File Adapter and XI freeze

    Hi NetWeaver XI geeks,
    We are deploying an XI based product and encountering some huge performance problems. Below are the scenario and the issues:
    - NetWeaver XI 2004
    - SAP 4.6c
    - Outbound Channel
    - No mapping used and only the iDocs Adapter is involved in the pipeline processing
    - File Adapter
    - message file size < 2 KB
    We have zeroed in on the Idoc adapter's performance as the problem.
    We are using a file channel and every 15 seconds a file in a valid Idoc format is placed in a folder; the Idoc adapter picks up the file from this folder and sends it to the SAP R/3 instance.
    For a few minutes (approx 5 mins) it works (the CPU usage is less than 20% even if the processing time seems huge: 5 sec/msg), but after this time the application gets blocked and the CPU gets overloaded at 100% (2 disp_worker.exe processes at 50% each).
    If we inject several files into the source folder at the same time, or if we decrease the time gap between the creation of 2 Idoc files (from 15 seconds to 10 seconds), the process blocks after posting 2-3 IDocs to SAP R/3.
    Could you point us some reasons that could provoke that behavior?
    Basically looking for some help in improving performance of the Idoc adapter.
    Thanks in advance for your help and regards,
    Adalbert

    Hi Bhavesh,
    Thanks for your suggestions. We will test...
    We wonder whether the hardware is the cause of this extremely poor performance.
    Our XI server is:
    •     Windows 2003 Server
    •     Processors: 2 x 3 GHz
    •     RAM: 4 GB (the memory is not exhausted)
    The messages are well-formed iDocs = single-line INVOICES.
    Some posts talk about 2000 messages processed in a few seconds... whereas we get 5 sec per message.
    Thanks for your help.
    Adalbert

  • LR3 "Extra Processing in Develop" Performance Problem

    I have been investigating a specific LR3 performance problem.  It may explain a small subset of the problems people have reported in the "Why is LR3 So Slow?" thread.   I'm starting this thread to focus on this particular problem.  I hope others will confirm/refute/refine my findings.
    The Problem
    In Develop, when I make an adjustment, normally the following happens: The CPU usage (as shown in Activity Monitor's bar graph) jumps to between 50 and 75% for all four cores, the updated image appears, and the CPU usage settles back down.  This all happens in less than half a second.  Note: this is with the image at the Fit size.  However, sometimes I instead get the following after an adjustment: the CPU usage jumps to 50 to 75% for all four cores and the updated image appears as usual, however, instead of settling back down, the CPU usage jumps up to 90 to 100% for all cores and stays there for 3 to 5 seconds before settling down. Thus it appears that LR is doing some kind of "extra processing" since a lot of computation is happening AFTER the updated image has already appeared.  I will refer to this problem as "EP".  Obviously, when you are getting EP, editing in Develop becomes very balky.
    Dependency on ratio between image size and displayed size
    It appears that EP only happens when the displayed size of the image (in Fit zoom level and perhaps also Fill zoom level) is above a certain percentage of the actual image size (as currently cropped).  Evidence: When editing full 21MP 5D2 images, I don't experience EP.  If I crop the 5D2 image fairly significantly, then I can get EP.  When editing 10MP images from my Canon S90, I usually get EP for landscape orientation pictures but not for portrait orientation pictures (since in Fit mode, landscape images display at a higher zoom level than portrait images).  If I am getting EP, I can eliminate it by sufficiently reducing the size that LR is displaying the image by resizing the LR window smaller, opening additional panels (I normally edit with only the right panel open), displaying the toolbar, etc.  It appears that EP is enabled when the displayed image is about 50% or larger w.r.t. the actual image (as currently cropped).  For example, EP becomes enabled when a 3648 pixel wide S90 image is displayed at least 17 and 7/8 inches wide on my 100 ppi monitor (i.e. about 1787 pixels).
    Dependency on HOW an adjustment is invoked
    Even when the displayed image size is large enough w.r.t. the actual image size to enable EP, whether you get it on a given adjustment depends on how you invoke it:
    - If you CLICK (i.e. press the mouse button down and quickly release it) on the track of one of the sliders (a technique I use often to make big jumps), EP will happen.
    - If you press the mouse button down on a slider handle, drag it to a new position, and quickly release the mouse button), EP will happen
    - If you press the mouse button down on a slider handle, drag it to a new position, but continue to hold the mouse button down until the displayed image is updated, EP does NOT happen (either before or after you then release the mouse button).
    - If you highlight the numeric field at the end of a slider and use the arrow keys (possibly along with Shift) to increment or decrement the value, EP does NOT happen.
    - EP will happen if you resize the LR window such that the displayed image size is above the threshold.  (In fact, I determined the threshold by making a series of window width increases until I saw EP indicated by the CPU bar graphs.)
    - EP can happen with local adjustment brush applications, but as with the sliders, it depends on HOW you perform the brush stroke.  Single click and drags with immediate mouse release cause EP, drags with delayed mouse button release don't.
    - Clicking an earlier History state causes EP
    - More exploration could be done.  For example, I haven't looked at Graduated Filter and Spot Removal adjustments.
    My theory of what's happening
    With LR2, my understanding is that in Develop mode when the displayed image is below 1:1 zoom level, after an adjustment is invoked, LR calculates the new version of the image to display using a fast, simplified algorithm that doesn't include the more computationally intensive adjustments like Sharpening and Noise Reduction (and perhaps works on a lower rez version of the image with multiple sensels binned together?).  It appears that in conditions described above, LR3 calculates the initial, fast image update and then goes on to do the full update of the image, including the computationally intensive adjustments.  Evidence:   setting Sharpening Amount and Luminance and Color Noise Reduction to zero eliminates EP (or reduces the amount of time it takes to be barely noticeable).  I'm not sure whether the displayed image is updated with the results of the extra processing.  I think the answer is Yes since when I tried an adjustment of changing Sharpening Amount from 0 to 90, the initial update of the displayed image showed sharpening but after the EP, the displayed image was updated again to show somewhat different sharpening. Perhaps Adobe felt that it would be useful to see the more accurate version of the image when it is at or above 50% zoom.  Maybe the UI is supposed to cancel the EP if you start to make another adjustment before it has completed but the canceling doesn't happen unless you invoke the adjustment in one of the ways described above that doesn't cause EP.  
    Misc
    - EP doesn't seem to happen for Process 2003
    - As others have mentioned, I'm surprised that LR (both version 2 and 3) in 64bit mode doesn't use more available RAM.  I don't think I've seen LR go above 4GB of virtual memory or above 3GB "Real Memory" (as reported by Activity Monitor) even though I have several GB free.
    - It should be obvious from the above that if you experience EP, there are workarounds: reduce the size of the displayed image (e.g. by window resizing), invoke adjustments in ways that don't cause EP, turn off Sharpening and Noise Reduction until the end of editing an image.
    System specs
    First generation Intel Mac Pro with two dual-core CPUs at 2.66 Ghz
    OS 10.5.8
    21GB RAM
    ACR cache on volume striped across 3 internal SATA drives
    LR catalog and RAWs on an internal SATA drive
    30" HP LP3065 monitor (2560 pixels wide)
    NVIDIA GeForce 7300 GT

    I'm impressed by your thorough analysis.
    Clearly, the programmers haven't figured out the best way to do intelligent caching and/or parallel rendering at a reduced size yet.
    In my experience reducing the settings in the "Details" panel doesn't help.
    What really bugs me is that the lag (or increasing lack of interactiveness) depends on the number of adjustments one has made.
    This shouldn't be the case. If a cache is produced then every further adjustment should only cost the effort for that latest adjustment and not include adjustments before it. There are things that stand in the way of straightforward edit applications:
    1. If you work below 1:1 preview, adjustments have to be shown in a reduced form. If you don't have a way to faithfully mimic the adjustments on the reduced size, you have to do them on the original image and then scale down. That's expensive.
    2. To the best of my knowledge LR uses a fixed image pipeline. Hence, independently of the order in which you apply edits, they are always performed in the same fixed order. Say all spot removal operations are done first. If you have a lot of adjustment brush edits and then add a spot removal operation, it means that all the adjustment brush operations have to be replayed each time you do a little adjustment on your spot removal edit.
    I believe what you are seeing is mostly related to 1.
    I also believe that the way LR currently handles a moderate number of edits is unacceptable and incompatible with the notion that it is usable in a commercial setting for more than trivial edits. I suspect there is something else going on. If everyone saw the deterioration in performance after a number of edits that I see, I don't think LR would be as accepted as it is. Having said that, I've read that the problem of repeated applications of the adjustment brush slowing LR down has existed for a long time. I truly hope that this doesn't mean we'll have to live with it for the foreseeable future.
    There are two ways I can see how 2. should be addressed:
    1. Combine the effects of a set of operations into one bitmap operation. Instead of replaying all adjustment brush strokes one after the other (speedwise it feels like this is happening), compute a single bitmap operation that combines all effects.
    2. Give up the idea that there is an image pipeline with a fixed execution order.
    Some might argue that the second point is at odds with the whole idea of parametric editing, but I dispute that. Either edit operations are commutable in which case the order is immaterial, or they are not. If they are not, the user applies the edits in a way as he/she sees fit and will thus compensate for any effect of a changed ordering.
    N.B., currently the doctrine of "fixed ordering of edit applications" results in the effect that even if you convert an image into B&W all your adjustment brush edits that applied colour tints will still show through. Reasoning: The user should be able to locally tint a B&W image. I agree with the latter but this could be achieved by only applying those tinting brush strokes that were created after the B&W conversion. All the ones that happened before should only be used to obtain the correct luminance values for the B&W conversion but obviously they shouldn't cause tinted areas.
    The above example demonstrates to me that users naturally expect operations to occur in the order they have been introduced, not in a fixed predefined order. If that principle were followed, I see no reason why the speed of a single edit should depend on the number of edits that were done to the image before.
    I hope the programmers can (and the management wants to) address the performance issues. While I find LR usable for pretty modest edits, in no way does the performance on my system approach what I would expect from an industrial-strength application.
    P.S.: Your message reminded me of the following: When I experience serious lag with LR showing the strokes I make with an adjustment brush, it helps to pause a moment after the first click before one starts moving. This allows LR to catch up and then one can see the effect of the application pretty much interactively. Otherwise, there is terrible lag and the feedback where you have brushed an effect comes way too late.

  • 3D performance problems after upgrading memory

    I recently purchased an additional 2GB of memory to try and extend the life of my aging computer.  I installed the memory yesterday and Windows seems to recognize it (reporting now 3.3GB), but when I dropped into WoW (pretty much the only game I have) the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usage).  Basically WoW was being software 3D rendered!!!
    I went through the usual reinstall drivers, reboot, etc... and couldn't find a fix.  I powered down, pulled out 2 of the memory sticks, booted up, and dropped into WoW - it ran at the full 60FPS and CPU utilization was very low (i.e. back to GPU Hardware 3D rendering).  I powered down again, swapped the 2 sticks for the other 2 sticks, booted up, and dropped into WoW - again it ran 100% fine.  So I powered down, put all four sticks in, booted back up, and when I dropped into WoW it was running in the software 3D rendering mode (20FPS at best and High CPU/Kernel usage).
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    All info in signature is up to date.
    Thanks in advance for any help!

    Quote
    Well his last post was a little over 6 hours ago so he was up pretty late.
    Looks like nothing one does in here goes completely unnoticed.   
    Anyway, I am done sleeping now.
    Quote
    his 2 Pfennig's worth.  I know, I know it's Euro's now.
    Yeah, and what used to be "Pfennige" is now also called "Cents" and here are mine:
    Quote
    I've tried the /PAE option in boot.ini - no joy.  I've tried /MAXMEM = to 3300, 3072, 3000, and even 2048 - no joy in any of those cases.  Has anyone seen anything like this before?  Or have suggestions to fix (other than going to Win7-64)?
    PAE or Physical Address Extension will not do anything, as Microsoft has castrated this feature to such an extent that it has nothing to do with memory addressing anymore when it comes to Windows XP:
    http://en.wikipedia.org/wiki/Physical_Address_Extension#Microsoft_Windows
    Quote
    Windows XP Service Pack 2 and later, by default, on processors with the no-execute (NX) or execute-disable (XD) feature, runs in PAE mode in order to allow NX. The NX (or XD) bit resides in bit 63 of the page table entry and, without PAE, page table entries only have 32 bits; therefore PAE mode is required if the NX feature is to be exploited. However, desktop versions of Windows (Windows XP, Windows Vista) limit physical address space to 4 GiB for driver compatibility reasons.
    The feature is already automatically enabled.  But since its original function (Address Extension) no longer exists when it comes to the desktop versions of Windows XP, it won't really do anything you would ever notice.
    About the /MAXMEM switch:  In Windows 32-bit operating systems, every process is limited to 2GB of memory.  The point of the switch is to allow certain applications (or their run-time process) to occupy a higher amount of system memory than 2GB.  However, the catch is that only applications that have been programmed (or compiled) accordingly are able to utilize this ability.  A special flag ("large memory aware") has to be implemented.  Otherwise, these applications will be restricted to 2GB even though the /MAXMEM switch has been set to extend the 2GB limit to 3GB.  Most 32-bit applications come without the "large memory aware" flag, and that is why setting the switch usually won't change anything.
    In any case, it is unlikely that /PAE (even if it were not castrated) and /MAXMEM would have an impact on your actual issue, because I doubt that it has much to do with either memory addressing or the memory limit of an individual Windows process.
    Quote
    the 3D performance was down from the usual 60FPS @ 1600x1080 to a bleak 20 (at best) and the CPU utilization went to about 80% on both cores (with ~20% kernel usages).
    There are a couple of hardware based explanations to consider here.  Let's start with the most obvious one:
    1. 975X Memory Controller
    The main reason that the system chooses to automatically set the Memory Speed to DDR2-667 even though DDR2-800 modules are installed, is that by design the memory controller of the Intel 975X Chipset does not natively support DDR2-800 modules, but
    >>Intel® 975X Express Chipset Datasheet - For the Intel® 82975X Memory Controller Hub (MCH)<< [Page 20]
    This means, that from the point of view of the memory controller, operating the memory @DDR2-800 actually means overclocking it (with all potential side effects).
    Basically, if your initial problem disappears as soon as you reduce the memory speed to DDR2-667, the design limitation of the memory controller may explain your findings.
    2. Different memory modules
    If I read your signature correctly, you are actually mixing two different kits/models of RAM (CM2X1024-6400C4DHX and CM2X1024-6400C4).  This can work, of course, but in practice it does not necessarily do so under all circumstances. 
    This list (-> http://ramlist.i4memory.com/ddr2/) indicates that there are at least 14 different module types/revisions of Corsair DDR2-800 / CL4 modules that utilize a wide range of different memory chips (Elpida, ProMos, Micron, Infineon, Powerchip, Qimonda, Samsung, etc.).  Even though the superficial specifications for these chips appear to be pretty similar (DDR2-800 / CL5 / CL4), this does not necessarily mean that the modules will respond to the same operating conditions in the same way. There may be small differences in sub-timings/sub-latencies and/or the general responsiveness of the ICs which may affect the operating behaviour of the memory controller (which by the way also includes the PCI-Express interface that your video card is hooked up to).
    And again:  If running the system @DDR2-667 solves your issue, the possible explanation is that higher clock speeds may amplify (or trigger) potential performance problems that could have to do with the use of non-identical memory modules.
    Furthermore: It is also possible that the memory controller's design limitations and the potential compatibility problems that may be attributed to mixing different modules types may reinforce each other in terms of reduced system performance.
    3. The BIOS may have an impact as well
    There has been known issue with the use of certain video cards in conjunction with 4GB of system memory on this mainboard:
    https://forum-en.msi.com/index.php?topic=107301.0
    https://forum-en.msi.com/index.php?topic=105955.0
    https://forum-en.msi.com/index.php?topic=99818.msg798951#msg798951
    What may have come out as graphics/display corruption in earlier BIOS Releases may come out as reduced system performance when using the latest BIOS Release.  Of course, this is hard to prove, but I thought I'd mention it anyway.  May I ask what amount of video memory your card has onboard?
    Fortunately, there is a BIOS version that you could consider trying in this matter.  It is not only the last BIOS release that could be used in order to avoid the corruption issue, but it is (in my opinion) the best BIOS version that was ever released for the 975X Platinum PUE Mainboard:  W7246IMS.716 [v7.1b6].  I have been using this mainboard for almost two years and have tested almost every BIOS release that ever came out, and I always went back to v7.1b6 as "ground zero". 
    It will properly support your E6600 (so you don't have to worry about that) and as far as I remember, there are no known compatibility issues with other components.  So maybe, you want to give this a shot.
    The bottom line is that in a worst case scenario, the problem you describe could be caused by all of the above things at the same time.  You cannot really do anything about the 975X Chipset Specifications and the only way to rule out explanation #2 is to test modules that are actually identical (same model number, revision and memory chips).  A test of the 7.1b6 BIOS Release is something you should consider.  It may be the only way to test the BIOS Hypothesis.
    This post turned out to be longer than I intended, but then again, I am well-rested after a good sleep and the wake-up coffee is kicking in pretty good.

  • Performance problem

    Hi, I'm having a performance problem I cannot solve, so I'm hoping anyone here has some advice.
    Our system consists of a central database server, and many applications (on various servers) that connect to this database.
    Most of these applications are older, legacy applications.
    They use a special mechanism for concurrency that was designed, also a long time ago.
    This mechanism requires the applications to lock a specific table with a TABLOCKX before making any changes to the database (in any table).
    This is of course quite nasty as it essentially turns the database into single user mode, but we are planning to phase this mechanism out.
    However, there is too much legacy code to simply migrate everything right away, so for the time being we're stuck with it.
    Secondly, this central database may not become too big because the legacy applications cannot cope with large amounts of data.
    So, a 'cleaning' mechanism was implemented to move older data from the central database to a 'history' database.
    This cleaning mechanism was created in newer technology; C# .NET.
    It is actually a CLR stored procedure that is called from a SQL job that runs every night on the history server.
    But, even though it is new technology, the CLR proc *must* use the same lock as the legacy applications because they all depend on the correct use of this lock.
    The cleaning functionality is not a primary process, so it does not have high priority.
    This means that any other application should be able to interrupt the cleaning and get handled by SQL first.
    Therefore, we designed the cleaning so that it cleans as little data as possible per transaction, so that any other process can get the lock right after each cleaning transaction commits.
    On the other hand, this means that we do have a LOT of small transactions, and because each transaction is distributed, they are expensive and so it takes a long time for the cleaning to complete.
    Now, the problem: when we start the cleaning process, we notice that the CPU shoots up to 100%, and other applications are slowed down excessively.
    It even caused one of our customer's factories to shut down!
    So, I'm wondering why the legacy applications cannot interrupt the cleaning.
    Every cleaning transaction takes about 0.6 seconds, which may seem long for a single transaction, but it also means any other application should be able to interrupt within 0.6 seconds.
    This is obviously not the case; it seems the cleaning session 'steals' all of the CPU, or for some reason other SQL sessions are not handled anymore, or at least with serious delays.
    I was thinking about one of the following approaches to solve this problem:
    1) Add waits within the CLR procedure, after each transaction. This should lower the CPU, but I have no idea whether this will allow the other applications to interrupt sooner. (And I really don't want the factory to shut down twice!)
    2) Try to look within SQL Server (using DMVs or something) and see whether I can determine why the cleaning session gets more CPU. But I don't know whether this information is available, and I have no experience with DMVs (a possible starting query is sketched below).
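    For example, while the cleaning job is running, something along these lines (a sketch using standard DMVs; run it from a separate connection) shows which sessions are consuming CPU and what, if anything, the other sessions are waiting on:
    SELECT s.session_id,
           s.cpu_time,                 -- cumulative CPU in milliseconds for the session
           s.status,
           r.wait_type,
           r.wait_time,
           r.blocking_session_id
    FROM   sys.dm_exec_sessions AS s
    LEFT JOIN sys.dm_exec_requests AS r
           ON r.session_id = s.session_id
    WHERE  s.is_user_process = 1
    ORDER  BY s.cpu_time DESC;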
    So, to round up:
    - The total cleaning process may take a long time, as long as it can be interrupted ASAP.
    - The cleaning statements themselves are already as simple as possible so I don't think more query tuning is possible.
    - I don't think it's possible to tell SQL Server that specific SQL statements have low priority.
    Can anyone offer some advice on how to handle this?
    Regards,
    Stefan

    Hi Stefan,
    Have you solved this issue after following up on Bob's suggestion? I'm writing to follow up with you on this post; please let us know how things go.
    If you have any feedback on our support, please click
    here.
    Elvis Long
    TechNet Community Support

  • Erratic performance problems in Oracle 8.0.x

    Hi all,
    We are having a performance problem that appeared somewhere between 8.0.6 and 8.1.5 when using embedded SQL and the ProC compiler under Linux and Solaris.
    The moment we use client libraries > 8.0.x, things seem to grind to a halt. We are currently using 8.0.6 client against 8, 9 and 10g databases. Using 8.1.5 or 10g clients against Oracle 9 or 10 databases triggers the problem.
    The problem also isn't tied to any specific query. On the latest run we tried, the timings for a specific problem query are as follows: Oracle 10 server, Oracle 8.0.6 client - 40 seconds; Oracle 10 server (same database), Oracle 10 client - 14 hours
    Explain plan doesn't show anything funny with the query. On occasion, the query does get through quickly. Subsequent runs are then also quick.
    The query also runs fine in SQLPlus.
    What we have noticed is that the server process flatlines at 100% CPU usage for the entire duration. The client, on the other hand, is just sleeping, waiting for data from the server. Stopping the client in the debugger shows that the client is waiting in the sqlcxt() call when opening the cursor, not actually fetching data.
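    While it is pegged we can see what the server process is actually executing and waiting on with something like this (a sketch; :os_pid would be the OS process id of the busy oracle shadow process):
    SELECT s.sid, s.serial#, s.status, w.event, q.sql_text
    FROM   v$session s, v$session_wait w, v$sqlarea q
    WHERE  w.sid = s.sid
    AND    q.address = s.sql_address
    AND    s.paddr = (SELECT addr FROM v$process WHERE spid = :os_pid);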
    We are at our wits' end as to where to look next, and we can't stay on the 8.0.6 client libraries for ever as this is starting to cause us other hassles now.
    Did something significant perhaps change between 8.0.x and 8.1.x that we need to cater for in our apps?
    Any help/ideas would be greatly appreciated.
    Regards,
    Gerhard

    Check Metalink
    Client / Server / Interoperability Support Between Different Oracle Versions
    Doc ID: Note:207303.1
    Looks like client version 8.1.5 has some problems; it was never designed to support Oracle versions higher than 8.1.7.
    On the other hand, 8.0.6 was supported up to 9.2.
    I would stay with 8.0.6 if I had to use an Oracle 8 client. Client version 8.1.7 seems much better.

  • Performance problem PRD environment

    Hi Everybody,
    At some moments during the day we are having a performance problem in our PRD environment. The system gets slow.
    The CPU reaches 100% usage.
    The most used process are:
    oracle.exe - 35%
    disp+work - 30%
    disp+work - 11%
    disp+work - 05%
    disp+work - 05%
    In SM50 we have the programs running.
    In ST03N we can see that sequential reads and DB time are the biggest problems.
    If we do performance tuning of the identified programs, can we reduce the performance problem?
    Does anybody know another solution, or where I can get more information to see where exactly the problem is? More transactions, maybe?
    Best Regards,
    Fábio Karnik Tchobnian

    Hi,
    In SM50 we have the programs running.
    Check what status they are in (commit, read, etc.); you also need to check the active jobs and get the details.
    In ST03N we can see the most sequential reads and DB time problems.
    Check for the poorest SQL statements from ST04 => SQL cache => remove * => check the total execution time (descending order) => go through the top 3-4 SQL statements with your developer to fine-tune the programs.
    This is just for analysis; if this is happening all the time, then upgrading the CPU may be worth more than tuning.
    Regards;

  • Performance problems after Update from 7.6.00.37 to 7.6.03.15

    Hi,
    after the update from 7.6.00.37 to 7.6.03.15 on our live system we noticed a lot of performance problems. We had tested the new version on our test system beforehand, but didn't notice effects like this there.
    We have 2 identical systems (Opteron 64-bit, openSUSE 10.2) with log shipping between them. We updated the standby system first, switched from online to standby (with a copy of the cold log) and started the new server as the online system. After that we ran a complete backup (runtime: 1 hour) to start a new backup history and to activate autolog.
    Then, with the update, we changed USE_OPEN_DIRECT to YES, but the performance of the system was very slow afterwards. After the backup it remained at a high load average (> 10; the previous system had about 2-4), with nearly 100% CPU usage for the db kernel process.
    The next day we switched USE_OPEN_DIRECT back to NO. The system first ran better, but periodically rises to a load average of 6 and slows down the performance of various applications (somebody says about 10 times slower). Here we also noticed high usage (now 200-300%) of the db kernel process.
    Our questions are:
    1. Has something basically changed from 7.6.00.37 to 7.6.03.15, so that our various applications (JDBC, ODBC and Perl/SQLDBC, partially on old Linux systems with drivers from 7.5.00.23) don't reach the same performance as before?
    2. Are there any other (new) parameters which could help? Maybe reducing MAXCPU from 4 to 3 to reserve capacity for the system (there is only one MaxDB instance running)?
    3. Is there a possibility to switch back to 7.6.00.37 (only for the worst case)?
    I have made some first steps with x_cons, but don't see any anomalies at first look.
    Regards,
    Thomas

    Thomas Schulz wrote:
    > > > Next day we switched USE_OPEN_DIRECT back to NO. The system first runs better, but
    > >
    > > What is it about this parameter that lets you think it may be the cause for your problems?
    > After changing it back to NO, the system runs better (lower load average) than with YES (but much slower than with old version!)
    Hmm... that is really odd. When using USE_OPEN_DIRECT there is actually less work to do for the operating system.
    > > > Our questions are
    > > >
    > > > 1. Has something basically changed from 7.6.00.37 to 7.6.03.15, so that our various applications (JDBC, ODBC and Perl/SQLDBC partially on old linux systems with drivers from 7.5.00.23) don't reach same performance as before?
    > >
    > > Yes - of course. Changes are what Patches are all about!

    > Are there any known problems with updating from 7.6.00.37 to 7.6.03.15?
    Well, of course there are bugs that have been found in between the release of both versions, but I am not aware of anything like a performance killer.
    We will have to check this in detail here.
    > > > I have made some first steps with x_cons, but don't see any anomalies on the first look.
    > >
    > > Ok, looking into what the system does when it uses CPU is a first good step.
    > > But what would be "anomalies" to you?
    >
    > Good question! I don't really know.
    Well - then I guess 'looking at the system' won't bring you far...
    > > Do you use DBAnalyzer? If not -> activate it!
    > > Does it gives you any warnings?
    > > What about TIME_MEASUREMENT? Is it activated on the system? If not -> activate it!
    >
    > OK, that will be our next steps.
    Great - let's see the warnings you get.
    Let us also see the DB Parameters you set.
    > > What parameters have changed due  to the patch installations (check the parameter history file)?
    > >
    > > What queries take longer now? What is the execution plan of them?
    >
    > It seems to happen for all selects on tables with a lot of rows (>10,000, partially without indexes because they are automatically generated). With the old version we had no problems with missing indexes or the general performance. Unfortunately it is very difficult to extract the SQL statements out of the JBoss applications. But even simple queries (without any join) run slower when the load average rises over 4-5.
    Hmm... the question here is still, if the execution plans are good enough to meet your expectations.
    E.g. for tables that you access via the primary key it actually doesn't matter how many rows a table has (not for MaxDB at least).
    > > BTW: how exactly do the tests look like that you've done on the testsystem?
    > Usage over 6 weeks with our JDBC development environment (JBoss), backup and restore with various combinations of USE_OPEN_DIRECT and USE_OPEN_DIRECT_FOR_BACKUP.
    Sounds like the I/O is your most suspect aspect for overall system performance...
    > > Was the testsystem a 1:1 copy of the productive machine before the upgrade test?
    > No - smaller hardware (32 bit), only 20% of data of the live system, few db users and applications.
    >
    > > How did you test the system performance with multiple parallel users?
    > Only while permanent development with the 2-3 developers and some parallel tests of backup/restore. Unfortunately no tests with many users/applications.
    Ok - so this is next to no testing at all when it comes to performance.
    > An UPDATE STATISTICS over all db users seems to change nothing. At the moment the system remains markedly slow and we are searching for reasons and solutions. Another attempt will be the change of MAXCPU from 4 to 3.
    Why do you want to do that? Have you observed any threads that don't get a CPU because all 4 cores are used by the MaxDB kernel?
    regards,
    Lars
