Major performance bottleneck in JSF RI 1.0

We've been doing some load testing this week, and have come up with what I believe is a major performance bottleneck in the reference implementation.
Our test suite was run against two different application servers (JBoss and Oracle), and in both cases response time degraded dramatically at about 25-30 concurrent users.
Analyzing a thread dump taken while the application server was in this state, we noticed that close to twenty threads were waiting on the same locked resource.
The resource is the 'descriptors' static field in the javax.faces.component.UIComponentBase class, which is a WeakHashMap. The contention occurs in the getPropertyDescriptors method, which has a large synchronized block.
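For readers unfamiliar with the pattern, the hot spot looks roughly like the sketch below. This is a hypothetical illustration, not the RI's actual source: a single static map guarded by one coarse lock means every request thread queues on the same monitor. One common mitigation, shown here, is a concurrent map with lock-free reads (at the cost of holding strong references to the Class keys, which a WeakHashMap avoids).

    import java.beans.BeanInfo;
    import java.beans.IntrospectionException;
    import java.beans.Introspector;
    import java.beans.PropertyDescriptor;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of the contention pattern and a common mitigation;
    // class and method names are illustrative, not the RI's actual code.
    public class DescriptorCache {

        // Lock-free reads once a class has been introspected; the trade-off
        // versus a synchronized WeakHashMap is that Class keys are held strongly.
        private static final Map<Class<?>, PropertyDescriptor[]> DESCRIPTORS =
                new ConcurrentHashMap<Class<?>, PropertyDescriptor[]>();

        public static PropertyDescriptor[] getPropertyDescriptors(Class<?> clazz) {
            PropertyDescriptor[] cached = DESCRIPTORS.get(clazz);
            if (cached == null) {
                try {
                    BeanInfo info = Introspector.getBeanInfo(clazz);
                    cached = info.getPropertyDescriptors();
                } catch (IntrospectionException e) {
                    throw new IllegalStateException(e);
                }
                // Two threads may introspect the same class concurrently; the
                // results are equivalent, so the duplicated work is harmless.
                DESCRIPTORS.putIfAbsent(clazz, cached);
            }
            return cached;
        }
    }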

Well, not the answer I was hoping for, but at least that's clear.
Jayashri, I'm using the JSF RI for an application that will be delivered to testing in August. Can you advise whether I can expect an update for this bottleneck problem within that timeframe?
Sincerely,
Joost de Vries
ps hi netbug. Saw you at theserverside! :-)

Similar Messages

  • Performance bottleneck with subreports

    I have an SSRS performance bottleneck on my production server that we have diagnosed as being related to the use of subreports.
    Background facts:
    * Our Production and Development servers are identically configured
    * We've tried the basic restart/reboot activities, didn't change anything about the performance.
    * The Development server was "cloned" from the Production server about a month ago, so all application settings (memory usage, logging, etc.) are identical between the two
    * For the bottlenecked report, the underlying stored procedure executes in 3 seconds, returning 901 rows, in both environments with the same parameters.  The execution plan is identical between the two servers, and the underlying tables and indexing are identical.  Stats run regularly on both servers.
    * In the development environment the report runs in 12 seconds. But on Production the report takes well over a minute to return, ranging from 1:10 up to 1:40.
    * If I point the Development SSRS report to the PROD datasource I get a return time of 14 seconds (the additional two seconds due to the transfer of data over the network).
    * If I point the Production SSRS report to the DEV datasource I get a return time of well over a minute.
    * I have tried deleting the Production report definition and uploading it as new to see if there was a corruption issue, this didn't change the runtimes.
    * Out of the hundreds of Production SSRS reports that we have, the only two that exhibit dramatically different performance between Dev and Prod are the ones that contain subreports.
    * Queries against the ReportServerTempDB also confirm that these two reports are the major contributors to TempDB utilization.
    * We have verified that the ReportServerTempDB is being backed up and shrunk on a regular basis.
    These factors tell me that the issue is not with the database or the SQL.  The tests on the Development server also prove that the reports and subreports are not an issue in themselves - it is possible to get acceptable performance from them in the
    Development environment, or when they are pointed from the Dev reportserver over to the Prod database.
    Based on these details, what should we check on our Prod server to resolve the performance issue with subreports on this particular server?

    Hi GottaLoveSQL,
    According to your description, you want to improve the performance of a report that uses subreports. Right?
    In Reporting Services, using subreports impacts report performance because the report server processes each instance of a subreport as a separate report. The best way to avoid this is to replace subreports with the Lookup, MultiLookup, or LookupSet functions, which
    can bridge different data sources. In this scenario, we suggest caching the report that contains the subreport. We can create a cache refresh plan for the report in Report Manager. Please refer to the link below:
    http://technet.microsoft.com/en-us/library/ms155927.aspx
    Reference:
    Report Performance Optimization Tips (Subreports, Drilldown)
    Performance, Snapshots, Caching (Reporting Services)
    Performance Issue in SSRS 2008
    If you have any questions, please feel free to ask.
    Best Regards,
    Simon Hou

  • Major performance Issues after upgrading to 10.9.2

    Hi,
    I have been having major performance issues, at times almost preventing me from using the computer.  I suspect I don't have enough memory to run Mavericks, as the computer was great before I upgraded.
    If any experts or people with ideas for speeding up the computer could respond, I'd appreciate it.  If you think the only way to improve performance is to add memory or revert to a previous version of OS X that I have a backup of, let me know.
    Here is the info on my system, thank you in advance!!
    Hardware Information:
              MacBook Pro (15-inch, Late 2008)
              MacBook Pro - model: MacBookPro5,1
              1 2.4 GHz Intel Core 2 Duo CPU: 2 cores
              2 GB RAM
    Video Information:
              NVIDIA GeForce 9400M - VRAM: 256 MB
              NVIDIA GeForce 9600M GT - VRAM: 256 MB
    System Software:
              OS X 10.9.2 (13C1021) - Uptime: 3 days 21:20:10
    Disk Information:
              Hitachi HTS543225L9SA02 disk0 : (250.06 GB)
                        EFI (disk0s1) <not mounted>: 209.7 MB
                        :c (disk0s2) / [Startup]: 249.2 GB (67.82 GB free)
                        Recovery HD (disk0s3) <not mounted>: 650 MB
              MATSHITADVD-R   UJ-868 
    USB Information:
              Apple Inc. Built-in iSight
              Apple, Inc. Apple Internal Keyboard / Trackpad
              Apple Computer, Inc. IR Receiver
              Fitbit Inc. Fitbit Base Station
              Apple Inc. BRCM2046 Hub
                        Apple Inc. Bluetooth USB Host Controller
    Thunderbolt Information:
    Configuration files:
              /etc/sysctl.conf - Exists
              /etc/hosts - Count: 29
    Gatekeeper:
              Mac App Store and identified developers
    Kernel Extensions:
              [not loaded] com.LaCie.ScsiType00 (1.2.0) Support
              [not loaded] com.cisco.nke.ipsec (2.0.1) Support
              [not loaded] com.leapfrog.codeless.kext (2) Support
              [not loaded] com.leapfrog.driver.LfConnectDriver (1.8.1 - SDK 10.7) Support
              [not loaded] com.rim.driver.BlackBerryUSBDriverInt (0.0.39) Support
              [not loaded] com.rim.driver.BlackBerryUSBDriverVSP (0.0.39) Support
              [not loaded] net.kromtech.kext.AVKauth (2.3.6 - SDK 10.8) Support
              [not loaded] net.kromtech.kext.Firewall (2.3.6 - SDK 10.8) Support
    Startup Items:
              CiscoVPN: Path: /System/Library/StartupItems/CiscoVPN
    Problem System Launch Daemons:
              [failed] com.apple.wdhelper.plist
    Launch Daemons:
              [loaded] com.adobe.fpsaud.plist Support
              [loaded] com.adobe.SwitchBoard.plist Support
              [running] com.fitbit.galileod.plist Support
              [loaded] com.google.keystone.daemon.plist Support
              [loaded] com.leapfrog.connect.shell.plist Support
              [loaded] com.microsoft.office.licensing.helper.plist Support
              [loaded] com.timesoftware.timemachineeditor.backupd-auto.plist Support
              [running] com.zeobit.MacKeeper.AntiVirus.plist Support
              [running] com.zeobit.MacKeeper.plugin.AntiTheft.daemon.plist Support
    Launch Agents:
              [not loaded] com.adobe.AAM.Updater-1.0.plist Support
              [loaded] com.adobe.CS5ServiceManager.plist Support
              [running] com.brother.LOGINserver.plist Support
              [running] com.google.keystone.agent.plist Support
    User Launch Agents:
              [loaded] com.adobe.ARM.[...].plist Support
              [failed] [email protected]
              [loaded] com.macpaw.CleanMyMac.helperTool.plist Support
              [running] com.microsoft.LaunchAgent.SyncServicesAgent.plist Support
              [running] com.zeobit.MacKeeper.Helper.plist Support
    User Login Items:
              Google Chrome
    Internet Plug-ins:
              o1dbrowserplugin: Version: 5.3.1.18536 Support
              Google Earth Web Plug-in: Version: 7.1 Support
              Default Browser: Version: 537 - SDK 10.9
              Flip4Mac WMV Plugin: Version: 2.3.8.1 Support
              OfficeLiveBrowserPlugin: Version: 12.2.9 Support
              AdobePDFViewerNPAPI: Version: 10.1.9 Support
              FlashPlayer-10.6: Version: 13.0.0.201 - SDK 10.6 Support
              DivXBrowserPlugin: Version: 2.0 Support
              Silverlight: Version: 5.1.10411.0 - SDK 10.6 Support
              Flash Player: Version: 13.0.0.201 - SDK 10.6 Outdated! Update
              iPhotoPhotocast: Version: 7.0
              googletalkbrowserplugin: Version: 5.3.1.18536 Support
              QuickTime Plugin: Version: 7.7.3
              AdobePDFViewer: Version: 10.1.9 Support
              GarminGpsControl: Version: 2.6.4.0 Release Support
              SharePointBrowserPlugin: Version: 14.3.9 - SDK 10.6 Support
              JavaAppletPlugin: Version: 14.9.0 - SDK 10.7 Check version
    Safari Extensions:
              Dashlane: Version: 2.4.0.55923
    Audio Plug-ins:
              BluetoothAudioPlugIn: Version: 1.0 - SDK 10.9
              AirPlay: Version: 2.0 - SDK 10.9
              AppleAVBAudio: Version: 203.2 - SDK 10.9
              iSightAudio: Version: 7.7.3 - SDK 10.9
    iTunes Plug-ins:
              Quartz Composer Visualizer: Version: 1.4 - SDK 10.9
    User Internet Plug-ins:
              Dashlane: Version: Dashlane 1.0.0 - SDK 10.7 Support
              Move_Media_Player: Version: npmnqmp 071705000010 Support
              WebEx64: Version: 1.0 - SDK 10.6 Support
              Picasa: Version: 1.0 Support
    3rd Party Preference Panes:
              Flash Player  Support
              Flip4Mac WMV  Support
              Growl  Support
    Time Machine:
              Skip System Files: NO
              Auto backup: NO - Auto backup turned off
              Time Machine not configured!
    Top Processes by CPU:
                   3%          WindowServer
                   2%          SystemUIServer
                   1%          diskimages-helper
                   1%          mds
                   0%          Google Chrome Helper EH
    Top Processes by Memory:
              94 MB          Google Chrome
              59 MB          GoogleSoftwareUpdateDaemon
              57 MB          Google Chrome Helper EH
              52 MB          Google Chrome Helper
              39 MB          Finder
    Virtual Memory Information:
              40 MB          Free RAM
              499 MB          Active RAM
              481 MB          Inactive RAM
              465 MB          Wired RAM
              7.33 GB          Page-ins
              502 MB          Page-outs

    The performance issues are due to third-party software. Mavericks requires a minimum of 2 GB of RAM, but I don't think that's the issue, and you can upgrade the RAM anytime.
    MacKeeper should be uninstalled. It does far more harm than good.
    Do not install MacKeeper: Apple Support Communities
    Uninstall instructions > how to uninstall MacKeeper
    249.2 GB (67.82 GB free)
    Keep an eye on available disk space.
    Click the Apple menu icon at the top left of your screen. From the drop-down menu click About This Mac > More Info > Storage.
    Make sure there's at least 15% free disk space. Less can slow your Mac down.
    You also need to uninstall CleanMyMac >  How To Uninstall CleanMyMac
    Third-party so-called Mac cleaning utilities are not necessary on a Mac. Your Mac runs maintenance in the background for you.
    Mac OS X: About background maintenance tasks

  • OWB Performance Bottleneck

    Is there any session log produced by the OWB mapping execution, other than the results shown in the OWB Runtime Audit Browser?
    Suppose the mapping is doing a hash join that is consuming too much time, and I would like to see which tables are being joined at that instant. This would help me identify the exact problem area in the mapping. Does OWB provide a session log that can give me that information, or is there any other place where I can get some information about the operation that is causing a performance bottleneck?
    regards
    -AP

    Thanks for all your suggestions. The mapping was using a join between some 4-5 tables, and I think this is where it was getting stuck during execution in set-based mode. Moreover, the mapping loads some 70 million records into the target table. Loading such a huge volume of data in set-based mode, with a massive join at the beginning, is probably why the mapping got stuck.
    The solution that came up was to create a table from the join condition and use that table as input to the mapping. This let us get rid of the joiner at the very beginning and also run the mapping in Row Based Target Only mode. The data (70 million rows) got loaded in some 4 hours.
    regards
    -AP

  • Will RAC's performance bottleneck be the shared disk storage ?

    Hi All
    I'm studying RAC and I'm concerned about RAC's I/O performance bottleneck.
    If I have 10 nodes and they all use the same storage disks to hold the database, then
    they will do I/O to those disks simultaneously.
    Maybe we get more latency ...
    Will that be a performance problem?
    How does RAC solve this kind of problem?
    Thanks.

    J.Laurence wrote:
    I see FC can solve the problem with bandwidth (throughput).
    There are a couple of layers in the I/O subsystem for RAC.
    There is Cache Fusion, as already mentioned. Why read a data block from disk when another node has it in its buffer cache and can provide it instead (over the interconnect communication layer)?
    Then there are the actual pipes between the server nodes and the storage system. Fibre is slow and not what the latest RAC architecture (such as Exadata) uses.
    Traditionally, you pop an HBA card into the server that provides you with 2 fibre channel pipes to the storage switch. These usually run at 2Gb/s, and the I/O driver can load balance and fail over. So in theory it can scale to 4Gb/s and provide redundancy should one pipe fail.
    Exadata and more "modern" RAC systems use HCA cards running InfiniBand (IB). This provides scalability of up to 40Gb/s. They are also dual port, which means that you have 2 cables running into the storage switch.
    IB supports a protocol called RDMA (Remote Direct Memory Access). This essentially allows memory to be "shared" across the IB fabric layer, and is used to read data blocks from the storage array's buffer cache into the local Oracle RAC instance's buffer cache.
    Port to port latency for a properly configured IB layer running QDR (4 speed) can be lower than 70ns.
    And this does not stop there. You can of course add a huge memory cache in the storage array (which is essentially a server with a bunch of disks). Current x86-64 motherboard technology supports up to 512GB RAM.
    Exadata takes it even further as special ASM software on the storage node reconstructs data blocks on the fly to supply the RAC instance with only relevant data. This reduces the data volume to push from the storage node to the database node.
    So Fibre Channel in this sense is a bit dated, as is GigE.
    But what about the hard drives' read and write I/O? Not a problem, as the storage array deals with that. A RAC instance that writes a data block writes it into the storage buffer cache, where the storage array software manages that cache and does the physical write to disk.
    Of course, it will stripe heavily and will have 24+ disk controllers available to write that data block, so do not think of I/O latency in terms of the actual speed of a single disk.

  • J2EE application performance bottlenecks

    For anyone interested in learning how to resolve J2EE application performance bottlenecks, I found a great resource:
    http://www.cyanea.com/email/throttle_form2.html
    registering with them gives you a chance to win 1 of 3 iPod minis

    I agree with yawmark's response #1 in one of your evil spams http://forum.java.sun.com/thread.jsp?thread=514026&forum=54&message=2446641

  • Array as Shared Memory - performance bottleneck

    Array as shared memory - performance bottleneck
    Hello,
    Currently I work on a multi-threaded application, where many threads work on shared memory.
    I'm wondering why the application doesn't become faster when using many threads (I have an i7 machine).
    Here is an example of initialization in a single thread:
          final int arrayLength = (int) 1e7;
          final int threadNumber = Runtime.getRuntime().availableProcessors();
          final int offset = arrayLength / threadNumber;
          long startTime;

          // init array in single thread
          Integer[] a1 = new Integer[arrayLength];
          startTime = System.currentTimeMillis();
          for (int i = 0; i < arrayLength; i++) {
               a1[i] = i;
          }
          System.out.println("single thread=" + (System.currentTimeMillis() - startTime));

    and here is the initialization with many threads:

          // init array in many threads
          final Integer[] a3 = new Integer[arrayLength];
          List<Thread> threadList = new ArrayList<Thread>();
          for (int i = 0; i < threadNumber; i++) {
               final int iF = i;
               Thread t = new Thread(new Runnable() {
                    @Override
                    public void run() {
                         int end = (iF + 1) * offset;
                         if (iF == (threadNumber - 1))
                              end = a3.length;
                         for (int i = iF * offset; i < end; i++) {
                              a3[i] = i;
                         }
                    }
               });
               threadList.add(t);
          }
          startTime = System.currentTimeMillis();
          for (Thread t : threadList)
               t.start();
          for (Thread t : threadList)
               t.join();

    After execution it looks like this:
    single thread=2372
    many threads List=3760

    I have an i7 with 4 GB RAM.
    System + Parameters:
    JVM-64bit JDK1.6.0_14
    -Xmx3g
    Why is executing one thread faster than executing many threads?
    As you can see, I didn't use any synchronization.
    Maybe I have to configure the JVM in some way to gain the desired performance (I expected an 8x performance gain on the i7)?

    Hello,
    I'm from happy-guys (http://www.happy-guys.com), and we developed a new sorting algorithm to sort an array on a multi-core machine.
    But after the algorithm was implemented, it was a little bit slower than the standard sorting algorithm from the JDK (Arrays.sort(...)). After searching for the reason, I created performance tests which show that arrays in Java don't allow many threads to access them at the same time.
    The bad news is: different threads slow each other down even if they use different array objects.
    I believe all array objects are natively managed by a global manager in the JVM, and this manager creates a global lock for all threads.
    Only one thread can access any array at the same time!
    I used:
    Software:
    1)Windows Vista 64bit,
    2) java version "1.6.0_14"
    Java(TM) SE Runtime Environment (build 1.6.0_14-b08)
    Java HotSpot(TM) 64-Bit Server VM (build 14.0-b16, mixed mode)
    Hardware:
    Intel(R) Core(TM) i7 CPU 920 @ 2,67GHz 2,79 GHz, 6G RAM
    Test1: initialization of array in a single thread
    Test2: the array initialization in many threads on the single array
    Test3: array initialization in many threads on many arrays
    Results in ms:
    Test1 = 5588
    Test2 = 4976
    Test3 = 5429
    Test1:

          package org.happy.concurrent.sort.forum;

          /**
           * Simulates the initialization of an array in a single thread.
           * @author Andreas Hollmann
           */
          public class ArraySingleThread {
               public static void main(String[] args) throws InterruptedException {
                    final int arrayLength = (int) 2e7;
                    long startTime;

                    // init array in single thread
                    Integer[] a1 = new Integer[arrayLength];
                    startTime = System.currentTimeMillis();
                    for (int i = 0; i < arrayLength; i++) {
                         a1[i] = i;
                    }
                    System.out.println("single thread=" + (System.currentTimeMillis() - startTime));
               }
          }

    Test2:
          package org.happy.concurrent.sort.forum;

          import java.util.ArrayList;
          import java.util.List;

          /**
           * Simulates the array initialization in many threads on a single array.
           * @author Andreas Hollmann
           */
          public class ArrayManyThreads {
               public static void main(String[] args) throws InterruptedException {
                    final int arrayLength = (int) 2e7;
                    final int threadNumber = Runtime.getRuntime().availableProcessors();
                    final int offset = arrayLength / threadNumber;
                    long startTime;

                    // init array in many threads
                    final Integer[] a = new Integer[arrayLength];
                    List<Thread> threadList = new ArrayList<Thread>();
                    for (int i = 0; i < threadNumber; i++) {
                         final int iF = i;
                         Thread t = new Thread(new Runnable() {
                              @Override
                              public void run() {
                                   int end = (iF + 1) * offset;
                                   if (iF == (threadNumber - 1))
                                        end = a.length;
                                   for (int i = iF * offset; i < end; i++) {
                                        a[i] = i;
                                   }
                              }
                         });
                         threadList.add(t);
                    }
                    startTime = System.currentTimeMillis();
                    for (Thread t : threadList)
                         t.start();
                    for (Thread t : threadList)
                         t.join();
                    System.out.println("many threads List=" + (System.currentTimeMillis() - startTime));
               }
          }

    Test3:
          package org.happy.concurrent.sort.forum;

          import java.util.ArrayList;
          import java.util.List;

          /**
           * Simulates the array initialization in many threads on many arrays.
           * @author Andreas Hollmann
           */
          public class ArrayManyThreadsManyArrays {
               public static void main(String[] args) throws InterruptedException {
                    final int arrayLength = (int) 2e7;
                    final int threadNumber = Runtime.getRuntime().availableProcessors();
                    final int offset = arrayLength / threadNumber;
                    long startTime;

                    // init many arrays in many threads
                    final ArrayList<Integer[]> list = new ArrayList<Integer[]>();
                    for (int i = 0; i < threadNumber; i++) {
                         int size = offset;
                         if (i < (threadNumber - 1))
                              size = offset + arrayLength % threadNumber;
                         list.add(new Integer[size]);
                    }
                    List<Thread> threadList = new ArrayList<Thread>();
                    for (int i = 0; i < threadNumber; i++) {
                         final int index = i;
                         Thread t = new Thread(new Runnable() {
                              @Override
                              public void run() {
                                   Integer[] a = list.get(index);
                                   int value = index * offset;
                                   for (int i = 0; i < a.length; i++) {
                                        value++;
                                        a[i] = value;
                                   }
                              }
                         });
                         threadList.add(t);
                    }
                    startTime = System.currentTimeMillis();
                    for (Thread t : threadList)
                         t.start();
                    for (Thread t : threadList)
                         t.join();
                    System.out.println("many threads - many List=" + (System.currentTimeMillis() - startTime));
               }
          }
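    A side note, not from the original thread: the tests above allocate tens of millions of Integer wrapper objects, so they are most likely dominated by autoboxing, allocation and memory bandwidth rather than by any per-array lock (plain array reads and writes are not locked by the JVM). For comparison, here is a minimal sketch using a primitive int[] with the same range-splitting idea; the class and variable names are illustrative only.

          import java.util.concurrent.TimeUnit;

          // Illustrative only: same range-splitting idea as above, but on a
          // primitive int[] so no Integer objects are allocated per element.
          public class PrimitiveArrayInit {
               public static void main(String[] args) throws InterruptedException {
                    final int arrayLength = (int) 2e7;
                    final int threadNumber = Runtime.getRuntime().availableProcessors();
                    final int[] a = new int[arrayLength];
                    final int chunk = arrayLength / threadNumber;

                    Thread[] threads = new Thread[threadNumber];
                    long start = System.nanoTime();
                    for (int t = 0; t < threadNumber; t++) {
                         final int from = t * chunk;
                         final int to = (t == threadNumber - 1) ? arrayLength : from + chunk;
                         threads[t] = new Thread(new Runnable() {
                              @Override
                              public void run() {
                                   for (int i = from; i < to; i++) {
                                        a[i] = i;   // plain write, no Integer boxing
                                   }
                              }
                         });
                         threads[t].start();
                    }
                    for (Thread t : threads) {
                         t.join();
                    }
                    long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
                    System.out.println("primitive int[] with " + threadNumber
                              + " threads: " + elapsedMs + " ms");
               }
          }

    If this version scales noticeably better with the thread count, that points to object allocation, not array access, as the limiting factor in the Integer-based tests.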

  • Performance bottleneck of hard drive: assets vs. cache vs. render-to drive?

    so i'm beefing up my old mac pro tower (5,1) and was wondering which combination of use of hard drives is fastest, if anyone has any firsthand or theoretical suggestions...
    if someone has all three of these hard drives:
    A) PCIe SSD (OWC Mercury Accelsior_E2 PCI Express SSD)
    B) internal drive bay SSD
    C) external SSD connected via 600MB/s eSATA port of the above linked card
    … which is best to use in combination for the following in After Effects CC/CC2014?
    1) storage of assets files used in the AE project (ie. 1080/4k/RAW/etc footage, PSD files)
    2) AE disk cache
    3) the drive that AE is rendering to
    … for example is 1A + 2C + 3B the fastest use for rendering? and is 1AC + 2B the fastest for while working in AE?
    between assets, disk cache, and render location, which are more of a performance bottleneck?
    and does the optimal combination vary if someone had 16 GB vs 64GB vs 128GB of RAM?
    thanks in advance for any insight!

    Well, the long and short answer is: it won't matter. All your system buses only have so much overall transfer bandwidth, and ultimately they all end up being piped in some way through your PCI bus, which in addition is shared by your graphics card, audio devices and what have you as well. There are going to be wait states and data collisions, and whether or not you can make your machine fly to Mars is ultimately not relevant. There may perhaps be some tiny advantage in using a native PCI card SSD for the cache, but otherwise the overall combined data transfer rates will be way above and beyond what your system can handle, so it will throttle one way or the other.
    Mylenium

  • Performance bottleneck with 2.2.1 and 2008 R2 os VM's

    Hi,
    I have a DL370 G6 with Oracle VM Server 2.2.1 installed,
    72 GB of memory and 2 dual + quad core processors.
    All VMs are installed on local disk (6 x 300 GB in RAID 5).
    I have 2 NICs connected to a switch for LAN traffic.
    We have 10 VMs with the 2008 R2 OS on them.
    The overall performance of these VMs is really horrible.
    They are very, very slow.
    Installing a database takes 4 hours, even though each VM has 6 GB of RAM.
    Restarting the system takes around 20 minutes.
    Has anybody tried this many VMs on one server?
    Is there any tool or way I can see what the issue is, or whether there is a bottleneck on the server?
    2008 R2 is generally a resource-hungry OS, but still, the overall performance is really horrible.

    hi,
    hdparm -T /dev/cciss/c0d0 (which is the drive) gives:
    /dev/cciss/c0d0
    Timing cached reads: 31088 MB in 1.99 seconds = 15595.54 MB/sec
    HDIO_Drive_CMD(null) (wait for flush complete ) failed : Inappropriate ioctl for device
    HDIO_Drive_CMD(null) (wait for flush complete ) failed : Inappropriate ioctl for device
    hdparm -T /dev/cciss/c0d0p5 (which is /OVS) gives:
    /dev/cciss/c0d0p5
    Timing cached reads: 30636 MB in 1.99 seconds = 15364.10 MB/sec
    HDIO_Drive_CMD(null) (wait for flush complete ) failed : Inappropriate ioctl for device
    HDIO_Drive_CMD(null) (wait for flush complete ) failed : Inappropriate ioctl for device
    All my VM guests run Windows 2008 R2 64-bit, which is a very new Windows OS. Oracle came up with new PV drivers for it, and I suspect that could be a reason for all this resource bottleneck.
    For the I/O tests, on the VM server:
    1) iostat
    avg-cpu: %user %nice %system %iowait %steal %idle
    0.11 0.00 0.03 0.24 0.00 99.61
    Device tps Blk_read/s Blk_wrtn/s
    cciss/c0d0 - disk 41.90 186.32 26.05
    cciss/c0d0p1-/boot 0.00 0.00 0.00
    cciss/c0d0p2 - 0.00 0.00 0.00
    cciss/c0d0p3- / 1.90 1.83 43.06
    cciss/c0d0p4 0.00 0.00 0.00
    cciss/c0d0p5 - /OVS 40.00 184.49 295.82
    2) vmstat
    swpd free buff cache
    92 166632 94940 53444
    3) sar
    To display block I/O activity:
    sar -b 3 100
    Average : tps - 41.18 rtps - 7.17 wtps - 34.01 bread/s 188.99 bwrtn/s - 588.64
    To display block I/O activity for each block device:
    sar -d 3 100
    Does this look OK? Is there any way to improve overall performance, like enabling or disabling something? We are facing a very bad problem here with all the VMs running the 2008 R2 OS.

  • Structured approach to debugging performance bottlenecks for 3rd Party apps

    Hi All,
    I am facing a situation which I believe most app support personnel and DBAs in IT organizations face, but I haven't found a structured approach to solving the problem. I am hoping this thread can help filter and pull together the varied chunks of information out there in one place.
    Here is the situation. I am avoiding making it too specific, as the idea is to identify a good approach that is repeatable in other scenarios.
    We are in the process of implementing a solution using a third-party application (SAP's BPC), which sits on an Oracle database. The application implementation team has some control over how the application is used to design the solution, but no direct access to the underlying queries that the app generates. We are starting to find that as the underlying database grows (from a couple of million to tens of millions of records), the performance of certain operations is becoming very unpredictable. Sometimes an operation goes through relatively fast, while at other times it gets stuck for over an hour and then times out.
    In such situations it is a classic battle between the Oracle DBAs and the app implementation team, each trying to push the ball into the other's court when it comes to identifying and "fixing" the problem.
    What in your opinion would be a structured approach between the two teams to help solve the problem? For each step of the approach, please also try and see if you can point to links which further dive into specifics of executing that step.
    For example, one approach might be to ...
    1. DBA team to find a way to identify the specific queries/DB operations that are taking too long. (add references here)
    2. App team to collaborate with the app manufacturer's support organization to see what design changes or parameters could alter the nature of the queries being generated or affect the size of the underlying tables. (too specific for each 3rd party app)
    3. After exhausting (2), DBA team to analyze the remaining culprit queries and find ways to obtain better performance without changing the query or the size of the database tables, via indexes/DB parameters/etc. (add references here)
    4. After exhausting (3), DBA/Unix admin team to identify which specific hardware bottlenecks are being hit (CPUs/storage/memory) to see if hardware changes can help obtain better performance.
    Thoughts?

    >
    1. DBA team to find a way to identify the specific queries/DB operations that are taking too long. (add references here)
    2. App team to collaborate with the app manufacturer's support organization to see what design changes or parameters could alter the nature of the queries being generated or affect the size of the underlying tables. (too specific for each 3rd party app)
    3. After exhausting (2), DBA team to analyze the remaining culprit queries and find ways to obtain better performance without changing the query or the size of the database tables, via indexes/DB parameters/etc. (add references here)
    4. After exhausting (3), DBA/Unix admin team to identify which specific hardware bottlenecks are being hit (CPUs/storage/memory) to see if hardware changes can help obtain better performance.
    >
    In general your approach is correct.
    However I'd put priorities different way.
    1. DBA team to find a way to identify the specific queries/DB operations that are taking too long. (add references here)
    2. DBA team to analyze the culprit queries and find ways to obtain better performance without changing the query or the size of the database tables via indexes/DB parameters/etc.. (add references here)
    With collaboration with the App manufacturer's support if required.
    Indexes are transparent to application logic. They do not affect results data. Only performance.
    Note that indexes should be regular b-tree indexes, not unique or bitmap.
    Edited by: user11181920 on Nov 7, 2012 3:20 PM
    Changes to queries can be allowed here, using Oracle query substitution techniques (Plan Stability, Plan Management...).
    3. After exhausting (2), DBA/Unix admin team to identify which specific hardware bottlenecks are being hit (CPUs/storage/memory) to see if hardware changes can help obtain better performance.
    Not only because beefing up hardware is today a less expensive way to improve performance than software optimization (especially a redesign of the app), but mainly because, in the case of SAP, poor performance that can be improved by hardware indicates that the sizing of the system was done incorrectly.
    SAP has a methodology to size your HW depending on the volume of data, number of users and quantity of transactions.
    Sizing should be redone if your data has grown beyond the volume that was used for the initial SAP sizing.
    4. After exhausting (3), App team to collaborate with the App manufacturer's support organization to see what design changes or parameters could alter the nature of queries being generated or affect the size of the underlying tables. (too specific for each 3rd party app)
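    As a concrete illustration of step 1, here is a minimal JDBC sketch that pulls the top statements by total elapsed time from V$SQL. It is an example only: the connection details are placeholders, it assumes the Oracle JDBC driver is on the classpath, and the account needs SELECT privileges on the V$ views; in practice the same information often comes from AWR or Statspack reports.

          import java.sql.Connection;
          import java.sql.DriverManager;
          import java.sql.ResultSet;
          import java.sql.Statement;

          // Example only: lists the top 20 SQL statements by total elapsed time.
          public class TopSqlByElapsedTime {
               public static void main(String[] args) throws Exception {
                    String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";   // placeholder host/service
                    String sql =
                           "SELECT sql_id, executions, elapsed_time, sql_text "
                         + "FROM (SELECT sql_id, executions, elapsed_time, sql_text "
                         + "        FROM v$sql ORDER BY elapsed_time DESC) "
                         + "WHERE ROWNUM <= 20";
                    Connection con = DriverManager.getConnection(url, "perf_user", "change_me");  // placeholder credentials
                    try {
                         Statement stmt = con.createStatement();
                         ResultSet rs = stmt.executeQuery(sql);
                         while (rs.next()) {
                              System.out.printf("%s  exec=%d  elapsed_us=%d  %s%n",
                                        rs.getString("sql_id"),
                                        rs.getLong("executions"),
                                        rs.getLong("elapsed_time"),
                                        rs.getString("sql_text"));
                         }
                    } finally {
                         con.close();
                    }
               }
          }

    A short list like this is usually enough to hand the application team concrete candidate statements before moving on to steps 2-4.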

  • Does BPM - for a synchronous interface have a performance bottleneck

    Hi All,
    Just have a small query.
    We have a scenario in which we need to receive PO details from a legacy system, create a sales order in ECC and send back a response table to the legacy system.
    Our understanding is that this can be achieved using synchronous ABAP proxies, and that it also involves BPM and abstract mappings.
    I believe that this should not pose any problems. My concern is that we are confused as to whether BPM would have performance bottlenecks. Is there any SAP document or article which states that for synchronous interfaces BPM is the only way to go, and that it would not have a significant impact on performance?
    Another approach to the problem would be to create an asynchronous inbound proxy, write ABAP code within it and call a separate outbound asynchronous proxy within the inbound proxy method. This approach looks and sounds very clumsy.
    Kindly let me know your thoughts or any links which would be useful.
    Thanks & Regards,
    Mz

    Hi Aashish,
    Thanks for your quick reply. It was helpful, but I am not using RFCs. Correct me if I am wrong, but I have explained the scenarios in detail below.
    Scenario 1. Synchronous
    1) PI Picks file from a common folder.
    2) PI does a data mapping and sends the data to ECC.
    3) ECC contains an inbound interface which receives the data and in which abap proxy code is written.
    4) The abap proxy code executes a function module and sends the response as an internal table back to PI.
    5) PI receives the response, puts it in a text/csv file and places it in another folder.
    I assume that the above would be possible only using BPM. What I understand is that in order for an interface to receive and send data, abstract mappings are to be used, and for this BPM is required. We do not have any conversions etc.; it's just a simple matter of receiving an internal table from ECC and creating a file to place in the folder.
    I also understand that BPM could have bottlenecks due to queue and cache issues; messages might be pending, lost, etc.
    Scenario 2. Asynchronous
    1) PI Picks file from a common folder.
    2) PI does a data mapping and sends the data to ECC.
    3) ECC contains an inbound interface which receives the data and in which abap proxy code is written.
    4) The ABAP proxy code executes the same function module and calls a separate outbound interface, passing the values to it. This would be used to send the response back.
    5) PI receives the response from the second interface, puts it in a text/csv file and places it in another folder.
    I would like to know which would be the better approach. Documentation/references to support your claims would be much appreciated.
    Cheers,
    Mz

  • Performance bottleneck in Service.poll

    In running a performance evaluation of the Coherence software I'm seeing almost all threads blocked waiting in Service.poll, as illustrated in the thread dump below. This particular stack comes from a call to NamedCache.put( org.apache.commons.collections.keyvalue.MultiKey, Object ). My app (deployed in WLS) is chugging along at 150% CPU (on a 24-CPU machine) and is giving me about 2.5 tps.
         "ExecuteThread: '5' for queue: 'oms.xml'" daemon prio=5 tid=0x013bb138 nid=0x11 in Object.wait() [70c7e000..70c7
         fc28]
         at java.lang.Object.wait(Native Method)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.poll(Service.CDB:26)
         - locked <0x82145d10> (a com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKey
         Request$Poll)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.poll(Service.CDB:1)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.put(Di
         stributedCache.CDB:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$BinaryMap.put(Di
         stributedCache.CDB:1)
         at com.tangosol.util.ConverterCollections$ConverterObservableMap.put(ConverterCollections.java:1878)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.DistributedCache$ViewMap.put(Dist
         ributedCache.CDB:1)
         at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:882)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:805)
         at com.tangosol.net.cache.CachingMap.put(CachingMap.java:742)
         I have a near scheme configured with a local scheme fronting a distributed scheme.
         Any suggestions on how to alleviate this bottleneck?
         Regards, San

    Ahh, it definitely looks like things are all blocking on a single daemon thread then:
         One of these before each of the following:
         "DistributedCache:EventDispatcher" daemon prio=5 tid=0x0152b068 nid=0x130 in Object.wait() [6347f000..6347fc28]
         at java.lang.Object.wait(Native Method)
         at com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:9)
         - locked <0xa34960f0> (a com.tangosol.coherence.component.util.daemon.queueProcessor.Service$EventDispat cher$Queue)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:31)
         at java.lang.Thread.run(Thread.java:534)
         "DistributedCache" daemon prio=5 tid=0x0116f828 nid=0x12f runnable [6227e000..6227fc28]
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:219)
         at com.tangosol.net.ResolvingObjectInputStream.resolveClass(ResolvingObjectInputSt ream.java:57)
         at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1513)
         at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1435)
         at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1521)
         at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1435)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1626)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:324)
         at java.util.HashSet.readObject(HashSet.java:276)
         at sun.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.ja va:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:838)
         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1746)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1646)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
         at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1845)
         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1769)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1646)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:324)
         at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.ja va:1626)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:174 3)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:187 )
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache$ConverterFromBinary.convert(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache$Storage$BinaryEntry.getValue(DistributedCache.CDB:9)
         at com.tangosol.util.filter.ExtractorFilter.evaluateEntry(ExtractorFilter.java:78)
         at com.tangosol.util.filter.AllFilter.evaluateEntry(AllFilter.java:75)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache$Storage.query(DistributedCache.CDB:107)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache.onQueryRequest(DistributedCache.CDB:25)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache$QueryRequest.run(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheReq uest.onReceived(DistributedCacheRequest.CDB:12)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onMessage(S ervice.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Se rvice.CDB:103)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:34)
         at java.lang.Thread.run(Thread.java:534)
         "DistributedCache" daemon prio=5 tid=0x0116f828 nid=0x12f runnable [6227f000..6227fc28]
         at java.io.ObjectStreamClass.getReflector(ObjectStreamClass.java:1923)
         - waiting to lock <0xa02a04e0> (a sun.misc.SoftCache)
         at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:501)
         at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1521)
         at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1435)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1626)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:324)
         at java.util.HashSet.readObject(HashSet.java:276)
         at sun.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.ja va:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:838)
         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1746)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1646)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
         at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1845)
         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1769)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1646)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:324)
         at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.ja va:1626)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:174 3)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:187 )
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache$ConverterFromBinary.convert(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache$Storage$BinaryEntry.getValue(DistributedCache.CDB:9)
         at com.tangosol.util.filter.ExtractorFilter.evaluateEntry(ExtractorFilter.java:78)
         at com.tangosol.util.filter.AllFilter.evaluateEntry(AllFilter.java:75)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache$Storage.query(DistributedCache.CDB:107)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache.onQueryRequest(DistributedCache.CDB:25)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache$QueryRequest.run(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheReq uest.onReceived(DistributedCacheRequest.CDB:12)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onMessage(S ervice.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Se rvice.CDB:103)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:34)
         at java.lang.Thread.run(Thread.java:534)
         "DistributedCache" daemon prio=5 tid=0x0116f828 nid=0x12f runnable [6227f000..6227fc28]
         at java.lang.String.intern(Native Method)
         at java.io.ObjectStreamField.(ObjectStreamField.java:84)
         at java.io.ObjectStreamClass.readNonProxy(ObjectStreamClass.java:543)
         at java.io.ObjectInputStream.readClassDescriptor(ObjectInputStream.java:762)
         at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1503)
         at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1435)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1626)
         at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1274)
         at java.io.ObjectInputStream.readObject(ObjectInputStream.java:324)
         at com.tangosol.util.ExternalizableHelper.readSerializable(ExternalizableHelper.ja va:1626)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:174 3)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:187 )
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache$ConverterFromBinary.convert(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache$Storage$BinaryEntry.getValue(DistributedCache.CDB:9)
         at com.tangosol.util.filter.ExtractorFilter.evaluateEntry(ExtractorFilter.java:78)
         at com.tangosol.util.filter.AllFilter.evaluateEntry(AllFilter.java:75)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache$Storage.query(DistributedCache.CDB:107)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache.onQueryRequest(DistributedCache.CDB:25)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache$QueryRequest.run(DistributedCache.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheReq uest.onReceived(DistributedCacheRequest.CDB:12)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onMessage(S ervice.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.onNotify(Se rvice.CDB:103)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Distributed Cache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:34)
         at java.lang.Thread.run(Thread.java:534)
         We'll try incrementing the thread-count.
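    A side note, not from the original thread: the later stacks show the DistributedCache service thread deserializing entry values just to evaluate an ExtractorFilter/AllFilter, so besides raising the service thread count, a standard Coherence mitigation is to add an index on the queried attribute, which lets filters be evaluated without deserializing every entry. A minimal sketch; the cache name and accessor method below are illustrative only.

          import com.tangosol.net.CacheFactory;
          import com.tangosol.net.NamedCache;
          import com.tangosol.util.extractor.ReflectionExtractor;

          // Illustrative only: adds an ordered index on a hypothetical
          // getCustomerId() accessor of the cached values, so filters that
          // extract this attribute can use the index instead of deserializing
          // every entry on the service thread.
          public class AddQueryIndex {
               public static void main(String[] args) {
                    NamedCache cache = CacheFactory.getCache("orders");   // hypothetical cache name
                    cache.addIndex(new ReflectionExtractor("getCustomerId"),
                                   /* fOrdered */ true,
                                   /* comparator */ null);
               }
          }

    Whether this helps depends on which attributes the filters actually extract; the thread-count increase addresses the separate symptom of a single service thread doing all the work.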

  • Major Performance Issues in Flash 8

    I have been experiencing major slowdown in the Flash 8
    authoring tool... I am working with large files (about 40-50mb
    .fla).
    My system specs aren't too bad but maybe not enough?
    1.5Ghz Pentium M
    512mb RAM
    1024mb virtual memory...
    Is there anything that specifically causes Flash to slow way
    down? I'm thinking about tweaking my system (either settings or by
    upgrading hardware) but I don't know what would help Flash with
    larger projects. I'm assuming that more RAM would speed things up,
    but I don't want to jump to that assumption and buy it without
    knowing for sure...
    ...any ideas?

    Yes... more graphics = heavier load...
    ...obviously.
    However, the performance issues I'm talking about here aren't just slight
    slowdowns. It is a MAJOR slowdown problem that occurs suddenly. Once I hit
    a certain file size, Flash slows down and will not respond for MINUTES. If
    it were a simple "you added more graphics" problem, it would show a gradual
    decrease in performance.
    What I really wanted to know is: what is the weak link in Flash
    performance? Is it RAM, processor speed, or something else?
    I can upgrade, but I'm not buying a new multi-processor
    machine...it's not feasible, and not to mention it's a huge
    overkill.

  • Performance comparison: MyFaces, JSF RI really slow

    As we are looking for a new Java-based web UI framework, JSF seems to be a very interesting option. During our evaluation we did some performance tests. Currently we are using
    a proprietary Java-based servlet framework, which uses JSP as the page description language. This framework is to be replaced by a JSF implementation. We wrote a small application (with no business logic) using our old proprietary framework, the JSF reference implementation, and MyFaces. The application consists mainly of just one JavaBean. The tests were run on
    a 4-CPU Solaris server.
    The results of these tests are quite alarming.
    Whereas our old proprietary JSP solution responds in a reasonable time, JSF RI and MyFaces end up
    with a response time of approximately 12-13 seconds per use case (with 50 concurrent clients, where a use case consists of 3 requests).
    Even more surprising was that the number of use cases per second stays at the same value for MyFaces
    and JSF RI (4 use cases per second) regardless of the number of concurrent clients, compared
    to the JSP solution, where the use cases per second grow up to 90 (with 50 concurrent clients).
    Are there any configuration parameters that could cause this limited number of requests?
    Has anybody else made such observations?
    These results are definitely a reason not to switch to JSF. But as I believe that the concepts of
    JSF are great, I want to be sure that we used JSF in the correct way.
    So what could be wrong? Is there anything that must be taken into account regarding performance?
    Regards
    P.S.:
    I analysed the test application deployed on the JavaServer Faces RI with a profiler, and observed that this method could be a problem:
    javax.faces.component.UIComponentBase#getPropertyDescriptors. This problem is also
    reported in this thread: http://forum.java.sun.com/thread.jsp?forum=427&thread=511439
    I did not find any hint as to why MyFaces is so slow as well, therefore I don't think that solving this
    issue alone will bring great performance improvements.

    We did a test run today with the JSF RI weekly build from 18 August 2004.
    The results were better than with JSF RI 1.1, but still too poor for productive use.
    Instead of 4 use cases per second we got 8 use cases per second (to be more
    precise: with 4 concurrent clients -> 7 use cases; for 10, 20, 30, 40, 50 clients the value sticks
    at 8 use cases).
    I took a thread dump and think there is potentially a lock contention problem. Here is an extract from this thread dump:
    Thread-4" daemon prio=5 tid=0x1502e8 nid=0x14 waiting for monitor entry [b2c7f000..b2c819bc]
         at java.lang.ref.ReferenceQueue.poll(ReferenceQueue.java:76)
         - waiting to lock <cbc03d10> (a java.lang.ref.ReferenceQueue$Lock)
         at java.util.WeakHashMap.expungeStaleEntries(WeakHashMap.java:263)
         at java.util.WeakHashMap.getTable(WeakHashMap.java:292)
         at java.util.WeakHashMap.get(WeakHashMap.java:336)
         at javax.faces.component.UIComponentBase.getPropertyDescriptor(UIComponentBase.java:114)
         at javax.faces.component.UIComponentBase.access$300(UIComponentBase.java:56)
         at javax.faces.component.UIComponentBase$AttributesMap.get(UIComponentBase.java:1353)
         at com.sun.faces.util.Util.hasPassThruAttributes(Util.java:719)
         at com.sun.faces.renderkit.html_basic.TextRenderer.getEndTextToRender(TextRenderer.java:120)
         at com.sun.faces.renderkit.html_basic.HtmlBasicRenderer.encodeEnd(HtmlBasicRenderer.java:173)
         at javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:720)
         at com.sun.faces.renderkit.html_basic.HtmlBasicRenderer.encodeRecursive(HtmlBasicRenderer.java:443)
         at com.sun.faces.renderkit.html_basic.TableRenderer.encodeChildren(TableRenderer.java:257)
         at javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:701)
         at javax.faces.webapp.UIComponentTag.encodeChildren(UIComponentTag.java:607)
         at javax.faces.webapp.UIComponentTag.doEndTag(UIComponentTag.java:544)
         at com.sun.faces.taglib.html_basic.DataTableTag.doEndTag(DataTableTag.java:491)
         at register_jsp._jspx_meth_h_dataTable_0(register_jsp.java:1148)
         at register_jsp._jspx_meth_h_form_2(register_jsp.java:1103)
         at register_jsp._jspx_meth_f_view_0(register_jsp.java:185)
         at register_jsp._jspService(register_jsp.java:124)
         at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:1
    I counted 44 threads waiting for the same object.
    Regards
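    For reference (this is not part of the original thread), a generic way to capture the per-request response times used in comparisons like this is a plain servlet filter; the class name and log format below are illustrative only, and the sketch assumes the standard javax.servlet API.

         import java.io.IOException;
         import javax.servlet.Filter;
         import javax.servlet.FilterChain;
         import javax.servlet.FilterConfig;
         import javax.servlet.ServletException;
         import javax.servlet.ServletRequest;
         import javax.servlet.ServletResponse;
         import javax.servlet.http.HttpServletRequest;

         // Illustrative request-timing filter: logs how long each request spends
         // in the web tier, which is the number being compared above.
         public class RequestTimingFilter implements Filter {

              public void init(FilterConfig config) throws ServletException {
              }

              public void doFilter(ServletRequest request, ServletResponse response,
                                   FilterChain chain) throws IOException, ServletException {
                   long start = System.currentTimeMillis();
                   try {
                        chain.doFilter(request, response);
                   } finally {
                        long elapsed = System.currentTimeMillis() - start;
                        String uri = (request instanceof HttpServletRequest)
                                  ? ((HttpServletRequest) request).getRequestURI() : "?";
                        System.out.println(uri + " took " + elapsed + " ms");
                   }
              }

              public void destroy() {
              }
         }

    Mapped to /* in web.xml, this yields per-request numbers that can be compared directly across the JSP, JSF RI, and MyFaces deployments.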

  • Hard drive passes all tests but extremely high response times causing major performance issues.

    I have a HP Compaq Presario CQ62-360TX pre-loaded with Windows 7 home premium (64-bit) that I purchased just under a year ago.
    Recently my experience has been interrupted by stuttering that ranges from annoying in general use to a major headache when playing music or videos from the hard drive.
    The problem appears to be caused by extremely high hard drive response times (up to 10 seconds).  As far as I know I didn't install anything that might have caused the problems before this happened, and I can't find anything of note looking back through Event Viewer.
    In response to this I've run multiple hard drive scans for problems (chkdsk, scandsk, test through BIOS, test through HP software and others) all of which have passed with no problems. The only thing of any note is a caution on crystaldiskinfo due to the reallocated sector count but as none of the other tests have reported bad sectors I'm unsure as to whether this is causing the problem. I've also updated drivers for my Intel 5 Series 4 Port SATA AHCI Controller from the Intel website and my BIOS from HP as well as various other drivers (sound, video etc), as far as I can tell there are none available for my hard drive directly. I've also wanted to mess with the hard drive settings in the BIOS but it appears those options are not available to me even in the latest version.
    System Specs:
    Processor: Intel(R) Pentium(R) CPU P6100 @ 2.00GHz (2 CPUs), ~2.0GHz
    Memory: 2048MB RAM
    Video Card: ATI Mobility Radeon HD 5400 Series
    Sound Card: ASUS Xonar U3 Audio Device or Realtek High Definition Audio (both have problem)
    Hard Drive: Toshiba MK5065GSK
    Any ideas?
     Edit: The drive is nowhere near full, it's not badly fragmented and as far as I can tell there's no virus or malware.

    Sounds like failing sectors are being replaced with good spares successfully so far. This is done on the fly and will not show up in any test. You have a failing drive; I would back up your data and replace the hard drive.
    On-the-fly sector replacement also explains the poor performance. Replacing sectors with spares is normal if it is just a few over many years, but CrystalDiskInfo is warning you that there are too many, a sign that drive failure is around the corner.
