Performance issue with JBoss and the JDK on Solaris

Hi,
We recently upgraded to Solaris 10 (5.10) on a Sun SPARC 64-bit machine, along with new versions of JBoss and the JDK.
With Solaris 10, the new JBoss, and the new JDK, our application's performance has gone down.
Old configuration
Sun SPARC 64-bit
Solaris 9 (no patches applied)
Oracle 10.2.0.1 (no patches applied)
JBoss 3.2.7
JDK 1.4
New configuration
Sun SPARC 64-bit
Solaris 10 (no patches applied)
Oracle 10.2.0.1 (latest patches applied, i.e. 10.2.0.4)
JBoss 4.2.0
JDK 1.5.0_12
We would like to know about any known Solaris 10 or Oracle 10g issues/patches that address performance problems.
We would also like to know about any JBoss or JDK settings/parameters that can be tuned on Solaris 10 to improve performance.
thanks a lot in advance.
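One commonly tried baseline for JBoss 4.2 on a HotSpot 1.5 JVM is sketched below. These are illustrative values only - an assumption to verify with GC logging under your own load, not a recommendation - placed in JAVA_OPTS (e.g. in JBoss's bin/run.conf):
JAVA_OPTS="-server -Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:+UseParallelGC -verbose:gc"
On SPARC it is also worth confirming whether you are running the 32-bit or 64-bit VM (-d64), and comparing -verbose:gc output between JDK 1.4 and 1.5, since default collector choices changed between releases.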

I'm interested in the same question since we are seeing a similar situation.

Similar Messages

  • Oracle 10g performance issues

    Hi,
    We were using Oracle 9i on Solaris 5.8 and it was working fine, with some minor performance issues. We rebuilt the Solaris server with Solaris 5.10 and installed Oracle 10g.
    Now we are experiencing performance issues in Oracle 10g. The issue arises when the database is accessed through WebSphere 5.1.
    We have analyzed the schema and rebuilt the indexes; the SGA is 4.5 GB, the PGA is 2.0 GB, and the Solaris server has 16 GB of RAM. We also have some materialized views (possibly a cause of the performance issues - not sure) due to refreshes.
    I have also changed some parameters in the init.ora file, such as query_rewrite_integrity = STALE_TOLERATED and open_cursors = 1500.
    Could it be something to do with the driver through which the data is accessed? I suspect it is not utilizing the indexes on the tables.
    Can anyone please suggest what the issue could be?

    There are a lot of changes to the optimizer in the upgrade from 9i to 10g, and you need to be aware of them. There are also a number of changes to the default stats collection mechanism, so after your upgrade your statistics (and hence execution paths) could change dramatically.
    Greg Rahn has a useful entry on his blog about stats collection, and the blog also points to an Oracle white paper which will give you a lot of ideas about where the optimizer changed - which may help you spot your critical issues.
    Otherwise, follow triggb's advice about using Statspack to find the SQL that is the most expensive - it's reasonably likely to be this SQL that has changed execution plans in the upgrade.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
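    As a quick diagnostic experiment (an illustration, not part of the original reply): if a particular session has regressed, you can temporarily switch it back to 9i optimizer behaviour and see whether the old plans return - '9.2.0' here assumes the old database was 9.2:
        alter session set optimizer_features_enable = '9.2.0';
    If performance recovers under this setting, the regression lies in the optimizer/statistics changes rather than in the platform.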

  • JCaps 5.1.3 Sun Solaris CPU performance issue

    Folks,
    We are experiencing a serious CPU performance issue on our Solaris server with HL7 projects deployed.
    The projects consist of the sample HL7 inbound and outbound projects with an additional service sending to a batch local file external for writing journals.
    The performance issue occurs when there is a volume of data in the queues/topics. As we continue to deploy additional HL7 projects (usually about 6 interfaces), CPU usage increases until it reaches 100%.
    This snapshot is prstat output when no data is being transmitted through the interfaces (one inbound, one outbound):
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    15598 jre 379M 177M sleep 59 0 2:49:11 3.1% eManager/74
    21549 phs 1174M 1037M sleep 59 0 14:49:00 2.5% is_dm_phs/113
    23090 phs 3456K 3136K cpu1 59 0 0:00:01 0.4% prstat/1
    23102 phs 3792K 3496K sleep 59 0 0:00:00 0.2% prstat/1
    21550 phs 46M 35M sleep 59 0 0:13:27 0.1% stcms.exe/3
    1272 noaccess 209M 95M sleep 59 0 0:26:30 0.1% java/25
    11733 jre 420M 212M sleep 59 0 1:35:40 0.1% java/34
    131 root 4368K 2480K sleep 59 0 0:02:10 0.1% nscd/30
    23094 phs 3064K 2168K sleep 59 0 0:00:00 0.1% bash/1
    This snapshot is prstat output when data is being transmitted through the interfaces (one inbound, one outbound):
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    21549 phs 1174M 1037M cpu1 20 0 14:51:20 88% is_dm_phs/113
    15598 jre 379M 181M sleep 59 0 2:49:18 1.3% eManager/74
    21550 phs 46M 35M sleep 49 0 0:13:29 1.2% stcms.exe/3
    23090 phs 3456K 3128K cpu3 49 0 0:00:03 0.4% prstat/1
    1272 noaccess 209M 95M sleep 59 0 0:26:30 0.1% java/25
    11733 jre 420M 212M sleep 59 0 1:35:40 0.1% java/34
    21546 phs 118M 904K sleep 59 0 0:01:21 0.1% isprocmgr_dm_ph/13
    This snapshot is prstat -L output when data is being transmitted through the interfaces (one inbound, one outbound):
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/LWPID
    21549 phs 1174M 1037M cpu1 41 0 0:00:45 22% is_dm_phs/13971
    21549 phs 1174M 1037M sleep 51 0 3:31:06 21% is_dm_phs/1394
    21549 phs 1174M 1037M run 51 0 3:14:16 20% is_dm_phs/1296
    21549 phs 1174M 1037M sleep 52 0 3:14:13 19% is_dm_phs/1380
    15598 jre 379M 181M sleep 50 0 1:49:57 3.1% eManager/4
    21549 phs 1174M 1037M sleep 59 0 0:15:36 1.7% is_dm_phs/4
    21550 phs 46M 35M sleep 59 0 0:10:52 1.0% stcms.exe/1
    21549 phs 1174M 1037M sleep 59 0 0:10:45 0.9% is_dm_phs/6
    15598 jre 379M 181M sleep 54 0 0:33:35 0.3% eManager/35
    21549 phs 1174M 1037M sleep 59 0 0:03:34 0.3% is_dm_phs/5
    21550 phs 46M 35M sleep 59 0 0:02:37 0.2% stcms.exe/2
    21549 phs 1174M 1037M sleep 59 0 0:02:17 0.2% is_dm_phs/3
    21549 phs 1174M 1037M sleep 59 0 0:02:17 0.2% is_dm_phs/2
    Solaris 10 server details:
    CPU's (4x900 Sparc III+)
    4096 MB RAM
    SunOS testican 5.9 Generic_118558-39 sun4u sparc SUNW,Sun-Fire-880
    Disk: 6 internal Fujitsu 72GBs
    swapspace on the server:
    total: 4305272k bytes allocated + 349048k reserved = 4654320k used, 10190536k available
    My sysadmin has run statistics (iostat, vmstat, psig, pmap, pfind, pstack, mpstat, etc.) and has reported that the server is performing fine, with the exception of the CPU. It also looked like the swap space was not being utilized.
    We have increased the MaxPermSize value to 512 MB, increased the heap size on isprocmgr_dm_phs to -Xmx2048m, and increased the heap size on the domain to 2048 MB, per KB 103824.
    We have also added the -d64 option (specific to Solaris) per the Deployment Guide.
    We increased the value of Maximum Pool Size on the JMS clients to 128, per the Deployment Guide.
    We increased the swap space on the server to 10 GB:
    total: 4305272k bytes allocated + 349048k reserved = 4654320k used, 10190536k available
    We have modified the TCP/IP and kernel parameters per the Sun Administration Server 8.2 performance tuning guide; the resulting shell limits are:
    core file size (blocks, -c) unlimited
    data seg size (kbytes, -d) unlimited
    file size (blocks, -f) unlimited
    open files (-n) 8192
    pipe size (512 bytes, -p) 10
    stack size (kbytes, -s) 8192
    cpu time (seconds, -t) unlimited
    max user processes (-u) 29995
    virtual memory (kbytes, -v) unlimited
    None of these modifications appears to have improved performance.
    Any help is appreciated.
    Thanks
    Rich...
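    A follow-up technique that usually pinpoints which code is burning the CPU in a case like this (a general Solaris/JVM recipe, not from the original thread; PID 21549 is taken from the prstat output above):
        prstat -mL -p 21549   (per-LWP microstate accounting; confirms the hottest LWPIDs)
        pstack 21549          (native stacks; match the lwp# lines to those LWPIDs)
        jstack 21549          (JDK 5.0+; match each Java thread's nid=0x... to the hot LWPID in hex)
    On older JVMs without jstack, kill -QUIT 21549 writes the Java thread dump to the process's stdout log instead.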

    Hi,
    I noticed this behavior with the Alert + SNMP agents installed but not configured. In this situation, the SNMP agent generates traps for all events, leading to high CPU usage even when nothing is being processed. Are you in a similar situation?
    Regards

  • Sun JVM Performance Issue in Sun Solaris 10 (SPARC)

    Hi,
    Issue: performance degradation after migrating a Java application from IBM AIX 5 to Sun Solaris 10 (SPARC).
    I am facing a performance issue after migrating a Java application from IBM AIX 5.3 to Sun Solaris 10 (SPARC).
    Normally the application takes less than 1 hour to complete the process on AIX, but after the migration to Solaris the application is taking 4+ hours.
    The Java version on IBM AIX is:
    java version "1.5.0"
    Java(TM) 2 Runtime Environment, Standard Edition (build pap32dev-20051104)
    IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 AIX ppc-32 j9vmap3223-20051103 (JIT enabled)
    The Java version on Solaris 10 is:
    Java(TM) Platform, Standard Edition for Business (build 1.5.0_17-b04)
    Java HotSpot(TM) Server VM (build 1.5.0_17-b04, mixed mode)
    Description of Application
    The application merges two XML files of about 300 MB each using a DOM parser and generates a flat file according to certain business logic. No remote files are used for the file generation. There are two folders, each containing around 200 XML files with matching names. The application loads two matching XML files at a time, one from each folder, and processes them; it processes all 200 XML file pairs this way in a loop.
    The JVM parameters on AIX are given below.
    /usr/java5/bin/java -cp $CLASSPATH -Xms3072m -Xmx3072M com.db.mcc.creditderiv.GCDXMLTransProc
    Here the heap size on AIX is 3072m (3 GB). After copying the same code to Solaris, the application started throwing java.lang.OutOfMemoryError, so we increased the heap to 12 GB.
    Since 32-bit Java allows at most 4 GB of addressable memory, we started using 64-bit Java on Solaris via the -d64 argument.
    The current JVM parameters on Solaris are given below.
    java -d64 -cp $CLASSPATH -Xms8192m -Xmx12288m com.db.mcc.creditderiv.GCDXMLTransProc (64 GB of swap space is available on the system)
    We have tried the following options:
    1. Extended the heap size up to 12 GB using the -Xms and -Xmx parameters and tried multiple -XX options. Earlier the application was working fine on AIX with a 3.5 GB heap. (64 GB of swap space is available on the system.)
    2. Downloaded and installed the Solaris SPARC patches from http://java.sun.com/javase/downloads/index_jdk5.jsp
    3. Downloaded and installed the XML and XSLT patch from the Sun website.
    4. Tried to run Java in server mode using the -server option.

    A 64-bit VM is not necessarily faster than a 32-bit one. I remember at least one suggestion that it could be slower.
    Make sure you use the -server option.
    As a guess: IBM isn't necessarily a slouch when it comes to Java. It might simply be that their VM was faster. They could have used a different DOM library as well.
    It could be an environment problem, of course.
    Profiling the application, and the machine as well, might provide useful information.
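    As a concrete starting point for that profiling (a sketch: the class name and heap settings come from the original post, and the hprof options are the standard JDK 5 ones):
        java -d64 -server -Xms8192m -Xmx12288m -Xrunhprof:cpu=samples,depth=10,file=gcd.hprof.txt -cp $CLASSPATH com.db.mcc.creditderiv.GCDXMLTransProc
    The cpu=samples section of the resulting file ranks where the VM spends its time, which should quickly show whether the extra hours go into parsing, garbage collection, or I/O; adding -verbose:gc to the same command line is a cheap way to rule GC in or out.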

  • Performance issues (Oracle 9i Solaris 9)

    Hi Guys,
    How do I tell if my database is performing at its optimum level? We seem to be having performance issues with one of our applications. They are saying it's the database, the network, etc.
    Thank you.

    Hi,
    In order to determine whether or not your database is having performance issues, you will need to install and execute Statspack. Statspack is a utility which provides information about the performance parameters of an Oracle database.
    If you are already using Statspack reports for performance analysis, post a snapshot of the report.
    Regards,
    Prosenjit Mukherjee.
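    For reference, the usual Statspack workflow on 9i uses the standard scripts shipped under $ORACLE_HOME (a sketch - run in SQL*Plus with appropriate privileges):
        @?/rdbms/admin/spcreate.sql    -- one-time install; creates the PERFSTAT schema
        exec statspack.snap;           -- take one snapshot before and one after the slow period
        @?/rdbms/admin/spreport.sql    -- generate a report between two snapshot IDs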

Performance issue with the Studio 10 Fortran compiler on a Sun Blade 2500, Solaris 10

    Dear All,
    We are facing a performance issue on a Sun Blade 2500 with Fortran (Studio 10).
    When we run our code it takes 14 minutes, where the same code took 1 minute on IRIS/Apple Power Mac/HP/Linux machines.
    If you have any solution to this, please mail me at [email protected]
    regards
    Narayan

    You'll need to provide more details before we can help you. You could start by using the performance analyzer to find the slowest parts of your program. You might also be using inappropriate flags (e.g., compiling without optimization would be a problem).

  • Performance issues with class loader on Windows server

    We are observing some performance issues in our application. We are using WebLogic 11g with Java 6 on a Windows 2003 server.
    The thread dumps indicate that many threads are waiting in a queue for the native file methods:
    "[ACTIVE] ExecuteThread: '106' for queue: 'weblogic.kernel.Default (self-tuning)'" RUNNABLE
         java.io.WinNTFileSystem.getBooleanAttributes(Native Method)
         java.io.File.exists(Unknown Source)
         weblogic.utils.classloaders.ClasspathClassFinder.getFileSource(ClasspathClassFinder.java:398)
         weblogic.utils.classloaders.ClasspathClassFinder.getSourcesInternal(ClasspathClassFinder.java:347)
         weblogic.utils.classloaders.ClasspathClassFinder.getSource(ClasspathClassFinder.java:316)
         weblogic.application.io.ManifestFinder.getSource(ManifestFinder.java:75)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.application.utils.CompositeWebAppFinder.getSource(CompositeWebAppFinder.java:71)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.utils.classloaders.CodeGenClassFinder.getSource(CodeGenClassFinder.java:33)
         weblogic.utils.classloaders.GenericClassLoader.findResource(GenericClassLoader.java:210)
         weblogic.utils.classloaders.GenericClassLoader.getResourceInternal(GenericClassLoader.java:160)
         weblogic.utils.classloaders.GenericClassLoader.getResource(GenericClassLoader.java:182)
         java.lang.ClassLoader.getResourceAsStream(Unknown Source)
         javax.xml.parsers.SecuritySupport$4.run(Unknown Source)
         java.security.AccessController.doPrivileged(Native Method)
         javax.xml.parsers.SecuritySupport.getResourceAsStream(Unknown Source)
         javax.xml.parsers.FactoryFinder.findJarServiceProvider(Unknown Source)
         javax.xml.parsers.FactoryFinder.find(Unknown Source)
         javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source)
         org.ajax4jsf.context.ResponseWriterContentHandler.<init>(ResponseWriterContentHandler.java:48)
         org.ajax4jsf.context.ViewResources$HeadResponseWriter.<init>(ViewResources.java:259)
         org.ajax4jsf.context.ViewResources.processHeadResources(ViewResources.java:445)
         org.ajax4jsf.application.AjaxViewHandler.renderView(AjaxViewHandler.java:193)
         org.apache.myfaces.lifecycle.RenderResponseExecutor.execute(RenderResponseExecutor.java:41)
         org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:140)
    From googling, this seems to be an issue with Java file handling on Windows servers, and I couldn't find a solution yet. Any recommendation or pointer is appreciated.

    Hi shubhu,
    I just analyzed your partial thread dump data. The problem is that the ajax4jsf framework's ResponseWriterContentHandler internally creates a new instance of DocumentBuilderFactory every time, triggering heavy IO contention because of class loader / JAR file search operations.
    Too many of these IO operations under heavy load will create excessive contention and severe performance degradation, regardless of the OS you are running your JVM on.
    Please review the link below and see if it is related to your problem. This is a known issue in the JBoss JIRA when using RichFaces / Ajax4jsf.
    https://issues.jboss.org/browse/JBPAPP-6166
    Regards,
    P-H
    http://javaeesupportpatterns.blogspot.com/
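    For illustration, the usual application-side workaround pattern (a sketch - ajax4jsf itself needs the upstream fix described in the JIRA issue) is to create the factory once and reuse it, since it is DocumentBuilderFactory.newInstance() that performs the JAR service-provider lookup visible in the stack trace above:
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.parsers.ParserConfigurationException;
    public final class ParserFactoryHolder {
        // newInstance() walks the classpath (the WinNTFileSystem calls in the
        // thread dump above) to locate a provider, so do it once, not per request.
        private static final DocumentBuilderFactory FACTORY =
                DocumentBuilderFactory.newInstance();
        public static DocumentBuilder newDocumentBuilder()
                throws ParserConfigurationException {
            // DocumentBuilderFactory is not guaranteed to be thread-safe, so
            // serialize access to it; the builders themselves are cheap to create.
            synchronized (FACTORY) {
                return FACTORY.newDocumentBuilder();
            }
        }
        private ParserFactoryHolder() { }
    }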

Performance issue with WebI report - BO XI 3.1

    Hi,
    We have a requirement for a report where we will give the user a set of objects (26-31) to do analysis using the interactive viewing feature. Here we are facing severe performance and memory issues, as the data we are calling is huge (around 6 million records). At the report level we will be summarizing the data.
    The number of rows in the report depends on the number of objects.
    Mode of view: interactive view.
    Note:
    1. The objects used at the condition level have indexes.
    2. There are two report-level variables.
    3. Version of Business Objects: BO XI 3.1
    4. OS: Sun Solaris
    Please let me know if there are any means by which the memory requirements of the report can be minimized or the performance of the report can be improved.
    Thanks,
    Subash

    Subash,
    "At the report level we will be summarizing the data ... any means by which the memory requirements for the report can be minimized/ performance of the report can be improved"
    Is there any way that you can do this summarization on the database side rather than at the report level? The database should be sized with memory and disk space to handle these kinds of summarizations properly, rather than expecting the application to perform them.
    Thanks,
    John

  • Strange performance issue with 3510/3511 SAM-FS disk cache

    Hi there!
    I'm running a small SAM-QFS environment and have a strange performance issue on the disk storage side, which somebody here might be able to explain.
    Configuration: one 3510, dual controller, RAID-5 9+1, one hot spare and one disk not configured for whatever reason. The R5 logical drive hosts a 150GB LUN for SAM-QFS metadata (mm in SAM-FS speak) and a 1TB LUN for data (mr in SAM-FS speak). Further, there are two small LUNs (2GB, 100GB) for some other purpose. Those two LUNs have nearly no I/O. All disks are SUN146G. Host connection is 2GBit, multipathing enabled and working.
    Then the disk cache became too small, and the customer added a 3511 expansion unit with SUN300G disks. One logical drive is a RAID-1, 1+1, used for NetBackup catalog. The other is a RAID-5, 8+1, providing two LUNs: 260GB SAM-FS metadata (mm) and 1.999TB SAM-FS data (mr).
    For SAM-FS, the LUNs form two file systems: one "residing" in the 3510, the other "residing" in the 3511 expansion. Cabling is according to the manual and has been checked several times by several independent people. The operating system is Solaris 10; the hardware is a V880.
    The problem we observe: SAM-FS I/O on LUNs on disks inside the 3510 is fine. With iostat, I see 100MB/s read and 50MB/s write at the same time. On the SAM-FS file system which is running on the two LUNs in the 3511, the limit seems to be at 40MB/s read/write. Both SAM-FS file systems are configured the same in regards of block size.
    When I have activity on both SAM-FS file systems, I see 100MB/s+ on the LUN running inside the controller shelf and another 40MB/s on the disks running in the 3511 expansion chassis. So the controller is easily capable of handling 150MB/s.
    Cache settings in the 3510 controller are default, I think (it wasn't installed by me), and the batteries are fine.
    Is the 40MB/s we experience a limitation of the expansion shelf? I don't think so. Does anybody have any ideas on this? What parameters should I check or change? Any hint is appreciated; I can also provide further details if needed. Thank you.
    wolfgang

    SUN300G disks sound like 300GB FC disks.
    Depending on how many files are in the SAMFS file system, sharing the mm and mr devices on the same RAID array can be a pretty horrible idea. In my opinion and experience, it's almost always better to NEVER put more than one LUN on a RAID array. Period. Putting more than one LUN on an array results in IO contention on that array. And large, unnaturally configured (9+1? Why?) RAID arrays will have problems from the start.
    What are the block sizes used on the RAID arrays? It wouldn't surprise me to see that the RAID array on the expansion tray has a very large block size. Larger block sizes are, in general, not better. Especially for SAMFS metadata - which IIRC is something like 8k or 16k blocks.
    I suspect what is happening is most of the metadata updates are going to the mm device on the new array, contending with the IO operations on the file data.
    How much space is left on each mm device? What does "iostat -sndxz 2" show when you're having the IO problems?

  • Performance issue while opening the report

    Hi,
    I am working with BO XI R3.1. There is a performance issue when opening a report on the BO Solaris server, while on the BO Windows server it is comparatively fast.
    We have a few reports which contain 5 fixed prompts and 7 optional prompts.
    Out of the 5 fixed prompts, 3 are static (they contain only 3-4 records each) and come from a materialized view.
    We have already used many techniques to improve report performance, such as:
    1) Index awareness
    2) Aggregate awareness
    3) Array fetch size - 250
    4) Array bind time - 32767
    5) Login timeout - 600
    The issue is that, before any refresh, opening the report itself takes 1.30 min on the BO Solaris server, while the same report takes 45 sec on the BO Windows server. Even when we import it onto other BO Solaris servers, it takes the same time as on the old Solaris server (1.30 min).
    When we close the trace on the Solaris server, it takes 1.15 sec. In the initial phase it should not be hitting the database much, so why is it taking that much time just to open the report?
    Could you please guide us on where exactly the problem is and how we can improve the performance of opening the report? If the problem is related to the Solaris server, what could it be and how can we rectify it?
    If any further input is required, feel free to ask me.

    Hi Kumar,
    If this is happening with all the reports, then the issue seems to be due to the firewall or the security settings of the Solaris OS.
    Please try lowering the security level in Solaris and test for the issue.
    Regards,
    Chaitanya Deshpande

  • Oracle Performance Issue

    Hardware Configuration:
    This is regarding an Oracle performance issue.
    Configuration 1
    ================
    Sun V880 - Sun Fire
    32 GB RAM
    14 x 36 GB hard disks
    8 CPUs
    CPU speed 750 MHz
    Software Configuration:
    Oracle 8i
    OS version - Solaris 8
    Customized our own application - Namex
    Configuration 2
    ================
    Intel PIII - 750 MHz
    2 GB RAM
    2 CPUs
    Software configuration
    Oracle 8i
    OS version - Linux 6.2
    Customized our own application - Namex (multi-threaded application)
    We installed the Oracle application across all the hard disks. All tables are split onto separate hard disks:
    The OS is installed on 1 hard disk.
    The namex application is installed on 1 hard disk.
    Oracle is installed on 1 hard disk.
    All tables are split across the other hard disks.
    We are trying to insert user records into an Oracle table. We achieved up to 150 records/second on the Sun server, but on the lower configuration (configuration 2) our application inserts up to 100 records/second.
    We want to improve our insert rate (records per second) on the Sun server.
    How should we tune our Oracle parameter values in the init.ora file? Our application tries to insert up to 500 records per second, but I am not able to achieve this value.
    init.ora file
    =============
    db_name = "namex"
    instance_name = namex64
    service_names = namex64
    control_files = ("/disk1/oracle64/OraHome1/oradata/Namex64/control01.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control02.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control03.ctl")
    open_cursors = 300
    max_enabled_roles = 145
    #db_block_buffers = 20480
    db_block_buffers = 604800
    #shared_pool_size = 419430400
    shared_pool_size = 8000000000
    #log_buffer = 163840000
    log_buffer = 2147467264
    #large_pool_size = 614400
    java_pool_size = 0
    log_checkpoint_interval = 10000
    log_checkpoint_timeout = 1800
    processes = 1014
    # audit_trail = false # if you want auditing
    # timed_statistics = false # if you want timed statistics
    timed_statistics = true # if you want timed statistics
    # max_dump_file_size = 10000 # limit trace file size to 5M each
    # Uncommenting the lines below will cause automatic archiving if archiving has
    # been enabled using ALTER DATABASE ARCHIVELOG.
    # log_archive_start = true
    # log_archive_dest_1 = "location=/disk1/oracle64/OraHome1/admin/namex64/arch"
    # log_archive_format = arch_%t_%s.arc
    #DBCA uses the default database value (30) for max_rollback_segments
    #100 rollback segments (or more) may be required in the future
    #Uncomment the following entry when additional rollback segments are created and made online
    #max_rollback_segments = 500
    # If using private rollback segments, place lines of the following
    # form in each of your instance-specific init.ora files:
    #rollback_segments = ( RBS0, RBS1, RBS2, RBS3, RBS4, RBS5, RBS6, RBS7, RBS8, RBS9, RBS10, RBS11, RBS12, RBS13, RBS14, RBS15, RBS16, RBS17, RBS18, RBS19, RBS20, RBS21, RBS22, RBS23, RBS24, RBS25, RBS26, RBS27, RBS28 )
    # Global Naming -- enforce that a dblink has same name as the db it connects to
    # global_names = false
    # Uncomment the following line if you wish to enable the Oracle Trace product
    # to trace server activity. This enables scheduling of server collections
    # from the Oracle Enterprise Manager Console.
    # Also, if the oracle_trace_collection_name parameter is non-null,
    # every session will write to the named collection, as well as enabling you
    # to schedule future collections from the console.
    # oracle_trace_enable = true
    # define directories to store trace and alert files
    background_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/bdump
    core_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/cdump
    #Uncomment this parameter to enable resource management for your database.
    #The SYSTEM_PLAN is provided by default with the database.
    #Change the plan name if you have created your own resource plan.# resource_manager_plan = system_plan
    user_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/udump
    db_block_size = 16384
    remote_login_passwordfile = exclusive
    os_authent_prefix = ""
    compatible = "8.0.5"
    #sort_area_size = 65536
    sort_area_size = 1024000000
    sort_area_retained_size = 65536
    DB_WRITER_PROCESSES=4
    How can I improve performance on the Oracle server?
    Please guide me regarding this issue.
    If anyone wants more info, please let me know.
    Best regards,
    Senthilkumar

    Are you sure that it is not an application constraint? I.e., that the application can't handle that much data per second (application locks, threads)?
    Have you tried writing a simple test program which inserts predefined data (the same data your application inserts), only changing the keys, and then comparing the values from the 1st and the 2nd configuration?
    Did you check the way your application communicates with Oracle? If it is TCP/IP (even on the local machine), then this is your main problem.
    And one more thing: do you know whether your application is able to run the load (inserts) on different threads (i.e. in parallel)? If it is not, you won't be able to push the speed higher, because your constraint is the speed of a single CPU. Consider running several processes which load the data, as in the sketch below.
    We had the same problem on AIX machines with 4 CPUs. Monitoring the machine, we found that only 25% (1 CPU) was in use. We had to run 4 processes to push the speed up. Check your system's overall load while running the 'load' (inserts).
    log_checkpoint_interval = 10000
    Check whether this value is appropriate. Maybe you should set it to 0 (infinite). This disables checkpoints on a 'number of redo blocks' basis; checkpoints will then occur only on log switch.
    How many redo log files per redo group do you have? What is their size? Are they on different disks? How much redo data is generated by a single 'record' inserted?
    Hope I helped at least a little.
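    To make the parallel-load suggestion concrete, here is a minimal JDBC sketch (the table NAMEX_TEST(ID, PAYLOAD), the connection URL, and the credentials are placeholders; the points that matter are batching, committing per batch rather than per row, and running several workers in parallel):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    public class ParallelInsertTest {
        static final String URL = "jdbc:oracle:thin:@dbhost:1521:namex64"; // placeholder
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.driver.OracleDriver"); // register the thin driver
            int threads = 4; // roughly one worker per CPU
            Thread[] workers = new Thread[threads];
            for (int t = 0; t < threads; t++) {
                final int id = t;
                workers[t] = new Thread(new Runnable() {
                    public void run() { insertRows(id); }
                });
                workers[t].start();
            }
            for (int t = 0; t < threads; t++) workers[t].join();
        }
        static void insertRows(int worker) {
            try {
                Connection con = DriverManager.getConnection(URL, "user", "pwd");
                con.setAutoCommit(false); // commit per batch, not per row
                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO namex_test (id, payload) VALUES (?, ?)");
                for (int i = 0; i < 10000; i++) {
                    ps.setInt(1, worker * 10000 + i); // distinct keys per worker
                    ps.setString(2, "row-" + i);
                    ps.addBatch();
                    if (i % 500 == 499) { ps.executeBatch(); con.commit(); }
                }
                ps.executeBatch();
                con.commit();
                con.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    If four workers insert roughly four times as fast as one, the bottleneck was the single-threaded client; if not, look at the redo/checkpoint configuration discussed above.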

  • How do I handle large resultsets in CRXI without a performance issue?

    Hello -
    Problem Definition
    I have a performance problem displaying a very large resultset on a Crystal report. The report takes about 4 minutes or more, depending on the resultset size.
    How do you handle large resultsets in Crystal Reports without a performance issue?
    Environment
    Crystal Reports XI
    Apache WebSvr 2.X, Jboss 4.2.3, Struts
    Java Reporting Component (JRC),Crystal Report Viewer (CRV)
    Firefox
    DETAILS
    I use the CRXI thick client to build my report (.rpt) and then use it in my web application (webapp) under JBoss.
    The user specifies the filter criteria to generate a report (date range etc.) and submits the request to the webapp. The webapp queries the database and gets a resultset.
    I initialize the JRC and CRV according to all the specifications and finally call the processHttpRequest method of the Crystal Report Viewer to display the report in the browser.
    So.....
    - Request received to generate a report with a filter criteria
    - Query DB to get resultset
    - Initialize JRC and CRV
    - finally display the report by calling
        reportViewer.processHttpRequest(request, response, request.getSession().getServletContext(), null);
    The performance problem is within the last step. I put logs everywhere and noticed that the database query doesn't take too long to return the resultset. Everything processes pretty quickly until I call processHttpRequest on the CRV. This method just hangs for a long time before displaying the report in the browser.
    The CRV runs pretty fast when the resultset is small, but for a large resultset it takes a long, long time.
    I do have subreports and use Crystal Report formulas on the reports; some of them are used for grouping as well. But I don't think the subreports are the real culprit here, because I have some other reports without any subreports, and they too get really slow when displaying large resultsets.
    Solutions?
    So obviously I need a good solution to this generic problem of "How do you handle large resultsets in Crystal Reports?"
    I have thought of some half-baked ideas.
    A) Use external pagination and fetch data only for the current page being displayed. But for this, CRXI must allow me to create my own buttons (previous, next, last), so I can control the click event and fetch data accordingly. I tried capturing events by registering the event handler addToolbarCommandEventListener of the CRV, but my listener gets invoked after the processHttpRequest method completes, which doesn't help.
    Somehow I need to be able to control the UI by adding my own previous page, next page, and last page buttons and controlling their click events.
    B) Automagically have CRXI use JavaScript functionality to allow browser-side page navigation. So maybe the first time it will take 5 minutes to display the report, but once it is displayed, the user can go to any page without sending the request back to the server.
    C) Try using Crystal Reports 2008. I'm open to using this version, but I couldn't figure out whether it has any features that can help me do external pagination or anything else that handles large resultsets.
    D) Will using the Crystal Reports servers (cache server/application server etc.) help in any way? I read a little about the Crystal Page Viewer, Interactive Viewer, Part Viewer etc., but I'm not sure whether any of these things will solve the issue.
    I'd appreciate it if someone can point me in the right direction.

    Essentially, the answer is to use smaller resultsets, or to pull from the database directly instead of passing whole resultsets in. The database-side pagination sketch below illustrates the first approach.
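    For idea A above, the database-side half of external pagination usually looks like the classic Oracle ROWNUM window below (a sketch - the table, columns, and page size are hypothetical; the point is that only the requested page ever leaves the database):
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    public class PageFetcher {
        // Returns one page of an ordered query; pages are 1-based.
        // The caller is responsible for closing the ResultSet's statement.
        static ResultSet fetchPage(Connection con, int page, int pageSize)
                throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                  "SELECT * FROM ("
                + "  SELECT q.*, ROWNUM rn FROM ("
                + "    SELECT id, created, amount FROM report_rows ORDER BY created"
                + "  ) q WHERE ROWNUM <= ?"
                + ") WHERE rn > ?");
            ps.setInt(1, page * pageSize);       // upper bound of the window
            ps.setInt(2, (page - 1) * pageSize); // lower bound of the window
            return ps.executeQuery();
        }
    }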

  • Oracle Apps Database severe Performance Issue

    Hi Gurus,
    This is regarding a severe performance issue in our production E-Business Suite instance.
    It is an R12.1.3 setup installed with an 11.2.0.1 database. All the servers are Solaris SPARC 64-bit (Solaris 10).
    Let me brief you about the instance first:
    2 Node Application
    - Main Application Server hosting web/forms/concurrent/admin servers
    - iSupplier server hosting web services (placed in the DMZ, used by external suppliers via the Internet)
    1 Node Database Server
    Database server specs:
    Memory: 144 GB physical memory, 20 GB total swap
    - CPUs: 8 processors x 4 cores, plus 2 processors x 2 cores
    - I/O: fibre-channel hard disks (Hitachi SAN storage) - 7 DATA_TOPs (7 drives with RAID 5) - current DB size 1.6 TB
    - At peak load, around 1000 concurrent forms sessions and 2000 web sessions.
    We have been facing some serious performance issues and we raised an SR with Oracle Support.
    Support analyzed a bunch of AWR reports we provided and asked us to increase the DB cache from its current usage of 27G to 40G.
    So we changed SGA_TARGET from 35G to 50G, and the PGA was increased from 35G to 40G, as v$pgastat also suggested a lack of memory.
    We made these changes last night.
    This morning we observed the following:
    1. After the start of office hours, the EM DB Console home page showed ADDM reporting reduced impact due to lack of SGA memory, which seemed to be a good sign. Earlier it was around 25%; it was now at 12%.
    However, the negative aspects were:
    1. A lot of swapping was reported by the system administrators on the DB server.
    2. High CPU usage.
    3. The EM DB Console showed a lot of "Concurrency" wait class events; throughout the day many blocking sessions were reported, which were making other sessions wait.
    In the AWR report, the following Top 5 Timed Foreground Events were listed:
    Event                     Waits      Time(s)   Avg wait (ms)   % DB time   Wait Class
    DB CPU                               132,577                   61.46
    library cache lock        3,539      40,683    11496           18.86       Concurrency
    library cache: mutex X    4,014,083  21,011    5               9.74        Concurrency
    db file sequential read   4,138,014  20,767    5               9.63        User I/O
    latch free                381,916    5,897     15              2.73        Other
    This shows "library cache lock" events as the main culprit, apart from the usual suspect, the CPU.
    I am attaching the AWR report. Please let me know whether I should revert the memory changes, or whether there is anything else I can do.
    Please help us resolve this, because the performance is getting worse.
    Regards,
    Muneer.

    Please do not post duplicates - Oracle Apps Database severe Performance Issue
    For all critical production issues, please work with Support through SRs - using the forums to troubleshoot production issues is not wise.

  • Oracle 9i reading BLOB performance issues

    Windows XP Pro SP2
    JDK 1.5.0_05
    Oracle 9i
    Oracle Thin Driver for JDK 1.4 v.10.2.0.1.0
    DBCP v.1.2.1
    Spring v1.2.7 (I am using the JDBC template for convenience)
    I have run into serious performance issues reading BLOBs from Oracle using Oracle's JDBC thin driver. I am not sure whether it is a constraint/misconfiguration on the Oracle side or a JDBC problem.
    I am hoping that someone has some experience accessing multi-MB BLOBs under heavy volume.
    We are considering using Oracle 8 or 9 as a document repository. It will end up storing hundreds of thousands of PDFs that can be as large as 30 MB. We don't have access to Oracle 10.
    TESTS
    I am running tests against Oracle 8 and 9 to simulate single- and multi-threaded document access. Our goal is to get a sense of KBps throughput and BLOB data access contention.
    DATA
    There is a single test table with 100 rows. Each row has a PK id and a BLOB field. The blobs range in size from a few dozen KB to 12MB. They represent a valid sample of production data. The total data size is approx. 121 MBs.
    Single Threaded Test
    The test selects a single blob object at a time and then reads the contents of the blob's binary input stream in 2 KB chunks. At the end of the test, it will have accessed all 100 blobs and streamed all 121 MBs. The test harness is JUnit.
    8i Results: On 8i it starts and terminates successfully on a steady and reliable basis. The throughput hovers around 4.8 MBps.
    9i Results: Similar reliability to 8i. The throughput is about 30% better.
    Multi-Threaded Test
    The multi-threaded test uses the same "blob reader" functionality used in the single threaded test. However, it spawns 8 threads each running a separate "blob reader".
    8i Results: The tests successfully complete on a reliable basis. The aggregate throughput of all 8 threads is a bit more than 4.8 MBps.
    9i Results: Erratic. The tests were highly erratic on 9i. Threads would intermittently lock up when accessing a BLOB's binary stream. Sometimes they locked accessing data from the same row; other times it was distinct rows. The number and timing of the thread "locks" are indeterminate. When the test completed successfully, the aggregate throughput of the 8 threads was approx. 5.4 MBps.
    I would be more than happy to post code or the data model if that would help.
    Carlos
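    For reference, a single-threaded "blob reader" of the kind described above typically looks like the sketch below (the table and column names are hypothetical). With the thin driver, the first knobs worth varying are the read buffer size - 2 KB chunks are needlessly chatty for multi-MB documents - and, where available, the connection's row prefetch:
    import java.io.InputStream;
    import java.sql.Blob;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    public class BlobReader {
        // Streams every BLOB in the test table and returns the total bytes read.
        static long readAll(Connection con) throws Exception {
            long total = 0;
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("SELECT doc FROM blob_test");
            byte[] buf = new byte[32 * 1024]; // 32 KB chunks instead of 2 KB
            while (rs.next()) {
                Blob blob = rs.getBlob(1);
                InputStream in = blob.getBinaryStream();
                int n;
                while ((n = in.read(buf)) != -1) {
                    total += n;
                }
                in.close();
            }
            rs.close();
            st.close();
            return total;
        }
    }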

    Hi Murphy16,
    Try to investigate where the principal issues in your RAC system are.
    Check:
    * Expensive SQL;
    * Sorts to disk;
    * Wait events;
    * Interconnect hardware issues;
    * Applications doing unnecessary manual LOCKs (SQL);
    * Whether the SGA is adequately sized (take care not to spill into swap space on disk);
    * Backups and unnecessary jobs running during business hours (relocate these jobs and backups to a night window or a less work-intensive hour for the database);
    * Rebuilding indexes and identifying tables that must be reorganized (fragmentation);
    * Whether other software is consuming resources on your server.
    Please give us more info about your environment. The steps above are general, but you can use them to guide you through basic performance issues.
    Regards,
    Rodrigo Mufalani
    http://mufalani.blogspot.com

Oracle Advanced Compression deletion performance issue in 11gR1

    Hi,
    We have implemented OAC in our data warehouse environment to enable table and index compression. We tested it on our test machine and gained almost 600 GB thanks to Advanced Compression, without any issues, and all the Informatica loads ran fine. We then implemented the same in production, but unfortunately two sessions which involve deletion of data are now taking more time (3 times the original duration) to complete, which affects our production environment.
    The tables causing the issue are all non-partitioned tables.
    I need to know whether Oracle Advanced Compression can decrease delete performance, and whether there is any way to disable Advanced Compression on those particular tables.
    Our environment details:
    DB earlier version: 11.1.0.6
    DB current version : Oracle 11.1.0.7
    Applied PSU: 11.1.0.7.6
    Operating system: Solaris 5.9
    Syntax used for compression:
    ALTER TABLE TABLE_NAME MOVE COMPRESS FOR ALL OPERATIONS;
    Thanks in Advance.
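    On the "disable" part of the question: the usual counterpart of the syntax above is to move the table back uncompressed and then rebuild its indexes, which the MOVE leaves in an unusable state (a sketch with placeholder object names):
    ALTER TABLE TABLE_NAME MOVE NOCOMPRESS;
    ALTER INDEX INDEX_NAME REBUILD;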

    Hi,
    Thanks for your reply.
    That note is about an update performance issue, and I have already applied the necessary patches for improving update performance.
    The update sessions are all working fine; only the delete sessions are causing the problem.
    Could someone help me sort out this problem?
    Thanks,
    VBK
