Performance issue... quick response is highly appreciated

Hi!
I have a JSP that starts a new thread (it connects to a database and writes to an Excel file) every time it is called. It works fine up to 7 or 8 threads, but the response is very poor beyond that. I tried changing the web server settings (raised the minimum thread count to 5, which is 1 by default in JRun), but it didn't help.
Can you suggest a better approach to get rid of this problem?
rahul

I am making sure that the database connections are maintained properly, and the response from the database is quick. The problem is with the thread that does this work: when the thread count goes beyond 8, it eats up a lot of the web server's memory. At this point I have no clue how to control the thread count.
I assumed the web server (I am using JRun) does the pooling and takes care of the memory.
Do I have to implement thread pooling explicitly in my application?
rahul
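One way to cap the thread count, as a sketch: instead of spawning a new thread per request, submit the DB-to-Excel work to a fixed-size pool so extra requests queue up rather than consume memory. This uses `java.util.concurrent.ExecutorService` (Java 5+; on an older JVM behind JRun you would need a hand-rolled worker queue, but the shape is the same). The class and job names below are illustrative, not from the question.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ReportPool {
    // Cap concurrent report jobs at a fixed number instead of one thread per request.
    private final ExecutorService pool;

    public ReportPool(int workers) {
        this.pool = Executors.newFixedThreadPool(workers);
    }

    public void submit(Runnable job) {
        // Extra jobs wait in the pool's queue instead of spawning new threads.
        pool.execute(job);
    }

    public void shutdownAndWait() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        ReportPool reports = new ReportPool(8); // at most 8 jobs run at once
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 20; i++) {
            // Stand-in for the DB-query-and-write-Excel work.
            reports.submit(completed::incrementAndGet);
        }
        reports.shutdownAndWait();
        System.out.println(completed.get()); // prints 20
    }
}
```

The JSP would then call `reports.submit(...)` on a single shared `ReportPool` instance (e.g. held in application scope) rather than `new Thread(...).start()`.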

Similar Messages

  • Need help with a SQL query - a quick suggestion will be highly appreciated

    Hi
    I need to dynamically generate the following statement:
    ALTER TABLE AAA.TAB1 ADD SUPPLEMENTAL LOG GROUP t_l_g (COL1,COL2,COL3) ALWAYS;
    I have about 30 tables across 100 clients, and I need to generate this statement for all the columns that are in a unique index, within a list of tables and a list of users. The issue I am facing is that the columns come back as rows. Any help will be highly appreciated.

    You did not post your query, so here is a general approach:
    Write the query that returns the columns, pivot it to give you a concatenated list, then add the rest of the statement, which is constant except for the table name (which could come from a query).
    There have been numerous posts in the past on pivoting rows into columns; you should be able to find some via a search of the archives. Version 11g even comes with a new operator to pivot data.
    HTH -- Mark D Powell --
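    A sketch of that approach against the Oracle dictionary views (DBA_INDEXES / DBA_IND_COLUMNS): this assumes LISTAGG is available (11gR2+; on earlier versions substitute one of the pivoting techniques mentioned above). The owner/table filters and the log group name t_l_g are placeholders taken from the question.

```sql
-- Build one ALTER TABLE ... ADD SUPPLEMENTAL LOG GROUP statement per
-- unique index, concatenating its columns in index-column order.
SELECT 'ALTER TABLE ' || ic.table_owner || '.' || ic.table_name
       || ' ADD SUPPLEMENTAL LOG GROUP t_l_g ('
       || LISTAGG(ic.column_name, ',')
            WITHIN GROUP (ORDER BY ic.column_position)
       || ') ALWAYS;' AS ddl_stmt
  FROM dba_indexes i
  JOIN dba_ind_columns ic
    ON ic.index_owner = i.owner
   AND ic.index_name  = i.index_name
 WHERE i.uniqueness  = 'UNIQUE'
   AND i.table_owner IN ('AAA')    -- your list of users
   AND i.table_name  IN ('TAB1')   -- your list of tables
 GROUP BY ic.table_owner, ic.table_name, i.owner, i.index_name;
```

    Spooling the output of this query then gives a script you can review and run per client.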

  • Performance issues for iOS with high resolution.

    I made an app with a resolution of 480x320 for iOS. It works quite well.
    I then remade it with a resolution of 960x640. In AIR for iOS settings I set Resolution to "High".
    The app looked great, however there was a noticeable drop in performance.
    The app functioned the same way as the original lower-resolution app, but it was lagging.
    Has anyone else had this problem?
    Am I doing something wrong?

    With my game, I had around 60 fps on the 3GS and around 50 on the iPhone 4 with the high settings. I got around 10 fps extra by using: stage.quality = StageQuality.LOW;
    That was on AIR 2.6. I tried with AIR 2.7, but it seems that command can't be used there (?)

  • Performance issue - smart response required

    Hi Guyz,
    SELECT * FROM regup
    INTO CORRESPONDING FIELDS OF TABLE itab1
    FOR ALL ENTRIES IN itab
    WHERE bukrs = itab-bukrs
    AND belnr = itab-belnr
    AND lifnr = itab-lifnr
    AND gjahr = itab-gjahr
    AND vblnr NE space.
    This query is taking 2.5 minutes to execute, and I have to improve its performance.
    FYI:
    itab has 2,677 records,
    and the final data in table itab1 has 3,536 entries. It is taking a huge amount of time, 2.5 minutes, to execute this portion. How can I improve the performance of this query?
    Pls it's urgent.
    <i>Note: If Google can fetch data from around the globe in a fraction of a second, then at least it's not impossible.</i>
    Your answers will be rewarded.

    It would be useful to know what business requirement this is trying to meet. It looks like you are trying to trawl through the payment run line items table for a series of document numbers, but you may already have some of this data available from BKPF & BSEG, or at least more quickly accessible by travelling through those tables first.
    For example, I'm not sure if it's site-configured or standard, but I have seen the LAUFD and LAUFI values concatenated into the BKPF-BKTXT field on the paying (clearing) document, e.g. "20070205-123456", which would mean you could then get to the REGUP data much more efficiently.
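    Independent of the business-level rerouting, some general FOR ALL ENTRIES hygiene for the select in the question, as a sketch (field names are from the post; whether REGUP has an index supporting BUKRS/BELNR/LIFNR/GJAHR is worth checking in ST05/SE11, since the select does not restrict on REGUP's leading run-ID key fields):

```abap
* Sketch: guard the empty-driver-table case (an empty FOR ALL ENTRIES
* table drops the WHERE conditions and reads ALL of REGUP) and
* de-duplicate the driver keys so fewer packets go to the database.
SORT itab BY bukrs belnr lifnr gjahr.
DELETE ADJACENT DUPLICATES FROM itab COMPARING bukrs belnr lifnr gjahr.

IF NOT itab[] IS INITIAL.
  SELECT * FROM regup
    INTO CORRESPONDING FIELDS OF TABLE itab1
    FOR ALL ENTRIES IN itab
    WHERE bukrs = itab-bukrs
      AND belnr = itab-belnr
      AND lifnr = itab-lifnr
      AND gjahr = itab-gjahr
      AND vblnr NE space.
ENDIF.
```

    Also consider selecting a field list instead of SELECT *, since only a few REGUP fields are usually needed downstream.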

  • Performance issue / faster response after using of same criteria once again

    For some days now, some previously fast queries (unchanged) have been taking a lot of time. The response time for the same query also varies widely: sometimes only 531 ms, sometimes 15 seconds.
    I have found that when I use the same criteria (or only the column that is indexed) a second time, the query runs much faster.
    Does anybody have similar experiences?
    Thanks! Daniel.

    I don't think that is the issue; a query that required a 14+ second parse would be something to behold.
    The difference in response time is more likely due to the data blocks being cached as a result of the first query.

  • Performance Issue - Update response time

    Hi,
    I am trying to get equipment data, and I have to link it with the asset. The only way I could find is through the M_EQUIA view: I select the equipment number from M_EQUIA, passing the asset number as the key, and from there I go to EQUI and get the other data.
    But when I select from M_EQUIA, the database time is high. So, can someone suggest a better option than M_EQUIA for getting the equipment details with the asset as the key? I also have cost center details available.
    Thanks,

    Hi,
    Please find below the select on M_EQUIA and the further select based on it.
    * Get asset-related data from the view
    IF NOT i_asset[] IS INITIAL.
       SELECT anlnr
              equnr
         FROM m_equia
         INTO TABLE i_asst_equi
         FOR ALL ENTRIES IN i_asset
         WHERE anlnr = i_asset-anln1 AND
               anlun = i_asset-anln2 AND
               bukrs = i_asset-bukrs.
       IF sy-subrc = 0.
         SORT i_asst_equi BY equnr.
       ENDIF.
    ENDIF.
    * Get equipment-related data
    IF NOT i_asst_equi[] IS INITIAL.
       SELECT equi~equnr
              herst
              typbz
              eqart
              mapar
              serge
              iwerk
         FROM equi
         INNER JOIN equz
         ON equi~equnr = equz~equnr
         INTO TABLE i_equipment
         FOR ALL ENTRIES IN i_asst_equi
         WHERE equi~equnr = i_asst_equi-equnr.
       SORT i_equipment BY equnr.
    ENDIF.
    Thanks.

  • SQL Server 2000 std Report Performance Issue

    Dear All,
    I have a VB-based desktop application with a back-end MS SQL Server 2000 database, running on an ibmx5650 server machine with Intel Xeon 2.7 GHz (24 CPUs) & 24 GB RAM.
    There are two things I need help with:
    Recently we upgraded SQL Server from 2000 Personal Edition to 2000 Standard Edition. Since then there has been a problem with one of the reports in the application. The report previously took almost 30 minutes on SQL 2000 Personal Edition, but after the upgrade to Standard Edition we are unable to view the report within 3 hours, and sometimes it doesn't appear even after several hours.
    Secondly, for a brief test I installed Personal Edition on a simple PC rather than a server machine; its specs are Core i5 & 4 GB of RAM. The same report is generated in only 15 minutes from the application with this desktop machine as the DB server.
    Please help me out. I have gone through all the SQL Server & system performance logs of my server machine and everything looks normal, but the report is taking too long, and I can only generate that report from Personal Edition.
    Is the difference due to the faster Core i5 processor in the desktop machine, or is there some other issue behind this?
    Your prompt response is highly appreciated.
    Regards,
    Rashid Ali

    Hello,
    SQL Server 2000 has been out of support since 2013. Please upgrade to SQL Server 2012 to get better performance and support.
    Thanks for your understanding and support.
    Regards,
    Fanny Liu
    TechNet Community Support

  • Performance Issue while changing a characteristic using CT04 transaction

    Hi Experts,
    We have just upgraded our system from 4.6C to ECC 6.0. In the new system we have created some characteristics, and later I tried to change these characteristics using transaction CT04.
    There are also some characteristics already present in the new system, which came over from 4.6C. When I try to open/change one of these already-existing characteristics using CT04, it takes no time at all, whereas if I try to open/change a characteristic newly created in ECC 6.0, it takes a lot of time.
    When I ran an SQL trace for both scenarios, I found that most of the time is taken by a query on table PLFV.
    Trace Result for Newly Created Characteristic:
    115        PLFV  PREPARE  0  SELECT WHERE "MANDT" = :A0 AND "ATINN" = :A1 AND "LOEKZ" = :A2 AND ROWNUM <= :A3
    3          PLFV  OPEN     0  SELECT WHERE "MANDT" = '070' AND "ATINN" = 0000000575 AND "LOEKZ" = ' ' AND ROWNUM <= 1
    336681733  PLFV  FETCH    0  1403
    For this, the time taken is 336,681,733.
    Trace Result for Existing Characteristic:
    2          PLFV  OPEN     0  SELECT WHERE "MANDT" = '070' AND "ATINN" = 0000000575 AND "LOEKZ" = ' ' AND ROWNUM <= 1
    For this, the time taken is 2.
    One difference I see is that for the newly created characteristic the PREPARE, OPEN and FETCH phases are all executed, whereas for the already-existing characteristic only OPEN is executed.
    The program used for querying PLFV is SAPLCMX_TOOLS_CHARACTERISTICS.
    Could you please help me with this?
    Your response is highly appreciated.
    Regards,
    Lalit Kabra

    Hi Rob,
    Thanks for the response, but the problem I mentioned does not occur with all characteristics. It occurs only with those that are newly created; the characteristics that already existed open without any delay.
    So I am a bit confused about whether there is a note for this problem, though I have tried searching for one as well.
    Please respond if someone has a clue about this issue.
    Your response is highly appreciated.
    Regards,
    Lalit Kabra

  • Performance Issue in Web Application: Event 12605 "Web: Text Content"

    Dear All,
    we are about to deploy a couple of rather complex web applications (built with WAD) and are currently running performance tests with Mercury LoadRunner.
    According to these tests, our web applications become very slow once more than 30 (virtual) users are running the application.
    When analyzing the BW statistics data, we found that most of the runtime of our application comes from an event called "Web Java: Return Text-Type Content" (ID 12605).
    Has anyone experienced the same issue? Or does anyone know what exactly this event means?
    We are using XPath in our web applications. Could this be a cause of poor performance?
    Any help is highly appreciated!
    Best Regards
    Christian

    Can you tell me more about the details of the LoadRunner stress test for BI WAD scripts? We get an error when running LoadRunner against a BI WAD report.
    1. We developed the BI report with WAD 7.x, set up an iView in SAP EP for the BI report, and obtained the BI report URL.
    2. We put the report URL into IE (6.0) to run the report, and it displays successfully,
    e.g.: http://epserver:50000/irj/servlet/prt/portal/prtroot/com.sap.ip.bi.web.portal.integration.launcher?sap-bw-iViewID=pcd%3aportal_content%2fcom.ahepc.BI%2fcom.ahepc.iView%2fcom.ahepc.FI%2fcom.ahepc.BI_I_ZWAFI_006&sap-ext-sid=1

  • Performance Issue while changing Characteristics from CT04


    Hi Rajesh,
    Please check the comments below:
    Please install Oracle optimizer patches 6740811 and 6455795 as per note 871096. Also ensure you have installed the other patches listed in the same note, as these are mandatory when running on Oracle 10.2.
    If this does not solve your problem, open an OSS message for it.
    Regards,
    Lalit

  • Service Manager 2012 SP1 consoles hanging or slow performance issue in Virtual Environment

    Hi,
    We are facing an SCSM SP1 console performance issue: the console drives CPU usage very high and is extremely slow.
    For information, our SCSM runs in a virtual environment on Hyper-V.
    When running the console over an RDP session to a Hyper-V virtual machine, we have to be careful not to maximize the console so that it will remain fast. If we maximize it on the VM, the console is so slow as to be unusable.
    Can someone share their experience, please?
    Regards, Syed Fahad Ali

    Hi Syed,
    This is a bug, and hopefully the Microsoft team will solve it soon. If you can, vote for this bug here:
    https://connect.microsoft.com/WindowsServer/feedback/details/810667/scsm-console-consumes-a-lot-of-cpu-when-opened-maximized-on-work-item-view-like-all-incidents
    Mohamed Fawzi | http://fawzi.wordpress.com

  • Performance issue with high CPU and IO

    Hi guys,
    I am encountering huge user response times on a production system and I don't know how to solve it.
    After some extra tests, and using the instrumentation we have in the code, we concluded that the DB is the bottleneck.
    We generated some AWR reports and noticed that CPU was among the top wait events. We also noticed that, in a random manner, some simple SQL statements take a long time to execute. We activated SQL trace on the system and saw that very simple SQLs (unique index access on one table) have huge execution times: 9 s.
    In the trace file, the huge time was in the fetch phase: 9.1 s CPU and 9.2 s elapsed,
    with no waits, or only very small ones, for this specific SQL.
    It seems like the bottleneck is the CPU, but at that point there were very few processes running on the DB. Why can we burn so much CPU on a simple select? This is a machine with 128 cores, and we get quicker responses on machines smaller and busier than this one.
    We noticed that we had a huge db_cache_size (12 GB), and after we scaled it down we saw some improvement, but not enough. How can I prove that there is a link between high CPU and a big cache size? (There were no waits involved in the SQL execution.) And what can we do in cases where we really need a big DB cache?
    The second issue: I tried to execute a SQL statement doing a full table scan on a big table (no join). Again, on the smaller machine it runs in 30 seconds, while on this machine it runs in 1038 seconds.
    Also generated a trace for this SQL on the problematic machine:
    call     count       cpu    elapsed       disk      query    current        rows
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1    402.08    1038.31    1842916    6174343          0           1
    ------- ------  -------- ---------- ---------- ---------- ----------  ----------
    total        3    402.08    1038.32    1842916    6174343          0           1

    Event waited on                         Times Waited   Max. Wait   Total Waited
    db file sequential read                        12419        0.21          40.02
    i/o slave wait                                135475        0.51         613.03
    db file scattered read                        135475        0.52         675.15
    log file switch completion                         5        0.06           0.18
    latch: In memory undo latch                        6        0.00           0.00
    latch: object queue header operation               1        0.00           0.00
    ********************************************************************************
    The high CPU is present here also, but here I also have a huge wait on db file scattered read.
    Looking at the session running the select, the average wait for db file scattered read was 0.5 s; on the other machine it is around 0.07 s.
    I thought this was an I/O issue, so I ran some I/O tests at OS level, and the read and write operations seem very fast, much faster than on the machine with the smaller average wait. Why the difference in waits?
    One difference between these two DBs is that the problematic one has db_block_size = 16k while the other has 8k.
    I received some reports done at OS level on CPU and I/O usage on the problematic machine (during normal operations). The CPU is heavily used while I/O stays very low.
    On the other machine, the smaller and faster one, it is the other way around.
    What is the problem here? How can I test further? Can I link the high CPU to the low/slow I/O?
    We have 10g on Sun OS with ASM.
    Thanks in advance.

    Yes, there are many things you can and should do to isolate this. But first check MOS note "Poor Performance With Oracle9i and 10g Releases When Using Dynamic Intimate Shared Memory (DISM)" [ID 1018855.1] to make sure that isn't messing you up to start.
    Also, be sure to post exact patch levels for both Oracle and the OS.
    Be sure to check all your I/O settings and see what MOS has to say about those.
    Are you using ASSM? See the "Long running update" thread.
    Since it got a little better with shrinking the SGA, that might indicate (wild speculation here) something like this: one of the problems is simply too much thrashing within the SGA, as Oracle decides that full-scanning "small" objects in memory is faster than range scans (or whatever) from disk, overloading the CPU and not allowing it to ask for other full scans from I/O. Possibly made worse by row-level locking, or some other app issue that just uses too much CPU.
    You probably have more than one thing wrong. The high fetch count might mean you need to adjust the array size on the clients.
    Now that that is all out of the way, if you still haven't found the problem, go through http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Edit: Oh, see "Solaris 10 memory management conflicts with Automatic PGA Memory Management" [ID 460424.1] too.
    Edited by: jgarry on Nov 15, 2011 1:45 PM

  • Hi, I am an Oracle Apps HRMS technical consultant. I wanted to know whether we can implement Digital Signatures in Oracle Apps 11i XML Reports, and if yes, what is the approach to do so? Your quick response is appreciated. Regards, Aasma Sayyad.

    Hi,
    I am an Oracle Apps HRMS Technical Consultant.
    I wanted to know if we can implement Digital Signatures in XML Reports for the Oracle Apps HRMS 11i application.
    If yes, what is the approach to do so?
    Your quick response is appreciated.
    Regards,
    Aasma Sayyad.

    Hi Aasma,
    The standard BI Publisher is part of the EBS applications.
    Most of the EBS (R12) reports are based on BI Publisher.
    If you check the responsibility 'XML Publisher Administrator' you will see all the templates used in the application.
    Your technical team should already know this.
    OBIEE, on the other hand, would need separate licences,
    but for your purposes BI Publisher should do.
    Cheers,
    Vignesh

  • How can I send a copy of my Keynote Presentation to someone else so that it will include my slide notes (uploading it to iWork seems to exclude my notes and only shows the slides). A quick response would be greatly appreciated!

    How can I send a copy of my Keynote Presentation to someone else so that it will include my slide notes (uploading it to iWork seems to exclude my notes and only shows the slides). A quick response would be greatly appreciated!

    I'd try Dropbox (http://www.dropbox.com/), which both you and your recipient have to have. It's easy, and the free version would probably be enough. There are any number of other services available for emailing large files (google that). Of course, your recipient must have Keynote installed to view it correctly; they can view it in PowerPoint, but with some loss, primarily of transitions/animations.

  • Oracle 9i Performance Issue High Physical Reads

    Dear All,
    I have an Oracle 9i Release 9.2.0.5.0 database on HP-UX. I ran a query and got the following output. Can anybody have a look and advise what to do in this situation? We have performance issues.
    Many thanks in advance
    Buffer Pool Advisory for DB: DBPR  Instance: DBPR  End Snap: 902
    -> Only rows with estimated physical reads > 0 are displayed

    P   Size for      Size    Buffers for   Est Physical       Estimated
        Estimate (M)  Factr   Estimate      Read Factor   Physical Reads
    D          416     .1         51,610          4.27     1,185,670,652
    D          832     .2        103,220          2.97       825,437,374
    D        1,248     .3        154,830          2.03       563,139,985
    D        1,664     .4        206,440          1.49       412,550,232
    D        2,080     .5        258,050          1.32       366,745,510
    D        2,496     .6        309,660          1.23       340,820,773
    D        2,912     .7        361,270          1.14       317,544,771
    D        3,328     .8        412,880          1.09       301,680,173
    D        3,744     .9        464,490          1.04       288,191,418
    D        4,096    1.0        508,160          1.00       276,929,627

    Hi,
    Actually, you didn't give the exact problem statement.
    Your database seems to be I/O bound. OK, do the following, one step at a time:
    1. Identify the full-table-scan queries and try to create optimal indexes (based on the disk-reads factor!!) for the problem queries.
    2. To reduce the ~277M estimated physical reads, allocate more memory to db_cache_size: try 8 GB initially, then, depending on the buffer advisory, increase it further if you have more memory on the box.
    3. As a next step, configure the KEEP and RECYCLE caches to get the benefit of reduced I/O from multiple buffer pools, and assign objects to the KEEP/RECYCLE pools.
    Thanks,
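    For step 3, a minimal sketch of the KEEP-pool setup (the size and the object names are placeholders; BUFFER_POOL KEEP is standard Oracle STORAGE syntax, and the ALTER SYSTEM form assumes you are running with an spfile):

```sql
-- Carve out a KEEP cache, then pin hot segments into it so their
-- blocks are not aged out by large scans in the default pool.
ALTER SYSTEM SET db_keep_cache_size = 512M SCOPE=BOTH;
ALTER TABLE app_owner.hot_lookup    STORAGE (BUFFER_POOL KEEP);
ALTER INDEX app_owner.hot_lookup_pk STORAGE (BUFFER_POOL KEEP);
```

    Good KEEP candidates are small, frequently re-read segments; V$SEGMENT_STATISTICS can help identify them.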
