Improving performance for Java

I'm new to this so please bear with me ... I have 2 basic questions.
I just upgraded my server to SunOS 5.10 Generic_139555-08 sun4u sparc SUNW,Sun-Fire-V440
I also upgraded java to java version "1.6.0_14"
This is a 4 processor box. Top gives me:
last pid: 26233; load averages: 2.79, 2.99, 3.12 13:23:57
174 processes: 172 sleeping, 2 on cpu
CPU states: 40.2% idle, 54.2% user, 5.6% kernel, 0.0% iowait, 0.0% swap
Memory: 8192M real, 3059M free, 6156M swap in use, 4105M swap free
PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
17294 prodslic 270 0 0 654M 641M cpu/1 527:36 50.02% java
*1st Question:*
*1. Why is Java using so much CPU time?*
When I run ps -ef | grep java:
root 15666 1 0 Aug 10 ? 4:52 /usr/java/bin/java -server -Xmx128m -XX:+UseParallelGC -XX:ParallelGCThreads=4
prodslic 17294 1 25 18:07:14 ? 530:07 /usr/jdk/instances/jdk1.6.0/bin/java -Xmx1024m -Djava.awt.headless=true -Djava.
*2nd Question:*
*2. Why are there 2 Java versions running?*
/usr/java/bin/java -version
java version "1.5.0_18"
/usr/jdk/instances/jdk1.6.0/bin/java -version
java version "1.6.0_14"
This is confusing to me. I'd also like to know what the different command line options mean.
Thanks
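For reference, the flags in those two command lines are standard HotSpot options: -server selects the optimizing server VM; -Xmx128m and -Xmx1024m cap the Java heap at 128 MB and 1 GB respectively; -XX:+UseParallelGC enables the parallel (throughput) garbage collector, and -XX:ParallelGCThreads=4 runs it on 4 threads; -Djava.awt.headless=true tells AWT to run without a display. The second command line is truncated, so any further -D properties are cut off. As for the two versions: those are two separate processes launched from two separate installs. On Solaris 10, /usr/java is normally a symlink to the system default JDK (here 1.5.0_18), while the upgrade went into /usr/jdk/instances/jdk1.6.0; a service configured to start /usr/java/bin/java keeps running the old release until its startup script is changed. Verify the symlink layout on your own box.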

Claudia,
We have not yet released a "production" version of OHJ 4.2 because 1) we'd like to add a few more features before calling it production, and 2) OHJ 4.2 has not yet shipped with an Oracle product and therefore hasn't undergone the rigorous testing that takes place on larger Oracle products. If you're concerned about the label of "production" versus "beta," I think you'd be pretty safe going with the latest release of OHJ, or we wouldn't have put it on OTN for you to use. The developers have tested thoroughly. :) I'd recommend trying out 4.2.1 since it's got some substantial improvements over the 4.1 branch, and on the off-chance you run into problems, please let us know.
- Ryan

Similar Messages

  • To improve performance for report

    Hi Expert,
    I have generated an open sales order report which fetches data from VBAK; it takes a long time executing in the foreground and goes into a dump. I have executed it in the background as well, but it also dumps there.
    SELECT vbeln
               auart
               submi
               vkorg
               vtweg
               spart
               knumv
               vdatu
               vprgr
               ihrez
               bname
               kunnr
        FROM vbak
        APPENDING TABLE itab_vbak_vbap
        FOR ALL ENTRIES IN l_itab_temp
    *BEGIN OF change 17/Oct/2008.
        WHERE erdat IN s_erdat              AND
             submi = l_itab_temp-submi     AND
    *End of Changes 17/Oct/2008.
              auart = l_itab_temp-auart     AND
    *BEGIN OF change 17/Oct/2008.
              submi = l_itab_temp-submi     AND
    *End of Changes 17/Oct/2008.
              vkorg = l_itab_temp-vkorg     AND
              vtweg = l_itab_temp-vtweg     AND
              spart = l_itab_temp-spart     AND
              vdatu = l_itab_temp-vdatu     AND
              vprgr = l_itab_temp-vprgr     AND
              ihrez = l_itab_temp-ihrez     AND
              bname = l_itab_temp-bname     AND
              kunnr = l_itab_temp-sap_kunnr.
        DELETE itab_temp FROM l_v_from_rec TO l_v_to_rec.
      ENDDO.
    Please give me suggestions for improving the performance of this program.

    hi,
    try it like this:
    DATA:BEGIN OF itab1 OCCURS 0,
         vbeln LIKE vbak-vbeln,
         END OF itab1.
    DATA: BEGIN OF itab2 OCCURS 0,
          vbeln LIKE vbap-vbeln,
          posnr LIKE vbap-posnr,
          matnr LIKE vbap-matnr,
          END OF itab2.
    DATA: BEGIN OF itab3 OCCURS 0,
          vbeln TYPE vbeln_va,
          posnr TYPE posnr_va,
          matnr TYPE matnr,
          END OF itab3.
    SELECT-OPTIONS: s_vbeln FOR vbak-vbeln.
    START-OF-SELECTION.
      SELECT vbeln FROM vbak INTO TABLE itab1
      WHERE vbeln IN s_vbeln.
      IF itab1[] IS NOT INITIAL. " guard: FOR ALL ENTRIES on an empty driver table would select every row
        SELECT vbeln posnr matnr FROM vbap INTO TABLE itab2
        FOR ALL ENTRIES IN itab1
        WHERE vbeln = itab1-vbeln.
      ENDIF.

  • TimesTen to improve performance for search results in Oracle eBS

    Hi ,
    We have various search scenarios in our ERP implementation using Oracle Apps eBS, for example searching for an item. Oracle Apps does provide item search, but performance is not great. We have about 30 million items, and to improve search performance we thought TimesTen might help.
    Can anyone please clarify whether TimesTen can be used to improve performance on the eBS database, and if yes, how?

    Vikash,
    We were thinking along the same lines (using TimesTen for massive item search in e-Business Suite), in our case massive item / parametric search leveraging the Product Information Management application. We were thinking about setting up a POC on a Linux server with a Vision instance. Should we compare notes?
    SParker
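    For what it's worth, a minimal JDBC sketch of the usual pattern for a POC: the application queries through a TimesTen DSN (backed by a cache group) instead of hitting the Oracle database directly. The driver class is the documented com.timesten.jdbc.TimesTenDriver; the DSN, table, and column names here are made-up placeholders, and the cache-group setup itself is assumed to be done separately in TimesTen:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ItemSearch {
        public static void main(String[] args) throws Exception {
            // Load the TimesTen client JDBC driver.
            Class.forName("com.timesten.jdbc.TimesTenDriver");
            // "my_tt_dsn" is a placeholder client DSN pointing at the cache.
            Connection conn =
                DriverManager.getConnection("jdbc:timesten:client:dsn=my_tt_dsn");
            PreparedStatement ps = conn.prepareStatement(
                "SELECT item_id, description FROM items WHERE description LIKE ?");
            ps.setString(1, "%BOLT%");
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString(1) + " | " + rs.getString(2));
            }
            rs.close();
            ps.close();
            conn.close();
        }
    }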

  • Need to improve performance for BEx queries

    Dear Experts,
    Here we have BEx queries built on a BW InfoSet; the InfoSet in turn is built on 2 DSOs and 4 InfoObjects.
    We have built secondary indexes on the two DSOs assuming this would improve performance, but query execution time is still very long.
    Could you advise me on this?
    Thanks in advance,
    Mannu

    Hi,
    Thanks for the response.
    But as I have mentioned, the InfoSet is based on DSOs and InfoObjects, so we cannot use aggregates.
    In RSRT I have tried setting the read mode of the query to 'X', which is also valid as the query needs to fetch huge amounts of data.
    Could you please look into other possible areas in order to improve this?
    Thanks in advance,
    Mannu

  • BIA to improve performance for BPS Applications

    Hi All,
    Is it possible to improve the performance of BPS applications using BIA? Currently we are running applications on BI-BPS which, because of a huge range of periods, are having performance issues.
    Would you please share whether BIA would help with the read and write operations of BPS, and to what extent performance can be increased?
    An early reply would be appreciated, as the system is in really bad shape and users are grappling with poor performance.
    Rgds,
    Rajeev

    Hi Rajeev,
    If the performance issue you are facing is while running the query on the real-time (transactional) InfoCube used in BPS, then BIA can help. The closed requests from the real-time cube can be indexed in BIA. At query runtime, the analytic engine reads data from the database for the open request and from BIA for the closed, indexed requests. It combines this data with the plan buffer cache and produces the result.
    Hence, if you are facing an issue with query response time, BIA will definitely help.
    Regards,
    Praveen

  • New in Kodo 3.3.3: Improved support for Java 5 enums and generics?

    Hello,
    Can anybody tell me if Kodo 3.3.3 can be deployed on WebLogic 8.1 sp4, jdk
    1.4.1? The reason I ask this is because one of the features mentioned for
    v3.3.3 is the support for Java 5 generics, which is available on WebLogic
    9 -- but not in WebLogic 8.1 sp4. The documentation for Kodo 3.3.3 seems
    to indicate that it can be deployed on WebLogic 8.1 -- can anyone tell me
    if this is accurate?
    Thanks for your help!

    Correction:
    I think that my question may have been misunderstood. What I want to know
    is if Kodo 3.3.3 can be deployed on WebLogic 8.1sp4 which is running JDK
    1.4 or do I have to deploy on a newer version of WebLogic that is running
    Java 5?
    Thanks!
    Rita wrote:
    I think that my question may have been misunderstood. What I want to know
    is if Kodo 3.3.3 can be deployed on WebLogic 8.1sp4 which is running JDK
    1.4 or do I have to deploy on a newer version of WebLogic that is running
    Java 4?
    Thanks!
    Stephen Kim wrote:
    Rita,
    While Kodo 3.3.x can work with JDK 5, it cannot make WL work with JDK 5.
    However, Kodo 3.4 RC 3 / 4.0 EA 2 both can work with WL 9 (which works
    with JDK 5).
    Rita wrote:
    Hello,
    Can anybody tell me if Kodo 3.3.3 can be deployed on WebLogic 8.1 sp4,
    jdk
    1.4.1? The reason I ask this is because one of the features mentioned for
    v3.3.3 is the support for Java 5 generics, which is available on WebLogic
    9 -- but not in WebLogic 8.1 sp4. The documentation for Kodo 3.3.3 seems
    to indicate that it can be deployed on WebLogic 8.1 -- can anyone tell me
    if this is accurate?
    Thanks for your help!
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com

  • How to improve performance for bulk data load in Dynamics CRM 2013 Online

    Hi all,
    We need to bulk update (or create) contacts in Dynamics CRM 2013 Online every night, due to data updates from an external data source. The data size is around 100,000 records and the load currently takes around 6 hours.
    We are already using the ExecuteMultiple web service to handle the integration; however, the 6-hour duration is still not acceptable and we are seeking advice for further improvement.
    Any help is highly appreciated.  Many thanks.
    Gary

    I think Andrii's referring to running multiple threads in parallel (see
    http://www.mscrmuk.blogspot.co.uk/2012/02/data-migration-performance-to-crm.html - it's a bit dated, but should still be relevant).
    Microsoft does have some throttling limits applied in CRM Online, and it is worth contacting them to see if you can get those raised.
    100 000 records per night seems a large number. Are all these records new or updated, or are some unchanged, in which case you could filter them out before uploading? Or are there useful ways to summarise the data before loading?
    Microsoft CRM MVP - http://mscrmuk.blogspot.com/ http://www.excitation.co.uk
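    To illustrate the multi-threading idea in pattern form only (the CRM SDK itself is .NET; loadRecords and sendBatch below are invented placeholders for reading your source data and for the actual ExecuteMultiple-style call, and the thread count is something to tune against the throttling limits):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ParallelUpload {
        public static void main(String[] args) throws InterruptedException {
            List<String> records = loadRecords();   // stub: read the nightly file
            int threads = 8;                        // tune against service throttling
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            int chunkSize = (records.size() + threads - 1) / threads;
            // Split the full record list into one chunk per worker thread.
            for (int i = 0; i < records.size(); i += chunkSize) {
                final List<String> chunk =
                    records.subList(i, Math.min(i + chunkSize, records.size()));
                pool.execute(new Runnable() {
                    public void run() { sendBatch(chunk); }
                });
            }
            pool.shutdown();
            pool.awaitTermination(6, TimeUnit.HOURS);
        }

        static List<String> loadRecords() { return new ArrayList<String>(); }
        static void sendBatch(List<String> batch) { /* placeholder for the bulk service call */ }
    }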

  • How to improve performance for Custom Extractor in BI..

    Hi all,
    I am new to BI and started working with it a couple of weeks ago. I created a custom extractor (data view) in the source system, and when I pull data it takes a lot of time. Can anyone suggest how to improve the performance of my custom extractor? Please do the needful.
      Thanks and Regards,
    Venugopal..

    Dear Venugopal,
    use transaction ST05 to check that your SQL statements are optimal and that you do not have redundant database calls. You should use "bulking" as much as possible, which means fetching the required data with one request to the database rather than with multiple requests.
    Use transaction SE30 to check if you are wasting time in loops and if yes, optimize the algorithm.
    Best Regards,
    Sylvia

  • Improving performance for SM35

    Hi all,
    Are there any ways to improve the performance (time taken to load data) of SM35?
    We are aware of executing the session in the background, but due to the high data volume (>10,000 records per file), the time taken is still long (about 3 hours per file).

    Hi Raj,
    The previous posters already gave you all the information you need, but since the question is still open, let me try to summarize it.
    You're getting almost 1 transaction processed per second, which might be ok depending on the application area and the complexity of the executed transaction. So as Hermann initially pointed out, you should first profile the transaction you're running and check for any inefficiencies (custom coding in exits/BAdI's are often sources of slow-downs). If you find any problems, tune your transaction/application (not SM35).
    If your application is fast enough (i.e. you cannot find any easy measures for making your transaction faster), you can compare application/transaction processing time versus total time taken in SM35. I personally doubt that you'll find any worthwhile discrepancy there (i.e. process time taken up by SM35, which is not due to the called transaction). Thus you should be left with Hermann's initial point of running several BDC's in parallel - meaning that you'll have to split your input file (you can automate that if you have to run such loads regularly). Without parallel processing you will always encounter unacceptable processing times when running huge data loads (even with optimal coding throughout the application).
    Kind regards, harald

  • How can I improve performance for BC4J/JSP-application

    Hi,
    I have developed a JSP application with master-detail views. This application has been implemented
    as an SSO-enabled web portlet in Portal. When I click on a row retrieved from the master view (Master.jsp), it takes 15 seconds
    until the associated data from the detail view is displayed in the other window (Detail.jsp). The master table has about 2500 records and the detail table about 7000. (Another case takes 2 seconds, for 162 master rows and 228 detail rows respectively.)
    In Master.jsp I set rangesize = "10" to reduce data loading time.
    ======================== master.jsp =================================
    <jbo:DataSource id="dsMaster" appid="testAppId" viewobject="MasterView" rangesize="10">
    <a href="detail.jsp?RowKeyValue=<jbo:ShowValue datasource="dsMaster" dataitem="RowKey"/>">
    Here Click
    </a>
    ==================================================================
    Because all records from the master view first have to be retrieved to locate the right row, I set rangesize="-1" in detail.jsp. Consequently this leads to lower performance.
    When rangesize="20" is set instead of rangesize="-1", performance is good, but the data from the detail view is not displayed if the master row clicked on is not within this range.
    ======================== detail.jsp ======================================
    <jbo:DataSource id="dsMaster" appid="testAppId" viewobject="MasterView" rangesize="-1">
    <jbo:RefreshDataSource datasource="dsMaster">
    <jbo:Row id="msRow" datasource="dsMaster" action="Find" rowkeyparam="RowKeyValue">
    <jbo:DataSource id="dsDetail" appid="testAppId" viewobject="DetailView">
    ======================================================================
    Is my programming logic not suited to high performance?
    If so, how can I improve the performance?
    Many thanks for your help.
    regards,
    Yoo


  • How to improve performance of java.awt.Choice

    Hi!
    In my GUI I'm using a java.awt.Choice. The problem is, when my app starts this Choice is filled with lots of values via add(...).
    This takes quite a long time. I've already set the Choice invisible before the filling, so a little performance is gained there.
    Is there any other way to tune this choice?
    Thank you!

    Try to fill the Choice before adding it to the container.
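    A minimal sketch of that suggestion (class and item names invented for the example): populate the Choice while it has no native peer yet, then add it to the container, so no per-item peer or layout work happens:

    import java.awt.BorderLayout;
    import java.awt.Choice;
    import java.awt.Frame;

    public class ChoiceDemo {
        public static void main(String[] args) {
            Choice choice = new Choice();
            // Fill the Choice before it is added/displayed: without a native
            // peer, add() is a cheap in-memory operation.
            for (int i = 0; i < 5000; i++) {
                choice.add("Item " + i);
            }
            Frame frame = new Frame("Choice demo");
            frame.add(choice, BorderLayout.NORTH);   // add the populated component last
            frame.pack();
            frame.setVisible(true);                  // peer created once, after population
        }
    }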

  • How to improve Performance for Select statement

    Hi Friends,
    Can you please help me in improving the performance of the following query:
    SELECT SINGLE MAX( policyterm ) startterm INTO (lv_term, lv_cal_date) FROM zu1cd_policyterm WHERE gpart = gv_part GROUP BY startterm.
    Thanks and Regards,
    Johny

    Long lists cannot be produced with a SELECT SINGLE, and there is also nothing to group.
    But I guess the SINGLE is a bug:
    SELECT MAX( policyterm ) startterm
                  INTO (lv_term, lv_cal_date)
                  FROM zu1cd_policyterm
                  WHERE gpart = gv_part
                  GROUP BY startterm.
    How many records are in zu1cd_policyterm ?
    Is there an index starting with gpart?
    If first answer is 'large' and second 'no'   =>   slow
    What is the meaning of gpart?  How many different values can it assume?
    If many different values then an index makes sense, if you are allowed to create
    an index.
    Otherwise you must be patient.
    Siegfried

  • Improvement suggestion for java.lang.Iterable

    The current design limits each container to one collection. Wouldn't it be better to define java.lang.Iterable as
    interface Iterable<T> {
        Iterator<T> iterator(Class<T> clazz);
    }
    Then a container could hold multiple collections, as long as each collection contains different types.
    Then I could write
    class Zoo implements Iterable<Tiger>, Iterable<Elephant> {
        Iterator<Tiger> iterator(Class<Tiger> clazz) ...
        Iterator<Elephant> iterator(Class<Elephant> clazz) ...
    }
    and the compiler would be able to handle this:
    for (Tiger tiger : zoo) .. give me all Tigers
    for (Elephant elephant : zoo) .. give me all Elephants

    The current design limits each container to one collection. Wouldn't it be better to define
    java.lang.Iterable as
    interface Iterable<T> {
        Iterator<T> iterator(Class<T> clazz);
    }
    If you were to do that, none of the existing Collection types could implement it without breaking backward compatibility.
    Then I could write
    class Zoo implements Iterable<Tiger>, Iterable<Elephant> {
    Unfortunately it's not possible to implement the same interface twice with different type arguments. I say unfortunately because I believe one of my own API designs would benefit from that capability, if it were possible.
    and the compiler would be able to handle this:
    for (Tiger tiger : zoo) .. give me all Tigers
    for (Elephant elephant : zoo) .. give me all Elephants
    You can still do this:
    class Zoo {
        Collection<Tiger> tigers() { ... }
        Collection<Elephant> elephants() { ... }
    }
    for (Tiger tiger : zoo.tigers()) ...
    for (Elephant elephant : zoo.elephants()) ...
    Mark
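    For completeness, a compilable sketch of that alternative, using the class names from the thread (the method bodies are invented):

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    class Tiger {}
    class Elephant {}

    class Zoo {
        // One collection per element type instead of implementing Iterable twice.
        private final List<Tiger> tigers = new ArrayList<Tiger>();
        private final List<Elephant> elephants = new ArrayList<Elephant>();

        Collection<Tiger> tigers() { return tigers; }
        Collection<Elephant> elephants() { return elephants; }

        void add(Tiger t) { tigers.add(t); }
        void add(Elephant e) { elephants.add(e); }
    }

    public class ZooDemo {
        public static void main(String[] args) {
            Zoo zoo = new Zoo();
            zoo.add(new Tiger());
            zoo.add(new Elephant());
            for (Tiger tiger : zoo.tigers()) System.out.println("tiger: " + tiger);
            for (Elephant elephant : zoo.elephants()) System.out.println("elephant: " + elephant);
        }
    }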

  • How to improve performance for Azure Table Storage bulk loads

    Hello all,
    Would appreciate your help as we are facing a challenge.
    We are trying to bulk load Azure Table Storage. We have a file that contains nearly 2 million rows.
    We need to reach a point where we can bulk load 100,000-150,000 entries per minute. Currently, it takes more than 10 hours to process the file.
    We have tried Parallel.ForEach but it doesn't help. Today I discovered partitioning in PLINQ. Would that be the way to go?
    Any ideas? I have spent nearly two days trying to optimize it using PLINQ, but I am still not sure what the best thing to do is.
    Kindly note that we shouldn't be using SQL / Azure SQL for this.
    I would really appreciate your help.
    Thanks

    I'd think you're just pooling the parallel connections to Azure if you do it on one system. You'd also have a bottleneck of round-trip time from you, through the internet to Azure and back again.
    You could speed it up by moving the data file to the cloud and process it with a Cloud worker role.  That way you'd be in the datacenter (which is a much faster, more optimized network.)
    Or, if that's not fast enough: if you can split the data so multiple WorkerRoles could each process part of the file, you can scale out to enough machines that it gets done quickly.
    Darin R.
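    One more pattern that helps, in the datacenter or out of it: group entities by partition key and send fixed-size batches from a thread pool. The grouping reflects the documented batch constraints for Azure tables (a batch targets a single partition key and holds at most 100 entities). uploadBatch is a placeholder for whatever client API you use, so treat this as a sketch of the shape, not of any SDK:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class BulkLoader {
        static final int BATCH_SIZE = 100;   // Azure batch limit: 100 entities, one partition key

        public static void main(String[] args) throws InterruptedException {
            List<String[]> rows = loadRows();   // stub: [partitionKey, rowKey, payload]
            // Group rows by partition key so each batch stays within one partition.
            Map<String, List<String[]>> byPartition = new HashMap<String, List<String[]>>();
            for (String[] row : rows) {
                List<String[]> bucket = byPartition.get(row[0]);
                if (bucket == null) {
                    bucket = new ArrayList<String[]>();
                    byPartition.put(row[0], bucket);
                }
                bucket.add(row);
            }
            ExecutorService pool = Executors.newFixedThreadPool(16);
            for (List<String[]> partitionRows : byPartition.values()) {
                for (int i = 0; i < partitionRows.size(); i += BATCH_SIZE) {
                    final List<String[]> batch =
                        partitionRows.subList(i, Math.min(i + BATCH_SIZE, partitionRows.size()));
                    pool.execute(new Runnable() {
                        public void run() { uploadBatch(batch); }
                    });
                }
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }

        static List<String[]> loadRows() { return new ArrayList<String[]>(); }
        static void uploadBatch(List<String[]> batch) { /* placeholder for the storage call */ }
    }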

  • How to improve performance for this code

    Hi,
    LOOP AT lt_element INTO ls_element.
    READ TABLE lt_element_ident INTO ls_element_ident
    WITH KEY element_id = ls_element-element_id BINARY SEARCH.
    IF sy-subrc EQ 0.
    MOVE ls_element_ident-value TO lv_guid.
    SELECT * FROM zcm_valuation_at
    APPENDING CORRESPONDING FIELDS OF TABLE lt_caseattributes
    WHERE case_guid = lv_guid.
    ENDIF.
    ENDLOOP.
    LOOP AT lt_caseattributes INTO ls_caseattributes.
    IF ls_caseattributes-ext_key IS INITIAL.
    SELECT SINGLE ext_key
    INTO CORRESPONDING FIELDS OF ls_caseattributes
    FROM scmg_t_case_attr
    WHERE case_guid = ls_caseattributes-case_guid.
    ENDIF.
    *To get the Status description of the Case
    SELECT SINGLE stat_ordno_descr
    INTO ls_caseattributes-status
    FROM scmgstatprofst AS a
    INNER JOIN scmg_t_case_attr AS b
    ON a~profile_id = b~profile_id
    AND a~stat_orderno = b~stat_orderno
    WHERE case_guid = ls_caseattributes-case_guid.
    MODIFY lt_caseattributes FROM ls_caseattributes INDEX sy-tabix TRANSPORTING status ext_key.
    ENDLOOP.
    READ TABLE lt_caseattributes INTO ls_caseattributes INDEX 1.
    Regards,
    Maruti

    Hi,
    try this kind of code:
    ==================================
    * start new
    DATA:
      lt_scmgstatprofst LIKE scmgstatprofst OCCURS 0 WITH HEADER LINE,
      wa_scmg_t_case_attr LIKE scmg_t_case_attr,
      lv_tabix TYPE sy-tabix. " remember the loop index: READ TABLE below overwrites sy-tabix
    SELECT * FROM scmgstatprofst INTO TABLE lt_scmgstatprofst.
    SORT lt_scmgstatprofst BY profile_id stat_orderno.
    * end new
    LOOP AT lt_element INTO ls_element.
      READ TABLE lt_element_ident INTO ls_element_ident
      WITH KEY element_id = ls_element-element_id BINARY SEARCH.
      IF sy-subrc EQ 0.
        MOVE ls_element_ident-value TO lv_guid.
        SELECT * FROM zcm_valuation_at
        APPENDING CORRESPONDING FIELDS OF TABLE lt_caseattributes
        WHERE case_guid = lv_guid.
      ENDIF.
    ENDLOOP.
    LOOP AT lt_caseattributes INTO ls_caseattributes.
      lv_tabix = sy-tabix.
      IF ls_caseattributes-ext_key IS INITIAL.
        SELECT SINGLE ext_key
        INTO CORRESPONDING FIELDS OF ls_caseattributes
        FROM scmg_t_case_attr
        WHERE case_guid = ls_caseattributes-case_guid.
      ENDIF.
    *To get the Status description of the Case
    * start deletion
    SELECT SINGLE stat_ordno_descr
    INTO ls_caseattributes-status
    FROM scmgstatprofst AS a
    INNER JOIN scmg_t_case_attr AS b
    ON a~profile_id = b~profile_id
    AND a~stat_orderno = b~stat_orderno
    WHERE case_guid = ls_caseattributes-case_guid.
    * end deletion
    * start new
      CLEAR wa_scmg_t_case_attr.
      SELECT SINGLE * FROM scmg_t_case_attr INTO wa_scmg_t_case_attr
        WHERE case_guid = ls_caseattributes-case_guid.
      READ TABLE lt_scmgstatprofst WITH KEY
        profile_id   = wa_scmg_t_case_attr-profile_id
        stat_orderno = wa_scmg_t_case_attr-stat_orderno
        BINARY SEARCH.
      IF sy-subrc IS INITIAL.
        ls_caseattributes-status = lt_scmgstatprofst-stat_ordno_descr.
      ENDIF.
    * end new
      MODIFY lt_caseattributes FROM ls_caseattributes INDEX lv_tabix
      TRANSPORTING status ext_key.
    ENDLOOP.
    READ TABLE lt_caseattributes INTO ls_caseattributes INDEX 1.
    ==================================
    Regards
    Walter Habich
