CF Performance Tuning Help

I moved our internal application from a P3 500MHz 1U server to a Dual-Core Xeon 1.6GHz server. The newer box has a larger hard drive, more RAM, and a beefier CPU, yet my application is running at the same speed or only a smidgen faster than on the old server. Oh, and I went from a Win2k, IIS 5.0 box to Windows Server 2003 running IIS 6 and MySQL 5.0.
I did a look-see in Windows Performance Monitor: I added all the CF services and watched the graphs do their thing. I noticed that after I ran a query in my application, the JRun service Avg. Req Time is pinned at 100%. Even after several minutes of inactivity on the box, this counter is still at 100%. Does anyone know if this is causing my performance woes? Does anyone know what else I can look at to increase the speed of the app (barring any serious code rebuild)?
Thanks!
chris

Anyone know what else I can look at to increase the speed of the app (barring any serious code rebuild)?
There are some tweaks and tips, but I will let more knowledgeable folks speak to those. I am afraid your problem may be a code issue, though. From the brief description, I would be concerned that there is some kind of memory leak happening in your code, and until you track it down and plug it up, no amount of memory or performance tuning is going to do you much good.

Similar Messages

  • Performance tuning help

    hello friends,
    I am supposed to use ST05 and SE30 to do performance tuning and also perform cost estimation.
    Can someone please help me understand? I am not very good at reading large paragraphs, so please do not just point me to links on help.sap.
    thank you.

    Hi
    SE30 is the runtime analysis. It gives you a detailed graph of where your program spends its time: ABAP time, database time, and application time.
    ST05 is the SQL trace. It shows the individual SELECT statements in your program and whether a proper index is being used.
    Other performance tips:
    Check for SELECT statements inside loops and replace them with READ TABLE statements.
    Use BINARY SEARCH in your READ TABLE statements.
    Make sure the WHERE condition of each SELECT statement uses a proper index.
    To run ST05: go to ST05, switch the trace on, execute your program, come back to ST05, switch the trace off, and choose "list trace". You get summary details of the SELECT queries, highlighted in pink or another color.
    Thanks

  • Query Performance Tuning - Help

    Hello Experts,
    Good Day to all...
    TEST@ora10g>select * from v$version;
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    "CORE     10.2.0.4.0     Production"
    TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
    NLSRTL Version 10.2.0.4.0 - Production
    SELECT fa.user_id,
           fa.notation_type,
           MAX(fa.created_date) maxDate,
           COUNT(*) bk_count
    FROM   book_notations fa
    WHERE  fa.user_id IN
           (SELECT user_id
            FROM   (SELECT /*+ INDEX(f2,FBK_AN_ID_IDX) */
                           f2.user_id,
                           MAX(f2.notatn_id) f2_annotation_id
                    FROM   book_notations f2,
                           title_relation tdpr
                    WHERE  f2.user_id IN ('100002616221644',
                                          '100002616221645',
                                          '100002616221646',
                                          '100002616221647',
                                          '100002616221648')
                    AND    f2.pack_id = tdpr.pack_id
                    AND    tdpr.title_id = 93402
                    GROUP  BY f2.user_id
                    ORDER  BY 2 DESC)
            WHERE  ROWNUM <= 10)
    GROUP  BY fa.user_id,
           fa.notation_type
    ORDER  BY 3 DESC;
    Cost of the Query is too much...
    Below is the explain plan of the query
    | Id  | Operation                                  | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                           |                                |    29 |  1305 |    52  (10)| 00:00:01 |
    |   1 |  SORT ORDER BY                             |                                |    29 |  1305 |    52  (10)| 00:00:01 |
    |   2 |   HASH GROUP BY                            |                                |    29 |  1305 |    52  (10)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID             | book_notations                 |    11 |   319 |     4   (0)| 00:00:01 |
    |   4 |     NESTED LOOPS                           |                                |    53 |  2385 |    50   (6)| 00:00:01 |
    |   5 |      VIEW                                  | VW_NSO_1                       |     5 |    80 |    29   (7)| 00:00:01 |
    |   6 |       HASH UNIQUE                          |                                |     5 |    80 |            |          |
    |*  7 |        COUNT STOPKEY                       |                                |       |       |            |          |
    |   8 |         VIEW                               |                                |     5 |    80 |    29   (7)| 00:00:01 |
    |*  9 |          SORT ORDER BY STOPKEY             |                                |     5 |   180 |    29   (7)| 00:00:01 |
    |  10 |           HASH GROUP BY                    |                                |     5 |   180 |    29   (7)| 00:00:01 |
    |  11 |            TABLE ACCESS BY INDEX ROWID     | book_notations                 |  5356 |   135K|    26   (0)| 00:00:01 |
    |  12 |             NESTED LOOPS                   |                                |  6917 |   243K|    27   (0)| 00:00:01 |
    |  13 |              MAT_VIEW ACCESS BY INDEX ROWID| title_relation                         |     1 |    10 |     1   (0)| 00:00:01 |
    |* 14 |               INDEX RANGE SCAN             | IDX_TITLE_ID                   |     1 |       |     1   (0)| 00:00:01 |
    |  15 |              INLIST ITERATOR               |                                |       |       |            |          |
    |* 16 |               INDEX RANGE SCAN             | FBK_AN_ID_IDX                  |  5356 |       |     4   (0)| 00:00:01 |
    |* 17 |      INDEX RANGE SCAN                      | FBK_AN_ID_IDX                  |   746 |       |     1   (0)| 00:00:01 |
    Table Details
    SELECT COUNT(*) FROM book_notations; --111367
    Columns
    user_id        VARCHAR2(50 BYTE)  -- nullable
    pack_id        NUMBER             -- NOT NULL
    notation_type  VARCHAR2(50 BYTE)  -- nullable
    created_date   DATE               -- nullable
    notatn_id      VARCHAR2(50 BYTE)  -- nullable
    Index
    FBK_AN_ID_IDX - non-unique, composite (user_id, pack_id)
    SELECT COUNT(*) FROM title_relation; --12678
    Columns
    pack_id   NUMBER(38)  -- NOT NULL, PK
    title_id  NUMBER(38)  -- NOT NULL
    Index
    IDX_TITLE_ID - non-unique (title_id)
    Please help...
    Thanks...

    Linus wrote:
    Thanks Bravid for your reply; highly appreciate that.
    So as you say, index creation on the NULL column doesn't have any impact. OK, fine.
    What happens to the execution plan, performance and the stats when you remove the index hint?
    Find below the Execution Plan and Predicate information
    "PLAN_TABLE_OUTPUT"
    "Plan hash value: 126058086"
    "| Id  | Operation                                  | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |"
    "|   0 | SELECT STATEMENT                           |                                |    25 |  1125 |    55  (11)| 00:00:01 |"
    "|   1 |  SORT ORDER BY                             |                                |    25 |  1125 |    55  (11)| 00:00:01 |"
    "|   2 |   HASH GROUP BY                            |                                |    25 |  1125 |    55  (11)| 00:00:01 |"
    "|   3 |    TABLE ACCESS BY INDEX ROWID             | book_notations                 |    10 |   290 |     4   (0)| 00:00:01 |"
    "|   4 |     NESTED LOOPS                           |                                |    50 |  2250 |    53   (8)| 00:00:01 |"
    "|   5 |      VIEW                                  | VW_NSO_1                       |     5 |    80 |    32  (10)| 00:00:01 |"
    "|   6 |       HASH UNIQUE                          |                                |     5 |    80 |            |          |"
    "|*  7 |        COUNT STOPKEY                       |                                |       |       |            |          |"
    "|   8 |         VIEW                               |                                |     5 |    80 |    32  (10)| 00:00:01 |"
    "|*  9 |          SORT ORDER BY STOPKEY             |                                |     5 |   180 |    32  (10)| 00:00:01 |"
    "|  10 |           HASH GROUP BY                    |                                |     5 |   180 |    32  (10)| 00:00:01 |"
    "|  11 |            TABLE ACCESS BY INDEX ROWID     | book_notations                 |  5875 |   149K|    28   (0)| 00:00:01 |"
    "|  12 |             NESTED LOOPS                   |                                |  7587 |   266K|    29   (0)| 00:00:01 |"
    "|  13 |              MAT_VIEW ACCESS BY INDEX ROWID| title_relation                      |     1 |    10 |     1   (0)| 00:00:01 |"
    "|* 14 |               INDEX RANGE SCAN             | IDX_TITLE_ID                   |     1 |       |     1   (0)| 00:00:01 |"
    "|  15 |              INLIST ITERATOR               |                                |       |       |            |          |"
    "|* 16 |               INDEX RANGE SCAN             | FBK_AN_ID_IDX                  |  5875 |       |     4   (0)| 00:00:01 |"
    "|* 17 |      INDEX RANGE SCAN                      | FBK_AN_ID_IDX                  |   775 |       |     1   (0)| 00:00:01 |"
    "Predicate Information (identified by operation id):"
    "   7 - filter(ROWNUM<=10)"
    "   9 - filter(ROWNUM<=10)"
    "  14 - access(""TDPR"".""TITLE_ID""=93402)"
    "  16 - access((""F2"".""USER_ID""='100002616221644' OR ""F2"".""USER_ID""='100002616221645' OR "
    "              ""F2"".""USER_ID""='100002616221646' OR ""F2"".""USER_ID""='100002616221647' OR "
    "              ""F2"".""USER_ID""='100002616221648') AND ""F2"".""PACK_ID""=""TDPR"".""PACK_ID"")"
    "  17 - access(""FA"".""USER_ID""=""$nso_col_1"")"
    The cost is the same because the plan is the same. The optimiser chose to use that index anyway. The point is, now that you have removed it, the optimiser is free to choose other indexes or a full table scan if it wants to.
    Statistics
    BEGIN
    DBMS_STATS.GATHER_TABLE_STATS ('TEST', 'BOOK_NOTATIONS');
    END;
    "COLUMN_NAME"     "NUM_DISTINCT"     "NUM_BUCKETS"     "HISTOGRAM"
    "NOTATION_ID"     110269     1     "NONE"
    "USER_ID"     213     212     "FREQUENCY"
    "PACK_ID"     20     20     "FREQUENCY"
    "NOTATION_TYPE"     8     8     "FREQUENCY"
    "CREATED_DATE"     87     87     "FREQUENCY"
    "CREATED_BY"     1     1     "NONE"
    "UPDATED_DATE"     2     1     "NONE"
    "UPDATED_BY"     2     1     "NONE"
    After removing the hint, the query still shows the same "COST".
    Autotrace
    recursive calls     1
    db block gets     0
    consistent gets     34706
    physical reads     0
    redo size     0
    bytes sent via SQL*Net to client     964
    bytes received via SQL*Net from client     1638
    SQL*Net roundtrips to/from client     2
    sorts (memory)     3
    sorts (disk)     0
    Output of query
    "USER_ID"     "NOTATION_TYPE"     "MAXDATE"     "COUNT"
    "100002616221647"     "WTF"     08-SEP-11     20000
    "100002616221645"     "LOL"     08-SEP-11     20000
    "100002616221644"     "OMG"     08-SEP-11     20000
    "100002616221648"     "ABC"     08-SEP-11     20000
    "100002616221646"     "MEH"     08-SEP-11     20000Thanks...I still don't know what we're working towards at the moment. WHat is the current run time? What is the expected run time?
    I can't tell you if there's a better way to write this query or if indeed there is another way to write this query because I don't know what it is attempting to achieve.
    I can see that you're accessing 100k rows from a 110k row table and it's using an index to look those rows up. That seems like a job for a full table scan rather than index lookups.
    David
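    A hedged sketch of how one might test David's full-scan suggestion (this is not from the thread; the object names are taken from the original post, and comparing consistent gets under autotrace is just one convenient way to measure the difference):
    SET AUTOTRACE TRACEONLY STATISTICS
    -- Same query, but asking the optimiser to full-scan the outer BOOK_NOTATIONS access
    SELECT /*+ FULL(fa) */
           fa.user_id,
           fa.notation_type,
           MAX(fa.created_date) maxDate,
           COUNT(*) bk_count
    FROM   book_notations fa
    WHERE  fa.user_id IN
           (SELECT user_id
            FROM   (SELECT f2.user_id,
                           MAX(f2.notatn_id) f2_annotation_id
                    FROM   book_notations f2,
                           title_relation tdpr
                    WHERE  f2.user_id IN ('100002616221644', '100002616221645',
                                          '100002616221646', '100002616221647',
                                          '100002616221648')
                    AND    f2.pack_id = tdpr.pack_id
                    AND    tdpr.title_id = 93402
                    GROUP  BY f2.user_id
                    ORDER  BY 2 DESC)
            WHERE  ROWNUM <= 10)
    GROUP  BY fa.user_id, fa.notation_type
    ORDER  BY 3 DESC;
    Running the statement once with and once without the FULL hint and comparing consistent gets shows whether the index lookups on FBK_AN_ID_IDX really are cheaper than a single scan of the 110k-row table.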

  • Performance Tuning Help: Repeating HUGE Select Statement...

    I have a select statement that I am repeating 12 times with the only difference between them all being the date range that is grabbed. Essentially, I am grabbing the last 12 months of information from the same table here is a copy of one of the sections:
    (select a.account_id                    as account_id,
            a.company_id                    as company_id,
            a.account_number                as acct_number,
            to_char(a.trx_date, 'YYYY/MM')  as month,
            sum(a.amount_due_original)      as amount
     from   crystal.financial_vw a
     where  a.cust_since_dt is not null
     and    a.cust_end_dt is null
     and    a.trx_date >  add_months(sysdate, -13)
     and    a.trx_date <= add_months(sysdate, -12)
     group by a.account_id,
            a.company_id,
            a.account_number,
            to_char(a.trx_date, 'YYYY/MM')) b
    I am now looking to do some tuning on this and was wondering if anyone has any suggestions. My initial thought was to use cursors or some sort of in memory storage to temporarily process the information into a pipe-delimited flat file.

    "Don't need:
    to_char(a.trx_date, 'YYYY/MM')"
    Are you sure?
    "Change to (just to make it easier to read):
    a.trx_date between add_months(sysdate, -13)
    and a.trx_date <= add_months(sysdate, -12)"
    What? That's not even valid syntax is it? Besides the fact that the BETWEEN operator is inclusive (i.e. > add_months(sysdate, -13) is not the same as between add_months(sysdate, -13) ...).
    "And be sure you have an index on:
    cust_since_dt, cust_end_dt, trx_date in financial_vw."
    What information did you base this conclusion on? Just because something is in the where clause doesn't mean you should automatically throw an index on it. What if 90% of the rows satisfy those null/not null criteria? What if there's only one year of data in this table? Are you certain an index would help pick out all the data for one month more efficiently than a full table scan?
    My immediate question is why you are breaking the data for each month up into separate subqueries like this at all. What is it that you're doing with these subqueries that you don't believe can be accomplished with a single grouped query?
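    A hedged sketch of the single grouped query alluded to above, assuming the 12 subqueries differ only in the month window (column and view names are taken from the post; adjust the outer date range to match the 12 ranges actually used):
    select a.account_id                    as account_id,
           a.company_id                    as company_id,
           a.account_number                as acct_number,
           to_char(a.trx_date, 'YYYY/MM')  as month,
           sum(a.amount_due_original)      as amount
    from   crystal.financial_vw a
    where  a.cust_since_dt is not null
    and    a.cust_end_dt is null
    and    a.trx_date >  add_months(sysdate, -13)   -- assumed overall window covering the 12 monthly buckets
    and    a.trx_date <= add_months(sysdate, -1)
    group by a.account_id,
           a.company_id,
           a.account_number,
           to_char(a.trx_date, 'YYYY/MM');
    One pass over the view produces one row per account per month; if 12 separate result sets really are needed, they can be split afterwards on the month column instead of hitting the view 12 times.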

  • Need help in Performance tuning for function...

    Hi all,
    I am using the function below, which implements the Luhn algorithm, to calculate the 15th (Luhn check) digit for an IMEI (phone SIM card).
    The function takes about 6 minutes for 5 million records. I have 170 million records in a table and want to calculate the Luhn digit for all of them, which might take up to 4-5 hours. Please help me with performance tuning (a better way or better logic for the Luhn calculation) of the function below.
    A Wikipedia link for the Luhn algorithm is provided below.
    Create or Replace FUNCTION AddLuhnToIMEI (LuhnPrimitive VARCHAR2)
          RETURN VARCHAR2
       AS
          Index_no     NUMBER (2) := LENGTH (LuhnPrimitive);
          Multiplier   NUMBER (1) := 2;
          Total_Sum    NUMBER (4) := 0;
          Plus         NUMBER (2);
          ReturnLuhn   VARCHAR2 (25);
       BEGIN
          WHILE Index_no >= 1
          LOOP
             Plus       := Multiplier * (TO_NUMBER (SUBSTR (LuhnPrimitive, Index_no, 1)));
             Multiplier := (3 - Multiplier);
             Total_Sum  := Total_Sum + TO_NUMBER (TRUNC ( (Plus / 10))) + MOD (Plus, 10);
             Index_no   := Index_no - 1;
          END LOOP;
          ReturnLuhn := LuhnPrimitive || CASE
                                             WHEN MOD (Total_Sum, 10) = 0 THEN '0'
                                             ELSE TO_CHAR (10 - MOD (Total_Sum, 10))
                                         END;
          RETURN ReturnLuhn;
       EXCEPTION
          WHEN OTHERS
          THEN
             RETURN (LuhnPrimitive);
       END AddLuhnToIMEI;
    http://en.wikipedia.org/wiki/Luhn_algorithm
    Any sort of help is much appreciated...
    Thanks
    Rede

    There is an unneeded TO_NUMBER call in it: TRUNC already returns a number.
    The MOD function can also be avoided in some steps. Since multiplying a single digit by 2 can never give more than 18, you can speed up the calculation like this.
    create or replace
    FUNCTION AddLuhnToIMEI_fast (LuhnPrimitive VARCHAR2)
          RETURN VARCHAR2
       AS
          Index_no     pls_Integer;
          Multiplier   pls_Integer := 2;
          Total_Sum    pls_Integer := 0;
          Plus         pls_Integer;
          rest         pls_integer;
          ReturnLuhn   VARCHAR2 (25);
       BEGIN
          for Index_no in reverse 1..LENGTH (LuhnPrimitive) LOOP
             Plus       := Multiplier * TO_NUMBER (SUBSTR (LuhnPrimitive, Index_no, 1));
             Multiplier := 3 - Multiplier;
             if Plus < 10 then
                Total_Sum  := Total_Sum + Plus ;
             else
                Total_Sum  := Total_Sum + Plus - 9;
             end if;  
          END LOOP;
          rest := MOD (Total_Sum, 10);
          ReturnLuhn := LuhnPrimitive || CASE WHEN rest = 0 THEN '0' ELSE TO_CHAR (10 - rest) END;
          RETURN ReturnLuhn;
       END AddLuhnToIMEI_fast;
    /
    My tests gave an improvement of about 40%.
    The next step to try could be native compilation of this function. That can give an additional big boost.
    Edited by: Sven W. on Mar 9, 2011 8:11 PM
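    A short follow-up sketch on the native-compilation idea (standard Oracle syntax, though on 10g the native-compilation prerequisites must already be configured; the UPDATE uses hypothetical table and column names purely to illustrate a set-based call):
    ALTER FUNCTION AddLuhnToIMEI_fast COMPILE PLSQL_CODE_TYPE = NATIVE;
    -- For 170 million rows, the per-row SQL-to-PL/SQL context switch is usually the dominant
    -- cost, so one set-based statement tends to beat any row-by-row loop:
    UPDATE imei_table t
    SET    t.imei_with_luhn = AddLuhnToIMEI_fast(t.imei_primitive);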

  • Help request for Oracle DB Performance tuning

    I need some help on Oracle performance tuning.
    My environment is a VB 6 frontend and Oracle 8 in the backend.
    The problem I am facing shows up particularly in a multi-user environment. A query which takes 20 seconds when there is only one user working on the network takes more time (3 minutes to even 5 minutes) when there are 4-5 users working on the network.
    What may be wrong? Are there any parameters that I can fine-tune?
    We checked the resource utilization at the server level: CPU utilization is at most 50%, and memory utilization is 50% (250 MB out of an available 512 MB).

    Hi Vinay,
    There can be many reasons for the time delay. Here are some:
    1) You may not be releasing locks on objects promptly after use.
    You can check this by querying v$locked_object.
    2) You may be holding many sessions open concurrently and not closing the older ones.
    You can check this by querying v$session.
    These are common problems on a multi-user platform; a sketch of the kind of queries I mean follows below.
    Are you using MTS?
    Yogesh.
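    A hedged sketch of such dictionary queries (standard v$ and dba_ views, not from the original reply; they require the appropriate SELECT privileges):
    -- Sessions currently holding DML locks, and the objects they lock
    SELECT s.sid,
           s.serial#,
           s.username,
           o.owner,
           o.object_name,
           lo.locked_mode
    FROM   v$locked_object lo,
           dba_objects     o,
           v$session       s
    WHERE  lo.object_id  = o.object_id
    AND    lo.session_id = s.sid;
    -- Sessions that have been idle for more than 30 minutes
    SELECT sid, serial#, username, status, last_call_et, program
    FROM   v$session
    WHERE  status = 'INACTIVE'
    AND    last_call_et > 1800;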

  • Report running for long time & performance tuning

    Hi All,
    (1). A WebI report is running for a long time. What are the steps I need to check?
    (2). Can you tell me about performance tuning in BO?
    Please help me.
    Thanks
    Kumar

    (1). A WebI report is running for a long time. What are the steps I need to check?
    The first step is to see whether the problem lies in the query on the data source or in WebI itself. Depending on the data source there are different ways to extract the query and run it against the database. Which source does your report use?
    (2). Can you tell me about performance tuning in BO?
    I would recommend starting with the administrator's guide. There is a section about how to improve performance.
    Regards,
    Stratos

  • Performance tuning of BPEL processes in SOA Suite 11g

    Hi,
    We are working with a customer on performance tuning of SOA Suite 11g; one of the areas is tuning the BPEL processes. I am new to this and started out by stress testing a Hello World process using the SOAPUI tool. I would like help with the topics below:
    1. How do I interpret the statistics collected during stress testing? Do we have any benchmarks that can indicate that the performance is OK?
    2. Do we need to run stress tests for every BPEL process deployed?
    3. Is there any performance tuning strategy documentation available? Or can anybody share his/her experiences to guide me?
    Thanks in advance!
    Sritama

    1. How do I interpret the statistics collected during stress testing? Do we have any benchmarks that can indicate that the performance is OK?
    You need to pay attention to:
    java heap usage vs java heap capacity
    java eden usage vs java eden capacity
    JDBC pool initial connections vs JDBC pool capacity connections
    if you are using Linux: top
    if you are using AIX: topas
    2. Do we need to run stress tests for every BPEL process deployed?
    Yes, you need to test each BPEL process. You can use the "JMeter" tool.
    Download JMeter from here: Apache JMeter
    Other tools:
    jstat
    jstack
    jps -v
    Enterprise Manager
    WebLogic Console
    VisualVM
    JRockit Mission Control
    3. Is there any performance tuning strategy documentation available? Or can anybody share his/her experiences to guide me?
    I recommend "Oracle SOA Suite 11g Performance Tuning Cookbook" http://www.amazon.com/Oracle-Suite-Performance-Tuning-Cookbook/dp/1849688842/ref=sr_1_1?ie=UTF8&qid=1378482031&sr=8-1&keywords=oracle+soa+suite+11g+performance+tuning+cookbook

  • Performance tuning related issues

    hi experts
    I am new to performance tuning. Can anyone share some material (BW 3.5 & BI 7) related to this area? Please send any relevant docs to my id: [email protected].
    thanks in advance
    regards
    gavaskar
    [email protected]

    hi Gavaskar,
    check this, you can download a lot of performance materials:
    Business Intelligence Performance Tuning [original link is broken]
    and e-learning -> intermediate course and advanced course
    https://www.sdn.sap.com/irj/sdn/developerareas/bi?rid=/webcontent/uuid/fe5b0b5e-0501-0010-cd88-c871915ec3bf [original link is broken]
    e.g.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/10b589ad-0701-0010-0299-e5c282b7aaad
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/d9fd84ad-0701-0010-d9a5-ba726caa585d
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/8e6183ad-0701-0010-e083-9ab1c6afe6f2
    performance tools in BW 3.5
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/07a4f070-0701-0010-3b91-a6bf7644c98f
    (here you can also download the presentation by right-clicking the disk drive icon)
    hope this helps.

  • [ADF-11.1.2] Proof of view performance tuning in oracle adf

    Hello,
    Take as an example: http://www.gebs.ro/blog/oracle/adf-view-object-performance-tuning-analysis/
    It tells me perfectly how to tune a VO to achieve performance, but how do I see it working?
    For example: I set a fetch size of 25. With 'in Batch of' set to 1 or 26, I see the following SQL statement in the log:
    [1028] SELECT Company.COMPANY_ID, Company.CREATED_DATE, Company.CREATED_BY, Company.LAST_MODIFY_DATE, Company.LAST_MODIFY_BY, Company.NAME FROM COMPANY Company
    as if it is fetching all the records from the table at once, no matter what the batch size is. If I am seeing 50 records in the UI at a time, then I would expect at least 2 SELECT statements fetching 26 records each if I set the batch size to 26, or at least 50 SELECT statements for a batch size of '1'.
    Please tell me how to see view performance tuning working. How can one say that setting batch size = '1' is bad for performance?

    Anandsagar,
    Why don't you just read up on http://download.oracle.com/docs/cd/E21764_01/core.1111/e10108/adf.htm#CIHHGADG
    There are more factors influencing performance than just the query. Btw, indexing your queries also helps to tune performance.
    Frank

  • Performance tuning in t

    hi,
    I have to do performance tuning for one program. It is taking 67,000 secs in background execution and 1,000 secs for some variants. It is an ALV report.
    Please suggest how I should proceed to change the code.

    Performance tuning for Data Selection Statement
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    Debugger
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
    http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
    http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
    Run Time Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
    SQL trace
    http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
    CATT - Computer Aided Testing Tool
    http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
    Test Workbench
    http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
    Coverage Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
    Runtime Monitor
    http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
    Memory Inspector
    http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
    ECATT - Extended Computer Aided testing tool.
    http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
    Just refer to these links...
    You can go to transaction SE30 to run a runtime analysis of your program. Also try transaction SCI, which is the SAP Code Inspector.

  • Performance tuning in XI, (SAP Note 857530 )

    Could anyone please tell me where to find SAP Notes? I am looking for "SAP Note 857530", Integration process performance (in SAP XI).
    Or how can I view the performance of an integration process? Or how exactly is performance tuning done?
    Please help.
    Best regards,
    verma.

    Hi,
    SAP Note:
    Symptom
    Performance bottlenecks when executing integration processes.
    Other terms
    ccBPM
    BPE
    Performance
    Integration Processes
    Solution
    This note refers to all notes that are concerned with improving the performance of the ccBPM runtime.
    This note will be continually updated as improvements are made.
    Also read the document "Checklist: Making Correct Use of Integration Processes" in the SAP Library documentation, on SAP Service Marketplace, and in SDN; it contains information about performance issues to bear in mind when you model integration processes.
    Refer to the appended notes and maintain the default code changes by using SNOTE, or by importing the relevant service packs. Note that some performance improvements cannot be implemented by using SNOTE and are instead only available in service packs.
    Regards
    vijaya

  • Performance tuning in oracle 10g

    Hi guys,
    I hope all are well. Have a nice day.
    I have a performance tuning issue to discuss. I recently joined a new project whose aim is to improve the efficiency of the application. In this environment the Oracle PL/SQL language is used, so if I need to improve the efficiency of the application, what steps should be taken and how should I go about the improvement process?
    Kindly help me.

    generate statspack/AWR reports
    HOW To Make TUNING request
    https://forums.oracle.com/forums/thread.jspa?threadID=2174552#9360003
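    A hedged sketch of how Statspack/AWR reports are usually produced on 10g (AWR needs the Diagnostics Pack license; Statspack is the no-cost alternative, and both scripts below ship under ?/rdbms/admin):
    -- Take a manual AWR snapshot before and after the workload you want to analyse
    BEGIN
       DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
    END;
    /
    -- Then generate the report between two snapshot IDs from SQL*Plus:
    -- @?/rdbms/admin/awrrpt.sql
    --
    -- Statspack equivalent (after installing it with @?/rdbms/admin/spcreate.sql):
    -- EXEC statspack.snap;
    -- @?/rdbms/admin/spreport.sql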

  • Performance tuning in EP6 SP14

    Hi,
    We just migrated our development EP5 SP6 portal to NW04 EP6 SP14 and are seeing some performance problems with a limited number of users (about 3). 
    Please point me to some good clear documentation related to performance tuning.  Better yet, please tell me some things you have done in your EP6 portal to improve performance.
    Any help is greatly appreciated.
    Regards,
    Rick

    Hi
    In general we have experienced that NW04 is faster than earlier versions.
    One of the big questions is whether the response time is SERVER time (used for processing on the J2EE server), NETWORK time (latency and many roundtrips), CLIENT time (rendering, virus scanning etc.) or a combination.
    1) Is the capacity on the server OK? Are CPU utilization and queue length high? Is memory swapping?
    2) A quick server-side optimization is logging: please verify that log and trace levels are ERROR/FATAL or NONE on the J2EE engine, and also avoid logging on IIS if an IIS proxy is used.
    3) Using an HTTP trace you can see if the NW portal for some reason generates more roundtrips (more GETs) than the old portal for the same content. What is the network latency? Try a "ping <server>" from a client PC and check the time (latency); if it is below 20 ms the network should not be making a big difference.
    4) On the client, try to call the portal with anti-virus deactivated; if the difference in response times is HUGE, your client could be too old (don't underestimate the client). Maybe the compression setting should be changed to avoid compressing (server) and uncompressing (client) - this is a trade-off against network latency, however.
    Also client-side caching (in the browser) is important.
    These are just QUICK points to consider - tuning is complex.
    BR
    Tom Bo

  • Performance Tuning in IR

    Hello All,
    We have created some reports using Interactive Reporting Studio. The volume of data in that Oracle database are huge and in some tables of the relational database are having above 3-4 crores rows individually. We have created the .oce connection file using the 'Oracle Net' option. Oracle client ver is 10g. We earlier created pivot, chart and report in those .bqy files but had to delete those where-ever possible to decrease the processing time for getting those report generated.
    But deleting those from the file and retaining just the result section (the bare minimum part of the file) even not yet helped us out solving the performance issue fully. Still now, in some reports, system gives error message 'Out of Memory' at the time of processing those reports. The memory of the client PCs,wherefrom the reports are being generated are 1 - 1.5 GB. For some reports, even it takes 1-2 hours for saving the results after process. In some cases, the PCs gets hanged at the time of processing. When we extract the query of those reports in sql and run them in TOAD/SQL PLUS, they take not so much time like IR.
    Would you please help us out in the aforesaid issue ASAP? Please share your views/tips/suggestions etc in respect of performance tuning for IR. All reply would be highly appreciated.
    Regards,
    Raj

    SQL + & Toad are tools that send SQL and spool results; IR is a tool that sends a request to the database to run SQL and then fiddles with the results before the user is even told data has been received. You need to minimize the time spent by IR manipulating results into objects the user isn't even asking for.
    When a request is made to the database, Hyperion will wait until all of the results have been received. Once ALL of the results have been received, then IR will make multiple passes to apply sorts, filters and computed items existing in the results section. For some unknown reason, those three steps are performed more inefficiently then they would be performed in a table section. Only after all of the computed items have been calculated, all filters applied and all sorts sorted, then IR will start to calculate any reports, charts and pivots. After all that is done, the report stops processing and the data has been "returned"
    To increase performance, you need to fine tune your IR Services and your BQY docs. Replicate your DAS on your server - it can only transfer 2g before it dies, restarts and your requested document hangs. You can replicated the DAS multiple times and should do so to make sure there are enough resources available for any concurrent users to make necessary requests and have data delivered to them.
    To tune your bqy documents...
    1) Your Results section MUST be free of any sorts, filters, or computed items. Create a staging table and put any sorts or local filters there. Move as many of your computed items to your database request line and ask the database to make the calculation (either directly or through stored procedures) so you are not at the mercy of the client machine. Any computed items that cannot be moved to the request line, need to be put on your new staging table.
    2) Ask the users to choose filters. Programmatically build dynamic filters based on what the user is looking for. The goal is to cast a net only as big as the user needs so you are not bringing back unnecessary data. Otherwise, you will bring your server and client machines to a grinding halt.
    3) Halt any report pagination. Built your reports from their own tables and put a dummy filter on the table that forces 0 rows in the table until the report is invoked. Hyperion will paginate every report BEFORE it even tells the user it has results so this will prevent the user from waiting an hour while 1000s of pages are paginated across multiple reports
    4) Halt any object rendering until request. Same as above - create a system programmically for the user to tell the bqy what they want so they are not waiting forever for a pivot and 2 reports to compile and paginate when they want just a chart.
    5) Saved compressed documents
    6) Unless this document can be run as a job, there should be NO results stored with the document but if you do save results with the document, store the calculations too so you at least don't have to wait for them to pass again.
    7) Remove all duplicate images and keep the image file size small.
    Hope this helps!
    PS: I forgot to mention - aside from results sections, in documents where the results are NOT saved, additional table sections take up very, very, very small bits of file size and, as long as there are not excessively larger images the same is true for Reports, Pivots and Charts. Additionally, the impact of file size only matters when the user is requesting the document. The file size is never an issue when the user is processing the report because it has already been delivered to them and cached (in workspace and in the web client)
    Edited by: user10899957 on Feb 10, 2009 6:07 AM
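    A hedged illustration of point 1 above (hypothetical table and column names, not from the original reply): instead of defining a computed item such as amount * 1.2 in the IR results section, push the expression into the request line / SQL so the database does the work:
    SELECT t.account_id,
           t.amount,
           t.amount * 1.2 AS amount_with_tax   -- computed by the database, not on the client PC
    FROM   some_schema.some_table t;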

Maybe you are looking for

  • Problem with operations on photoshop layers

    I have a problem when I try to do some operations on the layouts like resize and rotate. To reproduce the problem follow these steps. Create a new creative suit extension project, select adobe Photoshop extended as csaw libraries. In the testPhotosho

  • Audio and Video tracks don't go up/down together EVEN when linked selection is checked!

    I'm sure this is something really simple but I just can't figure it out. And it must be a button or a option that I've checked because it didn't use to do that a few days ago. Basically, I have a clip on the timeline which has a video and an audio tr

  • Develop Form and Workflow in SharePoint Online and On-Premise

    Hi Expert, I have the question about Form and Workflow in SharePoint Online vs ShrePoint On-Premise Current Problem : Customer is implemented Form and Workflow in SharePoint Online but they have a problem some features that didn't work in SharePoint

  • Posting IDOC from MQ

    Hi, I have a scenario in which I have to Post an IDOC from MQ (MQ to IDOC scenario). I am getting an error in the communication channel  : Transform: failed to execute the transformation: com.sap.aii.messaging.adapter.trans.TransformException: Error

  • Software update inquiries

    Hello, I've heard that the new S^3 update Anna is coming soon to owners of S^3 phones which did not come with Anna when it was purchased. I got my C7 about a month ago and it did not come with Anna. Since I will be enjoying this update in a few days,