Performance tuning required

The table contains 2 crore (20 million) records, and the procedure below takes a long time to execute. Please suggest how to improve its performance.
Most of the execution time is spent in this UPDATE:
UPDATE ctrl_settled_shipments
SET
msrbnr = v_document_ids (i),
msrdat = SYSDATE
WHERE msdgnr = v_consignment_ids (i) AND msrbnr IS NULL;
PROCEDURE proctest(
data_source IN VARCHAR2,
toll_flag IN NUMBER,
damage_status IN NUMBER,
execid IN NUMBER,
msg OUT VARCHAR2)
AS
lv_document_no NUMBER;
lv_consignment_number NUMBER;
lv_consignment_amount NUMBER;
lv_doc_type VARCHAR2 (2);
lv_damage_amount NUMBER;
lv_ilnbetr NUMBER;
lv_spedbetr NUMBER;
lv_spedbetr_flag NUMBER;
lv_ilnbetr_flag NUMBER;
c_consigment_numbers billing_pkg.cursor_typ;
TYPE t_num_array IS TABLE OF NUMBER
INDEX BY BINARY_INTEGER;
TYPE t_char_array IS TABLE OF VARCHAR2 (20)
INDEX BY BINARY_INTEGER;
v_consignment_ids t_char_array;
v_document_ids t_num_array;
v_doc_type_ids t_char_array;
v_consignment_amts t_num_array;
v_spedbetr_amts t_num_array;
v_ilnbetr_amts t_num_array;
v_spedbetr_flags t_num_array;
v_ilnbetr_flags t_num_array;
v_row_count NUMBER := 0;
ort varchar2(50);
cons_query varchar2(1000);
pctx bl_plog.log_ctx
:= bl_plog.init ('BL_UPDATED_CONTROLLING', plevel => bl_plog.LINFO);--NP
BEGIN
pctx.lsubsection := ''||execid;
BL_PLOG.INFO(pctx,' data_source ' || data_source);
IF UPPER (data_source) = 'CONTROLLING' THEN
cons_query :=
' SELECT document_no AS lv_document_no,
blc.msdgnr AS lv_consignment_number,
doc_type AS lv_doc_type,
0 AS lv_consignment_amount,
0 AS lv_ilnbetr,
0 AS lv_spedbetr,
0 AS lv_spedbetr_flag,
0 AS lv_ilnbetr_flag
FROM tmp_header_position_amount1,bl_controlling blc
WHERE document_no IS NOT NULL
and document_no = decode(doc_type,''R'',msrbnr,''G'',msgbnr)
and blc.exec_id = '||execid;
ELSIF UPPER (data_source) = 'BL_CONT_SCHADEN' THEN
cons_query :=
'SELECT document_no AS lv_document_no,
bcs.bcs_msdgnr AS lv_consignment_number,
doc_type AS lv_doc_type,
decode(doc_type,''R'',bcs_msreko,''G'',bcs_msguko) AS lv_consignment_amount,
nvl(bcs.bcs_ilnbetr,0) AS lv_ilnbetr,
nvl(bcs.bcs_spedbetr,0) AS lv_spedbetr,
bcs.bcs_spedbetr_flag AS lv_spedbetr_flag,
bcs.bcs_ilnbetr_flag AS lv_ilnbetr_flag
FROM tmp_header_position_amount1,bl_cont_schaden bcs
WHERE document_no IS NOT NULL
and document_no = decode(doc_type,''R'',bcs_msrbnr,''G'',bcs_msgbnr)
and bcs.exec_id = '||execid;
END IF;
OPEN c_consigment_numbers FOR cons_query;
LOOP
FETCH c_consigment_numbers
BULK COLLECT INTO v_document_ids, v_consignment_ids,
v_doc_type_ids, v_consignment_amts, v_ilnbetr_amts,
v_spedbetr_amts, v_spedbetr_flags, v_ilnbetr_flags LIMIT 200;
EXIT WHEN v_row_count = c_consigment_numbers%ROWCOUNT;
v_row_count := c_consigment_numbers%ROWCOUNT;
FOR i IN 1 .. v_consignment_ids.COUNT
LOOP
--IF UPPER (data_source) ='CONTROLLING' AND rec.lv_doc_type = Billing_Pkg.BILL THEN
IF UPPER (data_source) = 'CONTROLLING'
AND v_doc_type_ids (i) = billing_pkg.bill
THEN
UPDATE ctrl_settled_shipments
SET
msrbnr = v_document_ids (i),
msrdat = SYSDATE
WHERE msdgnr = v_consignment_ids (i) AND msrbnr IS NULL;
end if;
end loop;
commit;
end loop;
close c_consigment_numbers;
EXCEPTION
WHEN OTHERS
THEN
msg := 'records failed to update: ' || sqlerrm;
rollback;
end proctest;
Edited by: 812809 on 9 Jul, 2011 5:48 AM

You think your problem is the UPDATE? How long does a SINGLE update take? Ever tried? Did you run a trace followed by TKPROF?
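(For reference, a minimal way to capture such a trace; this is only a sketch, it assumes you can execute DBMS_MONITOR, and the parameter values and the bind variable are placeholders, not taken from your system:)
VARIABLE msg VARCHAR2(4000)
EXEC DBMS_MONITOR.session_trace_enable(waits => TRUE, binds => TRUE);  -- trace the current session
EXEC proctest('CONTROLLING', 0, 0, 12345, :msg);                       -- one timed run of the procedure
EXEC DBMS_MONITOR.session_trace_disable;
-- then on the server: tkprof <your_trace_file>.trc proctest_report.txt sort=exeela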
Your problem is that you use a lot of code, nested loops, dynamic SQL, etc. when you can do the whole update with a single MERGE command, which will reduce it to less than 10% of your code and be VERY much faster.
Something like this (if I got the idea of your code right):
MERGE INTO ctrl_settled_shipments x
USING (SELECT document_no AS lv_document_no,
              blc.msdgnr  AS lv_consignment_number,
              doc_type    AS lv_doc_type,
              s.ROWID     AS rid
         FROM tmp_header_position_amount1,
              bl_controlling blc,
              ctrl_settled_shipments s
        WHERE document_no IS NOT NULL
          AND document_no = DECODE(doc_type, 'R', msrbnr, 'G', msgbnr)
          AND blc.exec_id = execid
          AND s.msdgnr = blc.msdgnr
          AND s.msrbnr IS NULL) y
ON (x.ROWID = y.rid)
WHEN MATCHED THEN
   UPDATE SET x.msrbnr = y.lv_document_no,
              x.msrdat = SYSDATE;

Similar Messages

  • Standards of Performance Tuning required

    Hi,
Please guide me on where I can find standards for performance tuning of an Oracle DB, so that I can compare my statistics against them.
    Regards,

First of all, make sure you understand the performance concepts (http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm); then don't go comparing statistics, as that will lead you nowhere but to a dead end filled with tons of numbers.
It is not possible to cover such a wide topic here, but if your database is running slowly it is because of one of two reasons: there is an outstanding demand for resources, or there is a lack of resources. Either way, the result is a bottleneck. Find the bottleneck, troubleshoot it, and use statistics to gather a snapshot before and after the procedure.
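For example, a before/after snapshot of the non-idle wait events is one cheap way to see where the time goes (a sketch; it assumes SELECT access to the V$ views on 10g or later):
SELECT event, total_waits, time_waited
FROM v$system_event
WHERE wait_class <> 'Idle'
ORDER BY time_waited DESC;
Run it before and after the workload; the events whose TIME_WAITED grows the most point at the bottleneck.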

  • Application Express performance - tuning required?

We have recently implemented a new Application Express system running under Oracle 10g on a Linux/RedHat 4 server (actually a Dell PowerEdge 2850 with 8 GB RAM and twin 3 GHz CPUs). The application mostly flies, but on odd days (increasingly often) performance degrades to the point where a single refresh takes longer than 5 seconds. We only have about 10 users on the system, and the database is shared between about 25 users. We have had the network checked and have ruled it out, as it only peaks at about 1.6 Mbps over a 54 Mbps line. I have updated the statistics on the database too. Looking in Enterprise Manager, the HTTP server is hardly doing anything and the database appears to be using hardly any CPU, but the memory allocation line in EM seems to be running at 8 GB all the time. The server is bounced every weekend.
The SQL is nearly all embedded in PL/SQL packages called from Application Express. We have tried some of the parameter changes mentioned elsewhere in the forum, particularly setting the "keepAlive" parameter to off, from which we saw an immediate performance hike, but it then tailed away.
Can anyone suggest where the problem might be, and where I can start looking and tuning to improve the performance of our application? We want to roll it out much more widely, but are worried it won't scale to many more users and performance will stop completely.
It's all mighty puzzling, so any help would be most welcome.

Hi Jeremy,
If you again have a situation where everything is slow, run your page in debug mode to get some statistics on what's taking long, and whether it is really the rendering of the page or something else (e.g. network problems).
Rethinking what I wrote above: you can immediately find out if the rendering sometimes really takes longer. Have a look at the dictionary view APEX_WORKSPACE_ACTIVITY_LOG and its ELAPSED_TIME column. It will tell you how long it took to render each page. Check whether you really do have peaks sometimes.
If you don't see a peak there, you probably have more of a network or Apache problem.
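A sketch of that check (assuming your schema can query the APEX dictionary views and that ELAPSED_TIME is reported in seconds; adjust the column names to your APEX version):
SELECT view_date, application_id, page_id, elapsed_time
FROM apex_workspace_activity_log
WHERE elapsed_time > 5
ORDER BY elapsed_time DESC;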
    Patrick
    My APEX Blog: http://inside-apex.blogspot.com
    The ApexLib Framework: http://apexlib.sourceforge.net
    The APEX Builder Plugin: http://sourceforge.net/projects/apexplugin/

  • Oracle Performance Tuning Certification

    Hi Guys
    I am going for 10g OCP then maybe 11g Tuning Expert Certification.
1. I looked at the prerequisites of the 10g Performance Tuning course:
    Required Prerequisites:
    * Knowledge of Database Administration
    * Oracle Database 10g: Administration Workshop I Release 2
    * Oracle Database 10g: Administration Workshop II
    * Oracle Database 10g: New Features for Administrators Release 2
Does this mean that it is COMPULSORY that I take the 3 courses above before I can take the Performance Tuning course? That costs a lot of $$$! ;o
2. Where can I get 11g Tuning Expert Certification practice questions? From the 11g Performance Tuning course? I couldn't find them in SelfTestSoftware....
    Advance thanks!

user8092567 wrote:
Hi,
Exam # 1Z0-044 does not exist.
Why is there no Oracle exam for Performance Tuning for 10g?
There is one for 9i and there will be one for 11g (currently it is in beta).
Lorrys,
I guess this needs to be corrected a bit. The 9i exam is part of the OCP track; it is not an individual exam which you can clear and be awarded a certification for. OCE is itself a certification track and, unlike OCP, it doesn't require multiple exams to be cleared: you just have to clear one exam and you are awarded the certification. Now, about the 10g PT OCE exam not existing while an 11g one is given: that should make sense, as 11g is already out and there is no point in launching a NEW certification track for a previous version. That's why the PT OCE is launched for the newest release, which is 11g.
That said, I am not from Oracle, so this is just my own opinion about your comment.
HTH
Aman....

  • Performance Tuning Query on Large Tables

    Hi All,
I am new to the forums and have a very specific use case which requires performance tuning, but there are some limitations on what changes I am actually able to make to the underlying data. Essentially I have two tables which contain what should be identical data but, for reasons of a less than optimal operational nature, the datasets differ in a number of ways.
Essentially I am querying call record detail data. Table 1 (referred to in my test code as TIME_TEST) is what I want to consider the master data, or the "ultimate truth" if you will. Table 1 contains the CALLED_NUMBER, which is always in a consistent format. It also contains the CALLED_DATE_TIME and DURATION (in seconds).
Table 2 (TIME_TEST_COMPARE) is a reconciliation table taken from a different source, but there are no consistent unique identifiers or PK-FK relations. This table contains a wide array of differing CALLED_NUMBER formats, hugely different to those in the master table. There is also scope for the timestamp to be out by up to 30 seconds; crazy, I know, but that's just the way it is and I have no control over the source of this data. Finally, the duration (in seconds) can be out by up to 5 seconds +/-.
I want to create a join returning all of the master data, matching the master table to the reconciliation table on CALLED_NUMBER / CALL_DATE_TIME / DURATION. I have written a query which works from a logic perspective, but it performs very badly (master table = 200,000 records, rec table = 6,000,000+ records). I am able to add partitions (currently the tables are partitioned by month of CALL_DATE_TIME) and can also apply indexes. I cannot make any changes at this time to the ETL process loading the data into these tables.
    I paste below the create table and insert scripts to recreate my scenario & the query that I am using. Any practical suggestions for query / table optimisation would be greatly appreciated.
    Kind regards
    Mike
    -------------- NOTE: ALL DATA HAS BEEN DE-SENSITISED
    /* --- CODE TO CREATE AND POPULATE TEST TABLES ---- */
    --CREATE MAIN "TIME_TEST" TABLE: THIS TABLE HOLDS CALLED NUMBERS IN A SPECIFIED/PRE-DEFINED FORMAT
    CREATE TABLE TIME_TEST ( CALLED_NUMBER VARCHAR2(50 BYTE),
                                            CALLED_DATE_TIME DATE, DURATION NUMBER );
    COMMIT;
    -- CREATE THE COMPARISON TABLE "TIME_TEST_COMPARE": THIS TABLE HOLDS WHAT SHOULD BE (BUT ISN'T) IDENTICAL CALL DATA.
    -- THE DATA CONTAINS DIFFERING NUMBER FORMATS, SLIGHTLY DIFFERENT CALL TIMES (ALLOW +/-60 SECONDS - THIS IS FOR A GOOD, ALBEIT UNHELPFUL, REASON)
    -- AND DURATIONS (ALLOW +/- 5 SECS)                                        
    CREATE TABLE TIME_TEST_COMPARE ( CALLED_NUMBER VARCHAR2(50 BYTE),
                                   CALLED_DATE_TIME DATE, DURATION NUMBER );
    COMMIT;
    --CREATE INSERT DATA FOR THE MAIN TEST TIME TABLE
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 202);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 08:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 19);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 07:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 35);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 09:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 30);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:18:47 AM', 'MM/DD/YYYY HH:MI:SS AM'), 6);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:20:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 20);
    COMMIT;
    -- CREATE INSERT DATA FOR THE TABLE WHICH NEEDS TO BE COMPARED:
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:10:51 AM', 'MM/DD/YYYY HH:MI:SS AM'), 200);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '00447721345675', TO_DATE( '11/09/2011 08:10:59 AM', 'MM/DD/YYYY HH:MI:SS AM'), 21);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '07721345675', TO_DATE( '11/09/2011 07:11:20 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '+447721345675', TO_DATE( '11/09/2011 09:10:01 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '+447721345675#181345', TO_DATE( '11/09/2011 06:18:35 AM', 'MM/DD/YYYY HH:MI:SS AM')
    , 6);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '004477213456759777799', TO_DATE( '11/09/2011 06:19:58 AM', 'MM/DD/YYYY HH:MI:SS AM')
    , 17);
    COMMIT;
    /* --- QUERY TO UNDERTAKE MATCHING WHICH REQUIRES OPTIMISATION --------- */
    SELECT MAIN.CALLED_NUMBER AS MAIN_CALLED_NUMBER, MAIN.CALLED_DATE_TIME AS MAIN_CALL_DATE_TIME, MAIN.DURATION AS MAIN_DURATION,
         COMPARE.CALLED_NUMBER AS COMPARE_CALLED_NUMBER,COMPARE.CALLED_DATE_TIME AS COMPARE_CALLED_DATE_TIME,
         COMPARE.DURATION COMPARE_DURATION     
FROM
( SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
  FROM TIME_TEST
) MAIN
LEFT JOIN
( SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
  FROM TIME_TEST_COMPARE
) COMPARE
ON INSTR(COMPARE.CALLED_NUMBER, MAIN.CALLED_NUMBER) <> 0
AND MAIN.CALLED_DATE_TIME BETWEEN COMPARE.CALLED_DATE_TIME-(60/86400) AND COMPARE.CALLED_DATE_TIME+(60/86400)
AND MAIN.DURATION BETWEEN COMPARE.DURATION-5 AND COMPARE.DURATION+5;

    What does your execution plan look like?
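In case it helps, this is a minimal way to capture the plan for the query above (the shortened select list is just for illustration):
EXPLAIN PLAN FOR
SELECT MAIN.CALLED_NUMBER, COMPARE.CALLED_NUMBER
FROM TIME_TEST MAIN
LEFT JOIN TIME_TEST_COMPARE COMPARE
ON INSTR(COMPARE.CALLED_NUMBER, MAIN.CALLED_NUMBER) <> 0
AND MAIN.CALLED_DATE_TIME BETWEEN COMPARE.CALLED_DATE_TIME-(60/86400) AND COMPARE.CALLED_DATE_TIME+(60/86400)
AND MAIN.DURATION BETWEEN COMPARE.DURATION-5 AND COMPARE.DURATION+5;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);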

  • Performance Tuning for ECC 6.0

    Hi All,
I have an ECC 6.0 / EP 7.0 (ABAP+Java) system. My system is very slow. I have Oracle 10.2.0.1.
Can you please guide me on how to do these steps in the system:
1) Reorganization should be done at least for the top 10 huge tables and their indexes
2) Unaccessed data can be taken out by SAP archiving
3) Apply the relevant corrections for the top SAP standard objects
4) CBO update statistics must be current for all SAP and customer objects
    I have never done performance tuning and want to do it on this system.
    Regards,
    Jitender

Hi,
Below are the details from ST06. Please suggest what I should do; the system performance is very bad.
I require your inputs for performance tuning.
CPU
Utilization: user 3 %, system 3 %, idle 1 %, I/O wait 93 %
Count: 2
Load average: 0.11 (1 min), 0.21 (5 min), 0.22 (15 min)
System calls/s: 982   Context switches/s: 1752   Interrupts/s: 4528
Memory
Physical mem avail: 6291456 KB   Physical mem free: 93992 KB
Pages in/s: 473   KB paged in/s: 3784
Pages out/s: 211   KB paged out/s: 1688
Pool
Configured swap: 26869896 KB   Maximum swap space: 26869896 KB
Free in swap space: 21631032 KB   Actual swap space: 26869896 KB
Disk with highest response time
Name: md3   Response time: 51 ms   Utilization: 2   Queue: 0
Avg wait time: 0 ms   Avg service time: 51 ms
KB transferred/s: 2   Operations/s: 0
    Current parameters in the system
    System:        sapretail_RET_01          Profile Parameters for SAP Buffers
    Date and Time: 08.01.2009       13:27:54
    Buffer Name                    Comment
    Profile Parameter             Value      Unit  Comment
    Program buffer                 PXA
    abap/buffersize               450000     kB    Size of program buffer
    abap/pxa                      shared           Program buffer mode
    |
    CUA buffer                     CUA
    rsdb/cua/buffersize           3000       kB    Size of CUA buffer
    The number of max. buffered CUA objects is always: size / (2 kB)
                                                                                    |
    Screen buffer                  PRES
    zcsa/presentation_buffer_area 4400000    Byte  Size of screen buffer
    sap/bufdir_entries            2000             Max. number of buffered screens
    |
    Generic key table buffer       TABL
    zcsa/table_buffer_area        30000000   Byte  Size of generic key table buffer
    zcsa/db_max_buftab            5000             Max. number of buffered objects
    |
    Single record table buffer     TABLP
    rtbb/buffer_length            10000      kB    Size of single record table buffer
    rtbb/max_tables               500              Max. number of buffered tables
    |
    Export/import buffer           EIBUF
    rsdb/obj/buffersize           4096       kB    Size of export/import buffer
    rsdb/obj/max_objects          2000             Max. number of objects in the buffer
    rsdb/obj/large_object_size    8192       Bytes Estimation for the size of the largest object
    rsdb/obj/mutex_n              0                Number of mutexes in Export/Import buffer
    |
    OTR buffer                     OTR
    rsdb/otr/buffersize_kb        4096       kB    Size of OTR buffer
    rsdb/otr/max_objects          2000             Max. number of objects in the buffer
    rsdb/otr/mutex_n              0                Number of mutexes in OTR buffer
    |
    Exp/Imp SHM buffer             ESM
    rsdb/esm/buffersize_kb        4096       kB    Size of exp/imp SHM buffer
    rsdb/esm/max_objects          2000             Max. number of objects in the buffer
    rsdb/esm/large_object_size    8192       Bytes Estimation for the size of the largest object
    rsdb/esm/mutex_n              0                Number of mutexes in Exp/Imp SHM buffer
    |
    Table definition buffer        TTAB
    rsdb/ntab/entrycount          20000            Max. number of table definitions buffered
    The size of the TTAB is nearly 100 bytes * rsdb/ntab/entrycount
                                                                                    |
    Field description buffer       FTAB
    rsdb/ntab/ftabsize            30000      kB    Size of field description buffer
    rsdb/ntab/entrycount          20000            Max. number / 2 of table descriptions buffered
    FTAB needs about 700 bytes per used entry
                                                                                    |
    Initial record buffer          IRBD
    rsdb/ntab/irbdsize            6000       kB    Size of initial record buffer
    rsdb/ntab/entrycount          20000            Max. number / 2 of initial records buffered
    IRBD needs about 300 bytes per used entry
                                                                                    |
    Short nametab (NTAB)           SNTAB
    rsdb/ntab/sntabsize           3000       kB    Size of short nametab
    rsdb/ntab/entrycount          20000            Max. number / 2 of entries buffered
    SNTAB needs about 150 bytes per used entry
                                                                                    |
    Calendar buffer                CALE
    zcsa/calendar_area            500000     Byte  Size of calendar buffer
    zcsa/calendar_ids             200              Max. number of directory entries
    |
    Roll, extended and heap memory EXTM
    ztta/roll_area                3000000    Byte  Roll area per workprocess (total)
    ztta/roll_first               1          Byte  First amount of roll area used in a dialog WP
    ztta/short_area               3200000    Byte  Short area per workprocess
    rdisp/ROLL_SHM                16384      8 kB  Part of roll file in shared memory
    rdisp/PG_SHM                  8192       8 kB  Part of paging file in shared memory
    rdisp/PG_LOCAL                150        8 kB  Paging buffer per workprocess
    em/initial_size_MB            4092       MB    Initial size of extended memory
    em/blocksize_KB               4096       kB    Size of one extended memory block
    em/address_space_MB           4092       MB    Address space reserved for ext. mem. (NT only)
    ztta/roll_extension           2000000000 Byte  Max. extended mem. per session (external mode)
    abap/heap_area_dia            2000000000 Byte  Max. heap memory for dialog workprocesses
    abap/heap_area_nondia         2000000000 Byte  Max. heap memory for non-dialog workprocesses
    abap/heap_area_total          2000000000 Byte  Max. usable heap memory
    abap/heaplimit                40000000   Byte  Workprocess restart limit of heap memory
    abap/use_paging               0                Paging for flat tables used (1) or not (0)
    |
    Statistic parameters
    rsdb/staton                   1                Statistic turned on (1) or off (0)
    rsdb/stattime                 0                Times for statistic turned on (1) or off (0)
    Regards,
    Jitender

  • LDAP Performance Tuning In Large Deployments - numconnect parameter

    LDAP Performance Tuning In Large Deployments - numconnect parameter
    Tuning the LDAP connections
    (numconnect parameter)
    This parameter translates directly into the number of unidas processes that will
    be launched when Calendar Server is started. A process takes time to load, uses
    RAM, and when active, CPU cycles. And, unidas maintains an LDAP client
    connection to a Directory Server which can only support a fixed number of these
    connections. Since a calendar client does not require constant directory access
    then having a matching number of unidas processes (to match uniengd "client"
    processes) is not a good configuration.
    Basically, a calendar client will make many requests for LDAP information, even
if the event information being retrieved is not currently viewable. For example,
    if the calendar client is displaying a week view with 20 events and each event
    has 5 attendees, that will translate into at least 100 separate ldap search
    requests for the given name and surname of each attendee. What this means is
    that an "active" calendar user will require the services of a calendar server
    unidas connection quite often.
The recommendation is that you increase the number of unidas connections
to match the number of "active" calendar users. Our experience is that
at least 20% of the configured users (lck_users from the
/users/unison/misc/unison.ini file) are actually logged in, and 10% of
those calendar users are active. For example, if you have 3000 configured
calendar users, 600 of them are logged in, and 10% of the logged-in users
are active, which translates into at least 60 unidas connections.
    Keep in mind that configured vs logged in vs active might be different at each
    customer site, so please adjust your number of unidas connections
    accordingly. To set this up, edit the /users/unison/log/unison.ini file and add
    the numconnect parameter to the section noted (where "hostname" is the name of
    your local host):
    [LCK]
    lck_users = 600
    [hostname,unidas]
    numconnect = 60
    The calendar server will need to be restarted after making changes
    to the /users/unison/log/unison.ini file, before those changes will
    take effect.
    Note: Due to some architectural changes in the Calendar Server 4.x, the total
    number of DAS connections should never be set higher than 250.
    Recommendations for num_connect would be a maximum of 5% of logged on users.
    However, keep in mind that 250 das connections is a very high number.
    Example:
    [LCK]
    lck_users = 5000
    [hostname,unidas]
    numconnect = 250

Thank you very much. I am now looking for a good performance tuning book written by Jonathan Lewis. I don't think Jonathan can come to Spain and give lessons... Anyway, I will email him...
But could you please clarify two points for me:
1. Should I manually modify memory parameters like the buffer cache, shared pool, large pool, etc., if those areas are flagged as small and as causes of performance problems in the AWR, ADDM or ASH reports, even if memory is automatically managed? If yes, why does Oracle call it "automatic memory management" if I have to set some memory values manually?
2. When an ADDM report suggests I increase the SGA size, where does ADDM get this recommendation from? I mean, is the recommendation based on statistics collected from both Oracle and the OS? I am asking because, in a report I ran three weeks ago, ADDM suggested increasing the SGA to 10 GB (the total memory of the server is 16 GB). I made the change, and from that moment the server has been swapping... and now the ADDM report suggests increasing the SGA again, to 12 GB.
Best regards

  • Performance tuning for Sales Order and its configuration data extraction

I am posting here the data-fetching subroutine of an extract report.
This report takes 2.5 hours to extract 36,000 records on the quality server.
Kindly provide some suggestions for tuning its performance.
        SELECT auart vkorg vtweg spart vkbur augru
                  kunnr yxinsto bstdk vbeln kvgr1 kvgr2 vdatu
                  gwldt audat knumv
                  FROM vbak
                  INTO TABLE it_vbak
                  WHERE vbeln IN s_vbeln
                  AND erdat IN s_erdat
                  AND  auart IN s_auart
                  AND vkorg = p_vkorg
                  AND spart IN s_spart
                  AND vkbur IN s_vkbur
                  AND vtweg IN s_vtweg.
      IF NOT it_vbak[] IS INITIAL.
        SELECT mvgr1 mvgr2 mvgr3 mvgr4 mvgr5
               yyequnr vbeln cuobj
               FROM vbap
               INTO TABLE it_vbap
               FOR ALL ENTRIES IN it_vbak
               WHERE vbeln  =  it_vbak-vbeln
               AND   posnr = '000010'.
        SELECT bstkd inco1 zterm vbeln
               prsdt
               FROM vbkd
               INTO TABLE it_vbkd
               FOR ALL ENTRIES IN it_vbak
               WHERE vbeln  =  it_vbak-vbeln.
        SELECT kbetr kschl knumv
               FROM konv
               INTO TABLE it_konv
               FOR ALL ENTRIES IN it_vbak
               WHERE knumv  =  it_vbak-knumv
               AND   kschl  =  'PN00'.
        SELECT vbeln parvw kunnr
               FROM vbpa
               INTO TABLE it_vbpa
               FOR ALL ENTRIES IN it_vbak
               WHERE vbeln  =  it_vbak-vbeln
               AND parvw IN ('PE', 'YU', 'RE').
      ENDIF.
      LOOP AT it_vbap INTO wa_vbap.
        IF NOT wa_vbap-cuobj IS INITIAL.
          CALL FUNCTION 'VC_I_GET_CONFIGURATION'
               EXPORTING
                    instance            = wa_vbap-cuobj
                    language            = sy-langu
               TABLES
                    configuration       = it_config
               EXCEPTIONS
                    instance_not_found  = 1
                    internal_error      = 2
                    no_class_allocation = 3
                    instance_not_valid  = 4
                    OTHERS              = 5.
          IF sy-subrc = 0.
            READ TABLE it_config WITH KEY atnam  =  'IND_PRODUCT_LINES'.
            IF sy-subrc  =  0.
              wa_char-obj  =  wa_vbap-cuobj.
              wa_char-atnam  =  it_config-atnam.
              wa_char-atwrt  =  it_config-atwrt.
              APPEND wa_char TO it_char.
              CLEAR wa_char.
            ENDIF.
            READ TABLE it_config WITH KEY atnam  =  'IND_GQ'.
            IF sy-subrc  =  0.
              wa_char-obj  =  wa_vbap-cuobj.
              wa_char-atnam  =  it_config-atnam.
              wa_char-atwrt  =  it_config-atwrt.
              APPEND wa_char TO it_char.
              CLEAR wa_char.
            ENDIF.
            READ TABLE it_config WITH KEY atnam  =  'IND_VKN'.
            IF sy-subrc  =  0.
              wa_char-obj  =  wa_vbap-cuobj.
              wa_char-atnam  =  it_config-atnam.
              wa_char-atwrt  =  it_config-atwrt.
              APPEND wa_char TO it_char.
              CLEAR wa_char.
            ENDIF.
            READ TABLE it_config WITH KEY atnam  =  'IND_ZE'.
            IF sy-subrc  =  0.
              wa_char-obj  =  wa_vbap-cuobj.
              wa_char-atnam  =  it_config-atnam.
              wa_char-atwrt  =  it_config-atwrt.
              APPEND wa_char TO it_char.
              CLEAR wa_char.
            ENDIF.
            READ TABLE it_config WITH KEY atnam  =  'IND_HQ'.
            IF sy-subrc  =  0.
              wa_char-obj  =  wa_vbap-cuobj.
              wa_char-atnam  =  it_config-atnam.
              wa_char-atwrt  =  it_config-atwrt.
              APPEND wa_char TO it_char.
              CLEAR wa_char.
            ENDIF.
        READ TABLE it_config WITH KEY atnam  =  'IND_CALCULATED_INST_HOURS'.
            IF sy-subrc  =  0.
              wa_char-obj  =  wa_vbap-cuobj.
              wa_char-atnam  =  it_config-atnam.
              wa_char-atwrt  =  it_config-atwrt.
              APPEND wa_char TO it_char.
              CLEAR wa_char.
            ENDIF.
          ENDIF.
        ENDIF.
      ENDLOOP. " End of loop on it_vbap
    Edited by: jaya rangwani on May 11, 2010 12:50 PM
    Edited by: jaya rangwani on May 11, 2010 12:52 PM

Hello Jaya,
Here are some points which will improve the performance of the program:
1.     VBAK and VBAP are header and item tables, so the relation is one-to-many. In this case you can use an inner join instead of multiple SELECT statements.
2.     If you are confident in handling inner joins, you can get the data from VBAK, VBAP and VBKD with a single statement using inner joins.
3.     Before using FOR ALL ENTRIES, check that the driver internal table is not initial, and sort it and delete adjacent duplicates.
4.     Sort all the resulting internal tables by the required key fields and always read them using a binary search.
There are a number of documents from which you can get a fair idea of what should and should not be done in a program with performance issues.
There are also a number of function modules and BAPIs that return sales order details; you can try 'BAPISDORDER_GETDETAILEDLIST'.
Regards,
Selva K.

  • Performance tuning: lite sessions and local ServletContext

    I have been doing some research on iPlanet performance tuning. In our
    current production environment (iAS6.0 SP1B, iWS4.1 SP2 on Solaris), since
    we don't use clustering there should be a couple of performance improvements
    we can make immediately:
    1. Use lite sessions (<session-impl>lite</session-impl> in ias-web.xml) - I
    believe that if you use lite sessions, the session data is stored in the kjs
    process space as opposed to the kxs process space. This, of course, means
that if a kjs dies, the users on it will lose their session information, but
    it will provide a performance improvement by reducing kxs/kjs communication.
    2. Use local ServletContexts (<distributable>false</distributable> in
    web.xml) - This should cause the ServletContext to only be stored in the
    originating JVM. So again, if a kjs dies, the user will lose their
    ServletContext but again we will get a performance improvement by reducing
kxs/kjs communication.
What I want to understand is how our load balancing configuration will
affect our production environment if we use this configuration. Right now
    we use sticky load balancing on all our servlets but we don't have our JSPs
    registered and therefore sticky load balancing cannot always be trusted to
    return users to the iAS they came from. We make up for this by using
    hardware load balancing that keeps the majority of our users sticky.
    However, using lite sessions and local ServletContexts will require that a
    user not only stick to an iAS, but to a specific kjs as well. Using sticky
    load balancing would ensure that, but since we also rely on our hardware
    load balancers, could they create a problem? If a user gets sent back to
    the iAS they came from by our hardware load balancers, will the kxs process
    be smart enough to return them to the kjs they came from? If so, then I
    think that means that we can safely switch to lite sessions and local
    ServletContexts, but if not, I think many users will lose their sessions.
    Thanks,
    Linc

Please follow this link for your answers:
    http://developer.iplanet.com/viewsource/char_tuningias/index.jsp
    Thanks
    Shital Patel

  • Performance Tuning for OBIEE Reports

    Hi Experts,
I had a requirement for which I ended up building a snowflake model in the physical layer, i.e. one dimension table with three snowflake tables (materialized views).
The key point is that the dimension table is used in most of the OOTB reports,
so all the reports now use the other three snowflake tables in their join conditions, due to which the reports take longer than ever, around 10 minutes.
Can anyone suggest good performance tuning tips for these reports?
I created some indexes on the materialized view columns and on the dimension table columns.
I created the materialized views with caching enabled and a refresh only once every 24 hours, etc.
Is there anything else I can do to improve performance, or should I consider re-designing the physical layer without the snowflake?
Please provide valuable suggestions and comments.
    Thank You
    Kumar

Kumar,
Most of the performance tuning should be done at the back end, so calculate all the aggregates in the repository itself and create a fast refresh for the MVs. You can also schedule an iBot to run the report every hour or so, so that the report data is cached and the BI Server can serve the user's run of the report from the cache.
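As a rough illustration of the fast-refresh idea (all table, column and MV names here are hypothetical, not taken from your model):
CREATE MATERIALIZED VIEW LOG ON sales_fact
WITH ROWID (prod_id, amount) INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW mv_sales_by_prod
REFRESH FAST ON DEMAND
AS SELECT prod_id,
          SUM(amount)   AS total_amount,
          COUNT(amount) AS cnt_amount,  -- COUNT columns are needed for fast refresh of a SUM
          COUNT(*)      AS cnt
   FROM sales_fact
   GROUP BY prod_id;
With the log in place, DBMS_MVIEW.REFRESH('MV_SALES_BY_PROD', 'F') applies only the changed rows instead of rebuilding the whole MV.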
    Hope that helps
    ~Srix

  • Performance Tuning Tips

Dear All,
In our project we are facing a lot of problems with performance. Users are complaining about the poor performance of a few reports, and we are in the process of fine-tuning the reports by following all the methods/suggestions provided by SAP (like removing SELECT queries from loops, FOR ALL ENTRIES, binary search, etc.).
But I still want to know from you what we can check from a BASIS perspective (all the settings) and also from an ABAP perspective to improve performance.
I also have one more query: what are "table statistics", and what are they used for?
Please give your valuable suggestions on improving the performance.
Thanks in advance!

    Hi
    <b>Ways of Performance Tuning</b>
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For all Entries
•     Select over more than one internal table
    <b>Selection Criteria</b>
1.     Restrict the data using the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement.
    2.     Select with selection list.
    <b>Points # 1/2</b>
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH' AND
        CONNID = '0400'.
    <b>Select Statements   Select Queries</b>
    1.     Avoid nested selects
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    4.     For testing existence , use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit. 
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    <b>Point # 1</b>
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
    FROM EKKO AS P INNER JOIN EKAN AS F
      ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops  only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    <b>Point # 2</b>
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH' AND
        CONNID = '0400'.
    <b>Point # 3</b>
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields . In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
    <b>Point # 4</b>
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    <b>Point # 5</b>
    If all primary key fields are supplied in the Where condition you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    <b>Select Statements           contd..  SQL Interface</b>
    1.     Use column updates instead of single-row updates
    to update your database tables.
    2.     For all frequently used Select statements, try to use an index.
    3.     Using buffered tables improves the performance considerably.
    <b>Point # 1</b>
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    <b>Point # 2</b>
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    <b>Point # 3</b>
Bypassing the buffer increases the network load considerably
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    <b>Select Statements       contd…           Aggregate Functions</b>
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
            Maxno = 0.
            Select * from zflight where airln = 'LF' and cntry = 'IN'.
             Check zflight-fligh > maxno.
             Maxno = zflight-fligh.
            Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    <b>Select Statements    contd…For All Entries</b>
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
<u>Points that must be considered for FOR ALL ENTRIES</u>
•     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
    Loop at int_cntry.
           Select single * from zfligh into int_fligh
    where cntry = int_cntry-cntry.
    Append int_fligh.
    Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    <b>Select Statements    contd…  Select Over more than one Internal table</b>
1.     It is better to use a view instead of nested Select statements.
2.     To read data from several logically connected tables, use a join instead of nested Select statements. Joins are preferred only if all the primary key fields are available in the WHERE clause for the tables being joined; if the primary keys are not supplied in the join, the join itself takes time.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    <b>Point # 1</b>
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from view DD01V_WA
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT
    <b>Point # 2</b>
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
    FROM EKKO AS P INNER JOIN EKAN AS F
      ON P~EBELN = F~EBELN.
    <b>Point # 3</b>
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    <b>Internal Tables</b>
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
    <b>Point # 2</b>
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    <b>Point # 3</b>
    READ TABLE ITAB INTO WA WITH KEY K = 'X'. IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    <b>Point # 5</b>
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    <b>Point # 6</b>
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
8.    If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    <b>Point # 7</b>
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    <b>Point # 8</b>
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    <b>Point # 9</b>
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    <b>Point # 10</b>
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    <b>Point # 11</b>
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables by using “ITAB2[ ] = ITAB1[ ]” as compared to “LOOP-APPEND-ENDLOOP”.
    13.   Specify the sort key as restrictively as possible to run the program faster.
    <b>Point # 12</b>
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    <b>Point # 13</b>“SORT ITAB BY K.” makes the program runs faster as compared to “SORT ITAB.”
    <b>Internal Tables         contd…
    Hashed and Sorted tables</b>
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    <b>Point # 1</b>
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    <b>Point # 2</b>
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
<b>Reward if useful</b>

  • Performance Tuning for BAM 11G

    Hi All
Can anyone guide me to any documents or tips related to performance tuning for BAM 11g on Linux?

It would help to know if you have a specific issue. There are a number of tweaks, all the way from the DB to the browser.
A few key things to follow:
1. Make sure you create indexes on the DO (data object). If there is a lot of old, unneeded data in the DO, delete it periodically. Similar to relational database indexes, defining indexes in Oracle BAM creates and maintains an ordered list of data object elements for fast retrieval.
2. Ensure that IE is set up to do automatic caching. This will help reduce server round trips.
3. Tune DB performance. This would typically require a DBA. Identify the SQL statements most likely to be causing the waits by looking at
the drilldown "Top SQL Statements Ordered by Wait Time". Use SQL Analyze, EXPLAIN PLAN, or the tkprof utility to tune the queries that were identified.
Check the data object tables involved in the query for missing indexes.
4. Use batching (this is on by default for most cases).
5. Use a fast network.
6. Use profilers to look at machine load/CPU usage, and distribute components onto different boxes if needed.
7. Use better server AND client hardware. BAM dashboards are heavy users of AJAX/JavaScript logic on the client.

  • Performance Tuning Section in Business Blueprint

    Dear Gurus,
I am at a client location doing a blueprint. We are at the closure stage of the business blueprint, and one of the sections that needs to be added is Performance Tuning. Can anybody suggest what content should go into the Performance Tuning section of the blueprint? If anybody has the content for this section, please forward it to my mail id, i.e. [email protected]
    Thanks and regards
    Vijay

Hi,
Think about all the things BI is made up of and how to tune them.
Some hints:
1) Performance tuning for data loads: number range buffering, parallel loading, pseudo delta, changing the technical properties of tables if records > 100,000, IDOC clogging, etc.
2) Performance tuning for queries: aggregates, cube modelling, OLAP cache, efficient MultiProvider queries, etc.
Mention the transactions you will require for these things, such as RSRV, RSRTRACE, ST03N, ST05, etc.
gaurav

  • Oracle Database 11g: Performance Tuning exam 1Z0-054

    Hi,
    I want to take the certification Oracle Database 11g: Performance Tuning exam 1Z0-054 and want to write the exam.
    To earn the OCE credentials, I know that we need to take training from Oracle University. I am not an OCP currently.
    Can I write the exam now and later take the course to meet the certification requirements? I know it is not the right way, but I can not pay $3000 for the course as of now. I can write the exam through self study. I want to check if I can take the exam now and later take the course.
    Is it mandatory to take the course before taking the exam?
    Thanks,
    naveen.

Naveen Kumar C wrote:
Can I write the exam now and later take the course to meet the certification requirements? Is it mandatory to take the course before taking the exam?
Indeed you can take the exam now, and subsequently earn the 11g PERFTUNE OCE credential when you either:
1) complete and submit the OU-authorised 11g performance tuning course and have it verified (possible, but it seems a little cart-before-horse), or
2) become an 11g DBA OCP.

  • Beta Testing Continues for "Oracle Database 11g: Performance Tuning" Exam

The beta period continues for the <strong>Oracle Database 11g: Performance Tuning certification exam (1Z1-054)</strong>, which is a single-exam requirement for Oracle 11g DBA OCPs to earn the Oracle Database 11g Performance Tuning Certified Expert (OCE) certification. Read more on the Oracle Certification Blog: http://bit.ly/UGJh4

Hello Hussein
Yes, true. I remember that for the OCE and Linux exams they rescheduled the end date several times. As far as I know it is related to the number of participants and the feedback given.
I have also participated in several other beta exams, and I must admit that it is a long and hard process to get through. When I got the feedback, 10 weeks after the beta period closed, I had to review nearly all the topics to pass the exam on the second attempt. But it is a cheap and good exam preparation.
What about you, Hussein? Do you think it's trivial?
Cheers,
Hub
