Performance tuning

Hi all,
How can we replace the statements below to improve performance?
DATA: BEGIN OF i_clint OCCURS 0,
        clint LIKE klah-clint,
      END OF i_clint.
DATA: i_kssk LIKE i_clint OCCURS 0.
READ TABLE idoc_data WITH KEY segnam = c_e1lfa1m.
struct_e1lfa1m = idoc_data-sdata.
ws_lifnr = struct_e1lfa1m-lifnr.
READ TABLE idoc_data WITH KEY segnam = c_e1lfb1m.
IF sy-subrc EQ 0.
  struct_e1lfb1m = idoc_data-sdata.
  IF struct_e1lfb1m-zwels NE '/' OR  "Payment method
     struct_e1lfb1m-hbkid NE '/'.    "House bank
Please help me with this.

Hey,
I didn't get it 100%, but you can try this:
I hope it solves your question.
REPORT ztmtest8.

* Types
TYPES: BEGIN OF t_clint,
         clint TYPE klah-clint,
       END OF t_clint.

* Internal tables
DATA: it_kssk TYPE TABLE OF t_clint.

* Work areas
DATA: wa_kssk      TYPE t_clint,
      wa_idoc_data LIKE LINE OF idoc_data.

* Constants
CONSTANTS: c_bar TYPE c VALUE '/'.

* Logic
SORT idoc_data BY segnam.

READ TABLE idoc_data INTO wa_idoc_data WITH KEY segnam = c_e1lfa1m
                                       BINARY SEARCH.
IF sy-subrc IS INITIAL.
  struct_e1lfa1m = wa_idoc_data-sdata.
  ws_lifnr       = struct_e1lfa1m-lifnr.
ENDIF.

READ TABLE idoc_data INTO wa_idoc_data WITH KEY segnam = c_e1lfb1m
                                       BINARY SEARCH.
IF sy-subrc IS INITIAL.
  struct_e1lfb1m = wa_idoc_data-sdata.
  IF struct_e1lfb1m-zwels NE c_bar OR  "Payment method
     struct_e1lfb1m-hbkid NE c_bar.    "House bank
    " ...
  ENDIF.
ENDIF.
Edited by: Thiago Moya on Feb 27, 2008 3:01 PM

Similar Messages

  • Oracle 11g performance tuning book

    Please suggest a good book for Oracle 11g performance tuning,
    one which gives a complete picture of performance tuning.

    888412 wrote:
    Please suggest a good book for Oracle 11g performance tuning,
    one which gives a complete picture of performance tuning.
    Define what you mean by "complete". If you are willing to read just one book to learn all about performance tuning, well, guess what: there isn't one. Every book you are going to read from a good author, like the one suggested by Martin, is going to be useful. And just so you know, 12c is the latest release.
    HTH
    Aman....

  • DRM performance tuning guide

    Hi All,
    Does anyone have a performance tuning guide for DRM?
    What are all the possible areas we need to tune?
    1. DB side (if any parameters need to change, please suggest them)
    2. Server-side tuning
    3. App pool and IIS-level tuning (if any, please suggest)
    4. Windows OS-level tuning (software-level and hardware-level)
    Thanks

    Hi,
    What are the performance issues that you are having, and what version of DRM are you on?
    Thanks
    Denzz

  • Performance tuning regarding a custom program

    Hi all,
    How can we replace the statements below to improve performance?
    DATA: BEGIN OF i_clint OCCURS 0,
            clint LIKE klah-clint,
          END OF i_clint.
    DATA: i_kssk LIKE i_clint OCCURS 0.
    READ TABLE idoc_data WITH KEY segnam = c_e1lfa1m.
    struct_e1lfa1m = idoc_data-sdata.
    ws_lifnr = struct_e1lfa1m-lifnr.
    READ TABLE idoc_data WITH KEY segnam = c_e1lfb1m.
    IF sy-subrc EQ 0.
      struct_e1lfb1m = idoc_data-sdata.
      IF struct_e1lfb1m-zwels NE '/' OR  "Payment method
         struct_e1lfb1m-hbkid NE '/'.    "House bank
    Please help me with this.

    Hi,
    As SAP recommends, do not declare internal tables with a header line. Create an explicit work area for the internal table and read into it, or use field symbols (ASSIGNING).
    Also, when performing a READ on an internal table, use BINARY SEARCH. As a precaution, always sort the internal table by the fields used in the WITH KEY clause first.
    Cheers.
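    Putting that advice together, a minimal sketch (EDIDD is the standard IDoc data-record structure; the constant c_e1lfa1m and the target structure are assumed from the question above):

```abap
* Internal table without a header line, read via a field symbol.
DATA: it_idoc TYPE STANDARD TABLE OF edidd.
FIELD-SYMBOLS: <fs_idoc> TYPE edidd.

* Sort by the key field first, so BINARY SEARCH is valid.
SORT it_idoc BY segnam.

READ TABLE it_idoc ASSIGNING <fs_idoc>
     WITH KEY segnam = c_e1lfa1m
     BINARY SEARCH.
IF sy-subrc = 0.
  " Work directly on the table line, no copy into a work area.
  struct_e1lfa1m = <fs_idoc>-sdata.
ENDIF.
```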

  • Enq: TX - row lock contention problem

    Hi,
    DB version: 10.2.0.4
    OS: Solaris.
    I have upgraded my database from 9.2.0.4 to 10.2.0.4 using exp/imp, as my database is small.
    I created a new instance of 10g and changed the parameter values as in 9i (as required), then imported from the 9i into the 10g instance.
    After importing into the 10g instance we are facing an application-wide performance problem; the response time of the application is very slow.
    I have taken AWR reports at various times and have seen that
    SELECT puid,ptimestamp FROM PPOM_OBJECT WHERE puid IN (:1) FOR UPDATE
    this query is causing the problem: enq: TX - row lock contention.
    Cache Sizes
    ~~~~~~~~~~~                       Begin        End
                   Buffer Cache:       756M       756M  Std Block Size:         8K
               Shared Pool Size:       252M       252M      Log Buffer:     1,264K
    Load Profile
    ~~~~~~~~~~~~                            Per Second       Per Transaction
                      Redo size:              2,501.54              3,029.25
                  Logical reads:              2,067.79              2,504.00
                  Block changes:                 17.99                 21.78
                 Physical reads:                  0.02                  0.03
                Physical writes:                  0.41                  0.50
                     User calls:                140.74                170.44
                         Parses:                139.55                168.99
                    Hard parses:                  0.01                  0.01
                          Sorts:                 10.65                 12.89
                         Logons:                  0.32                  0.38
                       Executes:                139.76                169.24
                   Transactions:                  0.83
      % Blocks changed per Read:    0.87    Recursive Call %:    17.60
    Rollback per transaction %:    0.00       Rows per Sort:    16.86
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                Buffer  Hit   %:  100.00    In-memory Sort %:  100.00
                Library Hit   %:  100.03        Soft Parse %:  100.00
             Execute to Parse %:    0.15         Latch Hit %:   99.89
    Parse CPU to Parse Elapsd %:   93.19     % Non-Parse CPU:   94.94
    Shared Pool Statistics        Begin    End
                 Memory Usage %:   86.73   86.55
        % SQL with executions>1:   90.99   95.33
      % Memory for SQL w/exec>1:   79.15   90.58
    Top 5 Timed Events                                         Avg %Total
    ~~~~~~~~~~~~~~~~~~                                        wait   Call
    Event                                 Waits    Time (s)   (ms)   Time Wait Class
    CPU time                                            397          86.3
    enq: TX - row lock contention           508          59    115   12.7 Applicatio
    log file sync                         2,991           5      2    1.1     Commit
    log file parallel write               3,238           5      2    1.1 System I/O
    SQL*Net more data to client          59,871           4      0    1.0    Network
    Time Model Statistics              DB/Inst: WGMUGPR2/wgmugpr2  Snaps: 706-707
    -> Total time in database user-calls (DB Time): 460.5s
    -> Statistics including the word "background" measure background process
       time, and so do not contribute to the DB time statistic
    -> Ordered by % or DB time desc, Statistic name
                                                                       Avg
                                                 %Time  Total Wait    wait     Waits
    Event                                 Waits  -outs    Time (s)    (ms)      /txn
    enq: TX - row lock contentio            508     .0          59     115       0.2
    log file sync                         2,991     .0           5       2       1.0
    log file parallel write               3,238     .0           5       2       1.1
    SQL*Net more data to client          59,871     .0           4       0      20.1
    control file parallel write           1,201     .0           1       1       0.4
    SQL*Net more data from clien          3,393     .0           1       0       1.1
    SQL*Net message to client           509,864     .0           1       0     170.9
    os thread startup                         3     .0           1     196       0.0
    db file parallel write                  845     .0           1       1       0.3
    -> % Total DB Time is the Elapsed Time of the SQL statement divided
       into the Total Database Time multiplied by 100
      Elapsed      CPU                  Elap per  % Total
      Time (s)   Time (s)  Executions   Exec (s)  DB Time    SQL Id
            59          1        1,377        0.0    12.9 bwnt27fp0z3gm
    Module: syncdizio_op@snstr09 (TNS V1-V3)
    SELECT puid,ptimestamp FROM PPOM_OBJECT WHERE puid IN (:1) FOR UPDATE
            41         41          459        0.1     8.9 8cdswsp7cva2h
    Module: syncdizio_op@snstr09 (TNS V1-V3)
    select rpad(argument_name, 32, ' ') || in_out || ' ' || nvl(type_subname, data_t
    ype) info from user_arguments where package_name IS NULL and object_name = uppe
    r(:1) and argument_name is not null order by object_name, position
            39         38        7,457        0.0     8.4 271hn6sgra2d8
    Module: syncdizio_op@snstr09 (TNS V1-V3)
    SELECT DISTINCT t_0.puid FROM PIMANTYPE t_0 WHERE (UPPER(t_0.ptype_name) = UPPER
    (:1))
            23         22          459        0.0     4.9 g92t08k78tgrw
    Module: syncdizio_op@snstr09 (TNS V1-V3)
    SELECT PIMANTYPE.puid, ptimestamp, ppid, rowning_siteu, rowning_sitec, pis_froze
    n, ptype_class, ptype_name FROM PPOM_OBJECT, PIMANTYPE WHERE PPOM_OBJECT.puid =
    (PIMANTYPE.puid)
            22         22      158,004        0.0     4.9 chqpmv9c05ghq
    Module: syncdizio_op@snstr09 (TNS V1-V3)
    SELECT puid,ptimestamp FROM PPOM_OBJECT WHERE puid = :1
            17         17        2,294        0.0     3.7 3n5trh11n1x8w
    Module: syncdizio_op@snstr09 (TNS V1-V3)
    SELECT PTYPECANNEDMETHOD.puid, ptimestamp, ppid, rowning_siteu, rowning_sitec, p
    is_frozen, pobject_desc, psecure_bits,VLA_344_5, pmethod_name, pmsg_name, ptype_
    name, pexec_seq, paction_type FROM PPOM_OBJECT,PBUSINESSRULE, PTYPECANNEDMETHOD
    WHERE PTYPECANNEDMETHOD.puid IN (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,
    In 9i there is a parameter ENQUEUE_RESOURCES, but in 10g Release 2 it has been made obsolete.
    I am new to performance tuning, please advise me!
    Regards
    Vamshi

    The CBO has changed substantially between 9.2.x and 10.2.x. Pl see MOS Doc 754931.1 (Cost Based Optimizer - Common Misconceptions and Issues - 10g and Above). Pl verify that statistics have been gathered and are current - pl see MOS Doc 605439.1 (Master Note: Recommendations for Gathering Optimizer Statistics on 10g).
    Looking at your output, it seems to me that the database is entirely CPU-bound: 86.3% of the time is spent on CPU. For the last 5 SQL statements in the output, all of the elapsed time is spent on CPU.
    Pl post your init.ora parameters, along with your hardware specs. This question might be more appropriate in the "Database - General" forum.
    HTH
    Srini

  • Performance reports and measures for Online Trading Customer's databases

    We have a client in the online commodity-trading domain. Their databases are running in a healthy state.
    On a daily basis we share with them an AWR report for the time span 9:30 AM to 10:30 AM (peak trading hours).
    Can you suggest what recommendations we can include in this AWR report, or how we can study this daily AWR report in order to make recommendations to the customer?
    Apart from this, what other performance reports can we share with the customer that will let the client view their database state in terms of performance tuning?
    We can share daily database checks, but I am trying to find some good performance reports that can be extracted and shared with the client.
    What performance reports are you sharing, or can you suggest, in this regard?
    Thanks, friends, in advance.

    Hi
    Ankit Ashok Aggarwal wrote:
    We have a client in the online commodity-trading domain. Their databases are running in a healthy state. We on daily basis share with them AWR report of time span 9:30AM to 10:30AM (peak trading hours).
    I have written a few blog posts about AWR analysis (http://savvinov.com/category/awr/) -- some of them may be useful to you. However, AWR reports are not so good for monitoring purposes. Monitoring is about trends -- AWR doesn't have that. Plus, application performance should be measured in its native metrics (KPIs, or "key performance indicators"). For instance, for an online shop that would be the number of orders processed, the average time it takes to process an order, etc. Low-level stats such as the number of table scans per unit time or IO stats don't really tell whether or not an application is performing satisfactorily.
    From the database point of view, a simple overview of the OEM Performance Page should be enough to get a basic idea of whether the database is OK.
    can you suggest what all recommendations we can include in this AWR report or how we can study on daily basis this AWR report extracted to mention recommendation to customer.
    I've seen a lot of such recommendations -- 99% of them are garbage, and the customers treat them accordingly. Unless there are clear signs of a performance issue, recommendations are generally neither necessary nor possible (without additional information about the application). If it ain't broken, don't fix it.
    apart from this what other performance reports we can share with customer which will make client view their database stat in terms of performance tuning and all. We can share daily database checks and all but I am trying to have some good performance end reports which can be extracted to share with client?
    Ideally, you should have SLAs with your customers which clearly define how much time a certain user action should take. Without an SLA and/or specific complaints from the users there is little you can do about database performance, except when something obvious shows up on the report.
    Best regards,
    Nikolay

  • Awr dbms_sqltune  package

    Hi,
    Our company doesn't currently have OEM installed on production. This will be done in March. Right now I am working with the sqltune package to access the database on the dev server before I run it on production. I was running the procedure below and requested the GOLDUSER schema.
    Down in the load_sqlset procedure I think I have the load_option and update_option set to the correct values; I could be wrong, though. By setting these, all I want is to extract the old SQL statements and performance-tune them.
    I get those SQL statements listed out, and I also get a differently named schema under "tables with new potential indices" (LEADUSERS).
    Should I be concerned about this? Would someone just look this procedure over to see if it is correct?
    I would appreciate your help in this matter.
    Thanks in advance.
    al
    declare
      cursor_1 dbms_sqltune.sqlset_cursor;
    begin
      open cursor_1 for
        select value(p)
        from table(dbms_sqltune.select_workload_repository(
               750,
               1501,
               'parsing_schema_name = ''GOLDUSER'' AND executions > 25',
               null,
               null,
               null,
               null,
               null,
               10)) p;
      dbms_sqltune.load_sqlset(
        sqlset_name     => 'prod_awr',
        populate_cursor => cursor_1,
        load_option     => 'MERGE',
        update_option   => 'ACCUMULATE');
    end;
    /

    Check this one:
    http://www.oracle-base.com/articles/10g/AutomaticSQLTuning10g.php
    OR
    DECLARE
      ret_val VARCHAR2(4000);
    BEGIN
      ret_val := dbms_sqltune.create_tuning_task(
                   task_name => 't1',
                   sql_id    => ' ');  -- execute the SQL first and put its sql_id here
      dbms_sqltune.execute_tuning_task('t1');
    END;
    /
    Check the status with:
    SELECT status FROM DBA_ADVISOR_LOG WHERE task_name = 't1';
    Upon completion of the above:
    SET LONG 100000
    SET LONGCHUNKSIZE 99999
    SET LINESIZE 20000
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('t1') FROM DUAL;

  • WebCrawler Stats

    Hey Everyone :)
    Over the last 2 weeks, as some of the regulars might know, I've been building a web crawler. 1 week ago my web crawler took 37 minutes to find 14000 sites and search 1150 of them; today, after just a little bit of code performance tuning (thanks to members on this forum), my web crawler finds 5133 sites a minute and searches 560 of them (averages).
    So instead of taking 37 minutes to find 14000 sites, it now takes me 3 minutes to find 15000 sites and search 560 of them.
    I'm running Windows XP, one processor (not dual, which would allow parallel threads, haha, I wish I had it though) and 512 MB of RAM.
    Now, the reason I'm writing this is that I need to know whether my current stats are as fast as I can go, or can I still go faster?
    Thanks in advance, Nick

    Yeah, I remember. Someone already pointed out that multithreading will be faster, because you'd reduce the impact of externally induced wait cycles. If you'd like to see whether your source can be tuned for performance, use a profiling tool. But bear in mind:
    1) optimization might not be worth the effort taken;
    2) it also might make your code unreadable and more difficult to maintain;
    3) the bottleneck is still the connection to the webserver and the resulting traffic.
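    To illustrate the multithreading point: even on one CPU, overlapping the network waits with a small thread pool helps. A minimal sketch (class and method names are illustrative, not from your code; fetch is a stand-in for the real HTTP request):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CrawlerPool {

    // Stand-in for the real page fetch; in the crawler this would be the
    // HTTP request, which is where all the externally induced waiting happens.
    static String fetch(String url) {
        return "content-of-" + url;
    }

    // Fetch all URLs on a fixed pool of worker threads, preserving input order.
    static List<String> fetchAll(List<String> urls, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String url : urls) {
                futures.add(pool.submit(() -> fetch(url)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // blocks until that task finishes
            }
            return results;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(fetchAll(Arrays.asList("a.com", "b.com", "c.com"), 2));
    }
}
```

    While one thread is stuck waiting on a slow server, the others keep working, so throughput improves even without a second CPU.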

  • How to tune up VA05

    Dear gurus,
    While executing VA05,
    giving a material number,
    a date range from 01.04.2010 to 01.04.2010,
    sales organization 1000,
    distribution channel 10,
    it takes more than an hour to execute. Why is that so, and how can it be tuned?
    Please help.
    Regards
    Saad Nisar

    Hi Saad,
    This is a standard program (SAPMV75A), right? The normal tools in SAP for performance analysis are:
    Runtime analysis, transaction SE30
    This transaction gives a full analysis of an ABAP program with respect to database and non-database processing.
    SQL Trace, transaction ST05
    The trace list has many lines that are not related to the SELECT statement in the ABAP program. This is because the execution of any ABAP program requires additional administrative SQL calls. To restrict the list output, use the filter when producing the trace list.
    The trace list contains several SQL statements simultaneously related to the one SELECT statement in the ABAP program. This is because the R/3 Database Interface - a sophisticated component of the R/3 Application Server - maps every Open SQL statement to one or a series of physical database calls and brings it to execution. This mapping, crucial to R/3's performance, depends on the particular call and database system. For example, the SELECT-ENDSELECT loop on the SPFLI table in our test program is mapped to a PREPARE-OPEN-FETCH sequence of physical calls in an Oracle environment.
    The WHERE clause in the trace list's SQL statement is different from the WHERE clause in the ABAP statement. This is because in an R/3 system, a client is a self-contained unit with separate master records and its own set of table data (in commercial, organizational, and technical terms). With ABAP, every Open SQL statement automatically executes within the correct client environment. For this reason, a condition with the actual client code is added to every WHERE clause if a client field is a component of the searched table.
    To see a statement's execution plan, just position the cursor on the PREPARE statement and choose Explain SQL. A detailed explanation of the execution plan depends on the database system in use.
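    To illustrate the client handling mentioned above, an Open SQL statement like the following (SPFLI is the standard flight demo table; this snippet is just an illustration, not from VA05):

```abap
* Open SQL: no client field appears anywhere in the statement.
DATA: ls_spfli TYPE spfli.
SELECT carrid connid cityfrom cityto
  FROM spfli
  INTO CORRESPONDING FIELDS OF ls_spfli
  WHERE cityfrom = 'FRANKFURT'.
ENDSELECT.
```

    shows up in the ST05 trace with the client added automatically by the database interface, roughly as WHERE "MANDT" = :A0 AND "CITYFROM" = :A1.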
    Kindly take an ABAPer's help.
    Regards,
    Ram Pedarla

  • Pls. help tune this query

    This is the SQL I would like to tune for performance...
    The table structure is given below.
    The table has about 5 million rows.
    On the first day, load_flag is 'I' for all rows.
    Then, from the second day onwards, only around 10% of the records will fall between Load_Start_Time and Load_End_Time. Among these, around 40% will have Record_key like 'TP%'. And among those, most of the records (95%) will have load_flag 'U' and a very few (5%) 'I'. At present there are unique and primary key indexes on record_key. Please advise me whether it's better to go for an index on any of these columns, and which type would be better. I thought it would help to have a bitmap index on load_flag and a function-based index on SUBSTR (RECORD_KEY).
    Also, please let me know if the order of the predicates is right.
    Thanks in advance.
    M_STG_TPDB_TPD_TL_W_PH_LOI_CNTBLTY_STATUS_CE     SQ_STG_TPD_STG_TL_CS_EXTRACTED_RECS_MOD     "SELECT TPD_STG_TL_CS_EXTRACTED_RECS.RECORD_KEY, TPD_STG_TL_CS_EXTRACTED_RECS.DATA_SOURCE, TPD_STG_TL_CS_EXTRACTED_RECS.CONTACTABLE_INDICATOR, TPD_STG_TL_CS_EXTRACTED_RECS.LEGAL_OWNERSHIP_ISSUE_IND, TPD_STG_TL_CS_EXTRACTED_RECS.ADMIN_CONTROL_INDICATOR, TPD_STG_TL_CS_EXTRACTED_RECS.BANKRUPTCY_INDICATOR, TPD_STG_TL_CS_EXTRACTED_RECS.ASSIGNED_INDICATOR, TPD_STG_TL_CS_EXTRACTED_RECS.IN_TRUST_INDICATOR, TPD_STG_TL_CS_EXTRACTED_RECS.DIVORCE_CASE_INDICATOR, TPD_STG_TL_CS_EXTRACTED_RECS.POA_COP_INDICATOR, TPD_STG_TL_CS_EXTRACTED_RECS.SOURCE_EXTRACT_DATE_TIME
    FROM
    TPD_STG_TL_CS_EXTRACTED_RECS
    WHERE
    LOAD_FLAG IN ('I','U')
    AND SUBSTR (RECORD_KEY, 1,2)='TP'
    AND STG_UPDATE_DATE_TIME>'$$Load_Start_Time'
    AND STG_UPDATE_DATE_TIME<='$$Load_End_Time'"
    Table structure
    CREATE TABLE TPD_STG_TL_CS_EXTRACTED_RECS (
    RECORD_KEY VARCHAR2(35 BYTE),
    SCHEME_NAME VARCHAR2(50 BYTE),
    ORGANISATION_NAME VARCHAR2(50 BYTE),
    SUPERIOR_TITLE_1 VARCHAR2(50 BYTE),
    TITLE_1 VARCHAR2(50 BYTE),
    FIRST_NAME_1 VARCHAR2(50 BYTE),
    MIDDLE_NAME_1 VARCHAR2(50 BYTE),
    SURNAME_1 VARCHAR2(50 BYTE),
    ADDRESS_LINE_1_1 VARCHAR2(50 BYTE),
    ADDRESS_LINE_2_1 VARCHAR2(50 BYTE),
    ADDRESS_LINE_3_1 VARCHAR2(50 BYTE),
    ADDRESS_LINE_4_1 VARCHAR2(50 BYTE),
    ADDRESS_LINE_5_1 VARCHAR2(50 BYTE),
    ADDRESS_LINE_6_1 VARCHAR2(50 BYTE),
    POST_CODE_1 VARCHAR2(12 BYTE),
    COUNTRY_1 VARCHAR2(50 BYTE),
    OVERSEAS_INDICATOR_1 CHAR(1 BYTE),
    DOB_1 NUMBER(8),
    GENDER_1 CHAR(1 BYTE),
    NINO_1 VARCHAR2(9 BYTE),
    DEATH_INDICATOR_1 CHAR(1 BYTE),
    PRODUCT_HOLDING_ROLE_TYPE_1 VARCHAR2(21 BYTE),
    GONE_AWAY_INDICATOR_1 CHAR(1 BYTE),
    THAMES_LEGAL_OWNERSHIP_IND_1 CHAR(1 BYTE),
    SOURCE_SYSTEM_PARTY_INDV_ID_1 VARCHAR2(15 BYTE),
    SUPERIOR_TITLE_2 VARCHAR2(50 BYTE),
    TITLE_2 VARCHAR2(50 BYTE),
    FIRST_NAME_2 VARCHAR2(50 BYTE),
    MIDDLE_NAME_2 VARCHAR2(50 BYTE),
    SURNAME_2 VARCHAR2(50 BYTE),
    ADDRESS_LINE_1_2 VARCHAR2(50 BYTE),
    ADDRESS_LINE_2_2 VARCHAR2(50 BYTE),
    ADDRESS_LINE_3_2 VARCHAR2(50 BYTE),
    ADDRESS_LINE_4_2 VARCHAR2(50 BYTE),
    ADDRESS_LINE_5_2 VARCHAR2(50 BYTE),
    ADDRESS_LINE_6_2 VARCHAR2(50 BYTE),
    POST_CODE_2 VARCHAR2(12 BYTE),
    COUNTRY_2 VARCHAR2(50 BYTE),
    OVERSEAS_INDICATOR_2 CHAR(1 BYTE),
    DOB_2 NUMBER(8),
    GENDER_2 CHAR(1 BYTE),
    NINO_2 VARCHAR2(9 BYTE),
    DEATH_INDICATOR_2 CHAR(1 BYTE),
    PRODUCT_HOLDING_ROLE_TYPE_2 VARCHAR2(21 BYTE),
    GONE_AWAY_INDICATOR_2 CHAR(1 BYTE),
    THAMES_LEGAL_OWNERSHIP_IND_2 CHAR(1 BYTE),
    SOURCE_SYSTEM_PARTY_INDV_ID_2 VARCHAR2(15 BYTE),
    JOINT_OWNER_INDICATOR CHAR(1 BYTE),
    JOINT_LIFE_TYPE NUMBER(1),
    SAME_ADDRESS_INDICATOR CHAR(1 BYTE),
    TITLE_LA1 VARCHAR2(50 BYTE),
    FIRST_NAME_LA1 VARCHAR2(50 BYTE),
    SURNAME_LA1 VARCHAR2(50 BYTE),
    DOB_LA1 NUMBER(8),
    TITLE_LA2 VARCHAR2(50 BYTE),
    FIRST_NAME_LA2 VARCHAR2(50 BYTE),
    SURNAME_LA2 VARCHAR2(50 BYTE),
    DOB_LA2 NUMBER(8),
    PRODUCT_HOLDING_REF_NUMBER VARCHAR2(28 BYTE),
    PARENT_PRODUCT_HOLDING_REF_NUM VARCHAR2(9 BYTE),
    OCDB_REFERENCE_NUMBER VARCHAR2(17 BYTE),
    BUSINESS_GROUP CHAR(3 BYTE),
    SCHEME_NUMBER VARCHAR2(8 BYTE),
    TRUSTEE_SEQUENCE_NUMBER NUMBER(10),
    MEMBER_NUMBER VARCHAR2(10 BYTE),
    PRSN_ID NUMBER(10),
    OLD_SCHEME_NUMBER VARCHAR2(8 BYTE),
    PUBLIC_SECTOR_INDICATOR CHAR(1 BYTE),
    ELIGIBLE_INDICATOR CHAR(1 BYTE),
    SCHEME_STATUS NUMBER(1),
    SCHEME_TYPE CHAR(2 BYTE),
    Q_DATE_WITH_PROFIT_STATUS NUMBER(1),
    A_DATE_WITH_PROFIT_STATUS NUMBER(1),
    LATEST_WITH_PROFIT_STATUS NUMBER(1),
    NPSW_INDICATOR CHAR(1 BYTE),
    PRODUCT_HOLDING_STATUS CHAR(1 BYTE),
    MATURITY_DATE_OF_CONTRACT NUMBER(8),
    DUE_END_DATE_OF_CONTRACT NUMBER(8),
    OUT_OF_FORCE_DATE NUMBER(8),
    OUT_OF_FORCE_REASON_CODE NUMBER(2),
    DATA_SOURCE VARCHAR2(3 BYTE),
    PRODUCT_TYPE VARCHAR2(30 BYTE),
    PRODUCT_DESCRIPTION VARCHAR2(50 BYTE),
    SERVICING_AGENT_NUMBER VARCHAR2(10 BYTE),
    CONTACTABLE_INDICATOR CHAR(1 BYTE),
    LEGAL_OWNERSHIP_ISSUE_IND CHAR(1 BYTE),
    ADMIN_CONTROL_INDICATOR CHAR(1 BYTE),
    BANKRUPTCY_INDICATOR CHAR(1 BYTE),
    ASSIGNED_INDICATOR CHAR(1 BYTE),
    IN_TRUST_INDICATOR CHAR(1 BYTE),
    DIVORCE_CASE_INDICATOR CHAR(1 BYTE),
    POA_COP_INDICATOR CHAR(1 BYTE),
    DONOR_POLICY_INDICATOR CHAR(1 BYTE),
    TAX_JURISDICTION NUMBER(2),
    INELIGIBLE_DATE NUMBER(8),
    INELIGIBILITY_REASON NUMBER(2),
    A_DATE_CASH_OR_BONUS_TYPE CHAR(1 BYTE),
    VALUATION_APPLICABLE_DATE NUMBER(8),
    A_DATE_PIP_AMOUNT NUMBER(12),
    A_DATE_NPSW_AMOUNT NUMBER(10),
    A_DATE_MINIMUM_APPLIED_AMOUNT NUMBER(10),
    A_DATE_MINIMUM_CALC_AMOUNT NUMBER(10),
    A_DATE_POLICY_VALUE NUMBER(16),
    A_DATE_DATE_PIP_CALCULATED NUMBER(8),
    A_DATE_ALGORITHM_NUMBER NUMBER(2),
    M_DATE_VOTING_VALUE NUMBER(12),
    M_DATE_POLICY_VALUE NUMBER(16),
    E_DATE_CASH_OR_BONUS_TYPE CHAR(1 BYTE),
    E_DATE_PIP_AMOUNT NUMBER(12),
    E_DATE_NPSW_AMOUNT NUMBER(10),
    E_DATE_MINIMUM_APPLIED_AMOUNT NUMBER(10),
    E_DATE_MINIMUM_CALC_AMOUNT NUMBER(10),
    E_DATE_POLICY_VALUE NUMBER(16),
    E_DATE_DATE_PIP_CALCULATED NUMBER(8),
    E_DATE_ALGORITHM_NUMBER NUMBER(2),
    P_DATE_CASH_OR_BONUS_TYPE CHAR(1 BYTE),
    P_DATE_PIP_AMOUNT NUMBER(12),
    P_DATE_NPSW_AMOUNT NUMBER(10),
    P_DATE_MINIMUM_APPLIED_AMOUNT NUMBER(10),
    P_DATE_MINIMUM_CALC_AMOUNT NUMBER(10),
    P_DATE_POLICY_VALUE NUMBER(16),
    P_DATE_DATE_PIP_CALCULATED NUMBER(8),
    P_DATE_ALGORITHM_NUMBER NUMBER(2),
    SOURCE_EXTRACT_DATE_TIME DATE,
    SCHEME_SEQUENCE_NUMBER NUMBER(3),
    LOAD_FLAG CHAR(1 BYTE),
    STG_CREATE_DATE_TIME DATE,
    STG_UPDATE_DATE_TIME DATE)
    TABLESPACE TPDBS01A_DATA
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 128K
    NEXT 128K
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    BUFFER_POOL DEFAULT)
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    CREATE UNIQUE INDEX PK_STG_TL_CS_EXTRACTED_RECS ON TPD_STG_TL_CS_EXTRACTED_RECS
    (RECORD_KEY)
    LOGGING
    TABLESPACE TPDBS01A_DATA
    PCTFREE 10
    INITRANS 2
    MAXTRANS 255
    STORAGE (
    INITIAL 128K
    NEXT 128K
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    BUFFER_POOL DEFAULT)
    NOPARALLEL;
    ALTER TABLE TPD_STG_TL_CS_EXTRACTED_RECS ADD (
    CONSTRAINT PK_STG_TL_CS_EXTRACTED_RECS
    PRIMARY KEY
    (RECORD_KEY)
    USING INDEX
    TABLESPACE TPDBS01A_DATA
    PCTFREE 10
    INITRANS 2
    MAXTRANS 255
    STORAGE (
    INITIAL 128K
    NEXT 128K
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    ));

    I would like to discuss a bit more how a composite index is used. This is no longer the question of the OP, but I think it will deepen my understanding, and maybe that of others as well.
    So we have this situation to start with:
    * A select on two different columns.
    * An index on each column would lead to an INDEX RANGE SCAN because of the where condition.
    * Instead of 2 indexes, of which only one would be used (not considering bitmap conversions), we add a composite index on both columns.
    * The CBO will choose and access the index.
    The question is: how is this access done in detail?
    So 400 rows come out of the index range scan.
    This would not be possible if it only scanned one of the two predicates, I agree. The output of the index seems to be only those (400) rows that fit the where clause on both columns.
    This is supported by the fact that an index-only access is possible when we select only information that is in the index (Oracle 9i output).
    SQL> explain plan for
      2  select record_key
      3  from mytable
      4  where stg_update_date_time >= to_date('2007-11-11','yyyy-mm-dd')
      5  and stg_update_date_time < to_date('2007-11-12','yyyy-mm-dd')
      6  and record_key like 'TP%'
      7  /
    Explained.
    SQL>  select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
    |   0 | SELECT STATEMENT     |             |   160 |  2400 |     2 |
    |   1 |  INDEX RANGE SCAN    | I1          |   160 |  2400 |     2 |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    9 rows selected.
    SQL>
    Unfortunately I don't see the filter operation, probably because the plan table is old on my system.
    The index is built on the columns "stg_update_date_time" and "record_key", so it must access these columns in that order.
    I think that maybe the range scan is done on the date column, and then a further access/filter operation is done to rule out any index entries (leaf nodes) that do not fit the LIKE 'TP%' expression.
    Since I expect that it is faster to access the data in the index than in the table, this should improve performance quite a bit. It is, however, not the same access plan/speed as with a clause like this:
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id  | Operation                   |  Name       | Rows  | Bytes | Cost  |
    |   0 | SELECT STATEMENT            |             |     1 |   520 |     3 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| MYTABLE     |     1 |   520 |     3 |
    |   2 |   INDEX RANGE SCAN          | I1          |     1 |       |     2 |
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    10 rows selected.
    SQL>
    The difference can be seen in the cost estimation.
    I hope somebody understands what I would like to point at.
    Result of these considerations:
    1) A composite index can be used.
    2) The index is used as a range scan on one column and then as a filter operation on the second column. This is like an extra select statement inside the index structure.
    3) The resulting table access is as small as possible.
    4) A faster access plan can be achieved when the index is accessed with a direct (unique) scan on the first column and a range scan on the second column.
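    For the OP's predicate set, the composite-index idea would look something like this (table and column names taken from the posted DDL; a sketch under those assumptions, not tested against that system):

```sql
-- Composite index: range scan on the date column, then the
-- RECORD_KEY filter is applied inside the index leaf blocks.
CREATE INDEX i_stg_upd_key
  ON TPD_STG_TL_CS_EXTRACTED_RECS (STG_UPDATE_DATE_TIME, RECORD_KEY);

-- Writing SUBSTR(RECORD_KEY, 1, 2) = 'TP' as a LIKE predicate keeps
-- the column unwrapped, so the index column can be used directly:
SELECT record_key
FROM   TPD_STG_TL_CS_EXTRACTED_RECS
WHERE  stg_update_date_time >  TO_DATE('2007-11-11', 'yyyy-mm-dd')
AND    stg_update_date_time <= TO_DATE('2007-11-12', 'yyyy-mm-dd')
AND    record_key LIKE 'TP%';
```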
    br Sven
    Message was edited by:
    Sven Weller

  • How to tune the query and difference between CBO AND RBO.. Which is good

    Hello Friends,
    Here are some questions I have. Please reply with a complete description, and a URL if any.
    1) How do you tune a query?
    2) What approach do you take to tune a query? Do you use hints?
    3) Where did you tune the query, and what were the issues with the query?
    4) What is the difference between the RBO and the CBO? Where do you use the RBO and the CBO?
    5) Give some information about hash joins.
    6) Using an explain plan, how do you know where the bottleneck in the query is? How will you identify the bottleneck from the explain plan?
    thanks/Kumar

    Hi,
    kumar73 wrote:
    Hello Friends,
    Here are some questions I have. Please reply with a complete description and a URL if any.
    1) How do you tune a query?
    Use EXPLAIN PLAN to see exactly where it is spending its time, and address those areas.
    See the forum FAQ, "SQL and PL/SQL FAQ", item
    "3. How to improve the performance of my query?"
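    As a quick illustration of the EXPLAIN PLAN step (the EMPLOYEES table here is from the standard HR sample schema, not from this thread):

```sql
-- Generate a plan for the statement, then display it.
EXPLAIN PLAN FOR
  SELECT *
  FROM   employees
  WHERE  department_id = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```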
2) What approach do you take to tune a query? Do you use hints?
Hints can help.
Even more helpful is writing the SQL efficiently (avoiding multiple scans of the same table, filtering early, using built-in rather than user-defined functions, ...), creating and using indexes, and, for large tables, partitioning.
Table design can have a big impact on performance.
Look for ways to do part of what you need before the query. This includes denormalizing (when appropriate), the kind of pre-digesting that often takes place in data warehouses, function-based indexes, and, starting in Oracle 11, virtual columns.
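The last two techniques mentioned can be sketched in SQL; the table and column names below are invented for illustration:

```sql
-- Function-based index: lets the optimizer use an index even though
-- the WHERE clause wraps the column in a function.
CREATE INDEX emp_upper_ename_ix ON emp (UPPER(ename));

-- This predicate can now use the index instead of a full scan:
SELECT * FROM emp WHERE UPPER(ename) = 'SMITH';

-- Virtual column (Oracle 11g and later): the expression is stored as
-- metadata, computed on the fly, and can be indexed and queried like
-- a real column.
ALTER TABLE emp ADD (annual_sal AS (sal * 12));
CREATE INDEX emp_annual_sal_ix ON emp (annual_sal);
SELECT ename FROM emp WHERE annual_sal > 100000;
```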
3) Where do you tune the query, and what are the issues with the query?
Either this question is a vague summary of the entire thread, or I don't understand it. Can you re-phrase this part?
4) What is the difference between RBO and CBO? Where would you use each?
Basically, use the RBO if you have Oracle 7 or earlier.
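Questions 5 and 6 were not addressed above, so here is a hedged sketch (emp/dept are the classic Oracle demo tables, used only for illustration). A hash join builds an in-memory hash table from the smaller row source and probes it with rows from the larger one; it is often the fastest join for large, unindexed row sources, and the CBO can be asked for it with a hint. For spotting a bottleneck, generate the plan and look for the operations with the largest Cost and Rows estimates, typically full scans of big tables deep in the plan:

```sql
-- Question 5: ask the CBO for a hash join of emp and dept.
-- The smaller row source (dept) is hashed; emp rows probe the hash table.
SELECT /*+ USE_HASH(e d) */ e.ename, d.dname
  FROM emp e, dept d
 WHERE e.deptno = d.deptno;

-- Question 6: generate and display the plan, then inspect the steps
-- with the highest Cost and Rows figures.
EXPLAIN PLAN FOR
SELECT /*+ USE_HASH(e d) */ e.ename, d.dname
  FROM emp e, dept d
 WHERE e.deptno = d.deptno;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```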

Performance tuning of EP 7.0

Dear Friends,
My organisation is using EP 7.0, where we are running the ESS & MSS components.
We are facing a lot of performance problems in EP 7.0.
Can anybody help me tune my EP 7.0 system? Which parameters should be considered for tuning?
I would appreciate an immediate reply, as we have to resolve this problem on priority.
Thanks in advance.
Regards,
Anil Bhandary

Recommended VM version: 1.4.2_20 b04
    For Dispatcher:
    -Xms256M
    -verbose:gc
    -Djava.security.policy=./java.policy
    -Xss2m
    For Server Node:
    -Xms2560M (if BW system is shared with this) or else 2048M will be fine
    -XX:NewSize=426M
    -XX:MaxNewSize=426M
    -XX:PermSize=512M
    -XX:MaxPermSize=512M
    -XX:SoftRefLRUPolicyMSPerMB=1
    -XX:+DisableExplicitGC
    -XX:SurvivorRatio=2
    -XX:TargetSurvivorRatio=90
    -verbose:gc
    -XX:+UseParNewGC
    -XX:+PrintGCTimeStamps
    -XX:+PrintGCDetails
    -XX:+UseTLAB
    -Dsun.io.useCanonCaches=false
    -Djava.awt.headless=true
    -Dorg.omg.CORBA.ORBClass=com.sap.engine.system.ORBProxy
    -Dorg.omg.CORBA.ORBSingletonClass=com.sap.engine.system.ORBSingletonProxy
    -Dorg.omg.PortableInterceptor.ORBInitializerClass.com.sap.engine.services.ts.jts.ots.PortableInterceptor.JTSInitializer
    -Djavax.rmi.CORBA.PortableRemoteObjectClass=com.sap.engine.system.PortableRemoteObjectProxy
    -XX:+HandlePromotionFailure
    -Djava.security.policy=./java.policy
    -Djava.security.egd=file:/dev/urandom
    -XX:ReservedCodeCacheSize=64M
    -XX:CodeCacheMinimumFreeSpace=2M
    -Xss2m
    MinThreadCount 40
Keep Alive (enable the reuse of HTTP connections for multiple requests): enabled
Use Cache (enable the memory-based cache): enabled
Directory List (list all files in a directory if the default files are not found): disabled
Log Responses (log all HTTP requests): disabled
CacheControl (static content expiration time in seconds, for the browser cache): 604800
SapCacheControl (static content expiration time in seconds, for the ICM cache): 604800
HTTP Provider: MinimumGZipLength = 1024
See SAP Note 723909 (Java VM settings for J2EE 6.40/7.0) and SAP Note 1004255 (How to create a full HPROF heap dump of J2EE Engine 6.40/7.0).

How can you get your data back into iTunes

Hi, I am Justin.
I have an iPhone and iTunes, and here is my problem. Last night I formatted my HP Windows 7 computer. I backed up my pictures, and I thought I backed up iTunes, because I copied it to my external 500 MB hard drive; but without making a folder and copying the whole iTunes folder, I only backed up the library file. Before the format I went to Edit > Preferences > Advanced in iTunes (sorry about the bad spelling), because I had read Apple's step-by-step guide, saw where the files were supposed to go, and thought I could back up iTunes that way. My phone has 333 songs, all my pictures, apps and contacts, and there is no way to copy them from the phone back into iTunes. I do not use iCloud because, one, I do not want to pay for more space, and two, the free version tells me I do not have enough space to back it up, so I am worried that on the next sync I will lose all my info. These songs are from CDs I own and from the Internet before the courts stopped Napster and LimeWire. So what should I do? Just uninstall iTunes and leave my computer as a recharging station? Or is there some secret way to get all my things back, so that this time I can make a folder and do it right in case I ever have to reformat my drive again? What is the best approach here?
Justin

Sync is only one-way, from PC to your device. Unless you have the music on your PC, iTunes is going to wipe out what you have on your device if you are syncing to a new library.
You can only transfer purchased music over to iTunes on your PC:
iTunes Store: Transferring purchases from your iOS device or iPod to a computer
http://support.apple.com/kb/HT1848
As for your own music, you may have to use third-party software. A good free one is called SharePod, which you can download from Download.com here:
http://download.cnet.com/SharePod/3000-2141_4-10794489.html?tag=mncol;2

iTunes downloads to the wrong drive

Hi guys, some help please. When I download the latest iTunes 9.1.1, it wants to install to drive "G", which I don't have. I want it to install on "C". Any ideas?

    Hey there,
One thing you should check is where iTunes looks for your iTunes folder. Head to iTunes and then Preferences. When the window pops up, open the Advanced tab. Near the top of the tab, you will see where iTunes currently locates your Music folder. Click the Change button and then point it to where the folder is located on your drive. See if that helps.
    B-rock

My iTunes account for my PC and iPhone won't let me update my apps or download any new ones. Can someone tell me why?

My iTunes account, on both my PC and my iPhone, won't let me update or download any apps.

No error codes. It started after I installed the new iOS 5.1 on my iPhone. I wait ages and it says it cannot connect to the iTunes Store, and on the PC, when I go to update the apps, it says the service timed out after about ten minutes of waiting.
