Query takes a lot of time to execute, how to minimize it

Hi all,
I have a query like the one below which takes at least 3 minutes to execute. How can I optimize it to minimize the execution time?
SELECT
PRO.PROJECT_NAME "Project Name",
POBJ.NAME "Object Name",
X."Major" "Internal_Major",
X."Minor" "Internal_Minor",
X."Normalized" "Internal_Normalized",
TRUNC(X."IDR",2) "Internal Defect Rate",
Y."Major" "External_Major",
Y."Minor" "External_Minor",
Y."Normalized" "External_Normalized",
TRUNC(Y."EDR",2) "External_Defect_Rate",
PRO.ID "PRO_ID",
POBJ.ID "POBJ_ID"
FROM
PROJECTS PRO, PROJECT_OBJECTS POBJ,
(SELECT
MN."PRO_ID",
MN."POBJ_ID",
MN."Major",
MN."Minor",
MN."Normalized",
DR."IDR_PRO_ID",
DR."IDR_POBJ_ID",
DR."IDR"
FROM IDR_TEST DR FULL OUTER JOIN MAJOR_MINOR_NORMALIZED MN
ON
MN."PRO_ID" = DR."IDR_PRO_ID" AND
MN."POBJ_ID" = DR."IDR_POBJ_ID") X,
(SELECT
EE."PRO_ID" E_PRO_ID,
EN."PRO_ID" EN_PRO_ID,
EE."POBJ_ID" E_POBJ_ID,
EN."POBJ_ID" EN_POBJ_ID,
EN."Major",
EN."Minor",
EN."Normalized",
EN."Normalized" / DECODE(EE."External_Effort",0,NULL,EE."External_Effort") "EDR"
FROM
EXTR_MAJOR_MINOR_NORMALIZED EN FULL OUTER JOIN EXTERNAL_EFFORT EE
ON
EE."PRO_ID" = EN."PRO_ID" AND
EE."POBJ_ID" = EN."POBJ_ID") Y
WHERE
PRO.ID = :P1_PROJECTS AND
PRO.ID = POBJ.PRO_ID AND
:P1_PROJECTS = DECODE(X.PRO_ID,NULL,:P1_PROJECTS,X.PRO_ID) AND
:P1_PROJECTS = DECODE(X.IDR_PRO_ID,NULL,:P1_PROJECTS,X.IDR_PRO_ID) AND
POBJ.ID = DECODE(X.POBJ_ID,NULL,POBJ.ID,X.POBJ_ID) AND
POBJ.ID = DECODE(X.IDR_POBJ_ID,NULL,POBJ.ID,X.IDR_POBJ_ID)
AND
:P1_PROJECTS = DECODE(Y.E_PRO_ID(+),NULL,:P1_PROJECTS,Y.E_PRO_ID(+)) AND
:P1_PROJECTS = DECODE(Y.EN_PRO_ID(+),NULL,:P1_PROJECTS,Y.EN_PRO_ID(+)) AND
POBJ.ID = DECODE(Y.E_POBJ_ID(+),NULL,POBJ.ID,Y.E_POBJ_ID(+)) AND
POBJ.ID = DECODE(Y.EN_POBJ_ID(+),NULL,POBJ.ID,Y.EN_POBJ_ID(+))
The problem is with the two full outer joins I am using; that is where the query spends most of its time.
Thanks
Sudhir

You'll have to get an execution plan and manually tune the query.
That aside, inline views can really slow queries down, because once their data is materialized into a temporary result set you can't do fast indexed lookups against it. They also make queries more complicated. Outer joins also tend to slow queries down a lot in general.
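To get the execution plan in SQL*Plus or SQL Developer you can use EXPLAIN PLAN together with DBMS_XPLAN. A minimal sketch (run it against your full statement; the bind variable can stay as a placeholder, EXPLAIN PLAN does not need its value):

EXPLAIN PLAN FOR
SELECT PRO.PROJECT_NAME, POBJ.NAME
FROM PROJECTS PRO, PROJECT_OBJECTS POBJ
WHERE PRO.ID = :P1_PROJECTS
AND PRO.ID = POBJ.PRO_ID;

-- display the plan that was just explained
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

The plan will show whether the two full outer joins are being evaluated over the whole tables before your project filter is applied.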

Similar Messages

  • Urgent!! Query takes lots of time to execute and production is affected

    Hi,
    We have a data loading script that takes a lot of time to execute. As I am new to tuning, please let me know what is wrong with the query below.
    Thanks in advance
    Query:
    =========
    INSERT /*+ PARALLEL */ INTO ris.ris_pi_profile
    (ID,COUNTRY_OF_CITIZENSHIP,IMMIGRATION_STATUS,SSN,DOB,GENDER,
    ETHNICITY,RACE,DEPARTMENT,DIVISION,INSTITUTION_ID,INST_EMAIL,EFFECT_DATE,ACADEMIC_TITLE,ACADEMIC_POSITION,
    OTH_PER_DATA,PCT_RESEARCH,PCT_TEACHING,PCT_CLINICAL,PCT_ADMIN,PCT_OTHER,PCT_TRAINING)
    SELECT
    ap.id,
    p.citizen_cd,
    decode(p.visa_cd,'CV',0,'F1',1,'H1',2,'H1B',3,'H2',4,'J1',5,'J2',6,'O1',7,'PR',8,'PRP',9,'TC',10,'TN',11,'TNN',12),
    (select n.soc_sec_num from sa.name n where n.name_id = p.name_id),
    (select n.birth_date from sa.name n where n.name_id = p.name_id),
    (select decode(n.gender_cd,'F',1,'M',2,0) from sa.name n where n.name_id = p.name_id),
    (select decode(n.ethnic_cd,'H',1) from sa.name n where n.name_id = p.name_id),
    (select decode(n.ethnic_cd,'A',2,'B',3,'I',1,'P',4,'W',5) from sa.name n where n.name_id = p.name_id),
    a.dept_name,
    a.div_name,
    a.inst_id,
    (select n.email from sa.name n where n.name_id = p.name_id),
    a.eff_date,
    ac.acad_pos_desc,
    p.acad_pos_cd,
    0,
    p.research_pct,
    p.teach_pct,
    p.patient_pct,
    p.admin_pct,
    p.other_pct,
    p.course_pct
    FROM
    appl1 ap,
    sa.personal_data p,
    sa.address a,
    sa.academic_pos_cd ac,
    profile_pi f
    WHERE
    p.project_role_cd='PI'
    and ap.appl_id=f.appl_id
    and p.appl_id=f.appl_id
    and p.name_id=f.name_id
    and a.addr_id=f.addr_id
    and p.acad_pos_cd=ac.acad_pos_cd
    AND EXISTS (select 1 from ris.ris_institution i WHERE i.id = a.inst_id)
    AND EXISTS (select 1 from sa.academic_pos_cd acp WHERE acp.acad_pos_cd = p.acad_pos_cd);
    In the execution plan I see lots of nested loops, hash joins,
    INDEX (UNIQUE SCAN) and
    TABLE ACCESS (BY INDEX ROWID) operations.
    This query is fast in the test DB but very, very slow in the production DB. Need your help urgently.
    Minaz

    When your query takes too long...
    When your query takes too long ...
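    One rewrite worth testing here is to join sa.name once instead of running six scalar subqueries per row. This is only a sketch of the idea; it assumes name_id is unique in sa.name and that every personal_data row has a matching name row (otherwise an outer join is needed and the DECODE defaults for missing rows should be re-checked):
    INSERT /*+ PARALLEL */ INTO ris.ris_pi_profile
    (ID,COUNTRY_OF_CITIZENSHIP,IMMIGRATION_STATUS,SSN,DOB,GENDER,
    ETHNICITY,RACE,DEPARTMENT,DIVISION,INSTITUTION_ID,INST_EMAIL,EFFECT_DATE,ACADEMIC_TITLE,ACADEMIC_POSITION,
    OTH_PER_DATA,PCT_RESEARCH,PCT_TEACHING,PCT_CLINICAL,PCT_ADMIN,PCT_OTHER,PCT_TRAINING)
    SELECT
    ap.id,
    p.citizen_cd,
    decode(p.visa_cd,'CV',0,'F1',1,'H1',2,'H1B',3,'H2',4,'J1',5,'J2',6,'O1',7,'PR',8,'PRP',9,'TC',10,'TN',11,'TNN',12),
    n.soc_sec_num,
    n.birth_date,
    decode(n.gender_cd,'F',1,'M',2,0),
    decode(n.ethnic_cd,'H',1),
    decode(n.ethnic_cd,'A',2,'B',3,'I',1,'P',4,'W',5),
    a.dept_name,
    a.div_name,
    a.inst_id,
    n.email,
    a.eff_date,
    ac.acad_pos_desc,
    p.acad_pos_cd,
    0,
    p.research_pct,
    p.teach_pct,
    p.patient_pct,
    p.admin_pct,
    p.other_pct,
    p.course_pct
    FROM
    appl1 ap,
    sa.personal_data p,
    sa.address a,
    sa.academic_pos_cd ac,
    profile_pi f,
    sa.name n          -- joined once, replacing the repeated scalar subqueries
    WHERE
    p.project_role_cd='PI'
    and ap.appl_id=f.appl_id
    and p.appl_id=f.appl_id
    and p.name_id=f.name_id
    and a.addr_id=f.addr_id
    and p.acad_pos_cd=ac.acad_pos_cd
    and p.name_id=n.name_id
    AND EXISTS (select 1 from ris.ris_institution i WHERE i.id = a.inst_id)
    AND EXISTS (select 1 from sa.academic_pos_cd acp WHERE acp.acad_pos_cd = p.acad_pos_cd);
    Comparing the plan and the row counts of this version with the original is a cheap experiment before touching anything else.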

  • Query takes lots of time to execute

    I have written a query which combines data from two separate views. When I run the query it takes 3 minutes to execute.
    How can I reduce the time and make it execute faster? This is the query:
    SELECT
    X."PRO_ID",
    X."POBJ_ID",
    X."Major",
    X."Minor",
    X."Normalized",
    X."IDR_PRO_ID",
    X."IDR_POBJ_ID",
    X."IDR"
    FROM
    PROJECTS PRO,
    PROJECT_OBJECTS POBJ,
    (SELECT
    MN."PRO_ID",
    MN."POBJ_ID",
    MN."Major",
    MN."Minor",
    MN."Normalized",
    DR."IDR_PRO_ID",
    DR."IDR_POBJ_ID",
    DR."IDR"
    FROM
    MAJOR_MINOR_NORMALIZED MN FULL OUTER JOIN IDR DR
    ON
    MN."PRO_ID" = DR."IDR_PRO_ID" AND
    MN."POBJ_ID" = DR."IDR_POBJ_ID" ) X
    WHERE
    PRO.ID = POBJ.PRO_ID AND
    PRO.ID = DECODE(X."PRO_ID",NULL,PRO.ID,X."PRO_ID") AND
    PRO.ID = DECODE(X."IDR_PRO_ID",NULL,PRO.ID,X."IDR_PRO_ID") AND
    POBJ.ID = DECODE(X."POBJ_ID",NULL,POBJ.ID,X."POBJ_ID") AND
    POBJ.ID = DECODE(X."IDR_POBJ_ID",NULL,POBJ.ID,X."IDR_POBJ_ID") AND
    PRO.ID = 2673
    Any suggestions?
    Thanks
    Sudhir

    If PRO.ID = 2673, you don't need the PROJECTS table. You can simplify the query to
    SELECT
    X."PRO_ID",
    X."POBJ_ID",
    X."Major",
    X."Minor",
    X."Normalized",
    X."IDR_PRO_ID",
    X."IDR_POBJ_ID",
    X."IDR"
    FROM
    PROJECT_OBJECTS POBJ,
    (SELECT
    MN."PRO_ID",
    MN."POBJ_ID",
    MN."Major",
    MN."Minor",
    MN."Normalized",
    DR."IDR_PRO_ID",
    DR."IDR_POBJ_ID",
    DR."IDR"
    FROM
    MAJOR_MINOR_NORMALIZED MN FULL OUTER JOIN IDR DR
    ON
    MN."PRO_ID" = DR."IDR_PRO_ID" AND
    MN."POBJ_ID" = DR."IDR_POBJ_ID" ) X
    WHERE
    2673 = POBJ.PRO_ID AND
    2673 = DECODE(X."PRO_ID",NULL,2673,X."PRO_ID") AND
    2673 = DECODE(X."IDR_PRO_ID",NULL,2673,X."IDR_PRO_ID") AND
    POBJ.ID = DECODE(X."POBJ_ID",NULL,POBJ.ID,X."POBJ_ID") AND
    POBJ.ID = DECODE(X."IDR_POBJ_ID",NULL,POBJ.ID,X."IDR_POBJ_ID")
    Regards,
    Miguel
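    Another option worth testing is to push the project filter into both sides of the full outer join, so the join only has to combine one project's rows, and to collapse the two DECODE conditions on the object id with COALESCE. A sketch only; it assumes "PRO_ID" and "IDR_PRO_ID" are never NULL in the base tables, so compare the results with the original query before relying on it:
    SELECT
    X."PRO_ID", X."POBJ_ID", X."Major", X."Minor", X."Normalized",
    X."IDR_PRO_ID", X."IDR_POBJ_ID", X."IDR"
    FROM
    PROJECT_OBJECTS POBJ,
    (SELECT
     MN."PRO_ID", MN."POBJ_ID", MN."Major", MN."Minor", MN."Normalized",
     DR."IDR_PRO_ID", DR."IDR_POBJ_ID", DR."IDR"
     FROM
     -- filter each side to the project before the expensive full outer join
     (SELECT * FROM MAJOR_MINOR_NORMALIZED WHERE "PRO_ID" = 2673) MN
     FULL OUTER JOIN
     (SELECT * FROM IDR WHERE "IDR_PRO_ID" = 2673) DR
     ON
     MN."PRO_ID" = DR."IDR_PRO_ID" AND
     MN."POBJ_ID" = DR."IDR_POBJ_ID") X
    WHERE
    POBJ.PRO_ID = 2673 AND
    POBJ.ID = COALESCE(X."POBJ_ID", X."IDR_POBJ_ID")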

  • InfoObject F4 help in the query takes a lot of time and hangs

    Hi All,
    I have 0Material in the query, and the F4 help for selection takes a lot of time and then hangs.
    0Material has 2 million records. The InfoObject properties for BEx are as below:
    Selection: No selection restriction
    Query Def. Filter Value Selection: Values in Master Data Table
    Query Execution Filter Val. Selectn: Only Posted Values for Navigation
    We tried to alter some of the properties but we have the same issue.
    Please suggest other options that can be looked into to get the F4 help working in the query.
    Thanks,
    Mike

    Hi Mike,
    Check if these help:
    Note 817335 - Performance in F4 Help
    Note 748623 - Input help (F4) has a very long runtime - recommendations
    Note 581802 - Variable dialog boxes: performance of the F4 help
    Also search OSS notes for F4 performance.

  • Query taking a lot of time to execute

    Hi,
    I have a very complicated query which I am executing using JDBC. The query is an insert statement and takes 15 minutes to complete. I'm running it as a stand-alone Java program. Does anyone have suggestions for the best way to debug this? I need to find out why the query takes that long. I'm using Oracle 10g with SQL Developer.
    ps = con.prepareStatement(query);
    ps.setString(1,date);
    ps.setString(2,code);
    timeStart = System.currentTimeMillis();
    ps.executeUpdate();
    timeEnd = System.currentTimeMillis();     
    System.out.println("Time Taken::"+(timeStart -timeEnd)+" ms");
    Thanks in advance
    Ajoo

    Perhaps you should post the query so we can see what you are doing.
    In the meantime, try writing a simple update query and run it. If it runs quickly, your original query has problems. If it runs slowly, the cause is something other than your original query.
    P.S.: the timing calculation should be
    System.out.println("Time Taken::" + (timeEnd - timeStart) + " ms");
    and not
    System.out.println("Time Taken::" + (timeStart - timeEnd) + " ms");
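    To separate the SQL time from JDBC and network overhead, you can also watch the session from the database side while the insert is running. A sketch, to be run from a second session; 'MYUSER' is a placeholder for the schema your program connects as:
    -- what is the JDBC session currently doing / waiting on? (10g and later)
    SELECT sid, status, sql_id, event, wait_class, seconds_in_wait
    FROM   v$session
    WHERE  username = 'MYUSER';
    -- long-running operations (full scans, hash joins, sorts) report progress here
    SELECT sid, opname, sofar, totalwork, elapsed_seconds, time_remaining
    FROM   v$session_longops
    WHERE  time_remaining > 0;
    If the wait event points at the database (for example a full scan or a lock wait), tune the SQL; if the session is idle while Java reports 15 minutes, the time is being spent outside the database.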

  • Query takes a long time on EBAN table

    Hi,
    I am trying to execute a simple select statement on the EBAN table. This query takes an unexpectedly long time to execute.
    Query is :
    SELECT banfn bnfpo ernam badat ebeln ebelp
          INTO TABLE gt_eban
          FROM eban FOR ALL ENTRIES IN gt_ekko_ekpo
          WHERE
          banfn IN s_banfn AND
          ernam IN s_ernam
          and ebeln = gt_ekko_ekpo-ebeln AND
          ebelp = gt_ekko_ekpo-ebelp.
    Structure of gt_ekko_ekpo
    TYPES : BEGIN OF ty_ekko_ekpo,
            ebeln TYPE ekko-ebeln,
            ebelp TYPE ekpo-ebelp,
            bukrs TYPE ekko-bukrs,
            aedat TYPE ekko-aedat,
            lifnr TYPE ekko-lifnr,
            ekorg TYPE ekko-ekorg,
            ekgrp TYPE ekko-ekgrp,
            waers TYPE ekko-waers,
            bedat TYPE ekko-bedat,
            otb_value TYPE ekko-otb_value,
            otb_res_value TYPE ekko-otb_res_value,
            matnr TYPE ekpo-matnr,
            werks TYPE ekpo-werks,
            matkl TYPE ekpo-matkl,
            elikz TYPE ekpo-elikz,
            wepos TYPE ekpo-wepos,
            emlif TYPE ekpo-emlif,
      END OF ty_ekko_ekpo.
    Structure of GT_EBAN
    TYPES : BEGIN OF ty_eban,
      banfn TYPE eban-banfn,
      bnfpo TYPE eban-bnfpo,
      ernam TYPE eban-ernam,
      badat TYPE eban-badat,
      ebeln TYPE eban-ebeln,
      ebelp TYPE eban-ebelp,
      END OF ty_eban.
    The query seems OK to me, but I am still not able to figure out the reason for this performance issue.
    Please provide your inputs.
    Thanks.
    Richa

    Hi Richa,
    Maybe you are executing the query with S_BANFN empty. Based on note 191492 you should change your query along the lines of the following.
    1st Suggestion:
    if gt_ekko_ekpo[] is not initial.
    SELECT banfn bnfpo INTO TABLE gt_eket
          FROM eket FOR ALL ENTRIES IN gt_ekko_ekpo
          WHERE
         ebeln = gt_ekko_ekpo-ebeln AND
          ebelp = gt_ekko_ekpo-ebelp.
    if sy-subrc = 0.
    delete gt_eket where banfn not in s_banfn.
    if gt_eket[] is not initial.
    SELECT banfn bnfpo ernam badat ebeln ebelp
          INTO TABLE gt_eban
          FROM eban FOR ALL ENTRIES IN gt_eket
          WHERE
          banfn = gt_eket-banfn
         and bnfpo = gt_eket-bnfpo.
    if sy-subrc = 0.
      delete gt_eban where ernam not in s_ernam.
    endif.
    endif.
    endif.
    endif.
    2nd Suggestion:
    if gt_ekko_ekpo[] is not initial.
    SELECT banfn bnfpo INTO TABLE gt_eket
          FROM eket FOR ALL ENTRIES IN gt_ekko_ekpo
          WHERE
         ebeln = gt_ekko_ekpo-ebeln AND
          ebelp = gt_ekko_ekpo-ebelp.
    if sy-subrc = 0.
    delete gt_eket where banfn not in s_banfn.
    if gt_eket[] is not initial.
    SELECT banfn bnfpo ernam badat ebeln ebelp
          INTO TABLE gt_eban
          FROM eban FOR ALL ENTRIES IN gt_eket
          WHERE
          banfn = gt_eket-banfn
         and bnfpo = gt_eket-bnfpo
         and ernam in s_ernam.
    endif.
    endif.
    endif.
    Hope this helps.
    Regards,
    R
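    For background: FOR ALL ENTRIES is executed on the database as a series of statements in which the driver-table entries are combined with OR (or IN lists), split into blocks. Schematically it looks like the SQL below (made-up document numbers, purely an illustration, not the exact statement the database interface generates):
    SELECT banfn, bnfpo, ernam, badat, ebeln, ebelp
    FROM   eban
    WHERE  ( ebeln = '4500000001' AND ebelp = '00010' )
        OR ( ebeln = '4500000002' AND ebelp = '00020' )
        OR ( ebeln = '4500000003' AND ebelp = '00010' );
    -- one such statement is issued per block of driver-table rows, so a large,
    -- unsorted driver table with duplicates multiplies the database work
    That is why the suggestions above check that the driver table is not empty and reduce it before the FOR ALL ENTRIES selection.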

  • Query from source table takes lots of time

    I have to load data from an Oracle table which has millions of records. The loading knowledge module I am using is 'LKM SQL to Oracle'.
    I see that the query against the source database takes a lot of time and does a full table scan, whereas it is supposed to be using an index.
    I don't have an 'LKM Oracle to Oracle' module. Somehow I need to add a hint to this source query. How can I do this?

    This LKM is not intended for large volumes of data; try LKM Oracle to Oracle (DBLINK) instead, and see the ODI KM documentation for more information.
    Regarding hints, you can add one through the KM options; some of the new Oracle ODI 11g KMs already have options for this, so you can put the hint in the interface options of the KM or define a temporary index on the interface.
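    For reference, the effect of such a hint on the generated source query would look roughly like this (hypothetical table, alias and index names, shown only for the hint syntax):
    SELECT /*+ INDEX(src src_load_date_idx) */
           src.*
    FROM   source_table src
    WHERE  src.load_date >= TO_DATE('2011-01-01', 'YYYY-MM-DD');
    Whether the optimizer then actually uses the index still depends on the predicate and the statistics, so check the execution plan after adding the hint.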

  • Execution of first request takes lot of time with JDBC OCI

    Hi,
    We have an application which connects to an Oracle server (10g Enterprise Edition Release 10.2.0.4.0 64-bit) using the Oracle JDBC OCI driver (10.2.0.1.0 production, 10g R2).
    Everything works fine.
    But we have observed that, randomly, the first request of the day to the server via the JDBC driver takes a lot of time, varying from 5 to 15 minutes.
    If we execute the request again it executes immediately.
    What we observed while debugging is that preparedStatement.executeQuery() is taking the time.
    We enabled logging at the JDBC driver layer and a snapshot is given below. Between T2CPreparedStatement.doDefineExecuteFetch () and T2CPreparedStatement.execute_for_rows () it takes 5 minutes. Any idea why it takes this much time, and the reason for it? The Oracle driver is not throwing any exceptions.
    *4-apr-2011 4:00:01* oracle.jdbc.driver.T2CPreparedStatement doDefineExecuteFetch
    FINE: T2CPreparedStatement.doDefineExecuteFetch ()
    *4-apr-2011 4:05:12* oracle.jdbc.driver.T2CPreparedStatement executeForRows
    FINE: T2CPreparedStatement.execute_for_rows () returns: void
    Detailed jdbc log is below
    4-apr-2011 4:00:01 oracle.jdbc.driver.OracleStatement doExecuteWithTimeout
    FINE: OracleStatement.doExecuteWithTimeout() needToPrepareDefineBuffer = false
    4-apr-2011 4:00:01 oracle.jdbc.driver.PhysicalConnection needLine
    FINE: PhysicalConnection.needLine()--no return
    4-apr-2011 4:00:01 oracle.jdbc.driver.PhysicalConnection registerHeartbeat
    FINE: PhysicalConnection.registerHeartbeat()
    4-apr-2011 4:00:01 oracle.jdbc.driver.OracleStatement executeMaybeDescribe
    FINE: OracleStatement.execute_maybe_describe() rowPrefetchChanged = false, needToParse = false, needToPrepareDefineBuffer = false, columnsDefinedByUser = false
    4-apr-2011 4:00:01 oracle.jdbc.driver.T2CPreparedStatement executeForRows
    FINE: T2CPreparedStatement.execute_for_rows (executed_for_describe = false)
    4-apr-2011 4:00:01 oracle.jdbc.driver.T2CPreparedStatement doDefineExecuteFetch
    FINE: T2CPreparedStatement.doDefineExecuteFetch ()
    4-apr-2011 4:05:12 oracle.jdbc.driver.T2CPreparedStatement executeForRows
    FINE: T2CPreparedStatement.execute_for_rows () returns: void
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleStatement executeMaybeDescribe
    FINE: OracleStatement.execute_maybe_describe():return validRows = 1, needToPrepareDefineBuffer = false
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleStatement doExecuteWithTimeout
    FINE: OracleStatement.doExecuteWithTimeout():return validRows = 1, needToPrepareDefineBuffer = false
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl next
    FINE: OracleResultSetImpl.next(): closed=false
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl next
    FINER: closed=false, statement.currentRow=-1, statement.totalRowsVisited=0, statement.maxRows=0, statement.validRows=1, statement.gotLastBatch=true
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl getString
    INFO: OracleResultSetImpl.getString(columnIndex=1)
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl wasNull
    INFO: OracleResultSetImpl.wasNull()
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl getString
    INFO: OracleResultSetImpl.getString(columnIndex=2)
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl wasNull
    INFO: OracleResultSetImpl.wasNull()
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl getString
    INFO: OracleResultSetImpl.getString(columnIndex=3)
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl wasNull
    INFO: OracleResultSetImpl.wasNull()
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl wasNull
    INFO: OracleResultSetImpl.wasNull()
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl getTimestamp
    INFO: OracleResultSetImpl.getTimestamp(columnIndex=4)
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl wasNull
    INFO: OracleResultSetImpl.wasNull()
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl wasNull
    INFO: OracleResultSetImpl.wasNull()
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl next
    FINE: OracleResultSetImpl.next(): closed=false
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl next
    FINER: closed=false, statement.currentRow=0, statement.totalRowsVisited=1, statement.maxRows=0, statement.validRows=1, statement.gotLastBatch=true
    4-apr-2011 4:05:12 oracle.jdbc.driver.PhysicalConnection needLine
    FINE: PhysicalConnection.needLine()--no return
    4-apr-2011 4:05:12 oracle.jdbc.driver.T2CPreparedStatement closeQuery
    FINE: T2CPreparedStatement.closeQuery ()
    4-apr-2011 4:05:12 oracle.jdbc.driver.T2CPreparedStatement closeQuery
    FINE: T2CPreparedStatement.closeQuery () returns: void
    4-apr-2011 4:05:12 oracle.jdbc.driver.OracleResultSetImpl close
    INFO: OracleResultSetImpl.close()
    Regards
    Sunil

    Hi,
    Thanks for the reply.
    - The SQL may vary. It is not always the same query that takes time. The same query executed immediately afterwards (using another connection from the pool) works perfectly.
    - If there is a failover, a stale connection being closed, or a timeout, will the JDBC driver throw an exception? I didn't see any exceptions in the driver logs. Is there any way to identify whether a failover or timeout is happening?
    Some other points:
    - The application uses connection pooling. This means that connections are created at startup and kept in the pool, and a connection may sit in the pool for 72 hours or more before this issue happens.
    - The application uses prepared statements; the statements are prepared once, kept, and reused later. So the query that takes more time might have been prepared 48 hours or so earlier.
    - This is not happening every day. Often the pattern is that it happens 60-72 hours after the application is started.
    Sunil

  • Read cluster PCL4 takes a lot of time

    Hi guys,
    Though I know this is a functional forum, I would like to post a technical question related to HR ABAP.
    I have a report that needs to read data from the PCL4 cluster and display master data changes for an employee. I developed a big program to simulate "RPUAUD00", which displays the logged data.
    My concern is that the program I developed takes minutes to execute. I am using the two function modules "HR_INFOTYPE_LOG_GET_LIST" and "HR_INFOTYPE_LOG_GET_DETAIL" for this purpose.
    The function module "HR_INFOTYPE_LOG_GET_LIST" takes a lot of time to execute, as it checks all personnel numbers for changes even if only one personnel number was changed. Assuming my personnel area and personnel subarea have 2000 employees, even if one employee's data is changed, the function module reads the cluster for all employees and takes a few minutes to come back, before "HR_INFOTYPE_LOG_GET_DETAIL" is executed to get the exact details that changed. This is irritating the client. Any idea how to reduce the time? I am severely short of ideas.
    regards
    Sam

    Unless you pass the exact pernr, the function cannot determine the changes without looking at all the pernrs. If your program is tied to the PNP logical database, you can improve the runtime by passing the pernr list to the function call HR_INFOTYPE_LOG_GET_LIST.
    ~Suresh

  • Select Command taking a lot of time to execute

    Dear Experts,
    The SELECT command below takes a lot of time to execute.
          SELECT wid
                       matl_desc
                       INTO TABLE open_wid FROM zwb_table
                       WHERE ( weight2 = 0 OR weight2 IS NULL ).
    The table zwb_table contains around 7 lakh (700,000) records, which is why it takes a lot of time. I have also indexed zwb_table on the fields WID and WEIGHT2. (zwb_table has only WID as its primary key field.)
    Structure of Internal Table : open_wid ->
                                             wid LIKE zwb_table-wid,
                                             matl_desc LIKE zwb_table-matl_desc,
    Please suggest how to improve the performance of the above SELECT command.
    Thanks in Advance
    JACK

    Hi Jack,
    You have more than 7 lakh records in the Z table. It is not good practice to fetch all the records into an internal table and then restrict them with a WHERE clause inside a loop.
    I hope you have already created a secondary index in combination with the primary key.
    Check in ST05 whether your SELECT query fetches records using the index you created.
    Refer to the link below to check that your program is using the index you created.
    Re: Indexing
    Regards,
    Peranandam

  • Importing SLD content takes lot of time in Solman 4.0

    Hi Folks,
    Importing SLD content takes a lot of time while installing Solution Manager 4.0. It is in the 42nd phase of 45, on a SQL Server 2005 DB.
    It is stuck in "Configuring system landscape directory" and has NOT thrown any error as of now, touch wood.
    Can anyone tell me how much time it normally takes, or whether it has gone into a loop?

    Hi All,
    Finally I have received an error during the above-mentioned phase.
    sapinst.log  -
    >>>
    Import Status: PREPARING
    Import Status: PREPARING
    Import Status: PREPARING
    ERROR: CIM_ERR_FAILED: IO error: Read timed out
    <BR>CONFIGURATION=
    ERROR 2007-10-22 15:55:34
    CJS-30059  J2EE Engine configuration error.<br>DIAGNOSIS: Error when configuring J2EE Engine. See output of logfile java.log: 'JSE'.
    java.exe.log -
    >>>
    Import Status: PREPARING
    TYPE=A<BR>STATE=<BR>INFO_SHORT=com.sap.sld.api.wbem.exception.CIMCommunicationException: com.sap.sld.api.wbem.exception.CIMCommunicationException: CIM_ERR_FAILED: IO error: Read timed out
         at com.sap.sld.api.wbem.client.WBEMHttpRequestSender.send(WBEMHttpRequestSender.java:158)
         at com.sap.sld.api.wbem.client.WBEMRemoteClient.send(WBEMRemoteClient.java:720)
         at com.sap.sld.api.wbem.client.WBEMRemoteClient.send(WBEMRemoteClient.java:694)
         at com.sap.sld.api.wbem.client.WBEMRemoteClient.send(WBEMRemoteClient.java:638)
         at com.sap.sld.api.wbem.client.WBEMRemoteClient.referencesImpl(WBEMRemoteClient.java:375)
         at com.sap.sld.api.wbem.client.WBEMClient.references(WBEMClient.java:1773)
         at com.sap.sld.api.wbem.client.WBEMClientUtil.referencesComplete(WBEMClientUtil.java:490)
         at com.sap.lcr.pers.delta.importing.SAPCRUpgrade.collectAssociationsForRestoration(SAPCRUpgrade.java:700)
         at com.sap.lcr.pers.delta.importing.SAPCRUpgrade.delete(SAPCRUpgrade.java:355)
         at com.sap.lcr.pers.delta.importing.ImportHandler.loadFullImport(ImportHandler.java:1765)
         at com.sap.lcr.pers.delta.importing.ImportHandler.loadImpl(ImportHandler.java:1605)
         at com.sap.lcr.pers.delta.importing.ImportHandler.load(ImportHandler.java:1573)
         at com.sap.ctc.util.SLDConfig.importSldContent(SLDConfig.java:812)
         at com.sap.ctc.util.SLDConfig.performFunction(SLDConfig.java:154)
         at com.sap.ctc.util.ConfigServlet.doGet(ConfigServlet.java:69)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:390)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:264)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:347)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:325)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:887)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:241)
         at com.sap.engine.services.httpserver.server.Client.handle(Client.java:92)
         at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:148)
         at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
         at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
         at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
         at java.security.AccessController.doPrivileged(Native Method)
         at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:100)
         at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:170)
    Caused by: java.net.SocketTimeoutException: Read timed out
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.net.SocketInputStream.read(SocketInputStream.java:182)
         at com.tssap.dtr.client.lib.protocol.streams.ChunkedInputStream.readLine(ChunkedInputStream.java:323)
         at com.tssap.dtr.client.lib.protocol.streams.ResponseStream.readLine(ResponseStream.java:271)
         at com.tssap.dtr.client.lib.protocol.impl.Response.initialize(Response.java:476)
         at com.tssap.dtr.client.lib.protocol.Connection.getResponse(Connection.java:2604)
         at com.tssap.dtr.client.lib.protocol.Connection.sendInternal(Connection.java:1578)
         at com.tssap.dtr.client.lib.protocol.Connection.send(Connection.java:1427)
         at com.sap.sld.api.wbem.client.WBEMHttpRequestSender.send(WBEMHttpRequestSender.java:142)
         ... 30 more
    caused by:
    java.net.SocketTimeoutException: Read timed out
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:129)
         at java.net.SocketInputStream.read(SocketInputStream.java:182)
         at com.tssap.dtr.client.lib.protocol.streams.ChunkedInputStream.readLine(ChunkedInputStream.java:323)
         at com.tssap.dtr.client.lib.protocol.streams.ResponseStream.readLine(ResponseStream.java:271)
         at com.tssap.dtr.client.lib.protocol.impl.Response.initialize(Response.java:476)
         at com.tssap.dtr.client.lib.protocol.Connection.getResponse(Connection.java:2604)
         at com.tssap.dtr.client.lib.protocol.Connection.sendInternal(Connection.java:1578)
         at com.tssap.dtr.client.lib.protocol.Connection.send(Connection.java:1427)
         at com.sap.sld.api.wbem.client.WBEMHttpRequestSender.send(WBEMHttpRequestSender.java:142)
         at com.sap.sld.api.wbem.client.WBEMRemoteClient.send(WBEMRemoteClient.java:720)
         at com.sap.sld.api.wbem.client.WBEMRemoteClient.send(WBEMRemoteClient.java:694)
         at com.sap.sld.api.wbem.client.WBEMRemoteClient.send(WBEMRemoteClient.java:638)
         at com.sap.sld.api.wbem.client.WBEMRemoteClient.referencesImpl(WBEMRemoteClient.java:375)
         at com.sap.sld.api.wbem.client.WBEMClient.references(WBEMClient.java:1773)
         at com.sap.sld.api.wbem.client.WBEMClientUtil.referencesComplete(WBEMClientUtil.java:490)
         at com.sap.lcr.pers.delta.importing.SAPCRUpgrade.collectAssociationsForRestoration(SAPCRUpgrade.java:700)
         at com.sap.lcr.pers.delta.importing.SAPCRUpgrade.delete(SAPCRUpgrade.java:355)
         at com.sap.lcr.pers.delta.importing.ImportHandler.loadFullImport(ImportHandler.java:1765)
         at com.sap.lcr.pers.delta.importing.ImportHandler.loadImpl(ImportHandler.java:1605)
         at com.sap.lcr.pers.delta.importing.ImportHandler.load(ImportHandler.java:1573)
         at com.sap.ctc.util.SLDConfig.importSldContent(SLDConfig.java:812)
         at com.sap.ctc.util.SLDConfig.performFunction(SLDConfig.java:154)
         at com.sap.ctc.util.ConfigServlet.doGet(ConfigServlet.java:69)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:390)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:264)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:347)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:325)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:887)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:241)
         at com.sap.engine.services.httpserver.server.Client.handle(Client.java:92)
         at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:148)
         at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
         at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
         at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
         at java.security.AccessController.doPrivileged(Native Method)
         at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:100)
         at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:170)
    //   => Importing Data : E:/usr/sap/SOL/SYS/global/sld/model/CR_Content.zip URL=http://devsrv:50100 USER=J2EE_ADMIN ...
    Import Status: PREPARING
    ERROR: CIM_ERR_FAILED: IO error: Read timed out
    <BR>CONFIGURATION=

  • Background task taking a lot of time to execute

    Hi,
    There is a background task in my workflow which is taking a lot of time to execute.
    I have tested it in dialog mode and it works fine; it takes just 2 minutes.
    Only when the task is executed in the background does it take about 10 minutes.
    Is this a configuration problem? How do I check whether it is?

    You could try activating an SQL trace and see what the differences are.
    You could also check whether user parameters are influencing this, or ask your Basis team whether there are big differences between system users and dialog users in terms of resource allocation.
    Did you find anything different whilst debugging it?
    Kind regards, Rob Dielemans

  • Inserting into the base table of a materialized view takes a lot of time

    Dear All,
    I have created a materialized view which refreshes on commit, with query rewrite enabled, and I have also created a materialized view log on the base table. Inserting into the base table now takes a lot of time. Can you please tell me why?

    Dear Rahul,
    Here is my materialized view:
    create materialized view mv_test on prebuilt table refresh force on commit
    enable query rewrite as
    SELECT P.PID,
    SUM(HH_REGD) AS HH_REGD,
    SUM(INPRO_WORKS) AS INPRO_WORKS,
    SUM(COMP_WORKS) AS COMP_WORKS,
    SUM(SKILL_WAGE) AS SKILL_WAGE,
    SUM(UN_SKILL_WAGE) AS UN_SKILL_WAGE,
    SUM(WAGE_ADVANCE) AS WAGE_ADVANCE,
    SUM(MAT_AMT) AS MAT_AMT,
    SUM(DAYS) AS DAYS,
    P.INYYYYMM,P.FIN_YEAR
    FROM PROG_MONTHLY P
    WHERE SUBSTR(PID,5,2)<>'PP'
    GROUP BY PID,P.INYYYYMM,P.FIN_YEAR;
    Please also tell me whether ENABLE QUERY REWRITE causes any performance degradation.
    Thanks & Regards
    Kris
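    Part of the overhead is expected: every insert into PROG_MONTHLY also writes to the materialized view log, and the ON COMMIT refresh runs at commit time. For a fast-refreshable aggregate MV the log on the base table generally needs ROWID, the referenced columns, SEQUENCE and INCLUDING NEW VALUES. A sketch of such a log definition (adjust the column list to your table, and check the MV's refresh capabilities with DBMS_MVIEW.EXPLAIN_MVIEW):
    CREATE MATERIALIZED VIEW LOG ON prog_monthly
      WITH ROWID, SEQUENCE
           (pid, inyyyymm, fin_year,
            hh_regd, inpro_works, comp_works,
            skill_wage, un_skill_wage, wage_advance,
            mat_amt, days)
      INCLUDING NEW VALUES;
    An aggregate MV also generally needs COUNT(*) (and a COUNT for each summed column) in its select list to be fast refreshable; DBMS_MVIEW.EXPLAIN_MVIEW reports exactly which capabilities are missing.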

  • SELECT query takes too much time! Why?

    Please find my SELECT query below:
    select w~mandt
    w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
    w~kwmeng w~vrkme w~matwa w~charg w~pstyv
    w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
    w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
    w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
    w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
    w~bedae w~cuobj w~mtvfp
    x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
    x~edatu
    x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
    x~ezeit
    into table t_vbap
    from vbap as w
    inner join vbep as x on x~vbeln = w~vbeln and
    x~posnr = w~posnr and
    x~mandt = w~mandt
    where
    ( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
    ( ( ( erdat > pre_dat and erdat < p_syndt ) or
    ( erdat = p_syndt and erzet <= p_syntm ) ) ) and
    w~matnr in s_matnr and
    w~pstyv in s_itmcat and
    w~lfrel in s_lfrel and
    w~abgru = ' ' and
    w~kwmeng > 0 and
    w~mtvfp in w_mtvfp and
    x~ettyp in w_ettyp and
    x~bdart in s_req_tp and
    x~plart in s_pln_tp and
    x~etart in s_etart and
    x~abart in s_abart and
    ( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
    The problem: this statement takes too much time to execute.
    Could anybody change this statement and help me reduce the DB access time?
    Thanks

    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For all Entries
    •     Select Over more than one internal table
    Selection Criteria
    1.     Restrict the data to the selection criteria itself, rather than filtering it out using the ABAP code using CHECK statement. 
    2.     Select with selection list.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE SBOOK_WA-CARRID = 'LH' AND
                  SBOOK_WA-CONNID = '0400'.
    Select Statements   Select Queries
    1.     Avoid nested selects
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE SBOOK_WA-CARRID = 'LH' AND
                  SBOOK_WA-CONNID = '0400'.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
    4.     For testing existence, use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit. 
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    If all primary key fields are supplied in the Where conditions you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    Select Statements SQL Interface
    1.     Use column updates instead of single-row updates
    to update your database tables.
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    2.     For all frequently used Select statements, try to use an index.
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    3.     Using buffered tables improves the performance considerably.
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements  Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
                Select * from zflight where airln = 'LF' and cntry = 'IN'.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements  For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points to be must considered FOR ALL ENTRIES
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
              Loop at int_cntry.
      Select single * from zfligh into int_fligh
      where cntry = int_cntry-cntry.
      Append int_fligh.
                          Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements Select Over more than one Internal table
    1.     Its better to use a views instead of nested Select statements.
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from view DD01V_WA
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT
    2.     To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key are available in WHERE clause for the tables that are joined. If the primary keys are not provided in join the Joining of tables itself takes time.
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    Internal Tables
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    READ TABLE ITAB INTO WA WITH KEY K = 'X'. IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    8.    If collect semantics is required, it is always better to use to COLLECT rather than READ BINARY and then ADD.
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables by using “ITAB2[ ] = ITAB1[ ]” as compared to “LOOP-APPEND-ENDLOOP”.
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    13.   Specify the sort key as restrictively as possible to run the program faster.
    “SORT ITAB BY K.” makes the program runs faster as compared to “SORT ITAB.”
    Internal Tables         contd…
    Hashed and Sorted tables
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Parsing the query takes too much time.

    Hello.
    I am hitting a bug in Oracle XE (parsing of some queries takes too much time).
    A similar bug was previously found in the commercial release and was successfully fixed (SR Number 3-3301916511).
    Please, raise a bug for Oracle XE.
    Steps to reproduce the issue:
    1. Extract files from testcase_dump.zip and testcase_sql.zip
    2. Under username SYSTEM execute script schema.sql
    3. Import data from file TESTCASE14.DMP
    4. Under username SYSTEM execute script testcase14.sql
    SQL text can be downloaded from http://files.mail.ru/DJTTE3
    Datapump dump of testcase can be downloaded from http://files.mail.ru/EC1J36
    Regards,
    Viacheslav.

    Bug number? Which version does the fix apply to?
    Is there a relevant Note that describes the problem and points out bug/patch availability?
    With a little luck some PSEs might be "backported", since 11g XE is not the base release, e.g. 11.2.0.1.
