Query taking more time to execute

Dear All,
A user complained that his query is taking more time than it did yesterday and that overall DB performance is also very slow. What would be your strategy to check and fix the problem?
What would be the right approach to diagnose and fix this?
Regards,
DevD!
Edited by: user12138514 on Nov 8, 2009 9:35 PM

And here is a list of possible causes of slow performance:
Slow network connection.
Bad connection management.
Bad use of cursors and the shared pool.
Bad SQL, queries that are not properly tuned, or SQL that does not use bind variables.
Use of nonstandard initialization parameters.
Getting database I/O wrong.
Redo log setup problems.
Serialization of data blocks in the buffer cache due to lack of free lists, free list groups, transaction slots (INITRANS), or shortage of rollback segments.
Long full table scans.
High amounts of recursive (SYS) SQL.
Deployment and migration errors.
Tables that need to be analyzed.
Tables that need to be indexed.
Overparsing.
You have old disks.
Not enough memory.
SGA is too small or too big.
Same for the buffer cache.
Your OS needs tuning.
Your disk is full.
Running too many instances on a single server.
Your applications are badly written.
Your applications were ported from Sybase or Microsoft SQL Server.
Your applications were ported from DB2.
Your applications dynamically create temp tables.
Someone wrote referential integrity using triggers, Java, VB, ...
Someone wrote replication using triggers, Java, VB, ...
Someone wrote a sequence using max + 1.
Table datatypes do not follow DBMS concepts; for example, dates and numbers are stored as strings.
The statistics are not up to date.
There are no statistics.
What are statistics?
We are using the RBO.
Those are a few of the possible causes I could think of; there are lots of others.
- To analyze your application, you can install Statspack; it will give you some indications (see the sketch below).
Hth
Girish Sharma
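
A minimal sketch of the Statspack workflow mentioned above (these are the standard scripts shipped with the database; run spcreate.sql once as SYSDBA, then snapshot around the slow period):

@?/rdbms/admin/spcreate.sql        -- one-time install, creates the PERFSTAT schema
CONNECT perfstat
EXEC statspack.snap;               -- snapshot before the slow workload
-- ... let the workload run ...
EXEC statspack.snap;               -- snapshot after
@?/rdbms/admin/spreport.sql        -- report between the two snapshot IDs

On 10g and later, with the Diagnostics Pack licensed, an AWR report (@?/rdbms/admin/awrrpt.sql) gives a similar picture with less setup.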

Similar Messages

  • Query is taking more time to execute

    Hi,
    A query is taking more time to execute.
    But when I execute the same query on another server, it returns output immediately.
    What could be the reason for this?
    Thanks in advance.

    'My car doesn't start, please help me to start my car'
    Do you think we are clairvoyant?
    Or is your salary subtracted for every letter you type here?
    Please be aware this is not a chatroom, and we can not see your webcam.
    Sybrand Bakker
    Senior Oracle DBA
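
    One concrete check for the "fast on one server, slow on the other" situation (a hedged sketch, not from the thread): capture the execution plan actually used on each server and compare them; differences in statistics, parameters, or indexes usually show up there. The sql_id below is a placeholder.

    -- plan of the last statement executed in this session
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'TYPICAL'));
    -- plan of a specific cached statement (replace the sql_id with your own)
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('7h35uxf5uhmm1', NULL, 'TYPICAL'));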

  • Stopping a Query taking more time to execute in runtime in Oracle Forms.

    Hi,
    In the present application, one of the Oracle Forms screens takes a long time to execute a query. The user wants an option to stop the query mid-way and browse the result (whatever has been fetched before stopping the query).
    We have tried three approaches:
    1. Set maximum fetched records at the form and block level.
    2. Set maximum fetch time at the form and block level.
    The above two methods did not provide an appropriate solution for us.
    3. The third approach we applied was setting the interaction mode to "NON BLOCKING" at the form level.
    This seemed to work: while the query took a long time to execute, Oracle Application Server prompted a message to press Esc to cancel the query, and it displayed the results fetched up to that point.
    But the drawback is that on pressing Esc, it kills the session itself, which causes the entire application to collapse.
    Please suggest if there is any alternative approach, or how to overcome this particular scenario.
    This kind of facility is already present in TOAD and PL/SQL Developer, where we can stop an executing query and browse the results fetched up to that point. Is a similar facility available in Oracle Forms? Please suggest.
    Thanks and Regards,
    Suraj
    Edited by: user10673131 on Jun 25, 2009 4:55 AM

  • Query is taking more time to execute in PROD

    Hi All,
    Can anyone tell me why this query is taking so much time? When I use it for a single trx_number record it works fine, but when I try to use it for all records it does not fetch any rows and just keeps running.
    SELECT DISTINCT OOH.HEADER_ID
    ,OOH.ORG_ID
    ,ct.CUSTOMER_TRX_ID
    ,ool.ship_from_org_id
    ,ct.trx_number IDP_SHIPMENT_ID
    ,ctt.type STATUS_CODE
    ,SYSDATE STATUS_DATE
    ,ooh.attribute16 IDP_ORDER_NBR --Change based on testing on 21-JUL-2010 in UAT
    ,lpad(rac_bill.account_number,6,0) IDP_BILL_TO_CUSTOMER_NBR
    ,rac_bill.orig_system_reference
    ,rac_ship_party.party_name SHIP_TO_NAME
    ,raa_ship_loc.address1 SHIP_TO_ADDR1
    ,raa_ship_loc.address2 SHIP_TO_ADDR2
    ,raa_ship_loc.address3 SHIP_TO_ADDR3
    ,raa_ship_loc.address4 SHIP_TO_ADDR4
    ,raa_ship_loc.city SHIP_TO_CITY
    ,NVL(raa_ship_loc.state,raa_ship_loc.province) SHIP_TO_STATE
    ,raa_ship_loc.country SHIP_TO_COUNTRY_NAME
    ,raa_ship_loc.postal_code SHIP_TO_ZIP
    ,ooh.CUST_PO_NUMBER CUSTOMER_ORDER_NBR
    ,ooh.creation_date CUSTOMER_ORDER_DATE
    ,ool.actual_shipment_date DATE_SHIPPED
    ,DECODE(mp.organization_code,'CHP', 'CHESAPEAKE'
    ,'CSB', 'CHESAPEAKE'
    ,'DEP', 'CHESAPEAKE'
    ,'CHESAPEAKE') SHIPPED_FROM_LOCATION --'MEMPHIS' --'HOUSTON'
    ,ooh.freight_carrier_code FREIGHT_CARRIER
    ,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('FREIGHT',ct.customer_trx_id,ct.org_id),0)
    + NVL(XX_FSG_NA_FASTRAQ_IFACE.get_line_fr_amt ('FREIGHT',ct.customer_trx_id,ct.org_id),0)FREIGHT_CHARGE
    ,ooh.freight_terms_code FREIGHT_TERMS
    ,'' IDP_BILL_OF_LADING
    ,(SELECT WAYBILL
    FROM WSH_DELIVERY_DETAILS_OE_V
    WHERE -1=-1
    AND SOURCE_HEADER_ID = ooh.header_id
    AND SOURCE_LINE_ID = ool.line_id
    AND ROWNUM =1) WAYBILL_CARRIER
    ,'' CONTAINERS
    ,ct.trx_number INVOICE_NBR
    ,ct.trx_date INVOICE_DATE
    ,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('LINE',ct.customer_trx_id,ct.org_id),0) +
    NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('TAX',ct.customer_trx_id,ct.org_id),0) +
    NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('FREIGHT',ct.customer_trx_id,ct.org_id),0)INVOICE_AMOUNT
    ,NULL IDP_TAX_IDENTIFICATION_NBR
    ,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('TAX',ct.customer_trx_id,ct.org_id),0) TAX_AMOUNT_1
    ,NULL TAX_DESC_1
    ,NULL TAX_AMOUNT_2
    ,NULL TAX_DESC_2
    ,rt.name PAYMENT_TERMS
    ,NULL RELATED_INVOICE_NBR
    ,'Y' INVOICE_PRINT_FLAG
    FROM ra_customer_trx_all ct
    ,ra_cust_trx_types_all ctt
    ,hz_cust_accounts rac_ship
    ,hz_cust_accounts rac_bill
    ,hz_parties rac_ship_party
    ,hz_locations raa_ship_loc
    ,hz_party_sites raa_ship_ps
    ,hz_cust_acct_sites_all raa_ship
    ,hz_cust_site_uses_all su_ship
    ,ra_customer_trx_lines_all rctl
    ,oe_order_lines_all ool
    ,oe_order_headers_all ooh
    ,mtl_parameters mp
    ,ra_terms rt
    ,OE_ORDER_SOURCES oos
    ,XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V
    WHERE ct.cust_trx_type_id = ctt.cust_trx_type_id
    AND ctt.TYPE <> 'BR'
    AND ct.org_id = ctt.org_id
    AND ct.ship_to_customer_id = rac_ship.cust_account_id
    AND ct.bill_to_customer_id = rac_bill.cust_account_id
    AND rac_ship.party_id = rac_ship_party.party_id
    AND su_ship.cust_acct_site_id = raa_ship.cust_acct_site_id
    AND raa_ship.party_site_id = raa_ship_ps.party_site_id
    AND raa_ship_loc.location_id = raa_ship_ps.location_id
    AND ct.ship_to_site_use_id = su_ship.site_use_id
    AND su_ship.org_id = ct.org_id
    AND raa_ship.org_id = ct.org_id
    AND ct.customer_trx_id = rctl.customer_trx_id
    AND ct.org_id = rctl.org_id
    AND rctl.interface_line_attribute6 = to_char(ool.line_id)
    AND rctl.org_id = ool.org_id
    AND ool.header_id = ooh.header_id
    AND ool.org_id = ooh.org_id
    AND mp.organization_id = ool.ship_from_org_id
    AND ooh.payment_term_id = rt.term_id
    AND xla_ael_sl_v.last_update_date >= NVL(p_last_update_date,xla_ael_sl_v.last_update_date)
    AND ooh.order_source_id = oos.order_source_id --Change based on testing on 19-May-2010
    AND oos.name = 'FASTRAQ' --Change based on testing on 19-May-2010
    AND ooh.org_id = g_org_id --Change based on testing on 19-May-2010
    AND ool.flow_status_code = 'CLOSED'
    AND xla_ael_sl_v.trx_hdr_id = ct.customer_trx_id
    AND trx_hdr_table = 'CT'
    AND xla_ael_sl_v.gl_transfer_status = 'Y'
    AND xla_ael_sl_v.accounted_dr IS NOT NULL
    AND xla_ael_sl_v.org_id = ct.org_id;
    -- AND ct.trx_number = '2000080';

    Hello Friend,
    Your query will definitely take more time, or even fail, in PROD because of the way it is written. Here are a few observations; maybe they can help:
    1. XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V: Never use a view inside such a long query, because a view is just a window onto the records,
    and when it is joined to other tables, all the tables used to create the view also become part of the join condition.
    First of all, please check whether you really need this view. I guess you are using it to check whether the records have been created as journal entries or not?
    Please check the possibility of finding this through other AR tables.
    2. Remove the _ALL tables and instead use the corresponding org-specific views (if you are on 11i) or the synonyms (in R12).
    For example: for ra_cust_trx_types_all, use ra_cust_trx_types.
    This ensures that the query executes only for those ORG_IDs which are assigned to that responsibility.
    3. Check with the DBA whether Gather Schema Statistics has been run at least for the ONT and RA tables.
    You can also check this yourself using:
    SELECT LAST_ANALYZED FROM ALL_TABLES WHERE TABLE_NAME = 'RA_CUSTOMER_TRX_ALL';
    If the tables are not analyzed, the CBO will not be able to optimize your query.
    4. Try to remove the DISTINCT keyword. This is a major reason for this problem.
    5. If it is a report, try to split the logic into separate queries (using a procedure), populate the whole data set into a custom table, and use this custom table for generating the report.
    Thanks,
    Neeraj Shrivastava
    [email protected]
    Edited by: user9352949 on Oct 1, 2010 8:02 PM
    Edited by: user9352949 on Oct 1, 2010 8:03 PM
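
    Following up on point 3, a minimal sketch of gathering statistics for one of the tables above (the owning schema and parameter values are assumptions; on E-Business Suite the supported route is the Gather Schema Statistics concurrent program / FND_STATS rather than calling DBMS_STATS directly):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'AR',                      -- assumed owner of RA_CUSTOMER_TRX_ALL
        tabname          => 'RA_CUSTOMER_TRX_ALL',
        cascade          => TRUE,                      -- also gather index statistics
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /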

  • Performance-tuned report taking more time to execute - URGENT

    Dear Experts,
    One report program is taking a long time to execute in a background session. I took up that report to tune its performance, but it is still taking 12 hours or more to execute.
    The code is given below.
    Before tuning:
    DATA  :  BEGIN OF ITAB OCCURS 0,
            LGOBE    LIKE T001L-LGOBE,
            105DT    LIKE MKPF-BUDAT,
            XBLNR    LIKE MKPF-XBLNR,
            BEDAT    LIKE EKKO-BEDAT,
            LIFNR    LIKE EKKO-LIFNR,
            NAME1    LIKE LFA1-NAME1,
            EKKO     LIKE EKKO-BEDAT,
            BISMT    LIKE MARA-BISMT,
            MAKTX    LIKE MAKT-MAKTX,
            NETPR    LIKE EKPO-NETPR,
            PEINH    LIKE EKPO-PEINH,
            VALUE    TYPE P DECIMALS 2,
            DISPO    LIKE MARC-DISPO,
            DSNAM    LIKE T024D-DSNAM,
            AGE      TYPE P DECIMALS 0,
            PARLIFNR LIKE EKKO-LIFNR,
            PARNAME1 LIKE LFA1-NAME1,
             MBLNR    LIKE MSEG-MBLNR,
             MJAHR    LIKE MSEG-MJAHR,
             ZEILE    LIKE MSEG-ZEILE,
             BWART    LIKE MSEG-BWART,
             MATNR    LIKE MSEG-MATNR,
             WERKS    LIKE MSEG-WERKS,
             LIFNR    LIKE MSEG-LIFNR,
             MENGE    LIKE MSEG-MENGE,
             MEINS    LIKE MSEG-MEINS,
             EBELN    LIKE MSEG-EBELN,
             EBELP    LIKE MSEG-EBELP,
             LGORT    LIKE MSEG-LGORT,
             SMBLN    LIKE MSEG-SMBLN,
             BUKRS    LIKE MSEG-BUKRS,
             GSBER    LIKE MSEG-GSBER,
             INSMK    LIKE MSEG-INSMK,
             XAUTO    LIKE MSEG-XAUTO,
             END OF ITAB.
    SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
             EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
             FROM MSEG
             INTO CORRESPONDING FIELDS OF TABLE ITAB
             WHERE WERKS  EQ P_WERKS AND
                   MBLNR  IN S_MBLNR AND
                   BWART  EQ '105'   and
                   mblnr ne '5002361303' and
                   mblnr ne '5003501080' and
                   mblnr ne '5002996300' and
                   mblnr ne '5002996407' AND
                    mblnr ne '5003587026' AND
                    mblnr ne '5003587026' AND
                    mblnr ne '5003493186' AND
                    mblnr ne '5002720583' AND
                    mblnr ne '5002928122' AND
                    mblnr ne '5002628263'.
    After tuning:
    TYPES :  BEGIN OF ST_ITAB ,
             MBLNR    LIKE MSEG-MBLNR,
             MJAHR    LIKE MSEG-MJAHR,
             ZEILE    LIKE MSEG-ZEILE,
             BWART    LIKE MSEG-BWART,
             MATNR    LIKE MSEG-MATNR,
             WERKS    LIKE MSEG-WERKS,
             LIFNR    LIKE MSEG-LIFNR,
             MENGE    LIKE MSEG-MENGE,
             MEINS    LIKE MSEG-MEINS,
             EBELN    LIKE MSEG-EBELN,
             EBELP    LIKE MSEG-EBELP,
             LGORT    LIKE MSEG-LGORT,
             SMBLN    LIKE MSEG-SMBLN,
             BUKRS    LIKE MSEG-BUKRS,
             GSBER    LIKE MSEG-GSBER,
             INSMK    LIKE MSEG-INSMK,
             XAUTO    LIKE MSEG-XAUTO,
            END OF ST_ITAB.
    DATA : ITAB TYPE STANDARD TABLE OF ST_ITAB WITH HEADER LINE.
    SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
             EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
             FROM MSEG
             INTO TABLE ITAB
             WHERE WERKS  EQ P_WERKS AND
                   MBLNR  IN S_MBLNR AND
                   BWART  EQ '105'   AND
                   MBLNR NE '5002361303' AND
                   MBLNR NE '5003501080' AND
                   MBLNR NE '5002996300' AND
                   MBLNR NE '5002996407' AND
                    MBLNR NE '5003587026' AND
                    MBLNR NE '5003587026' AND
                    MBLNR NE '5003493186' AND
                    MBLNR NE '5002720583' AND
                    MBLNR NE '5002928122' AND
                    MBLNR NE '5002628263'.
    Please give me the solution.
    Reward available for useful answer.
    thanks in adv,
    jai.m

    Hi.
    The SELECT statement accessing the MSEG table is often slow.
    To improve the performance of access to MSEG:
    1. Check for the relevant notes in the SAP Service Marketplace if you are working with the CIN version.
    2. Index the MSEG table.
    3. Check and limit the columns in the SELECT statement.
    A possible way:
    SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
    EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
    FROM MSEG
    INTO CORRESPONDING FIELDS OF TABLE ITAB
    WHERE WERKS EQ P_WERKS AND
    MBLNR IN S_MBLNR AND
    BWART EQ '105'.
    DELETE itab WHERE mblnr EQ '5002361303'.
    DELETE itab WHERE mblnr EQ '5003501080'.
    DELETE itab WHERE mblnr EQ '5002996300'.
    DELETE itab WHERE mblnr EQ '5002996407'.
    DELETE itab WHERE mblnr EQ '5003587026'.
    DELETE itab WHERE mblnr EQ '5003493186'.
    DELETE itab WHERE mblnr EQ '5002720583'.
    DELETE itab WHERE mblnr EQ '5002928122'.
    DELETE itab WHERE mblnr EQ '5002628263'.
    Regards
    Bala.M
    Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM

  • "3.x Analyzer Server" EVENT taking more Time when Executing Query

    Hi All,
    When I execute the query through RSRT it takes more time. I checked the statistics
    and observed that the "3.x Analyzer Server" event is taking more time.
    What do I have to do? How can I reduce this 3.x Analyzer Server event time?
    Please Suggest me.
    Thanks,
    Kiran Manyam

    Hello,
    Check these on query performance:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Query Performance
    Reg,
    Dhanya

  • Trunc taking more time to execute..........

    Hi,
    I am running a query that includes TRUNC in the WHERE condition, but it is taking more time to execute. The query is:
    SELECT POD.REQ_DISTRIBUTION_ID, X.*
    FROM (
    SELECT MSI.SEGMENT1||'.'||MSI.SEGMENT2||'.'||MSI.SEGMENT3||'.'||MSI.SEGMENT4 ITEM, RT.TRANSACTION_TYPE,
         MSI.DESCRIPTION, rt.TRANSACTION_ID,RT.PARENT_TRANSACTION_ID,
         RSH.RECEIPT_NUM,
         RSH.SHIP_TO_ORG_ID,
         TRUNC(RT.TRANSACTION_DATE) RCP_DATE,
         RSL.QUANTITY_RECEIVED RCV_QTY,
         PLLA.SHIPMENT_NUM,
         PLA.LINE_NUM PO_LINE,
         PHA.SEGMENT1 PO_NUM,
         PHA.CREATION_DATE,
         PHA.APPROVED_DATE,
         PLA.QUANTITY PO_QTY,
         RSH.SHIPMENT_HEADER_ID,
         RSL.SHIPMENT_LINE_ID,
         PLLA.LINE_LOCATION_ID
    FROM PO_HEADERS_ALL PHA,
         PO_LINES_ALL PLA,
         MTL_SYSTEM_ITEMS MSI,
         RCV_SHIPMENT_HEADERS RSH,
         RCV_SHIPMENT_LINES RSL,
         RCV_TRANSACTIONS RT,
         PO_LINE_LOCATIONS_ALL PLLA
    WHERE PHA.PO_HEADER_ID = PLA.PO_HEADER_ID
    AND     PHA.PO_HEADER_ID = RSL.PO_HEADER_ID
    AND     PHA.PO_HEADER_ID = PLLA.PO_HEADER_ID
    AND     PHA.ORG_ID = PLLA.ORG_ID
    AND     PLA.ITEM_ID = MSI.INVENTORY_ITEM_ID
    AND     PLA.PO_LINE_ID = RSL.PO_LINE_ID
    AND     MSI.INVENTORY_ITEM_ID = RSL.ITEM_ID
    AND     MSI.ORGANIZATION_ID = RSH.SHIP_TO_ORG_ID
    AND     RSH.SHIPMENT_HEADER_ID = RSL.SHIPMENT_HEADER_ID
    AND     RSH.SHIPMENT_HEADER_ID = RT.SHIPMENT_HEADER_ID
    AND     RSL.SHIPMENT_LINE_ID = RT.SHIPMENT_LINE_ID
    AND     RSL.PO_LINE_ID = PLLA.PO_LINE_ID
    AND     RT.TRANSACTION_TYPE = 'RECEIVE'
    AND     NVL(MSI.ENABLED_FLAG,'N') = 'Y'
    AND     NVL(RSL.QUANTITY_RECEIVED,0) > 0
    AND     PHA.ORG_ID = :P_ORG_ID
    AND        TRUNC(RT.TRANSACTION_DATE) BETWEEN :P_FROM_DATE AND :P_TO_DATE ) X, PO_DISTRIBUTIONS_ALL POD
    WHERE POD.LINE_LOCATION_ID = X.LINE_LOCATION_ID
    How can I make it execute faster? Is there any alternative to TRUNC?
    PS

    You could use a function-based index:
    create index idx_trunc_trans_date on RCV_TRANSACTIONS(TRUNC(TRANSACTION_DATE));
    I guess the TRUNC you are using will not be of any use unless you have a time component in your p_from_date and p_to_date, or you enter the same date for the to and from dates.
    TRUNC just truncates the time in the date and resets it to the beginning of the day. I don't think you need TRUNC at all.
    If P_FROM_DATE and P_TO_DATE are date-only values, just do
    l_to_date := (p_to_date + 1) - (1/(24*3600));  -- use local variables
    and then use
    AND RT.TRANSACTION_DATE BETWEEN :P_FROM_DATE AND L_TO_DATE
    I believe you should get the same results unless you have a time component in your from and to date parameters.
    If you are getting the same results, just remove the TRUNC.
    G.
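
    A short sketch of the rewritten date filter (an equivalent half-open range, a common alternative to the arithmetic above; it assumes :P_FROM_DATE and :P_TO_DATE carry no time component):

    AND RT.TRANSACTION_DATE >= :P_FROM_DATE
    AND RT.TRANSACTION_DATE <  :P_TO_DATE + 1

    Written this way the predicate can use a plain index on TRANSACTION_DATE, so the function-based index is not needed.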

  • Report taking more time to execute

    Hi,
    I have created a report on 0FIGL_O02 (General Ledger: Line Items DSO).
    This report was created for document details, so using this report I want to create a jump report. But when I run the report it takes a long time.
    The variable selections I used for this report are:
    1. Company code
    2. G/L Account
    3. Posting date
    There are no calculations in this report, but it is taking a long time and does not finish executing.
    Please suggest.
    Regards,
    prasad.

    Hi,
    Please rebuild the statistics of the ODS.
    Also check the various options available below:
    ODS Performance
    -Vikram

  • Collection function taking more time to execute

    Hi all,
    I am using a collection function in my SQL report and it is taking plenty of time to return rows. Is there any way to get the resulting rows (using the collection) without consuming so much time?
    SELECT tab_to_string(CAST(COLLECT(wot_vw."Name") AS t_varchar2_tab)) FROM  REPORT_VW wot_vw
    WHERE(wot_vw."Task ID" = wot."task_id") GROUP BY wot_rept_vw."Task ID") as "WO"
    from   TASK_TBL wot
    INNER JOIN
    (SELECT "name", MAX("task_version") AS MaxVersion from TASK_TBL
             GROUP BY "name") q
    ON (wot."name" = q."name" AND wot."task_version" = q.MaxVersion)
    order by NLSSORT(wot."name",'NLS_SORT=generic_m')
    Here the ORDER BY is causing the problem.
    Apex version is 4.0
    Thanks.
    Edited by: apex on Feb 21, 2012 7:24 PM

    'My car doesn't start, please help me to start my car'
    Do you think we are clairvoyant?
    Or is your salary subtracted for every letter you type here?
    Please be aware this is not a chatroom, and we can not see your webcam.
    Sybrand Bakker
    Senior Oracle DBA
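
    A hedged option for the slow NLSSORT ordering (not suggested in the thread; the table and column names mirror the query above and are illustrative): create a function-based index on the same NLSSORT expression, which the optimizer can then use instead of sorting every row at query time.

    CREATE INDEX task_tbl_name_ling_idx
      ON TASK_TBL (NLSSORT("name", 'NLS_SORT=generic_m'));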

  • CO41 taking more time to execute

    Hi,
    I am running CO41 (PRD system), but the time it is taking to execute is excessively high. Ultimately I have to kill the session to stop the process.
    What could be the probable reason for this?
    My inputs to CO41 are:
    Planning Plant
    MRP Controller
    Production Plant
    Kindly guide.
    Thanks in advance,
    Harris

    Dear,
    These runtime problems occur during the conversion of planned orders; they may be due to the ATP check or to the buffering of certain tables.
    In the check control (Transaction OPPJ), you can define the stocks, inward movements and outward movements that are considered.
    Below, you will find a list of database tables that are accessed, depending on the setting.
    Stocks:
    Storage locations:    MARD
    Sales order stocks:   MSSA
    Project stocks:       MSSQ
    Subcontracting:       MSSL
    Batches:              MCHB
    Customer consignment: MSSK
    Inward/outward movements:
    Purchase orders:             MDBS (view for EKPO and EKET)
    Production orders:           MDFA (view for AFPO)
    Purchase requisitions:       EBAN
    Dependent req., order res.:  MDRS (view for RESB)
    Manual reservations:         MDRS (view for RESB)
    Stock transfer reservations: MDUR (view for REUL and RESB)
    Sales requirements:          VBBE (individual records)
                                VBBS (totals records)
    Please use the relevant settings for the ATP check.
    Also raise an OSS message for the same.
    Hope it will help you.
    Regards,
    R.Brahmankar

  • Suddenly ODI scheduled executions taking more time than usual.

    Hi,
    I have set ODI packages scheduled for execution.
    For some days they have been taking more time to execute.
    They used to take approx. 1 hr 30 mins.
    Now they are taking approx. 3 hrs to 3 hrs 15 mins.
    And there is no major change in the data in terms of quantity.
    My ODI version is:
    Standalone Edition Version 11.1.1
    Build ODI_11.1.1.3.0_GENERIC_100623.1635
    ODI packages are mainly using Oracle as SOURCE and TARGET DB.
    What should I check to find out the reasons for the sudden increase in execution time?
    Any pointers regarding this would be appreciated.
    Thanks,
    Mahesh

    Mahesh,
    Use some repository queries to retrieve the session task timings and compare your slow execution to a previous acceptable execution, then look for the biggest changes. This will highlight where you are slowing down; then it's off to tune that item accordingly.
    See here for some example reports; you might need to tweak them for your current repository version, but I don't think the table structures have changed that much:
    http://rnm1978.wordpress.com/2010/11/03/analysing-odi-batch-performance/
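
    A rough sketch of the kind of repository query meant above, run against the ODI work repository (the table and column names here -- SNP_SESSION, SNP_SESS_TASK_LOG, SESS_NO, TASK_DUR, NB_ROW -- are assumptions from memory; verify them against your repository version or the linked post):

    -- compare per-task durations of a slow and a fast session of the same package
    SELECT s.sess_name,
           t.sess_no,
           t.task_dur   AS duration_seconds,
           t.nb_row     AS rows_processed
    FROM   snp_sess_task_log t
    JOIN   snp_session       s ON s.sess_no = t.sess_no
    WHERE  t.sess_no IN (:slow_session_no, :fast_session_no)
    ORDER  BY t.sess_no, t.task_dur DESC;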

  • Issue with background job--taking more time

    Hi,
    We have a custom program which runs as a background job. It runs every 2 hours.
    It's taking more time than expected on ECC6 SR2 & SR3 on Oracle 10.2.0.4. We found that it takes more time while executing native SQL against DBA_EXTENTS. When we tried to fetch fewer records from DBA_EXTENTS, it worked fine.
    But we need the program to fetch all the records.
    It works fine on ECC5 on 10.2.0.2 & 10.2.0.4.
    Here is the SQL statement:
    EXEC SQL PERFORMING SAP_GET_EXT_PERF.
      SELECT OWNER, SEGMENT_NAME, PARTITION_NAME,
             SEGMENT_TYPE, TABLESPACE_NAME,
             EXTENT_ID, FILE_ID, BLOCK_ID, BYTES
       FROM SYS.DBA_EXTENTS
       WHERE OWNER LIKE 'SAP%'
       INTO
       :EXTENTS_TBL-OWNER, :EXTENTS_TBL-SEGMENT_NAME,
       :EXTENTS_TBL-PARTITION_NAME,
       :EXTENTS_TBL-SEGMENT_TYPE , :EXTENTS_TBL-TABLESPACE_NAME,
       :EXTENTS_TBL-EXTENT_ID, :EXTENTS_TBL-FILE_ID,
       :EXTENTS_TBL-BLOCK_ID, :EXTENTS_TBL-BYTES
    ENDEXEC.
    Can somebody suggest what has to be done?
    Has something changed in SAP7 (wrt background job etc) or do we need to fine tune the SQL statement?
    Regards,
    Vivdha

    Hi,
    There was an issue with LMTs, but that was fixed in 10.2.0.4,
    besides missing system statistics.
    But WHY do you collect this information every 2 hours? The dba_extents view is based on heavily used system tables.
    Normally, you would run queries of this type against dba_extents only occasionally, e.g. to identify corrupt blocks and such:
    SELECT  owner , segment_name , segment_type
            FROM  dba_extents
           WHERE  file_id = &AFN
             AND  &BLOCKNO BETWEEN block_id AND block_id + blocks -1
    Not sure what you want to achieve with it.
    There are monitoring tools (OEM ?) around that may cover your needs.
    Bye
    yk
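
    Since missing system statistics come up above, a minimal sketch of the standard DBMS_STATS calls (on an SAP installation, follow the relevant SAP notes and BR*Tools procedures rather than running these ad hoc):

    EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;            -- dictionary objects underlying DBA_EXTENTS
    EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;         -- X$ fixed objects
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');  -- noworkload system statistics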

  • More time to execute

    Hi,
    The following code is taking more time to execute...
    Kindly assist..
    * check wagetype selection
      REFRESH it_p8wage.
      CLEAR:  it_p8wage, wa_p8wage, p8wage_flag.
      LOOP AT p0008.
        DO 20 TIMES
        VARYING wa_p8wage-lgart FROM p0008-lga01 NEXT p0008-lga02
        VARYING wa_p8wage-betrg FROM p0008-bet01 NEXT p0008-bet02.
          IF wa_p8wage-lgart IN lgartahm
            AND NOT wa_p8wage IS INITIAL.
            MOVE: 'X' TO p8wage_flag,
                  p0008-waers TO wa_p8wage-waers.
            INSERT wa_p8wage INTO TABLE it_p8wage.
            CLEAR wa_p8wage.
          ENDIF.
        ENDDO.
      ENDLOOP.
    * no wagetype matching found
      IF p8wage_flag IS INITIAL.
        READ TABLE lgartahm.
        IF sy-subrc EQ 0.
          REJECT.
        ENDIF.
      ENDIF.

    Hi,
    For each and every employee there may not be data for all 20 wage types, from lga01/bet01 through lga20/bet20.
    In your code it is better to exit the DO loop when wa_p8wage-lgart IS INITIAL rather than looping 20 times even when there is no data.
    Wage types are assigned to an employee in sequence from 1, 2, ... 20,
    so if lga03 is initial, there will be no data for lga04 through lga20.
    This may not be the exact reason for the slow response time of your report, but it may be useful.
    Look at the code below for the change I suggested:
    LOOP AT p0008.
    DO 20 TIMES
    VARYING wa_p8wage-lgart FROM p0008-lga01 NEXT p0008-lga02
    VARYING wa_p8wage-betrg FROM p0008-bet01 NEXT p0008-bet02.
    IF wa_p8wage-lgart IS INITIAL.
      exit.                  " this will exit the do loop
    ENDIF.
    IF wa_p8wage-lgart IN lgartahm
    AND NOT wa_p8wage IS INITIAL.
    MOVE: 'X' TO p8wage_flag,
    p0008-waers TO wa_p8wage-waers.
    INSERT wa_p8wage INTO TABLE it_p8wage.
    CLEAR wa_p8wage.
    ENDIF.
    ENDDO.
    ENDLOOP.

  • Custom Report taking more time to complete Normal

    Hi All,
    A custom report (Aging Report) in Oracle is taking more time to complete with status Normal.
    In one instance the same report takes 5 minutes, and in the other instance it takes 40-50 minutes to complete.
    We have enabled trace and checked the trace file, but all the queries appear to be working fine.
    Could you please advise regarding this issue?
    Thanks in advance!!

    TKPROF: Release 10.1.0.5.0 - Production on Tue Jun 5 10:49:32 2012
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Sort options: prsela exeela fchela
    count = number of times OCI procedure was executed
    cpu = cpu time in seconds executing
    elapsed = elapsed time in seconds executing
    disk = number of physical reads of buffers from disk
    query = number of buffers gotten for consistent read
    current = number of buffers gotten in current mode (usually for update)
    rows = number of rows processed by the fetch or execute call
    Error in CREATE TABLE of EXPLAIN PLAN table: APPS.prof$plan_table
    ORA-00922: missing or invalid option
    parse error offset: 1049
    EXPLAIN PLAN option disabled.
    SELECT DISTINCT OU.ORGANIZATION_ID , OU.BUSINESS_GROUP_ID
    FROM
    HR_OPERATING_UNITS OU WHERE OU.SET_OF_BOOKS_ID =:B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.05 11 22 0 3
    total 3 0.00 0.05 11 22 0 3
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    3 HASH UNIQUE (cr=22 pr=11 pw=0 time=52023 us cost=10 size=66 card=1)
    3 NESTED LOOPS (cr=22 pr=11 pw=0 time=61722 us)
    3 NESTED LOOPS (cr=20 pr=11 pw=0 time=61672 us cost=9 size=66 card=1)
    3 NESTED LOOPS (cr=18 pr=11 pw=0 time=61591 us cost=7 size=37 card=1)
    3 NESTED LOOPS (cr=16 pr=11 pw=0 time=61531 us cost=7 size=30 card=1)
    3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=11 pr=9 pw=0 time=37751 us cost=6 size=22 card=1)
    18 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK1 (cr=1 pr=1 pw=0 time=17135 us cost=1 size=0 card=18)(object id 43610)
    3 TABLE ACCESS BY INDEX ROWID HR_ALL_ORGANIZATION_UNITS (cr=5 pr=2 pw=0 time=18820 us cost=1 size=8 card=1)
    3 INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK (cr=2 pr=0 pw=0 time=26 us cost=0 size=0 card=1)(object id 43657)
    3 INDEX UNIQUE SCAN HR_ALL_ORGANIZATION_UNTS_TL_PK (cr=2 pr=0 pw=0 time=32 us cost=0 size=7 card=1)(object id 44020)
    3 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=2 pr=0 pw=0 time=52 us cost=1 size=0 card=1)(object id 330960)
    3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=2 pr=0 pw=0 time=26 us cost=2 size=29 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 11 0.01 0.05
    asynch descriptor resize 2 0.00 0.00
    INSERT INTO FND_LOG_MESSAGES ( ECID_ID, ECID_SEQ, CALLSTACK, ERRORSTACK,
    MODULE, LOG_LEVEL, MESSAGE_TEXT, SESSION_ID, USER_ID, TIMESTAMP,
    LOG_SEQUENCE, ENCODED, NODE, NODE_IP_ADDRESS, PROCESS_ID, JVM_ID, THREAD_ID,
    AUDSID, DB_INSTANCE, TRANSACTION_CONTEXT_ID )
    VALUES
    ( SYS_CONTEXT('USERENV', 'ECID_ID'), SYS_CONTEXT('USERENV', 'ECID_SEQ'),
    :B16 , :B15 , SUBSTRB(:B14 ,1,255), :B13 , SUBSTRB(:B12 , 1, 4000), :B11 ,
    NVL(:B10 , -1), SYSDATE, FND_LOG_MESSAGES_S.NEXTVAL, :B9 , SUBSTRB(:B8 ,1,
    60), SUBSTRB(:B7 ,1,30), SUBSTRB(:B6 ,1,120), SUBSTRB(:B5 ,1,120),
    SUBSTRB(:B4 ,1,120), :B3 , :B2 , :B1 ) RETURNING LOG_SEQUENCE INTO :O0
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 20 0.00 0.03 4 1 304 20
    Fetch 0 0.00 0.00 0 0 0 0
    total 21 0.00 0.03 4 1 304 20
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 LOAD TABLE CONVENTIONAL (cr=1 pr=4 pw=0 time=36498 us)
    1 SEQUENCE FND_LOG_MESSAGES_S (cr=0 pr=0 pw=0 time=24 us)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 4 0.01 0.03
    SELECT MESSAGE_TEXT, MESSAGE_NUMBER, TYPE, FND_LOG_SEVERITY, CATEGORY,
    SEVERITY
    FROM
    FND_NEW_MESSAGES M, FND_APPLICATION A WHERE :B3 = M.MESSAGE_NAME AND :B2 =
    M.LANGUAGE_CODE AND :B1 = A.APPLICATION_SHORT_NAME AND M.APPLICATION_ID =
    A.APPLICATION_ID
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 2 0.00 0.00 0 0 0 0
    Fetch 2 0.00 0.03 4 12 0 2
    total 5 0.00 0.03 4 12 0 2
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=6 pr=2 pw=0 time=15724 us cost=3 size=134 card=1)
    1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=20 us cost=1 size=9 card=1)
    1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
    1 TABLE ACCESS BY INDEX ROWID FND_NEW_MESSAGES (cr=4 pr=2 pw=0 time=15693 us cost=2 size=125 card=1)
    1 INDEX UNIQUE SCAN FND_NEW_MESSAGES_PK (cr=3 pr=1 pw=0 time=6386 us cost=1 size=0 card=1)(object id 34367)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 4 0.00 0.03
    DELETE FROM MO_GLOB_ORG_ACCESS_TMP
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.02 3 4 4 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.02 3 4 4 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    0 DELETE MO_GLOB_ORG_ACCESS_TMP (cr=4 pr=3 pw=0 time=29161 us)
    1 TABLE ACCESS FULL MO_GLOB_ORG_ACCESS_TMP (cr=3 pr=2 pw=0 time=18165 us cost=2 size=13 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    db file sequential read 3 0.01 0.02
    SET TRANSACTION READ ONLY
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.01 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.01 0 0 0 0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    begin Fnd_Concurrent.Init_Request; end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 148 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 148 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    log file sync 1 0.01 0.01
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    declare X0rv BOOLEAN; begin X0rv := FND_INSTALLATION.GET(:APPL_ID,
    :DEP_APPL_ID, :STATUS, :INDUSTRY); :X0 := sys.diutil.bool_to_int(X0rv);
    end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 9 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 9 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 8 0.00 0.00
    SQL*Net message from client 8 0.00 0.00
    begin fnd_global.bless_next_init('FND_PERMIT_0000');
    fnd_global.initialize(:session_id, :user_id, :resp_id, :resp_appl_id,
    :security_group_id, :site_id, :login_id, :conc_login_id, :prog_appl_id,
    :conc_program_id, :conc_request_id, :conc_priority_request, :form_id,
    :form_application_id, :conc_process_id, :conc_queue_id, :queue_appl_id,
    :server_id); fnd_profile.put('ORG_ID', :org_id);
    fnd_profile.put('MFG_ORGANIZATION_ID', :mfg_org_id);
    fnd_profile.put('MFG_CHART_OF_ACCOUNTS_ID', :coa);
    fnd_profile.put('APPS_MAINTENANCE_MODE', :amm); end;
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 8 0 1
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 8 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 1 0.00 0.00
    SQL*Net message from client 1 0.00 0.00
    SELECT FPI.STATUS, FPI.INDUSTRY, FPI.PRODUCT_VERSION, FOU.ORACLE_USERNAME,
    FPI.TABLESPACE, FPI.INDEX_TABLESPACE, FPI.TEMPORARY_TABLESPACE,
    FPI.SIZING_FACTOR
    FROM
    FND_PRODUCT_INSTALLATIONS FPI, FND_ORACLE_USERID FOU, FND_APPLICATION FA
    WHERE FPI.APPLICATION_ID = FA.APPLICATION_ID AND FPI.ORACLE_ID =
    FOU.ORACLE_ID AND FA.APPLICATION_SHORT_NAME = :B1
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 2 0.00 0.00 0 7 0 1
    total 4 0.00 0.00 0 7 0 1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=7 pr=0 pw=0 time=89 us)
    1 NESTED LOOPS (cr=6 pr=0 pw=0 time=93 us cost=4 size=76 card=1)
    1 NESTED LOOPS (cr=5 pr=0 pw=0 time=77 us cost=3 size=67 card=1)
    1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=19 us cost=1 size=9 card=1)
    1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
    1 TABLE ACCESS BY INDEX ROWID FND_PRODUCT_INSTALLATIONS (cr=3 pr=0 pw=0 time=51 us cost=2 size=58 card=1)
    1 INDEX RANGE SCAN FND_PRODUCT_INSTALLATIONS_PK (cr=2 pr=0 pw=0 time=27 us cost=1 size=0 card=1)(object id 22583)
    1 INDEX UNIQUE SCAN FND_ORACLE_USERID_U1 (cr=1 pr=0 pw=0 time=7 us cost=0 size=0 card=1)(object id 22597)
    1 TABLE ACCESS BY INDEX ROWID FND_ORACLE_USERID (cr=1 pr=0 pw=0 time=7 us cost=1 size=9 card=1)
    SELECT P.PID, P.SPID, AUDSID, PROCESS, SUBSTR(USERENV('LANGUAGE'), INSTR(
    USERENV('LANGUAGE'), '.') + 1)
    FROM
    V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND S.AUDSID =
    USERENV('SESSIONID')
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.00 0 0 0 1
    total 3 0.00 0.00 0 0 0 1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 173 (recursive depth: 1)
    Rows Row Source Operation
    1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1883 us cost=1 size=108 card=2)
    1 HASH JOIN (cr=0 pr=0 pw=0 time=1869 us cost=1 size=100 card=2)
    1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1059 us cost=0 size=58 card=2)
    182 FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=285 us cost=0 size=1288 card=161)
    1 FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 pr=0 pw=0 time=617 us cost=0 size=21 card=1)
    181 FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=187 us cost=0 size=10500 card=500)
    1 FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=4 us cost=0 size=4 card=1)
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    asynch descriptor resize 2 0.00 0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 6 0.00 0.00 0 0 0 0
    Execute 6 0.01 0.02 0 165 0 4
    Fetch 1 0.00 0.00 0 0 0 1
    total 13 0.01 0.02 0 165 0 5
    Misses in library cache during parse: 0
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    SQL*Net message to client 37 0.00 0.00
    SQL*Net message from client 37 1.21 2.19
    log file sync 1 0.01 0.01
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 49 0.00 0.00 0 0 0 0
    Execute 89 0.01 0.07 7 38 336 24
    Fetch 29 0.00 0.09 15 168 0 27
    total 167 0.02 0.16 22 206 336 51
    Misses in library cache during parse: 3
    Misses in library cache during execute: 1
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    asynch descriptor resize 6 0.00 0.00
    db file sequential read 22 0.01 0.14
    48 user SQL statements in session.
    1 internal SQL statements in session.
    49 SQL statements in session.
    0 statements EXPLAINed in this session.
    Trace file compatibility: 10.01.00
    Sort options: prsela exeela fchela
    1 session in tracefile.
    48 user SQL statements in trace file.
    1 internal SQL statements in trace file.
    49 SQL statements in trace file.
    48 unique SQL statements in trace file.
    928 lines in trace file.
    1338833753 elapsed seconds in trace file.
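
    The trace above also reports "Error in CREATE TABLE of EXPLAIN PLAN table ... EXPLAIN PLAN option disabled", so TKPROF could only show row source operations. A hedged sketch for the next slow run (standard plan-table and DBMS_MONITOR calls; the session_id/serial_num values are placeholders taken from V$SESSION):

    @?/rdbms/admin/utlxplan.sql   -- create a PLAN_TABLE so tkprof can be run with explain=user/password

    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 1234, serial_num => 5678, waits => TRUE, binds => TRUE);
    -- ... reproduce the slow report run ...
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 1234, serial_num => 5678);

    With wait events and binds in the trace, comparing the 5-minute and 40-minute instances should show where the extra time goes.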

  • CDP Performance Issue -- taking more time to fetch data

    Hi,
    I'm working on Stellent 7.5.1.
    One of the portlets in the portal is taking more time to fetch data. Can someone please help me solve this issue so that performance can be improved? Kindly provide a solution. This is my code for fetching data from the server:
    public void getManager(final HashMap binderMap)
            throws VistaInvalidInputException, VistaDataNotFoundException,
                   DataException, ServiceException, VistaTemplateException {
        String collectionID =
                getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
        long firstStartTime = System.currentTimeMillis();
        HashMap resultSetMap = null;
        String isNonRecursive = getStringLocal(VistaFolderConstants
                .ISNONRECURSIVE_KEY);
        if (isNonRecursive != null
                && isNonRecursive.equalsIgnoreCase(
                        VistaContentFetchHelperConstants.STRING_TRUE)) {
            VistaLibraryContentFetchManager libraryContentFetchManager =
                    new VistaLibraryContentFetchManager(binderMap);
            SystemUtils.trace(
                    VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
                    "The input Parameters for Content Fetch = " + binderMap);
            resultSetMap = libraryContentFetchManager
                    .getFolderContentItems(m_workspace);
            // used to add the resultset to the binder.
            addResultSetToBinder(resultSetMap, true);
        } else {
            long startTime = System.currentTimeMillis();
            // isStandard is used to decide whether the call is for Standard
            // or Extended.
            SystemUtils.trace(
                    VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
                    "The input Parameters for Content Fetch = " + binderMap);
            String isStandard = getTemplateInformation(binderMap);
            long endTimeTemplate = System.currentTimeMillis();
            binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
            long endTimebinderMap = System.currentTimeMillis();
            VistaContentFetchManager contentFetchManager =
                    new VistaContentFetchManager(binderMap);
            long endTimeFetchManager = System.currentTimeMillis();
            resultSetMap = contentFetchManager
                    .getAllFolderContentItems(m_workspace);
            long endTimeresultSetMap = System.currentTimeMillis();
            // used to add the resultset and the total no of content items
            // to the binder.
            addResultSetToBinder(resultSetMap, false);
            long endTime = System.currentTimeMillis();
            if (perfLogEnable.equalsIgnoreCase("true")) {
                Log.info("Time taken to execute " +
                        "getTemplateInformation=" +
                        (endTimeTemplate - startTime) +
                        "ms binderMap=" +
                        (endTimebinderMap - startTime) +
                        "ms contentFetchManager=" +
                        (endTimeFetchManager - startTime) +
                        "ms resultSetMap=" +
                        (endTimeresultSetMap - startTime) +
                        "ms getManager:getAllFolderContentItems = " +
                        (endTime - startTime) +
                        "ms overallTime=" +
                        (endTime - firstStartTime) +
                        "ms folderID =" +
                        collectionID);
            }
        }
    }
    Edited by: 838623 on Feb 22, 2011 1:43 AM
