Query on blocks of time

Hi,
I'd like to query v$undostat and return the results in 3-hour blocks based on 'begin_time',
and have SUM(any column) for each block (a 3-hour period).
Any suggestions are appreciated.
Thx,
Dobby
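
A minimal sketch of one way to do this, assuming calendar-aligned 3-hour buckets (00:00, 03:00, 06:00, ...) and using UNDOBLKS/TXNCOUNT only as example columns to sum:
[code]
-- Bucket BEGIN_TIME into 3-hour blocks and aggregate per block.
-- UNDOBLKS and TXNCOUNT are just examples; sum whichever columns you need.
SELECT TRUNC(begin_time)
         + FLOOR(TO_NUMBER(TO_CHAR(begin_time, 'HH24')) / 3) * 3 / 24 AS block_start,
       SUM(undoblks) AS undo_blocks,
       SUM(txncount) AS txn_count
  FROM v$undostat
 GROUP BY TRUNC(begin_time)
          + FLOOR(TO_NUMBER(TO_CHAR(begin_time, 'HH24')) / 3) * 3 / 24
 ORDER BY block_start;
[/code]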

Hello Naveen,
To import variants, follow the procedure given here:
http://help.sap.com/saphelp_47x200/helpdata/en/5b/d230b743c611d182b30000e829fbfe/frameset.htm
Regards,
Siddhesh

Similar Messages

  • Need to send success or failure message to BAPI of SQL query action block

    Hi All,
    I need to send a success or failure message (i.e. 1 or 0) to a BAPI that is called below the SQL query action block in a transaction.
    In SAP, I have created one BAPI with one import parameter and one export parameter.
    The SQL query action block inserts records into a database table. Whether it fails or succeeds, that message (numeric value 1 or 0) has to be sent to the SAP program through the BAPI.
    I have created an SAP ABAP program which sends a message to the MII message listener through RFC; the processing rule then calls the transaction, and in the transaction the SQL query inserts records into the database table. Its success or failure is passed to the BAPI, and I call that BAPI in the same ABAP program to show the result on the output screen.
    The issue is that I am not getting the value from the BAPI which was sent by the transaction in MII.
    In SAP JCo, I have mapped the BAPI input to the SQL query success flag in the configure links.
    Thanks,
    Kind Regards,
    Praveen Reddy M

    Dear All,
    I am facing a problem regarding completion notifications in Oracle R12: when the user clicks the notification generated upon completion, it can only be viewed once; the second time, the user cannot view the notification.
    I have found that the fnd_file_temp table stores this information and that it gets deleted.
    Please help me with how to view this URL again.
    thanks
    regards,
    Zubyr

  • Query Data Block issues

    We have a legacy Forms and Reports application (~750 items combined). I suspect the applications were developed initially under 6i, but they work quite well under 10gR2 and are testing well under 11gR2 (which we plan on migrating to soon). We ran into an issue the other day with Query Data Blocks, and wanted your expert opinion on it.
    Due to a change in business requirements, we recently had to increase the length of a database field (from 20 to 30). We went into Forms Developer, made the appropriate changes, and recompiled them all successfully. Everything *seems* to work fine, but we noticed we get failures on any item that tries to utilize that extra 10 characters. Apparently this is because the Query Data Block seems to use a cached/stale copy of the database field sizes, and still expects a size of 20! What we realized now is that we have to go into each of those forms, and REFRESH the Data Block (through the Data Block Wizard) for every one of them.
    My question is this: is there any way to force the Form to refresh those Query Data Blocks without going into each one and REFRESHING it? We've tried compiling through Forms Developer and through batch compiles, and nothing seems to get those cached copies to update. I'd really prefer to tackle this issue once and for all since we are prepping for migration testing. Frankly, I suspect the table space was tweaked a good bit over the years since development (before I ever got here), and if they didn't REFRESH those Data Blocks properly, a great many of the Forms will be suffering this same cached/stale view problem.
    Appreciate your thoughts,
    Dave

    Q_Stephenson,
    That's an excellent suggestion.  I have seen vague hints and allegations of Forms manipulation using that method, but never felt comfortable enough with my overall knowledge(*) of Forms to look at it that deeply. I think it's time to clear some space and dig into the topic for a better look.
    Thank you,
    Dave
    (*) The majority of my programming background is in Java; it's only been recently that I've had to really dig deep into Forms.

  • Simple query is taking long time

    Hi Experts,
    The query below is taking a long time.
    [code]SELECT   FS.*
      FROM   ORL.FAX_STAGE FS
             INNER JOIN
                   ORL.FAX_SOURCE FSRC
                INNER JOIN
                   GLOBAL_BU_MAPPING GBM
                ON GBM.BU_ID = FSRC.BUID
             ON UPPER (FSRC.FAX_NUMBER) = UPPER (FS.DESTINATION)
    WHERE       FSRC.IS_DELETED = 'N'
             AND GBM.BU_ID IS NOT NULL
             AND UPPER (FS.FAX_STATUS) ='COMPLETED';[/code]
    This query returns 1,645,457 records.
    [code]PLAN_TABLE_OUTPUT
    | Id  | Operation           | Name                   | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT    |                        |   625K|   341M| 45113   (1)|
    |   1 |  HASH JOIN          |                        |   625K|   341M| 45113   (1)|
    |   2 |   NESTED LOOPS      |                        |   611 | 14664 |    22   (0)|
    |   3 |    TABLE ACCESS FULL| FAX_SOURCE             |  2290 | 48090 |    22   (0)|
    |   4 |    INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID |     1 |     3 |     0   (0)|
    |   5 |   TABLE ACCESS FULL | FAX_STAGE              |  2324K|  1214M| 45076   (1)|
    PLAN_TABLE_OUTPUT
    Note
       - 'PLAN_TABLE' is old version
    15 rows selected.[/code]
    The distribution of records in each table:
    [code]SELECT FAX_STATUS,count(*)
    FROM fax_STAGE
    GROUP BY FAX_STATUS;
    FAX_STATUS    COUNT(*)
    BROKEN          10
    Broken - New    9
    Completed    2324493
    New             20
    SELECT is_deleted,COUNT(*)
    FROM  FAX_SOURCE
    GROUP BY IS_DELETED;
    IS_DELETED COUNT(*)
    N         2290
    Y         78[/code]
    Total number of records in each table.
    [code]SELECT COUNT(*) FROM ORL.FAX_SOURCE FSRC-- 2368
    SELECT COUNT(*) FROM ORL.FAX_STAGE--2324532
    SELECT COUNT(*) FROM APPS_GLOBAL.GLOBAL_BU_MAPPING--9
    [/code]
    To improve the performance of this query I have created the following indexes.
    [code]Function-based indexes on UPPER(FSRC.FAX_NUMBER), UPPER(FS.DESTINATION) and UPPER(FS.FAX_STATUS).
    Bitmap index on FSRC.IS_DELETED.
    Normal indexes on GBM.BU_ID and FSRC.BUID.
    [/code]
    But the performance of this query is still bad.
    What else can I do to improve the performance of this query?
    Please help me.
    Thanks in advance.

    I have created the following indexes:
    CREATE INDEX ORL.IDX_DESTINATION_RAM ON ORL.FAX_STAGE(UPPER("DESTINATION"))
    CREATE INDEX ORL.IDX_FAX_STATUS_RAM ON ORL.FAX_STAGE(LOWER("FAX_STATUS"))
    CREATE INDEX ORL.IDX_UPPER_FAX_STATUS_RAM ON ORL.FAX_STAGE(UPPER("FAX_STATUS"))
    CREATE INDEX ORL.IDX_BUID_RAM ON ORL.FAX_SOURCE(BUID)
    CREATE INDEX ORL.IDX_FAX_NUMBER_RAM ON ORL.FAX_SOURCE(UPPER("FAX_NUMBER"))
    CREATE BITMAP INDEX ORL.IDX_IS_DELETED_RAM ON ORL.FAX_SOURCE(IS_DELETED)
    After creating these indexes the performance improved.
    But our DBA said that the new bitmap index on the FAX_SOURCE table (ORL.IDX_IS_DELETED_RAM) can cause locks
    on multiple rows if the IS_DELETED column is in use, and asked us to proceed with detailed tests.
    I am sending the explain plans from before and after the indexes were created.
    SELECT  FS.*
    FROM  ORL.FAX_STAGE FS
                    INNER JOIN
                    ORL.FAX_SOURCE FSRC
                  INNER JOIN
                      GLOBAL_BU_MAPPING GBM
                    ON GBM.BU_ID = FSRC.BUID
                ON UPPER (FSRC.FAX_NUMBER) = UPPER (FS.DESTINATION)
    WHERE      FSRC.IS_DELETED = 'N'
              AND GBM.BU_ID IS NOT NULL
              AND UPPER (FS.FAX_STATUS) =:B1;
    --OLD without indexes
    PLAN_TABLE_OUTPUT
    Plan hash value: 3076973749
    | Id  | Operation          | Name                  | Rows  | Bytes | Cost (%CPU)| Time    |
    |  0 | SELECT STATEMENT    |                        |  141K|    85M| 45130  (1)| 00:09:02 |
    |*  1 |  HASH JOIN          |                        |  141K|    85M| 45130  (1)| 00:09:02 |
    |  2 |  NESTED LOOPS      |                        |  611 | 18330 |    22  (0)| 00:00:01 |
    |*  3 |    TABLE ACCESS FULL| FAX_SOURCE            |  2290 | 59540 |    22  (0)| 00:00:01 |
    |*  4 |    INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID |    1 |    4 |    0  (0)| 00:00:01 |
    |*  5 |  TABLE ACCESS FULL | FAX_STAGE              | 23245 |    13M| 45106  (1)| 00:09:02 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
      1 - access(UPPER("FSRC"."FAX_NUMBER")=UPPER("FS"."DESTINATION"))
      3 - filter("FSRC"."IS_DELETED"='N')
      4 - access("GBM"."BU_ID"="FSRC"."BUID")
          filter("GBM"."BU_ID" IS NOT NULL)
      5 - filter(UPPER("FS"."FAX_STATUS")=SYS_OP_C2C(:B1))
    21 rows selected.
    --NEW with indexes.
    PLAN_TABLE_OUTPUT
    Plan hash value: 665032407
    | Id  | Operation                        | Name                    | Rows  | Bytes | Cost (%CPU)| Time    |
    |  0 | SELECT STATEMENT                |                          |  5995 |  3986K|  3117  (1)| 00:00:38 |
    |*  1 |  HASH JOIN                      |                          |  5995 |  3986K|  3117  (1)| 00:00:38 |
    |  2 |  NESTED LOOPS                  |                          |  611 | 47658 |    20  (5)| 00:00:01 |
    |*  3 |    VIEW                          | index$_join$_002        |  2290 |  165K|    20  (5)| 00:00:01 |
    |*  4 |    HASH JOIN                    |                          |      |      |            |      |
    |*  5 |      HASH JOIN                  |                          |      |      |            |      |
    PLAN_TABLE_OUTPUT
    |  6 |      BITMAP CONVERSION TO ROWIDS|                          |  2290 |  165K|    1  (0)| 00:00:01 |
    |*  7 |        BITMAP INDEX SINGLE VALUE | IDX_IS_DELETED_RAM      |      |      |            |      |
    |  8 |      INDEX FAST FULL SCAN      | IDX_BUID_RAM            |  2290 |  165K|    8  (0)| 00:00:01 |
    |  9 |      INDEX FAST FULL SCAN        | IDX_FAX_NUMBER_RAM      |  2290 |  165K|    14  (0)| 00:00:01 |
    |* 10 |    INDEX RANGE SCAN              | GLOBAL_BU_MAPPING_BUID  |    1 |    4 |    0  (0)| 00:00:01 |
    |  11 |  TABLE ACCESS BY INDEX ROWID    | FAX_STAGE                | 23245 |    13M|  3096  (1)| 00:00:38 |
    |* 12 |    INDEX RANGE SCAN              | IDX_UPPER_FAX_STATUS_RAM |  9298 |      |  2434  (1)| 00:00:30 |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
      1 - access(UPPER("DESTINATION")="FSRC"."SYS_NC00035$")
      3 - filter("FSRC"."IS_DELETED"='N')
      4 - access(ROWID=ROWID)
      5 - access(ROWID=ROWID)
      7 - access("FSRC"."IS_DELETED"='N')
      10 - access("GBM"."BU_ID"="FSRC"."BUID")
          filter("GBM"."BU_ID" IS NOT NULL)
      12 - access(UPPER("FAX_STATUS")=SYS_OP_C2C(:B1))
    31 rows selected
    Please confirm the DBA's comment. Will this bitmap index lock rows in my case?
    Thanks.
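    The DBA's concern is the usual one with bitmap indexes: one index entry covers many rows, so concurrent DML on FAX_SOURCE can block or deadlock sessions. A hedged alternative (a sketch only, not tested against this system) is to drop the bitmap index and let a plain B-tree index cover both the IS_DELETED filter and the BUID join:
    [code]
    -- Hypothetical replacement, assuming FAX_SOURCE sees concurrent inserts/updates.
    DROP INDEX orl.idx_is_deleted_ram;

    CREATE INDEX orl.idx_fax_source_del_buid_ram
        ON orl.fax_source (is_deleted, buid);
    [/code]
    With only ~2,300 rows in FAX_SOURCE, the exact index choice there matters far less than the access path into the 2.3-million-row FAX_STAGE, so the function-based indexes on UPPER(FAX_STATUS) and UPPER(DESTINATION) are the ones that drive the new plan.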

  • How to find out how many times each query ran using ST03?

    Hi experts,
    I am trying to clean up all the junk queries in BWP. My question is: how do I find out how many times a particular query ran in a year, and by which user, using ST03? How do I do it step by step?
    I don't want to use the statistics reports for this. How do I do it using ST03?
    Please let me know. Thanks in advance.
    Sharat.

    Hi,
    Please find below the links which should help you solve your problem:
    SAP BW performance tuning (includes screenshots):
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/68fbb490-0201-0010-8eb5-f7e0adaff5bd
    SAP BW Query Performance Tuning with Aggregates:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    With Regards,
    PCR

  • Sql query is taking more time

    Hi all,
    DB: Oracle 9i
    I am facing the below query problem.
    The problem is that the query is taking more time (45 min) than it did earlier (10 sec).
    Please, can anyone suggest something?
    SQL> SELECT MAX (tdar1.ID) ID, tdar1.request_id, tdar1.lolm_transaction_id,
                tdar1.transaction_version
           FROM transaction_data_arc tdar1
          WHERE tdar1.transaction_name = 'O96U '
            AND tdar1.transaction_type = 'REQUEST'
            AND tdar1.message_type_code = 'PCN'
            AND NOT EXISTS (
                  SELECT NULL
                    FROM transaction_data_arc tdar2
                   WHERE tdar2.request_id = tdar1.request_id
                     AND tdar2.lolm_transaction_id != tdar1.lolm_transaction_id
                     AND tdar2.ID > tdar1.ID)
          GROUP BY tdar1.request_id,
                tdar1.lolm_transaction_id,
                tdar1.transaction_version;
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1 Bytes=42)
    1 0 SORT (GROUP BY) (Cost=12 Card=1 Bytes=42)
    2 1 FILTER
    3 2 TABLE ACCESS (BY INDEX ROWID) OF 'TRANSACTION_DATA_ARC
    ' (Cost=1 Card=1 Bytes=42)
    4 3 INDEX (RANGE SCAN) OF 'NK_TDAR_2' (NON-UNIQUE) (Cost
    =3 Card=1)
    5 2 TABLE ACCESS (BY INDEX ROWID) OF 'TRANSACTION_DATA_ARC
    ' (Cost=5 Card=918 Bytes=20196)
    6 5 INDEX (RANGE SCAN) OF 'NK_TDAR_7' (NON-UNIQUE) (Cost
    =8 Card=4760)

    > The problem is that the query is taking more time (45 min) than it did earlier (10 sec).
    Then something must have changed (data growth, stale statistics, ...?).
    You should post as many details as possible, how and what is described in the FAQ; see:
    *3. How to improve the performance of my query? / My query is running slow*.
    When your query takes too long...
    How to post a SQL statement tuning request
    SQL and PL/SQL FAQ
    Also, given your database version, using NOT IN instead of NOT EXISTS might make a difference (but they're not the same).
    See: SQL and PL/SQL FAQ
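    Since a query that degrades from 10 seconds to 45 minutes usually points to data growth or stale statistics, a quick first check is worth doing. A minimal sketch (adjust the owner and table name as needed):
    [code]
    -- When were the optimizer statistics last gathered, and how many rows does
    -- the optimizer believe the table has?
    SELECT owner, table_name, num_rows, last_analyzed
      FROM all_tables
     WHERE table_name = 'TRANSACTION_DATA_ARC';

    -- If they are stale, regather them (DBMS_STATS is the supported API in 9i):
    BEGIN
      DBMS_STATS.gather_table_stats(ownname => USER, tabname => 'TRANSACTION_DATA_ARC');
    END;
    /
    [/code]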

  • Query is taking more time to execute

    Hi,
    The query is taking more time to execute.
    But when I execute the same query on another server, it gives immediate output.
    What is the reason for this?
    thanks in advance.

    'My car doesn't start, please help me to start my car'
    Do you think we are clairvoyant?
    Or is your salary subtracted for every letter you type here?
    Please be aware this is not a chatroom, and we can not see your webcam.
    Sybrand Bakker
    Senior Oracle DBA

  • Query is taking more time to execute in PROD

    Hi All,
    Can anyone tell me why this query is taking more time? When I use it for a single trx_number record it works fine, but when I try to use it for all the records it does not fetch any records and just keeps running.
    SELECT DISTINCT OOH.HEADER_ID
    ,OOH.ORG_ID
    ,ct.CUSTOMER_TRX_ID
    ,ool.ship_from_org_id
    ,ct.trx_number IDP_SHIPMENT_ID
    ,ctt.type STATUS_CODE
    ,SYSDATE STATUS_DATE
    ,ooh.attribute16 IDP_ORDER_NBR --Change based on testing on 21-JUL-2010 in UAT
    ,lpad(rac_bill.account_number,6,0) IDP_BILL_TO_CUSTOMER_NBR
    ,rac_bill.orig_system_reference
    ,rac_ship_party.party_name SHIP_TO_NAME
    ,raa_ship_loc.address1 SHIP_TO_ADDR1
    ,raa_ship_loc.address2 SHIP_TO_ADDR2
    ,raa_ship_loc.address3 SHIP_TO_ADDR3
    ,raa_ship_loc.address4 SHIP_TO_ADDR4
    ,raa_ship_loc.city SHIP_TO_CITY
    ,NVL(raa_ship_loc.state,raa_ship_loc.province) SHIP_TO_STATE
    ,raa_ship_loc.country SHIP_TO_COUNTRY_NAME
    ,raa_ship_loc.postal_code SHIP_TO_ZIP
    ,ooh.CUST_PO_NUMBER CUSTOMER_ORDER_NBR
    ,ooh.creation_date CUSTOMER_ORDER_DATE
    ,ool.actual_shipment_date DATE_SHIPPED
    ,DECODE(mp.organization_code,'CHP', 'CHESAPEAKE'
    ,'CSB', 'CHESAPEAKE'
    ,'DEP', 'CHESAPEAKE'
    ,'CHESAPEAKE') SHIPPED_FROM_LOCATION --'MEMPHIS' --'HOUSTON'
    ,ooh.freight_carrier_code FREIGHT_CARRIER
    ,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('FREIGHT',ct.customer_trx_id,ct.org_id),0)
    + NVL(XX_FSG_NA_FASTRAQ_IFACE.get_line_fr_amt ('FREIGHT',ct.customer_trx_id,ct.org_id),0)FREIGHT_CHARGE
    ,ooh.freight_terms_code FREIGHT_TERMS
    ,'' IDP_BILL_OF_LADING
    ,(SELECT WAYBILL
    FROM WSH_DELIVERY_DETAILS_OE_V
    WHERE -1=-1
    AND SOURCE_HEADER_ID = ooh.header_id
    AND SOURCE_LINE_ID = ool.line_id
    AND ROWNUM =1) WAYBILL_CARRIER
    ,'' CONTAINERS
    ,ct.trx_number INVOICE_NBR
    ,ct.trx_date INVOICE_DATE
    ,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('LINE',ct.customer_trx_id,ct.org_id),0) +
    NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('TAX',ct.customer_trx_id,ct.org_id),0) +
    NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('FREIGHT',ct.customer_trx_id,ct.org_id),0)INVOICE_AMOUNT
    ,NULL IDP_TAX_IDENTIFICATION_NBR
    ,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('TAX',ct.customer_trx_id,ct.org_id),0) TAX_AMOUNT_1
    ,NULL TAX_DESC_1
    ,NULL TAX_AMOUNT_2
    ,NULL TAX_DESC_2
    ,rt.name PAYMENT_TERMS
    ,NULL RELATED_INVOICE_NBR
    ,'Y' INVOICE_PRINT_FLAG
    FROM ra_customer_trx_all ct
    ,ra_cust_trx_types_all ctt
    ,hz_cust_accounts rac_ship
    ,hz_cust_accounts rac_bill
    ,hz_parties rac_ship_party
    ,hz_locations raa_ship_loc
    ,hz_party_sites raa_ship_ps
    ,hz_cust_acct_sites_all raa_ship
    ,hz_cust_site_uses_all su_ship
    ,ra_customer_trx_lines_all rctl
    ,oe_order_lines_all ool
    ,oe_order_headers_all ooh
    ,mtl_parameters mp
    ,ra_terms rt
    ,OE_ORDER_SOURCES oos
    ,XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V
    WHERE ct.cust_trx_type_id = ctt.cust_trx_type_id
    AND ctt.TYPE <> 'BR'
    AND ct.org_id = ctt.org_id
    AND ct.ship_to_customer_id = rac_ship.cust_account_id
    AND ct.bill_to_customer_id = rac_bill.cust_account_id
    AND rac_ship.party_id = rac_ship_party.party_id
    AND su_ship.cust_acct_site_id = raa_ship.cust_acct_site_id
    AND raa_ship.party_site_id = raa_ship_ps.party_site_id
    AND raa_ship_loc.location_id = raa_ship_ps.location_id
    AND ct.ship_to_site_use_id = su_ship.site_use_id
    AND su_ship.org_id = ct.org_id
    AND raa_ship.org_id = ct.org_id
    AND ct.customer_trx_id = rctl.customer_trx_id
    AND ct.org_id = rctl.org_id
    AND rctl.interface_line_attribute6 = to_char(ool.line_id)
    AND rctl.org_id = ool.org_id
    AND ool.header_id = ooh.header_id
    AND ool.org_id = ooh.org_id
    AND mp.organization_id = ool.ship_from_org_id
    AND ooh.payment_term_id = rt.term_id
    AND xla_ael_sl_v.last_update_date >= NVL(p_last_update_date,xla_ael_sl_v.last_update_date)
    AND ooh.order_source_id = oos.order_source_id --Change based on testing on 19-May-2010
    AND oos.name = 'FASTRAQ' --Change based on testing on 19-May-2010
    AND ooh.org_id = g_org_id --Change based on testing on 19-May-2010
    AND ool.flow_status_code = 'CLOSED'
    AND xla_ael_sl_v.trx_hdr_id = ct.customer_trx_id
    AND trx_hdr_table = 'CT'
    AND xla_ael_sl_v.gl_transfer_status = 'Y'
    AND xla_ael_sl_v.accounted_dr IS NOT NULL
    AND xla_ael_sl_v.org_id = ct.org_id;
    -- AND ct.trx_number = '2000080';

    Hello Friend,
    Your query will definitely take more time, or even fail, in PROD because of the way it is written. Here are a few observations; maybe they can help:
    1. XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V: never use a view inside such a long query, because a view is just a window onto the records,
    and when it is joined to other tables, all the tables used to create the view also become part of the join condition.
    First of all, please check whether you really need this view. I guess you are using it to check whether the records have been created as journal entries or not?
    Please check the possibility of finding this through other AR tables.
    2. Remove the _ALL tables and instead use the corresponding org-specific views (if you are on 11i) or the synonyms (in R12).
    For example: for ra_cust_trx_types_all use ra_cust_trx_types.
    This will ensure that the query executes only for the ORG_IDs assigned to that responsibility.
    3. Check with the DBA whether GATHER SCHEMA STATS has been run at least for the ONT and RA tables.
    You can also check this yourself with
    SELECT LAST_ANALYZED FROM ALL_TABLES WHERE TABLE_NAME = 'RA_CUSTOMER_TRX_ALL' (dictionary names are stored in upper case); see the sketch after this list.
    If the tables are not analyzed, the CBO will not be able to tune your query.
    4. Try to remove the DISTINCT keyword. This is the MAJOR reason for this problem.
    5. If it is a report, try to separate the logic into separate queries (using a procedure), populate the data into a custom table, and use that custom table for generating the report.
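    A hedged illustration of the check in point 3, extended to the other large tables named in the posted SQL:
    [code]
    -- Dictionary object names are stored in upper case.
    SELECT owner, table_name, num_rows, last_analyzed
      FROM all_tables
     WHERE table_name IN ('RA_CUSTOMER_TRX_ALL',
                          'RA_CUSTOMER_TRX_LINES_ALL',
                          'OE_ORDER_HEADERS_ALL',
                          'OE_ORDER_LINES_ALL');
    [/code]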
    Thanks,
    Neeraj Shrivastava
    [email protected]
    Edited by: user9352949 on Oct 1, 2010 8:02 PM
    Edited by: user9352949 on Oct 1, 2010 8:03 PM

  • Query Prediction takes long time - After upgrade DB 9i to 10g

    Hi all, Thanks for all your help.
    We've got an issue in Discoverer. We are using Discoverer 10g (10.1.2.2) with APPS, and recently we upgraded the Oracle database from 9i to 10g.
    After the database upgrade, when we try to run reports in Discoverer Plus, query prediction takes much longer than it used to (double/triple); only the query prediction takes a long time, and then the query itself runs.
    Has anyone seen this kind of issue before? Could you share your ideas/thoughts so that I can ask the DBA or sysadmin to change any settings on the Discoverer server side?
    Thanks in advance
    skat

    Hi skat
    Did you also upgrade your Discoverer from 9i to 10g or did you always have 10g?
    If you weren't always on 10g, take a look inside the EUL5_QPP_STATS table by running SELECT COUNT(*) FROM EUL5_QPP_STATS on both the old and new systems
    I suspect you may well find that there are far more records in the old system than the new one. What this table stores is the statistics for the queries that have been run before. Using those statistics is how Discoverer can estimate how long queries will take to run. If you have few statistics then for some time Discoverer will not know how long previous queries will take. Also, the statistics table used by 9i is incompatible with the one used by 10g so you can't just copy them over, just in case you were thinking about it.
    Personally, unless you absolutely rely on it, I would turn the query predictor off. You do this by editing your PREF.TXT (located on the middle-tier server at $ORACLE_HOME\Discoverer\util) and changing the value of QPPEnable to 0. After you have done this you need to run the applypreferences script located in the same folder and then stop and start your Discoverer service. From that point on, queries will no longer try to predict how long they will take; they will just start running.
    There is something else to check. Please run a query and look at the SQL. Do you by chance see a database hint called NOREWRITE? If you do, then this will also cause poor performance. Should you see it, let me know and I will tell you how to override it.
    If you have always been on 10g and you have only upgraded your database it could be that you have not generated your database statistics for the tables that Discoverer is using. You will need to speak with your DBA to see about having the statistics generated. Without statistics, the query predictor will be very, very slow.
    Best wishes
    Michael

  • Parsing the query takes too much time.

    Hello.
    I am hitting a bug in Oracle XE (parsing of some queries takes too much time).
    A similar bug was previously found in the commercial release and was successfully fixed (SR Number 3-3301916511).
    Please raise a bug for Oracle XE.
    Steps to reproduce the issue:
    1. Extract files from testcase_dump.zip and testcase_sql.zip
    2. Under username SYSTEM execute script schema.sql
    3. Import data from file TESTCASE14.DMP
    4. Under username SYSTEM execute script testcase14.sql
    SQL text can be downloaded from http://files.mail.ru/DJTTE3
    Datapump dump of testcase can be downloaded from http://files.mail.ru/EC1J36
    Regards,
    Viacheslav.

    Bug number? Version fix applies to?
    Relevant Note that describes the problem and points out bug/patch availability?
    With a little luck some PSEs might be "backported", since 11g XE is not base release e.g. 11.2.0.1.

  • TIME_OUT dump: this query takes too long

    Hi experts,
    In my report a query is taking too long.
    Please provide performance tips or suggestions.
    select mkpf~mblnr  mkpf~mjahr  mkpf~usnam  mkpf~vgart    
           mkpf~xabln  mkpf~xblnr  mkpf~zshift mkpf~frbnr    
           mkpf~bktxt  mkpf~bldat  mkpf~budat  mkpf~cpudt    
           mkpf~cputm  mseg~anln1  mseg~anln2  mseg~aplzl    
           mseg~aufnr  mseg~aufpl  mseg~bpmng  mseg~bprme    
           mseg~bstme  mseg~bstmg  mseg~bukrs  mseg~bwart    
           mseg~bwtar  mseg~charg  mseg~dmbtr  mseg~ebeln    
           mseg~ebelp  mseg~erfme  mseg~erfmg  mseg~exbwr    
           mseg~exvkw  mseg~grund  mseg~kdauf  mseg~kdein    
           mseg~kdpos  mseg~kostl  mseg~kunnr  mseg~kzbew    
           mseg~kzvbr  mseg~kzzug  mseg~lgort  mseg~lifnr    
           mseg~matnr  mseg~meins  mseg~menge  mseg~lsmng    
           mseg~nplnr  mseg~ps_psp_pnr  mseg~rsnum  mseg~rspos
           mseg~shkzg  mseg~sobkz  mseg~vkwrt  mseg~waers    
           mseg~werks  mseg~xauto  mseg~zeile  mseg~SGTXT    
        into table itab                                      
           from mkpf as mkpf                                 
            inner join mseg as mseg                          
                    on mkpf~MBLNR = mseg~mblnr               
                   and mkpf~mjahr = mseg~mjahr

    Now, the original query is as follows; I use a WHERE clause with conditions.
    select mkpf~mblnr  mkpf~mjahr  mkpf~usnam  mkpf~vgart
           mkpf~xabln  mkpf~xblnr  mkpf~zshift mkpf~frbnr
           mkpf~bktxt  mkpf~bldat  mkpf~budat  mkpf~cpudt
           mkpf~cputm  mseg~anln1  mseg~anln2  mseg~aplzl
           mseg~aufnr  mseg~aufpl  mseg~bpmng  mseg~bprme
           mseg~bstme  mseg~bstmg  mseg~bukrs  mseg~bwart
           mseg~bwtar  mseg~charg  mseg~dmbtr  mseg~ebeln
           mseg~ebelp  mseg~erfme  mseg~erfmg  mseg~exbwr
           mseg~exvkw  mseg~grund  mseg~kdauf  mseg~kdein
           mseg~kdpos  mseg~kostl  mseg~kunnr  mseg~kzbew
           mseg~kzvbr  mseg~kzzug  mseg~lgort  mseg~lifnr
           mseg~matnr  mseg~meins  mseg~menge  mseg~lsmng
           mseg~nplnr  mseg~ps_psp_pnr  mseg~rsnum  mseg~rspos
           mseg~shkzg  mseg~sobkz  mseg~vkwrt  mseg~waers
           mseg~werks  mseg~xauto  mseg~zeile  mseg~SGTXT
        into table itab
           from mkpf as mkpf
            inner join mseg as mseg
                    on mkpf~MBLNR = mseg~mblnr
                   and mkpf~mjahr = mseg~mjahr
        WHERE mkpf~budat IN budat
          AND mkpf~usnam IN usnam
          AND mkpf~vgart IN vgart
          AND mkpf~xblnr IN xblnr
          AND mkpf~zshift IN p_shift
          AND mseg~bwart IN bwart
          AND mseg~matnr IN matnr
          AND mseg~werks IN werks
          AND mseg~lgort IN lgort
          AND mseg~charg IN charg
          AND mseg~sobkz IN sobkz
          AND mseg~lifnr IN lifnr
          AND mseg~kunnr IN kunnr.

  • How are query cost and execution time related?

    hi experts,
    I am curious to know how query cost and execution time are related.
    One query takes less time but its query cost is 65%, while another query takes more time but its query cost is 0%.
    How do I relate the two and improve query performance?
    Thanks

    I think you are referring to the cost (relative to the batch) shown during execution as 65%; where there is more than one statement, it compares the cost of each statement within the batch.
    I assume it mainly takes the subtree cost and I/O statistics as the cost, but in some cases I may be wrong, since multi-line functions and many other factors influence the cost; I would say it depends on the query.
    Cost is unit-less.
    The reason these costs exist is because of the query optimization SQL Server does: it does cost-based optimization, which means that the optimizer formulates a lot of different ways to execute the query, assigns a cost to each of these alternatives, and
    chooses the one with the least cost. The cost tagged on each alternative is heuristically calculated, and is supposed to roughly reflect the amount of processing and I/O this alternative is going to take.
    refer :
    http://blogs.msdn.com/b/sqlqueryprocessing/archive/2006/10/11/what-is-this-cost.aspx
    Thanks
    Saravana kumar C
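    A small illustration of where those numbers come from in SQL Server (a sketch; the two catalog queries are just stand-ins for your own statements): SHOWPLAN_ALL returns the estimated plan, including the TotalSubtreeCost values that the per-statement percentage in the batch is derived from.
    [code]
    SET SHOWPLAN_ALL ON;
    GO
    -- The statements below are not executed; their estimated plans are returned instead.
    SELECT name FROM sys.objects;     -- statement 1 of the batch
    SELECT COUNT(*) FROM sys.tables;  -- statement 2 of the batch
    GO
    SET SHOWPLAN_ALL OFF;
    GO
    [/code]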

  • Update query which is taking more time

    Hi
    I am running an update query which is taking more time; any help to make it run faster?
    update arm538e_tmp t
    set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
    where m.vndr#=t.vndr#
    and m.cust_type_cd=t.cust_type
    and m.cust_type_cd<>13
    and m.yymm between 201301 and 201303
    group by m.vndr#,m.cust_type_cd;
    Help will be appreciated.
    thank you
    Edited by: 960991 on Apr 16, 2013 7:11 AM

    960991 wrote:
    Hi
    I am running an update query which takeing more time any help to run this fast.
    update arm538e_tmp t
    set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
    where m.vndr#=t.vndr#
    and m.cust_type_cd=t.cust_type
    and m.cust_type_cd<>13
    and m.yymm between 201301 and 201303
    group by m.vndr#,m.cust_type_cd;
    Help will be appreciated.
    Thank you.
    Updates with subqueries can be slow. Get an execution plan for the update to see what SQL is doing.
    Some things to look at ...
    1. Are you sure you posted the right SQL? I could not "balance" the parentheses - 4 "(" and 3 ")".
    2. The unnecessary "(" ")" around "(sum" in the subquery are confusing.
    3. Updates with subqueries can be slow. The t.qtr5 value seems to evaluate to a constant. You might improve performance by computing the value beforehand and using a variable instead of the subquery (see the MERGE sketch below).
    4. The subquery appears to be correlated - good! Make sure the subquery is properly indexed if it reads < 20% of the rows in the table (this figure depends on the version of Oracle).
    5. Is t.qtr5 part of an index? It is a bad idea to update indexed columns.
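    A hedged sketch of the idea in point 3: pre-aggregate once and apply the result with MERGE, so the aggregate subquery is not re-evaluated for every row of ARM538E_TMP. Column names are taken from the posted statement; note that, unlike the original UPDATE, rows with no matching aggregate are left untouched here rather than being set to NULL.
    [code]
    MERGE INTO arm538e_tmp t
    USING (SELECT m.vndr#,
                  m.cust_type_cd,
                  SUM(NVL(m.net_sales_value, 0)) / 1000 AS qtr5_val
             FROM mnthly_sales_actvty m
            WHERE m.cust_type_cd <> 13
              AND m.yymm BETWEEN 201301 AND 201303
            GROUP BY m.vndr#, m.cust_type_cd) s
       ON (t.vndr# = s.vndr# AND t.cust_type = s.cust_type_cd)
     WHEN MATCHED THEN
       UPDATE SET t.qtr5 = s.qtr5_val;
    [/code]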

  • Query executes faster 2nd time

    Hi,
    So, the result% parameters are as follows, on a specific instance:
    NAME                                                                                  TYPE VALUE
    result_cache_mode                                                                        2 MANUAL
    result_cache_max_size                                                                    6 20971520
    result_cache_max_result                                                                  3 5
    result_cache_remote_expiration                                                           3 0
    Executed in 0,156 seconds
    Well, if I run a large query for the first time (WITHOUT the result_cache hint), it executes in 10 minutes. A colleague told me that the second time I run the query I will get the results faster than the first time, but I told him caching is not enabled, so why do we get the results faster? The query execution plan is the same.
    So, are there any buffers or similar structures from which the database retrieves the results when that query is executed again? Is there any explanation for why it executes faster on subsequent invocations?
    Thanks

    Roger22 wrote:
    So even if result caching is not used, the data can be retrieved from the buffers? OK.
    A fundamental concept for all databases and all modern file systems: a read-write memory cache that sits between the s/w and the spinning rust.
    This is also the basic reason why measuring performance for comparison using elapsed time is flawed: the very SAME query with the SAME execution plan on the SAME data can show DIFFERENT elapsed times. The amount of I/O is constant, but the type of I/O (logical/fast versus physical/slow) is not.
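    A hedged way to observe this: compare logical I/O with physical I/O for the same cursor across executions in V$SQL. The LIKE pattern below is a hypothetical marker; match on whatever identifies your statement.
    [code]
    -- BUFFER_GETS counts logical reads (buffer cache), DISK_READS counts physical
    -- reads; on repeat executions DISK_READS typically stops growing while
    -- BUFFER_GETS keeps increasing, even though elapsed time drops.
    SELECT sql_id,
           executions,
           buffer_gets,
           disk_reads,
           ROUND(elapsed_time / 1e6, 1) AS elapsed_seconds
      FROM v$sql
     WHERE sql_text LIKE '%my_large_query%';
    [/code]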

  • Query takes a long time on EBAN table

    Hi,
    I am trying to execute a simple select statement on the EBAN table. This query takes an unexpectedly long time to execute.
    Query is :
    SELECT banfn bnfpo ernam badat ebeln ebelp
          INTO TABLE gt_eban
          FROM eban FOR ALL ENTRIES IN gt_ekko_ekpo
          WHERE
          banfn IN s_banfn AND
          ernam IN s_ernam
          and ebeln = gt_ekko_ekpo-ebeln AND
          ebelp = gt_ekko_ekpo-ebelp.
    Structure of gt_ekko_ekpo
    TYPES : BEGIN OF ty_ekko_ekpo,
            ebeln TYPE ekko-ebeln,
            ebelp TYPE ekpo-ebelp,
            bukrs TYPE ekko-bukrs,
            aedat TYPE ekko-aedat,
            lifnr TYPE ekko-lifnr,
            ekorg TYPE ekko-ekorg,
            ekgrp TYPE ekko-ekgrp,
            waers TYPE ekko-waers,
            bedat TYPE ekko-bedat,
            otb_value TYPE ekko-otb_value,
            otb_res_value TYPE ekko-otb_res_value,
            matnr TYPE ekpo-matnr,
            werks TYPE ekpo-werks,
            matkl TYPE ekpo-matkl,
            elikz TYPE ekpo-elikz,
            wepos TYPE ekpo-wepos,
            emlif TYPE ekpo-emlif,
      END OF ty_ekko_ekpo.
    Structure of GT_EBAN
    TYPES : BEGIN OF ty_eban,
      banfn TYPE eban-banfn,
      bnfpo TYPE eban-bnfpo,
      ernam TYPE eban-ernam,
      badat TYPE eban-badat,
      ebeln TYPE eban-ebeln,
      ebelp TYPE eban-ebelp,
      END OF ty_eban.
    The query seems OK to me, but I am still not able to figure out the reason for this performance issue.
    Please provide your inputs.
    Thanks.
    Richa

    Hi Richa,
    Maybe you are executing the query with S_BANFN empty. Still, based on note 191492, you should change your query to something like the following:
    1st Suggestion:
    if gt_ekko_ekpo[] is not initial.
    SELECT banfn bnfpo       INTO TABLE gt_eket
          FROM eket FOR ALL ENTRIES IN gt_ekko_ekpo
          WHERE
         ebeln = gt_ekko_ekpo-ebeln AND
          ebelp = gt_ekko_ekpo-ebelp.
    if sy-subrc = 0.
    delete gt_eket where banfn not in s_banfn.
    if gt_eket[] is not initial.
    SELECT banfn bnfpo ernam badat ebeln ebelp
          INTO TABLE gt_eban
          FROM eban FOR ALL ENTRIES IN gt_eket
          WHERE
          banfn = gt_eket-banfn
         and  bnfpo = gt_eket-bnfpo.
    if sy-subrc = 0.
      delete gt_eban where ernam not in s_ernam.
    endif.
    endif.
    endif.
    endif.
    2nd Suggestion:
    if gt_ekko_ekpo[] is not initial.
    SELECT banfn bnfpo       INTO TABLE gt_eket
          FROM eket FOR ALL ENTRIES IN gt_ekko_ekpo
          WHERE
         ebeln = gt_ekko_ekpo-ebeln AND
          ebelp = gt_ekko_ekpo-ebelp.
    if sy-subrc = 0.
    delete gt_eket where banfn not in s_banfn.
    if gt_eket[] is not initial.
    SELECT banfn bnfpo ernam badat ebeln ebelp
          INTO TABLE gt_eban
          FROM eban FOR ALL ENTRIES IN gt_eket
          WHERE
          banfn = gt_eket-banfn
         and  bnfpo = gt_eket-bnfpo
         and ernam in s_ernam.
    endif.
    endif.
    endif.
    Hope this helps.
    Regards,
    R
