Query On BADI

Hi helpers,
I am a beginner in this area and have a few queries that were not cleared up by searching the existing posts.
This is with reference to the available SAP BADIs.
1. What is the exact use of the adapter class in a BADI?
2. All implementations of a BADI are called by the program. How can I call only specific implementations of the same BADI, given that I do not want to call all of them? Is the FILTER technique relevant here?
Thanks in Advance
Regards
Shashank

Adapter class: When you create an interface for a Business Add-In, enhancement management generates an adapter class for implementing that interface. When a BAdI is defined, the system generates:
An interface with 'IF_EX_' inserted before the BAdI name (after the namespace prefix, if any), and
An adapter class with 'CL_EX_' inserted before the BAdI name.
Filter: Yes, a filter is appropriate in this scenario. The advantage of a filter is that Business Add-Ins can be implemented depending on a specific filter value (for example, country-specific versions: distinct implementations of the BADI can be created for each country). A filter type must be entered when defining your enhancement (a country or industry sector, for example). All methods created in the enhancement's interface have the filter value FLT_VAL as an import parameter. The active implementation is then selected at runtime based on the value passed in FLT_VAL.
For more details, please check the link below.
[http://wiki.sdn.sap.com/wiki/display/ABAP/DocumentonBADI]
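As an illustration of how the adapter class and the filter fit together, here is a minimal sketch of a classic BAdI call. The BAdI name ZDEMO_BADI, its method DO_SOMETHING and the filter value 'DE' are hypothetical placeholders:

```abap
* The caller works only against the generated interface; the generated
* adapter class CL_EX_ZDEMO_BADI sits behind this call and dispatches
* to the active implementation(s).
DATA: lo_badi TYPE REF TO if_ex_zdemo_badi.

CALL METHOD cl_exithandler=>get_instance
  CHANGING
    instance = lo_badi.

* For a filter-dependent BAdI every interface method imports FLT_VAL;
* its value decides which implementation actually runs.
CALL METHOD lo_badi->do_something
  EXPORTING
    flt_val = 'DE'.   " e.g. a country key selecting that country's implementation
```

With this pattern the calling program never names a specific implementation; activating or deactivating implementations per filter value changes the behaviour without touching the caller.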

Similar Messages

  • API to access query structure / bad performance Bex query processor

    Hi, we are using a big P&L query structure. Each query structure node selects a hierarchy node of the account.
    This setup makes the performance incredibly bad. The BEx query processor caches and selects per structure node, which creates an awful mass of unnecessary SQL statements. (It would be more useful to merge the SQL statements as far as possible, with a GROUP BY on the account, to generate bigger SQL statements.)
    The structure is necessary to cover percentage calculations in the query; the hierarchy is used to "calculate" subtotals by selecting different nodes on different levels.
    I am now searching for a different approach to cover the reporting requirement - or for an API to generate, out of the master structure, smaller query structures per area of the P&L. Is there any class to access the query structure?
    We already tried generating data entries per node level (duplicating one data record per node where it appears, with a characteristic for the node name). But this approach generates too many data records.
    Not using hierarchy nodes would make the maintenance terrible. An API to change the structure, to generate "hard" selections in the structure out of the hierarchy, would also be useful.

    The problem came from a wrong development of an exit variable used in Analysis Authorization.
    Edited by: SSE-BW-Team SSE-BW-Team on Feb 28, 2011 1:46 PM

  • Select query in badi

    Hi,
      I want to use a select statement in one of the methods of a BADI and use internal tables to hold the data. It does not allow internal tables with header lines. Can anybody please give me an example?
    Thanks in Advance.

    DATA itab [TYPE linetype|LIKE lineobj] OCCURS n [WITH HEADER LINE].
    This variant is not allowed in an ABAP Objects context (see "Declaration with OCCURS not allowed"), so you cannot use it in BADI methods.
    Declare the internal table and a separate work area instead:
    DATA: i_ekpo  TYPE STANDARD TABLE OF ekpo,
          wa_ekpo LIKE LINE OF i_ekpo.
    " row-by-row processing
    SELECT * INTO wa_ekpo FROM ekpo.
    ENDSELECT.
    " or fetch everything at once and loop over the result
    SELECT * INTO TABLE i_ekpo FROM ekpo.
    LOOP AT i_ekpo INTO wa_ekpo.
    ENDLOOP.

  • Query in Badi

    Hi all,
            I have a requirement which requires me to give a Subject for the email that is sent . I have used the Badi 'COMPLETE_PROC_PPF' .
    For changing the subject line I use the class CL_BCS as per the following code:
    LV_SUBJECT = 'Hello!!!!Good Morning!!!!'.
    CALL METHOD io_send->SET_MESSAGE_SUBJECT
    EXPORTING
    IP_SUBJECT = LV_SUBJECT.
    But the message subject line does not get changed.
    Please help me out in resolving the above issue .
    Kindly reply immediately as this is bit urgent.
    Regards,
    Vijay

    Hi,
    I have worked on sending e-mail using this class.
    Does the BADI trigger at all? Verify it once by setting a breakpoint.
    Here goes the code sample:
    * setting the option to send an e-mail subject of more than 50 characters
      DATA: t_sub TYPE string.
          CALL METHOD send_request->set_message_subject
            EXPORTING
              ip_subject = t_sub.
    Also check whether the SCOT settings enable an SMTP node; otherwise the subject line cannot be more than 50 characters.
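For context, here is a minimal sketch of the full CL_BCS flow around SET_MESSAGE_SUBJECT. The recipient address, document title and body text are placeholders, and as noted above a subject longer than 50 characters only survives when an SMTP node is active in SCOT:

```abap
DATA: lo_send_req TYPE REF TO cl_bcs,
      lo_doc      TYPE REF TO cl_document_bcs,
      lo_recip    TYPE REF TO if_recipient_bcs,
      lt_text     TYPE bcsy_text,
      lv_subject  TYPE string.

lv_subject = 'A subject line that is allowed to exceed fifty characters'.
APPEND 'Mail body line 1' TO lt_text.

lo_send_req = cl_bcs=>create_persistent( ).

lo_doc = cl_document_bcs=>create_document( i_type    = 'RAW'
                                           i_text    = lt_text
                                           i_subject = 'Short title' ).
lo_send_req->set_document( lo_doc ).

* Overrides the 50-character document title when SMTP is configured
lo_send_req->set_message_subject( ip_subject = lv_subject ).

lo_recip = cl_cam_address_bcs=>create_internet_address( 'user@example.com' ).
lo_send_req->add_recipient( i_recipient = lo_recip ).

lo_send_req->send( ).
COMMIT WORK.
```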

  • Query in Badi Implementation!!!!

    Hi all,
                     I have a requirement to implement the BADI 'DOC_PERSONALIZE_BCS'.
    This is a filter-dependent BADI.
    When I try to implement this BADI, I am required to give a filter value.
    When I give the filter value and try to activate the BADI, I get the following message:
    "There are already active implementations for these filter types"
    I checked and found that there is already a standard implementation for this BADI.
    I want my implementation to get activated.
    Please tell me how I can activate my implementation for the filter type.
    Kindly reply soon, as this is a bit urgent.
    Regards,
    Vijay

    Hi,
    You can get your BADI activated even though the filter value is already defined in another implementation. This condition should not cause any problem in activating the BADI, but it should be avoided: if two implementations of a BADI are present with the same filter value, then whenever the BADI is called, all implementations with that filter value are executed, which is not desirable.
    You need to deactivate the standard implementation and then create your new implementation, or copy the standard implementation and make changes in it as per your requirement.
    Regards,
    Ashlesha

  • HELP!! SQL query performance bad!!

    This is a very simple query, but it takes too long to process. I have tried changing the order of everything, to no avail.
    Table RBN_MINUTE has 85 million records.
    Table RBN_LOGS has 12 million.
    RBN_MINUTE has these indexes
    PK - Start_datetime, RBN_Log_Seq_id
    RBN_LOGs has these indexes
    PK - RBN_Log_Seq_id
    idx1 - Start_datetime, group_id
    Here is the query with the explain plan:
    select count(distinct a.rbn_log_seq_id)
    from RBN_minute a , RBN_logs b
    where a.start_datetime >= to_date('100319990800','mmddyyyyhh24mi')
    and a.start_datetime < to_date('100319990830','mmddyyyyhh24mi')
    and a.rbn_log_seq_id = b.rbn_log_seq_id
    COUNT(DISTINCTA.RBN_LOG_SEQ_ID)
    18697
    real: 107084ms
    Execution Plan
    0   SELECT STATEMENT Optimizer=CHOOSE (Cost=25551 Card=1 Bytes=14611779)
    1 0   SORT (GROUP BY)
    2 1     MERGE JOIN (Cost=25551 Card=256347 Bytes=14611779)
    3 2       INDEX (FULL SCAN) OF 'PK_RBN_LOGS' (UNIQUE) (Cost=24606 Card=8838774 Bytes=114904062)
    4 2       SORT (JOIN) (Cost=940 Card=124671 Bytes=5485524)
    5 4         TABLE ACCESS (BY GLOBAL INDEX ROWID) OF 'RBN_MINUTE' (Cost=5 Card=124671 Bytes=5485524)
    6 5           INDEX (RANGE SCAN) OF 'RBN_MINUTE_IDX3' (NON-UNIQUE) (Cost=2 Card=124671)
    Statistics
    272 recursive calls
    525 db block gets
    82973 consistent gets
    517 physical reads
    0 redo size
    426 bytes sent via SQL*Net to client
    676 bytes received via SQL*Net from client
    4 SQL*Net roundtrips to/from client
    7 sorts (memory)
    1 sorts (disk)
    1 rows processed
    Obviously, I don't need the join to RBN_LOGS in this query, but I was just trying the simplest case. Thanks.

    It appears to be using the wrong table to drive off of.
    A couple of things to try:
    select /*+ FIRST_ROWS */
    or
    switch the order of the tables in the FROM clause
    or
    select /*+ INDEX(a <PKINDEXNAME>) */
    (Note: the Oracle hint comment has no trailing '+', and the hint name is INDEX, not USE_INDEX.)

  • Bad Performance in a query into table BKPF

    Hi forum, I have a real problem with the second query, on table BKPF. Can somebody help me, please?
    * THIS IS THE QUERY ON MSEG
      SELECT tmseg~mblnr tmkpf~budat tmseg~belnr tmseg~bukrs tmseg~matnr
             tmseg~ebelp tmseg~dmbtr tmseg~waers tmseg~werks tmseg~lgort
             tmseg~menge tmseg~kostl
      FROM mseg AS tmseg JOIN mkpf AS tmkpf ON tmseg~mblnr = tmkpf~mblnr
      INTO CORRESPONDING FIELDS OF TABLE it_docs
      WHERE
        tmseg~bukrs IN se_bukrs AND
        tmkpf~budat IN se_budat AND
        tmseg~mjahr = d_gjahr AND
        ( tmseg~bwart IN se_bwart AND tmseg~bwart IN ('201','261') ).
      IF sy-dbcnt > 0.
    * I CREATE AWKEY FOR CONSULTING BKPF
        LOOP AT it_docs.
          CONCATENATE it_docs-mblnr d_gjahr INTO it_docs-d_awkey.
          MODIFY it_docs.
        ENDLOOP.
    * THIS IS THE QUERY WITH BAD PERFORMANCE
    * I NEED TO KNOW "BELNR" TO GO TO THE BSEG TABLE
        SELECT belnr awkey
        FROM bkpf
        INTO CORRESPONDING FIELDS OF TABLE it_tmp
        FOR ALL ENTRIES IN it_docs
        WHERE
          bukrs = it_docs-bukrs AND
          awkey = it_docs-d_awkey AND
          gjahr = d_gjahr AND
          bstat = space.
    Thanks

    Hi Josue,
    The bad performance is because you're not specifying the primary keys of table BKPF in your WHERE condition, and BKPF is usually a big table.
    What you really need is to create a new secondary index on table BKPF via the ABAP Dictionary, on fields BUKRS, AWKEY, GJAHR and BSTAT. You'll find the performance of the program increases significantly after the new index is activated. But I would talk to Basis first to confirm they have no issues with creating a new index for BKPF on the database system.
    Hope this helps.
    Cheers,
    Sougata.

  • Enhancement Spot & BADI

    Hi All,
    I know we can create a BADI in ECC 6.0 using Enhancement Spots. Then why do we need the BADI create option in SE18 as well, when Enhancement Spots already take care of BADIs?
    Please provide any inputs on the same.
    Thanks & Regards,
    Binu

    Hi,
    check below link
    https://www.sdn.sap.com/irj/sdn/advancedsearch?query=howtoimplement+badi&cat=sdn_all
    Regards,
    Madhu

  • Performance of native SQL query deteriorates

    Dear Experts,
    The performance of my native SQL query is bad. On the database, the query takes less than 5 seconds to process. From my ABAP program I get a session-timeout dump after 10 minutes. What might be the possible reason?
    Warm Regards,
    Abdullah

    I am not a DBA, but this is a wild guess.
    I have a native SQL query. It was running fine all morning (I transported it to production today). By the afternoon, the report was not giving any output.
    I went to the MS SQL Query Analyzer and executed the query; it returned the results in less than 5 seconds. The same query, when executed from SAP using native SQL, took more than 10 minutes and gave a dump (time exceeded).
    My database guy asked me to execute the following on the database: DBCC DBREINDEX('tablename')
    The report has been running fine since then. I am still not satisfied that this is the reason the performance is back on track, but the report is running fine again. There seems to be some problem with the indexes.
    I am using the standard classes provided by SAP to execute my query; after execution, the result-set reference object is closed and I close the connection.
    the code is as below.
          PERFORM:
            connect               USING con_name con_ref,
            select_into_table     USING con_ref,
            disconnect            USING con_ref.
    *  FORM connect
    *  Connects to the database specified by the logical connection name
    *  P_CON_NAME which is expected to be specified in table DBCON. In case
    *  of success the form returns in P_CON_REF a reference to a connection
    *  object of class CL_SQL_CONNECTION.
    *  --> P_CON_NAME  logical connection name
    *  <-- P_CON_REF   reference to a CL_SQL_CONNECTION object
    FORM connect  USING    p_con_name TYPE dbcon-con_name
                           p_con_ref  TYPE REF TO cl_sql_connection
                           RAISING cx_sql_exception.
    * if CON_NAME is not initial then try to open the connection, otherwise
    * create a connection object representing the default connection.
      IF p_con_name IS INITIAL.
        CREATE OBJECT p_con_ref.
      ELSE.
        p_con_ref = cl_sql_connection=>get_connection( p_con_name ).
      ENDIF.
    ENDFORM.                    " connect
    *  FORM select_into_table
    *  Selects some rows from the test table and fetches the result rows
    *  into an internal table whose row structure corresponds to the
    *  queries select list columns.
    FORM select_into_table
      USING   p_con_ref TYPE REF TO cl_sql_connection
      RAISING cx_sql_exception.
      DATA:
        l_stmt         TYPE string,
        l_stmt_ref     TYPE REF TO cl_sql_statement,
        l_dref         TYPE REF TO data,
        l_res_ref      TYPE REF TO cl_sql_result_set,
    *Data related query
        l_itab         TYPE TABLE OF t_pricing_report,
        l_row_cnt      TYPE i.
    * create the query string
    CONCATENATE
        'select A.SEQ,A.CONDTABLE,A.CONDNAME,A.VKORG,A.VTWEG,A.MATKL,A.MATNR,B.MTEXT,A.VKGRP,A.SGRPNAME,'
        'A.VKBUR,A.SOFFNAME,A.ZSALES,A.SCNTNAME,A.KUNNR,A.SCSTNAME,A.PRBATCH,A.INCO1,'
        'A.INCO2,A.DATAB,A.DATBI,A.KBETR,A.KONWA,A.KOSRT,B.MTART,B.GROES,B.VOLUM,B.EXTWG,B.WRKST,'
        'A.MXWRT,A.GKWRT,'
        'B.PATTERN,B.RIM,B.SERIES,B.SPDINDEX,B.LDINDX,B.MGROUP,B.APPLN,B.SDWALL,B.MGRPTXT'
        'FROM Z_PRICELIST A,Z_MATERIALVIEW B'
        'WHERE A.MANDT = ? AND'
              'B.MANDT = A.MANDT AND'
              'A.MATNR = B.MATNR AND'
              'A.KSCHL = ? AND'
              'A.CONDTABLE LIKE ? AND'
              'A.VKORG LIKE ? AND'
              'A.VTWEG LIKE ? AND'
              'A.MATKL >= ? AND A.MATKL <= ? AND'
              'A.MATNR >= ? AND A.MATNR <= ? AND'
              'A.INCO1 LIKE ? AND'
              'A.INCO2 LIKE ? AND'
              'A.ZSALES >= ? AND A.ZSALES <= ? AND'
              'A.KUNNR  >= ? AND A.KUNNR <= ? AND'
              'A.PRBATCH  >= ? AND A.PRBATCH <= ? AND'
              'A.VKBUR  >= ? AND A.VKBUR <= ? AND'
              'A.VKGRP  >= ? AND A.VKGRP <= ? AND'
              'B.WRKST  >= ? AND B.WRKST <= ? AND'
              'B.MTART  >= ? AND B.MTART <= ? AND'
              '? BETWEEN A.DATAB AND A.DATBI AND'
              'B.GROES LIKE ? AND'
              'B.LDINDX LIKE ? AND'
              'B.SPDINDEX LIKE ? AND'
              'B.RIM LIKE ? AND'
              'B.SERIES LIKE ? AND'
              'B.PATTERN LIKE ? AND'
              'B.MGROUP LIKE ?'
              'order by A.MATNR'
        INTO l_stmt SEPARATED BY space.                         "#EC NOTEXT
    * create a statement object
      l_stmt_ref = p_con_ref->create_statement( ).
    * bind input variables
      GET REFERENCE OF l_col1 INTO l_dref.
      l_stmt_ref->set_param( l_dref ).
    *binding other references here
      GET REFERENCE OF l_col33 INTO l_dref.
      l_stmt_ref->set_param( l_dref ).
    * set the input values and execute the query
      l_col1  = sy-mandt.
    *..Assigning values here
      l_col33 = p_mgroup.
    *  PERFORM trace_2 USING 'EXECUTE_QUERY' l_stmt l_col1 l_col2.
      l_res_ref = l_stmt_ref->execute_query( l_stmt ).
    * set output table
      GET REFERENCE OF l_itab INTO l_dref.
      l_res_ref->set_param_table( l_dref ).
    * get the complete result set
      l_row_cnt = l_res_ref->next_package( ).
    * display the contents of the output table
    *  PERFORM trace_next_package USING l_itab.
    *  PERFORM trace_result USING l_row_cnt 'rows fetched'.
      pricing_report[] = l_itab[].
      free l_itab.
    * don't forget to close the result set object in order to free
    * resources on the database
      l_res_ref->close( ).
    ENDFORM.                    "select_into_table
    *  FORM disconnect
    *  Disconnect from the given connection. In case of the default
    *  connection this can be omitted.
    FORM disconnect
      USING   p_con_ref TYPE REF TO cl_sql_connection
      RAISING cx_sql_exception.
      DATA: l_con_name TYPE dbcon-con_name.
      l_con_name = p_con_ref->get_con_name( ).
      CHECK l_con_name <> cl_sql_connection=>c_default_connection.
    *  PERFORM trace_0 USING 'CLOSE CONNECTION' l_con_name.
      p_con_ref->close( ).
    *  PERFORM trace_result USING l_con_name 'closed'.
    ENDFORM.                    "disconnect
    *  FORM handle_sql_exception
    *  Write appropriate error messages when a SQL exception has occurred
    *  -->  P_SQLERR_REF  reference to a CX_SQL_EXCEPTION object
    FORM handle_sql_exception
      USING p_sqlerr_ref TYPE REF TO cx_sql_exception.
      FORMAT COLOR COL_NEGATIVE.
      IF p_sqlerr_ref->db_error = 'X'.
        WRITE: / 'SQL error occurred:', p_sqlerr_ref->sql_code,
               / p_sqlerr_ref->sql_message.                     "#EC NOTEXT
      ELSE.
        WRITE:
          / 'Error from DBI (details in dev-trace):',
            p_sqlerr_ref->internal_error.                       "#EC NOTEXT
      ENDIF.
    ENDFORM.                    "handle_sql_exception

  • BADI for VL01/VL02

    Hi Experts,
    I have one query on a BADI.
    My requirement is below:
    I am using transactions VL01/VL02, and I need to use a BADI for this.
    My actual requirement: while VL01 is inserting data, I need to insert some screen data into one Z-table at the same time.
    If the insert in VL01 fails, I do not want to insert data into the Z-table.
    I have specified my requirement below.
    Please give some tips or sample code using the BADI I specified below.
    On create:
    I need to insert some screen data into one Z-table.
    We should commit BOTH (LIKP and the Z-table) or commit NEITHER; do NOT allow LIKP to be saved if the Z-table insert fails. In order for this to work during collective processing, we need to use the delivery final-check BADI.
    I think we can use BADI_LE_SHIPMENT; I need some sample code for this BADI.
    Regards,
    Kalidas.T

    HI.
    The use of object-oriented code within SAP has led to a new method of enhancing standard SAP code, called
    Business Add-Ins, or BADIs for short. Although the implementation concept is based on classes, methods and
    inheritance, you do not really have to understand this fully to implement a BADI. Simply think of a method
    as a function module with the same import and export parameters, and follow the simple instructions below.
    Steps:
    1. Execute the Business Add-In (BADI) transaction SE18.
    2. Enter the BADI name, i.e. LE_SHP_DELIVERY_PROC, and press the Display button.
    3. Select menu option Implementation->Create.
    4. Give the implementation a name, such as ZLE_SHP_DELIVERY_PROC.
    5. You can now make any changes you require to the BADI within this implementation; for example, choose the Interface tab.
    6. Double-click on the method you want to change; you can now enter any code you require.
    7. Please note: to find out what import and export parameters a method has, return to the original BADI definition (i.e. LE_SHP_DELIVERY_PROC) and double-click on the method name. For example, within LE_SHP_DELIVERY_PROC, CHANGE_FIELD_ATTRIBUTES and CHANGE_DELIVERY_ITEM are methods.
    8. When the changes have been made, activate the implementation.
    Refer to the links to know more about BADIs.
    http://esnips.com/doc/10016c34-55a7-4b13-8f5f-bf720422d265/BADIs.pdf
    http://esnips.com/doc/e06e4171-29df-462f-b857-54fac19a9d8e/ppt-on-badis.ppt
    http://esnips.com/doc/43a58f51-5d92-4213-913a-de05e9faac0d/Business-Addin.doc
    http://esnips.com/doc/1e10392e-64d8-4181-b2a5-5f04d8f87839/badi.doc
    http://esnips.com/doc/365d4c4d-9fcb-4189-85fd-866b7bf25257/customer-exits--badi.zip
    http://esnips.com/doc/3b7bbc09-c095-45a0-9e89-91f2f86ee8e9/BADI-Introduction.ppt
    Regards.
    Kiran Sure

  • Pass PO Number from MIRO to BADI

    Hi Experts,
    I have activated the BADI 'INVOICE_UPDATE'. I'm checking some conditions in the 'CHANGE_AT_SAVE' method.
    This BADI is triggered when 'SAVE' is clicked in MIRO.
    I have to pass the 'PO Number', i.e. EBELN, entered in MIRO to the BADI code, to use in a query inside the BADI.
    How can I pass the PO number to the BADI?
    Please suggest.
    Rajiv Ranjan

    Hi Rajiv,
    You will be able to find the EBELN field if you drill down into the import parameter TI_RSEG_NEW (Invoice Document Item: New).
    If the BAdI is triggered during SAVE - meaning there is an active implementation for this BAdI - then you will get this EBELN inside the method; you don't need to pass it. You can use the value TI_RSEG_NEW-EBELN in the program.
    If you debug the BAdI, you can confirm this.
    Hope this helps.
    Sajan Joseph.
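Building on the reply above, a minimal sketch of reading EBELN inside the method, assuming the signature described (TI_RSEG_NEW carrying the new invoice items); the validation itself is a placeholder:

```abap
METHOD if_ex_invoice_update~change_at_save.
  DATA: ls_rseg LIKE LINE OF ti_rseg_new.

* TI_RSEG_NEW already contains the items as entered in MIRO,
* so the PO number does not have to be passed in separately.
  LOOP AT ti_rseg_new INTO ls_rseg.
    CHECK ls_rseg-ebeln IS NOT INITIAL.
*   ... use ls_rseg-ebeln here for your own SELECT / validation ...
  ENDLOOP.
ENDMETHOD.
```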

  • Badi not called - BI

    Hello experts.
    I have a BADI implementation for a virtual characteristic in BI. But when I run the query, the BADI does not seem to be called at all. I know this because I tried to debug by placing breakpoints, but the query never stops at the breakpoint. The same BADI, with the exact same code, however, runs fine in our PROD system.
    Is there any setting or anything I'm missing to make the BADI run in DEV?
    I want to debug and need some help. Interface name: IF_EX_RSR_OLAP_BADI

    I am sure you would have checked the activation of the implementation, and tried deactivating and reactivating it.
    If the job can be run in the background, I place this code
    DATA: lv_c TYPE c VALUE 'J'.
    WHILE lv_c EQ 'J'.
    ENDWHILE.
    to keep the process in an infinite loop, then debug the work process through SM50 and see how it behaves in the background.
    Also check the mentors' approach for finding out whether a BADI is active:
    Re: BADI shows ACTIVE but when run... its not being called

  • Stop Query if it does not match with Condtions & Control the result count

    Dear All,
    I have a requirement in my environment: I have to stop queries that violate these conditions:
    1. The query is running for longer than a specified threshold (e.g. more than 2 minutes).
    2. The query is fetching huge amounts of data (more than 1000 rows, for example).
    Please note we have studied Resource Governor thoroughly; it works at compile time, and we need to control the queries at run time. Also, Resource Governor does not restrict resources if they are free.
    Answers are appreciated.

    Hello,
    I would not advise you to do such things in your environment. 2 minutes is very little time; how can you be so sure a 2-minute query is bad? And your second point is baseless: it would take a query a fraction of a second to read 1000 rows. My answer would be: please don't implement this.
    If you want to test, the query below might achieve the first requirement. I have not tested it; please treat this query as a hint and optimize or add anything as required.
    If you schedule this query through the agent every 2 or 5 minutes, it can achieve this. But some queries take more time to roll back than to finish, so scheduling this could lead to an unstable environment.
    USE MASTER
    IF OBJECT_ID('tempdb..#KILL_CONNECTION') IS NOT NULL
    BEGIN
    DROP TABLE #KILL_CONNECTION
    END
    CREATE TABLE #KILL_CONNECTION
    (
    SESSION_ID INT
    ,TOTAL_ELAPSED_TIME BIGINT
    ,START_TIME DATETIME
    )
    INSERT INTO #KILL_CONNECTION
    SELECT
    SESSION_ID
    ,TOTAL_ELAPSED_TIME
    ,START_TIME
    FROM SYS.DM_EXEC_REQUESTS
    -- TOTAL_ELAPSED_TIME is in milliseconds; 120000 = 2 minutes
    WHERE TOTAL_ELAPSED_TIME > 120000 AND SESSION_ID > 50
    DECLARE @SESSION_ID BIGINT
    DECLARE @CMD VARCHAR(1000)
    DECLARE KILL_CONNECTION CURSOR FOR
    SELECT SESSION_ID
    FROM #KILL_CONNECTION
    OPEN KILL_CONNECTION
    FETCH NEXT FROM KILL_CONNECTION INTO @SESSION_ID
    WHILE @@FETCH_STATUS = 0
    BEGIN
    SET @CMD = 'KILL ' + CAST(@SESSION_ID AS VARCHAR(20))
    EXECUTE (@CMD)
    FETCH NEXT FROM KILL_CONNECTION INTO @SESSION_ID
    END
    CLOSE KILL_CONNECTION
    DEALLOCATE KILL_CONNECTION
    GO
    DROP TABLE #KILL_CONNECTION

  • Query occasionally causes table scans (db file sequential read)

    Dear all,
    we periodically issue a query on a huge table via an oracle job.
    Whenever I invoke the query manually, the response time is good. When I start the periodic job, initially the response times are good as well. After some days, however, the query suddenly takes almost forever.
    My vague guess is that for some reason the query suddenly changes the execution plan from using the primary key index to a full table scan (or huge index range scan). Maybe because of some problems with the primary key index (fragmentation? Other?).
    - Could it be the case that the execution plan for a query changes (automatically) like this? For what reasons?
    - Do you have any hints where to look for further information for analysis (logs, special event types, ...)?
    - Of course, the query was designed with the involved indexes in mind. Also, I studied the execution plan and did not find hints of problematic table/range scans.
    - It is not a lock-contention problem.
    - When the query "takes forever", there is a "db file sequential read" event in v$session_event for the query, with an ever-increasing wait time. That's why I guess an (unreasonable) table scan is happening.
    Some characteristics of the table in question:
    - ~30 million rows
    - There are only insertions to the table, as well as updates on a single, indexed field. No deletes.
    - There is an integer primary key field with a B-tree index.
    Characteristics of the query:
    The main structure of the query is very simple, as follows: I select a range of about 100 rows via the primary key "id", like:
    Select * from TheTable where id>11222300 and id <= 11222400
    There are several joins with rather small tables, which make the overall query more complicated.
    However, the few (100) rows of the huge table in question should always be fetched using the primary key index, shouldn't it?
    Please let me know if some relevant information about the problem is missing.
    Thanks!
    Best regards,
    Nang.

    user2560749 wrote:
    "Could it be the case that the execution plan for a query changes (automatically) like this? For what reasons?"
    Yes, possible. One reason is that the stats of the table have changed, i.e. somebody ran dbms_stats. If you are worried about the execution plan changing, you have two options: 1) lock the stats, or 2) use a hint in the query.
    "Do you have any hints where to look for further information for analysis? (logs, special event types, ...)"
    Have an ORA-10053 trace enabled whenever the query plan changes, and analyse it.
    "When the query 'takes forever', there is a 'db file sequential read' event in v$session_event for the query with an ever increasing wait time."
    If it is db file sequential read, then I see two possibilities: 1) it is doing an index range scan (not a full table scan), or 2) it is scanning the undo tablespace.
    "However, the few (100) rows of the huge table in question should always be fetched using the primary key index, shouldn't it?"
    Theoretically, yes. Practically, we can only tell by looking at the run-time explain plan (through a 10053/10046 trace).
    I am still not sure in which direction you are looking for a solution.
    Is your query performing badly once in a fortnight, and the next day it is all the same again? Then I suggest:
    1) Check if the query is scanning the undo tablespace. You mentioned there are a lot of inserts; Oracle could be scanning the undo tablespace because of delayed block cleanout.
    2) Check if on that particular day the number of records is high compared to other days.
    Or, once it starts performing badly, is there no change in response time for the next couple of days? Then:
    1) Check whether the explain plan has changed.
    And what action do you take to bring the response time back to normal?
    Regards
    Anurag
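The two options named above (locking the stats, hinting the query) can be sketched as follows; the owner, table and index names are placeholders:

```sql
-- Option 1: lock the optimizer statistics so a stats refresh
-- cannot silently change the execution plan
BEGIN
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'THETABLE');
END;
/

-- Option 2: force the primary key index with a hint in the query itself
SELECT /*+ INDEX(t thetable_pk) */ *
  FROM thetable t
 WHERE t.id > 11222300
   AND t.id <= 11222400;
```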

  • AD Organization Lookup Query

    I am trying to set up OIM to provision to two different AD domains. I am trying to avoid the process of cloning the entire connector. So for now I have just created two IT Resources and duplicated the Lookup.ADReconciliation.GroupLookup and Lookup.ADReconciliation.Organization objects. The Group Lookup is part of the IT Resource and hence didn't cause any problems.
    The Organization Lookup is however defined at the form level. I need the list of Organizations to be specific to the AD Domain. I have done the following:
    1. Created two IT Resources of type AD Server:
    -- AD Server 1
    -- AD Server 2
    2. Created new lookup called Lookup.AD.OrganizationLookupITResourceMapping with the following values:
    -- AD Server 1 == Lookup.ADReconciliaiton.Organization
    -- AD Server 2 == Lookup.AD2Reconciliation.Organization
    3. I then changed the "Organization Name" column to the following:
    Lookup Query == select lkv_decoded, lkv_encoded from lku, lkv where lku.lku_key = lkv.lku_key and lku.lku_type_string_key = (select lkv.lkv_decoded from lku, lkv where lku. lku_type_string_key = 'Lookup.AD.OrganizationLookupITResourceMapping' and lku.lku_key = lkv.lku_key and lkv_encoded = '$Form data.UD_ADUSER_AD$');
    Column Names == lkv_decoded
    Lookup Column Names == lkv_decoded
    Column Captions == Organization Name
    Column Widths == 100
    When I run the Preview Form I get the error message "Query Failed. Error: Dataset is not open". In the server logs I get an "invalid character" exception from SQL. If I use SQL*Plus and run this query, substituting a value for '$Form data.UD_ADUSER_AD$', I get the correct response. My guess is that the form data is not being returned as expected. What should I use here in order to receive the name of the IT Resource?
    Thanks,
    Pete

    More information from the log files:
    [2011-02-23T16:30:45.441-05:00] [oim_server1] [ERROR] [] [XELLERATE.DATABASE] [tid: [ACTIVE].ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: xelsysadm] [ecid: 0000ItKRa^jC^qQSaauHOq1DPIEM0002nl,0] [APP: oim#11.1.1.3.0] [dcid: dc05021680aa4152:3b03b6fb:12e53163b22:-7ffd-0000000000002d5a] SELECT sdc.sdc_name, sdk.sdk_name from sdc, sdk where sdc.sdk_key=sdk.sdk_key and sdc.sdc_name = 'UD_ADUSER_AD' and sdk.sdk_active_version=sdc.sdc_version and sdk.sdk_key=(SELECT sdk_key FROM TOS WHERE pkg_key = )[[
    java.sql.SQLSyntaxErrorException: ORA-00936: missing expression
    2011-02-23T16:30:54.309-05:00] [oim_server1] [ERROR] [] [XELLERATE.DATABASE] [tid: [ACTIVE].ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: xelsysadm] [ecid: 0000ItKRcjKC^qQSaauHOq1DPIEM0002nw,0] [APP: oim#11.1.1.3.0] [dcid: dc05021680aa4152:3b03b6fb:12e53163b22:-7ffd-0000000000002d65] select lkv_decoded, lkv_encoded from lku, lkv where lku.lku_key = lkv.lku_key and lku.lku_type_string_key = (select lkv.lkv_decoded from lku, lkv where lku. lku_type_string_key = 'Lookup.AD.OrganizationLookupITResourceMapping' and lku.lku_key = lkv.lku_key and lkv_encoded = 'BAD QUERY or BAD FORM DATA'); and 1=2[[
    java.sql.SQLSyntaxErrorException: ORA-00911: invalid character
    So obviously my query is bad. How do I fix it?
    Thanks,
    Pete
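One likely culprit, judging by the appended "and 1=2" in the log: OIM concatenates extra predicates onto the lookup query, so the trailing semicolon ends up in the middle of the statement and triggers ORA-00911 (a semicolon is not a valid character inside a statement executed over JDBC). A sketch of the same query without the terminator:

```sql
select lkv_decoded, lkv_encoded
from lku, lkv
where lku.lku_key = lkv.lku_key
  and lku.lku_type_string_key =
      (select lkv.lkv_decoded
         from lku, lkv
        where lku.lku_type_string_key = 'Lookup.AD.OrganizationLookupITResourceMapping'
          and lku.lku_key = lkv.lku_key
          and lkv_encoded = '$Form data.UD_ADUSER_AD$')
```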
