SELECT FROM AUSP performance improvement

Hi!
Generally speaking, our problem is long runtimes for transactions dealing with configuration (CU50, VA01/VA02/VA03).
ABAP runtime analysis (SE30) and performance trace analysis (ST05) clearly show that 85% of the total execution time is spent on a single SELECT statement in the standard FM CLSE_SELECT_AUSP, which is called many thousands of times:
          select * from ausp appending table x_ausp
                             for all entries in ix_objek
                 where objek  eq ix_objek-low
                 and   mafid  in ix1_mafid
                 and   klart  eq klart
                 and   atwrt  in x_wertc
                 and   atflv  in x_wertn
                 and   atcod  in ix1_atcod
                 and   datuv  <= key_date.
Even though the database-specific statement always looks like:
SELECT * FROM AUSP
WHERE MANDT = :A0
AND "OBJEK" = :A1
AND "MAFID" = :A2
AND "KLART" = :A3
AND "DATUV" <= :A4
and access is done via a range scan on index AUSP~N3, the runtimes are still unacceptable.
Can anybody suggest any performance tuning? We tried to create a custom index based only on the fields from the select above, but the improvement is too small. Is it possible to activate some type of table buffering for AUSP? Any help appreciated.
Regards,
Maxim.

Hello Maxim,
Are the database statistics up to date for this table?
You can do this using program RSANAORA (if you have an Oracle database). This should ensure that the right index is chosen. You might want to check with your BC people first, as this is normally their job.
kind regards
Pieter
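
For illustration only, a minimal ABAP sketch of the buffering idea that is sometimes used when CLSE_SELECT_AUSP ends up being called once per object: read AUSP for the whole object list in a single FOR ALL ENTRIES access and keep the result in an internal table for later lookups. The names (lt_objek, lt_ausp_buf, lv_objek) and the MAFID/KLART literals below are assumptions, not taken from the thread, and the approach only works if the object list is known up front.
    TYPES: BEGIN OF ty_objek,
             objek TYPE ausp-objek,
           END OF ty_objek.
    DATA: lt_objek    TYPE STANDARD TABLE OF ty_objek,  " all object keys, collected up front
          lt_ausp_buf TYPE STANDARD TABLE OF ausp,
          ls_ausp     TYPE ausp,
          lv_objek    TYPE ausp-objek.
    IF lt_objek[] IS NOT INITIAL.
      " One array fetch for the whole object list instead of thousands of single reads
      SELECT * FROM ausp
        INTO TABLE lt_ausp_buf
        FOR ALL ENTRIES IN lt_objek
        WHERE objek = lt_objek-objek
          AND mafid = 'O'       " assumption: object valuations
          AND klart = '300'.    " assumption: variant configuration class type
      SORT lt_ausp_buf BY objek atinn.
    ENDIF.
    " Later, per object, read from the buffer instead of the database
    READ TABLE lt_ausp_buf INTO ls_ausp
         WITH KEY objek = lv_objek BINARY SEARCH.
Whether this is feasible depends on the caller; if the objects only become known one at a time, the buffering has to happen across calls instead.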

Similar Messages

  • How to get the data from AUSP TABLE

    Hi Experts,
Select * from AUSP where atinn = w_attin
                     and   klart = '001'
                     and   atwrt between w_low and w_high.
Endselect.
Here I am using this select to retrieve data from AUSP where ATWRT is between 00000 [w_low] and 09999 [w_high].
In EPC it displays an error about data type incompatibility,
because ATWRT is a character field of length 30 and w_low and w_high are integer fields.
How can I rectify this problem?

Hi,
code like this:
data: v_low  type ausp-atwrt,
      v_high type ausp-atwrt.
v_low  = w_low.
v_high = w_high.
select * from ausp into corresponding fields of table itab
  where atinn = w_attin
    and klart = '001'
    and atwrt between v_low and v_high.
Don't use SELECT ... ENDSELECT, as it has performance problems; rather select INTO TABLE.
Mark points if helpful.
Regards,
Manas Ranjan Panda
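
As a hedged addition to the reply above: if the stored ATWRT values really are five-character, zero-padded strings such as '00000'...'09999' (an assumption based on the example in the question), a plain move of the integer fields will not reproduce the leading zeros, but a NUMC intermediate will:
    data: lv_low_n  type n length 5,   " moving an integer to NUMC pads with leading zeros
          lv_high_n type n length 5,
          v_low     type ausp-atwrt,
          v_high    type ausp-atwrt.
    lv_low_n  = w_low.     " 0    -> '00000'
    lv_high_n = w_high.    " 9999 -> '09999'
    v_low     = lv_low_n.
    v_high    = lv_high_n.
    select * from ausp into corresponding fields of table itab
      where atinn = w_attin
        and klart = '001'
        and atwrt between v_low and v_high.
The BETWEEN on the character field only matches reliably because both the limits and the stored values are padded to the same length.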

  • Selecting from BKPF and BSEG

    Hi all,
Can someone help me optimize the performance of this code? I don't know how else to select from BSEG, since I cannot use a join because it is a cluster table. I need to find a way to improve the performance. Any help would be much appreciated.
    SELECT belnr gjahr bukrs budat stblg awkey blart
             FROM bkpf INTO CORRESPONDING FIELDS OF TABLE t_bkpf
             WHERE belnr IN s_belnr and
                   gjahr = p_gjahr  and
                   bukrs = p_bukrs.
      LOOP AT t_bkpf.
          SELECT belnr gjahr buzei ebeln ebelp
               wrbtr fipos geber fistl fkber
               augbl augdt shkzg
        INTO CORRESPONDING FIELDS OF table t_bseg
        FROM bseg
        WHERE belnr = t_bkpf-belnr
        AND   gjahr = t_bkpf-gjahr
        AND   bukrs = t_bkpf-bukrs.
    Thanks alot
    seema

Hi Seema,
You have to avoid database retrieval inside the loop; you can use FOR ALL ENTRIES (see the sketch after this reply).
This is taken from a link:
Some of the SAP tables are not transparent, but pooled or clustered. Be aware of this!
There are a lot of limitations on how such tables can be accessed. You cannot include such
tables in database views and join constructs. The FI-GL table BSEG, which is one of our
biggest PR1 tables, is an example of a clustered table. At the database level, there is no table
called BSEG; instead, RFBLG is used for the BSEG data. Most of the fields known
in BSEG are not known in the database table RFBLG, but are compressed in a VARDATA
field of RFBLG. So tests in the WHERE clause of SELECTs against BSEG are not used by
the database (e.g. lifnr = vendor account number, hkont = G/L account, kostl = cost center).
As a consequence, these tests are done after the fact, similar to using the CHECK statement,
and as already said in tip 1, CHECK statements are worse than tests in the WHERE clause.
Check this link also:
How to Read BSEG Efficiently
You will never select from table BKPF alone.
Alternatives:
1) select with header information from BKPF
2) use secondary index tables
Re: Tuning cluster table selection
3) use a logical database, e.g. BRF
Hope this helps. If so, reward points. Otherwise, get back.
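
To make the FOR ALL ENTRIES suggestion concrete, here is a sketch using the field names already shown above. Note the guard against an empty driver table: FOR ALL ENTRIES with an empty table drops the comparison and would read far more data than intended.
    SELECT belnr gjahr bukrs budat stblg awkey blart
      FROM bkpf INTO CORRESPONDING FIELDS OF TABLE t_bkpf
      WHERE belnr IN s_belnr
        AND gjahr = p_gjahr
        AND bukrs = p_bukrs.
    IF t_bkpf[] IS NOT INITIAL.
      " One array fetch instead of one SELECT per BKPF line;
      " always pass the full key (BUKRS, BELNR, GJAHR) to the cluster table BSEG.
      SELECT belnr gjahr buzei ebeln ebelp wrbtr fipos geber fistl fkber
             augbl augdt shkzg
        FROM bseg INTO CORRESPONDING FIELDS OF TABLE t_bseg
        FOR ALL ENTRIES IN t_bkpf
        WHERE bukrs = t_bkpf-bukrs
          AND belnr = t_bkpf-belnr
          AND gjahr = t_bkpf-gjahr.
    ENDIF.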

  • SELECT from table vs. CALL FUNCTION

    Hello,
I have always wondered what the "best practice" is in this case, so I am looking for input. When writing custom reports etc. in SAP, is it generally regarded as better practice to write a SELECT statement to get a line from, say, a Txxx configuration table, or is it best to use an associated BAPI or a call to a function module? I know in some cases performance must obviously be considered, but in terms of SAP's recommendations or upgrade considerations, which is typically better?
Assume for example something as simple as getting company code data... Is it best to do SELECT * FROM T001... or to call a BAPI like BAPI_COMPANYCODE_GETDETAIL?
    Any feedback would be greatly appreciated.

    Hi
I am not accusing anyone of disregarding performance; however, I emphasize safety while developing. Even if it is a T* table, I try to use a standard function or call a subroutine of a standard program that does the work. I feel safer this way. Nevertheless, when I can't find anything suitable (in terms of its interface), I use direct SELECT statements.
If it is a simple report program, then doing direct selects may be tolerable. However, if it is a more sophisticated one, especially if it deals with database updates, I highly recommend using the standard objects.
Also, I am aware you mean simple tables, but if the table has a functional relation to a business entity (e.g. PA0001 for employees), I regard using standard FMs as mandatory where applicable (I feel obliged to call "HR_READ_INFOTYPE" instead of using "SELECT * FROM PA0001...").
    *--Serdar
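
A hedged, side-by-side illustration of the two approaches for the company-code example (the BAPI parameter names below are written from memory and should be verified in SE37; the company code '1000' is only a placeholder):
    DATA: ls_t001   TYPE t001,
          ls_detail TYPE bapi0002_2,
          ls_return TYPE bapireturn.
    " Option 1: direct read of the configuration table
    SELECT SINGLE * FROM t001
      INTO ls_t001
      WHERE bukrs = '1000'.
    " Option 2: the same information via the released BAPI
    CALL FUNCTION 'BAPI_COMPANYCODE_GETDETAIL'
      EXPORTING
        companycodeid      = '1000'
      IMPORTING
        companycode_detail = ls_detail
        return             = ls_return.
The SELECT is faster and simpler; the BAPI shields the caller from table changes and returns a documented structure, which is usually the deciding factor in the discussion above.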

  • [Performance Issue] Select from MSEG

    Hi experts,
Need your help on how to improve the performance of the SELECT from MSEG; it takes about 30 minutes just to finish the SELECT. Thanks!
        SELECT mblnr
               mjahr
               zeile
               bwart
               matnr
               werks
               lgort
               charg
               shkzg
               menge
               ummat
               lgpla
          FROM mseg
          INTO CORRESPONDING FIELDS OF TABLE i_mseg2
           FOR ALL ENTRIES IN i_likp
          WHERE bwart IN ('601','602','653','654')
           AND matnr IN s_matnr
           AND werks IN s_werks
           AND lgort IN s_sloc
           AND lgpla EQ i_likp-vbeln.

Store all the VBELN values in a ranges table:
ranges: r_vbeln for i_likp-vbeln.
r_vbeln-option = 'EQ'.
r_vbeln-sign   = 'I'.
loop at i_likp.
  r_vbeln-low = i_likp-vbeln.
  append r_vbeln.
endloop.
sort r_vbeln ascending.
delete adjacent duplicates from r_vbeln.
Then modify the fetch as below; do not use a loop to fetch data from MSEG, and the FOR ALL ENTRIES clause is no longer needed once the ranges table is used:
SELECT mblnr mjahr zeile bwart matnr werks lgort charg shkzg menge ummat lgpla
  FROM mseg
  INTO CORRESPONDING FIELDS OF TABLE i_mseg2
  WHERE bwart IN ('601', '602', '653', '654')
    AND matnr IN s_matnr
    AND werks IN s_werks
    AND lgort IN s_sloc
    AND lgpla IN r_vbeln.
There is another table where you can get this data; I'll let you know shortly.
Try this if it is useful.
Reward points.

  • Optimize select from table

    Hi,
I have to select from a large table called sales_data which has 1 crore (10 million) rows.
It has an index.
I want to select all the rows. It is taking about 1 hour to select and insert into a new table:
Create table new_tab as select col1,col2,col3,col4 from sales_data;
Is there any way to reduce the time?
    TIA

Have you tried the serial/parallel direct-load INSERT method? It will help if you can disable the constraints on the source_table before performing the DML.
You can give it a try (either of the 2 options below) and see if it improves the performance.
An INDEX on the source_table is NOT required -- disable it before performing the DML.
SQL> <Your query to disable> constraints on the source_table
SQL> ALTER TABLE target_table NOLOGGING;
Option #1 - SERIAL direct-load method:
INSERT /*+ APPEND */
INTO target_table
SELECT * FROM source_table;
Option #2 - PARALLEL direct-load method:
SQL> ALTER SESSION ENABLE PARALLEL DML;
SQL> INSERT /*+ PARALLEL(target_table,12) */ INTO target_table
     SELECT /*+ PARALLEL(source_table,12) */ * FROM source_table;
    Good luck.
    Shailender Mehta

  • Performance problem with select from _DIFF view

    Hi,
we have a versioned table with more than one million records. We use the DBMS_WM.SetDiffVersions procedure and select from the _DIFF view to get data differences between two workspaces. The problem is that the select from the _DIFF view is very slow; it takes more than 15 minutes. Does anybody have an idea why it consumes so much time? Is there any way to improve it?
    Thanks and regards
    Ondrej

    Hi,
    This can be due to any number of things, but is typically caused by an inefficient optimizer plan. Make sure that statistics on the _LT table have been recently analyzed.
    Also the following information would be useful:
    1. What is the relationship of the workspaces that you are trying to compare (parent/child, children of the same parent, etc) ?
2. How many DML operations, and of what type, are being performed in the workspaces?
    3. What version of Workspace Manager are you using and what is the version of the database ?
    4. What is the time needed to select from the _DIFF view based on the primary key ?
    Regards,
    Ben

OracleBulkCopy WriteToServer does SELECT * FROM TableName, fix it please

In the latest ODP.NET version, whenever OracleBulkCopy.WriteToServer is called, it internally issues "SELECT * FROM TableName" to validate the table schema.
I did some debugging and found that this is called in the GetMetaData() function of the OracleBulkCopy class.
Please fix this issue by appending WHERE 1=0 to the SELECT so that it does not do a full table scan for every bulk copy insert.
The whole performance advantage of OracleBulkCopy is lost because of this line.
When the destination table has fewer than 1 lakh (100,000) records this is not a big issue, but it degrades very badly when the destination table has millions of records.
    Please release a patch as soon as possible.

Yes, 100% sure; we even captured the trace logs from the Oracle DB end and we saw that "SELECT * FROM TableName" was executed. See the trace below.
To elaborate: I am a .NET consultant; I even went into the ODP.NET DLL code and verified that the GetMetaData function is called when WriteToServer is called on a new OracleBulkCopy instance.
    trace below
    SQL ID: 46dym63c8qhxd
    Plan Hash: 0
    select *
    from
    TABLEX
call     count    cpu  elapsed  disk  query  current  rows
Parse        1   0.01     0.01     0     60        0     0
Execute      0   0.00     0.00     0      0        0     0
Fetch        0   0.00     0.00     0      0        0     0
total        1   0.01     0.01     0     60        0     0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 94
    Elapsed times include waiting on following events:
Event waited on                            Times Waited   Max. Wait   Total Waited
----------------------------------------   ------------   ---------   ------------
SQL*Net message to client                             1        0.00           0.00
SQL*Net message from client                           1        0.00           0.00
    SQL ID: 5s2vzq2q78n3w
    Plan Hash: 0
    LOCK TABLE "SCHEMAX"."TABLEX" IN ROW EXCLUSIVE MODE NOWAIT
call     count    cpu  elapsed  disk  query  current  rows
Parse        1   0.00     0.00     0      0        0     0
Execute      1   0.00     0.00     0      0        0     0
Fetch        0   0.00     0.00     0      0        0     0
total        2   0.00     0.00     0      0        0     0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 94 (recursive depth: 1)
    SQL ID: dbxwyh9wxp32b
    Plan Hash: 0
    LOCK TABLE "SCHEMAX"."TABLEX" PARTITION ("P3") IN EXCLUSIVE MODE
    NOWAIT
call     count    cpu  elapsed  disk  query  current  rows
Parse        1   0.00     0.00     0      0        0     0
Execute      1   0.00     0.00     0      0        0     0
Fetch        0   0.00     0.00     0      0        0     0
total        2   0.00     0.00     0      0        0     0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 94 (recursive depth: 1)
    SQL ID: 9y27jut66ajz5
    Plan Hash: 0
    INSERT /*+ SYS_DL_CURSOR */ INTO "SCHEMAX"."TABLEX" PARTITION ("P3")
    ("PARTITION_ID","GID","SEGMENT_ID","RID","STATUS",
    "VERIFIC","FILE_ID","RECORD_ID","SUB_REC_ID",
    "S_ID2","FILLER1","FILLER2","FILLER3","FILLER4","FILLER5",
    "FILLER6","FILLER7","FILLER8","FILLER9","FILLER10","FILLER11","FILLER12",
    "FILLER13","FILLER14","FILLER15","FILLER16","FILLER17","FILLER18",
    "FILLER19","FILLER20","FILLER21","FILLER22","FILLER23","FILLER24",
    "FILLER25","FILLER26","FILLER27","FILLER28","FILLER29","FILLER30",
    "ID_BODY","CODE","EXP_DATE","SUB_RECORD_3","OCCUR",
    "PARENT","SEGMENT","ID_VALUE","IS_ID")
    VALUES
    (NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,
    NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,
    NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,
    NULL,NULL,NULL,NULL)
    Edited by: Y2KPRABU on Jul 16, 2012 11:38 PM

Performance Improvement - urgent

    Hello friends,
Could you please help me improve the performance of the code below?
Below is my code:
      IF SO_MATNR IS INITIAL.
SELECT MARC~MATNR MARC~WERKS MARC~DISMM MARC~LGFSB
       MARD~LGORT MARD~LABST MARD~LGPBE
      INTO CORRESPONDING FIELDS OF TABLE IT_DATA
      FROM MARC
      INNER JOIN MARD ON MARD~MATNR EQ MARC~MATNR
                     AND MARD~WERKS EQ MARC~WERKS
      INNER JOIN MARA ON MARC~MATNR EQ MARA~MATNR
      WHERE MARC~WERKS IN SO_WERKS
        AND MARC~DISMM IN SO_DISMM
        AND MARD~LGORT IN SO_LGORT
        AND MARA~MTART IN SO_MTART
        AND MARA~LVORM EQ SPACE
        AND MARA~MSTAE IN SO_MSTAE
        AND MARC~LGFSB IN SO_LGFSB.
      ELSE.
SELECT MARC~MATNR MARC~WERKS MARC~DISMM MARC~LGFSB
       MARD~LGORT MARD~LABST MARD~LGPBE
       INTO CORRESPONDING FIELDS OF TABLE IT_DATA
       FROM MARC
       INNER JOIN MARD ON MARD~MATNR EQ MARC~MATNR
                      AND MARD~WERKS EQ MARC~WERKS
       INNER JOIN MARA ON MARC~MATNR EQ MARA~MATNR
       WHERE MARC~MATNR IN SO_MATNR
         AND MARC~WERKS IN SO_WERKS
         AND MARC~DISMM IN SO_DISMM
         AND MARD~LGORT IN SO_LGORT
         AND MARA~MTART IN SO_MTART
         AND MARA~LVORM EQ SPACE
         AND MARA~MSTAE IN SO_MSTAE.
      ENDIF.
The main internal table IT_DATA has nearly 1,090,000 records in the production server.
    IF SY-SUBRC IS INITIAL.
*- Filter: storage location not empty -
        DELETE IT_DATA WHERE LGORT EQ SPACE.
        SORT IT_DATA ASCENDING BY MATNR WERKS LGORT.
        DELETE ADJACENT DUPLICATES FROM IT_DATA
                        COMPARING MATNR WERKS LGORT.
        LOOP AT IT_DATA ASSIGNING .
          MOVE SY-TABIX TO LVA_TABIX.
*- Filter existing storage bin cards for all MRP types -
          SELECT MANDT INTO SY-MANDT
                       FROM Z48M3_STOCKCARDS UP TO 1 ROWS
                       WHERE MATNR EQ -MATNR.
            CONTINUE.
          ENDIF.
*- Filter warehouse stock -
          PERFORM LAGERBESTAND IN PROGRAM Z48M_LFK_DRUCKEN
                            USING -DISMM.
    *- Filter VB -
            WHEN 'VB'.
              PERFORM MAKTX USING -GTXT1.
              CONTINUE.
    *- Filter VK -
            WHEN 'VK'.
              PERFORM MAKTX USING -GTXT1.
              CONTINUE.
    *- Filter V1 -
            WHEN 'V1'.
              PERFORM MAKTX USING -GTXT1.
              CONTINUE.
    *- Filter PD -
            WHEN 'PD'.
              SELECT SINGLE MANDT INTO SY-MANDT
                     FROM Z48M_UEBRIGMAT
                     WHERE MATNR EQ -GTXT1.
                CONTINUE.
              ENDIF.
    *- Filter VP -
            WHEN 'VP'.
              IF  DELETE -
          DELETE IT_DATA INDEX LVA_TABIX.
        ENDLOOP.
      ENDIF.
    ENDFORM.                    " daten_lesen
Inside the loop over internal table IT_DATA many calculations are going on, and finally, if the quantity (MENGE) is 0, the record is deleted from IT_DATA.
The output is to display the final IT_DATA after the above calculations in the loop.
It is taking nearly 2 or 3 days without giving any output.
I have avoided 1) SELECT * statements
2) SELECT statements inside the loop.
But due to the above million records it is not showing any improvement. Could you please suggest some important points to improve the performance of the above program?
Shall I go for a sorted internal table? If so, how would the code look using a sorted itab?
Waiting for replies.
Points will be awarded for your help.
Arvind.

Hi,
I tried the above points as you mentioned; I tried using FOR ALL ENTRIES instead of the join, but the problem is still there.
The total time consumption is inside the loop.
I forgot to paste some code; below is my code:
      IF SY-SUBRC IS INITIAL.
*- Filter: storage location not empty -
        DELETE IT_DATA WHERE LGORT EQ SPACE.
        SORT IT_DATA ASCENDING BY MATNR WERKS LGORT.
        DELETE ADJACENT DUPLICATES FROM IT_DATA
                        COMPARING MATNR WERKS LGORT.
        LOOP AT IT_DATA ASSIGNING .
          MOVE SY-TABIX TO LVA_TABIX.
*- Filter existing storage bin cards for all MRP types -
          SELECT MANDT INTO SY-MANDT
                       FROM Z48M3_STOCKCARDS UP TO 1 ROWS
                       WHERE MATNR EQ -MATNR.
            CONTINUE.
          ENDIF.
*- Filter warehouse stock -
          PERFORM LAGERBESTAND IN PROGRAM Z48M_LFK_DRUCKEN
                            USING -DISMM.
    *- Filter VB -
            WHEN 'VB'.
              PERFORM MAKTX USING -GTXT1.
              CONTINUE.
    *- Filter VK -
            WHEN 'VK'.
              PERFORM MAKTX USING -GTXT1.
              CONTINUE.
    *- Filter V1 -
            WHEN 'V1'.
              PERFORM MAKTX USING -GTXT1.
              CONTINUE.
    *- Filter PD -
            WHEN 'PD'.
              SELECT SINGLE MANDT INTO SY-MANDT
                     FROM Z48M_UEBRIGMAT
                     WHERE MATNR EQ -GTXT1.
                CONTINUE.
              ENDIF.
    *- Filter VP -
            WHEN 'VP'.
              IF  DELETE -
          DELETE IT_DATA INDEX LVA_TABIX.
        ENDLOOP.
      ENDIF.
    ENDFORM.                    " daten_lesen
**** The code below shows the FORM routines performed inside the loop.
    FORM LAGERBESTAND USING UPA_MATNR TYPE TY_DATA-MATNR
                            UPA_WERKS TYPE TY_DATA-WERKS
                            UPA_LGORT TYPE TY_DATA-LGORT
                            UPA_DISMM TYPE TY_DATA-DISMM
                            URA_BWART TYPE TABLE
                   CHANGING UPA_XOK TYPE FLAG.
      DATA: LWA_MARD TYPE MARD,
            LWA_MSEG TYPE MSEG,
            LIT_MSEG TYPE STANDARD TABLE OF MSEG,
            LIT_HBGK TYPE STANDARD TABLE OF Z48M_HBGK.
      MOVE 'X' TO UPA_XOK.
      CHECK UPA_DISMM EQ 'PD'
         OR UPA_DISMM EQ 'VP'.
      SELECT SINGLE * INTO LWA_MARD
             FROM MARD
             WHERE MATNR EQ UPA_MATNR
               AND WERKS EQ UPA_WERKS
               AND LGORT EQ UPA_LGORT.
      IF SY-SUBRC IS INITIAL.
        SELECT * APPENDING TABLE LIT_MSEG
                 FROM MSEG
                 WHERE MATNR EQ UPA_MATNR
                   AND WERKS EQ UPA_WERKS
                   AND LGORT EQ UPA_LGORT
                   AND BWART IN URA_BWART
                   AND SOBKZ EQ 'Q'
                   AND XAUTO EQ SPACE.
        PERFORM DEL_STORNO_BELEGE IN PROGRAM Z48M_ZAEHLISTE
                                  TABLES LIT_MSEG.
        REFRESH LIT_HBGK.
        LOOP AT LIT_MSEG INTO LWA_MSEG.
          SELECT * APPENDING TABLE LIT_HBGK
                   FROM Z48M_HBGK
                   WHERE MBLNR EQ LWA_MSEG-MBLNR
                     AND MJAHR EQ LWA_MSEG-MJAHR
                     AND ZEILE EQ LWA_MSEG-ZEILE.
        ENDLOOP.
        PERFORM GET_VORAB_BESTAND IN PROGRAM Z48M_ZAEHLISTE
                              TABLES LIT_HBGK
                               USING UPA_MATNR UPA_WERKS UPA_LGORT
                            CHANGING LWA_MARD-UMLME.
        IF LWA_MARD-UMLME IS INITIAL
        AND LWA_MARD-LABST IS INITIAL.
          CLEAR UPA_XOK.
        ENDIF.
      ENDIF.
    ENDFORM.                    " lagerbestand
    FORM del_storno_belege TABLES   pt_mseg STRUCTURE mseg.
      DATA: lwa_mseg TYPE mseg.
* Cancelled documents
      LOOP AT pt_mseg INTO wa_mseg.
* Was the document cancelled?
        READ TABLE pt_mseg INTO lwa_mseg
               WITH KEY  sjahr  = wa_mseg-mjahr
                         smbln  = wa_mseg-mblnr
                         smblp  = wa_mseg-zeile.
        IF sy-subrc = 0.
* Delete the cancelled document and the cancellation document
          DELETE pt_mseg WHERE mblnr = wa_mseg-mblnr
                           AND mjahr = wa_mseg-mjahr
                           AND zeile = wa_mseg-zeile.
          DELETE pt_mseg WHERE mblnr = lwa_mseg-mblnr
                           AND mjahr = lwa_mseg-mjahr
                           AND zeile = lwa_mseg-zeile.
        ENDIF.
  ENDLOOP.
ENDFORM.                    " del_storno_belege
    FORM get_vorab_bestand    TABLES   pt_z48m_hbgk
                              USING    p_matnr
                                       p_werks
                                       p_lgort
                              CHANGING p_menge.
      DATA: lt_z48m_hbgk TYPE TABLE OF z48m_hbgk,
            lwa_z48m_hbgk TYPE z48m_hbgk.
      DATA: lf_sbdkz LIKE marc-sbdkz,
            lf_sum411 LIKE z48m_hbgk-rest,
            lf_sum412 LIKE z48m_hbgk-rest.
* Initialization
      REFRESH: lt_z48m_hbgk.
      CLEAR: p_menge, lf_sbdkz, lf_sum411, lf_sum412.
      SELECT SINGLE sbdkz
      INTO   lf_sbdkz
      FROM   marc
      WHERE  matnr = p_matnr
      AND    werks = p_werks.
* sc88wa2-15.11.05 adapted to ZPRUEF_HBGK
* Read HBGK (quantities not physically withdrawn)
      SELECT *
      FROM   z48m_hbgk INTO TABLE lt_z48m_hbgk
      WHERE  matnr      = p_matnr
      AND    werks      = p_werks
      AND    lgort      = p_lgort.
*    AND    zzdruck_kz = space
*    AND    mblnr_v    = space
*    AND    kz_vorab   = space
*    AND    bwart     <> c_bwart_411.
    CHECK sy-subrc = 0.
    SORT lt_z48m_hbgk BY erdat.
    LOOP AT lt_z48m_hbgk INTO lwa_z48m_hbgk.
*     Quantity not physically withdrawn
       ADD lwa_z48m_hbgk-rest TO p_menge.
    ENDLOOP.
* sc88wa2-15.11.05 adapted to ZPRUEF_HBGK
      LOOP AT lt_z48m_hbgk INTO lwa_z48m_hbgk.
        CASE lwa_z48m_hbgk-bwart.
          WHEN 411.
*         Open quantity of the 411 records
*         for individual requirements the print indicator must also be considered
            IF ( lf_sbdkz = 2 ) OR
               ( lf_sbdkz = 1 AND lwa_z48m_hbgk-zzdruck_kz IS INITIAL ).
              lf_sum411 = lf_sum411 + lwa_z48m_hbgk-rest.
            ENDIF.
          WHEN 412.
*         Open quantity of the 412 records
*         only for an initial print indicator
            IF lwa_z48m_hbgk-zzdruck_kz IS INITIAL.
              lf_sum412 = lf_sum412 + lwa_z48m_hbgk-rest.
            ENDIF.
        ENDCASE.
      ENDLOOP.
* Calculate posted-but-not-withdrawn quantity
      p_menge = lf_sum412 - lf_sum411.
    ENDFORM.                    " get_vorab_bestand
Could you please suggest some points for the above code to improve performance?
    Thanks in advance,
    Arvind.
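
One concrete illustration of moving a per-record SELECT out of the loop, sketched against the Z48M3_STOCKCARDS check above (lt_stockcards and <fs_data> are assumed names; TY_DATA is the program's own row type). This also answers the sorted-table question: the per-record lookup then runs against a sorted internal table instead of the database.
    TYPES: BEGIN OF ty_stockcard,
             matnr TYPE matnr,
           END OF ty_stockcard.
    DATA: lt_stockcards TYPE SORTED TABLE OF ty_stockcard
                        WITH UNIQUE KEY matnr.
    FIELD-SYMBOLS: <fs_data> TYPE ty_data.
    " One array read for all materials in IT_DATA instead of one SELECT per loop pass
    IF it_data[] IS NOT INITIAL.
      SELECT matnr
        FROM z48m3_stockcards
        INTO TABLE lt_stockcards
        FOR ALL ENTRIES IN it_data
        WHERE matnr = it_data-matnr.
    ENDIF.
    LOOP AT it_data ASSIGNING <fs_data>.
      " Same filter as before: skip materials that already have a stock card
      READ TABLE lt_stockcards TRANSPORTING NO FIELDS
           WITH TABLE KEY matnr = <fs_data>-matnr.
      IF sy-subrc = 0.
        CONTINUE.
      ENDIF.
      " remaining per-record logic ...
    ENDLOOP.
The MSEG and Z48M_HBGK reads inside FORM lagerbestand could be prefetched the same way, keyed by material, plant and storage location.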

  • SELECTing from a large table vs small table

I posted a question a few months back about the comparison between INSERTing into a large table vs a small table (fewer rows), in terms of time taken.
The general consensus seemed to be that it would be the same, except for the time taken to update the index (which will be negligible).
1. But now, following the same logic, I am confused why SELECTing from a large table should be more time-consuming ("expensive") than SELECTing from a small table
(SELECTing using an index).
My understanding of how Oracle works internally is this:
It will first locate the ROWID from the B-tree that stores the index.
(This operation is O(log N), based on the B-tree.)
The ROWID essentially contains the file pointer offset of the location of the data on disk.
And Oracle simply reads the data from the location it deduced from the ROWID.
But then the only variable I see is searching the B-tree, which should take O(log N) time for comparison (N = number of rows).
Am I correct above?
2. Also, I read that tables are partitioned for performance reasons. I read about various partition mechanisms, but cannot figure out how they can result in a performance improvement.
Can somebody please help?

user597961 wrote:
[the full question quoted above]
It's not going to be that simple. Before your first step (locating the ROWID from the index), Oracle will first evaluate various access plans - potentially thousands of them - and choose the one that it thinks will be best. This evaluation will be based on the number of rows it anticipates having to retrieve, whether or not all of the requested data can be retrieved from the index alone (without even going to the data segment), etc. etc. etc. For each consideration it makes, you start with "all else being equal". Then figure there will be dozens, if not hundreds or thousands, of these "all else being equal". Then, once the plan is selected and the rubber meets the road, we have to contend with the fact that "all else" is hardly ever equal.
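
To put rough numbers on the index part of this (figures assumed for illustration, not from the thread): a B-tree branch block typically holds a few hundred keys, so with a branching factor of about 300 an index covers roughly 300^2 = 90,000 rows at height 2 and 300^3 = 27,000,000 rows at height 3. Growing a table from one million to ten million rows therefore adds at most one level to the index, i.e. about one extra block read per indexed lookup, which is why the cost of a single indexed SELECT barely changes with table size; the plan selection described above is usually the bigger variable.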

  • Selection from a Maintenance View

    Hi,
I have to fetch data from a view; unfortunately that view is a maintenance view. Is there any other way to select/fetch data from that view in my report program?
    Regards,
    Bharath Mohan B

    Hi
You cannot access the data from a maintenance view.
Only projection and database views can be used in ABAP code.
If you want to access data from more than one table, create a DATABASE VIEW and use this view in your program.
For example, if your database view name is ZTEST, then in your program:
tables ztest.
select * from ztest ...
* ... your business logic here
Reward if useful

  • URGENT : select from table statement in ABAP OO

    Hi all,
I am an absolute ABAP OO beginner and need some quick help from an expert. How can I make a selection from an existing table (e.g. MARA) in a BAdI which is programmed according to ABAP OO principles?
In the old ABAP school you could put a TABLES statement at the beginning and then do a SELECT * FROM, but this does not work in ABAP OO.
How should I define such simple selections from existing tables? Anyone?
    Thanks a lot,
    Eric Hassenberg

* Define an internal table
data: i_mara like standard table of mara.
* Select into this table
select * from mara into table i_mara.
You also have to define a work area for this internal table in order to use it:
data: w_mara like line of i_mara.
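
A minimal sketch of the same thing inside a method, for example a BAdI implementation (the interface and method names here are hypothetical; the point is only that the obsolete TABLES statement goes away, while Open SQL itself works unchanged in classes):
    METHOD if_ex_some_badi~example_method.
      DATA: lt_mara TYPE STANDARD TABLE OF mara,
            ls_mara TYPE mara.
      " Declare the target table locally and select into it
      SELECT * FROM mara
        INTO TABLE lt_mara
        UP TO 100 ROWS
        WHERE mtart = 'FERT'.           " example material type, an assumption
      READ TABLE lt_mara INTO ls_mara INDEX 1.
    ENDMETHOD.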

  • Select from v$sql_plan in a procedure

    Hi
I'm attempting to save plans (from V$SQL_PLAN) into a table using a procedure in schema APPS, but keep getting a missing-table error:
PL/SQL: ORA-00942: table or view does not exist
I then granted an explicit SELECT on V$SQL_PLAN to APPS from a schema with
a DBA role, but still get the same error when compiling the procedure.
    SQL> create table gl_imp_post_plans as ( select * from v$sql_plan where rownum < 1);
    Table created.
    SQL> select count(*) from v$sql_plan;
      COUNT(*)
         13506
    SQL> create or replace procedure Ins_Plan_from_Dictionary as
      2 
      3    begin
      4      insert into GL_Imp_Post_Plans
      5      select  sqo.*
      6      from    v$sql_plan sqo
      7      where  (sqo.sql_id) not in (select distinct gipi.SQL_ID
      8                                  from   GL_Imp_Post_Plans gipi)
      9      and    (sqo.sql_id) in     (select distinct
    10                                         sqi.sql_id
    11                                  from   v$sql_plan sqi
    12                                  where  sqi.object_owner = 'APPS'
    13                                  and    sqi.object_name  in ('GL_BALANCES','GL_DAILY_BALANCES','GL_JE_LINES') );
    14      commit;
    15 
    16 
    17      exception
    18        when others then
    19          rollback;
    20  --        sysao_util.Message ('O', 'Error ' || sqlerrm);
    21 
    22  end Ins_Plan_from_Dictionary;
    23  /
    Warning: Procedure created with compilation errors.
    SQL> show err
    Errors for PROCEDURE INS_PLAN_FROM_DICTIONARY:
    LINE/COL ERROR
    4/5      PL/SQL: SQL Statement ignored
    11/40    PL/SQL: ORA-00942: table or view does not exist
    SQL>
    SQL> l 11
11*                                 from   v$sql_plan sqi
The same error occurs when I attempt to select from GV$SQL_PLAN or DBA_HIST_SQL_PLAN.
    Could anybody suggest how I can persist the rows into a table using a procedure?
    thanks

    thanks, yes this works:
    create or replace procedure Ins_Plan_from_Dictionary as
      begin
        execute immediate 'begin
                            insert into GL_Imp_Post_Plans
                            select  sqo.*
                            from    v$sql_plan sqo
                            where  (sqo.sql_id) not in (select distinct gipi.SQL_ID
                                                        from   GL_Imp_Post_Plans gipi)
                            and    (sqo.sql_id) in     (select distinct
                                                               sqi.sql_id
                                                        from   v$sql_plan sqi
                                                        where  sqi.object_owner = ''APPS''
                                                        and    sqi.object_name  in (''GL_BALANCES'',''GL_DAILY_BALANCES'',''GL_JE_LINES'') );
                            commit;
                           end;';
        exception
          when others then
            rollback;
    --        sysao_util.Message ('O', 'Error ' || sqlerrm);
    end Ins_Plan_from_Dictionary;
    /

  • Can not select from v$mttr_target_advice

    Hello
When I try to select from the view, it never comes back, and this is a problem for the MMON process. Does anyone have any ideas?

Could you please post the error message and the version you are using?
    Best Regards
    Krystian Zieja / mob

  • Can not select from my own MV. Please help.

    Hello Gurus,
I have created an MV with the following clauses:
    CREATE or REPLACE MATERIALIZED VIEW "OWNER_NAME1"."MV_Name1"
    BUILD IMMEDIATE
    USING INDEX
    REFRESH COMPLETE
    AS
    SELECT column1, column2 .... from table1,table2
    where .....
I have logged in to the DB with the 'owner_name1' schema itself, which is the owner of the MV.
But when I try to select from the above MV, it gives the error "ORA-00942: table or view does not exist".
I can see it in the 'user_objects' view as an object of the owner_name1 schema.
    Could you please help me in understanding where I have gone wrong?
    DB - Oracle 9i on unix platform.
    Thanks in advance!
    Abhijit.

Oh! I forgot to mention the exact steps I followed that created the error for me,
viz.
1) I have 2 databases and their users as follows:
i) DB1 on the local server - 'localUser'
ii) DB2 on the remote server - 'RemoteUser1' and 'RemoteUser2'
2) The 'RemoteUser2' user in DB2 has the 'select' privilege on table 'RemoteTable1' of 'RemoteUser1' (both are users of the remote DB!)
i.e. select * from RemoteUser1.RemoteTable1; -- works okay when logged in as RemoteUser2. No synonyms are created, hence the schema_name.table_name convention.
    3) Logged in to 'localUser' in DB1.
    4) Created a DB link 'local_to_remote2' in 'localUser' schema ( in DB1) to 'RemoteUser2' schema (in DB2)
    i.e.
    create database link local_to_remote2 connect to RemoteUser2 identified by password using 'connection_string';
    DBLink was created successfully.
    5) I could select from the tables of 'RemoteUser2' using DB Link. (by logging in to 'localUser')
    i.e. select * from RemoteUser1.RemoteTable1@local_to_remote2 ; --- gives me expected output. no issues!
6) Now I created the MV below in 'localUser' (that is, in DB1).
    the exact syntax I used is as follows,
    CREATE or REPLACE MATERIALIZED VIEW "localUser"."MV_Name1"
    BUILD IMMEDIATE
    USING INDEX
    REFRESH COMPLETE
    AS
    SELECT column1, column2
    From RemoteUser1.RemoteTable1@local_to_remote2
    where condition1
    and condition2;
The MV was created successfully, and I could see it as a 'Valid' object of the 'localUser' schema,
i.e. select * from user_objects where object_name = 'MV_NAME1' and status = 'VALID' --- shows that the MV created above is an object owned by 'localUser'.
But when I try to select from the said MV, it gives me the error "ORA-00942: table or view does not exist",
i.e. select * from MV_Name1; ---- neither this
select * from localUser.MV_Name1; ---- nor this works :(
Even when I try to drop the same MV, it gives me the same error. :(
Could you please suggest anything so that I will be able to select from MY OWN MV?
    Please help Gurus.
