Optimizing a Select Statement

Hi,
I have recently been asked to write a program that reports on payroll postings to FI.  This involves one very large select statement against the PPOIX table to gather all the postings.  My select statement is as follows:
  SELECT pernr                       "EE Number
         seqno                       "Sequential number
         actsign                     "Indicator: Status of record
         runid                       "Number of posting run
         postnum                     "Number
         tslin                       "Line number of data transfer
         lgart                       "Wage Type
         betrg                       "Amount
        waers                       "Currency
        anzhl                       "Number
        meins                       "Base unit of measure
         spprc         "Special processing of posting items
        momag         "Transfer to FI/CO:EE grouping for acct determi
        komok         "Transfer to FI/CO: Symbolic account
        mcode                       "Matchcode search term
        koart                       "Account assignment type
        auart                       "Expenditure type
        nofin         "Indicator: Expenditure type is not funded
           INTO CORRESPONDING FIELDS OF TABLE i_ppoix
           FROM ppoix
           FOR ALL ENTRIES IN run_doc_xref
           WHERE runid  = run_doc_xref-runid
             AND tslin  = run_doc_xref-linum
             AND spprc  <> 'A'
             AND lgart IN s_lgart
             AND pernr in s_pernr.
where s_pernr is a select-option that holds personnel numbers and s_lgart is a select-option that holds wage types.  This statement works fine for a certain number of personnel numbers and wage types, but once you exceed a certain limit the database does not allow a select statement this large.  Is there a better way to perform such a large select (i.e. a function module, or some other method I am not aware of)?  This select statement comes from the standard SAP-delivered cost center admin report, and that report dumps as well when too much data is passed to it.
any ideas would be much appreciated.
thanks.

The problem here is with the select-options.
For a select statement, you cannot pass more than a certain amount of data to the database.
Your select is made worse by the FOR ALL ENTRIES, the huge s_pernr, and the 40 million records :(.
I am guessing that the s_lgart will be small.
How many entries do you have in internal table "run_doc_xref"?
If there are not that many, then I would suggest this:
TYPES:
  BEGIN OF ty_temp_ppoix,
   pernr   TYPE ppoix-pernr,
   lgart   TYPE ppoix-lgart,
   seqno   TYPE ppoix-seqno,
   actsign TYPE ppoix-actsign,
   runid   TYPE ppoix-runid,
   postnum TYPE ppoix-postnum,
   tslin   TYPE ppoix-tslin,
   betrg   TYPE ppoix-betrg,
   spprc   TYPE ppoix-spprc,
  END OF ty_temp_ppoix.
DATA:
i_temp_ppoix TYPE SORTED TABLE OF ty_temp_ppoix
   WITH NON-UNIQUE KEY pernr lgart
   INITIAL SIZE 0
   WITH HEADER LINE.
DATA:
  v_pernr_lines TYPE sy-tabix,
  v_lgart_lines TYPE sy-tabix.
IF NOT run_doc_xref[] IS INITIAL.
  DESCRIBE TABLE s_pernr LINES v_pernr_lines.
  DESCRIBE TABLE s_lgart LINES v_lgart_lines.
  IF v_pernr_lines GT 800 OR
     v_lgart_lines GT 800.
* There is an index on runid and tslin. This should be ok
* ( still bad because of the huge table :(  )
    SELECT pernr lgart seqno actsign runid postnum tslin betrg spprc
* Selecting into sorted TEMP table here
      INTO TABLE i_temp_ppoix
      FROM ppoix
      FOR ALL ENTRIES IN run_doc_xref
      WHERE runid = run_doc_xref-runid
      AND   tslin = run_doc_xref-linum
      AND   spprc <> 'A'.
* The sorted table should make the delete faster
    DELETE i_temp_ppoix WHERE NOT pernr IN s_pernr
                        OR    NOT lgart IN s_lgart.
* Now populate the actual target
    LOOP AT i_temp_ppoix.
      MOVE: i_temp_ppoix-pernr TO i_ppoix-pernr.
*  and the rest of the fields
      APPEND i_ppoix.
      DELETE i_temp_ppoix.
    ENDLOOP.
  ELSE.
    SELECT pernr seqno actsign runid postnum tslin lgart betrg spprc
* Selecting into your ACTUAL target here
      INTO TABLE i_ppoix
      FROM ppoix
      FOR ALL ENTRIES IN run_doc_xref
      WHERE runid = run_doc_xref-runid
      AND   tslin = run_doc_xref-linum
      AND   spprc <> 'A'
      AND   pernr IN s_pernr
      AND   lgart IN s_lgart.
  ENDIF.
ELSE.
* Error message because of no entries in run_doc_xref?
* Please answer this so a new solution can be implemented here
* if it is NOT an error
ENDIF.
Hope this helps.
Regards,
-Ramesh

Similar Messages

  • Performance Issue in Select Statement (For All Entries)

    Hello,
    I have a report where i have two select statement
    First Select Statement:
    Select A B C P Q R
         from T1 into Table it_t1
              where ....
    Internal Table it_t1 is populated with 359801 entries through this select statement.
    Second Select Statement:
    Select A B C X Y Z
         from T2 into Table it_t2 For All Entries in it_t1
              where A eq it_t1-A
                 and B eq it_t1-B
                 and C eq it_t1-C
    Now table T2 contains more than 10 lakh (1 million) records, and at the end of the select statement it_t2 is populated with 844,003 entries, but it takes a lot of time (15-20 min) to execute the second select statement.
    Can this code be optimized?
    I have also created indexes on tables T1 and T2 for the fields in the WHERE condition.
    Regards,

    If you have completed all the steps mentioned by others in the above thread and you are still facing issues, then use a SELECT within a SELECT.
    First Select Statement:
    Select A B C P Q R
         from T1 into Table it_t1 package size 5000
              where ....
    Second Select Statement (runs once per package, inside the first SELECT ... ENDSELECT):
      Select A B C X Y Z
           from T2 into Table it_t2 For All Entries in it_t1
                where A eq it_t1-A
                   and B eq it_t1-B
                   and C eq it_t1-C.
      do processing........
    endselect.
    This way, while using FOR ALL ENTRIES on T2, it_t1 holds only a limited number of entries at a time, and thus the second select will be faster.
    Thanks,
    Juwin

  • Multiple select statements in PL/SQL

    Hi All
    I am new to PL/SQL and my experience is in writing T-SQL. There we can write SQL statements like this to return 3 result sets:
    SELECT empname FROM Employee
    SELECT authname FROM Author
    SELECT athname FROM sport
    How can we write the same 3 statements in PL/SQL and obtain the 3 result sets?
    I tried to implement the same using a PL/SQL anonymous block, but it didn't work.
    DECLARE
    P_RECORDSET OUT SYS_REFCURSOR
    BEGIN
    OPEN P_RECORDSET FOR
    SELECT empname FROM Employee;
    SELECT authname FROM Author;
    SELECT athname FROM sport;
    END;
    can anybody show how it can be done.
    Thanks in advance
    George
    Edited by: user6290570 on Sep 16, 2009 11:23 PM

    george2009 wrote:
    No i just want to select 3 result sets from 3 select statements, so that it is helpful to compare the resultsets. Compare? How? This is done using the SQL language. Not PL/SQL. Not Java. Not VB. Not anything else.
    You would use these other language for flow control and certain forms of conditional logic - but the actual comparison of data sets is done in SQL.
    Of course, that is if you do want to do it the most optimal way, that will perform well, and scale well.
    SQL is not an I/O API layer - to be used to read() a record and write() a record as if the RDBMS is an ISAM file. That form of row-by-row and slow-by-slow processing dates back to the 80's when we used Cobol.. (or at least for those old farts like me that can actually remember coding in Cobol in the 80's ;-) ).
    You want to design and code database applications that are fast, robust, and can scale? Then learn how to use SQL correctly.

  • Index not being used in Select statement

    Friends
    I have the following SQL statement:
    SELECT
    a.acct,
    a.date_field,
    UPPER(b.feegroup) feegrp,
    SUM (a.fee1) fee1,
    SUM (a.fee2) fee2,
    SUM (a.fee3) fee3
    FROM table1 a, table2 b
    WHERE 1 = 1
    AND a.fee_id = b.fee_id
    GROUP BY a.acct, a.date_field, b.feegroup;
    Both the tables have index on fee_id column. When I run the explain plan for this statement, I am getting the following output:
    Operation | Option | Object Name | Position
    SELECT STATEMENT | | | 560299
    HASH | GROUP BY| |1
    TABLE ACCESS | FULL| table2 | 1
    TABLE ACCESS | FULL| table1 | 2
    Why is Oracle not using the index?
    Edited by: darshilm on Dec 10, 2009 3:56 PM

    The proposed plan is optimal according to your current conditions in the "where clause", where you have only the equality join condition and therefore the CBO can use a HASH JOIN. Using any kind of index access would just increase the amount of required work unless you added some very restrictive conditions that select rows from a relatively small number of blocks. Here I have to mention that what really counts in the CBO cost calculation is the number of blocks accessed and not the number of rows. The "currency" for I/O in Oracle is a block and not a row. The CBO always assumes that there is nothing in the buffer cache and that it will have to perform a physical read for every block.
    How many blocks will actually be accessed depends on the data distribution. It can happen that every single row you have to retrieve resides in a different block, and although you access only 1000 rows out of a million-row table you would have to visit almost every block of that table. For such a situation a FULL TABLE SCAN is the best access path, and Oracle will use multiblock I/O for it. On the other hand, those 1000 rows may sit in only a few blocks, and then index access would be the most appropriate. For index access Oracle uses single-block I/O. Usually the actual situation is somewhere between these two extremes. But you can run some tests yourself and see when index access is replaced by a full table scan as you make your predicates less selective.
    HTH, Joze
    Co-author of the forthcoming book "Expert Oracle Practices"
    http://www.apress.com/book/view/9781430226680
    Oracle related blog: http://joze-senegacnik.blogspot.com/
    Blog about flying: http://jsenegacnik.blogspot.com/
    Blog about Building Ovens, Baking and Cooking: http://senegacnik.blogspot.com

  • Joins and For All Entries in Select Statement

    Could you please tell me: when a high volume of data is being handled in the table, does the use of INNER JOINs and FOR ALL ENTRIES in SELECT statements decrease system performance?
    Can you also let me know where I can get some tips regarding do's and don'ts for ABAP programming? I want to improve performance.
    Currently the programs in use are taking a lot of time to execute...
    It's very urgent!

    Hi Jyotsna
    Go through the following tips for improving performance.
    For all entries
    The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
    The plus
    Large amount of data
    Mixing processing and reading of data
    Fast internal reprocessing of data
    Fast
    The Minus
    Difficult to program/understand
    Memory could be critical (use FREE or PACKAGE size)
    Some steps that might make FOR ALL ENTRIES more efficient (see the sketch after the example below):
    Removing duplicates from the driver table
    Sorting the driver table
    If possible, convert the data in the driver table to ranges so a BETWEEN statement is used instead of an OR statement:
                   FOR ALL ENTRIES IN i_tab
                      WHERE mykey >= i_tab-low and
                 mykey <= i_tab-high.
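    For illustration, a sketch of those preparation steps around a FOR ALL ENTRIES select; SPFLI and SFLIGHT are used purely as placeholder tables:
    DATA: it_spfli   TYPE STANDARD TABLE OF spfli,
          it_sflight TYPE STANDARD TABLE OF sflight.
    SELECT * FROM spfli INTO TABLE it_spfli WHERE carrid = 'LH'.
    " Remove duplicates so the generated OR list stays as short as possible
    SORT it_spfli BY carrid connid.
    DELETE ADJACENT DUPLICATES FROM it_spfli COMPARING carrid connid.
    " Never run FOR ALL ENTRIES on an empty driver table - it would select everything
    IF NOT it_spfli[] IS INITIAL.
      SELECT * FROM sflight
        INTO TABLE it_sflight
        FOR ALL ENTRIES IN it_spfli
        WHERE carrid = it_spfli-carrid
          AND connid = it_spfli-connid.
    ENDIF.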
    Nested selects
    The plus:
    Small amount of data
    Mixing processing and reading of data
    Easy to code - and understand
    The minus:
    Large amount of data
    when mixed processing isn’t needed
    Performance killer no. 1
    Select using JOINS
    The plus
    Very large amount of data
    Similar to Nested selects - when the accesses are planned by the programmer
    In some cases the fastest
    Not so memory critical
    The minus
    Very difficult to program/understand
    Mixing processing and reading of data not possible
    Use the selection criteria
    SELECT * FROM SBOOK.                   
      CHECK: SBOOK-CARRID = 'LH' AND       
                      SBOOK-CONNID = '0400'.        
    ENDSELECT.                             
    SELECT * FROM SBOOK                     
      WHERE CARRID = 'LH' AND               
            CONNID = '0400'.                
    ENDSELECT.                              
    Use the aggregated functions
    C4A = '000'.              
    SELECT * FROM T100        
      WHERE SPRSL = 'D' AND   
            ARBGB = '00'.     
      CHECK: T100-MSGNR > C4A.
      C4A = T100-MSGNR.       
    ENDSELECT.                
    SELECT MAX( MSGNR ) FROM T100 INTO C4A 
    WHERE SPRSL = 'D' AND                
           ARBGB = '00'.                  
    Select with view
    SELECT * FROM DD01L                    
      WHERE DOMNAME LIKE 'CHAR%'           
            AND AS4LOCAL = 'A'.            
      SELECT SINGLE * FROM DD01T           
        WHERE   DOMNAME    = DD01L-DOMNAME 
            AND AS4LOCAL   = 'A'           
            AND AS4VERS    = DD01L-AS4VERS 
            AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    SELECT * FROM DD01V                    
    WHERE DOMNAME LIKE 'CHAR%'           
           AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    Select with index support
    SELECT * FROM T100            
    WHERE     ARBGB = '00'      
           AND MSGNR = '999'.    
    ENDSELECT.                    
    SELECT * FROM T002.             
      SELECT * FROM T100            
        WHERE     SPRSL = T002-SPRAS
              AND ARBGB = '00'      
              AND MSGNR = '999'.    
      ENDSELECT.                    
    ENDSELECT.                      
    Select … Into table
    REFRESH X006.                 
    SELECT * FROM T006 INTO X006. 
      APPEND X006.                
    ENDSELECT
    SELECT * FROM T006 INTO TABLE X006.
    Select with selection list
    SELECT * FROM DD01L              
      WHERE DOMNAME LIKE 'CHAR%'     
            AND AS4LOCAL = 'A'.      
    ENDSELECT
    SELECT DOMNAME FROM DD01L    
    INTO DD01L-DOMNAME         
    WHERE DOMNAME LIKE 'CHAR%' 
           AND AS4LOCAL = 'A'.  
    ENDSELECT
    Key access to multiple lines
    LOOP AT TAB.          
    CHECK TAB-K = KVAL. 
    ENDLOOP.              
    LOOP AT TAB WHERE K = KVAL.     
    ENDLOOP.                        
    Copying internal tables
    REFRESH TAB_DEST.              
    LOOP AT TAB_SRC INTO TAB_DEST. 
      APPEND TAB_DEST.             
    ENDLOOP.                       
    TAB_DEST[] = TAB_SRC[].
    Modifying a set of lines
    LOOP AT TAB.             
      IF TAB-FLAG IS INITIAL.
        TAB-FLAG = 'X'.      
      ENDIF.                 
      MODIFY TAB.            
    ENDLOOP.                 
    TAB-FLAG = 'X'.                  
    MODIFY TAB TRANSPORTING FLAG     
               WHERE FLAG IS INITIAL.
    Deleting a sequence of lines
    DO 101 TIMES.               
      DELETE TAB_DEST INDEX 450.
    ENDDO.                      
    DELETE TAB_DEST FROM 450 TO 550.
    Linear search vs. binary
    READ TABLE TAB WITH KEY K = 'X'.
    READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.
    Comparison of internal tables
    DESCRIBE TABLE: TAB1 LINES L1,      
                    TAB2 LINES L2.      
    IF L1 <> L2.                        
      TAB_DIFFERENT = 'X'.              
    ELSE.                               
      TAB_DIFFERENT = SPACE.            
    LOOP AT TAB1.
        READ TABLE TAB2 INDEX SY-TABIX. 
        IF TAB1 <> TAB2.                
          TAB_DIFFERENT = 'X'. EXIT.    
        ENDIF.                          
      ENDLOOP.                          
    ENDIF.                              
    IF TAB_DIFFERENT = SPACE.           
    ENDIF.                              
    IF TAB1[] = TAB2[].  
    ENDIF.               
    Modify selected components
    LOOP AT TAB.           
    TAB-DATE = SY-DATUM. 
    MODIFY TAB.          
    ENDLOOP.               
    WA-DATE = SY-DATUM.                    
    LOOP AT TAB.                           
    MODIFY TAB FROM WA TRANSPORTING DATE.
    ENDLOOP.                               
    Appending two internal tables
    LOOP AT TAB_SRC.              
      APPEND TAB_SRC TO TAB_DEST. 
    ENDLOOP
    APPEND LINES OF TAB_SRC TO TAB_DEST.
    Deleting a set of lines
    LOOP AT TAB_DEST WHERE K = KVAL. 
      DELETE TAB_DEST.               
    ENDLOOP
    DELETE TAB_DEST WHERE K = KVAL.
    Tools available in SAP to pin-point a performance problem
    ·                The runtime analysis (SE30)
    ·                SQL Trace (ST05)
    ·                Tips and Tricks tool
    ·                The performance database
    Optimizing the load of the database
    Using table buffering
    Using buffered tables improves performance considerably. Note that in some cases a statement cannot be used with a buffered table, so when these statements are used the buffer is bypassed. These statements are:
    SELECT DISTINCT
    ORDER BY / GROUP BY / HAVING clause
    Any WHERE clause that contains a subquery or IS NULL expression
    JOINs
    SELECT ... FOR UPDATE
    If you want to explicitly bypass the buffer, use the BYPASSING BUFFER addition to the SELECT statement.
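    For example (a minimal sketch; T100 is a buffered table in most systems, and the key values are just placeholders):
    DATA wa_t100 TYPE t100.
    " Read straight from the database, skipping the table buffer
    SELECT SINGLE * FROM t100 BYPASSING BUFFER
      INTO wa_t100
      WHERE sprsl = 'E'
        AND arbgb = '00'
        AND msgnr = '001'.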
    Use the ABAP SORT Clause Instead of ORDER BY
    The ORDER BY clause is executed on the database server while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.
    If you are not sorting by the primary key (i.e. not using ORDER BY PRIMARY KEY) but by another key, it can be better to use the ABAP SORT statement to sort the data in an internal table. Note, however, that for very large result sets this might not be feasible and you may want to let the database server do the sorting.
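    A sketch of that idea (SBOOK is used only as an example table):
    DATA it_sbook TYPE STANDARD TABLE OF sbook.
    " Fetch without ORDER BY, then sort on the application server
    SELECT * FROM sbook
      INTO TABLE it_sbook
      WHERE carrid = 'LH'.
    SORT it_sbook BY fldate customid.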
    Avoid the SELECT DISTINCT Statement
    As with the ORDER BY clause, it can be better to avoid SELECT DISTINCT if some of the fields are not part of an index. Instead, use ABAP SORT + DELETE ADJACENT DUPLICATES on an internal table to remove duplicate rows.
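    A sketch of that alternative (again with SBOOK as a placeholder table):
    DATA: BEGIN OF wa_carr,
            carrid TYPE sbook-carrid,
          END OF wa_carr,
          it_carr LIKE STANDARD TABLE OF wa_carr.
    " Instead of SELECT DISTINCT carrid FROM sbook ...
    SELECT carrid FROM sbook
      INTO TABLE it_carr.
    SORT it_carr BY carrid.
    DELETE ADJACENT DUPLICATES FROM it_carr COMPARING carrid.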
    Thanks & regards
    Sreenivasulu P

  • How to optimized Multiple CASE statement

    Hi..
    Could somebody please help me write the most optimized multiple CASE expression for the example below?
    Select
    CAST(COUNT(DISTINCT(CASE WHEN period_id < 350 THEN period_id END)) AS DECIMAL(30,0)) AS Column1,
    CAST(COUNT(DISTINCT(CASE WHEN period_id >= 351 AND period_id <= 375 THEN period_id END)) AS DECIMAL(30,0)) AS Column2,
    CAST(COUNT(DISTINCT(CASE WHEN period_id >= 376 AND period_id <= 400 THEN period_id END)) AS DECIMAL(30,0)) AS Column3,
    CAST(COUNT(DISTINCT(CASE WHEN period_id >= 401 AND period_id <= 450 THEN period_id END)) AS DECIMAL(30,0)) AS Column4,
    CAST(COUNT(DISTINCT(CASE WHEN period_id >= 451 AND period_id <= 575 THEN period_id END)) AS DECIMAL(30,0)) AS Column5,
    CAST(COUNT(DISTINCT(CASE WHEN pmc = 'UVS' THEN pmc END)) AS DECIMAL(30,0)) AS Column6,
    CAST(COUNT(DISTINCT(CASE WHEN pmc = 'VAL' THEN pmc END)) AS DECIMAL(30,0)) AS Column7
    FROM [Table1]
    Thanks in Advance,
    Deepak Goyal

    How is an index going to help in a query with no where clause or join?
    Rhetorical question but it brings to light the problem.  This query is going to produce a table/index scan precisely because there's no where clause.
    You need to change the way you are thinking about the problem.  The problem isn't that you need a better performing case statement, the problem is that you need to minimize the number of records you ask SQL Server to read from disk.  You do that
    by using a where clause or by joining to a smaller table.
    Here's one option which should give you the desired results while performing much better:
    create index idx_period_id on Table1 (period_id)
    create index idx_pmc on Table1 (pmc) include (period_id)
    select (select count(distinct period_id) from Table1 where period_id < 350) as Column1,
    (select count(distinct period_id) from Table1 where period_id between 351 and 375) as Column2,
    (select count(distinct period_id) from Table1 where period_id between 376 and 400) as Column3,
    (select count(distinct period_id) from Table1 where period_id between 401 and 450) as Column4,
    (select count(distinct period_id) from Table1 where period_id between 451 and 575) as Column5,
    (select count(distinct period_id) from Table1 where pmc = 'UVS') as Column6,
    (select count(distinct period_id) from Table1 where pmc = 'VAL') as Column7

  • Optimize SELECT statement

    Hello Colleagues,
    Please assist me in optimizing the two SELECT statements below, which are taking too much time.
    1. SELECT SINGLE COUNT(*) FROM j_3abdsi
                              WHERE matnr IN r_matnr
                                AND j_4krcat EQ im_valcat.
    -- I just need to check whether there are entries in the table with this criteria. 
    2.   SELECT e~vbeln FROM ( vbep AS e
             INNER JOIN vbap AS p ON p~vbeln = e~vbeln
                            AND p~posnr = e~posnr )
              INTO lv_vbeln
             WHERE p~matnr IN r_matnr
               AND e~j_4krcat EQ im_valcat.
        EXIT.
      ENDSELECT.
    -- here also the case is same. Just need to check whether any value exist.
    Kindly help.
    Regards
    Sudha
    Edited by: Sudha Naik on Oct 8, 2009 7:32 AM

    Hi Sudha,
    1) I do not understand why you would say SELECT SINGLE COUNT(*). If you know the entire primary key and would like to retrieve a record, you use SELECT SINGLE to retrieve just that single record. SELECT COUNT(*) returns the number of records meeting your selection criteria in the system field SY-DBCNT. SELECT COUNT(*) is generally used to check for the existence of a record, because this construct just counts the qualifying records and does not retrieve any data fields from the table. You seem to have merged both these statements and created something I have never seen before. If you are only interested in finding out whether a record exists, may I suggest that you use the following.
    DATA: w_matnr TYPE mara-matnr.
    SELECT matnr
      FROM j_3abdsi
      INTO w_matnr
      UP TO 1 ROWS
      WHERE matnr    IN r_matnr
      AND   j_4krcat EQ im_valcat.
    ENDSELECT.
    SELECT COUNT(*) is good if you are using an index. If not, you will be trying to count the records without an index, which can be time consuming. If you are not interested in the record count and only want to know whether a qualifying record exists, the above option will be faster because SAP stops at the first instance and returns SY-SUBRC = 0.
    2) Use the following code.
    SELECT e~vbeln
      FROM       vbap AS p
      INNER JOIN vbep AS e
      ON  p~vbeln = e~vbeln
      AND p~posnr = e~posnr
      INTO lv_vbeln
      UP TO 1 ROWS
      WHERE p~matnr    IN r_matnr
      AND   e~j_4krcat EQ im_valcat.
    ENDSELECT.

  • How to compile the hint to force selection statement to use index

    Hello expert,
    will you please tell me how to compile the hint to force selection statement to use index?
    Many Thanks,

    Not sure what you mean by compile, but hint is enclosed in /*+ hint */. Index hint is INDEX(table_name,index_name). For example:
    SQL> explain plan for
      2  select * from emp
      3  /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3956160932
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |    14 |   546 |     3   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| EMP  |    14 |   546 |     3   (0)| 00:00:01 |
    8 rows selected.
    SQL> explain plan for
      2  select /*+ index(emp,pk_emp) */ *
      3  from emp
      4  /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 4170700152
    | Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |        |    14 |   546 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMP    |    14 |   546 |     2   (0)| 00:00:01 |
    |   2 |   INDEX FULL SCAN           | PK_EMP |    14 |       |     1   (0)| 00:00:01 |
    9 rows selected.
    SQL>
    The hint in the above example forces the optimizer to use the index, which results in a worse execution plan. Most of the time the optimizer does not need hints and chooses an optimal plan. In most cases a sub-optimal plan is the result of stale or incomplete statistics.
    SY.

  • Simulation of SELECT statements

    Hi,
    Can anybody help with developing a SELECT-tool program with the following conditions:
    This new program should be able to pass any SELECT statement to the database and evaluate certain KPIs such as response time, number of records selected, index used, and so on.
    The program should not be able to display selected records.
    The program should be able to simulate working in foreground as well as in background.
    The program should be able to generate a result list which could be printed on our laser printer.
    The program should not be executable for everybody but for selected users only.
    Reason for this request is that we often face issues in Prod with programs having select statements which are neither optimized nor sufficiently tested. Hence we need to find a way to simulate all select statements of any coding on Prod before actually transporting them.
    I used the ST05 transaction, but it is not sufficient to test all kinds of SELECT statements.
    It is restricted to very limited SQL statements.
    Thanks,
    Rama

    Hi,
    I already did the same thing, as I mentioned in my request.
    I have used ST05 transaction --> Enter SQL statement.
    It works only if the SQL statement is very simple.
    But in real life we have somewhat complex SELECT statements;
    we use internal tables, and sometimes joins too....
    I need solution for medium/complex SELECT statements.
    Any help?
    Thanx,
    Rama

  • If statement in select statement alias

    I have the following select statement. It has the aliases Survivors, Deaths and "All Cases". Is it possible to use the :P_LANGUAGE variable to say that IF :P_LANGUAGE = FRENCH THEN the aliases are Survivants for Survivors, Décès for Deaths, and Tous_les_cas for All Cases? Please advise.
    SELECT ALL T_NTR_MULTIBAR.CAT, T_NTR_MULTIBAR.NUM_CASES_LEFTBAR AS Survivors,
    T_NTR_MULTIBAR.NUM_CASES_MIDDLEBAR AS Deaths, T_NTR_MULTIBAR.NUM_CASES_RIGHTBAR AS "All Cases"
    FROM T_NTR_MULTIBAR
    WHERE INSTANCE_NUM = :P_INSTANCENUM
    order by ORDERS

    You may not be able to add this condition inside the SQL Statement. But you can add this condition outside the statement, if you're using PL/SQL...
    IF :p_language = french THEN
    SELECT ALL t_ntr_multibar.cat,
      t_ntr_multibar.num_cases_leftbar AS survivors,
      t_ntr_multibar.num_cases_middlebar AS deaths,
      t_ntr_multibar.num_cases_rightbar AS "All Cases"
    ELSE
    END IF;

  • How to get all values from an interval using select statement

    Hi,
    Is it possible to write a select statement that returns all values from an interval? Interval boundaries are variable.
    something like this:
    select (for x in 1,1024 loop x end loop) from dual
    (this, of course, doesn't work)
    The result in this example should be 1024 rows of numbers from 1 to 1024. These numbers are parameters, so it is not possible to create a table with predefined values.
    Thanks in advance for your help,
    Mia.

    For your simple case, with a lower boundary of 1, you can use:
    SELECT rownum
    FROM all_objects
    WHERE rownum <= 1024
    For a set of numbers between say 50 and 100, you can use something like:
    SELECT rownum + (50 - 1)
    FROM all_objects
    WHERE rownum <= (100 - 50 + 1)
    Note that all_objects was used only because it generally has a lot of rows. Any table with at least the number of rows in your range will work.
    TTFN
    John

  • Select statement operators in ecc 6.

    Hi Experts,
    I have a small doubt about the '>=' (greater than or equal to) operator usage in a select statement. Does this operator by any chance not perform as desired in ECC 6.0? Is it a better option to use 'GE' instead of '>='?
    It may sound a bit awkward, but still I would like to know. I am facing a situation which could be related to this. An early response would be highly appreciated.
    I would request you NOT to reply with links/explanations that say how to use a select statement. Only answer if you have answers related to this query.
    Regards,
    Sandipan

    >
    Jaideep Sharma wrote:
    > Hi,
    > The only difference is GE will take a little more time than >= as system need to convert the keyword into actual operator when fetching data from Database.
    >
    > KR Jaideep,
    ????? Every Open SQL statement is translated to the SQL dialect of the underlying database, regardless of whether you type GE or >=.
    If the result differs between >= and GE, I would open a call with SAP instead of asking on SDN.

  • How to find the number of fetched lines from select statement

    Hi Experts,
    Can you tell me how to find the number of fetched lines from select statements?
    And one more thing: can you tell me how to check whether a written select statement is correct or not?
    Thanks in advance
    santosh

    Hi,
    Look at the system field SY-DBCNT. It will contain the number of records which have been put into an internal table through a select statement.
    For ex:
    data: itab type mara occurs 0 with header line.
    Select * from mara into table itab.
    Write: sy-dbcnt.
    This will give you the number of entries that have been selected.
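    If the rows are already in an internal table, the line count can also be read from the table itself (a small sketch):
    DATA: itab     TYPE STANDARD TABLE OF mara,
          lv_lines TYPE sy-tabix.
    SELECT * FROM mara INTO TABLE itab UP TO 100 ROWS.
    " sy-dbcnt holds the number of rows processed by the last Open SQL statement
    WRITE: / sy-dbcnt.
    " DESCRIBE TABLE returns the number of lines currently in the internal table
    DESCRIBE TABLE itab LINES lv_lines.
    WRITE: / lv_lines.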
    I am not sure what you mean by the second question. If you can let me know what you need then we might have a solution.
    Hope this helps,
    Sudhi
    Message was edited by:
            Sudhindra Chandrashekar

  • Use of LIKE in where clause of select statement for multiple records

    Hi Experts,
    I have an account number field which is uploaded from a file. The uploaded account numbers do not match the SAP table account numbers exactly, but the SAP values contain the uploaded numbers, mostly in the rightmost positions.
    For example, in the file we have account number 2ARS1 while in the SAP table the value is 002ARS1.
    I want to fetch data from the SAP table based on the uploaded account number, so I am trying to use LIKE with FOR ALL ENTRIES as shown below, but LIKE does not work with FOR ALL ENTRIES.
    data : begin of t_dda occurs 0,
            dda(19) type c,
           end of t_dda.
    data : begin of t_bukrs occurs 0,
            bukrs type t012k-bukrs,
           end of t_bukrs.
    data : dda type t012k-bankn,
           w_dda type t012k-bankn.
    CONCATENATE '%'
                             '2ARS1'
                     INTO  W_DDA.
    MOVE W_DDA TO T_DDA-DDA.
    APPEND T_DDA.
    CLEAR T_DDA.
    free t_bukrs.
    SELECT BUKRS
      FROM T012K
      into TABLE t_bukrs
        for all entries in t_dda
    WHERE BANKN like t_dda-dda.
    Can anybody suggest what I should use to get the data for multiple account numbers with one select statement only, instead of using SELECT UP TO 1 ROWS inside a LOOP...ENDLOOP?
    Thanks in advance,
    Akash

    Hi,
    Yes, FOR ALL ENTRIES won't work with LIKE and '%...' patterns.
    I think the alternative is to go for Native SQL with a sub-query.
    sample code is here:
    data: begin of i_mara occurs 0,
              matnr like mara-matnr,
              matkl like mara-matkl,
           end of i_mara.
    exec sql performing append_mara.
      select matnr, matkl
        into :i_mara
        from mara
        where matnr in ( select matnr from marc )
          and matnr like '%ma'
    endexec.
    loop at i_mara.
    write:/ i_mara-matnr, i_mara-matkl.
    endloop.
    " append_mara is called once for each row fetched by the native SQL select
    form append_mara.
      append i_mara.
    endform.
    hope u got it.
    regards
    Mahesh
    Edited by: Mahesh Reddy on Jan 21, 2009 2:32 PM
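    One more option that stays in Open SQL is a ranges table with pattern entries. This is only a sketch, and it assumes that the CP option in a ranges table is translated to LIKE on the database (note that the wildcard here is *, not %):
    DATA: r_bankn  TYPE RANGE OF t012k-bankn,
          wa_bankn LIKE LINE OF r_bankn,
          t_bukrs  TYPE STANDARD TABLE OF t012k-bukrs.
    " One 'contains pattern' entry per uploaded account number
    wa_bankn-sign   = 'I'.
    wa_bankn-option = 'CP'.
    CONCATENATE '*' '2ARS1' INTO wa_bankn-low.
    APPEND wa_bankn TO r_bankn.
    SELECT bukrs FROM t012k
      INTO TABLE t_bukrs
      WHERE bankn IN r_bankn.
    " Note: a leading wildcard still prevents index use on BANKN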

  • What is the use of additon in up to 1 rows in SELECT statement

    Hi All,
             What is the use of up to 1 rows in select statement.
    for example
    SELECT kostl
          FROM pa0001
          INTO y_lv_kostl UP TO 1 ROWS
          WHERE pernr EQ pernr
          AND endda GE sy-datum.
        ENDSELECT.
    I'm unable to understand in what situations we have to add UP TO 1 ROWS.
    please help me out...
    Thanks,
    santosh.

    Hi,
    Use "select up to 1 rows" only if you are sure that all the records returned will have the same value for the field(s) you are interested in. If not, you will be reading only the first record which matches the criteria, but may be the second or the third record has the value you are looking for.
    The System test result showed that the variant Single * takes less time than Up to 1 rows as there is an additional level for COUNT STOP KEY for SELECT ENDSELECT UP TO 1 ROWS.
    The 'SELECT .... UP TO 1 ROWS' statement is subtly different. The database selects all of the relevant records that are defined by the WHERE clause, applies any aggregate, ordering or grouping functions to them and then returns the first record of the result set.
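    For illustration, a sketch of both variants used purely as an existence check (PA0001 as in the question; lv_pernr is a placeholder value):
    DATA: lv_pernr TYPE pa0001-pernr VALUE '00000001',
          lv_kostl TYPE pa0001-kostl.
    " Variant 1: SELECT SINGLE returns one arbitrary matching row
    SELECT SINGLE kostl FROM pa0001
      INTO lv_kostl
      WHERE pernr EQ lv_pernr
        AND endda GE sy-datum.
    " Variant 2: UP TO 1 ROWS stops the loop after the first matching row
    SELECT kostl FROM pa0001
      INTO lv_kostl UP TO 1 ROWS
      WHERE pernr EQ lv_pernr
        AND endda GE sy-datum.
    ENDSELECT.
    IF sy-subrc EQ 0.
      " at least one qualifying record exists
    ENDIF.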
    Regards,
    Bhaskar
