Combined select query
Hi experts,
I have 4 select queries like this:
SELECT * FROM t009
  INTO TABLE it_t009
  FOR ALL ENTRIES IN lt_bukrs
  WHERE periv = lt_bukrs-periv.

LOOP AT lt_bukrs INTO ls_bukrs.
  READ TABLE it_t009 WITH KEY periv = ls_bukrs-periv.
  IF sy-subrc NE 0.
    it_incon-orgunit = 'b'.
    it_incon-value   = ls_bukrs-bukrs.
    APPEND it_incon.
    DELETE TABLE lt_bukrs FROM ls_bukrs.
  ENDIF.
ENDLOOP.

SELECT * FROM t009
  INTO TABLE it_t009
  FOR ALL ENTRIES IN lt_kokrs
  WHERE periv = lt_kokrs-lmona.

LOOP AT lt_kokrs INTO ls_kokrs.
  READ TABLE it_t009 WITH KEY periv = ls_kokrs-lmona.
  IF sy-subrc NE 0.
    it_incon-orgunit = 'k'.
    it_incon-value   = ls_kokrs-kokrs.
    APPEND it_incon.
    DELETE TABLE lt_kokrs FROM ls_kokrs.
  ENDIF.
ENDLOOP.

SELECT * FROM t009
  INTO TABLE it_t009
  FOR ALL ENTRIES IN lt_erkrs
  WHERE periv = lt_erkrs-periv.

LOOP AT lt_erkrs INTO ls_erkrs.
  READ TABLE it_t009 WITH KEY periv = ls_erkrs-periv.
  IF sy-subrc NE 0.
    it_incon-orgunit = 'e'.
    it_incon-value   = ls_erkrs-erkrs.
    APPEND it_incon.
    DELETE TABLE lt_erkrs FROM ls_erkrs.
  ENDIF.
ENDLOOP.

SELECT * FROM t009
  INTO TABLE it_t009
  FOR ALL ENTRIES IN lt_verso
  WHERE periv = lt_verso-periv.

LOOP AT lt_verso INTO ls_verso.
  READ TABLE it_t009 WITH KEY periv = ls_verso-periv.
  IF sy-subrc NE 0.
    it_incon-orgunit = 'v'.
    it_incon-value   = ls_verso-verso.
    APPEND it_incon.
    DELETE TABLE lt_verso FROM ls_verso.
  ENDIF.
ENDLOOP.
Now, if I need to combine these SELECT queries into a single one and still fill the internal table it_incon per table, how can I go about it? What would be the code for that?
Thanks,
Ajay.
Hi,
loop at all 4 tables (lt_bukrs, lt_kokrs, lt_erkrs, lt_verso) separately and append the PERIV values (LMONA in the case of lt_kokrs) to a single internal table, e.g. I_PERIV_TMP.
Then,
sort I_PERIV_TMP by PERIV.
delete adjacent duplicates from I_PERIV_TMP comparing PERIV.

if I_PERIV_TMP[] is not initial.
  SELECT * FROM t009
    INTO TABLE it_t009
    FOR ALL ENTRIES IN I_PERIV_TMP
    WHERE periv = I_PERIV_TMP-periv.
endif.

Then do your 4 separate loops. (Guard the FOR ALL ENTRIES with the IS NOT INITIAL check: with an empty driver table it would fetch the whole of T009.)
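Putting that together, a minimal sketch (assuming the tables and work areas from the original post; I_PERIV_TMP is a new helper table with a single PERIV field):

```abap
" Helper table holding all fiscal-year variants to look up.
DATA: BEGIN OF ls_periv,
        periv TYPE t009-periv,
      END OF ls_periv,
      i_periv_tmp LIKE STANDARD TABLE OF ls_periv.

LOOP AT lt_bukrs INTO ls_bukrs.
  ls_periv-periv = ls_bukrs-periv.
  APPEND ls_periv TO i_periv_tmp.
ENDLOOP.
LOOP AT lt_kokrs INTO ls_kokrs.
  ls_periv-periv = ls_kokrs-lmona.   " controlling areas keep the variant in LMONA
  APPEND ls_periv TO i_periv_tmp.
ENDLOOP.
LOOP AT lt_erkrs INTO ls_erkrs.
  ls_periv-periv = ls_erkrs-periv.
  APPEND ls_periv TO i_periv_tmp.
ENDLOOP.
LOOP AT lt_verso INTO ls_verso.
  ls_periv-periv = ls_verso-periv.
  APPEND ls_periv TO i_periv_tmp.
ENDLOOP.

SORT i_periv_tmp BY periv.
DELETE ADJACENT DUPLICATES FROM i_periv_tmp COMPARING periv.

" Guard against an empty driver table: FOR ALL ENTRIES with an
" empty table would fetch every row of T009.
IF i_periv_tmp[] IS NOT INITIAL.
  SELECT * FROM t009 INTO TABLE it_t009
    FOR ALL ENTRIES IN i_periv_tmp
    WHERE periv = i_periv_tmp-periv.
ENDIF.
```

The four existing LOOP/READ blocks then stay exactly as they are; only the four SELECTs collapse into one.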
Regards,
Dev.
Similar Messages
-
Hi All,
Please tell me how to combine the following 2 select queries:
SELECT SINGLE konda
  FROM vbkd
  INTO v_konda
  WHERE vbeln EQ v_vbelv.

SELECT SINGLE traty
  FROM vbkd
  INTO wa_output-out_code
  WHERE vbeln EQ v_vbelv
    AND posnr EQ v_posnr.
in order to access VBKD table only once.(here wa_output is a work area)
Thanks,
Neethu.
Hi,
I don't think there is a single-query option for your requirement, because the two reads use different keys. Strictly speaking you would have to write two SELECT statements. But you can try something like the following:
SELECT SINGLE konda traty FROM vbkd
  INTO (v_konda, wa_output-out_code)
  WHERE vbeln EQ v_vbelv
    AND posnr EQ v_posnr.
if sy-subrc ne 0.
  SELECT SINGLE konda FROM vbkd
    INTO v_konda
    WHERE vbeln EQ v_vbelv.
endif.
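An alternative that also touches VBKD only once is an array fetch followed by two reads on the internal table. This is only a sketch under the thread's assumptions (v_vbelv, v_posnr, wa_output as above; lt_vbkd and ls_vbkd are hypothetical local names):

```abap
" Fetch all VBKD rows for the document once, then pick the fields
" from the internal table instead of issuing two SELECT SINGLEs.
DATA: lt_vbkd TYPE STANDARD TABLE OF vbkd,
      ls_vbkd TYPE vbkd.

SELECT * FROM vbkd
  INTO TABLE lt_vbkd
  WHERE vbeln = v_vbelv.

" TRATY from the item-specific row, if it exists
READ TABLE lt_vbkd INTO ls_vbkd WITH KEY posnr = v_posnr.
IF sy-subrc = 0.
  wa_output-out_code = ls_vbkd-traty.
ENDIF.

" KONDA from any row of the document (as in the original SELECT SINGLE)
READ TABLE lt_vbkd INTO ls_vbkd INDEX 1.
IF sy-subrc = 0.
  v_konda = ls_vbkd-konda.
ENDIF.
```

Whether this pays off depends on how many rows VBKD holds per document; for a handful of items it is usually cheaper than two separate database accesses.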
Thanks and Best Regards,
Suresh -
Select Query...... Combinations
Hi Experts,
I have a custom table for determining the approver, which looks like the data below. If nothing is entered in the highlighted fields it means "*", i.e. the entry applies to all values.
Appl Type | Appl Subtype | Pers Area | Pers SubArea | Emp Grp | Emp Subgrp | End Date   | Appr Level | Begin Date | Approver
CLAIM     | 1111         |           |              |         |            | 31.12.9999 | 3          | 01.01.2010 | XXX
CLAIM     | 1111         | PA        |              |         |            | 31.12.9999 | 3          | 01.01.2010 | YYY
I am trying to write Select Query on this table as below.
DATA: t_appl_subty TYPE RANGE OF zzp_appl_sub,
      w_appl_subty LIKE LINE OF t_appl_subty,
      t_persa      TYPE RANGE OF persa,
      w_persa      LIKE LINE OF t_persa,
      t_btrtl      TYPE RANGE OF btrtl,
      w_btrtl      LIKE LINE OF t_btrtl,
      t_persg      TYPE RANGE OF persg,
      w_persg      LIKE LINE OF t_persg,
      t_persk      TYPE RANGE OF persk,
      w_persk      LIKE LINE OF t_persk.

w_appl_subty-sign   = 'I'.
w_appl_subty-option = 'EQ'.
w_appl_subty-low    = i_appl_subty.
APPEND w_appl_subty TO t_appl_subty.
CLEAR w_appl_subty.
w_appl_subty-sign   = 'I'.
w_appl_subty-option = 'EQ'.
w_appl_subty-low    = ''.
APPEND w_appl_subty TO t_appl_subty.
CLEAR w_appl_subty.

w_persa-sign   = 'I'.
w_persa-option = 'EQ'.
w_persa-low    = i_persa.
APPEND w_persa TO t_persa.
CLEAR w_persa.
w_persa-sign   = 'I'.
w_persa-option = 'EQ'.
w_persa-low    = ''.
APPEND w_persa TO t_persa.
CLEAR w_persa.

w_btrtl-sign   = 'I'.
w_btrtl-option = 'EQ'.
w_btrtl-low    = i_btrtl.
APPEND w_btrtl TO t_btrtl.
CLEAR w_btrtl.
w_btrtl-sign   = 'I'.
w_btrtl-option = 'EQ'.
w_btrtl-low    = ''.
APPEND w_btrtl TO t_btrtl.
CLEAR w_btrtl.

w_persg-sign   = 'I'.
w_persg-option = 'EQ'.
w_persg-low    = i_persg.
APPEND w_persg TO t_persg.
CLEAR w_persg.
w_persg-sign   = 'I'.
w_persg-option = 'EQ'.
w_persg-low    = ''.
APPEND w_persg TO t_persg.
CLEAR w_persg.

w_persk-sign   = 'I'.
w_persk-option = 'EQ'.
w_persk-low    = i_persk.
APPEND w_persk TO t_persk.
CLEAR w_persk.
w_persk-sign   = 'I'.
w_persk-option = 'EQ'.
w_persk-low    = ''.
APPEND w_persk TO t_persk.
CLEAR w_persk.
SELECT SINGLE * FROM zapprover
INTO wa_approver
WHERE appl_typ = i_appl_typ
AND appl_subty IN t_appl_subty
AND persa IN t_persa
AND btrtl IN t_btrtl
AND persg IN t_persg
AND persk IN t_persk
AND appr_lvl = i_appr_lvl
AND endda GE sy-datum.
I am passing data as
i_appl_typ = 'CLAIM'
i_appl_subty = '1111'
i_persa = 'PA'
i_btrtl = ' '
i_persg = ' '
i_persk = ' '
i_appr_lvl = 3
It is picking the first record, but as per my requirement, if no record is available for Pers Area "PA", it has to fall back to the more generic first record. The lookup seems to lead to many field combinations.
How can I write the SELECT query? Please help me.
Hi Jozef,
Initially I tried like this:
GET RUN TIME FIELD DATA(t1).
DATA: it_approver TYPE STANDARD TABLE OF zapprover,
lv_subrc TYPE sy-subrc.
DATA: lv_appl_typ TYPE zzp_appl_typ,
lv_appl_subty TYPE zzp_appl_sub,
lv_persa TYPE persa,
lv_btrtl TYPE btrtl,
lv_persg TYPE persg,
lv_persk TYPE persk,
lv_appr_lvl TYPE zzp_appr_lvl.
REFRESH it_approver[].
CLEAR: wa_approver,lv_appl_typ,lv_appl_subty,lv_persa,lv_btrtl,lv_persg,lv_persk,lv_appr_lvl,lv_subrc.
PERFORM fill_local_values USING i_appl_typ
i_appl_subty
i_persa
i_btrtl
i_persg
i_persk
i_appr_lvl
CHANGING lv_appl_typ
lv_appl_subty
lv_persa
lv_btrtl
lv_persg
lv_persk
lv_appr_lvl.
SELECT * FROM zapprover
INTO TABLE it_approver
WHERE appl_typ = i_appl_typ
AND endda GT sy-datum.
IF 0 = sy-subrc.
SORT it_approver.
IF i_appl_typ = 'CLAIM'.
PERFORM read_appr TABLES it_approver
USING lv_appl_subty
lv_persa
lv_btrtl
lv_persg
lv_persk
lv_appr_lvl
CHANGING lv_subrc
wa_approver.
IF NOT lv_subrc IS INITIAL AND wa_approver IS INITIAL.
DO 4 TIMES.
CLEAR lv_subrc.
PERFORM fill_local_values USING i_appl_typ
i_appl_subty
i_persa
i_btrtl
i_persg
i_persk
i_appr_lvl
CHANGING lv_appl_typ
lv_appl_subty
lv_persa
lv_btrtl
lv_persg
lv_persk
lv_appr_lvl.
CASE sy-index.
WHEN 1.
CLEAR: lv_persk.
WHEN 2.
CLEAR: lv_persg.
WHEN 3.
CLEAR: lv_btrtl.
WHEN 4.
CLEAR: lv_persa.
WHEN OTHERS.
ENDCASE.
PERFORM read_appr TABLES it_approver
USING lv_appl_subty
lv_persa
lv_btrtl
lv_persg
lv_persk
lv_appr_lvl
CHANGING lv_subrc
wa_approver.
IF lv_subrc IS INITIAL.
EXIT.
ENDIF.
ENDDO.
IF NOT lv_subrc IS INITIAL AND wa_approver IS INITIAL.
DO 6 TIMES.
CLEAR lv_subrc.
PERFORM fill_local_values USING i_appl_typ
i_appl_subty
i_persa
i_btrtl
i_persg
i_persk
i_appr_lvl
CHANGING lv_appl_typ
lv_appl_subty
lv_persa
lv_btrtl
lv_persg
lv_persk
lv_appr_lvl.
CASE sy-index.
WHEN 1.
CLEAR: lv_persa,lv_btrtl.
WHEN 2.
CLEAR: lv_persa,lv_persg.
WHEN 3.
CLEAR: lv_persa,lv_persk.
WHEN 4.
CLEAR: lv_persg,lv_btrtl.
WHEN 5.
CLEAR: lv_persg,lv_persk.
WHEN 6.
CLEAR: lv_persk,lv_btrtl.
WHEN OTHERS.
ENDCASE.
PERFORM read_appr TABLES it_approver
USING lv_appl_subty
lv_persa
lv_btrtl
lv_persg
lv_persk
lv_appr_lvl
CHANGING lv_subrc
wa_approver.
IF lv_subrc IS INITIAL.
EXIT.
ENDIF.
ENDDO.
ELSE.
EXIT.
ENDIF.
IF NOT lv_subrc IS INITIAL AND wa_approver IS INITIAL.
DO 4 TIMES.
CLEAR lv_subrc.
PERFORM fill_local_values USING i_appl_typ
i_appl_subty
i_persa
i_btrtl
i_persg
i_persk
i_appr_lvl
CHANGING lv_appl_typ
lv_appl_subty
lv_persa
lv_btrtl
lv_persg
lv_persk
lv_appr_lvl.
CASE sy-index.
WHEN 1.
CLEAR: lv_persa,lv_btrtl,lv_persg.
WHEN 2.
CLEAR: lv_persa,lv_persg,lv_persk.
WHEN 3.
CLEAR: lv_persa,lv_persk,lv_btrtl.
WHEN 4.
CLEAR: lv_btrtl,lv_persg,lv_persk.
WHEN OTHERS.
ENDCASE.
PERFORM read_appr TABLES it_approver
USING lv_appl_subty
lv_persa
lv_btrtl
lv_persg
lv_persk
lv_appr_lvl
CHANGING lv_subrc
wa_approver.
IF lv_subrc IS INITIAL.
EXIT.
ENDIF.
ENDDO.
ELSE.
EXIT.
ENDIF.
IF NOT lv_subrc IS INITIAL AND wa_approver IS INITIAL.
CLEAR: lv_persa,lv_btrtl,lv_persg,lv_persk.
PERFORM read_appr TABLES it_approver
USING lv_appl_subty
lv_persa
lv_btrtl
lv_persg
lv_persk
lv_appr_lvl
CHANGING lv_subrc
wa_approver.
ELSE.
EXIT.
ENDIF.
ENDIF.
ENDIF.
ELSE.
" Error Handling
ENDIF.
GET RUN TIME FIELD DATA(t2).
DATA(t3) = t2 - t1.
FORM fill_local_values USING VALUE(p_0135) TYPE zzp_appl_typ
VALUE(p_0136) TYPE zzp_appl_sub
VALUE(p_0137) TYPE persa
VALUE(p_0138) TYPE btrtl
VALUE(p_0139) TYPE persg
VALUE(p_0140) TYPE persk
VALUE(p_0141) TYPE zzp_appr_lvl
CHANGING p_appl_typ TYPE zzp_appl_typ
p_appl_sub TYPE zzp_appl_sub
p_persa TYPE persa
p_btrtl TYPE btrtl
p_persg TYPE persg
p_persk TYPE persk
p_appr_lvl TYPE zzp_appr_lvl.
p_appl_typ = p_0135.
p_appl_sub = p_0136.
p_persa = p_0137.
p_btrtl = p_0138.
p_persg = p_0139.
p_persk = p_0140.
p_appr_lvl = p_0141.
ENDFORM.                    " FILL_LOCAL_VALUES
FORM read_appr TABLES p_table STRUCTURE ztinp_approver
USING VALUE(p_0050) TYPE zzp_appl_sub
VALUE(p_0051) TYPE persa
VALUE(p_0052) TYPE btrtl
VALUE(p_0053) TYPE persg
VALUE(p_0054) TYPE persk
VALUE(p_0055) TYPE zzp_appr_lvl
CHANGING p_subrc TYPE sy-subrc
p_result TYPE ztinp_approver.
READ TABLE p_table INTO p_result
WITH KEY appl_subty = p_0050
persa = p_0051
btrtl = p_0052
persg = p_0053
persk = p_0054
appr_lvl = p_0055 BINARY SEARCH.
p_subrc = sy-subrc.
ENDFORM. " READ_APPR
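For comparison, the cascading DO blocks above can be collapsed into one loop that enumerates all 2^4 clear-combinations of the four optional fields as a bit mask. This is only a sketch reusing the names from the code above, not the original author's method; note that plain mask order interleaves pair-clears with single-clears, so if the strict singles-then-pairs-then-triples priority matters, the masks should first be ordered by the number of set bits:

```abap
" Try the 16 clear-combinations of the four optional fields,
" starting with mask 0 (nothing cleared = most specific).
DATA lv_mask TYPE i.

DO 16 TIMES.
  lv_mask = sy-index - 1.
  " Restore the full key before clearing fields per the mask bits.
  PERFORM fill_local_values USING i_appl_typ i_appl_subty i_persa
                                  i_btrtl i_persg i_persk i_appr_lvl
                            CHANGING lv_appl_typ lv_appl_subty lv_persa
                                     lv_btrtl lv_persg lv_persk lv_appr_lvl.
  IF lv_mask MOD 2 = 1.       CLEAR lv_persk. ENDIF.
  IF lv_mask DIV 2 MOD 2 = 1. CLEAR lv_persg. ENDIF.
  IF lv_mask DIV 4 MOD 2 = 1. CLEAR lv_btrtl. ENDIF.
  IF lv_mask DIV 8 MOD 2 = 1. CLEAR lv_persa. ENDIF.
  PERFORM read_appr TABLES it_approver
                    USING lv_appl_subty lv_persa lv_btrtl
                          lv_persg lv_persk lv_appr_lvl
                    CHANGING lv_subrc wa_approver.
  IF lv_subrc IS INITIAL.
    EXIT.                     " first (most specific) hit wins
  ENDIF.
ENDDO.
```

This replaces the three nested DO blocks and the final all-cleared read with a single 16-iteration loop over the same READ TABLE helper.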
I don't know whether this is the best method, but it works exactly as I wanted. t3 comes out at around 600 - 800 ms; I don't know what the best achievable execution time would be. And I feel guilty writing this much code for a single record. Please suggest a simpler and better way. -
Select query is working on oracle 10.1.0 but its not working in 10.2.0
The SELECT query works and retrieves data from an Oracle 10.1.0.2.0 database server, but the same query does not work on a 10.2.0.1.0 database server; it throws "ORA-00942: table or view does not exist".
The schema tables and related details are the same on the 10.2.0.1.0 server, so I don't think a table is missing from that schema.
Note: the query is about 480 lines long.
I have validated everything and it all looks fine; I don't know why the query does not execute on the other version.
I am helpless in this situation. Has anybody faced this issue?
Thanks in advance.
"Validated" means all the tables and columns are verified; I am just running it at the SQL prompt.
Say, for example:
SQL> select * from table1;
One thing I observed while executing the query: it showed the error at one location in the SELECT, i.e. at a particular word. After I combined some three lines of the huge SELECT into a single line, I got the error at a different location, i.e. a different word.
My question is: how can the same query execute on Oracle 10g Release 1, while the same dump (exported from Release 1), imported into Oracle 10g Release 2, does not execute and shows "table or view does not exist"? -
hi friends,
I have a view called "risk_efforts" with fields user_id, user_name, wknd_dt, week_day, prod_efforts, unprod_efforts.
Name Type
ROW_ID NUMBER
USER_ID VARCHAR2(14)
USER_NAME VARCHAR2(50)
WKND_DT VARCHAR2(8)
WEEK_DAY VARCHAR2(250)
PROD_EFFORTS NUMBER
UNPROD_EFFORTS NUMBER
data is like this:
when there is some data in prod_efforts, unprod_efforts will be null
when there is some data in unprod_efforts, prod_efforts will be null
for example:
USER_ID USER_NAME WKND_DT WEEK_DAY PROD_EFFORTS UNPROD_EFFORTS
G666999 GTest 20100403 TUE null 3
G666999 GTest 20100403 TUE 14 null
now i want to combine these 2 rows into 1 row i.e o/p should be like this
USER_ID USER_NAME WKND_DT WEEK_DAY PROD_EFFORTS UNPROD_EFFORTS
G666999 GTest 20100403 TUE 14 3
I've tried all combinations but couldn't get the query. Please help me with the exact SQL SELECT query.
thanks,
Girish
Welcome to the forum.
First read this:
Urgency in online postings
Secondly, it's always helpful to provide the following:
1. Oracle version (SELECT * FROM V$VERSION)
2. Sample data in the form of CREATE / INSERT statements.
3. Expected output
4. Explanation of expected output (A.K.A. "business logic")
5. Use code tags for #2 and #3. See the FAQ (link on the top right side) for details.
You have provided #3 and #4. However with no usable form of sample data forum members will often not respond as quickly as they could if you provided #2.
I'm just hazarding a guess here, but what about this (ROW_ID is left out of the grouping, since it differs between the two rows you want to merge, and USER_NAME is added to match your expected output):

SELECT USER_ID
, USER_NAME
, WKND_DT
, WEEK_DAY
, MAX(PROD_EFFORTS) AS PROD_EFFORTS
, MAX(UNPROD_EFFORTS) AS UNPROD_EFFORTS
FROM RISK_EFFORTS
GROUP BY USER_ID
, USER_NAME
, WKND_DT
, WEEK_DAY -
How to create a Type Object with Dynamic select query columns in a Function
Hi Every One,
I'm trying to figure out how to write a pipelined function that executes a dynamic select query and constructs a Type Object in order to assign it to the pipe row.
I have tried this (columns must be defined before EXECUTE, and COLUMN_VALUE only returns data after FETCH_ROWS):
SELECT a.DB_QUERY INTO actual_query FROM mytable a WHERE a.country_code = 'US';
c := DBMS_SQL.OPEN_CURSOR;
DBMS_SQL.PARSE(c, actual_query, DBMS_SQL.NATIVE);
DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, rec_tab);
FOR j IN 1..col_cnt LOOP
  DBMS_SQL.DEFINE_COLUMN(c, j, v_val, 2000);
END LOOP;
l_status := DBMS_SQL.EXECUTE(c);
WHILE DBMS_SQL.FETCH_ROWS(c) > 0 LOOP
  FOR j IN 1..col_cnt LOOP
    DBMS_SQL.COLUMN_VALUE(c, j, v_val);
  END LOOP;
END LOOP;
But got stuck, how to iterate the values and assign to a Type Object from the cursor. Can any one guide me how to do the process.
Thanks,
mallikj2
Hi Justin,
First of all, thanks for your reply. Coming to my requirement: I need to report the list of items in the dynamic select statement I am getting from the DB. The number of columns in the select statement may vary; for different countries the column count is different (for the US it is 15, for the UK it may be 10, and so on), and some column values may be combinations or calculations of other table columns (the select query has more than one table in the FROM clause).
In order to execute the dynamic select statement and return the result i choose to write a function which will parse the cursor for dynamic query and then iterate the values and construct a Type Object and append it to the pipe row.
I am relatively new to this sort of thing; suggestions to make it simpler (instead of the function I had in mind) are welcome, and a sample illustrating the suggested procedure would be appreciated.
Thanks in Advance,
mallikj2. -
SELECT QUERY BASED ON SECONDARY INDEX
Hi all,
Can anyone tell me how to write a SELECT query based on a secondary index, and in what way it improves performance?
I know that when creating a secondary index I need to give an index number - it can be any number, right?
Do I have to list all primary key fields first and then the field for which I am creating the secondary index?
Let's say I have 2 primary key fields and I want to create a secondary index for 2 fields - do I need to create a separate secondary index for each of those fields, or is one enough?
Please let me know if I am wrong.
Hi,
If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clauses, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
You create secondary indexes using the ABAP Dictionary. There you can create its columns and define it as UNIQUE. However, you should not create secondary indexes to cover all possible combinations of fields.
Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
<b>What to Keep in Mind for Secondary Indexes:</b>
http://help.sap.com/saphelp_nw04s/helpdata/en/cf/21eb2d446011d189700000e8322d00/content.htm
http://www.sap-img.com/abap/quick-note-on-design-of-secondary-database-indexes-and-logical-databases.htm
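To make the mechanics concrete, here is a small hypothetical sketch (the table ZORDERS, index name Z01 and field STATUS are made up for illustration). Nothing about the index appears in the SELECT itself; the database optimizer chooses it when the WHERE clause restricts the indexed field:

```abap
" Hypothetical table ZORDERS with secondary index "Z01" on STATUS.
" The index is never named in the code - the optimizer picks it
" because the WHERE clause restricts the indexed field.
DATA lt_orders TYPE STANDARD TABLE OF zorders.

SELECT * FROM zorders
  INTO TABLE lt_orders
  WHERE status = 'OPEN'.   " can use index Z01 instead of a full scan
```

On the question of one index versus two: if the two fields are always queried together, one combined index on both fields is the right choice; two separate single-field indexes only help if the fields are also queried independently.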
Regards
Sudheer -
SELECT query takes too much time! Why?
Please find my SELECT query below:
select w~mandt
       w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
       w~kwmeng w~vrkme w~matwa w~charg w~pstyv
       w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
       w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
       w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
       w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
       w~bedae w~cuobj w~mtvfp
       x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
       x~edatu
       x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
       x~ezeit
  into table t_vbap
  from vbap as w
  inner join vbep as x on x~vbeln = w~vbeln and
                          x~posnr = w~posnr and
                          x~mandt = w~mandt
  where ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) and
        ( ( w~erdat > pre_dat and w~erdat < p_syndt ) or
          ( w~erdat = p_syndt and w~erzet <= p_syntm ) ) and
        w~matnr in s_matnr and
        w~pstyv in s_itmcat and
        w~lfrel in s_lfrel and
        w~abgru = ' ' and
        w~kwmeng > 0 and
        w~mtvfp in w_mtvfp and
        x~ettyp in w_ettyp and
        x~bdart in s_req_tp and
        x~plart in s_pln_tp and
        x~etart in s_etart and
        x~abart in s_abart and
        ( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
The problem: It takes too much time while executing this statement.
Could anybody change this statement and help me out to reduce the DB Access time?
Thx
Ways of Performance Tuning
1. Selection Criteria
2. Select Statements
Select Queries
SQL Interface
Aggregate Functions
For all Entries
Select Over more than one internal table
Selection Criteria
1. Restrict the data to the selection criteria itself, rather than filtering it out using the ABAP code using CHECK statement.
2. Select with selection list.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH' AND
        CONNID = '0400'.
Select Statements Select Queries
1. Avoid nested selects
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
  FROM EKKO AS P INNER JOIN EKAN AS F
  ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
2. Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
  WHERE CARRID = 'LH' AND
        CONNID = '0400'.
3. When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
4. For testing existence, use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit.
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
5. Use Select Single if all primary key fields are supplied in the Where condition .
If all primary key fields are supplied in the Where conditions you can even use Select Single.
Select Single requires one communication with the database system, whereas Select-Endselect needs two.
Select Statements SQL Interface
1. Use column updates instead of single-row updates
to update your database tables.
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
2. For all frequently used Select statements, try to use an index.
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
3. Using buffered tables improves the performance considerably.
Bypassing the buffer increases the network load considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
Select Statements Aggregate Functions
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
Check zflight-fligh > maxno.
Maxno = zflight-fligh.
Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
Select Statements For All Entries
The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The pluses:
- copes with a large amount of data
- allows mixing processing and reading of data
- fast internal reprocessing of data
- fast
The minuses:
- difficult to program/understand
- memory could be critical (use FREE or PACKAGE SIZE)
Points that must be considered for FOR ALL ENTRIES:
- check that data is present in the driver table
- sort the driver table
- remove duplicates from the driver table
Consider the following piece of extract
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above mentioned can be more optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
Select Statements Select Over more than one Internal table
1. It's better to use a view instead of nested SELECT statements.
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be more optimized by reading all the data from the view DD01V instead:
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT
2. To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key are available in WHERE clause for the tables that are joined. If the primary keys are not provided in join the Joining of tables itself takes time.
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
  FROM EKKO AS P INNER JOIN EKAN AS F
  ON P~EBELN = F~EBELN.
3. Instead of using nested Select loops it is often better to use subqueries.
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
IS MUCH FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
IS FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
4. A binary search using a secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
6. Modifying selected components using MODIFY itab TRANSPORTING f1 f2.. accelerates the task of updating a line of an internal table.
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
Modifying selected components only makes the program faster as compared to Modifying all lines completely.
e.g,
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
8. If collect semantics is required, it is always better to use to COLLECT rather than READ BINARY and then ADD.
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent
of the number of entries (i.e. O(1)) .
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copying internal tables using ITAB2[] = ITAB1[] is much faster than LOOP-APPEND-ENDLOOP.
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
SORT ITAB BY K. makes the program runs faster as compared to SORT ITAB.
Internal Tables contd
Hashed and Sorted tables
1. For single read access hashed tables are more optimized as compared to sorted tables.
2. For partial sequential access sorted tables are more optimized as compared to hashed tables
Hashed And Sorted Tables
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access as compared to the following same code for sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
Point # 2
Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP. -
Can any one send select query for this?
Hi,
Can anyone please send a SELECT query for the following requirement? Please send it as early as possible.
Loop through the I_BSID internal table to fill records in I_OUTPUT. Combine data from I_BSID, I_KNKK, I_KNKK_KNKLI, I_KNA1 and I_KNVV into I_OUTPUT based on the linking conditions. Field / Description / Source:
I_OUTPUT-BUKRS Company code I_BSID-BUKRS
I_OUTPUT-KUNNR Customer number I_BSID-KUNNR
I_OUTPUT-NAME1 Customer Name I_KNA1-NAME1
I_OUTPUT-KNKLI Credit account I_KNKK-KNKLI
I_OUTPUT-KDGRP Customer Group I_KNKK-KDGRP
I_OUTPUT-KLIMK Credit Limit I_KNKK_KNKLI-KLIMK
I_OUTPUT-KVGR1 Business Unit I_KNVV-KVGR1
I_OUTPUT-REBZG Invoice Number I_BSID-REBZG
I_OUTPUT-BLDAT Invoice Date I_BSID-BLDAT
I_OUTPUT-WAERS Document Currency I_BSID-WAERS
I_OUTPUT-DUE_DATE Due Date Based on the calculation below
Get the payment term days by combining I_BSID and I_T052 based on the linking conditions mentioned above. Note:
a) Baseline date: if the baseline date I_BSID-ZFBDT is blank, use the document date.
b) Payment term days: if I_BSID-ZBD3T is not blank, take this as the payment term days for the due date calculation.
If I_BSID-ZBD3T is blank, get the payment term days from I_T052 based on I_BSID-ZTERM. If there is more than one record in I_T052 for the given payment term, get the day part from the baseline date and select the first record where the day limit I_T052-ZTAGG is greater than the day part. If I_BSID-ZBD3T is blank and I_BSID-ZTERM is also blank, take Y000 (due immediately) as the payment term and proceed with the above logic. Set the payment term field blank while printing.
Calculate the due date: for debits, due date = baseline date + payment term days (not discount days); for credits, due date = baseline date. Then move the amount I_BSID-DMBTR to the respective bucket (not yet due, current due, past due 1-30, past due 31-60, etc.) based on the due date.
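A rough ABAP sketch of the due-date part of that logic (field names are from the post; the work areas ls_bsid and ls_t052 are hypothetical, and the I_T052 day-limit selection is left out):

```abap
" Hypothetical sketch of the due-date determination described above.
" ls_bsid / ls_t052 are assumed work areas of I_BSID / I_T052.
DATA: lv_base TYPE sy-datum,   " baseline date
      lv_days TYPE i,          " payment term days
      lv_due  TYPE sy-datum.

" a) baseline date: fall back to the document date if blank
IF ls_bsid-zfbdt IS INITIAL.
  lv_base = ls_bsid-bldat.
ELSE.
  lv_base = ls_bsid-zfbdt.
ENDIF.

" b) payment term days
IF ls_bsid-zbd3t IS NOT INITIAL.
  lv_days = ls_bsid-zbd3t.
ELSE.
  " read I_T052 for ls_bsid-zterm (or 'Y000' if blank) - not shown
  lv_days = ls_t052-zbd3t.
ENDIF.

" debits: baseline + days; credits: the baseline date itself
IF ls_bsid-shkzg = 'S'.        " 'S' = debit, 'H' = credit
  lv_due = lv_base + lv_days.
ELSE.
  lv_due = lv_base.
ENDIF.
```

The bucket assignment then compares lv_due against sy-datum and moves I_BSID-DMBTR into the matching aging column.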
Thanks&Regards,
praveen kumar.A
Hi,
To get Open Items you can use Function module:
data:i_items TYPE STANDARD TABLE OF rfpos.
CALL FUNCTION 'CUSTOMER_OPEN_ITEMS'
EXPORTING
bukrs = p_bukrs
kunnr = wa_customer-kunnr
TABLES
t_postab = i_items
EXCEPTIONS
no_open_items = 1
OTHERS = 2.
Table I_items will have all the open items for that Customer in the given company code.
Well for Clear Items: Try
GET_CLEARED_ITEMS or FMITPOFM_CLEARED_ITEMS_GET.
Hope it helps.
Manish -
Issue with select query for secondary index
Hi all,
I have created a secondary index A on mara table with fields Mandt and Packaging Material Type VHART.
Now i am trying to write a report
Tables : mara.
data : begin of itab occurs 0.
include structure mara.
data : end of itab.
select * from mara into table itab
CLIENT SPECIFIED where
MANDT = SY-MANDT and
VHART = 'WER'.
I'm getting an error
Unable to interpret "CLIENT". Possible causes of error: Incorrect spelling or comma error.
if I change my select query to
select * from mara into table itab
where
MANDT = SY-MANDT and
VHART = 'WER'.
I'm getting an error
Without the addition "CLIENT SPECIFIED", you cannot specify the client field "MANDT" in the WHERE condition.
Let me know if I am wrong; we are at 4.6c.
Thanks
Like I already said, even if you have added the mandt field in the secondary index, there is no need to use it in the select statement.
Let me elaborate on my reply before. If you have created a UNIQUE index, which I don't think you have, then you should include CLIENT in the index. A unique index for a client-dependent table must contain the client field.
Additional info:
The accessing speed does not depend on whether or not an index is defined as a unique index. A unique index is simply a means of defining that certain field combinations of data records in a table are unique.
Even if you have defined a secondary index, this does not automatically mean, that this index is used. This also depends on the database optimizer. The optimizer will determine which index is best and use it. So before transporting this index, you should make sure that the index is used. How to check this, have a look at the link:
[check if index is used|http://help.sap.com/saphelp_nw70/helpdata/EN/cf/21eb3a446011d189700000e8322d00/content.htm]
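For completeness, if client-specified access were really wanted, the CLIENT SPECIFIED addition has to follow the table name directly; otherwise simply leave MANDT out of the WHERE clause. A sketch of both variants:

```abap
* Variant 1: explicit client handling - addition directly after the table
SELECT * FROM mara CLIENT SPECIFIED
  INTO TABLE itab
  WHERE mandt = sy-mandt
    AND vhart = 'WER'.

* Variant 2 (recommended): automatic client handling - no MANDT at all
SELECT * FROM mara
  INTO TABLE itab
  WHERE vhart = 'WER'.
```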
Edited by: Micky Oestreich on May 13, 2008 10:09 PM -
How an INDEX of a Table got selected when a SELECT query hits the Database
Hi All,
How does an index get selected when a SELECT query hits the database table?
My SELECT query is as shown below.
SELECT ebeln ebelp matnr FROM ekpo
APPENDING TABLE i_ebeln
FOR ALL ENTRIES IN i_mara_01
WHERE werks = p_werks AND
matnr = i_mara_01-matnr AND
bstyp EQ 'F' AND
loekz IN (' ' , 'S') AND
elikz = ' ' AND
ebeln IN s_ebeln AND
pstyp IN ('0' , '3') AND
knttp = ' ' AND
ko_prctr IN r_prctr AND
retpo = ''.
Should the fields in the INDEX of the table EKPO be in the same sequence as in the WHERE clause?
Regards,
Viji
Hi,
You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
Database Indexes
Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE. If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
reference : help.sap.com
thanx. -
Regarding to perform in select query
could anyone tell me whether the select query in this piece of code would affect the performance of the program?
DATA: BEGIN OF OUTREC,
BANKS LIKE BNKA-BANKS,
BANKL LIKE BNKA-BANKL,
BANKA LIKE BNKA-BANKA,
PROVZ LIKE BNKA-PROVZ, "Region (State, Province, County)
BRNCH LIKE BNKA-BRNCH,
STRAS LIKE BNKA-STRAS,
ORT01 LIKE BNKA-ORT01,
SWIFT LIKE BNKA-SWIFT,
END OF OUTREC.
OPEN DATASET P_OUTPUT FOR OUTPUT IN TEXT MODE.
IF SY-SUBRC NE 0. EXIT. ENDIF.
SELECT * FROM BNKA
WHERE BANKS EQ P_BANKS
AND LOEVM NE 'X'
AND XPGRO NE 'X'
ORDER BY BANKS BANKL.
PERFORM TRANSFER_DATA.
ENDSELECT.
CLOSE DATASET P_OUTPUT.
*& Transfer the data to the output file
FORM TRANSFER_DATA.
OUTREC-BANKS = BNKA-BANKS.
OUTREC-BANKL = BNKA-BANKL.
OUTREC-BANKA = BNKA-BANKA.
OUTREC-PROVZ = BNKA-PROVZ.
OUTREC-BRNCH = BNKA-BRNCH.
OUTREC-STRAS = BNKA-STRAS.
OUTREC-ORT01 = BNKA-ORT01.
OUTREC-SWIFT = BNKA-SWIFT.
TRANSFER OUTREC TO P_OUTPUT.
ENDFORM. " READ_IN_DATA
Hi
Ways of Performance Tuning
1. Selection Criteria
2. Select Statements
Select Queries
SQL Interface
Aggregate Functions
For all Entries
Select Over more than one Internal table
Selection Criteria
1. Restrict the data to the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement.
2. Select with selection list.
Points # 1/2
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
      CONNID = '0400'.
Select Statements Select Queries
1. Avoid nested selects
2. Select all the records in a single shot using the into table clause of the select statement rather than using Append statements.
3. When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
4. For testing existence, use a Select .. Up To 1 Rows statement instead of a Select-Endselect loop with an Exit.
5. Use Select Single if all primary key fields are supplied in the Where condition.
Point # 1
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
Point # 2
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
      CONNID = '0400'.
Point # 3
To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields . In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
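As an illustration, suppose a secondary index on BKPF over BUKRS and BLDAT (a hypothetical index, used here only for the example): listing the WHERE fields in that same order allows the optimizer to use an index range scan:

```abap
* WHERE fields follow the (hypothetical) index order BUKRS, BLDAT
SELECT belnr gjahr FROM bkpf
  INTO TABLE lt_bkpf
  WHERE bukrs = p_bukrs        "1st index field
    AND bldat IN s_bldat.      "2nd index field
```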
Point # 4
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
Point # 5
If all primary key fields are supplied in the Where condition you can even use Select Single.
Select Single requires one communication with the database system, whereas Select-Endselect needs two.
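For example, SFLIGHT has the key fields CARRID, CONNID and FLDATE (besides the client), so supplying all of them permits Select Single (the concrete values here are illustrative):

```abap
SELECT SINGLE * FROM sflight INTO sflight_wa
  WHERE carrid = 'LH'
    AND connid = '0400'
    AND fldate = '20240101'.   "full primary key supplied
IF sy-subrc = 0.
  WRITE: / sflight_wa-seatsocc.
ENDIF.
```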
Select Statements contd.. SQL Interface
1. Use column updates instead of single-row updates
to update your database tables.
2. For all frequently used Select statements, try to use an index.
3. Using buffered tables improves the performance considerably.
Point # 1
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
Point # 2
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
Point # 3
Bypassing the buffer increases network traffic considerably:
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
Select Statements contd Aggregate Functions
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
Check zflight-fligh > maxno.
Maxno = zflight-fligh.
Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
Select Statements contd For All Entries
The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
Points that must be considered when using FOR ALL ENTRIES:
Check that data is present in the driver table
Sorting the driver table
Removing duplicates from the driver table
Consider the following piece of extract
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above mentioned can be more optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
Select Statements contd Select Over more than one Internal table
1. It is better to use a view instead of nested Select statements.
2. To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key are available in WHERE clause for the tables that are joined. If the primary keys are not provided in join the Joining of tables itself takes time.
3. Instead of using nested Select loops it is often better to use subqueries.
Point # 1
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be more optimized by extracting all the data from view DD01V_WA
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
Point # 2
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Point # 3
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
Internal Tables
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use binary search instead of linear search. But don't forget to sort your internal table first.
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
4. A binary search using secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
6. Modifying selected components using MODIFY itab TRANSPORTING f1 f2.. accelerates the task of updating a line of an internal table.
Point # 2
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
IS MUCH FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
Point # 3
READ TABLE ITAB INTO WA WITH KEY K = 'X'. IS FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
Point # 5
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
Point # 6
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
8. If collect semantics is required, it is always better to use to COLLECT rather than READ BINARY and then ADD.
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
Point # 7
Modifying selected components only makes the program faster as compared to Modifying all lines completely.
e.g,
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
Point # 8
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent
of the number of entries (i.e. O(1)) .
Point # 9
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
Point # 10
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
Point # 11
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copying internal tables using ITAB2[] = ITAB1[] is much faster as compared to LOOP-APPEND-ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
Point # 12
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
Point # 13
SORT ITAB BY K. makes the program runs faster as compared to SORT ITAB.
Internal Tables contd
Hashed and Sorted tables
1. For single read access hashed tables are more optimized as compared to sorted tables.
2. For partial sequential access sorted tables are more optimized as compared to hashed tables
Hashed And Sorted Tables
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access as compared to the following same code for sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
Point # 2
Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP. -
How to do outer join select query for an APEX report
Hello everyone,
I am Ann.
I have one select statement that calculates the statistics for one month (October 2012 in this example):
select ph.phase_number
, sum ( (case
WHEN ph.date_finished IS NULL OR ph.date_finished > last_day(TO_DATE('Oct 2012','MON YYYY'))
THEN last_day(TO_DATE('Oct 2012','MON YYYY'))
ELSE ph.date_finished
END )
- ph.date_started + 1) / count(def.def_id) as avg_days
from phase_membership ph
inner join court_engagement ce on ph.mpm_eng_id = ce.engagement_id
inner join defendant def on ce.defendant_id = def.def_id
where def.active = 1
and ph.date_started <= last_day(TO_DATE('Oct 2012','MON YYYY'))
and ph.active = 1
and UPPER(ce.court_name) LIKE '%'
group by rollup(phase_number)
Result is as below
Phase_Number AVG_DAYS
Phase One 8.6666666666666667
Phase Two 14.6
Phase Three 12
11.4615365
I have another select that mainly lists the months between two date values.
select to_char(which_month, 'MON YYYY') as display_month
from (
select add_months(to_date('Aug 2012','MON YYYY'), rownum-1) which_month
from all_objects
where
rownum <= months_between(to_date('Oct 2012','MON YYYY'), add_months(to_date('Aug 2012','MON YYYY'), -1))
order by which_month )
Query result is as below
DISPLAY_MONTH
AUG 2012
SEP 2012
OCT 2012
Is there any way that I can join these two select statements above to generate a result like:
Month Phase Number Avg days
Aug 2012 Phase One 8.666
Sep 2012 Phase One 7.66
Oct 2012 Phase One 5.66
Aug 2012 Phase Two 8.666
Sep 2012 Phase Two 7.66
Oct 2012 Phase Two 5.66
Aug 2012 Phase Three 8.666
Sep 2012 Phase Three 7.66
Oct 2012 Phase Three 5.66
Or
Month Phase Number Avg days
Aug 2012 Phase One 8.666
Aug 2012 Phase Two 7.66
Aug 2012 Phase Three 5.66
Sep 2012 Phase One 8.666
Sep 2012 Phase Two 7.66
Sep 2012 Phase Three 5.66
Oct 2012 Phase One 8.666
Oct 2012 Phase Two 7.66
Oct 2012 Phase Three 5.66
And it can be order by either Phase Number or Month.
My colleague suggested I should use a left outer join, but after trying so many ways, I am still stuck.
One of the selects I tried is:
select a.display_month,b.* from (
select to_char(which_month, 'MON YYYY') as display_month
from (
select add_months(to_date('Aug 2012','MON YYYY'), rownum-1) which_month
from all_objects
where
rownum <= months_between(to_date('Oct 2012','MON YYYY'), add_months(to_date('Aug 2012','MON YYYY'), -1))
order by which_month )) a left outer join
( select to_char(ph.date_finished,'MON YYYY') as join_month, ph.phase_number
, sum ( (case
WHEN ph.date_finished IS NULL OR ph.date_finished > last_day(TO_DATE(a.display_month,'MON YYYY'))
THEN last_day(TO_DATE(a.display_month,'MON YYYY'))
ELSE ph.date_finished
END )
- ph.date_started + 1) / count(def.def_id) as avg_days
from phase_membership ph
inner join court_engagement ce on ph.mpm_eng_id = ce.engagement_id
inner join defendant def on ce.defendant_id = def.def_id
where def.active = 1
and ph.date_started <= last_day(TO_DATE(a.display_month,'MON YYYY'))
and ph.active = 1
and UPPER(ce.court_name) LIKE '%'
group by to_char(ph.date_finished,'MON YYYY') , rollup(phase_number)) b
on a.display_month = b.join_month
but then I get an error
SQL Error: ORA-00904: "A"."DISPLAY_MONTH": invalid identifier
I need to display a report on APEX with option for people to download at least CSV format.
I already have 1 interactive report on the page, so I don't think I can add another interactive report without using the iframe trick.
If any of you have any ideas, please help.
Thanks a lot.
Ann
First of all, a huge thanks for following this, Frank.
I have just started working here, I think the Oracle version is 11g, but not sure.
To run Oracle APEX version 4, I think they must have at least 10g R2.
This report is a bit challenging for me. I have never worked with PARTITION before.
About the select query you suggested: I ran it and it seems to work fine, but if I try the following,
it returns the error ORA-01843: not a valid month
DEFINE startmonth = "Aug 2012";
DEFINE endmonth = "Oct 2012";
WITH all_months AS
(
select add_months(to_date('&startmonth','MON YYYY'), rownum-1) AS which_month
, add_months(to_date('&startmonth','MON YYYY'), rownum ) AS next_month
from all_objects
where
rownum <= months_between(to_date('&endmonth','MON YYYY'), add_months(to_date('&startmonth','MON YYYY'), -1))
)
select TO_CHAR (am.which_month, 'Mon YYYY') AS month
, ph.phase_number
, sum ( (case
WHEN ph.date_finished IS NULL OR ph.date_finished > last_day(TO_DATE(am.which_month,'MON YYYY'))
THEN last_day(TO_DATE(am.which_month,'MON YYYY'))
ELSE ph.date_finished
END )
- ph.date_started + 1) / count(def.def_id) as avg_days
FROM all_months am
LEFT OUTER JOIN phase_membership ph PARTITION BY (ph.phase_number)
ON am.which_month <= ph.date_started
AND am.next_month > ph.date_started
AND ph.date_started <= last_day(TO_DATE(am.which_month,'MON YYYY')) -- May not be needed
AND ph.active = 1
LEFT OUTER join court_engagement ce on ph.mpm_eng_id = ce.engagement_id
and ce.court_name IS NOT NULL -- or something involving LIKE
LEFT OUTER join defendant def on ce.defendant_id = def.def_id
AND def.active = 1
group by rollup(phase_number, am.which_month)
ORDER BY am.which_month
, ph.phase_number
;
Here are shortened versions of the three tables:
A_DEFENDANT, A_ENGAGEMENT, A_PHASE_MEMBERSHIP
CREATE TABLE "A_DEFENDANT" (
"DEF_ID" NUMBER NOT NULL ENABLE,
"FIRST_NAME" VARCHAR2(50 BYTE),
"SURNAME" VARCHAR2(20 BYTE) NOT NULL ENABLE,
"DOB" DATE NOT NULL ENABLE,
"ACTIVE" NUMBER(2,0) DEFAULT 1 NOT NULL ENABLE,
CONSTRAINT "A_DEFENDANT_PK" PRIMARY KEY ("DEF_ID"))
Sample Data
Insert into A_DEFENDANT (DEF_ID,FIRST_NAME,SURNAME,DOB,ACTIVE) values (101,'Joe','Bloggs',to_date('12/12/99','DD/MM/RR'),1);
Insert into A_DEFENDANT (DEF_ID,FIRST_NAME,SURNAME,DOB,ACTIVE) values (102,'John','Smith',to_date('20/05/00','DD/MM/RR'),1);
Insert into A_DEFENDANT (DEF_ID,FIRST_NAME,SURNAME,DOB,ACTIVE) values (103,'Jane','Black',to_date('15/02/98','DD/MM/RR'),1);
Insert into A_DEFENDANT (DEF_ID,FIRST_NAME,SURNAME,DOB,ACTIVE) values (104,'Minnie','Mouse',to_date('13/12/88','DD/MM/RR'),0);
Insert into A_DEFENDANT (DEF_ID,FIRST_NAME,SURNAME,DOB,ACTIVE) values (105,'Daisy','Duck',to_date('05/08/00','DD/MM/RR'),1);
CREATE TABLE "A_ENGAGEMENT" (
"ENGAGEMENT_ID" NUMBER NOT NULL ENABLE,
"COURT_NAME" VARCHAR2(50 BYTE) NOT NULL ENABLE,
"DATE_REFERRED" DATE,
"DETERMINATION_HEARING_DATE" DATE,
"DATE_JOINED_COURT" DATE,
"DATE_TREATMENT_STARTED" DATE,
"DATE_TERMINATED" DATE,
"TERMINATION_TYPE" VARCHAR2(50 BYTE),
"ACTIVE" NUMBER(2,0) DEFAULT 1 NOT NULL ENABLE,
"DEFENDANT_ID" NUMBER,
CONSTRAINT "A_ENGAGEMENT_PK" PRIMARY KEY ("ENGAGEMENT_ID"))
Insert into A_ENGAGEMENT (ENGAGEMENT_ID,COURT_NAME,DATE_REFERRED,DETERMINATION_HEARING_DATE,DATE_JOINED_COURT,DATE_TREATMENT_STARTED,DATE_TERMINATED,TERMINATION_TYPE,ACTIVE,DEFENDANT_ID) values (1,'AA',to_date('12/08/12','DD/MM/RR'),null,to_date('12/08/12','DD/MM/RR'),null,null,null,1,101);
Insert into A_ENGAGEMENT (ENGAGEMENT_ID,COURT_NAME,DATE_REFERRED,DETERMINATION_HEARING_DATE,DATE_JOINED_COURT,DATE_TREATMENT_STARTED,DATE_TERMINATED,TERMINATION_TYPE,ACTIVE,DEFENDANT_ID) values (2,'BB',to_date('01/09/12','DD/MM/RR'),null,to_date('02/09/12','DD/MM/RR'),null,null,null,1,102);
Insert into A_ENGAGEMENT (ENGAGEMENT_ID,COURT_NAME,DATE_REFERRED,DETERMINATION_HEARING_DATE,DATE_JOINED_COURT,DATE_TREATMENT_STARTED,DATE_TERMINATED,TERMINATION_TYPE,ACTIVE,DEFENDANT_ID) values (3,'AA',to_date('02/09/12','DD/MM/RR'),null,to_date('15/09/12','DD/MM/RR'),null,null,null,1,103);
Insert into A_ENGAGEMENT (ENGAGEMENT_ID,COURT_NAME,DATE_REFERRED,DETERMINATION_HEARING_DATE,DATE_JOINED_COURT,DATE_TREATMENT_STARTED,DATE_TERMINATED,TERMINATION_TYPE,ACTIVE,DEFENDANT_ID) values (4,'BB',to_date('01/10/12','DD/MM/RR'),null,to_date('02/10/12','DD/MM/RR'),null,null,null,1,105);
CREATE TABLE "A_PHASE_MEMBERSHIP" (
"MPM_ID" NUMBER NOT NULL ENABLE,
"MPM_ENG_ID" NUMBER NOT NULL ENABLE,
"PHASE_NUMBER" VARCHAR2(50 BYTE),
"DATE_STARTED" DATE NOT NULL ENABLE,
"DATE_FINISHED" DATE,
"NOTES" VARCHAR2(2000 BYTE),
"ACTIVE" NUMBER(2,0) DEFAULT 1 NOT NULL ENABLE,
CONSTRAINT "A_PHASE_MEMBERSHIP_PK" PRIMARY KEY ("MPM_ID"))
Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (1,1,'PHASE ONE',to_date('15/09/12','DD/MM/RR'),to_date('20/09/12','DD/MM/RR'),null,1);
Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (2,1,'PHASE TWO',to_date('21/09/12','DD/MM/RR'),to_date('29/09/12','DD/MM/RR'),null,1);
Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (3,2,'PHASE ONE',to_date('12/09/12','DD/MM/RR'),null,null,1);
Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (4,3,'PHASE ONE',to_date('20/09/12','DD/MM/RR'),to_date('01/10/12','DD/MM/RR'),null,1);
Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (5,3,'PHASE TWO',to_date('02/10/12','DD/MM/RR'),to_date('15/10/12','DD/MM/RR'),null,1);
Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (6,4,'PHASE ONE',to_date('03/10/12','DD/MM/RR'),to_date('10/10/12','DD/MM/RR'),null,1);
Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (7,3,'PHASE THREE',to_date('17/10/12','DD/MM/RR'),null,null,0);
Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (8,1,'PHASE THREE',to_date('30/09/12','DD/MM/RR'),to_date('16/10/12','DD/MM/RR'),null,1);
The requirements are:
The user must be able to request the extract for one or more calendar months, e.g.
May 2013
May 2013 – Sep 2013.
The file must contain a separate row for each calendar month in the requested range. Each row must contain the statistics computed for that calendar month.
The file must also include a row of totals.
The user must be able to request the extract for either Waitakere or Auckland or Consolidated (both courts’ statistics accumulated).
Then the part that I am stuck is
For each monitoring phase:
Phase name (e.g. “Phase One”)
Avg_time_in_phase_all_particip
for each phase name,
Add up days in each “phase name” Monitoring Phase, calculated as:
If Monitoring Phase.Date Finished is NULL or > month end date:
(Month end date - Monitoring Phase.Date Started + 1)
Otherwise (phase is complete):
(Monitoring Phase.Date Finished - Monitoring Phase.Date Started + 1)
Divide by the number of all participants who have engaged in “phase name”.
This is the words of the Business Analyst,
I try to do as required but still struggle to identify end_month for the above formula to display for the range of months.
Of course, I could write two nested cursors: the first runs over the list of months, then for each month runs the parameterised report.
But I prefer if possible just use SQL statements, or at least a PL/SQL but return a query.
With this way, I can create an APEX report, and use their CSV Extract function.
Yes, you are right, court_name is one of the selection parameters.
And the statistics are not exactly for one month. It is kind of trying to identify all phases that run through the specified month (even if phase.date_started is before the month start).
This is the reason why I put the condition AND ph.date_started <= last_day(TO_DATE('Oct 2012','MON YYYY')) (otherwise I get negative avg_days)
User can choose either one court "AA" or "BB" or combined which is all figures.
Sorry for bombarding you a lot of information.
Thanks a lot, again.
Basic query regarding work-area and select query
hi
dear sdn members,
thanks to all for solving all my queries up till now
i am stuck in a problem need help
1) Why, basically, is a work area used? What is its sole purpose?
2) What are the different types of select query? Only coding examples, please.
note: no links pls
regards,
virus
Hi,
Work Area
Description for a data object that is particularly useful when working with internal tables or database tables as a source for changing operations or a target for reading operations.
WORKAREA is a structure that can hold only one record at a time. It is a collection of fields. We use a work area because we cannot process a table row directly; to interact with a table we need a work area. When a SELECT statement is executed on a database table, each record read is placed into the work area (which has the same structure as a row of the table), and from there it is either transferred to the body of the internal table or displayed directly.
Each row in a table is a record and each column is a field.
While adding or retrieving records to / from internal table we have to keep the record temporarily.
The area where this record is kept is called as work area for the internal table. The area must have the same structure as that of internal table. An internal table consists of a body and an optional header line.
Header line is a implicit work area for the internal table. It depends on how the internal table is declared that the itab will have the header line or not.
e.g.
data: begin of itab occurs 10,
ab type c,
cd type i,
end of itab. " this table will have the header line.
data: wa_itab like itab. " explicit work area for itab
data: itab1 like itab occurs 10. " table is without header line.
The header line is a field string with the same structure as a row of the body, but it can only hold a single row.
It is a buffer used to hold each record before it is added or each record as it is retrieved from the internal table. It is the default work area for the internal table.
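A short sketch contrasting the two declarations above (continuing the example: itab has a header line, itab1 needs the explicit work area wa_itab):

```abap
* itab has a header line: statements use it implicitly
itab-ab = 'X'.
itab-cd = 1.
APPEND itab.                 "header line -> body of itab

* itab1 has no header line: an explicit work area is required
wa_itab-ab = 'Y'.
wa_itab-cd = 2.
APPEND wa_itab TO itab1.     "work area -> body of itab1
```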
With header line
SELECT.
Put the cursor on that word and press F1. You can see the whole documentation for select statements.
select statements :
SELECT result
FROM source
INTO|APPENDING target
[[FOR ALL ENTRIES IN itab] WHERE sql_cond]
Effect
SELECT is an Open-SQL-statement for reading data from one or several database tables into data objects.
The select statement reads a result set (whose structure is determined in result ) from the database tables specified in source, and assigns the data from the result set to the data objects specified in target. You can restrict the result set using the WHERE addition. The addition GROUP BY compresses several database rows into a single row of the result set. The addition HAVING restricts the compressed rows. The addition ORDER BY sorts the result set.
The data objects specified in target must match the result set result. This means that the result set is either assigned to the data objects in one step, or by row, or by packets of rows. In the second and third case, the SELECT statement opens a loop, which must be closed using ENDSELECT. A database cursor is opened implicitly to process a SELECT loop, and is closed again when the loop is ended. You can end the loop using the statements from the section on leaving loops.
Up to the INTO resp. APPENDING addition, the entries in the SELECTstatement define which data should be read by the database in which form. This requirement is translated in the database interface for the database system´s programming interface and is then passed to the database system. The data are read in packets by the database and are transported to the application server by the database server. On the application server, the data are transferred to the ABAP program´s data objects in accordance with the data specified in the INTO and APPENDING additions.
System Fields
The SELECT statement sets the values of the system fields sy-subrc and sy-dbcnt.
sy-subrc Relevance
0 The SELECT statement sets sy-subrc to 0 for every pass by value to an ABAP data object. The ENDSELECT statement sets sy-subrc to 0 if at least one row was transferred in the SELECT loop.
4 The SELECT statement sets sy-subrc to 4 if the result set is empty, that is, if no data was found in the database.
8 The SELECT statement sets sy-subrc to 8 if the FOR UPDATE addition is used in result, without the primary key being specified fully after WHERE.
After every value that is transferred to an ABAP data object, the SELECT statement sets sy-dbcnt to the number of rows that were transferred. If the result set is empty, sy-dbcnt is set to 0.
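A minimal sketch of evaluating these system fields after a SELECT (the flight-model table SPFLI is assumed here for illustration):

DATA lt_spfli TYPE TABLE OF spfli.

SELECT * FROM spfli INTO TABLE lt_spfli
  WHERE carrid = 'LH'.
IF sy-subrc = 0.
  WRITE: / 'Rows read:', sy-dbcnt. " number of rows transferred
ELSE.
  WRITE: / 'No data found.'.       " sy-subrc = 4, sy-dbcnt = 0
ENDIF.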
Notes
Outside classes, you do not need to specify the target area with INTO or APPENDING if a single database table or a single view is specified statically after FROM, and a table work area dbtab was declared with the TABLES statement for the corresponding database table or view. In this case, the system supplements the SELECT-statement implicitly with the addition INTO dbtab.
Although the WHERE-condition is optional, you should always specify it for performance reasons, and the result set should not be restricted on the application server.
SELECT-loops can be nested. For performance reasons, you should check whether a join or a sub-query would be more effective.
Within a SELECT-loop you cannot execute any statements that lead to a database commit and consequently cause the corresponding database cursor to close.
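To illustrate the note on nested SELECT loops, here is a sketch (the flight-model tables SPFLI and SFLIGHT are assumed) of the nested form and an equivalent join, which is usually more efficient:

DATA: wa_spfli   TYPE spfli,
      wa_sflight TYPE sflight.

" Nested SELECT loops: one database access per outer row.
SELECT * FROM spfli INTO wa_spfli WHERE carrid = 'LH'.
  SELECT * FROM sflight INTO wa_sflight
    WHERE carrid = wa_spfli-carrid
      AND connid = wa_spfli-connid.
    WRITE: / wa_sflight-carrid, wa_sflight-connid, wa_sflight-fldate.
  ENDSELECT.
ENDSELECT.

" Usually more efficient: a single inner join.
SELECT p~carrid p~connid f~fldate
  INTO (wa_sflight-carrid, wa_sflight-connid, wa_sflight-fldate)
  FROM spfli AS p INNER JOIN sflight AS f
    ON p~carrid = f~carrid AND p~connid = f~connid
  WHERE p~carrid = 'LH'.
  WRITE: / wa_sflight-carrid, wa_sflight-connid, wa_sflight-fldate.
ENDSELECT.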
SELECT - result
Syntax
... lines columns ... .
Effect
The data in result defines whether the resulting set consists of multiple rows (table-like structure) or a single row ( flat structure). It specifies the columns to be read and defines their names in the resulting set. Note that column names from the database table can be changed. For single columns, aggregate expressions can be used to specify aggregates. Identical rows in the resulting set can be excluded, and individual rows can be protected from parallel changes by another program.
The data in result consists of data for the rows lines and for the columns columns.
SELECT - lines
Syntax
... { SINGLE }
  | { [DISTINCT] } ... .
Alternatives:
1. ... SINGLE
2. ... [DISTINCT]
Effect
The data in lines specifies that the resulting set has either multiple lines or a single line.
Alternative 1
... SINGLE
Effect
If SINGLE is specified, the resulting set has a single line. If the remaining additions to the SELECT command select more than one line from the database, the first line that is found is entered into the resulting set. The data objects specified after INTO may not be internal tables, and the APPENDING addition may not be used.
An exclusive lock can be set for this line using the FOR UPDATE addition when a single line is being read with SINGLE. The SELECT command is used in this case only if all primary key fields in logical expressions linked by AND are checked to make sure they are the same in the WHERE condition. Otherwise, the resulting set is empty and sy-subrc is set to 8. If the lock causes a deadlock, an exception occurs. If the FOR UPDATE addition is used, the SELECT command circumvents SAP buffering.
Note
When SINGLE is being specified, the lines to be read should be clearly specified in the WHERE condition, for the sake of efficiency. When the data is read from a database table, the system does this by specifying comparison values for the primary key.
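A sketch of a locked single-row read with FOR UPDATE; note that every primary key field (apart from the client) must be compared with '=' and linked with AND, otherwise sy-subrc is set to 8 (SPFLI is assumed here):

DATA wa_spfli TYPE spfli.

SELECT SINGLE FOR UPDATE * FROM spfli
  INTO wa_spfli
  WHERE carrid = 'LH' AND connid = '0400'. " full key specified -> exclusive lock
IF sy-subrc = 0.
  " the row stays locked on the database until the next commit or rollback
ENDIF.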
Alternative 2
Effect
If SINGLE is not specified and if columns does not contain only aggregate expressions, the resulting set has multiple lines. All database lines that are selected by the remaining additions of the SELECT command are included in the resulting list. If the ORDER BY addition is not used, the order of the lines in the resulting list is not defined and, if the same SELECT command is executed multiple times, the order may be different each time. A data object specified after INTO can be an internal table and the APPENDING addition can be used. If no internal table is specified after INTO or APPENDING, the SELECT command triggers a loop that has to be closed using ENDSELECT.
If multiple lines are read without SINGLE, the DISTINCT addition can be used to exclude duplicate lines from the resulting list. If DISTINCT is used, the SELECT command circumvents SAP buffering. DISTINCT cannot be used in the following situations:
If a column specified in columns has the type STRING, RAWSTRING, LCHAR or LRAW
If the system tries to access pool or cluster tables and single columns are specified in columns.
Note
When specifying DISTINCT, note that you have to carry out sort operations in the database system for this.
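A sketch of DISTINCT; each departure city from SPFLI appears only once in the result:

DATA lt_cities TYPE TABLE OF spfli-cityfrom.

SELECT DISTINCT cityfrom FROM spfli
  INTO TABLE lt_cities.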
SELECT - columns
Syntax
... *
  | { {col1|aggregate( col1 )}
      {col2|aggregate( col2 )} ... }
  | (column_syntax) ... .
Alternatives:
1. ... *
2. ... {col1|aggregate( col1 )}
{col2|aggregate( col2 )} ...
3. ... (column_syntax)
Effect
The input in columns determines which columns are used to build the resulting set.
Alternative 1
... *
Effect
If * is specified, the resulting set is built based on all columns in the database tables or views specified after FROM, in the order given there. The columns in the resulting set take on the name and data type from the database tables or views. Only one data object can be specified after INTO.
Note
If multiple database tables are specified after FROM, you cannot prevent multiple columns from getting the same name when you specify *.
Alternative 2
... {col1|aggregate( col1 )}
{col2|aggregate( col2 )} ...
Effect
A list of column labels col1 col2 ... is specified in order to build the resulting list from individual columns. An individual column can be specified directly or as an argument of an aggregate function aggregate. The order in which the column labels are specified is up to you and defines the order of the columns in the resulting list. Only if a column of the type LCHAR or LRAW is listed does the corresponding length field also have to be specified directly before it. An individual column can be specified multiple times.
The addition AS can be used to define an alternative column name a1 a2 ... with a maximum of fourteen characters in the resulting set for every column label col1 col2 .... The system uses the alternative column name in the additions INTO|APPENDING CORRESPONDING FIELDS and ORDER BY.
Column labels
The following column labels are possible:
If only a single database table or a single view is specified after FROM, the column labels in the database table - that is, the names of the components comp1 comp2... - can be specified directly for col1 col2 ... in the structure of the ABAP Dictionary.
If the name of the component occurs in multiple database tables of the FROM addition, but the desired database table or the view dbtab is only specified once after FROM, the names dbtab~comp1 dbtab~comp2 ... have to be specified for col1 col2 .... comp1 comp2 ... are the names of the components in the structure of the ABAP Dictionary.
If the desired database table or view occurs multiple times after FROM, the names tabalias~comp1 tabalias~comp2 ... have to be specified for col1 col2 .... tabalias is the alternative table name of the database table or view defined after FROM, and comp1 comp2 ... are the names of the components in the structure of the ABAP Dictionary.
The data type of a single column in the resulting list is the datatype of the corresponding component in the ABAP Dictionary. The corresponding data object after INTO or APPENDING has to be selected accordingly.
Note
If multiple database tables are specified after FROM, you can use alternative names when specifying single columns to avoid having multiple columns with the same name.
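A sketch of avoiding same-named columns in a join via AS (the flight-model tables SPFLI and SCARR both contain a CARRID column; the field names in wa_res are chosen for illustration):

DATA: BEGIN OF wa_res,
        id       TYPE spfli-carrid,
        carrname TYPE scarr-carrname,
      END OF wa_res.

SELECT p~carrid AS id c~carrname
  INTO CORRESPONDING FIELDS OF wa_res
  FROM spfli AS p INNER JOIN scarr AS c
    ON p~carrid = c~carrid.
  WRITE: / wa_res-id, wa_res-carrname.
ENDSELECT.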
Example
Read specific columns of a single row.
DATA wa TYPE spfli.
SELECT SINGLE carrid connid cityfrom cityto
INTO CORRESPONDING FIELDS OF wa
FROM spfli
WHERE carrid EQ 'LH' AND connid EQ '0400'.
IF sy-subrc EQ 0.
WRITE: / wa-carrid, wa-connid, wa-cityfrom, wa-cityto.
ENDIF.
Alternative 3
... (column_syntax)
Effect
Instead of static data, a data object column_syntax in brackets can be specified, which, when the command is executed, either contains the syntax shown with the static data, or is initial. The data object column_syntax can be a character-type data object or an internal table with a character-type data type. The syntax in column_syntax, like in the ABAP editor, is not case-sensitive. When specifying an internal table, you can distribute the syntax over multiple rows.
If column_syntax is initial when the command is executed, columns is implicitly set to * and all columns are read.
If columns are specified dynamically without the SINGLE addition, the resulting set is always regarded as having multiple rows.
Notes
Before Release 6.10, you could only specify an internal table with a flat character-type row type and a maximum of 72 characters for column_syntax. Also, before Release 6.10, the DISTINCT addition was ignored for dynamic access to pool or cluster tables; since Release 6.10 it causes a catchable exception.
If column_syntax is an internal table with header line, the table body and not the header line is evaluated.
Example
Read out how many flights go to and from a city. The SELECT command is implemented only once in a sub-program. The column data, including aggregate function and the data after GROUP BY, is dynamic. Instead of adding the column data to an internal l_columns table, you could just as easily concatenate it in a character-type l_columns field.
PERFORM my_select USING `CITYFROM`.
ULINE.
PERFORM my_select USING `CITYTO`.
FORM my_select USING l_group TYPE string.
DATA: l_columns TYPE TABLE OF string,
l_container TYPE string,
l_count TYPE i.
APPEND l_group TO l_columns.
APPEND `count( * )` TO l_columns.
SELECT (l_columns)
FROM spfli
INTO (l_container, l_count)
GROUP BY (l_group).
WRITE: / l_count, l_container.
ENDSELECT.
ENDFORM.
SELECT - aggregate
Syntax
... { MAX( col )
| MIN( col )
| AVG( col )
| SUM( col )
| COUNT( DISTINCT col )
| COUNT( * )
| count(*) } ... .
Effect
As many of the specified column labels as you like can be listed in the SELECT command as arguments of the above aggregate expression. In aggregate expressions, a single value is calculated from the values of multiple rows in a column as follows (note that the addition DISTINCT excludes double values from the calculation):
MAX( col ) Determines the maximum value of the value in the column col in the resulting set or in the current group.
MIN( col ) Determines the minimum value of the content of the column col in the resulting set or in the current group.
AVG( col ) Determines the average value of the content of the column col in the resulting set or in the current group. The data type of the column has to be numerical.
SUM( col ) Determines the sum of the content of the column col in the resulting set or in the current group. The data type of the column has to be numerical.
COUNT( DISTINCT col ) Determines the number of different values in the column col in the resulting set or in the current group.
COUNT( * ) (or count(*)) Determines the number of rows in the resulting set or in the current group. No column label is specified in this case.
If you are using aggregate expressions, all column labels that are not listed as an argument of an aggregate function are listed after the addition GROUP BY. The aggregate functions evaluate the content of the groups defined by GROUP BY in the database system and transfer the result to the combined rows of the resulting set.
The data type of aggregate expressions with the function MAX, MIN or SUM is the data type of the corresponding column in the ABAP Dictionary. Aggregate expressions with the function AVG have the data type FLTP, and those with COUNT have the data type INT4. The corresponding data object after INTO or APPENDING has to be selected accordingly.
Note the following points when using aggregate expressions:
If the addition FOR ALL ENTRIES is used in front of WHERE, or if cluster or pool tables are listed after FROM, no other aggregate expressions apart from COUNT( * ) can be used.
Columns of the type STRING or RAWSTRING cannot be used with aggregate functions.
When aggregate expressions are used, the SELECT statement circumvents SAP buffering.
Null values are not included in the calculation for the aggregate functions. The result is a null value only if all the rows in the column in question contain the null value.
If only aggregate expressions are used after SELECT, the results set has one row and the addition GROUP BY is not necessary. If a non-table type target area is specified after INTO, the command ENDSELECT cannot be used together with the addition SINGLE. If the aggregate expression count( * ) is not being used, an internal table can be specified after INTO, and the first row of this table is filled.
If aggregate functions are used without GROUP BY being specified at the same time, the resulting set contains a row even if no data is found in the database. If count( * ) is used, the corresponding column contains the value 0; the columns of the other aggregate functions contain initial values. This row is assigned to the data object specified after INTO, and unless count( * ) is used exclusively, sy-subrc is set to 0 and sy-dbcnt is set to 1. If count( * ) is used exclusively, the addition INTO can be omitted, and if no data is found in the database, sy-subrc is set to 4 and sy-dbcnt is set to 0.
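A sketch combining an aggregate expression with GROUP BY (counts the flights per carrier in the flight-model table SFLIGHT):

DATA: l_carrid TYPE sflight-carrid,
      l_count  TYPE i.

SELECT carrid COUNT( * ) FROM sflight
  INTO (l_carrid, l_count)
  GROUP BY carrid.
  WRITE: / l_carrid, l_count.
ENDSELECT.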
Is it worth using select query on infotype tables
Hi Experts,
I might be posting in the wrong column, but I just need to know: is it worth using a SELECT query on infotype tables (PAxxxx), or should we prefer using the function modules for data fetching?
If select is not suggested, what is the reason for that?
Rgds
Prateek
Hi,
It is not that you can't write a SELECT on PAxxxx tables. Logical databases (LDBs) exist to fetch this data, but it depends on the requirement when to write a SELECT and when to use an LDB.
Generally, when you are looking at, say, 8 to 10 infotype tables with free selection, an LDB is suggested to fetch the data.
If you are fetching data from a few tables with a restricted selection (WHERE clause), then a SELECT is used.
If I were to write a program using SELECT only, fetching data from infotype tables for a large number of records would take more time; this is easier with an LDB, as the records are fetched hierarchically based on keys.
Normally a development scenario uses a combination of LDB and SELECT queries.
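As a sketch of the restricted-selection case described above (the infotype table PA0001 and the personnel number are assumptions for illustration):

DATA lt_pa0001 TYPE TABLE OF pa0001.

SELECT * FROM pa0001 INTO TABLE lt_pa0001
  WHERE pernr = '00001234'   " hypothetical personnel number
    AND begda <= sy-datum
    AND endda >= sy-datum.   " only the record valid today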
Br,
Vijay.