Sum() in select query

Hi,
How do I avoid getting an empty row when the SUM() function returns no value because the condition in the WHERE clause matches no rows?
TIA
Harish

SQL> select sum(cnt) sm
  2  from t;
        SM                                                                     
SQL> select sum(cnt) sm
  2  from t
  3  minus
  4  select null sm
  5  from dual;
no rows selected
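Another way to suppress the single aggregate row that SUM() produces when the WHERE clause matches nothing is a HAVING clause; COUNT(*) is 0 for an empty match, so the lone group is filtered out instead of coming back with a NULL sum. A minimal sketch against the same table t:

SQL> select sum(cnt) sm
  2  from t
  3  having count(*) > 0;
no rows selected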

Similar Messages

  • How can I use the SUM aggregate in a select query?

    Hi Gurus,
    How can I use the SUM function in a select query, and store the resulting value into an internal table (itab)?
    for ex:
    TABLES: vbap.
    types: begin of ty_vbap,
           include type vbap,
           sum type string,
          end of ty_vbap.
    data: i_vbap type TABLE OF ty_vbap,
          w_vbap type ty_vbap.
    SELECT sum(posnr) FROM vbap into table i_vbap up to 5 rows.
                            (or)
    SELECT sum(posnr) FROM vbap into table i_vbap group by vbeln.
      loop at i_vbap into w_vbap.
    " which variable do I have to use to display the summed value?
      endloop.
    If the above code is not understandable, please give some sample code for the above query.
    Thank you,
    shabeer ahmed.

    Hi,
    Check this sample code.
    TABLES SBOOK.
    DATA:  COUNT TYPE I, SUM TYPE P DECIMALS 2, AVG TYPE F.
    DATA:  CONNID LIKE SBOOK-CONNID.
    SELECT CONNID COUNT( * ) SUM( LUGGWEIGHT ) AVG( LUGGWEIGHT )
           INTO (CONNID, COUNT, SUM, AVG)
           FROM SBOOK
           WHERE
             CARRID   = 'LH '      AND
             FLDATE   = '19950228'
           GROUP BY CONNID.
      WRITE: / CONNID, COUNT, SUM, AVG.
    ENDSELECT.
    Regards,
    Sravanthi

  • Can we get data returned from a stored procedure in a select query?

    Hello,
    Suppose I have a function GetSum(x, y) that returns the sum of two numbers x and y. We can call this function from within a SQL statement like this:
    select GetSum(4,5) SUM from dual;
    But is this possible with a stored procedure? I.e., can I call a stored procedure from within a select query like I have done in the above code?

    Hi,
    bootstrap wrote:
    Hello,
    Suppose I have a function GetSum(x, y) that returns the sum of two numbers x and y. We can call this function from within a SQL statement like this:
    select GetSum(4,5) SUM from dual;
    But is this possible with a stored procedure? I.e., can I call a stored procedure from within a select query like I have done in the above code?
    The short answer has already been given.
    Why can't you use a function?
    Suppose you could use a procedure. What results would you want to see from
    SELECT  my_proc (4, 5)
    FROM    dual;
    and why?
    Explain what you want to do, and somebody will help you find a good way to do it.
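    If the logic really must live in a procedure, the usual workaround is to wrap the procedure in a function and call the function from SQL. A minimal sketch, assuming a hypothetical procedure get_sum_proc(x, y, result OUT) that you cannot change (both names here are made up):
    create or replace function get_sum_fn (p_x in number, p_y in number)
      return number
    as
      l_result  number;
    begin
      -- delegate to the existing procedure via its OUT parameter
      get_sum_proc(p_x, p_y, l_result);
      return l_result;
    end;
    /
    select get_sum_fn(4, 5) sm
    from   dual;
    This only works cleanly if the procedure performs no DML or transaction control, because a function called from a query is not allowed to change database state.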

  • How to create a triangle view with a select query?

    I need help to build a select query that will create a triangle view.
    Below is the table I have to query
    INITIAL TABLE
    AMOUNT | TRANSACTION_DATE | OPEN_DATE | TYPE
    5 | 30-JAN-09 | 10-JAN-09 | A
    10 | 12-JAN-09 | 30-NOV-08 | A
    20 | 30-DEC-08 | 15-OCT-08 | A
    10 | 30-DEC-08 | 08-OCT-08 | A
    THE FINAL TABLE I HAVE TO CREATE:
    DEV_PERIOD = TO_CHAR(TRUNC(TRANSACTION_DATE,'Q'),'YYYY-Q') AS DEV_PERIOD
    OPEN_PERIOD = TO_CHAR(TRUNC(OPEN_DATE,'Q'),'YYYY-Q') AS OPEN_PERIOD
    SUM of AMOUNT | DEV_PERIOD | OPEN_PERIOD | TYPE
    5 | 2009-1 | 2009-1 | A
    40 | 2009-1 | 2008-4 | A
    30 | 2008-4 | 2008-4 | A
    This is another view of the table (the triangle view):
    Open_Period \ Dev_Period | 2008-1 | 2008-2 | 2008-3 | 2008-4 | 2009-1
    2008-1                   |      0 |      0 |      0 |      0 |      0
    2008-2                   |        |      0 |      0 |      0 |      0
    2008-3                   |        |        |      0 |      0 |      0
    2008-4                   |        |        |        |     30 |     40
    2009-1                   |        |        |        |        |      5
    Any ideas will be appreaciated.
    Thank you!

    I think the first thing you need to do is look up "pivot query" in this newsgroup. How complicated your query gets will depend on your database version (11g natively supports pivot queries). You have a variable number of columns in your result set (depending on how many quarters you have in your data).
    I think once you get the columns sorted out, working out the numbers to put in each column will be relatively easy.
    Jon
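    One way to get started is conditional aggregation, which works on any version (a sketch, assuming the source table is named initial_table, which is made up here, and hard-coding the quarter columns; on 11g the PIVOT clause could replace the CASE expressions):
    select to_char(trunc(open_date, 'Q'), 'YYYY-Q') as open_period,
           sum(case when to_char(trunc(transaction_date, 'Q'), 'YYYY-Q') = '2008-4'
                    then amount else 0 end) as dev_2008_4,
           sum(case when to_char(trunc(transaction_date, 'Q'), 'YYYY-Q') = '2009-1'
                    then amount else 0 end) as dev_2009_1
    from   initial_table
    group by to_char(trunc(open_date, 'Q'), 'YYYY-Q')
    order by open_period;
    Generating one column per quarter in the data, rather than hard-coding them, still needs dynamic SQL (or the 11g PIVOT XML variant), which is why the pivot-query threads are the place to start.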

  • Need help improving the performance of a select query

    Hi,
    I have been preparing a report which involves data fetched from 4 to 5 different tables, and calculations have to be performed on some columns as well.
    I planned to write a single cursor to populate one temp table. I have used inline views and EXISTS frequently in the select query; please go through the query and suggest a better way to restructure it.
    cursor c_acc_pickup_incr(p_branch_code varchar2, p_applDate date, p_st_dt date, p_ed_dt date) is
    select sca.branch_code "BRANCH",
    sca.cust_ac_no "ACCOUNT",
    to_char(p_applDate, 'YYYYMM') "YEARMONTH",
    sca.ccy "CURRENCY",
    sca.account_class "PRODUCT",
    sca.cust_no "CUSTOMER",
    sca.ac_desc "DESCRIPTION",
    null "LOW_BAL",
    null "HIGH_BAL",
    null "AVG_CR_BAL",
    null "AVG_DR_BAL",
    null "CR_DAYS",
    null "DR_DAYS",
    --null                                 "CR_TURNOVER",       
    --null                                 "DR_TURNOVER",       
    null "DR_OD_DAYS",
    (select sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
    (case when (p_applDate >= sca.tod_limit_start_date and
    p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)) then
    sca.tod_limit else 0 end) dd
    from getm_facility gf, sttm_cust_account_linkages scal
    where gf.line_code || gf.line_serial = scal.linked_ref_no
    and cust_ac_no = sca.cust_ac_no) "OD_LIMIT",
    --sc.credit_rating                      "CR_GRADE",        
    null "AVG_NET_BAL",
    null "UNAUTH_OD_AMT",
    sca.acy_blocked_amount "AMT_BLOCKED",
    (select sum(amt)
    from ictb_entries_history ieh
    where ieh.acc = sca.cust_ac_no
    and ieh.brn = sca.branch_code
    and ieh.drcr = 'D'
    and ieh.liqn = 'Y'
    and ieh.entry_passed = 'Y'
    and ieh.ent_dt between p_st_dt and p_ed_dt
    and exists (
    select * from ictm_pr_int ipi, ictm_rule_frm irf
    where ipi.product_code = ieh.prod
    and ipi.rule = irf.rule_id
    and irf.book_flag = 'B')) "DR_INTEREST",
    (select sum(amt)
    from ictb_entries_history ieh
    where ieh.acc = sca.cust_ac_no
    and ieh.brn = sca.branch_code
    and ieh.drcr = 'C'
    and ieh.liqn = 'Y'
    and ieh.entry_passed = 'Y'
    and ieh.ent_dt between p_st_dt and p_ed_dt
    and exists (
    select * from ictm_pr_int ipi, ictm_rule_frm irf
    where ipi.product_code = ieh.prod
    and ipi.rule = irf.rule_id
    and irf.book_flag = 'B')) "CR_INTEREST",
    (select sum(amt) from ictb_entries_history ieh
    where ieh.brn = sca.branch_code
    and ieh.acc = sca.cust_ac_no
    and ieh.ent_dt between p_st_dt and p_ed_dt
    and exists (
    select product_code
    from ictm_product_definition ipd
    where ipd.product_code = ieh.prod
    and ipd.product_type = 'C')) "FEE_INCOME",
    sca.record_stat "ACC_STATUS",
    case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
    and not exists (select 1
    from ictm_tdpayin_details itd
    where itd.multimode_payopt = 'Y'
    and itd.brn = sca.branch_code
    and itd.acc = sca.cust_ac_no
    and itd.multimode_offset_brn is not null
    and itd.multimode_tdoffset_acc is not null))
    then 1 else 0 end "NEW_ACC_FOR_THE_MONTH",
    case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
    and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
    and not exists (select 1
    from ictm_tdpayin_details itd
    where itd.multimode_payopt = 'Y'
    and itd.brn = sca.branch_code
    and itd.acc = sca.cust_ac_no
    and itd.multimode_offset_brn is not null
    and itd.multimode_tdoffset_acc is not null))
    then 1 else 0 end "NEW_ACC_FOR_NEW_CUST",
    (select 1 from dual
    where exists (select 1 from ictm_td_closure_renew itcr
    where itcr.brn = sca.branch_code
    and itcr.acc = sca.cust_ac_no
    and itcr.renewal_date = sysdate)
    or exists (select 1 from ictm_tdpayin_details itd
    where itd.multimode_payopt = 'Y'
    and itd.brn = sca.branch_code
    and itd.acc = sca.cust_ac_no
    and itd.multimode_offset_brn is not null
    and itd.multimode_tdoffset_acc is not null)) "RENEWED_OR_ROLLOVER",
    (select maturity_date from ictm_acc ia
    where ia.brn = sca.branch_code
    and ia.acc = sca.cust_ac_no) "MATURITY_DATE",
    sca.ac_stat_no_dr "DR_DISALLOWED",
    sca.ac_stat_no_cr "CR_DISALLOWED",
    sca.ac_stat_block "BLOCKED_ACC", --Not Reqd
    sca.ac_stat_dormant "DORMANT_ACC",
    sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
    sca.ac_stat_frozen "FROZEN_ACC",
    sca.ac_open_date "ACC_OPENING_DT",
    sca.address1 "ADD_LINE_1",
    sca.address2 "ADD_LINE_2",
    sca.address3 "ADD_LINE_3",
    sca.address4 "ADD_LINE_4",
    sca.joint_ac_indicator "JOINT_ACC",
    sca.acy_avl_bal "CR_BAL",
    0 "DR_BAL",
    0 "CR_BAL_LCY",
    0 "DR_BAL_LCY",
    null "YTD_CR_MOVEMENT",
    null "YTD_DR_MOVEMENT",
    null "YTD_CR_MOVEMENT_LCY",
    null "YTD_DR_MOVEMENT_LCY",
    null "MTD_CR_MOVEMENT",
    null "MTD_DR_MOVEMENT",
    null "MTD_CR_MOVEMENT_LCY",
    null "MTD_DR_MOVEMENT_LCY",
    'N' "BRANCH_TRFR", --New
    sca.provision_amount "PROVISION_AMT",
    sca.account_type "ACCOUNT_TYPE",
    nvl(sca.tod_limit, 0) "TOD_LIMIT",
    nvl(sca.sublimit, 0) "SUB_LIMIT",
    nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
    nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
    from sttm_cust_account sca, sttm_customer sc
    where sca.branch_code = p_branch_code
    and sca.cust_no = sc.customer_no
    and ( exists (select 1 from actb_daily_log adl
    where adl.ac_no = sca.cust_ac_no
    and adl.ac_branch = sca.branch_code
    and adl.trn_dt = p_applDate
    and adl.auth_stat = 'A')
    or exists (select 1 from catm_amount_blocks cab
    where cab.account = sca.cust_ac_no
    and cab.branch = sca.branch_code
    and cab.effective_date = p_applDate
    and cab.auth_stat = 'A')
    or exists (select 1 from ictm_td_closure_renew itcr
    where itcr.acc = sca.cust_ac_no
    and itcr.brn = sca.branch_code
    and itcr.renewal_date = p_applDate)
    or exists (select 1 from sttm_ac_stat_change sasc
    where sasc.cust_ac_no = sca.cust_ac_no
    and sasc.branch_code = sca.branch_code
    and sasc.status_change_date = p_applDate
    and sasc.auth_stat = 'A')
    or exists (select 1 from cstb_acc_brn_trfr_log cabtl
    where cabtl.branch_code = sca.branch_code
    and cabtl.cust_ac_no = sca.cust_ac_no
    and cabtl.process_status = 'S'
    and cabtl.process_date = p_applDate)
    or exists (select 1 from sttbs_provision_history sph
    where sph.branch_code = sca.branch_code
    and sph.cust_ac_no = sca.cust_ac_no
    and sph.esn_date = p_applDate)
    or exists (select 1 from sttms_cust_account_dormancy scad
    where scad.branch_code = sca.branch_code
    and scad.cust_ac_no = sca.cust_ac_no
    and scad.dormancy_start_dt = p_applDate)
    or sca.maker_dt_stamp = p_applDate
    or sca.status_since = p_applDate );
    l_tb_acc_det ty_tb_acc_det_int;
    l_brnrec cvpks_utils.rec_brnlcy;
    l_acbr_lcy sttms_branch.branch_lcy%type;
    l_lcy_amount actbs_daily_log.lcy_amount%type;
    l_xrate number;
    l_dt_rec sttm_dates%rowtype;
    l_acc_rec sttm_cust_account%rowtype;
    l_acc_stat_row ty_r_acc_stat;

    I see it more like shown below (possibly with no inline selects).
    Try to get rid of the remaining inline selects (left as an exercise ;)) and rewrite the traditional joins as ANSI joins, as problems might arise using mixed syntax. I have to leave, so I don't have time to complete the query.
    select sca.branch_code "BRANCH",
           sca.cust_ac_no "ACCOUNT",
           to_char(p_applDate, 'YYYYMM') "YEARMONTH",
           sca.ccy "CURRENCY",
           sca.account_class "PRODUCT",
           sca.cust_no "CUSTOMER",
           sca.ac_desc "DESCRIPTION",
           null "LOW_BAL",
           null "HIGH_BAL",
           null "AVG_CR_BAL",
           null "AVG_DR_BAL",
           null "CR_DAYS",
           null "DR_DAYS",
    --     null "CR_TURNOVER",
    --     null "DR_TURNOVER",
           null "DR_OD_DAYS",
           w.dd "OD_LIMIT",
    --     sc.credit_rating "CR_GRADE",
           null "AVG_NET_BAL",
           null "UNAUTH_OD_AMT",
           sca.acy_blocked_amount "AMT_BLOCKED",
           x.dr_int "DR_INTEREST",
           x.cr_int "CR_INTEREST",
           y.fee_amt "FEE_INCOME",
           sca.record_stat "ACC_STATUS",
           case when trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
                 and not exists(select 1
                                  from ictm_tdpayin_details itd
                                 where itd.multimode_payopt = 'Y'
                                   and itd.brn = sca.branch_code
                                   and itd.acc = sca.cust_ac_no
                                   and itd.multimode_offset_brn is not null
                                   and itd.multimode_tdoffset_acc is not null
                then 1
                else 0
           end "NEW_ACC_FOR_THE_MONTH",
           case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
                 and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
                 and not exists(select 1
                                  from ictm_tdpayin_details itd
                                 where itd.multimode_payopt = 'Y'
                                   and itd.brn = sca.branch_code
                                   and itd.acc = sca.cust_ac_no
                                   and itd.multimode_offset_brn is not null
                                   and itd.multimode_tdoffset_acc is not null
                then 1
                else 0
           end "NEW_ACC_FOR_NEW_CUST",
           (select 1 from dual
             where exists(select 1
                            from ictm_td_closure_renew itcr
                           where itcr.brn = sca.branch_code
                             and itcr.acc = sca.cust_ac_no
                             and itcr.renewal_date = sysdate
                or exists(select 1
                            from ictm_tdpayin_details itd
                           where itd.multimode_payopt = 'Y'
                             and itd.brn = sca.branch_code
                             and itd.acc = sca.cust_ac_no
                             and itd.multimode_offset_brn is not null
                             and itd.multimode_tdoffset_acc is not null
           ) "RENEWED_OR_ROLLOVER",
           m.maturity_date "MATURITY_DATE",
           sca.ac_stat_no_dr "DR_DISALLOWED",
           sca.ac_stat_no_cr "CR_DISALLOWED",
    --     sca.ac_stat_block "BLOCKED_ACC", --Not Reqd
           sca.ac_stat_dormant "DORMANT_ACC",
           sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
           sca.ac_stat_frozen "FROZEN_ACC",
           sca.ac_open_date "ACC_OPENING_DT",
           sca.address1 "ADD_LINE_1",
           sca.address2 "ADD_LINE_2",
           sca.address3 "ADD_LINE_3",
           sca.address4 "ADD_LINE_4",
           sca.joint_ac_indicator "JOINT_ACC",
           sca.acy_avl_bal "CR_BAL",
           0 "DR_BAL",
           0 "CR_BAL_LCY",
           0 "DR_BAL_LCY",
           null "YTD_CR_MOVEMENT",
           null "YTD_DR_MOVEMENT",
           null "YTD_CR_MOVEMENT_LCY",
           null "YTD_DR_MOVEMENT_LCY",
           null "MTD_CR_MOVEMENT",
           null "MTD_DR_MOVEMENT",
           null "MTD_CR_MOVEMENT_LCY",
           null "MTD_DR_MOVEMENT_LCY",
           'N' "BRANCH_TRFR", --New
           sca.provision_amount "PROVISION_AMT",
           sca.account_type "ACCOUNT_TYPE",
           nvl(sca.tod_limit, 0) "TOD_LIMIT",
           nvl(sca.sublimit, 0) "SUB_LIMIT",
           nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
           nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
      from sttm_cust_account sca,
           sttm_customer sc,
           (select sca.cust_ac_no
                   sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
                       case when p_applDate >= sca.tod_limit_start_date
                             and p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)
                            then sca.tod_limit else 0
                       end
                      ) dd
              from sttm_cust_account sca
                   getm_facility gf,
                   sttm_cust_account_linkages scal
             where gf.line_code || gf.line_serial = scal.linked_ref_no
               and cust_ac_no = sca.cust_ac_no
             group by sca.cust_ac_no
           ) w,
           (select acc,
                   brn,
                   sum(decode(drcr,'D',amt)) dr_int,
                   sum(decode(drcr,'C',amt)) cr_int
              from ictb_entries_history ieh
             where ent_dt between p_st_dt and p_ed_dt
               and drcr in ('C','D')
               and liqn = 'Y'
               and entry_passed = 'Y'
               and exists(select null
                            from ictm_pr_int ipi,
                                 ictm_rule_frm irf
                           where ipi.rule = irf.rule_id
                             and ipi.product_code = ieh.prod 
                             and irf.book_flag = 'B'
             group by acc,brn
           ) x,
           (select acc,
                   brn,
                   sum(amt) fee_amt
              from ictb_entries_history ieh
             where ieh.ent_dt between p_st_dt and p_ed_dt
               and exists(select product_code
                            from ictm_product_definition ipd
                           where ipd.product_code = ieh.prod
                             and ipd.product_type = 'C'
             group by acc,brn
           ) y,
           ictm_acc m,
           (select sca.cust_ac_no,
                   sca.branch_code
                   coalesce(nvl2(coalesce(t1.ac_no,t1.ac_branch),'exists',null),
                            nvl2(coalesce(t2.account,t2.account),'exists',null),
                            nvl2(coalesce(t3.acc,t3.brn),'exists',null),
                            nvl2(coalesce(t4.cust_ac_no,t4.branch_code),'exists',null),
                            nvl2(coalesce(t5.cust_ac_no,t5.branch_code),'exists',null),
                            nvl2(coalesce(t6.cust_ac_no,t6.branch_code),'exists',null),
                            nvl2(coalesce(t7.cust_ac_no,t7.branch_code),'exists',null),
                            decode(sca.maker_dt_stamp,p_applDate,'exists'),
                            decode(sca.status_since,p_applDate,'exists')
                           ) existence
              from sttm_cust_account sca
                   left outer join
                   (select ac_no,ac_branch
                      from actb_daily_log
                     where trn_dt = p_applDate
                       and auth_stat = 'A'
                   ) t1
                on (sca.cust_ac_no = t1.ac_no
               and  sca.branch_code = t1.ac_branch
                   left outer join
                   (select account,account
                      from catm_amount_blocks
                     where effective_date = p_applDate
                       and auth_stat = 'A'
                   ) t2
                on (sca.cust_ac_no = t2.account
               and  sca.branch_code = t2.branch
                   left outer join
                   (select acc,brn
                      from ictm_td_closure_renew itcr
                     where renewal_date = p_applDate
                   ) t3
                on (sca.cust_ac_no = t3.acc
               and  sca.branch_code = t3.brn
                   left outer join
                   (select cust_ac_no,branch_code
                      from sttm_ac_stat_change
                     where status_change_date = p_applDate
                       and auth_stat = 'A'
                   ) t4
                on (sca.cust_ac_no = t4.cust_ac_no
               and  sca.branch_code = t4.branch_code
                   left outer join
                   (select cust_ac_no,branch_code
                      from cstb_acc_brn_trfr_log
                     where process_date = p_applDate
                       and process_status = 'S'
                   ) t5
                on (sca.cust_ac_no = t5.cust_ac_no
               and  sca.branch_code = t5.branch_code
                   left outer join
                   (select cust_ac_no,branch_code
                      from sttbs_provision_history
                     where esn_date = p_applDate
                   ) t6
                on (sca.cust_ac_no = t6.cust_ac_no
               and  sca.branch_code = t6.branch_code
                   left outer join
                   (select cust_ac_no,branch_code
                      from sttms_cust_account_dormancy
                     where dormancy_start_dt = p_applDate
                   ) t7
                on (sca.cust_ac_no = t7.cust_ac_no
               and  sca.branch_code = t7.branch_code
           ) z
    where sca.branch_code = p_branch_code
       and sca.cust_no = sc.customer_no
       and sca.cust_ac_no = w.cust_ac_no
       and sca.cust_ac_no = x.acc
       and sca.branch_code = x.brn
       and sca.cust_ac_no = y.acc
       and sca.branch_code = y.brn
       and sca.cust_ac_no = m.acc
       and sca.branch_code = m.brn
       and sca.cust_ac_no = z.sca.cust_ac_no
       and sca.branch_code = z.branch_code
       and z.existence is not null
    Regards
    Etbin

  • Select query in case of Multiple line items

    Hi Gurus,
    I have a doubt about a general SQL select query. Suppose I have an internal table, itab. I have fetched the G/L account numbers first, based on the input selections. Next, I want to loop over those G/L accounts. However, if a G/L account has multiple line items, then I personally use this select query:
    loop at itab.
    select <field> from <table> appending corresponding fields of  <itab1> where hkont eq itab-hkont.
    endloop.
    Now, the execution time for this query is longer than expected. The biggest problem here is that I have to sum up the totals as well, so totaling is an added load. I want to reduce the execution time. Kindly suggest a good method if you have one.
    I've pasted the code which I've written below, for reference:
    SELECT DISTINCT HKONT BELNR
      FROM BSIS
       INTO CORRESPONDING FIELDS OF TABLE OTAB
        WHERE HKONT IN S_RACCT
    *      AND PRCTR IN P_PRCTR
          AND MONAT IN S_POPER
          AND BUKRS EQ P_BUKRS
          AND GJAHR EQ P_GJAHR
          AND PRCTR IN S_PRCTR.
    ***The code below takes a lot of time to execute.***
    LOOP AT OTAB .
      SELECT DMBTR HKONT
      FROM BSIS APPENDING CORRESPONDING FIELDS OF TABLE CREDITS
        WHERE HKONT EQ OTAB-HKONT
          AND BELNR EQ OTAB-BELNR
          AND MONAT IN S_POPER
          AND BUKRS EQ P_BUKRS
          AND GJAHR EQ P_GJAHR
          AND PRCTR IN S_PRCTR
          AND SHKZG EQ 'H'.
      COLLECT CREDITS.
    ENDLOOP.

    Hi,
    First of all, try to avoid SELECT ... INTO CORRESPONDING FIELDS; this would improve the performance of the program.
    Try to do a single fetch from the BSIS table: fetch the hkont, belnr and dmbtr fields into a master internal table, then manipulate the data as required. Don't hit the database table more than once (unless it is required). This would improve the performance of your code.
    Try to code it this way:
    types: begin of ty_bsis,
                 hkont type hkont,
                 belnr type  belnr_d,
                 dmbtr type dmbtr,
              end of ty_bsis.
    data: it_bsis type standard table of ty_bsis,
          wa_bsis type ty_bsis.
    select hkont belnr dmbtr
              from bsis
              into table it_bsis
              WHERE HKONT IN S_RACCT
            AND PRCTR IN P_PRCTR
              AND MONAT IN S_POPER
             AND BUKRS EQ P_BUKRS
             AND GJAHR EQ P_GJAHR
              AND PRCTR IN S_PRCTR.
    Using the data available in it_bsis, you can manipulate it as required.
    Hope this would be helpful
    Regards
    Ramesh Sundaram

  • Select query failing on a table that has heavy per-second insertions

    Hi
    Problem statement
    1- We are using 11g as the database.
    2- We have a table that is range partitioned on a date column.
    3- The insertion rate is very high, i.e. several hundred records per second into the current partition.
    4- The data continuously goes into the current partition as and when the buffer is full or the per-second timer expires.
    5- We also have to run select queries on the same table, against the current partition, say for the latest 500 records.
    6- Efficient indexes are also created on the table.
    Solutions tried
    1- After analyzing with tkprof it is observed that the parse and execute phases are fine, but the fetch is taking too much time to show the output; say it takes 1 hour.
    2- Using the 11g SQL Advisor and SPM several baselines were created, but their observed success rate is also too low.
    Please suggest any solution to this issue:
    1- e.g. a redesign of the table.
    2- Any better way to write the query to fix the fetch issue.
    3- Any Oracle settings or parameter changes to fix the fetch issue.
    Thanks in advance.
    Regards
    Vishal Sharma

    I am uploading the latest stats; please let me know how I can improve this, as it is taking 25 minutes.
    ####TKPROF output#########
    SQL ID : 2j5w6bv437cak
    select almevttbl.AlmEvtId, almevttbl.AlmType, almevttbl.ComponentId,
      almevttbl.TimeStamp, almevttbl.Severity, almevttbl.State,
      almevttbl.Category, almevttbl.CauseCode, almevttbl.UnitType,
      almevttbl.UnitId, almevttbl.UnitName, almevttbl.ServerName,
      almevttbl.StrParam, almevttbl.ExtraStrParam, almevttbl.ExtraStrParam2,
      almevttbl.ExtraStrParam3, almevttbl.ParentCustId, almevttbl.ExtraParam1,
      almevttbl.ExtraParam2, almevttbl.ExtraParam3,almevttbl.ExtraParam4,
      almevttbl.ExtraParam5, almevttbl.SRCIPADDRFAMILY,almevttbl.SrcIPAddress11,
      almevttbl.SrcIPAddress12,almevttbl.SrcIPAddress13,almevttbl.SrcIPAddress14,
      almevttbl.DESTIPADDRFAMILY,almevttbl.DestIPAddress11,
      almevttbl.DestIPAddress12,almevttbl.DestIPAddress13,
      almevttbl.DestIPAddress14,  almevttbl.DestPort, almevttbl.SrcPort,
      almevttbl.SessionDir, almevttbl.CustomerId, almevttbl.ProfileId,
      almevttbl.ParentProfileId, almevttbl.CustomerName, almevttbl.AttkDir,
      almevttbl.SubCategory, almevttbl.RiskCategory, almevttbl.AssetValue,
      almevttbl.IPSAction, almevttbl.l4Protocol,almevttbl.ExtraStrParam4 ,
      almevttbl.ExtraStrParam5,almevttbl.username,almevttbl.ExtraStrParam6,
      IpAddrFamily1,IPAddrValue11,IPAddrValue12,IPAddrValue13,IPAddrValue14,
      IpAddrFamily2,IPAddrValue21,IPAddrValue22,IPAddrValue23,IPAddrValue24
    FROM
           AlmEvtTbl PARTITION(ALMEVTTBLP20100323) WHERE AlmEvtId IN ( SELECT  * FROM
      ( SELECT /*+ FIRST_ROWS(1000) INDEX (AlmEvtTbl AlmEvtTbl_Index) */AlmEvtId
      FROM AlmEvtTbl PARTITION(ALMEVTTBLP20100323) where       ((AlmEvtTbl.Customerid
      = 0 or AlmEvtTbl.ParentCustId = 0))  ORDER BY AlmEvtTbl.TIMESTAMP DESC) 
      WHERE ROWNUM  <  602) order by timestamp desc
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.10       0.17          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch       42   1348.25    1521.24       1956   39029545          0         601
    total       44   1348.35    1521.41       1956   39029545          0         601
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 82 
    Rows     Row Source Operation
        601  PARTITION RANGE SINGLE PARTITION: 24 24 (cr=39029545 pr=1956 pw=1956 time=11043 us cost=0 size=7426 card=1)
        601   TABLE ACCESS BY LOCAL INDEX ROWID ALMEVTTBL PARTITION: 24 24 (cr=39029545 pr=1956 pw=1956 time=11030 us cost=0 size=7426 card=1)
        601    INDEX FULL SCAN ALMEVTTBL_INDEX PARTITION: 24 24 (cr=39029377 pr=1956 pw=1956 time=11183 us cost=0 size=0 card=1)(object id 72557)
        601     FILTER  (cr=39027139 pr=0 pw=0 time=0 us)
    169965204      COUNT STOPKEY (cr=39027139 pr=0 pw=0 time=24859073 us)
    169965204       VIEW  (cr=39027139 pr=0 pw=0 time=17070717 us cost=0 size=13 card=1)
    169965204        PARTITION RANGE SINGLE PARTITION: 24 24 (cr=39027139 pr=0 pw=0 time=13527031 us cost=0 size=48 card=1)
    169965204         TABLE ACCESS BY LOCAL INDEX ROWID ALMEVTTBL PARTITION: 24 24 (cr=39027139 pr=0 pw=0 time=10299895 us cost=0 size=48 card=1)
    169965204          INDEX FULL SCAN ALMEVTTBL_INDEX PARTITION: 24 24 (cr=1131414 pr=0 pw=0 time=3222624 us cost=0 size=0 card=1)(object id 72557)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                      42        0.00          0.00
      SQL*Net message from client                    42       11.54        133.54
      db file sequential read                      1956        0.20         28.00
      latch free                                     21        0.00          0.01
      latch: cache buffers chains                     9        0.01          0.02
    SQL ID : 0ushr863b7z39
    SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
      NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
      NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0)
    FROM
    (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("PLAN_TABLE") FULL("PLAN_TABLE")
      NO_PARALLEL_INDEX("PLAN_TABLE") */ 1 AS C1, CASE WHEN
      "PLAN_TABLE"."STATEMENT_ID"=:B1 THEN 1 ELSE 0 END AS C2 FROM
      "SYS"."PLAN_TABLE$" "PLAN_TABLE") SAMPLESUB
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.01          1          3          0           1
    total        3      0.00       0.01          1          3          0           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 82     (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=3 pr=1 pw=1 time=0 us)
          0   TABLE ACCESS FULL PLAN_TABLE$ (cr=3 pr=1 pw=1 time=0 us cost=29 size=138856 card=8168)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.01          0.01
    SQL ID : bjkdb51at8dnb
    EXPLAIN PLAN SET STATEMENT_ID='PLUS30350011' FOR select almevttbl.AlmEvtId,
      almevttbl.AlmType, almevttbl.ComponentId, almevttbl.TimeStamp,
      almevttbl.Severity, almevttbl.State, almevttbl.Category,
      almevttbl.CauseCode, almevttbl.UnitType, almevttbl.UnitId,
      almevttbl.UnitName, almevttbl.ServerName, almevttbl.StrParam,
      almevttbl.ExtraStrParam, almevttbl.ExtraStrParam2, almevttbl.ExtraStrParam3,
       almevttbl.ParentCustId, almevttbl.ExtraParam1, almevttbl.ExtraParam2,
      almevttbl.ExtraParam3,almevttbl.ExtraParam4,almevttbl.ExtraParam5,
      almevttbl.SRCIPADDRFAMILY,almevttbl.SrcIPAddress11,almevttbl.SrcIPAddress12,
      almevttbl.SrcIPAddress13,almevttbl.SrcIPAddress14,
      almevttbl.DESTIPADDRFAMILY,almevttbl.DestIPAddress11,
      almevttbl.DestIPAddress12,almevttbl.DestIPAddress13,
      almevttbl.DestIPAddress14,  almevttbl.DestPort, almevttbl.SrcPort,
      almevttbl.SessionDir, almevttbl.CustomerId, almevttbl.ProfileId,
      almevttbl.ParentProfileId, almevttbl.CustomerName, almevttbl.AttkDir,
      almevttbl.SubCategory, almevttbl.RiskCategory, almevttbl.AssetValue,
      almevttbl.IPSAction, almevttbl.l4Protocol,almevttbl.ExtraStrParam4 ,
      almevttbl.ExtraStrParam5,almevttbl.username,almevttbl.ExtraStrParam6,
      IpAddrFamily1,IPAddrValue11,IPAddrValue12,IPAddrValue13,IPAddrValue14,
      IpAddrFamily2,IPAddrValue21,IPAddrValue22,IPAddrValue23,IPAddrValue24 FROM 
           AlmEvtTbl PARTITION(ALMEVTTBLP20100323) WHERE AlmEvtId IN ( SELECT  * FROM
      ( SELECT /*+ FIRST_ROWS(1000) INDEX (AlmEvtTbl AlmEvtTbl_Index) */AlmEvtId
      FROM AlmEvtTbl PARTITION(ALMEVTTBLP20100323) where       ((AlmEvtTbl.Customerid
      = 0 or AlmEvtTbl.ParentCustId = 0))  ORDER BY AlmEvtTbl.TIMESTAMP DESC) 
      WHERE ROWNUM  <  602) order by timestamp desc
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.28       0.26          0          0          0           0
    Execute      1      0.01       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.29       0.27          0          0          0           0
    Misses in library cache during parse: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 82 
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       13      0.71       0.96          3         10          0           0
    Execute     14      0.20       0.29          4        304         26          21
    Fetch       92   2402.17    2714.85       3819   70033708          0        1255
    total      119   2403.09    2716.10       3826   70034022         26        1276
    Misses in library cache during parse: 10
    Misses in library cache during execute: 6
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                      49        0.00          0.00
      SQL*Net message from client                    48       29.88        163.43
      db file sequential read                      1966        0.20         28.10
      latch free                                     21        0.00          0.01
      latch: cache buffers chains                     9        0.01          0.02
      latch: session allocation                       1        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      940      0.51       0.73          1          2         38           0
    Execute   3263      1.93       2.62          7       1998         43          23
    Fetch     6049      1.32       4.41        214      12858         36       13724
    total    10252      3.78       7.77        222      14858        117       13747
    Misses in library cache during parse: 172
    Misses in library cache during execute: 168
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                        88        0.04          0.62
      latch: shared pool                              8        0.00          0.00
      latch: row cache objects                        2        0.00          0.00
      latch free                                      1        0.00          0.00
      latch: session allocation                       1        0.00          0.00
       34  user  SQL statements in session.
    3125  internal SQL statements in session.
    3159  SQL statements in session.
    Trace file: ora11g_ora_2064.trc
    Trace file compatibility: 11.01.00
    Sort options: default
           6  sessions in tracefile.
          98  user  SQL statements in trace file.
        9111  internal SQL statements in trace file.
        3159  SQL statements in trace file.
          89  unique SQL statements in trace file.
       30341  lines in trace file.
        6810  elapsed seconds in trace file.
    ###################################### AutoTrace Output#################  
    Statistics
           3901  recursive calls
              0  db block gets
       39030275  consistent gets
           1970  physical reads
            140  redo size
         148739  bytes sent via SQL*Net to client
            860  bytes received via SQL*Net from client
             42  SQL*Net roundtrips to/from client
             73  sorts (memory)
              0  sorts (disk)
            601  rows processed

  • SELECT query takes too much time! Why?

    Please find my SELECT query below:
    select w~mandt
    w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
    w~kwmeng w~vrkme w~matwa w~charg w~pstyv
    w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
    w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
    w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
    w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
    w~bedae w~cuobj w~mtvfp
    x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
    x~edatu
    x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
    x~ezeit
    into table t_vbap
    from vbap as w
    inner join vbep as x on x~vbeln = w~vbeln and
    x~posnr = w~posnr and
    x~mandt = w~mandt
    where
    ( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
    ( ( ( erdat > pre_dat and erdat < p_syndt ) or
    ( erdat = p_syndt and erzet <= p_syntm ) ) ) and
    w~matnr in s_matnr and
    w~pstyv in s_itmcat and
    w~lfrel in s_lfrel and
    w~abgru = ' ' and
    w~kwmeng > 0 and
    w~mtvfp in w_mtvfp and
    x~ettyp in w_ettyp and
    x~bdart in s_req_tp and
    x~plart in s_pln_tp and
    x~etart in s_etart and
    x~abart in s_abart and
    ( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
    The problem: It takes too much time while executing this statement.
    Could anybody change this statement and help me out to reduce the DB Access time?
    Thx

    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For all Entries
    Select Over more than one internal table
    Selection Criteria
    1.     Restrict the data using the selection criteria itself, rather than filtering it out in the ABAP code with a CHECK statement.
    2.     Select with selection list.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Select Statements   Select Queries
    1.     Avoid nested selects
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
    4.     For testing existence, use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit. 
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    If all primary key fields are supplied in the Where conditions you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    Select Statements SQL Interface
    1.     Use column updates instead of single-row updates
    to update your database tables.
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    2.     For all frequently used Select statements, try to use an index.
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    3.     Using buffered tables improves the performance considerably.
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements  Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
                Select * from zflight where airln = 'LF' and cntry = 'IN'.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements  For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points to be must considered FOR ALL ENTRIES
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
              Loop at int_cntry.
      Select single * from zfligh into int_fligh
      where cntry = int_cntry-cntry.
      Append int_fligh.
                          Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements Select Over more than one Internal table
    1.     It's better to use a view instead of nested Select statements.
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from the view DD01V.
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    2.     To read data from several logically connected tables, use a join instead of nested Select statements. Joins are preferred only if all the primary key fields are available in the WHERE clause for the tables that are joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    IS FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ BINARY and then ADD.
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables by using “ITAB2[ ] = ITAB1[ ]” as compared to “LOOP-APPEND-ENDLOOP”.
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    13.   Specify the sort key as restrictively as possible to run the program faster.
    “SORT ITAB BY K.” makes the program runs faster as compared to “SORT ITAB.”
    Internal Tables         contd…
    Hashed and Sorted tables
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Select query on a table with 13 million rows

    Hi guys,
    I have been trying to perform a select query on a table which has 13 million entries; however, it took around 58 minutes to complete.
    The table has 8 columns, 4 of which form the primary key, and looks like below:
    (PK) SegmentID > INT
    (PK) IPAddress > VARCHAR (45)
    MAC Address > VARCHAR (45)
    (PK) Application Name > VARCHAR (45)
    Total Bytes > INT
    Dates > VARCHAR (45)
    Times > VARCHAR (45)
    (PK) DateTime > DATETIME
    The sql query format is :
    select ipaddress, macaddress, sum(totalbytes), applicationname , dates,
    times from appstat where segmentid = 1 and datetime between '2011-01-03
    15:00:00.0' and '2011-01-04 15:00:00.0' group by ipaddress,
    applicationname order by applicationname, sum(totalbytes) desc
    Is there a way I can improve this query to be faster (through my.cnf or any other method)?
    Any feedback is welcomed.
    Thank you.
    Mus

    Tolls wrote:
    What db is this?
    You never said.
    Anyway, it looks like it's using the Primary Key to find the correct rows.
    Is that the correct number of rows returned?
    5 million?
    Sorted?
    I am using MySQL. By the way, the query time has become much faster (22 sec) after I changed the configuration file (based on my-huge.cnf).
    The number of rows returned is 7999 rows.
    This is some portion of the my.cnf
    # The MySQL server
    [mysqld]
    port = 3306
    socket = /var/lib/mysql/mysql.sock
    skip-locking
    key_buffer = 800M
    max_allowed_packet = 1M
    table_cache = 256
    sort_buffer_size = 1M
    read_buffer_size = 1M
    read_rnd_buffer_size = 4M
    myisam_sort_buffer_size = 64M
    thread_cache_size = 8
    query_cache_size= 16M
    log = /var/log/mysql.log
    log-slow-queries = /var/log/mysqld.slow.log
    long_query_time=10
    # Try number of CPU's*2 for thread_concurrency
    thread_concurrency = 6
    Is there anything else I need to tune so it can be faster ?
    Thanks a bunch.
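    Beyond the buffer settings, it may be worth checking whether a secondary index matching the WHERE clause changes the plan; the primary key already leads with segmentid, so compare the EXPLAIN output with and without it before keeping it. A sketch only (the index name idx_seg_dt is made up, and it assumes no equivalent index exists yet):
    ALTER TABLE appstat ADD INDEX idx_seg_dt (segmentid, `datetime`);
    EXPLAIN SELECT ipaddress, macaddress, SUM(totalbytes), applicationname, dates, times
    FROM appstat
    WHERE segmentid = 1
      AND `datetime` BETWEEN '2011-01-03 15:00:00' AND '2011-01-04 15:00:00'
    GROUP BY ipaddress, applicationname
    ORDER BY applicationname, SUM(totalbytes) DESC;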

  • Parenthesis within a select query.

    I have a doubt as to whether I can include parentheses within a select query,
    as below:-
    SELECT C.CITY_CODE,A.DOCUMENT_TYPE,SUM(A.INSTRUMENT_AMounT) FROM
    ppbs_RECEIPT_HEADER A,ppbs_DISTRIBUTOR_MASTER C WHERE
    A.DISTRIBUTOR_ID = C.DISTRIBUTOR_ID AND
    ( A.STATUS not in( 'CN','BO') or (A.STATUS ='CN' and a.INSTRUMENT_AMOUNT<>a.UNAPPLIED_AMOUNT))
    as we do in an IF condition. Please help in solving the doubt as it is urgent.

    No, the result is correct, but is it correct to put the parentheses around a condition in a normal select query as we do in an 'if' condition or a sub-query? Will the condition in the parentheses be evaluated first and then the other conditions in the select query?
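    For what it's worth: yes, parentheses in a WHERE clause group conditions exactly as they do in an IF condition, and the parenthesised condition is evaluated as a unit before being combined with the rest. Without them, AND binds more tightly than OR. A sketch using the same columns as above (the GROUP BY is added here only because SUM is mixed with plain columns):
    -- With the outer parentheses, as in the original query, the join predicate
    -- applies to every returned row:
    SELECT   C.CITY_CODE, A.DOCUMENT_TYPE, SUM(A.INSTRUMENT_AMOUNT)
    FROM     ppbs_RECEIPT_HEADER A, ppbs_DISTRIBUTOR_MASTER C
    WHERE    A.DISTRIBUTOR_ID = C.DISTRIBUTOR_ID
    AND      (    A.STATUS NOT IN ('CN','BO')
               OR (A.STATUS = 'CN' AND A.INSTRUMENT_AMOUNT <> A.UNAPPLIED_AMOUNT) )
    GROUP BY C.CITY_CODE, A.DOCUMENT_TYPE;
    -- Without the outer parentheses the condition would be read as
    --   (join AND status not in ('CN','BO')) OR (status = 'CN' AND amounts differ)
    -- so the join predicate would no longer restrict the 'CN' rows.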

  • Regarding to perform in select query

    Could anyone tell me whether the select query in this piece of code would affect the performance of the program?
    DATA: BEGIN OF OUTREC,
          BANKS LIKE BNKA-BANKS,
          BANKL LIKE BNKA-BANKL,
          BANKA LIKE BNKA-BANKA,
          PROVZ LIKE BNKA-PROVZ,   "Region (State, Province, County)
          BRNCH LIKE BNKA-BRNCH,
          STRAS LIKE BNKA-STRAS,
          ORT01 LIKE BNKA-ORT01,
          SWIFT LIKE BNKA-SWIFT,
    END OF OUTREC.
    OPEN DATASET P_OUTPUT FOR OUTPUT IN TEXT MODE.
    IF SY-SUBRC NE 0. EXIT. ENDIF.
    SELECT * FROM BNKA
             WHERE BANKS EQ P_BANKS
             AND   LOEVM NE 'X'
             AND   XPGRO NE 'X'
             ORDER BY BANKS BANKL.
      PERFORM TRANSFER_DATA.
    ENDSELECT.
    CLOSE DATASET P_OUTPUT.
    *&      Transfer the data to the output file
    FORM TRANSFER_DATA.
      OUTREC-BANKS = BNKA-BANKS.
      OUTREC-BANKL = BNKA-BANKL.
      OUTREC-BANKA = BNKA-BANKA.
      OUTREC-PROVZ = BNKA-PROVZ.
      OUTREC-BRNCH = BNKA-BRNCH.
      OUTREC-STRAS = BNKA-STRAS.
      OUTREC-ORT01 = BNKA-ORT01.
      OUTREC-SWIFT = BNKA-SWIFT.
      TRANSFER OUTREC TO P_OUTPUT.
    ENDFORM.                               " READ_IN_DATA

    Hi
    Ways of Performance Tuning
    1.     Selection Criteria
    2.     Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For all Entries
    Select Over more than one Internal table
    Selection Criteria
    1.     Restrict the data to the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement. 
    2.     Select with selection list.
    Points # 1/2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Select Statements   Select Queries
    1.     Avoid nested selects
    2.     Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
    3.     When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    4.     For testing existence, use a Select .. Up To 1 Rows statement instead of a Select-Endselect loop with an Exit. 
    5.     Use Select Single if all primary key fields are supplied in the Where condition .
    Point # 1
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops  only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    Point # 2
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Point # 3
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields . In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
    Point # 4
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    Point # 5
    If all primary key fields are supplied in the Where condition you can even use Select Single.
    Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    Select Statements           contd..  SQL Interface
    1.     Use column updates instead of single-row updates
    to update your database tables.
    2.     For all frequently used Select statements, try to use an index.
    3.     Using buffered tables improves the performance considerably.
    Point # 1
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    Point # 2
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    Point # 3
    Bypassing the buffer increases the network load considerably
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements       contd…           Aggregate Functions
    •     If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
    Some of the Aggregate functions allowed in SAP are  MAX, MIN, AVG, SUM, COUNT, COUNT( * )
    Consider the following extract.
                Maxno = 0.
                Select * from zflight where airln = 'LF' and cntry = 'IN'.
                 Check zflight-fligh > maxno.
                 Maxno = zflight-fligh.
                Endselect.
    The  above mentioned code can be much more optimized by using the following code.
    Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
    Select Statements    contd…For All Entries
    •     The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
         The plus
    •     Large amount of data
    •     Mixing processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
         The Minus
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE size)
    Points that must be considered when using FOR ALL ENTRIES
    •     Check that data is present in the driver table
    •     Sorting the driver table
    •     Removing duplicates from the driver table
    Consider the following piece of extract
    Loop at int_cntry.
           Select single * from zfligh into int_fligh
    where cntry = int_cntry-cntry.
    Append int_fligh.
    Endloop.
    The above mentioned can be more optimized by using the following code.
    Sort int_cntry by cntry.
    Delete adjacent duplicates from int_cntry.
    If NOT int_cntry[] is INITIAL.
                Select * from zfligh appending table int_fligh
                For all entries in int_cntry
                Where cntry = int_cntry-cntry.
    Endif.
    Select Statements    contd…  Select Over more than one Internal table
    1.     It's better to use a view instead of nested Select statements.
    2.     To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key fields are available in the WHERE clause for the tables that are joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    Point # 1
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be more optimized by extracting all the data from the view DD01V
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    Point # 2
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be much more optimized by the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Point # 3
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    1.     Table operations should be done using explicit work areas rather than via header lines.
    2.     Always try to use binary search instead of linear search. But don’t forget to sort your internal table before that.
    3.     A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
    4.     A binary search using secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    6.     Modifying selected components using “ MODIFY itab …TRANSPORTING f1 f2.. “ accelerates the task of updating  a line of an internal table.
    Point # 2
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    IS MUCH FASTER THAN USING
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If TAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    Point # 3
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    This is faster than using
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    Point # 5
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    Point # 6
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7.     Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
    8.    If collect semantics is required, it is always better to use COLLECT rather than READ BINARY and then ADD.
    9.    "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to “ LOOP-APPEND-ENDLOOP.”
    10.   “DELETE ADJACENT DUPLICATES“ accelerates the task of deleting duplicate entries considerably as compared to “ READ-LOOP-DELETE-ENDLOOP”.
    11.   "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to “  DO -DELETE-ENDDO”.
    Point # 7
    Modifying selected components only makes the program faster as compared to Modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    Point # 8
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent
    of the number of entries (i.e. O(1)) .
    Point # 9
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 10
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    Point # 11
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12.   Copying internal tables by using “ITAB2[ ] = ITAB1[ ]” as compared to “LOOP-APPEND-ENDLOOP”.
    13.   Specify the sort key as restrictively as possible to run the program faster.
    Point # 12
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    Point # 13
    “SORT ITAB BY K.” makes the program run faster as compared to “SORT ITAB.”
    Internal Tables         contd…
    Hashed and Sorted tables
    1.     For single read access hashed tables are more optimized as compared to sorted tables.
    2.      For partial sequential access sorted tables are more optimized as compared to hashed tables
    Hashed And Sorted Tables
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access as compared to the following same code for sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Doubts in Select query

    Hai,
    I have problem in my select query,
    My previous select query:
      SELECT  rrcty 
            ryear 
            rbukrs
            rzzpspid 
            SUM( hslvt )
            SUM( hsl01 )
            SUM( hsl02 )
            SUM( hsl03 )
            SUM( hsl04 )
            SUM( hsl05 )
            SUM( hsl06 )
            SUM( hsl07 )
            SUM( hsl08 )
            SUM( hsl09 )
            SUM( hsl10 )
            SUM( hsl11 )
            SUM( hsl12 )
            SUM( mslvt )
            SUM( msl01 )
            SUM( msl02 )
            SUM( msl03 )
            SUM( msl04 )
            SUM( msl05 )
            SUM( msl06 )
            SUM( msl07 )
            SUM( msl08 )
            SUM( msl09 )
            SUM( msl10 )
            SUM( msl11 )
            SUM( msl12 )
             FROM zzsl5t
           INTO TABLE it_erbproj
             WHERE rldnr       EQ  c_rldnr        AND
                rrcty      EQ  c_rrcty        AND
                ryear       EQ  v_srr_fyear    AND
                rbukrs      EQ  v_srr_ccode    AND
                racct      EQ  v_racct        AND
                rzzpspid  IN r_zzpspid         
             GROUP BY  RRCTY ryear RBUKRS rzzpspid.
    Now I want to change the above select query as follows:
    if the user enters input period 03,
    I have to select
    SELECT  rrcty 
            ryear 
            rbukrs
            rzzpspid 
            SUM( hslvt )
            SUM( hsl01 )
            SUM( hsl02 )
            SUM( hsl03 )
            SUM( mslvt )
            SUM( msl01 )
            SUM( msl02 )
            SUM( msl03 )
    only, and not select all the periods.
    How can I give the fields dynamically?
    Please suggest.
    Elamaran

    Duplicate message. Please close this one and look at my answer Doubt in Select query.

  • How to do outer join select query for an APEX report

    Hello everyone,
    I am Ann.
    I have one select statement that calculates the statistics for one month (October 2012 in this example):
    select ph.phase_number
    , sum ( (case
    WHEN ph.date_finished IS NULL OR ph.date_finished > last_day(TO_DATE('Oct 2012','MON YYYY'))
    THEN last_day(TO_DATE('Oct 2012','MON YYYY'))
    ELSE ph.date_finished
    END )
    - ph.date_started + 1) / count(def.def_id) as avg_days
    from phase_membership ph
    inner join court_engagement ce on ph.mpm_eng_id = ce.engagement_id
    inner join defendant def on ce.defendant_id = def.def_id
    where def.active = 1
    and ph.date_started <= last_day(TO_DATE('Oct 2012','MON YYYY'))
    and ph.active = 1
    and UPPER(ce.court_name) LIKE '%'
    group by rollup(phase_number)
    Result is as below
    Phase_Number     AVG_DAYS
    Phase One     8.6666666666666667
    Phase Two     14.6
    Phase Three     12
         11.4615365
    I have other select list mainly list the months between two date value.
    select to_char(which_month, 'MON YYYY') as display_month
    from (
    select add_months(to_date('Aug 2012','MON YYYY'), rownum-1) which_month
    from all_objects
    where
    rownum <= months_between(to_date('Oct 2012','MON YYYY'), add_months(to_date('Aug 2012','MON YYYY'), -1))
    order by which_month )
    Query result is as below
    DISPLAY_MONTH
    AUG 2012
    SEP 2012
    OCT 2012
    Is there any way that I can join these two select statements above to generate a result like:
    Month          Phase Number     Avg days
    Aug 2012     Phase One     8.666
    Sep 2012     Phase One     7.66
    Oct 2012     Phase One     5.66
    Aug 2012     Phase Two     8.666
    Sep 2012     Phase Two     7.66
    Oct 2012     Phase Two     5.66
    Aug 2012     Phase Three     8.666
    Sep 2012     Phase Three     7.66
    Oct 2012     Phase Three     5.66
    Or
    Month          Phase Number     Avg days
    Aug 2012     Phase One     8.666
    Aug 2012     Phase Two     7.66
    Aug 2012     Phase Three     5.66
    Sep 2012     Phase One     8.666
    Sep 2012     Phase Two     7.66
    Sep 2012     Phase Three     5.66
    Oct 2012     Phase One     8.666
    Oct 2012     Phase Two     7.66
    Oct 2012     Phase Three     5.66
    And it can be order by either Phase Number or Month.
    My other colleague suggested I should use a left outer join, but after trying so many ways, I am still stuck.
    One of the select I tried is
    select a.display_month,b.* from (
    select to_char(which_month, 'MON YYYY') as display_month
    from (
    select add_months(to_date('Aug 2012','MON YYYY'), rownum-1) which_month
    from all_objects
    where
    rownum <= months_between(to_date('Oct 2012','MON YYYY'), add_months(to_date('Aug 2012','MON YYYY'), -1))
    order by which_month )) a left outer join
    ( select to_char(ph.date_finished,'MON YYYY') as join_month, ph.phase_number
    , sum ( (case
    WHEN ph.date_finished IS NULL OR ph.date_finished > last_day(TO_DATE(a.display_month,'MON YYYY'))
    THEN last_day(TO_DATE(a.display_month,'MON YYYY'))
    ELSE ph.date_finished
    END )
    - ph.date_started + 1) / count(def.def_id) as avg_days
    from phase_membership ph
    inner join court_engagement ce on ph.mpm_eng_id = ce.engagement_id
    inner join defendant def on ce.defendant_id = def.def_id
    where def.active = 1
    and ph.date_started <= last_day(TO_DATE(a.display_month,'MON YYYY'))
    and ph.active = 1
    and UPPER(ce.court_name) LIKE '%'
    group by to_char(ph.date_finished,'MON YYYY') , rollup(phase_number)) b
    on a.display_month = b.join_month
    but then I get an error
    SQL Error: ORA-00904: "A"."DISPLAY_MONTH": invalid identifier
    I need to display a report on APEX with option for people to download at least CSV format.
    I already have 1 interactive report in the page, so I don't think I can add another interactive report without using the iframe trick.
    If any of you have any ideas, please help.
    Thanks a lot.
    Ann

    First of all, a huge thanks for following this Frank.
    I have just started working here, I think the Oracle version is 11g, but not sure.
    To run Oracle APEX version 4, I think they must have at least 10g R2.
    This report is a bit challenging for me. I have never worked with PARTITION before.
    About the select query you suggested, I ran it and it seems to work fine, but if I try this,
    it returns the error ORA-01843: not a valid month
    DEFINE startmonth = "Aug 2012";
    DEFINE endmonth   = "Oct 2012";
    WITH     all_months     AS
    (
         select add_months(to_date('&startmonth','MON YYYY'), rownum-1) AS which_month
         ,      add_months(to_date('&startmonth','MON YYYY'), rownum  ) AS next_month
         from all_objects
         where
         rownum <= months_between(to_date('&endmonth','MON YYYY'), add_months(to_date('&startmonth','MON YYYY'), -1))
    )
    select TO_CHAR (am.which_month, 'Mon YYYY')     AS month
    ,      ph.phase_number
    , sum ( (case
    WHEN ph.date_finished IS NULL OR ph.date_finished > last_day(TO_DATE(am.which_month,'MON YYYY'))
    THEN last_day(TO_DATE(am.which_month,'MON YYYY'))
    ELSE ph.date_finished
    END )
    - ph.date_started + 1) / count(def.def_id) as avg_days
    FROM           all_months          am
    LEFT OUTER JOIN  phase_membership  ph  PARTITION BY (ph.phase_number)
                                        ON  am.which_month <= ph.date_started
                               AND am.next_month  >  ph.date_started
                               AND ph.date_started <= last_day(TO_DATE(am.which_month,'MON YYYY'))  -- May not be needed
                               AND ph.active = 1
    LEFT OUTER join  court_engagement  ce  on  ph.mpm_eng_id = ce.engagement_id
                                        and ce.court_name IS NOT NULL  -- or something involving LIKE
    LEFT OUTER join  defendant         def on  ce.defendant_id = def.def_id
                                        AND def.active = 1
    group by rollup(phase_number, am.which_month)
    ORDER BY  am.which_month
    ,            ph.phase_number
    ;
    Here are the shortened versions of the three tables:
    A_DEFENDANT, A_ENGAGEMENT, A_PHASE_MEMBERSHIP
    CREATE TABLE "A_DEFENDANT"
        "DEF_ID"     NUMBER NOT NULL ENABLE,
        "FIRST_NAME" VARCHAR2(50 BYTE),
        "SURNAME"    VARCHAR2(20 BYTE) NOT NULL ENABLE,
        "DOB" DATE NOT NULL ENABLE,
        "ACTIVE" NUMBER(2,0) DEFAULT 1 NOT NULL ENABLE,
        CONSTRAINT "A_DEFENDANT_PK" PRIMARY KEY ("DEF_ID"))
    Sample Data
    Insert into A_DEFENDANT (DEF_ID,FIRST_NAME,SURNAME,DOB,ACTIVE) values (101,'Joe','Bloggs',to_date('12/12/99','DD/MM/RR'),1);
    Insert into A_DEFENDANT (DEF_ID,FIRST_NAME,SURNAME,DOB,ACTIVE) values (102,'John','Smith',to_date('20/05/00','DD/MM/RR'),1);
    Insert into A_DEFENDANT (DEF_ID,FIRST_NAME,SURNAME,DOB,ACTIVE) values (103,'Jane','Black',to_date('15/02/98','DD/MM/RR'),1);
    Insert into A_DEFENDANT (DEF_ID,FIRST_NAME,SURNAME,DOB,ACTIVE) values (104,'Minnie','Mouse',to_date('13/12/88','DD/MM/RR'),0);
    Insert into A_DEFENDANT (DEF_ID,FIRST_NAME,SURNAME,DOB,ACTIVE) values (105,'Daisy','Duck',to_date('05/08/00','DD/MM/RR'),1);
    CREATE TABLE "A_ENGAGEMENT"
        "ENGAGEMENT_ID" NUMBER NOT NULL ENABLE,
        "COURT_NAME"    VARCHAR2(50 BYTE) NOT NULL ENABLE,
        "DATE_REFERRED" DATE,
        "DETERMINATION_HEARING_DATE" DATE,
        "DATE_JOINED_COURT" DATE,
        "DATE_TREATMENT_STARTED" DATE,
        "DATE_TERMINATED" DATE,
        "TERMINATION_TYPE" VARCHAR2(50 BYTE),
        "ACTIVE"           NUMBER(2,0) DEFAULT 1 NOT NULL ENABLE,
        "DEFENDANT_ID"     NUMBER,
        CONSTRAINT "A_ENGAGEMENT_PK" PRIMARY KEY ("ENGAGEMENT_ID"))
    Insert into A_ENGAGEMENT (ENGAGEMENT_ID,COURT_NAME,DATE_REFERRED,DETERMINATION_HEARING_DATE,DATE_JOINED_COURT,DATE_TREATMENT_STARTED,DATE_TERMINATED,TERMINATION_TYPE,ACTIVE,DEFENDANT_ID) values (1,'AA',to_date('12/08/12','DD/MM/RR'),null,to_date('12/08/12','DD/MM/RR'),null,null,null,1,101);
    Insert into A_ENGAGEMENT (ENGAGEMENT_ID,COURT_NAME,DATE_REFERRED,DETERMINATION_HEARING_DATE,DATE_JOINED_COURT,DATE_TREATMENT_STARTED,DATE_TERMINATED,TERMINATION_TYPE,ACTIVE,DEFENDANT_ID) values (2,'BB',to_date('01/09/12','DD/MM/RR'),null,to_date('02/09/12','DD/MM/RR'),null,null,null,1,102);
    Insert into A_ENGAGEMENT (ENGAGEMENT_ID,COURT_NAME,DATE_REFERRED,DETERMINATION_HEARING_DATE,DATE_JOINED_COURT,DATE_TREATMENT_STARTED,DATE_TERMINATED,TERMINATION_TYPE,ACTIVE,DEFENDANT_ID) values (3,'AA',to_date('02/09/12','DD/MM/RR'),null,to_date('15/09/12','DD/MM/RR'),null,null,null,1,103);
    Insert into A_ENGAGEMENT (ENGAGEMENT_ID,COURT_NAME,DATE_REFERRED,DETERMINATION_HEARING_DATE,DATE_JOINED_COURT,DATE_TREATMENT_STARTED,DATE_TERMINATED,TERMINATION_TYPE,ACTIVE,DEFENDANT_ID) values (4,'BB',to_date('01/10/12','DD/MM/RR'),null,to_date('02/10/12','DD/MM/RR'),null,null,null,1,105);
    CREATE TABLE "A_PHASE_MEMBERSHIP"
        "MPM_ID"       NUMBER NOT NULL ENABLE,
        "MPM_ENG_ID"   NUMBER NOT NULL ENABLE,
        "PHASE_NUMBER" VARCHAR2(50 BYTE),
        "DATE_STARTED" DATE NOT NULL ENABLE,
        "DATE_FINISHED" DATE,
        "NOTES"  VARCHAR2(2000 BYTE),
        "ACTIVE" NUMBER(2,0) DEFAULT 1 NOT NULL ENABLE,
        CONSTRAINT "A_PHASE_MEMBERSHIP_PK" PRIMARY KEY ("MPM_ID"))
    Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (1,1,'PHASE ONE',to_date('15/09/12','DD/MM/RR'),to_date('20/09/12','DD/MM/RR'),null,1);
    Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (2,1,'PHASE TWO',to_date('21/09/12','DD/MM/RR'),to_date('29/09/12','DD/MM/RR'),null,1);
    Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (3,2,'PHASE ONE',to_date('12/09/12','DD/MM/RR'),null,null,1);
    Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (4,3,'PHASE ONE',to_date('20/09/12','DD/MM/RR'),to_date('01/10/12','DD/MM/RR'),null,1);
    Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (5,3,'PHASE TWO',to_date('02/10/12','DD/MM/RR'),to_date('15/10/12','DD/MM/RR'),null,1);
    Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (6,4,'PHASE ONE',to_date('03/10/12','DD/MM/RR'),to_date('10/10/12','DD/MM/RR'),null,1);
    Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (7,3,'PHASE THREE',to_date('17/10/12','DD/MM/RR'),null,null,0);
    Insert into A_PHASE_MEMBERSHIP (MPM_ID,MPM_ENG_ID,PHASE_NUMBER,DATE_STARTED,DATE_FINISHED,NOTES,ACTIVE) values (8,1,'PHASE THREE',to_date('30/09/12','DD/MM/RR'),to_date('16/10/12','DD/MM/RR'),null,1);
    The requirements are:
    The user must be able to request the extract for one or more calendar months, e.g.
    May 2013
    May 2013 – Sep 2013.
    The file must contain a separate row for each calendar month in the requested range. Each row must contain the statistics computed for that calendar month.
    The file must also include a row of totals.
    The user must be able to request the extract for either Waitakere or Auckland or Consolidated (both courts’ statistics accumulated).
    Then the part that I am stuck is
    For each monitoring phase:
    Phase name (e.g. “Phase One”)
    Avg_time_in_phase_all_particip
    for each phase name,
    Add up days in each “phase name” Monitoring Phase, calculated as:
    If Monitoring Phase.Date Finished is NULL or > month end date,
    (Month end date minus Monitoring Phase.Date Started plus 1)
    Otherwise (phase is complete)
    (Monitoring Phase.Date Finished minus Monitoring Phase.Date Started plus 1)
    Divide by the number of all participants who have engaged in “phase name”.
    These are the words of the Business Analyst.
    I tried to do as required but still struggle to identify the end_month for the above formula across the range of months.
    Of course, I could write two nested cursors: the first one runs over the list of months, then for each month runs the parameterised report.
    But I would prefer, if possible, to just use SQL statements, or at least PL/SQL that returns a query.
    That way, I can create an APEX report and use its CSV extract function.
    Yes, you are right, court_name is one of the selection parameters.
    And the statistics are not exactly for one month. It is more about trying to identify all phases that are running through the specified month (even if phase.date_started is before the month start).
    This is the reason why I put the condition AND ph.date_started <= last_day(TO_DATE('Oct 2012','MON YYYY')) (otherwise I get negative avg_days)
    User can choose either one court "AA" or "BB" or combined which is all figures.
    Sorry for bombarding you a lot of information.
    Thanks a lot, again.
    Edited by: Ann586341 on Oct 29, 2012 9:57 PM
    Edited by: Ann586341 on Oct 29, 2012 9:59 PM
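    A note on the ORA-01843 (an educated guess from the query as posted): am.which_month is already a DATE, so wrapping it in TO_DATE(am.which_month, 'MON YYYY') forces an implicit DATE-to-VARCHAR2 conversion using the session's NLS_DATE_FORMAT (often DD-MON-RR) and then tries to re-parse that string with the 'MON YYYY' mask, which fails. Using the DATE value directly avoids the double conversion, e.g. in the CASE expression:
    -- sketch: replace the TO_DATE(am.which_month, ...) wrappers with the DATE value
    WHEN ph.date_finished IS NULL OR ph.date_finished > last_day(am.which_month)
    THEN last_day(am.which_month)
    ELSE ph.date_finished
    The same substitution applies to the other TO_DATE(am.which_month, ...) calls in the query.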

  • Alternative for "check select-options" for select query.

    hi all,
    My report is using the "GET" syntax, followed by "CHECK SELECT-OPTIONS".
    My senior told me to replace the GET syntax with a select query.
    Now, my program also has a dynamic selection screen.
    The dynamic selection screen has almost all the fields of the database table.
    So, what is the way to replace this CHECK SELECT-OPTIONS while applying a select query?
    Can you tell me how dynamic selection screen fields are populated? Are they stored in some internal table?
    Thanks for the help.

    HI
    Here is some points about LDB.
    Logical databases are special ABAP programs that retrieve data and make it available to application programs. The most common use of logical databases is still to read data from database tables by linking them to executable ABAP programs.
    However, from Release 4.5A, it has also been possible to call logical databases using the function module LDB_PROCESS. This allows you to call several logical databases from any ABAP program, nested in any way. It is also possible to call a logical database more than once in a program, if it has been programmed to allow this. This is particularly useful for programs with type 1.
    Logical databases contain Open SQL statements that read data from the database. You therefore do not need to use SQL in your own programs. The logical database reads the data, stores it in the program if necessary, and then passes it line by line to the application program or to the function module LDB_PROCESS using an interface work area.
    For further on LDB's refer to this link.
    [http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9b5e35c111d1829f0000e829fbfe/content.htm]
    Hope this will help.
    Reward if helpful.
    Sumit Agarwal

  • Basic query regarding work-area and select query

    hi
    dear sdn members,
    thanks to all for solving all my queries up till now.
    I am stuck on a problem and need help.
    1)  Why is a work area used? What is its sole purpose?
    2)  What are the different types of select query? Only coding examples please.
    note: no links pls
    regards,
    virus

    hi,
    Work Area
    Description for a data object that is particularly useful when working with internal tables or database tables as a source for changing operations or a target for reading operations.
    WORKAREA is a structure that can hold only one record at a time. It is a collection of fields. We use a workarea as we cannot directly read from a table; in order to interact with a table we need a workarea. When a Select statement is executed on a table, the first record is read and put into the header of the table, and from there put into the header or the workarea (of the same structure as that of the table) of the internal table, and then transferred to the body of the internal table or displayed directly from the workarea.
    Each row in a table is a record and each column is a field.
    While adding or retrieving records to / from internal table we have to keep the record temporarily.
    The area where this record is kept is called as work area for the internal table. The area must have the same structure as that of internal table. An internal table consists of a body and an optional header line.
    Header line is a implicit work area for the internal table. It depends on how the internal table is declared that the itab will have the header line or not.
    e.g.
    data: begin of itab occurs 10,
    ab type c,
    cd type i,
    end of itab. " this table will have the header line.
    data: wa_itab like itab. " explicit work area for itab
    data: itab1 like itab occurs 10. " table is without header line.
    The header line is a field string with the same structure as a row of the body, but it can only hold a single row.
    It is a buffer used to hold each record before it is added or each record as it is retrieved from the internal table. It is the default work area for the internal table.
    With header line
    SELECT.
    Put the cursor on that word and press F1. You can see the whole documentation for select statements.
    select statements :
    SELECT result
    FROM source
    INTO|APPENDING target
    [[FOR ALL ENTRIES IN itab] WHERE sql_cond]
    Effect
    SELECT is an Open-SQL-statement for reading data from one or several database tables into data objects.
    The select statement reads a result set (whose structure is determined in result ) from the database tables specified in source, and assigns the data from the result set to the data objects specified in target. You can restrict the result set using the WHERE addition. The addition GROUP BY compresses several database rows into a single row of the result set. The addition HAVING restricts the compressed rows. The addition ORDER BY sorts the result set.
    The data objects specified in target must match the result set result. This means that the result set is either assigned to the data objects in one step, or by row, or by packets of rows. In the second and third case, the SELECT statement opens a loop, which must be closed using ENDSELECT. For every loop pass, the SELECT statement assigns a row or a packet of rows to the data objects specified in target. If the last row was assigned or if the result set is empty, then SELECT branches to ENDSELECT. A database cursor is opened implicitly to process a SELECT loop, and is closed again when the loop is ended. You can end the loop using the statements from the section leave loops.
    Up to the INTO resp. APPENDING addition, the entries in the SELECT statement define which data should be read by the database in which form. This requirement is translated in the database interface for the database system's programming interface and is then passed to the database system. The data are read in packets by the database and are transported to the application server by the database server. On the application server, the data are transferred to the ABAP program's data objects in accordance with the data specified in the INTO and APPENDING additions.
    System Fields
    The SELECT statement sets the values of the system fields sy-subrc and sy-dbcnt.
    sy-subrc Relevance
    0 The SELECT statement sets sy-subrc to 0 for every pass by value to an ABAP data object. The ENDSELECT statement sets sy-subrc to 0 if at least one row was transferred in the SELECT loop.
    4 The SELECT statement sets sy-subrc to 4 if the result set is empty, that is, if no data was found in the database.
    8 The SELECT statement sets sy-subrc to 8 if the FOR UPDATE addition is used in result, without the primary key being specified fully after WHERE.
    After every value that is transferred to an ABAP data object, the SELECT statement sets sy-dbcnt to the number of rows that were transferred. If the result set is empty, sy-dbcnt is set to 0.
    Notes
    Outside classes, you do not need to specify the target area with INTO or APPENDING if a single database table or a single view is specified statically after FROM, and a table work area dbtab was declared with the TABLES statement for the corresponding database table or view. In this case, the system supplements the SELECT-statement implicitly with the addition INTO dbtab.
    Although the WHERE-condition is optional, you should always specify it for performance reasons, and the result set should not be restricted on the application server.
    SELECT-loops can be nested. For performance reasons, you should check whether a join or a sub-query would be more effective.
    Within a SELECT-loop you cannot execute any statements that lead to a database commit and consequently cause the corresponding database cursor to close.
    SELECT - result
    Syntax
    ... lines columns ... .
    Effect
    The data in result defines whether the resulting set consists of multiple rows (table-like structure) or a single row ( flat structure). It specifies the columns to be read and defines their names in the resulting set. Note that column names from the database table can be changed. For single columns, aggregate expressions can be used to specify aggregates. Identical rows in the resulting set can be excluded, and individual rows can be protected from parallel changes by another program.
    The data in result consists of data for the rows lines and for the columns columns.
    SELECT - lines
    Syntax
    ... { SINGLE }
    | { { } } ... .
    Alternatives:
    1. ... SINGLE
    2. ... { }
    Effect
    The data in lines specifies that the resulting set has either multiple lines or a single line.
    Alternative 1
    ... SINGLE
    Effect
    If SINGLE is specified, the resulting set has a single line. If the remaining additions to the SELECT command select more than one line from the database, the first line that is found is entered into the resulting set. The data objects specified after INTO may not be internal tables, and the APPENDING addition may not be used.
    An exclusive lock can be set for this line using the FOR UPDATE addition when a single line is being read with SINGLE. The SELECT command is used in this case only if all primary key fields in logical expressions linked by AND are checked to make sure they are the same in the WHERE condition. Otherwise, the resulting set is empty and sy-subrc is set to 8. If the lock causes a deadlock, an exception occurs. If the FOR UPDATE addition is used, the SELECT command circumvents SAP buffering.
    Note
    When SINGLE is being specified, the lines to be read should be clearly specified in the WHERE condition, for the sake of efficiency. When the data is read from a database table, the system does this by specifying comparison values for the primary key.
    Alternative 2
    Effect
    If SINGLE is not specified and if columns does not contain only aggregate expressions, the resulting set has multiple lines. All database lines that are selected by the remaining additions of the SELECT command are included in the resulting list. If the ORDER BY addition is not used, the order of the lines in the resulting list is not defined and, if the same SELECT command is executed multiple times, the order may be different each time. A data object specified after INTO can be an internal table and the APPENDING addition can be used. If no internal table is specified after INTO or APPENDING, the SELECT command triggers a loop that has to be closed using ENDSELECT.
    If multiple lines are read without SINGLE, the DISTINCT addition can be used to exclude duplicate lines from the resulting list. If DISTINCT is used, the SELECT command circumvents SAP buffering. DISTINCT cannot be used in the following situations:
    If a column specified in columns has the type STRING, RAWSTRING, LCHAR or LRAW
    If the system tries to access pool or cluster tables and single columns are specified in columns.
    Note
    When specifying DISTINCT, note that you have to carry out sort operations in the database system for this.
    SELECT - columns
    Syntax
    | { {col1|aggregate( col1 )}
    {col2|aggregate( col2 )} ... }
    | (column_syntax) ... .
    Alternatives:
    1. ... *
    2. ... {col1|aggregate( col1 )}
    {col2|aggregate( col2 )} ...
    3. ... (column_syntax)
    Effect
    The input in columns determines which columns are used to build the resulting set.
    Alternative 1
    Effect
    If * is specified, the resulting set is built based on all columns in the database tables or views specified after FROM, in the order given there. The columns in the resulting set take on the name and data type from the database tables or views. Only one data object can be specified after INTO.
    Note
    If multiple database tables are specified after FROM, you cannot prevent multiple columns from getting the same name when you specify *.
    Alternative 2
    ... {col1|aggregate( col1 )}
    {col2|aggregate( col2 )} ...
    Effect
    A list of column labels col1 col2 ... is specified in order to build the resulting list from individual columns. An individual column can be specified directly or as an argument of an aggregate function aggregate. The order in which the column labels are specified is up to you and defines the order of the columns in the resulting list. Only if a column of the type LCHAR or LRAW is listed does the corresponding length field also have to be specified directly before it. An individual column can be specified multiple times.
    The addition AS can be used to define an alternative column name a1 a2 ... with a maximum of fourteen digits in the resulting set for every column label col1 col2 .... The system uses the alternative column name in the additions INTO|APPENDING CORRESPONDING FIELDS and ORDER BY. .
    Column labels
    The following column labels are possible:
    If only a single database table or a single view is specified after FROM, the column labels in the database table - that is, the names of the components comp1 comp2... - can be specified directly for col1 col2 ... in the structure of the ABAP Dictionary.
    If the name of the component occurs in multiple database tables of the FROM addition, but the desired database table or the view dbtab is only specified once after FROM, the names dbtab~comp1 dbtab~comp2 ... have to be specified for col1 col2 .... comp1 comp2 ... are the names of the components in the structure of the ABAP Dictionary.
    If the desired database table or view occurs multiple times after FROM, the names tabalias~comp1 tabalias~comp2 ... have to be specified for col1 col2 .... tabalias is the alternative table name of the database table or view defined after FROM, and comp1 comp2 ... are the names of the components in the structure of the ABAP Dictionary.
    The data type of a single column in the resulting list is the datatype of the corresponding component in the ABAP Dictionary. The corresponding data object after INTO or APPENDING has to be selected accordingly.
    Note
    If multiple database tables are specified after FROM, you can use alternative names when specifying single columns to avoid having multiple columns with the same name.
    Example
    Read specific columns of a single row.
    DATA wa TYPE spfli.
    SELECT SINGLE carrid connid cityfrom cityto
    INTO CORRESPONDING FIELDS OF wa
    FROM spfli
    WHERE carrid EQ 'LH' AND connid EQ '0400'.
    IF sy-subrc EQ 0.
    WRITE: / wa-carrid, wa-connid, wa-cityfrom, wa-cityto.
    ENDIF.
    Alternative 3
    ... (column_syntax)
    Effect
    Instead of static data, a data object column_syntax in brackets can be specified, which, when the command is executed, either contains the syntax shown with the static data, or is initial. The data object column_syntax can be a character-type data object or an internal table with a character-type data type. The syntax in column_syntax, like in the ABAP editor, is not case-sensitive. When specifying an internal table, you can distribute the syntax over multiple rows.
    If column_syntax is initial when the command is executed, columns is implicitly set to * and all columns are read.
    If columns are specificied dynamically without the SINGLE addition, the resulting set is always regarded as having multiple rows.
    Notes
    Before Release 6.10, you could only specify an internal table with a flat character-type row type for column_syntax with a maximum of 72 characters. Also, before Release 6.10, if you used the DISTINCT addition for dynamic access to pool tables or cluster tables, this was ignored, but since release 6.10, this causes a known exception.
    If column_syntax is an internal table with header line, the table body and not the header line is evaluated.
    Example
    Read out how many flights go to and from a city. The SELECT command is implemented only once in a sub-program. The column data, including aggregate function and the data after GROUP BY, is dynamic. Instead of adding the column data to an internal l_columns table, you could just as easily concatenate it in a character-type l_columns field.
    PERFORM my_select USING `CITYFROM`.
    ULINE.
    PERFORM my_select USING `CITYTO`.
    FORM my_select USING l_group TYPE string.
    DATA: l_columns TYPE TABLE OF string,
    l_container TYPE string,
    l_count TYPE i.
    APPEND l_group TO l_columns.
    APPEND `count( * )` TO l_columns.
    SELECT (l_columns)
    FROM spfli
    INTO (l_container, l_count)
    GROUP BY (l_group).
    WRITE: / l_count, l_container.
    ENDSELECT.
    ENDFORM.
    SELECT - aggregate
    Syntax
    ... { MAX( col )
    | MIN( col )
    | AVG( col )
    | SUM( col )
    | COUNT( DISTINCT col )
    | COUNT( * )
    | count(*) } ... .
    Effect
    As many of the specified column labels as you like can be listed in the SELECT command as arguments of the above aggregate expression. In aggregate expressions, a single value is calculated from the values of multiple rows in a column as follows (note that the addition DISTINCT excludes double values from the calculation):
    MAX( col ) Determines the maximum value of the value in the column col in the resulting set or in the current group.
    MIN( col ) Determines the minimum value of the content of the column col in the resulting set or in the current group.
    AVG( col ) Determines the average value of the content of the column col in the resulting set or in the current group. The data type of the column has to be numerical.
    SUM( col ) Determines the sum of the content of the column col in the resulting set or in the current group. The data type of the column has to be numerical.
    COUNT( DISTINCT col ) Determines the number of different values in the column col in the resulting set or in the current group.
    COUNT( * ) (or count(*)) Determines the number of rows in the resulting set or in the current group. No column label is specified in this case.
    If you are using aggregate expressions, all column labels that are not listed as an argument of an aggregate function are listed after the addition GROUP BY. The aggregate functions evaluate the content of the groups defined by GROUP BY in the database system and transfer the result to the combined rows of the resulting set.
    The data type of aggregate expressions with the function MAX, MIN or SUM is the data type of the corresponding column in the ABAP Dictionary. Aggregate expressions with the function AVG have the data type FLTP, and those with COUNT have the data type INT4. The corresponding data object after INTO or APPENDING has to be selected accordingly.
    Note the following points when using aggregate expressions:
    If the addition FOR ALL ENTRIES is used in front of WHERE, or if cluster or pool tables are listed after FROM, no other aggregate expressions apart from COUNT( * ) can be used.
    Columns of the type STRING or RAWSTRING cannot be used with aggregate functions.
    When aggregate expressions are used, the SELECT command makes it unnecessary to use SAP buffering.
    Null values are not included in the calculation for the aggregate functions. The result is a null value only if all the rows in the column in question contain the null value.
    If only aggregate expressions are used after SELECT, the results set has one row and the addition GROUP BY is not necessary. If a non-table type target area is specified after INTO, the command ENDSELECT cannot be used together with the addition SINGLE. If the aggregate expression count( * ) is not being used, an internal table can be specified after INTO, and the first row of this table is filled.
    If aggregate functions are used without GROUP BY being specified at the same time, the resulting set also contains a row if no data is found in the database. If count( * ) is used, the column in question contains the value 0. The columns in the other aggregate functions contain initial values. This row is assigned to the data object specified after INTO, and unless count( * ) is being used exclusively, sy-subrc is set to 0 and sy-dbcnt is set to 1. If count( *) is used exclusively, the addition INTO can be omitted and if no data can be found in the database, sy-subrc is set to 4 and sy-dbcnt is set to 0.
    if helpful reward points
