Need help to improve the performance of a select query

Hi,
I have been preparing a report which involves data fetched from 4 to 5 different tables, with calculations to be performed on some columns as well.
I planned to write a single cursor to populate one temp table. I have used inline views and EXISTS quite frequently in the select query. Please go through the query and suggest a better way to restructure it.
cursor c_acc_pickup_incr(p_branch_code varchar2, p_applDate date, p_st_dt date, p_ed_dt date) is
select sca.branch_code "BRANCH",
sca.cust_ac_no "ACCOUNT",
to_char(p_applDate, 'YYYYMM') "YEARMONTH",
sca.ccy "CURRENCY",
sca.account_class "PRODUCT",
sca.cust_no "CUSTOMER",
sca.ac_desc "DESCRIPTION",
null "LOW_BAL",
null "HIGH_BAL",
null "AVG_CR_BAL",
null "AVG_DR_BAL",
null "CR_DAYS",
null "DR_DAYS",
--null                                 "CR_TURNOVER",       
--null                                 "DR_TURNOVER",       
null "DR_OD_DAYS",
(select sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
(case when (p_applDate >= sca.tod_limit_start_date and
p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)) then
sca.tod_limit else 0 end) dd
from getm_facility gf, sttm_cust_account_linkages scal
where gf.line_code || gf.line_serial = scal.linked_ref_no
and scal.cust_ac_no = sca.cust_ac_no) "OD_LIMIT",
--sc.credit_rating                      "CR_GRADE",        
null "AVG_NET_BAL",
null "UNAUTH_OD_AMT",
sca.acy_blocked_amount "AMT_BLOCKED",
(select sum(amt)
from ictb_entries_history ieh
where ieh.acc = sca.cust_ac_no
and ieh.brn = sca.branch_code
and ieh.drcr = 'D'
and ieh.liqn = 'Y'
and ieh.entry_passed = 'Y'
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select * from ictm_pr_int ipi, ictm_rule_frm irf
where ipi.product_code = ieh.prod
and ipi.rule = irf.rule_id
and irf.book_flag = 'B')) "DR_INTEREST",
(select sum(amt)
from ictb_entries_history ieh
where ieh.acc = sca.cust_ac_no
and ieh.brn = sca.branch_code
and ieh.drcr = 'C'
and ieh.liqn = 'Y'
and ieh.entry_passed = 'Y'
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select * from ictm_pr_int ipi, ictm_rule_frm irf
where ipi.product_code = ieh.prod
and ipi.rule = irf.rule_id
and irf.book_flag = 'B')) "CR_INTEREST",
(select sum(amt) from ictb_entries_history ieh
where ieh.brn = sca.branch_code
and ieh.acc = sca.cust_ac_no
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select product_code
from ictm_product_definition ipd
where ipd.product_code = ieh.prod
and ipd.product_type = 'C')) "FEE_INCOME",
sca.record_stat "ACC_STATUS",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and not exists (select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null))
then 1 else 0 end "NEW_ACC_FOR_THE_MONTH",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
and not exists (select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null))
then 1 else 0 end "NEW_ACC_FOR_NEW_CUST",
(select 1 from dual
where exists (select 1 from ictm_td_closure_renew itcr
where itcr.brn = sca.branch_code
and itcr.acc = sca.cust_ac_no
and itcr.renewal_date = sysdate)
or exists (select 1 from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null)) "RENEWED_OR_ROLLOVER",
(select maturity_date from ictm_acc ia
where ia.brn = sca.branch_code
and ia.acc = sca.cust_ac_no) "MATURITY_DATE",
sca.ac_stat_no_dr "DR_DISALLOWED",
sca.ac_stat_no_cr "CR_DISALLOWED",
sca.ac_stat_block "BLOCKED_ACC", --Not Reqd
sca.ac_stat_dormant "DORMANT_ACC",
sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
sca.ac_stat_frozen "FROZEN_ACC",
sca.ac_open_date "ACC_OPENING_DT",
sca.address1 "ADD_LINE_1",
sca.address2 "ADD_LINE_2",
sca.address3 "ADD_LINE_3",
sca.address4 "ADD_LINE_4",
sca.joint_ac_indicator "JOINT_ACC",
sca.acy_avl_bal "CR_BAL",
0 "DR_BAL",
0 "CR_BAL_LCY", t
0 "DR_BAL_LCY",
null "YTD_CR_MOVEMENT",
null "YTD_DR_MOVEMENT",
null "YTD_CR_MOVEMENT_LCY",
null "YTD_DR_MOVEMENT_LCY",
null "MTD_CR_MOVEMENT",
null "MTD_DR_MOVEMENT",
null "MTD_CR_MOVEMENT_LCY",
null "MTD_DR_MOVEMENT_LCY",
'N' "BRANCH_TRFR", --New
sca.provision_amount "PROVISION_AMT",
sca.account_type "ACCOUNT_TYPE",
nvl(sca.tod_limit, 0) "TOD_LIMIT",
nvl(sca.sublimit, 0) "SUB_LIMIT",
nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
from sttm_cust_account sca, sttm_customer sc
where sca.branch_code = p_branch_code
and sca.cust_no = sc.customer_no
and ( exists (select 1 from actb_daily_log adl
where adl.ac_no = sca.cust_ac_no
and adl.ac_branch = sca.branch_code
and adl.trn_dt = p_applDate
and adl.auth_stat = 'A')
or exists (select 1 from catm_amount_blocks cab
where cab.account = sca.cust_ac_no
and cab.branch = sca.branch_code
and cab.effective_date = p_applDate
and cab.auth_stat = 'A')
or exists (select 1 from ictm_td_closure_renew itcr
where itcr.acc = sca.cust_ac_no
and itcr.brn = sca.branch_code
and itcr.renewal_date = p_applDate)
or exists (select 1 from sttm_ac_stat_change sasc
where sasc.cust_ac_no = sca.cust_ac_no
and sasc.branch_code = sca.branch_code
and sasc.status_change_date = p_applDate
and sasc.auth_stat = 'A')
or exists (select 1 from cstb_acc_brn_trfr_log cabtl
where cabtl.branch_code = sca.branch_code
and cabtl.cust_ac_no = sca.cust_ac_no
and cabtl.process_status = 'S'
and cabtl.process_date = p_applDate)
or exists (select 1 from sttbs_provision_history sph
where sph.branch_code = sca.branch_code
and sph.cust_ac_no = sca.cust_ac_no
and sph.esn_date = p_applDate)
or exists (select 1 from sttms_cust_account_dormancy scad
where scad.branch_code = sca.branch_code
and scad.cust_ac_no = sca.cust_ac_no
and scad.dormancy_start_dt = p_applDate)
or sca.maker_dt_stamp = p_applDate
or sca.status_since = p_applDate
);
l_tb_acc_det ty_tb_acc_det_int;
l_brnrec cvpks_utils.rec_brnlcy;
l_acbr_lcy sttms_branch.branch_lcy%type;
l_lcy_amount actbs_daily_log.lcy_amount%type;
l_xrate number;
l_dt_rec sttm_dates%rowtype;
l_acc_rec sttm_cust_account%rowtype;
l_acc_stat_row ty_r_acc_stat;
Edited by: user13710379 on Jan 7, 2012 12:18 AM

I see it more like shown below (possibly with no inline selects at all).
Try to get rid of the remaining inline selects (left as an exercise ;) ) and rewrite the traditional joins as ANSI joins, since problems might arise when mixing the two syntaxes. I have to leave, so I don't have time to complete the query.
select sca.branch_code "BRANCH",
       sca.cust_ac_no "ACCOUNT",
       to_char(p_applDate, 'YYYYMM') "YEARMONTH",
       sca.ccy "CURRENCY",
       sca.account_class "PRODUCT",
       sca.cust_no "CUSTOMER",
       sca.ac_desc "DESCRIPTION",
       null "LOW_BAL",
       null "HIGH_BAL",
       null "AVG_CR_BAL",
       null "AVG_DR_BAL",
       null "CR_DAYS",
       null "DR_DAYS",
--     null "CR_TURNOVER",
--     null "DR_TURNOVER",
       null "DR_OD_DAYS",
       w.dd "OD_LIMIT",
--     sc.credit_rating "CR_GRADE",
       null "AVG_NET_BAL",
       null "UNAUTH_OD_AMT",
       sca.acy_blocked_amount "AMT_BLOCKED",
       x.dr_int "DR_INTEREST",
       x.cr_int "CR_INTEREST",
       y.fee_amt "FEE_INCOME",
       sca.record_stat "ACC_STATUS",
       case when trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
             and not exists(select 1
                              from ictm_tdpayin_details itd
                             where itd.multimode_payopt = 'Y'
                               and itd.brn = sca.branch_code
                               and itd.acc = sca.cust_ac_no
                               and itd.multimode_offset_brn is not null
                               and itd.multimode_tdoffset_acc is not null)
            then 1
            else 0
       end "NEW_ACC_FOR_THE_MONTH",
       case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
             and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
             and not exists(select 1
                              from ictm_tdpayin_details itd
                             where itd.multimode_payopt = 'Y'
                               and itd.brn = sca.branch_code
                               and itd.acc = sca.cust_ac_no
                               and itd.multimode_offset_brn is not null
                               and itd.multimode_tdoffset_acc is not null))
            then 1
            else 0
       end "NEW_ACC_FOR_NEW_CUST",
       (select 1 from dual
         where exists(select 1
                        from ictm_td_closure_renew itcr
                       where itcr.brn = sca.branch_code
                         and itcr.acc = sca.cust_ac_no
                          and itcr.renewal_date = sysdate)
            or exists(select 1
                        from ictm_tdpayin_details itd
                       where itd.multimode_payopt = 'Y'
                         and itd.brn = sca.branch_code
                         and itd.acc = sca.cust_ac_no
                         and itd.multimode_offset_brn is not null
                          and itd.multimode_tdoffset_acc is not null)
       ) "RENEWED_OR_ROLLOVER",
       m.maturity_date "MATURITY_DATE",
       sca.ac_stat_no_dr "DR_DISALLOWED",
       sca.ac_stat_no_cr "CR_DISALLOWED",
--     sca.ac_stat_block "BLOCKED_ACC", --Not Reqd
       sca.ac_stat_dormant "DORMANT_ACC",
       sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
       sca.ac_stat_frozen "FROZEN_ACC",
       sca.ac_open_date "ACC_OPENING_DT",
       sca.address1 "ADD_LINE_1",
       sca.address2 "ADD_LINE_2",
       sca.address3 "ADD_LINE_3",
       sca.address4 "ADD_LINE_4",
       sca.joint_ac_indicator "JOINT_ACC",
       sca.acy_avl_bal "CR_BAL",
       0 "DR_BAL",
       0 "CR_BAL_LCY", t
       0 "DR_BAL_LCY",
       null "YTD_CR_MOVEMENT",
       null "YTD_DR_MOVEMENT",
       null "YTD_CR_MOVEMENT_LCY",
       null "YTD_DR_MOVEMENT_LCY",
       null "MTD_CR_MOVEMENT",
       null "MTD_DR_MOVEMENT",
       null "MTD_CR_MOVEMENT_LCY",
       null "MTD_DR_MOVEMENT_LCY",
       'N' "BRANCH_TRFR", --New
       sca.provision_amount "PROVISION_AMT",
       sca.account_type "ACCOUNT_TYPE",
       nvl(sca.tod_limit, 0) "TOD_LIMIT",
       nvl(sca.sublimit, 0) "SUB_LIMIT",
       nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
       nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
  from sttm_cust_account sca,
       sttm_customer sc,
        (select sca.cust_ac_no,
                sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
                    case when p_applDate >= sca.tod_limit_start_date
                          and p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)
                         then sca.tod_limit else 0
                    end dd
           from sttm_cust_account sca,
                getm_facility gf,
                sttm_cust_account_linkages scal
          where gf.line_code || gf.line_serial = scal.linked_ref_no
            and scal.cust_ac_no = sca.cust_ac_no
          group by sca.cust_ac_no, sca.tod_limit_start_date,
                   sca.tod_limit_end_date, sca.tod_limit
       ) w,
       (select acc,
               brn,
               sum(decode(drcr,'D',amt)) dr_int,
               sum(decode(drcr,'C',amt)) cr_int
          from ictb_entries_history ieh
         where ent_dt between p_st_dt and p_ed_dt
           and drcr in ('C','D')
           and liqn = 'Y'
           and entry_passed = 'Y'
           and exists(select null
                        from ictm_pr_int ipi,
                             ictm_rule_frm irf
                       where ipi.rule = irf.rule_id
                         and ipi.product_code = ieh.prod 
                          and irf.book_flag = 'B')
         group by acc,brn
       ) x,
       (select acc,
               brn,
               sum(amt) fee_amt
          from ictb_entries_history ieh
         where ieh.ent_dt between p_st_dt and p_ed_dt
           and exists(select product_code
                        from ictm_product_definition ipd
                       where ipd.product_code = ieh.prod
                          and ipd.product_type = 'C')
         group by acc,brn
       ) y,
       ictm_acc m,
       (select sca.cust_ac_no,
                sca.branch_code,
               coalesce(nvl2(coalesce(t1.ac_no,t1.ac_branch),'exists',null),
                         nvl2(coalesce(t2.account,t2.branch),'exists',null),
                        nvl2(coalesce(t3.acc,t3.brn),'exists',null),
                        nvl2(coalesce(t4.cust_ac_no,t4.branch_code),'exists',null),
                        nvl2(coalesce(t5.cust_ac_no,t5.branch_code),'exists',null),
                        nvl2(coalesce(t6.cust_ac_no,t6.branch_code),'exists',null),
                        nvl2(coalesce(t7.cust_ac_no,t7.branch_code),'exists',null),
                        decode(sca.maker_dt_stamp,p_applDate,'exists'),
                        decode(sca.status_since,p_applDate,'exists')
                       ) existence
          from sttm_cust_account sca
               left outer join
               (select ac_no,ac_branch
                  from actb_daily_log
                 where trn_dt = p_applDate
                   and auth_stat = 'A'
               ) t1
            on (sca.cust_ac_no = t1.ac_no
            and  sca.branch_code = t1.ac_branch)
               left outer join
                (select account,branch
                  from catm_amount_blocks
                 where effective_date = p_applDate
                   and auth_stat = 'A'
               ) t2
            on (sca.cust_ac_no = t2.account
            and  sca.branch_code = t2.branch)
               left outer join
               (select acc,brn
                  from ictm_td_closure_renew itcr
                 where renewal_date = p_applDate
               ) t3
            on (sca.cust_ac_no = t3.acc
            and  sca.branch_code = t3.brn)
               left outer join
               (select cust_ac_no,branch_code
                  from sttm_ac_stat_change
                 where status_change_date = p_applDate
                   and auth_stat = 'A'
               ) t4
            on (sca.cust_ac_no = t4.cust_ac_no
            and  sca.branch_code = t4.branch_code)
               left outer join
               (select cust_ac_no,branch_code
                  from cstb_acc_brn_trfr_log
                 where process_date = p_applDate
                   and process_status = 'S'
               ) t5
            on (sca.cust_ac_no = t5.cust_ac_no
            and  sca.branch_code = t5.branch_code)
               left outer join
               (select cust_ac_no,branch_code
                  from sttbs_provision_history
                 where esn_date = p_applDate
               ) t6
            on (sca.cust_ac_no = t6.cust_ac_no
            and  sca.branch_code = t6.branch_code)
               left outer join
               (select cust_ac_no,branch_code
                  from sttms_cust_account_dormancy
                 where dormancy_start_dt = p_applDate
               ) t7
            on (sca.cust_ac_no = t7.cust_ac_no
            and  sca.branch_code = t7.branch_code)
       ) z
where sca.branch_code = p_branch_code
   and sca.cust_no = sc.customer_no
   and sca.cust_ac_no = w.cust_ac_no
   and sca.cust_ac_no = x.acc
   and sca.branch_code = x.brn
   and sca.cust_ac_no = y.acc
   and sca.branch_code = y.brn
   and sca.cust_ac_no = m.acc
   and sca.branch_code = m.brn
   and sca.cust_ac_no = z.cust_ac_no
   and sca.branch_code = z.branch_code
   and z.existence is not null

Regards
Etbin
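
Another way to attack the chain of OR'd EXISTS predicates (the z inline view above) is to build the set of "touched" account keys once with UNION and semi-join to it. This is only a sketch, using the table and column names from the original cursor, with bind variables standing in for the cursor parameters; the two predicates on sca itself (maker_dt_stamp, status_since) have to stay as plain ORs:

with touched as (
    select ac_no cust_ac_no, ac_branch branch_code
      from actb_daily_log
     where trn_dt = :p_applDate and auth_stat = 'A'
    union
    select account, branch
      from catm_amount_blocks
     where effective_date = :p_applDate and auth_stat = 'A'
    union
    select acc, brn
      from ictm_td_closure_renew
     where renewal_date = :p_applDate
    -- ... the remaining driving tables (t4 .. t7 above) are added the same way ...
)
select sca.*
  from sttm_cust_account sca
 where sca.branch_code = :p_branch_code
   and (   (sca.cust_ac_no, sca.branch_code) in
               (select cust_ac_no, branch_code from touched)
        or sca.maker_dt_stamp = :p_applDate
        or sca.status_since = :p_applDate );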

Similar Messages

  • Need help in improving the performance for the sql query

    Thanks in advance for helping me.
    I was trying to improve the performance of the query below. I tried the following methods: used MERGE instead of UPDATE, used BULK COLLECT / FORALL updates, used the ORDERED hint, and created a temp table and updated the target table from it. None of these methods improved the performance. The update touches 2 million records and the target table has 15 million records.
    Any suggestions or solutions for improving performance are appreciated
    SQL query:
    update targettable tt
    set mnop = 'G'
    where ( x, y, z ) in
          ( select a.x, a.y, a.z
            from table1 a
            where (a.x, a.y, a.z) not in
                  ( select b.x, b.y, b.z
                    from table2 b
                    where 'O' = b.defg
                    and mnop = 'P'
                    and hijkl = 'UVW' ) );

    987981 wrote:
    I was trying to improve the performance of the below query. I tried the following methods: used merge instead of update, used bulk collect / forall update, used the ordered hint, created a temp table and updated the target table using the same. The methods which I used did not improve any performance.
    And that meant what? Surely if you spend all that time and effort to try various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)
    The data count which is updated in the target table is 2 million records and the target table has 15 million records.
    Tables have rows btw, not records. Database people tend to get upset when rows are called records, as records exist in files and a database is not a mere collection of records and files.
    The failure to find a single faster method with the approaches you tried, points to that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
    The very first step in dealing with any software engineering problem, is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
    Part of identifying the performance problem, is understanding the workload. Just what does the code task the database to do?
    From your comments, it needs to find 2 million rows from 15 million rows. Change these rows. And then write 2 million rows back to disk.
    That is not a small workload. Simple example: let's say that the 2 million row find costs 1ms/row and the 2 million row write also costs 1ms/row. That is a 66 minute workload. Due to the number of rows, an increase in time/row either way will potentially have a 2-million-fold impact.
    So where is the performance problem? Time spent finding the 2 million rows (where other tables need to be read, indexes used, etc)? Time spent writing the 2 million rows (where triggers need to be fired and indexes maintained)? Both?
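    Purely to illustrate the set-based shape such an update can take, here is a MERGE sketch using the table and column names from the post. It assumes x, y and z are not nullable (NOT IN and NOT EXISTS treat nulls differently) and that the mnop = 'P' and hijkl = 'UVW' filters belong to the target table:
    merge into targettable tt
    using ( select a.x, a.y, a.z
              from table1 a
             where not exists ( select null
                                  from table2 b
                                 where b.x = a.x
                                   and b.y = a.y
                                   and b.z = a.z
                                   and b.defg = 'O' )
          ) src
    on (tt.x = src.x and tt.y = src.y and tt.z = src.z)
    when matched then
        update set tt.mnop = 'G'
        where tt.mnop = 'P'
          and tt.hijkl = 'UVW';
    Whether this beats the plain UPDATE depends entirely on where the time actually goes, which is the point made above.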

  • Need help in optimising the performance of a query

    Need help in optimising the performance of a query. Below is the query that is executed on TABLE_A, TABLE_B and TABLE_C with record counts as 10M, 10m and 42 (only) respectively and it takes around 5-7 minutes to get 40 records:
    SELECT DISTINCT a.T_ID_, a.FIRSTNAME, b.T_CODE, b.PRODUCT,
    CASE WHEN TRUNC(b.DATE) +90 = TRUNC(SYSDATE) THEN -90 WHEN TRUNC(b.DATE) +30 = TRUNC(SYSDATE) THEN -30 ELSE 0 END AS T_DATE FROM TABLE_B b
    INNER JOIN TABLE_A a ON (a.T_ID_ = b.T_ID_) LEFT JOIN TABLE_C c ON b.PRODUCT = c.PRODUCT
    WHERE b.STATUS = 'T' AND (b.TYPE = 'ACTION'
    AND ( TRUNC(b.DATE) + 1 = TRUNC(SYSDATE) ) ) AND b.PRODUCT = 2;
    Note: Indices on the join columns are available in the respective tables
    Please let me know if there is any better way to write it.
    Edited by: 862944 on Aug 18, 2011 9:52 AM

    862944 wrote:
    Need help in optimising the performance of a query. Below is the query that is executed on TABLE_A, TABLE_B and TABLE_C with record counts as 10M, 10m and 42 (only) respectively and it takes around 5-7 minutes to get 40 records:
    SELECT DISTINCT a.T_ID_, a.FIRSTNAME, b.T_CODE, b.PRODUCT,
    CASE WHEN TRUNC(b.DATE) +90 = TRUNC(SYSDATE) THEN -90 WHEN TRUNC(b.DATE) +30 = TRUNC(SYSDATE) THEN -30 ELSE 0 END AS T_DATE FROM TABLE_B b
    INNER JOIN TABLE_A a ON (a.T_ID_ = b.T_ID_) LEFT JOIN TABLE_C c ON b.PRODUCT = c.PRODUCT
    WHERE b.STATUS = 'T' AND (b.TYPE = 'ACTION'
    AND ( TRUNC(b.DATE) + 1 = TRUNC(SYSDATE) ) ) AND b.PRODUCT = 2;
    Note: Indices on the join columns are available in the respective tables
    Please let me know if there is any better way to write it.
    Edited by: 862944 on Aug 18, 2011 9:52 AM
    [When Your Query Takes Too Long|https://forums.oracle.com/forums/thread.jspa?messageID=1812597]
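    One concrete thing worth testing on this query: TRUNC(b.DATE) + 1 = TRUNC(SYSDATE) wraps the filtered column in a function, so a plain index on that column cannot be used. The same rows (those dated yesterday) can be selected with a range predicate on the bare column. A sketch using the placeholder names from the post, with date_col standing in for the column called DATE (a reserved word); the unused LEFT JOIN to TABLE_C is dropped here on the assumption that it only produced the duplicates that DISTINCT had to remove:
    SELECT a.t_id_, a.firstname, b.t_code, b.product,
           0 AS t_date                          -- for yesterday's rows neither the +90 nor the +30 branch can match
      FROM table_b b
     INNER JOIN table_a a ON a.t_id_ = b.t_id_
     WHERE b.status  = 'T'
       AND b.type    = 'ACTION'
       AND b.product = 2
       AND b.date_col >= TRUNC(SYSDATE) - 1     -- replaces TRUNC(b.date_col) + 1 = TRUNC(SYSDATE)
       AND b.date_col <  TRUNC(SYSDATE);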

  • Inner Join. How to improve the performance of inner join query

    Inner Join. How to improve the performance of inner join query.
    Query is :
    select f1~ablbelnr
             f1~gernr
             f1~equnr
             f1~zwnummer
             f1~adat
             f1~atim
             f1~v_zwstand
             f1~n_zwstand
             f1~aktiv
             f1~adatsoll
             f1~pruefzahl
             f1~ablstat
             f1~pruefpkt
             f1~popcode
             f1~erdat
             f1~istablart
             f2~anlage
             f2~ablesgr
             f2~abrdats
             f2~ableinh
                from eabl as f1
                inner join eablg as f2
                on f1~ablbelnr = f2~ablbelnr
                into corresponding fields of table it_list
                where f1~ablstat in s_mrstat
                %_HINTS ORACLE 'USE_NL (T_00 T_01) index(T_01 "EABLG~0")'.
    I wanted to modify the query, since it's taking a lot of time to load the data.
    Please suggest.
    Please treat this as very urgent.

    Hi Shyamal,
    In your program, you are using "into corresponding fields of".
    Try not to use this addition in your select query.
    Instead, just use "into table it_list".
    As an example,
    Just give a normal query using "into corresponding fields of" in a program. Now go to SE30 (Runtime Analysis), give the program name, and execute it.
    Now if you click on the Analyze button, you can see the analysis given for the query. A line shown in red informs you that you need to look for alternative methods.
    On the other hand, if you are using "into table itab", it will give you an entirely different analysis.
    So try not to give "into corresponding fields" in your query.
    Regards,
    SP.

  • How to Improve the performance in Variable Selection Screen.

    Hi,
    At query level we have a variable with "User Entry Default Value". When the user selects a particular value and presses "F4", it takes hours. How can we improve the performance of the variable selection screen?
    Thanks in Advance.
    Regards,
    Venkat.

    Dear Venkat.
    You please try the following steps:
    1. Say the InfoObject is 0EMPLOYEE, against which you have created the variable whose value the user is trying to select when they execute the report.
    2. Goto RSA1-> InfoObject tab-> Select InfoObject 0EMPLOYEE.
    3. Select the following options:
       Query Execution Filter Val. Selectn  -  'Only Posted Value for Navigation'
       Filter Value Repr. At Query Exec. -      'Selector Box Without Values'
    Please let me know if there are any more issues. Feel free to raise further concerns.
    Thnx,
    Sukdev K

  • Help to improve the performance of a procedure.

    Hello everybody,
    First to introduce myself. My name is Ivan and I recently started learning SQL and PL/SQL. So don't go hard on me. :)
    Now let's jump to the problem. What we have there is a table (big one, but we'll need only a few fields) with some information about calls. It is called table1. There is also another one, absolutely the same structure, which is empty and we have to transfer the records from the first one.
    The shorter calls (less than 30 minutes) have segmentID = 'C1'.
    The longer calls (more than 30 minutes) are recorded as more than one record (1 for every 30 minutes). The first record (first 30 minutes of the call) has segmentID = 'C21'. It is the first so we have only one of these for every different call. Then we have the next (middle) parts of the call, which have segmentID = 'C22'. We can have more than 1 middle part and again the maximum minutes in each is 30 minutes. Then we have the last part (again max 30 minutes) with segmentID = 'C23'. As with the first one we can have only one last part.
    So far, so good. Now we need to insert these call records into the second table. The C1 are easy - one record = one call. But the partial ones we need to combine so they become one whole call. This means that we have to take one of the first parts (C21), find if there is a middle part (C22) with the same calling/called numbers and with 30 minutes difference in date/time, then search again if there is another C22 and so on. And last we have to search for the last part of the call (C23). In the course of these searches we sum the duration of each part so we can have the duration of the whole call at the end. Then we are ready to insert it in the new table as a single record, just with new duration.
    But here comes the problem with my code... The table has A LOT of records, and this solution, despite the fact that it works (at least in the tests I've made so far), is REALLY slow.
    As I said I'm new to PL/SQL and I know that this solution is really newbish, but I can't find another way of doing this.
    So I decided to come here and ask you for some tips on how to improve the performance of this.
    I think you are getting confused already, so I'm just going to put some comments in the code.
    I know it's not a procedure as it stands now, but it will be once I create a better code. I don't think it matters for now.
    DECLARE
    CURSOR cur_c21 IS
        select * from table1
        where segmentID = 'C21'
        order by start_date_of_call;     -- start_date_of_call holds the beginning of a specific part of the call. It's a DATE.
    CURSOR cur_c22 IS
        select * from table1
        where segmentID = 'C22'
        order by start_date_of_call;
    CURSOR cur_c22_2 IS
        select * from table1
        where segmentID = 'C22'
        order by start_date_of_call;  
    cursor cur_c23 is
        select * from table1
        where segmentID = 'C23'
        order by start_date_of_call;
    v_temp_rec_c22 cur_c22%ROWTYPE;
    v_dur table1.duration%TYPE;           -- used to store the duration of the call. It's a NUMBER.
    BEGIN
    insert into table2
    select * from table1 where segmentID = 'C1';     -- inserting the calls which are less than 30 minutes long
    -- and here starts the mess
    FOR rec_c21 IN cur_c21 LOOP        -- taking the first part of the call
       v_dur := rec_c21.duration;      -- recording its duration
       FOR rec_c22 IN cur_c22 LOOP     -- starting to check if there is a middle part for the call
          IF rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND 
            (rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48)                
    /* if the numbers are the same and the date difference is 30 minutes then we have a middle part and we start searching for the next middle. */
          THEN
             v_dur := v_dur + rec_c22.duration;     -- updating the new duration
             v_temp_rec_c22 := rec_c22;             -- recording the current record in another variable because I use it for the next check
             FOR rec_c22_2 in cur_c22_2 LOOP
                IF rec_c22_2.callingnumber = v_temp_rec_c22.callingnumber AND rec_c22_2.callednumber = v_temp_rec_c22.callednumber AND 
                  (rec_c22_2.start_date_of_call - v_temp_rec_c22.start_date_of_call) = (1/48)        
    /* logic is the same as before but comparing with the last value in v_temp...
    And because the data in the cursors is ordered by date in ascending order it's easy to search for another middle parts. */
                THEN
                   v_dur:=v_dur + rec_c22_2.duration;
                   v_temp_rec_c22:=rec_c22_2;
                END IF;
             END LOOP;                     
          END IF;
          EXIT WHEN rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND 
                   (rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48);       
    /* exiting the loop if we have at least one middle part.
    (I couldn't find if there is a way to write this more clean, like exit when (the above if is true) */
       END LOOP;
       FOR rec_c23 IN cur_c23 LOOP             
          IF (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
             (rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration          
    /* we should always have one last part, so we need this check.
    If we don't have the "v_dur != rec_c21.duration" part it will execute the code inside only if we don't have middle parts
    (yes we can have these situations in calls longer than 30 and less than 60 minutes). */
          THEN
             v_dur:=v_dur + rec_c23.duration;
             rec_c21.duration := v_dur;             -- updating the duration
             rec_c21.segmentID :='C1';
             INSERT INTO table2 VALUES rec_c21;     -- inserting the whole call in table2
          END IF;
          EXIT WHEN (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
                    (rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration;                 
                     -- exit the loop when the last part has been found.
       END LOOP;
    END LOOP;
    END;
    I'm using Oracle 11g and version 1.5.5 of SQL Developer.
    It's my first post here so hope this is the right sub-forum.
    I tried to explain everything as deep as possible (sorry if it's too long) and I kinda think that the code got somehow hard to read with all these comments. If you want I can remove them.
    I know I'm still missing a lot of knowledge so every help is really appreciated.
    Thank you very much in advance!

    Atiel wrote:
    Thanks for the suggestion but the thing is that segmentID must stay the same for all. The data in this field is just to tell us if this is a record of a complete call (C1) or a partial record of a call (C21, C22, C23). So in table2, as every record will be a complete call, the segmentID must be C1 for all.
    Well that's not a problem. You just hard code 'C1' instead of applying the row number as I was doing:
    SQL> ed
    Wrote file afiedt.buf
      1  select 'C1' as segmentid
      2        ,start_date_of_call, duration, callingnumber, callednumber
      3  from (
      4        select distinct
      5               min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
      6              ,sum(duration) over (partition by callingnumber, callednumber) as duration
      7              ,callingnumber
      8              ,callednumber
      9        from table1
    10*      )
    SQL> /
    SEGMENTID  START_DATE_OF_CALL     DURATION CALLINGNUMBER   CALLEDNUMBER
    C1         11-MAY-2012 12:13:10 8020557824 1982032041      0631432831624
    C1         15-MAR-2012 09:07:26  269352960 5581790386      0113496771567
    C1         31-JUL-2012 23:20:23  134676480 4799842978      0813391427349
    Another thing is that, as I said above, the actual table has 120 fields. Do I have to list them all manually if I use something similar?
    If that's what you need, then yes you would have to list them. You only get data if you tell it you want it. ;)
    Of course if you are taking the start_date_of_call, callingnumber and callednumber as the 'key' to the record, then you could join the results of the above back to the original table1 and pull out the rest of the columns that way...
    SQL> select * from table1;
    SEGMENTID  START_DATE_OF_CALL     DURATION CALLINGNUMBER   CALLEDNUMBER          COL1       COL2       COL3
    C1         31-JUL-2012 23:20:23  134676480 4799842978      0813391427349          556         40       5.32
    C21        15-MAR-2012 09:07:26  134676480 5581790386      0113496771567          219        100      10.16
    C23        11-MAY-2012 09:37:26  134676480 5581790386      0113496771567          321         73       2.71
    C21        11-MAY-2012 12:13:10 3892379648 1982032041      0631432831624          959         80       2.87
    C22        11-MAY-2012 12:43:10 3892379648 1982032041      0631432831624          375         57       8.91
    C22        11-MAY-2012 13:13:10  117899264 1982032041      0631432831624          778         27       1.42
    C23        11-MAY-2012 13:43:10  117899264 1982032041      0631432831624          308         97       3.26
    7 rows selected.
    SQL> ed
    Wrote file afiedt.buf
      1  with t2 as (
      2  select 'C1' as segmentid
      3        ,start_date_of_call, duration, callingnumber, callednumber
      4  from (
      5        select distinct
      6               min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
      7              ,sum(duration) over (partition by callingnumber, callednumber) as duration
      8              ,callingnumber
      9              ,callednumber
    10        from table1
    11       )
    12  )
    13  --
    14  select t2.segmentid, t2.start_date_of_call, t2.duration, t2.callingnumber, t2.callednumber
    15        ,t1.col1, t1.col2, t1.col3
    16  from   t2
    17         join table1 t1 on (   t1.start_date_of_call = t2.start_date_of_call
    18                           and t1.callingnumber = t2.callingnumber
    19                           and t1.callednumber = t2.callednumber
    20*                          )
    SQL> /
    SEGMENTID  START_DATE_OF_CALL     DURATION CALLINGNUMBER   CALLEDNUMBER          COL1       COL2       COL3
    C1         11-MAY-2012 12:13:10 8020557824 1982032041      0631432831624          959         80       2.87
    C1         15-MAR-2012 09:07:26  269352960 5581790386      0113496771567          219        100      10.16
    C1         31-JUL-2012 23:20:23  134676480 4799842978      0813391427349          556         40       5.32
    SQL>
    Of course this is pulling back the additional columns for the record that matches the start_date_of_call for that calling/called number pair, so if the values differed from row to row within the calling/called number pair you may need to aggregate those (take the minimum/maximum etc. as required) as part of the first query. If the values are known to be the same across all records in the group then you can just pick them up from the join to the original table as I coded in the above example (only in my example the data was different across all rows).
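    One caveat with partitioning only by callingnumber/callednumber: two genuinely separate calls between the same pair of numbers would also get merged into one. If that matters, the segments can first be grouped into chains (a new chain starts whenever the gap to the previous segment is not exactly 30 minutes, i.e. 1/48 of a day) and then aggregated per chain. Again only a sketch, using the same table and column names as above:
    with flagged as (
      select t.*,
             case when start_date_of_call
                       - lag(start_date_of_call)
                           over (partition by callingnumber, callednumber
                                 order by start_date_of_call) = 1/48
                  then 0 else 1
             end as new_chain                -- 1 marks the first segment of a call
      from table1 t
    ),
    chains as (
      select f.*,
             sum(new_chain) over (partition by callingnumber, callednumber
                                  order by start_date_of_call) as chain_no
      from flagged f
    )
    select 'C1' as segmentid,
           min(start_date_of_call) as start_date_of_call,
           sum(duration) as duration,
           callingnumber,
           callednumber
    from chains
    group by callingnumber, callednumber, chain_no;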

  • Need suggestions on improving the performance end to end

    I referred following links and as many as 25 previous posts before posting this question.
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/10b54994-f569-2a10-ad8f-cf5c68a9447c
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/402fae48-0601-0010-3088-85c46a236f50
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e0068bc1-6f8c-2a10-52bb-c6ee3562feb2
    /people/boris.zarske/blog/2007/06/13/sizing-a-system-for-the-system-landscape-directory-of-sap-netweaver
    I have some queries related to improving the performance of a scenario in my landscape.
    Scenario : IDOC-SOAP (Synchronous with BPM)
    The mapping is simple, with a direct mapping of 3 fields.
    It works great with no problem in normal cases for one message transfer.
    But on initial load (when we put it in a new system), the scenario generates errors if we send more than 23 IDOCs at a time.
    I have decided to perform the following to solve it.
    /people/william.li/blog/2008/03/07/step-by-step-guide-in-processing-high-volume-messages-using-pi-71s-message-packaging
    Will it help me, given that it's an IDOC sync scenario?
    What can I do to improve the performance of this scenario?
    Please give me good suggestions to improve the performance.
    Please list out the points that I have to perform (for a sync scenario with BPM), as I am already confused from watching lots of blog posts.
    Nikhil.

    do you think that the performance tuning that I mentioned in the link will hold good with sync scenarios?
    I don't think so.....in this scenario the async message processing is made to wait
    from the blog:
    As you can see, for the 1st minute, all the messages are waiting to be processed. After 60 seconds,
    the packages will be created and processed. There is no change to the monitoring of each of the
    individual messages in SXI_MONITOR.
    ....but if you make a sync scenario wait....then you probably run the risk of blocking the queues.......
    If this is going to be in production then I would be more careful...because firstly it is BPM....then synchronous BPM.....then a processing wait......normally do not try to make the BPM processing wait...my small suggestion

  • Required help in improving the performance

    Hi, I am very new to Java. I am working with an API where the records are processed in a for loop, and it takes time: to process 10k records it takes almost 35 minutes. And since I have incorporated it in my APEX application, when multiple users use it at the same time the performance drops even further; it takes almost an hour. With the help of online tutors I was able to incorporate oracle.sql.ARRAY, but I was not able to increase the performance.
    My first question is whether there is any way I can process the records in parallel, in batches; if not, how do I increase the performance? I also got to know that by enabling setautoindex and setautobuffer we can increase the performance, but I could not get that to work. Can anyone help me on this?

    Hi
    I apologize for not describing the process in the initial post.
    The task is to pass the records from my table to the API, and update the results given by the API. The steps involved are:
    1) I have created a type of strarray and have assigned the same to rec1, rec2 in my stored procedure
    2) rec1 is the input details, which consist of batch_id (unique identifier by batch), row_id (unique identifier within the batch) and the contact address information
    3) rec2 is the output for rec1, where I will get the batch_id, row_id and formatted address in output form
    4) I will capture the output in a temp table and update these results to the input table
    5) With this stored procedure, I am not able to allow parallel transactions, i.e. multiple users
    6) As records are being processed row by row, it is consuming time
    Here is the code. Please let me know if you need more information on this.
    PROCESS_INT (REC_IN, REC_OUT); which will call the following process
          public static int process(oracle.sql.ARRAY rec_in, oracle.sql.ARRAY[] rec_out) {
               // If everything has been initialized then we want to write some data
               // to the socket we have opened a connection to
               if (m_clientSocket != null && m_out != null && m_in != null) {
                    try {
                         String[] record = (String[]) rec_in.getArray();
                         for (int i = 0; i < 9; i++) {
                              if (record[i] != null)
                                   m_out.println(record[i]);
                              else
                                   m_out.println("");
                         }
                         m_out.flush();
                         // Read the result
                         for (int i = 0; i < 14; i++) {
                              record[i] = m_in.readLine();
                         }
                         Connection conn = new OracleDriver().defaultConnection();
                         ArrayDescriptor descriptor = ArrayDescriptor.createDescriptor( rec_in.getSQLTypeName(), conn );
                         rec_out[0] = new ARRAY( descriptor, conn, record );
                    } catch (UnknownHostException e) {
                         System.err.println("Unable to connect to lqtListener: " + e);
                         return -1;
                    } catch (IOException e) {
                         System.err.println("IOException in process: " + e);
                         return -2;
                    } catch (SQLException e) {
                         System.err.println("SQLException in process: " + e);
                         return -4;
                    }
                    return 0;
               } else {
                    return -3;
               }
          }
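     For the "process the records in parallel batches" part of the question: one Oracle-side option (11gR2 and later) is DBMS_PARALLEL_EXECUTE, which splits a table into chunks and runs a statement against each chunk in concurrent jobs. The following is only a sketch: INPUT_TABLE and process_address_batch are hypothetical stand-ins for your input table and for a wrapper procedure that calls the API for one rowid range.
     begin
       dbms_parallel_execute.create_task('fmt_addresses');
       -- split the input table into chunks of roughly 1000 rows each
       dbms_parallel_execute.create_chunks_by_rowid(
           task_name   => 'fmt_addresses',
           table_owner => user,
           table_name  => 'INPUT_TABLE',        -- hypothetical input table
           by_row      => true,
           chunk_size  => 1000);
       -- run the wrapper against every chunk, 4 chunks at a time
       dbms_parallel_execute.run_task(
           task_name      => 'fmt_addresses',
           sql_stmt       => 'begin process_address_batch(:start_id, :end_id); end;',
           language_flag  => dbms_sql.native,
           parallel_level => 4);
       dbms_parallel_execute.drop_task('fmt_addresses');
     end;
     /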

  • Need help in analyzing the performance aspects of compounding

    Hi all,
    I am analyzing the performance aspects of compounding.
    Can anyone guide me on how to go about it?

    When I displayed this table, it showed some timestamps and validity periods for the queries. I am having some difficulty in understanding the table details.
    Can you please guide me regarding this?
    Also, can anyone please help me on how to analyze the OLAP processor performance for the queries that use compounding?

  • Options to improve the performance of the Job

    Hi Team,
    As part of the CRM Upgrade requirement, we are planning to use Account Life cycle functionality to reflect the status of an account.
    As per the SAP recommendations (Note 1113330), we are currently executing the program CRM_BUPA_USERSTATUS_CONV2ROLE to convert user status master data to BP roles. We have noticed that this program takes a long time even when we run it for a single business partner. We are trying to explore options for improving the performance of the job. In case anyone has done this kind of exercise in a previous assignment or has information on this, please provide your feedback on the points below.
    1) Total Volume of Customer Master Data
    2) How many records did we consider for one execution of the conversion program?
    3) How much time did it take for one execution? Did we do any performance tuning?
    4) When we run the program in background mode, we do not get the spool
    showing the log information. Was there any custom report developed to view the log when
    the program is executed in background mode? If so, can you share the technical details?
    5) Any information on how many work processes were available for executing the jobs
    Appreciate your help.
    Regards,
    Varun

    Hello Udaya,
    Could you please try providing a range of BPs as per note 1121015? This can help in improving the performance.
    Thanks & regards,
    Krishnen

  • EP6 sp12 Performance Issue, Need help to improve performance

    We have a Portal development environment with EP6.0 sp12.
    What we are experiencing is a performance issue. It's not extremely slow, but slow compared to normal (compared to our prod box). For example, after entering the username and password and clicking the <Log on> button, it takes more than 10 seconds for the first home page to appear. Also, currently we have hooked the Portal up to 3 xApps systems and one BW system. The time taken for a BW query to appear (with selection screen) is also more than 10 seconds. However, access to one of the other xApps systems is comparatively faster.
    Do we have a simple-to-use guide (not a very elaborate one) with step-by-step guidance to immediately improve the performance of the Portal?
    A simple guide, easy to implement, with immediate effect is what we are looking for in the short term.
    Thanks
    Arunabha

    Hi Eric,
      I have searched but didn't find the Portal Tuning and Optimization Guide you suggested. Can you help me find it?
    Subrato,
      This is good and I would obviously read through it. The issue here is that it covers only the network.
      But do you know any other guide, which is very basic (maybe 10 steps) and shows the process step by step? It would be very helpful. I already have some information from the thread Portal Performance - page loads slow, client cache reset/cleared too often
    But I am really looking for an answer (steps to do it quickly and effectively) instead of a list of various guides.
    It would be very helpful if you or anybody (who has actually done some performance tuning) could send a basic list of steps that I can do immediately, instead of reading through these large guides.
    I know I am looking for a shortcut, but this is the need of the hour.
    Thanks
    Arun

  • Please help me how to improve the performance of this query further.

    Hi All,
    Please help me improve the performance of this query further.
    Thanks.

    Hi,
    this is not your first SQL tuning request in this community -- you really should learn how to obtain performance diagnostics.
    The information you posted is not nearly enough to even start troubleshooting the query -- you haven't specified elapsed time, I/O, or the actual number of rows the query returns.
    The only piece of information we have is saying that your query executes within a second. If we believe this, then your query doesn't need tuning. If we don't, then we throw it away
    and we're left with nothing.
    Start by reading this blog post: Kyle Hailey » Power of DISPLAY_CURSOR
    and applying this knowledge to your case.
    Best regards,
      Nikolay

  • I need to improve the performance of a migration

    Hello!
    I need to migrate a lot of data and I want to improve the performance.
    I can use a cursor and then, with a FOR ... LOOP, insert every row, or I can use INSERT INTO ... SELECT ....
    I would like to know which one is better?
    Thanks.

    If you need to do a lot of per-line processing the CURSOR FOR LOOP might be better. But almost certainly, if you can code it as straight SQL statements then that's the way to go. PL/SQL comes with tremendous overheads and is usually a lot slower than SQL.
    However, don't take my word for it: run some tests for yourself.
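    To make the difference concrete, here are the two shapes side by side (a sketch with made-up table and column names; the set-based version avoids one SQL-to-PL/SQL context switch per row):
    -- row-by-row: one INSERT, and one context switch, per row
    begin
      for r in (select id, payload from source_table) loop
        insert into target_table (id, payload) values (r.id, r.payload);
      end loop;
      commit;
    end;
    /
    -- set-based: one statement does all the work inside the SQL engine
    insert into target_table (id, payload)
    select id, payload from source_table;
    commit;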
    Cheers, APC

  • Need to improve the performance

    Hi,
    New fields were added to a standard table. Before, we used SELECT SINGLE * FROM the table in the program. Now, because of those fields, the performance has dropped. How do we improve the performance of the select query?
    Thanks in advance

    Hi
    follow these rules:
    When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
    To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come in handy in programs that access data from the finance tables.
    For testing existence, use a Select .. Up To 1 Rows statement instead of a Select-Endselect loop with an Exit.
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    Use Select Single if all primary key fields are supplied in the Where condition.
    If all primary key fields are supplied in the Where condition you can even use Select Single. Select Single requires one communication with the database system, whereas Select-Endselect needs two.
    Reward if useful

  • Re: How to Improve the performance on Rollup of Aggregates for PCA Infocube

    Hi BW Guru's,
    I have unresolved issue and our team is still working on it.
    I have already posted several questions on this, but I am still not clear on how to reduce the time of the Rollup of Aggregates process.
    I have requested an OSS note and searched myself, but still could not find one.
    Finally I executed one of the cubes in RSRV with the database check
    "Database indexes of an InfoCube and its aggregates" and got warning messages. I tried to correct the errors and executed it once again, but still found warning messages. The messages are as follows (this is only for one InfoCube; we have 6 InfoCubes and I am executing them one by one):
    ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
    ORACLE: Index /BI0/IPROFIT_CTR~0 has possibly degenerated     
    ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
    ORACLE: Index /BIC/D1001072~010 has possibly degenerated
    ORACLE: Index /BIC/D1001132~010 has possibly degenerated
    ORACLE: Index /BIC/D1001212~010 has possibly degenerated
    ORACLE: Index /BIC/DGPCOGC062~01 has possibly degenerated
    ORACLE: Index /BIC/IGGRA_CODE~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPGP1~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPPC2~0 has possibly degenerated
    ORACLE: Index /BIC/SGMAPGP1~0 has possibly degenerated
    I don't know how to move further on this. Can anyone tell me how to tackle this problem and increase the performance of the Rollup of Aggregates (PCA InfoCubes)?
    I create indexes and statistics regularly to improve the performance; it works for a couple of days, and then the performance of the rollup of aggregates gradually comes down again.
    Thanks and Regards,
    Venkat

    hi,
    Check in a SQL client the SQL created by BI against the query that you use directly on your physical layer...
    The difference between these 2 must be 2-3 seconds, otherwise you have problems (these seconds are for scripts needed by BI).
    If you use "like" in your SQL then forget indexes....
    For more information about indexes check Google or your DBA.
    Last, I mentioned that the materialized view is not perfect, but it helps a lot..so why not try to split it into smaller ones....
    ex...
    logical dimensions
    year-half-day
    company-department
    fact
    quantity
    instead of making one...make 3,
    year - department - quantity
    half - department - quantity
    day - department - quantity
    and add them as data sources and assign them the appropriate logical level at the business layer in Administrator...
    Do you use partitioning functionality???
    I hope I helped....
    http://greekoraclebi.blogspot.com/
    ///////////////////////////////////////
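    As for the ORACLE "has possibly degenerated" warnings from RSRV: the usual database-side remedy is to rebuild the affected indexes (RSRV also offers a repair option for this check). A sketch for the first index in the list; the double quotes are needed because of the special characters in BW index names:
    alter index "/BI0/IACCOUNT~0" rebuild online;
    -- repeat for each index flagged by RSRV, then refresh the index statistics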
