Need suggestions on improving the performance end to end
I referred to the following links, and as many as 25 previous posts, before posting this question.
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/10b54994-f569-2a10-ad8f-cf5c68a9447c
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/402fae48-0601-0010-3088-85c46a236f50
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e0068bc1-6f8c-2a10-52bb-c6ee3562feb2
/people/boris.zarske/blog/2007/06/13/sizing-a-system-for-the-system-landscape-directory-of-sap-netweaver
I have some queries related to improving the performance of a scenario in my landscape.
Scenario : IDOC-SOAP (Synchronous with BPM)
The mapping is simple: a direct mapping of 3 fields.
It works great, with no problems, in normal cases for a single message transfer.
But during the initial load (when we put it into a new system), the scenario generates errors if we send more than 23 IDocs at a time.
I have decided to perform the following to solve it.
/people/william.li/blog/2008/03/07/step-by-step-guide-in-processing-high-volume-messages-using-pi-71s-message-packaging
Will it help me, given that this is a synchronous IDoc scenario?
What can I do to improve the performance of this scenario?
Please suggest good ways to improve the performance.
Please list out the points I have to perform (for a sync scenario with BPM), as I am already confused after reading lots of blog posts.
Nikhil.
Do you think that the performance tuning I mentioned in the link will hold good for sync scenarios?
I don't think so... in this scenario, the async message processing is made to wait.
from the blog:
As you can see, for the 1st minute, all the messages are waiting to be processed. After 60 seconds,
the packages will be created and processed. There is no change to the monitoring of each of the
individual messages in SXI_MONITOR.
...but if you make a sync scenario wait, then you probably run the risk of blocking the queues...
If this is going to production, I would be more careful... because firstly it is BPM, then synchronous BPM, then a processing wait. Normally, do not try to make BPM processing wait. My small suggestion.
Similar Messages
-
Need help in improving the performance for the sql query
Thanks in advance for helping me.
I was trying to improve the performance of the query below. I tried the following methods: used MERGE instead of UPDATE, used BULK COLLECT / FORALL updates, used the ORDERED hint, and created a temp table and updated the target table from it. None of these methods improved performance. The update touches 2 million rows, and the target table has 15 million rows.
Any suggestions or solutions for improving performance are appreciated
SQL query:
update targettable tt
set mnop = 'G'
where (x, y, z) in
      (select a.x, a.y, a.z
         from table1 a
        where (a.x, a.y, a.z) not in
              (select b.x, b.y, b.z
                 from table2 b
                where 'O' = b.defg))
  and mnop = 'P'
  and hijkl = 'UVW';
987981 wrote:
I was trying to improve the performance of the below query. I tried the following methods: used merge instead of update, used bulk collect / forall update, used the ordered hint, created a temp table and updated the target table using the same. The methods which I used did not improve any performance.
And that meant what? Surely, if you spent all that time and effort trying various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)
The data count which is updated in the target table is 2 million records and the target table has 15 million records.
Tables have rows, by the way, not records. Database people tend to get upset when rows are called records, as records exist in files, and a database is not a mere collection of records and files.
The failure to find a single faster method with the approaches you tried points to the fact that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
The very first step in dealing with any software engineering problem, is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
Part of identifying the performance problem, is understanding the workload. Just what does the code task the database to do?
From your comments, it needs to find 2 million rows from 15 million rows. Change these rows. And then write 2 million rows back to disk.
That is not a small workload. A simple example: let's say finding each of the 2 million rows costs 1 ms/row, and writing each of the 2 million rows also costs 1 ms/row. That alone is a 66-minute workload. With this many rows, any increase in time per row is multiplied 2 million fold.
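That estimate is plain arithmetic and worth sanity-checking; note the 1 ms/row costs are the reply's illustrative assumptions, not measurements:

```python
# Back-of-envelope check of the reply's estimate: 2 million rows found
# at ~1 ms/row plus 2 million rows written at ~1 ms/row.
rows = 2_000_000
find_ms_per_row = 1.0    # assumed cost to locate one row
write_ms_per_row = 1.0   # assumed cost to write one row back

total_minutes = rows * (find_ms_per_row + write_ms_per_row) / 1000.0 / 60.0
print(round(total_minutes, 1))  # -> 66.7, i.e. the "66 minute workload"
```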
So where is the performance problem? Time spent finding the 2 million rows (where other tables need to be read, indexes used, etc.)? Time spent writing the 2 million rows (where triggers need to fire and indexes need to be maintained)? Both?
-
Suggestions to improve the performance to update sales order
Hi all,
I am uploading data from a local file to update sales orders by changing the partner function,
and I am using 'BAPI_SALESORDER_CHANGE' for the update, but it is taking a long time to update the
sales orders. Can anyone suggest how to improve the performance of updating sales orders with a new partner function?
Thanks & Best Regards,
Vishnu.
Run an SE30 (runtime analysis) on your program.
Check which action costs the most time.
The result could be --> the update (maybe make only one commit at the end, instead of one commit per BAPI call).
A selection --> Maybe change the selection to use an index. -
I'm using it on a Samsung Captivate SGH-i897 with Android 2.2 (I'm unable to upgrade the OS at this time). Firefox was recommended by a friend; however, so far I've been very disappointed with it, as it is extremely slow and constantly crashes. I've tried all of the suggestions provided, but there is no improvement. I realize it may not be all that compatible with this otherwise properly working device or OS. I have a Samsung Galaxy SGH-i727 Skyrocket with ICS 4.1; however, I've been waiting for a part for it coming from Asia, which seems to be taking forever, so in the meantime I'm stuck with this. I was wondering if you may have any suggestions on how to speed it up and prevent it from crashing so much, other than what's in your help guide, since those changes I've tried make no improvement in its performance. I would very much prefer to use Firefox on this device, as well as on my other device once I repair it, for a number of other reasons, but if I'm unable to improve it, I'll just go back to what I was using. Thank you and be well.
twich83115
[ed. removed email]
Suggestion for improvement:
I'd like <select size="1" multiple> to provide a dropdown with checkboxes like this:
http://demos.telerik.com/aspnet-ajax/combobox/examples/functionality/checkboxes/defaultcs.aspx
Can that be done?
Thanks -
Need suggestions to improve the performance of report execution
Hi all,
I have to improve the performance of the report. I am executing the report in the foreground. The first time, it gives a dump after running for 6 minutes; if I execute it again, it gives the output.
I am using an inner join between EKKO and EKPO. When I debug the report, this SELECT is what takes a long time to execute, and then it produces the dump.
What are all the things I need to take care of?
Please suggest any advice to overcome the issue.
Thanks,
Ajay
In SAP tips & tricks:
Nested Select statements
Runtime: 4,641 microseconds
SELECT * FROM SPFLI INTO SPFLI_WA.
SELECT * FROM SFLIGHT INTO SFLIGHT_WA
WHERE CARRID = SPFLI_WA-CARRID
AND CONNID = SPFLI_WA-CONNID.
ENDSELECT.
ENDSELECT.
Select with join
Runtime: 5,080 microseconds
SELECT * INTO WA
FROM SPFLI AS P INNER JOIN SFLIGHT AS F
ON P~CARRID = F~CARRID AND
P~CONNID = F~CONNID.
ENDSELECT.
Needed help to improve the performance of a select query?
Hi,
I have been preparing a report which involves data fetched from 4 or 5 different tables, and calculations have to be performed on some columns as well.
I planned to write a single cursor to populate one temp table. I have used INLINE VIEWs and EXISTS frequently in the select query. Please go through the query and suggest a better way to restructure it.
cursor c_acc_pickup_incr(p_branch_code varchar2, p_applDate date, p_st_dt date, p_ed_dt date) is
select sca.branch_code "BRANCH",
sca.cust_ac_no "ACCOUNT",
to_char(p_applDate, 'YYYYMM') "YEARMONTH",
sca.ccy "CURRENCY",
sca.account_class "PRODUCT",
sca.cust_no "CUSTOMER",
sca.ac_desc "DESCRIPTION",
null "LOW_BAL",
null "HIGH_BAL",
null "AVG_CR_BAL",
null "AVG_DR_BAL",
null "CR_DAYS",
null "DR_DAYS",
--null "CR_TURNOVER",
--null "DR_TURNOVER",
null "DR_OD_DAYS",
(select sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
(case when (p_applDate >= sca.tod_limit_start_date and
p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)) then
sca.tod_limit else 0 end) dd
from getm_facility gf, sttm_cust_account_linkages scal
where gf.line_code || gf.line_serial = scal.linked_ref_no
and cust_ac_no = sca.cust_ac_no) "OD_LIMIT",
--sc.credit_rating "CR_GRADE",
null "AVG_NET_BAL",
null "UNAUTH_OD_AMT",
sca.acy_blocked_amount "AMT_BLOCKED",
(select sum(amt)
from ictb_entries_history ieh
where ieh.acc = sca.cust_ac_no
and ieh.brn = sca.branch_code
and ieh.drcr = 'D'
and ieh.liqn = 'Y'
and ieh.entry_passed = 'Y'
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select * from ictm_pr_int ipi, ictm_rule_frm irf
where ipi.product_code = ieh.prod
and ipi.rule = irf.rule_id
and irf.book_flag = 'B')) "DR_INTEREST",
(select sum(amt)
from ictb_entries_history ieh
where ieh.acc = sca.cust_ac_no
and ieh.brn = sca.branch_code
and ieh.drcr = 'C'
and ieh.liqn = 'Y'
and ieh.entry_passed = 'Y'
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select * from ictm_pr_int ipi, ictm_rule_frm irf
where ipi.product_code = ieh.prod
and ipi.rule = irf.rule_id
and irf.book_flag = 'B')) "CR_INTEREST",
(select sum(amt) from ictb_entries_history ieh
where ieh.brn = sca.branch_code
and ieh.acc = sca.cust_ac_no
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select product_code
from ictm_product_definition ipd
where ipd.product_code = ieh.prod
and ipd.product_type = 'C')) "FEE_INCOME",
sca.record_stat "ACC_STATUS",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and not exists (select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null))
then 1 else 0 end "NEW_ACC_FOR_THE_MONTH",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
and not exists (select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null))
then 1 else 0 end "NEW_ACC_FOR_NEW_CUST",
(select 1 from dual
where exists (select 1 from ictm_td_closure_renew itcr
where itcr.brn = sca.branch_code
and itcr.acc = sca.cust_ac_no
and itcr.renewal_date = sysdate)
or exists (select 1 from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null)) "RENEWED_OR_ROLLOVER",
(select maturity_date from ictm_acc ia
where ia.brn = sca.branch_code
and ia.acc = sca.cust_ac_no) "MATURITY_DATE",
sca.ac_stat_no_dr "DR_DISALLOWED",
sca.ac_stat_no_cr "CR_DISALLOWED",
sca.ac_stat_block "BLOCKED_ACC", --Not Reqd
sca.ac_stat_dormant "DORMANT_ACC",
sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
sca.ac_stat_frozen "FROZEN_ACC",
sca.ac_open_date "ACC_OPENING_DT",
sca.address1 "ADD_LINE_1",
sca.address2 "ADD_LINE_2",
sca.address3 "ADD_LINE_3",
sca.address4 "ADD_LINE_4",
sca.joint_ac_indicator "JOINT_ACC",
sca.acy_avl_bal "CR_BAL",
0 "DR_BAL",
0 "CR_BAL_LCY",
0 "DR_BAL_LCY",
null "YTD_CR_MOVEMENT",
null "YTD_DR_MOVEMENT",
null "YTD_CR_MOVEMENT_LCY",
null "YTD_DR_MOVEMENT_LCY",
null "MTD_CR_MOVEMENT",
null "MTD_DR_MOVEMENT",
null "MTD_CR_MOVEMENT_LCY",
null "MTD_DR_MOVEMENT_LCY",
'N' "BRANCH_TRFR", --New
sca.provision_amount "PROVISION_AMT",
sca.account_type "ACCOUNT_TYPE",
nvl(sca.tod_limit, 0) "TOD_LIMIT",
nvl(sca.sublimit, 0) "SUB_LIMIT",
nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
from sttm_cust_account sca, sttm_customer sc
where sca.branch_code = p_branch_code
and sca.cust_no = sc.customer_no
and ( exists (select 1 from actb_daily_log adl
where adl.ac_no = sca.cust_ac_no
and adl.ac_branch = sca.branch_code
and adl.trn_dt = p_applDate
and adl.auth_stat = 'A')
or exists (select 1 from catm_amount_blocks cab
where cab.account = sca.cust_ac_no
and cab.branch = sca.branch_code
and cab.effective_date = p_applDate
and cab.auth_stat = 'A')
or exists (select 1 from ictm_td_closure_renew itcr
where itcr.acc = sca.cust_ac_no
and itcr.brn = sca.branch_code
and itcr.renewal_date = p_applDate)
or exists (select 1 from sttm_ac_stat_change sasc
where sasc.cust_ac_no = sca.cust_ac_no
and sasc.branch_code = sca.branch_code
and sasc.status_change_date = p_applDate
and sasc.auth_stat = 'A')
or exists (select 1 from cstb_acc_brn_trfr_log cabtl
where cabtl.branch_code = sca.branch_code
and cabtl.cust_ac_no = sca.cust_ac_no
and cabtl.process_status = 'S'
and cabtl.process_date = p_applDate)
or exists (select 1 from sttbs_provision_history sph
where sph.branch_code = sca.branch_code
and sph.cust_ac_no = sca.cust_ac_no
and sph.esn_date = p_applDate)
or exists (select 1 from sttms_cust_account_dormancy scad
where scad.branch_code = sca.branch_code
and scad.cust_ac_no = sca.cust_ac_no
and scad.dormancy_start_dt = p_applDate)
or sca.maker_dt_stamp = p_applDate
or sca.status_since = p_applDate
);
l_tb_acc_det ty_tb_acc_det_int;
l_brnrec cvpks_utils.rec_brnlcy;
l_acbr_lcy sttms_branch.branch_lcy%type;
l_lcy_amount actbs_daily_log.lcy_amount%type;
l_xrate number;
l_dt_rec sttm_dates%rowtype;
l_acc_rec sttm_cust_account%rowtype;
l_acc_stat_row ty_r_acc_stat;
Edited by: user13710379 on Jan 7, 2012 12:18 AM
I see it more like shown below (possibly with no inline selects).
Try to get rid of the remaining inline selects ( left as an exercise ;) )
and rewrite the traditional joins as ANSI joins, as problems might arise when using mixed syntax. I have to leave, so I don't have time to complete the query.
select sca.branch_code "BRANCH",
sca.cust_ac_no "ACCOUNT",
to_char(p_applDate, 'YYYYMM') "YEARMONTH",
sca.ccy "CURRENCY",
sca.account_class "PRODUCT",
sca.cust_no "CUSTOMER",
sca.ac_desc "DESCRIPTION",
null "LOW_BAL",
null "HIGH_BAL",
null "AVG_CR_BAL",
null "AVG_DR_BAL",
null "CR_DAYS",
null "DR_DAYS",
-- null "CR_TURNOVER",
-- null "DR_TURNOVER",
null "DR_OD_DAYS",
w.dd "OD_LIMIT",
-- sc.credit_rating "CR_GRADE",
null "AVG_NET_BAL",
null "UNAUTH_OD_AMT",
sca.acy_blocked_amount "AMT_BLOCKED",
x.dr_int "DR_INTEREST",
x.cr_int "CR_INTEREST",
y.fee_amt "FEE_INCOME",
sca.record_stat "ACC_STATUS",
case when trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and not exists(select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null)
then 1
else 0
end "NEW_ACC_FOR_THE_MONTH",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
and not exists(select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null))
then 1
else 0
end "NEW_ACC_FOR_NEW_CUST",
(select 1 from dual
where exists(select 1
from ictm_td_closure_renew itcr
where itcr.brn = sca.branch_code
and itcr.acc = sca.cust_ac_no
and itcr.renewal_date = sysdate)
or exists(select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null)
) "RENEWED_OR_ROLLOVER",
m.maturity_date "MATURITY_DATE",
sca.ac_stat_no_dr "DR_DISALLOWED",
sca.ac_stat_no_cr "CR_DISALLOWED",
-- sca.ac_stat_block "BLOCKED_ACC", --Not Reqd
sca.ac_stat_dormant "DORMANT_ACC",
sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
sca.ac_stat_frozen "FROZEN_ACC",
sca.ac_open_date "ACC_OPENING_DT",
sca.address1 "ADD_LINE_1",
sca.address2 "ADD_LINE_2",
sca.address3 "ADD_LINE_3",
sca.address4 "ADD_LINE_4",
sca.joint_ac_indicator "JOINT_ACC",
sca.acy_avl_bal "CR_BAL",
0 "DR_BAL",
0 "CR_BAL_LCY",
0 "DR_BAL_LCY",
null "YTD_CR_MOVEMENT",
null "YTD_DR_MOVEMENT",
null "YTD_CR_MOVEMENT_LCY",
null "YTD_DR_MOVEMENT_LCY",
null "MTD_CR_MOVEMENT",
null "MTD_DR_MOVEMENT",
null "MTD_CR_MOVEMENT_LCY",
null "MTD_DR_MOVEMENT_LCY",
'N' "BRANCH_TRFR", --New
sca.provision_amount "PROVISION_AMT",
sca.account_type "ACCOUNT_TYPE",
nvl(sca.tod_limit, 0) "TOD_LIMIT",
nvl(sca.sublimit, 0) "SUB_LIMIT",
nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
from sttm_cust_account sca,
sttm_customer sc,
(select sca.cust_ac_no,
sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
case when p_applDate >= sca.tod_limit_start_date
and p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)
then sca.tod_limit else 0
end
) dd
from sttm_cust_account sca,
getm_facility gf,
sttm_cust_account_linkages scal
where gf.line_code || gf.line_serial = scal.linked_ref_no
and cust_ac_no = sca.cust_ac_no
group by sca.cust_ac_no
) w,
(select acc,
brn,
sum(decode(drcr,'D',amt)) dr_int,
sum(decode(drcr,'C',amt)) cr_int
from ictb_entries_history ieh
where ent_dt between p_st_dt and p_ed_dt
and drcr in ('C','D')
and liqn = 'Y'
and entry_passed = 'Y'
and exists(select null
from ictm_pr_int ipi,
ictm_rule_frm irf
where ipi.rule = irf.rule_id
and ipi.product_code = ieh.prod
and irf.book_flag = 'B')
group by acc,brn
) x,
(select acc,
brn,
sum(amt) fee_amt
from ictb_entries_history ieh
where ieh.ent_dt between p_st_dt and p_ed_dt
and exists(select product_code
from ictm_product_definition ipd
where ipd.product_code = ieh.prod
and ipd.product_type = 'C')
group by acc,brn
) y,
ictm_acc m,
(select sca.cust_ac_no,
sca.branch_code,
coalesce(nvl2(coalesce(t1.ac_no,t1.ac_branch),'exists',null),
nvl2(coalesce(t2.account,t2.branch),'exists',null),
nvl2(coalesce(t3.acc,t3.brn),'exists',null),
nvl2(coalesce(t4.cust_ac_no,t4.branch_code),'exists',null),
nvl2(coalesce(t5.cust_ac_no,t5.branch_code),'exists',null),
nvl2(coalesce(t6.cust_ac_no,t6.branch_code),'exists',null),
nvl2(coalesce(t7.cust_ac_no,t7.branch_code),'exists',null),
decode(sca.maker_dt_stamp,p_applDate,'exists'),
decode(sca.status_since,p_applDate,'exists')
) existence
from sttm_cust_account sca
left outer join
(select ac_no,ac_branch
from actb_daily_log
where trn_dt = p_applDate
and auth_stat = 'A'
) t1
on (sca.cust_ac_no = t1.ac_no
and sca.branch_code = t1.ac_branch)
left outer join
(select account,branch
from catm_amount_blocks
where effective_date = p_applDate
and auth_stat = 'A'
) t2
on (sca.cust_ac_no = t2.account
and sca.branch_code = t2.branch)
left outer join
(select acc,brn
from ictm_td_closure_renew itcr
where renewal_date = p_applDate
) t3
on (sca.cust_ac_no = t3.acc
and sca.branch_code = t3.brn)
left outer join
(select cust_ac_no,branch_code
from sttm_ac_stat_change
where status_change_date = p_applDate
and auth_stat = 'A'
) t4
on (sca.cust_ac_no = t4.cust_ac_no
and sca.branch_code = t4.branch_code)
left outer join
(select cust_ac_no,branch_code
from cstb_acc_brn_trfr_log
where process_date = p_applDate
and process_status = 'S'
) t5
on (sca.cust_ac_no = t5.cust_ac_no
and sca.branch_code = t5.branch_code)
left outer join
(select cust_ac_no,branch_code
from sttbs_provision_history
where esn_date = p_applDate
) t6
on (sca.cust_ac_no = t6.cust_ac_no
and sca.branch_code = t6.branch_code)
left outer join
(select cust_ac_no,branch_code
from sttms_cust_account_dormancy
where dormancy_start_dt = p_applDate
) t7
on (sca.cust_ac_no = t7.cust_ac_no
and sca.branch_code = t7.branch_code)
) z
where sca.branch_code = p_branch_code
and sca.cust_no = sc.customer_no
and sca.cust_ac_no = w.cust_ac_no
and sca.cust_ac_no = x.acc
and sca.branch_code = x.brn
and sca.cust_ac_no = y.acc
and sca.branch_code = y.brn
and sca.cust_ac_no = m.acc
and sca.branch_code = m.brn
and sca.cust_ac_no = z.cust_ac_no
and sca.branch_code = z.branch_code
and z.existence is not null
Regards
Etbin -
How to improve the performance of Swing Application?
My project is a Swing based application which uses java 1.2.1 API.
The problem is my application is very slow and gets hung many times, inspite of the system configuration being good ( 256 Mb RAM, 733MHz processor).
Do give me some suggestions to improve the performance.
Regards,
sudhakar
The system configuration is more than enough to run Java applications.
You are probably doing time-consuming operations in the event thread, which blocks it, so the GUI seems not to respond. If so, you have a very bad design.
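In outline, the fix is the standard GUI rule: run slow work on a separate thread and hand the result back to the event thread. A minimal Python sketch of that pattern (in Swing itself you would reach for SwingUtilities.invokeLater or a worker-thread helper):

```python
# Sketch of the worker-thread pattern: the "event thread" stays
# responsive while a worker does the slow job and posts its result
# back through a queue. (Python stand-in for the Swing idiom.)
import queue
import threading
import time

results = queue.Queue()

def slow_job():
    time.sleep(0.1)            # stands in for a long computation or IO
    results.put("done")        # post the result back to the event thread

worker = threading.Thread(target=slow_job)
worker.start()

# The event thread is free to keep painting / handling input here...
worker.join()
print(results.get())           # -> done
```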
Use a thread for time consuming operations. -
I need to improve the performance of a migration
Hello!
I need to migrated a lot of data and i want to improve the performance
I can use a cursor and a FOR ... LOOP to insert every row, or I can use INSERT INTO ... SELECT.
I would like to know which one is better?
Thanks.
If you need to do a lot of per-row processing, the CURSOR FOR LOOP might be better. But almost certainly, if you can code it as straight SQL statements, then that's the way to go. PL/SQL comes with tremendous overheads and is usually a lot slower than SQL.
However, don't take my word for it: run some tests for yourself.
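A minimal version of that test, sketched with sqlite3 from Python's standard library as a stand-in for Oracle (table names invented): both approaches move the same rows, but the set-based statement does it in one call rather than one round trip per row.

```python
# Compare migrating rows one-by-one from a cursor loop versus one
# set-based INSERT ... SELECT. Both yield identical tables; the
# set-based form avoids per-row statement overhead.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE dst_loop AS SELECT * FROM src WHERE 0;
    CREATE TABLE dst_set  AS SELECT * FROM src WHERE 0;
    INSERT INTO src VALUES (1,'a'), (2,'b'), (3,'c');
""")

# "Cursor FOR loop" style: fetch each row, insert it back individually.
for row in conn.execute("SELECT id, val FROM src"):
    conn.execute("INSERT INTO dst_loop VALUES (?, ?)", row)

# Set-based style: one statement, the engine does the whole transfer.
conn.execute("INSERT INTO dst_set SELECT id, val FROM src")

assert (conn.execute("SELECT * FROM dst_loop ORDER BY id").fetchall()
        == conn.execute("SELECT * FROM dst_set ORDER BY id").fetchall())
print("rows migrated:", conn.execute(
    "SELECT COUNT(*) FROM dst_set").fetchone()[0])  # -> rows migrated: 3
```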
Cheers, APC -
Need to improve the performance
Hi,
New fields have been added to the standard table. Before, we used SELECT SINGLE * from the table in the program. Now, because of those fields, performance has dropped. How can I improve the performance of the select query?
Thanks in advance
Hi
Follow these rules:
When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come in handy in programs that access data from the finance tables.
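The same optimizer behaviour can be observed on any database that exposes its plan; a sqlite3 sketch (table, column, and index names invented) where EXPLAIN QUERY PLAN shows the index being picked once the WHERE clause matches it:

```python
# SQLite analogue of the tip above: when the WHERE clause matches an
# index, the optimizer uses it, and EXPLAIN QUERY PLAN makes that
# visible. All names here are invented for the sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bookings (carrid TEXT, connid TEXT, fldate TEXT);
    CREATE INDEX idx_carr ON bookings (carrid, connid);
""")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM bookings "
    "WHERE carrid = 'LH' AND connid = '0400'"
).fetchall()

# The plan's detail column should mention the index it picked.
assert any("idx_carr" in str(row[-1]) for row in plan)
print(plan[-1][-1])
```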
For testing existence, use a SELECT ... UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT.
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
Use Select Single if all primary key fields are supplied in the Where condition .
If all primary key fields are supplied in the Where condition you can even use Select Single. Select Single requires one communication with the database system, whereas Select-Endselect needs two.
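Outside ABAP, the same existence-test idea is usually written with LIMIT 1 or EXISTS, so the engine can stop at the first hit instead of fetching every match; a small sqlite3 sketch (names invented):

```python
# General-SQL analogue of the "UP TO 1 ROWS" existence test above:
# ask the database for at most one matching row (LIMIT 1) instead of
# fetching every match just to see whether any exist.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sbook (carrid TEXT, bookid INTEGER);
    INSERT INTO sbook VALUES ('LH', 1), ('LH', 2), ('AA', 3);
""")

def carrier_has_bookings(carrid):
    # LIMIT 1 lets the engine stop at the first hit.
    row = conn.execute(
        "SELECT 1 FROM sbook WHERE carrid = ? LIMIT 1", (carrid,)
    ).fetchone()
    return row is not None

assert carrier_has_bookings("LH") is True
assert carrier_has_bookings("XX") is False
print("existence checks OK")
```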
<b>Reward if useful</b>
Help to improve the performance of a procedure.
Hello everybody,
First to introduce myself. My name is Ivan and I recently started learning SQL and PL/SQL. So don't go hard on me. :)
Now let's jump to the problem. What we have there is a table (big one, but we'll need only a few fields) with some information about calls. It is called table1. There is also another one, absolutely the same structure, which is empty and we have to transfer the records from the first one.
The shorter calls (less than 30 minutes) have segmentID = 'C1'.
The longer calls (more than 30 minutes) are recorded as more than one record (1 for every 30 minutes). The first record (first 30 minutes of the call) has segmentID = 'C21'. It is the first so we have only one of these for every different call. Then we have the next (middle) parts of the call, which have segmentID = 'C22'. We can have more than 1 middle part and again the maximum minutes in each is 30 minutes. Then we have the last part (again max 30 minutes) with segmentID = 'C23'. As with the first one we can have only one last part.
So far, so good. Now we need to insert these call records into the second table. The C1 are easy - one record = one call. But the partial ones we need to combine so they become one whole call. This means that we have to take one of the first parts (C21), find if there is a middle part (C22) with the same calling/called numbers and with 30 minutes difference in date/time, then search again if there is another C22 and so on. And last we have to search for the last part of the call (C23). In the course of these searches we sum the duration of each part so we can have the duration of the whole call at the end. Then we are ready to insert it in the new table as a single record, just with new duration.
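The merging rule described above (chain a C21 part through any C22 parts to the C23 part, matching the same calling/called pair at 30-minute steps while summing durations) can be sketched procedurally. This is an illustrative Python sketch on simplified records, not the poster's actual table layout:

```python
# Sketch of the segment-merging rule: a call longer than 30 minutes is
# stored as one C21 (first), zero or more C22 (middle) and one C23
# (last) record, each starting 30 minutes after the previous part for
# the same calling/called pair. Records are simplified tuples.
from datetime import datetime, timedelta

STEP = timedelta(minutes=30)

segments = [
    # (segment_id, start, duration_sec, calling, called)
    ("C1",  datetime(2012, 7, 31, 23, 20), 900,  "479", "081"),
    ("C21", datetime(2012, 5, 11, 12, 13), 1800, "198", "063"),
    ("C22", datetime(2012, 5, 11, 12, 43), 1800, "198", "063"),
    ("C23", datetime(2012, 5, 11, 13, 13), 600,  "198", "063"),
]

def merge_calls(rows):
    merged = [r for r in rows if r[0] == "C1"]          # short calls as-is
    parts = {(r[3], r[4], r[1]): r for r in rows if r[0] in ("C22", "C23")}
    for seg, start, dur, a, b in rows:
        if seg != "C21":
            continue
        total, t = dur, start + STEP
        while (a, b, t) in parts:                       # follow the chain
            nxt = parts[(a, b, t)]
            total += nxt[2]
            if nxt[0] == "C23":                         # last part found
                break
            t += STEP
        merged.append(("C1", start, total, a, b))       # one whole call
    return merged

result = merge_calls(segments)
assert ("C1", datetime(2012, 5, 11, 12, 13), 4200, "198", "063") in result
print(sorted(r[2] for r in result))  # -> [900, 4200]
```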
But here comes the problem with my code... The table has A LOT of records and this solution, despite the fact that it works (at least in the tests I've made so far), it's REALLY slow.
As I said I'm new to PL/SQL and I know that this solution is really newbish, but I can't find another way of doing this.
So I decided to come here and ask you for some tips on how to improve the performance of this.
I think you are getting confused already, so I'm just going to put some comments in the code.
I know it's not a procedure as it stands now, but it will be once I create a better code. I don't think it matters for now.
DECLARE
CURSOR cur_c21 IS
select * from table1
where segmentID = 'C21'
order by start_date_of_call; -- start_date_of_call holds the beginning of a specific part of the call (DATE format).
CURSOR cur_c22 IS
select * from table1
where segmentID = 'C22'
order by start_date_of_call;
CURSOR cur_c22_2 IS
select * from table1
where segmentID = 'C22'
order by start_date_of_call;
cursor cur_c23 is
select * from table1
where segmentID = 'C23'
order by start_date_of_call;
v_temp_rec_c22 cur_c22%ROWTYPE;
v_dur table1.duration%TYPE; -- used to store the duration of the call (NUMBER).
BEGIN
insert into table2
select * from table1 where segmentID = 'C1'; -- insert the calls which are less than 30 minutes long
-- and here starts the mess
FOR rec_c21 IN cur_c21 LOOP -- take the first part of the call
v_dur := rec_c21.duration; -- record its duration
FOR rec_c22 IN cur_c22 LOOP -- check whether there is a middle part for the call
IF rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
(rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48)
/* if the numbers are the same and the date difference is 30 minutes then we have a middle part and we start searching for the next middle. */
THEN
v_dur := v_dur + rec_c22.duration; -- update the running duration
v_temp_rec_c22:=rec_c22; -- keep the current record in another variable; it is used for the next check
FOR rec_c22_2 in cur_c22_2 LOOP
IF rec_c22_2.callingnumber = v_temp_rec_c22.callingnumber AND rec_c22_2.callednumber = v_temp_rec_c22.callednumber AND
(rec_c22_2.start_date_of_call - v_temp_rec_c22.start_date_of_call) = (1/48)
/* logic is the same as before but comparing with the last value in v_temp...
And because the data in the cursors is ordered by date in ascending order it's easy to search for another middle parts. */
THEN
v_dur:=v_dur + rec_c22_2.duration;
v_temp_rec_c22:=rec_c22_2;
END IF;
END LOOP;
END IF;
EXIT WHEN rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
(rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48);
/* exiting the loop if we have at least one middle part.
(I couldn't find if there is a way to write this more clean, like exit when (the above if is true) */
END LOOP;
FOR rec_c23 IN cur_c23 LOOP
IF (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
(rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration
/* we should always have one last part, so we need this check.
If we don't have the "v_dur != rec_c21.duration" part it will execute the code inside only if we don't have middle parts
(yes we can have these situations in calls longer than 30 and less than 60 minutes). */
THEN
v_dur:=v_dur + rec_c23.duration;
rec_c21.duration:=v_dur; -- update the duration
rec_c21.segmentID :='C1';
INSERT INTO table2 VALUES rec_c21; -- insert the whole call into table2
END IF;
EXIT WHEN (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
(rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration;
-- exit the loop when the last part has been found.
END LOOP;
END LOOP;
END;
I'm using Oracle 11g and version 1.5.5 of SQL Developer.
It's my first post here so hope this is the right sub-forum.
I tried to explain everything as deep as possible (sorry if it's too long) and I kinda think that the code got somehow hard to read with all these comments. If you want I can remove them.
I know I'm still missing a lot of knowledge so every help is really appreciated.
Thank you very much in advance!
Atiel wrote:
Thanks for the suggestion, but the thing is that segmentID must stay the same for all. The data in this field just tells us whether this is a record of a complete call (C1) or a partial record of a call (C21, C22, C23). So in table2, as every record will be a complete call, the segmentID must be C1 for all.
Well, that's not a problem. You just hard-code 'C1' instead of applying the row number as I was doing:
SQL> ed
Wrote file afiedt.buf
1 select 'C1' as segmentid
2 ,start_date_of_call, duration, callingnumber, callednumber
3 from (
4 select distinct
5 min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
6 ,sum(duration) over (partition by callingnumber, callednumber) as duration
7 ,callingnumber
8 ,callednumber
9 from table1
10* )
SQL> /
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER
C1 11-MAY-2012 12:13:10 8020557824 1982032041 0631432831624
C1 15-MAR-2012 09:07:26 269352960 5581790386 0113496771567
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349
Another thing is that, as I said above, the actual table has 120 fields. Do I have to list them all manually if I use something similar?
If that's what you need, then yes, you would have to list them. You only get data if you tell it you want it. ;)
Of course if you are taking the start_date_of_call, callingnumber and callednumber as the 'key' to the record, then you could join the results of the above back to the original table1 and pull out the rest of the columns that way...
SQL> select * from table1;
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER COL1 COL2 COL3
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349 556 40 5.32
C21 15-MAR-2012 09:07:26 134676480 5581790386 0113496771567 219 100 10.16
C23 11-MAY-2012 09:37:26 134676480 5581790386 0113496771567 321 73 2.71
C21 11-MAY-2012 12:13:10 3892379648 1982032041 0631432831624 959 80 2.87
C22 11-MAY-2012 12:43:10 3892379648 1982032041 0631432831624 375 57 8.91
C22 11-MAY-2012 13:13:10 117899264 1982032041 0631432831624 778 27 1.42
C23 11-MAY-2012 13:43:10 117899264 1982032041 0631432831624 308 97 3.26
7 rows selected.
SQL> ed
Wrote file afiedt.buf
1 with t2 as (
2 select 'C1' as segmentid
3 ,start_date_of_call, duration, callingnumber, callednumber
4 from (
5 select distinct
6 min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
7 ,sum(duration) over (partition by callingnumber, callednumber) as duration
8 ,callingnumber
9 ,callednumber
10 from table1
11 )
12 )
13 --
14 select t2.segmentid, t2.start_date_of_call, t2.duration, t2.callingnumber, t2.callednumber
15 ,t1.col1, t1.col2, t1.col3
16 from t2
17 join table1 t1 on ( t1.start_date_of_call = t2.start_date_of_call
18 and t1.callingnumber = t2.callingnumber
19 and t1.callednumber = t2.callednumber
20* )
SQL> /
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER COL1 COL2 COL3
C1 11-MAY-2012 12:13:10 8020557824 1982032041 0631432831624 959 80 2.87
C1 15-MAR-2012 09:07:26 269352960 5581790386 0113496771567 219 100 10.16
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349 556 40 5.32
SQL>
Of course this is pulling back the additional columns for the record that matches the start_date_of_call for that calling/called number pair, so if the values differ from row to row within the calling/called number pair you may need to aggregate those as well (take the minimum/maximum etc. as required) as part of the first query. If the values are known to be the same across all records in the group, then you can just pick them up from the join to the original table as coded in the example above (though in my example the data differed across all rows). -
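As a footnote to the thread above: the aggregate-then-join-back pattern can be tried out in any SQL engine. Below is a minimal sketch using Python's sqlite3 with invented sample data; the table and column names follow the thread, but the values are made up, and a plain GROUP BY is used in place of the DISTINCT-over-window formulation (the two are equivalent for this purpose).

```python
import sqlite3

# Toy version of table1 from the thread; values are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (
    segmentid TEXT, start_date_of_call TEXT, duration INTEGER,
    callingnumber TEXT, callednumber TEXT, col1 INTEGER
);
INSERT INTO table1 VALUES
  ('C21', '2012-05-11 12:13:10', 100, '1982032041', '0631432831624', 959),
  ('C22', '2012-05-11 12:43:10', 200, '1982032041', '0631432831624', 375),
  ('C1',  '2012-07-31 23:20:23',  50, '4799842978', '0813391427349', 556);
""")

# Step 1: collapse each calling/called pair to its earliest start and
# total duration.  Step 2: join back on the 'key' columns to pick up
# the remaining fields (col1 here) without listing all 120 columns as
# aggregates.
rows = conn.execute("""
    WITH t2 AS (
        SELECT 'C1' AS segmentid,
               MIN(start_date_of_call) AS start_date_of_call,
               SUM(duration) AS duration,
               callingnumber, callednumber
        FROM table1
        GROUP BY callingnumber, callednumber
    )
    SELECT t2.segmentid, t2.start_date_of_call, t2.duration,
           t2.callingnumber, t1.col1
    FROM t2
    JOIN table1 t1
      ON  t1.start_date_of_call = t2.start_date_of_call
      AND t1.callingnumber      = t2.callingnumber
      AND t1.callednumber       = t2.callednumber
    ORDER BY t2.callingnumber
""").fetchall()
for r in rows:
    print(r)
```

Each result row carries the aggregated start/duration plus the extra column taken from the single original row that matched the earliest start date, mirroring the SQL*Plus run above. -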
How to improve the performance of adobe forms
Hi,
Please give me some suggestions on how to improve the performance of an Adobe form.
Right now, when I am doing user events, it works fine for the first 6 or 7 user events. From the
next one onwards it hangs.
I read about Wizard form design approach, how to use the same here.
Thanks,
Aravind
Hi Otto,
The form is created using HCM Forms and Processes, and I am performing user events in it.
Each user event does a round trip: the form data is sent to the backend SAP system, processing
happens on the ABAP side, and the result appears on the form. The first 6 or 7 user events work
correctly and the result appears on the form. Around the 8th or 9th one, the wait symbol appears and
the form is not re-rendered. The form is 6 pages long; the issue does not occur with a 1-page form.
I was reading ways to improve performance during re-rendering given below.
http://www.adobe.com/devnet/livecycle/articles/DynamicInteractiveFormPerformance.pdf
It talks about the wizard form design approach, but in transaction SFP I am not seeing any kind of wizard.
Let me know if you need further details.
Thanks,
Aravind -
Is there any way to improve the performance of this code
Hi all, can anyone tell me how to improve the performance of the code below?
I need to calculate the opening balance of a G/L account, so instead of using BSEG I am using BSIS.
Is there any way to improve this code's performance?
Any help would be appreciated.
REPORT ZTEMP5 NO STANDARD PAGE HEADING LINE-SIZE 190.
data: begin of collect occurs 0,
MONAT TYPE MONAT,
HKONT TYPE HKONT,
BELNR TYPE BELNR_D,
BUDAT TYPE BUDAT,
WRBTR TYPE WRBTR,
SHKZG TYPE SHKZG,
SGTXT TYPE SGTXT,
AUFNR TYPE AUFNR_NEU,
TOT LIKE BSIS-WRBTR,
end of collect.
TYPES: BEGIN OF TY_BSIS,
MONAT TYPE MONAT,
HKONT TYPE HKONT,
BELNR TYPE BELNR_D,
BUDAT TYPE BUDAT,
WRBTR TYPE WRBTR,
SHKZG TYPE SHKZG,
SGTXT TYPE SGTXT,
AUFNR TYPE AUFNR_NEU,
END OF TY_BSIS.
DATA: IT_BSIS TYPE TABLE OF TY_BSIS,
WA_BSIS TYPE TY_BSIS.
DATA: TOT TYPE WRBTR,
SUMA TYPE WRBTR,
VALUE TYPE WRBTR,
VALUE1 TYPE WRBTR.
SELECTION-SCREEN: BEGIN OF BLOCK B1.
PARAMETERS: S_HKONT LIKE WA_BSIS-HKONT DEFAULT '0001460002' .
SELECT-OPTIONS: S_BUDAT FOR WA_BSIS-BUDAT,
S_AUFNR FOR WA_BSIS-AUFNR DEFAULT '200020',
S_BELNR FOR WA_BSIS-BELNR.
SELECTION-SCREEN: END OF BLOCK B1.
AT SELECTION-SCREEN OUTPUT.
LOOP AT SCREEN.
IF SCREEN-NAME = 'S_HKONT'.
SCREEN-INPUT = 0.
MODIFY SCREEN.
ENDIF.
ENDLOOP.
START-OF-SELECTION.
SELECT MONAT
HKONT
BELNR
BUDAT
WRBTR
SHKZG
SGTXT
AUFNR
FROM BSIS
INTO TABLE IT_BSIS
WHERE HKONT EQ S_HKONT
AND BELNR IN S_BELNR
AND BUDAT IN S_BUDAT
AND AUFNR IN S_AUFNR.
* if sy-subrc <> 0.
* message 'No Data' type 'I'.
* endif.
SELECT SUM( WRBTR )
FROM BSIS
INTO COLLECT-TOT
WHERE HKONT EQ S_HKONT
AND BUDAT < S_BUDAT-LOW
AND AUFNR IN S_AUFNR.
END-OF-SELECTION.
CLEAR: S_BUDAT, S_AUFNR, S_BELNR, S_HKONT.
LOOP AT IT_BSIS INTO WA_BSIS.
IF wa_bsis-SHKZG = 'H'.
wa_bsis-WRBTR = 0 - wa_bsis-WRBTR.
ENDIF.
collect-MONAT = wa_bsis-monat.
collect-HKONT = wa_bsis-hkont.
collect-BELNR = wa_bsis-belnr.
collect-BUDAT = wa_bsis-budat.
collect-WRBTR = wa_bsis-wrbtr.
collect-SHKZG = wa_bsis-shkzg.
collect-SGTXT = wa_bsis-sgtxt.
collect-AUFNR = wa_bsis-aufnr.
COLLECT collect.
CLEAR: COLLECT, WA_BSIS.
ENDLOOP.
LOOP AT COLLECT.
AT end of HKONT.
WRITE:/65 'OpeningBalance',
85 collect-tot.
skip 1.
ENDAT.
WRITE:/06 COLLECT-BELNR,
22 COLLECT-BUDAT,
32 COLLECT-WRBTR,
54 COLLECT-SGTXT.
AT end of MONAT.
SUM.
WRITE:/ COLLECT-MONAT COLOR 1.
WRITE:32 COLLECT-WRBTR COLOR 1.
VALUE = COLLECT-WRBTR.
SKIP 1.
ENDAT.
VALUE1 = COLLECT-TOT + VALUE.
AT end of MONAT.
WRITE:85 VALUE1.
ENDAT.
endloop.
CLEAR: COLLECT, SUMA, VALUE, VALUE1.
TOP-OF-PAGE.
WRITE:/06 'Doc No',
22 'Post Date',
39 'Amount',
54 'Text'.
Moderator message : See the Sticky threads (related for performance tuning) in this forum. Thread locked.
Edited by: Vinod Kumar on Oct 13, 2011 11:12 AM
Hi Ben,
Both BSIS SELECTs would become faster if you can add company code BUKRS as the first field of the WHERE clause, because it is the first field of the primary key and HKONT is the second.
If you have no table index with HKONT as its first field, the statement results in a full table scan.
If possible, add BUKRS as the first field of the WHERE clause; otherwise ask your Basis team for an additional BSIS index.
Regards,
Klaus -
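Klaus's point generalizes beyond BSIS: a composite index is only used for a seek when the WHERE clause constrains its leading field(s). A hedged sketch with Python's sqlite3 and an invented table shaped like the (BUKRS, HKONT) key prefix shows this via EXPLAIN QUERY PLAN:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bsis (bukrs TEXT, hkont TEXT, wrbtr REAL)")
# Composite index mimicking the primary-key prefix discussed above.
conn.execute("CREATE INDEX bsis_key ON bsis (bukrs, hkont)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail).
    return " ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Filtering on HKONT alone cannot seek the (bukrs, hkont) index,
# so the plan falls back to a full table scan...
p1 = plan("SELECT wrbtr FROM bsis WHERE hkont = '0001460002'")
# ...but supplying the leading column BUKRS enables an index search.
p2 = plan("SELECT wrbtr FROM bsis WHERE bukrs = '1000' AND hkont = '0001460002'")
print(p1)
print(p2)
```

The exact plan text varies by database, but the shape of the result (scan vs. index search) is the same behavior Klaus describes for BSIS. -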
How to improve the performance of one SELECT query in a program
Hi,
I am facing a performance issue in one program; part of its code is given below.
The SELECT query below is taking a long time. How can I improve its performance?
A quick response is highly appreciated.
Program code
DATA: BEGIN OF t_dels_tvpod OCCURS 100,
vbeln LIKE tvpod-vbeln,
posnr LIKE tvpod-posnr,
lfimg_diff LIKE tvpod-lfimg_diff,
calcu LIKE tvpod-calcu,
podmg LIKE tvpod-podmg,
uecha LIKE lips-uecha,
pstyv LIKE lips-pstyv,
xchar LIKE lips-xchar,
grund LIKE tvpod-grund,
END OF t_dels_tvpod.
DATA: l_tabix LIKE sy-tabix,
lt_dels_tvpod LIKE t_dels_tvpod OCCURS 10 WITH HEADER LINE,
ls_dels_tvpod LIKE t_dels_tvpod.
SELECT vbeln INTO TABLE lt_dels_tvpod FROM likp
FOR ALL ENTRIES IN t_dels_tvpod
WHERE vbeln = t_dels_tvpod-vbeln
AND erdat IN s_erdat
AND bldat IN s_bldat
AND podat IN s_podat
AND ernam IN s_ernam
AND kunnr IN s_kunnr
AND vkorg IN s_vkorg
AND vstel IN s_vstel
AND lfart NOT IN r_del_types_exclude.
Waiting for quick response.
Best regards,
BDP
Bansidhar,
1) Add a check that the internal table t_dels_tvpod (used in the FOR ALL ENTRIES clause) is not empty; if it is, skip the SELECT statement entirely.
2) Check the performance with and without the clause 'AND lfart NOT IN r_del_types_exclude'. Sometimes NOT stops the SELECT statement from using an index. Instead of 'lfart NOT IN r_del_types_exclude', use 'lfart IN r_del_types_exclude' and build r_del_types_exclude with r_del_types_exclude-sign = 'E' instead of 'I'.
3) Make sure that the table used in the FOR ALL ENTRIES clause has unique delivery numbers.
Try doing something like this.
TYPES: BEGIN OF ty_del_types_exclude,
sign(1) TYPE c,
option(2) TYPE c,
low TYPE likp-lfart,
high TYPE likp-lfart,
END OF ty_del_types_exclude.
DATA: w_del_types_exclude TYPE ty_del_types_exclude,
t_del_types_exclude TYPE TABLE OF ty_del_types_exclude,
t_dels_tvpod_tmp LIKE TABLE OF t_dels_tvpod .
IF NOT t_dels_tvpod[] IS INITIAL.
* Assuming that I would like to exclude delivery types 'LP' and 'LPP'
CLEAR w_del_types_exclude.
REFRESH t_del_types_exclude.
w_del_types_exclude-sign = 'E'.
w_del_types_exclude-option = 'EQ'.
w_del_types_exclude-low = 'LP'.
APPEND w_del_types_exclude TO t_del_types_exclude.
w_del_types_exclude-low = 'LPP'.
APPEND w_del_types_exclude TO t_del_types_exclude.
t_dels_tvpod_tmp[] = t_dels_tvpod[].
SORT t_dels_tvpod_tmp BY vbeln.
DELETE ADJACENT DUPLICATES FROM t_dels_tvpod_tmp
COMPARING
vbeln.
SELECT vbeln
FROM likp
INTO TABLE lt_dels_tvpod
FOR ALL ENTRIES IN t_dels_tvpod_tmp
WHERE vbeln EQ t_dels_tvpod_tmp-vbeln
AND erdat IN s_erdat
AND bldat IN s_bldat
AND podat IN s_podat
AND ernam IN s_ernam
AND kunnr IN s_kunnr
AND vkorg IN s_vkorg
AND vstel IN s_vstel
AND lfart IN t_del_types_exclude.
ENDIF. -
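The reply's first and third points (guard against an empty driver table, deduplicate the keys first) apply to any FOR ALL ENTRIES-style batched lookup, in any language. A small Python sketch with hypothetical names (batched_lookup and fetch are illustrative, not any real API):

```python
def batched_lookup(driver_keys, fetch):
    """fetch(keys) hits the database once for a list of unique keys."""
    if not driver_keys:
        # Empty driver table: skip the lookup entirely.  In ABAP, a
        # FOR ALL ENTRIES with an empty table would select ALL rows.
        return {}
    # SORT + DELETE ADJACENT DUPLICATES, in Python terms:
    unique = sorted(set(driver_keys))
    return fetch(unique)

# Toy 'database' call that records what it was asked for.
calls = []
def fetch(keys):
    calls.append(keys)
    return {k: k.upper() for k in keys}

result = batched_lookup(["vbeln2", "vbeln1", "vbeln1"], fetch)
print(result)
print(calls)   # the duplicate key was collapsed before the lookup
```

With duplicates removed up front, the database is asked about each delivery number once instead of once per driver row. -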
Inner Join. How to improve the performance of inner join query
Query is :
select f1~ablbelnr
f1~gernr
f1~equnr
f1~zwnummer
f1~adat
f1~atim
f1~v_zwstand
f1~n_zwstand
f1~aktiv
f1~adatsoll
f1~pruefzahl
f1~ablstat
f1~pruefpkt
f1~popcode
f1~erdat
f1~istablart
f2~anlage
f2~ablesgr
f2~abrdats
f2~ableinh
from eabl as f1
inner join eablg as f2
on f1~ablbelnr = f2~ablbelnr
into corresponding fields of table it_list
where f1~ablstat in s_mrstat
%_HINTS ORACLE 'USE_NL (T_00 T_01) index(T_01 "EABLG~0")'.
I want to modify the query, since it is taking a lot of time to load the data.
Please suggest : -
Treat this as very urgent.
Hi Shyamal,
In your program, you are using "INTO CORRESPONDING FIELDS OF".
Try not to use this addition in your SELECT query; instead, just use "INTO TABLE it_list".
As an example, write a normal query using "INTO CORRESPONDING FIELDS OF" in a program, then go to SE30 (Runtime Analysis), give the program name, and execute it.
If you click the Analyze button, you can see the analysis for the query; the one shown in red indicates you need to look for alternate methods.
On the other hand, if you use "INTO TABLE itab", you get an entirely different analysis.
So avoid "INTO CORRESPONDING FIELDS" in your query.
Regards,
SP. -
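As a loose analogy only (Python's sqlite3, not ABAP semantics): a positional fetch hands fields over in declaration order, while a name-addressable row pays a per-access name lookup, which is the kind of per-field name matching "INTO CORRESPONDING FIELDS" performs on every row. The table and values below are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE eabl (ablbelnr TEXT, adat TEXT)")
conn.execute("INSERT INTO eabl VALUES ('0001', '2011-10-13')")

# Positional fetch: fields land in target order, no name resolution.
tup = conn.execute("SELECT ablbelnr, adat FROM eabl").fetchone()

# Name-matched fetch: each access resolves the column name at run time,
# the analogue of per-field CORRESPONDING FIELDS matching.
conn.row_factory = sqlite3.Row
row = conn.execute("SELECT ablbelnr, adat FROM eabl").fetchone()

print(tup[0], row["ablbelnr"])
```

Both return the same data; the point of the advice is that the positional form skips the repeated name matching. -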
Improving the performance of Stored Procedure
I need to improve the performance of this stored procedure, which hits two tables holding about 24000 rows.
USE [trouble_database]
GO
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER OFF
GO
ALTER PROCEDURE [dbo].[the_trouble_StoredProcedure]
@troubleMedicatiotrouble_durationl int
as
declare @trouble_refil table
( troubleMedicatiotrouble_durationl int,
trouble_durationl int )
declare @nRefillDispense int
if exists (select ml.lid from trouble_MedicaticDirect md
inner join trouble_Medicatication ml on ml.troubleMedicatiotrouble_durationl = md.lID
where isnull(md.trouble_bRefillDisp, 0) = 1
and ml.lID <> (Select MIN(lID) from trouble_Medicatication where troubleMedicatiotrouble_durationl = @troubleMedicatiotrouble_durationl))
begin
select @nRefillDispense = isnull(nRefillDispense,9) from trouble_MedicaticDirect where lID=@troubleMedicatiotrouble_durationl
insert @trouble_refil (trouble_durationl)
select
Case szIncrement
when 'Day(s)' then isnull(trouble_durationl,0)
when 'Week(s)' then isnull(trouble_durationl,0) * 7
when 'Month(s)' then isnull(trouble_durationl,0) * 30
when 'Year(s)' then isnull(trouble_durationl,0) * 365
else 0
end as trouble_durationl
from
trouble_Medicatication where
troubleMedicatiotrouble_durationl=@troubleMedicatiotrouble_durationl
and lID<>(Select min(lID) from trouble_Medicatication where troubleMedicatiotrouble_durationl=@troubleMedicatiotrouble_durationl)
select @nRefillDispense as nRefillDispense, sum(trouble_durationl)as trouble_durationl from @trouble_refil
end
else
select 0 as nRefillDispense, 0 as trouble_durationl
You can use the execution plan:
http://stackoverflow.com/questions/7359702/how-do-i-obtain-a-query-execution-plan
and add indexes.
Indexes on the fields your queries filter on can improve performance greatly!
You can use the statistics for building indexes:
http://www.mssqltips.com/sqlservertip/2979/querying-sql-server-index-statistics/
Tzuri Ben Ezra