RE: Need help to improve performance!!
Hi Experts,
There is a standard SAP tcode FPREPT which is used to re-print a receipt. The query execution takes 5+ minutes.
Can anybody suggest the best way to improve this, and help me with any SAP note available for the same?
vishal
Hi,
Check this note
Note 607651 - FPREPT/FPY1: Performance for receipt number assignment
It is an old one for release 471 (FI-CA)
What is your release ?
Regards
Similar Messages
-
EP6 sp12 Performance Issue, Need help to improve performance
We have a Portal development environment with EP6.0 sp12.
What we are experiencing is a performance issue. It's not extremely slow, but slow compared to normal (compared to our prod box). For example, after entering the username and password and clicking the <Log on> button, it takes more than 10 secs for the first home page to appear. Also, we have currently hooked the Portal to 3 xApps systems and one BW system. The time taken for a BW query to appear (with selection screen) is also more than 10 secs. However, access to one other xApps system is comparatively faster.
Is there a simple-to-use guide (not a very elaborate one) with step-by-step guidance to immediately improve the performance of the Portal?
A simple guide, easy to implement, with immediate effect is what we are looking for in the short term.
Thanks
Arunabha
Hi Eric,
I have searched but didn't find the Portal Tuning and Optimization Guide you suggested. Can you help me find it?
Subrato,
This is good and I will obviously read through it. The issue is that it covers only the network.
Do you know of any other guide, a very basic one (maybe 10 steps) that shows the process step by step? That would be very helpful. I already have some information from the thread Portal Performance - page loads slow, client cache reset/cleared too often
But I am really looking for an answer (steps to do it quickly and effectively) instead of a list of various guides.
It would be very helpful if you or anybody who has actually done some performance tuning could send a basic list of steps that I can do immediately, instead of my reading through these large guides.
I know I am looking for a shortcut, but this is the need of the hour.
Thanks
Arun -
Need help in improving performance of prorating quantities to stores for existing orders
I have code written to allocate quantities to stores for an existing order. Suppose there is a supplier order with a quantity of 100 that needs to be distributed among 4 stores with demands of 50, 40, 30 and 20. Since the total demand is not equal to the available quantity, the available quantity needs to be allocated to the stores using an algorithm.
The algorithm allocates to the stores in small pieces of innersize. Innersize is the
quantity within a pack of packs, i.e. a pack has 4 pieces and each piece internally has 10 pieces;
this 10 is called the innersize.
While allocating, each store is first given a quantity of innersize, and this looping continues
until the available quantity is exhausted.
Ex:
store1=10
store2=10
store3=10
store4=10
second time:
store1=10(old)+10
store2=10(old)+10
store3=10(old)+10
store4=10(old)+10--demand fulfilled
third time
store1=20(old)+10
store2=20(old)+10
-- available quantity is over and hence stopped.
My code below-
=================================================
int prorate_allocation()
char *function = "prorate_allocation";
long t_cnt_st;
int t_innersize;
int t_qty_ordered;
int t_cnt_lp;
bool t_complete;
sql_cursor alloc_cursor;
EXEC SQL DECLARE c_order CURSOR FOR -- cursor to get orders, item in that, inner size and available qty.
SELECT oh.order_no,
ol.item,
isc.inner_pack_size,
ol.qty_ordered
FROM ABRL_ALC_CHG_TEMP_ORDHEAD oh,
ordloc ol,
item_supp_country isc
WHERE oh.order_no=ol.order_no
AND oh.supplier=isc.supplier
and ol.item=isc.item
AND EXISTS (SELECT 1 FROM abrl_alc_chg_details aacd WHERE oh.order_no=aacd.order_no)
AND ol.qty_ordered>0;
char v_order_no[10];
char v_item[25];
double v_innersize;
char v_qty_ordered[12];
char v_alloc_no[11];
char v_location[10];
char v_qty_allocated[12];
int *store_quantities;
bool *store_processed_flag;
EXEC SQL OPEN c_order;
if (SQL_ERROR_FOUND)
sprintf(err_data,"CURSOR OPEN: cursor=c_order");
strcpy(table,"ORDHEAD, ORDLOC, ITEM_SUPP_COUNTRY");
WRITE_ERROR(SQLCODE,function,table,err_data);
return(-1);
EXEC SQL ALLOCATE :alloc_cursor;
while(1)
EXEC SQL FETCH c_order INTO :v_order_no,
:v_item,
:v_innersize,
:v_qty_ordered;
if (SQL_ERROR_FOUND)
sprintf(err_data,"CURSOR FETCH: cursor=c_order");
strcpy(table,"ORDHEAD, ORDLOC, ITEM_SUPP_COUNTRY");
WRITE_ERROR(SQLCODE,function,table,err_data);
return(-1);
if (NO_DATA_FOUND) break;
t_qty_ordered =atoi(v_qty_ordered);
t_innersize =(int)v_innersize;
t_cnt_lp = t_qty_ordered/t_innersize;
t_complete =FALSE;
EXEC SQL SELECT COUNT(*) INTO :t_cnt_st
FROM abrl_alc_chg_ad ad,
alloc_header ah
WHERE ah.alloc_no=ad.alloc_no
AND ah.order_no=:v_order_no
AND ah.item=:v_item
AND ad.qty_allocated!=0;
if SQL_ERROR_FOUND
sprintf(err_data,"SELECT: ALLOC_DETAIL, count = %ld\n",t_cnt_st); /* t_cnt_st is a long, not a string */
strcpy(table,"ALLOC_DETAIL");
WRITE_ERROR(SQLCODE,function,table,err_data);
return(-1);
if (t_cnt_st>0)
store_quantities=(int *) calloc(t_cnt_st,sizeof(int));
store_processed_flag=(bool *) calloc(t_cnt_st,sizeof(bool));
EXEC SQL EXECUTE
BEGIN
OPEN :alloc_cursor FOR SELECT ad.alloc_no,
ad.to_loc,
ad.qty_allocated
FROM alloc_header ah,
abrl_alc_chg_ad ad
WHERE ah.alloc_no=ad.alloc_no
AND ah.item=:v_item
AND ah.order_no=:v_order_no
order by ad.qty_allocated desc;
END;
END-EXEC;
while (t_cnt_lp>0)
EXEC SQL WHENEVER NOT FOUND DO break;
for(int i=0;i<t_cnt_st;i++)
EXEC SQL FETCH :alloc_cursor INTO :v_alloc_no,
:v_location,
:v_qty_allocated;
if (store_quantities[i]!=atoi(v_qty_allocated)) /* compare the numeric value; casting the char array to int compares the pointer */
store_quantities[i]=store_quantities[i]+t_innersize;
t_cnt_lp--;
if (t_cnt_lp==0)
EXEC SQL CLOSE :alloc_cursor;
break;
else
if(store_processed_flag[i]==FALSE)
store_processed_flag[i]=TRUE;
t_cnt_st--;
if (t_cnt_st==0)
t_complete=TRUE;
break;
if (t_complete==TRUE && t_cnt_lp!=0)
for (int i=0;i<t_cnt_st;i++)
store_quantities[i]=store_quantities[i]+t_innersize; /* use the int innersize, as in the first loop */
t_cnt_lp--;
if (t_cnt_lp==0)
EXEC SQL CLOSE :alloc_cursor;
break;
}/*END OF WHILE*/
EXEC SQL EXECUTE
BEGIN
OPEN :alloc_cursor FOR SELECT ad.alloc_no,
ad.to_loc,
ad.qty_allocated
FROM alloc_header ah,
abrl_alc_chg_ad ad
WHERE ah.alloc_no=ad.alloc_no
AND ah.item=:v_item
AND ah.order_no=:v_order_no
order by ad.qty_allocated desc;
END;
END-EXEC;
EXEC SQL WHENEVER NOT FOUND DO break;
for (int i=0;i<t_cnt_st;i++)
EXEC SQL FETCH :alloc_cursor INTO :v_alloc_no,
:v_location,
:v_qty_allocated;
EXEC SQL UPDATE abrl_alc_chg_ad
SET qty_allocated=:store_quantities[i]
WHERE to_loc=:v_location
AND alloc_no=:v_alloc_no;
if SQL_ERROR_FOUND
sprintf(err_data,"UPDATE: ALLOC_DETAIL, location = %s , alloc_no =%s\n", v_location,v_alloc_no);
strcpy(table,"ALLOC_DETAIL");
WRITE_ERROR(SQLCODE,function,table,err_data);
return(-1);
EXEC SQL UPDATE ABRL_ALC_CHG_DETAILS
SET PROCESSED='Y'
WHERE LOCATION=:v_location
AND alloc_no=:v_alloc_no
AND PROCESSED IN ('E','U');
if SQL_ERROR_FOUND
sprintf(err_data,"UPDATE: ABRL_ALC_CHG_DETAILS, location = %s , alloc_no =%s\n", v_location,v_alloc_no);
strcpy(table,"ABRL_ALC_CHG_DETAILS");
WRITE_ERROR(SQLCODE,function,table,err_data);
return(-1);
EXEC SQL COMMIT;
EXEC SQL CLOSE :alloc_cursor;
free(store_quantities);
free(store_processed_flag);
}/*END OF IF*/
}/*END OF OUTER WHILE LOOP*/
EXEC SQL CLOSE c_order;
if SQL_ERROR_FOUND
sprintf(err_data,"CURSOR CLOSE: cursor = c_order");
strcpy(table,"ORDHEAD, ORDLOC, ITEM_SUPP_COUNTRY");
WRITE_ERROR(SQLCODE,function,table,err_data);
return(-1);
return(0);
} /* end prorate_allocation*/ -
Need help in improving the performance for the sql query
Thanks in advance for helping me.
I was trying to improve the performance of the below query. I tried the following methods: used merge instead of update, used bulk collect / Forall update, used ordered hint, created a temp table and updated the target table using the same. The methods which I used did not improve any performance. The data count which is updated in the target table is 2 million records and the target table has 15 million records.
Any suggestions or solutions for improving performance are appreciated
SQL query:
update targettable tt
set mnop = 'G'
where (x, y, z) in
  (select a.x, a.y, a.z
   from table1 a
   where (a.x, a.y, a.z) not in
     (select b.x, b.y, b.z
      from table2 b
      where 'O' = b.defg)
   and mnop = 'P'
   and hijkl = 'UVW');
987981 wrote:
I was trying to improve the performance of the below query. I tried the following methods: used merge instead of update, used bulk collect / Forall update, used ordered hint, created a temp table and updated the target table using the same. The methods which I used did not improve any performance.
And that meant what? Surely if you spent all that time and effort to try various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)
The data count which is updated in the target table is 2 million records and the target table has 15 million records.
Tables have rows, btw, not records. Database people tend to get upset when rows are called records, as records exist in files and a database is not a mere collection of records and files.
The failure to find a single faster method with the approaches you tried, points to that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
The very first step in dealing with any software engineering problem, is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
Part of identifying the performance problem, is understanding the workload. Just what does the code task the database to do?
From your comments, it needs to find 2 million rows from 15 million rows. Change these rows. And then write 2 million rows back to disk.
That is not a small workload. Simple example: let's say that the 2 million row find costs 1 ms/row and the 2 million row write also costs 1 ms/row. That means a 66-minute workload. Due to the number of rows, an increase in time/row either way will potentially have a 2-million-fold impact.
So where is the performance problem? Time spent finding the 2 million rows (where other tables need to be read, indexes used, etc.)? Time spent writing the 2 million rows (where triggers need to be fired and indexes maintained)? Both? -
Need Help with site performance
Looking for Help..
In particular we would like help from experts in ssl, browser experts
(how browsers handle encryption, de-encryption), iPlanet experts, Sun
crypto card experts, webdesign for performance experts.
Our website is hosted on a Sun Enterprise 450 server running Solaris v7
The machine is hosted at Exodus. These are the following software
servers that perform the core functions of the website:
iPlanet Web Server v. 4.1 ( Java server is enabled)
IBM db2 v. 7.1
SAA uses SmartSite, a proprietary system developed by Adaptations
(www.adaptations.com). At the level of individual HTML pages, SmartSite
uses
proprietary markup tags and Tcl code embedded in HTML comments to
publish
content stored in a database. SmartSite allows for control over when,
how and
to whom content appears. It is implemented as a java servlet which
stores its data on the db2 server and uses a tcl like scripting language
(jacl - originally developed by Sun)
CHALLENGE:
In late June this year we launched a redesigned website with ssl enabled
on all pages. (a departure from the previous practice of maintaining
most of the site on non-secure server and only some pages on a ssl
server). We also introduced a new website design with greater use of
images, nested tables and javascript.
We have found that the introduction of the "secure everywhere" policy
has had a detrimental effect on the web site user experience, due to
decreased web server and web browser performance. In other words, the
site got slower. Specifically, we have
identified the following problems:
1. Web server performance degradation. Due to unidentified increases in
web
server resource demand caused (probably) by the global usage of SSL, the
web
server experienced instability. This was resolved by increasing the
amount of
operating system (OS) resources available to the server.
2. Web browser performance degradation. Several categories are noted:
2.1. Page load and rendering. Page load and rendering time has
increased dramatically on the new site, particularly in the case of
Netscape Navigator. Some of this may be attributed to the usage of SSL.
Particularly, the rendering time of complex tables and images may be
markedly slower on slower client machines.
2.2. Non-caching of content. Web browsers should not cache any content
derived from https on the local hard disk. The amount of RAM caching
ability varies from browser to browser, and machine to machine, but is
generally much less than for disk caching. In addition, some browsers may
not cache content in RAM cache at all. The overall effect of reduced
caching is increased accesses to the web server to retrieve content.
This
will degrade server performance, as it services more content, and also
web browser performance, as it will spend more time waiting for page
content before and while rendering it.
Things that have been attempted to improve performance:
1) Reducing javascript redundancy (less compiling time required)
2) Optimizing HTML code (taking out nested tables, hard coding in specs
where possible to reduce compiling time)
3) Optimizing page content assembly (reducing routine redundancy,
enabling things to be compiled ahead of time)
4) Installing an encryption card (to speed page encryption rate) - was
removed as it did not seem to improve performance, but seemed to have
degraded performance
Fred Martinez wrote:
Looking for Help..
In particular we would like help from experts in ssl, browser experts
(how browsers handle encryption, de-encryption), iPlanet experts, Sun
crypto card experts, webdesign for performance experts.
Our website is hosted on a Sun Enterprise 450 server running Solaris v7
The machine is hosted at Exodus. These are the following software
servers that perform the core functions of the website:
iPlanet Web Server v. 4.1 ( Java server is enabled)
IBM db2 v. 7.1
SAA uses SmartSite, a proprietary system developed by Adaptations
(www.adaptations.com).
Since I don't see iPlanet's application server in the mix here, this (a newsgroup for performance questions for iAS) is not the newsgroup to ask in.
Kent -
Needed help to improve the performance of a select query?
Hi,
I have been preparing a report which involves data fetched from 4 to 5 different tables, and calculations also have to be performed on some columns.
I planned to write a single cursor to populate one temp table. I have used inline views and EXISTS frequently in the select query. Please go through the query and suggest a better way to restructure it.
cursor c_acc_pickup_incr(p_branch_code varchar2, p_applDate date, p_st_dt date, p_ed_dt date) is
select sca.branch_code "BRANCH",
sca.cust_ac_no "ACCOUNT",
to_char(p_applDate, 'YYYYMM') "YEARMONTH",
sca.ccy "CURRENCY",
sca.account_class "PRODUCT",
sca.cust_no "CUSTOMER",
sca.ac_desc "DESCRIPTION",
null "LOW_BAL",
null "HIGH_BAL",
null "AVG_CR_BAL",
null "AVG_DR_BAL",
null "CR_DAYS",
null "DR_DAYS",
--null "CR_TURNOVER",
--null "DR_TURNOVER",
null "DR_OD_DAYS",
(select sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
(case when (p_applDate >= sca.tod_limit_start_date and
p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)) then
sca.tod_limit else 0 end) dd
from getm_facility gf, sttm_cust_account_linkages scal
where gf.line_code || gf.line_serial = scal.linked_ref_no
and cust_ac_no = sca.cust_ac_no) "OD_LIMIT",
--sc.credit_rating "CR_GRADE",
null "AVG_NET_BAL",
null "UNAUTH_OD_AMT",
sca.acy_blocked_amount "AMT_BLOCKED",
(select sum(amt)
from ictb_entries_history ieh
where ieh.acc = sca.cust_ac_no
and ieh.brn = sca.branch_code
and ieh.drcr = 'D'
and ieh.liqn = 'Y'
and ieh.entry_passed = 'Y'
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select * from ictm_pr_int ipi, ictm_rule_frm irf
where ipi.product_code = ieh.prod
and ipi.rule = irf.rule_id
and irf.book_flag = 'B')) "DR_INTEREST",
(select sum(amt)
from ictb_entries_history ieh
where ieh.acc = sca.cust_ac_no
and ieh.brn = sca.branch_code
and ieh.drcr = 'C'
and ieh.liqn = 'Y'
and ieh.entry_passed = 'Y'
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select * from ictm_pr_int ipi, ictm_rule_frm irf
where ipi.product_code = ieh.prod
and ipi.rule = irf.rule_id
and irf.book_flag = 'B')) "CR_INTEREST",
(select sum(amt) from ictb_entries_history ieh
where ieh.brn = sca.branch_code
and ieh.acc = sca.cust_ac_no
and ieh.ent_dt between p_st_dt and p_ed_dt
and exists (
select product_code
from ictm_product_definition ipd
where ipd.product_code = ieh.prod
and ipd.product_type = 'C')) "FEE_INCOME",
sca.record_stat "ACC_STATUS",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and not exists (select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null))
then 1 else 0 end "NEW_ACC_FOR_THE_MONTH",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
and not exists (select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null))
then 1 else 0 end "NEW_ACC_FOR_NEW_CUST",
(select 1 from dual
where exists (select 1 from ictm_td_closure_renew itcr
where itcr.brn = sca.branch_code
and itcr.acc = sca.cust_ac_no
and itcr.renewal_date = sysdate)
or exists (select 1 from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null)) "RENEWED_OR_ROLLOVER",
(select maturity_date from ictm_acc ia
where ia.brn = sca.branch_code
and ia.acc = sca.cust_ac_no) "MATURITY_DATE",
sca.ac_stat_no_dr "DR_DISALLOWED",
sca.ac_stat_no_cr "CR_DISALLOWED",
sca.ac_stat_block "BLOCKED_ACC", --Not Reqd
sca.ac_stat_dormant "DORMANT_ACC",
sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
sca.ac_stat_frozen "FROZEN_ACC",
sca.ac_open_date "ACC_OPENING_DT",
sca.address1 "ADD_LINE_1",
sca.address2 "ADD_LINE_2",
sca.address3 "ADD_LINE_3",
sca.address4 "ADD_LINE_4",
sca.joint_ac_indicator "JOINT_ACC",
sca.acy_avl_bal "CR_BAL",
0 "DR_BAL",
0 "CR_BAL_LCY",
0 "DR_BAL_LCY",
null "YTD_CR_MOVEMENT",
null "YTD_DR_MOVEMENT",
null "YTD_CR_MOVEMENT_LCY",
null "YTD_DR_MOVEMENT_LCY",
null "MTD_CR_MOVEMENT",
null "MTD_DR_MOVEMENT",
null "MTD_CR_MOVEMENT_LCY",
null "MTD_DR_MOVEMENT_LCY",
'N' "BRANCH_TRFR", --New
sca.provision_amount "PROVISION_AMT",
sca.account_type "ACCOUNT_TYPE",
nvl(sca.tod_limit, 0) "TOD_LIMIT",
nvl(sca.sublimit, 0) "SUB_LIMIT",
nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
from sttm_cust_account sca, sttm_customer sc
where sca.branch_code = p_branch_code
and sca.cust_no = sc.customer_no
and ( exists (select 1 from actb_daily_log adl
where adl.ac_no = sca.cust_ac_no
and adl.ac_branch = sca.branch_code
and adl.trn_dt = p_applDate
and adl.auth_stat = 'A')
or exists (select 1 from catm_amount_blocks cab
where cab.account = sca.cust_ac_no
and cab.branch = sca.branch_code
and cab.effective_date = p_applDate
and cab.auth_stat = 'A')
or exists (select 1 from ictm_td_closure_renew itcr
where itcr.acc = sca.cust_ac_no
and itcr.brn = sca.branch_code
and itcr.renewal_date = p_applDate)
or exists (select 1 from sttm_ac_stat_change sasc
where sasc.cust_ac_no = sca.cust_ac_no
and sasc.branch_code = sca.branch_code
and sasc.status_change_date = p_applDate
and sasc.auth_stat = 'A')
or exists (select 1 from cstb_acc_brn_trfr_log cabtl
where cabtl.branch_code = sca.branch_code
and cabtl.cust_ac_no = sca.cust_ac_no
and cabtl.process_status = 'S'
and cabtl.process_date = p_applDate)
or exists (select 1 from sttbs_provision_history sph
where sph.branch_code = sca.branch_code
and sph.cust_ac_no = sca.cust_ac_no
and sph.esn_date = p_applDate)
or exists (select 1 from sttms_cust_account_dormancy scad
where scad.branch_code = sca.branch_code
and scad.cust_ac_no = sca.cust_ac_no
and scad.dormancy_start_dt = p_applDate)
or sca.maker_dt_stamp = p_applDate
or sca.status_since = p_applDate );
l_tb_acc_det ty_tb_acc_det_int;
l_brnrec cvpks_utils.rec_brnlcy;
l_acbr_lcy sttms_branch.branch_lcy%type;
l_lcy_amount actbs_daily_log.lcy_amount%type;
l_xrate number;
l_dt_rec sttm_dates%rowtype;
l_acc_rec sttm_cust_account%rowtype;
l_acc_stat_row ty_r_acc_stat;
Edited by: user13710379 on Jan 7, 2012 12:18 AM
I see it more like shown below (possibly with no inline selects).
Try to get rid of the remaining inline selects (left as an exercise ;) )
and rewrite the traditional joins as ANSI joins, as problems might arise using mixed syntax. I have to leave, so I don't have time to complete the query.
select sca.branch_code "BRANCH",
sca.cust_ac_no "ACCOUNT",
to_char(p_applDate, 'YYYYMM') "YEARMONTH",
sca.ccy "CURRENCY",
sca.account_class "PRODUCT",
sca.cust_no "CUSTOMER",
sca.ac_desc "DESCRIPTION",
null "LOW_BAL",
null "HIGH_BAL",
null "AVG_CR_BAL",
null "AVG_DR_BAL",
null "CR_DAYS",
null "DR_DAYS",
-- null "CR_TURNOVER",
-- null "DR_TURNOVER",
null "DR_OD_DAYS",
w.dd "OD_LIMIT",
-- sc.credit_rating "CR_GRADE",
null "AVG_NET_BAL",
null "UNAUTH_OD_AMT",
sca.acy_blocked_amount "AMT_BLOCKED",
x.dr_int "DR_INTEREST",
x.cr_int "CR_INTEREST",
y.fee_amt "FEE_INCOME",
sca.record_stat "ACC_STATUS",
case when trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and not exists(select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null
then 1
else 0
end "NEW_ACC_FOR_THE_MONTH",
case when (trunc(sca.ac_open_date,'MM') = trunc(p_applDate,'MM')
and trunc(sc.cif_creation_date,'MM') = trunc(p_applDate,'MM')
and not exists(select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null
then 1
else 0
end "NEW_ACC_FOR_NEW_CUST",
(select 1 from dual
where exists(select 1
from ictm_td_closure_renew itcr
where itcr.brn = sca.branch_code
and itcr.acc = sca.cust_ac_no
and itcr.renewal_date = sysdate
or exists(select 1
from ictm_tdpayin_details itd
where itd.multimode_payopt = 'Y'
and itd.brn = sca.branch_code
and itd.acc = sca.cust_ac_no
and itd.multimode_offset_brn is not null
and itd.multimode_tdoffset_acc is not null
) "RENEWED_OR_ROLLOVER",
m.maturity_date "MATURITY_DATE",
sca.ac_stat_no_dr "DR_DISALLOWED",
sca.ac_stat_no_cr "CR_DISALLOWED",
-- sca.ac_stat_block "BLOCKED_ACC", --Not Reqd
sca.ac_stat_dormant "DORMANT_ACC",
sca.ac_stat_stop_pay "STOP_PAY_ACC", --New
sca.ac_stat_frozen "FROZEN_ACC",
sca.ac_open_date "ACC_OPENING_DT",
sca.address1 "ADD_LINE_1",
sca.address2 "ADD_LINE_2",
sca.address3 "ADD_LINE_3",
sca.address4 "ADD_LINE_4",
sca.joint_ac_indicator "JOINT_ACC",
sca.acy_avl_bal "CR_BAL",
0 "DR_BAL",
0 "CR_BAL_LCY",
0 "DR_BAL_LCY",
null "YTD_CR_MOVEMENT",
null "YTD_DR_MOVEMENT",
null "YTD_CR_MOVEMENT_LCY",
null "YTD_DR_MOVEMENT_LCY",
null "MTD_CR_MOVEMENT",
null "MTD_DR_MOVEMENT",
null "MTD_CR_MOVEMENT_LCY",
null "MTD_DR_MOVEMENT_LCY",
'N' "BRANCH_TRFR", --New
sca.provision_amount "PROVISION_AMT",
sca.account_type "ACCOUNT_TYPE",
nvl(sca.tod_limit, 0) "TOD_LIMIT",
nvl(sca.sublimit, 0) "SUB_LIMIT",
nvl(sca.tod_limit_start_date, global.min_date) "TOD_START_DATE",
nvl(sca.tod_limit_end_date, global.max_date) "TOD_END_DATE"
from sttm_cust_account sca,
sttm_customer sc,
(select sca.cust_ac_no,
sum(gf.limit_amount * (scal.linkage_percentage / 100)) +
case when p_applDate >= sca.tod_limit_start_date
and p_applDate <= nvl(sca.tod_limit_end_date, p_applDate)
then sca.tod_limit else 0
end
) dd
from sttm_cust_account sca,
getm_facility gf,
sttm_cust_account_linkages scal
where gf.line_code || gf.line_serial = scal.linked_ref_no
and cust_ac_no = sca.cust_ac_no
group by sca.cust_ac_no
) w,
(select acc,
brn,
sum(decode(drcr,'D',amt)) dr_int,
sum(decode(drcr,'C',amt)) cr_int
from ictb_entries_history ieh
where ent_dt between p_st_dt and p_ed_dt
and drcr in ('C','D')
and liqn = 'Y'
and entry_passed = 'Y'
and exists(select null
from ictm_pr_int ipi,
ictm_rule_frm irf
where ipi.rule = irf.rule_id
and ipi.product_code = ieh.prod
and irf.book_flag = 'B'
group by acc,brn
) x,
(select acc,
brn,
sum(amt) fee_amt
from ictb_entries_history ieh
where ieh.ent_dt between p_st_dt and p_ed_dt
and exists(select product_code
from ictm_product_definition ipd
where ipd.product_code = ieh.prod
and ipd.product_type = 'C'
group by acc,brn
) y,
ictm_acc m,
(select sca.cust_ac_no,
sca.branch_code
coalesce(nvl2(coalesce(t1.ac_no,t1.ac_branch),'exists',null),
nvl2(coalesce(t2.account,t2.account),'exists',null),
nvl2(coalesce(t3.acc,t3.brn),'exists',null),
nvl2(coalesce(t4.cust_ac_no,t4.branch_code),'exists',null),
nvl2(coalesce(t5.cust_ac_no,t5.branch_code),'exists',null),
nvl2(coalesce(t6.cust_ac_no,t6.branch_code),'exists',null),
nvl2(coalesce(t7.cust_ac_no,t7.branch_code),'exists',null),
decode(sca.maker_dt_stamp,p_applDate,'exists'),
decode(sca.status_since,p_applDate,'exists')
) existence
from sttm_cust_account sca
left outer join
(select ac_no,ac_branch
from actb_daily_log
where trn_dt = p_applDate
and auth_stat = 'A'
) t1
on (sca.cust_ac_no = t1.ac_no
and sca.branch_code = t1.ac_branch
left outer join
(select account,account
from catm_amount_blocks
where effective_date = p_applDate
and auth_stat = 'A'
) t2
on (sca.cust_ac_no = t2.account
and sca.branch_code = t2.branch
left outer join
(select acc,brn
from ictm_td_closure_renew itcr
where renewal_date = p_applDate
) t3
on (sca.cust_ac_no = t3.acc
and sca.branch_code = t3.brn
left outer join
(select cust_ac_no,branch_code
from sttm_ac_stat_change
where status_change_date = p_applDate
and auth_stat = 'A'
) t4
on (sca.cust_ac_no = t4.cust_ac_no
and sca.branch_code = t4.branch_code
left outer join
(select cust_ac_no,branch_code
from cstb_acc_brn_trfr_log
where process_date = p_applDate
and process_status = 'S'
) t5
on (sca.cust_ac_no = t5.cust_ac_no
and sca.branch_code = t5.branch_code
left outer join
(select cust_ac_no,branch_code
from sttbs_provision_history
where esn_date = p_applDate
) t6
on (sca.cust_ac_no = t6.cust_ac_no
and sca.branch_code = t6.branch_code)
left outer join
(select cust_ac_no,branch_code
from sttms_cust_account_dormancy
where dormancy_start_dt = p_applDate
) t7
on (sca.cust_ac_no = t7.cust_ac_no
and sca.branch_code = t7.branch_code)
) z
where sca.branch_code = p_branch_code
and sca.cust_no = sc.customer_no
and sca.cust_ac_no = w.cust_ac_no
and sca.cust_ac_no = x.acc
and sca.branch_code = x.brn
and sca.cust_ac_no = y.acc
and sca.branch_code = y.brn
and sca.cust_ac_no = m.acc
and sca.branch_code = m.brn
and sca.cust_ac_no = z.cust_ac_no
and sca.branch_code = z.branch_code
and z.existence is not null
Regards
Etbin -
Need help troubleshooting poor performance loading cubes
I need ideas on how to troubleshoot performance issues we are having when loading our infocube. There are eight infopackages running in parallel to update the cube. Each infopackage can execute three datapackages at a time. The load performance is erratic. For example, if an infopackage needs five datapackages to load the data, data package 1 is sometimes the last one to complete. Sometimes the slow performance is in the Update Rules processing and other times it is on the insert into the fact table.
Sometimes there are no performance problems and the load completes in 20 mins. Other times, the loads complete in 1.5+ hours.
Does anyone know how to tell which server a data package was executed on? Can someone tell me any transactions to use to monitor the loads while they are running to help pinpoint what the bottleneck is?
Thanks.
Regards,
Ryan
Some suggestions:
1. Collect BW statistics for all the cubes. Go to RSA1, go to the cube, and on the toolbar choose Tools - BW Statistics. Check the boxes to collect both OLAP and WHM.
2. Activate all the technical content cubes and reports and relevant objects. You will find them if you search with 0BWTC* in the business content.
3. Start loading data to the Technical content cubes.
4. There are a few reports delivered on top of these statistics cubes; run them and you will get some ideas.
5. Try to schedule sequentially instead of parallel loads.
Ravi Thothadri -
This query is running slow... please help to improve performance
SELECT Listnontranstextid,
Listnontransshort,
Listnontransmedium,
Listnontransextended
FROM (WITH TEXT_T
AS (SELECT /*+ index(TT pk_text_translation) */TT.TEXTID,
TT.short,
TT.medium,
TT.extended
FROM TEXT_TRANSLATION TT
WHERE TT.Active = 1
AND ( TT.Short <> 'Null'
OR TT.Medium <> 'Null'
OR TT.Extended <> 'Null')
AND TT.Languageid = @Langid),
FUNC AS (SELECT FN.ID
FROM Function_ Fn
INNER JOIN Function_Type Fnty
ON Fn.Functiontype = Fnty.Functiontype
AND Fnty.Active = 1
INNER JOIN Operation_Step_Function Osf
ON (Osf.Functionid = Fn.Id)
AND Osf.Active = 1
INNER JOIN Operation_Step Os
ON Os.Id = Osf.Operationstepid
AND Os.Active = 1
INNER JOIN Operation Op
ON Op.Id = Os.Operationid
AND op.defaultoperationrevision = 1
AND Op.Active = 1
AND Op.Revisionstatusid NOT IN (2)
-- 2 means Operation Status = Cancelled
WHERE FN.ACTIVE = 1)
SELECT TT.Textid AS Listnontranstextid,
TT.Short AS Listnontransshort,
TT.Medium AS Listnontransmedium,
TT.Extended AS Listnontransextended
FROM function_translation ft
INNER JOIN TEXT_T TT
ON (TT.Textid = Ft.Textid)
INNER JOIN FUNC F
ON (F.ID = Ft.Functionid)
WHERE Ft.ACTIVE = 1
UNION
SELECT /*+ index(Forout IF_FUNCTION_OUTPUT_ROUTING_02) */ TT.Textid AS Listnontranstextid,
TT.Short AS Listnontransshort,
TT.Medium AS Listnontransmedium,
TT.Extended AS Listnontransextended
FROM Function_Output Fo
INNER JOIN Function_Output_Routing Forout
ON Forout.Functionoutputid = Fo.Id
INNER JOIN TEXT_T TT
ON TT.Textid = Forout.PromptTextid
INNER JOIN Function_Output_Routing_Type Fort
ON Fort.Id = Forout.Outputroutingtypeid
INNER JOIN Text_Translation Ttdt
ON Ttdt.Textid = Fort.Textid
AND Ttdt.Languageid = @Langid
AND UPPER (Ttdt.Extended) = ('USER')
INNER JOIN FUNC F
ON F.ID = FO.Functionid
UNION
SELECT TT.Textid AS Listnontranstextid,
TT.Short AS Listnontransshort,
TT.Medium AS Listnontransmedium,
TT.Extended AS Listnontransextended
FROM Function_Input Fi
INNER JOIN TEXT_T TT
ON (TT.Textid = Fi.Prompttextid)
INNER JOIN Function_Input_Source_Type Fist
ON Fist.Id = Fi.Inputsourcetypeid AND Fist.Active = 1
INNER JOIN Text_Translation Ttdt
ON Ttdt.Textid = Fist.Textid
AND Ttdt.Active = 1
AND Ttdt.Languageid = @Langid
AND UPPER (Ttdt.Extended) = ('USER')
INNER JOIN FUNC F
ON F.ID = FI.Functionid
UNION
SELECT TT.Textid AS Listnontranstextid,
TT.Short AS Listnontransshort,
TT.Medium AS Listnontransmedium,
TT.Extended AS Listnontransextended
FROM Function_Input_value Fiv
INNER JOIN function_input fi
ON fi.id = fiv.functioninputid
INNER JOIN TEXT_T TT
ON (TT.Textid = Fiv.textid)
INNER JOIN Function_Input_Source_Type Fist
ON Fist.Id = Fi.Inputsourcetypeid AND Fist.Active = 1
INNER JOIN Text_Translation Ttdt
ON Ttdt.Textid = Fist.Textid
AND Ttdt.Active = 1
AND Ttdt.Languageid = @Langid
AND UPPER (Ttdt.Extended) = ('USER')
INNER JOIN FUNC F
ON F.ID = FI.Functionid
UNION
SELECT TT.Textid AS Listnontranstextid,
TT.Short AS Listnontransshort,
TT.Medium AS Listnontransmedium,
TT.Extended AS Listnontransextended
FROM cob_t_ngmes_master_data ctnmt
INNER JOIN
text_translation tt
ON tt.textid = ctnmt.textid
WHERE tt.languageid = @Langid
UNION -- Swanand, PR 190540, Added this clause to get the reasoncodes
SELECT TT.Textid AS Listnontranstextid,
TT.Short AS Listnontransshort,
TT.Medium AS Listnontransmedium,
TT.Extended AS Listnontransextended
FROM Reason_Code RC
INNER JOIN Reason_Type RT
ON RT.ReasonType = RC.ReasonType
INNER JOIN TEXT_TRANSLATION TT1
ON TT1.textid = RT.textid
AND RT.ACTIVE = 1
AND TT1.ACTIVE = 1
AND ( TT1.Short <> 'Null'
OR TT1.Medium <> 'Null'
OR TT1.Extended <> 'Null')
INNER JOIN TEXT_T TT
ON TT.textid = RC.textid AND RC.ACTIVE = 1
WHERE TT1.Languageid = @Langid
UNION
SELECT TT.Textid AS Listnontranstextid,
TT.Short AS Listnontransshort,
TT.Medium AS Listnontransmedium,
TT.Extended AS Listnontransextended
FROM NSPT_T_Event_Type ET
INNER JOIN TEXT_TRANSLATION TT1 ON TT1.TextID =
ET.TextID AND TT1.ACTIVE = 1
INNER JOIN TEXT_T TT
ON TT.TextID = TT1.TextID
WHERE TT1.Languageid = @Langid
ORDER BY Listnontranstextid ASC) WHERE Listnontranstextid > @I_TextID
Edited by: 964145 on Oct 26, 2012 4:53 PM
Duplicate post? Query running slow....need performance tuning
-
Need pointers to improve performance of a select query to table vbrk
Hey Folks,
I have a query whose performance needs to be tuned, as such:
SELECT a~vbeln
a~fkart
a~waerk
a~fkdat
b~posnr
b~vgbel
b~vgpos
b~matnr
b~arktx
b~prctr
b~txjcd
INTO TABLE gi_billing_items
FROM vbrk AS a
INNER JOIN vbrp AS b
ON a~vbeln = b~vbeln
FOR ALL ENTRIES IN gi_sales_items
WHERE b~vgbel = gi_sales_items-vbeln
AND b~vgpos = gi_sales_items-posnr
AND b~matnr = gi_sales_items-matnr
AND b~werks = gi_sales_items-werks.
where
gi_sales_items is an internal table consisting of 278 entries.
The result set collected in table gi_billing_items is 200 records
The total execution time for this query for the afore given data is 72,983 ms with the average time/record being ~ 9,471 ms which is too high.
When I try to verify the explain plan of the query in ST05, in the access path I see that the performance of Query Block 1 is bad. Query Block 1 is of QBLOCK_TYPE UNIONA. It's the very first step in the query execution internally.
The indexes are defined on participating tables VBRK and VBRP as:
VBRK~0 MANDT,VBELN
VBRK~LOC MANDT,LCNUM
VBRP~0 MANDT,VBELN,POSNR
VBRP~Z01 FPLNR,MANDT
VBRP~Z02 MANDT,MATNR,WERKS
It's clear from the ST05, STAD and SE30 traces that there is a performance issue in this query. Does anyone have any pointers as to how to resolve this issue? Is there a protocol one needs to follow when using the "FOR ALL ENTRIES IN" clause? Or is there a need for any secondary indexes to be created?
Please let me know
Thanks and Best Regards,
Rashmi.
Hi,
Try using VBFA to get the invoice number and line item, and then use those values in VBRK...
* Declare the internal table for T_VBFA.
IF NOT gi_sales_items[] IS INITIAL.
SELECT VBELV
POSNV
VBELN
POSNN
VBTYP_N
INTO TABLE T_VBFA
FOR ALL ENTRIES IN gi_sales_items
WHERE VBELV = gi_sales_items-VBELN
AND POSNV = gi_sales_items-POSNR
AND VBTYP_N = 'M'. "Invoice (added this)
ENDIF.
* Add two columns to GI_SALES_ITEMS to store the VBELN and POSNN data from t_vbfa; let's assume they are VBELN_VF and POSNR_VF.
* Basically merge gi_sales_items and t_vbfa.
* Then use those fields in:
IF NOT GI_SALES_ITEMS[] IS INITIAL.
SELECT a~vbeln
a~fkart
a~waerk
a~fkdat
b~posnr
b~vgbel
b~vgpos
b~matnr
b~arktx
b~prctr
b~txjcd
INTO TABLE gi_billing_items
FROM vbrk AS a
INNER JOIN vbrp AS b
ON a~vbeln = b~vbeln
FOR ALL ENTRIES IN gi_sales_items
WHERE b~vbeln = gi_sales_items-vbeln_vf " Change here
AND b~posnr = gi_sales_items-posnr_vf " Change here
AND b~matnr = gi_sales_items-matnr
AND b~werks = gi_sales_items-werks.
ENDIF.
Thanks
Naren
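The merge step Naren leaves as a comment could look roughly like this (a minimal sketch; the fields VBELN_VF and POSNR_VF are his assumed additions to gi_sales_items, and the work area and field-symbol names are hypothetical):

```abap
FIELD-SYMBOLS: <fs_item> LIKE LINE OF gi_sales_items.
DATA: wa_vbfa LIKE LINE OF t_vbfa.

SORT t_vbfa BY vbelv posnv.
LOOP AT gi_sales_items ASSIGNING <fs_item>.
  READ TABLE t_vbfa INTO wa_vbfa
       WITH KEY vbelv = <fs_item>-vbeln
                posnv = <fs_item>-posnr
       BINARY SEARCH.
  IF sy-subrc = 0.
    <fs_item>-vbeln_vf = wa_vbfa-vbeln. "invoice number
    <fs_item>-posnr_vf = wa_vbfa-posnn. "invoice item
  ENDIF.
ENDLOOP.
```

Sorting once and reading with BINARY SEARCH keeps the merge linear instead of quadratic.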
Edited by: Narendran Muthukumaran on Oct 15, 2008 11:35 PM -
Explain plan : need advice in improving performance
Hi,
DB : 10.2.0.4
platform: Solaris
SGA: 8 G
one of my query is taking too much time, explain plan gives below output.
kindly advise what i can do to improve the performance
PLAN_TABLE_OUTPUT
Plan hash value: 430877948
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 8143 | 1081K| 31130 (1)| 00:06:14 |
|* 1 | FILTER | | | | | |
|* 2 | TABLE ACCESS BY INDEX ROWID| ID_TICKET_DETAILS | 8143 | 1081K| 494 (0)| 00:00:06 |
|* 3 | INDEX SKIP SCAN | TKT_IDX_21 | 1 | | 493 (0)| 00:00:06 |
|* 4 | TABLE ACCESS BY INDEX ROWID| ID_DELIVERY_DEBIT_SLIP_DETAIL | 1 | 34 | 3 (0)| 00:00:01 |
| 5 | NESTED LOOPS | | 2 | 124 | 7 (0)| 00:00:01 |
|* 6 | TABLE ACCESS FULL | ID_DELIVERY_DEBIT_SLIP_HEADER | 32243 | 881K| 2 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | DSD_DELIVERY_DEBIT_UKEY | 1 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter( NOT EXISTS (SELECT /*+ */ 0 FROM "ID_DELIVERY_DEBIT_SLIP_HEADER"
"ID_DELIVERY_DEBIT_SLIP_HEADER","ID_DELIVERY_DEBIT_SLIP_DETAIL" "ID_DELIVERY_DEBIT_SLIP_DETAIL" WHERE
"DSH_DOCUMENT_NUMBER"="DSD_DOCUMENT_NUMBER" AND "DSH_DOCUMENT_TYPE"="DSD_DOCUMENT_TYPE" AND
"DSH_COMPANY"="DSD_COMPANY" AND "DSD_TICKET_NUMBER" IS NOT NULL AND
LNNVL(:B1||:B2<>"DSD_AIRLINE"||"DSD_TICKET_NUMBER") AND "DSH_DELIVERY_DEBIT"='DEBIT'))
2 - filter((:1 IS NULL OR "TICKET_AIRLINE"=:2) AND "TICKET_REFERENCE_2" IS NULL AND
"TICKET_RECEIPT_NUMBER" IS NULL AND "TICKET_CARD_RECEIPT_NUMBER" IS NULL AND
"TICKET_SYSTEM_DOC_NUMBER" IS NULL)
3 - access("TICKET_REFERENCE_1" IS NULL)
filter("TICKET_REFERENCE_1" IS NULL AND TO_NUMBER("TICKET_COMPANY")=1)
4 - filter("DSD_TICKET_NUMBER" IS NOT NULL AND LNNVL(:B1||:B2<>"DSD_AIRLINE"||"DSD_TICKET_NUMBER"))
6 - filter("DSH_DELIVERY_DEBIT"='DEBIT')
7 - access("DSH_COMPANY"="DSD_COMPANY" AND "DSH_DOCUMENT_TYPE"="DSD_DOCUMENT_TYPE" AND
"DSH_DOCUMENT_NUMBER"="DSD_DOCUMENT_NUMBER")
Note
- SQL profile "SYS_SQLPROF_014f902e2ea4c002" used for this statement
Some comments:
it would be simpler to read the plan with its indentation preserved: you could use a fixed-width font
it's hard to tell much about the plan without seeing the corresponding query (though in this case the predicate section gives some information on the query - especially step 1)
the plan shows the use of a sql profile: so the CBO uses additional statistics to generate the plan
in step 3 there is an index skip scan: that's only a good idea if there are few distinct values for the leading column of the index and the selectivity of "TICKET_REFERENCE_1" IS NULL is good
in step 6 there is a Full Table Scan for the driving table of a nested loops join: the cost value for the scan is very small and so is the cost for the complete NL join - and that could be misleading
I would use the gather_plan_statistics hint to get a plan with rowsource statistics to check if the cardinalities the CBO works with are correct. If they are not you could try to disable the profile (or create a new profile; of course after checking who created the profile and for what reasons). With an accurate sql profile the CBO should have enough information to create an accurate plan in most cases.
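Martin's gather_plan_statistics suggestion looks roughly like this (a sketch; substitute the real statement for the elided query):

```sql
-- run the statement once with rowsource statistics enabled
SELECT /*+ gather_plan_statistics */ ...
-- then compare estimated (E-Rows) vs. actual (A-Rows) cardinalities
SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```

Large gaps between E-Rows and A-Rows point at the step where the optimizer's estimates go wrong.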
Regards
Martin -
Cursor For Loop SQL/PL right application? Need help with PL Performance
I will preface this post by saying that I am a novice Oracle PL user, so an overexplanation would not be an issue here.
Goal: Run a hierarchical query for over 120k rows and insert the output into Table 1. Currently I am using a Cursor For Loop that takes the first record and puts 2 columns in the "start with" section and "connect by" section. The hierarchical query runs and then it inserts the output into another table. I do this 120k times (I know it's not very efficient). Now the hierarchical query doesn't take too long (run by itself for many parts) but this loop process is taking over 9 hrs to run all 120k records. I am looking for a way to make this run faster. I've read about "Bulk collect" and "forall", but I am not understanding how they would help me in my specific case.
Is there any way I can rewrite the PL/SQL statement below with the Cursor For loop or with another methodology to accomplish the goal significantly quicker?
Below is the code ( I am leaving some parts out for space)
CREATE OR REPLACE PROCEDURE INV_BOM is
CURSOR DISPATCH_CSR IS
select materialid,plantid
from INV_SAP_BOM_MAKE_UNIQUE;
Begin
For Row_value in Dispatch_CSR Loop
begin
insert into Table 1
select column1
,column2
,column3
,column4
from( select ..
from table 3
start with materialid = row_value.materialid
and plantid = row_value.plantid
connect by prior plantid = row_value.plantid
exception...
end loop
exception..
commit
BluShadow:
The table that the cursor is pulling from ( INV_SAP_BOM_MAKE_UNIQUE) has only 2 columns
Materialid and Plantid
Example
Materialid Plantid
100-C 1000
100-B 1010
X-2 2004
I use the cursor to go down the list 1 by 1 and run a hierarchical query for each row. The only reason I do this is because I have 120,000 materialid,plantid combinations that I need to run and SQL has a limit of 1000 items in the "start with" if I'm semi-correct on that.
Structure of Table it would be inserted into ( Table 1) after Hierarchical SQL Statement runs:
Materialid Plantid User Create Column1 Col2
100-C 1000 25 EA
The Hierarchical query ran gives the 2 columns at the end.
I am looking for a way to either just run a quicker SQL or a more efficient way of running all 120,000 materialid, plantid rows through the Hierarchial Query.
Any Advice? I really appreciate it. Thank You. -
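A note on the 1000-item limit: it applies to literal IN lists, not to an IN subquery, so the 120k driving rows can feed START WITH directly and the loop disappears. A minimal sketch under that assumption (the column lists and the CONNECT BY condition are placeholders, since the post elides them):

```sql
INSERT INTO table1 (column1, column2, column3, column4)
SELECT ...
  FROM table3 t
 START WITH (t.materialid, t.plantid) IN
       (SELECT materialid, plantid FROM inv_sap_bom_make_unique)
CONNECT BY PRIOR ... = ...;
```

One set-based statement lets Oracle drive the hierarchy for all combinations in a single pass instead of 120,000 separate query executions.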
Need Help in improving logic for determining the range
Hi guys,
I need some help with my program logic to determine the range of values. The purpose of this program is to find which combinations have the lowest amplitude.
In the attachment is a set of numbers.
e.g. 10 0 is a combination.
For each combination, I will need to draw a graph of the data acquired using this combination vs. gray level. There are 255 gray levels. Every 5 gray levels I acquire a point, so I will have 52 points.
Next, I will take the maximum value - minimum value. This is the amplitude for the combination. I can do all these functions, however it is not practical for me to do it this way until 360 360. There are around 1,200+ combinations. It takes a long time since I need to interface all of this with my hardware.
The graph of each combination may be an irregular shape. Do any of you have a logic to help me shorten the process and find the range of values (combinations) where the lowest amplitude may lie? Thanks!!
Attachments:
example.txt 11 KB
Hi Johnsold,
This is an example of my result. This is only a portion of it. The last column is the amplitude; I store all 52 points into an array and use the Array Min and Max function. Then I use max - min to get the amplitude. From the amplitude I cannot see any pattern. Can you please advise me on it. Thanks!!!
Hello
I have a JTable with 33 necessary columns. The row count can be from 1 to 50. That gives a maximum of 1650 different values to save to the database.
The current system, creates a object for each line(33 values) and fill them into a arraylist, send it to my dao, who takes object by object and stores them. The speed this way is "okay, could be better".
Then when I'm retrieving the data, I get row by row, create an object, fill my arraylist, go through it object by object, and put them in each row.
1 row goes fast, 5 rows takes time, 50 rows takes forever...
What can speed things up here? If you need some code, please let me know.
I've been thinking of something thread-based, 1 thread for each row - sounds smart?
My JTable lists "meetings" for 1 week. 1 row presents a meeting's data.
There are 2 JButtons used for changing week, either +1 or -1 from the week number you're standing in. I will now show everything that happens with 1 click. That's 4 steps.
Step 1: Saving current data(meetings for current week)
private void saveMøter() {
dao.slettMøter(år, uke);
ArrayList møter = new ArrayList();
for (int i = 0; i < mdlRapport.getRowCount(); i++) {
UkerapportMøte møte =
new UkerapportMøte(
år,
uke,
(String) mdlRapport.getValueAt(i, 0),
(String) mdlRapport.getValueAt(i, 1),
(String) mdlRapport.getValueAt(i, 32),
(String) mdlRapport.getValueAt(i, 33));
UkerapportMøte møteX = formatMøte(møte);
møter.add(møteX);
dao.lagreMøter(møter);
}
}
Comments:
dao.slettMøter(år, uke); This method removes everything from the database for the current week.
dao.lagreMøter(møter); This method saves everything to the database for the current week. That method is posted below; maybe that is a bottleneck.
public void lagreMøter(ArrayList møter) {
Statement stmt = null;
String sql = null;
UkerapportMøte møte = null;
try {
for (int i = 0; i < møter.size(); i++) {
møte = (UkerapportMøte) møter.get(i);
sql = " INSERT INTO Ukerapport VALUES("
+ møte.getÅrID()
+ ","
+ møte.getUkeID()
+ ",'"
+ møte.getMøtedag()
+ "','"
+ møte.getMøteKlokke()
+ "','"
+ møte.getMøteSted()
+ "','"
+ møte.getMøteKunde()
+ "','"
+ møte.getPrivatBilP()
+ "','"
+ møte.getPrivatBilS()
+ "','"
+ møte.getPrivatVillaP()
+ "','"
+ møte.getPrivatVillaS()
+ "','"
+ møte.getPrivatServP()
+ "','"
+ møte.getPrivatServS()
+ "','"
+ møte.getNæringslivAntP()
+ "','"
+ møte.getNæringslivAntS()
+ "','"
+ møte.getMersalgKrP()
+ "','"
+ møte.getMersalgKrS()
+ "','"
+ møte.getMersalgAntP()
+ "','"
+ møte.getMersalgAntS()
+ "','"
+ møte.getLivP()
+ "','"
+ møte.getLivS()
+ "','"
+ møte.getSparAntP()
+ "','"
+ møte.getSparAntS()
+ "','"
+ møte.getSparKrP()
+ "','"
+ møte.getSparKrS()
+ "','"
+ møte.getKollpensjonAntP()
+ "','"
+ møte.getKollpensjonAntS()
+ "','"
+ møte.getKollpensjonKrP()
+ "','"
+ møte.getKollpensjonKrS()
+ "','"
+ møte.getFinansAntP()
+ "','"
+ møte.getFinansAntS()
+ "','"
+ møte.getFinansKrP()
+ "','"
+ møte.getFinansKrS()
+ "','"
+ møte.getRef()
+ "','"
+ møte.getProv()
+ "','"
+ møte.getForening()
+ "','"
+ møte.getKommentar()
+ "')";
stmt = con.createStatement();
//rs = stmt.executeQuery(sql);
int x = stmt.executeUpdate(sql);
} //end try
catch (SQLException e) {
System.out.println("UkerapportDAO: Klarer ikke å utføre spørringen: " + sql);
System.out.println("--lagreMøter() " + e.getMessage() + "\n");
}finally{
try{
if(stmt!=null)
stmt.close();
}catch(SQLException sqlex){
System.out.println("UkerapportDAO: Klarer ikke å lukke");
System.out.println("--lagreMøter() " + sqlex.getMessage() + "\n");
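"Maybe that is a bottleneck" - yes: building a new Statement per row and sending 50 single-row INSERTs is the usual pattern to replace with one PreparedStatement batch. A hedged sketch (the class, method, and parameter layout here are hypothetical, but the 36-column Ukerapport INSERT matches the DAO code above):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Hypothetical sketch: replace the per-row string-concatenated Statement with
// one PreparedStatement, bound per row and sent to the database as a single batch.
class UkerapportBatch {

    // 36 values per row, matching the concatenated INSERT in lagreMøter
    static final int COLUMN_COUNT = 36;

    // Builds "INSERT INTO Ukerapport VALUES (?,?,...,?)" with one placeholder per column
    static String buildInsertSql(int columns) {
        StringBuilder sb = new StringBuilder("INSERT INTO Ukerapport VALUES (");
        for (int i = 0; i < columns; i++) {
            sb.append(i == 0 ? "?" : ",?");
        }
        return sb.append(')').toString();
    }

    // rows.get(i) holds the 36 column values of row i, in table column order
    static void saveAll(Connection con, List<Object[]> rows) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(buildInsertSql(COLUMN_COUNT))) {
            for (Object[] row : rows) {
                for (int c = 0; c < row.length; c++) {
                    ps.setObject(c + 1, row[c]); // JDBC parameters are 1-based
                }
                ps.addBatch();
            }
            ps.executeBatch(); // one round trip for all rows instead of one per row
        }
    }
}
```

A side benefit: PreparedStatement removes the quoting problems that forced the "Blank" placeholder, since setObject(i, null) stores a real NULL.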
}Step 2:
I update the values for week and year, depending on going 1 week up or down. I don't find it necessary to show that code....
Step 3: Remove everything from the JTable
private void clearTable() {
for (int i = 0; i < mdlRapport.getRowCount(); i++) {
mdlRapport.removeRow(i);
}
Comments: Don't think there's anything to do here...
Step4: Get new meetings from the database.
private void loadMøter() {
ArrayList nyeMøter = dao.getMøter(år, uke);
mdlRapport.setRowCount(nyeMøter.size());
for (int i = 0; i < nyeMøter.size(); i++) {
UkerapportMøte møte = (UkerapportMøte) nyeMøter.get(i);
if (møte.getMøtedag().equals("Blank")) {
mdlRapport.setValueAt("", i, 0);
} else {
mdlRapport.setValueAt(møte.getMøtedag(), i, 0);
if (møte.getMøteKlokke().equals("Blank")) {
mdlRapport.setValueAt("", i, 1);
} else {
mdlRapport.setValueAt(møte.getMøteKlokke(), i, 1);
if (møte.getProv().equals("0")) {
mdlRapport.setValueAt("", i, 31);
} else {
mdlRapport.setValueAt(møte.getProv() + "", i, 31);
if (møte.getForening().equals("Blank")) {
mdlRapport.setValueAt("", i, 32);
} else {
mdlRapport.setValueAt(møte.getForening(), i, 32);
if (møte.getKommentar().equals("Blank")) {
mdlRapport.setValueAt("", i, 33);
} else {
mdlRapport.setValueAt(møte.getKommentar(), i, 33);
}
Comments: I skipped a lot of code here. This code is of course much longer; I just took the first ifs and the last ones.
ArrayList nyeMøter = dao.getMøter(år, uke); This method is posted below.
public ArrayList getMøter(int årID, int ukeID) {
Statement stmt = null;
ResultSet rs = null;
String sql = "SELECT * FROM Ukerapport WHERE ukeID=" + ukeID + " AND årID =" + årID;
list = new ArrayList();
try {
stmt = con.createStatement();
rs = stmt.executeQuery(sql);
while (rs.next()) {
møte =
new UkerapportMøte( rs.getInt(1), rs.getInt(2), rs.getString(3), rs.getString(4), rs.getString(5), rs.getString(6), rs.getString(7), rs.getString(8), rs.getString(9), rs.getString(10), rs.getString(11), rs.getString(12), rs.getString(13), rs.getString(14), rs.getString(15), rs.getString(16), rs.getString(17), rs.getString(18), rs.getString(19), rs.getString(20), rs.getString(21), rs.getString(22), rs.getString(23), rs.getString(24), rs.getString(25), rs.getString(26), rs.getString(27), rs.getString(28), rs.getString(29), rs.getString(30), rs.getString(31), rs.getString(32), rs.getString(33), rs.getString(34), rs.getString(35), rs.getString(36));
list.add(møte);
}
rs.close();
stmt.close();
} catch (SQLException sqlex) {
System.out.println( "Error: Klarer ikke å utføre spørringen. " + sqlex.getMessage());
System.out.println( "--getMøter()");
}
return list;
}
I think this is the slow part. The if tests were made because I had trouble saving empty meeting values in the database, so when a meeting's value is missing I save it as "Blank", and when I read the data I check for "Blank" and if true I set the value "".
Can you help me make this faster? -
Need Help in Improving Performace of PLSQL Code
Hello Gurus,
I am very new to PL/SQL coding. I had to design code that takes values from 3 tables and fills the empty 4th table.
Table 1: IDH (primary key), Table 2: IDH (primary key), and Table 3 where IDH exists but is not a primary key (meaning it has multiple values for each IDH).
So my approach was i created a STORED PROCEDURE as below
1. Create a cursor joining Table1 and Table 2
2. Iterate over the cursor; for each IDH open one more cursor over Table 3 and calculate some values.
3. insert into the New Table.
But this seems to take a long time (more than 5 hours)
Can you please help me in optimizing this solutions ?
Thanks,
Ganesh
BRAND table : IDH,BRAND,SUBBRAND -> IDH Primary key
MARKET table : IDH,MARKET,VARIANT -> IDH Primary key
MAT table : col1,IDH,col4,col5,col15 -> no primary key here * redundant table
New table needs to be created as follows
LEGEND table : IDH,BRAND,SUBBRAND,MARKET,VARIANT,CODE,EAN
Now it's easy to get IDH, BRAND, SUBBRAND, MARKET, VARIANT into the LEGEND table (a simple join gives that),
but CODE is calculated as follows
For an IDH in MAT:
if col4 contains 'OLDKEZ' then
CODE := col5;
EAN is calculated as follows:
For an IDH in MAT:
if col1 contains 'MARA' then
EAN := col15;
This I am accomplishing by using 2 nested cursors, which takes a long time!!!
Can you give any inputs ? -
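Instead of nested cursors, the whole LEGEND table can usually be filled with one set-based INSERT ... SELECT: pre-aggregate MAT once per IDH with conditional aggregation, then join the result to BRAND and MARKET. A sketch based only on the rules above (the LIKE patterns stand in for "contains" and are assumptions, as is taking MAX when an IDH has several matching MAT rows):

```sql
INSERT INTO legend (idh, brand, subbrand, market, variant, code, ean)
SELECT b.idh, b.brand, b.subbrand,
       m.market, m.variant,
       x.code, x.ean
  FROM brand  b
  JOIN market m ON m.idh = b.idh
  LEFT JOIN (SELECT idh,
                    MAX(CASE WHEN col4 LIKE '%OLDKEZ%' THEN col5  END) AS code,
                    MAX(CASE WHEN col1 LIKE '%MARA%'   THEN col15 END) AS ean
               FROM mat
              GROUP BY idh) x
    ON x.idh = b.idh;
```

One pass over MAT replaces 120k+ nested cursor iterations, which is where the 5 hours are going.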
Need help/suggestion in performance tuning
hi,
I have the following
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 460 4700.00 5239.96 2 6 0 0
Execute 6234640362100.0040673190.99 102043906 110604822 123086 49656
Fetch 150442561100.0011454381.51 515184 13365552 0 92801
total 7785042927900.00 9183139.50 102559092 123970380 123086 142457
Misses in library cache during parse: 27
Misses in library cache during execute: 14
40 user SQL statements in session.
585 internal SQL statements in session.
625 SQL statements in session.
10 statements EXPLAINed in this session.
Can someone suggest how to check?
Aman.... wrote:
What do you want us to say about it since it's a summary of the session's trace file? Do you have any particular query which you think is not performing well?
It's a summary that records an elapsed time of 9.1 million seconds - that's about 106 days - which is quite a long time for a single session running 40 end-user statements.
Regards
Jonathan Lewis