Index doesn't improve the performance
Hello everyone,
I have a problem improving the performance of an XMLType column. In fact, I created a table with two columns, a NUMBER column and an XMLType column, and found out that the retrieval time is quite unacceptable for the following SQL statement:
Select id, extractValue(data, '/requestor/name') From
requestor Where extractValue(data, '/requestor/name')
= 'Req1640'
So I have created an Index like this:
create index req_name_index on requestor
(extractValue(data, '/requestor/name'));
but it doesn't improve the performance.
What's wrong?
Thanks
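For what it's worth, the usual cause is that the optimizer only uses a function-based index when the query's expression matches the indexed expression exactly (and statistics are current). The same pitfall can be sketched with SQLite's expression indexes, used here purely as a stand-in for the Oracle case, with `json_extract` playing the role of `extractValue`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requestor (id INTEGER, data TEXT)")
conn.execute("""INSERT INTO requestor VALUES (1, '{"name": "Req1640"}')""")

# Index on the *expression*; it is only usable when the query's WHERE
# clause repeats the exact same expression.
conn.execute(
    "CREATE INDEX req_name_index ON requestor (json_extract(data, '$.name'))")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM requestor "
    "WHERE json_extract(data, '$.name') = 'Req1640'").fetchall()
print(plan[0][-1])  # plan detail should mention req_name_index
```

If the query spelled the expression differently from the index definition, the plan would fall back to a full scan, which matches the symptom described above.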
Hi David,
Once you have defined the 2 entries for the browsing index, did you rebuild the indexes for the database?
You can either use db2index or export and re-import the data.
Regards,
Ludovic.
Similar Messages
-
Re: How to Improve the performance on Rollup of Aggregates for PCA Infocube
Hi BW Guru's,
I have an unresolved issue and our team is still working on it.
I have already posted several questions on this but am still not clear on how to reduce the time taken by the Rollup of Aggregates process.
I have requested an OSS note and have been searching myself, but still could not find one.
Finally I executed one of the cubes in RSRV with the database check
"Database indexes of an InfoCube and its aggregates" and got warning messages. I tried to correct the error and executed it once again, but I still found warning messages. The messages are as follows (this is only for one InfoCube; we have 6 InfoCubes and I am executing them one by one):
ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
ORACLE: Index /BI0/IPROFIT_CTR~0 has possibly degenerated
ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
ORACLE: Index /BIC/D1001072~010 has possibly degenerated
ORACLE: Index /BIC/D1001132~010 has possibly degenerated
ORACLE: Index /BIC/D1001212~010 has possibly degenerated
ORACLE: Index /BIC/DGPCOGC062~01 has possibly degenerated
ORACLE: Index /BIC/IGGRA_CODE~0 has possibly degenerated
ORACLE: Index /BIC/QGMAPGP1~0 has possibly degenerated
ORACLE: Index /BIC/QGMAPPC2~0 has possibly degenerated
ORACLE: Index /BIC/SGMAPGP1~0 has possibly degenerated
I don't know how to move further on this. Can anyone tell me how to tackle this problem to increase the performance of the Rollup of Aggregates (PCA InfoCubes)?
Every time I create indexes and statistics regularly to improve the performance, it works for a couple of days and then the performance of the rollup of aggregates gradually comes down again.
Thanks and Regards,
Venkathi,
Check in a SQL client the SQL created by BI and the query that you use directly from your physical layer...
The time between these 2 must be 2-3 seconds, otherwise you have problems (these seconds are for scripts needed by BI).
If you use "like" in your SQL then forget indexes....
For more information about indexes check Google or your DBA.
Last, I mentioned that a materialized view is not perfect, but it helps a lot... so why not try to split it into smaller ones....
ex...
logical dimensions
year-half-day
company-department
fact
quantity
instead of making one...make 3,
year - department - quantity
half - department - quantity
day - department - quantity
and add them as data sources and assign them the appropriate logical level in the business layer in Administrator...
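The splitting idea above can be sketched with plain SQL (SQLite is used here just for illustration, and the table and column names are made up): instead of one wide aggregate over every dimension, build three narrow ones and let each query hit the smallest table that answers it.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE fact (year INT, half INT, day TEXT,"
           " company TEXT, department TEXT, quantity INT)")
db.executemany("INSERT INTO fact VALUES (?, ?, ?, ?, ?, ?)", [
    (2011, 1, "2011-03-01", "ACME", "sales",   10),
    (2011, 2, "2011-09-01", "ACME", "sales",    5),
    (2012, 1, "2012-03-01", "ACME", "support",  7),
])

# Three narrow aggregates instead of one wide materialized view.
for name, dim in [("agg_year_dept", "year"),
                  ("agg_half_dept", "half"),
                  ("agg_day_dept",  "day")]:
    db.execute(f"CREATE TABLE {name} AS SELECT {dim}, department,"
               f" SUM(quantity) AS quantity FROM fact"
               f" GROUP BY {dim}, department")

print(db.execute("SELECT * FROM agg_year_dept ORDER BY year").fetchall())
```

Each aggregate would then be registered as its own data source with the matching logical level, as the post suggests.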
Do you use the partitioning functionality?
I hope I helped....
http://greekoraclebi.blogspot.com/
-
Help to improve the performance of a procedure.
Hello everybody,
First to introduce myself. My name is Ivan and I recently started learning SQL and PL/SQL. So don't go hard on me. :)
Now let's jump to the problem. What we have here is a table (a big one, but we'll need only a few fields) with some information about calls. It is called table1. There is also another one with exactly the same structure, which is empty, and we have to transfer the records from the first one into it.
The shorter calls (less than 30 minutes) have segmentID = 'C1'.
The longer calls (more than 30 minutes) are recorded as more than one record (1 for every 30 minutes). The first record (first 30 minutes of the call) has segmentID = 'C21'. It is the first so we have only one of these for every different call. Then we have the next (middle) parts of the call, which have segmentID = 'C22'. We can have more than 1 middle part and again the maximum minutes in each is 30 minutes. Then we have the last part (again max 30 minutes) with segmentID = 'C23'. As with the first one we can have only one last part.
So far, so good. Now we need to insert these call records into the second table. The C1 are easy - one record = one call. But the partial ones we need to combine so they become one whole call. This means that we have to take one of the first parts (C21), find if there is a middle part (C22) with the same calling/called numbers and with 30 minutes difference in date/time, then search again if there is another C22 and so on. And last we have to search for the last part of the call (C23). In the course of these searches we sum the duration of each part so we can have the duration of the whole call at the end. Then we are ready to insert it in the new table as a single record, just with new duration.
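The chaining rule just described (same number pair, each next part starting exactly 30 minutes after the previous one) can be sketched in a few lines of Python before committing to PL/SQL; the field names here are hypothetical stand-ins for the real columns:

```python
from datetime import datetime, timedelta

STEP = timedelta(minutes=30)

def stitch(segments):
    """Combine C21/C22/C23 parts into whole calls; C1 records pass through."""
    calls = [dict(s) for s in segments if s["seg"] == "C1"]
    parts = sorted((s for s in segments if s["seg"] != "C1"),
                   key=lambda s: (s["caller"], s["called"], s["start"]))
    current = None
    for p in parts:
        if p["seg"] == "C21":            # first 30-minute part opens a call
            current = dict(p, seg="C1")
            calls.append(current)
        elif (current is not None
              and p["caller"] == current["caller"]
              and p["called"] == current["called"]
              and p["start"] == current["start"] + current["dur"]):
            current["dur"] += p["dur"]   # consecutive C22/C23 part: extend
    return calls

t0 = datetime(2012, 5, 11, 12, 13, 10)
segs = [
    {"seg": "C21", "caller": "A", "called": "B", "start": t0,            "dur": STEP},
    {"seg": "C22", "caller": "A", "called": "B", "start": t0 + STEP,     "dur": STEP},
    {"seg": "C23", "caller": "A", "called": "B", "start": t0 + 2 * STEP, "dur": timedelta(minutes=5)},
    {"seg": "C1",  "caller": "C", "called": "D", "start": t0,            "dur": timedelta(minutes=3)},
]
calls = stitch(segs)
print([(c["caller"], c["called"], c["dur"]) for c in calls])
```

A single pass over pre-sorted parts replaces the nested cursor scans; the SQL answers later in the thread achieve the same effect with analytic functions.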
But here comes the problem with my code... The table has A LOT of records, and this solution, despite the fact that it works (at least in the tests I've made so far), is REALLY slow.
As I said I'm new to PL/SQL and I know that this solution is really newbish, but I can't find another way of doing this.
So I decided to come here and ask you for some tips on how to improve the performance of this.
I think you are getting confused already, so I'm just going to put some comments in the code.
I know it's not a procedure as it stands now, but it will be once I create a better code. I don't think it matters for now.
DECLARE
CURSOR cur_c21 IS
select * from table1
where segmentID = 'C21'
order by start_date_of_call; -- start_date_of_call holds the beginning of a specific part of the call. It's a DATE.
CURSOR cur_c22 IS
select * from table1
where segmentID = 'C22'
order by start_date_of_call;
CURSOR cur_c22_2 IS
select * from table1
where segmentID = 'C22'
order by start_date_of_call;
cursor cur_c23 is
select * from table1
where segmentID = 'C23'
order by start_date_of_call;
v_temp_rec_c22 cur_c22%ROWTYPE;
v_dur table1.duration%TYPE; -- used to store the duration of the call. It's a NUMBER.
BEGIN
insert into table2
select * from table1 where segmentID = 'C1'; -- inserting the calls which are less than 30 minutes long
-- and here starts the mess
FOR rec_c21 IN cur_c21 LOOP -- taking the first part of the call
v_dur := rec_c21.duration; -- recording its duration
FOR rec_c22 IN cur_c22 LOOP -- starting to check if there is a middle part for the call
IF rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
(rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48)
/* if the numbers are the same and the date difference is 30 minutes then we have a middle part and we start searching for the next middle. */
THEN
v_dur := v_dur + rec_c22.duration; -- updating the new duration
v_temp_rec_c22 := rec_c22; -- recording the current record in another variable because I use it for the next check
FOR rec_c22_2 in cur_c22_2 LOOP
IF rec_c22_2.callingnumber = v_temp_rec_c22.callingnumber AND rec_c22_2.callednumber = v_temp_rec_c22.callednumber AND
(rec_c22_2.start_date_of_call - v_temp_rec_c22.start_date_of_call) = (1/48)
/* logic is the same as before but comparing with the last value in v_temp...
And because the data in the cursors is ordered by date in ascending order it's easy to search for further middle parts. */
THEN
v_dur := v_dur + rec_c22_2.duration;
v_temp_rec_c22 := rec_c22_2;
END IF;
END LOOP;
END IF;
EXIT WHEN rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
(rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48);
/* exiting the loop if we have at least one middle part.
(I couldn't find a way to write this more cleanly, like exit when (the above if is true).) */
END LOOP;
FOR rec_c23 IN cur_c23 LOOP
IF (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
(rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration
/* we should always have one last part, so we need this check.
If we don't have the "v_dur != rec_c21.duration" part it will execute the code inside only if we don't have middle parts
(yes we can have these situations in calls longer than 30 and less than 60 minutes). */
THEN
v_dur := v_dur + rec_c23.duration;
rec_c21.duration := v_dur; -- updating the duration
rec_c21.segmentID := 'C1';
INSERT INTO table2 VALUES rec_c21; -- inserting the whole call in table2
END IF;
EXIT WHEN (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
(rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration;
-- exit the loop when the last part has been found.
END LOOP;
END LOOP;
END;
I'm using Oracle 11g and version 1.5.5 of SQL Developer.
It's my first post here so hope this is the right sub-forum.
I tried to explain everything as deep as possible (sorry if it's too long) and I kinda think that the code got somehow hard to read with all these comments. If you want I can remove them.
I know I'm still missing a lot of knowledge so every help is really appreciated.
Thank you very much in advance!
Atiel wrote:
Thanks for the suggestion but the thing is that segmentID must stay the same for all. The data in this field is just to tell us if this is a record of a complete call (C1) or a partial record of a call (C21, C22, C23). So in table2, as every record will be a complete call, the segmentID must be 'C1' for all.
Well that's not a problem. You just hard code 'C1' instead of applying the row number as I was doing:
SQL> ed
Wrote file afiedt.buf
1 select 'C1' as segmentid
2 ,start_date_of_call, duration, callingnumber, callednumber
3 from (
4 select distinct
5 min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
6 ,sum(duration) over (partition by callingnumber, callednumber) as duration
7 ,callingnumber
8 ,callednumber
9 from table1
10* )
SQL> /
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER
C1 11-MAY-2012 12:13:10 8020557824 1982032041 0631432831624
C1 15-MAR-2012 09:07:26 269352960 5581790386 0113496771567
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349
Another thing is that, as I said above, the actual table has 120 fields. Do I have to list them all manually if I use something similar?
If that's what you need, then yes, you would have to list them. You only get data if you tell it you want it. ;)
Of course if you are taking the start_date_of_call, callingnumber and callednumber as the 'key' to the record, then you could join the results of the above back to the original table1 and pull out the rest of the columns that way...
SQL> select * from table1;
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER COL1 COL2 COL3
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349 556 40 5.32
C21 15-MAR-2012 09:07:26 134676480 5581790386 0113496771567 219 100 10.16
C23 11-MAY-2012 09:37:26 134676480 5581790386 0113496771567 321 73 2.71
C21 11-MAY-2012 12:13:10 3892379648 1982032041 0631432831624 959 80 2.87
C22 11-MAY-2012 12:43:10 3892379648 1982032041 0631432831624 375 57 8.91
C22 11-MAY-2012 13:13:10 117899264 1982032041 0631432831624 778 27 1.42
C23 11-MAY-2012 13:43:10 117899264 1982032041 0631432831624 308 97 3.26
7 rows selected.
SQL> ed
Wrote file afiedt.buf
1 with t2 as (
2 select 'C1' as segmentid
3 ,start_date_of_call, duration, callingnumber, callednumber
4 from (
5 select distinct
6 min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
7 ,sum(duration) over (partition by callingnumber, callednumber) as duration
8 ,callingnumber
9 ,callednumber
10 from table1
11 )
12 )
13 --
14 select t2.segmentid, t2.start_date_of_call, t2.duration, t2.callingnumber, t2.callednumber
15 ,t1.col1, t1.col2, t1.col3
16 from t2
17 join table1 t1 on ( t1.start_date_of_call = t2.start_date_of_call
18 and t1.callingnumber = t2.callingnumber
19 and t1.callednumber = t2.callednumber
20* )
SQL> /
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER COL1 COL2 COL3
C1 11-MAY-2012 12:13:10 8020557824 1982032041 0631432831624 959 80 2.87
C1 15-MAR-2012 09:07:26 269352960 5581790386 0113496771567 219 100 10.16
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349 556 40 5.32
SQL>Of course this is pulling back the additional columns for the record that matches the start_date_of_call for that calling/called number pair, so if the values differed from row to row within the calling/called number pair you may need to aggregate those (take the minimum/maximum etc. as required) as part of the first query. If the values are known to be the same across all records in the group then you can just pick them up from the join to the original table as I coded in the above example (only in my example the data was different across all rows). -
Improve the performance of the 'BAPI_GOODSMVT_CREATE' Bapi
Hi All,
We have a requirement in which we create a material document number for each goods receipt note.
This is done with the help of 'BAPI_GOODSMVT_CREATE'. This BAPI is working perfectly fine and is correctly posting the material document number for each goods receipt.
The problem lies in the fact that this BAPI gets called for each line item of the purchase order, i.e. it gets called in a loop, due to which the program is taking a much longer time to run.
Is there any way or any sap notes which we can use to improve the performance of this BAPI .
Thanks,
Sumit
Hi,
Why do you call the BAPI for each line item? All the line items per document should be processed together.
The standard code is mostly fine. The performance of these standard transactions generally deteriorates when good programming practices are not used in the user exits/BAdIs available in the standard transaction. Check where the pain point is to figure out where the transaction takes more time.
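The reviewer's point can be sketched abstractly in Python (the `post` callable stands in for the BAPI call; all names are made up): grouping line items per document turns one call per item into one call per document.

```python
from itertools import groupby

def post_per_document(items, post):
    """Issue one call per document, carrying all of its line items."""
    items = sorted(items, key=lambda i: i["doc"])
    for doc, grp in groupby(items, key=lambda i: i["doc"]):
        post(doc, [g["item"] for g in grp])  # one call, many line items

calls = []
items = [{"doc": "GR1", "item": 10}, {"doc": "GR1", "item": 20},
         {"doc": "GR2", "item": 10}]
post_per_document(items, lambda doc, lines: calls.append((doc, lines)))
print(calls)
```

With N line items spread over D documents, this does D calls instead of N, which is exactly the saving being suggested.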
Regards,
Abdullah. -
Is There any way to improve the performance on this code
Hi all, can anyone tell me how to improve the performance of the below code?
Actually I need to calculate the opening balance of a G/L account, so instead of using BSEG I am using BSIS.
So, is there any way to improve this code's performance?
Any help would be appreciated.
REPORT ZTEMP5 NO STANDARD PAGE HEADING LINE-SIZE 190.
data: begin of collect occurs 0,
MONAT TYPE MONAT,
HKONT TYPE HKONT,
BELNR TYPE BELNR_D,
BUDAT TYPE BUDAT,
WRBTR TYPE WRBTR,
SHKZG TYPE SHKZG,
SGTXT TYPE SGTXT,
AUFNR TYPE AUFNR_NEU,
TOT LIKE BSIS-WRBTR,
end of collect.
TYPES: BEGIN OF TY_BSIS,
MONAT TYPE MONAT,
HKONT TYPE HKONT,
BELNR TYPE BELNR_D,
BUDAT TYPE BUDAT,
WRBTR TYPE WRBTR,
SHKZG TYPE SHKZG,
SGTXT TYPE SGTXT,
AUFNR TYPE AUFNR_NEU,
END OF TY_BSIS.
DATA: IT_BSIS TYPE TABLE OF TY_BSIS,
WA_BSIS TYPE TY_BSIS.
DATA: TOT TYPE WRBTR,
SUMA TYPE WRBTR,
VALUE TYPE WRBTR,
VALUE1 TYPE WRBTR.
SELECTION-SCREEN: BEGIN OF BLOCK B1.
PARAMETERS: S_HKONT LIKE WA_BSIS-HKONT DEFAULT '0001460002' .
SELECT-OPTIONS: S_BUDAT FOR WA_BSIS-BUDAT,
S_AUFNR FOR WA_BSIS-AUFNR DEFAULT '200020',
S_BELNR FOR WA_BSIS-BELNR.
SELECTION-SCREEN: END OF BLOCK B1.
AT SELECTION-SCREEN OUTPUT.
LOOP AT SCREEN.
IF SCREEN-NAME = 'S_HKONT'.
SCREEN-INPUT = 0.
MODIFY SCREEN.
ENDIF.
ENDLOOP.
START-OF-SELECTION.
SELECT MONAT
HKONT
BELNR
BUDAT
WRBTR
SHKZG
SGTXT
AUFNR
FROM BSIS
INTO TABLE IT_BSIS
WHERE HKONT EQ S_HKONT
AND BELNR IN S_BELNR
AND BUDAT IN S_BUDAT
AND AUFNR IN S_AUFNR.
* if sy-subrc <> 0.
* message 'No Data' type 'I'.
* endif.
SELECT SUM( WRBTR )
FROM BSIS
INTO COLLECT-TOT
WHERE HKONT EQ S_HKONT
AND BUDAT < S_BUDAT-LOW
AND AUFNR IN S_AUFNR.
END-OF-SELECTION.
CLEAR: S_BUDAT, S_AUFNR, S_BELNR, S_HKONT.
LOOP AT IT_BSIS INTO WA_BSIS.
IF wa_bsis-SHKZG = 'H'.
wa_bsis-WRBTR = 0 - wa_bsis-WRBTR.
ENDIF.
collect-MONAT = wa_bsis-monat.
collect-HKONT = wa_bsis-hkont.
collect-BELNR = wa_bsis-belnr.
collect-BUDAT = wa_bsis-budat.
collect-WRBTR = wa_bsis-wrbtr.
collect-SHKZG = wa_bsis-shkzg.
collect-SGTXT = wa_bsis-sgtxt.
collect-AUFNR = wa_bsis-aufnr.
COLLECT collect.
CLEAR: COLLECT, WA_BSIS.
ENDLOOP.
LOOP AT COLLECT.
AT end of HKONT.
WRITE:/65 'OpeningBalance',
85 collect-tot.
skip 1.
ENDAT.
WRITE:/06 COLLECT-BELNR,
22 COLLECT-BUDAT,
32 COLLECT-WRBTR,
54 COLLECT-SGTXT.
AT end of MONAT.
SUM.
WRITE:/ COLLECT-MONAT COLOR 1.
WRITE:32 COLLECT-WRBTR COLOR 1.
VALUE = COLLECT-WRBTR.
SKIP 1.
ENDAT.
VALUE1 = COLLECT-TOT + VALUE.
AT end of MONAT.
WRITE:85 VALUE1.
ENDAT.
endloop.
CLEAR: COLLECT, SUMA, VALUE, VALUE1.
TOP-OF-PAGE.
WRITE:/06 'Doc No',
22 'Post Date',
39 'Amount',
54 'Text'.
Moderator message : See the Sticky threads (related to performance tuning) in this forum. Thread locked.
Edited by: Vinod Kumar on Oct 13, 2011 11:12 AM
Hi Ben,
both BSIS SELECTs would become faster if you can add company code BUKRS as the 1st field of the WHERE clause, because it is the 1st field of the primary key and HKONT is the 2nd field.
If you have no table index with HKONT as its 1st field, it is a full database access.
If possible, try to add BUKRS as the 1st field of the WHERE clause; otherwise ask your Basis team for an additional BSIS index.
Regards,
Klaus -
How to improve the performance of one program in one select query
Hi,
I am facing a performance issue in one program. I have given part of the program's code below.
It is taking much time in the below select query. How can I improve the performance?
Quick response is highly appreciated.
Program code
DATA: BEGIN OF t_dels_tvpod OCCURS 100,
vbeln LIKE tvpod-vbeln,
posnr LIKE tvpod-posnr,
lfimg_diff LIKE tvpod-lfimg_diff,
calcu LIKE tvpod-calcu,
podmg LIKE tvpod-podmg,
uecha LIKE lips-uecha,
pstyv LIKE lips-pstyv,
xchar LIKE lips-xchar,
grund LIKE tvpod-grund,
END OF t_dels_tvpod.
DATA: l_tabix LIKE sy-tabix,
lt_dels_tvpod LIKE t_dels_tvpod OCCURS 10 WITH HEADER LINE,
ls_dels_tvpod LIKE t_dels_tvpod.
SELECT vbeln INTO TABLE lt_dels_tvpod FROM likp
FOR ALL ENTRIES IN t_dels_tvpod
WHERE vbeln = t_dels_tvpod-vbeln
AND erdat IN s_erdat
AND bldat IN s_bldat
AND podat IN s_podat
AND ernam IN s_ernam
AND kunnr IN s_kunnr
AND vkorg IN s_vkorg
AND vstel IN s_vstel
AND lfart NOT IN r_del_types_exclude.
Waiting for quick response.
Best regards,
BDP
Bansidhar,
1) You need to add a check to make sure that the internal table t_dels_tvpod (used in the FOR ALL ENTRIES clause) is not blank. If it is blank, skip the SELECT statement.
2) Check the performance with and without the clause 'AND lfart NOT IN r_del_types_exclude'. Sometimes NOT causes the select statement not to use the index. Instead of 'lfart NOT IN r_del_types_exclude' use 'lfart IN r_del_types_exclude' and build r_del_types_exclude using r_del_types_exclude-sign = 'E' instead of 'I'.
3) Make sure that the table used in the FOR ALL ENTRIES clause has unique delivery numbers.
Try doing something like this.
TYPES: BEGIN OF ty_del_types_exclude,
sign(1) TYPE c,
option(2) TYPE c,
low TYPE likp-lfart,
high TYPE likp-lfart,
END OF ty_del_types_exclude.
DATA: w_del_types_exclude TYPE ty_del_types_exclude,
t_del_types_exclude TYPE TABLE OF ty_del_types_exclude,
t_dels_tvpod_tmp LIKE TABLE OF t_dels_tvpod .
IF NOT t_dels_tvpod[] IS INITIAL.
* Assuming that I would like to exclude delivery types 'LP' and 'LPP'
CLEAR w_del_types_exclude.
REFRESH t_del_types_exclude.
w_del_types_exclude-sign = 'E'.
w_del_types_exclude-option = 'EQ'.
w_del_types_exclude-low = 'LP'.
APPEND w_del_types_exclude TO t_del_types_exclude.
w_del_types_exclude-low = 'LPP'.
APPEND w_del_types_exclude TO t_del_types_exclude.
t_dels_tvpod_tmp[] = t_dels_tvpod[].
SORT t_dels_tvpod_tmp BY vbeln.
DELETE ADJACENT DUPLICATES FROM t_dels_tvpod_tmp
COMPARING
vbeln.
SELECT vbeln
FROM likp
INTO TABLE lt_dels_tvpod
FOR ALL ENTRIES IN t_dels_tvpod_tmp
WHERE vbeln EQ t_dels_tvpod_tmp-vbeln
AND erdat IN s_erdat
AND bldat IN s_bldat
AND podat IN s_podat
AND ernam IN s_ernam
AND kunnr IN s_kunnr
AND vkorg IN s_vkorg
AND vstel IN s_vstel
AND lfart IN t_del_types_exclude.
ENDIF. -
How can we improve the performance while fetching data from RESB table.
Hi All,
Can anybody suggest the right way to improve the performance while fetching data from the RESB table? Below is the select statement.
SELECT aufnr posnr roms1 roanz
INTO (itab-aufnr, itab-pposnr, itab-roms1, itab-roanz)
FROM resb
WHERE kdauf = p_vbeln
AND ablad = itab-sposnr+2.
Here I am using 'KDAUF' & 'ABLAD' in the condition. Can we use a secondary index to improve the performance in this case?
Regards,
Himanshu
Hi,
Declare the internal table with only those four fields
and try the below code....
SELECT aufnr posnr roms1 roanz
INTO table itab
FROM resb
WHERE kdauf = p_vbeln
AND ablad = itab-sposnr+2.
Yes, you can also use a secondary index to improve the performance in this case.
Regards,
Anand .
Reward if it is useful.... -
Need help in improving the performance for the sql query
Thanks in advance for helping me.
I was trying to improve the performance of the below query. I tried the following methods used merge instead of update, used bulk collect / Forall update, used ordered hint, created a temp table and updated the target table using the same. The methods which I used did not improve any performance. The data count which is updated in the target table is 2 million records and the target table has 15 million records.
Any suggestions or solutions for improving performance are appreciated
SQL query:
update targettable tt
set mnop = 'G'
where ( x, y, z ) in
( select a.x, a.y, a.z
from table1 a
where (a.x, a.y, a.z) not in (
select b.x, b.y, b.z
from table2 b
where 'O' = b.defg )
and mnop = 'P'
and hijkl = 'UVW' );
987981 wrote:
I was trying to improve the performance of the below query. I tried the following methods used merge instead of update, used bulk collect / Forall update, used ordered hint, created a temp table and updated the target table using the same. The methods which I used did not improve any performance.
And that meant what? Surely if you spend all that time and effort to try various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)
The data count which is updated in the target table is 2 million records and the target table has 15 million records.
Tables have rows btw, not records. Database people tend to get upset when rows are called records, as records exist in files and a database is not a mere collection of records and files.
The failure to find a single faster method with the approaches you tried points to the fact that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
The very first step in dealing with any software engineering problem, is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
Part of identifying the performance problem, is understanding the workload. Just what does the code task the database to do?
From your comments, it needs to find 2 million rows from 15 million rows. Change these rows. And then write 2 million rows back to disk.
That is not a small workload. Simple example. Let's say that the 2 million row find is 1ms/row and the 2 million row write is also 1ms/row. This means a 66 minute workload. Due to the number of rows, an increase in time/row either way, will potentially have 2 million fold impact.
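That back-of-the-envelope arithmetic, as a quick sanity check (the per-row costs are the assumed 1 ms figures from the example, not measurements):

```python
rows = 2_000_000
find_ms_per_row = 1.0    # assumed cost to locate one row
write_ms_per_row = 1.0   # assumed cost to write one row back

total_minutes = rows * (find_ms_per_row + write_ms_per_row) / 1000 / 60
print(round(total_minutes, 1))  # about 66.7 minutes
```

The point stands: at 2 million rows, every extra fraction of a millisecond per row is multiplied 2 million times.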
So where is the performance problem? Time spent finding the 2 million rows (where other tables need to be read, indexes used, etc.)? Time spent writing the 2 million rows (where triggers need to be fired and indexes maintained)? Both? -
How to improve the performance of the abap program
hi all,
I have created an ABAP program, and it is taking a long time since the number of records is large. Can anyone let me know how to improve the performance of my ABAP program using the SE30 and ST05 transactions?
Can anyone help me out step by step?
regds
haritha
Hi Haritha,
->Run Any program using SE30 (performance analysis)
Note: Click on the Tips & Tricks button from SE30 to get performance improving tips.
Using this you can improve the performance by analyzing your code part by part.
->To turn runtime analysis on within ABAP code insert the following code
SET RUN TIME ANALYZER ON.
->To turn runtime analysis off within ABAP code insert the following code
SET RUN TIME ANALYZER OFF.
->Always check the driver internal tables is not empty, while using FOR ALL ENTRIES
->Avoid for all entries in JOINS
->Try to avoid joins and use FOR ALL ENTRIES.
->Try to restrict the joins to 1 level only ie only for tables
->Avoid using Select *.
->Avoid having multiple Selects from the same table in the same object.
->Try to minimize the number of variables to save memory.
->The sequence of fields in 'where clause' must be as per primary/secondary index ( if any)
->Avoid creation of index as far as possible
->Avoid operators like <>, > , < & like % in where clause conditions
->Avoid select/select single statements in loops.
->Try to use 'binary search' in READ internal table. -->Ensure table is sorted before using BINARY SEARCH.
->Avoid using aggregate functions (SUM, MAX etc) in selects ( GROUP BY , HAVING,)
->Avoid using ORDER BY in selects
->Avoid Nested Selects
->Avoid Nested Loops of Internal Tables
->Try to use FIELD SYMBOLS.
->Try to avoid into Corresponding Fields of
->Avoid using Select Distinct, Use DELETE ADJACENT
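The BINARY SEARCH tip above has a sharp edge worth showing; Python's `bisect` is used here as an analogy for READ TABLE ... BINARY SEARCH. On an unsorted table the search does not fail loudly, it silently misses rows that are present:

```python
from bisect import bisect_left

def read_binary(table, key):
    """Return the index of key, or None; only valid on a sorted table."""
    i = bisect_left(table, key)
    return i if i < len(table) and table[i] == key else None

sorted_tab = [1, 2, 3, 4, 5]
unsorted_tab = [5, 1, 4, 2, 3]
print(read_binary(sorted_tab, 4))    # found at index 3
print(read_binary(unsorted_tab, 1))  # None, even though 1 is present
```

This is why the tip insists on SORT before BINARY SEARCH: the speedup comes with a correctness precondition.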
Check the following Links
Re: performance tuning
Re: Performance tuning of program
http://www.sapgenie.com/abap/performance.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
check the below link
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
See the following link if it's any help:
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
Check also http://service.sap.com/performance
and
books like
http://www.sap-press.com/product.cfm?account=&product=H951
http://www.sap-press.com/product.cfm?account=&product=H973
http://www.sap-img.com/abap/more-than-100-abap-interview-faqs.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
Performance tuning for Data Selection Statement
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
Debugger
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
Run Time Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
SQL trace
http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
CATT - Computer Aided Testing Tool
http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
Test Workbench
http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
Coverage Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
Runtime Monitor
http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
Memory Inspector
http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
ECATT - Extended Computer Aided testing tool.
http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
Just refer to these links...
performance
Performance
Performance Guide
performance issues...
Performance Tuning
Performance issues
performance tuning
performance tuning
You can go to the transaction SE30 to have the runtime analysis of your program. Also try the transaction SCI, which is the SAP Code Inspector.
edited by,
Naveenan -
How to improve the performance of serialization/deserialization?
Hi, Friends,
I have a question about how to improve the performance of serialization/deserialization.
When an object is serialized, the entire tree of objects rooted at the object is also serialized. When it is deserialized, the tree is reconstructed. For example, suppose a serializable Father object contains (a serializable field of) an array of Child objects. When a Father object is serialized, so is the array of Child objects.
For the sake of performance consideration, when I need to deserialize a Father object, I don't want to deserialize any Child object. However, I should be able to know that Father object has children. I should also be able to deserialize any child of that Father object when necessary.
Could you tell me how to achieve the above idea?
Thanks.
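One common way to get that behaviour (sketched here in Python with pickle, purely as a language-neutral analogy for Java serialization; the names are made up) is to write a small header, the child count, as a separate leading object, so a reader can stop after the header without deserializing any children:

```python
import io
import pickle

class Father:
    def __init__(self, children):
        self.children = children

def save(father, stream):
    pickle.dump(len(father.children), stream)  # cheap header first
    pickle.dump(father.children, stream)       # full payload second

def peek_count(stream):
    return pickle.load(stream)                 # reads only the header

buf = io.BytesIO()
save(Father(children=["c1", "c2", "c3"]), buf)
buf.seek(0)
print(peek_count(buf))  # prints the count without touching the children
```

The Java reply below follows the same shape: a count written ahead of the child objects, and a proxy class that reads only as far as it needs.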
Youbin
You could try something like this...
import java.io.*;
import java.util.*;
class Child implements Serializable {
int id;
Child(int _id) { id = _id; }
public String toString() { return String.valueOf(id); }
}
class Father implements Serializable {
Child[] children = new Child[10];
public Father() {
Arrays.fill(children, new Child(1001));
}
private void readObject(ObjectInputStream stream)
throws IOException, ClassNotFoundException {
int numchildren = stream.readInt();
children = new Child[numchildren];
for(int i=0; i<numchildren; i++)
children[i] = (Child)stream.readObject();
}
private void writeObject(ObjectOutputStream stream) throws IOException {
stream.writeInt(children.length);
for(int i=0; i<children.length; i++)
stream.writeObject(children[i]);
}
Child[] getChildren() { return children; }
}
class FatherProxy {
int numchildren;
String filename;
public FatherProxy(String _filename) throws IOException {
filename = _filename;
ObjectInputStream ois =
new ObjectInputStream(new FileInputStream(filename));
numchildren = ois.readInt();
ois.close();
}
int getNumChildren() { return numchildren; }
Child[] getChildren() throws IOException, ClassNotFoundException {
ObjectInputStream ois =
new ObjectInputStream(new FileInputStream(filename));
Father f = (Father)ois.readObject();
ois.close();
return f.getChildren();
}
}
public class fatherref {
public static void main(String[] args) throws Exception {
// create the serialized file
Father f = new Father();
ObjectOutputStream oos =
new ObjectOutputStream(new FileOutputStream("father.ser"));
oos.writeObject(f);
oos.close();
// read in just what is needed -- numchildren
FatherProxy fp = new FatherProxy("father.ser");
System.out.println("numchildren: " + fp.getNumChildren());
// do some processing
// you need the rest -- children
Child[] c = fp.getChildren();
System.out.println("children:");
for(int i=0; i<c.length; i++)
System.out.println("i " + i + ": " + c[i]);
}
}
-
HI All, How to improve the performance in given query?
HI All,
How to improve the performance in given query?
Query is..
PARAMETERS : p_vbeln type lips-vbeln.
DATA : par_charg TYPE LIPS-CHARG,
par_werks TYPE LIPS-WERKS,
PAR_MBLNR TYPE MSEG-MBLNR .
SELECT SINGLE charg
werks
INTO (par_charg, par_werks)
FROM lips
WHERE vbeln = p_vbeln.
IF par_charg IS NOT INITIAL.
SELECT MAX( mblnr )
INTO par_mblnr
FROM mseg
WHERE bwart EQ '101'
AND werks EQ par_werks "index on werks only
AND charg EQ par_charg.
ENDIF.
Regards
Steve
Hi Steve,
Can't you use the material in your query (and not only the batch)?
I am assuming your system has an index MSEG~M by MANDT + MATNR + WERKS (+ other fields). Depending on your system (how many different materials you have), this will probably speed up the query considerably.
Anyway, in our system we ended up by creating an index by CHARG, but leave as a last option, only if selecting by matnr and werks is not good enough for your scenario.
Hope this helps,
Rui Dantas -
Inner Join. How to improve the performance of inner join query
Inner Join. How to improve the performance of inner join query.
Query is :
select f1~ablbelnr
f1~gernr
f1~equnr
f1~zwnummer
f1~adat
f1~atim
f1~v_zwstand
f1~n_zwstand
f1~aktiv
f1~adatsoll
f1~pruefzahl
f1~ablstat
f1~pruefpkt
f1~popcode
f1~erdat
f1~istablart
f2~anlage
f2~ablesgr
f2~abrdats
f2~ableinh
from eabl as f1
inner join eablg as f2
on f1~ablbelnr = f2~ablbelnr
into corresponding fields of table it_list
where f1~ablstat in s_mrstat
%_HINTS ORACLE 'USE_NL (T_00 T_01) index(T_01 "EABLG~0")'.
I want to modify the query, since it is taking a lot of time to load the data.
Please suggest.
Treat this as very urgent.

Hi Shyamal,
In your program, you are using "into corresponding fields of".
Try not to use this addition in your select query.
Instead, just use "into table it_list".
As an example, run a query that uses "into corresponding fields of", then go to SE30 (runtime analysis), enter the program name, and execute it.
When you click the Analyze button, you can see the analysis for the query. A line shown in red tells you that you need to look for an alternative method.
If you use "into table itab" instead, the analysis will look entirely different.
So try not to use "into corresponding fields" in your query.
Regards,
SP. -
Please help me how to improve the performance of this query further.
Hi All,
Please help me improve the performance of this query further.
Thanks.

Hi,
this is not your first SQL tuning request in this community -- you really should learn how to obtain performance diagnostics.
The information you posted is not nearly enough to even start troubleshooting the query -- you haven't specified elapsed time, I/O, or the actual number of rows the query returns.
The only piece of information we have is saying that your query executes within a second. If we believe this, then your query doesn't need tuning. If we don't, then we throw it away
and we're left with nothing.
Start by reading this blog post: Kyle Hailey » Power of DISPLAY_CURSOR
and applying this knowledge to your case.
Best regards,
Nikolay -
Improve the performance of filter
Hi All,
We are using Oracle Coherence Standard Edition in our application.
Any way to improve the performance of NamedCache.keySet(filter) configured as distributed?
I know there is no scope for indexing since we are using Standard Edition.
Thanks

user1096084 wrote:
Hi All,
We are using Oracle Coherence Standard Edition in our application.
Any way to improve the performance of NamedCache.keySet(filter) configured as distributed?
I know there is no scope for indexing since we are using Standard Edition.
Thanks

Not only is there no indexing but also, I believe, all data is sent to the querying node for filtering, instead of being filtered on the storage nodes in parallel. If you upgraded, you could benefit from parallel querying, which is only available in a higher edition.
Best regards,
Robert -
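Setting the Coherence API aside, the reason an index matters for a filter query can be sketched with plain collections. This is only an illustration of the principle under the stated assumption that an unindexed keySet(filter) must examine every entry; it is not Coherence code:

```java
import java.util.*;

public class IndexDemo {
    public static void main(String[] args) {
        // A stand-in for a cache: key -> value.
        Map<Integer, String> cache = new HashMap<>();
        for (int i = 0; i < 100_000; i++) cache.put(i, "Req" + i);

        // Without an index, a filter query must scan every entry.
        Set<Integer> scanned = new HashSet<>();
        for (Map.Entry<Integer, String> e : cache.entrySet())
            if (e.getValue().equals("Req1640")) scanned.add(e.getKey());

        // With an index (value -> keys, maintained as entries change),
        // the same query is a single lookup.
        Map<String, Set<Integer>> index = new HashMap<>();
        for (Map.Entry<Integer, String> e : cache.entrySet())
            index.computeIfAbsent(e.getValue(), k -> new HashSet<>())
                 .add(e.getKey());
        Set<Integer> indexed = index.getOrDefault("Req1640", Collections.emptySet());

        System.out.println(scanned.equals(indexed));
    }
}
```

Building the index costs one scan up front plus maintenance on every update, which is why it pays off only when the same attribute is queried repeatedly.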
Improving the performance of a program
How would you go about improving the performance of a program which selects data from MSEG & MKPF?
Hi Ramesh,
I don't know your exact problem, but try to make as few database accesses as possible (it is probably better to read all the data you need into one internal table per table).
In the "select" statement, pay attention to putting the condition fields in the same order as they appear in MSEG and MKPF.
Have you run an ABAP runtime analysis (SE30) to see whether the problem is in the database access or in the program?
Bye
enzo