Performance problem in a data replication program written in Java
Dear all,
I need your valuable ideas on improving the logic below, which replicates data from DB2 to Oracle 9i. We have huge tables in DB2 to replicate to the Oracle side, and for one table this is taking a lot of time. The whole application is written in Java. The current logic is: first set the soft-delete flag to 'Y' for a specific set of records in the Oracle table, then read all records from the DB2 table and set only those records back to 'N' in Oracle, so that records deleted on the DB2 side end up soft-deleted on the Oracle side. The DB2 query has a 3-table join and takes nearly 1 minute. We are updating the Oracle table in batches of 100,000. Updating 610,275 records in batch mode takes 2.25 hours, which has to be reduced to under 1 hour. The first update (setting everything to 'Y') plus the second update (driven by the DB2 query) together take 2.85 hours.
Do you have any clever idea to reduce this time? Kindly help us; we are in a critical situation now. Even a new approach to the replication logic is welcome.
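One way to avoid rewriting the flag on every row twice is to compute the delta in memory: load the active keys from DB2 into a set, load the currently-active keys from Oracle, and update only the rows whose flag actually changes. A minimal sketch of the set logic (the JDBC plumbing is omitted, and the key values here are purely illustrative):

```java
import java.util.*;

public class SoftDeleteDelta {
    /** Given the keys currently present in DB2 and the keys currently
     *  flagged 'N' (active) in Oracle, return the two minimal update
     *  sets: rows to soft-delete and rows to re-activate. */
    static Map<String, Set<String>> computeDelta(Set<String> db2Active,
                                                 Set<String> oracleActive) {
        Set<String> toSoftDelete = new HashSet<>(oracleActive);
        toSoftDelete.removeAll(db2Active);     // active in Oracle, gone from DB2
        Set<String> toReactivate = new HashSet<>(db2Active);
        toReactivate.removeAll(oracleActive);  // present in DB2, flagged 'Y' in Oracle
        Map<String, Set<String>> delta = new HashMap<>();
        delta.put("softDelete", toSoftDelete);
        delta.put("reactivate", toReactivate);
        return delta;
    }

    public static void main(String[] args) {
        Set<String> db2 = new HashSet<>(Arrays.asList("A", "B", "C"));
        Set<String> ora = new HashSet<>(Arrays.asList("B", "C", "D"));
        Map<String, Set<String>> delta = computeDelta(db2, ora);
        System.out.println(delta.get("softDelete"));  // [D]
        System.out.println(delta.get("reactivate"));  // [A]
    }
}
```

With 610,275 rows this usually shrinks the second pass from "update everything" to "update only what changed", at the cost of holding the key sets in memory.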
Hi,
Just remove the joins and use FOR ALL ENTRIES instead.
if sy-subrc = 0.
Use DELETE ADJACENT DUPLICATES FROM itab COMPARING key fields (it will improve performance),
then write the next SELECT statement.
endif.
Some tips:
Always check that the driver internal table is not empty when using FOR ALL ENTRIES.
Avoid FOR ALL ENTRIES combined with JOINs.
Try to avoid joins and use FOR ALL ENTRIES instead.
Try to restrict joins to one level only, i.e. two tables.
Avoid using SELECT *.
Avoid having multiple SELECTs from the same table in the same object.
Try to minimize the number of variables to save memory.
The sequence of fields in the WHERE clause should match the primary/secondary index (if any).
Avoid creating new indexes as far as possible.
Avoid operators like <>, >, < and LIKE '%' in WHERE clause conditions.
Avoid SELECT/SELECT SINGLE statements inside loops.
Try to use BINARY SEARCH with READ TABLE, and ensure the table is sorted before using it.
Avoid using aggregate functions (SUM, MAX, etc.) in SELECTs (GROUP BY, HAVING).
Avoid using ORDER BY in SELECTs.
Avoid nested SELECTs.
Avoid nested loops over internal tables.
Try to use field symbols.
Try to avoid INTO CORRESPONDING FIELDS OF.
Avoid SELECT DISTINCT; sort and use DELETE ADJACENT DUPLICATES instead.
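The "sort first, then deduplicate and binary-search" advice applies in any language. For comparison, here is the same pattern in Java (the material numbers are made up for illustration): sorting corresponds to SORT itab, dropping equal neighbours to DELETE ADJACENT DUPLICATES, and `Collections.binarySearch` to READ TABLE ... BINARY SEARCH.

```java
import java.util.*;

public class SortedLookup {
    /** Sort, then drop equal neighbours: the analogue of
     *  SORT itab followed by DELETE ADJACENT DUPLICATES. */
    static List<String> sortAndDedupe(List<String> items) {
        List<String> sorted = new ArrayList<>(items);
        Collections.sort(sorted);  // must sort first, or adjacent-dedupe misses entries
        List<String> unique = new ArrayList<>();
        for (String s : sorted)
            if (unique.isEmpty() || !unique.get(unique.size() - 1).equals(s))
                unique.add(s);
        return unique;
    }

    public static void main(String[] args) {
        List<String> unique = sortAndDedupe(Arrays.asList("M300", "M100", "M200", "M100"));
        // Binary search is O(log n), but is only valid on a sorted list.
        int idx = Collections.binarySearch(unique, "M200");
        System.out.println(unique);  // [M100, M200, M300]
        System.out.println(idx);     // 1
    }
}
```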
Go through the following Document
Check the following Links
Re: performance tuning
Re: Performance tuning of program
http://www.sapgenie.com/abap/performance.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTunin
Similar Messages
-
In program written with Java Swing, I can't input Chinese
In program written with Java Swing, I can't input Chinese.
But if I change my language first, then change the input method to U.S. and open the Java Swing application, then I can input Chinese. I want to know how to fix this bug.
My OS is Mac OS X 10.6.8.
With JDK 1.6.0_29 I could input Chinese in Java Swing applications without problems, but since 1.6.0_31 I can't anymore. The input methods can still input Chinese in other, non-Swing applications, so the problem must be caused by the Swing part of the JDK/JRE. What is the difference between the Swing in 1.6.0_29 and in 1.6.0_31, and why? I have heard that Java Swing apps have had problems with Chinese input methods since 2009. Why hasn't this been fixed yet?

Chazza wrote:
Perhaps you need to change your keyboard layout in Xorg?
https://wiki.archlinux.org/index.php/Ke … ard_layout
Thanks for your answer!
I have tried to change the keyboard layout from "en" to "cn", but it still does not work.
The input method icon at the top right is correct when I change the method, but it still outputs English even when I use ibus-pinyin; no candidate box appears for choosing Chinese words.
Last edited by Dilingg (2015-05-15 16:18:43) -
Performance Problem While Data updating In Customize TMS System
Dear guys
I have developed a customized time management system, but there is a performance problem while updating machine data in the monthly roster table. There is an if/else condition which checks whether the record status is late, present on time, etc.
Does anyone have a clue how to improve the performance?
Thanks in advance
--regard
furqan

Furqan wrote:
Dear guys
I have developed a customized time management system, but there is a performance problem while updating machine data in the monthly roster table. There is an if/else condition which checks whether the record status is late, present on time, etc.
Does anyone have a clue how to improve the performance?

From that description, and without any database version, code, explain plans, execution traces or anything else that would be remotely useful... erm... no.
Hint:-
How to Post an SQL statement tuning request...
HOW TO: Post a SQL statement tuning request - template posting -
Want to perform an incremental data replication
I have five vdisks in the DC SAN (EVA 8400) which contain a large amount of data. This data has to be replicated to the newly deployed DR SAN. We therefore took a tape backup from the DC SAN and want to restore it on the DR side before starting replication, so that replication afterwards only needs to transfer the incremental data written since the restore. But in HP CA (Continuous Access) I have not found any option to start an incremental data replication: it always starts from the zero block with a newly auto-created vdisk. Please advise on the possibility of incremental data replication.
Actually, I have got it to work...
I used the methods ACTIVATE_TAB_PAGE & TRANSFER_DATA_TO_SUBSCREEN in the BADI LE_SHP_TAB_CUST_ITEM.
It's working fine now.
Thanks for the response. -
How to improve the performance of the Data Migration program ?
Hi All,
I have written a program for Data Migration of Opportunities, but it is taking a lot of time to create or update opportunities.
I have used so many function modules like
" 'CONVERSION_EXIT_ALPHA_INPUT' " -- 8 TIMES
"GUID_CREATE'" -- 4 TIMES
"CRM_ORDER_READ" -- 1 TIME
"CONVERT_INTO_TIMESTAMP" -- 4 TIMES
"CRM_ORDER_MAINTAIN" --- 1 TIME
"CRM_ORDERADM_H_CHANGE_OW" - 1 TIME
"'BAPI_BUSPROCESSND_SAVE" -- 1 TIME
"BAPI_TRANSACTION_COMMIT" -- 1 TIME
"BAPI_BUSPROCESSND_CREATEMULTI" OR "BAPI_BUSPROCESSND_CHANGEMULTI"
and all these function modules are called for every record in a loop. For 1,000 records it takes approximately 75 minutes. How can I improve its performance?

Hi Raman,
For performance improvement, do the following things:
First, remove CRM_ORDER_READ; select the data from the database tables instead, and then use READ TABLE on the internal tables.
Second, instead of BAPI_BUSPROCESSND_CREATEMULTI use BAPI_OPPORTUNITY_CREATEMULTI followed by BAPI_TRANSACTION_COMMIT. There is no need to use BAPI_BUSPROCESSND_SAVE.
Third, I am not sure why you are creating a GUID 4 times; there is no need to generate GUIDs if you use BAPI_OPPORTUNITY_CREATEMULTI, as the GUID will be generated for you.
Fourth, instead of calling CONVERSION_EXIT_ALPHA_INPUT multiple times, create a variable of the required length and pass the values through it.
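For illustration, CONVERSION_EXIT_ALPHA_INPUT essentially left-pads a numeric ID with zeros to a fixed length (and leaves non-numeric values alone). Doing that once in a small helper instead of calling the function module eight times per record looks like this in Java (the 10-character target length is an assumption, not taken from the original program):

```java
public class AlphaInput {
    /** Left-pad a numeric ID with zeros, mimicking CONVERSION_EXIT_ALPHA_INPUT. */
    static String alphaInput(String id, int length) {
        String trimmed = id.trim();
        // Non-numeric values pass through unchanged, as the SAP exit does.
        if (!trimmed.matches("\\d+")) return trimmed;
        StringBuilder sb = new StringBuilder();
        for (int i = trimmed.length(); i < length; i++) sb.append('0');
        return sb.append(trimmed).toString();
    }

    public static void main(String[] args) {
        System.out.println(alphaInput("4711", 10));   // 0000004711
        System.out.println(alphaInput("ABC-1", 10));  // ABC-1
    }
}
```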
Hope this helps. -
Performance problem in ODI replication
I have defined bulk and CDC interfaces to replicate data from one source to another. In the CDC replication the volume of changed rows is very high, and it takes a lot of time to replicate.
In addition, I have defined logical primary keys in ODI to transfer the data, as no keys are defined on the source and target datastores. I do not want to use a flow-control strategy, as it decreases performance further.
When duplicate rows are loaded from the source, ODI fails due to the duplicate rows; I do not want ODI to fail because of this.

Data appearing on replica sites naturally lags behind the master,
simply because it takes some time to transmit across a network, and
apply updates to the database.
Would you rather have read operations on the replica wait until they
"catch up" to some certain point in the sequence of transactions
generated by the master?
Your ack policy of "NONE" allows the master to race ahead of the
clients without bound. If you were to use the "ALL" ack policy then
commit operations at the master would try to wait until all replica
sites had received the transaction, and you would get closer to having
all sites progress in sync. Of course that slows down the master
though.
Is this the kind of "stale" data you're talking about, or something
more long-term?
Alan Bram
Oracle -
Problem during data extraction program (tran - RSA3) in SC system
Hi Experts,
We are extracting data from one of the data sources (9ALS_SBC) through transaction RSA3. We are using active version 000 for the same.
But the transaction shows the error below:
Error when generating program
Message no. /SAPAPO/TSM141
Diagnosis
Generated programs are programs that are generated based on individual data objects, such as planning object structures, planning areas and InfoCubes. These programs are then executed in the transaction. An error occurred during the generation of such a program.
There are two possible causes:
The template has been corrupted.
The object that the template uses to generate the program contains inconsistencies; for instance, an InfoCube has not been activated.
could someone tell me how to solve this error?
I have corrected all the inconsistencies, but it still shows the same error!
thanks in advance for this help!
with best rgds/
Jay

Dear Jay,
the error can have several reasons, it depends on your business scenario and system settings.
I am sending you some possible reasons and solutions:
1.
The problem can be that the technical datasource name has changed inside the planning area, i.e. the planning area has a different extract structure assigned than the datasource.
This could have happened due to a planning area or datasource transport.
The best way to repair this is to create a dummy-datasource for the planning area with transaction /N/SAPAPO/SDP_EXTR. This will update the relevant tables with the new datasource structure.
Please check if the other datasources will work correctly as well. If not you may create similar dummy-datasources for the other aggregates as well.
Please do not forget to delete the dummy-datasources as well in /N/SAPAPO/SDP_EXTR.
2.
You can try to re-generate the data source at transaction /SAPAPO/SDP_EXTR and recheck the issue.
3.
Run /sapapo/ts_pstru_gen via transaction se38.
In the selection, enter the Planning Object Structure used and select 'Planning area extractor' and set the flags on the 2 bottom checkboxes 'Reset generation time stamp' and 'Generate'.
4.
Maybe the extract structure which is assigned to the planning area doesn't exist in your system. This could have happened at the time of transport.
Please refer to following content from note 549184:
=======================================================================
Q4: Why could I have extraction problem after transport of DataSource?
A4: DataSources for DP/SNP planning areas depend directly on the
structure of the planning areas. That's why the planning area MUST
ALWAYS be transported with or before the DataSource.
========================================================================
To solve this inconsistency please try the below:
- Please reactivate the datasource on the Planning Area.
- Activate all active transfer structures for the source system with the RS_TRANSTRU_ACTIVATE_ALL program via transaction SE38.
I hope these proposals could solve the error.
Regards,
Tibor -
Report program Performance problem
Hi All,
One program is taking 30 hours to execute. Someone developed it in 1998, but by now it is a big performance problem. Please help me figure out what to do; I am including the code below.
*--DOCUMENTATION--
* Program written by : 31.03.1998 .
* Purpose : this program updates the car status into the table zsdtab1 .
* This program is to be scheduled in the background periodically .
* Queries can be fired on the table zsdtab1 to get the details of the
* car .
* This program looks at the changes made in the material master since
* the last updated date and at new entries in the material master, and
* updates the table zsdtab1 .
* Changes in the Sales Order are not taken into account .
* To get fresh data, set the value of zupddate in table ZSTATUS to
* 01.01.1998 . All the data will be refreshed from that date .
* Program changed on 23/7/2001 after version upgrade 46b by jyoti
* Addition of new tables for Ibase
* tables used -
tables : mara , " Material master
ausp , " Characteristics table .
zstatus , " Last updated status table .
zsdtab1 , " Central database table to be maintained .
vbap , " Sales order header table .
vbak , " Sales order item table .
kna1 , " Customer master .
vbrk ,
vbrp ,
bkpf ,
bseg ,
mseg ,
mkpf ,
vbpa ,
vbfa ,
t005t . " Country details tabe .
*--NEW TABLES ADDED FOR VERSION 4.6B--
tables : ibsymbol ,ibin , ibinvalues .
data : vatinn like ibsymbol-atinn , vatwrt like ibsymbol-atwrt ,
vatflv like ibsymbol-atflv .
*--types definition--
types : begin of mara_itab_type ,
matnr like mara-matnr ,
cuobf like mara-cuobf ,
end of mara_itab_type ,
begin of ausp_itab_type ,
atinn like ausp-atinn ,
atwrt like ausp-atwrt ,
atflv like ausp-atflv ,
end of ausp_itab_type .
data : mara_itab type mara_itab_type occurs 500 with header line ,
zsdtab1_itab like zsdtab1 occurs 500 with header line ,
ausp_itab type ausp_itab_type occurs 500 with header line ,
last_date type d ,
date type d .
data: length type i.
clear mara_itab . refresh mara_itab .
clear zsdtab1_itab . refresh zsdtab1_itab .
select single zupddate into last_date from zstatus
where programm = 'ZSDDET01' .
select matnr cuobf into (mara_itab-matnr , mara_itab-cuobf) from mara
where mtart eq 'FERT' or mtart = 'ZCBU'.
* where MATNR IN MATERIA
* and ERSDA IN C_Date
* and MTART in M_TYP.
append mara_itab .
endselect .
loop at mara_itab.
clear zsdtab1_itab .
zsdtab1_itab-commno = mara_itab-matnr .
* Get the detailed data into internal table ausp_itab ----------->>>
clear ausp_itab . refresh ausp_itab .
*--change starts--
select atinn atwrt atflv into (ausp_itab-atinn , ausp_itab-atwrt ,
ausp_itab-atflv) from ausp
where objek = mara_itab-matnr .
append ausp_itab .
endselect .
clear ausp_itab .
select atinn atwrt atflv into (ausp_itab-atinn , ausp_itab-atwrt ,
ausp_itab-atflv) from ibin as a inner join ibinvalues as b
on a~in_recno = b~in_recno
inner join ibsymbol as c
on b~symbol_id = c~symbol_id
where a~instance = mara_itab-cuobf .
append ausp_itab .
endselect .
*----CHANGE ENDS HERE--
sort ausp_itab by atwrt.
loop at ausp_itab .
clear date .
case ausp_itab-atinn .
when '0000000094' .
zsdtab1_itab-model = ausp_itab-atwrt . " model .
when '0000000101' .
zsdtab1_itab-drive = ausp_itab-atwrt . " drive
when '0000000095' .
zsdtab1_itab-converter = ausp_itab-atwrt . "converter
when '0000000096' .
zsdtab1_itab-transmssn = ausp_itab-atwrt . "transmission
when '0000000097' .
zsdtab1_itab-colour = ausp_itab-atwrt . "colour
when '0000000098' .
zsdtab1_itab-ztrim = ausp_itab-atwrt . "trim
when '0000000103' .
*=========Sujit 14-Mar-2006
IF AUSP_ITAB-ATWRT(3) EQ 'WDB' OR AUSP_ITAB-ATWRT(3) EQ 'WDD'
OR AUSP_ITAB-ATWRT(3) EQ 'WDC' OR AUSP_ITAB-ATWRT(3) EQ 'KPD'.
ZSDTAB1_ITAB-CHASSIS_NO = AUSP_ITAB-ATWRT+3(14).
ELSE.
ZSDTAB1_ITAB-CHASSIS_NO = AUSP_ITAB-ATWRT . "chassis no
ENDIF.
* zsdtab1_itab-chassis_no = ausp_itab-atwrt . "chassis no
*=========14-Mar-2006
when '0000000166' .
*----25.05.04
length = strlen( ausp_itab-atwrt ).
if length < 15. "***aded by patil
zsdtab1_itab-engine_no = ausp_itab-atwrt . "ENGINE NO
else.
zsdtab1_itab-engine_no = ausp_itab-atwrt+13(14)."Aded on 21.05.04 patil
endif.
*----25.05.04
when '0000000104' .
zsdtab1_itab-body_no = ausp_itab-atwrt . "BODY NO
when '0000000173' . "21.06.98
zsdtab1_itab-cockpit = ausp_itab-atwrt . "COCKPIT NO . "21.06.98
when '0000000102' .
zsdtab1_itab-dest = ausp_itab-atwrt . "destination
when '0000000105' .
zsdtab1_itab-airbag = ausp_itab-atwrt . "AIRBAG
when '0000000110' .
zsdtab1_itab-trailer_no = ausp_itab-atwrt . "TRAILER_NO
when '0000000109' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-fininspdat = date . "FIN INSP DATE
when '0000000108' .
zsdtab1_itab-entrydate = ausp_itab-atwrt . "ENTRY DATE
when '0000000163' .
zsdtab1_itab-regist_no = ausp_itab-atwrt . "REGIST_NO
when '0000000164' .
zsdtab1_itab-mech_key = ausp_itab-atwrt . "MECH_KEY
when '0000000165' .
zsdtab1_itab-side_ab_rt = ausp_itab-atwrt . "SIDE_AB_RT
when '0000000171' .
zsdtab1_itab-side_ab_lt = ausp_itab-atwrt . "SIDE_AB_LT
when '0000000167' .
zsdtab1_itab-elect_key = ausp_itab-atwrt . "ELECT_KEY
when '0000000168' .
zsdtab1_itab-head_lamp = ausp_itab-atwrt . "HEAD_LAMP
when '0000000169' .
zsdtab1_itab-tail_lamp = ausp_itab-atwrt . "TAIL_LAMP
when '0000000170' .
zsdtab1_itab-vac_pump = ausp_itab-atwrt . "VAC_PUMP
when '0000000172' .
zsdtab1_itab-sd_ab_sn_l = ausp_itab-atwrt . "SD_AB_SN_L
when '0000000174' .
zsdtab1_itab-sd_ab_sn_r = ausp_itab-atwrt . "SD_AB_SN_R
when '0000000175' .
zsdtab1_itab-asrhydunit = ausp_itab-atwrt . "ASRHYDUNIT
when '0000000176' .
zsdtab1_itab-gearboxno = ausp_itab-atwrt . "GEARBOXNO
when '0000000177' .
zsdtab1_itab-battery = ausp_itab-atwrt . "BATTERY
when '0000000178' .
zsdtab1_itab-tyretype = ausp_itab-atwrt . "TYRETYPE
when '0000000179' .
zsdtab1_itab-tyremake = ausp_itab-atwrt . "TYREMAKE
when '0000000180' .
zsdtab1_itab-tyresize = ausp_itab-atwrt . "TYRESIZE
when '0000000181' .
zsdtab1_itab-rr_axle_no = ausp_itab-atwrt . "RR_AXLE_NO
when '0000000183' .
zsdtab1_itab-ff_axl_nor = ausp_itab-atwrt . "FF_AXLE_NO_rt
when '0000000182' .
zsdtab1_itab-ff_axl_nol = ausp_itab-atwrt . "FF_AXLE_NO_lt
when '0000000184' .
zsdtab1_itab-drivairbag = ausp_itab-atwrt . "DRIVAIRBAG
when '0000000185' .
zsdtab1_itab-st_box_no = ausp_itab-atwrt . "ST_BOX_NO
when '0000000186' .
zsdtab1_itab-transport = ausp_itab-atwrt . "TRANSPORT
when '0000000106' .
zsdtab1_itab-trackstage = ausp_itab-atwrt . " tracking stage
when '0000000111' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_1 = date . " tracking date for 1.
when '0000000112' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_5 = date . " tracking date for 5.
when '0000000113' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_10 = date . "tracking date for 10
when '0000000114' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_15 = date . "tracking date for 15
when '0000000115' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_20 = date . " tracking date for 20
when '0000000116' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_25 = date . " tracking date for 25
when '0000000117' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_30 = date . "tracking date for 30
when '0000000118' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_35 = date . "tracking date for 35
when '0000000119' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_40 = date . " tracking date for 40
when '0000000120' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_45 = date . " tracking date for 45
when '0000000121' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_50 = date . "tracking date for 50
when '0000000122' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_55 = date . "tracking date for 55
when '0000000123' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_60 = date . " tracking date for 60
when '0000000124' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_65 = date . " tracking date for 65
when '0000000125' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_70 = date . "tracking date for 70
when '0000000126' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_75 = date . "tracking date for 75
when '0000000127' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_78 = date . " tracking date for 78
when '0000000203' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_79 = date . " tracking date for 79
when '0000000128' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_80 = date . " tracking date for 80
when '0000000129' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_85 = date . "tracking date for 85
when '0000000130' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_90 = date . "tracking date for 90
when '0000000131' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dat_trk_95 = date . "tracking date for 95
when '0000000132' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dattrk_100 = date . " tracking date for100
when '0000000133' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dattrk_110 = date . " tracking date for110
when '0000000134' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dattrk_115 = date . "tracking date for 115
when '0000000135' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dattrk_120 = date . "tracking date for 120
when '0000000136' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-dattrk_105 = date . "tracking date for 105
when '0000000137' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_1 = date . "plan trk date for 1
when '0000000138' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_5 = date . "plan trk date for 5
when '0000000139' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_10 = date . "plan trk date for 10
when '0000000140' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_15 = date . "plan trk date for 15
when '0000000141' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_20 = date . "plan trk date for 20
when '0000000142' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_25 = date . "plan trk date for 25
when '0000000143' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_30 = date . "plan trk date for 30
when '0000000144' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_35 = date . "plan trk date for 35
when '0000000145' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_40 = date . "plan trk date for 40
when '0000000146' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_45 = date . "plan trk date for 45
when '0000000147' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_50 = date . "plan trk date for 50
when '0000000148' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_55 = date . "plan trk date for 55
when '0000000149' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_60 = date . "plan trk date for 60
when '0000000150' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_65 = date . "plan trk date for 65
when '0000000151' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_70 = date . "plan trk date for 70
when '0000000152' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_75 = date . "plan trk date for 75
when '0000000153' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_78 = date . "plan trk date for 78
when '0000000202' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_79 = date . "plan trk date for 79
when '0000000154' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_80 = date . "plan trk date for 80
when '0000000155' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_85 = date . "plan trk date for 85
when '0000000156' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_90 = date . "plan trk date for 90
when '0000000157' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_95 = date . "plan trk date for 95
when '0000000158' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_100 = date . "plan trk date for 100
when '0000000159' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_105 = date . "plan trk date for 105
when '0000000160' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_110 = date . "plan trk date for 110
when '0000000161' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_115 = date . "plan trk date for 115
when '0000000162' .
perform date_convert using ausp_itab-atflv changing date .
zsdtab1_itab-pdt_tk_120 = date . "plan trk date for 120
********Additional fields / 24.05.98**********************************
when '0000000099' .
case ausp_itab-atwrt .
when '540' .
zsdtab1_itab-roll_blind = 'X' .
when '482' .
zsdtab1_itab-ground_clr = 'X' .
when '551' .
zsdtab1_itab-anti_theft = 'X' .
when '882' .
zsdtab1_itab-anti_tow = 'X' .
when '656' .
zsdtab1_itab-alloy_whel = 'X' .
when '265' .
zsdtab1_itab-del_class = 'X' .
when '280' .
zsdtab1_itab-str_wheel = 'X' .
when 'CDC' .
zsdtab1_itab-cd_changer = 'X' .
when '205' .
zsdtab1_itab-manual_eng = 'X' .
when '273' .
zsdtab1_itab-conn_handy = 'X' .
when '343' .
zsdtab1_itab-aircleaner = 'X' .
when '481' .
zsdtab1_itab-metal_sump = 'X' .
when '533' .
zsdtab1_itab-speaker = 'X' .
when '570' .
zsdtab1_itab-arm_rest = 'X' .
when '580' .
zsdtab1_itab-aircond = 'X' .
when '611' .
zsdtab1_itab-exit_light = 'X' .
when '613' .
zsdtab1_itab-headlamp = 'X' .
when '877' .
zsdtab1_itab-readlamp = 'X' .
when '808' .
zsdtab1_itab-code_ckd = 'X' .
when '708' .
zsdtab1_itab-del_prt_lc = 'X' .
when '593' .
zsdtab1_itab-ins_glass = 'X' .
when '955' .
zsdtab1_itab-zelcl = 'Elegance' .
when '593' .
zsdtab1_itab-zelcl = 'Classic' .
endcase .
endcase .
endloop .
*--Update the sales data .--
perform get_sales_order using mara_itab-matnr .
perform get_cartype using mara_itab-matnr .
append zsdtab1_itab .
endloop.
*---------<<<
loop at zsdtab1_itab .
if zsdtab1_itab-cartype <> 'W-203'
or zsdtab1_itab-cartype <> 'W-210'
or zsdtab1_itab-cartype <> 'W-211'.
clear zsdtab1_itab-zelcl.
endif.
* SELECT SINGLE * FROM ZSDTAB1 WHERE COMMNO = MARA_ITAB-MATNR .
select single * from zsdtab1 where commno = zsdtab1_itab-commno.
if sy-subrc <> 0 .
insert into zsdtab1 values zsdtab1_itab .
else .
update zsdtab1 set vbeln = zsdtab1_itab-vbeln
bill_doc = zsdtab1_itab-bill_doc
dest = zsdtab1_itab-dest
lgort = zsdtab1_itab-lgort
ship_tp = zsdtab1_itab-ship_tp
country = zsdtab1_itab-country
kunnr = zsdtab1_itab-kunnr
vkbur = zsdtab1_itab-vkbur
customer = zsdtab1_itab-customer
city = zsdtab1_itab-city
region = zsdtab1_itab-region
model = zsdtab1_itab-model
drive = zsdtab1_itab-drive
converter = zsdtab1_itab-converter
transmssn = zsdtab1_itab-transmssn
colour = zsdtab1_itab-colour
ztrim = zsdtab1_itab-ztrim
commno = zsdtab1_itab-commno
trackstage = zsdtab1_itab-trackstage
chassis_no = zsdtab1_itab-chassis_no
engine_no = zsdtab1_itab-engine_no
body_no = zsdtab1_itab-body_no
cockpit = zsdtab1_itab-cockpit
airbag = zsdtab1_itab-airbag
trailer_no = zsdtab1_itab-trailer_no
fininspdat = zsdtab1_itab-fininspdat
entrydate = zsdtab1_itab-entrydate
regist_no = zsdtab1_itab-regist_no
mech_key = zsdtab1_itab-mech_key
side_ab_rt = zsdtab1_itab-side_ab_rt
side_ab_lt = zsdtab1_itab-side_ab_lt
elect_key = zsdtab1_itab-elect_key
head_lamp = zsdtab1_itab-head_lamp
tail_lamp = zsdtab1_itab-tail_lamp
vac_pump = zsdtab1_itab-vac_pump
sd_ab_sn_l = zsdtab1_itab-sd_ab_sn_l
sd_ab_sn_r = zsdtab1_itab-sd_ab_sn_r
asrhydunit = zsdtab1_itab-asrhydunit
gearboxno = zsdtab1_itab-gearboxno
battery = zsdtab1_itab-battery
tyretype = zsdtab1_itab-tyretype
tyremake = zsdtab1_itab-tyremake
tyresize = zsdtab1_itab-tyresize
rr_axle_no = zsdtab1_itab-rr_axle_no
ff_axl_nor = zsdtab1_itab-ff_axl_nor
ff_axl_nol = zsdtab1_itab-ff_axl_nol
drivairbag = zsdtab1_itab-drivairbag
st_box_no = zsdtab1_itab-st_box_no
transport = zsdtab1_itab-transport
*--------------------------- OPTIONS ---------------------------
roll_blind = zsdtab1_itab-roll_blind
ground_clr = zsdtab1_itab-ground_clr
anti_theft = zsdtab1_itab-anti_theft
anti_tow = zsdtab1_itab-anti_tow
alloy_whel = zsdtab1_itab-alloy_whel
del_class = zsdtab1_itab-del_class
str_wheel = zsdtab1_itab-str_wheel
cd_changer = zsdtab1_itab-cd_changer
manual_eng = zsdtab1_itab-manual_eng
conn_handy = zsdtab1_itab-conn_handy
aircleaner = zsdtab1_itab-aircleaner
metal_sump = zsdtab1_itab-metal_sump
speaker = zsdtab1_itab-speaker
arm_rest = zsdtab1_itab-arm_rest
aircond = zsdtab1_itab-aircond
exit_light = zsdtab1_itab-exit_light
headlamp = zsdtab1_itab-headlamp
readlamp = zsdtab1_itab-readlamp
code_ckd = zsdtab1_itab-code_ckd
del_prt_lc = zsdtab1_itab-del_prt_lc
ins_glass = zsdtab1_itab-ins_glass
dat_trk_1 = zsdtab1_itab-dat_trk_1
dat_trk_5 = zsdtab1_itab-dat_trk_5
dat_trk_10 = zsdtab1_itab-dat_trk_10
dat_trk_15 = zsdtab1_itab-dat_trk_15
dat_trk_20 = zsdtab1_itab-dat_trk_20
dat_trk_25 = zsdtab1_itab-dat_trk_25
dat_trk_30 = zsdtab1_itab-dat_trk_30
dat_trk_35 = zsdtab1_itab-dat_trk_35
dat_trk_40 = zsdtab1_itab-dat_trk_40
dat_trk_45 = zsdtab1_itab-dat_trk_45
dat_trk_50 = zsdtab1_itab-dat_trk_50
dat_trk_55 = zsdtab1_itab-dat_trk_55
dat_trk_60 = zsdtab1_itab-dat_trk_60
dat_trk_65 = zsdtab1_itab-dat_trk_65
dat_trk_70 = zsdtab1_itab-dat_trk_70
dat_trk_75 = zsdtab1_itab-dat_trk_75
dat_trk_78 = zsdtab1_itab-dat_trk_78
dat_trk_79 = zsdtab1_itab-dat_trk_79
dat_trk_80 = zsdtab1_itab-dat_trk_80
dat_trk_85 = zsdtab1_itab-dat_trk_85
dat_trk_90 = zsdtab1_itab-dat_trk_90
dat_trk_95 = zsdtab1_itab-dat_trk_95
dattrk_100 = zsdtab1_itab-dattrk_100
dattrk_105 = zsdtab1_itab-dattrk_105
dattrk_110 = zsdtab1_itab-dattrk_110
dattrk_115 = zsdtab1_itab-dattrk_115
dattrk_120 = zsdtab1_itab-dattrk_120
pdt_tk_1 = zsdtab1_itab-pdt_tk_1
pdt_tk_5 = zsdtab1_itab-pdt_tk_5
pdt_tk_10 = zsdtab1_itab-pdt_tk_10
pdt_tk_15 = zsdtab1_itab-pdt_tk_15
pdt_tk_20 = zsdtab1_itab-pdt_tk_20
pdt_tk_25 = zsdtab1_itab-pdt_tk_25
pdt_tk_30 = zsdtab1_itab-pdt_tk_30
pdt_tk_35 = zsdtab1_itab-pdt_tk_35
pdt_tk_40 = zsdtab1_itab-pdt_tk_40
pdt_tk_45 = zsdtab1_itab-pdt_tk_45
pdt_tk_50 = zsdtab1_itab-pdt_tk_50
pdt_tk_55 = zsdtab1_itab-pdt_tk_55
pdt_tk_60 = zsdtab1_itab-pdt_tk_60
pdt_tk_65 = zsdtab1_itab-pdt_tk_65
pdt_tk_70 = zsdtab1_itab-pdt_tk_70
pdt_tk_75 = zsdtab1_itab-pdt_tk_75
pdt_tk_78 = zsdtab1_itab-pdt_tk_78
pdt_tk_79 = zsdtab1_itab-pdt_tk_79
pdt_tk_80 = zsdtab1_itab-pdt_tk_80
pdt_tk_85 = zsdtab1_itab-pdt_tk_85
pdt_tk_90 = zsdtab1_itab-pdt_tk_90
pdt_tk_95 = zsdtab1_itab-pdt_tk_95
pdt_tk_100 = zsdtab1_itab-pdt_tk_100
pdt_tk_105 = zsdtab1_itab-pdt_tk_105
pdt_tk_110 = zsdtab1_itab-pdt_tk_110
pdt_tk_115 = zsdtab1_itab-pdt_tk_115
pdt_tk_120 = zsdtab1_itab-pdt_tk_120
cartype = zsdtab1_itab-cartype
zelcl = zsdtab1_itab-zelcl
excise_no = zsdtab1_itab-excise_no
where commno = zsdtab1_itab-commno .
* Update table ---------<<<
endif .
endloop .
perform update_excise_date .
perform update_post_goods_issue_date .
perform update_time.
*///////////////////// end of programe /////////////////////////////////
* Get sales data -
form get_sales_order using matnr .
data : corr_vbeln like vbrk-vbeln .
* ADDED BY ADITYA / 22.06.98 **************************************
perform get_order using matnr .
select single vbeln lgort into (zsdtab1_itab-vbeln , zsdtab1_itab-lgort)
* from vbap where matnr = matnr . " C-22.06.98
from vbap where vbeln = zsdtab1_itab-vbeln .
if sy-subrc = 0 .
************Get the Excise No from Allocation Field*******************
select single * from zsdtab1 where commno = matnr .
if zsdtab1-excise_no = '' .
select * from vbrp where matnr = matnr .
select single vbeln into corr_vbeln from vbrk where
vbeln = vbrp-vbeln and vbtyp = 'M'.
if sy-subrc eq 0.
select single * from vbrk where vbtyp = 'N'
and sfakn = corr_vbeln. "cancelled doc.
if sy-subrc ne 0.
select single * from vbrk where vbeln = corr_vbeln.
if sy-subrc eq 0.
data : year(4) .
move sy-datum+0(4) to year .
select single * from bkpf where awtyp = 'VBRK' and awkey = vbrk-vbeln
and bukrs = 'MBIL' and gjahr = year .
if sy-subrc = 0 .
select single * from bseg where bukrs = 'MBIL' and belnr = bkpf-belnr
and gjahr = year and koart = 'D' and
shkzg = 'S' .
zsdtab1_itab-excise_no = bseg-zuonr .
endif .
endif.
endif.
endif.
endselect.
endif .
select single kunnr vkbur into (zsdtab1_itab-kunnr ,
zsdtab1_itab-vkbur) from vbak
where vbeln = zsdtab1_itab-vbeln .
if sy-subrc = 0 .
select single name1 ort01 regio into (zsdtab1_itab-customer ,
zsdtab1_itab-city , zsdtab1_itab-region) from kna1
where kunnr = zsdtab1_itab-kunnr .
endif.
* Get Ship to Party **************************************************
select single * from vbpa where vbeln = zsdtab1_itab-vbeln and
parvw = 'WE' .
if sy-subrc = 0 .
zsdtab1_itab-ship_tp = vbpa-kunnr .
* Get Destination Country of Ship to Party ************
select single * from kna1 where kunnr = vbpa-kunnr .
if sy-subrc = 0 .
select single * from t005t where land1 = kna1-land1
and spras = 'E' .
if sy-subrc = 0 .
zsdtab1_itab-country = t005t-landx .
endif .
endif .
endif .
endif .
endform. " GET_SALES
form update_time.
update zstatus set zupddate = sy-datum
uzeit = sy-uzeit
where programm = 'ZSDDET01' .
endform. " UPDATE_TIME
*& Form DATE_CONVERT
form date_convert using atflv changing date .
data : dt(8) , dat type i .
dat = atflv .
dt = dat .
date = dt .
endform. " DATE_CONVERT
*& Form UPDATE_POST_GOODS_ISSUE_DATE
form update_post_goods_issue_date .
types : begin of itab1_type ,
mblnr like mseg-mblnr ,
budat like mkpf-budat ,
end of itab1_type .
data : itab1 type itab1_type occurs 10 with header line .
loop at mara_itab .
select single * from zsdtab1 where commno = mara_itab-matnr .
if sy-subrc = 0 and zsdtab1-postdate = '00000000' .
refresh itab1 . clear itab1 .
select * from mseg where matnr = mara_itab-matnr and bwart = '601' .
itab1-mblnr = mseg-mblnr .
append itab1 .
endselect .
loop at itab1 .
select single * from mkpf where mblnr = itab1-mblnr .
if sy-subrc = 0 .
itab1-budat = mkpf-budat .
modify itab1 .
endif .
endloop .
sort itab1 by budat .
read table itab1 index 1 .
if sy-subrc = 0 .
update zsdtab1 set postdate = itab1-budat
where commno = mara_itab-matnr .
endif .
endif .
endloop .
endform. " UPDATE_POST_GOODS_ISSUE_DATE
*& Form UPDATE_EXCISE_DATE
form update_excise_date.
types : begin of itab2_type ,
mblnr like mseg-mblnr ,
budat like mkpf-budat ,
end of itab2_type .
data : itab2 type itab2_type occurs 10 with header line .
loop at mara_itab .
select single * from zsdtab1 where commno = mara_itab-matnr .
if sy-subrc = 0 and zsdtab1-excise_dat = '00000000' .
refresh itab2 . clear itab2 .
select * from mseg where matnr = mara_itab-matnr and
( bwart = '601' or bwart = '311' ) .
itab2-mblnr = mseg-mblnr .
append itab2 .
endselect .
loop at itab2 .
select single * from mkpf where mblnr = itab2-mblnr .
if sy-subrc = 0 .
itab2-budat = mkpf-budat .
modify itab2 .
endif .
endloop .
sort itab2 by budat .
read table itab2 index 1 .
if sy-subrc = 0 .
update zsdtab1 set excise_dat = itab2-budat
where commno = mara_itab-matnr .
endif .
endif .
endloop .
endform. " UPDATE_EXCISE_DATE
form get_order using matnr .
types : begin of itab_type ,
vbeln like vbap-vbeln ,
posnr like vbap-posnr ,
end of itab_type .
data : itab type itab_type occurs 10 with header line .
refresh itab . clear itab .
select * from vbap where matnr = mara_itab-matnr .
itab-vbeln = vbap-vbeln .
itab-posnr = vbap-posnr .
append itab .
endselect .
loop at itab .
select single * from vbak where vbeln = itab-vbeln .
if vbak-vbtyp <> 'C' .
delete itab .
endif .
endloop .
loop at itab .
select single * from vbfa where vbelv = itab-vbeln and
posnv = itab-posnr and vbtyp_n = 'H' .
if sy-subrc = 0 .
delete itab .
endif .
endloop .
clear : zsdtab1_itab-vbeln , zsdtab1_itab-bill_doc .
loop at itab .
zsdtab1_itab-vbeln = itab-vbeln .
select single * from vbfa where vbelv = itab-vbeln and
posnv = itab-posnr and vbtyp_n = 'M' .
if sy-subrc = 0 .
zsdtab1_itab-bill_doc = vbfa-vbeln .
endif .
endloop .
endform .
*& Form GET_CARTYPE
form get_cartype using matnr .
select single * from mara where matnr = matnr .
zsdtab1_itab-cartype = mara-satnr .
endform. " GET_CARTYPE
Hi,
I have analysed your program and would like to share the following points for better performance of this report:
(a) Select only the fields you need instead of SELECT * or SELECT SINGLE *. A field list consumes far fewer resources inside loops, and you have many SELECT SINGLE * statements against very large tables such as VBAP.
(b) Trace with ST05 (or ST12 in current mode, using a smaller selection that runs the report in 20-30 minutes) to identify which queries are loading the system and taking the most time.
(c) For internal tables, sort the data properly and use BINARY SEARCH when reading.
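Tip (c) can be sketched generically. A minimal illustration (in Python rather than ABAP, with made-up data) of sorting once and then doing binary lookups, the equivalent of SORT itab followed by READ TABLE ... BINARY SEARCH:

```python
import bisect

# Hypothetical internal table: (key, payload) rows in arbitrary order.
codes = [("C3", "Gamma"), ("A1", "Alpha"), ("B2", "Beta")]

# SORT itab BY key - done once, before the read loop.
codes.sort(key=lambda row: row[0])
keys = [row[0] for row in codes]

def read_binary(key):
    """READ TABLE itab WITH KEY ... BINARY SEARCH equivalent."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return codes[i]
    return None  # corresponds to sy-subrc <> 0

print(read_binary("B2"))  # ('B2', 'Beta')
print(read_binary("Z9"))  # None
```

The point is that each read costs O(log n) instead of the O(n) linear scan an unsorted READ performs, which adds up quickly inside large loops.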
I think this will help.
Thanks and Regards,
Harsh -
Performance problems with XMLTABLE and XMLQUERY involving relational data
Hello-
Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
* Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
* Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for a batch of 10 records (terrible), or 160 seconds for one record (unacceptable!). How can it take 16 times longer to process one-tenth the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform. (Long post, sorry.)
First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
• Converting legacy tabular data into XML records; and
• Performing code table lookups for coded values in XML records.
There are three things I want to accomplish with this post:
• Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
• Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
• Highlight remaining performance issues in hopes that we can solve them
What we are trying to accomplish:
• Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
• Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
• Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
• Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
What we did and why:
• Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
• Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
• Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
• Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
• Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
• Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
• Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
What issues remain?
We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
-- The main record table:
create table RECORDS (
SSN varchar2(20),
XMLREC sys.xmltype
)
xmltype column XMLREC store as binary xml;
create index records_ssn on records(ssn);
-- A dozen code tables represented by one like this:
create table CODES (
CODE varchar2(4),
DESCRIPTION varchar2(500)
);
create index codes_code on codes(code);
-- Some XML records with coded values (the real records are much more complex of course):
-- I think this took about a minute or two
DECLARE
ssn varchar2(20);
xmlrec xmltype;
i integer;
BEGIN
xmlrec := xmltype('<?xml version="1.0"?>
<Root>
<Id>123456789</Id>
<Element>
<Subelement1><Code>11</Code></Subelement1>
<Subelement2><Code>21</Code></Subelement2>
<Subelement3><Code>31</Code></Subelement3>
</Element>
<Element>
<Subelement1><Code>11</Code></Subelement1>
<Subelement2><Code>21</Code></Subelement2>
<Subelement3><Code>31</Code></Subelement3>
</Element>
<Element>
<Subelement1><Code>11</Code></Subelement1>
<Subelement2><Code>21</Code></Subelement2>
<Subelement3><Code>31</Code></Subelement3>
</Element>
</Root>');
for i IN 1..100000 loop
insert into records(ssn, xmlrec) values (i, xmlrec);
end loop;
commit;
END;
-- Some code data like this (ignoring date ranges on codes):
DECLARE
description varchar2(100);
i integer;
BEGIN
description := 'This is the code description ';
for i IN 1..3000 loop
insert into codes(code, description) values (to_char(i), description);
end loop;
commit;
end;
-- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
-- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
-- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
-- Note we are accessing a single XML record based on SSN
-- Note also we are reusing the one test code table multiple times for convenience of this test
select xmlquery('
for $r in Root
return
<Root>
<Id>123456789</Id>
{for $e in $r/Element
return
<Element>
<Subelement1>
{$e/Subelement1/Code}
<Description>
{ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
</Description>
</Subelement1>
<Subelement2>
{$e/Subelement2/Code}
<Description>
{ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
</Description>
</Subelement2>
<Subelement3>
{$e/Subelement3/Code}
<Description>
{ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
</Description>
</Subelement3>
</Element>
</Root>
' passing xmlrec returning content)
from records
where ssn = '10000';
The plan shows the nested loop access that slows things down.
By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
Operation Object
|SELECT STATEMENT ()
| SORT (AGGREGATE)
| NESTED LOOPS (SEMI)
| TABLE ACCESS (FULL) CODES
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| NESTED LOOPS (SEMI)
| TABLE ACCESS (FULL) CODES
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| NESTED LOOPS (SEMI)
| TABLE ACCESS (FULL) CODES
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| XPATH EVALUATION ()
| TABLE ACCESS (BY INDEX ROWID) RECORDS
| INDEX (RANGE SCAN) RECORDS_SSN
With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
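As an application-side workaround (a sketch only, not an XML DB feature): since each code table is small and read-only at display time, it can be loaded once into a hash map and the descriptions resolved in application code, which is the in-memory analogue of the hash join the optimizer won't use here. Data and names below are hypothetical:

```python
# Sketch: resolve coded values against a code table via one hash map
# built per batch, instead of probing the table once per <Code>
# element. Table contents and record shape are hypothetical.
codes_table = [("11", "Description for 11"),
               ("21", "Description for 21"),
               ("31", "Description for 31")]

# Build the lookup map once (O(n)); every probe afterwards is O(1).
code_map = dict(codes_table)

record = {"Id": "123456789",
          "Elements": [{"Subelement1": "11", "Subelement2": "21",
                        "Subelement3": "31"}]}

# Resolve every coded value in the record.
for element in record["Elements"]:
    for name, code in list(element.items()):
        element[name] = {"Code": code,
                         "Description": code_map.get(code, "")}

print(record["Elements"][0]["Subelement2"]["Description"])
# Description for 21
```

With ~20 code tables of a few hundred rows each this keeps lookups off the database entirely; the one 450,000-row table would need a size check before caching it this way.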
-- Add an xmlindex. Takes about 2.5 minutes
create index records_record_xml ON records(xmlrec)
indextype IS xdb.xmlindex;
Operation Object
|SELECT STATEMENT ()
| SORT (GROUP BY)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| TABLE ACCESS (FULL) CODES
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (GROUP BY)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| TABLE ACCESS (FULL) CODES
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (GROUP BY)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| TABLE ACCESS (FULL) CODES
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| TABLE ACCESS (BY INDEX ROWID) RECORDS
| INDEX (RANGE SCAN) RECORDS_SSN
Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity. -
Is there any performance problem with the creation of many Y or Z programs?
HI,
Is there any performance problem with the creation of many Y or Z programs? Please give me clarity regarding this.
regards
ganesh
Ganesh,
Can you please mention the context and purpose of creating these custom programs? And which application are you referring to?
Regards,
Rohit -
Performance problem in loading the master data attributes 0EQUIPMENT_ATTR
Hi Experts,
We have a performance problem loading the master data attributes for 0EQUIPMENT_ATTR. It runs as a pseudo-delta (full update), and the same InfoPackage runs with different selections. The problem we are facing is that the load takes 2 to 4 hours during US morning hours, but during US night hours it runs for 12-22 hours before finishing successfully, even though it pulls fewer records (which are OK).
When I checked the R/3-side job log (SM37), the job runs late there too. It shows the first and second IDocs arriving quickly, but the third and fourth IDocs arrive in BW only after a 5-7 hour gap, before being saved to the PSA and then loaded into the InfoObject.
We have user exits for the DataSource and ABAP routines, but they run fine in little time and the code is not very complex either.
Can you please explain and suggest steps on the R/3 side and BW side? How can I fix this performance issue?
Thanks,
dp
Hi,
check this link for data load performance. Under "Extraction Performance" you will find many useful hints.
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
Regards
Andreas -
Problem setting up data collection for a performance problem
We have a performance problem in our production environment: there is a query that takes a long time to display, especially for specific records involving a lot of details. I had the idea of setting up data collection to investigate the problem.
I used the wizard to set up data collection with default settings.
I had created a database PERF_DW on our test server for that purpose.
I didn't have any problem with the wizard: it created data collection sets, SQL Server Agent jobs and probably many other objects. But I must have missed something: I didn't get the chance to specify the database I wanted to tune,
and even when I started data collection, I couldn't figure out where the reports were.
Then I thought that I had done it all wrong, so I stopped data collection, dropped the database and tried to delete the jobs that had been created, and I got the following message:
TITLE: Microsoft SQL Server Management Studio
Drop failed for Job 'collection_set_1_noncached_collect_and_upload'. (Microsoft.SqlServer.Smo)
For help, click:
http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=11.0.2100.60+((SQL11_RTM).120210-1917+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Supprimer+Job&LinkId=20476
ADDITIONAL INFORMATION:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
The DELETE statement conflicted with the REFERENCE constraint "FK_syscollector_collection_sets_collection_sysjobs". The conflict occurred in database "msdb", table "dbo.syscollector_collection_sets_internal", column 'collection_job_id'.
The statement has been terminated. (Microsoft SQL Server, Error: 547)
For help, click:
http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&ProdVer=11.00.3128&EvtSrc=MSSQLServer&EvtID=547&LinkId=20476
BUTTONS:
OK
Now I understand that the collectors prevent the jobs from being deleted but I can't delete the collectors.
What should I do?
Any help is appreciated.
Thank you for your advice.
Sylvie P
Hello,
Please refer to the following article.
http://blogs.msdn.com/b/sqlagent/archive/2011/07/22/remove-associated-data-collector-jobs.aspx
You can run the dccleanup.sql script mentioned in the article.
Another option is using sp_syscollector_cleanup_collector.
http://blogs.msdn.com/b/sqlagent/archive/2012/04/05/remove-associated-data-collector-jobs-in-sql-2012.aspx
Hope this helps.
Regards,
Alberto Morillo
SQLCoffee.com -
Performance problem while selecting (extracting) the data
I have one intermediate table.
I am inserting rows derived from a select statement.
The select statement has a where clause which joins a view (created from 5 tables).
The problem is that the select statement which gets the data is taking too long.
I identified the problems like this:
1) The view used in the select statement is not indexed -- is an index necessary on the view?
2) The tables used to create the view are already properly indexed.
3) Extracting the data is what takes the most time.
The query below extracts the data and inserts it into the intermediate table:
SELECT 1414 report_time,
2 dt_q,
1 hirearchy_no_q,
p.unique_security_c,
p.source_code_c,
p.customer_specific_security_c user_security_c,
p.par_value par_value, exchange_code_c,
(CASE WHEN p.ASK_PRICE_L IS NOT NULL THEN 1
WHEN p.BID_PRICE_L IS NOT NULL THEN 1
WHEN p.STRIKE_PRICE_L IS NOT NULL THEN 1
WHEN p.VALUATION_PRICE_L IS NOT NULL THEN 1 ELSE 0 END) bill_status,
p.CLASS_C AS CLASS,
p.SUBCLASS_C AS SUBCLASS,
p.AGENT_ADDRESS_LINE1_T AS AGENTADDRESSLINE1,
p.AGENT_ADDRESS_LINE2_T AS AGENTADDRESSLINE2,
p.AGENT_CODE1_T AS AGENTCODE1,
p.AGENT_CODE2_T AS AGENTCODE2,
p.AGENT_NAME_LINE1_T AS AGENTNAMELINE1,
p.AGENT_NAME_LINE2_T AS AGENTNAMELINE2,
p.ASK_PRICE_L AS ASKPRICE,
p.ASK_PRICE_DATE_D AS ASKPRICEDATE,
p.ASSET_CLASS_T AS ASSETCLASS
FROM (SELECT
DISTINCT x.*,m.customer_specific_security_c,m.par_value
FROM
HOLDING_M m JOIN ED_DVTKQS_V x ON
m.unique_security_c = x.unique_security_c AND
m.customer_c = 'CONF100005' AND
m.portfolio_c = 24 AND
m.status_c = 1
WHERE exists
(SELECT 1 FROM ED_DVTKQS_V y
WHERE x.unique_security_c = y.unique_security_c
GROUP BY y.unique_security_c
HAVING MAX(y.trading_volume_l) = x.trading_volume_l)) p
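For what it's worth, the correlated EXISTS/HAVING MAX filter above re-evaluates the group maximum per outer row; the same logic can be computed in two passes, with each group's maximum found exactly once. A hypothetical sketch of that logic (in Python, purely to illustrate the rewrite idea; in Oracle an analytic MAX() OVER (PARTITION BY unique_security_c) would play this role):

```python
# Sketch: keep only the row(s) carrying the maximum trading_volume_l
# per unique_security_c, computing each group's max exactly once
# instead of via a correlated subquery evaluated per row.
rows = [
    {"unique_security_c": "S1", "trading_volume_l": 100},
    {"unique_security_c": "S1", "trading_volume_l": 250},
    {"unique_security_c": "S2", "trading_volume_l": 80},
]

# Pass 1: max volume per security (the GROUP BY ... MAX, done once).
max_by_sec = {}
for r in rows:
    key = r["unique_security_c"]
    if key not in max_by_sec or r["trading_volume_l"] > max_by_sec[key]:
        max_by_sec[key] = r["trading_volume_l"]

# Pass 2: keep rows matching their group's max (the EXISTS filter).
top_rows = [r for r in rows
            if r["trading_volume_l"] == max_by_sec[r["unique_security_c"]]]

print(top_rows)
```

Two linear passes replace a per-row re-scan of the view, which is usually where this query pattern loses its time.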
Anyone, please give me your valuable suggestions on the performance.
Thanks for the update.
In the select query we used some functions like MAX:
(SELECT 1 FROM ED_DVTKQS_V y
WHERE x.unique_security_c = y.unique_security_c
GROUP BY y.unique_security_c
HAVING MAX(y.trading_volume_l) = x.trading_volume_l)) p
Will these types of functions cause the performance problem? -
[URGENT] Performance problem with BC4J and partioned data
Hi all,
I have a big performance problem with BC4J and partitioned data. As a partitioned table shouldn't have a primary key like a sequence (or anything else), my partitioned table doesn't have any primary key.
When I debug my BC4J application I can see a message "ignoring row with no primary key" from EntityCache. It takes a long time to retrieve my data even if I use the partition keys. A quick & dirty Forms application was many times faster!
Is this a bug in BC4J, or is BC4J not suitable for partitioned data? Can anyone give me a hint what to do to make the BC4J application fast even with partitioned data? In a non-partitioned environment the application works quite well, so it seems the error must be somewhere in this area.
Thanks,
Axel
Here's a SQL statement that creates the table.
CREATE TABLE SEARCH
(SEAR_PARTKEY_DAY NUMBER(4) NOT NULL
,SEAR_PARTKEY_EMP VARCHAR2(2) NOT NULL
,SEAR_ID NUMBER(20) NOT NULL
,SEAR_ENTRY_DATE TIMESTAMP NOT NULL
,SEAR_LAST_MODIFIED TIMESTAMP NOT NULL
,SEAR_STATUS VARCHAR2(100) DEFAULT '0'
,SEAR_ITC_DATE TIMESTAMP NOT NULL
,SEAR_MESSAGE_CLASS VARCHAR2(15) NOT NULL
,SEAR_CHIPHERING_TYPE VARCHAR2(256)
,SEAR_GMAT VARCHAR2(1) DEFAULT 'U'
,SEAR_NATIONALITY VARCHAR2(3) DEFAULT 'XXX'
,SEAR_MESSAGE_ID VARCHAR2(32) NOT NULL
,SEAR_COMMENT VARCHAR2(256) NOT NULL
,SEAR_NUMBER_OF NUMBER(3) NOT NULL
,SEAR_INTERCEPTION_SYSTEM VARCHAR2(40)
,SEAR_COMM_PRIOD_H NUMBER(5) DEFAULT -1
,SEAR_PRIOD_R NUMBER(5) DEFAULT -1
,SEAR_INMARSAT_CES VARCHAR2(40)
,SEAR_BEAM VARCHAR2(10)
,SEAR_DIALED_NUMBER VARCHAR2(70)
,SEAR_TRANSMIT_NUMBER VARCHAR2(70)
,SEAR_CALLED_NUMBER VARCHAR2(40)
,SEAR_CALLER_NUMBER VARCHAR2(40)
,SEAR_MATERIAL_TYPE VARCHAR2(3) NOT NULL
,SEAR_SOURCE VARCHAR2(10)
,SEAR_MAPPING VARCHAR2(100) DEFAULT '__REST'
,SEAR_DETAIL_MAPPING VARCHAR2(100)
,SEAR_PRIORITY NUMBER(3) DEFAULT 255
,SEAR_LANGUAGE VARCHAR2(5) DEFAULT 'XXX'
,SEAR_TRANSMISSION_TYPE VARCHAR2(40)
,SEAR_INMARSAT_STD VARCHAR2(1)
,SEAR_FILE_NAME VARCHAR2(100) NOT NULL
)
PARTITION BY RANGE (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP)
(PARTITION SEARCH_MAX VALUES LESS THAN (MAXVALUE, MAXVALUE) TABLESPACE MIRA4_SEARCH_EVEN -- TABLESPACE keyword assumed; it appears to have been lost in the post
);
Of course SEAR_ID is filled by a sequence, but the field is not the primary key, as that would decrease the performance of the partitioned data.
We moved to native JDBC with our application and the performance is like we never expected to be! -
URGENT------MB5B : PERFORMANCE PROBLEM
Hi,
We are getting a time-out error while running transaction MB5B. We posted the issue to SAP global support for further analysis, and SAP responded with note 1005901 to review.
The note consists of creating a Z table and some Z programs to execute MB5B without the time-out error, but SAP has not described what type of logic has to be written or how this can be addressed.
Could anyone suggest how we can proceed further?
The note has been attached for reference.
Note 1005901 - MB5B: Performance problems
Note Language: English Version: 3 Validity: Valid from 05.12.2006
Summary
Symptom
o The user starts transaction MB5B, or the respective report
RM07MLBD, for a very large number of materials or for all materials
in a plant.
o The transaction terminates with the ABAP runtime error
DBIF_RSQL_INVALID_RSQL.
o The transaction runtime is very long and it terminates with the
ABAP runtime error TIME_OUT.
o During the runtime of transaction MB5B, goods movements are posted
in parallel:
- The results of transaction MB5B are incorrect.
- Each run of transaction MB5B returns different results for the
same combination of "material + plant".
More Terms
MB5B, RM07MLBD, runtime, performance, short dump
Cause and Prerequisites
The DBIF_RSQL_INVALID_RSQL runtime error may occur if you enter too many
individual material numbers in the selection screen for the database
selection.
The runtime is long because of the way report RM07MLBD works. It reads the
stocks and values from the material masters first, then the MM documents
and, in "Valuated Stock" mode, it then reads the respective FI documents.
If there are many MM and FI documents in the system, the runtimes can be
very long.
If goods movements are posted during the runtime of transaction MB5B for
materials that should also be processed by transaction MB5B, transaction
MB5B may return incorrect results.
Example: Transaction MB5B should process 100 materials with 10,000 MM
documents each. The system takes approximately 1 second to read the
material master data and it takes approximately 1 hour to read the MM and
FI documents. A goods movement for a material to be processed is posted
approximately 10 minutes after you start transaction MB5B. The stock for
this material before this posting has already been determined. The new MM
document is also read, however. The stock read before the posting is used
as the basis for calculating the stocks for the start and end date.
If you execute transaction MB5B during a time when no goods movements are
posted, these incorrect results do not occur.
Solution
The SAP standard release does not include a solution that allows you to
process mass data using transaction MB5B. The requirements for transaction
MB5B are very customer-specific. To allow for these customer-specific
requirements, we provide the following proposed implementation:
Implementation proposal:
o You should call transaction MB5B for only one "material + plant"
combination at a time.
o The list outputs for each of these runs are collected and at the
end of the processing they are prepared for a large list output.
You need three reports and one database table for this function. You can
store the lists in the INDX cluster table.
o Define work database table ZZ_MB5B with the following fields:
- Material number
- Plant
- Valuation area
- Key field for INDX cluster table
o The size category of the table should be based on the number of
entries in material valuation table MBEW.
Report ZZ_MB5B_PREPARE
In the first step, this report deletes all existing entries from the
ZZ_MB5B work table and the INDX cluster table from the last mass data
processing run of transaction MB5B.
o The ZZ_MB5B work table is filled in accordance with the selected
mode of transaction MB5B:
- Stock type mode = Valuated stock
- Include one entry in work table ZZ_MB5B for every "material +
valuation area" combination from table MBEW.
o Other modes:
- Include one entry in work table ZZ_MB5B for every "material +
plant" combination from table MARC
Furthermore, the new entries in work table ZZ_MB5B are assigned a unique
22-character string that later serves as a key term for cluster table INDX.
Report ZZ_MB5B_MONITOR
This report reads the entries sequentially in work table ZZ_MB5B. Depending
on the mode of transaction MB5B, a lock is executed as follows:
o Stock type mode = Valuated stock
For every "material + valuation area" combination, the system
determines all "material + plant" combinations. All determined
"material + plant" combinations are locked.
o Other modes:
- Every "material + plant" combination is locked.
- An entry from the ZZ_MB5B work table is processed as
follows only if it has been locked successfully.
- Report RM07MLBD is started for the current "material +
plant" combination or "material + valuation area"
combination, depending on the required mode.
- The list created is stored with the generated key term in the
INDX cluster table.
- The current entry is deleted from the ZZ_MB5B work table.
- Database updates are executed with COMMIT WORK AND WAIT.
- The lock is released.
- The system reads the next entry in the ZZ_MB5B work table.
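The monitor loop above could be sketched as follows for the "other modes" case. The lock object EZ_MB5B (and its enqueue/dequeue function modules) and the INDX area id 'ZB' are assumptions; the note only prescribes the sequence lock, run, save, delete, commit, release.

```abap
REPORT zz_mb5b_monitor.
* Minimal sketch of the processing loop; lock object and
* INDX area id are assumptions, not part of the note.

DATA: lt_work TYPE STANDARD TABLE OF zz_mb5b,
      ls_work TYPE zz_mb5b,
      lt_list TYPE STANDARD TABLE OF abaplist.

SELECT * FROM zz_mb5b INTO TABLE lt_work.

LOOP AT lt_work INTO ls_work.
  " lock the "material + plant" combination (assumed custom lock object)
  CALL FUNCTION 'ENQUEUE_EZ_MB5B'
    EXPORTING
      matnr        = ls_work-matnr
      werks        = ls_work-werks
    EXCEPTIONS
      foreign_lock = 1
      OTHERS       = 2.
  CHECK sy-subrc = 0.   " skip entries another instance is working on

  " run the stock report and capture its list output in memory
  SUBMIT rm07mlbd WITH matnr = ls_work-matnr
                  WITH werks = ls_work-werks
                  EXPORTING LIST TO MEMORY AND RETURN.
  CALL FUNCTION 'LIST_FROM_MEMORY'
    TABLES
      listobject = lt_list.

  " store the list under the generated key, then clean up
  EXPORT lt_list TO DATABASE indx(zb) ID ls_work-indxkey.
  DELETE zz_mb5b FROM ls_work.
  COMMIT WORK AND WAIT.

  " release the lock and continue with the next entry
  CALL FUNCTION 'DEQUEUE_EZ_MB5B'
    EXPORTING
      matnr = ls_work-matnr
      werks = ls_work-werks.
ENDLOOP.
```

Reading the work table into an internal table first avoids a COMMIT WORK inside an open SELECT loop, which would invalidate the database cursor.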
Application
- The lock ensures that no goods movements can be posted during
the runtime of the RM07MLBD report for the "material + plant"
combination to be processed.
- You can start several instances of this report at the same
time. This ensures that all "material + plant" combinations
can be processed in parallel.
- The system takes just a few seconds to process a "material +
plant" combination, so disruption to production operation is
minimal.
- The report is started repeatedly until there are no more
entries in the ZZ_MB5B work table.
- If the report terminates or is interrupted, it can be started
again at any time.
Report ZZ_MB5B_PRINT
You can use this report once all "material + plant" or "material +
valuation area" combinations from the ZZ_MB5B work table have been
processed. The report reads the saved lists from the INDX cluster table
and combines the individual lists into one complete list output.
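A sketch of the print report, assuming the lists were saved under the (assumed) INDX area id 'ZB' and exported under the name lt_list as in the monitor sketch:

```abap
REPORT zz_mb5b_print.
* Minimal sketch; area id 'ZB' and export name are assumptions.

DATA: lt_keys TYPE STANDARD TABLE OF indx-srtfd,
      lv_key  TYPE indx-srtfd,
      lt_list TYPE STANDARD TABLE OF abaplist.

* collect the keys of all saved lists
SELECT DISTINCT srtfd FROM indx INTO TABLE lt_keys
       WHERE relid = 'ZB'.

* read each saved list and append it to the complete output
LOOP AT lt_keys INTO lv_key.
  IMPORT lt_list FROM DATABASE indx(zb) ID lv_key.
  CALL FUNCTION 'WRITE_LIST'
    TABLES
      listobject = lt_list.
ENDLOOP.
```

DISTINCT is needed because a large exported list occupies several INDX rows (field SRTF2) under the same key.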
Estimated implementation effort
An experienced ABAP programmer requires an estimated three to five days to
create the ZZ_MB5B work table and these three reports. You can find a
similar program as an example in Note 32236: MBMSSQUA.
If you need support during the implementation, contact your SAP consultant.
Header Data
Release Status: Released for Customer
Released on: 05.12.2006 16:14:11
Priority: Recommendations/additional info
Category: Consulting
Main Component MM-IM-GF-REP IM Reporting (no LIS)
The note is not release-dependent.
Thanks in advance.
Edited by: Neliea on Jan 9, 2008 10:38 AM
Edited by: Neliea on Jan 9, 2008 10:39 AM

Before you try any of this, try working with database hints as described
in Notes 921165, 902157 and 918992.