Poor performance of transformation end routine
Hi Folks,
I have written a transformation end routine to update a cube for WBS elements, reading the master data and hierarchy tables. The code works, but a lot of time is spent in the end routine. Could someone take a look and advise on a more efficient way to write this?
Purpose:
Read the WBS element master data for the WBS element delivered in the data package (comm. structure). Read the WBS element hierarchy table for the WBS element to determine its level. If the record is found, the level is 4 and the approval year is not empty, we use that. Otherwise, if the record is found but it turns out to be level 3, use the WBS element as well. If all fails (e.g. the level is 4 but there is no approval year), determine the parent WBS element; see the code after the ELSE statement.
Finally, if the WBS element determined above is the same as the one in the comm. structure (for which we already read the master data into table t_h_wbs_elemt), fetch the approval year from t_h_wbs_elemt (this improves performance because there is no need to read the master data for every new record); otherwise read the new WBS element's master data into the header of t_h_wbs_elemt.
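An illustrative sketch of the decision logic above (variable and field names follow the thread and are not verified; the parent derivation itself is only indicated in a comment, since that code is not shown here):

```abap
* Sketch only - assumes lt_hwbs_elemt / lt_mwbs_elemt are filled and sorted.
READ TABLE lt_hwbs_elemt INTO lw_hwbs_elemt
     WITH KEY nodename = e_s_result-wbs_elemt BINARY SEARCH.
IF sy-subrc = 0.
  READ TABLE lt_mwbs_elemt INTO lw_mwbs_elemt
       WITH KEY wbs_elemt = e_s_result-wbs_elemt BINARY SEARCH.
  IF lw_hwbs_elemt-tlevel = 4 AND NOT lw_mwbs_elemt-appr_year IS INITIAL.
*   Level 4 with an approval year: use the WBS element itself.
    lv_nodename = e_s_result-wbs_elemt.
  ELSEIF lw_hwbs_elemt-tlevel = 3.
*   Level 3: use the WBS element as well.
    lv_nodename = e_s_result-wbs_elemt.
  ELSE.
*   Level 4 without an approval year: fall back to the parent WBS
*   element (derived elsewhere, e.g. via parentid in the hierarchy table).
  ENDIF.
ENDIF.
```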
Edited by: ramesh ramaraju on Jan 7, 2010 4:50 PM
Hi,
Try like this
DATA: lv_nodename type RSSHNODENAME.
DATA: lt_mwbs_elemt type table of /bi0/mwbs_elemt,
lw_mwbs_elemt type /bi0/mwbs_elemt.
DATA: lt_hwbs_elemt type table of /bi0/hwbs_elemt,
lw_hwbs_elemt type /bi0/hwbs_elemt.
***Instead of SELECT *, specify only the fields you need; otherwise the full record is fetched.
***Also restrict the selection with a WHERE condition wherever possible.
select wbs_elemt appr_year prog_def_s from /bi0/mwbs_elemt
into corresponding fields of table lt_mwbs_elemt.
IF SY-SUBRC = 4.
RAISE EXCEPTION TYPE CX_RSROUT_ABORT.
ENDIF.
***Same here: select only the fields needed from the hierarchy table.
select nodename tlevel from /bi0/hwbs_elemt
into corresponding fields of table lt_hwbs_elemt.
sort lt_hwbs_elemt by nodename.
sort lt_mwbs_elemt by wbs_elemt.
clear E_S_RESULT.
loop at RESULT_PACKAGE into E_S_RESULT.
clear lw_mwbs_elemt.
***Use a READ on the sorted table instead of looping.
read table lt_mwbs_elemt into lw_mwbs_elemt
with key wbs_elemt = e_s_result-wbs_elemt binary search.
clear lw_hwbs_elemt.
***Use a READ on the sorted table instead of looping.
read table lt_hwbs_elemt into lw_hwbs_elemt
with key nodename = e_s_result-wbs_elemt binary search.
clear lv_nodename.
IF sy-subrc EQ 0 AND lw_hwbs_elemt-tlevel EQ 4
and lw_mwbs_elemt-appr_year ne 0.
lv_nodename = e_s_result-wbs_elemt.
ELSEIF sy-subrc EQ 0 AND lw_hwbs_elemt-tlevel EQ 4
and lw_mwbs_elemt-prog_def_s ne ' '.
lv_nodename = e_s_result-wbs_elemt.
ELSEIF sy-subrc EQ 0 AND lw_hwbs_elemt-tlevel EQ 3.
lv_nodename = e_s_result-wbs_elemt.
ENDIF.
IF lw_mwbs_elemt-wbs_elemt ne lv_nodename.
clear lw_mwbs_elemt.
READ TABLE lt_mwbs_elemt INTO lw_mwbs_elemt
WITH KEY wbs_elemt = lv_nodename BINARY SEARCH.
ENDIF.
IF lw_mwbs_elemt-wbs_elemt EQ lv_nodename.
E_S_RESULT-PROG_DEF_S = lw_mwbs_elemt-prog_def_s.
E_S_RESULT-APPR_YEAR = lw_mwbs_elemt-APPR_YEAR.
ENDIF.
APPEND E_S_RESULT TO E_T_RESULT.
ENDLOOP.
REFRESH RESULT_PACKAGE.
RESULT_PACKAGE[] = E_T_RESULT[].
Regards,
Ravi
Similar Messages
-
Transformation End Routine - Aggregation - Dummy rule
Hello,
I am writing a transformation end routine.
I would like to use the 'SUM' aggregation behaviour for the key figures in my Result_Table instead of a MOVE.
The help at sap http://help.sap.com/saphelp_nw04s/helpdata/en/e3/732c42be6fde2ce10000000a1550b0/content.htm
says that
"key figures are updated by default with the aggregation behavior Overwrite (MOVE)".
In order to do otherwise, "You have to use a dummy rule to override this."
Could someone explain me how to do this?
Claudio
Claudio,
Map your KF to a dummy KF and then set the aggregation to Summation.
Then apply your transformation end routine and then the value calculated in the end routine will get summed up to the existing value.
Arun -
Hi experts,
I have a requirement to write an end routine that reads a DSO for the last 12 months' sales quantity per month and passes the sum to a key figure.
I am not interested in using a BEx variable. While loading data from the source to the target DSO, in the end routine I am trying to read another DSO which has the same structure as my target DSO, where information is stored by fiscal period, year, material, etc. Finally, there is a key figure in the target which needs to be filled with the sum of 12 months' sales quantity. For each record from source to target there will be at most 12 records in the read DSO (one per month). My routine is below.
I am not an expert in ABAP, so please kindly go through it and guide me.
TYPES: BEGIN OF s_/BIC/AZOSLS00,
FISCPER type /BI0/OIFISCPER,
FISCVARNT type /BI0/OIFISCVARNT,
PLANT type /BI0/OIPLANT,
STOR_LOC type /BI0/OISTOR_LOC,
/BIC/MATERIAL type /BIC/OIMATERIAL,
VTYPE type /BI0/OIVTYPE,
BILL_QTY type /BI0/OIBILL_QTY,
END OF s_/BIC/AZOSLS00.
DATA: it_/BIC/AZOSLS00 TYPE TABLE OF s_/BIC/AZOSLS00,
wa_/BIC/AZOSLS00 TYPE s_/BIC/AZOSLS00.
SELECT
FISCPER
FISCVARNT
PLANT
STOR_LOC
/BIC/MATERIAL
VTYPE
BILL_QTY
FROM /BIC/AZOSLS00 INTO TABLE it_/BIC/AZOSLS00
FOR ALL
ENTRIES IN RESULT_PACKAGE
WHERE
* The field below is the from-value of the fiscal period (fiscal period minus 999;
* e.g. for 001.2014 this value will be 002.2013, so 12 months including the current period).
FISCPER >= RESULT_PACKAGE-/BIC/ZFISCPERF
* Below is the result field's fiscal period. I don't know which keyword or statement
* to use to select interval values; this BETWEEN gives a syntax error saying it
* cannot be used in a WHERE clause with FOR ALL ENTRIES:
between RESULT_PACKAGE-FISCPER
AND
FISCVARNT = RESULT_PACKAGE-FISCVARNT AND
PLANT = RESULT_PACKAGE-PLANT AND
STOR_LOC = RESULT_PACKAGE-STOR_LOC and
/BIC/MATERIAL = RESULT_PACKAGE-/BIC/MATERIAL .
SORT it_/BIC/AZOSLS00 BY FISCPER FISCVARNT PLANT STOR_LOC
/BIC/MATERIAL .
LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
READ TABLE it_/BIC/AZOSLS00 INTO wa_/BIC/AZOSLS00 WITH KEY
* Below I don't know what I need to use in the READ statement for an interval of
* fiscal periods; this gives an error that >= cannot be used:
FISCPER >= <result_fields>-/BIC/ZFISCPERF
FISCPER = <result_fields>-FISCPER
FISCVARNT = <result_fields>-FISCVARNT
PLANT = <result_fields>-PLANT
STOR_LOC = <result_fields>-STOR_LOC
/BIC/MATERIAL = <result_fields>-/BIC/MATERIAL
BINARY SEARCH.
BREAK-POINT.
IF sy-subrc = 0.
* For each record there will be up to 12 records in the read DSO, so I need to pass
* the sum of the 12 quantities to the result; again I don't know what to put here.
* The SUM statement gives an error:
<result_fields>-/BIC/ZLSTSLS12 =
sum(wa_/BIC/AZOSLS00-BILL_QTY).
ENDIF.
ENDLOOP.
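One way to express the 12-month window, sketched here from the names in the routine above: BETWEEN is not allowed in a FOR ALL ENTRIES WHERE clause, but the range operators >= and <= are, and READ TABLE cannot match a range at all, so the per-record sum has to be done with LOOP ... WHERE over the slice. Treat this as an assumption-based outline, not tested code:

```abap
* Selection: replace BETWEEN with explicit range comparisons.
SELECT fiscper fiscvarnt plant stor_loc /bic/material vtype bill_qty
  FROM /bic/azosls00 INTO TABLE it_/bic/azosls00
  FOR ALL ENTRIES IN result_package
  WHERE fiscper >= result_package-/bic/zfiscperf
    AND fiscper <= result_package-fiscper
    AND fiscvarnt = result_package-fiscvarnt
    AND plant     = result_package-plant
    AND stor_loc  = result_package-stor_loc
    AND /bic/material = result_package-/bic/material.

* Summation: accumulate all rows in the fiscal-period window per record.
LOOP AT result_package ASSIGNING <result_fields>.
  CLEAR <result_fields>-/bic/zlstsls12.
  LOOP AT it_/bic/azosls00 INTO wa_/bic/azosls00
       WHERE fiscper >= <result_fields>-/bic/zfiscperf
         AND fiscper <= <result_fields>-fiscper
         AND fiscvarnt = <result_fields>-fiscvarnt
         AND plant     = <result_fields>-plant
         AND stor_loc  = <result_fields>-stor_loc
         AND /bic/material = <result_fields>-/bic/material.
    <result_fields>-/bic/zlstsls12 =
      <result_fields>-/bic/zlstsls12 + wa_/bic/azosls00-bill_qty.
  ENDLOOP.
ENDLOOP.
```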
Friends, please kindly go through this and help me.
Thanks
Chandra.
Hiii,
If you only want to store the last 12 months of data in the target ODS,
then create a filter in the DTP and write a filter routine on calmonth or fiscal period.
Refer the below link to create filter routine :
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80b2db87-639b-2e10-a8b9-c1ac0a44a7a6?QuickLink=index&…
Regards,
Akshay -
Abap logic in Transformation end routine bringing 0 records
Hi,
I wrote this logic but it is not populating the figure that I need. I am trying to get a key figure /
field-symbols: <fs_rp> LIKE LINE OF RESULT_PACKAGE.
Types : Begin of s_itab,
S_ITEMID TYPE /N/ADSO_ASPC00-/N42/S_ITEMID,
STRDCOST TYPE /N/ADSO_ASPC00-/N42/S_STRDCOST,
End of s_itab.
Data : it_itab type table of s_itab,
wa_itab type s_itab.
LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
Read table it_itab INTO wa_itab with key S_ITEMID =
<fs_rp>-/N42/S_ITEMID.
Clear wa_itab.
if sy-subrc ne 0.
SELECT /N/S_STRDCOST /N/S_ITEMID FROM /N/ADSO_ASPC00 INTO
CORRESPONDING FIELDS OF wa_itab
FOR ALL ENTRIES IN RESULT_PACKAGE
WHERE /N/S_ITEMID EQ RESULT_PACKAGE-/N/S_ITEMID.
ENDSELECT.
<fs_rp>-/N/S_STRDCOST = wa_itab-STRDCOST.
ENDIF.
ENDLOOP.
1. The names of the fields in the internal table it_itab and in table /n/adso_dsc00 are not the same, so your MOVE-CORRESPONDING is not working.
2. You need to select into the table it_itab, not into the work area wa_itab.
Data rp TYPE tys_TG_1.
field-symbols: <fs_rp> LIKE LINE OF RESULT_PACKAGE.
Types : Begin of s_itab,
/N/S_ITEMID TYPE /N/ADSO_DSOC00-/N/S_ITEMID,
/N/S_STRDCOST TYPE /N/ADSO_DSOC00-/N/S_STRDCOST,
End of s_itab.
Data : it_itab type table of s_itab,
wa_itab type s_itab.
SELECT /N/S_STRDCOST /N/S_ITEMID FROM /N/ADSO_DSOC00
INTO CORRESPONDING FIELDS OF TABLE it_itab
FOR ALL ENTRIES IN RESULT_PACKAGE
WHERE /N/S_ITEMID EQ RESULT_PACKAGE-/N/S_ITEMID.
SORT it_itab BY /N/S_ITEMID.
LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
Read table it_itab INTO wa_itab with key /N/S_ITEMID =
<fs_rp>-/N/S_ITEMID binary search.
if sy-subrc eq 0.
<fs_rp>-/N/S_STRDCOST = wa_itab-/N/S_STRDCOST.
ENDIF.
ENDLOOP. -
Abap logic in Transformation End Routine not working correctly
Hi,
I wrote a piece of code, but during testing I found that it doesn't meet my requirement.
Requirement
I want to extract Standard_Cost for all sales items that meet the condition, but at the moment only the first sales item in the DSO is showing.
I would like the following lines to display in the cube as well since the PLITEM is different.
201021 PI31 REDBACK 999999A 78,850
201021 PI31 FLXAAA 999999A 3154,000
DSO Table
CALWEEK PLPLANT PLITEM SALESITEM STRDCOST
201020 IN06 FLXAAA 557868B 6308,000
201021 FI24 FLXAAA 557868B 6308,000
201021 FI24 FLXAAA 999999B 0,000
201021 PI31 REDBACK 999999A 78,850
201021 PI31 FLXAAA 999999A 3154,000
InfoCube
SALESITEM PLPLANT SALESDOC STRDCOST
999999A PI31 1100000911 78,850
Abap Logic
Data ld_calweek(6) TYPE n.
* Get the current week based on the system date.
CALL FUNCTION 'DATE_GET_WEEK'
EXPORTING
date = sy-datum
IMPORTING
week = ld_calweek
EXCEPTIONS
date_invalid = 1
OTHERS = 2.
Data rp TYPE tys_TG_1.
LOOP AT RESULT_PACKAGE INTO rp.
SELECT SINGLE STRDCOST FROM /N/ABC_EFG00 INTO
rp-S_STRDCOST
WHERE SALESITEM = rp-S_ITEMID AND CALWEEK =
ld_calweek AND PLPLANT EQ rp-S_SOURCE.
MODIFY RESULT_PACKAGE FROM rp.
Clear rp.
ENDLOOP.
How do I resolve this
thanks
Hi Vaidya,
SELECT SINGLE will always fetch the first entry from the source that matches your WHERE condition;
therefore you are not getting all the required data.
WHERE SALESITEM = rp-S_ITEMID AND CALWEEK =
ld_calweek AND PLPLANT EQ rp-S_SOURCE.
According to your logic it will pick only one record. E.g. for
201021 PI31 REDBACK 999999A 78,850
201021 PI31 FLXAAA 999999A 3154,000
item id = 999999A
plplant = PI31
both rows match, so SELECT SINGLE fetches only one record (whichever it finds first that meets your WHERE condition).
You need to change your code logic to include more fields in the condition: fields which differentiate records that share the same values in your present WHERE clause.
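A hedged sketch of Navneet's suggestion: read all matching rows into an internal table once, with PLITEM included in the key so that rows differing only by planning item are not collapsed. The table name /N/ABC_EFG00 and the result field rp-S_PLITEM are assumptions taken or extrapolated from the thread:

```abap
TYPES: BEGIN OF s_cost,
         salesitem TYPE /n/abc_efg00-salesitem,
         plplant   TYPE /n/abc_efg00-plplant,
         plitem    TYPE /n/abc_efg00-plitem,
         strdcost  TYPE /n/abc_efg00-strdcost,
       END OF s_cost.
DATA: it_cost TYPE TABLE OF s_cost,
      wa_cost TYPE s_cost.

* Read all candidate rows for the current week in one shot.
SELECT salesitem plplant plitem strdcost
  FROM /n/abc_efg00 INTO TABLE it_cost
  WHERE calweek = ld_calweek.
SORT it_cost BY salesitem plplant plitem.

LOOP AT RESULT_PACKAGE INTO rp.
* PLITEM distinguishes the otherwise identical rows; without it only
* the first match per SALESITEM/PLPLANT would ever be found.
  READ TABLE it_cost INTO wa_cost
       WITH KEY salesitem = rp-s_itemid
                plplant   = rp-s_source
                plitem    = rp-s_plitem      " assumed field name
       BINARY SEARCH.
  IF sy-subrc = 0.
    rp-s_strdcost = wa_cost-strdcost.
    MODIFY RESULT_PACKAGE FROM rp.
  ENDIF.
ENDLOOP.
```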
Thanks
Navneet -
End Routine in virtual infoprovider based on DTP
Hi gurus!!
I'm facing the following problem. I need a virtual provider based on a DTP, which is based on a DataSource over a table view, but I want an end routine in the transformation rules. When I use the end routine, my filters don't work (it always returns all the information). If I don't use the end routine (or start routine), the filters work perfectly.
Any suggestions for this problems.
Thanks a lot!
Gorka Ibor.
Hi,
First, try to debug the routines:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/b0038ad7-a0c7-2c10-cdbc-dd674682c8e7?QuickLink=index&overridelayout=true
If you haven't found any bug, then these links will help you:
Can I call values entered in DTP filter in transformation End Routine??
Routines in DTP filters -
Primitive ABAP editor in start/end routines in transformations
When editing or viewing ABAP code in BI transformations, for example in a start routine, the editor that opens is very primitive compared to the normal SE38 editor. Some of the limitations include:
The editor window doesn't cover the whole screen, and there is seemingly no way to increase its size.
The syntax check doesn't show on which line syntax errors are located.
There is no option to perform an extended program check.
There is no way to insert break-points (other than with the ABAP keyword of course)
These limitations are present regardless of whether I choose the new front-end editor, the old front-end editor or the back-end editor. We're running SAP NetWeaver 2004s.
It is of course possible to create a program in SE38 and copy-paste your start routine code to see it in the "real" editor, but this is very tiresome and time-consuming. Is there a way to make this editor look and behave like the normal one? I have looked through the setting options and searched SDN without finding a way.
Hi,
This is just a setting you need to change to open the start, end, and characteristic routines in the old editor you are comfortable with. There is no need to go to SE38 and copy the program.
Go to SE38 -> Utilities -> Settings -> ABAP Editor -> Editor tab -> select the old ABAP editor.
To put a breakpoint specifically in transformations (start routine, end routine, ...), go to the transformation (RSA1) and display it.
Then go to Extras (menu) -> Generated Program, search for START_ROUTINE (it is a method now) and put a breakpoint in the desired place.
Then from the DTP enable all 4 breakpoints in the transformation (this option appears when you change it to debug mode / simulation), and you can debug the transformation.
The new editor is a good, handy one. It takes some time to get acquainted with it, but after that you may start liking it :).
Cheers,
-J -
Source Field in End Routine of DSO Transformation
Hi,
I made a transformation from source DSO to Target DSO.
There are 7 fields in the source and 6 fields in the target. All 6 fields are mapped one-to-one from source to target.
I need to write simple ABAP logic in the end routine based on the 7th source field, which is not mapped.
Please let me know the piece of ABAP code, or the steps, to get the value of the source field in the end routine.
Regards
Suresh
Hi Suresh,
Check here:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e73bfc19-0e01-0010-23bc-ef0ad53f2fab
http://help.sap.com/saphelp_nw70/helpdata/en/e3/732c42be6fde2ce10000000a1550b0/frameset.htm
Regards,
Vijay. -
Start & End Routines in BI 7 Transformations
Hi,
In Transformations from DSO1-->DSO2
In the start routine, for all entries in the source package, I read some fields from DSO3 and filled an internal table.
In the end routine I read the internal table and filled the result package fields.
In the mapping I haven't mapped anything to the fields I intended to fill using the routines.
When I executed the data load, those fields were not populated with any value.
But if I debug the transformation, the results are updated in all fields of the result package.
Do I need to make any setting or mapping for the fields I want to update using the end routine?
Thanks
HI,
For support pack 16 and above you get one more button beside End Routine (once the end routine is created).
This button controls the update behaviour of fields in end routines. You get two options once you select it; choosing one is mandatory.
The default setting is that only the fields with active rules are updated in the transformation. With this selection, fields populated in the end routine won't be updated in the data target if no active rule exists for them in the transformation.
Alternatively, you can define that all the fields should always be updated by selecting the 2nd radio button. As a result, fields filled in the end routine are not lost if there is no other active rule.
So in your case, if you are on SP 15 or lower, you will have to map the fields.
Go through this article it gives the above explanation along with screenshots.
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/30d35342-1fe3-2c10-70ba-ad0da83d23bd
Hope this helps.
Thanks,
Rahul -
Difference between Start Routine and End Routine in Transformations
Hi Friends,
I'm using BI 7.0. In the transformation step we have two options: START ROUTINE and END ROUTINE. What is the difference between the start routine and the end routine in transformations?
When do we go for a start routine, and when for an end routine?
Please clarify; points will be rewarded.
thanks
babu
Hi,
One real time scenario for End Routine.
We have a scenario where a DataSource field is mapped to three InfoObjects on the data target side. There are 2 key figures which need to get data after these InfoObjects are filled. The best place for this to happen is an end routine, where we loop through the result package using the values of the InfoObjects from the data target (a cube in this case).
Hope this helps,
HD -
Hi Experts,
We have a scenario wherein we need to write an END routine in the transformation between two cubes. While doing so we need to look up a DSO to fetch the financial document number.
Details:
We will be loading the invoice data from the 0LIV_DS01 ODS to the target cube ZLIV_DS01. There is a field called Invoice Clearing Date (0CLEAR_DATE) in the target cube which is not present in the 0LIV_DS01 ODS. We have the invoice clearing date in a second cube (ZCSOINV). We need to write an END routine in the transformation between ZCSOINV and the target cube.
We need to select the financial document number from the active table of the 0LIV_DS01 ODS and store it in an internal table.
Then compare this financial document number with the financial document number in the ZCSOINV cube. If there is a match, we need to select 0CLEAR_DATE from the ZCSOINV cube and assign it to the result package.
Please let me know what code needs to be written to achieve the above requirement in the end routine. Thanks
Regards,
Kavitha Jagannath
In order to read the InfoCube you can use this function module: RSDRI_INFOPROV_READ
This is the link to help you use that function:
/people/dinesh.lalchand/blog/2006/06/07/reading-infocube-data-in-updatetransfer-rules
SAP also have the demo about it:
You can open it in this program : RSDRI_INFOPROV_READ_DEMO
1. go to tcode : se38
2. type : RSDRI_INFOPROV_READ_DEMO then execute it.
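For the DSO half of the lookup described above, a rough outline could look like the following. The active-table name /BI0/ALIV_DS0100 and the field names AC_DOC_NO / CLEAR_DATE are assumptions based on standard BW naming, not confirmed by the thread, and <result_fields> is assumed to be the usual end-routine field symbol:

```abap
TYPES: BEGIN OF s_findoc,
         ac_doc_no TYPE /bi0/aliv_ds0100-ac_doc_no,   " assumed field
       END OF s_findoc.
DATA: it_findoc TYPE TABLE OF s_findoc.

* Read the financial document numbers from the DSO active table once.
SELECT ac_doc_no
  FROM /bi0/aliv_ds0100 INTO TABLE it_findoc
  FOR ALL ENTRIES IN result_package
  WHERE ac_doc_no = result_package-ac_doc_no.
SORT it_findoc BY ac_doc_no.
DELETE ADJACENT DUPLICATES FROM it_findoc COMPARING ac_doc_no.

* Keep 0CLEAR_DATE (coming from ZCSOINV) only where the document
* also exists in the DSO; otherwise clear it.
LOOP AT result_package ASSIGNING <result_fields>.
  READ TABLE it_findoc
       WITH KEY ac_doc_no = <result_fields>-ac_doc_no
       TRANSPORTING NO FIELDS BINARY SEARCH.
  IF sy-subrc <> 0.
    CLEAR <result_fields>-clear_date.
  ENDIF.
ENDLOOP.
```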
Edited by: Ananda Theerthan on Apr 12, 2010 6:20 PM -
BI end routine at transformation to populate info object by vlookup attribu
Hi ,
I am an APO consultant working on BI routines. I have the following situation and need some guidance with the ABAP code (routine).
We receive sales info from markets as flat files and load them into cubes. One of the fields in the file is External Sales Group: ZEXSGRP. This is an attribute of the Sales Group InfoObject ZSLSGRP.
The external sales group comes populated in the file when we upload it into the cube. I want to use an end routine to "vlookup" the InfoObject table of ZSLSGRP: find the matching external sales group among the attributes and use it to write the Sales Group (ZSLSGRP). For example, if ZSLSGRP is NAM and its ZEXSGRP attribute has the value N0000032, and the file delivers N0000032, then the end routine should look through the attributes of all ZSLSGRP values, match the value, and populate NAM.
Hope I am clear - can anyone help with this?
thanks
Varma
Replace your SELECT statement:
SELECT *
FROM /BIC/PZF31SALOFF
INTO CORRESPONDING FIELDS OF TABLE it_tab4.
Instead of selecting all the fields, pick only the fields which are required (a good performance improvement):
SELECT /BIC/PZF31SALOFF comp_code
FROM /BIC/PZF31SALOFF
INTO CORRESPONDING FIELDS OF TABLE it_tab4.
Remove the line below; it is not required:
MODIFY it_tab4 FROM wa_tab4. -
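For the original question (mapping the external sales group back to ZSLSGRP via its attribute), the lookup could be sketched against the attribute (P) table of ZSLSGRP. The table name /BIC/PZSLSGRP follows the standard naming convention for a custom InfoObject, and the field names are assumptions:

```abap
TYPES: BEGIN OF s_grp,
         /bic/zslsgrp TYPE /bic/pzslsgrp-/bic/zslsgrp,
         /bic/zexsgrp TYPE /bic/pzslsgrp-/bic/zexsgrp,   " assumed attribute field
       END OF s_grp.
DATA: it_grp TYPE TABLE OF s_grp,
      wa_grp TYPE s_grp.

* Load the attribute table once: external group -> sales group.
SELECT /bic/zslsgrp /bic/zexsgrp
  FROM /bic/pzslsgrp INTO TABLE it_grp
  WHERE objvers = 'A'.
SORT it_grp BY /bic/zexsgrp.

LOOP AT result_package ASSIGNING <result_fields>.
* "Vlookup": find the sales group whose attribute matches the
* external sales group delivered in the file.
  READ TABLE it_grp INTO wa_grp
       WITH KEY /bic/zexsgrp = <result_fields>-/bic/zexsgrp BINARY SEARCH.
  IF sy-subrc = 0.
    <result_fields>-/bic/zslsgrp = wa_grp-/bic/zslsgrp.
  ENDIF.
ENDLOOP.
```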
Error in END ROUTINE while activating the transformation.
Hi ALL,
While writing an END ROUTINE in BI, I got no error when saving the code.
But when I activate the transformation I get the following error:
"Syntax error in GP_ERR_RSTRAN_MASTER_TMPL, row 54 (-> long text)
Diagnosis: Component called CRM_OBJ_ID already exists"
I have used the above object in the transformation.
Please help me debug the error or shed some light on it.
Hi,
I guess you have created two objects with the same variable name. Please check.
regards,
rakesh -
[9i] poor performance with XMLType.transform
Hello,
I've got a problem with the Oracle function XMLType.transform.
When I try to apply an XSL to a big XML, it is very, very slow, and it even consumes all the CPU; other users are not able to work until the processing is complete...
So I wondered if my XSL was corrupted, but it does not seem so: when I apply it with Internet Explorer (by just double-clicking on the .xml), it is applied immediately. I've also tried with oraxsl, and the processing is quick and correct.
So I tried to use XDB, but it does not work; maybe I should upgrade to a newer version of XDB?
Please find the ZIP file here :
http://perso.modulonet.fr/~tleoutre/Oracle/samples.zip
Please find in this file :
1) The XSL file k_xsl.xsl
2) The "big" XML file big_xml.xml
Here you can try to apply the XSL on the XML with Internet Explorer : processing is very quick...
3) The batch file transform.bat
Here you can launch it, it calls oraxsl, and produces a result very quickly...
4) The SQL file test_xsl_with_xmltype_transform.sql.
You can try to launch it... First, it applies the same XSL to a little XML, and it's OK... Then it applies the XSL to the same big XML as in point 2), and it takes a lot of time and CPU...
5) The SQL file test_xsl_with_xdb_1.sql ...
On my server, it fails... So I tried to change the XSL in the next point :
6) The SQL file test_xsl_with_xdb_2.sql with a "cleaned" XSL...
And then, it fails with exactly the same problem as in :
TransformerConfigurationException (Unknown expression at EOF: *|/.)
Any help would be greatly appreciated!
Thank you!
P.S.: Sorry for my bad English, I'm a Frenchman :-)
This is what I see...
Your tests are measuring the wrong thing. You are measuring the time to create the sample documents, which is being done very inefficiently, as well as the time taken to do the transform.
Below is the correct way to get measurements for each task.
Here's what I see on a PIV 2.4Ghz with 10.2.0.2.0 and 2GB of RAM
Fragments SourceSize TargetSize createSource Parse Transform
50 28014 104550 00:00:00.04 00:00:00.04 00:00:00.12
100 55964 209100 00:00:00.03 00:00:00.05 00:00:00.23
500 279564 1045500 00:00:00.16 00:00:00.23 00:00:01.76
1000 559064 2091000 00:00:00.28 00:00:00.28 00:00:06.04
2000 1118064 4182000 00:00:00.34 00:00:00.42 00:00:24.43
5000 2795064 10455000 00:00:00.87 00:00:02.02 00:03:19.02
I think this clearly shows the pattern.
Of course what this testing really shows is that you've clearly missed the point of performing XSLT transformation inside the database.
The idea behind database-based transformation is to optimize XSLT processing by:
(1) not having to parse the XML and build a DOM tree before commencing the XSLT processing. In this example this is not possible, since the XML is being created from a CLOB-based XMLType, not a schema-based XMLType.
(2) leveraging the lazily loaded virtual DOM when doing a sparse transformation (a sparse transformation is one where large parts of the source document are not required to create the target document). Again, in this case the XSL requires you to walk all the nodes to generate the required output.
If it is necessary to process all of the nodes in the source document to generate the entire output, it probably makes more sense to use a midtier XSL engine.
Here's the code I used to generate the numbers in the above example.
BTW, in terms of BIG XML: we've successfully processed 12GB documents with schema-based storage, so nothing you have here comes anywhere near our definition of big.
Also, please remember that 9.2.0.1.0 is not a supported release for any XML DB related features.
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:44:59 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool createDocument.log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> create or replace directory &1 as '&3'
2 /
old 1: create or replace directory &1 as '&3'
new 1: create or replace directory SCOTT as '/home/mdrake/bugs/xslTest'
Directory created.
SQL> drop table source_data_table
2 /
Table dropped.
SQL> create table source_data_table
2 (
3 fragCount number,
4 xml_text clob,
5 xml xmlType,
6 result clob
7 )
8 /
Table created.
SQL> create or replace procedure createDocument(fragmentCount number)
2 as
3 fragmentText clob :=
4 '<AFL LIGNUM="1999">
5 <mat>20000001683</mat>
6 <name>DOE</name>
7 <firstname>JOHN</firstname>
8 <name2>JACK</name2>
9 <SEX>MALE</SEX>
10 <birthday>1970-05-06</birthday>
11 <salary>5236</salary>
12 <code1>5</code1>
13 <code2>6</code2>
14 <code3>7</code3>
15 <date>2006-05-06</date>
16 <dsp>8.665</dsp>
17 <dsp_2000>455.45</dsp_2000>
18 <darr04>5.3</darr04>
19 <darvap04>6</darvap04>
20 <rcrr>8</rcrr>
21 <rcrvap>9</rcrvap>
22 <rcrvav>10</rcrvav>
23 <rinet>11.231</rinet>
24 <rmrr>12</rmrr>
25 <rmrvap>14</rmrvap>
26 <ro>15</ro>
27 <rr>189</rr>
28 <date2>2004-05-09</date2>
29 </AFL>';
30
31 xmlText CLOB;
32
33 begin
34 dbms_lob.createTemporary(xmlText,true,DBMS_LOB.CALL);
35 dbms_lob.write(xmlText,5,1,'<PRE>');
36 for i in 1..fragmentCount loop
37 dbms_lob.append(xmlText,fragmentText);
38 end loop;
39 dbms_lob.append(xmlText,xmlType('<STA><COD>TER</COD><MSG>Opération Réussie</MSG></STA>').getClobVal());
40 dbms_lob.append(xmlText,'</PRE>');
41 insert into source_data_table (fragCount,xml_text) values (fragmentCount, xmlText);
42 commit;
43 dbms_lob.freeTemporary(xmlText);
44 end;
45 /
Procedure created.
SQL> show errors
No errors.
SQL> --
SQL> set timing on
SQL> --
SQL> call createDocument(50)
2 /
Call completed.
Elapsed: 00:00:00.04
SQL> call createDocument(100)
2 /
Call completed.
Elapsed: 00:00:00.03
SQL> call createDocument(500)
2 /
Call completed.
Elapsed: 00:00:00.16
SQL> call createDocument(1000)
2 /
Call completed.
Elapsed: 00:00:00.28
SQL> call createDocument(2000)
2 /
Call completed.
Elapsed: 00:00:00.34
SQL> call createDocument(5000)
2 /
Call completed.
Elapsed: 00:00:00.87
SQL> select fragCount dbms_lob.getLength(xmlText)
2 from sample_data_table
3 /
select fragCount dbms_lob.getLength(xmlText)
ERROR at line 1:
ORA-00923: FROM keyword not found where expected
Elapsed: 00:00:00.00
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:01 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
2 set xml = xmltype(xml_text)
3 where fragCount = &3
4 /
old 3: where fragCount = &3
new 3: where fragCount = 50
1 row updated.
Elapsed: 00:00:00.04
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.01
SQL> update source_data_table
2 set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
3 where fragCount = &3
4 /
old 2: set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new 2: set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old 3: where fragCount = &3
new 3: where fragCount = 50
1 row updated.
Elapsed: 00:00:00.12
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.01
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
2 from source_data_table
3 /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
50 28014 104550
100 55964
500 279564
1000 559064
2000 1118064
5000 2795064
6 rows selected.
Elapsed: 00:00:00.01
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:02 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
2 set xml = xmltype(xml_text)
3 where fragCount = &3
4 /
old 3: where fragCount = &3
new 3: where fragCount = 100
1 row updated.
Elapsed: 00:00:00.05
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.01
SQL> update source_data_table
2 set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
3 where fragCount = &3
4 /
old 2: set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new 2: set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old 3: where fragCount = &3
new 3: where fragCount = 100
1 row updated.
Elapsed: 00:00:00.23
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.03
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
2 from source_data_table
3 /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
50 28014 104550
100 55964 209100
500 279564
1000 559064
2000 1118064
5000 2795064
6 rows selected.
Elapsed: 00:00:00.01
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:02 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
2 set xml = xmltype(xml_text)
3 where fragCount = &3
4 /
old 3: where fragCount = &3
new 3: where fragCount = 500
1 row updated.
Elapsed: 00:00:00.12
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.03
SQL> update source_data_table
2 set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
3 where fragCount = &3
4 /
old 2: set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new 2: set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old 3: where fragCount = &3
new 3: where fragCount = 500
1 row updated.
Elapsed: 00:00:01.76
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.00
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
2 from source_data_table
3 /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
50 28014 104550
100 55964 209100
500 279564 1045500
1000 559064
2000 1118064
5000 2795064
6 rows selected.
Elapsed: 00:00:00.00
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:04 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
2 set xml = xmltype(xml_text)
3 where fragCount = &3
4 /
old 3: where fragCount = &3
new 3: where fragCount = 1000
1 row updated.
Elapsed: 00:00:00.28
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.01
SQL> update source_data_table
2 set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
3 where fragCount = &3
4 /
old 2: set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new 2: set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old 3: where fragCount = &3
new 3: where fragCount = 1000
1 row updated.
Elapsed: 00:00:06.04
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.00
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
2 from source_data_table
3 /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
50 28014 104550
100 55964 209100
500 279564 1045500
1000 559064 2091000
2000 1118064
5000 2795064
6 rows selected.
Elapsed: 00:00:00.00
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:11 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
2 set xml = xmltype(xml_text)
3 where fragCount = &3
4 /
old 3: where fragCount = &3
new 3: where fragCount = 2000
1 row updated.
Elapsed: 00:00:00.42
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.02
SQL> update source_data_table
2 set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
3 where fragCount = &3
4 /
old 2: set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new 2: set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old 3: where fragCount = &3
new 3: where fragCount = 2000
1 row updated.
Elapsed: 00:00:24.43
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.03
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
2 from source_data_table
3 /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
50 28014 104550
100 55964 209100
500 279564 1045500
1000 559064 2091000
2000 1118064 4182000
5000 2795064
6 rows selected.
Elapsed: 00:00:00.00
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL*Plus: Release 10.2.0.2.0 - Production on Fri Feb 10 07:45:36 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> spool testcase_&3..log
SQL> --
SQL> connect &1/&2
Connected.
SQL> --
SQL> set timing on
SQL> --
SQL> update source_data_table
2 set xml = xmltype(xml_text)
3 where fragCount = &3
4 /
old 3: where fragCount = &3
new 3: where fragCount = 5000
1 row updated.
Elapsed: 00:00:02.02
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.05
SQL> update source_data_table
2 set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
3 where fragCount = &3
4 /
old 2: set result = xmltransform(xml,xmltype(bfilename(USER,'&4'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
new 2: set result = xmltransform(xml,xmltype(bfilename(USER,'k_xsl.xsl'),nls_charset_id('WE8ISO8859P1'))).getClobVal()
old 3: where fragCount = &3
new 3: where fragCount = 5000
1 row updated.
Elapsed: 00:03:19.02
SQL> commit
2 /
Commit complete.
Elapsed: 00:00:00.01
SQL> select fragCount, dbms_lob.getLength(xml_text),dbms_lob.getLength(result)
2 from source_data_table
3 /
FRAGCOUNT DBMS_LOB.GETLENGTH(XML_TEXT) DBMS_LOB.GETLENGTH(RESULT)
50 28014 104550
100 55964 209100
500 279564 1045500
1000 559064 2091000
2000 1118064 4182000
5000 2795064 10455000
6 rows selected.
Elapsed: 00:00:00.04
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
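To make the trend across the four runs above easier to see, here is a small Python sketch (not part of the original test case) that compares the logged elapsed times of the xmltransform UPDATE against the fragment counts:

```python
# Elapsed seconds for the xmltransform UPDATE at each fragCount,
# taken from the SQL*Plus logs above (00:03:19.02 = 199.02 s).
timings = {500: 1.76, 1000: 6.04, 2000: 24.43, 5000: 199.02}

counts = sorted(timings)
for a, b in zip(counts, counts[1:]):
    size_ratio = b / a
    time_ratio = timings[b] / timings[a]
    print(f"{a} -> {b}: size x{size_ratio:.1f}, time x{time_ratio:.1f}")
```

Doubling the fragment count multiplies the transform time by roughly 3.5-4x, and the 2000 to 5000 step is worse still, so the transform cost grows faster than linearly (roughly quadratically or worse) with document size.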
Performance: reading huge amount of master data in end routine
In our 7.0 system, each day a full load runs from DSO X to DSO Y in which from six characteristics from DSO X master data is read to about 15 fields in DSO Y contains about 2mln. records, which are all transferred each day. The master data tables all contain between 2mln. and 4mln. records. Before this load starts, DSO Y is emptied. DSO Y is write optimized.
At first, we designed this with the standard "master data reads", but this resulted in load times of 4 hours, because all master data is read with single lookups. We redesigned this to fill all master data attributes in the end routine, after filling internal tables with the master data values corresponding to the data package:
* Read 0UCPREMISE into temp table
SELECT ucpremise ucpremisty ucdele_ind
  FROM /BI0/PUCPREMISE
  INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise
  FOR ALL ENTRIES IN RESULT_PACKAGE
  WHERE ucpremise EQ RESULT_PACKAGE-ucpremise.
* Sort so that the BINARY SEARCH in the loop below is valid
SORT lt_0ucpremise BY ucpremise.
And when we loop over the data package, we write something like:
LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
  READ TABLE lt_0ucpremise INTO ls_0ucpremise
    WITH KEY ucpremise = <fs_rp>-ucpremise
    BINARY SEARCH.
  IF sy-subrc EQ 0.
    <fs_rp>-ucpremisty = ls_0ucpremise-ucpremisty.
    <fs_rp>-ucdele_ind = ls_0ucpremise-ucdele_ind.
  ENDIF.
* all other MD reads
ENDLOOP.
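The prefetch-then-lookup pattern above is language-independent: one set-based fetch per package, then an in-memory lookup per row, instead of one database round trip per row. As a rough illustration only (hypothetical table and field names mirroring the ABAP, with SQLite standing in for the master data table), the same idea in Python:

```python
import sqlite3

# Stand-in for the /BI0/PUCPREMISE master data table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE premise (ucpremise TEXT PRIMARY KEY,"
             " ucpremisty TEXT, ucdele_ind TEXT)")
conn.executemany("INSERT INTO premise VALUES (?, ?, ?)",
                 [("P1", "TYPE_A", ""), ("P2", "TYPE_B", "X")])

# Stand-in for RESULT_PACKAGE.
result_package = [{"ucpremise": "P1"}, {"ucpremise": "P2"},
                  {"ucpremise": "P1"}]

# One SELECT for all distinct keys in the package (the FOR ALL ENTRIES step).
keys = sorted({row["ucpremise"] for row in result_package})
placeholders = ",".join("?" * len(keys))
lookup = {r[0]: r[1:] for r in conn.execute(
    f"SELECT ucpremise, ucpremisty, ucdele_ind FROM premise"
    f" WHERE ucpremise IN ({placeholders})", keys)}

# The READ TABLE step, as a hash lookup per row.
for row in result_package:
    attrs = lookup.get(row["ucpremise"])
    if attrs is not None:
        row["ucpremisty"], row["ucdele_ind"] = attrs
```

Note that deduplicating the keys before the fetch (the `sorted(set(...))` step) matters when the package contains many rows per master data key.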
So the above statement is repeated for all the master data we need to read. This method is considerably faster (1.5 hrs), but we want to make it faster still. We noticed that reading the master data into the internal tables still takes a long time, and this has to be repeated for each data package. We want to change this. We have now tried a similar method, but now load all master data into internal tables, without filtering on the data package, and we do this only once.
* Read 0UCPREMISE into temp table
SELECT ucpremise ucpremisty ucdele_ind
  FROM /BI0/PUCPREMISE
  INTO CORRESPONDING FIELDS OF TABLE lt_0ucpremise.
So when the first data package starts, it fills all master data values, 95% of which we would need anyway. So that subsequent data packages can use the same tables without filling them again, we placed the definition of the internal tables in the global part of the end routine. In the global part we also write:
DATA: lv_data_loaded TYPE C LENGTH 1.
And in the method we write:
IF lv_data_loaded IS INITIAL.
  lv_data_loaded = 'X'.       "loading in progress
* load all internal tables
  lv_data_loaded = 'Y'.       "loading done
ENDIF.

WHILE lv_data_loaded NE 'Y'.
  CALL FUNCTION 'ENQUEUE_SLEEP'
    EXPORTING
      seconds = 1.
ENDWHILE.

LOOP AT RESULT_PACKAGE ASSIGNING <fs_rp>.
* assign all data
ENDLOOP.
This makes sure that another data package that already started "sleeps" until the first data package has finished filling the internal tables.
Well this all seems to work: it now takes 10 minutes to load everything to DSO Y. But I'm wondering if I'm missing anything. The system seems to handle loading all these records into internal tables fine, but any improvements or critical remarks are very welcome.

This is a great question, and you've clearly done a good job of investigating this, but there are some additional things you should look at and perhaps a few things you have missed.
Zephania Wilder wrote:
At first, we designed this with the standard "master data reads", but this resulted in load times of 4 hours, because all master data is read with single lookups.
This is not accurate. After SP14, BW does a prefetch and buffers the master data values used in the lookup. Note [1092539|https://service.sap.com/sap/support/notes/1092539] discusses this in detail. The important point, and most likely the reason you are seeing individual master data lookups on the DB, is that you must manually maintain the MD_LOOKUP_MAX_BUFFER_SIZE parameter so that it is larger than the number of lines of master data (from all characteristics used in lookups) that will be read. If you are seeing one select statement per line, then something is going wrong.
You might want to go back and test the master data lookups with this setting and see how fast they go. If memory serves, the BW master data lookup uses an approach very similar to your second example (1.5 hrs), though I think it first loops through the source package and extracts the list of required master data keys, which is probably faster than your "FOR ALL ENTRIES IN RESULT_PACKAGE" statement if RESULT_PACKAGE contains very many duplicate keys.
I'm guessing you'll get down to at least the 1.5 hrs you saw in your second example, but it is possible that it will get down quite a bit further.
Zephania Wilder wrote:
This makes sure that another data package that already started, "sleeps" until the first data package is done with filling the internal tables.
This sleeping approach is not necessary, as only one data package will be running at a time in any given process. I believe the "global" internal table is not shared between parallel processes, so if your DTP is running with three parallel processes, this table will just get filled three times. Within a process, all data packages are processed serially, so all you need to do is check whether or not it has already been filled. Or are you doing something additional to export the filled lookup table into a shared memory location?
Actually, you have your global data defined with the statement "DATA: lv_data_loaded TYPE C LENGTH 1.". I'm not completely sure, but I don't think that this data will persist from one data package to the next. Data defined in the global section using "DATA" is global to the package start, end, and field routines, but I believe it is discarded between packages. I think you need to use "CLASS-DATA: lv_data_loaded TYPE C LENGTH 1." to get the variables to persist between packages. Have you checked in the debugger that you are really only filling the table once per request and not once per package in your current setup? << This is incorrect - see next posting for correction.
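The DATA vs CLASS-DATA distinction can be illustrated with a loose Python analogy (hypothetical class; ABAP's actual lifetimes differ in detail): instance state is rebuilt each time, while class-level state persists across instances.

```python
class EndRoutine:
    # Like CLASS-DATA: one copy shared by all instances of the routine.
    shared_cache = {}

    def __init__(self):
        # Like instance-level data: rebuilt for each new instance.
        local = {}
        self.local_cache = local


a = EndRoutine()
a.shared_cache["loaded"] = True   # mutates the shared, class-level dict
a.local_cache["loaded"] = True    # visible only on this instance

b = EndRoutine()                  # a fresh instance, like a new package's run
```

Here `b` still sees `shared_cache["loaded"]`, but starts with an empty `local_cache`.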
Otherwise the third approach is fine, as long as you are comfortable managing your process memory allocations and you know the maximum size your master data tables can reach. On the other hand, if your master data tables grow regularly, you will eventually run out of memory and start seeing dumps.
Hopefully that helps out a little bit. This was a great question. If I'm off-base with my assumptions above and you can provide more information, I would be really interested in looking at it further.
Edited by: Ethan Jewett on Feb 13, 2011 1:47 PM