FM LDB_PROCESS slow extraction
Hi all,
I'm using FM LDB_PROCESS to access the logical database FTI_TR_PERIODS and extract records from it. This is how I call the FM:
CALL FUNCTION 'LDB_PROCESS'
EXPORTING
ldbname = 'FTI_TR_PERIODS'
variant = ' '
* EXPRESSIONS = TEXPR
field_selection = i_fsel
TABLES
callback = i_callback
selections = i_sel
EXCEPTIONS
ldb_not_reentrant = 1
ldb_incorrect = 2
ldb_already_running = 3
ldb_error = 4
ldb_selections_error = 5
ldb_selections_not_accepted = 6
variant_not_existent = 7
variant_obsolete = 8
variant_error = 9
free_selections_error = 10
callback_no_event = 11
callback_node_duplicate = 12
OTHERS = 13.
However, I found that the extraction is very slow (approximately 15,000 records to be extracted) when executing my report. After reading the FM documentation, I filled the field_selection parameter to improve performance, but it didn't help much.
Can anyone suggest other ways to improve the extraction process when accessing a logical database?
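For reference, here is a minimal sketch of how the CALLBACK and SELECTIONS tables for LDB_PROCESS are typically filled; the node name, select-option name, and values below are assumptions for illustration, so check the actual structure of FTI_TR_PERIODS in SE36 before reusing them. Narrow SELECTIONS ranges are usually the biggest performance lever, since the LDB then reads fewer records in the first place:

```abap
* Hedged sketch: parameter tables for LDB_PROCESS.
* LDBCB and RSPARAMS are the standard structures for the
* CALLBACK and SELECTIONS tables; the node and select-option
* names below are assumptions, not taken from FTI_TR_PERIODS.
DATA: i_callback TYPE TABLE OF ldbcb,
      wa_cb      TYPE ldbcb,
      i_sel      TYPE TABLE OF rsparams,
      wa_sel     TYPE rsparams.

* Register one callback line per LDB node whose GET event we need
wa_cb-ldbnode = 'FTI_TR_PERIODS'.   " node of the logical database
wa_cb-get     = 'X'.                " react to GET <node>
wa_cb-cb_prog = sy-repid.           " program containing the FORM
wa_cb-cb_form = 'CB_PERIODS'.       " FORM called once per record
APPEND wa_cb TO i_callback.

* Restrict the LDB's select-options as tightly as possible
wa_sel-selname = 'BUKRS'.           " assumed select-option name
wa_sel-kind    = 'S'.
wa_sel-sign    = 'I'.
wa_sel-option  = 'EQ'.
wa_sel-low     = '1000'.            " example value
APPEND wa_sel TO i_sel.
```

The callback FORM then receives each record, e.g. `FORM cb_periods USING name wa mode selected.`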
Best regards,
Patrick
Take a look at the standard report RVKUSTA1 and you will see how to pass the selection screen.
~Suresh
Similar Messages
-
Slow extraction in big XML-Files with PL/SQL
Hello,
I have a performance problem with the extraction of attributes from big XML files. I tested with a size of ~30 MB.
The XML file is the response of a web service. This response includes some metadata of a document and the document itself; the document is embedded inline with Base64 encoding. Here is an example of an XML file I want to analyse:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<ns2:GetDocumentByIDResponse xmlns:ns2="***">
<ArchivedDocument>
<ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***">
<Metadata archiveDate="2013-08-01+02:00" documentID="123">
<Descriptor type="Integer" name="fachlicheId">
<Value>123</Value>
</Descriptor>
<Descriptor type="String" name="user">
<Value>***</Value>
</Descriptor>
<InternalDescriptor type="Date" ID="DocumentDate">
<Value>2013-08-01+02:00</Value>
</InternalDescriptor>
<!-- Here some more InternalDescriptor Nodes -->
</Metadata>
<RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream">
<DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/>
</RepresentationDescription>
</ArchivedDocumentDescription>
<DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0">
<Data fileName="20mb.test">
<BinaryData>
<!-- Here is the BASE64 converted document -->
</BinaryData>
</Data>
</DocumentPart>
</ArchivedDocument>
</ns2:GetDocumentByIDResponse>
</soap:Body>
</soap:Envelope>
Now I want to extract the filename and the Base64-encoded document from this XML response.
For the extraction of the filename I use the following command:
v_filename := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
For the extraction of the binary data I use the following command:
v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
My problem is the performance of this extraction. Here is a summary of the start and end times for the commands:
Start Time                    | End Time                      | Difference      | Command
10.09.13 - 15:46:11,402668000 | 10.09.13 - 15:47:21,407895000 | 00:01:10,005227 | v_filename_bcm := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
10.09.13 - 15:47:21,407895000 | 10.09.13 - 15:47:22,336786000 | 00:00:00,928891 | v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
As you can see, the extraction of the filename is slower than the document extraction: for the filename alone I need ~01:10 minutes.
I wondered about this and started some tests.
I tried to use an exact, non-dynamic filename, so I had these commands:
v_filename := '20mb_1.test';
v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
Under these conditions the time for the document extraction soared, as you can see in the following table:
Start Time                    | End Time                      | Difference      | Command
10.09.13 - 16:02:33,212035000 | 10.09.13 - 16:02:33,212542000 | 00:00:00,000507 | v_filename_bcm := '20mb_1.test';
10.09.13 - 16:02:33,212542000 | 10.09.13 - 16:03:40,342396000 | 00:01:07,129854 | v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
So I'm looking for a faster extraction from the XML file. Do you have any ideas? If you need more information, please ask.
Thank you,
Matthias
PS: I use Oracle 11.2.0.2.0.
Although using an XML schema is good advice for an XML-centric application, I think it's a little overkill in this situation.
Here are two approaches you can test :
Using the DOM interface over your XMLType variable, for example :
DECLARE
v_xml xmltype := xmltype('<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<ns2:GetDocumentByIDResponse xmlns:ns2="***">
<ArchivedDocument>
<ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***">
<Metadata archiveDate="2013-08-01+02:00" documentID="123">
<Descriptor type="Integer" name="fachlicheId">
<Value>123</Value>
</Descriptor>
<Descriptor type="String" name="user">
<Value>***</Value>
</Descriptor>
<InternalDescriptor type="Date" ID="DocumentDate">
<Value>2013-08-01+02:00</Value>
</InternalDescriptor>
<!-- Here some more InternalDescriptor Nodes -->
</Metadata>
<RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream">
<DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/>
</RepresentationDescription>
</ArchivedDocumentDescription>
<DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0">
<Data fileName="20mb.test">
<BinaryData>
ABC123
</BinaryData>
</Data>
</DocumentPart>
</ArchivedDocument>
</ns2:GetDocumentByIDResponse>
</soap:Body>
</soap:Envelope>');
domDoc dbms_xmldom.DOMDocument;
docNode dbms_xmldom.DOMNode;
node dbms_xmldom.DOMNode;
nsmap varchar2(2000) := 'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns2="***"';
xpath_pfx varchar2(2000) := '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/';
istream sys.utl_characterinputstream;
buf varchar2(32767);
numRead pls_integer := 1;
filename varchar2(30);
base64clob clob;
BEGIN
domDoc := dbms_xmldom.newDOMDocument(v_xml);
docNode := dbms_xmldom.makeNode(domdoc);
filename := dbms_xslprocessor.valueOf(
docNode
, xpath_pfx || 'ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName'
, nsmap
);
node := dbms_xslprocessor.selectSingleNode(
docNode
, xpath_pfx || 'ArchivedDocument/DocumentPart/Data/BinaryData/text()'
, nsmap
);
--create an input stream to read the node content :
istream := dbms_xmldom.getNodeValueAsCharacterStream(node);
dbms_lob.createtemporary(base64clob, false);
-- read the content in 32k chunk and append data to the CLOB :
loop
istream.read(buf, numRead);
exit when numRead = 0;
dbms_lob.writeappend(base64clob, numRead, buf);
end loop;
-- free resources :
istream.close();
dbms_xmldom.freeDocument(domDoc);
END;
Using a temporary XMLType storage (binary XML) :
create table tmp_xml of xmltype
xmltype store as securefile binary xml;
insert into tmp_xml values( v_xml );
select x.*
from tmp_xml t
   , xmltable(
       xmlnamespaces(
         'http://schemas.xmlsoap.org/soap/envelope/' as "soap"
       , '***' as "ns2"
       )
     , '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/ArchivedDocument/DocumentPart/Data'
       passing t.object_value
       columns filename   varchar2(30) path '@fileName'
             , base64clob clob         path 'BinaryData'
     ) x; -
Slow extraction from Source System
Experts,
I'm on the Basis team and we're trying to figure out what we can do to increase the performance of a BI extraction from an ECC source system.
Our BI team is doing a full extraction of DataSource 0UC_SALES_STATS_02, which has about 24 million records. This has been running for about 2.5 days and has only extracted 12.5 million records according to RSMO.
I'm no expert on BI performance, but I'm trying to get there. One thing we noticed is that we have over 1,000 data packages, each with about 11K records.
RSBIW "control params for data transfer" in the BI system is set as this:
LOGSYS -- Max KB. -- MAX Lines -- Freq -- MAx PROC -- Target Sys -- Max DPs
OURBWSYS -- 20000 -- 0 -- 10 -- 3 -- blank -- blank
OURECCSYS -- 20000 -- 0 -- 10 -- 3 -- blank -- blank
Also, we only see one background process running on the ECC system, which is the Z-program that was written to perform the extract.
We also checked IMG -> SAP NetWeaver -> BI -> Links to Other Source Systems -> Maintain Control Parameters for Data Transfer:
Freq/IDOC = 10
pkg size = 50,000
partition size = 1,000,000
We are on NWEHP1 for our BI system
We are on ECC6.0 EHP3sp5 for our ECC system
We use Oracle 10.2.0.4.0 with it fully tuned
We also think we have all the correct tuning parameters in our ECC/BI systems. There are a few tweaks we could make but I don't see any glaring problems. All our memory parameters seem to be in line with what is recommended.
We do not see any kind of Memory/CPU bottlenecks.
Do you think the large number of data packages with few records per package is a problem?
Any tips to increase performance would be helpful, I'm still looking.
I saw a wiki thread that mentions looking at tcode RSA3 in the ECC system to monitor long extracts, but I have not figured out what needs to go into the fields to run it correctly.
Any help you can provide would be great.
Thanks,
Nick
Hi Nick,
This problem is due to the huge volume of data and the job running for such a long time; normally we kill a job if it exceeds 8 hours while extracting data.
Please suggest that your BI team pull the data in small volumes by providing selections in the InfoPackage. This drastically reduces the extraction time for huge data volumes and will not consume all the resources in the system.
Please pass the links below to your BI team, as this is completely related to a design issue.
http://help.sap.com/saphelp_nw70/helpdata/en/39/b520d2ba024cfcb542ef455c76b250/frameset.htm
The DSO type should be write-optimized to increase load performance.
http://help.sap.com/saphelp_nw70/helpdata/en/34/1c74415cb1137de10000000a155106/frameset.htm
2. And at your end, look at the tablespaces of the PSA tables; they may be exceeding their capacity. Make the extension automatic instead of manually assigning space at regular intervals.
Hope this helps.
Regards,
Reddy
Edited by: Reddybl on Apr 2, 2010 12:23 AM -
Consignment PO condition tables
Hi, I need to connect the pricing conditions in tables KONV/KONH/KONP to the purchasing info record number for a data extract that will give us the pricing history. I know I can use VAKEY, but it's not an indexed field in the condition tables, so extraction from a big table will be slow. Is there another table/field I can use to connect the two sets of data? We're running version 4.6C. Thanks in advance for your help.
> Probably you can restrict the selection by field
> KONV-KAPPL = 'M' if you are extracting
> for a purchasing report.
>
> Regards
> Mani
Thanks Mani,
If I understand you, restricting to purchase order conditions will reduce the amount of data extracted, and that will surely help. But we're using conditions for other types of POs besides consignment. I'd still like to restrict further, ideally to just the conditions created for our consignment purchasing info records. Is there a table or combination of tables that directly links the PIR with the condition? -
Queries to extract slow moving stock
Hi All,
I have the following query to extract the item list. May I know how to modify this query to filter slow-moving items where the purchase date is more than 90 days before today's date? For example, if today is 01/05/15, the report should list only items where the purchase date is older than 01/02/15.
SELECT T0.[ItemCode], T1.[ItmsGrpNam], T0.[ItemName], T0.[LastPurDat], T0.[OnHand], T0.[BuyUnitMsr] as 'uom', T0.[LastPurPrc], T0.[OnHand]*T0.[LastPurPrc] as 'Est Stock Value'
FROM [dbo].[OITM] T0 INNER JOIN [dbo].[OITB] T1 ON T0.[ItmsGrpCod] = T1.[ItmsGrpCod]
WHERE T0.[OnHand] >0 AND T1.ItmsGrpNam <> 'SP001-Spare Parts'
Thank you,
Annie
Hi,
Try this query:
SELECT T0.[ItemCode], T1.[ItmsGrpNam], T0.[ItemName], T0.[LastPurDat], T0.[OnHand], T0.[BuyUnitMsr] as 'uom', T0.[LastPurPrc], T0.[OnHand]*T0.[LastPurPrc] as 'Est Stock Value'
FROM [dbo].[OITM] T0 INNER JOIN [dbo].[OITB] T1 ON T0.[ItmsGrpCod] = T1.[ItmsGrpCod]
WHERE T0.[OnHand] >0 AND T1.ItmsGrpNam <> 'SP001-Spare Parts' and datediff(day, T0.[LastPurDat],getdate()) > 90
Thanks. -
Hello All,
I am trying to extract data from init 2LIS_11_VAITM, but the extraction is very slow. The total data in the init setup table is only 200K records.
One thing I noticed is that we changed the extractor to increase the number of fields to 225. All these fields were available in the BI Content communication extract structure of 2LIS_11_VAITM in ECC.
Is it because of the large number of fields being extracted from ECC? 200K records are being extracted; it has been 38 hrs and so far only 140K records are extracted.
Could you please suggest how I can improve extraction performance?
We will be moving this to production. I am afraid that the much larger number of records in production will take forever to extract.
Thanks
Shailini
Yes, you are right: IO = input and output, generally referring to the data loading capability.
Basis will help monitor the data loading capability.
They will help check the sizing document prepared before go-live to verify the designed capacity; the Basis team needs to run some data-loading tests before go-live.
And they will help check the logs to find problems during data loading.
In the BI 7 statistics you can find some information about that load. Discuss with the Basis guys; they can help analyse the problem even without the BI statistics.
The system on hand does not load from 2LIS_12_VCITM, but here is some information for your reference:
1. Production Server: 26K records, load to 2LIS_03_BF, takes 58s in all
2. Testing Server: 1.5 Million records, full load to 0FI_GL_10, Runtime 34m 12s -
Extracting .rar files is slow, too slow
hi
I have a problem with extracting files; it takes too much time.
Just for example: extracting 1 GB took 3.5 minutes on Linux and 40 seconds on Windows (on the same hardware).
* i'm using xarchiver 0.4.6
tydids22 wrote:
hi
i have problem with extracting files. it takes too much time.
just for ex' extracting 1G on linux took 3.5 minutes and 40 sec' on windows (on the same hardware)
ty
I've noticed the same problem. Pacman grinds the system (Firefox) to a halt too.
IIRC, I don't think rar'ing was always this slow. I guess it might have something to do with SATA, NCQ and I/O schedulers.
unrar x 701MB.rar temp2/ 4.69s user 4.51s system 4% cpu 3:12.77 total -
Goldengate Extracts reads slow during Table Data Archiving and Index Rebuilding Operations.
We have configured OGG on a near-DR server. The extracts are configured to work in ALO Mode.
During the day, the extracts work as expected and are in sync. But during any daily maintenance task, the extracts start lagging and read the same archives very slowly.
This usually happens during Table Data Archiving (DELETE from prod tables, INSERT into history tables) and during Index Rebuilding on those tables.
Points to be noted:
1) The Tables on which Archiving is done and whose Indexes are rebuilt are not captured by GoldenGate Extract.
2) The extracts are configured to capture DML operations. Only INSERT and UPDATE operations are captured; DELETEs are ignored by the extracts. DDL extraction is not configured either.
3) There is no connection to PROD or DR Database
4) System functions normally all the time, but just during table data archiving and index rebuild it starts lagging.
Q 1. As mentioned above, even though the tables are not part of the capture, the extract lags. What are the possible reasons for the lag?
Q 2. I understand that an index rebuild is a DDL operation, yet it still induces lag in the system. How?
Q 3. We have been trying to find a way to overcome the lag, which ideally shouldn't have arisen. Is there any extract parameter or workaround for this situation?
Hi Nick.W,
The amount of redo generated is huge: approximately 200-250 GB in 45-60 minutes.
I agree that the extract has to parse the extra object IDs. During the day, there is a redo switch every 2-3 minutes. The source is a 3-node RAC, so approximately 80-90 archives are generated in an hour.
The reason for mentioning this was that while reading these archives the extract would also be parsing extra object IDs, as we are capturing data for only 3 tables. The effect of parsing extra object IDs should have been seen during the day as well: the archive size is the same, the amount of data is the same, and the number of records to be scanned is the same.
The extract slows down and reads at half speed. If it normally takes 45-50 secs to read an archive log during normal daily operation, it takes approx. 90-100 secs to read the archives from the mentioned activities.
Regarding the 3rd point:
a. The extract is a classic extract; the archived logs are on a local file system. No ASM, no SAN/NAS.
b. We have added the "TRANLOGOPTIONS BUFSIZE" parameter to our extract. We'll update as soon as we see any kind of improvement. -
Slow DTP extraction running on top of a cube with BIA
Hi Experts,
I have a DTP running on top of a cube without aggregates, but I have BIA built for the cube. It's taking 2.5 hours to extract 2 million records, which is quite slow. For the initial hour of the extraction it doesn't do anything; I have checked and confirmed it's not hitting BIA. The "Use Aggregates" option doesn't really help as there are no aggregates (please don't suggest building aggregates). I tried changing the "Settings for Batch Manager" option from 3 to 1, but it did not help. Is there any way I can force the extraction to use BIA?
Thanks for looking
Shrirama
Hi Tibolo,
Thanks for the reply. There is no semantic grouping. Delta cannot be used, as the system I'm sending the information to is not capable of handling delta.
Thanks
Shrirama -
Slow motion, subtitles and extracting freeze frames
hi friends,
I have a few queries, if someone can help:
1) I want to play a single clip in a project in slow motion. How can it be done?
2) Can subtitles for the voice be added? There are only three title styles, and only the possibility to have one per clip.
3) Can a freeze frame be exported as a photo to the Camera Roll or Photos? Or can a still frame in a video be extracted as a photo and sent to the Camera Roll?
thanx,
1. You will need to buy a slow-motion video app.
There are a few in the app store that slow down video that is already shot.
Export the slowed video to the camera roll, and use it in iMovie.
2. Subtitling is not an option in iMovie.
You can add 'middle' titles, but using this to make subtitles would be slow, and it adds transitions, which is annoying.
3. There are apps that can make stills from video.
Or you can do a screen grab on the iPad: hold the power and Home buttons. -
Data Extraction to Cube is very slow
Hi Experts,
I am working on an SAP APO project. I have created a cube to extract data (a backup of a planning area) from liveCache, but the data load is taking a lot of time. We analysed the job log and found that storing the data in the PSA and in the data target takes less than 1 minute, but reading the data from liveCache takes a long time; it is also creating and deleting the views used for reading from liveCache. I have created the cube in the BI instance present in the APO system itself, so I believe it should be faster. Could anyone help me in this regard?
Thanks,
Vijay.
If you have 2 separate systems, APO and BW:
For performance, the preference should be to extract the data from liveCache into the APO cube, and then read the data in BW from the APO cube into the BW cube.
SAP recommends 2 different instances, especially for APO DP.
Hope it helps; otherwise you can find a lot of documentation on the Service Marketplace for this subject.
Good luck. -
OGG Version 11.2.1.0.6_03
OS - RHEL 5.8
We have encountered an issue where the extract lag stops moving but the checkpoint keeps increasing.
Example:
EXTRACT RUNNING ETEST 29:02:57 35:52:01
--send extract status --> Don't know why the current read thread # and sequence # are set to 0
EXTRACT ETEST (PID 111222)
Current status: Recovery complete: Processing data
Current read position:
Redo thread #: 0
Sequence #: 0
RBA: 0
Timestamp: 2013-08-11 00:48:12.000000
SCN: 474.2539637828
Current write position:
Sequence #: 399
RBA: 1182
Timestamp: 2013-08-12 05:51:09.804431
Extract Trail: /tmp/test/lt
-- send etest showtrans ---> lists several open transactions, for example:
XID: 155.9.13952
Items: 102014
Extract: E_KRGA_P
Redo Thread: 1
Start Time: 2013-08-11:00:29:23
SCN: 474.2539241649 (2038353739953)
Redo Seq: 48782
Redo RBA: 1401277456
Status: Running
--- gv$transaction does not show any running transactions.
-- info etest showch - shows
Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Timestamp: 2013-08-11 00:29:23.000000 --> Not moving at all
SCN: 474.2539241470 (2038353739774)
Current Checkpoint (position of last record read in the data source):
Timestamp: 2013-08-11 00:48:12.000000 --->Not moving at all
SCN: 474.2539637828 (2038354136132)
BR Previous Recovery Checkpoint:
Timestamp: 2013-08-11 09:59:12.643333
SCN: Not available
BR Begin Recovery Checkpoint:
Timestamp: 2013-08-11 00:48:11.000000
SCN: 474.2539614038 (2038354112342)
BR End Recovery Checkpoint:
Timestamp: 2013-08-11 00:48:11.000000
SCN: 474.2539614038 (2038354112342)
Any help would be appreciated. -
Slow data extraction From R/3
Hi,
We are extracting data using some standard DataSources like
0CO_OM_CCA_9
0CO_OM_WBS_6
Both are init- and delta-enabled, but the extraction of 145 or 200 records takes 3 or 4 hours, and I am not able to understand where the time is going. When I go to tcode SM37 in the source system, it shows the following in the job log for more than 2 hours:
Job started
Step 001 started (program SBIE0001, variant &0000000002333, user ID ALEREMOTE)
DATASOURCE = 0HR_PT_1
Current Values for Selected Profile Parameters *
abap/heap_area_nondia......... 2000000000 *
abap/heap_area_total.......... 2000000000 *
abap/heaplimit................ 40894464 *
zcsa/installed_languages...... ED *
zcsa/system_language.......... E *
ztta/max_memreq_MB............ 250 *
ztta/roll_area................ 2000000 *
ztta/roll_extension........... 2000000000 *
Can anybody tell me why it takes so much time to select data from the source system? I can see this message for more than 2 hours, which means the extractor is not doing anything, so my question is why it waits so long.
Hi,
Go to tcode ST05 and enable the trace.
Run your job.
Then disable the trace in ST05 and display it. Now you can monitor which step is taking the most time and act accordingly.
*Assign points if helpful. -
DataSource extraction very slow (from source system to PSA it takes 23 hrs)
Friends,
We have enhanced the DataSource 0CRM_SALES_ORDER_I with a user exit. After the enhancement (i.e. adding the new fields and writing some code), the data extraction takes around 23 hours. There are approximately 250,000 records.
Can you please suggest any steps to tune the performance of the DataSource?
NOTE: the data extraction from the source system to PSA alone takes 23 hrs. Once the data has arrived in PSA, loading the data to the cube is fast.
Please help me solve this issue.
Baskar
Hi Friends,
This is the code used for the datasource enhancement.(EXIT_SAPLRSAP_001)
DATA : IS_CRMT_BW_SALES_ORDER_I LIKE CRMT_BW_SALES_ORDER_I.
DATA: MKT_ATTR TYPE STANDARD TABLE OF CRMT_BW_SALES_ORDER_I.
DATA: L_TABIX TYPE I.
DATA: LT_LINK TYPE STANDARD TABLE OF CRMD_LINK,
LS_LINK TYPE CRMD_LINK.
DATA: LT_PARTNER TYPE STANDARD TABLE OF CRMD_PARTNER,
LS_PARTNER TYPE CRMD_PARTNER.
DATA: LT_BUT000 TYPE STANDARD TABLE OF BUT000,
LS_BUT000 TYPE BUT000.
DATA: GUID TYPE CRMT_OBJECT_GUID.
DATA: GUID1 TYPE CRMT_OBJECT_GUID_TAB.
DATA: ET_PARTNER TYPE CRMT_PARTNER_EXTERNAL_WRKT,
ES_PARTNER TYPE CRMT_PARTNER_EXTERNAL_WRK.
TYPES: BEGIN OF M_BINARY,
OBJGUID_A_SEL TYPE CRMT_OBJECT_GUID,
END OF M_BINARY.
DATA: IT_BINARY TYPE STANDARD TABLE OF M_BINARY,
WA_BINARY TYPE M_BINARY.
TYPES : BEGIN OF M_COUPON,
OFRCODE TYPE CRM_MKTPL_OFRCODE,
END OF M_COUPON.
DATA: IT_COUPON TYPE STANDARD TABLE OF M_COUPON,
WA_COUPON TYPE M_COUPON.
DATA: CAMPAIGN_ID TYPE CGPL_EXTID.
TYPES : BEGIN OF M_ITEM,
GUID TYPE CRMT_OBJECT_GUID,
END OF M_ITEM.
DATA: IT_ITEM TYPE STANDARD TABLE OF M_ITEM,
WA_ITEM TYPE M_ITEM.
TYPES : BEGIN OF M_PRICE,
KSCHL TYPE PRCT_COND_TYPE,
KWERT TYPE PRCT_COND_VALUE,
KBETR TYPE PRCT_COND_RATE,
END OF M_PRICE.
DATA: IT_PRICE TYPE STANDARD TABLE OF M_PRICE,
WA_PRICE TYPE M_PRICE.
DATA: PRODUCT_GUID TYPE COMT_PRODUCT_GUID.
TYPES : BEGIN OF M_FRAGMENT,
PRODUCT_GUID TYPE COMT_PRODUCT_GUID,
FRAGMENT_GUID TYPE COMT_FRG_GUID,
FRAGMENT_TYPE TYPE COMT_FRGTYPE_GUID,
END OF M_FRAGMENT.
DATA: IT_FRAGMENT TYPE STANDARD TABLE OF M_FRAGMENT,
WA_FRAGMENT TYPE M_FRAGMENT.
TYPES : BEGIN OF M_UCORD,
PRODUCT_GUID TYPE COMT_PRODUCT_GUID,
FRAGMENT_TYPE TYPE COMT_FRGTYPE_GUID,
ZZ0010 TYPE Z1YEARPLAN,
ZZ0011 TYPE Z6YAERPLAN_1,
ZZ0012 TYPE Z11YEARPLAN,
ZZ0013 TYPE Z16YEARPLAN,
ZZ0014 TYPE Z21YEARPLAN,
END OF M_UCORD.
DATA: IT_UCORD TYPE STANDARD TABLE OF M_UCORD,
WA_UCORD TYPE M_UCORD.
DATA: IT_CATEGORY TYPE STANDARD TABLE OF COMM_PRPRDCATR,
WA_CATEGORY TYPE COMM_PRPRDCATR.
DATA: IT_CATEGORY_MASTER TYPE STANDARD TABLE OF ZPROD_CATEGORY ,
WA_CATEGORY_MASTER TYPE ZPROD_CATEGORY .
types : begin of st_final,
OBJGUID_B_SEL TYPE CRMT_OBJECT_GUID,
OFRCODE TYPE CRM_MKTPL_OFRCODE,
PRODJ_ID TYPE CGPL_GUID16,
OBJGUID_A_SEL type CRMT_OBJECT_GUID,
end of st_final.
data : t_final1 type standard table of st_final.
data : w_final1 type st_final.
SELECT b~OBJGUID_B_SEL a~OFRCODE a~PROJECT_GUID b~OBJGUID_A_SEL
  INTO TABLE t_final1
  FROM CRMD_MKTPL_COUP AS a
  INNER JOIN CRMD_BRELVONAE AS b ON b~OBJGUID_A_SEL = a~PROJECT_GUID. -
Urgent!! Urgent!! Slow in XML extraction.
Hi,
I am using an 8.1.7 DB. I have a performance problem while extracting. I have an XML doc like the following:
<rowOfCells>
<cellValueList>
<cellValue>Class I Senior Notes</cellValue>
<cellValue>0</cellValue>
<cellValue>0.0054</cellValue>
<cellValue>48164975</cellValue>
<cellValue>4.18</cellValue>
<cellValue>304557600</cellValue>
<cellValue>181835025</cellValue>
</cellValueList>
</rowOfCells>
<rowOfCells>
<cellValueList>
<cellValue>Class I Senior Notes</cellValue>
<cellValue>0</cellValue>
<cellValue>0.0054</cellValue>
<cellValue>49105975</cellValue>
<cellValue>4.33</cellValue>
<cellValue>307906256</cellValue>
<cellValue>180894025</cellValue>
</cellValueList>
</rowOfCells>
Please note that the number of "cellValue" tags may vary. I am using code like the following to extract the values and build a record:
l_row_node_list := xslProcessor.selectNodes (p_node,'/rowOfCells/cellValueList' );
FOR i IN 0 .. XMLDOM.getLength (l_row_node_list) - 1
LOOP
l_row_node := XMLDOM.item (l_row_node_list, i);
l_col_node_list := xslProcessor.selectNodes (l_row_node,'cellValue' );
FOR j IN 0 .. XMLDOM.getLength (l_col_node_list) - 1
LOOP
l_grandChildArray(j+1) := SUBSTR(XMLDOM.getNodeValue( XMLDOM.getFirstChild(XMLDOM.item (l_col_node_list, j) ) ),1,100);
END LOOP;
-- Make as record structure.
l_generic_rec.col1 := l_grandChildArray(1);
l_generic_rec.col2 := l_grandChildArray(2);
l_generic_rec.col3 := l_grandChildArray(3);
l_generic_rec.col4 := l_grandChildArray(4);
l_generic_rec.col5 := l_grandChildArray(5);
l_generic_rec.col6 := l_grandChildArray(6);
l_generic_rec.col7 := l_grandChildArray(7);
l_generic_rec.col8 := l_grandChildArray(8);
END LOOP;
This works fine for me, but it takes 5 minutes to extract 1200 cellValueList nodes. I want to speed up this process. Is there any way I can modify this code, or some other way to speed up the extraction?
Advance thanks,
Ramesh
Your code using the DOM APIs looks fine. You can change 'cellValue' to '/cellValue' and see if there is any improvement:
l_col_node_list := xslProcessor.selectNodes (l_row_node,'/cellValue' );
Or you can use the SAX streaming method to get the data. Would you like to send the sample data?