Poor performance when SELECT-ing XML records
Hello Champs, I am new to the XML world, but as a DBA I am now in a situation where I have to suggest performance improvements for accessing XML records.
Problem:
There is a batch job from Informatica, fetching records from XML tables (close to 400, one by one) stored in an Oracle database. All 400 tables have just two columns, as described below:
Name Null? Type
RECID NOT NULL VARCHAR2(255)
XMLRECORD XMLTYPE
Each table has a NORMAL index created only on the VARCHAR2 column:
CREATE UNIQUE INDEX "username"."Indexname_PK" ON "username"."table_name" ("RECID") PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "tbs_idx" ;
All the tables have their CLOB storage defined with the attributes below:
CHUNK  PCTVERSION  RETENTION  FREEPOOLS  CACHE  IN_ROW  FORMAT          PAR
8192               14400                 YES    YES     ENDIAN NEUTRAL  NO
Informatica issues the query below against each table, and it fetches only about 400 rows/sec on average. This takes an entire business day to complete the batch job for all 400 tables, which is a big problem in the production environment.
SELECT <table_name>.<column_name>.getclobval() FROM <table_name>;
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
PL/SQL Release 10.2.0.5.0 - Production
CORE 10.2.0.5.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production
Clarification required:
1. Where do you see a problem in this scenario that blocks performance?
2. What type of index is normally advisable for this setup?
Many Thanks for your assistance,
Not sure if you will like it, but it will improve your performance considerably: upgrade to Oracle 11.2.0.3 or 11.2.0.4 and be sure that all those XMLType (CLOB) stored columns are created/recreated as XMLType (SECUREFILE BINARY XML).
One way of making sure that XMLType will be stored via binary XML SecureFile LOB storage is by setting "db_securefile" at the database level (see "Using Oracle SecureFiles") to the value ALWAYS or FORCE. Binary XML SecureFile XMLType content may only be created in ASSM-enabled tablespaces...
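A minimal sketch of what the recreation could look like (table and column names are placeholders taken from the description above, and the target tablespace is assumed to be ASSM-enabled):

```sql
-- Hypothetical sketch: recreate one of the 400 tables with SecureFile
-- binary XML storage, then copy the existing rows across.
CREATE TABLE table_name_new (
  recid      VARCHAR2(255) NOT NULL,
  xmlrecord  XMLTYPE
)
XMLTYPE COLUMN xmlrecord STORE AS SECUREFILE BINARY XML;

INSERT /*+ APPEND */ INTO table_name_new (recid, xmlrecord)
SELECT recid, xmlrecord FROM table_name;
```

After validating the copy, the old table can be dropped and the new one renamed, and the unique index on RECID recreated.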
Similar Messages
-
Need help with a SQL SELECT query to fetch XML records from Oracle tables having CLOB fields
Hello,
I have a scenario wherein I need to fetch records from several Oracle tables having CLOB fields (which hold XML) and then merge them logically to form a hierarchical XML. All these tables are related with PK-FK relationships. This XML hierarchy has 'OP' as the top-most root node and 'DE' as its bottom-most node, with One-To-Many relationships. Hence, each OP can have multiple GM, each GM can have multiple DM, and so on.
Table structures are mentioned below:
OP:
Name Null Type
OP_NBR NOT NULL NUMBER(4) (Primary Key)
OP_DESC VARCHAR2(50)
OP_PAYLOD_XML CLOB
GM:
Name Null Type
GM_NBR NOT NULL NUMBER(4) (Primary Key)
GM_DESC VARCHAR2(40)
OP_NBR NOT NULL NUMBER(4) (Foreign Key)
GM_PAYLOD_XML CLOB
DM:
Name Null Type
DM_NBR NOT NULL NUMBER(4) (Primary Key)
DM_DESC VARCHAR2(40)
GM_NBR NOT NULL NUMBER(4) (Foreign Key)
DM_PAYLOD_XML CLOB
DE:
Name Null Type
DE_NBR NOT NULL NUMBER(4) (Primary Key)
DE_DESC NOT NULL VARCHAR2(40)
DM_NBR NOT NULL NUMBER(4) (Foreign Key)
DE_PAYLOD_XML CLOB
+++++++++++++++++++++++++++++++++++++++++++++++++++++
SELECT
j.op_nbr||'||'||j.op_desc||'||'||j.op_paylod_xml AS op_paylod_xml,
i.gm_nbr||'||'||i.gm_desc||'||'||i.gm_paylod_xml AS gm_paylod_xml,
h.dm_nbr||'||'||h.dm_desc||'||'||h.dm_paylod_xml AS dm_paylod_xml,
g.de_nbr||'||'||g.de_desc||'||'||g.de_paylod_xml AS de_paylod_xml
FROM
DE g, DM h, GM i, OP j
WHERE
h.dm_nbr = g.dm_nbr(+) and
i.gm_nbr = h.gm_nbr(+) and
j.op_nbr = i.op_nbr(+)
+++++++++++++++++++++++++++++++++++++++++++++++++++++
I am using the above SQL select statement to fetch the XML records and this gives me all related XMLs for each entity in a single record (OP, GM, DM, DE). Output of this SQL query is as below:
Current O/P:
<resultSet>
<Record1>
<OP_PAYLOD_XML1>
<GM_PAYLOD_XML1>
<DM_PAYLOD_XML1>
<DE_PAYLOD_XML1>
</Record1>
<Record2>
<OP_PAYLOD_XML2>
<GM_PAYLOD_XML2>
<DM_PAYLOD_XML2>
<DE_PAYLOD_XML2>
</Record2>
<RecordN>
<OP_PAYLOD_XMLN>
<GM_PAYLOD_XMLN>
<DM_PAYLOD_XMLN>
<DE_PAYLOD_XMLN>
</RecordN>
</resultSet>
Now I want to change my SQL query so that I get the following output structure:
<resultSet>
<Record>
<OP_PAYLOD_XML1>
<GM_PAYLOD_XML1>
<GM_PAYLOD_XML2> .......
<GM_PAYLOD_XMLN>
<DM_PAYLOD_XML1>
<DM_PAYLOD_XML2> .......
<DM_PAYLOD_XMLN>
<DE_PAYLOD_XML1>
<DE_PAYLOD_XML2> .......
<DE_PAYLOD_XMLN>
</Record>
<Record>
<OP_PAYLOD_XML2>
<GM_PAYLOD_XML1'>
<GM_PAYLOD_XML2'> .......
<GM_PAYLOD_XMLN'>
<DM_PAYLOD_XML1'>
<DM_PAYLOD_XML2'> .......
<DM_PAYLOD_XMLN'>
<DE_PAYLOD_XML1'>
<DE_PAYLOD_XML2'> .......
<DE_PAYLOD_XMLN'>
</Record>
</resultSet>
Appreciate your help in this regard!
Hi,
A few questions :
How's your first query supposed to give you an XML output like you show ?
Is there something you're not telling us?
What's the content of, for example, <OP_PAYLOD_XML1> ?
I don't think it's a good idea to embed the node level in the tag name; it would make more sense to expose that as an attribute.
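For example (purely a sketch, since we haven't seen the actual payload content, and assuming the *_PAYLOD_XML CLOBs hold well-formed XML), the grouping you describe could be approached with SQL/XML functions, exposing the number as an attribute:

```sql
-- Hypothetical sketch: one Record per OP, child GM payloads aggregated
-- with XMLAgg; the number goes into an attribute, not the tag name.
SELECT XMLElement("Record",
         XMLElement("OP", XMLAttributes(j.op_nbr AS "nbr"),
                    XMLType(j.op_paylod_xml)),
         (SELECT XMLAgg(XMLElement("GM", XMLAttributes(i.gm_nbr AS "nbr"),
                                   XMLType(i.gm_paylod_xml)))
            FROM gm i
           WHERE i.op_nbr = j.op_nbr)
         -- similar correlated subqueries would aggregate DM and DE
       ).getClobVal() AS record_xml
FROM op j;
```

But that only makes sense once we know what those CLOBs really contain, hence the questions above.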
What's the db version BTW? -
Error inserting XML records 4000 bytes through Pro*C
Hi,
I am seeing the following error while trying to insert XML records > 4000 bytes (Records < 4000 bytes get inserted without any issues). Any help in resolving the issue would be highly appreciated.
ORA return text: ORA-01461: can bind a LONG value only for insert into a LONG column.
I am also able to insert records > 4000 bytes using the following query, but I want to insert the records through a C application (using Pro*C) that is not running on the database server.
INSERT INTO MY_XML_TABLE
VALUES (XMLType(bfilename('XML_DIR', 'MY_FILE.XML'),
nls_charset_id('AL32UTF8')));
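As a hedged suggestion (I haven't seen your host variable declarations): ORA-01461 typically appears when a character buffer longer than 4000 bytes is bound as a plain string. Binding the document as a CLOB instead usually avoids it; in PL/SQL terms the idea looks like this:

```sql
-- Hypothetical sketch: passing the document through a CLOB bind avoids
-- the 4000-byte limit of a plain VARCHAR2 bind that raises ORA-01461.
DECLARE
  l_doc CLOB;
BEGIN
  -- In the real application the client would stream the XML into l_doc
  -- (e.g. via DBMS_LOB calls) rather than assigning a literal.
  l_doc := '<MYXMLTAG>...</MYXMLTAG>';
  INSERT INTO my_xml_table (my_xml_record) VALUES (XMLType(l_doc));
  COMMIT;
END;
/
```

In Pro*C the equivalent would be to declare the host variable as a LOB locator rather than a fixed character array, but the exact declarations depend on your code.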
Oracle Version
===============
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Solaris: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
Pro*C/C++ version:
====================
Pro*C/C++ RELEASE 11.2.0.0.0 - PRODUCTION
Schema registration:
====================
begin
DBMS_XMLSCHEMA.registerSchema (
SCHEMAURL => 'MY_XML_SCHEMA.xsd',
SCHEMADOC => bfilename ('ENG_REPORTS', 'MY_XML_SCHEMA.xsd'),
GENTYPES => FALSE,
OPTIONS => DBMS_XMLSCHEMA.REGISTER_BINARYXML,
CSID =>nls_charset_id ('AL32UTF8'));
end;
Table creation
===============
CREATE TABLE MY_XML_TABLE (
MY_XML_RECORD XmlType )
XMLTYPE MY_XML_RECORD STORE AS BINARY XML
XMLSCHEMA "MY_XML_SCHEMA.xsd" ELEMENT "MYXMLTAG" ;
Record Insertion (Pro*C generated code):
=========================================
/* EXEC SQL FOR :l_sizeof_array_togo
insert INTO MY_XML_TABLE
(MY_XML_RECORD )
VALUES( XMLTYPE(:l_XML_ptr INDICATOR :l_XML_indicators )); */
struct sqlexd sqlstm;
sqlstm.sqlvsn = 12;
sqlstm.arrsiz = 1;
sqlstm.sqladtp = &sqladt;
sqlstm.sqltdsp = &sqltds;
sqlstm.stmt = "insert into MY_XML_TABLE (MY_XML_RECORD) values (XMLTYPE(:s1\
:s2 ))";
sqlstm.iters = (unsigned int )l_sizeof_array_togo;
sqlstm.offset = (unsigned int )20;
sqlstm.cud = sqlcud0;
sqlstm.sqlest = (unsigned char *)&sqlca;
sqlstm.sqlety = (unsigned short)4352;
sqlstm.occurs = (unsigned int )0;
sqlstm.sqhstv[0] = (unsigned char *)&l_XML_ptr->xml_record;
sqlstm.sqhstl[0] = (unsigned long )8002;
sqlstm.sqhsts[0] = ( int )sizeof(struct xml_rec_definition);
sqlstm.sqindv[0] = ( short *)&l_XML_indicators->XML_record_ind;
sqlstm.sqinds[0] = ( int )sizeof(struct XML_indicator);
sqlstm.sqharm[0] = (unsigned long )0;
sqlstm.sqadto[0] = (unsigned short )0;
sqlstm.sqtdso[0] = (unsigned short )0;
sqlstm.sqphsv = sqlstm.sqhstv;
sqlstm.sqphsl = sqlstm.sqhstl;
sqlstm.sqphss = sqlstm.sqhsts;
sqlstm.sqpind = sqlstm.sqindv;
sqlstm.sqpins = sqlstm.sqinds;
sqlstm.sqparm = sqlstm.sqharm;
sqlstm.sqparc = sqlstm.sqharc;
sqlstm.sqpadto = sqlstm.sqadto;
sqlstm.sqptdso = sqlstm.sqtdso;
sqlcxt((void **)0, &sqlctx, &sqlstm, &sqlfpn);
}
After selecting data from the xmltab table I just received the first line of the xmldata file, i.e.
<?xml version="1.0" encoding="WINDOWS-1252"?> <BAROutboundXML xmlns="http://BARO
That must be a display issue.
What client tool are you using, and what version?
If SQL*Plus, you won't see the whole content unless you set some options :
{code}
SET LONG <value>
SET LONGCHUNKSIZE <value>
{code}
Could you try the following?
{code}
SET LONG 10000
SELECT t.object_value.getclobval() FROM xmltab t;
-- to force pretty-printing :
SELECT extract(t.object_value, '/*').getclobval() FROM xmltab t;
{code}
Edited by: odie_63 on 16 Feb. 2011 08:58 -
? - Is there a way to validate 1 XML record at a time, using the SAX or oth
Hello!
Before running into space problems, I generated an XML file from a 'pipe delimited' file and then processed that XML file through a SAXParser 'validator', and the data was correctly validated, using the RELAXNG Schema patterns as the validation criteria!
But as feared, the XML file was huge! (12 billion XML recs. generated from 1 billion 'pipe' recs.) and I am now trying to find a way to process 1 'pipe' record at a time, (i.e.) read 1 record from the 'pipe delimited' file, convert that rec. to an XML rec. and then send that 1 XML rec. through the SAXParser 'validator', avoiding the build of a huge temporary XML file!
After testing this approach, it looks like the SAXParser 'validator' (sp.parse) is expecting only (1) StringBufferInputStream as input, and after opening, reading and closing just (1) of the returned StringBufferInputStream objects, the validator wants to close up operations!
Where I have the "<<<<<" you can see where I'm calling the object.method that creates the 'pipe>XML' records, 'sb.createxml', which will be returning many occurrences of the StringBufferInputStream object, where (1) StringBufferInputStream object represents (1) 'pipe>XML' record!
So what I'm wondering is if there is a form of 'inputStream' class that can be loaded and processed at the same time, i.e. instead of requiring that the 'inputStream' object be loaded in its entirety before going to validation?
Or if there is another XML 'validator' that can validate 1 XML record at a time, without requiring that the entire XML file be built first?
1. ---------------------------------class: (SX2) ---------------------------------------------------------------------------------------------------------
import ............
public class SX2 {
    public static void main(String[] args) throws Exception {
        MyDefaultHandler dh = new MyDefaultHandler();
        SX1 sx = new SX1();
        SAXParser sp = sx.getParser(args[0]);
        stbuf1 sb = new stbuf1();
        sp.parse(sb.createxml(args[1]), dh); // <<<<<< createxml( ) see <<<<<<< below
    }
}
class MyDefaultHandler extends DefaultHandler {
    public int errcnt;
"SX2.java" 87 lines, 2563 characters
2. ----------------------------------class: (stbuf1) method: (createxml) ----------------------------------------------------------------------------
public stbuf1() { }

public StringBufferInputStream createxml( String inputFile ) { // <<<<<< createxml(
    BufferedReader textReader = null;
    if ( (inputFile == null) || (inputFile.length() <= 1) ) {
        throw new NullPointerException("Delimiter Input File does not exist");
    }
    String ele = new String();
    try {
        textReader = new BufferedReader(new FileReader(inputFile));
        String line = null; String SEPARATOR = "\\|"; String sToken = null;
        String hdr1 = ("<?xml version=#1.0# encoding=#UTF-8#?>"); hdr1 = hdr1.replace('#','"');
        String hdr2 = ("<hlp_data>");
        String hdr3 = ("</hlp_data>");
        String hdr4 = ("<" + TABLE_NAME + ">");
        String hdr5 = ("</" + TABLE_NAME + ">");
        while ( (line = textReader.readLine()) != null ) {
            String[] sa = line.split(SEPARATOR);
            for (int i = 0; i < NUM_COLS; i++) {
                if (i > (sa.length - 1)) { sToken = new String(); } else { sToken = sa[i]; }
                String elel = "<" + _columnNames[i] + ">" + sToken + "</" + _columnNames[i] + ">";
                if (i == 0) {
                    ele = ele.concat(hdr1); ele = ele.concat(hdr2); ele = ele.concat(hdr4); ele = ele.concat(elel);
                } else if (i == NUM_COLS - 1) {
                    ele = ele.concat(elel); ele = ele.concat(hdr5); ele = ele.concat(hdr3);
                } else {
                    ele = ele.concat(elel);
                }
            }
        }
        textReader.close();
    } catch (IOException e) {
    }
    return (new StringBufferInputStream(ele));
}

public static void main( String args[] ) {
    stbuf1 genxml_obj = new stbuf1();
    String ptxt = new String(args[0]);
    genxml_obj.createxml(ptxt);
}
}
Well, I think you can use the streaming API for XML processing provided by WebLogic. It is a pull model, not a push model like SAX. With it, you can select the events you want without having to react to every event, and you can filter events out.
Sun also provides such a streaming API for XML processing, and I found a very simple introduction to it on the Chinese Sun developer site, but I couldn't find any other information about it elsewhere! If you have such materials, please send them to my email: [email protected], and if I get them, I will be sure to post the links here. Hope it helps more or less :)
@smile@ -
How can I select the random records in “step loop”.
Hi, Experts,
I am using a step loop to display the data on screen. Now I want to select random records from this step loop display, like the 1st, 3rd or 5th record. Is there any way to select the records?
Like I did in table control: there was a field in the internal table named MARKED, char length 1. I gave this field name in “w/ SelColumn”, and it fills that field with ‘X’, from where I can get the info about the selected records. In this way I can easily perform the operation on that internal table with the help of that marked field.
Is there any way to select the records in a step loop too?
Kind Regards,
Faisal
Thanks for the reply, Shwetali,
but I just gave the 1st, 3rd and 5th as an example of random records; my goal is not to select only these records. I want to select random records, meaning any records from the step loop display,
like we can select from the table control, where selecting any record places 'X' in the given internal table field.
Thanks and kind Regards,
Faisal -
Load XML records in a normal table
Good afternoon all,
I have a very simple question:
I get a XML file and want to store that data in my Oracle database in a normal table.
I have seen so many answers everywhere, varying from LOBs and using XDB etc.
What i don't understand is why it is so difficult.
When i want to load a CSV file in a table I make a very small Control File CTL and from the command prompt / command line I run the SQL Loader.
Control file:
load data
infile 'import.csv'
into table emp
fields terminated by "," optionally enclosed by '"'
( empno, empname, sal, deptno )
command:
sqlldr user/password@SID control=loader_Control_File.ctl
Next I connect to the database and run SQL query:
select * from emp;
and i see my data as usual, I can make Crystal Reports on it, etc etc
I really don't understand why this can't be done with an XML file
Oracle knows the fields in the table EMP.
The XML file has <EMPNO> and </EMPNO> tags around every field.
Can't be easier than that, I would say.
I can understand Oracle likes some kind of description of the XML data, so a reference to an XSD file would be understandable.
But all examples are describing LOB things (whatever that is)
Who can help me to get XML data in a normal table?
Thanks
Frank
Hi Frank,
What i don't understand is why it is so difficult.
Why do you think that?
An SQL*Loader control file might appear very small and simple to you, but you don't actually see what happens inside the loader itself; I guess a lot of complex operations (parsing, datatype mapping, memory allocation etc.).
XML, contrary to the CSV format, is a structured, well-standardized language and can handle far more complex documents than row-organized CSV files.
I think it naturally requires a bit of extra work (for a developer) to describe what we want to do with it.
However, using an XML schema is not mandatory to load XML data into a relational table.
It's useful if you're interested in high-performance loading and scalability, as it allows Oracle to fully understand the XML data model it has to deal with, and make the correct mapping with SQL types in the database.
Furthermore, now with 11g BINARY XMLType, performance has been improved with or without schema.
Here's a simple example, loading XML file "import.xml" into table MY_EMP.
Do you find it difficult? ;)
SQL> create or replace directory test_dir as 'D:\ORACLE\test';
Directory created
SQL> create table my_emp as
2 select empno, ename, sal, deptno
3 from scott.emp
4 where 1 = 0
5 ;
Table created
SQL> insert into my_emp (empno, ename, sal, deptno)
2 select *
3 from xmltable('/ROWSET/ROW'
4 passing xmltype(bfilename('TEST_DIR', 'import.xml'), nls_charset_id('CHAR_CS'))
5 columns empno number(4) path 'EMPNO',
6 ename varchar2(10) path 'ENAME',
7 sal number(7,2) path 'SAL',
8 deptno number(2) path 'DEPTNO'
9 )
10 ;
14 rows inserted
SQL> select * from my_emp;
EMPNO ENAME SAL DEPTNO
7369 SMITH 800.00 20
7499 ALLEN 1600.00 30
7521 WARD 1250.00 30
7566 JONES 2975.00 20
7654 MARTIN 1250.00 30
7698 BLAKE 2850.00 30
7782 CLARK 2450.00 10
7788 SCOTT 3000.00 20
7839 KING 5000.00 10
7844 TURNER 1500.00 30
7876 ADAMS 1100.00 20
7900 JAMES 950.00 30
7902 FORD 3000.00 20
7934 MILLER 1300.00 10
14 rows selected
import.xml :
<?xml version="1.0"?>
<ROWSET>
<ROW>
<EMPNO>7369</EMPNO>
<ENAME>SMITH</ENAME>
<SAL>800</SAL>
<DEPTNO>20</DEPTNO>
</ROW>
<ROW>
<EMPNO>7499</EMPNO>
<ENAME>ALLEN</ENAME>
<SAL>1600</SAL>
<DEPTNO>30</DEPTNO>
</ROW>
<!-- more rows here -->
<ROW>
<EMPNO>7934</EMPNO>
<ENAME>MILLER</ENAME>
<SAL>1300</SAL>
<DEPTNO>10</DEPTNO>
</ROW>
</ROWSET>
Who can help me to get XML data in a normal table?
If you have a specific example, feel free to post it, including the following information :
- structure of the target table
- sample XML file
- database version (select * from v$version)
Hope that helps.
Edited by: odie_63 on 9 Mar. 2011 21:22 -
Filtering XML records while bursting - processing only a subset of the data
Hi
I'm attempting to filter the records in my XML data while bursting, so that depending on the value of a data element, the entire record is either skipped or included in the burst output.
I only want a subset of the XML output to be included in the burst output.
I've tried applying a filter at the select stage -
<xapi:request select="/AR_INVOICE/LIST_EMAIL_HEAD/EMAIL_HEAD{EMAIL_IND!=''}">
This removes all the records where 'EMAIL_IND' is null - i.e. keeps only those with an email address to send to,
but instead of giving me multiple emails, one for each /AR_INVOICE/LIST_EMAIL_HEAD/EMAIL_HEAD,
I get just one email that contains all of the data for the records that have email addresses.
I also tried putting a filter on the template
<xapi:template type="rtf" locale="" location="xdo://AR.XXARINVEMAILDUMMY.en.00/?getSource=true" translation="" filter="{EMAIL_IND!=''}" />
i.e. having only one template and filtering as shown, but this has no effect.
Note: I had to change the square brackets - '[' to curly brackets '{' to get the examples to show.
Any ideas?
Thanks in advance.
Mike
Hi
I worked out a way to conditionally use only a number of the data records in the bursting process, discarding the others.
In EBS, I generate a set of data in a a data-definition template that contains entries for some people who require email delivery, and some who require printed output.
In the concurrent programme, I specify an output rtf template that has a filter in it to print only the records for those who don't need emails.
(don't you just love the way that the designers call both the data definition and the output definition TEMPLATES - not confusing at all.....)
(This step generates a single sorted pdf file that is printed.)
Then came the tricky part to send emails only to those who need them- using the bursting engine - to the other set of people - but to "cleanly" dispose of the records for the people who do not want emails.
What is not clear in any of the documentation at all is that the XMLPublisher bursting engine MUST handle ALL the records that it receives in the XML input.
If you specify a filter on the output template in the bursting control file, that excludes some records (those not needing the email) the bursting engine doesn't know what do with the remaining records.
Enter stage left - multiple delivery methods.
I simply defined another delivery method in the same template, with a filter including only those records for people who do NOT need emails - type filesystem - and routed the output to the /tmp directory - the trash.
The lesson(s) to be learnt
1) The bursting engine needs to have instructions to handle ALL the XML data that is fed to it.
2) You can define as many output documents and delivery methods as you like - putting filters on each delivery method as you like - BUT ALL XML RECORDS MUST be provided for.
3) XML records that are not required in one output / bursting stream can be handled in another - and trashed in the /tmp - or other - area.
The full bursting control file is shown below
1) First define the two delivery methods
2) Then define the two "documents" - using filters - using the two delivery methods.
Hope this helps others wanting to do similar things
Mike
<?xml version="1.0" encoding="UTF-8"?>
<xapi:requestset xmlns:xapi="http://xmlns.oracle.com/oxp/xapi" type="bursting">
<xapi:request select="/AR_INVOICE/LIST_EMAIL_HEAD/EMAIL_HEAD">
----- Define the two delivery methods - first the "email"---------
<xapi:delivery>
<xapi:email server="10.1.1.2" port="25"
from="${EM_DOC_EMAIL_ADDRESS}" reply-to ="${EM_COLLECTOR_EMAIL_ADDRESS}">
<xapi:message id="email" to="[email protected]"
attachment="true" subject="${EM_OPERATING_UNIT} invoice for ${EM_CUSTOMER_NAME}">
Please review the attached invoice - terms are ${EM_TERMS}
This will be sent to collector email ${EM_COLLECTOR_EMAIL_ADDRESS} and customer email ${EM_DOC_EMAIL_ADDRESS}
</xapi:message>
</xapi:email>
------ Second - the "null" - type filesystem ----------------
<xapi:filesystem id="null" output="/tmp/xmlp_null_${EM_CUSTOMER_NAME}" />
</xapi:delivery>
------Then define the first document using the"email" delivery -------------
<xapi:document output-type="pdf" delivery="email">
<xapi:template type="rtf" locale=""
location="xdo://AR.XXARINVEMAILDUMMY.en.00/?getSource=true" translation="" filter=".//EMAIL_HEAD[EM_DOC_EMAIL_ADDRESS!='NO_EMAIL']">
</xapi:template>
</xapi:document>
------ Then define the other document using the "null" delivery --------------
<xapi:document output-type="pdf" delivery="null">
<xapi:template type="rtf" locale=""
location="xdo://AR.XXARINVEMAILDUMMY.en.00/?getSource=true" translation="" filter=".//EMAIL_HEAD[EM_DOC_EMAIL_ADDRESS='NO_EMAIL']">
</xapi:template>
</xapi:document>
</xapi:request>
</xapi:requestset> -
Regarding performance of a select query
Could anyone tell me whether the select query in this piece of code would affect the performance of the program?
DATA: BEGIN OF OUTREC,
BANKS LIKE BNKA-BANKS,
BANKL LIKE BNKA-BANKL,
BANKA LIKE BNKA-BANKA,
PROVZ LIKE BNKA-PROVZ, "Region (State, Province, County)
BRNCH LIKE BNKA-BRNCH,
STRAS LIKE BNKA-STRAS,
ORT01 LIKE BNKA-ORT01,
SWIFT LIKE BNKA-SWIFT,
END OF OUTREC.
OPEN DATASET P_OUTPUT FOR OUTPUT IN TEXT MODE.
IF SY-SUBRC NE 0. EXIT. ENDIF.
SELECT * FROM BNKA
WHERE BANKS EQ P_BANKS
AND LOEVM NE 'X'
AND XPGRO NE 'X'
ORDER BY BANKS BANKL.
PERFORM TRANSFER_DATA.
ENDSELECT.
CLOSE DATASET P_OUTPUT.
*& Transfer the data to the output file
FORM TRANSFER_DATA.
OUTREC-BANKS = BNKA-BANKS.
OUTREC-BANKL = BNKA-BANKL.
OUTREC-BANKA = BNKA-BANKA.
OUTREC-PROVZ = BNKA-PROVZ.
OUTREC-BRNCH = BNKA-BRNCH.
OUTREC-STRAS = BNKA-STRAS.
OUTREC-ORT01 = BNKA-ORT01.
OUTREC-SWIFT = BNKA-SWIFT.
TRANSFER OUTREC TO P_OUTPUT.
ENDFORM. " READ_IN_DATA
Hi
Ways of Performance Tuning
1. Selection Criteria
2. Select Statements
Select Queries
SQL Interface
Aggregate Functions
For all Entries
Select Over more than one Internal table
Selection Criteria
1. Restrict the data in the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement.
2. Select with selection list.
Points # 1/2
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
Select Statements Select Queries
1. Avoid nested selects
2. Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
3. When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
4. For testing existence , use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit.
5. Use Select Single if all primary key fields are supplied in the Where condition .
Point # 1
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
Point # 2
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much more optimized by the code written below which avoids CHECK, selects with selection list and puts the data in one shot using into table
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
Point # 3
To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields . In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
Point # 4
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
Point # 5
If all primary key fields are supplied in the Where condition you can even use Select Single.
Select Single requires one communication with the database system, whereas Select-Endselect needs two.
Select Statements contd.. SQL Interface
1. Use column updates instead of single-row updates
to update your database tables.
2. For all frequently used Select statements, try to use an index.
3. Using buffered tables improves the performance considerably.
Point # 1
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
Point # 2
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
Point # 3
Bypassing the buffer increases network load considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
Select Statements contd Aggregate Functions
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
Check zflight-fligh > maxno.
Maxno = zflight-fligh.
Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
Select Statements contd For All Entries
The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
Points that must be considered for FOR ALL ENTRIES
Check that data is present in the driver table
Sorting the driver table
Removing duplicates from the driver table
Consider the following piece of extract
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above mentioned can be more optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
Select Statements contd Select Over more than one Internal table
1. It's better to use a view instead of nested Select statements.
2. To read data from several logically connected tables use a join instead of nested Select statements. Joins are preferred only if all the primary key fields are available in the WHERE clause for the tables that are joined. If the primary keys are not provided in the join, the joining of tables itself takes time.
3. Instead of using nested Select loops it is often better to use subqueries.
Point # 1
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be more optimized by extracting all the data from view DD01V_WA
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT
Point # 2
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Point # 3
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use binary search instead of linear search. But dont forget to sort your internal table before that.
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
4. A binary search using secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
6. Modifying selected components using MODIFY itab TRANSPORTING f1 f2.. accelerates the task of updating a line of an internal table.
Point # 2
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
is much faster than
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
Point # 3
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
is faster than
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
Point # 5
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
Point # 6
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably.
8. If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH followed by ADD.
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
Point # 7
Modifying only selected components makes the program faster compared with modifying all lines completely.
e.g,
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
Point # 8
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code implements collect semantics using READ ... BINARY SEARCH, which runs in O( log2(n) ) time per lookup. The piece of code above can be optimized further:
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent of the number of entries (i.e. O(1)).
Point # 9
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
Point # 10
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
Point # 11
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copying internal tables with ITAB2[] = ITAB1[] is faster than LOOP-APPEND-ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
Point # 12
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
Point # 13
SORT ITAB BY K. makes the program run faster than SORT ITAB.
Internal Tables contd
Hashed and Sorted tables
1. For single read access, hashed tables are more optimized than sorted tables.
2. For partial sequential access, sorted tables are more optimized than hashed tables.
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
For single read access, this runs faster than the same code against the sorted table:
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
Point # 2
Similarly, for partial sequential access, STAB runs faster than HTAB:
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP. -
Database adapter for performing sync read of N records at a time
Would anyone have an example and/or documentation related to performing a select operation in a database adapter where the adapter can fetch N records at a time? The database adapter would not be an activation agent/point polling for changes to any records, but would be invoked in the middle of the process. I searched for this in the samples folder and online but found nothing.
Any help would be appreciated.
Thanks.
Hi,
Do you want to know the best way to integrate Oracle BPEL with OTM? Then it's a simple webservice call from BPEL using HTTP binding.
If you want to connect to Apps inbound, then you can use either the DB adapter, the JMS adapter, or the Apps adapter, whichever best suits your business requirement.
Rgds,
Mandrita. -
APP-PAY-07201 Cannot perform a delete when child record exists in future
I am trying to put an end date on a payment method of an employee in HR/Payroll 11.5.10.2,
but I am receiving the following error message:
APP-PAY-07201 Cannot perform a delete when child record exists in future
Can you advise what steps I should follow to resolve this issue?
Regards / Ali
This note relates to termination of an employee, but our employee is on the payroll and just wants to change his payment method. With the existing payment method in place we cannot attach another one, because we receive the error:
APP-PAY-07041: Priority must be unique within an organizational payment method -
How to select only one record at a time in ALV
Hi all,
I have to use ALV report. Each record on the report output should have a button (or radio button) which will allow the record to be selected. My requirement is to select only one record at a time.
please guide me how to proceed further.
regards
manish
hi
https://www.sdn.sap.com/irj/sdn/wiki?path=/display/snippets/sampleCheckBoxProgram
Regards
Pavan -
URGENT: Selecting only 25 records at a time from a table
URGENT !!!!
Hi,
I have an RFC which selects records from a table (say table_A) and, depending on these selected records, further processing is done within that RFC.
Now my problem is that table_A contains around 200 matching records, so the entire logical processing consumes a lot of time. Hence my RFC takes a huge amount of time to produce a result (approx. 10 mins).
Can I select these matching records in batches of 25 and display the result for those 25 records?
I'll give this batch size as an input to the RFC.
Does anybody have any idea how to tackle this situation and reduce the response time?
If you have any better solution than this, please let me know ASAP.
Regards,
Amey
Amey Mogare,
Do one thing: create a new importing parameter in your RFC, say CURRENT_TRANS_ID, and on the first call pass its initial value.
Then, inside the logic, change the SELECT to:
select up to 25 rows where transaction id > CURRENT_TRANS_ID.
The next time you call the RFC, send the last selected id as the value for CURRENT_TRANS_ID.
I think you can somehow use this logic.
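A minimal ABAP sketch of that paging idea (the table ZTRANS, its key field TRANSID, and the function name are placeholders for illustration, not names from the thread):

```abap
" Hypothetical RFC-enabled function module that returns at most 25
" rows per call. The caller passes back the last TRANSID it received;
" on the first call it passes the initial (empty) value.
FUNCTION z_get_next_batch.
*"  IMPORTING
*"     VALUE(iv_last_id) TYPE ztrans-transid
*"  TABLES
*"     et_batch STRUCTURE ztrans

  " Fetch only the next slice of matching records, in key order,
  " so consecutive calls walk through the table without overlap.
  SELECT * FROM ztrans
           INTO TABLE et_batch
           UP TO 25 ROWS
           WHERE transid > iv_last_id
           ORDER BY transid.

ENDFUNCTION.
```

Each call returns the next batch; when fewer than 25 rows come back, the caller knows it has reached the end of the matching records.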
Regards
Sarath -
How to import multiple xml records into PDF form
Our organization needs to print and distribute several hundred PDF forms to our members. We want to populate the names and some other basic info at the top of each form so the member can fill in the rest, sign it, and return it. I have the one-page form set up with the fields and it's connected to an XML schema data source. When I import the XML file it reads the first record and that's it - how do I get it to read all of the XML records into a copy of the form for each record in the XML data source file so we can print and distribute the forms? We are using Acrobat 9 Pro on a Windows 7 32-bit machine.
Thanks
Is this a form created with Acrobat or LiveCycle?
Acrobat would need uniquely named fields for each individual's page or pages if they are all in one file.
If you are processing one PDF file per individual, I would look at splitting the XML or FDF file into a record for each individual and then populating each individual's PDF form.
You could even use a tab delimited file and a template to populate the PDF template and then spawn a new page from the template.
With LiveCycle I would look at using a database for the variable data. -
"xml:lang" is getting replaced by "lang" in XML Record converter class
Hi All,
The issue I am having is in the record converter class, which converts an XMLRecord to a JCA record: the string "xml:lang" is replaced by just "lang", due to which the transaction I am running fails, as the application to which this XML is posted rejects it.
Details:
I have a Custom Resource Adapter (JCA 1.5 spec) deployed on Oracle WebLogic Server 10.3. I invoke this adapter from a BPEL process (SOA Suite 11g). I have implemented the XMLRecordConverter interface, which converts the XMLRecord to a JCA record. During this conversion the "xml:lang" string is replaced by "lang", due to which the application to which this XML is posted throws an error.
The code snippet of the XMLRecordConverter implementation is below
import javax.resource.ResourceException;
import javax.resource.cci.Record;
import javax.xml.transform.dom.DOMSource;
import oracle.tip.adapter.api.record.RecordElement;
import oracle.tip.adapter.api.record.XMLRecord;
import oracle.tip.adapter.api.record.XMLRecordConverter;
import oracle.tip.adapter.fw.record.RecordElementImpl;
import oracle.tip.adapter.fw.record.XMLRecordImpl;
import oracle.tip.adapter.fw.util.JCADOMWriter;
import oracle.xml.parser.v2.DOMParser;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
public Record convertFromXMLRecord(XMLRecord xmlRecord) throws ResourceException {
    String xmlString;
    RecordElement payloadRecordElement = xmlRecord.getPayloadRecordElement();
    if (payloadRecordElement != null) {
        xmlString = serialize(payloadRecordElement);
    } else {
        throw new ResourceException("No data in record from EIS!");
    }
    RecordImpl rec = new RecordImpl();
    rec.setStr_Payload(xmlString);
    return rec;
}

private String serialize(RecordElement recordElement) {
    if (recordElement != null) {
        org.w3c.dom.Element wsifEnvelopeRootElement =
            recordElement.getDataAsDOMElement();
        if (wsifEnvelopeRootElement != null) {
            JCADOMWriter domSerializer = new JCADOMWriter();
            String xmlString = domSerializer.print(wsifEnvelopeRootElement);
            return xmlString;
        }
    }
    return "<empty>";
}
The resulting XML has all occurrences of "xml:lang" replaced with "lang".
If I add the namespace (xmlns:xml="http://www.w3.org/XML/1998/namespace") on the root element of the XML coming from the BPEL process, then the replacement does not happen.
Does any of the Oracle APIs used in the above code use a namespace-aware parser, or is the BPEL engine replacing "xml:lang" with "lang" when it passes the XML to the XMLRecordConverter class?
The above code works perfectly fine in Oracle Fusion 10g (Oracle AS 10 and SOA Suite 10g)
Any information will be really helpful, especially on the Oracle APIs.
Thanks,
Amith
Hi Wilson,
Do you know how to move or delete this thread? I didn't find any options to move or delete this thread. -
Error : idoc xml record in segment attribute instead of SEGMENT
hi friends
Can anyone solve my problem? In message mapping I mapped to an IDoc and mapped all the fields, but I am still getting the error "IDOC XML RECORD IN SEGMENT ATTRIBUTE INSTEAD OF SEGMENT". I don't know what this error means.
Can anyone solve this problem, please? I have been working on this scenario for 5 days. Help me.
thanks in advance
Vasu
Hi Vasudeva,
Can you please provide a little more detail on the scenario?
Also, at which point are you getting this error?
Assuming that you have created a message mapping for some source message to target IDoc message, here are some suggestions.
1) Test the message mapping. (Are you getting the error in testing itself?)
2) Apart from the mandatory fields' mapping, are there any constants to be assigned to some IDoc fields? Or any node to be disabled? Or any such additional things...
Regards,