Performance problems with XMLTABLE and XMLQUERY involving relational data

Hello-
Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
* Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
* Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 16 times longer to process one tenth the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

<Note>Long post, sorry.</Note>
First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
•     Converting legacy tabular data into XML records; and
•     Performing code table lookups for coded values in XML records.
There are three things I want to accomplish with this post:
•     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
•     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
•     Highlight remaining performance issues in hopes that we can solve them
What we are trying to accomplish:
•     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
•     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
•     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
•     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
What we did and why:
•     Stored the source XML records as CLOBs: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
•     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
•     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
•     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
•     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
•     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL, both with XMLFOREST and with XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query); a simplified sketch of that loop appears after this list. The performance is one record/second, which is minimally acceptable but interferes with other database activity.
•     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
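To make that PL/SQL workaround concrete, here is a simplified sketch of the conversion loop. The source table and column names (EMP_MASTER, EMP_SERVICE, LAST_NAME, SVC_YEAR, SVC_CODE) are invented for illustration only; the real code joins about 10 tables and builds a much larger document.
-- Simplified sketch only: illustrative names, two source tables instead of ten
DECLARE
  CURSOR c_emp IS SELECT DISTINCT ssn FROM emp_master;  -- hypothetical driving table
  v_doc xmltype;
BEGIN
  FOR r IN c_emp LOOP
    SELECT XMLELEMENT("Root",
             XMLFOREST(m.ssn AS "Id", m.last_name AS "LastName"),
             (SELECT XMLAGG(XMLELEMENT("Element",
                              XMLFOREST(s.svc_year AS "Year", s.svc_code AS "Code"))
                            ORDER BY s.svc_year)
                FROM emp_service s                       -- hypothetical repeating detail table
               WHERE s.ssn = m.ssn))
      INTO v_doc
      FROM emp_master m
     WHERE m.ssn = r.ssn;
    INSERT INTO records (ssn, xmlrec) VALUES (r.ssn, v_doc);
  END LOOP;
  COMMIT;
END;
/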
What issues remain?
We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
-- The main record table:
create table RECORDS (
SSN varchar2(20),
XMLREC sys.xmltype
)
xmltype column XMLREC store as binary xml;
create index records_ssn on records(ssn);
-- A dozen code tables represented by one like this:
create table CODES (
CODE varchar2(4),
DESCRIPTION varchar2(500)
);
create index codes_code on codes(code);
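-- For reference, ora:view() exposes each relational row as a ROW element with one child element
-- per column, which is why the lookups further down match on ROW/CODE. A quick standalone check
-- (using our schema name) looks like this:
select xmlquery('ora:view("disaac","codes")/ROW[CODE="11"]/DESCRIPTION/text()'
                returning content) as descr
from dual;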
-- Some XML records with coded values (the real records are much more complex of course):
-- I think this took about a minute or two
DECLARE
ssn varchar2(20);
xmlrec xmltype;
i integer;
BEGIN
xmlrec := xmltype('<?xml version="1.0"?>
<Root>
<Id>123456789</Id>
<Element>
<Subelement1><Code>11</Code></Subelement1>
<Subelement2><Code>21</Code></Subelement2>
<Subelement3><Code>31</Code></Subelement3>
</Element>
<Element>
<Subelement1><Code>11</Code></Subelement1>
<Subelement2><Code>21</Code></Subelement2>
<Subelement3><Code>31</Code></Subelement3>
</Element>
<Element>
<Subelement1><Code>11</Code></Subelement1>
<Subelement2><Code>21</Code></Subelement2>
<Subelement3><Code>31</Code></Subelement3>
</Element>
</Root>');
for i IN 1..100000 loop
insert into records(ssn, xmlrec) values (i, xmlrec);
end loop;
commit;
END;
-- Some code data like this (ignoring date ranges on codes):
DECLARE
description varchar2(100);
i integer;
BEGIN
description := 'This is the code description ';
for i IN 1..3000 loop
insert into codes(code, description) values (to_char(i), description);
end loop;
commit;
end;
-- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
-- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
-- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
-- Note we are accessing a single XML record based on SSN
-- Note also we are reusing the one test code table multiple times for convenience of this test
select xmlquery('
for $r in Root
return
<Root>
  <Id>123456789</Id>
  {for $e in $r/Element
   return
   <Element>
     <Subelement1>
       {$e/Subelement1/Code}
       <Description>
         {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
       </Description>
     </Subelement1>
     <Subelement2>
       {$e/Subelement2/Code}
       <Description>
         {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
       </Description>
     </Subelement2>
     <Subelement3>
       {$e/Subelement3/Code}
       <Description>
         {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
       </Description>
     </Subelement3>
   </Element>}
</Root>
' passing xmlrec returning content)
from records
where ssn = '10000';
The plan shows the nested loop access that slows things down.
By contrast, a functionally similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization when joining the code tables. I'm not sure whether registering a schema would help; using structured storage probably would. But should that be necessary given we're working with a single record?
Operation Object
|SELECT STATEMENT ()
| SORT (AGGREGATE)
| NESTED LOOPS (SEMI)
| TABLE ACCESS (FULL) CODES
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| NESTED LOOPS (SEMI)
| TABLE ACCESS (FULL) CODES
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| NESTED LOOPS (SEMI)
| TABLE ACCESS (FULL) CODES
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| XPATH EVALUATION ()
| SORT (AGGREGATE)
| XPATH EVALUATION ()
| TABLE ACCESS (BY INDEX ROWID) RECORDS
| INDEX (RANGE SCAN) RECORDS_SSN
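For comparison, this is the kind of purely relational query I mean above (ELEMENTS is a hypothetical shredded table, not part of our actual schema):
select e.ssn, e.seq, e.code1, c.description
from elements e, codes c
where e.ssn = '10000'
and c.code = e.code1;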
With an xmlindex, the XMLQuery above runs in about 1 second, so it is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I'm not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
-- Add an xmlindex. Takes about 2.5 minutes
create index records_record_xml ON records(xmlrec)
indextype IS xdb.xmlindex;
Operation Object
|SELECT STATEMENT ()
| SORT (GROUP BY)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| TABLE ACCESS (FULL) CODES
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (GROUP BY)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| TABLE ACCESS (FULL) CODES
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (GROUP BY)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| TABLE ACCESS (FULL) CODES
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| FILTER ()
| NESTED LOOPS ()
| FAST DUAL ()
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| SORT (AGGREGATE)
| TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
| INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
| TABLE ACCESS (BY INDEX ROWID) RECORDS
| INDEX (RANGE SCAN) RECORDS_SSN
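One thing I have not tried yet is restricting the xmlindex to just the paths used in the lookups, which I understand should shrink the path table and speed up its maintenance. Something like this (untested on our data):
drop index records_record_xml;
create index records_record_xml ON records(xmlrec)
indextype IS xdb.xmlindex
parameters ('PATHS (INCLUDE (/Root/Element//Code))');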
Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
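Here is the kind of rewrite I have been considering (a rough sketch only, limited to Subelement1 and not yet tested against the real schema): shred the Element rows with XMLTABLE, outer-join CODES in plain SQL so the optimizer is free to hash-join it, and rebuild the record with XMLELEMENT/XMLAGG.
select xmlelement("Root",
         xmlelement("Id", max(rx.id)),
         xmlagg(xmlelement("Element",
                  xmlelement("Subelement1",
                    xmlelement("Code", e.code1),
                    xmlelement("Description", c1.description)))
                order by e.seq))
from records r,
     xmltable('/Root' passing r.xmlrec
              columns id    varchar2(9) path 'Id',
                      elems xmltype     path 'Element') rx,
     xmltable('/Element' passing rx.elems
              columns seq   for ordinality,
                      code1 varchar2(4) path 'Subelement1/Code') e,
     codes c1
where r.ssn = '10000'
and   c1.code (+) = e.code1;
The other Subelements would be handled the same way, and presumably the CODES join could then use a hash join even when the constraint is relaxed to a batch of records.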

Similar Messages

  • Problem with XMLTABLE and LEFT OUTER JOIN

    Hi all.
    I have a problem with XMLTABLE and LEFT OUTER JOIN: in 11g it returns the correct result but in 10g it doesn't; it is treated as an INNER JOIN.
    SELECT * FROM v$version;
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    "CORE     11.2.0.1.0     Production"
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    --test for 11g
    CREATE TABLE XML_TEST(
         ID NUMBER(2,0),
         XML XMLTYPE
    );
    INSERT INTO XML_TEST
    VALUES
    (
         1,
         XMLTYPE(
              '<msg>
                   <data>
                        <fields>
                             <id>g1</id>
                             <dat>data1</dat>
                        </fields>
                   </data>
              </msg>'
         )
    );
    INSERT INTO XML_TEST
    VALUES
    (
         2,
         XMLTYPE(
              '<msg>
                   <data>
                        <fields>
                             <id>g2</id>
                             <dat>data2</dat>
                        </fields>
                   </data>
              </msg>'
         )
    );
    INSERT INTO XML_TEST
    VALUES
    (
         3,
         XMLTYPE(
              '<msg>
                   <data>
                        <fields>
                             <id>g3</id>
                             <dat>data3</dat>
                        </fields>
                        <fields>
                             <id>g4</id>
                             <dat>data4</dat>
                        </fields>
                        <fields>
                             <dat>data5</dat>
                        </fields>
                   </data>
              </msg>'
         )
    );
    SELECT
         t.id,
         x.dat,
         y.seqno,
         y.id_real
    FROM
         xml_test t,
         XMLTABLE(
              '/msg/data/fields'
              passing t.xml
              columns
                   dat VARCHAR2(10) path 'dat',
                   id XMLTYPE path 'id'
         )x LEFT OUTER JOIN
         XMLTABLE(
              'id'
              passing x.id
              columns
                   seqno FOR ORDINALITY,
                   id_real VARCHAR2(30) PATH '.'
         )y ON 1=1;
    ID     DAT     SEQNO     ID_REAL
    1     data1     1     g1
    2     data2     1     g2
    3     data3     1     g3
    3     data4     1     g4
    3     data5
    Here everything is fine; now the problem:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
    PL/SQL Release 10.2.0.1.0 - Production
    "CORE     10.2.0.1.0     Production"
    TNS for HPUX: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    --exactly the same environment as 11g (tables and rows)
    SELECT
         t.id,
         x.dat,
         y.seqno,
         y.id_real
    FROM
         xml_test t,
         XMLTABLE(
              '/msg/data/fields'
              passing t.xml
              columns
                   dat VARCHAR2(10) path 'dat',
                   id XMLTYPE path 'id'
         )x LEFT OUTER JOIN
         XMLTABLE(
              'id'
              passing x.id
              columns
                   seqno FOR ORDINALITY,
                   id_real VARCHAR2(30) PATH '.'
         )y ON 1=1;
    ID     DAT     SEQNO     ID_REAL
    1     data1     1     g1
    2     data2     1     g2
    3     data3     1     g3
    3     data4     1     g4
    As you can see, in 10g I don't get the last row; it seems that Oracle 10g doesn't recognize the LEFT OUTER JOIN.
    Is this a bug? Metalink says that sometimes we can get an ORA-00600, but in this case there is no error returned, just incorrect results.
    Please help.
    Regards.

    Hi A_Non.
    Thanks a lot, I tried with this:
    SELECT
         t.id,
         x.dat,
         y.seqno,
         y.id_real
    FROM
         xml_test t,
         XMLTABLE(
              '/msg/data/fields'
              passing t.xml
              columns
                   dat VARCHAR2(10) path 'dat',
                   id XMLTYPE path 'id'
         )x,
         XMLTABLE(
              'id'
              passing x.id
              columns
                   seqno FOR ORDINALITY,
                   id_real VARCHAR2(30) PATH '.'
         )(+) y;
    And it is giving me the complete output.
    Thanks again.
    Regards.

  • [URGENT] Performance problem with BC4J and partitioned data

    Hi all,
    I have a big performance problem with BC4J and partitioned data. As a partitioned table shouldn't have a primary key like a sequence (or something else), my partitioned table doesn't have any primary key.
    When I debug my BC4J application I can see a message showing me "ignoring row with no primary key" from EntityCache. It takes a long time to retrieve my data even if I use the partition keys. A quick & dirty forms application was multiple times faster!
    Is this a bug in BC4J, or is BC4J not suitable for partitioned data? Can anyone give me a hint what to do to make the BC4J application fast even with partitioned data? In a non-partitioned environment the application works quite well, so it seems that the error must be somewhere in this part.
    Thanks,
    Axel

    Here's a SQL statement that creates the table.
    CREATE TABLE SEARCH
    (SEAR_PARTKEY_DAY              NUMBER(4)        NOT NULL
    ,SEAR_PARTKEY_EMP            VARCHAR2(2)      NOT NULL
    ,SEAR_ID                     NUMBER(20)       NOT NULL
    ,SEAR_ENTRY_DATE             TIMESTAMP        NOT NULL
    ,SEAR_LAST_MODIFIED            TIMESTAMP             NOT NULL
    ,SEAR_STATUS                 VARCHAR2(100)    DEFAULT '0'
    ,SEAR_ITC_DATE               TIMESTAMP        NOT NULL
    ,SEAR_MESSAGE_CLASS          VARCHAR2(15)     NOT NULL
    ,SEAR_CHIPHERING_TYPE        VARCHAR2(256)   
    ,SEAR_GMAT                   VARCHAR2(1)      DEFAULT 'U'
    ,SEAR_NATIONALITY            VARCHAR2(3)      DEFAULT 'XXX'
    ,SEAR_MESSAGE_ID             VARCHAR2(32)     NOT NULL
    ,SEAR_COMMENT                VARCHAR2(256)    NOT NULL
    ,SEAR_NUMBER_OF              NUMBER(3)        NOT NULL
    ,SEAR_INTERCEPTION_SYSTEM    VARCHAR2(40)    
    ,SEAR_COMM_PRIOD_H           NUMBER(5)        DEFAULT -1
    ,SEAR_PRIOD_R                  NUMBER(5)        DEFAULT -1
    ,SEAR_INMARSAT_CES           VARCHAR2(40)    
    ,SEAR_BEAM                   VARCHAR2(10)    
    ,SEAR_DIALED_NUMBER          VARCHAR2(70)    
    ,SEAR_TRANSMIT_NUMBER        VARCHAR2(70)    
    ,SEAR_CALLED_NUMBER          VARCHAR2(40)    
    ,SEAR_CALLER_NUMBER          VARCHAR2(40)    
    ,SEAR_MATERIAL_TYPE          VARCHAR2(3)      NOT NULL
    ,SEAR_SOURCE                 VARCHAR2(10)    
    ,SEAR_MAPPING                VARCHAR2(100)    DEFAULT '__REST'
    ,SEAR_DETAIL_MAPPING         VARCHAR2(100)
    ,SEAR_PRIORITY               NUMBER(3)        DEFAULT 255
    ,SEAR_LANGUAGE               VARCHAR2(5)      DEFAULT 'XXX'
    ,SEAR_TRANSMISSION_TYPE      VARCHAR2(40)    
    ,SEAR_INMARSAT_STD           VARCHAR2(1)     
    ,SEAR_FILE_NAME              VARCHAR2(100)    NOT NULL
    )
    PARTITION BY RANGE (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP)
    ( PARTITION SEARCH_MAX VALUES LESS THAN (MAXVALUE, MAXVALUE) TABLESPACE MIRA4_SEARCH_EVEN
    );
    Of course SEAR_ID is filled by a sequence, but the field is not the primary key, as that would decrease the performance of partitioned data.
    We moved to native JDBC with our application and the performance is like we never expected it to be!

  • Problem with BTE and FI-parking- no data from memory ID to ABAP

    Hi Experts,
    I have a scenario in Business Workflow where I want to catch the data (BKPF & BSEG) after SAP transaction processing - the event is a change to a parked FI document with FBV2. I'm trying to use a BTE to receive the data after the transaction is processed.
    All BTE steps seem to be activated - I can debug my functions when processing FBV2 - but no data reaches my code after processing.
    I have created my project in BF24:
    ZXXX Testing: WF-memory ID
    I have also linked a few FI events to my BTE interface function in BF34:
    00001130 ZXXX ZXXX_FIPP_CHANGE_BTE
    00002213 ZXXX ZXXX_FIPP_CHANGE_BTE
    00002217 ZXXX ZXXX_FIPP_CHANGE_BTE
    My BTE function ZXXX_FIPP_CHANGE_BTE is as follows:
    DATA: memid(15) VALUE 'ZXXX_2217'.
    *Initialize
    CLEAR: t_vbkpf,
    t_vbsegs.
    FREE MEMORY ID 'ZXXX_2217'.
    *Backtracking of BTE
    EXPORT t_vbkpf
    t_vbsegs
    TO MEMORY ID memid.
    ENDFUNCTION.
    I have also created a function to call FBV2:
    data: memid(15) value 'ZXXX_2217'.
    CALL TRANSACTION 'FBV2'.
    *Import memory of BTE
    IMPORT t_vbkpf
    t_vbsegs
    FROM MEMORY ID memid.
    *Free memory id
    FREE MEMORY ID memid.  
    When I set a breakpoint in both of the functions and execute FBV2, I reach the breakpoints. The problem is that the data is not passed from memid 'ZXXX_2217' into the function after FBV2 (this syntax: IMPORT t_vbkpf t_vbsegs FROM MEMORY ID memid.)
    What could be missing? Both functions are called, but NO DATA is passed from the memory ID to my internal tables. This seems to be a problem with memory IDs. Also, in my BTE function ZXXX_FIPP_CHANGE_BTE I receive sy-subrc value 4 when executing the statement "FREE MEMORY ID 'ZXXX_2217'.".
    All hints appreciated,
    Jani

    Hi Ramki,
    I must be frustrating you with these stupid questions... You have already supported me hugely in getting more familiar with BTEs, for example not to commit in a BTE, etc. Big thanks for that!
    In all my tests the project in BF24 has always been active, and yes, I have always changed the pre-posted document to trigger the BTE.
    I still have a problem. I copied all customizing and coding into another system - with the same disappointing results. Let me describe this once more:
    My entry in BF24:
    Product:                 ZBE
    Text:                       Jani testing
    RFC destination:
    Active:                     X
    My entry in BF34:
    Event:                   00002214
    Product:                    ZBE
    Ctr:
    Appl:
    Function module:            ZBE_BTE
    My BTE function:
      DATA: memid(15) VALUE 'ZBE_BTE2214'.
      DATA: t_vbkpf LIKE fvbkpf OCCURS 0 WITH HEADER LINE.
      FREE MEMORY ID memid.
      EXPORT t_vbkpf TO MEMORY ID memid.
    And the function ZBE_BTE_EXECUTE where I call FBV2:
      DATA: memid(15) VALUE 'ZBE_BTE2214'.
      DATA: t_vbkpf LIKE fvbkpf OCCURS 0 WITH HEADER LINE.
      CALL TRANSACTION 'FBV2'.
      IMPORT t_vbkpf FROM MEMORY ID memid.
      FREE MEMORY ID memid.
    When I process my function ZBE_BTE_EXECUTE - with a breakpoint set in ZBE_BTE - the FBV2 call does not reach the breakpoint in ZBE_BTE at all! This occurs when using event linkage 00002214, and I do execute a real change to the parked document.
    But if I change the entry in BF34 to link event 00002217 or 00002218, the process reaches my breakpoint in the BTE function ZBE_BTE. But in those cases too, it does not import the data after the transaction call in function ZBE_BTE_EXECUTE. When I continue debugging from the BTE function ZBE_BTE into the SAP coding, I can see that my internal table t_vbkpf is filled. But when I leave the SAP coding after FBV2 completes and return to ZBE_BTE_EXECUTE - where t_vbkpf should be filled from the memory ID - t_vbkpf is empty!
    Again, as a conclusion, in this system:
    For BTE event 00002214 this logic does not work at all. For events 00002217 and 00002218 it does get activated, but it does not bring any data from the memory ID into my ABAP. The strangest thing is that I am using absolutely the same kind of coding as you.
    Can you see whether I have to fill those empty fields (Ctr. & Appl.) in my BF34 customizing? Or is some other customizing action missing?
    Br,
    Jani

  • Encoding problem with convert and CLOB involving UTF8 and EBCDIC

    Hi,
    I have a task that requires me to call a procedure with a CLOB argument containing a string encoded in EBCDIC. This did not go well so I started narrowing down the problem. Here is some SQL to illustrate it:
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL> select value from v$nls_parameters where parameter = 'NLS_CHARACTERSET';
    VALUE
    AL32UTF8
    SQL> select convert(convert('abc', 'WE8EBCDIC500'), 'AL32UTF8', 'WE8EBCDIC500')
    output from dual;
    OUT
    abc
    SQL> select convert(to_char(to_clob(convert('abc', 'WE8EBCDIC500'))), 'AL32UTF8', 'WE8EBCDIC500') output from dual;
    OUTPUT
    ╒╫¿╒╫¿╒╫¿
    So converting to and from EBCDIC works fine when using varchar2, but (if I am reading this right) fails when involving CLOB conversion.
    My question then is: Can anyone demonstrate how to put correct EBCDIC into a CLOB and maybe even explain why the examples do what they do.

    In order to work successfully with XML DB it is recommended that you use 9.2.0.4 or above; you seem to have a lower version.
    Okay, now related to the problem: if the data that you want to send to the attributes is not greater than 32767 bytes, then you can use the PL/SQL varchar2 datatype to hold the data rather than a CLOB and overcome this problem.
    Here is a sample. Use a function with the PL/SQL below to return the desired output.
    SQL> declare
      2   l_clob     CLOB := 'Hello';
      3   l_output   CLOB;
      4  begin
      5    select  xmlelement("test", xmlattributes(l_clob AS "a")).getclobval()
      6      into l_output from dual;
      7  end;
      8  /
      select  xmlelement("test", xmlattributes(l_clob AS "a")).getclobval()
    ERROR at line 5:
    ORA-06550: line 5, column 44:
    PL/SQL: ORA-00932: inconsistent datatypes: expected - got CLOB
    ORA-06550: line 5, column 3:
    PL/SQL: SQL Statement ignored
    SQL> declare
      2   l_vchar     varchar2(32767) := 'Hello';
      3   l_output   CLOB;
      4  begin
      5    select  xmlelement("test", xmlattributes(l_vchar AS "a")).getclobval()
      6      into l_output from dual;
      7    dbms_output.put_line(l_output);
      8  end;
      9  /
    <test a="Hello"></test>
    PL/SQL procedure successfully completed.

  • Performance problem with sproc and out parameter ref cursor

    Hi
    I have sproc with Ref Cursor as an OUT parameter.
    It is extremely slow looping over the ResultSet (it fetches record by record).
    so I have added setPrefetchRowCount(100) and setPrefetchMemorySize(6000)
    pseudo code below:
    string sqlStmt = "BEGIN get_tick_data( :v1 , :v2); END;";
    Statement* s = connection->createStatement(sqlStmt);
    s->setString(1, i1);
    // cursor ( f1 , f2, f3 , f4 , i1 ) f for float type and i for integer value.
    // 5 columns as part of the cursor, 4 columns holding float values and
    // 1 column holding an int value, assuming 40 bytes for one rec.
    s->setPrefetchRowCount(100);
    s->setPrefetchMemorySize(6000);
    s->registerOutParam(2,OCCICURSOR);
    s->execute();
    ResultSet* rs = s->getCursor(2);
    while (rs->next()) {
    // do, and do v slowly!
    }

    Hi,
    I have the same problem. It seems that, when retrieving a cursor, "setPrefetchRowCount" is not taken into account by OCCI. If you have a SQL statement like "SELECT STR1, STR2, STR3 FROM TABLE1" that works fine, but if your SQL statement is a call to a stored procedure returning a cursor, each row fetched needs a roundtrip.
    To avoid this problem you need to use the method "setDataBuffer" of the "ResultSet" object for each column of your cursor. It's easy to use with the INT and STRING types, a little bit more complex with the DATE type. But until now, I have not been able to do the same thing with the REF type.
    Below a sample with STRING TYPE (It's assuming that the cursor return only one column of STRING type):
    try
    {
      l_Statement = m_Connection->createStatement("BEGIN :1 := PACKAGE1.GetCursor1(:2); END;");
      l_Statement->registerOutParam(1, oracle::occi::OCCINUMBER, sizeof(l_CodeErreur));
      l_Statement->registerOutParam(2, oracle::occi::OCCICURSOR);
      l_Statement->executeQuery();
      l_CodeErreur = l_Statement->getNumber(1);
      if ((int) l_CodeErreur == 0)
      {
        char l_ArrayName[5][256];
        ub2  l_ArrayNameSize[5];
        l_ResultSet = l_Statement->getCursor(2);
        l_ResultSet->setDataBuffer(1, l_ArrayName, OCCI_SQLT_STR, sizeof(l_ArrayName[0]), l_ArrayNameSize, NULL, NULL);
        while (l_ResultSet->next(5))
        {
          for (int i = 0; i < l_ResultSet->getNumArrayRows(); i++)
            l_Name = CString(l_ArrayName[i]);
        }
        l_Statement->closeResultSet(l_ResultSet);
      }
      m_Connection->terminateStatement(l_Statement);
    }
    catch (SQLException &p_SQLException)
    {
      // handle the error
    }
    I hope that sample helps you.
    Regards

  • Performance problems with JDBC and mysql

    Hi,
    I have here a "one table" database which has 1.000.000 data rows. The problem is, that deleting a special entry (ie delete from mytable where da_value="foo1") takes between 10 and 60 seconds.
    Can anyone help me out of this performance trap?
    LoCal

    Is the table indexed and are keys used?
    always preferred postgres! :)
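    For example, an index on the filtered column is usually the first thing to try (mytable and da_value are from your post; the index name is just an example):
    create index idx_da_value on mytable(da_value);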

  • Performance problems with combobox and datasource

    We have a performance problem when connecting a DataTable object (or something like it) to the DataSource property of a combobox. Below you find the source code. The SQL statement reads about 40000 rows and the result (all 40000 rows) should be listed in the combobox. The process takes about 30 seconds to finish. Any suggestions?
    Dim ds As New DataSet
    strSQL = "Select * from am.city"
    conn = New Oracle.DataAccess.Client.OracleConnection(Configuration.ConfigurationSettings.AppSettings("conORA"))
    comm = New Oracle.DataAccess.Client.OracleCommand(strSQL)
    da = New Oracle.DataAccess.Client.OracleDataAdapter(strSQL, conn)
    conn.Open()
    da.Fill(ds)
    conn.Close()
    Dim dt As New DataTable
    dt = ds.Tables(0)
    ComboBox1.DataSource = dt
    ComboBox1.ValueMember = dt.Columns("id").ColumnName
    ComboBox1.DisplayMember = dt.Columns("city").ColumnName

    But how long does it take to fill the DataTable?
    I can fill a 40000 row datatable in under 4 seconds.
    DataBinding a combo box to that many rows is pretty expensive, and not normally recommended.
    David
    Dim strConnection As String = "Data Source=oracle;User ID=scott;Password=tiger;"
    Dim conn As OracleConnection = New OracleConnection(strConnection)
    conn.Open()
    Dim cmd As New OracleCommand("select * from (select * from all_objects union all select * from all_objects) where rownum <= 40000", conn)
    Dim ds As New DataSet()
    Dim da As New OracleDataAdapter(cmd)
    Dim begin As Date = Now
    da.Fill(ds)
    Console.WriteLine(ds.Tables(0).Rows.Count & " rows loaded in " & Now.Subtract(begin).TotalSeconds & " seconds")
    outputs
    40000 rows loaded in 3.734375 seconds

  • Why does Firefox 3.6 have a problem with expand and collapse in a SharePoint data view?

    I used a script in this page http://www.sgnec.net/pages/customers/customers.aspx. It worked well, but now I find it doesn't work with Firefox 3.6. When I click the plus image it doesn't collapse.
    thanks
    Leila

    Try the Firefox Portable support forum:
    http://portableapps.com/forums/support/firefox_portable

  • Performance problems with SAP GUI 7.10 and BEx 3.5 Patch 400?

    Hi everybody,
    we installed SAP GUI 7.10 and BEx 3.5 Patch 400 and detected huge performance problems with this version in comparison to SAP GUI 6.40 with BEx 3.5 or BEx 7.0 Patch 800.
    Does anybody detect the same problems?
    Best regards,
    Ulli

    Most important question when you are talking about performance issues:
    which OS are you working on and which Excel version?
    ciao
    Joke

  • Performance Problems with "For all Entries" and a big internal table

    We have big Performance Problems with following Statement:
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
      FOR ALL ENTRIES IN gt_zmon_help
        WHERE
        status = 'IAI200' AND
        logdat IN gs_dat AND
        ztrack = gt_zmon_help-ztrack.
    In the internal table gt_zmon_help are over 1000000 entries.
    Anyone an Idea how to improve the Performance?
    Thank you!

    >
    Matthias Weisensel wrote:
    > We have big Performance Problems with following Statement:
    >
    >  
    SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
    >   FOR ALL ENTRIES IN gt_zmon_help
    >     WHERE
    >     status = 'IAI200' AND
    >     logdat IN gs_dat AND
    >     ztrack = gt_zmon_help-ztrack.
    >
    > In the internal table gt_zmon_help are over 1000000 entries.
    > Anyone an Idea how to improve the Performance?
    >
    > Thank you!
    You can't expect miracles.  With over a million entries in your itab any select is going to take a bit of time. Do you really need all these records in the itab?  How many records is the select bringing back?  I'm assuming that you have got and are using indexes on your ZEEDMT_ZMON table. 
    In this situation, I'd first of all try to think of another way of running the query and restricting the amount of data, but if this were not possible I'd just run it in the background and accept that it is going to take a long time.

  • Performance problems with EP 6 and MS IE

    Hi everybody,
    For a couple of days now, we have been facing a severe performance problem with our SAP EP 6.0. When I access the system with MS Internet Explorer 6.0, it takes 5-10 minutes after the login. With the Firefox browser, the performance is OK. Therefore I assume that it must be a problem with the IE settings. Does anybody know a solution?
    Best regards,
       Michael

    There are a few things that this could be.  I've seen the setting "Empty Temporary Internet Files folder when browser is closed" cause a lot of performance problems (This is in the advanced settings in your IE).
    This will cause your cache to be cleared out each time the browser is closed and cause a lot more data to be downloaded each time you login to the system.
    For more analysis I'd recommend putting a tool like HTTPWatch into your IE browser and seeing which requests are using the most time.

  • Performance problem with Integration with COGNOS and Bex

    Hi Gems
    I have a performance problem with some of my queries when integrating with COGNOS.
    My query is simple and gets the data for the date interval:
    From Date: 20070101
    To Date: 20070829
    When executing the query in BEx it takes 2 minutes, but when it is executed in COGNOS it takes almost 10 minutes and above.
    Is there anywhere we can debug how the report data is sent to COGNOS, like debugging the OLE DB, and how can we increase the performance of the query in COGNOS?
    Thanks in Advance
    Regards
    AK

    Hi,
    Please check the following CA Unicenter config files on the SunMC server:
    - is the Event Adapter (ea-start) running? Without this daemon no event forwarding is done to CA Unicenter, nor does discovery from CA Unicenter work.
    How to debug:
    - run ea-start in debug mode:
    # /opt/SUNWsymon/SunMC-TNG/sbin/ea-start -d9
    - check if the Event Adapter has been set up:
    # /var/opt/SUNWsymon/SunMC-TNG/cfg_sunmctotng
    - check the CA log file
    # /var/opt/SUNWsymon/SunMC-TNG/SunMCToTngAdaptorMain.log
    If all that is fine, check this site; it explains how to discover a SunMC agent from CA Unicenter.
    http://docs.sun.com/app/docs/doc/817-1101/6mgrtmkao?a=view#tngtrouble-6
    Kind Regards

  • Problems with RAC and XA: Fallback

    Hello,
    we are seeing problems with RAC and XA (Tuxedo 11, DB 11.2), specifically encountering "ORA-24798: cannot resume the distributed transaction branch on another instance".
    The first scenario relates to fallback after a RAC node failure. There are two servers, S1 and S2. S1 makes an ATMI call to S2. Both servers are in the same Tuxedo group, using TMS_ORA. RAC is set up for failover (BASIC), no load balancing.
    The sequence is:
    - S1 and S2 are connected to the same RAC node n1. All is well.
    - RAC node n1 fails. S1, S2 and the TMS_ORA all fail over to RAC node n2. After the failover has happened, all is well.
    - RAC node n1 recovers. All is still well (as there is no automatic fallback).
    - S1 (or S2) is restarted (either intentionally or because of a crash). Since n1 is up again, S1 connects to n1. Now we get ORA-24798. Permanently.
    S1 is connected to n1 and S2 is connected to n2. Since both are in the same group, both use the same XA transaction branch. When called, S2 attempts to JOIN the transaction branch that S1 started. But the DB (11.2) does not allow the same branch to span more than one node. Hence the ORA-24798.
    This seems to be a severe limitation in the combination of Tuxedo, XA and RAC. It basically means we still have to use DTP services, even with Tuxedo 11 and DB 11.2. Or are we missing something?
    We could put S1 and S2 into different groups, but that seems to be inefficient, and not practical for a real application (10s of servers).
    I am extrapolating from this that RAC load balancing would also not work, as S1 and S2 could be connected to different RAC nodes.
    Roger

    Roger,
    When using an external transaction manager such as Tuxedo you should still declare Oracle services as DTP services when using Oracle Database 11g. The Tuxedo documentation is not clear about this. The relevant 11gR2 RAC documentation is at http://download.oracle.com/docs/cd/E11882_01/rac.112/e16795/hafeats.htm and states
    "An XA transaction can span Oracle RAC instances by default, allowing any application that uses the Oracle XA library to take full advantage of the Oracle RAC environment to enhance the availability and scalability of the application.
    "GTXn background processes support global (XA) transactions in an Oracle RAC environment. The GLOBAL_TXN_PROCESSES initialization parameter, which is set to 1 by default, specifies the initial number of GTXn background processes for each Oracle RAC instance. Use the default value for this parameter clusterwide to allow distributed transactions to span multiple Oracle RAC instances. Using the default value allows the units of work performed across these Oracle RAC instances to share resources and act as a single transaction (that is, the units of work are tightly coupled). It also allows 2PC requests to be sent to any node in the cluster.
    "Before Release 11.1, the way to achieve tight coupling in Oracle RAC was to use Distributed Transaction Processing (DTP) services, that is, services whose cardinality (one) ensured that all tightly-coupled branches landed on the same instance—regardless of whether load balancing was enabled. Tightly coupled XA transactions no longer require the special type of singleton services to be deployed on Oracle RAC databases if the XA application does not join or resume XA transaction branches. XA transactions are transparently supported on Oracle RAC databases with any type of service configuration.
    A"n external transaction manager, such as Oracle Services for Microsoft Transaction Server (OraMTS), coordinates DTP/XA transactions. However, an internal Oracle transaction manager coordinates distributed SQL transactions. Both DTP/XA and distributed SQL transactions must use the DTP service in Oracle RAC."
    This issue came up earlier this year in another newsgroup thread at https://forums.oracle.com/forums/thread.jspa?threadID=2165803
    Regards,
    Ed

  • Performance problem with sdo_nn - new 10g install

    I am having a performance problem with sdo_nn after migrating to a new machine. The old Oracle version was 9.0.1.4. The new one is 10g. The new machine is faster in general. Most (non-spatial) batch processes run in half the time. However, the statement below is radically slower: it ran in 45 minutes before, but on the new machine it hasn't finished after 10 hours. I am able to get a 5% sample of the customers to finish in 45 minutes.
    Does anyone have any ideas on how to approach this problem? Any chance something isn't installed correctly on the new machine (the nth version of the query finished, albeit 20 times slower)?
    Appreciate any help. Thanks.
    - Jack
    create table nearest_store
    as
    select /*+ ordered */
    a.customer_id,
    b.store_id nearest_store,
    round(mdsys.sdo_nn_distance(1),4) distance
    from customers a,
    stores b
    where mdsys.sdo_nn(
    b.geometry,
    a.geometry,
    'sdo_num_res=1, unit=mile',
    1
    ) = 'TRUE'
    ;

    Dan,
    customers 110,000 (involved in this query)
    stores 28,000
    Here is the execution plan on the current machine:
    CREATE TABLE STATEMENT (cost = 81947)
      LOAD AS SELECT
        PX COORDINATOR
          PX SEND QC (RANDOM) :TQ10000
            NESTED LOOPS
              PX BLOCK ITERATOR
                TABLE ACCESS FULL CUSTOMERS
              PARTITION RANGE ALL
                TABLE ACCESS BY LOCAL INDEX ROWID STORES
                  DOMAIN INDEX STORES_SIDX
    I can't capture the execution plan on the old database. It is gone. I don't remember it being any different from the above (full scan customers, probe stores index once for each row in customers).
    I am trying the query without the create table (just doing a count). I'll let you know on that one.
    I am at 10.0.1.3.
    Here is how I created the index:
    create index stores_sidx
    on stores(geometry)
    indextype is mdsys.spatial_index LOCAL;
    Note that the stores table is partitioned by range on store type. There are three store types (each in its own partition). The query returns the nearest store of each type (three rows per customer). This is by design (based on the documented behavior of sdo_nn).
    In addition to running the query without the CTAS, I am also going try running it on a different machine tonight. I let you know how that turns out.
    The reason I ask about the install, is that the Database Quick Installation Guide for Solaris says this:
    "If you intend to use Oracle JVM or Oracle interMedia, you must install the Oracle Database 10g Products installation type from the Companion CD. This installation optimizes the performance of those products on your system."
    And, the Database Installlation Guide says:
    "If you plan to use Oracle JVM or Oracle interMedia, Oracle strongly recommends that you install the natively compiled Java libraries (NCOMPs) used by those products from the Oracle Database 10g Companion CD. These libraries are required to improve the performance of the products on your platform."
    Based on that, I am suspicious that maybe I have the product installed on the new machine, but not correctly (forgot to set fast=true).
    Let me know if you think of anything else you'd like to see. Thanks.
    - Jack
