Performance issues with FDK in large XML documents

In my current project with FrameMaker 8 I'm experiencing severe performance issues with some FDK API calls.
The documents are about 3-8 MB in size. Formatted, they cover 150-250 pages.
When importing such an XML document I do some extensive "post-processing" using the FDK. This processing happens in Sr_EventHandler() during the SR_EVT_END_READER event. I noticed that some FDK function calls which modify the document's structure, like F_ApiSetAttribute() or F_ApiNewElementInHierarchy(), take several seconds, for the larger documents even minutes, to complete a single call. I tried to move some of these calls to earlier events, mostly to SR_EVT_END_ELEM. There the calls work without a delay. Unfortunately I can't rewrite the FDK client to move all of the lagging calls to earlier events.
Does anybody have a clue why such delays happen, and possibly a suggestion for how to solve this issue? Thank you in advance.
PS: I already thought of splitting such a document into smaller pieces using the FrameMaker book function. But I don't think the structure of the documents will permit such an automatic split, and it definitely isn't an option to change the document structure (the project is about migrating documents from Interleaf to XML with the constraint of keeping the document layout identical).

FP_ApplyFormatRules sounds really good--I'll give it a try on Monday. I wonder how I could have missed it, as I already tried FP_Reformatting and FP_Displaying to no avail?! By the way, what is actually meant by FP_Reformatting (when I used it I assumed it would do exactly what FP_ApplyFormatRules sounds like it does), or is that another of Lynne's well-kept secrets?
Thanks for all the helpful suggestions, guys. On Friday I already had my first improvements in a test version of my client: I did some (not all of the necessary) structural changes using XSLT pre-processing, and processing time went down from 8 hours(!) to 1 hour--yippee! I was also playing with the idea of writing a wrapper for F_ApiNewElementInHierarchy() which actually pastes an appropriate element, created in a small flow on the reference pages, at the intended insertion location. But now, with FP_ApplyFormatRules on the horizon, I'm quite confident I can get even the complicated stuff under control: the parts that cannot be handled by the XSLT pre-processing because they depend on the actual formatting of the document at run time and cannot be anticipated in pre-processing.
--Franz
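
For readers hitting the same wall, here is a minimal, untested C sketch of the suspend-and-batch pattern discussed in this thread: screen updates and reformatting are switched off while the structural edits run, and the repagination cost is paid once at the end. doStructuralEdits() is a hypothetical placeholder for the batched F_ApiSetAttribute()/F_ApiNewElementInHierarchy() calls, and the exact property semantics (including FP_ApplyFormatRules, mentioned above) should be checked against the FDK reference.

#include "fapi.h"

/* Hypothetical placeholder for the batched structure-modifying calls. */
extern VoidT doStructuralEdits(F_ObjHandleT docId);

VoidT batchStructuralEdits(F_ObjHandleT docId)
{
    /* Suspend screen redraws for the whole session. */
    F_ApiSetInt(0, FV_SessionId, FP_Displaying, False);
    /* Suspend reformatting on the document so each structural edit
       does not repaginate the whole 150-250 page document. */
    F_ApiSetInt(0, docId, FP_Reformatting, False);

    doStructuralEdits(docId);

    /* Re-enable reformatting and pay the repagination cost once.
       (F_ApiReformat forces the reformat pass; verify in the FDK docs.) */
    F_ApiSetInt(0, docId, FP_Reformatting, True);
    F_ApiReformat(docId);
    F_ApiSetInt(0, FV_SessionId, FP_Displaying, True);
}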

Similar Messages

  • Performance Issues with large XML (1-1.5MB) files

    Hi,
    I'm using XML Schema based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I'm having serious performance issues with XPath queries.
    When I do an XPath query against an element of SQLType varchar2, I get good performance. But when I do a similar XPath query against an element of SQLType collection (VARRAY of varchar2), performance is very poor.
    I have also created indexes on extract() and analyzed my XMLType table and indexes, but got no performance gain. I have also tried all the storage options available for collections, i.e. VARRAYs, nested tables, IOTs, LOBs, inline, etc., and all of them gave me the same bad performance.
    I even tried creating XMLType views based on XPath queries, but the performance didn't improve much.
    I guess I'm running out of options, and of patience as well. ;)
    I would appreciate any ideas/suggestions. Please help!
    Thanks,
    Ramakrishna Chinta
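    A side note on the function-based index approach (a sketch with hypothetical names, not from the original post): an index on extractValue() can only serve an XPath that resolves to a single scalar node per row, which matches the observation that the varchar2-backed element performs well while the collection-backed (VARRAY) element does not.
    -- hypothetical table and paths, for illustration only
    CREATE TABLE xml_docs (id NUMBER PRIMARY KEY, doc XMLTYPE);
    -- helps XPaths that resolve to one scalar node per document:
    CREATE INDEX doc_title_ix ON xml_docs
      (extractValue(doc, '/Document/Title'));
    -- this predicate can use the index:
    SELECT id FROM xml_docs
    WHERE extractValue(doc, '/Document/Title') = 'Annual Report';
    -- a path into a repeating collection yields no single scalar per row,
    -- so the same function-based index trick does not apply there.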

    Are you having symptoms similar to mine? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0

  • Large XML document performance

    We are using XDB 9.2.0.4. I am seeing a severe performance degradation when attempting to extract larger XML documents from XDB (somewhere over 3 MB). Smaller documents appear to be working fine.
    I have been reading in the forum that the problem I am running into is most likely related to the storage model being used, i.e. there are several repeating elements within the schema.
    I have added the xdb:storeVarrayAsTable="true" annotation to the schema and re-registered it. I can see, based on user_nested_tables, that XDB now appears to be storing the repeating elements as nested tables rather than VARRAYs.
    The change to the storage model does not seem to have significantly changed query performance.
    The schemas I am using can be found at http://www.sasked.gov.sk.ca/xsd/sl/1.x/SLMessage.xsd and http://www.sasked.gov.sk.ca/xsd/sl/1.x/SDSElements.xsd
    The schema documentation can be found at http://www.sasked.gov.sk.ca/sds/xml/SchemaDocumentation/SLMessage.html
    The element /SL_Message/SL_Event/SL_ObjectData/SL_EventObject is the primary repeating element.
    I am using a table with an XMLType column:
    CREATE TABLE XML_SL_MESSAGE
    (XML_SL_MESSAGE_ID NUMBER(11) NOT NULL
    ,DTE_TIMESTAMP TIMESTAMP DEFAULT SYSTIMESTAMP NOT NULL
    ,ORIGINAL_XML_SL_MESSAGE_ID NUMBER(11)
    ,VALID_SL_MESSAGE_XML sys.XMLType
    ,INVALID_XML CLOB
    ,ERROR_MESSAGE VARCHAR2(4000)
    ) xmltype column valid_sl_message_xml XMLSCHEMA "http://www.sasked.gov.sk.ca/xsd/sl/1.x/SLMessage.xsd" element "SL_Message"
    The SQL I am using attempts to bring the XMLType back as a CLOB; the query seems to be intensive in both CPU and I/O (it looks like it is the getClobVal function):
    select xsm.xml_sl_message_id
    ,xsm.dte_timestamp
    ,nvl(xsm.valid_sl_message_xml.getClobVal(),xsm.invalid_xml) as xml_clob
    ,xsm.error_message
    ,xsm.original_xml_sl_message_id
    from xml_sl_message xsm
    where xsm.dte_timestamp > sysdate - 1
    I guess what I am wondering is: what are my options? Changing the storage model? Applying indexes?
    On an unrelated topic, are there many differences between XDB 9.2.0.5 and 9.2.0.4? (I don't believe 10g will be an option here … yet)
    Thanks in advance
    Trent
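    For reference, a hedged sketch of what the "re-registered" step usually looks like; the schema URL matches the post, but the repository path and the options shown are illustrative assumptions, not necessarily what was used. The xdb:storeVarrayAsTable="true" annotation itself goes on the xs:schema root element (namespace http://xmlns.oracle.com/xdb):
    BEGIN
      -- drop the old registration (and its generated types/tables) first
      DBMS_XMLSCHEMA.deleteSchema(
        schemaURL     => 'http://www.sasked.gov.sk.ca/xsd/sl/1.x/SLMessage.xsd',
        delete_option => DBMS_XMLSCHEMA.DELETE_CASCADE_FORCE);
      -- re-register the annotated schema so repeating elements are
      -- stored as nested tables
      DBMS_XMLSCHEMA.registerSchema(
        schemaURL => 'http://www.sasked.gov.sk.ca/xsd/sl/1.x/SLMessage.xsd',
        schemaDoc => XDBURIType('/home/schemas/SLMessage.xsd').getClob(),  -- assumed repository path
        local     => FALSE,
        genTypes  => TRUE,
        genTables => TRUE);
    END;
    /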

    I have applied the 9.2.0.5.0 patches and created the relational table with the following attributes:
    CREATE TABLE XML_SL_MESSAGE
    (XML_SL_MESSAGE_ID NUMBER(11) NOT NULL
    ,DTE_TIMESTAMP TIMESTAMP DEFAULT SYSTIMESTAMP NOT NULL
    ,PSE_SYS_USR_ID NUMBER(11) NOT NULL
    ,ORIGINAL_XML_SL_MESSAGE_ID NUMBER(11)
    ,VALID_SL_MESSAGE_XML sys.XMLType
    ,INVALID_XML CLOB
    ,ERROR_MESSAGE VARCHAR2(4000)
    xmltype column valid_sl_message_xml
    STORE AS OBJECT RELATIONAL
    XMLSCHEMA "http://www.sasked.gov.sk.ca/xsd/sl/1.x/SLMessage.xsd" ELEMENT "SL_Message"
    -- 1:1 ----------------------
    varray valid_sl_message_xml."XMLDATA"."SL_Event"."SL_ObjectData"."SL_EventObject"
    store as table SL_EVENTOBJECT2_TB(
    (constraint SL_EVENTOBJECT2_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 2:2 ----------------------
    varray "SchoolTerm"
    store as table SCHOOLTERM2_TB(
    (constraint SCHOOLTERM2_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 3:2 ----------------------
    varray "SchoolClass"
    store as table SCHOOLCLASS3_TB(
    (constraint SCHOOLCLASS3_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 6:2 ----------------------
    varray "StudentCourseHistory"
    store as table STUDENTCOURSEHISTORY6_TB(
    (constraint STUDENTCOURSEHISTORY6_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 7:2 ----------------------
    varray "StudentSupplementalMark"
    store as table STUDENTSUPPLEMENTALMARK7_TB(
    (constraint STUDENTSUPPLEMENTALMARK7_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 8:2 ----------------------
    varray "StudentClassMark"
    store as table STUDENTCLASSMARK8_TB(
    (constraint STUDENTCLASSMARK8_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 9:2 ----------------------
    varray "StudentExamRegistration"
    store as table STUDENTEXAMREGISTRATION9_TB(
    (constraint STUDENTEXAMREGISTRATION9_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 10:2 ----------------------
    varray "StudentClassEnrollment"
    store as table STUDENTCLASSENROLLMENT10_TB(
    (constraint STUDENTCLASSENROLLMENT10_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 11:2 ----------------------
    varray "StudentPersonal"
    store as table STUDENTPERSONAL11_TB(
    (constraint STUDENTPERSONAL11_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 18:2 ----------------------
    varray "StudentProgramEnrollment"
    store as table STUDENTPROGRAMENROLLMENT18_TB(
    (constraint STUDENTPROGRAMENROLLMENT18_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 19:2 ----------------------
    varray "StudentSchoolEnrollment"
    store as table STUDENTSCHOOLENROLLMENT19_TB(
    (constraint STUDENTSCHOOLENROLLMENT19_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 26:1 ----------------------
    varray valid_sl_message_xml."XMLDATA"."SL_Response"."SL_ObjectData"."SL_EventObject"
    store as table SL_EVENTOBJECT26_TB(
    (constraint SL_EVENTOBJECT26_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 27:2 ----------------------
    varray "SchoolTerm"
    store as table SCHOOLTERM27_TB(
    (constraint SCHOOLTERM27_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 28:2 ----------------------
    varray "SchoolClass"
    store as table SCHOOLCLASS28_TB(
    (constraint SCHOOLCLASS28_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 31:2 ----------------------
    varray "StudentProgramEnrollment"
    store as table STUDENTPROGRAMENROLLMENT31_TB(
    (constraint STUDENTPROGRAMENROLLMENT31_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 32:2 ----------------------
    varray "StudentExamRegistration"
    store as table STUDENTEXAMREGISTRATION32_TB(
    (constraint STUDENTEXAMREGISTRATION32_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 33:2 ----------------------
    varray "StudentClassEnrollment"
    store as table STUDENTCLASSENROLLMENT33_TB(
    (constraint STUDENTCLASSENROLLMENT33_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 34:2 ----------------------
    varray "StudentPersonal"
    store as table STUDENTPERSONAL34_TB(
    (constraint STUDENTPERSONAL34_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 41:2 ----------------------
    varray "StudentSchoolEnrollment"
    store as table STUDENTSCHOOLENROLLMENT41_TB(
    (constraint STUDENTSCHOOLENROLLMENT41_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 48:2 ----------------------
    varray "StudentClassMark"
    store as table STUDENTCLASSMARK48_TB(
    (constraint STUDENTCLASSMARK48_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 49:2 ----------------------
    varray "StudentCourseHistory"
    store as table STUDENTCOURSEHISTORY49_TB(
    (constraint STUDENTCOURSEHISTORY49_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 50:2 ----------------------
    varray "StudentSupplementalMark"
    store as table STUDENTSUPPLEMENTALMARK50_TB(
    (constraint STUDENTSUPPLEMENTALMARK50_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 51:1 ----------------------
    varray valid_sl_message_xml."XMLDATA"."SL_Response"."SL_Ack"."SL_Error"
    store as table SL_ERROR51_TB(
    (constraint SL_ERROR51_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    -- 52:1 ----------------------
    varray valid_sl_message_xml."XMLDATA"."SL_Request"."SL_Query"."SL_QueryObject"
    store as table SL_QUERYOBJECT52_TB(
    (constraint SL_QUERYOBJECT52_PK primary key (NESTED_TABLE_ID,ARRAY_INDEX))
    tablespace data
    ALTER TABLE XML_SL_MESSAGE
    ADD (CONSTRAINT XML_SL_MESSAGE_PK PRIMARY KEY
    (XML_SL_MESSAGE_ID))
    ALTER TABLE XML_SL_MESSAGE
    ADD (CONSTRAINT XMLSLMSG_ORIGINAL_XMLSLMSG_UK UNIQUE
    (ORIGINAL_XML_SL_MESSAGE_ID))
    ALTER TABLE XML_SL_MESSAGE ADD (CONSTRAINT
    XMLSLMSG_SYSUSR_FK FOREIGN KEY
    (PSE_SYS_USR_ID) REFERENCES PSE_SYS_USR
    (PSE_SYS_USR_ID))
    ALTER TABLE XML_SL_MESSAGE ADD (CONSTRAINT
    XMLSLMSG_ORIGINAL_XMLSLMSG_FK FOREIGN KEY
    (ORIGINAL_XML_SL_MESSAGE_ID) REFERENCES XML_SL_MESSAGE
    (XML_SL_MESSAGE_ID))
    -- Create a unique index for the XML Message id
    CREATE UNIQUE INDEX XMLSLMSG_MSGID_UNIQUE ON XML_SL_MESSAGE
    ((substr(extractValue(valid_sl_message_xml,'//SL_MsgId'),1,255)))
    tablespace indx
    COMPUTE STATISTICS
    Here is the nested table structure of the XMLType table created during the schema registration:
    select level
    ,parent_table_column
    from user_nested_tables
    connect by prior table_name = parent_table_name
    start with parent_table_name = 'SL_Message4724_TAB'
    LEVEL PARENT_TABLE_COLUMN
    1 "XMLDATA"."SL_Event"."SL_ObjectData"."SL_EventObject"
    2 SchoolTerm
    2 StudentSchoolEnrollment
    3 "StudentInfo"."Name"
    3 "StudentInfo"."Demographics"."CountryOfCitizenship"
    3 "StudentInfo"."StudentAddress"
    3 "StudentInfo"."PhoneNumber"
    3 "StudentInfo"."Demographics"."Language"
    3 "StudentInfo"."Email"
    2 SchoolClass
    3 "ClassInfo"."DeptAssignedCourseId"
    3 "ClassInfo"."EducatorCertificateNumber"
    2 StudentProgramEnrollment
    2 StudentClassEnrollment
    2 StudentClassMark
    2 StudentCourseHistory
    2 StudentSupplementalMark
    2 StudentExamRegistration
    2 StudentPersonal
    3 "StudentInfo"."Name"
    3 "StudentInfo"."Email"
    3 "StudentInfo"."Demographics"."CountryOfCitizenship"
    3 "StudentInfo"."StudentAddress"
    3 "StudentInfo"."PhoneNumber"
    3 "StudentInfo"."Demographics"."Language"
    1 "XMLDATA"."SL_Request"."SL_Query"."SL_QueryObject"
    1 "XMLDATA"."SL_Response"."SL_ObjectData"."SL_EventObject"
    2 SchoolTerm
    2 SchoolClass
    3 "ClassInfo"."DeptAssignedCourseId"
    3 "ClassInfo"."EducatorCertificateNumber"
    2 StudentProgramEnrollment
    2 StudentClassEnrollment
    2 StudentClassMark
    2 StudentCourseHistory
    2 StudentSupplementalMark
    2 StudentExamRegistration
    2 StudentPersonal
    3 "StudentInfo"."Name"
    3 "StudentInfo"."Email"
    3 "StudentInfo"."Demographics"."Language"
    3 "StudentInfo"."PhoneNumber"
    3 "StudentInfo"."StudentAddress"
    3 "StudentInfo"."Demographics"."CountryOfCitizenship"
    2 StudentSchoolEnrollment
    3 "StudentInfo"."Name"
    3 "StudentInfo"."Demographics"."Language"
    3 "StudentInfo"."PhoneNumber"
    3 "StudentInfo"."StudentAddress"
    3 "StudentInfo"."Demographics"."CountryOfCitizenship"
    3 "StudentInfo"."Email"
    1 "XMLDATA"."SL_Response"."SL_Ack"."SL_Error"
    52 rows selected.
    When I attempt to insert two previously valid XML documents I get a core dump … Here are the insert statements:
    insert into xml_sl_message (
    xml_sl_message_id, dte_timestamp, pse_sys_usr_id, original_xml_sl_message_id, valid_sl_message_xml, invalid_xml, error_message)
    select xml_sl_message_id, dte_timestamp, pse_sys_usr_id, original_xml_sl_message_id, xmltype(valid_sl_message_clob), invalid_xml, error_message
    from xml_sl_message_temp where xml_sl_message_id in (5154, 5155)
    Here are the details on the exception:
    Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.5.0 - Production
    ORACLE_HOME = /opt/app/oracle/product/9.2.0
    System name: SunOS
    Node name: *****
    Release: 5.8
    Version: Generic_117000-01
    Machine: sun4u
    Instance name: EDDSDS
    Redo thread mounted by this instance: 1
    Oracle process number: 76
    Unix process pid: 11460, image: oracle@***** (TNS V1-V3)
    *** 2004-06-22 09:24:34.603
    *** SESSION ID:(29.159) 2004-06-22 09:24:34.602
    Exception signal: 11 (SIGSEGV), code: 1 (Address not mapped to object), addr: 0x20, PC: [0x101e41830, 0000000101E41830]
    *** 2004-06-22 09:24:34.606
    ksedmp: internal or fatal error
    ORA-07445: exception encountered: core dump [0000000101E41830] [SIGSEGV] [Address not mapped to object] [0x000000020] [] []
    Current SQL statement for this session:
    insert into xml_sl_message (
    xml_sl_message_id, dte_timestamp, pse_sys_usr_id, original_xml_sl_message_id, valid_sl_message_xml, invalid_xml,error_message
    select xml_sl_message_id, dte_timestamp, pse_sys_usr_id, original_xml_sl_message_id, xmltype(valid_sl_message_clob), invalid_xml,error_message
    from xml_sl_message_temp where xml_sl_message_id in (5154,5155)
    I have tried a couple of different storage sections (with different levels of nesting) and still hit the same problem.
    Is there something wrong with my storage section?
    What is the "return as LOCATOR" clause for?

  • Performance issues with schema registration, select & insert queries

    I am facing performance issues with schema registration and with select and insert queries on version 10.2.0.3. It is taking around 45 minutes to register a schema, and around 5 minutes to insert a single document into XML DB, whereas it took less than a minute to insert a single document into XML DB on version 9.2.0.6. I would like to know the cause and a solution to resolve this issue. Please help me out on this, as it is very urgent for me.

    Since it appears that this is an XML DB specific question, you're probably better off posting in the XML DB forum. The folks over there have much more experience with the ins and outs of that particular product.
    Justin

  • Performance Issue with BSIS(open accounting items)

    Hey All,
    I am having a serious performance issue with an accrual report which gets all open GL items, and need some tips for optimization.
    The main issue is that I am accessing large tables like BSIS, BSEG, BSAS etc. without proper indexes, and that I am dealing with huge amounts of data.
    The select itself takes a long time, and after that, as I have so much data, overall execution is slow too.
    The select which concerns me the most is:
      SELECT zuonr hkont gjahr belnr buzei budat blart wrbtr shkzg xblnr waers bukrs
                 INTO TABLE i_bsis
                  FROM bsis
                  WHERE bukrs = '1000'
                  AND hkont in r_hkont   
                  AND budat <= p_lcdate
                  AND augdt = 0
                  AND augbl = space
                  AND gsber = c_ZRL1   
                  AND gjahr BETWEEN l_gjahr2 AND l_gjahr
                  AND ( blart = c_re      "Invoice
                  OR    blart = c_we      "Goods receipt
                  OR    blart = c_zc      "Invoice Cancels
                  OR    blart = c_kp ).   "Accounting offset
    I have seen other related threads, but they were not that helpful.
    We already have a secondary index on bukrs, hkont and budat, and I have checked in ST05 that it does use it. But in spite of that it takes more than 15 hrs to complete (maybe because of the huge amount of data).
    Any Input is highly appreciated.
    Thanks

    Thank you Thomas for your inputs.
    "You said that R_HKONT contains several ranges of account numbers. If these ranges cover a significant portion of the overall existing account numbers, then there is no really quick access possible via the BSIS primary key."
    Unfortunately R_HKONT contains all account numbers.
    "As Rob said, your index on HKONT and BUDAT does not help much, since you are selecting '<=' on BUDAT. No chance of narrowing down that range?"
    Will look into this.
    "What about GSBER? Does the value in c_ZRL1 provide a rather small subset of the overall values? Then an index on BUKRS and GSBER might be helpful."
    ZRL1 does provide a decent selection. But I don't know if one more index is a good idea for overall system performance.
    "I assume that the four document types are not very selective, so it probably does not pay off to investigate selecting on BKPF (there is an index involving BLART) and joining BSIS for the additional information. You still might want to look into it though."
    I did try to investigate this option too. Based on other threads related to BSIS and Rob's suggestion in those threads, I tried this:
    SELECT bukrs belnr gjahr blart budat
      FROM bkpf INTO TABLE bkpf_l
            WHERE bukrs = c_pepsico
            AND bstat IN (' ', 'A', 'B', 'D', 'M', 'S', 'V', 'W', 'Z')
            AND blart IN ('RE', 'WE', 'ZC', 'KP')
            AND gjahr BETWEEN l_gjahr2 AND l_gjahr
            AND budat <= p_lcdate.
    SELECT zuonr hkont gjahr belnr buzei budat blart wrbtr shkzg xblnr waers bukrs
               FROM bsis INTO TABLE i_bsis FOR ALL ENTRIES IN bkpf_l
                         WHERE bukrs = bkpf_l-bukrs
                          AND  hkont IN r_hkont
                          AND  budat = bkpf_l-budat
                          AND  augdt = 0
                          AND  augbl = space
                          AND  gjahr = bkpf_l-gjahr
                          AND  belnr = bkpf_l-belnr
                          AND  blart = bkpf_l-blart
                          AND  gsber = c_zrl1.
    This improves the select on BSIS a lot, but the first select on BKPF kills it. Not sure if this would help
    improve performance overall.
    Also, I was wondering whether it would help to refresh the table statistics through DB20. The last refresh
    was done 7 months ago. How frequently should we do this? Will it help?

  • Performance issues with Homesharing?

    I have a Time Capsule as the base station for my wireless network, then two AirPort Express units set up to extend the network around the house, an iMac i7 as the main iTunes library, a couple of iPads, and a couple of Apple TVs. Everything has the latest software, but I have several performance issues with Home Sharing. I've done several tests making sure nothing is taking additional bandwidth, so here is the list of issues:
    1) With nothing else running, when trying to play a movie via Home Sharing on an iPad 2 (the movie is located on my iMac), it stops and I have to keep pressing the play button over and over again. Typically the iPad tries to download part of the movie first and then starts playing so that it can deal with the bandwidth, but in many cases it doesn't.
    2) When trying to play any iTunes content (movies, music, photos, etc.) from my Apple TV, I can see my computer's library, but when I go into any of the menus, it says there's no content. I have to reboot the Apple TV and then the problem is fixed. It's just annoying that I have to reboot.
    3) When watching a Netflix movie on my iPad, I send the sound via AirPlay to some speakers through an AirPort Express. At times I lose the connection to the speakers.
    I've complained about Wi-Fi's instability, but here I tried to keep everything Apple products to avoid any compatibility issues and stay within N wireless technology, which I understood to be much more stable.
    Does anyone have any suggestions?


  • Performance issues with version-enabled partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the optimizer is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
    Operation                                Object Name                     Rows  Bytes  Cost  PStart  PStop
    UPDATE STATEMENT (Optimizer Mode=CHOOSE)                                    1         249
    UPDATE                                   SIG.SIG_QUA_IMG_LT
    NESTED LOOPS SEMI                                                           1    266  249
    PARTITION RANGE ALL                                                                         1       9
    TABLE ACCESS FULL                        SIG.SIG_QUA_IMG_LT                 1    259    2   1       9
    VIEW                                     SYS.VW_NSO_1                       1      7  247
    NESTED LOOPS                                                                1    739  247
    NESTED LOOPS                                                                1    677  247
    NESTED LOOPS                                                                1    412  246
    NESTED LOOPS                                                                1    114  244
    INDEX RANGE SCAN                         WMSYS.MODIFIED_TABLES_PK           1     62    2
    INDEX RANGE SCAN                         SIG.QIM_PK                         1     52  243
    TABLE ACCESS BY GLOBAL INDEX ROWID       SIG.SIG_QUA_IMG_LT                 1    298    2   ROWID   ROW L
    INDEX RANGE SCAN                         SIG.SIG_QUA_IMG_PKI$               1           1
    INDEX RANGE SCAN                         WMSYS.WM$NEXTVER_TABLE_NV_INDX     1    265    1
    INDEX UNIQUE SCAN                        WMSYS.MODIFIED_TABLES_PK           1     62
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */                                        
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1                                        
    SET z1.nextver =                                        
    SYS.ltutil.subsversion                                        
    (z1.nextver,                                        
    SYS.ltutil.getcontainedverinrange (z1.nextver,                                        
    'SIG.SIG_QUA_IMG',                                        
    'NpCyPCX3dkOAHSuBMjGioQ==',                                        
    4574,                                        
    4575),
    4574)
    WHERE z1.ROWID IN (
    (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
    INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
    INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
    t2.ROWID
    FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j1,
    sig.sig_qua_img_lt t1,
    sig.sig_qua_img_lt t2,
    wmsys.wm$nextver_table j2,
    (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j3
    WHERE t1.VERSION = j1.VERSION
    AND t1.ima_id = t2.ima_id
    AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
    AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
    AND t2.nextver != '-1'
    AND t2.nextver = j2.next_vers
    AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been recently analyzed so that the optimizer has the most current data about the table.
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben
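    As a concrete form of Ben's analyze suggestion (a sketch; the owner and table names are taken from the plan in the question, and the options shown are generic):
    BEGIN
      DBMS_STATS.gather_table_stats(
        ownname          => 'SIG',
        tabname          => 'SIG_QUA_IMG_LT',
        cascade          => TRUE,  -- gather index statistics too
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /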

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse load ETL process. I have run
    analyze and dbms_stats and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
    you should analyze the db after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache values (alter sequence s cache 10000)
    Drop all unneeded indexes while loading, and disable triggers if possible.
    How big is your Redo Log Buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already use direct load?
    Dim
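    To make Dim's suggestions concrete, a short sketch (the sequence and table names are placeholders): a larger sequence cache avoids a dictionary update per fetch, and a direct-path (APPEND) insert writes above the high-water mark, bypassing the buffer cache, with minimal redo when the table is NOLOGGING.
    -- cache more sequence values so PK generation is not a bottleneck:
    ALTER SEQUENCE my_pk_seq CACHE 10000;
    -- direct-path load from a staging table:
    ALTER TABLE fact_sales NOLOGGING;
    INSERT /*+ APPEND */ INTO fact_sales
    SELECT * FROM staging_sales;
    COMMIT;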

  • Performance issue with Jdeveloper

    Hi Guys,
    I am experiencing a strange performance issue with JDeveloper 10.1.3.3.0.4157. There are many other threads regarding performance issues in this forum, but the problem I have is a little bit different.
    I have two computers: one is an Athlon 3200+ with Vista and the other is a P4 dual-core 6400 with XP (Service Pack 2). Both have 2 GB of memory.
    I am running the same simple project on both computers, but only the one with Vista has the problem. The problem is very similar to the problem mentioned in the thread:
    Re: IDE has become extremely slow?
    But it's much worse. It happens only on JSF pages. Basically, any operations on the JSF pages are very slow. Loading the page, changing the attributes of a button in the source editor, or even clicking the items in the design view take forever to run.
    The first weird thing is that it may use 100% CPU but never recovers, meaning the 100% CPU usage never stops, or when it stops, JDeveloper stops responding.
    The second weird thing is that the project is not big. Actually, it's very small. The problem started to happen last week. There were no big changes during that period. The only thing I can say is that we created two more JSF pages.
    The third weird thing is that the same problem never happened on the P4+XP box. When I open the project on the P4+XP box, it's always fast and there are no CPU spikes.
    Any advice is welcome!
    Thanks,
    Steven

    Hi Guys,
    I re-made a simple test project for this problem and now I now always reproduce the problem in JDeveloper on both system (XP & Vista). Everytime I open this jspx file in the source editor and try to scroll up/down the source file, or manually delete an attribute, JDeveloepr will hang and the CPU usage is 0%.
    Here is the content of the test file:
    <?xml version='1.0' encoding='windows-1252'?>
    <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.0"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:af="http://xmlns.oracle.com/adf/faces"
    xmlns:afh="http://xmlns.oracle.com/adf/faces/html">
    <jsp:output omit-xml-declaration="true" doctype-root-element="HTML"
    doctype-system="http://www.w3.org/TR/html4/loose.dtd"
    doctype-public="-//W3C//DTD HTML 4.01 Transitional//EN"/>
    <jsp:directive.page contentType="text/html;charset=windows-1252"/>
    <f:view>
    <afh:html binding="#{backing_streettypedetail.html1}" id="html1">
    <afh:head title="streettypedetail"
    binding="#{backing_streettypedetail.head1}" id="head1">
    <meta http-equiv="Content-Type"
    content="text/html; charset=windows-1252"/>
    </afh:head>
    <afh:body binding="#{backing_streettypedetail.body1}" id="body1">
    <af:messages binding="#{backing_streettypedetail.messages1}"
    id="messages1"/>
    <h:form binding="#{backing_streettypedetail.form1}" id="form1">
    <af:panelForm binding="#{backing_streettypedetail.panelForm1}"
    id="panelForm1">
    <af:inputText value="#{bindings.streetTypeID.inputValue}"
    label="#{bindings.streetTypeID.label}"
    required="#{bindings.streetTypeID.mandatory}"
    columns="#{bindings.streetTypeID.displayWidth}"
    binding="#{backing_streettypedetail.inputText1}"
    id="inputText1">
    <af:validator binding="#{bindings.streetTypeID.validator}"/>
    </af:inputText>
    <af:inputText value="#{bindings.description.inputValue}"
    label="#{bindings.description.label}"
    required="#{bindings.description.mandatory}"
    columns="#{bindings.description.displayWidth}"
    binding="#{backing_streettypedetail.inputText2}"
    id="inputText2">
    <af:validator binding="#{bindings.description.validator}"/>
    </af:inputText>
    <af:inputText value="#{bindings.abbr.inputValue}"
    label="#{bindings.abbr.label}"
    required="#{bindings.abbr.mandatory}"
    columns="#{bindings.abbr.displayWidth}"
    binding="#{backing_streettypedetail.inputText3}"
    id="inputText3">
    <af:validator binding="#{bindings.abbr.validator}"/>
    </af:inputText>
    <f:facet name="footer">
    <h:panelGroup binding="#{backing_streettypedetail.panelGroup1}"
    id="panelGroup1">
    <af:commandButton text="Save"
    binding="#{backing_streettypedetail.saveButton}"
    id="saveButton"
    actionListener="#{bindings.mergeEntity.execute}"
    action="#{userState.retrieveReturnNavigationRule}"
    disabled="#{!bindings.mergeEntity.enabled}"
    partialSubmit="false">
    <af:setActionListener from="#{true}"
    to="#{userState.refresh}"/>
    </af:commandButton>
    <af:commandButton text="Cancel"
    binding="#{backing_streettypedetail.cancelButton}"
    action="#{userState.retrieveReturnNavigationRule}"
    id="cancelButton">
    <af:setActionListener from="#{false}"
    to="#{userState.refresh}"/>
    </af:commandButton>
    </h:panelGroup>
    </f:facet>
    </af:panelForm>
    </h:form>
    </afh:body>
    </afh:html>
    </f:view>
    <!--oracle-jdev-comment:auto-binding-backing-bean-name:backing_streettypedetail-->
    </jsp:root>
    Can anybody take a look at the file and let me know what's wrong with it?
    Thanks in advance.
    Steven

  • Data insertion from large XML document (CLOB) into relational table very slow

    Hi Everybody!
    I'm working with Oracle 9.2.0.5 on Microsoft Windows Server 2003 Enterprise Edition.
    The server (a test server) is a Pentium 4 2.8 GHz, 1GB of RAM.
    I use a procedure called PARITOP_TRAITERXMLRESULTMASSE to insert the data contained in the pXMLDOC CLOB parameter into the table pTABLENAME. (You can see the format of the XML document below.) The first step in this procedure is to verify that the XML document is not empty. If it is not, the procedure needs to add a node to the document, in every <ROW> tag. This added node is named "RST_ID"; it is the foreign key of each record. I can retrieve the value of each <RST_ID> node from another table into which the data has previously been added (by the calling procedure). When each of the <ROW> elements has been treated, the PARITOP_INSERTXML procedure is called. This procedure uses DBMS_XMLSAVE.INSERTXML to insert the data contained in the XML document into the specified table.
    (Below, you can see the code of my procedures.)
    With this information, can you tell me why this treatment is so very slow with a large XML document, and how I can improve it?
    Thank you for your help!
    Anne-Marie
    CREATE OR REPLACE PROCEDURE "PARITOP_TRAITERXMLRESULTMASSE" (
    pPRC_ID IN PARITOP_PARC.PRC_ID%TYPE,
    pRST_MONDE IN PARITOP_RESULTAT.RST_MONDE%TYPE,
    pXMLDOC IN CLOB,
    pTABLENAME IN VARCHAR2)
    AS
    /* Purpose: insert the content of the XML passed as a parameter (pXMLDOC)
       into the table passed as a parameter (pTABLENAME).
       The table passed as a parameter must have the "RST_ID" column as a foreign key.
       (The RST_ID node is therefore added to every XML document. This rst_id is
       determined from the PARITOP_RESULTAT table using the pPRC_ID and
       pRST_MONDE supplied as parameters.) */
    result_doc CLOB;
    XMLDOMDOC XDB.DBMS_XMLDOM.DOMDOCUMENT;
    NODE_ROWSET DBMS_XMLDOM.DOMNODE;
    NODE_ROW DBMS_XMLDOM.DOMNODE;
    vUE_ID PARITOP_RESULTAT.UE_ID%TYPE;
    vRST_ID PARITOP_RESULTAT.RST_ID%TYPE;
    nodeList DBMS_XMLDOM.DOMNODELIST;
    BEGIN
    BEGIN
    vUE_ID := 0;
    vRST_ID := 0;
    XMLDOMDOC := DBMS_XMLDOM.NEWDOMDOCUMENT(pXMLDOC);
    IF NOT GESTXML_PKG.FN_PARITOP_DOCUMENT_IS_NULL(XMLDOMDOC) THEN
    NODE_ROWSET := DBMS_XMLDOM.item(DBMS_XMLDOM.GETCHILDNODES (DBMS_XMLDOM.MAKENODE(XMLDOMDOC)),0);
    for i in 0..dbms_xmldom.getLength(DBMS_XMLDOM.getchildnodes(NODE_ROWSET))-1 loop
    NODE_ROW := DBMS_XMLDOM.ITEM(DBMS_XMLDOM.GETCHILDNODES(NODE_ROWSET), i) ;
    nodeList := DBMS_XMLDOM.GETELEMENTSBYTAGNAME(DBMS_XMLDOM.makeelement(NODE_ROW) , 'UE_ID');
    IF vUE_ID <> DBMS_XMLDOM.GETNODEVALUE(DBMS_XMLDOM.GETFIRSTCHILD(DBMS_XMLDOM.ITEM(nodeList, 0))) THEN
    vUE_ID := DBMS_XMLDOM.GETNODEVALUE(DBMS_XMLDOM.GETFIRSTCHILD(DBMS_XMLDOM.ITEM(nodeList, 0)));
    -- pick up the rst_id
    SELECT RST_ID INTO vRST_ID
    FROM PARITOP_RESULTAT RST
    WHERE RST.PRC_ID = pPRC_ID
    AND RST.UE_ID = vUE_ID
    AND RST.RST_MONDE = pRST_MONDE
    AND RST_A_SUPPRIMER = 0;
    END IF;
    GESTXML_PKG.PARITOP_ADDNODETOROW(XMLDOMDOC, NODE_ROW, 'RST_ID', vRST_ID);
    end loop;
    RESULT_DOC := ' '; -- keep this so that writeToClob does not raise an error.
    dbms_xmldom.writeToClob(DBMS_XMLDOM.MAKENODE(XMLDOMDOC), RESULT_DOC);
    -- insert the XML document into the table "tableName"
    GESTXML_PKG.PARITOP_INSERTXML(RESULT_DOC, pTABLENAME);
    DBMS_XMLDOM.FREEDOCUMENT( XMLDOMDOC);
    end if;
    EXCEPTION
    [... exception handling ...]
    END;
    END;
    The format of an XML CLOB is:
    <ROWSET>
    <ROW>
    <PRC_ID>193</PRC_ID>
    <UE_ID>8781</UE_ID>
    <VEN_ID>6223</VEN_ID>
    <RST_MONDE>0</RST_MONDE>
    <CMP_SELMAN>0</CMP_SELMAN>
    <CMP_INDICESELECTION>92.307692307692307</CMP_INDICESELECTION>
    <CMP_PVRES>94900</CMP_PVRES>
    <CMP_PVAJUSTE>72678.017699115066</CMP_PVAJUSTE>
    <CMP_PVAJUSTEMIN>72678.017699115095</CMP_PVAJUSTEMIN>
    <CMP_PVAJUSTEMAX>72678.017699115037</CMP_PVAJUSTEMAX>
    <CMP_PV>148000</CMP_PV>
    <CMP_VALROLE>129400</CMP_VALROLE>
    <CMP_PVRESECART>4790</CMP_PVRESECART>
    <CMP_PVRHAB>101778.01769911509</CMP_PVRHAB>
    <CMP_UTILISE>1</CMP_UTILISE>
    <CMP_TVM>1</CMP_TVM>
    <CMP_PVA>148000</CMP_PVA>
    </ROW>
    <ROW>
    <PRC_ID>193</PRC_ID>
    <UE_ID>8781</UE_ID>
    <VEN_ID>6235</VEN_ID>
    <RST_MONDE>0</RST_MONDE>
    <CMP_SELMAN>0</CMP_SELMAN>
    <CMP_INDICESELECTION>76.92307692307692</CMP_INDICESELECTION>
    <CMP_PVRES>117800</CMP_PVRES>
    <CMP_PVAJUSTE>118080</CMP_PVAJUSTE>
    <CMP_PVAJUSTEMIN>118080</CMP_PVAJUSTEMIN>
    <CMP_PVAJUSTEMAX>118080</CMP_PVAJUSTEMAX>
    <CMP_PV>172000</CMP_PV>
    <CMP_VALROLE>134800</CMP_VALROLE>
    <CMP_PVRESECART>0</CMP_PVRESECART>
    <CMP_PVRHAB>147180</CMP_PVRHAB>
    <CMP_UTILISE>1</CMP_UTILISE>
    <CMP_TVM>1</CMP_TVM>
    <CMP_PVA>172000</CMP_PVA>
    </ROW>
    </ROWSET>
    PARITOP_COMPARABLE TABLE :
    RST_ID NUMBER(10) NOT NULL,
    VEN_ID NUMBER(10) NOT NULL,
    CMP_SELMAN NUMBER(1) NOT NULL,
    CMP_UTILISE NUMBER(1) NOT NULL,
    CMP_INDICESELECTION FLOAT(53) NOT NULL,
    CMP_PVRES FLOAT(53) NULL,
    CMP_PVAJUSTE FLOAT(53) NULL,
    CMP_PVRHAB FLOAT(53) NULL,
    CMP_TVM FLOAT(53) NULL
    PROCEDURE PARITOP_INSERTXML (xmlDoc IN CLOB, tableName IN VARCHAR2)
    AS
    insCtx DBMS_XMLSave.ctxType;
    rowss number;
    BEGIN
    -- inserts the XML fields into the table passed as a parameter.
    -- the XML fields just need to have the same names as the table's columns
    BEGIN
    insCtx := DBMS_XMLSave.newContext(tableName); -- get context handle
    DBMS_XMLSAVE.SETDATEFORMAT( insCtx, 'yyyy-MM-dd HH:mm:ss'); -- careful: the format string is case sensitive
    DBMS_XMLSAVE.setIgnoreCase(insCtx, 1);
    rowss := DBMS_XMLSAVE.INSERTXML(insCtx , xmlDoc);
    DBMS_XMLSave.closeContext(insCtx);
    EXCEPTION
    […]
    END;
    END;
    PROCEDURE PARITOP_ADDNODETOROW (
    XMLDOMDOC DBMS_XMLDOM.DOMDOCUMENT,
    NODE_ROW dbms_xmldom.DOMNode,
    NOM_NOEUD VARCHAR2,
    VALEUR_NOEUD VARCHAR2)
    AS
    -- ADDS A NODE WITH A SINGLE VALUE INTO A ROW OF AN XML DOCUMENT.
    -- USEFUL MAINLY FOR FOREIGN KEYS
    domElemAInserer DBMS_XMLDOM.DOMELEMENT;
    NODE dbms_xmldom.DOMNode;
    NODE_TMP dbms_xmldom.DOMNode;
    BEGIN
    domElemAInserer := DBMS_XMLDOM.createElement(XMLDOMDOC, NOM_NOEUD) ;
    NODE := DBMS_XMLDOM.MAKENODE(domElemAInserer); --cast
    NODE := DBMS_XMLDOM.APPENDCHILD(NODE_ROW,NODE);
    NODE_TMP := DBMS_XMLDOM.MAKENODE(DBMS_XMLDOM.CREATETEXTNODE(XMLDOMDOC, VALEUR_NOEUD ) );
    NODE := DBMS_XMLDOM.APPENDCHILD(NODE,NODE_TMP );
    END;
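    One lever worth checking in PARITOP_INSERTXML (an assumption to verify against the 9.2 documentation, not something from the original post): DBMS_XMLSAVE supports batching of binds and commits, so the insert is not processed one row at a time. The row-by-row DOM walk that prepends RST_ID is the other obvious hotspot; doing that rewrite outside the database (e.g. with XSLT) before the insert is a common workaround.
    insCtx := DBMS_XMLSAVE.newContext(tableName);
    DBMS_XMLSAVE.setBatchSize(insCtx, 100);    -- batch the bind operations
    DBMS_XMLSAVE.setCommitBatch(insCtx, 100);  -- commit every 100 rows
    rowss := DBMS_XMLSAVE.insertXML(insCtx, xmlDoc);
    DBMS_XMLSAVE.closeContext(insCtx);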


  • Bulk Loader Program to load large xml document

    I am looking for a bulk loader database program that will load a very large XML document. The simple bulk loader application available on the Oracle site will not load this document due to its size, which is approximately 20 MB. Please advise ASAP. Thank you.

    From the above document:
    Storing XML Data Across Tables
    Question
    Can the XML-SQL Utility store XML data across tables?
    Answer
    Currently the XML-SQL Utility (XSU) can only store to a single table. It maps a canonical representation of an XML document into any table/view. But of course there is a way to store XML with the XSU across tables. One can do this using XSLT to transform a document into multiple documents and insert them separately. Another way is to define views over multiple tables (object views if needed) and then do the inserts ... into the view. If the view is inherently non-updatable (because of complex joins, ...), then one can use INSTEAD-OF triggers over the views to do the inserts.
    -- I've tried this, works fine.
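    A hedged sketch of the views approach described above (all object names, including cust_seq, are hypothetical): define a join view over the target tables and an INSTEAD OF trigger, so XSU can insert into the view as if it were a single table.
    CREATE OR REPLACE VIEW order_doc_v AS
      SELECT o.order_id, o.order_date, c.customer_name
      FROM orders o JOIN customers c ON c.customer_id = o.customer_id;
    CREATE OR REPLACE TRIGGER order_doc_v_ins
      INSTEAD OF INSERT ON order_doc_v
      FOR EACH ROW
    BEGIN
      INSERT INTO customers (customer_id, customer_name)
      VALUES (cust_seq.NEXTVAL, :NEW.customer_name);  -- cust_seq is assumed
      INSERT INTO orders (order_id, order_date, customer_id)
      VALUES (:NEW.order_id, :NEW.order_date, cust_seq.CURRVAL);
    END;
    /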

  • Performance issue with BSEG

    Hi,
    I am having a serious performance issue due to the BSEG table. I have a change request in which I have to solve the performance issue with regard to BSEG. The situation was that previously select * was used on both BKPF and BSEG. I removed the select * and selected only those fields which are required, as shown below. I also tried using cursors. But the problem is happening on the TEST server, where BSEG has more than 1 crore (10 million) entries. I have gone through some threads but am still not able to understand how to solve this problem. Please help.
    select bukrs belnr gjahr bldat bstat from bkpf into table T_BKPF_p
                                                    WHERE BUKRS IN sd_bukrs AND
                                                    BLDAT < s_bldat-low
                                                    and  BSTAT = ' ' .
    select bukrs belnr gjahr shkzg dmbtr hkont from bseg into table T_BSEG_C
                                            FOR ALL ENTRIES IN t_BKPF_p
                                            WHERE BUKRS = T_bkpf_p-bukrs
                                            AND   BELNR = T_bkpf_p-belnr
                                            AND   GJAHR = T_bkpf_p-gjahr
                                            AND   HKONT = SKB1-SAKNR.

    Hi Kunal,
    Here is my take on your issue.
    In your select statement on BKPF you are selecting every BKPF record with the specified company code and a blank document status that was created before the specified date. If your company implemented SAP 10 years ago, and your user enters today's date with a wide-open company code range, you will effectively be retrieving almost all the records from BKPF (excluding the ones created today or those with a non-blank document status). This would effectively be a huge amount of data. After that you are looking for the corresponding BSEG records for all the records that you have selected from BKPF.
    My question to you is: why do you need to look at all the records before a given date? Why not ask the user to enter a smaller date range and make the document date and the company code mandatory entries? You do not have to look at 10 years' worth of data, especially if you are running this online (as opposed to in the background).
    Your BSEG select looks correct. There is very little that you can do except for adding BUZEI to the field list. If you use FOR ALL ENTRIES and do not include the entire primary key, you could lose data.
    TABLES: bkpf,
            skb1.
    SELECT-OPTIONS: s_bldat  FOR bkpf-bldat OBLIGATORY,
                    sd_bukrs FOR bkpf-bukrs OBLIGATORY.
    TYPES: BEGIN OF ty_bkpf,
            bukrs TYPE bkpf-bukrs,
            belnr TYPE bkpf-belnr,
            gjahr TYPE bkpf-gjahr,
            bldat TYPE bkpf-bldat,
            bstat TYPE bkpf-bstat,
          END OF ty_bkpf,
          BEGIN OF ty_bseg,
            bukrs TYPE bseg-bukrs,
            belnr TYPE bseg-belnr,
            gjahr TYPE bseg-gjahr,
            buzei TYPE bseg-buzei,
            shkzg TYPE bseg-shkzg,
            dmbtr TYPE bseg-dmbtr,
            hkont TYPE bseg-hkont,
          END OF ty_bseg.
    DATA: t_bkpf_p TYPE TABLE OF ty_bkpf,
          t_bseg_c TYPE TABLE OF ty_bseg.
    SELECT bukrs
           belnr
           gjahr
           bldat
           bstat
    FROM bkpf
    INTO TABLE t_bkpf_p
    WHERE bukrs IN sd_bukrs
    AND   bldat IN s_bldat
    AND   bstat EQ space .
    IF NOT t_bkpf_p[] IS INITIAL.
      SELECT bukrs
             belnr
             gjahr
             buzei
             shkzg
             dmbtr
             hkont
        FROM bseg
        INTO TABLE t_bseg_c
        FOR ALL ENTRIES IN t_bkpf_p
        WHERE bukrs EQ t_bkpf_p-bukrs
        AND   belnr EQ t_bkpf_p-belnr
        AND   gjahr EQ t_bkpf_p-gjahr
        AND   hkont EQ skb1-saknr.
    ENDIF.

  • Performance Issue with SXMB_MONI

    Hi All,
    I have a typical performance issue with SXMB_MONI: when I run this transaction it takes around 20-24 hrs to execute.
    I have found some tables which actually store these processed XML messages:
    SXMSPFADDRESS
    SXMSPFRAWH
    RSXMB_REMOTE_SERVICE
    SXMSPFAGG
    SXMSCONFVL
    SXMSPMAST
    SXMSPEMAS, SXMSPERROR, SXMSPMAST & SXMSPVERS.
    SXMSPMAST, SXMSPMAST2, SXMSCLUR, SXMSCLUR2,
    SXMSCLUP, SMXSLUP2, SXMSPFRAWH,
    I want to increase the performance of SXMB_MONI. Firstly, I want to know from which tables SXMB_MONI fetches data, and moreover whether it is a single table or multiple tables.
    Please also suggest any technique which can decrease the latency in executing SXMB_MONI.
    Regards,
    Vijay N

    Hi,
    Periodically you need to archive the XI messages; that allows you to maintain a sufficient performance level.
    Create archive jobs in SXMB_ADM to archive data which is 15 days old from XI-related growing tables like SXMSCLUR, SXMSPEMAS, SXMSPHIST, SXMSPMAST, SXMSPVERS, SXMSPFRAW and SWWWIHEAD. This archive job creates archive files at the OS level.
    For XI tables refer
    /people/gourav.khare2/blog/2007/12/12/interesting-abap-tables-in-xi-150-part-i
    SXMSPMAST, SXMSCLUP and SXMSPCLUR; the last two are cluster tables, and you won't get XML messages directly from them, so have a look inside them.
    The classes that read this information in SXMB_MONI are ABAP classes (they can be seen in SE24); it is quite difficult to use them. You might debug SXMB_MONI or use SE30 and see all the classes that have been used.
    You can use value mapping if you are not looking at picking up values from the application system. It is just like the SM30 transaction. You can find the info under SAP XI -> Design and Configuration -> Configuration -> Value Mapping.
    Also see these tables:
    /SAPDMC/LSOMAP Field Mapping
    /SAPTRX/SCAOTMAP
    /SAPTRX/SCCNDMAP /SAPTRX/SCEVTMAP
    /SAPTRX/SCFUNMAP /SAPTRX/SCSOMAP
    /people/udo.martens/blog/2006/02/16/own-logging-of-xi-messages
    message-mappings: stored in which database-table?
    sxmb_moni, table sxmspmast, Messages with ICON_LED_RED, report RSXMB_SELECT
    http://help.sap.com/saphelp_nw04s/helpdata/en/44/a1b46c4c686341e10000000a114a6b/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/ef/45393c3eb3036be10000000a11402f/frameset.htm
    Thanks
    Swarup

  • XMLTYPE as CLOB storage "inserting large xml document in xml type column"

    Hi All,
    I have a table containing an XMLType column (non-schema-based).
    I would like to insert a large XML document into it,
    but an exception is thrown: "string literal too long".
    I tried to use bind variables as a solution (prepared statements, as I write in Java),
    but it didn't work, as the XML document is large.
    When I tried to change the column type to CLOB, it worked, but without XML validation;
    although the XMLType is mapped to a CLOB in storage, XMLType couldn't insert the document.
    If anyone has a solution please tell me; I need it urgently.
    Thanks in advance :-)

    Thanks, it was very useful :-)
    But I am not having any success getting the following statement working using a JDBC connection pool rather than a hard-coded URL connection:
    tempClob = CLOB.createTemporary(conn, true, CLOB.DURATION_SESSION);
    It works with:
    "jdbc:oracle:thin:@server:port:dbname"
    Does NOT work with:
    datasource.getConnection()
    If anyone could help...
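    Background on the "string literal too long" error from the question above (a sketch; the table and column names are hypothetical): the error comes from inlining the document as a SQL string literal, which is capped at 4000 characters. Binding the document as a CLOB and constructing the XMLType from it avoids the limit while keeping the XMLType column and its validation.
    -- from Java, bind the CLOB on a PreparedStatement (setClob) rather
    -- than concatenating the document into the SQL text; the PL/SQL
    -- equivalent looks like this:
    DECLARE
      v_doc CLOB;
    BEGIN
      DBMS_LOB.createTemporary(v_doc, TRUE);
      DBMS_LOB.writeAppend(v_doc, 5, '<a/> ');  -- stand-in for the real document
      INSERT INTO xml_docs (doc) VALUES (XMLType(v_doc));
      DBMS_LOB.freeTemporary(v_doc);
    END;
    /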

  • Performance Issue with RF Scanners after SAP Enhancement Pack 5 Upgrade

    We are on component version SAP ECC 6.0, and recently upgraded to Enhancement Pack 5. I believe we are on NetWeaver 7.10, and we are using RF scanners in one plant that is warehouse managed. Evidently when we moved to EHP5 the Web SAP Console went away, and we are left with ITS Mobile. This has created several issues and continues to be a performance barrier for the forklift drivers in the warehouse. We see there is heavy use of JavaScript, and the processors can't handle it. When we log in to tcode LM00 on a laptop or desktop computer, there are no performance issues. When we log in to tcode LM00 with the RF scanners, the system is very slow. It might take 30 seconds to confirm one item on a WM Transfer Order.
    1.) Can we revert to Web SAP Console after we have upgraded to EHP5?
    2.) What is creating the performance issues with the RF scanners now that we have switched over to SAP ITS Mobile?
    Our RF scanners are made by Intermec, but I don't think that is where the solution lies. One person in our IT Operations has spent a good deal of time configuring SAP ITS to get it to work, but it still isn't performing.

    Tom,
    I am sorry I did not see this earlier.
    I'm currently working on a very similar project with ITS mobile and the problem is to accurately determine the root cause of the problem in the least amount of time. The tool that works is found here: http://www.connectrf.com/index.php/mcm/managed-diagnostics/
    Isolating the network from the application and the device is a time-consuming process unless you have a piece of software that can trace the HTTP transaction between host and device on both the wired and wireless sides of the network. Once that is achieved (as with Connect's tool), you can then begin to solve the problem.
    What I found in my project is that the amount of data traffic generated by ITS Mobile can be reduced drastically, which speeds up the response time of the mobile devices, especially with a large number of devices in distribution centers.
    Let me know if I can answer more questions related to this topic.
    Cheers,
    Shari
