ORA-22813 ?

Hi,
I've run into an interesting ORA-22813 error. I ran the following SQL against an XMLType table.
SQL> select
2 extractValue(value(a), '/GTP/TransmissionSource/@ID') TransmissionSourceID,
3 extractValue(value(a), '/GTP/TransmissionSource/@Name') TransmissionSourceName,
4 extractValue(value(b), '/Study/@ID') StudyID,
5 extractValue(value(b), '/Study/@Name') StudyName,
6 extractValue(value(b), '/Study/@TransmissionType') StudyTransmissionType,
7 extractValue(value(c), '/Site/@ID') SiteID,
8 extractValue(value(d), '/Investigator/@ID') InvestigatorID,
9 extractValue(value(d), '/Investigator/@Name') InvestigatorName
10 from gtp x,
11 table(xmlsequence(extract(value(x),'/GTP'))) a,
12 table(xmlsequence(extract(value(x),'/GTP/Study'))) b,
13 table(xmlsequence(extract(value(x),'/GTP/Study/Site'))) c,
14 table(xmlsequence(extract(value(x),'/GTP/Study/Site/Investigator'))) d
15 /
ORA-22813: operand value exceeds system limits
SQL>
Not sure how to get around this one. The query had run correctly a number of times without any issue.
Also, I've got over ~90 SYS_IOT_OVER_39#### and ~70 SYS_NT+qzhFuERTRio9lfg061upw==-style table objects.
Not sure what these are?
Thanks.

OK
A number of questions. I assume GTP only occurs once per document (e.g. it is the root node), in which case there is no point in 'sequencing' GTP.
I'm assuming that Study can occur more than once within GTP, Site more than once within Study, and Investigator more than once within Site.
If the above is true, the problem may be caused by the way you are constructing your sequences. XMLSequence relies on a correlated join, hence each level of nesting should be obtained as a subset of the previous level. In your case you are creating a Cartesian product, which may explain the error...
Try the following
select
extractValue(value(x), '/GTP/TransmissionSource/@ID') TransmissionSourceID,
extractValue(value(x), '/GTP/TransmissionSource/@Name') TransmissionSourceName,
extractValue(value(study), '/Study/@ID') StudyID,
extractValue(value(study), '/Study/@Name') StudyName,
extractValue(value(study), '/Study/@TransmissionType') StudyTransmissionType,
extractValue(value(site), '/Site/@ID') SiteID,
extractValue(value(investigator), '/Investigator/@ID') InvestigatorID,
extractValue(value(investigator), '/Investigator/@Name') InvestigatorName
from gtp x,
table(xmlsequence(extract(value(x),'/GTP/Study'))) study,
table(xmlsequence(extract(value(study),'/Study/Site'))) site,
table(xmlsequence(extract(value(site),'/Site/Investigator'))) investigator
The SYS_NT… tables are the storage tables Oracle creates for nested-table collections, and the SYS_IOT_OVER_… objects are index-organized table overflow segments. Seeing ~70 of them implies that you have about 70 different collections within your XML Schema.
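If you want to map those system-generated names back to what they belong to, the data dictionary can tell you. A minimal sketch (run as the owning schema; the views and columns are the standard USER_NESTED_TABLES and USER_TABLES dictionary views):

```sql
-- Map each SYS_NT... storage table back to its parent table and column.
SELECT table_name, parent_table_name, parent_table_column
  FROM user_nested_tables
 ORDER BY parent_table_name, parent_table_column;

-- Map each SYS_IOT_OVER_... overflow segment back to its index-organized table.
SELECT table_name, iot_name
  FROM user_tables
 WHERE iot_type = 'IOT_OVERFLOW';
```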

Similar Messages

  • Important--Bug(ORA-22813) fixes in Oracle Database 10g  Release 10.2.0.4

    Hi,
    I found that SELECT statements that use XMLAgg() with a GROUP BY/ORDER BY fail with ORA-22813 if the result is too large.
    This happens because there is a hard-coded limit on the result size (max 30k).
    I confirmed this: when I removed a portion of the XMLAgg() values and executed it again, it ran perfectly fine.
    This means that XMLAgg() with a GROUP BY/ORDER BY fails with ORA-22813 when the result is too large.
    I have come to know that patch set 10.2.0.4 has the fix for this ORA-22813 bug with XMLAgg() and GROUP BY/ORDER BY.
    Could you all please confirm that Oracle Database 10g Release 10.2.0.4 fixes the issue?
    Based on your confirmation, I can go ahead and get the patch installed.
    Current version: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit
    Thanks
    Mainak

    Your query should be written something like this..
    select d.*
      from fbnk_customer,
           XMLTABLE(
              '//c3'
              passing XMLRECORD
              columns
              C3_VALUE varchar2(60) path 'text()'
           ) d
    where recid='1001400';
    Although it would be better to use an exact path expression rather than '//c3'.

  • ORA-22813: operand value exceeds system limits

    hi all,
    ORA-22813: operand value exceeds system limits
    The above error occurs while calling a function. In 10g it works fine; after moving it to 11g we are facing the error. Please help.

    KRIS wrote:
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for Linux: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    It seems that is a bug; refer to
    *Bug 8861467 - ORA-22813 / ORA-600 [12761] can occur in 11g [ID 8861467.8]*
    To fix the problem you need to apply the 11.2.0.2 server patch set.

  • ORA 22813 in merge statement

    hi gems..good afternoon...
    My database version is 11.2.0.1.0 64 bit Solaris OS.
    I am facing an "ORA-22813: operand value exceeds system limits" while running a procedure.
    I have used loggers and found that it is getting failed in a MERGE statement.
    That merge statement is used to merge a table with a collection. the code is like below:
    MERGE /*+ INDEX(P BALANCE_HISTORIC_INDEX) */
        INTO BALANCE_HOLD_HISTORIC P
        USING TABLE(GET_BALANCE_HIST(V_MERGE_REC)) M
        ON (P.CUSTOMER_ID = M.CUSTOMER_ID AND P.BOOK_ID = M.BOOK_ID AND P.PRODUCT_ID = M.PRODUCT_ID AND P.SUB_BOOK_ID = M.SUB_BOOK_ID)
        WHEN MATCHED THEN
          UPDATE
             <set .....>
        WHEN NOT MATCHED THEN
          INSERT <.....>
    The parameter of the function GET_BALANCE_HIST(V_MERGE_REC) is a table type.
    Now the function GET_BALANCE_HIST(V_MERGE_REC) is a pipelined function and we have used that because the collection V_MERGE_REC may get huge with data.
    This proc was running fine from the beginning but from day before yesterday it was continously throwing ORA 22813 error in that line.
    please help..thanks in advance..

    hi paul..thanks for your reply...
    the function GET_BALANCE_HIST is not selecting data from any tables.
    What this pipelined function does is take the huge collection V_MERGE_REC as a parameter and release its data in pipelined form. The code for the function is:
    CREATE OR REPLACE FUNCTION GET_BALANCE_HIST(P_MERGE IN TAB_TYPE_BALANCE_HISTORIC)
      RETURN TAB_TYPE_BALANCE_HISTORIC
      PIPELINED AS
      V_MERGE TAB_TYPE_BALANCE_HISTORIC := TAB_TYPE_BALANCE_HISTORIC();
    BEGIN
      FOR I IN 1 .. P_MERGE.COUNT LOOP
        V_MERGE.EXTEND;
        V_MERGE(V_MERGE.LAST) := OBJ_TYPE_BALANCE_HISTORIC(P_MERGE(I).CUSTOMER_ID,
                                                 P_MERGE(I).BOOK_ID,
                                                 P_MERGE(I).PRODUCT_ID,
                                                 P_MERGE(I).SUB_BOOK_ID,
                                                 P_MERGE(I).EARNINGS,
                                                 P_MERGE(I).EARNINGS_HOUSE,
                                                 P_MERGE(I).QUANTITY,
                                                 P_MERGE(I).ACCOUNT_INTEGER);
      END LOOP;
      FOR J IN 1 .. V_MERGE.COUNT LOOP
        PIPE ROW(OBJ_TYPE_BALANCE_HISTORIC(V_MERGE(J).CUSTOMER_ID,
                                                 V_MERGE(J).BOOK_ID,
                                                 V_MERGE(J).PRODUCT_ID,
                                                 V_MERGE(J).SUB_BOOK_ID,
                                                 V_MERGE(J).EARNINGS,
                                                 V_MERGE(J).EARNINGS_HOUSE,
                                                 V_MERGE(J).QUANTITY,
                                                 V_MERGE(J).ACCOUNT_INTEGER));
      END LOOP;
      RETURN;
    END;
    I think the error is coming because of the size of the parameter V_MERGE_REC. Since it is huge, loading it into memory is causing the problem. But in this case, how can I resolve it? Can I use a global temporary table for this?
    Please suggest...
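On the global temporary table idea: staging the collection in a GTT and merging from that keeps the huge collection out of the MERGE as a single operand. A rough sketch under stated assumptions: the GTT name BALANCE_HIST_STAGE and the NUMBER datatypes are made up for illustration, and the UPDATE/INSERT column lists are abbreviated just as in the post.

```sql
-- Hypothetical staging table; ON COMMIT DELETE ROWS keeps rows session-private.
CREATE GLOBAL TEMPORARY TABLE BALANCE_HIST_STAGE (
  CUSTOMER_ID     NUMBER,
  BOOK_ID         NUMBER,
  PRODUCT_ID      NUMBER,
  SUB_BOOK_ID     NUMBER,
  EARNINGS        NUMBER,
  EARNINGS_HOUSE  NUMBER,
  QUANTITY        NUMBER,
  ACCOUNT_INTEGER NUMBER
) ON COMMIT DELETE ROWS;

-- In the procedure: unload the collection into the GTT, then merge from it,
-- so the MERGE no longer binds the whole collection as one operand.
INSERT INTO BALANCE_HIST_STAGE
SELECT * FROM TABLE(V_MERGE_REC);

MERGE INTO BALANCE_HOLD_HISTORIC P
USING BALANCE_HIST_STAGE M
   ON (P.CUSTOMER_ID = M.CUSTOMER_ID AND P.BOOK_ID = M.BOOK_ID
       AND P.PRODUCT_ID = M.PRODUCT_ID AND P.SUB_BOOK_ID = M.SUB_BOOK_ID)
 WHEN MATCHED THEN UPDATE SET P.EARNINGS = M.EARNINGS
 WHEN NOT MATCHED THEN INSERT (CUSTOMER_ID, BOOK_ID, PRODUCT_ID, SUB_BOOK_ID, EARNINGS)
      VALUES (M.CUSTOMER_ID, M.BOOK_ID, M.PRODUCT_ID, M.SUB_BOOK_ID, M.EARNINGS);
```

Whether the single INSERT ... SELECT FROM TABLE() avoids the limit depends on where it is being hit; if it still fails, insert the rows in smaller batches in a loop instead.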

  • Ora-22813 in oracle 11g

    Hi all
    db version 11.2.0.1.0
    OS : Windows 2003 64bit
    We are getting this error ora-22813 in oracle 11g
    thanks

    Hi Pavan
    this is the query executed by user
    select * from (
    select admin.*,
    DENSE_RANK() OVER (partition by ctry ORDER BY feattyp) AS spro_admin_layer
    from (
    select a_union.* from (
    select ctry, feattyp, 'MN_A0' tbl_name, name, namelc, order00, id, geom from mn_a0 union all
    select ctry, feattyp, 'MN_A1' tbl_name, name, namelc, order01, id, geom from mn_a1 union all
    select ctry, feattyp, 'MN_A2' tbl_name, name, namelc, order02, id, geom from mn_a2 union all
    select ctry, feattyp, 'MN_A7' tbl_name, name, namelc, order07, id, geom from mn_a7 union all
    select ctry, feattyp, 'MN_A8' tbl_name, name, namelc, order08, id, geom from mn_a8 union all
    select ctry, feattyp, 'MN_A9' tbl_name, name, namelc, order09, id, geom from mn_a9
    ) a_union
    ) admin
    ) where (ctry='ITA' or ctry='SMR') and spro_admin_layer = 2
    thanks

  • ORA-22813 error when deleting spatial objects in LIVE

    Hi,
    We are getting an ORA-22813 error when attempting to delete a spatial object from a version-enabled table in workspace LIVE. The spatial object to be removed has a SDO type of multipolygon. The geometry information consists of 4 rings with a combined ordinate count of 4120. The statement used to delete the row is:
    delete from tableA where tableA.id in (select tableA.id from tableA where tableA.id = 3);
    The error occurs only on a 10g R1 Oracle instance (10.1.0.5.0) with Workspace Manager version 10.1.0.7.1. The delete operation succeeds without problems on a 10g R2 (10.2.0.2.0) instance with Workspace Manager version 10.2.0.3.1. The statement is also executed successfully on the 10g R1 instance if the table is not version-enabled.
    Any help on this would be appreciated.
    Thanks

    Hi,
    I would recommend filing a TAR in this one. The only ora-22813 error involving workspace manager and geometry columns that I know about involves queries that need to sort data for which the size of the geometry column was >30k. Does the execution plan for the delete statement involve any kind of sort? However, this is an old 9.2 bug, that I believe was fixed for all 10.1 and newer releases.
    Does the same error happen if you do not use the subquery?
    Regards,
    Ben

  • ORA-22813 on SELECT with ORDER BY

    Hi all,
    I have on a server Windows 2003 (3Gb RAM) an Oracle Database 9.2.0.1
    Until last week all seemed right, but now when I submit this SQL:
    Select a,b,c,d,e... from mytable order by a,b
    I have this error:
    ORA-22813 "operand value exceeds system limits"
    It works if:
    -I remove the first or second column from the ORDER BY clause, or
    -I remove the geometry column or some columns from the SELECT.
    To fix this problem I have tried to:
    Recreate all indexes (data and spatial), no result
    Enable/disable all constraints, no result
    Set parameter hash_join_tables to true and false, no result
    Enlarge the PGA, no result
    Copy only 10 records into another blank table, no result
    Export the table and import it into another database on another server (Windows 2003 / Oracle 9.2.0.1). The export and import are OK but the error is the same.
    Can you help me? Do you have any suggestions?
    Thanks

    Error:     ORA-22813
    Text:     operand value exceeds system limits
    Cause:     Object or Collection value was too large. The size of the value might
         have exceeded 30k in a SORT context, or the size might be too big for
         available memory.
    Action:     Choose another value and retry the operation.
    Has your data changed such that one of the recent values is hitting this limit?
    Note also Metalink bug 5959987 "Spatial aggregations fail with ORA-22813", which is only confirmed as being in 10.2.0.3 but says:
    "ORA-22813 can occur when performing Spatial aggregations
    (SDO_AGGR_UNION or SDO_AGGR_CONCAT_LINES)
    when used with a GROUP BY clause and the input contains
    geometries that occupy more than 32K."

  • ORA-22813: operand value exceeds system limits when generation XML

    Hi All,
    We are using Oracle 11GR2 database and I am trying to generate XML Files using SQL/XML Functions.
    I am in the end of development and while testing I am facing this freaking issue. ORA-22813: operand value exceeds system limits.
    SELECT XMLSERIALIZE(DOCUMENT DATA AS CLOB) AS DATA FROM (
              SELECT
              XMLELEMENT (
    "Region_Data",
    XMLAGG (
    XMLFOREST (
    R.region as "Region_Name",
    R.first_name||R.last_name as "EmployeeFullName",
    R.ntlogin as "EmployeeAlias",
    R.job_title as "EmployeeRole",
    R.sap_number as "SAPNumber",
    R.sales_transaction_dt AS "Day",
    R.region AS "RegionName",
    R.postpaid_totalqty AS "PostpaidCount",
    R.postpaid_totaldollars AS "PostpaidAmount",
    R.postpaidfeature_totalqty AS "PostpaidFeatureCount",
    R.postpaidfeature_totaldollar AS "PostpaidFeatureAmount",
    R.prepaid_totalqty AS "PrepaidCount",
    R.prepaid_totaldollars AS "PrepaidAmount" ,
    R.prepaidfeature_totalqty AS "PrepaidFeatureCount",
    R.prepaidfeature_totaldollars AS "PrepaidFeatureAmount",
    R.accessory_totalqty AS "AccessoriesCount",
    R.accessory_totaldollars AS "AccessoriesAmount",
    R.handset_totalqty AS "HandsetsCount",
    R.handset_totaldollars AS "HandsetsAmount",
    (SELECT XMLAGG (
    XMLELEMENT (
    "Division",
    XMLFOREST (
    di.division AS "DivisonName",
    di.postpaid_totalqty AS "PostpaidCount",
    di.postpaid_totaldollars AS "PostpaidAmount",
    di.postpaidfeature_totalqty AS "PostpaidFeatureCount",
    di.postpaidfeature_totaldollar AS "PostpaidFeatureAmount",
    di.prepaid_totalqty AS "PrepaidCount",
    di.prepaid_totaldollars AS "PrepaidAmount" ,
    di.prepaidfeature_totalqty AS "PrepaidFeatureCount",
    di.prepaidfeature_totaldollars AS "PrepaidFeatureAmount",
    di.accessory_totalqty AS "AccessoriesCount",
    di.accessory_totaldollars AS "AccessoriesAmount",
    di.handset_totalqty AS "HandsetsCount",
    di.handset_totaldollars AS "HandsetsAmount",
    (SELECT XMLAGG (
    XMLELEMENT (
    "District",
    XMLFOREST (
    dis.district AS "DistrictName",
    dis.postpaid_totalqty AS "PostpaidCount",
    dis.postpaid_totaldollars AS "PostpaidAmount",
    dis.postpaidfeature_totalqty AS "PostpaidFeatureCount",
    dis.postpaidfeature_totaldollar AS "PostpaidFeatureAmount",
    dis.prepaid_totalqty AS "PrepaidCount",
    dis.prepaid_totaldollars AS "PrepaidAmount" ,
    dis.prepaidfeature_totalqty AS "PrepaidFeatureCount",
    dis.prepaidfeature_totaldollars AS "PrepaidFeatureAmount",
    dis.accessory_totalqty AS "AccessoriesCount",
    dis.accessory_totaldollars AS "AccessoriesAmount",
    dis.handset_totalqty AS "HandsetsCount",
    dis.handset_totaldollars AS "HandsetsAmount",
    (SELECT XMLAGG (
    XMLELEMENT (
    "Store",
    XMLFOREST (
    mst.store_id AS "StoreNumber",
    mst.store_name AS "StoreLocation",
    mst.postpaid_totaldollars AS "PostpaidAmount",
    mst.postpaidfeature_totalqty AS "PostpaidFeatureCount",
    mst.postpaidfeature_totaldollar AS "PostpaidFeatureAmount",
    mst.prepaid_totalqty AS "PrepaidCount",
    mst.prepaid_totaldollars AS "PrepaidAmount" ,
    mst.prepaidfeature_totalqty AS "PrepaidFeatureCount",
    mst.prepaidfeature_totaldollars AS "PrepaidFeatureAmount",
    mst.accessory_totalqty AS "AccessoriesCount",
    mst.accessory_totaldollars AS "AccessoriesAmount",
    mst.handset_totalqty AS "HandsetsCount",
    mst.handset_totaldollars AS "HandsetsAmount"
    FROM stores_comm_mobility_info_vw mst
    WHERE mst.district = dis.district
    ) "Store_Data")))
    FROM diST_comm_mobility_info_vw dis
    WHERE dis.division = di.division
    ) "District_Data")))
    FROM div_comm_mobility_info_vw di
    WHERE di.region = r.region
    ) AS "Division_Data"))) AS DATA
    FROM reg_comm_mobility_info_vw R GROUP BY region)
    This works fine when there is a small amount of data, but when there is more data the query fails.
    I do not know what to do now. Is there a way around this limit, or do I need some other mechanism to generate the XML files?
    The challenge is that we need to generate XML files and send the XML data to an interface, which will use it for display on a cell phone.
    I am really frustrated now, as I am getting this error when testing with a huge amount of data.
    Appreciate it if anyone can help me out ASAP.
    (The XML I am trying to generate is below.)
    <REGION>
         <Region_Data>
              <Region_Name>Southwest</Region_Name>
              <EmployeeFullName>AllisonAndersen</EmployeeFullName>
              <EmployeeAlias>AANDERS60</EmployeeAlias>
              <EmployeeRole>District Manager, Retail Sales</EmployeeRole>
              <SAPNumber>P12466658</SAPNumber>
              <Day>JAN</Day>
              <RegionName>Southwest</RegionName>
              <PostpaidCount>52</PostpaidCount>
              <PostpaidAmount>1579.58</PostpaidAmount>
              <PostpaidFeatureCount>296</PostpaidFeatureCount>
              <PostpaidFeatureAmount>4174.19</PostpaidFeatureAmount>
              <AccessoriesCount>394</AccessoriesCount>
              <AccessoriesAmount>45213.87</AccessoriesAmount>
              <Division_Data>
                   <Division>
                        <DivisonName>Southern California</DivisonName>
                        <PostpaidCount>52</PostpaidCount>
                        <PostpaidAmount>1579.58</PostpaidAmount>
                        <PostpaidFeatureCount>296</PostpaidFeatureCount>
                        <PostpaidFeatureAmount>4174.19</PostpaidFeatureAmount>
                        <AccessoriesCount>394</AccessoriesCount>
                        <AccessoriesAmount>45213.87</AccessoriesAmount>
                        <District_Data>
                             <District>
                                  <DistrictName>Orange County West</DistrictName>
                                  <PostpaidCount>52</PostpaidCount>
                                  <PostpaidAmount>1579.58</PostpaidAmount>
                                  <PostpaidFeatureCount>296</PostpaidFeatureCount>
                                  <PostpaidFeatureAmount>4174.19</PostpaidFeatureAmount>
                                  <AccessoriesCount>394</AccessoriesCount>
                                  <AccessoriesAmount>45213.87</AccessoriesAmount>
                                  <Store_Data>
                                       <Store>
                                            <StoreNumber>9551</StoreNumber>
                                            <StoreLocation>TM - BROOKHURST &amp; WARNER</StoreLocation>
                                            <PostpaidAmount>10</PostpaidAmount>
                                            <PostpaidFeatureCount>22</PostpaidFeatureCount>
                                            <PostpaidFeatureAmount>319.89</PostpaidFeatureAmount>
                                            <AccessoriesCount>27</AccessoriesCount>
                                            <AccessoriesAmount>4330</AccessoriesAmount>
                                       </Store>
                                  </Store_Data>
                             </District>
                        </District_Data>
                   </Division>
              </Division_Data>
         </Region_Data>
    </REGION>
    Thanks,
    Madhu K.

    You didn't give any feedback in your previous thread.
    Did you try the approach suggested in {message:id=10998557}, instead of using nested inline subqueries?
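For what it's worth, the usual restructuring is to aggregate each level once with GROUP BY and join the levels, instead of running a correlated XMLAGG scalar subquery per outer row. A much-simplified sketch keeping only a couple of columns (the view names come from the post; everything else is illustrative only):

```sql
-- Aggregate stores once per district, then attach the result by join:
WITH store_xml AS (
  SELECT mst.district,
         XMLAGG(
           XMLELEMENT("Store",
             XMLFOREST(mst.store_id   AS "StoreNumber",
                       mst.store_name AS "StoreLocation"))) AS stores
    FROM stores_comm_mobility_info_vw mst
   GROUP BY mst.district
)
SELECT XMLSERIALIZE(DOCUMENT
         XMLELEMENT("District",
           XMLFOREST(dis.district AS "DistrictName"),
           XMLELEMENT("Store_Data", s.stores))
         AS CLOB) AS data
  FROM dist_comm_mobility_info_vw dis
  JOIN store_xml s ON s.district = dis.district;
```

The same pattern repeats upward for division and region, so each XMLAGG runs once per group rather than once per row of the level above.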

  • ORA-39126 when exporting with expdp

    Hi there,
    I'm getting a crash on 11g 11.1.0.7 when exporting a schema using expdp:
    expdp
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 3.096 GB
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA [TABLE_DATA:"NSNPL"."SYS_EXPORT_SCHEMA_02"]
    ORA-22813: operand value exceeds system limits
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 7839
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    0x14d81a160     18237  package body SYS.KUPW$WORKER
    0x14d81a160      7866  package body SYS.KUPW$WORKER
    0x14d81a160      2744  package body SYS.KUPW$WORKER
    0x14d81a160      8504  package body SYS.KUPW$WORKER
    0x14d81a160      1545  package body SYS.KUPW$WORKER
    0x14d81db88         2  anonymous block
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA []
    ORA-22813: operand value exceeds system limits
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.KUPW$WORKER", line 7834
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    0x14d81a160     18237  package body SYS.KUPW$WORKER
    0x14d81a160      7866  package body SYS.KUPW$WORKER
    0x14d81a160      2744  package body SYS.KUPW$WORKER
    0x14d81a160      8504  package body SYS.KUPW$WORKER
    0x14d81a160      1545  package body SYS.KUPW$WORKER
    0x14d81db88         2  anonymous block
    Job "NSNPL"."SYS_EXPORT_SCHEMA_03" stopped due to fatal error at 00:57:23
    This looks awfully similar to bug 6991626 (cf. ID 737618.1); however, the database has already been successfully patched (and in fact even repatched) for this bug:
    $ORACLE_HOME/OPatch/opatch lsinventory
    Invoking OPatch 11.1.0.6.2
    Oracle Interim Patch Installer version 11.1.0.6.2
    Copyright (c) 2007, Oracle Corporation.  All rights reserved.
    Oracle Home       : /opt/oracle/db11g
    Central Inventory : /opt/oracle/inventory
       from           : /etc/oraInst.loc
    OPatch version    : 11.1.0.6.2
    OUI version       : 11.1.0.7.0
    OUI location      : /opt/oracle/db11g/oui
    Log file location : /opt/oracle/db11g/cfgtoollogs/opatch/opatch2010-06-02_08-23-53AM.log
    Lsinventory Output file location : /opt/oracle/db11g/cfgtoollogs/opatch/lsinv/lsinventory2010-06-02_08-23-53AM.txt
    Installed Top-level Products (2):
    Oracle Database 11g                                                  11.1.0.6.0
    Oracle Database 11g Patch Set 1                                      11.1.0.7.0
    There are 2 products installed in this Oracle Home.
    Interim patches (6) :
    Patch  6991626      : applied on Tue Jun 01 22:35:32 WET 2010
       Created on 14 Oct 2008, 23:25:07 hrs PST8PDT
       Bugs fixed:
         6991626
    [...]
    Does anyone have an idea on what might be the culprit here?
    Thanks for your help,
    Chris

    Hi Prathmesh,
    in fact I saw this very thread before and made sure that both solutions were applied. Moreover, as I said, patch 6991626 had already been applied earlier, precisely to fix this problem, and I had been able to successfully export other, albeit somewhat smaller, schemas (500 MB instead of 3 GB) in the last few months. This is why I was so puzzled to see that exact bug raise its ugly head again. As far as I can tell I haven't made any modification to the DB since that last patch in Nov. 2009. In fact the DB has been running pretty much untouched since then.
    I even tried yesterday to reinstall the patch; opatch does the operation gracefully, first rolling back the patch then reapplying it, with only a warning about the patch being already present. However the problem does not get fixed.
    Thanks a lot for your help,
    Chris

  • ORA-39125:Worker unexpected fatal error for different different objects...

    Hi All,
    I am using Oracle database 10.2.0.4 on windows 2003 server.
    I want to take a full/schema-level backup, for which I am using expdp. When I run expdp it fails with the error below:
    ===================================================================================
    take the schema level backup: Starting "SYSTEM"."SYS_EXPORT_SCHEMA_03": system/********@TEST schemas=JISPBILCORBILLING501 directory=BACKUP_DIR dumpfile=JISPBILCORBILLING501.dmp logfile=JISPBILCORBILLING501.log
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
    ORA-01115: IO error reading block from file 5 (block # 3913)
    ORA-01110: data file 5: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\JISPBILCORBILLING501.DBF'
    ORA-27091: unable to queue I/O
    ORA-27070: async read/write failed
    OSD-04006: ReadFile() failure, unable to read from file
    O/S-Error: (OS 38) Reached the end of the file.
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 6307
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    000007FF9ABA7AC0 15032 package body SYS.KUPW$WORKER
    000007FF9ABA7AC0 6372 package body SYS.KUPW$WORKER
    000007FF9ABA7AC0 9206 package body SYS.KUPW$WORKER
    000007FF9ABA7AC0 1936 package body SYS.KUPW$WORKER
    000007FF9ABA7AC0 6944 package body SYS.KUPW$WORKER
    000007FF9ABA7AC0 1314 package body SYS.KUPW$WORKER
    000007FF94192598 2 anonymous block
    Job "SYSTEM"."SYS_EXPORT_SCHEMA_03" stopped due to fatal error at 11:12:50
    ==================================================================================================
    On the same server if I take the other database full backup then also getting similar type of error:
    Starting "SYSTEM"."SYS_EXPORT_FULL_11": system/********@jisp full=y directory=BACKUP_DIR dumpfile=jispratcorbilling501_full%U.dmp filesize=3G logfile=jispratcorbilling.log
    ORA-39125: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA while ca
    lling DBMS_METADATA.FETCH_XML_CLOB [TYPE:"SYSMAN"."MGMT_CONTAINER_CRED_ARRAY"]
    ORA-22813: operand value exceeds system limits
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 229
    ORA-06512: at "SYS.KUPW$WORKER", line 889
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    000007FFCB72CDB0 15032 package body SYS.KUPW$WORKER
    000007FFCB72CDB0 6372 package body SYS.KUPW$WORKER
    000007FFCB72CDB0 2396 package body SYS.KUPW$WORKER
    000007FFCB72CDB0 6944 package body SYS.KUPW$WORKER
    000007FFCB72CDB0 1314 package body SYS.KUPW$WORKER
    000007FFCBAA7290 2 anonymous block
    Job "SYSTEM"."SYS_EXPORT_FULL_11" stopped due to fatal error at 11:48:49
    ====================================================================================================
    Can anyone suggest me what to look for this error?
    Thanks...

    I did:
    1. Unlocked the DMSYS account and changed the password.
    2. Ran this query and verified the VALID status:
    select COMP_NAME,VERSION,STATUS from dba_registry where COMP_NAME='Oracle Data Mining';
    Is anything else required?

  • Error while exporting a schema using data pump

    Hi all,
    I have 11.1.0.7 database and am using expdp to export a schema. The schema is quite huge and has roughly about 4 GB of data. When i export using the following command,
    expdp owb_exp_v1/welcome directory=dmpdir dumpfile=owb_exp_v1.dmp
    i get the following error after running for around 1 hour.
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA [TABLESPACE_QUOTA:"OWB_EXP_V1"]
    ORA-22813: operand value exceeds system limits
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 7839
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    4A974B9C 18237 package body SYS.KUPW$WORKER
    4A974B9C 7866 package body SYS.KUPW$WORKER
    4A974B9C 2744 package body SYS.KUPW$WORKER
    4A974B9C 8504 package body SYS.KUPW$WORKER
    4A961BF0 1 anonymous block
    4A9DAA4C 1575 package body SYS.DBMS_SQL
    4A974B9C 8342 package body SYS.KUPW$WORKER
    4A974B9C 1545 package body SYS.KUPW$WORKER
    4A8CD200 2 anonymous block
    Job "SYS"."SYS_EXPORT_SCHEMA_01" stopped due to fatal error at 14:01:23
    This owb_exp_v1 user has DBA privileges. I am not sure what is causing this error. I have tried running it almost three times, but in vain. I also tried increasing the sort_area_size parameter; even then, I get this error.
    Kindly help.
    Thanks,
    Vidhya

    Hi,
    Can you let us know what the last object type it was working on? It would be the line in the log file that looks like:
    Processing object type SCHEMA_EXPORT/...
    Thanks
    Dean

  • Error loading xml file with sqlldr

    Hi there,
    I am having trouble loading an xml file via sqlldr into oracle.
    The version i am running is Oracle Database 10g Release 10.2.0.1.0 - 64bit Production and the file size is 464 MB.
    It ran for about 10 hours trying to load the file and then threw up the error:
    ORA-22813: operand value exceeds system limits.
    I have loaded a file of 170MB using the same process succesfully.
    Any Ideas?
    Cheers,
    Dan.

    Looked a bit into the issue (ORA-22813), and although it can be caused by a lot of things varying across database versions, you could have a go at sizing up your PGA database parameter. See Oracle Support Doc ID 837220.1 for more info.
    The following might help. SELECT privileges are required on SYS.v_$session, SYS.v_$sesstat and SYS.v_$statname; here are the statements you should run first:
    GRANT SELECT ON SYS.v_$session  TO <schema>;
    GRANT SELECT ON SYS.v_$sesstat  TO <schema>;
    GRANT SELECT ON SYS.v_$statname TO <schema>;
    CREATE OR REPLACE PROCEDURE show_pga_memory (context_in IN VARCHAR2 DEFAULT NULL)
    IS
      l_memory NUMBER;
    BEGIN
      SELECT st.VALUE
        INTO l_memory
        FROM SYS.v_$session se, SYS.v_$sesstat st, SYS.v_$statname nm
      WHERE se.audsid = USERENV ('SESSIONID')
        AND st.statistic# = nm.statistic#
        AND se.SID = st.SID
        AND nm.NAME = 'session pga memory';
      DBMS_OUTPUT.put_line (CASE WHEN context_in IS NULL
                              THEN NULL
                              ELSE context_in || ' - '
                            END
                            || 'PGA memory used in session = ' || TO_CHAR (l_memory));
    END show_pga_memory;
    /

  • Expdp error

    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-39125: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA while calling DBMS_METADATA.FETCH_XML_CLOB [TABLE_STATISTICS]
    ORA-22813: operand value exceeds system limits
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
    ORA-06512: at "SYS.KUPW$WORKER", line 6241
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    267BC180 14916 package body SYS.KUPW$WORKER
    267BC180 6300 package body SYS.KUPW$WORKER
    267BC180 2340 package body SYS.KUPW$WORKER
    267BC180 6861 package body SYS.KUPW$WORKER
    267BC180 1262 package body SYS.KUPW$WORKER
    2278B6E4 2 anonymous block
    Job "DB002_OWNER"."SYS_EXPORT_SCHEMA_05" stopped due to fatal error at 16:45:13

    Hello,
    I couldn't read those Chinese/Japanese characters, but from the error numbers here is what I found. Hope this info might help in resolving the issue.
    -Sri
    Cause
    DMSYS schema objects have been accidentally removed
    or
    Step 'DELETE from exppkgact$ WHERE SCHEMA='DMSYS'; ' from Note 297551.1 has not been performed
    Solution
    If DMSYS has already been dropped
    Start SQL*Plus and connect with user SYS as SYSDBA
    SQL> DELETE FROM exppkgact$ WHERE SCHEMA='DMSYS';
    SQL> exit;
    proceed with export jobs
    If dmsys schema objects have been accidentally removed
    set ORACLE_HOME and ORACLE_SID
    Start SQL*Plus and connect with user SYS as SYSDBA, then run the scripts:
    SQL> @$ORACLE_HOME/dm/admin/dminst.sql SYSAUX TEMP $ORACLE_HOME/dm/admin/
    SQL> @$ORACLE_HOME/dm/admin/odmpatch.sql (if the database is at 10g patch level, e.g. 10.1.0.3 or 10.1.0.4)
    SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
    Ensure 'Oracle Data Mining' is at Valid status in dba_registry
    SQL> select COMP_NAME,VERSION,STATUS from dba_registry where COMP_NAME='Oracle Data Mining';
    proceed with export jobs

  • Operand value exceeds system limits in sdo_aggr_mbr

    Hi-- i'm trying to get the MBR of a fairly large geometry (1429 vertices) and run into a strange problem:
    when i:
    select sdo_aggr_mbr(shape)
    from FEEDER_LINES_SDO
    where subname = 'OCEANO';
    i get what i expect:
    SDO_GEOMETRY(2003, 82212, NULL, SDO_ELEM_INFO_ARRAY(1,1003, 3),SDO_ORDINATE_ARRAY(712103.736,3876977.34, 733591.744, 3896557.18))
    however when i try to get the subname in my query as well:
    select subname ,sdo_aggr_mbr(shape)
    from FEEDER_LINES_SDO
    where subname = 'OCEANO'
    group by subname;
    i get
    ERROR at line 1:
    ORA-22813: operand value exceeds system limits
    The query fails with "ORA-00937: not a single-group group function" when i leave out the group by clause
    i can get around it with a kludge, but would like to know why the group by fails
    the kludge:
    select subname,min(t.x) minx, min(t.y) miny, max(t.x) maxx, max(t.y) maxy from
    FEEDER_LINES_SDO c,
    TABLE(SDO_UTIL.GETVERTICES(c.shape)) t
    where subname = 'OCEANO'
    group by subname;
    SUBNAME MINX MINY MAXX MAXY
    OCEANO     712103.736 3876977.34 733591.744 3896557.18
    where minx(), miny() etc are variations on:
    function minx (geom_in mdsys.sdo_geometry)
    return number DETERMINISTIC IS
    begin
    return sdo_geom.sdo_min_mbr_ordinate(geom_in,1);
    end;
    the group by expression seems to work fine on geometries with less than 1200 vertices. Is there a system parameter i can change?
    elvis{44}% oerr ora 22813
    22813, 00000, "operand value exceeds system limits"
    // *Cause:  Object or Collection value was too large. The size of the value
    // might have exceeded 30k in a SORT context, or the size might be
    // too big for available memory.
    // *Action:  Choose another value and retry the operation.
    i am running oracle 9.2.0.1 on solaris8
    any insight on this will be greatly appreciated
    cheers
    --kassim

    Hi Kassim,
    At KMS I recently ran into the same ORA-22813, when running this cursor SQL
         CURSOR lcur_montage IS
         select m.mont_id, m.sys_PK, m.krtp_id, m.mont_geom, m.til_dato_id , m.forloeb      
         from MTK_montage m
         where m.fra_dato_id = in_dato_id
         and m.krtp_id = 1           
         order by m.mont_id;
    Omitting the order by clause makes it work fine. If I alternatively omit the SDO_geometry m.mont_geom as a select item, the query also works.
    Our problem seems to arise when trying to sort selected rows, which contain large objects such as SDO_geometry.
    Yesterday we played around with SORT_AREA_SIZE, but to no avail. It turns out to be a known bug.
    When I today search for ORA-22813 in MetaLink, the first list item is
    1.
    9.2.0.X Patch Sets - List of Bug Fixes by Problem Type
    Type: Note
    Doc ID: 217194.1
    Score: 63%
    Modified Date: 18-FEB-2003
    Status: PUBLISHED
    Platform: Generic issue
    Product: Oracle Server - Enterprise Edition
    which unfortunately will not open and reveal its content.
    On the other hand trying MetaLink -> Bugs -> search for 'ORA-22813' gives amongst others Bug 2656107, which looks a lot like my problem.
    For Oracle eyes: - when will this bug be fixed? Does it solve the problem at hand?
    - regards
    Jens Ole Jensen
    Kort & MatrikelStyrelsen (WWW: http://www.kms.dk)
    Danmark  
    version: (32 bit) Oracle9i Enterprise Edition Release 9.2.0.2.0 - Production on Sun/SunOS 5.8 (64 bit)
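    Building on the observation above that the error appears only when the sort includes the SDO_GEOMETRY column, a possible workaround (a sketch only, reusing the table and column names from the cursor and assuming sys_PK uniquely identifies a row) is to sort on the scalar columns alone and fetch the geometry per row after the sort, so the sort buffer never has to hold the large objects:

    ```sql
    -- Sketch: sort only scalar key columns, then fetch the large
    -- geometry column row by row outside the sort.
    DECLARE
      l_geom  MDSYS.SDO_GEOMETRY;
    BEGIN
      FOR r IN (SELECT mont_id, sys_PK, krtp_id, til_dato_id, forloeb
                FROM   MTK_montage
                WHERE  fra_dato_id = :in_dato_id
                AND    krtp_id = 1
                ORDER  BY mont_id)
      LOOP
        -- single-row lookup by primary key; no sort involved
        SELECT mont_geom INTO l_geom
        FROM   MTK_montage
        WHERE  sys_PK = r.sys_PK;
        -- process r.mont_id / l_geom here
      END LOOP;
    END;
    /
    ```

    This trades one large sort for row-by-row fetches, which may be acceptable until a patch set containing the bug fix is available.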

  • Generating large amounts of XML without running out of memory

    Hi there,
    I need some advice from the experienced XDB users around here. I'm trying to map large amounts of data inside the DB (Oracle 11.2.0.1.0), and by large I mean files up to several GB. I compared the "low level" mapping via PL/SQL in combination with ExtractValue/XMLQuery against the more elegant XML view mapping, and the best performance came from the view mapping using the XMLTABLE XQuery PATH constructs. So now I have a view that sits on several binary XMLTYPE columns (where the XML files are stored) for the mapping, and another view on top of that which constructs the nested XML result document via XMLELEMENT(), XMLAGG(), etc. Example code for better understanding:
    CREATE OR REPLACE VIEW MAPPING AS
    SELECT  type, (...)  FROM XMLTYPE_BINARY,  XMLTABLE ('/ROOT/ITEM' passing xml
         COLUMNS
          type       VARCHAR2(50)          PATH 'for $x in .
                                                                let $one := substring($x/b012,1,1)
                                                                let $two := substring($x/b012,1,2)
                                                                return
                                                                    if ($one eq "A")
                                                                      then "A"
                                                                    else if ($one eq "B" and not($two eq "BJ"))
                                                                      then "AA"
                                                                    else if (...)
    CREATE OR REPLACE VIEW RESULT AS
    select XMLELEMENT("RESULTDOC",
                     (SELECT XMLAGG(
                             XMLELEMENT("ITEM",
                                          XMLFOREST(
                                               type "ITEMTYPE",
    ) as RESULTDOC FROM MAPPING;
    Now all I want to do is materialize this document by inserting it into an XMLTYPE table/column.
    insert into bla select * from RESULT;
    Sounds pretty easy, but I can't get it to work: the DB seems to load a full DOM representation into RAM every time I perform a select, an insert into, or use the xmlgen tool. This representation takes more than 1 GB for a 200 MB XML file, and eventually I run out of memory with an
    ORA-19202: Error occurred in XML PROCESSING
    ORA-04030: out of process memory
    My question is: how can I get the result document into the table without exhausting memory? I thought the DB would be smart enough to use some kind of serialization/data stream to perform this task without loading everything into RAM.
    Best regards

    The file import is performed via JDBC. CLOB and binary storage work up to several GB; the OR (object-relational) storage gives me the ORA-22813 when loading files with more than 100 MB. I use a plain prepared statement:
    File f = new File( path );
    PreparedStatement pstmt = CON.prepareStatement(
        "insert into " + table + " values ('" + id + "', XMLTYPE(?) )" );
    pstmt.setClob( 1, new FileReader(f), (int) f.length() );
    pstmt.executeUpdate();
    pstmt.close();
    DB version is 11.2.0.1.0 as mentioned in the initial post.
    But this isn't my main problem; the one above is. I prefer using binary XMLType anyway, as it is much easier to index. Does anyone have an idea how to get the large document from the view into an XMLType table?
