ORA-00604 / ORA-00904 when querying a partitioned table with partitioned indexes

We get ORA-00604 and ORA-00904 when querying a partitioned table with partitioned indexes in our data warehouse environment.
The query runs fine against the partitioned table without the partitioned indexes.
Here is the query.
SELECT al2.vdc_name, al7.model_series_name, COUNT (DISTINCT (al1.vin)),
       al27.accessory_code
  FROM vlc.veh_vdc_accessorization_fact al1,
       vlc.vdc_dim al2,
       vlc.model_attribute_dim al7,
       vlc.ppo_list_dim al18,
       vlc.ppo_list_indiv_type_dim al23,
       vlc.accy_type_dim al27
 WHERE (    al2.vdc_id = al1.vdc_location_id
        AND al7.model_attribute_id = al1.model_attribute_id
        AND al18.mydppolist_id = al1.ppo_list_id
        AND al23.mydppolist_id = al18.mydppolist_id
        AND al23.mydaccytyp_id = al27.mydaccytyp_id
        AND (    al7.model_series_name IN ('SCION TC', 'SCION XA', 'SCION XB')
             AND al2.vdc_name IN
                    ('PORT OF BALTIMORE',
                     'PORT OF JACKSONVILLE - LEXUS',
                     'PORT OF LONG BEACH',
                     'PORT OF NEWARK',
                     'PORT OF PORTLAND')
             AND al27.accessory_code IN ('42', '43', '44', '45')))
 GROUP BY al2.vdc_name, al7.model_series_name, al27.accessory_code
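
Since the errors only appear once the partitioned indexes are in place, one diagnostic worth running first is a check of the dictionary state of those index partitions. This is only a hedged sketch, not a confirmed cause; the owner and table name are taken from the query above, everything else is illustrative:

-- ORA-00604 wraps a failure in recursive SQL, so the dictionary state of the
-- indexes on the fact table is a reasonable first thing to inspect
-- (UNUSABLE or INVALID partitions would show up here).
SELECT ip.index_owner, ip.index_name, ip.partition_name, ip.status
  FROM dba_ind_partitions ip
 WHERE (ip.index_owner, ip.index_name) IN
         (SELECT owner, index_name
            FROM dba_indexes
           WHERE table_owner = 'VLC'
             AND table_name  = 'VEH_VDC_ACCESSORIZATION_FACT')
 ORDER BY ip.index_name, ip.partition_name;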

I would recommend that you post this at the following OTN forum:
Database - General > General Database Discussions
and perhaps at:
Oracle Warehouse Builder > Warehouse Builder
The Oracle OLAP forum typically does not cover general data warehousing topics.

Similar Messages

  • ORA-19007 when copying a table with an xml type in it to a new schema in the same database

    ORA-19007 when copying a table with an xml type in it to a new schema in the same database.
    Hi all,
    When I copy a table with an xml type in it to a new schema in the same database I get an ORA-19007.
    The setup is as follows: I have a schema A with a table TABLE_WITH_XMLTYPE defined as:
    CREATE TABLE TABLE_WITH_XMLTYPE
    (
      FOLDER_ID         NUMBER (10, 0) NOT NULL,
      SEARCH_PROPERTIES XMLTYPE,
      CONSTRAINT TABLE_WITH_XMLTYPE_PK PRIMARY KEY (FOLDER_ID) USING INDEX
    )
    XMLTYPE COLUMN SEARCH_PROPERTIES XMLSCHEMA
    "http://xxxxxxx.net/FolderProperties.xsd" ELEMENT "FolderProperties"
    VARRAY SEARCH_PROPERTIES."XMLDATA"."PROPERTIES"."PROPERTY" STORE AS TABLE
    PROPERTY_TABLE
    ((PRIMARY KEY (NESTED_TABLE_ID, ARRAY_INDEX)) ORGANIZATION INDEX OVERFLOW);
    Both schemas have the following xml schema registered as a local xml schema
    BEGIN
    DBMS_XMLSCHEMA.registerSchema(
    SCHEMAURL => 'http://xxxxxxx.net/FolderProperties.xsd',
    SCHEMADOC =>
    '<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xdb="http://xmlns.oracle.com/xdb"
    xdb:storeVarrayAsTable="true">
    <xs:element name="FolderProperties"
    type="FolderPropertiesType"
    xdb:defaultTable="FOLDER_SEARCH_PROPERTIES" />
    <xs:complexType name="FolderPropertiesType" xdb:SQLType="FOLDERPROPERTIES_T">
    <xs:sequence>
    <xs:element name="FolderID" type="FolderIDType" minOccurs="1" xdb:SQLName="FOLDER_ID"/>
    <xs:element name="Properties" type="PropertiesType" xdb:SQLName="PROPERTIES"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="PropertiesType" xdb:SQLType="PROPERTIES_T">
    <xs:sequence>
    <xs:element name="Property" type="PropertyType" maxOccurs="unbounded"
    xdb:SQLName="PROPERTY" xdb:SQLCollType="PROPERTY_V"/>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="PropertyType" xdb:SQLType="PROPERTY_T">
    <xs:sequence>
    <xs:element name="DateValue" type="DateType" xdb:SQLName="DATE_VALUE"/>
    <xs:element name="NumValue" type="NumType" xdb:SQLName="NUM_VALUE"/>
    <xs:element name="StringValue" type="StringType" xdb:SQLName="STRING_VALUE"/>
    </xs:sequence>
    <xs:attribute name="Name" xdb:SQLName="NAME" xdb:SQLType="VARCHAR2">
    <xs:simpleType>
    <xs:restriction base="xs:string">
    <xs:minLength value="1"/>
    <xs:maxLength value="255"/>
    </xs:restriction>
    </xs:simpleType>
    </xs:attribute>
    </xs:complexType>
    <xs:simpleType name="FolderIDType">
    <xs:restriction base="xs:integer"/>
    </xs:simpleType>
    <xs:simpleType name="DateType">
    <xs:restriction base="xs:dateTime"/>
    </xs:simpleType>
    <xs:simpleType name="NumType">
    <xs:restriction base="xs:decimal"/>
    </xs:simpleType>
    <xs:simpleType name="StringType">
    <xs:restriction base="xs:string" />
    </xs:simpleType>
    </xs:schema>',
    LOCAL => TRUE,
    GENTYPES => TRUE,
    GENTABLES => FALSE);
    END;
    When I try to do the following insert:
    insert into schemaB.TABLE_WITH_XMLTYPE (FOLDER_ID, SEARCH_PROPERTIES)
    select FOLDER_ID, SEARCH_PROPERTIES from schemaA.TABLE_WITH_XMLTYPE;
    I get an ORA-19007.
    Can someone point me in the right direction on how to solve this error?
    Thanks, Roelof.

    How did you create the second table; in other words, how did you COPY the table, as you said?
    If you created the second table via a CTAS (create table as select) then you will have created a table that is not the same as the original one. AFAIK, JDeveloper, for example, creates a "copy" via a CTAS, which produces the wrong structure; I once filed an enhancement request after discovering this. Double check via the package DBMS_METADATA:
    SQL> set long 1000000
    SQL> select DBMS_METADATA.GET_DDL('TABLE','TABLE_WITH_XMLTYPE','SCHEMAA') from dual;
    SQL> select DBMS_METADATA.GET_DDL('TABLE','TABLE_WITH_XMLTYPE','SCHEMAB') from dual;
    If you have got two different tables, then Mark's solution should help.
    M.
    Edited by: Marco Gralike on Feb 15, 2009 11:16 AM

  • ORA-00600 problem when creating an XMLType table with a registered schema

    Hi,
    I am using Oracle9i Enterprise Edition Release 9.2.0.4.0 on RedHat Linux 7.2
    I found a problem when I create a table with a registered schema that has the following content:
         <xs:element name="body">
              <xs:complexType>
                   <xs:sequence>
                   </xs:sequence>
                   <xs:attribute name="id" type="xs:ID"/>
                   <xs:attribute name="class" type="xs:NMTOKENS"/>
                   <xs:attribute name="style" type="xs:string"/>
              </xs:complexType>
         </xs:element>
         <xs:element name="body.content">
              <xs:complexType>
                   <xs:choice minOccurs="0" maxOccurs="unbounded">
                        <xs:element ref="p"/>
                        <xs:element ref="hl2"/>
                        <xs:element ref="nitf-table"/>
                        <xs:element ref="ol"/>
                   </xs:choice>
                   <xs:attribute name="id" type="xs:ID"/>
              </xs:complexType>
         </xs:element>
    Does Oracle not support an element reference to another element whose name contains a dot?
    For instance, body -> body.content.
    Thanks for your attention.

    Sorry, an amendment to the schema:
         <xs:element name="body">
              <xs:complexType>
                   <xs:sequence>
                        <xs:element ref="body.head" minOccurs="0"/>
                        <xs:element ref="body.content" minOccurs="0" maxOccurs="unbounded"/>
                        <xs:element ref="body.end" minOccurs="0"/>
                   </xs:sequence>
                   <xs:attribute name="id" type="xs:ID"/>
                   <xs:attribute name="class" type="xs:NMTOKENS"/>
                   <xs:attribute name="style" type="xs:string"/>
              </xs:complexType>
         </xs:element>

  • Client installation with Net Configuration Assistant: ORA-00604 / ORA-00224

    Hi forum,
    I am trying to install a client on a remote computer with the Net Configuration Assistant. When testing the connection as user SYSTEM I always get Oracle errors
    ORA-00604 / ORA-00224.
    When I try to connect as SYS, the response is: connection should be done with SYSDBA or SYSOPER privilege.
    At the command line I can log in with: sqlplus sys/...@srvtstora:1527/oeg
    where oeg is the ORACLE_SID in question and srvtstora is the PC on which the
    database resides.
    What can I do?
    Kind regards
    Samplitude

    error ORA-00604: error occurred at recursive SQL level
    ORA-00224, 00000, "controlfile resize attempted with illegal record type (%s)"
    Is that what it shows?
    Then there are problems with the controlfile. Make sure you have a valid backup before proceeding any further.
    On the other hand, the error when connecting as SYS is because you have to specify the SYSDBA/SYSOPER role in the connect string (sqlplus sys/SysPasswd as sysdba).

  • Strange explain plan when querying SYS tables

    Oracle Version 9.2.0.7
    We have an application that runs the following query on Oracle 9.2.0.7
    SELECT T1.TABLE_NAME,T1.COLUMN_NAME, T1.SRID, T2.SDO_UB, T2.SDO_LB, T1.OWNER FROM ALL_SDO_GEOM_METADATA T1, TABLE(T1.DIMINFO) T2 WHERE T1.OWNER=UPPER(:"SYS_B_0") AND T1.TABLE_NAME=UPPER(:"SYS_B_1")
    Without the self join the query is fine, but with the self join, on our customer's database, the explain plan does full table scans and hash joins on SYS tables and the query takes 2 minutes.
    Rows Row Source Operation
    2 FILTER
    2 NESTED LOOPS
    1 TABLE ACCESS FULL SDO_GEOM_METADATA_TABLE
    2 COLLECTION ITERATOR PICKLER FETCH
    1 UNION-ALL
    0 FILTER
    0 NESTED LOOPS OUTER
    0 HASH JOIN
    37 TABLE ACCESS FULL TS$
    0 HASH JOIN OUTER
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS
    1 NESTED LOOPS
    1 TABLE ACCESS BY INDEX ROWID USER$
    1 INDEX UNIQUE SCAN I_USER1 (object id 44)
    1 TABLE ACCESS BY INDEX ROWID OBJ$
    1 INDEX RANGE SCAN I_OBJ2 (object id 37)
    0 TABLE ACCESS CLUSTER TAB$
    1 INDEX UNIQUE SCAN I_OBJ# (object id 3)
    0 TABLE ACCESS CLUSTER SEG$
    0 INDEX UNIQUE SCAN I_FILE#_BLOCK# (object id 9)
    0 TABLE ACCESS BY INDEX ROWID OBJ$
    0 INDEX UNIQUE SCAN I_OBJ1 (object id 36)
    0 TABLE ACCESS FULL USER$
    0 INDEX UNIQUE SCAN I_OBJ1 (object id 36)
    0 NESTED LOOPS
    0 FIXED TABLE FULL X$KZSRO
    0 INDEX RANGE SCAN I_OBJAUTH2 (object id 109)
    0 FIXED TABLE FULL X$KZSPR
    0 FILTER
    0 NESTED LOOPS OUTER
    0 HASH JOIN
    54 TABLE ACCESS FULL USER$
    0 HASH JOIN
    29447 TABLE ACCESS FULL OBJ$
    0 HASH JOIN OUTER
    0 HASH JOIN OUTER
    0 HASH JOIN OUTER
    0 NESTED LOOPS
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    1 NESTED LOOPS
    1 TABLE ACCESS BY INDEX ROWID USER$
    1 INDEX UNIQUE SCAN I_USER1 (object id 44)
    1 TABLE ACCESS BY INDEX ROWID OBJ$
    1 INDEX RANGE SCAN I_OBJ2 (object id 37)
    0 TABLE ACCESS CLUSTER TAB$
    1 INDEX UNIQUE SCAN I_OBJ# (object id 3)
    0 TABLE ACCESS CLUSTER TS$
    0 INDEX UNIQUE SCAN I_TS# (object id 7)
    0 TABLE ACCESS CLUSTER COL$
    0 TABLE ACCESS CLUSTER SEG$
    0 INDEX UNIQUE SCAN I_FILE#_BLOCK# (object id 9)
    0 TABLE ACCESS BY INDEX ROWID OBJ$
    0 INDEX UNIQUE SCAN I_OBJ1 (object id 36)
    0 TABLE ACCESS CLUSTER COLTYPE$
    0 TABLE ACCESS FULL USER$
    0 TABLE ACCESS FULL OBJ$
    0 TABLE ACCESS FULL USER$
    0 INDEX UNIQUE SCAN I_OBJ1 (object id 36)
    0 NESTED LOOPS
    0 FIXED TABLE FULL X$KZSRO
    0 INDEX RANGE SCAN I_OBJAUTH2 (object id 109)
    0 FIXED TABLE FULL X$KZSPR
    1 FILTER
    1 NESTED LOOPS
    1 NESTED LOOPS OUTER
    1 NESTED LOOPS
    1 TABLE ACCESS BY INDEX ROWID USER$
    1 INDEX UNIQUE SCAN I_USER1 (object id 44)
    1 TABLE ACCESS BY INDEX ROWID OBJ$
    1 INDEX RANGE SCAN I_OBJ2 (object id 37)
    0 INDEX UNIQUE SCAN I_TYPED_VIEW1 (object id 105)
    1 INDEX UNIQUE SCAN I_VIEW1 (object id 104)
    0 NESTED LOOPS
    0 FIXED TABLE FULL X$KZSRO
    0 INDEX RANGE SCAN I_OBJAUTH2 (object id 109)
    0 FIXED TABLE FULL X$KZSPR
    On our development database it takes 0.07 sec with no full table scans and no hash joins.
    Rows Row Source Operation
    2 NESTED LOOPS
    1 TABLE ACCESS BY INDEX ROWID SDO_GEOM_METADATA_TABLE
    1 INDEX RANGE SCAN SDO_GEOM_IDX (object id 36753)
    1 UNION-ALL
    0 FILTER
    0 NESTED LOOPS
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS
    1 NESTED LOOPS
    1 TABLE ACCESS BY INDEX ROWID USER$
    1 INDEX UNIQUE SCAN I_USER1 (object id 44)
    1 TABLE ACCESS BY INDEX ROWID OBJ$
    1 INDEX RANGE SCAN I_OBJ2 (object id 37)
    0 TABLE ACCESS CLUSTER TAB$
    1 INDEX UNIQUE SCAN I_OBJ# (object id 3)
    0 TABLE ACCESS BY INDEX ROWID OBJ$
    0 INDEX UNIQUE SCAN I_OBJ1 (object id 36)
    0 INDEX UNIQUE SCAN I_OBJ1 (object id 36)
    0 TABLE ACCESS CLUSTER USER$
    0 INDEX UNIQUE SCAN I_USER# (object id 11)
    0 TABLE ACCESS CLUSTER SEG$
    0 INDEX UNIQUE SCAN I_FILE#_BLOCK# (object id 9)
    0 TABLE ACCESS CLUSTER TS$
    0 INDEX UNIQUE SCAN I_TS# (object id 7)
    0 NESTED LOOPS
    0 FIXED TABLE FULL X$KZSRO
    0 INDEX RANGE SCAN I_OBJAUTH2 (object id 109)
    0 FIXED TABLE FULL X$KZSPR
    0 FILTER
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS OUTER
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    1 NESTED LOOPS
    1 TABLE ACCESS BY INDEX ROWID USER$
    1 INDEX UNIQUE SCAN I_USER1 (object id 44)
    1 TABLE ACCESS BY INDEX ROWID OBJ$
    1 INDEX RANGE SCAN I_OBJ2 (object id 37)
    0 TABLE ACCESS CLUSTER TAB$
    1 INDEX UNIQUE SCAN I_OBJ# (object id 3)
    0 TABLE ACCESS CLUSTER TS$
    0 INDEX UNIQUE SCAN I_TS# (object id 7)
    0 TABLE ACCESS CLUSTER COL$
    0 TABLE ACCESS CLUSTER COLTYPE$
    0 TABLE ACCESS CLUSTER SEG$
    0 INDEX UNIQUE SCAN I_FILE#_BLOCK# (object id 9)
    0 TABLE ACCESS BY INDEX ROWID OBJ$
    0 INDEX UNIQUE SCAN I_OBJ1 (object id 36)
    0 TABLE ACCESS CLUSTER USER$
    0 INDEX UNIQUE SCAN I_USER# (object id 11)
    0 TABLE ACCESS BY INDEX ROWID OBJ$
    0 INDEX UNIQUE SCAN I_OBJ1 (object id 36)
    0 TABLE ACCESS CLUSTER USER$
    0 INDEX UNIQUE SCAN I_USER# (object id 11)
    0 INDEX UNIQUE SCAN I_OBJ1 (object id 36)
    0 TABLE ACCESS BY INDEX ROWID OBJ$
    0 INDEX RANGE SCAN I_OBJ3 (object id 38)
    0 TABLE ACCESS CLUSTER USER$
    0 INDEX UNIQUE SCAN I_USER# (object id 11)
    0 NESTED LOOPS
    0 FIXED TABLE FULL X$KZSRO
    0 INDEX RANGE SCAN I_OBJAUTH2 (object id 109)
    0 FIXED TABLE FULL X$KZSPR
    1 FILTER
    1 NESTED LOOPS
    1 NESTED LOOPS OUTER
    1 NESTED LOOPS
    1 TABLE ACCESS BY INDEX ROWID USER$
    1 INDEX UNIQUE SCAN I_USER1 (object id 44)
    1 TABLE ACCESS BY INDEX ROWID OBJ$
    1 INDEX RANGE SCAN I_OBJ2 (object id 37)
    0 INDEX UNIQUE SCAN I_TYPED_VIEW1 (object id 105)
    1 INDEX UNIQUE SCAN I_VIEW1 (object id 104)
    0 NESTED LOOPS
    0 FIXED TABLE FULL X$KZSRO
    0 INDEX RANGE SCAN I_OBJAUTH2 (object id 109)
    0 FIXED TABLE FULL X$KZSPR
    2 COLLECTION ITERATOR PICKLER FETCH
    ALL_SDO_GEOM_METADATA is a view in the MDSYS schema (generated by Oracle ).
    SELECT SDO_OWNER OWNER,
    SDO_TABLE_NAME TABLE_NAME,
    SDO_COLUMN_NAME COLUMN_NAME,
    SDO_DIMINFO DIMINFO,
    SDO_SRID SRID
    FROM SDO_GEOM_METADATA_TABLE
    WHERE
    (exists
    (select table_name from all_tables
    where table_name=sdo_table_name
    and owner = sdo_owner
    union all
    select table_name from all_object_tables
    where table_name=sdo_table_name
    and owner = sdo_owner
    union all
    select view_name table_name from all_views
    where view_name=sdo_table_name
    and owner = sdo_owner))
    Statistics have been gathered for the MDSYS user.
    If this had not been the SYS schema I would have immediately concluded that fresh statistics are required. The SYS objects concerned are valid, with all their indexes.
    From my understanding you are not meant to gather stats for the SYS schema in Oracle 9, as data dictionary queries still use the RBO?
    Any ideas as to why Oracle is doing full table scans when querying SYS tables? The optimizer_mode is set to FIRST_ROWS.
    Any ideas gratefully received.
    Thanks
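    One quick check (a hedged sketch, not from the thread): see whether the dictionary tables that appear in the bad plan carry optimizer statistics at all, since under the 9i rule-based dictionary access they normally should not:
    -- LAST_ANALYZED should typically be NULL for these in a 9i database
    SELECT owner, table_name, num_rows, last_analyzed
      FROM dba_tables
     WHERE owner = 'SYS'
       AND table_name IN ('OBJ$', 'USER$', 'TAB$', 'TS$', 'SEG$', 'COL$');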

    Maybe I'm missing something, but this:
    INDEX FULL SCAN     SISESTAT     I0_ESTRUTURA_COMERCIAL
    indicates that one of those indexes is being used.
    This:
    T_ESTRUTURA_COMERCIAL
    is nowhere to be found in your Explain Plan. It appears that either you have posted the wrong plan
    or Oracle is doing a query rewrite to a materialized view.

  • Analyse a partitioned table with more than 50 million rows

    Hi,
    I have a partitioned table with more than 50 million rows. The table was last analysed on 1/25/2007. Do I need to analyse it again? (Queries against this table are very slow.)
    If I need to analyse it, what is the best way? Use DBMS_STATS and schedule a job?
    Thanks

    A partitioned table has global statistics as well as partition (and subpartition if the table is subpartitioned) statistics. My guess is that you mean to say that the last time that global statistics were gathered was in 2007. Is that guess accurate? Are the partition-level statistics more recent?
    Do any of your queries actually use global statistics? Or would you expect that every query involving this table would specify one or more values for the partitioning key and thus force partition pruning to take place? If all your queries are doing partition pruning, global statistics are irrelevant, so it doesn't matter how old and out of date they are.
    Are you seeing any performance problems that are potentially attributable to stale statistics on this table? If you're not seeing any performance problems, leaving the statistics well enough alone may be the most prudent course of action. Gathering statistics would only have the potential to change query plans. And since the cost of a query plan regressing is orders of magnitude greater than the benefit of a different query performing faster (at least for most queries in most systems), the balance of risks would argue for leaving the stats alone if there is no problem you're trying to solve.
    If your system does actually use global statistics and there are performance problems that you believe are potentially attributable to stale global statistics and your partition level statistics are accurate, you can gather just global statistics on the table probably with a reasonably small sample size. Make sure, though, that you back up your existing statistics just in case a query plan goes south. Ideally, you'd also have a test environment with identical (or nearly identical) data volumes that you could use to verify that gathering statistics doesn't cause any problems.
    Justin
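    A minimal PL/SQL sketch of the sequence described above (back up the current statistics, then gather only global statistics with a small sample); the schema, table and statistics-table names are placeholders and the 5% sample is just an example:
    BEGIN
      -- one-time step: a table to hold the statistics backup
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'MYSCHEMA', stattab => 'STATS_BACKUP');
      -- save the current statistics so they can be restored with
      -- DBMS_STATS.IMPORT_TABLE_STATS if a plan regresses
      DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'MYSCHEMA',
                                    tabname => 'BIG_PART_TAB',
                                    stattab => 'STATS_BACKUP');
      -- gather only global (table-level) statistics with a modest sample,
      -- leaving the partition-level statistics and the indexes alone
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => 'MYSCHEMA',
                                    tabname          => 'BIG_PART_TAB',
                                    granularity      => 'GLOBAL',
                                    estimate_percent => 5,
                                    cascade          => FALSE);
    END;
    /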

  • Performance between two partitioned tables with different structures

    Hi,
    I would like to know if there is a difference between two partitioned tables with different structures in terms of performance (access, queries, inserts, updates).
    I explain myself in detail :
    I have a table that stores one value every 10 minutes in a day (so we have 144 values (24*6) in the whole day), with the corresponding id.
    Here is the structure :
    | Table T1 |
    + id PK |
    + date PK |
    + sample1 |
    + sample2 |
    + ... |
    + sample144 |
    The table is partitioned on the date column, with one partition per month. The primary key is on the columns (id, date).
    There is an additional index on the column (id) (is it useful?).
    I would like to know if it is better to have a table with just (id, date, value), so that for one row in the first table we would have 144 rows in the new table. The partitioning would already be on the columns (id, date) with the associated index.
    What are the gains or losses in performance with this new structure (access, DML, storage)?
    I discussed this with the Java developers and they say it is simpler to manage in their code.
    Oracle version : Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    Thanks & Regards
    From France
    Oliver
    Edited by: 998239 on 5 avr. 2013 01:59

    I mean storage in tablespaces and datafiles on disk.
    Can you justify, please, and give me concrete arguments why the two structures are equivalent (except for inserting data into T(id, date, value)), because I have to make a report.
    I didn't say anything like "the two structures are equivalent (except inserting data in T(id, date, value))". I said "About structure: TABLE1(id, date, value) is better than TABLE1(id, date, sample1, ..., sample144)" because:
    1) Oracle has a restriction on the number of columns. OK, you can have 144 columns now, but what will you do if in the future you need more than 1000 columns?
    2) There are restrictions on table compression (table compression is not supported for tables with more than 255 columns).
    3) Storing values of the same type in different columns is bad practice.
    http://docs.oracle.com/cd/B28359_01/server.111/b28318/schema.htm#i4383
    I remember I have seen Tom's article about this but I can't find it now, sorry. If I find it I will post it here.
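    To make the comparison concrete, a minimal sketch of the narrow T1(id, date, value) design being argued for; the names, the interval-partitioning clause and the initial bound are assumptions, not from the thread (11.2 interval partitioning covers the one-partition-per-month requirement):
    CREATE TABLE t1_narrow (
      id          NUMBER NOT NULL,
      sample_date DATE   NOT NULL,          -- one row per id per 10-minute slot
      sample_val  NUMBER,
      CONSTRAINT t1_narrow_pk PRIMARY KEY (id, sample_date)
    )
    PARTITION BY RANGE (sample_date)
      INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
      (PARTITION p_initial VALUES LESS THAN (DATE '2013-01-01'));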

  • Is it possible to create a table with partitions in compress mode

    Hi All,
    I want to create a table with the compress option, with partitions. When I create it with partitions the compression isn't enabled, but with a normal (non-partitioned) table creation the compression option is enabled.
    My question is:
    can't we create a table with partitions/subpartitions in compress mode? Please help.
    Below is the code that i have used for table creation.
    CREATE TABLE temp
      TRADE_ID                    NUMBER,
      SRC_SYSTEM_ID               VARCHAR2(60 BYTE),
      SRC_TRADE_ID                VARCHAR2(60 BYTE),
      SRC_TRADE_VERSION           VARCHAR2(60 BYTE),
      ORIG_SRC_SYSTEM_ID          VARCHAR2(30 BYTE),
      TRADE_STATUS                VARCHAR2(60 BYTE),
      TRADE_TYPE                  VARCHAR2(60 BYTE),
      SECURITY_TYPE               VARCHAR2(60 BYTE),
      VOLUME                      NUMBER,
      ENTRY_DATE                  DATE,
        REASON                      VARCHAR2(255 BYTE),
    TABLESPACE data
    PCTUSED    0
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    NOLOGGING
    COMPRESS
    NOCACHE
    PARALLEL (DEGREE 6 INSTANCES 1)
    MONITORING
    PARTITION BY RANGE (TRADE_DATE)
    SUBPARTITION BY LIST (SRC_SYSTEM_ID)
    SUBPARTITION TEMPLATE
      (SUBPARTITION SALES VALUES ('sales'),
       SUBPARTITION MAG VALUES ('MAG'),
       SUBPARTITION SPI VALUES ('SPI', 'SPIM', 'SPIIA'),
       SUBPARTITION FIS VALUES ('FIS'),
       SUBPARTITION GD VALUES ('GS'),
       SUBPARTITION ST VALUES ('ST'),
       SUBPARTITION KOR VALUES ('KOR'),
       SUBPARTITION BLR VALUES ('BLR'),
       SUBPARTITION SUT VALUES ('SUT'),
       SUBPARTITION RM VALUES ('RM'),
       SUBPARTITION DEFAULT VALUES (default)
    PARTITION RMS_TRADE_DLY_MAX VALUES LESS THAN (MAXVALUE)    
        LOGGING
            TABLESPACE data
         ( SUBPARTITION TS_MAX_SALES VALUES ('SALES')      TABLESPACE data,
        SUBPARTITION TS_MAX_MAG VALUES ('MAG')      TABLESPACE data,
        SUBPARTITION TS_MAX_SPI VALUES ('SPI', 'SPIM', 'SPIIA')      TABLESPACE data,
        SUBPARTITION TS_MAX_FIS VALUES ('FIS')      TABLESPACE data,
        SUBPARTITION TS_MAX_GS VALUES ('GS')      TABLESPACE data,
        SUBPARTITION TS_MAX_ST VALUES ('ST')      TABLESPACE data,
        SUBPARTITION TS_MAX_KOR VALUES ('KOR')      TABLESPACE data,
        SUBPARTITION TS_MAX_BLR VALUES ('BLR')      TABLESPACE data,
        SUBPARTITION TS_MAX_SUT VALUES ('SUT')      TABLESPACE data,
        SUBPARTITION TS_MAX_RM VALUES ('RM')      TABLESPACE data,
        SUBPARTITION TS_MAX_DEFAULT VALUES (default)      TABLESPACE data)); Edited by: user11942774 on 8 Dec, 2011 5:17 AM

    user11942774 wrote:
    I want to create a table with the compress option, with partitions. When I create it with partitions the compression isn't enabled, but with normal table creation the compression option is enabled.
    First of all, your CREATE TABLE statement is full of syntax errors. Next time test it before posting - we don't want to spend time fixing things not related to your question.
    Now, I bet you check the COMPRESSION value of the partitioned table the same way you do it for a non-partitioned table - in USER_TABLES - and therefore get a misleading result. Since compression can be enabled at the individual partition level, you need to check COMPRESSION in USER_TAB_PARTITIONS:
    SQL> CREATE TABLE temp
      2  (
      3    TRADE_ID                    NUMBER,
      4    SRC_SYSTEM_ID               VARCHAR2(60 BYTE),
      5    SRC_TRADE_ID                VARCHAR2(60 BYTE),
      6    SRC_TRADE_VERSION           VARCHAR2(60 BYTE),
      7    ORIG_SRC_SYSTEM_ID          VARCHAR2(30 BYTE),
      8    TRADE_STATUS                VARCHAR2(60 BYTE),
      9    TRADE_TYPE                  VARCHAR2(60 BYTE),
    10    SECURITY_TYPE               VARCHAR2(60 BYTE),
    11    VOLUME                      NUMBER,
    12    ENTRY_DATE                  DATE,
    13      REASON                      VARCHAR2(255 BYTE),
    14    TRADE_DATE                  DATE
    15  )
    16  TABLESPACE users
    17  PCTUSED    0
    18  PCTFREE    10
    19  INITRANS   1
    20  MAXTRANS   255
    21  NOLOGGING
    22  COMPRESS
    23  NOCACHE
    24  PARALLEL (DEGREE 6 INSTANCES 1)
    25  MONITORING
    26  PARTITION BY RANGE (TRADE_DATE)
    27  SUBPARTITION BY LIST (SRC_SYSTEM_ID)
    28  SUBPARTITION TEMPLATE
    29    (SUBPARTITION SALES VALUES ('sales'),
    30     SUBPARTITION MAG VALUES ('MAG'),
    31     SUBPARTITION SPI VALUES ('SPI', 'SPIM', 'SPIIA'),
    32     SUBPARTITION FIS VALUES ('FIS'),
    33     SUBPARTITION GD VALUES ('GS'),
    34     SUBPARTITION ST VALUES ('ST'),
    35     SUBPARTITION KOR VALUES ('KOR'),
    36     SUBPARTITION BLR VALUES ('BLR'),
    37     SUBPARTITION SUT VALUES ('SUT'),
    38     SUBPARTITION RM VALUES ('RM'),
    39     SUBPARTITION DEFAULT_SUB VALUES (default)
    40    )  
    41  (  
    42   PARTITION RMS_TRADE_DLY_MAX VALUES LESS THAN (MAXVALUE)    
    43      LOGGING
    44          TABLESPACE users
    45       ( SUBPARTITION TS_MAX_SALES VALUES ('SALES')      TABLESPACE users,
    46      SUBPARTITION TS_MAX_MAG VALUES ('MAG')      TABLESPACE users,
    47      SUBPARTITION TS_MAX_SPI VALUES ('SPI', 'SPIM', 'SPIIA')      TABLESPACE users,
    48      SUBPARTITION TS_MAX_FIS VALUES ('FIS')      TABLESPACE users,
    49      SUBPARTITION TS_MAX_GS VALUES ('GS')      TABLESPACE users,
    50      SUBPARTITION TS_MAX_ST VALUES ('ST')      TABLESPACE users,
    51      SUBPARTITION TS_MAX_KOR VALUES ('KOR')      TABLESPACE users,
    52      SUBPARTITION TS_MAX_BLR VALUES ('BLR')      TABLESPACE users,
    53      SUBPARTITION TS_MAX_SUT VALUES ('SUT')      TABLESPACE users,
    54      SUBPARTITION TS_MAX_RM VALUES ('RM')      TABLESPACE users,
    55      SUBPARTITION TS_MAX_DEFAULT VALUES (default)      TABLESPACE users));
    Table created.
    SQL>
    SQL>
    SQL> SELECT  PARTITION_NAME,
      2          COMPRESSION
      3    FROM USER_TAB_PARTITIONS
      4    WHERE TABLE_NAME = 'TEMP'
      5  /
    PARTITION_NAME                 COMPRESS
    RMS_TRADE_DLY_MAX              ENABLED
    SQL> SELECT  COMPRESSION
      2    FROM USER_TABLES
      3    WHERE TABLE_NAME = 'TEMP'
      4  /
    COMPRESS
    SQL> SY.

  • Oracle 11.2 - Perform parallel DML on a non partitioned table with LOB column

    Hi,
    Since I wanted to demonstrate new Oracle 12c enhancements on SecureFiles, I tried to use PDML statements on a non partitioned table with LOB column, in both Oracle 11g and Oracle 12c releases. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
    Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
    INSERT AS SELECT
    CREATE TABLE AS SELECT
    DELETE
    UPDATE
    MERGE (conditional UPDATE and INSERT)
    Multi-table INSERT
    So I created and populated a simple table with a BLOB column:
    SQL> CREATE TABLE T1 (A BLOB);
    Table created.
    Then, I tried to see the execution plan of a parallel DELETE:
    SQL> EXPLAIN PLAN FOR
      2  delete /*+parallel (t1,8) */ from t1;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3718066193
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |  2048 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |  2048 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement (level=2)
    And I finished by executing the statement.
    SQL> commit;
    Commit complete.
    SQL> alter session enable parallel dml;
    Session altered.
    SQL> delete /*+parallel (t1,8) */ from t1;
    2048 rows deleted.
    As we can see, the statement has been run as parallel:
    SQL> select * from v$pq_sesstat;
    STATISTIC                      LAST_QUERY SESSION_TOTAL
    Queries Parallelized                    1             1
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             1
    Server Threads                          5             0
    Allocation Height                       5             0
    Allocation Width                        1             0
    Local Msgs Sent                        55            55
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                      55            55
    Distr Msgs Recv'd                       0             0
    11 rows selected.
    Is this normal? It is not supposed to be supported on Oracle 11g for a non-partitioned table containing a LOB column...
    Thank you for your help.
    Michael

    Yes, I did it. I tried with force parallel dml, and these are the results on my 12c DB, with the non-partitioned table and the SecureFiles LOB column.
    SQL> explain plan for delete from t1;
    Explained.
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |     4 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |     4 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    The DELETE is not performed in parallel.
    I tried another statement:
    SQL> explain plan for
    2        insert into t1 select * from t1;
    Here are the results:
    11g
    | Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT         |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  LOAD TABLE CONVENTIONAL | T1       |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)   | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR    |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL   | T1       |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    12c
    | Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT                   |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)              | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                  | T1       |       |       |            |          |  Q1,00 | PCWP |            |
    |   4 |     OPTIMIZER STATISTICS GATHERING |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR             |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    It seems that the DELETE statement has problems but not the INSERT AS SELECT!
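    One hedged way to double-check whether the DML itself (and not just the underlying scan) ran in parallel is to look at V$PQ_SESSTAT right after the statement; this check is not from the thread:
    -- LAST_QUERY = 1 on 'DML Parallelized' means the DML ran in parallel;
    -- 0 there with 'Queries Parallelized' = 1 means only the query side did.
    SELECT statistic, last_query
      FROM v$pq_sesstat
     WHERE statistic IN ('DML Parallelized', 'Queries Parallelized');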

  • Convert a non-partitioned table to a partitioned table

    Hello Everybody
    I just want to ask how to convert a non-partitioned table to a partitioned table?
    Thanks
    Ramez S. Sawires

    Dear ARF,
    First of all thank you for replying to me; second, do you have any links about the dbms_redefinition package?
    I am using Oracle Database 10g.
    Thanks
    Ramez S. Sawires
    Message was edited by:
    Ramez S. Sawires
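    Since the follow-up asks about the dbms_redefinition package, here is a rough sketch of the usual online-redefinition sequence on 10g; the schema name, the source table and the pre-created partitioned interim table (MY_TABLE_PART) are placeholders, not from the thread:
    DECLARE
      l_errors PLS_INTEGER;
    BEGIN
      -- 1) verify the source table can be redefined (by primary key here)
      DBMS_REDEFINITION.CAN_REDEF_TABLE('MYSCHEMA', 'MY_TABLE',
                                        DBMS_REDEFINITION.CONS_USE_PK);
      -- 2) start redefinition into the interim table, which was pre-created
      --    with the desired partitioning
      DBMS_REDEFINITION.START_REDEF_TABLE('MYSCHEMA', 'MY_TABLE', 'MY_TABLE_PART',
                                          options_flag => DBMS_REDEFINITION.CONS_USE_PK);
      -- 3) copy dependent objects (indexes, triggers, constraints, grants)
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('MYSCHEMA', 'MY_TABLE', 'MY_TABLE_PART',
                                              num_errors => l_errors);
      -- 4) swap the original and interim tables
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('MYSCHEMA', 'MY_TABLE', 'MY_TABLE_PART');
    END;
    /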

  • Migrating a new partitioned table with transportable tablespace

    I created a partitioned table with 2 partitions (2010 and 2011) and used transportable tablespace to migrate the data over to a new environment. My question is, if I decide to add a partition (2012) in the future, can I simply move that new partition along with the associated datafile via transportable tablespace, or would I have to move all the partitions (2010, 2011, 2012)?

    user564785 wrote:
    I created a partitioned table with 2 partitions (2010 and 2011) and used transportable tablespace to migrate the data over to a new environment. My question is, if I decide to add a partition (2012) in the future, can I simply move that new partition along with the associated datafile via transportable tablespace, or would I have to move all the partitions (2010, 2011, 2012)?
    Yes, why not:
    1) create a table as a CTAS from the 2012 data in a new tablespace on the source
    2) transport the tablespace
    3) add a partition to the existing partitioned table, or exchange the partition (see the sketch below)
    Oracle has also documented this procedure:
    http://docs.oracle.com/cd/B28359_01/server.111/b28310/tspaces013.htm#i1007549
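    A minimal sketch of step 3 on the target side, assuming (purely for illustration) that the transported 2012 data landed in a table called T2012_STAGE and that the partitioned table SALES is range-partitioned by a date column with no MAXVALUE partition:
    -- add an empty partition for 2012 (its bound must be above the current highest)
    ALTER TABLE sales ADD PARTITION p2012
      VALUES LESS THAN (TO_DATE('01-01-2013', 'DD-MM-YYYY'));
    -- swap the transported data into that partition without moving any rows
    ALTER TABLE sales EXCHANGE PARTITION p2012 WITH TABLE t2012_stage
      INCLUDING INDEXES WITHOUT VALIDATION;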

  • "A GUID Partition Table (GPT) partitioning scheme is required."

    Hi.
    I'm trying to encrypt an external HDD and it says "A GUID Partition Table (GPT) partitioning scheme is required."
    I'm aware the HDD needs to be Mac OS Plus (Journaled); I have checked the disk information and it is formatted as Mac OS Plus (Journaled), so why does it keep saying that a GPT partitioning scheme is required?
    Any idea on how to overcome this?
    Thanks!

    The issue was with the partition map: it was Apple Partition Map instead of GUID, as shown in diskutil list.

  • Select count from large fact tables with bitmap indexes on them

    Hi..
    I have several large fact tables with bitmap indexes on them, and when I do a select count(*) from these tables, I get a different result than when I do a select count(*), column_one from the table group by column_one. I don't have any null values in these columns. Is there a patch or a one-off fix that can rectify this?
    Thx

    You may have corruption in the index if the queries ...
    Select /*+ full(t) */ count(*) from my_table t
    ... and ...
    Select /*+ index_combine(t my_index) */ count(*) from my_table t;
    ... give different results.
    Look at metalink for patches, and in the meantime drop-and-recreate the indexes or make them unusable then rebuild them.
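    A sketch of the "make them unusable then rebuild" step mentioned above; the index and partition names are placeholders:
    -- non-partitioned bitmap index
    ALTER INDEX my_bix UNUSABLE;
    ALTER INDEX my_bix REBUILD;
    -- partitioned (local) bitmap index: mark and rebuild partition by partition
    ALTER INDEX my_local_bix MODIFY PARTITION p2011 UNUSABLE;
    ALTER INDEX my_local_bix REBUILD PARTITION p2011;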

  • ORA-00604 & ORA-30512 CANNOT MODIFY TABLE MORE THAN ONCE IN A TRANSACTION

    We have a requirement where two tables should be in sync at any given point
    in time with respect to the structure of the tables.
    Any change on table/column via ALTER (MODIFY, ADD, RENAME COLUMN, DROP
    COLUMN) on the parent table should be replicated to the replica table.
    I created a DDL_TRIGGER on the schema and the desired result is achieved but
    for the following one scenario for which its failing.
    The issue is, if we try to reduce the width of the column (via ALTER ..
    MODIFY) it fails with the following error
    ORA-00604: error occurred at recursive SQL level 1
    ORA-30512: cannot modify DEVTESTF_OIM.REPLICA_ABC more than once in a
    transaction
    Please follow these steps to reproduce the issue (it occurs across DB versions; checked on 10.2, 11.1 and 11.2):
    -- Step1 Create Parent Table
    CREATE TABLE abc (col1 VARCHAR2(10))
    -- Step2 Create Replica Table
    CREATE TABLE replica_abc (col1 VARCHAR2(10))
    -- Step3 Create DDL Trigger
    CREATE OR REPLACE TRIGGER ddl_trigger
    AFTER ALTER ON SCHEMA
    DECLARE
    operation VARCHAR2(30);
    object_name VARCHAR2(30);
    l_sqltext VARCHAR2(100);
    i PLS_INTEGER;
    l_count NUMBER:=0;
    sql_text dbms_standard.ora_name_list_t;
    BEGIN
    i := dbms_standard.sql_txt(sql_text);
    SELECT ora_sysevent, ora_dict_obj_name, UPPER(sql_text(i))
    INTO operation, object_name, l_sqltext
    FROM dual;
    IF ora_dict_obj_name = 'ABC' THEN
    l_count := INSTR(l_sqltext,'ADD CONSTRAINT',1,1);
    l_count := l_count + INSTR(l_sqltext,'DISABLE',1,1);
    l_count := l_count + INSTR(l_sqltext,'DROP CONSTRAINT',1,1);
    l_count := l_count + INSTR(l_sqltext,'PRIMARY KEY',1,1);
    l_count := l_count + INSTR(l_sqltext,'ADD CHECK',1,1);
    IF (l_count = 0) THEN
    l_count := INSTR(l_sqltext,'ADD',1,1);
    l_count := l_count + INSTR(l_sqltext,'MODIFY',1,1);
    l_count := l_count + INSTR(l_sqltext,'DROP COLUMN',1,1);
    l_count := l_count + INSTR(l_sqltext,'RENAME COLUMN',1,1);
    IF (l_count >0) THEN
    l_sqltext := REPLACE(l_sqltext,'TABLE ABC','TABLE REPLICA_ABC');
    execute immediate l_sqltext;
    END IF;
    END IF;
    END IF;
    EXCEPTION
    WHEN OTHERS THEN
    RAISE ;
    END;
    -- Step 4 Issue the following ALTER command on the Parent table 'ABC'
    ALTER TABLE ABC MODIFY COL1 VARCHAR2(9);
    will show the following
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-30512: cannot modify DEVTESTF_OIM.REPLICA_ABC more than once in a
    transaction
    ORA-06512: at line 34
    whereas the following command works fine
    ALTER TABLE ABC MODIFY COL1 VARCHAR2(11);
    and so do the rest of the operations
    ALTER TABLE ABC ADD COL2 VARCHAR2(20);
    ALTER TABLE ABC RENAME COLUMN COL2 TO COL3;
    ALTER TABLE ABC DROP COLUMN COL3;
    The issue occurs only when reducing the size of a VARCHAR2 column; the rest of the operations work fine.
    Any suggestion or workaround, please?

    It looks like a bug to me. The failing statement from the SQL trace is
    PARSE ERROR #12:len=77 dep=3 uid=0 oct=3 lid=0 tim=1263332549608656 err=30512
    select /*+ first_rows */ 1 from "TIM"."REPLICA_ABC" where LENGTHB("COL1") > 9
    and the exception cannot explain it.

  • ORA-07445 in the alert log when inserting into table with XMLType column

    I'm trying to insert an xml document into a table with a schema-based XMLType column. When I try to insert a row (using PL/SQL Developer), Oracle is busy for a few seconds and then the connection to Oracle is lost.
    Below you'll find the following to recreate the problem:
    a) contents from the alert log
    b) create script for the table
    c) the before-insert trigger
    d) the xml-schema
    e) code for registering the schema
    f) the test program
    g) platform information
    Alert Log:
    Fri Aug 17 00:44:11 2007
    Errors in file /oracle/app/oracle/product/10.2.0/db_1/admin/dntspilot2/udump/dntspilot2_ora_13807.trc:
    ORA-07445: exception encountered: core dump [SIGSEGV] [Address not mapped to object] [475177] [] [] []
    Create script for the table:
    CREATE TABLE "DNTSB"."SIGNATURETABLE"
    (     "XML_DOCUMENT" "SYS"."XMLTYPE" ,
    "TS" TIMESTAMP (6) WITH TIME ZONE NOT NULL ENABLE
    ) XMLTYPE COLUMN "XML_DOCUMENT" XMLSCHEMA "http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd" ELEMENT "Object"
    ROWDEPENDENCIES ;
    Before-insert trigger:
    create or replace trigger BIS_SIGNATURETABLE
    before insert on signaturetable
    for each row
    declare
    -- local variables here
    l_sigtab_rec signaturetable%rowtype;
    begin
    if (:new.xml_document is not null) then
    :new.xml_document.schemavalidate();
    end if;
    l_sigtab_rec.xml_document := :new.xml_document;
    end BIS_SIGNATURETABLE;
    XML-Schema (xmldsig-core-schema.xsd):
    =====================================================================================
    <?xml version="1.0" encoding="utf-8"?>
    <!-- Schema for XML Signatures
    http://www.w3.org/2000/09/xmldsig#
    $Revision: 1.1 $ on $Date: 2002/02/08 20:32:26 $ by $Author: reagle $
    Copyright 2001 The Internet Society and W3C (Massachusetts Institute
    of Technology, Institut National de Recherche en Informatique et en
    Automatique, Keio University). All Rights Reserved.
    http://www.w3.org/Consortium/Legal/
    This document is governed by the W3C Software License [1] as described
    in the FAQ [2].
    [1] http://www.w3.org/Consortium/Legal/copyright-software-19980720
    [2] http://www.w3.org/Consortium/Legal/IPR-FAQ-20000620.html#DTD
    -->
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:xdb="http://xmlns.oracle.com/xdb"
    targetNamespace="http://www.w3.org/2000/09/xmldsig#" version="0.1" elementFormDefault="qualified">
    <!-- Basic Types Defined for Signatures -->
    <xs:simpleType name="CryptoBinary">
    <xs:restriction base="xs:base64Binary">
    </xs:restriction>
    </xs:simpleType>
    <!-- Start Signature -->
    <xs:element name="Signature" type="ds:SignatureType"/>
    <xs:complexType name="SignatureType">
    <xs:sequence>
    <xs:element ref="ds:SignedInfo"/>
    <xs:element ref="ds:SignatureValue"/>
    <xs:element ref="ds:KeyInfo" minOccurs="0"/>
    <xs:element ref="ds:Object" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:complexType>
    <xs:element name="SignatureValue" type="ds:SignatureValueType"/>
    <xs:complexType name="SignatureValueType">
    <xs:simpleContent>
    <xs:extension base="xs:base64Binary">
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:extension>
    </xs:simpleContent>
    </xs:complexType>
    <!-- Start SignedInfo -->
    <xs:element name="SignedInfo" type="ds:SignedInfoType"/>
    <xs:complexType name="SignedInfoType">
    <xs:sequence>
    <xs:element ref="ds:CanonicalizationMethod"/>
    <xs:element ref="ds:SignatureMethod"/>
    <xs:element ref="ds:Reference" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:complexType>
    <xs:element name="CanonicalizationMethod" type="ds:CanonicalizationMethodType"/>
    <xs:complexType name="CanonicalizationMethodType" mixed="true">
    <xs:sequence>
    <xs:any namespace="##any" minOccurs="0" maxOccurs="unbounded"/>
    <!-- (0,unbounded) elements from (1,1) namespace -->
    </xs:sequence>
    <xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
    </xs:complexType>
    <xs:element name="SignatureMethod" type="ds:SignatureMethodType"/>
    <xs:complexType name="SignatureMethodType" mixed="true">
    <xs:sequence>
    <xs:element name="HMACOutputLength" minOccurs="0" type="ds:HMACOutputLengthType"/>
    <xs:any namespace="##other" minOccurs="0" maxOccurs="unbounded"/>
    <!-- (0,unbounded) elements from (1,1) external namespace -->
    </xs:sequence>
    <xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
    </xs:complexType>
    <!-- Start Reference -->
    <xs:element name="Reference" type="ds:ReferenceType"/>
    <xs:complexType name="ReferenceType">
    <xs:sequence>
    <xs:element ref="ds:Transforms" minOccurs="0"/>
    <xs:element ref="ds:DigestMethod"/>
    <xs:element ref="ds:DigestValue"/>
    </xs:sequence>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    <xs:attribute name="URI" type="xs:anyURI" use="optional"/>
    <xs:attribute name="Type" type="xs:anyURI" use="optional"/>
    </xs:complexType>
    <xs:element name="Transforms" type="ds:TransformsType"/>
    <xs:complexType name="TransformsType">
    <xs:sequence>
    <xs:element ref="ds:Transform" maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:complexType>
    <xs:element name="Transform" type="ds:TransformType"/>
    <xs:complexType name="TransformType" mixed="true">
    <xs:choice minOccurs="0" maxOccurs="unbounded">
    <xs:any namespace="##other" processContents="lax"/>
    <!-- (1,1) elements from (0,unbounded) namespaces -->
    <xs:element name="XPath" type="xs:string"/>
    </xs:choice>
    <xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
    </xs:complexType>
    <!-- End Reference -->
    <xs:element name="DigestMethod" type="ds:DigestMethodType"/>
    <xs:complexType name="DigestMethodType" mixed="true">
    <xs:sequence>
    <xs:any namespace="##other" processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="Algorithm" type="xs:anyURI" use="required"/>
    </xs:complexType>
    <xs:element name="DigestValue" type="ds:DigestValueType"/>
    <xs:simpleType name="DigestValueType">
    <xs:restriction base="xs:base64Binary"/>
    </xs:simpleType>
    <!-- End SignedInfo -->
    <!-- Start KeyInfo -->
    <xs:element name="KeyInfo" type="ds:KeyInfoType"/>
    <xs:complexType name="KeyInfoType" mixed="true">
    <xs:choice maxOccurs="unbounded">
    <xs:element ref="ds:KeyName"/>
    <xs:element ref="ds:KeyValue"/>
    <xs:element ref="ds:RetrievalMethod"/>
    <xs:element ref="ds:X509Data"/>
    <xs:element ref="ds:PGPData"/>
    <xs:element ref="ds:SPKIData"/>
    <xs:element ref="ds:MgmtData"/>
    <xs:any processContents="lax" namespace="##other"/>
    <!-- (1,1) elements from (0,unbounded) namespaces -->
    </xs:choice>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:complexType>
    <xs:element name="KeyName" type="xs:string"/>
    <xs:element name="MgmtData" type="xs:string"/>
    <xs:element name="KeyValue" type="ds:KeyValueType"/>
    <xs:complexType name="KeyValueType" mixed="true">
    <xs:choice>
    <xs:element ref="ds:DSAKeyValue"/>
    <xs:element ref="ds:RSAKeyValue"/>
    <xs:any namespace="##other" processContents="lax"/>
    </xs:choice>
    </xs:complexType>
    <xs:element name="RetrievalMethod" type="ds:RetrievalMethodType"/>
    <xs:complexType name="RetrievalMethodType">
    <xs:sequence>
    <xs:element ref="ds:Transforms" minOccurs="0"/>
    </xs:sequence>
    <xs:attribute name="URI" type="xs:anyURI"/>
    <xs:attribute name="Type" type="xs:anyURI" use="optional"/>
    </xs:complexType>
    <!-- Start X509Data -->
    <xs:element name="X509Data" type="ds:X509DataType"/>
    <xs:complexType name="X509DataType">
    <xs:sequence maxOccurs="unbounded">
    <xs:choice>
    <xs:element name="X509IssuerSerial" type="ds:X509IssuerSerialType"/>
    <xs:element name="X509SKI" type="xs:base64Binary"/>
    <xs:element name="X509SubjectName" type="xs:string"/>
    <xs:element name="X509Certificate" type="xs:base64Binary"/>
    <xs:element name="X509CRL" type="xs:base64Binary"/>
    <xs:any namespace="##other" processContents="lax"/>
    </xs:choice>
    </xs:sequence>
    </xs:complexType>
    <xs:complexType name="X509IssuerSerialType">
    <xs:sequence>
    <xs:element name="X509IssuerName" type="xs:string"/>
    <xs:element name="X509SerialNumber" type="xs:integer"/>
    </xs:sequence>
    </xs:complexType>
    <!-- End X509Data -->
    <!-- Begin PGPData -->
    <xs:element name="PGPData" type="ds:PGPDataType"/>
    <xs:complexType name="PGPDataType">
    <xs:choice>
    <xs:sequence>
    <xs:element name="PGPKeyID" type="xs:base64Binary"/>
    <xs:element name="PGPKeyPacket" type="xs:base64Binary" minOccurs="0"/>
    <xs:any namespace="##other" processContents="lax" minOccurs="0"
    maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:sequence>
    <xs:element name="PGPKeyPacket" type="xs:base64Binary"/>
    <xs:any namespace="##other" processContents="lax" minOccurs="0"
    maxOccurs="unbounded"/>
    </xs:sequence>
    </xs:choice>
    </xs:complexType>
    <!-- End PGPData -->
    <!-- Begin SPKIData -->
    <xs:element name="SPKIData" type="ds:SPKIDataType"/>
    <xs:complexType name="SPKIDataType">
    <xs:sequence maxOccurs="unbounded">
    <xs:element name="SPKISexp" type="xs:base64Binary"/>
    <xs:any namespace="##other" processContents="lax" minOccurs="0"/>
    </xs:sequence>
    </xs:complexType>
    <!-- End SPKIData -->
    <!-- End KeyInfo -->
    <!-- Start Object (Manifest, SignatureProperty) -->
    <xs:element name="Object" type="ds:ObjectType"/>
    <xs:complexType name="ObjectType" mixed="true">
    <xs:sequence minOccurs="0" maxOccurs="unbounded">
    <xs:any namespace="##any" processContents="lax"/>
    </xs:sequence>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    <xs:attribute name="MimeType" type="xs:string" use="optional"/> <!-- add a grep facet -->
    <xs:attribute name="Encoding" type="xs:anyURI" use="optional"/>
    </xs:complexType>
    <xs:element name="Manifest" type="ds:ManifestType"/>
    <xs:complexType name="ManifestType">
    <xs:sequence>
    <xs:element ref="ds:Reference" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:complexType>
    <xs:element name="SignatureProperties" type="ds:SignaturePropertiesType"/>
    <xs:complexType name="SignaturePropertiesType">
    <xs:sequence>
    <xs:element ref="ds:SignatureProperty" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:complexType>
    <xs:element name="SignatureProperty" type="ds:SignaturePropertyType"/>
    <xs:complexType name="SignaturePropertyType" mixed="true">
    <xs:choice maxOccurs="unbounded">
    <xs:any namespace="##other" processContents="lax"/>
    <!-- (1,1) elements from (1,unbounded) namespaces -->
    </xs:choice>
    <xs:attribute name="Target" type="xs:anyURI" use="required"/>
    <xs:attribute name="Id" type="xs:ID" use="optional"/>
    </xs:complexType>
    <!-- End Object (Manifest, SignatureProperty) -->
    <!-- Start Algorithm Parameters -->
    <xs:simpleType name="HMACOutputLengthType">
    <xs:restriction base="xs:integer"/>
    </xs:simpleType>
    <!-- Start KeyValue Element-types -->
    <xs:element name="DSAKeyValue" type="ds:DSAKeyValueType"/>
    <xs:complexType name="DSAKeyValueType">
    <xs:sequence>
    <xs:sequence minOccurs="0">
    <xs:element name="P" type="ds:CryptoBinary"/>
    <xs:element name="Q" type="ds:CryptoBinary"/>
    </xs:sequence>
    <xs:element name="G" type="ds:CryptoBinary" minOccurs="0"/>
    <xs:element name="Y" type="ds:CryptoBinary"/>
    <xs:element name="J" type="ds:CryptoBinary" minOccurs="0"/>
    <xs:sequence minOccurs="0">
    <xs:element name="Seed" type="ds:CryptoBinary"/>
    <xs:element name="PgenCounter" type="ds:CryptoBinary"/>
    </xs:sequence>
    </xs:sequence>
    </xs:complexType>
    <xs:element name="RSAKeyValue" type="ds:RSAKeyValueType"/>
    <xs:complexType name="RSAKeyValueType">
    <xs:sequence>
    <xs:element name="Modulus" type="ds:CryptoBinary"/>
    <xs:element name="Exponent" type="ds:CryptoBinary"/>
    </xs:sequence>
    </xs:complexType>
    <!-- End KeyValue Element-types -->
    <!-- End Signature -->
    </xs:schema>
    ===============================================================================
    Code for registering the xml-schema
    begin
    dbms_xmlschema.deleteSchema('http://xmlns.oracle.com/xdb/schemas/DNTSB/www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
    dbms_xmlschema.DELETE_CASCADE_FORCE);
    end;
    begin
    DBMS_XMLSCHEMA.REGISTERURI(
    schemaurl => 'http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
    schemadocuri => 'http://www.sporfori.fo/schemas/www.w3.org/TR/xmldsig-core/xmldsig-core-schema.xsd',
    local => TRUE,
    gentypes => TRUE,
    genbean => FALSE,
    gentables => TRUE,
    force => FALSE,
    owner => 'DNTSB',
    options => 0);
    end;
    Test program
    -- Created on 17-07-2006 by EEJ
    declare
    XML_TEXT3 CLOB := '<Object xmlns="http://www.w3.org/2000/09/xmldsig#">
                                  <SignatureProperties>
                                       <SignatureProperty Target="">
                                            <Timestamp xmlns="http://www.sporfori.fo/schemas/dnts/general/2006/11/14">2007-05-10T12:00:00-05:00</Timestamp>
                                       </SignatureProperty>
                                  </SignatureProperties>
                             </Object>';
    xmldoc xmltype;
    begin
    xmldoc := xmltype(xml_text3);
    insert into signaturetable
    (xml_document, ts)
    values
    (xmldoc, current_timestamp);
    end;
    Platform information
    Operating system:
    -bash-3.00$ uname -a
    SunOS dntsdb 5.10 Generic_125101-09 i86pc i386 i86pc
    SQLPlus:
    SQL*Plus: Release 10.2.0.3.0 - Production on Fri Aug 17 00:15:13 2007
    Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
    Enter password:
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
    With the Partitioning and Data Mining options
    Kind Regards,
    Eyðun

    You should report this in a service request on http://metalink.oracle.com.
    It is a shame that you put all the effort here to describe your problem, but on the other hand you can now also copy & paste the question to Oracle Support.
    Because you are using 10.2.0.3, I am guessing that you have a valid service contract...
