IOT without overflow treats char/varchar2 differently

Hi,
test case with 8k block size:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management and Data Mining options
SQL>
     create table test1 (x number primary key, y char(2000)) ORGANIZATION INDEX;
Table created.
SQL>
     create table test2 (x number primary key, y varchar2(2000)) ORGANIZATION INDEX;
create table test2 (x number primary key, y varchar2(2000)) ORGANIZATION INDEX
ERROR at line 1:
ORA-01429: Index-Organized Table: no data segment to store overflow row-pieces
SQL>
     create table test2 (x number primary key, y varchar2(1000)) ORGANIZATION INDEX;
create table test2 (x number primary key, y varchar2(1000)) ORGANIZATION INDEX
ERROR at line 1:
ORA-01429: Index-Organized Table: no data segment to store overflow row-pieces
SQL>
     create table test2 (x number primary key, y varchar2(500)) ORGANIZATION INDEX;
Table created.

Why does it fail with varchar2 but not with char?
Thanks,
Joaquin Gonzalez
Edited by: user4070490 on 10-feb-2011 23:41
Maybe it is something different with characterset used for char and varchar:
     select * from V$NLS_PARAMETERS;
PARAMETER               VALUE
NLS_LANGUAGE            AMERICAN
NLS_TERRITORY           SPAIN
NLS_CURRENCY            €
NLS_ISO_CURRENCY        SPAIN
NLS_NUMERIC_CHARACTERS  ,.
NLS_CALENDAR            GREGORIAN
NLS_DATE_FORMAT         dd/mm/yyyy hh24:mi:ss
NLS_DATE_LANGUAGE       AMERICAN
NLS_CHARACTERSET        AL32UTF8
NLS_SORT                BINARY
NLS_TIME_FORMAT         HH24:MI:SSXFF
NLS_TIMESTAMP_FORMAT    DD/MM/YYYY HH24:MI:SS.FF3
NLS_TIME_TZ_FORMAT      HH24:MI:SSXFF TZR
NLS_TIMESTAMP_TZ_FORMAT DD/MM/RR HH24:MI:SSXFF TZR
NLS_DUAL_CURRENCY       €
NLS_NCHAR_CHARACTERSET  AL16UTF16
NLS_COMP                BINARY
NLS_LENGTH_SEMANTICS    CHAR
NLS_NCHAR_CONV_EXCP     FALSE

The DDL after creation is:
CREATE TABLE "EXPLOTACION"."TEST1"
  (    "X" NUMBER,
       "Y" CHAR(2000 CHAR),
        PRIMARY KEY ("X") ENABLE
  ) ORGANIZATION INDEX NOCOMPRESS PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING
TABLESPACE "EXPLOTACION"
PCTTHRESHOLD 50;
CREATE TABLE "EXPLOTACION"."TEST2"
  (    "X" NUMBER,
       "Y" VARCHAR2(500 CHAR),
        PRIMARY KEY ("X") ENABLE
  ) ORGANIZATION INDEX NOCOMPRESS PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING
TABLESPACE "EXPLOTACION"
PCTTHRESHOLD 50;

It's about the declared size of the column, not the datatype. (With NLS_LENGTH_SEMANTICS=CHAR and AL32UTF8, a VARCHAR2(n CHAR) column can need up to 4×n bytes, capped at 4000, while CHAR is capped at 2000 bytes regardless, which is why VARCHAR2(1000) fails but CHAR(2000) does not.)
SQL>/
create table test4 (x number primary key, y char(4000)) ORGANIZATION INDEX;
ERROR at line 1:
ORA-00910: specified length too long for its datatype
Elapsed: 00:00:00.56
SQL>create table test4 (x number primary key, y char(400)) ORGANIZATION INDEX;
Table created.
Elapsed: 00:00:00.28
SQL>
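The arithmetic behind the two threads above can be sketched as follows. This is a rough illustration, not Oracle's actual internal check; the 4-bytes-per-character worst case comes from AL32UTF8 combined with NLS_LENGTH_SEMANTICS=CHAR, and the datatype byte caps are the documented 11g limits.

```python
# Rough sketch of the upfront size check, under assumed limits: AL32UTF8
# allows up to 4 bytes per character, CHAR columns are capped at 2000 bytes,
# VARCHAR2 at 4000, and the default PCTTHRESHOLD is 50% of the block.
BLOCK_SIZE = 8192
PCTTHRESHOLD = 50
threshold_bytes = BLOCK_SIZE * PCTTHRESHOLD // 100   # 4096 bytes per row

def worst_case_bytes(declared_chars, datatype):
    """Worst-case byte width of a column declared with CHAR semantics."""
    cap = 2000 if datatype == "CHAR" else 4000
    return min(declared_chars * 4, cap)

print(worst_case_bytes(2000, "CHAR"))      # 2000
print(worst_case_bytes(1000, "VARCHAR2"))  # 4000
print(worst_case_bytes(500, "VARCHAR2"))   # 2000
```

So CHAR(2000 CHAR) and VARCHAR2(500 CHAR) both stay at 2000 bytes worst case and fit comfortably, while VARCHAR2(1000 CHAR) and VARCHAR2(2000 CHAR) expand to 4000 bytes, which presumably leaves no room for the key column and row overhead within the 4096-byte threshold and trips ORA-01429.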

Similar Messages

  • IOTs and overflow tablespace

    Hi everyone,
    I created an IOT without an overflow tablespace. What does Oracle do if it cannot fit a row into the index block? Do it give an error or will it use multiple index blocks to store the row of data?
    Thanks,
    Paul

    <quote>I created an IOT without an overflow tablespace.
    What does Oracle do if it cannot fit a row into the index block?</quote>
    It will never happen.
    An IOT defined with or without an overflow will get a default PCTTHRESHOLD of 50 ...
    the allowed values are between 1 and 50.
    PCTTHRESHOLD represents the percentage of an index block that can be occupied by one IOT row.
    flip@FLOP>  show parameter db_block_size
    NAME                                 TYPE        VALUE
    db_block_size                        integer     8192
    flip@FLOP>  create table tt
      2  ( x   number     not null
      3   ,c1  char(2000) not null
      4   ,c2  char(2000) not null
      5   ,constraint ttpk primary key (x)
      6  )
      7  organization index
      8  ;
    create table tt
    ERROR at line 1:
    ORA-01429: Index-Organized Table: no data segment to store overflow row-pieces
    So, Oracle sees we didn’t define an overflow and does some upfront checking on the declared size of the row
    (BTW, it will be the same if the CHAR columns were defined as VARCHAR2).
    It figures out that it cannot store a row in 50% of my 8K block (there is some overhead in the block, too).
    flip@FLOP> create table tt
      2  ( x   number     not null
      3   ,c1  char(2000) not null
      4   ,constraint ttpk primary key (x)
      5  )
      6  organization index
      7  ;
    Table created.
    Now it can. And just to show, PCTTHRESHOLD is defaulted to 50 …
    flip@FLOP>  select dbms_metadata.get_ddl( 'TABLE', 'TT') from dual;
    DBMS_METADATA.GET_DDL('TABLE','TT')
      CREATE TABLE "FLIP"."TT"
       (    "X" NUMBER NOT NULL ENABLE,
            "C1" CHAR(2000) NOT NULL ENABLE,
             CONSTRAINT "TTPK" PRIMARY KEY ("X") ENABLE
       ) ORGANIZATION INDEX NOCOMPRESS PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "USERS"
    PCTTHRESHOLD 50
    So, if you have wide rows you better have an overflow … well, you have to.

  • What is actual diff between char & varchar2

    What is the actual difference between CHAR and VARCHAR2, and what is its effect in memory?
    When and why should I use CHAR or VARCHAR2?
    Tell me, friends.
    Thanks in advance
    Rajendrasingh

    Hi,
    VARCHAR2 is a variable-length datatype; CHAR is a fixed-length datatype.
    I will try to explain it with an example:
    create table tableb
    (char_col CHAR(10),
    varchar_col varchar2(10));
    insert into tableb
    values ('test','test');
    select * from tableb;
    select length(char_col), length(varchar_col) from tableb;
    LENGTH(CHAR_COL) LENGTH(VARCHAR_COL)
    10 4
    1 rows selected
    As you can see, if the data inserted into a CHAR column is shorter than the specified length, Oracle pads it with spaces up to that length. A VARCHAR2 column stores exactly the data that was inserted, without additional spaces.
    Peter D.
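Peter's padding behaviour can be mimicked in a couple of lines. This is a Python illustration of the semantics, not Oracle code:

```python
# CHAR(10) blank-pads to the declared width; VARCHAR2(10) stores the value
# exactly as entered. Simulated here with plain strings.
def char_store(value, width=10):
    return value.ljust(width)      # 'test' becomes 'test      '

def varchar2_store(value, width=10):
    return value                   # 'test' stays 'test'

print(len(char_store("test")))      # 10, like LENGTH(char_col)
print(len(varchar2_store("test")))  # 4,  like LENGTH(varchar_col)
```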

  • Vendor confirmation via EDI treated differently by MRP than manually entered

    I have a strange situation with MRP. 
    I have a schedule agreement with the following information.
    Date          Delivery sch            vendor confirmation
    05/03/07          100                         100
    05/10/07          100                         100
    05/17/07          100                         100
    05/24/07          100                         100
    05/31/07          100                         100
    06/07/07          100                         100
    06/14/07          100                         100
    Then I increase my independent requirements on 05/10/07 to 200.
    If I have entered my vendor confirmations manually via ME38, then MRP will add an additional schedule line on 05/10/07, as I expected. However, if the vendor confirmations were sent in via IDocs/EDI, then the schedule line is not added until after the end of the confirmations.
    Does anyone know why MRP would treat these differently based on if the confirmation were manual versus EDI?

    It should not be treated separately by MRP whether it is posted manually or via EDI.
    One thing I can think of: if you have a different confirmation category for EDI than for manual entry, then it can happen.
    So just make sure that every confirmation category you use is flagged as MRP-relevant:
    in SPRO > MM > Purchasing > Confirmations > Set Up Confirmation Control, select your EDI confirmation control key and click on the confirmation sequence,
    and check there whether the confirmation category is flagged for MRP.
    If it is not, flag it, and you will see that MRP picks it up.

  • HT204053 We have two iPhone 4S but we set each one up with different accounts; how can we share purchased or downloaded music from an iTunes card without purchasing two different cards?

    We have two iPhone 4S with different accounts; what would we need to do to share music from an iTunes card if one of us purchased it and the other did not?

    Put all of the music on the computer(s) to which you sync, then sync to your iDevices.

  • Are plain text files treated differently in SP 2010 than MOSS 2007?

    We just moved to SP 2010 a week ago.  A user just contacted us asking why the files she accesses are not displaying.
    When she accessed these plain text files in MOSS 2007, they displayed within IE as a full page of text.
    When she accesses these now in SP 2010, she is prompted to download the file.
    These files do not end in .txt - they have names such as 2013.09.20 or whatever.
    Is there a way to configure 2010 to display, without any interpretation, plain text files within the browser?
    Thank you.

    Hi,
    According to your description, my understanding is that you could not open the plain text files in the browser after migrating to SharePoint 2010.
    Does this issue occur if you upload a new txt file to a library?
    In my testing, everything worked well. The Open Documents in Client Applications by Default feature was active, and in Library Settings -> Advanced settings, 'Use the server default (Open in the client application)' was selected. When I clicked the file name, it prompted a dialog; I selected 'Read-Only' and clicked OK, and the file opened in the browser.
    Please go to IE -> Tools -> Internet Options -> Programs -> Manage add-ons, enable all add-ons related to SharePoint, and compare the result.
    I hope this helps.
    Thanks,
    Wendy
    Wendy Li
    TechNet Community Support

  • Symlinks treated differently in Mavericks stacks

    In Mountain Lion I used symlinks to reference most of the folders in my home directory to a hard drive connected to my Time Capsule. This worked as expected when I placed a stack on my Dock: in grid view, a symlinked folder would open within the stack. Since I have upgraded to Mavericks, symlinked folders open in a new Finder window. Is there some new way that Mavericks stacks handle symlinks? Can I change it back? Is a hard link the way to go now? Can I make hard links to directories across volumes?

    OK, installed Mavericks 10.9.2 and the behaviour is a bit different now.
    It would appear that Apple has gone back to their old ways: now when you put an "alias" in your home folders, it will follow them as symlinks.

  • Treating xmltype as varchar2

    Hi people,
    I'm using the DBD::Oracle Perl module to access data in an Oracle database. The database character set is WE8MSWIN1252 and my data is UTF-8 encoded in an XMLType.
    I can get the XML using this query:
    select x.xmlbody.getCLOBVal() FROM tableA x;
    However, with this query the xmlbody comes back WE8MSWIN1252-encoded, because I get a CLOB.
    I can get a UTF-8 encoded value with this query in SQL*Plus:
    select xmlbody from tableA;
    But I can't use this query in my Perl script, because XMLType is not supported in the latest version of the DBD::Oracle module. How do I get a VARCHAR2 value from this query? How can I cast this object type?
    thanks

    Note: I would STRONGLY recommend converting the database character set to AL32UTF8 if at all possible... Bad things will happen if you end up with a UTF-8 character that cannot be represented in the database character set.
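The warning can be demonstrated outside Oracle. Here Python's cp1252 codec stands in for WE8MSWIN1252; this is an illustration of the data-loss mechanism, not DBD::Oracle code:

```python
# A character that exists in UTF-8 but not in the Windows-1252 repertoire
# cannot survive conversion into the single-byte character set.
utf8_text = "ceci est 中文"           # contains characters outside cp1252

try:
    utf8_text.encode("cp1252")        # lossless conversion is impossible
except UnicodeEncodeError as exc:
    print("cannot convert:", exc.reason)

# Forcing the conversion silently corrupts the data instead:
mangled = utf8_text.encode("cp1252", errors="replace").decode("cp1252")
print(mangled)   # 'ceci est ??'
```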

  • Is it possible to back up an external hard drive to Time Machine from two different computers without creating two different records?

    I have an external hard drive that I do my work on. I use it on my office MacPro and on my MacBook Pro. I have Time Machine setup on both computers to back up the external hard drive to a remote backup drive, but when it backs it up, the backup goes into either the MacPro record of it, or the MacBook Pro record. The effect of this is that if I'm at my laptop and I need to access an earlier version of a file that existed when I was working on my MacPro, I can't access it.
    Is it at all possible to create a universal record (or Sparse Disk Image Bundle to use Apple lingo) for my external hard drive?

    Hi BDAqua,
    Thanks for your help as always, sorry I never thanked you earlier, finally got some time to attend this.
    I'm in the Mail folder in the Library of my user name. Just to make sure I get all my mailboxes, I'm simply going to archive the whole 'Mail' folder after going into the Library of my username.
    I noticed you mentioned importing there .... I've seen the .emix files you mention .... I have 30,000+ of them.
    What I find helpful in Mail is that if I type a client's second name, in this case 'Lopez', I can pull up all the mails relating to him in an instant, around 450. Obviously it would be impossible to pick through 30K .emix files to find the right ones .... so could I easily import them into Mail on my new computer?
    Could I select the whole lot of them, then drag and drop them into Mail on the new computer, and then use the search function in Mail? ... The normal desktop search function doesn't seem to look through the .emix files, or emails for that matter.
    Also, just as an extra precaution, could I make a smart folder in Mail with the name of said client, for example, and then archive that folder separately?
    TY
    Message was edited by: Scottishengineer

  • How do I get iTunes to properly import a CD without making a different album for each song?

    I try to import my CDs, and every time a track features another artist it creates a new album for that song... how do I get it to import the album and show it as just the album name?

    Hi
    All my tracks are MP3.
    In List view, I do not have an Album column; I have an Album by Artist column. If I click on this column, the albums look right in the playlist on screen, but when I burn to CD the tracks are not in one folder per album; instead they seem to be in folders per artist, then album, which, as one of the albums is a compilation, has created about 30 folders.
    If I switch on the 'Sort by Album' column, it burns the disc as I expected.

  • Autosuggest:  XML and JSON datasets treated differently

    Not sure if I found a bug or not, but I found something kind of odd with Autosuggest and XML data vs. JSON data.
    Check this url:
    http://www.christopherchin.com/sandbox/testSpry.html
    The first input box is XML data the second input is JSON.
    When you type Christopher in both fields, you get a list of
    people named christopher.
    Now type a space and the first letter of the last name of one
    of the Chris'.
    XML works. JSON doesn't.
    Thoughts?
    Thanks!
    Chris

    Look at your JSON output: it has 2 spaces between the first name and last name for some of your entries, but only in your JSON output, not your XML output.
    If you fix your JSON output so it matches the XML output, I'll bet it works.
    --== Kin ==--
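Kin's diagnosis can be checked with a quick whitespace-normalization sketch. This is a hypothetical Python illustration (the name comes from the thread's test page; Spry's actual matching logic may differ):

```python
import re

def normalize(s):
    """Collapse runs of whitespace so '  ' and ' ' compare equal."""
    return re.sub(r"\s+", " ", s.strip())

record = "Christopher  Chin"   # double space, as in the broken JSON output
query = "Christopher C"        # what the user typed

print(record.startswith(query))                        # raw match fails
print(normalize(record).startswith(normalize(query)))  # normalized match works
```

Normalizing the data at the source, as Kin suggests, is equivalent to applying `normalize` to the JSON feed before it is served.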

  • Accent marks in Pages. I'm looking for a simple way to employ accent marks when writing in another language WITHOUT using a different keyboard.

    In Microsoft Word for PC, all one needs to do is hold the CTL key down, press the accent key needed, such as ` or ~ or ' for example, then press the letter key over which the accent should go.  Does Pages and Apple Mail not have such an easy way to do it?  Thanks to anyone who has a better way.

    Type the dead key and then the vowel:
    e.g. umlaut is Option-U then the vowel = äëïöü
    Option-I then the vowel = âêîôû
    Option-E then the vowel = áéíóú
    Option-` (above the Tab key) then the vowel = àèìòù
    Option-N then the letter = ñãõ
    They are characters usually associated with the accent and have been standard on the Mac for decades.
    Peter

  • CHAR (and VARCHAR2) attribute semantics in Catalog Views

    Hello,
    It may sound quite elementary but I am struggling to find information about this.
    I have created an object that contains an attribute declared in the type spec as CHAR(8 CHAR).
    But where in the catalog views will I find the character semantics?
    For example, in the case of TABLEs, I can look at the column CHAR_USED in the DBA_TAB_COLUMNS view. Unfortunately though, none of the three views (DBA_TYPES, DBA_TYPE_VERSIONS and DBA_TYPE_ATTRS) related to type specifications have such an indicator.
    I could of course do a string analysis on the TEXT column of the DBA_TYPE_VERSIONS view, but that would seem rather unnatural if there is a direct way ...
    Any ideas?
    Best Regards
    Philip

    mouratos wrote:
    But where in the catalog views will I find the character semantics?
    Good question. Oracle obviously keeps it in data dictionary tables but not in views. If you check the DBA_TYPE_ATTRS view, you'll see it is based on the sys.attribute$ table. It has a column PROPERTIES, and it looks like this holds the attribute's character semantics:
    SQL> create or replace
      2    type mouratos_chars
      3    as object(n varchar2(10 char))
      4  /
    Type created.
    SQL> create or replace
      2    type mouratos_bytes
      3    as object(n varchar2(10 byte))
      4  /
    Type created.
    SQL> select  ta.type_name,
      2          ta.attr_type_name,
      3          properties
      4    from  user_types t,
      5          user_type_attrs ta,
      6          sys.attribute$
      7    where toid = t.type_oid
      8      and ta.type_name = t.type_name
      9      and t.type_name like 'MOURATOS%'
    10  /
    TYPE_NAME                      ATTR_TYPE_NAME                 PROPERTIES
    MOURATOS_BYTES                 VARCHAR2                                2
    MOURATOS_CHARS                 VARCHAR2                             4098
    SQL>
    Now, based on the above, we can deduce it is the 13th bit (from the right):
    select  ta.type_name,
            ta.attr_type_name,
            case
              when ta.attr_type_name in ('CHAR','VARCHAR2') then decode(bitand(properties,4096),0,'BYTE','CHAR')
            end char_semantic
      from  user_types t,
            user_type_attrs ta,
            sys.attribute$
      where toid = t.type_oid
        and ta.type_name = t.type_name
        and t.type_name like 'MOURATOS%'
    TYPE_NAME                      ATTR_TYPE_NAME                 CHAR
    MOURATOS_BYTES                 VARCHAR2                       BYTE
    MOURATOS_CHARS                 VARCHAR2                       CHAR
    SQL>
    SY.
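SY's bit test translates directly into a quick sanity check. Note the meaning of bit 13 in sys.attribute$.properties is undocumented and is inferred here purely from the two values observed in the thread:

```python
# properties values observed above: 2 for BYTE semantics and
# 4098 (= 2 + 4096) for CHAR semantics, so bit 13 (value 4096) is the flag.
def char_semantic(properties):
    return "CHAR" if properties & 4096 else "BYTE"

print(char_semantic(2))     # BYTE  -> MOURATOS_BYTES
print(char_semantic(4098))  # CHAR  -> MOURATOS_CHARS
```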

  • Validating XML stored in CLOB without exceptions for malformed XML

    Hello,
    I have a CLOB column that should contain an XML document which conforms to a registered XSD schema. However, there might also be cases in which the CLOB content violates the XSD or is not even well-formed XML at all. Now I am trying to find a way to either
    identify all records, for which the CLOB contains well-formed XML that validates against my schema (find all "good" records) , or
    find all the other records (those with either no XML at all, malformed XML, or XML that does not validate against the XSD) (find all "exceptions")
    The problem is that all XML-validation methods I know of (e.g. isXmlValid or XmlType.isSchemaValid)  require an XmlType instance, and that records, for which no proper XmlType can be constructed from the CLOB, because the CLOB does not contain well-formed XML, will cause an exception, rather than just returning false or NULL or something else I could filter out.
    Is there a way to do something like
    SELECT * FROM MYTABLE where <MY_XML_COL is wellformed XML> and <MY_XML_COL validates against 'blabla.xsd'>
    without getting an ORA-31011 or whatever other kind of exception as soon as there is a row that does not contain proper XML?
    Thank you...

    So here is my example - this will be quite a long post now...
    First, I create the table with the CLOBs
    CREATE TABLE ZZZ_MINITEST_CLOB (
      ID NUMBER(10,0) NOT NULL ENABLE,
      XML_DOC CLOB,
      REMARKS VARCHAR2(1000),
      CONSTRAINT PK_ZZZ_MINITEST_CLOB PRIMARY KEY (ID)
    ) NOLOGGING;
    Then I insert some examples
    INSERT INTO ZZZ_MINITEST_CLOB VALUES (
    10,
    '<minitest_root>
      <l2Node>
      <l2Num>1</l2Num>
      <l2Name>one.one</l2Name>
      </l2Node>
      <minitestId>1</minitestId>
      <minitestName>One</minitestName>
    </minitest_root>',
    'Basic valid example');
    INSERT INTO ZZZ_MINITEST_CLOB VALUES (
    20,
    '<minitest_root>
      <l2Node>
      <l2Num>1</l2Num>
      <l2Name>two.one</l2Name>
      </l2Node>
      <minitestId>2</minitestId>
      <!-- minitestName element is missing -->
    </minitest_root>',
    'Invalid - minitestName element is missing');
    INSERT INTO ZZZ_MINITEST_CLOB VALUES (
    30,
    '<minitest_root>
      <l2Node>
      <l2Num>1</l2Num>
      <l2Name>three.one</l2Name>
      </l2Node>
      <!-- minitestName and minitestId are switched -->
      <minitestName>Three</minitestName>
      <minitestId>3</minitestId>
    </minitest_root>',
    'Invalid - minitestName and minitestId are switched');
    INSERT INTO ZZZ_MINITEST_CLOB VALUES (
    40,
    '<minitest_root>
      <l2Node>
      <l2Num>1</l2Num>
      <l2Name>four.one</l2Name>
      </l2Node>
      <l2Node>
      <l2Num>2</l2Num>
      <l2Name>four.two</l2Name>
      </l2Node>
      <l2Node>
      <l2Num>3</l2Num>
      <l2Name>four.three</l2Name>
      </l2Node>
      <minitestId>4</minitestId>
      <minitestName>Four</minitestName>
    </minitest_root>',
    'Valid - multiple l2Node elements');
    INSERT INTO ZZZ_MINITEST_CLOB VALUES (
    50,
    '<minitest_root>
      <l2Node>
      <l2Num>1</l2Num>
      <l2Name>five.one</l2Name>
      </l2Node>
      <l2Node>
      <l2Num>2</l2Num>
      <l2Name>five.two</l2Name>
      </l2Node>
      <minitestId>4</minitestId>
      <minitestName>Five</minitestName>
      <!-- another l2Node node, but too far down -->
      <l2Node>
      <l2Num>3</l2Num>
      <l2Name>five.three</l2Name>
      </l2Node>
    </minitest_root>',
    'Invalid - another l2Node node, but too far down');
    INSERT INTO ZZZ_MINITEST_CLOB VALUES (
    60,
    'something that is not even xml',
    'Invalid - something that is not even xml');
    INSERT INTO ZZZ_MINITEST_CLOB VALUES (
    70,
    NULL,
    'Invalid - good old NULL');
    INSERT INTO ZZZ_MINITEST_CLOB VALUES (
    80,
    '<minitest_root>
      <l2Node>
      <l2Num>1</l2Num>
      <l2Name>
      <unexpected_node>
      this one should not be here!
      </unexpected_node>
      </l2Name>
      </l2Node>
      <minitestId>8</minitestId>
      <minitestName>Eight</minitestName>
    </minitest_root>',
    'Invalid - unexpected addl node');
    INSERT INTO ZZZ_MINITEST_CLOB VALUES (
    90,
    '<something> that has tags but is no xml </either>',
    'Invalid - something that has tags but is no xml either');
    COMMIT;
    Next I register the XSD...
    BEGIN
    DBMS_XMLSCHEMA.REGISTERSCHEMA(
      'http://localhost/minitest.xsd',
      '<?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" xmlns:oraxdb="http://xmlns.oracle.com/xdb" oraxdb:storeVarrayAsTable="true" oraxdb:schemaURL="http://localhost/minitest.xsd">
      <xs:element name="minitest_root" oraxdb:SQLName="MINITEST_ROOT" oraxdb:SQLType="MINITEST_TYPE" oraxdb:defaultTable="" oraxdb:tableProps="NOLOGGING" oraxdb:maintainDOM="false">
        <xs:complexType oraxdb:SQLType="MINITEST_TYPE" oraxdb:maintainDOM="false">
          <xs:sequence>
            <xs:element maxOccurs="unbounded" ref="l2Node" oraxdb:SQLName="L2_NODE" oraxdb:SQLType="L2_NODE_TYPE" oraxdb:SQLCollType="L2_NODE_COLL" oraxdb:maintainDOM="false"/>
            <xs:element ref="minitestId" oraxdb:SQLName="MINITEST_ID" oraxdb:SQLType="NUMBER" oraxdb:maintainDOM="false"/>
            <xs:element ref="minitestName" oraxdb:SQLName="MINITEST_NAME" oraxdb:SQLType="VARCHAR2" oraxdb:maintainDOM="false"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
      <xs:element name="l2Node" oraxdb:SQLName="L2_NODE" oraxdb:SQLType="L2_NODE_TYPE" oraxdb:defaultTable="" oraxdb:tableProps="TABLESPACE CATALOG NOLOGGING" oraxdb:maintainDOM="false">
        <xs:complexType oraxdb:SQLType="L2_NODE_TYPE" oraxdb:maintainDOM="false">
          <xs:sequence>
            <xs:element ref="l2Num" oraxdb:SQLName="L2_NUM" oraxdb:SQLType="NUMBER" oraxdb:maintainDOM="false"/>
            <xs:element ref="l2Name" oraxdb:SQLName="L2_NAME" oraxdb:SQLType="VARCHAR2" oraxdb:maintainDOM="false"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
      <xs:element name="l2Num" type="number10_0Type" oraxdb:SQLName="L2_NUM" oraxdb:SQLType="NUMBER" oraxdb:maintainDOM="false" />
      <xs:element name="l2Name" type="varchar100Type" oraxdb:SQLName="L2_NAME" oraxdb:SQLType="VARCHAR2" oraxdb:maintainDOM="false" />
      <xs:element name="minitestId" type="number10_0Type" oraxdb:SQLName="MINITEST_ID" oraxdb:SQLType="NUMBER" oraxdb:maintainDOM="false" />
      <xs:element name="minitestName" type="varchar100Type" oraxdb:SQLName="MINITEST_NAME" oraxdb:SQLType="VARCHAR2" oraxdb:maintainDOM="false" />
      <xs:simpleType name="varchar100Type">
        <xs:restriction base="xs:string">
          <xs:maxLength value="100"/>
        </xs:restriction>
      </xs:simpleType>
      <xs:simpleType name="number10_0Type">
        <xs:restriction base="xs:integer">
          <xs:totalDigits value="10"/>
        </xs:restriction>
      </xs:simpleType>
    </xs:schema>',
      GENTABLES=>FALSE);
    END;
    And then I create my XML tables - four different ones for four different test cases. I am trying unstructured binary XML and structured OR-based XML, each with and without a trigger.
    One with structured OR storage and no trigger
    CREATE TABLE ZZZ_MINITEST_ORXML (
      ID NUMBER(10,0) NOT NULL ENABLE,
      XML_CONTENT XMLTYPE,
      CONSTRAINT PK_ZZZ_MINITEST_ORXML PRIMARY KEY (ID)
    ) NOLOGGING
    VARRAY "XML_CONTENT"."XMLDATA"."L2_NODE" STORE AS TABLE "ZZZ_OR_L2_NODE" (NOLOGGING) RETURN AS LOCATOR
    XMLTYPE XML_CONTENT STORE AS OBJECT RELATIONAL XMLSCHEMA "http://localhost/minitest.xsd" ELEMENT "minitest_root";
    One with structured OR storage which will also have a trigger added further down
    CREATE TABLE ZZZ_MINITEST_ORXML_TRGR (
      ID NUMBER(10,0) NOT NULL ENABLE,
      XML_CONTENT XMLTYPE,
      CONSTRAINT PK_ZZZ_MINITEST_ORXML_TRGR PRIMARY KEY (ID)
    ) NOLOGGING
    VARRAY "XML_CONTENT"."XMLDATA"."L2_NODE" STORE AS TABLE "ZZZ_OR_L2_NODE_TRGR" (NOLOGGING) RETURN AS LOCATOR
    XMLTYPE XML_CONTENT STORE AS OBJECT RELATIONAL XMLSCHEMA "http://localhost/minitest.xsd" ELEMENT "minitest_root";
    One with unstructured binary XML in a SECUREFILE, which will also have a trigger added further down
    CREATE TABLE ZZZ_MINITEST_BINXML_TRGR (
      ID NUMBER(10,0) NOT NULL ENABLE,
      XML_CONTENT XMLTYPE,
      CONSTRAINT PK_ZZZ_MINITEST_BINXML_TRGR PRIMARY KEY (ID)
    ) NOLOGGING
    XMLTYPE COLUMN XML_CONTENT STORE AS SECUREFILE BINARY XML
      (NOCOMPRESS NOCACHE NOLOGGING KEEP_DUPLICATES);
    One with unstructured binary XML in a SECUREFILE, no trigger
    CREATE TABLE ZZZ_MINITEST_BINXML (
      ID NUMBER(10,0) NOT NULL ENABLE,
      XML_CONTENT XMLTYPE,
      CONSTRAINT PK_ZZZ_MINITEST_BINXML PRIMARY KEY (ID)
    ) NOLOGGING
    XMLTYPE COLUMN XML_CONTENT STORE AS SECUREFILE BINARY XML
      (NOCOMPRESS NOCACHE NOLOGGING KEEP_DUPLICATES);
    Then I create the error logging tables
    begin
    DBMS_ERRLOG.CREATE_ERROR_LOG ('ZZZ_MINITEST_BINXML', 'ZZZ_MINITEST_BINXML_E', NULL, NULL, TRUE);
    DBMS_ERRLOG.CREATE_ERROR_LOG ('ZZZ_MINITEST_BINXML_TRGR', 'ZZZ_MINITEST_BINXML_TRGR_E', NULL, NULL, TRUE);
    DBMS_ERRLOG.CREATE_ERROR_LOG ('ZZZ_MINITEST_ORXML', 'ZZZ_MINITEST_ORXML_E', NULL, NULL, TRUE);
    DBMS_ERRLOG.CREATE_ERROR_LOG ('ZZZ_MINITEST_ORXML_TRGR', 'ZZZ_MINITEST_ORXML_TRGR_E', NULL, NULL, TRUE);
    END;
    Now the two triggers
    create or replace trigger TRG_ZZZ_MINITEST_BINXML
    BEFORE UPDATE OR INSERT ON ZZZ_MINITEST_BINXML_TRGR
    REFERENCING NEW AS NEW
    for each row
    BEGIN
      :NEW.XML_CONTENT := :NEW.XML_CONTENT.createSchemaBasedXML('http://localhost/minitest.xsd');
      :NEW.XML_CONTENT.SCHEMAVALIDATE();
    END TRG_ZZZ_MINITEST_BINXML;
    CREATE OR REPLACE TRIGGER TRG_ZZZ_MINITEST_ORXML
    BEFORE UPDATE OR INSERT ON ZZZ_MINITEST_ORXML_TRGR
    REFERENCING NEW AS NEW
    for each row
    BEGIN
      :NEW.XML_CONTENT := :NEW.XML_CONTENT.createSchemaBasedXML('http://localhost/minitest.xsd');
      :NEW.XML_CONTENT.SCHEMAVALIDATE();
    END TRG_ZZZ_MINITEST_ORXML;
    I also tried to just validate the XML using queries such as the one below, since validating all rows without the WHERE clause also caused an exception and abort:
    SELECT ID, DECODE(XMLISVALID(XMLTYPE(XML_DOC), 'http://localhost/minitest.xsd'), 1, 'Y', 0, 'N', '?') VALID
    FROM ZZZ_MINITEST_CLOB WHERE ID=90;
    The results are in the column "VALID" in the tables below.
    I tried inserting all records from ZZZ_MINITEST_CLOB into the four test tables at once using a query such as
    INSERT INTO ZZZ_MINITEST_ORXML_TRGR
    SELECT ID, XMLPARSE(DOCUMENT XML_DOC WELLFORMED) FROM ZZZ_MINITEST_CLOB
    LOG ERRORS INTO ZZZ_MINITEST_ORXML_TRGR_E REJECT LIMIT UNLIMITED;
    I also tried different versions of creating the XML in the SELECT portion:
    XMLPARSE(DOCUMENT XML_DOC WELLFORMED)
    XMLPARSE(DOCUMENT XML_DOC WELLFORMED)
    XMLTYPE(XML_DOC)
    XMLTYPE(XMLDOC, 'http://localhost/minitest.xsd', 0, 1)
    Almost all combinations of the four test tables with the four ways to instantiate the XML caused exceptions and query abortion despite the LOG ERRORS INTO clause.
    In order to find the exact problems, I started inserting records one by one using queries like
    INSERT INTO ZZZ_MINITEST_ORXML
    SELECT ID, XMLPARSE(DOCUMENT XML_DOC) FROM ZZZ_MINITEST_CLOB WHERE ID=10
    LOG ERRORS INTO ZZZ_MINITEST_ORXML_E REJECT LIMIT UNLIMITED;
    or
    INSERT INTO ZZZ_MINITEST_BINXML_TRGR
    SELECT ID, XMLTYPE(XMLDOC, 'http://localhost/minitest.xsd', 0, 1) FROM ZZZ_MINITEST_CLOB WHERE ID=20
    LOG ERRORS INTO ZZZ_MINITEST_BINXML_TRGR_E REJECT LIMIT UNLIMITED;
    I captured the results of each in the four tables below. "1" and "0" are the number of rows inserted: 0 means no record was inserted and instead an error was logged in the ERROR LOGGING table.
    The ORA-????? exception numbers mean that the error actually caused the INSERT to fail and abort, rather than just being logged in the LOGGING TABLE. These are the most critical cases for me. Why do these exceptions "bubble up", forcing the query to be aborted, rather than being logged in the error log as well???
    This table is for INSERT of XMLs using XMLTYPE(XML_DOC)
    ID  Test case      VALID  BinXML no trg  BinXML trg  OR XML no trg  OR XML trg
    10  Good           Y      1              1           1              1
    20  no name        N      1              0           1              0
    30  switched tags  N      1              1           1              1
    40  Good           Y      1              1           1              1
    50  L2 down        N      1              1           1              1
    60  no xml         EX     ORA-31011      ORA-31011   ORA-31011      ORA-31011
    70  NULL           EX     ORA-06502      ORA-06502   ORA-06502      ORA-06502
    80  addl. Node     N      1              0           ORA-31187      ORA-31187
    90  crappy xml     EX     ORA-31011      ORA-31011   ORA-31011      ORA-31011
    This table is for INSERT of XMLs using XMLTYPE(XML_DOC, 'http://localhost/minitest.xsd', 0, 1)
    ID  Test case      VALID  BinXML no trg  BinXML trg  OR XML no trg  OR XML trg
    10  Good           Y      1              1           1              1
    20  no name        N      1              0           1              0
    30  switched tags  N      1              1           1              1
    40  Good           Y      1              1           1              1
    50  L2 down        N      1              1           1              1
    60  no xml         EX     ORA-31043      ORA-31043   ORA-31043      ORA-31043
    70  NULL           EX     ORA-06502      ORA-06502   ORA-06502      ORA-06502
    80  addl. Node     N      1              0           ORA-31187      ORA-31187
    90  crappy xml     EX     ORA-31043      ORA-31043   ORA-31043      ORA-31043
    This table is for INSERT of XMLs using XMLPARSE(DOCUMENT XML_DOC WELLFORMED)
    | Test case | Description   | VALID | Binary XML, no trigger | Binary XML, trigger | OR XML, no trigger | OR XML, trigger |
    |-----------|---------------|-------|------------------------|---------------------|--------------------|-----------------|
    | 10        | Good          | Y     | 1                      | 1                   | 1                  | 1               |
    | 20        | no name       | N     | 1                      | 0                   | 1                  | 0               |
    | 30        | switched tags | N     | 1                      | 1                   | 1                  | 1               |
    | 40        | Good          | Y     | 1                      | 1                   | 1                  | 1               |
    | 50        | L2 down       | N     | 1                      | 1                   | 1                  | 1               |
    | 60        | no xml        |       | ORA-31061              | 0                   | ORA-31011          | ORA-31011       |
    | 70        | NULL          |       | 1                      | 0                   | 1                  | 0               |
    | 80        | addl. Node    | N     | 0                      | 0                   | ORA-31187          | ORA-31187       |
    | 90        | crappy xml    |       | ORA-31061              | 0                   | ORA-30937          | ORA-30937       |
    This table is for INSERT of XMLs using XMLPARSE(DOCUMENT XML_DOC) (same as above, but without "WELLFORMED")
    | Test case | Description   | VALID | Binary XML, no trigger | Binary XML, trigger | OR XML, no trigger | OR XML, trigger |
    |-----------|---------------|-------|------------------------|---------------------|--------------------|-----------------|
    | 10        | Good          | Y     | 1                      | 1                   | 1                  | 1               |
    | 20        | no name       | N     | 1                      | 0                   | 1                  | 0               |
    | 30        | switched tags | N     | 1                      | 1                   | 1                  | 1               |
    | 40        | Good          | Y     | 1                      | 1                   | 1                  | 1               |
    | 50        | L2 down       | N     | 1                      | 1                   | 1                  | 1               |
    | 60        | no xml        |       | ORA-31011              | ORA-31011           | ORA-31011          | ORA-31011       |
    | 70        | NULL          |       | 1                      | 0                   | 1                  | 0               |
    | 80        | addl. Node    | N     | 1                      | 0                   | ORA-31187          | ORA-31187       |
    | 90        | crappy xml    |       | ORA-31011              | ORA-31011           | ORA-31011          | ORA-31011       |
    As you can see, different ways of instantiating the XML being inserted produce different results for the most critical cases (60, 80 and 90). One method goes through with an error logged in the logging table and no record inserted, while the others raise exceptions. Also, using XMLPARSE with or without WELLFORMED produces different ORA numbers for record 90...
    It seems the only way to avoid exceptions for my test records is to insert XMLs created with XMLPARSE(DOCUMENT ... WELLFORMED) into the trigger-protected binary XML table. However, that causes me to miss record number 80, which is one that I really need: it is well formed, and I could use XSLTs to fix the XML by removing the extra tags, as long as I know which ones they are.
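    The distinction driving these different outcomes can be sketched outside Oracle: a WELLFORMED-style check only asks whether the document parses at all, while schema validation also checks the structure against the XSD. A minimal Python analogy (hypothetical illustration, not Oracle's implementation):

    ```python
    # Analogy for XMLPARSE(DOCUMENT ... WELLFORMED) vs. schema-validated XMLType():
    # a well-formedness check accepts any parseable document, even one with
    # unexpected nodes; only malformed markup is rejected outright.
    import xml.etree.ElementTree as ET

    def is_well_formed(doc: str) -> bool:
        """Return True if the document parses, i.e. is well-formed XML."""
        try:
            ET.fromstring(doc)
            return True
        except ET.ParseError:
            return False

    # Like test case 80 ("addl. Node"): well-formed, but the <extra> node
    # would fail schema validation.
    extra_node = "<root><name>x</name><extra>unexpected</extra></root>"

    # Like test case 90 ("crappy xml"): not even well-formed (mismatched tag),
    # which Oracle reports as ORA-31011.
    crappy = "<root><name>x</root>"

    print(is_well_formed(extra_node))  # True  -> passes a WELLFORMED-style check
    print(is_well_formed(crappy))      # False -> rejected before any validation
    ```

    This mirrors why record 80 slips through the WELLFORMED path (and can still be repaired with XSLT) while record 90 fails at parse time no matter which constructor is used.
    
    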

  • Calculating Profit of COGS - different ways of calculating

    I found that in SAP B1 the profit calculation varies, depending on where you look and on the various settings allowed. I am talking about the calculation method for COGS. For instance:
    1. In System Initialisation/Document Settings one can base the profit calculation on, say, Last Calculated Price
    2. In one Item Master Data record (say item code ABC) the calculation method can be set to FIFO
    3. In another Item Master Data record (say item code FGH) it can be set to the Average Cost method
    So now we have three ways of calculating COGS in the same system; how can one achieve consistency?
    My understanding is that setting 1 is not taken into account when the system charges COGS, so does anybody know what the purpose of this setting is?
    Thank you
    Robert
    I am raising this for discussion in order to confirm my understanding.

    Robert--
    The basic rule to understand in B1 is that specific settings take precedence over general settings.  The company-wide default settings that you find in the System Initialization screens set the general parameters for the system.  But they may be overridden on specific records - like the Item Master or BP Master.  On these, company policy may determine that certain items or business partners do not fit the general rules and should be treated differently in some regard.  So an authorized user may change the default settings for these records, either during the initial entry of the record or afterwards (if changes to the specific fields are permitted).  Similarly, users may change some settings on specific documents if they represent an exception situation that should not be forced to follow the usual rules.
    The other consideration - which should make accountants happy - is that settings that affect journal postings generally cannot be changed once they have been used.  So, for example, a customer's AR control account number cannot be changed once there are any transactions for that customer. An item's valuation method cannot be changed while there are quantities in the warehouse or open documents that may have caused journal postings.  These rules may not represent the consistency you wish for, but they do enforce integrity in the accounting process.
    The place where these restrictions cause problems is when records are imported from a legacy system without regard to the system defaults.  Since these records are brought in and saved in a single process, they can have whatever settings are specified in the import process, which may or may not be the system defaults you wanted.  Once these records are created and then used in transactions, it is very difficult to make them conform to what you wanted.  Changing the system defaults will never cause retroactive changes to existing records.
    If you are having continuing problems with settings on new records, I would consider restricting authorization for adding or editing new records to those people who understand what the ramifications are.  Additional training for your personnel may also be helpful, now that you have a better understanding of how the system works.
