Performance of large tables with ADT columns

Hello,
We are planning to build a large table (1 billion+ rows) where one of the columns is an Advanced Data Type (ADT) column. The ADT column will be based on a TYPE with approximately 250 attributes.
We are using Oracle 10g R2.
Can you please tell me the following:
1. How will Oracle store the data in the ADT column?
2. Will the entire ADT record fit in one block?
3. Is it still possible to partition a table on an attribute that is part of the ADT?
4. How will the performance be affected if Oracle does a full table scan of such a table?
5. How much space will Oracle take, if any, for storing a NULL in an ADT?
I think we can create indexes on the attributes of the ADT column. Please let me know if this is not true.
Thanks for your help.

I agree with D.Morgan that an object type with 250 attributes is doubtful.
I don't like object tables (tables with "row objects") either.
But your table is a relational table with an object column ("column object").
C.J. Date, in An Introduction to Database Systems (2004, page 885), says:
"... object/relational systems ... are, or should be, basically just relational systems
that support the relational domain concept (i.e., types) properly - in other words, true relational systems,
meaning in particular systems that allow users to define their own types."
1. How will Oracle store the data in the ADT column?...
For some answers see:
“OR(DBMS) or R(DBMS), That is the Question”
http://www.quest-pipelines.com/pipelines/plsql/tips.htm#OCTOBER
and (of course):
"Oracle® Database Application Developer's Guide - Object-Relational Features" 10g Release 2 (10.2)
http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14260/adobjadv.htm#i1006903
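For illustration, here is a minimal sketch (all names hypothetical) of a relational table with a column object. Note that the partition key is a top-level scalar column; as far as I know a partition key cannot reference an attribute inside the object type, so any attribute you want to partition on is better kept as an ordinary relational column. Indexing a leaf-level scalar attribute, as you suggest, does work:
CREATE TYPE customer_typ AS OBJECT (
  first_name VARCHAR2(30),
  last_name  VARCHAR2(30),
  birth_date DATE
  -- ... up to the ~250 attributes in your case
);
/
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE NOT NULL,
  customer   customer_typ
)
PARTITION BY RANGE (order_date) (
  PARTITION p2005 VALUES LESS THAN (DATE '2006-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
-- index on a scalar attribute of the column object:
CREATE INDEX orders_cust_lname_ix ON orders o (o.customer.last_name);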
Regards,
Zlatko

Similar Messages

  • Oracle 11.2 - Perform parallel DML on a non partitioned table with LOB column

    Hi,
Since I wanted to demonstrate the new Oracle 12c enhancements to SecureFiles, I tried to use PDML statements on a non-partitioned table with a LOB column, in both the Oracle 11g and Oracle 12c releases. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
    Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
    INSERT AS SELECT
    CREATE TABLE AS SELECT
    DELETE
    UPDATE
    MERGE (conditional UPDATE and INSERT)
    Multi-table INSERT
    So I created and populated a simple table with a BLOB column:
    SQL> CREATE TABLE T1 (A BLOB);
    Table created.
    Then, I tried to see the execution plan of a parallel DELETE:
    SQL> EXPLAIN PLAN FOR
      2  delete /*+parallel (t1,8) */ from t1;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3718066193
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |  2048 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |  2048 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement (level=2)
    And I finished by executing the statement.
    SQL> commit;
    Commit complete.
    SQL> alter session enable parallel dml;
    Session altered.
    SQL> delete /*+parallel (t1,8) */ from t1;
    2048 rows deleted.
As we can see, the statement has been run in parallel:
    SQL> select * from v$pq_sesstat;
    STATISTIC                      LAST_QUERY SESSION_TOTAL
    Queries Parallelized                    1             1
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             1
    Server Threads                          5             0
    Allocation Height                       5             0
    Allocation Width                        1             0
    Local Msgs Sent                        55            55
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                      55            55
    Distr Msgs Recv'd                       0             0
    11 rows selected.
Is this normal? It is not supposed to be supported on Oracle 11g with a non-partitioned table containing a LOB column...
    Thank you for your help.
    Michael

Yes, I did. I tried with FORCE PARALLEL DML, and these are the results on my 12c DB, with the non-partitioned table and SecureFiles LOB column.
    SQL> explain plan for delete from t1;
    Explained.
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |     4 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |     4 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
The DELETE is not performed in parallel.
I tried another statement:
    SQL> explain plan for
    2        insert into t1 select * from t1;
    Here are the results:
    11g
    | Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT         |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  LOAD TABLE CONVENTIONAL | T1       |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)   | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR    |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL   | T1       |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    12c
    | Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT                   |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)              | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                  | T1       |       |       |            |          |  Q1,00 | PCWP |            |
    |   4 |     OPTIMIZER STATISTICS GATHERING |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR             |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
It seems that the DELETE statement has problems, but not the INSERT AS SELECT!
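One detail worth noting in the v$pq_sesstat output earlier in the thread: "Queries Parallelized" is 1 but "DML Parallelized" is 0, i.e. only the scan ran in parallel while the delete itself stayed serial (the DELETE operator sits above the PX COORDINATOR in the plan), which is consistent with the documented restriction. A minimal sketch of the session setup and the check:
-- parallel DML must be enabled (or forced) in the session before parsing:
ALTER SESSION ENABLE PARALLEL DML;
-- or: ALTER SESSION FORCE PARALLEL DML PARALLEL 8;

DELETE /*+ PARALLEL(t1, 8) */ FROM t1;

-- "DML Parallelized" (not "Queries Parallelized") shows whether the DML
-- itself ran in parallel:
SELECT statistic, last_query
FROM   v$pq_sesstat
WHERE  statistic IN ('Queries Parallelized', 'DML Parallelized');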

  • HOW TO CREATE A TABLE WITH 800 COLUMNS?

I have to create a table with 800 columns. I know the CREATE TABLE statement, but writing it out will take a long time.
So tell me another method.

If you really think that you have to store 800 values for a given entity, it would be wise to store your information in attribute (key/value) rows. Make a main table and an attribute table: keep the primary identifier in the main table and store the other attributes in the attribute table, where you can keep the primary key of the first table as a foreign key (not strictly necessary) to maintain the relationship.
    eg.
    emp_id  emp_name  dob         city  state  country  pincode
    1       Mr X      01/01/1990  ABC   ZXC    MMM      12345
    Can be stored as
    Main Table
    emp_id  emp_name
    1       Mr X
    Attribute Table
    attr_id  emp_id  attr_name  attr_value
    1        1       dob        01/01/1990
    2        1       city       ABC
    3        1       state      ZXC
    4        1       country    MMM
    5        1       pincode    12345
Creating a table with a large number of columns is bad design, as suggested by the other gurus. A minimal DDL sketch of the main/attribute design above (hypothetical names and sizes) follows.
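    CREATE TABLE emp_main (
      emp_id   NUMBER PRIMARY KEY,
      emp_name VARCHAR2(100)
    );

    CREATE TABLE emp_attribute (
      attr_id    NUMBER PRIMARY KEY,
      emp_id     NUMBER REFERENCES emp_main (emp_id),  -- FK optional, per the post
      attr_name  VARCHAR2(30),
      attr_value VARCHAR2(4000)
    );

    INSERT INTO emp_main VALUES (1, 'Mr X');
    INSERT INTO emp_attribute VALUES (1, 1, 'dob',  '01/01/1990');
    INSERT INTO emp_attribute VALUES (2, 1, 'city', 'ABC');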
    Thanks

  • Error inserting a row into a table with identity column using cfgrid on change

I got an error trying to insert a row into a table with an identity column using cfgrid onChange; see below.
Also, I would like to use cfstoredproc instead of cfquery, but which arguments do I need to pass and how do I use it? Usually I use a stored procedure:
    update table (xxx,xxx,xxx)
    values (uu,uuu,uu)
         My component
    <!--- Edit a Media Type  --->
        <cffunction name="cfn_MediaType_Update" access="remote">
            <cfargument name="gridaction" type="string" required="yes">
            <cfargument name="gridrow" type="struct" required="yes">
            <cfargument name="gridchanged" type="struct" required="yes">
            <!--- Local variables --->
            <cfset var colname="">
            <cfset var value="">
            <!--- Process gridaction --->
            <cfswitch expression="#ARGUMENTS.gridaction#">
                <!--- Process updates --->
                <cfcase value="U">
                    <!--- Get column name and value --->
                    <cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
                    <cfset value=ARGUMENTS.gridchanged[colname]>
                    <!--- Perform actual update --->
                    <cfquery datasource="#application.dsn#">
                    UPDATE SP.MediaType
                    SET #colname# = '#value#'
                    WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
                    </cfquery>
                </cfcase>
                <!--- Process deletes --->
                <cfcase value="D">
                    <!--- Perform actual delete --->
                    <cfquery datasource="#application.dsn#">
                    update SP.MediaType
                    set Deleted=1
                    WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
                    </cfquery>
                </cfcase>
                <cfcase value="I">
                    <!--- Get column name and value --->
                    <cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
                    <cfset value=ARGUMENTS.gridchanged[colname]>
                    <!--- Perform actual update --->
                   <cfquery datasource="#application.dsn#">
                    insert into  SP.MediaType (#colname#)
                    Values ('#value#')              
                    </cfquery>
                </cfcase>
            </cfswitch>
        </cffunction>
    my table
    mediatype:
    mediatypeid primary key,identity
    mediatypename
    my code is
    <cfform method="post" name="GridExampleForm">
            <cfgrid format="html" name="grid_Tables2" pagesize="3"  selectmode="edit" width="800px" 
            delete="yes"
            insert="yes"
                  bind="cfc:sp3.testing.MediaType.cfn_MediaType_All
                                                                ({cfgridpage},{cfgridpagesize},{cfgridsortcolumn},{cfgridsortdirection})"
                  onchange="cfc:sp3.testing.MediaType.cfn_MediaType_Update({cfgridaction},
                                                {cfgridrow},
                                                {cfgridchanged})">
                <cfgridcolumn name="MediaTypeID" header="ID"  display="no"/>
                <cfgridcolumn name="MediaTypeName" header="Media Type" />
            </cfgrid>
    </cfform>
On insert I get the following Ajax logging error message:
    http: Error invoking xxxxxxx/MediaType.cfc : Element '' is undefined in a CFML structure referenced as part of an expression.
    {"gridaction":"I","gridrow":{"MEDIATYPEID":"","MEDIATYPENAME":"uuuuuu","CFGRIDROWINDEX":4} ,"gridchanged":{}}
    Thanks

    Is this with the Travel database or another database?
    If it's another database then make sure your columns
    allow nulls. To check this in the Server Navigator, expand
    your DataSource down to the column.
    Select the column and view the Is Nullable property
    in the Property Sheet
    If still no luck, check out a tutorial, like Performing Inserts, ...
    http://developers.sun.com/prodtech/javatools/jscreator/learning/tutorials/index.jsp
    John

  • Sparse table with many columns

    Hi,
I have a table that contains around 800 columns. The table is sparse, such that many rows
contain up to 50 populated columns (the others contain NULL).
My questions are:
1. Can a table that contains many columns cause a performance problem? Is there an alternative option to
hold a table with many columns efficiently?
2. Does a row that contains NULL values consume storage space?
    Thanks
    dyahav

    [NULLs Indicate Absence of Value|http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10743/schema.htm#sthref725]
    A null is the absence of a value in a column of a row. Nulls indicate missing, unknown, or inapplicable data. A null should not be used to imply any other value, such as zero. A column allows nulls unless a NOT NULL or PRIMARY KEY integrity constraint has been defined for the column, in which case no row can be inserted without a value for that column.
    Nulls are stored in the database if they fall between columns with data values. In these cases they require 1 byte to store the length of the column (zero).
    Trailing nulls in a row require no storage because a new row header signals that the remaining columns in the previous row are null. For example, if the last three columns of a table are null, no information is stored for those columns. In tables with many columns, the columns more likely to contain nulls should be defined last to conserve disk space.
    Most comparisons between nulls and other values are by definition neither true nor false, but unknown. To identify nulls in SQL, use the IS NULL predicate. Use the SQL function NVL to convert nulls to non-null values.
Nulls are not indexed, except when the cluster key column value is null or the index is a bitmap index.
    My guess for efficiently storing this information would be to take any columns that are almost always null and place them at the end of the table definition so they don't consume any space.
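A quick way to verify this (a sketch with hypothetical tables; only ID is populated, the VARCHAR2 columns stay NULL, and AVG_ROW_LEN should come out smaller for the table whose NULL columns trail):
    CREATE TABLE t_nulls_last  (id NUMBER, c1 VARCHAR2(10), c2 VARCHAR2(10), c3 VARCHAR2(10));
    CREATE TABLE t_nulls_first (c1 VARCHAR2(10), c2 VARCHAR2(10), c3 VARCHAR2(10), id NUMBER);

    INSERT INTO t_nulls_last  (id) SELECT level FROM dual CONNECT BY level <= 10000;
    INSERT INTO t_nulls_first (id) SELECT level FROM dual CONNECT BY level <= 10000;

    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'T_NULLS_LAST')
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'T_NULLS_FIRST')

    -- trailing NULLs cost nothing; leading/interior NULLs cost one length byte each
    SELECT table_name, avg_row_len
    FROM   user_tables
    WHERE  table_name IN ('T_NULLS_LAST', 'T_NULLS_FIRST');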
    HTH!

  • Exporting Table with CLOB Columns

    Hello All,
I am trying to export a table with CLOB columns, with no luck. It errors with EXP-00011: table does not exist.
I can query the table, and the owner is the same as the one I am exporting from.
    Please let me know.

An 8.0.6 client definitely changes things. Other posters have already posted links to information on which versions of exp and imp can be used to move data between versions.
I will just add that if you were using a client to do the export, then if the client version is less than the target database version you can upgrade the client, or better yet, if possible, use the target database's export utility to perform the export.
I will not criticize the existence of an 8.0.6 system, as we had a parent company dump a brand new 8.0.3 application on us less than two years ago. We have since been allowed to update the database and Pro*C modules to 9.2.0.6.
If the target database is really 8.0.3 then I suggest you consider using dbms_metadata to generate the DDL, if needed, and SQL*Plus to extract the data into delimited files that you can then reload via sqlldr. This would allow you to move the data, with some potential adjustments for any 10g-only features in the code.
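For the dbms_metadata route, a minimal SQL*Plus sketch (placeholder owner and table names):
    SET LONG 100000 LONGCHUNKSIZE 100000 PAGESIZE 0
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'MY_TABLE', 'MY_OWNER') FROM dual;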
    HTH -- Mark D Powell --

  • Upgrading user tables with NCHAR columns in 10.2.0.1

    Hi,
    we have upgraded our database to 10.2.0.1 from 8.1.7.4 through DBUA
Before upgradation our database had the WE8ISO8859P1 character set and the WE8ISO8859P1 national character set.
In 10g we have the WE8ISO8859P1 character set and the AL16UTF16 national character set.
    Now to upgrade user tables with NCHAR columns, we have to perform the following steps to run the scripts :
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP RESTRICT
    SQL> @ utlnchar.sql
    SQL> @ n_switch.sql
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP
But when I query for the NCHAR, NVARCHAR2, or NCLOB datatypes for verification, the NCHAR columns in the user tables have not been upgraded; they still remain the same.
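For reference, a verification query of the kind described might look like this (a sketch against the standard dictionary views):
    SELECT table_name, column_name, data_type
    FROM   user_tab_columns
    WHERE  data_type IN ('NCHAR', 'NVARCHAR2', 'NCLOB')
    ORDER  BY table_name, column_name;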
    Kindly suggest for the same.
    Regards
    Milin

    Kindly explain or post the following
- the 'query' you used after 'upgradation' (a word not occurring in any dictionary, probably specific to Hindi-English) for 'verification'
    - what result you expected
    - what 'still remains the same' means
Kindly consider that no one is looking over your shoulder. If you don't plan to post anything other than 'It doesn't work', paid support might be a better option for you.
Volunteers like us are not paid to tear the information out of you.
    Sybrand Bakker
    Senior Oracle DBA

  • ASSM and table with LOB column

I have a tablespace created with the ASSM option. I've heard that tables with LOB columns can't take advantage of ASSM.
I ran a test: I created a table T with a BLOB column in an ASSM tablespace. I succeeded!
    Now I have some questions:
1. Since the segments of table T can't use ASSM to manage their blocks, what is the actual approach? The traditional freelists?
2. Will there be bad impacts on the usage of the tablespace if table T becomes larger and larger and is used frequently?
    Thanks in advance.

Can you explain what you mean by #1? I believe it is incorrect, and it does not make sense in my personal opinion. You can create a table with a LOB column in an ASSM tablespace from 9iR2 on, I believe (I could be wrong). LOBs don't follow the traditional PCTFREE/PCTUSED scenario. They allocate data in what are called "chunks" that you can define at the time you create the table. In fact I think the new SECUREFILE LOBs actually require ASSM tablespaces.
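To illustrate the "chunk" clause mentioned above, a sketch with hypothetical names (BasicFile LOB syntax as in 9iR2/10g):
    CREATE TABLE t_docs (
      id  NUMBER PRIMARY KEY,
      doc BLOB
    )
    TABLESPACE assm_ts
    LOB (doc) STORE AS (
      TABLESPACE assm_ts
      CHUNK 16384              -- LOB data is allocated in these chunks
      ENABLE STORAGE IN ROW    -- small LOBs (under ~4000 bytes) stay in the row
    );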
    HTH!

  • Export table with LOB column

    Hi!
I have to export a table with a LOB column (the LOB segment is 3 GB) and then drop that LOB column from the table. The table has about 350k rows.
    (I was thinking) - I have to:
    1. create new tablespace
    2. create copy of my table with CTAS in new tablespace
    3. alter new table to be NOLOGGING
    4. insert all rows from original table with APPEND hint
    5. export copy of table using transport tablespace feature
    6. drop newly created tablespace
    7. drop lob column and rebuild original table
    DB is Oracle 9.2.0.6.0.
    UNDO tablespace limited on 2GB with retention 10800 secs.
When I tried to insert rows into the new table with the /*+ APPEND */ hint, the operation was very, very slow, so I canceled it.
How much time should I expect this operation to take?
Is my UNDO sufficient to avoid a snapshot-too-old (ORA-01555) error?
    What do you think?
    Thanks for your answers!
    Regards,
    Marko Sutic

I had seen that document before I posted this question.
Still, I don't know what I should do. Look at this document - Doc ID: 281461.1
    From that document:
    FIX
Although the performance of the export cannot be improved directly, possible
alternative solutions are:
1. If not required, do not use LOB columns.
or:
2. Use Transport Tablespace export instead of full/user/table level export.
or:
3. Upgrade to Oracle 10g and use Export DataPump and Import DataPump.
I just have to speed up the CTAS a little more somehow (maybe using parallel processing), as in the sketch below.
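A sketch of a parallel CTAS (placeholder names); a direct-path NOLOGGING CTAS also generates only minimal undo for the new segment, which should help with the snapshot-too-old concern:
    CREATE TABLE t_copy
      NOLOGGING
      PARALLEL 4
      TABLESPACE new_ts
    AS
    SELECT /*+ PARALLEL(t, 4) */ *
    FROM   original_table t;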
    Anyway thanks for suggestion.
    Regards,
    Marko

  • Find size of table with XMLTYPE column STORE AS BINARY XML

    Hi,
    I have a table with structure as:
    CREATE TABLE XML_TABLE_1 (
      ID            NUMBER NOT NULL,
      SOURCE        VARCHAR2(255 CHAR) NOT NULL,
      XML_TEXT      SYS.XMLTYPE,
      CREATION_DATE TIMESTAMP(6) NOT NULL
    )
    XMLTYPE XML_TEXT STORE AS BINARY XML (
      TABLESPACE Tablespace1_LOB
      DISABLE STORAGE IN ROW
      CHUNK 16384
      RETENTION
      CACHE READS
      NOLOGGING
    )
    ALLOW NONSCHEMA
    DISALLOW ANYSCHEMA
    TABLESPACE Tablespace2_DATA;
- So HOW do I find the total size occupied by this table? Does BINARY XML storage work as LOB storage, i.e. do I need to consider USER_LOBS as well?
Or will the following work:
SELECT segment_name AS tablename,
       SUM(bytes / (1024 * 1024 * 1024)) AS tablesize_in_gb
FROM   dba_segments
WHERE  segment_name = 'XML_TABLE_1'
AND    owner = 'SCHEMANAME'
GROUP  BY segment_name;
- Also, if I am copying it to another table of the same structure with:
INSERT /*+ APPEND */ INTO XML_TABLE_2 SELECT * FROM XML_TABLE_1;
then how much space in the rollback segment do I need? Is it equal to the size of table XML_TABLE_1?
    Thanks..

I think the following query calculates it right, including the LOB storage:
SELECT SUM(bytes) / 1024 / 1024 / 1024 gb
FROM   dba_segments
WHERE  (owner = 'SCHEMA_NAME'
        AND segment_name = 'TABLE_NAME')
OR     (owner, segment_name) IN (
         SELECT owner, segment_name
         FROM   dba_lobs
         WHERE  owner = 'SCHEMA_NAME'
         AND    table_name = 'TABLE_NAME');
It's 80 GB for our table with the XMLType column.
But for the second point:
do we need 80 GB of UNDO/rollback segment to perform
INSERT /*+ APPEND */ INTO TableName1 SELECT * FROM TableName;
    Thanks..

  • Selection in table with Master Column

    Hi,
can anybody explain to me how selection works in a table with a master column (TreeByNestingTableColumn)?
I've created a table with several columns, and the "master" one behaves differently from the others.
First of all, leaf elements of the master column are not clickable (i.e. I don't receive the "onLeadSelect" notification from the table when the user clicks the cell of a leaf element of the column). With expandable elements I receive the loadChildren event and can react to the selection.
I'm using TextView as the cell editor and will try LinkToAction; maybe it will resolve the situation, but that's more a workaround than a solution.
Second, getLeadSelection doesn't work anymore for the table - it works with indexes of top-level elements only.
However, the onLeadSelect action brings me the selected row number, which I somehow have to map to the tree.
I.e., when I get "25" as the selected element, I recursively traverse the context node to find the 25th element. This is a rather slow operation, since for every selection I have to re-iterate the whole structure and hope that the visual order matches the context element order.
Maybe there is a simpler way to get the selected element by its index in the "flat" list?
And one other strange thing: the first click on the table sends me onLeadSelect "0", but the highlight points to the element the user clicked - not necessarily the first one.
The context node is "single" selection, initLeadSelect=true,
selection = 0..1 (with 1..1, setTreeSelection doesn't work - the method I want to use to be able to select a node in the hierarchy on my own).
All quite similar to other tables I have, which work well. The problems occur only with the table where the master column is used.
    Thank you!
    Best regards,
    Nick

>Valery's proposal was to perform a reverse traversal from the current element to the root and sum the parent element indexes.
    It seems to work:
In the method which sets the selection I applied:
// we need to know the index of the element not in the whole context,
// but in the scope of the table-relevant node
int index = getIndexInContext(selElement, wdContext.currentXXXElement());
IContextElement curContext = wdContext.currentContextElement();
int firstVisibleRow = curContext.getTableFirstVisibleRow();
int visibleRows = curContext.getTableVisibleRows();
if (index < firstVisibleRow || index > firstVisibleRow + visibleRows) {
    curContext.setTableFirstVisibleRow(index);
}
and the method getIndexInContext looks like:
    private int getIndexInContext(IWDNodeElement element, IWDNodeElement root) {
        int index = element.index();
        if (element == root) {
            return index;
        }
        IWDNodeElement parent = element.node().getParentElement();
        if (parent == null) {
            // do whatever you like here for error diagnostics
            myMsgMgr.reportException("Internal Error: getIndexInContext - element is not under passed root", false);
            return 0;
        }
        // +1 - every level adds 1 (otherwise indexes of first children of the hierarchy would be 0)
        return index + getIndexInContext(parent, root) + 1;
    }
    Best regards,
    Nick

  • Protected memory exception during bulkcopy of table with LOB columns

    Hi,
I'm using ODP.NET's OracleBulkCopy (ADO.NET bulk copy) to transfer data from a SQL Server database to Oracle. In some cases, and it seems to happen only on some tables with LOB columns, I get the following exception:
    System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
    at Oracle.DataAccess.Client.OpsBC.Load(IntPtr opsConCtx, OPOBulkCopyValCtx* pOPOBulkCopyValCtx, IntPtr pOpsErrCtx, Int32& pBadRowNum, Int32& pBadColNum, Int32 IsOraDataReader, IntPtr pOpsDacCtx, OpoMetValCtx* pOpoMetValCtx, OpoDacValCtx* pOpoDacValCtx)
    at Oracle.DataAccess.Client.OracleBulkCopy.PerformBulkCopy()
    at Oracle.DataAccess.Client.OracleBulkCopy.WriteDataSourceToServer()
    at Oracle.DataAccess.Client.OracleBulkCopy.WriteToServer(IDataReader reader)
    I'm not sure exactly what conditions trigger this exception; perhaps only when the LOB data is large enough?
    I'm using Oracle 11gR2.
    Has anyone seen this or have an idea how to solve it?
    If I catch the exception and attempt row-by-row copying, I then get "ILLEGAL COMMIT" exceptions.
    Thanks,
    Ben

    From the doc:
    Data Types Supported by Bulk Copy
    The data types supported by Bulk Copy are:
    ORA_SB4
    ORA_VARNUM
    ORA_FLOAT
    ORA_CHARN
    ORA_RAW
    ORA_BFLOAT
    ORA_BDOUBLE
    ORA_IBDOUBLE
    ORA_IBFLOAT
    ORA_DATE
    ORA_TIMESTAMP
    ORA_TIMESTAMP_TZ
    ORA_TIMESTAMP_LTZ
    ORA_INTERVAL_DS
    ORA_INTERVAL_YM
I can't find any documentation on these data types (I'm guessing they are external datatype constants used by OCI?). This list suggests that ADO.NET bulk copy of LOBs isn't supported at all (although it works fine most of the time), unless I'm misreading it.
    The remaining paragraphs don't appear to apply to me.
    Thanks,
    Ben

  • Error while importing a table with BLOB column

    Hi,
I have a table with a BLOB column. When I export this table it is exported correctly, but when I import it into a different schema with a different tablespace, it throws this error:
    IMP-00017: following statement failed with ORACLE error 959:
    "CREATE TABLE "CMM_PARTY_DOC" ("PDOC_DOC_ID" VARCHAR2(10), "PDOC_PTY_ID" VAR"
    "CHAR2(10), "PDOC_DOCDTL_ID" VARCHAR2(10), "PDOC_DOC_DESC" VARCHAR2(100), "P"
    "DOC_DOC_DTL_DESC" VARCHAR2(100), "PDOC_RCVD_YN" VARCHAR2(1), "PDOC_UPLOAD_D"
    "ATA" BLOB, "PDOC_UPD_USER" VARCHAR2(10), "PDOC_UPD_DATE" DATE, "PDOC_CRE_US"
    "ER" VARCHAR2(10) NOT NULL ENABLE, "PDOC_CRE_DATE" DATE NOT NULL ENABLE) PC"
    "TFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS"
    " 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "TS_AGIMSAPPOLOLIVE030"
    "4" LOGGING NOCOMPRESS LOB ("PDOC_UPLOAD_DATA") STORE AS (TABLESPACE "TS_AG"
    "IMSAPPOLOLIVE0304" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10 NOCACHE L"
    "OGGING STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEF"
    "AULT))"
    IMP-00003: ORACLE error 959 encountered
    ORA-00959: tablespace 'TS_AGIMSAPPOLOLIVE0304' does not exist
I used the import command as follows:
    imp <user/pwd@conn> file=<dmpfile.dmp> fromuser=<fromuser> touser=<touser> log=<logfile.log>
    What can I do so that this table gets imported correctly?
Also, tell me whether the BLOB is stored in a different tablespace than the default tablespace of the user.
    Thanks in advance.

    Hello,
You can either:
1) create a tablespace with the same name in the destination database where you are trying to import, or
2) get the DDL of the table, modify the tablespace name to reflect an existing tablespace in the destination, run the DDL in the destination database, and then run your import command with the option ignore=y, which will ignore the create errors.
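For option 1, a sketch (the datafile path and size are placeholders; the tablespace name comes from the error message). Afterwards re-run the import, with ignore=y if some objects already exist:
    CREATE TABLESPACE TS_AGIMSAPPOLOLIVE0304
      DATAFILE '/u01/oradata/ts_agimsapololive0304_01.dbf' SIZE 500M
      AUTOEXTEND ON;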
    Regards,
    Vinay

  • How to create a table with editable column values.

    Hi everyone,
I think this is very simple, but I'm not able to find out how to do it. This is my requirement: I need to create a table with n columns and 1 row initially. The user should be able to enter data into this table, and a click of a button should insert the data into the database table. Also, there should be a button at the bottom of the table to add one row to it.
I know how to do the insert to the database, but can anyone please let me know how to create a table that allows the user to enter data, and how to create an "add 1 row" button?
    Thanks in Advance!

    Raghu,
Go through the ToolBox tutorial's Create Page and Advanced Table sections of the OAF Guide.
Step 1 - You need to create an EO and a VO based on this EO. The EO will be based on the database table into which you want to insert the data.
Step 2 - Create an Advanced Table region. (Refer to the Advanced Table section for more on this.)
Step 3 - Attach this VO to the BC4J component of the Advanced Table region.
    Regards,
    Gyan

  • How to read/write a binary file from/to a table with BLOB column

I have created a table with a column of data type BLOB.
I can read/write an IMAGE file from/to that column using:
    READ_IMAGE_FILE
    WRITE_IMAGE_FILE
    How can I do the same for other binary files, e.g. aaaa.zip?

There is a packaged procedure, DBMS_LOB.LOADBLOBFROMFILE, to read a BLOB from a file.
http://download-east.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lob.htm#sthref3583
To write a BLOB to a file you can use a Java stored procedure (pre Oracle 9iR2) or UTL_FILE.PUT_RAW (there is no dbms_lob.writelobtofile).
    http://asktom.oracle.com/pls/ask/f?p=4950:8:1559124855641433424::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:6379798216275
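A sketch of the UTL_FILE.PUT_RAW approach (the directory object, table, and file names are hypothetical); it streams the BLOB out to disk in 32K chunks:
    DECLARE
      l_blob   BLOB;
      l_file   UTL_FILE.FILE_TYPE;
      l_buffer RAW(32767);
      l_amount BINARY_INTEGER := 32767;
      l_pos    INTEGER := 1;
      l_len    INTEGER;
    BEGIN
      SELECT doc_body INTO l_blob FROM docs WHERE doc_id = 1;
      l_len  := DBMS_LOB.GETLENGTH(l_blob);
      -- 'wb' = write byte mode; MY_DIR must be an existing directory object
      l_file := UTL_FILE.FOPEN('MY_DIR', 'aaaa.zip', 'wb', 32767);
      WHILE l_pos <= l_len LOOP
        DBMS_LOB.READ(l_blob, l_amount, l_pos, l_buffer);
        UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);
        l_pos := l_pos + l_amount;
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
    END;
    /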
