Conflict resolution for a table with LOB column ...

Hi,
I was hoping for some guidance or advice on how to handle conflict resolution for a table with a LOB column.
Basically, I had intended to handle the conflict resolution using the MAXIMUM prebuilt update conflict handler. I also store
the 'update' transaction time in the same table and was planning to use it as the resolution column to resolve the conflict.
I see, however, that the prebuilt conflict handlers do not support LOB columns, so I assume I need to code a custom handler
to do this for me. I'm not sure exactly what my custom handler needs to do, though. Any guidance or links to similar examples would
be very much appreciated.

Hi,
I have been unable to make any progress on this issue. I have used prebuilt update handlers with no problems
before, but I just don't know how to resolve these conflicts for LOB columns using custom handlers. I have some questions
which I hope make sense and are relevant:
1. Does an apply process detect update conflicts on LOB columns?
2. If I need to create a custom update/error handler to resolve this, should I create a prebuilt update handler for the non-LOB columns
in the table and a separate one for the LOB columns, or is it best to code a single custom handler for ALL columns?
3. In my custom handler, I assume I will need to use the resolution column to decide whether or not to resolve the conflict in favour of the LCR,
but how do I compare the new value in the LCR with the one in the destination database? That is, how do I access the current value in the destination
database from the custom handler?
4. Finally, if I resolve in favour of the LCR, do I need to call something specific for the LOB columns compared to the non-LOB columns?
Any help with these would be very much appreciated; even a pointer to documentation or other links would be good.
Thanks again.
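For anyone with the same questions, below is a minimal, untested sketch of one way to approach questions 2-4: a single procedure DML handler registered for UPDATEs on the whole table with assemble_lobs set to TRUE, which reads the destination row itself and overwrites it with the LCR values when the LCR wins on the resolution column. All object names (MY_SCHEMA.MY_TABLE, ID, UPDATE_TIME, DOC, MY_LOB_UPDATE_HANDLER) are placeholders, not anything from the original post, and error handling is omitted.
-- Sketch only: key column ID and resolution column UPDATE_TIME are placeholders;
-- NULL checks and error handling are omitted for brevity.
CREATE OR REPLACE PROCEDURE my_lob_update_handler (in_any IN ANYDATA) IS
  lcr      SYS.LCR$_ROW_RECORD;
  rc       PLS_INTEGER;
  any_val  ANYDATA;
  lcr_id   NUMBER;
  lcr_time TIMESTAMP;
  lcr_lob  BLOB;
  db_time  TIMESTAMP;
BEGIN
  rc := in_any.GETOBJECT(lcr);

  any_val := lcr.GET_VALUE('OLD', 'ID');
  lcr_id  := any_val.ACCESSNUMBER;

  any_val  := lcr.GET_VALUE('NEW', 'UPDATE_TIME');
  lcr_time := any_val.ACCESSTIMESTAMP;

  -- The assembled LOB (question 4). NB: if the LOB was not changed by the
  -- update, GET_VALUE returns NULL; a real handler must check for that.
  any_val := lcr.GET_VALUE('NEW', 'DOC');
  lcr_lob := any_val.ACCESSBLOB;

  -- Question 3: read the current destination value with an ordinary query.
  SELECT update_time INTO db_time
  FROM   my_schema.my_table
  WHERE  id = lcr_id;

  IF lcr_time >= db_time THEN
    -- The LCR wins on the resolution column: overwrite the destination row
    -- directly rather than calling lcr.EXECUTE, so mismatching old values
    -- cannot raise a conflict error.
    UPDATE my_schema.my_table
    SET    update_time = lcr_time,
           doc         = lcr_lob
    WHERE  id = lcr_id;
  END IF;
  -- Otherwise do nothing and the destination row (the "maximum") is kept.
END my_lob_update_handler;
/

-- Register it as a procedure DML handler for UPDATEs on the table;
-- assemble_lobs => TRUE delivers a single assembled row LCR, so the
-- handler also covers the LOB column (question 4).
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'MY_SCHEMA.MY_TABLE',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => FALSE,
    user_procedure => 'MY_SCHEMA.MY_LOB_UPDATE_HANDLER',
    assemble_lobs  => TRUE);
END;
/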

Similar Messages

  • When comparing database tables with lob columns via "Database diff" in different environments indexes are shown as different

    When using "Database diff" and selecting other schemas for compare, only own objects are shown, too!
    Hi!
    For tables with LOB columns (CLOB, BLOB, etc.), indexes with system-generated names are created automatically per LOB column.
    If I am on different database instances (e.g. dev/test), these system names can differ and are then shown as differences, but this is a false positive.
    Unfortunately there is no way to influence the index names.
    Any chance to fix this in SQL Developer?
    Best regards
    Torsten

    Only the Sql Dev team can respond to that question.
    Such indexes should ONLY be created by Oracle and should NOT be part of any DDL that you, the user, maintain outside the database, since they will be created and named by Oracle when the table is created.
    It is up to the Sql Dev team to decide whether to deal with that issue and how to deal with it.

  • Oracle 11.2 - Perform parallel DML on a non partitioned table with LOB column

    Hi,
    Since I wanted to demonstrate new Oracle 12c enhancements on SecureFiles, I tried to use PDML statements on a non partitioned table with LOB column, in both Oracle 11g and Oracle 12c releases. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
    Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
    INSERT AS SELECT
    CREATE TABLE AS SELECT
    DELETE
    UPDATE
    MERGE (conditional UPDATE and INSERT)
    Multi-table INSERT
    So I created and populated a simple table with a BLOB column:
    SQL> CREATE TABLE T1 (A BLOB);
    Table created.
    Then, I tried to see the execution plan of a parallel DELETE:
    SQL> EXPLAIN PLAN FOR
      2  delete /*+parallel (t1,8) */ from t1;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3718066193
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |  2048 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |  2048 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement (level=2)
    And I finished by executing the statement.
    SQL> commit;
    Commit complete.
    SQL> alter session enable parallel dml;
    Session altered.
    SQL> delete /*+parallel (t1,8) */ from t1;
    2048 rows deleted.
    As we can see, the statement has been run in parallel:
    SQL> select * from v$pq_sesstat;
    STATISTIC                      LAST_QUERY SESSION_TOTAL
    Queries Parallelized                    1             1
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             1
    Server Threads                          5             0
    Allocation Height                       5             0
    Allocation Width                        1             0
    Local Msgs Sent                        55            55
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                      55            55
    Distr Msgs Recv'd                       0             0
    11 rows selected.
    Is this normal? It is not supposed to be supported on Oracle 11g for a non-partitioned table containing a LOB column...
    Thank you for your help.
    Michael

    Yes, I did. I tried with force parallel DML, and these are the results on my 12c DB, with the non-partitioned table and a SecureFiles LOB column.
    SQL> explain plan for delete from t1;
    Explained.
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |     4 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |     4 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    The DELETE is not performed in parallel.
    I tried another statement:
    SQL> explain plan for
    2        insert into t1 select * from t1;
    Here are the results:
    11g
    | Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT         |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  LOAD TABLE CONVENTIONAL | T1       |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)   | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR    |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL   | T1       |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    12c
    | Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT                   |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)              | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                  | T1       |       |       |            |          |  Q1,00 | PCWP |            |
    |   4 |     OPTIMIZER STATISTICS GATHERING |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR             |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    It seems that the DELETE statement has problems, but the INSERT AS SELECT does not!
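    One way to cross-check whether the DML itself (and not just the query side of the plan) ran in parallel is to look at the "DML Parallelized" row of V$PQ_SESSTAT in the same session, immediately after the statement. A small sketch:
    -- In the output above, "Queries Parallelized" is 1 while "DML Parallelized" is 0,
    -- i.e. only the query portion of the DELETE was parallelized, not the DML itself.
    select statistic, last_query, session_total
    from   v$pq_sesstat
    where  statistic in ('Queries Parallelized', 'DML Parallelized');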

  • Export table with LOB column

    Hi!
    I have to export a table with a LOB column (the LOB segment is 3 GB in size) and then drop that LOB column from the table. The table has about 350k rows.
    (I was thinking) - I have to:
    1. create new tablespace
    2. create copy of my table with CTAS in new tablespace
    3. alter new table to be NOLOGGING
    4. insert all rows from original table with APPEND hint
    5. export copy of table using transport tablespace feature
    6. drop newly created tablespace
    7. drop lob column and rebuild original table
    DB is Oracle 9.2.0.6.0.
    UNDO tablespace limited on 2GB with retention 10800 secs.
    When I tried to insert the rows into the new table with the /*+ append */ hint, the operation was very slow, so I canceled it.
    How much time should I expect for this operation to complete?
    Is my UNDO sufficient to avoid a snapshot-too-old error?
    What do you think?
    Thanks for your answers!
    Regards,
    Marko Sutic

    I've seen that document before I posted this question.
    Still, I don't know what I should do. Look at this document: Doc ID 281461.1
    From that document:
    FIX
    Although the performance of the export cannot be improved directly, possible
    alternative solutions are:
    1. If not required, do not use LOB columns.
    or:
    2. Use Transport Tablespace export instead of full/user/table level export.
    or:
    3. Upgrade to Oracle10g and use Export DataPump and Import DataPump.
    I just have to speed up the CTAS a little more somehow (maybe using parallel processing).
    Anyway thanks for suggestion.
    Regards,
    Marko
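    On the "speed up the CTAS" point, a minimal sketch of a direct-path, parallel, NOLOGGING copy (table and tablespace names are placeholders; NOLOGGING assumes you accept that the operation is not recoverable from archived redo, and on a non-partitioned table with a LOB column the LOB load itself may still run serially, as discussed in the parallel DML thread above):
    -- One-step copy instead of CTAS followed by INSERT /*+ append */.
    create table t_copy
      nologging
      parallel 4
      tablespace new_ts
    as
    select /*+ parallel(t, 4) */ *
    from   original_table t;
    -- Optionally reset the attributes afterwards.
    alter table t_copy logging;
    alter table t_copy noparallel;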

  • Table with LOB column

    Hi,
    I have a problem. How do I move a table with a LOB column? And how do I create a table with a LOB column, specifying another tablespace for the LOB column?
    Please help me.
    Regards,
    Mathew

    What is it that you are not able to find?
    The link that I provided was answer to your second question.
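    For reference, a rough sketch of both operations (all object names are placeholders; note that a MOVE marks the table's indexes UNUSABLE, so rebuild them afterwards):
    -- Move an existing table and relocate its LOB segment to another tablespace.
    alter table my_table move tablespace data_ts
      lob (doc_col) store as (tablespace lob_ts);
    -- Create a table whose LOB segment lives in a different tablespace from the table data.
    create table my_table (
      id      number,
      doc_col blob
    )
    tablespace data_ts
    lob (doc_col) store as (tablespace lob_ts);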

  • ASSM and table with LOB column

    I have a tablespace created with the ASSM option. I've heard that tables with LOB columns can't take advantage of ASSM.
    I made a test: I created a table T with a BLOB column in an ASSM tablespace. I succeeded!
    Now I have some questions:
    1. Since the segments of table T can't use ASSM to manage their blocks, what is the actual approach? The traditional freelists?
    2. Will there be any bad impact on the usage of the tablespace if table T becomes larger and larger and is used frequently?
    Thanks in advance.

    Can you explain what you mean by #1? I believe it is incorrect and, in my personal opinion, it does not make sense. You can create a table with a LOB column in an ASSM tablespace from 9iR2 on, I believe (could be wrong). LOBs don't follow the traditional PCTFREE/PCTUSED scenario. They allocate data in what are called "chunks" that you can define at the time you create the table. In fact, I think the new SECUREFILE LOBs actually require ASSM tablespaces.
    HTH!
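    A small sketch illustrating the reply (names and sizes are placeholders; the SECUREFILE clause assumes 11g or later, since SecureFiles require an ASSM tablespace):
    -- ASSM tablespace: segment space management auto.
    create tablespace assm_ts
      datafile 'assm_ts01.dbf' size 100m
      extent management local
      segment space management auto;
    -- Table with a BLOB column in that tablespace; LOB space is allocated in
    -- chunks rather than managed through PCTFREE/PCTUSED freelists.
    create table t_assm_lob (
      id  number,
      doc blob
    )
    tablespace assm_ts
    lob (doc) store as securefile (tablespace assm_ts);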

  • Protected memory exception during bulkcopy of table with LOB columns

    Hi,
    I'm using ADO BulkCopy to transfer data from a SqlServer database to Oracle. In some cases, and it seems to only happen on some tables with LOB columns, I get the following exception:
    System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
    at Oracle.DataAccess.Client.OpsBC.Load(IntPtr opsConCtx, OPOBulkCopyValCtx* pOPOBulkCopyValCtx, IntPtr pOpsErrCtx, Int32& pBadRowNum, Int32& pBadColNum, Int32 IsOraDataReader, IntPtr pOpsDacCtx, OpoMetValCtx* pOpoMetValCtx, OpoDacValCtx* pOpoDacValCtx)
    at Oracle.DataAccess.Client.OracleBulkCopy.PerformBulkCopy()
    at Oracle.DataAccess.Client.OracleBulkCopy.WriteDataSourceToServer()
    at Oracle.DataAccess.Client.OracleBulkCopy.WriteToServer(IDataReader reader)
    I'm not sure exactly what conditions trigger this exception; perhaps only when the LOB data is large enough?
    I'm using Oracle 11gR2.
    Has anyone seen this or have an idea how to solve it?
    If I catch the exception and attempt row-by-row copying, I then get "ILLEGAL COMMIT" exceptions.
    Thanks,
    Ben

    From the doc:
    Data Types Supported by Bulk Copy
    The data types supported by Bulk Copy are:
    ORA_SB4
    ORA_VARNUM
    ORA_FLOAT
    ORA_CHARN
    ORA_RAW
    ORA_BFLOAT
    ORA_BDOUBLE
    ORA_IBDOUBLE
    ORA_IBFLOAT
    ORA_DATE
    ORA_TIMESTAMP
    ORA_TIMESTAMP_TZ
    ORA_TIMESTAMP_LTZ
    ORA_INTERVAL_DS
    ORA_INTERVAL_YM
    I can't find any documentation on these datatypes (I'm guessing these are external datatype constants used by OCI??). This list suggests ADO.NET bulk copy of LOBs isn't supported at all (although it works fine most of the time), unless I'm misreading it.
    The remaining paragraphs don't appear to apply to me.
    Thanks,
    Ben

  • Shrink table with LOB column

    Hello,
    I have a table with 1,000,000 BLOB records. I set almost half of the records to NULL. Now I try to reclaim the free space using:
    ALTER TABLE table MODIFY LOB (column) (SHRINK SPACE);
    It has been running for some time now, but what surprises me is how much redo this operation generates (the full table had 30 GB, after the update it should be about 15 GB, and by now I already have about 8 GB of generated archive logs).
    Do you know why this operation generates redo logs?
    Thank you,
    Adrian

    The REDO stream that Oracle generates is full of physical addresses (i.e. ROWIDs). If you run an update statement
    UPDATE some_table
       SET some_column = 4
     WHERE some_key = 12345;
    Oracle actually records in the REDO the logical equivalent of
    UPDATE some_table
       SET some_column = 4
     WHERE ROWID = <<some ROWID>>;
    That is, Oracle converts your logical SQL statement into a series of updates to a series of physical addresses. That's a really helpful thing if the REDO has to be re-applied at a later date because Oracle doesn't have to do all the work of processing the logical SQL statement again (this would be particularly useful if your UPDATE statement were running a bunch of queries that took minutes or hours to return).
    But that means that if you are physically moving rows around, you have to record that fact in the redo stream. Otherwise, if you had to re-apply the redo information (or undo information) in the future, the physical addresses stored in the redo logs may not match the physical addresses in the database. That is, if you move the row with SOME_KEY = 12345 from ROWID A to ROWID B and move the row with SOME_KEY = 67890 from ROWID C to ROWID A, you have to record both of those moves in the redo stream so that the statement
    UPDATE some_table
       SET some_column = 4
     WHERE ROWID = <<ROWID A>>;
    updates the correct row.
    Justin
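    As an aside, if the goal is mainly to reclaim the space with less redo, one alternative (a sketch only, with placeholder names; it needs room for the new segment and marks the table's indexes UNUSABLE) is to rebuild the LOB segment with a NOLOGGING move instead of SHRINK SPACE:
    -- Rebuild the table and its LOB segment; NOLOGGING keeps redo to a minimum,
    -- at the price of the operation not being recoverable from archived redo.
    alter table my_table move lob (blob_col) store as (tablespace lob_ts nologging);
    -- Rebuild any indexes invalidated by the move.
    alter index my_table_pk rebuild;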

  • Suggestions for JSF table with sortable columns and Pagination

    Hi,
    My JSF application needs a table with sortable columns and also pagination.
    Thank you.

    Just add a bunch of commandlinks and/or commandbuttons at the right locations and invoke the appropriate logic in the backing bean.
    You can find some useful insights in this article: [http://balusc.blogspot.com/2006/06/using-datatables.html].

  • Add columns to a table with lob column

    Hi,
    Just a quick question: is there a performance penalty after adding columns to a table with a LOB field? The LOB field is currently the last column in the table, and I was told secondhand that adding columns will badly impact I/O performance on the table if the LOB field is no longer the last column. The table is on Oracle 10.2.0.3.
    thanks. regards
    Ivan

    I haven't heard of performance degradation specifically due to a LOB column not being the last column in a table (although there are several issues with just having a LOB column in a table).
    You may want to build a test database to try it out. It should be easy to run tests comparing a table with the additional columns against the original to prove or refute it. The results would be interesting to learn - please post them if you do test it out.
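    A minimal sketch of such a test, with placeholder names (load both tables with identical data first, then compare the I/O statistics for a representative full scan):
    -- One table in the original shape, one with columns added after the LOB.
    create table t_orig    (id number, payload clob);
    create table t_altered (id number, payload clob);
    alter table t_altered add (extra_flag varchar2(1), extra_note varchar2(100));
    -- After loading identical data, compare consistent gets / physical reads in SQL*Plus:
    set autotrace traceonly statistics
    select /*+ full(t) */ count(id) from t_orig t;
    select /*+ full(t) */ count(id) from t_altered t;
    set autotrace off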

  • Compare tables in two schemas for the table with particular column & value

    Hello All,
    I have a query to find, in a given schema, the list of all tables having a particular search column.
    For example:
    SELECT OWNER, TABLE_NAME, COLUMN_NAME FROM
    ALL_TAB_COLUMNS WHERE OWNER='<SCHEMA_NAME>'
    AND COLUMN_NAME='<COLUMN_NAME>'
    I want to compare two schemas using the same query above.
    Can we write a query for this? I am using SQL Developer, which has a menu item (Tools > Database Diff) to find the differences between two schemas, but my requirement is to find the differences between two schemas for all the tables matching a particular column (as given in the query).
    Appreciate your help.
    thanks/Kumar

    Hi, Kumar,
    This is the SQL and PL/SQL forum. If you have a question about SQL Developer, then the SQL Developer forum is a better place to post it. Mark this thread as "Answered" before starting another thread for the same question.
    If SQL Developer has a tool for doing what you want, don't waste your time trying to devise a SQL solution. The SQL Developer way will probably be simpler, more efficient and more reliable.
    If you do need to try a SQL solution, then post some sample data (CREATE TABLE and INSERT statements for a table that resembles all_tab_columns; you can call it my_tab_columns) and the results you want from that data.
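    If a SQL-only approach is still wanted, a sketch along these lines might work (the schema and column names are placeholders):
    -- Tables that have the column in one schema but not in the other.
    select coalesce(a.table_name, b.table_name) as table_name,
           case when a.table_name is null then 'only in SCHEMA_B'
                when b.table_name is null then 'only in SCHEMA_A' end as difference
    from  (select table_name from all_tab_columns
           where owner = 'SCHEMA_A' and column_name = 'MY_COLUMN') a
    full outer join
          (select table_name from all_tab_columns
           where owner = 'SCHEMA_B' and column_name = 'MY_COLUMN') b
      on a.table_name = b.table_name
    where a.table_name is null or b.table_name is null;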

  • MERGE - WHEN MATCHED (for a table with 2 column that are the keys)

    Hi
    I am using merge to insert/update a table whenever new records come in.
    My question is: if the table has 2 columns and both are part of the primary key, do I need the "WHEN MATCHED" clause in the MERGE statement as well? Since both columns are part of the key, WHEN MATCHED won't make sense here; I will only need WHEN NOT MATCHED, then insert.
    Please correct if I am wrong.
    Thx!

    You need to be matching/not matching on the whole primary key. The MATCHED and NOT MATCHED clauses are both required before 10g.
    MERGE INTO table1 loctab
    USING (SELECT colpk1,
                  colpk2,
                  col3,
                  col4
           FROM   table2) remtab
       ON (loctab.colpk1 = remtab.colpk1 AND loctab.colpk2 = remtab.colpk2)
    WHEN MATCHED
    THEN
      UPDATE SET loctab.col3 = remtab.col3,
                 loctab.col4 = remtab.col4
    WHEN NOT MATCHED
    THEN
      INSERT (loctab.colpk1,
              loctab.colpk2,
              loctab.col3,
              loctab.col4)
      VALUES (remtab.colpk1,
              remtab.colpk2,
              remtab.col3,
              remtab.col4);
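    On 10g and later, where the MATCHED branch is optional, the key-only case from the question might look like this sketch (same placeholder names as above):
    -- Every column is part of the key, so there is nothing to update; just insert missing rows.
    MERGE INTO table1 loctab
    USING (SELECT colpk1, colpk2 FROM table2) remtab
       ON (loctab.colpk1 = remtab.colpk1 AND loctab.colpk2 = remtab.colpk2)
    WHEN NOT MATCHED
    THEN
      INSERT (loctab.colpk1, loctab.colpk2)
      VALUES (remtab.colpk1, remtab.colpk2);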

  • Exact space occupied by a table with LOB column

    Hi Gurus,
    I need to check the exact amount of space used (in bytes or MB) by a table which is having a BLOB column.
    I tried the following query but it is not giving the proper usage.
    select segment_name , sum(bytes)
    from dba_extents
    where segment_type='TABLE'
    and segment_name in ('TEST_CLOB','TEST_BLOB','TEST_CLOB_ADV','TEST_BLOB_ADV')
    group by segment_name;
    I even tried the following stored procedure
    create or replace procedure sp_get_table_size (p_table_name varchar2)
    as
        l_segment_name          varchar2(30);
        l_segment_size_blocks   number;
        l_segment_size_bytes    number;
        l_used_blocks           number; 
        l_used_bytes            number; 
        l_expired_blocks        number; 
        l_expired_bytes         number; 
        l_unexpired_blocks      number; 
        l_unexpired_bytes       number; 
    begin
        select segment_name
        into l_segment_name
        from dba_lobs
        where table_name = p_table_name;
        dbms_output.put_line('Segment Name=' || l_segment_name);
        dbms_space.space_usage(
            segment_owner           => 'dldbo', 
            segment_name            => l_segment_name,
            segment_type            => 'LOB',
            partition_name          => NULL,
            segment_size_blocks     => l_segment_size_blocks,
            segment_size_bytes      => l_segment_size_bytes,
            used_blocks             => l_used_blocks,
            used_bytes              => l_used_bytes,
            expired_blocks          => l_expired_blocks,
            expired_bytes           => l_expired_bytes,
            unexpired_blocks        => l_unexpired_blocks,
            unexpired_bytes         => l_unexpired_bytes
        );
        dbms_output.put_line('segment_size_blocks       => '||  l_segment_size_blocks);
        dbms_output.put_line('segment_size_bytes        => '||  l_segment_size_bytes);
        dbms_output.put_line('used_blocks               => '||  l_used_blocks);
        dbms_output.put_line('used_bytes                => '||  l_used_bytes);
        dbms_output.put_line('expired_blocks            => '||  l_expired_blocks);
        dbms_output.put_line('expired_bytes             => '||  l_expired_bytes);
        dbms_output.put_line('unexpired_blocks          => '||  l_unexpired_blocks);
        dbms_output.put_line('unexpired_bytes           => '||  l_unexpired_bytes);
    end sp_get_table_size;
    But it is giving the error
    Error starting at line 298 in command:
    exec sp_get_table_size ('TEST_CLOB_ADV')
    Error report:
    ORA-03213: Invalid Lob Segment Name for DBMS_SPACE package
    ORA-06512: at "SYS.DBMS_SPACE", line 210
    ORA-06512: at "SYS.SP_GET_TABLE_SIZE", line 20
    ORA-06512: at line 1
    03213. 00000 -  "Invalid Lob Segment Name for DBMS_SPACE package"
    *Cause:    The Lob Segment specified in the DBMS_SPACE operation does not
               exist.
    *Action:   Fix the Segment Specification
    This happens although the LOB storage clause is specified in the CREATE TABLE syntax.
    Please help.
    Thanks
    Amitava.

    Here is a query that I use to associate space from all segments under a schema (LOBs, indexes, etc.) back to the tables they support:
    with schema as (select '&your_schema' as name from dual),
    segs_by_table as (
      select 'TABLE' seg_supertype, seg.segment_name as related_table, seg.*
      from   dba_segments seg, schema
      where  seg.owner = schema.name
      and    seg.segment_type in ('TABLE', 'TABLE PARTITION', 'TABLE SUBPARTITION')
      union all
      select 'INDEX' seg_supertype, i.table_name as related_table, seg.*
      from   dba_segments seg, dba_indexes i, schema
      where  seg.owner = schema.name
      and    seg.segment_type in ('INDEX', 'INDEX PARTITION', 'INDEX SUBPARTITION')
      and    seg.owner = i.owner and seg.segment_name = i.index_name
      union all
      select 'LOB' seg_supertype, lob.table_name as related_table, seg.*
      from   dba_segments seg, dba_lobs lob, schema
      where  seg.owner = schema.name
      and    seg.segment_type in ('LOBSEGMENT', 'LOB PARTITION')
      and    seg.owner = lob.owner and seg.segment_name = lob.segment_name
      union all
      select 'LOB' seg_supertype, lob.table_name as related_table, seg.*
      from   dba_segments seg, dba_lobs lob, schema
      where  seg.owner = schema.name
      and    seg.segment_type in ('LOBINDEX')
      and    seg.owner = lob.owner and seg.segment_name = lob.index_name
    )
    select owner, related_table, segment_type, segment_name, partition_name, blocks
    from   segs_by_table, schema
    where  owner = schema.name
    --and  related_table like 'BLOB_TEST%'
    order by owner, related_table, segment_type, segment_name;
    You can revise as needed to limit to a single table, to summarize, etc.
    Mike

  • Inserting and retrieving Word docs from a custom table with LOB column

    Hi,
    I'm sure I'm not looking around in the right place for some docs on this.
    Basically I need to build a custom interface in Web PL/SQL to allow an MS Word doc to be uploaded from a user's browser, and also a mechanism to retrieve the doc.
    I'm not sure what the 'best' column datatype to use for this is, either.
    Portal Forms are out of the question due to inflexibility of application.
    Cheers,
    John

    bumping to the top!
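    For the datatype, a BLOB column is the usual choice for Word documents. As a starting point for the retrieval side, here is a hedged sketch using the PL/SQL Web Toolkit (table, column and procedure names are placeholders):
    -- Stream a stored Word document back to the browser.
    create or replace procedure get_doc (p_id in number) as
      l_blob blob;
    begin
      select doc_content into l_blob
      from   doc_table
      where  id = p_id;
      owa_util.mime_header('application/msword', false);
      htp.p('Content-Length: ' || dbms_lob.getlength(l_blob));
      owa_util.http_header_close;
      wpg_docload.download_file(l_blob);
    end get_doc;
    /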

  • Find size of table with XMLTYPE column STORE AS BINARY XML

    Hi,
    I have a table with structure as:
    CREATE TABLE XML_TABLE_1
    (
      ID            NUMBER NOT NULL,
      SOURCE        VARCHAR2(255 CHAR) NOT NULL,
      XML_TEXT      SYS.XMLTYPE,
      CREATION_DATE TIMESTAMP(6) NOT NULL
    )
    XMLTYPE XML_TEXT STORE AS BINARY XML (
      TABLESPACE Tablespace1_LOB
      DISABLE STORAGE IN ROW
      CHUNK 16384
      RETENTION
      CACHE READS
      NOLOGGING
    )
    ALLOW NONSCHEMA
    DISALLOW ANYSCHEMA
    TABLESPACE Tablespace2_DATA;
    So HOW do I find the total size occupied by this table? Does BINARY storage work as LOB storage, i.e. do I need to consider USER_LOBS as well for this?
    Or will the following work:
    select segment_name as tablename, sum(bytes/ (1024 * 1024 * 1024 )) as tablesize_in_GB
    From dba_segments
    where segment_name = 'XML_TABLE_1'
    and OWNER = 'SCHEMANAME'
    group by segment_name ;
    Also, if I am copying it to another table of the same structure, as in:
    Insert /*+ append */ into XML_TABLE_2 Select * from XML_TABLE_1;
    then how much space do I need in the rollback segment? Is it equal to the size of table XML_TABLE_1?
    Thanks..

    I think the following query calculates it right, including the LOB storage:
    SELECT SUM(bytes)/1024/1024/1024 gb
    FROM dba_segments
    WHERE (owner = 'SCHEMA_NAME' and
    segment_name = 'TABLE_NAME')
    OR (owner, segment_name) IN (
    SELECT owner, segment_name
    FROM dba_lobs
    WHERE owner = 'SCHEMA_NAME'
    AND table_name = 'TABLE_NAME')
    It's 80 GB for our Table with XMLType Column.
    But for the second point:
    Do we need 80GB of UNDO/ROLLBACK Segment for performing:
    Insert /*+ append */ into TableName1 Select * from TableName;
    Thanks..
