Calculate row size of table with CLOB

Hi
I have a table which has a CLOB column; how can I calculate the size of each row?

Mark D Powell wrote:
What version of Oracle?
Are you interested in the combined row size of the base table row and the Lob segment or just the row size of the base table row? Or just the lob segment size?
Are you asking about a specific row or just the average for the entire table?
Depending on your version of Oracle, dbms_stats may calculate the avg_row_len for the base table incorrectly when LOB columns are present.
HTH -- Mark D Powell --

I need the row size for every record in the table. The table itself is only 1GB. It can be the combined size or the LOB size for each record; it doesn't matter.
v11.2
thanks
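
A possible starting point (a sketch only; MY_TABLE, SCALAR_COL1/SCALAR_COL2, and CLOB_COL are placeholder names for your own columns): VSIZE reports the stored size of scalar columns and DBMS_LOB.GETLENGTH the CLOB length. Note that GETLENGTH returns characters for a CLOB, which can understate the stored bytes in a multi-byte character set:
SELECT rowid,
         NVL(VSIZE(scalar_col1), 0)
       + NVL(VSIZE(scalar_col2), 0)
       + NVL(DBMS_LOB.GETLENGTH(clob_col), 0) AS approx_row_bytes
FROM   my_table
ORDER  BY approx_row_bytes DESC;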

Similar Messages

  • Understanding logminer results -- inserting row into table with CLOB field

    In using LogMiner I have noticed that inserts into rows that contain a CLOB field (I assume this applies to other LOB-type fields as well; I have only tested with CLOB so far) are actually recorded as two DML entries:
    --the first entry is the insert operation, which inserts all values with an EMPTY_CLOB() for the CLOB field
    --the second entry is the update that sets the actual CLOB value (this is true even if the value of the CLOB field is not being set explicitly)
    This separation makes sense, as the values may be stored in separate locations.
    However, what I am tripping over is the fact that the first entry, the INSERT, has a RowId value of 'AAAAAAAAAAAAAAAAAA', which is invalid if I attempt to use it in a flashback query such as:
    SELECT * FROM PERSON AS OF SCN ##### WHERE RowId = 'AAAAAAAAAAAAAAAAAA'
    The second operation, the UPDATE of the CLOB field, has the valid RowId.
    Now, again, this makes sense if the insert of the new row is not really considered "done" until both steps are complete. However, is there some way to group these operations together when analyzing the log contents, to know that these two operations are a "matched set"?
    Not a total deal breaker, but it would be nice to know what is happening under the hood here so I don't act on any false assumptions.
    Thanks for any input.
    To replicate:
    Create a table with a CLOB field:
    CREATE TABLE DEVUSER.TESTTABLE (
            ID NUMBER
          , FULLNAME VARCHAR2(50)
          , AGE NUMBER
          , DESCRIPTION CLOB
          );
    Capture the before SCN:
    SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
    Insert a new row in the test table:
    INSERT INTO TESTTABLE(ID,FULLNAME,AGE) VALUES(1,'Robert BUILDER',35);
         COMMIT;
    Capture the after SCN:
    SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
    Start the LogMiner session with the bracketing SCN values and options:
    EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTSCN=>2619174, ENDSCN=>2619191, -
               OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE + -
               DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.NO_ROWID_IN_STMT + DBMS_LOGMNR.NO_SQL_DELIMITER)
    Query the logs for the changes in that range:
    SELECT
           commit_scn, xid,operation,table_name,row_id
           ,sql_redo,sql_undo, rs_id,ssn
           FROM V$LOGMNR_CONTENTS
        ORDER BY xid asc, sequence# asc
    Results:
    2619178     0C00070028000000     START                  AAAAAAAAAAAAAAAAAA     set transaction read write
    2619178     0C00070028000000     INSERT     TESTTABLE     AAAAAAAAAAAAAAAAAA     insert into "DEVUSER"."TESTTABLE" ...
    2619178     0C00070028000000     UPDATE     TESTTABLE     AAAFEXAABAAALEJAAB     update "DEVUSER"."TESTTABLE" set "DESCRIPTION" = NULL ...
    2619178     0C00070028000000     COMMIT                  AAAAAAAAAAAAAAAAAA     commit
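    One way to stitch the pair back together (a sketch, my addition; it assumes, as in this test, that each transaction touches a single row) is to group on XID and project the valid RowId across the set, since the trailing CLOB UPDATE carries it while the INSERT shows the all-'A' placeholder:
    SELECT xid, sequence#, operation, table_name,
           MAX(CASE WHEN row_id <> 'AAAAAAAAAAAAAAAAAA' THEN row_id END)
             OVER (PARTITION BY xid) AS resolved_row_id,
           sql_redo
    FROM   v$logmnr_contents
    WHERE  seg_owner = 'DEVUSER' AND table_name = 'TESTTABLE'
    ORDER  BY xid, sequence#;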

    Scott,
    Thanks for the reply.
    I am inserting into the table over a database link.
    I am using the new version of HTML Db (2.0).
    HTML Db is connected to an Oracle 10 database, I think; however, the table I am trying to insert data into (via the database link) is in an Oracle 8 database. This is why we created a link to it: we couldn't have HTML Db interact with the Oracle 8 database directly due to compatibility problems (or so I've been told).
    Simon

  • Error inserting a row into a table with identity column using cfgrid on change

    I got an error trying to insert a row into a table with an identity column using cfgrid onchange; see below.
    Also, I would like to use cfstoreproc instead of cfquery, but which arguments do I need to pass and how do I use it? Usually I use a stored procedure like:
    update table (xxx,xxx,xxx)
    values (uu,uuu,uu)
         My component
    <!--- Edit a Media Type  --->
        <cffunction name="cfn_MediaType_Update" access="remote">
            <cfargument name="gridaction" type="string" required="yes">
            <cfargument name="gridrow" type="struct" required="yes">
            <cfargument name="gridchanged" type="struct" required="yes">
            <!--- Local variables --->
            <cfset var colname="">
            <cfset var value="">
            <!--- Process gridaction --->
            <cfswitch expression="#ARGUMENTS.gridaction#">
                <!--- Process updates --->
                <cfcase value="U">
                    <!--- Get column name and value --->
                    <cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
                    <cfset value=ARGUMENTS.gridchanged[colname]>
                    <!--- Perform actual update --->
                    <cfquery datasource="#application.dsn#">
                    UPDATE SP.MediaType
                    SET #colname# = '#value#'
                    WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
                    </cfquery>
                </cfcase>
                <!--- Process deletes --->
                <cfcase value="D">
                    <!--- Perform actual delete --->
                    <cfquery datasource="#application.dsn#">
                    update SP.MediaType
                    set Deleted=1
                    WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
                    </cfquery>
                </cfcase>
                <cfcase value="I">
                    <!--- Get column name and value --->
                    <cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
                    <cfset value=ARGUMENTS.gridchanged[colname]>
                    <!--- Perform actual update --->
                   <cfquery datasource="#application.dsn#">
                    insert into  SP.MediaType (#colname#)
                    Values ('#value#')              
                    </cfquery>
                </cfcase>
            </cfswitch>
        </cffunction>
    my table
    mediatype:
    mediatypeid primary key,identity
    mediatypename
    my code is
    <cfform method="post" name="GridExampleForm">
            <cfgrid format="html" name="grid_Tables2" pagesize="3"  selectmode="edit" width="800px" 
            delete="yes"
            insert="yes"
                  bind="cfc:sp3.testing.MediaType.cfn_MediaType_All
                                                                ({cfgridpage},{cfgridpagesize},{cfgridsortcolumn},{cfgridsortdirection})"
                  onchange="cfc:sp3.testing.MediaType.cfn_MediaType_Update({cfgridaction},
                                                {cfgridrow},
                                                {cfgridchanged})">
                <cfgridcolumn name="MediaTypeID" header="ID"  display="no"/>
                <cfgridcolumn name="MediaTypeName" header="Media Type" />
            </cfgrid>
    </cfform>
    On insert I get the following Ajax logging error message:
    http: Error invoking xxxxxxx/MediaType.cfc : Element '' is undefined in a CFML structure referenced as part of an expression.
    {"gridaction":"I","gridrow":{"MEDIATYPEID":"","MEDIATYPENAME":"uuuuuu","CFGRIDROWINDEX":4} ,"gridchanged":{}}
    Thanks

    Is this with the Travel database or another database?
    If it's another database, then make sure your columns allow nulls. To check this in the Server Navigator, expand your DataSource down to the column, select the column, and view the Is Nullable property in the Property Sheet.
    If still no luck, check out a tutorial, like Performing Inserts, ...
    http://developers.sun.com/prodtech/javatools/jscreator/learning/tutorials/index.jsp
    John

  • Import of Tables with CLOB problem.

    Hi,
    I am having problems with the import of tables with CLOB columns.
    Tables with CLOB columns force the use of the tablespace in which the table was originally created; the table will not be created in the new schema's default tablespace. Is there a way to get around this problem?
    More on this problem can be found here :
    Re: Import error 1647, CLOB fields problem, need help!
    I am using Oracle 9.2.0.6 standard edition, that is why the transportable tablespace option will not work.
    Thanx
    Rob
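    A sketch of the usual workaround (table, schema, and tablespace names below are placeholders, not from the thread): pre-create the table in the target schema with the LOB explicitly directed to the desired tablespace, then run imp with IGNORE=Y so the rows load into the existing table instead of imp trying to recreate it:
    CREATE TABLE newuser.my_clob_table (
      id   NUMBER,
      doc  CLOB
    )
    TABLESPACE new_data_ts
    LOB (doc) STORE AS (TABLESPACE new_data_ts);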

    Sorry I have posted the wrong link, it should be:
    Re: Import error 1647, CLOB fields problem, need help!
    Regards
    Rob

  • Exporting Table with CLOB Columns

    Hello All,
    I am trying to export a table with CLOB columns, with no luck. It fails saying EXP-00011: table does not exist.
    I can query the table, and the owner is the same as the schema I am exporting from.
    Please let me know.

    An 8.0.6 client definitely changes things. Other posters have already posted links to information on what versions of exp and imp can be used to move data between versions.
    I will just add that if you were using a client to do the export: if the client version is less than the target database version, you can upgrade the client, or better yet, if possible, use the target database's export utility to perform the export.
    I will not criticize the existence of an 8.0.6 system, as we had a parent company dump a brand new 8.0.3 application on us less than two years ago. We have since been allowed to update the database and Pro*C modules to 9.2.0.6.
    If the target database is really 8.0.3, then I suggest you consider using dbms_metadata to generate the DDL, if needed, and SQL*Plus to extract the data into delimited files that you can then reload via sqlldr. This would allow you to move the data with some potential adjustments for any 10g-only features in the code.
    HTH -- Mark D Powell --
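    For the dbms_metadata route Mark mentions, a minimal sketch (owner and table names are placeholders), run from SQL*Plus against the source database:
    SET LONG 100000 LINESIZE 200 PAGESIZE 0
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'MY_TABLE', 'MY_OWNER') FROM DUAL;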

  • How to compare two rows from two table with different data

    how to compare two rows from two tables with different data
    e.g.
    Table 1
    ID   DESC
    1     aaa
    2     bbb
    3     ccc
    Table 2
    ID   DESC
    1     aaa
    2     xxx
    3     ccc
    Result
    2

    Create table tab1 (ID int, DE char(10))
    Create table tab2 (ID int, DE char(10))

    Insert into tab1 Values (1,'aaa')
    Insert into tab1 Values (2,'bbb')
    Insert into tab1 Values (3,'ccc')
    Insert into tab1 Values (4,'dfe')

    Insert into tab2 Values (1,'aaa')
    Insert into tab2 Values (2,'xx')
    Insert into tab2 Values (3,'ccc')
    Insert into tab2 Values (6,'wdr')

    SELECT tab1.ID, tab2.ID As T2
    FROM tab1
    FULL join tab2 on tab1.ID = tab2.ID
    WHERE BINARY_CHECKSUM(tab1.ID, tab1.DE) <> BINARY_CHECKSUM(tab2.ID, tab2.DE)
       OR tab1.ID IS NULL
       OR tab2.ID IS NULL
    The ID column is considered the primary key.
    Apart from the differing records, the above query also reports the records missing from either table.
    Result Set
    ID ID 
    2  2
    4 NULL
    NULL 6
    ganeshk
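    Since BINARY_CHECKSUM above is SQL Server syntax, an Oracle equivalent could use MINUS instead (a sketch against the same sample tables):
    (SELECT id, de FROM tab1
     MINUS
     SELECT id, de FROM tab2)
    UNION ALL
    (SELECT id, de FROM tab2
     MINUS
     SELECT id, de FROM tab1);
    Each half returns the rows present (or different) on one side only, so the differing ID 2 shows up along with the unmatched IDs 4 and 6.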

  • Calculate size of table with BLOB column

    Hello,
    when trying to calculate the occupied space for a table, I'm using DBA_SEGMENTS, which works fine as long as the table does not have a BLOB column.
    As far as I can tell, the size of the BLOB data is stored with the SEGMENT_TYPE = 'LOBSEGMENT', but I cannot find a view that tells me which DBA_SEGMENT row belongs to the BLOB column in the table I'm checking.
    To give you an example:
    select sum(BYTES)
    from DBA_SEGMENTS
    where owner = user
    and segment_name = 'MY_TABLE'
    group by SEGMENT_NAME
    returns 262144
    running:
    SELECT sum(length(blob_column))
    FROM my_table
    returns 821333
    There are entries in DBA_SEGMENTS for my user with the type LOBSEGMENT, but I cannot find a way to map the correct DBA_SEGMENTS row to the table I am checking.
    Any ideas?
    I'm using Oracle 10.2.0.3.0 on Redhat
    Thanks in advance
    Thomas

    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/statviews_1092.htm#i1581211
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:266215435203#993030200346552097
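    The mapping the poster is looking for is in DBA_LOBS (the first link above): it ties each LOB column to its LOB segment and LOB index. A sketch combining it with DBA_SEGMENTS (the table name is a placeholder):
    SELECT SUM(s.bytes) AS total_bytes
    FROM   dba_segments s
    WHERE  s.owner = USER
    AND   (s.segment_name = 'MY_TABLE'
           OR (s.owner, s.segment_name) IN
              (SELECT l.owner, l.segment_name FROM dba_lobs l
               WHERE  l.owner = USER AND l.table_name = 'MY_TABLE')
           OR (s.owner, s.segment_name) IN
              (SELECT l.owner, l.index_name FROM dba_lobs l
               WHERE  l.owner = USER AND l.table_name = 'MY_TABLE'));
    Also note that SUM(LENGTH(blob_column)) measures the data itself, while DBA_SEGMENTS reports allocated space (chunks, extents, retention), so the two figures will legitimately differ.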

  • Find size of table with XMLTYPE column STORE AS BINARY XML

    Hi,
    I have a table with structure as:
    CREATE TABLE XML_TABLE_1 (
      ID NUMBER NOT NULL,
      SOURCE VARCHAR2(255 CHAR) NOT NULL,
      XML_TEXT SYS.XMLTYPE,
      CREATION_DATE TIMESTAMP(6) NOT NULL
    )
    XMLTYPE XML_TEXT STORE AS BINARY XML (
      TABLESPACE Tablespace1_LOB
      DISABLE STORAGE IN ROW
      CHUNK 16384
      RETENTION
      CACHE READS
      NOLOGGING
    )
    ALLOW NONSCHEMA
    DISALLOW ANYSCHEMA
    TABLESPACE Tablespace2_DATA;
    - So HOW do I find the total size occupied by this table? Does BINARY XML storage work like LOB storage, i.e. do I need to consider USER_LOBS as well for this?
    Or will the following work:
    select segment_name as tablename, sum(bytes/ (1024 * 1024 * 1024 )) as tablesize_in_GB
    From dba_segments
    where segment_name = 'XML_TABLE_1'
    and OWNER = 'SCHEMANAME'
    group by segment_name ;
    - Also, if I am copying it to another table of the same structure with:
    Insert /*+ append */ into XML_TABLE_2 Select * from XML_TABLE_1;
    then how much space in the rollback segment do I need? Is it equal to the size of table XML_TABLE_1?
    Thanks..

    I think the following query calculates it right, including the LOB storage:
    SELECT SUM(bytes)/1024/1024/1024 gb
    FROM dba_segments
    WHERE (owner = 'SCHEMA_NAME' and
    segment_name = 'TABLE_NAME')
    OR (owner, segment_name) IN (
    SELECT owner, segment_name
    FROM dba_lobs
    WHERE owner = 'SCHEMA_NAME'
    AND table_name = 'TABLE_NAME')
    It's 80 GB for our Table with XMLType Column.
    But for the second point:
    Do we need 80GB of UNDO/ROLLBACK Segment for performing:
    Insert /*+ append */ into TableName1 Select * from TableName;
    Thanks..
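    On the undo question (my note, not from the thread): a direct-path INSERT /*+ append */ writes above the high-water mark and generates only minimal undo for the table data itself, though index maintenance still generates undo, so nowhere near the full table size should be needed. One way to see what a transaction actually consumes, run from the inserting session before committing (a sketch):
    SELECT t.used_ublk AS undo_blocks, t.used_urec AS undo_records
    FROM   v$transaction t
    JOIN   v$session s ON s.taddr = t.addr
    WHERE  s.sid = SYS_CONTEXT('USERENV', 'SID');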

  • How to find row size of table

    Hi,
    How can I find the row size of a table?
    My table has millions of rows inserted.
    thanks in advance.

    Here is an approach to calculate the average row length. It works, but with one caveat: you cannot use VSIZE on BLOB data columns, so use dbms_lob.getlength(BLOB_COLUMN) instead to get an accurate average row length for rows with a BLOB column.
    To find the actual size of a row I did this:
    /* TABLE */
    select 3 + avg(  nvl(dbms_lob.getlength(CASE_DATA), 0) + 1
                   + nvl(vsize(CASE_NUMBER),    0) + 1
                   + nvl(vsize(CASE_DATA_NAME), 0) + 1
                   + nvl(vsize(LASTMOD_TIME_T), 0) + 1
                  ) "Total bytes per row"
    from arch_case_data
    where case_number = 301;

    Total bytes per row
                   3424
    /* INDEX */
    select sum(COLUMN_LENGTH)
    from dba_ind_columns
    where TABLE_NAME = 'ARCH_CASE_DATA';
    SUM(COLUMN_LENGTH)
    22
    So, the total bytes used is 3424+22.
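    If statistics are reasonably current, a quicker approximation (a sketch) is the optimizer's own figure, with the caveat noted above that it may not account for out-of-line LOB data:
    SELECT table_name, num_rows, avg_row_len
    FROM   user_tables
    WHERE  table_name = 'ARCH_CASE_DATA';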

  • How to check the number of rows in a table with input data?

    Dear all,
    I have a table (declared with 5 rows during init). There is a 'next' button which adds another 5 rows when pressed. Say the user only keys in 7 rows of data: how should I track that? These data act as input to my BAPI function, and one of the input fields of my BAPI states how many rows of data the user keyed in. Thank you.

    I think your problem is empty records: you don't want to process empty table rows.
    Am I right?
    If yes, then just before processing the records, check for empty records and remove the unwanted rows from the node, along these lines:
    for (int i = tableNode.size() - 1; i >= 0; i--) {
        // iterate backwards so removals do not shift the indices still to visit
        if (/* all cells of row i are empty */) {
            tableNode.removeElement(tableNode.getElementAt(i));
        }
    }
    Or, if you don't want to remove empty rows from the display (screen), then at processing time transfer the useful rows to a separate node and process that node.
    Best Regards
    Deepak

  • Deleting a row from a table containing CLOB as one of the columns

    When I delete a row from a table which contains a CLOB (internal LOB) column, will the CLOB data also be deleted? I understand that what is actually stored in the CLOB column is the LOB locator, which points to the actual data.
    So when I delete this row, the locator will be deleted, but will the actual data the locator points to also be deleted? If not, what is the process to delete the data the locator points to when the row containing it is deleted? Otherwise the actual data would become orphaned data that nobody has access to; does automatic garbage collection run at frequent intervals to delete unaddressed data residing on the database server?
    Thanks in advance for the help; you can email me at [email protected] alternatively.
    Regards,
    Srinivasa C.

    Michael,
    Thanks very much for your inputs. Here are the results I got when I tried the approach you explained in your answer. The TRUNCATE command brought the actual size back to normal, but DELETE did not. So how can I delete the data that a particular CLOB locator points to?
    TRUNCATE would delete all the rows of the table, which might not serve my purpose; I would like to delete a single row and also its associated CLOB data from the database. Is there any way to do this?
    Is there any limitation on the ool_sample size? I am basically a C++ programmer, and I am looking for some function like free() which would release the memory allocated to the CLOB once the locator is deleted.
    Your help is greatly appreciated - Thanks!
    :-) Srini.
    ==========================
    My Results:
    ==========================
    SQL> create table sample (
      2    id integer primary key,
      3    the_data CLOB default empty_clob() )
      4    lob (the_data) store as ool_sample;
    Table created.
    SQL> select segment_name, round(sum(bytes)/1024, 2) || 'K' as storage_consumed
      2  from user_segments
      3  where segment_name in ('SAMPLE', 'OOL_SAMPLE')
      4  group by segment_name;
    SEGMENT_NAME    STORAGE_CONSUMED
    OOL_SAMPLE      20K
    SAMPLE          10K
    SQL> select count(*) from sample;
      COUNT(*)
             0
    SQL> begin
      2    for i in 1..1000
      3    loop
      4      insert into sample values (i, RPAD('some data', 4000) );
      5    end loop;
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    SQL> select segment_name, round(sum(bytes)/1024, 2) || 'K' as storage_consumed
      2  from user_segments
      3  where segment_name in ('SAMPLE', 'OOL_SAMPLE')
      4  group by segment_name;
    SEGMENT_NAME    STORAGE_CONSUMED
    OOL_SAMPLE      6420K
    SAMPLE          70K
    SQL> delete sample;
    1000 rows deleted.
    SQL> select segment_name, round(sum(bytes)/1024, 2) || 'K' as storage_consumed
      2  from user_segments
      3  where segment_name in ('SAMPLE', 'OOL_SAMPLE')
      4  group by segment_name;
    SEGMENT_NAME    STORAGE_CONSUMED
    OOL_SAMPLE      6420K
    SAMPLE          70K
    SQL> commit;
    Commit complete.
    SQL> select segment_name, round(sum(bytes)/1024, 2) || 'K' as storage_consumed
      2  from user_segments
      3  where segment_name in ('SAMPLE', 'OOL_SAMPLE')
      4  group by segment_name;
    SEGMENT_NAME    STORAGE_CONSUMED
    OOL_SAMPLE      6420K
    SAMPLE          70K
    SQL> begin
      2    for i in 1..1000
      3    loop
      4      insert into sample values (i, rpad('some data', 4000));
      5    end loop;
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    SQL> select segment_name, round(sum(bytes)/1024, 2) || 'K' as storage_consumed
      2  from user_segments
      3  where segment_name in ('SAMPLE', 'OOL_SAMPLE')
      4  group by segment_name;
    SEGMENT_NAME    STORAGE_CONSUMED
    OOL_SAMPLE      9616K
    SAMPLE          70K
    SQL> truncate table sample;
    Table truncated.
    SQL> select segment_name, round(sum(bytes)/1024, 2) || 'K' as storage_consumed
      2  from user_segments
      3  where segment_name in ('SAMPLE', 'OOL_SAMPLE')
      4  group by segment_name;
    SEGMENT_NAME    STORAGE_CONSUMED
    OOL_SAMPLE      20K
    SAMPLE          10K
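    A follow-up note (mine, not from the thread): the space freed by DELETE stays allocated to the LOB segment for consistent-read purposes (governed by PCTVERSION or RETENTION) and is gradually reused by later inserts, which matches the transcript above: the segment grew only to 9616K on reinsert rather than doubling. On 10g and later, assuming an ASSM tablespace, one way to hand the space back explicitly is:
    ALTER TABLE sample MODIFY LOB (the_data) (SHRINK SPACE);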

  • Oracle 11g imp erroneously tries to recreate existing tables with CLOBs?

    I have a shell script for loading database dumps from both Datapump and the older exp/imp.
    Often when loading dumps, I need to rename the schema owner and tablespace names (which is handled by REMAP_SCHEMA and REMAP_TABLESPACE in Datapump).
    However I have a whole bunch of dumps created with exp at this point and not that many Datapump dumps yet. As such the old style dumps are handled by the shell script in this way:
    1) A first pass imp is run using INDEXFILE to generate a file with the SQL to create tables and indexes. Options also include FROMUSER and TOUSER.
    2) A series of sed command edit the SQL file to change the tablespace names (which are schema owner specific in our case).
    3) The edited SQL file is run with sqlplus to create the tables and indexes.
    4) A second pass imp is run to load the table rows as well as triggers, stored procedures, views, etc. Options include FROMUSER, TOUSER, COMMIT=Y, IGNORE=Y, BUFFER, STATISTICS=NONE, CONSTRAINTS=N
    This shell script has been working great for loading exp dump files into Oracle 9 and Oracle 10 databases, but now that I'm trying to load these dumps into Oracle 11, it fails.
    The problem is in step 4: the imp program is trying to create some of the tables that were already created with sqlplus in step 2. The problematic tables all seem to have CLOB columns in them. The table creation fails because it tries to use the tablespace names from the dump file, which do not exist in the destination database. And when the table creation fails, imp then decides not to load the rows for those tables.
    This seems like a bug in the Oracle 11 imp program. I don't understand why it thinks it needs to recreate tables that already exist when those tables have CLOB columns. Is there something different about CLOB columns in Oracle 11 that I should know about that might be confusing imp into thinking that it needs to create tables when they already exist? Maybe I need to do something to those tables in SQL so that imp does not think it needs to recreate them?
    I know that the tables with the CLOBs were created correctly because I was trying to find some way to work around this. For step 4, I tried using DATA_ONLY=Y, in which case imp does not try to create the tables and just loads the table rows. Of course, using DATA_ONLY I don't get a lot of other things, like triggers, views, and stored procedures. I started to try to get around that by doing 3 passes with imp, so that I could pick up the missing pieces by using an imp pass with ROWS=N, but strangely that has the same problem of trying to recreate the existing tables.

    The only solution I've found so far as a workaround is rather convoluted.
    1. I took an export using datapump's expdp of SCHEMA1 (in 10g it will skip the table with the xmltype).
    2. I imported the data to my empty schema (SCHEMA2) using impdp. To avoid the error that the type already exists with another OID, I used the TRANSFORM=oid:n parameter e.g.
    impdp user/pwd dumpfile=noxmltable.dmp logfile=importallbutxmltable.log remap_schema=SCHEMA1:SCHEMA2 TRANSFORM=oid:n directory=MYDUMPDIR
    3. I then manually created my xmltype table in the SCHEMA2 and did a select into to load it (make sure you have the select privileges to do so):
    INSERT INTO SCHEMA2.XMLTABLE2 SELECT * FROM SCHEMA1.XMLTABLE1;
    4. I am still taking an export with exp of the xmltable as well even though I'm not sure I can do anything with it.
    Thanks!

  • Row chaining in table with more than 255 columns

    Hi,
    I have a table with 1000 columns.
    I saw the following citation: "Any table with more than 255 columns will have chained
    rows (we break really wide tables up)."
    If I insert a row populated with only the first 3 columns (the others are null), is a row chaining occurred?
    I tried to insert a row described above and no row chaining occurred.
    As I understand, a row chaining occurs in a table with 1000 columns only when the populated data increases
    the block size OR when more than 255 columns are populated. Am I right?
    Thanks
    dyahav

    user10952094 wrote:
    Hi,
    I have a table with 1000 columns.
    I saw the following citation: "Any table with more than 255 columns will have chained
    rows (we break really wide tables up)."
    If I insert a row populated with only the first 3 columns (the others are null), is a row chaining occurred?
    I tried to insert a row described above and no row chaining occurred.
    As I understand, a row chaining occurs in a table with 1000 columns only when the populated data increases
    the block size OR when more than 255 columns are populated. Am I right?
    Thanks
    dyahav

    Yesterday, I stated this on the forum: "Tables with more than 255 columns will always have chained rows." My statement needs clarification. It was based on the following:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#i4383
    "Oracle Database can only store 255 columns in a row piece. Thus, if you insert a row into a table that has 1000 columns, then the database creates 4 row pieces, typically chained over multiple blocks."
    And this paraphrase from "Practical Oracle 8i":
    V$SYSSTAT will show increasing values for CONTINUED ROW FETCH as table rows are read for tables containing more than 255 columns.
    Related information may also be found here:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm
    "When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row."
    http://download.oracle.com/docs/html/B14340_01/data.htm
    "For a table with several columns, the key question to consider is the (average) row length, not the number of columns. Having more than 255 columns in a table built with a smaller block size typically results in intrablock chaining.
    Oracle stores multiple row pieces in the same block, but the overhead to maintain the column information is minimal as long as all row pieces fit in a single data block. If the rows don't fit in a single data block, you may consider using a larger database block size (or use multiple block sizes in the same database). "
    Why not a test case?
    Create a test table named T4 with 1000 columns.
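    Since the post doesn't show the 1000-column DDL, here is one way to generate it (a sketch; the VARCHAR2(10) column type is my assumption):
    DECLARE
      v_sql VARCHAR2(32767) := 'CREATE TABLE T4 (COL1 VARCHAR2(10)';
    BEGIN
      FOR i IN 2 .. 1000 LOOP
        v_sql := v_sql || ', COL' || i || ' VARCHAR2(10)';
      END LOOP;
      v_sql := v_sql || ')';
      -- build and execute the 1000-column CREATE TABLE in one shot
      EXECUTE IMMEDIATE v_sql;
    END;
    /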
    With the table created, insert 1,000 rows into the table, populating the first 257 columns each with a random 3 byte string which should result in an average row length of about 771 bytes.
    SPOOL C:\TESTME.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL2,
      COL3,
      /* ...COL4 through COL254 elided in the original post... */
      COL255,
      COL256,
      COL257)
    SELECT
      DBMS_RANDOM.STRING('A',3),
      DBMS_RANDOM.STRING('A',3),
      DBMS_RANDOM.STRING('A',3),
      /* ...matching DBMS_RANDOM.STRING('A',3) expressions elided... */
      DBMS_RANDOM.STRING('A',3),
      DBMS_RANDOM.STRING('A',3),
      DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=1000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT
      *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SPOOL OFF
    What are the results of the above?
    Before the insert:
    NAME                          VALUE
    table fetch continued row       166
    After the insert:
    NAME                          VALUE
    table fetch continued row       166
    After the select:
    NAME                       STATISTIC#      VALUE
    table fetch continued row         252        332
    Another test, this time with an average row length of about 12 bytes:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME2.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL256,
      COL257,
      COL999)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT
      *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SPOOL OFF
    With 100,000 rows each containing about 12 bytes, what should the 'table fetch continued row' statistic show?
    Before the insert:
    NAME                          VALUE
    table fetch continued row       332
    After the insert:
    NAME                          VALUE
    table fetch continued row       332
    After the select:
    NAME                       STATISTIC#      VALUE
    table fetch continued row         252      33695
    The final test only inserts data into the first 4 columns:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME3.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL2,
      COL3,
      COL4)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT
      *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SPOOL OFF
    What should the 'table fetch continued row' statistic show?
    Before the insert:
    NAME                          VALUE
    table fetch continued row     33695
    After the insert:
    NAME                          VALUE
    table fetch continued row     33695
    After the select:
    NAME                       STATISTIC#      VALUE
    table fetch continued row         252      33695
    My statement "Tables with more than 255 columns will always have chained rows." needs to be clarified:
    "Tables with more than 255 columns will always have chained rows (row pieces) if a column beyond column 255 is used, but the 'table fetch continued row' statistic may only increase in value if the remaining row pieces are found in a different block."
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • How to get all rows of a table with a BAPI

    Hi,
    how is it possible to get more than one row by calling a BAPI from Web Dynpro? In my application I need the rows of a table coming from the R/3 system. How is it possible to get all the rows after the first call? What is the logic behind it? My purpose is also to create my own BAPI.
    regards,
    Sharam

    Hi,
    If I understand, you don't want to display the result in a Web Dynpro table. If so: after the execution, the result of your request is stored in the context, so you don't really need to transfer the data from your context to a Java array.
    But if you want to do it, here is the code (guessing your result node is called nodeResult):
    Vector myVector = new Vector();
    for (int i = 0; i < wdContext.nodeResult().size(); i++) {
        myVector.add(wdContext.nodeResult().getElementAt(i));
    }
    I hope this answers your question.
    Regards

  • How to get current row of Advanced table with no submit

    I have an advanced table with an 'Add Another' button. When an empty row is created by clicking this button, an LOV on the first column populates the other fields of the row. The rows of the table carry a button called "Add dependencies" whose function is to take the user to a different table to add rows to a different VO. This button is a plain button, not a submitButton. I am passing the row's primary key as part of the URI, as the VO attribute FdTaskId={@FdTaskId}.
    The button works perfectly for rows already present in the database, but for rows created using the 'Add Another' button, though the LOV populates all fields including the primary key, the URI parameter passes null to the next page. That means the VO attribute is not set. It works only after a 'submit' happens on the first page. For example, if I click 'Add Another' again, then the previously inserted row passes the primary key fine as part of {@FdTaskId}.
    Could you please help me how I can resolve this issue?
    Thanks,
    Datta

    I am doing exactly that. My issue is that the VO attribute is passed correctly for the table rows fetched from the database. For the rows created through the 'Add Another' button, the VO attribute passed through the URI as {@FdTaskId} returns null. The VO attribute is set only when a form submit happens. For example, after adding a row through the 'Add Another' button, click the same button again to add one more row, then go back to the previously added row: now you can see that its VO attribute is set ({@FdTaskId} carries a value).
