Handling an internal table with more than 1 million records

Hi All,
We are facing a dump: storage parameters wrongly set.
Basically, the dump occurs because an internal table holds more than 1 million records. We have increased the storage parameter size from 512 to 2048, but the dump still happens.
Please advise whether there is any other way to handle internal tables of this size.
P.S. We have tried the option of using a hashed table; this does not suit our scenario.
Thanks and Regards,
Vijay

Hi,
Your problem can be solved by populating the internal table in chunks; for that you have to use the database cursor concept.
Hope this code helps.
DATA: G_PACKAGE_SIZE TYPE I VALUE 50000,
      DB_CURSOR      TYPE CURSOR,
      IT_ZTABLE      TYPE STANDARD TABLE OF ZTABLE.

* Using a DB cursor to fetch the data in packages.
OPEN CURSOR WITH HOLD DB_CURSOR FOR
  SELECT *
    FROM ZTABLE.

DO.
  FETCH NEXT CURSOR DB_CURSOR
    INTO CORRESPONDING FIELDS OF TABLE IT_ZTABLE
    PACKAGE SIZE G_PACKAGE_SIZE.
  IF SY-SUBRC NE 0.
    CLOSE CURSOR DB_CURSOR.
    EXIT.
  ENDIF.
* Process the current package here; each FETCH overwrites IT_ZTABLE.
ENDDO.

Similar Messages

  • How to read an internal table with more than one (2 or 3) key fields

    How do I read an internal table with more than one (2 or 3) key fields in ECC 6.0?

    Hi,
    Check this:
    report.
    tables: marc,mard.
    data: begin of itab occurs 0,
          matnr like marc-matnr,
          werks like marc-werks,
          pstat like marc-pstat,
          end of itab.
    data: begin of itab1 occurs 0,
          matnr like mard-matnr,
          werks like mard-werks,
          lgort like mard-lgort,
          end of itab1.
    parameters:p_matnr like marc-matnr.
    select matnr
           werks
           pstat
           from marc
           into table itab
           where matnr = p_matnr.
    sort itab by matnr werks.
    select matnr
           werks
           lgort
           from mard
           into table itab1
           for all entries in itab
           where matnr = itab-matnr
           and werks = itab-werks.
    sort itab1 by matnr werks.
    loop at itab.
      read table itab1 with key matnr = itab-matnr
                                werks = itab-werks
                                binary search. " itab1 is sorted, so binary search applies
    endloop.
    regards,
    venkat.

  • Rendering a table with more than one record per "row"

    Ok, it's like this: I have a collection that I want to render as an ADF Table with a single column, except I want to render multiple entries of the collection per row of the table,
    i.e. a normal ADF table would be like:
    [Column Header]
    [Row1.TextField]
    [Row2.TextField]
    [Row3.TextField]
    [Row4.TextField]
    [Row5.TextField]
    I would like to render it something like:
    [Column Header]
    [Row1.TextField] [Row2.TextField] [Row3.TextField]
    [Row4.TextField] [Row5.TextField]
    That is, I want to add each record's text field horizontally first before adding a new row vertically.
    Any way to do this with the ADF Table? Or is there another component I should use?

    Hi,
    you can try the "af:iterator" component.
    Example:
    <af:panelGroupLayout id="pgl4" layout="horizontal">
                    <af:iterator id="i1" value="#{bindings.MyTree.collectionModel}"
                                 var="row" varStatus="vs">
                      <af:panelGroupLayout id="pgl5"
                                           rendered="#{(vs.index mod 3) eq 1}"
                                           inlineStyle="width:100px;"
                                           layout="horizontal">
                        <af:outputText value="#{row.CityCode}" id="ot2"/>
                      </af:panelGroupLayout>
                      <af:panelGroupLayout id="pgl6"
                                           rendered="#{(vs.index mod 3) eq 2}"
                                           inlineStyle="width:100px;"
                                           layout="horizontal">
                        <af:outputText value="#{row.CityCode}" id="outputText3"/>
                      </af:panelGroupLayout>
                      <af:panelGroupLayout id="pgl7"
                                           rendered="#{(vs.index mod 3) eq 0}"
                                           inlineStyle="width:100px;"
                                           layout="horizontal">
                        <af:outputText value="#{row.CityCode}" id="outputText4"/>
                      </af:panelGroupLayout>
                      <af:outputText value=" &lt;tr>" id="ot3" escape="false"
                                     rendered="#{(vs.index mod 3) eq 2}"/>
                    </af:iterator>
    </af:panelGroupLayout>
    Regards
    Nicolas

  • Analyse a partitioned table with more than 50 million rows

    Hi,
    I have a partitioned table with more than 50 million rows. The last analyse was on 1/25/2007. Do I need to analyse it again? (Queries on this table run very slowly.)
    If I need to analyse it, what is the best way? Use DBMS_STATS and schedule a job?
    Thanks

    A partitioned table has global statistics as well as partition (and subpartition if the table is subpartitioned) statistics. My guess is that you mean to say that the last time that global statistics were gathered was in 2007. Is that guess accurate? Are the partition-level statistics more recent?
    Do any of your queries actually use global statistics? Or would you expect that every query involving this table would specify one or more values for the partitioning key and thus force partition pruning to take place? If all your queries are doing partition pruning, global statistics are irrelevant, so it doesn't matter how old and out of date they are.
    Are you seeing any performance problems that are potentially attributable to stale statistics on this table? If you're not seeing any performance problems, leaving the statistics well enough alone may be the most prudent course of action. Gathering statistics would only have the potential to change query plans. And since the cost of a query plan regressing is orders of magnitude greater than the benefit of a different query performing faster (at least for most queries in most systems), the balance of risks would argue for leaving the stats alone if there is no problem you're trying to solve.
    If your system does actually use global statistics and there are performance problems that you believe are potentially attributable to stale global statistics and your partition level statistics are accurate, you can gather just global statistics on the table probably with a reasonably small sample size. Make sure, though, that you back up your existing statistics just in case a query plan goes south. Ideally, you'd also have a test environment with identical (or nearly identical) data volumes that you could use to verify that gathering statistics doesn't cause any problems.
    Justin
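    For reference, a minimal sketch of the backup-then-gather-global approach described above (owner SCOTT, table BIG_PART_TAB and stat table STATS_BACKUP are placeholders; adjust estimate_percent to your data):
    -- Back up the current statistics so a regressed plan can be rolled back.
    BEGIN
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'STATS_BACKUP');
      DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'SCOTT', tabname => 'BIG_PART_TAB',
                                    stattab => 'STATS_BACKUP');
      -- Gather only global (table-level) statistics; partition statistics are left alone.
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => 'SCOTT',
                                    tabname          => 'BIG_PART_TAB',
                                    granularity      => 'GLOBAL',
                                    estimate_percent => 1,
                                    cascade          => FALSE);
    END;
    /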

  • General Scenario - Adding columns into a table with more than 100 million rows

    I was asked about the following scenario: what issues do you encounter when you try to add new columns to a table with more than 200 million rows, and how do you overcome them?
    Thanks in advance.
    svk

    For such a large table, it is better to add the new column at the end of the table to avoid any performance impact, as RSingh suggested.
    Also avoid using any default on the newly added column, or SQL Server will have to fill 200 million fields with this default value. If you need one, add an empty column and update it in small batches (otherwise you lock up the whole table). Add the default after all the rows have a value for the new column.
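    A minimal sketch of that batched backfill, assuming a hypothetical table dbo.BigTable, new column NewCol and default value 0:
    -- Backfill in small batches to keep each transaction (and its locks) short.
    DECLARE @batch INT = 10000;
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (@batch) dbo.BigTable
        SET    NewCol = 0                -- the value the default would have supplied
        WHERE  NewCol IS NULL;
        IF @@ROWCOUNT = 0 BREAK;         -- nothing left to backfill
    END;
    -- Only now attach the default, so it applies to future inserts only.
    ALTER TABLE dbo.BigTable
      ADD CONSTRAINT DF_BigTable_NewCol DEFAULT 0 FOR NewCol;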

  • Spatial index creation for table with more than one geometry columns?

    I have a table with more than one geometry column.
    I've added a record to user_sdo_geom_metadata for every geometry column in the table.
    When I try to create spatial indexes on the geometry columns, I get this error message:
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-13203: failed to read USER_SDO_GEOM_METADATA table
    ORA-13203: failed to read USER_SDO_GEOM_METADATA table
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 8
    ORA-06512: at line 1
    What is the solution?

    I had errors in my user_sdo_geom_metadata.
    The problem no longer exists!
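    For anyone hitting ORA-13203 on a multi-geometry table, a quick sanity check on the metadata (the table name is a placeholder; names must match in upper case, with one row per geometry column):
    SELECT table_name, column_name, srid
    FROM   user_sdo_geom_metadata
    WHERE  table_name = 'MYTABLE';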

  • owa_text.vc_arr: can't handle strings with more than 4000 characters?

    The Oracle Web Application Server 4.0 documentation says about owa_text.vc_arr: Type vc_arr is table of varchar2(32767) index by binary_integer.
    I am using PL/SQL with Oracle8i and the OWA 4.0 web server. I want to use owa_text.vc_arr to pass the multi-line texts in my form.
    If the text length is less than 4000 characters, everything works fine. However, when the texts are longer than 4000 characters but less than the maximum length of 32767 characters, I get this error message:
    OWS-05101: Execution failed due to Oracle error 2005
    ORA-02005: implicit (-1) length not valid for this bind or define datatype
    owa_text.vc_arr is supposed to handle strings with more than 4000 characters, isn't it? Could anyone tell me why this happens? Any help will be greatly appreciated!
    Thanks very much.
    Helena Wang
    Here is the PL/SQL procedure that creates my form:
    PROCEDURE myform
    IS
    BEGIN
    htp.p('
    <form action="'||service_path||'helena_test.saveform3" method=post>
    <input type=hidden name=tdescription value="X">
    Input1: <textarea name=tdescription rows=50 cols=70 WRAP=physical></textarea>
    Input2: <textarea name=tdescription rows=50 cols=70 WRAP=physical></textarea>
    <input type=submit name=WSave value="Save">
    </form>');  -- closing quote and parenthesis were missing
    END;
    /***** here is the pl_sql procedure which I use to save the form ***/
    procedure saveform3(tdescription in owa_text.vc_arr,
                        WSave in varchar2 default 'No')
    is
      len pls_integer;
    begin
      -- entry 1 is the hidden "X" field, so start at 2
      for i in 2..tdescription.count loop
        len := length(tdescription(i));
        htp.p(len);
        htp.p(tdescription(i));
      end loop;
    end;

    Helena, I think you might get a better response from either the SQL-PL/SQL forum or perhaps the Portal Applications forum; both of these tend to have folks very familiar with PL/SQL and the OWA packages.
    This forum is about Web services based on SOAP, WSDL and UDDI. These can be PL/SQL based but typically don't use the mod_plsql or OWA web solution.
    As a pointer, I suspect you may already be familiar with this, but just in case: you can always take a look at chapter 3 of the OAS documentation, "Developer's Guide: PL/SQL and ODBC Applications", where they go through a number of examples using parameters. See:
    http://technet.oracle.com/doc/appsrvr4082/guides/plsql.pdf
    Hope this or folks from the other list can help.
    Mike.

  • Why can't owa_text.vc_arr handle strings with more than 4000 characters?

    I am using PL/SQL with Oracle8i and the OAS 4.0 web server. I want to use owa_text.vc_arr to pass the inputs of several text areas in a form on a web application.
    If the input string length is less than 4000 characters, everything works fine. However, when the strings are longer than 4000 characters but less than the maximum length of 32767 characters, I get this error message:
    OWS-05101: Execution failed due to Oracle error 2005
    ORA-02005: implicit (-1) length not valid for this bind or define datatype
    The Oracle Application Server 4.0 documentation says about owa_text.vc_arr: Type vc_arr is table of varchar2(32767) index by binary_integer. That means owa_text.vc_arr can handle multiple strings, each of up to 32767 single-byte characters, right?
    owa_text.vc_arr is supposed to handle strings with more than 4000 characters, isn't it? Could anyone tell me why this happens? Any help will be greatly appreciated!
    Thanks very much.
    Helena Wang
    Here is the PL/SQL procedure that creates my form on the web:
    PROCEDURE myform
    IS
    BEGIN
    htp.p('
    <form action="'||service_path||'helena_test.saveform3" method=post>
    <input type=hidden name=tdescription value="X">
    Input1: <textarea name=tdescription rows=50 cols=70 WRAP=physical></textarea>
    Input2: <textarea name=tdescription rows=50 cols=70 WRAP=physical></textarea>
    <input type=submit name=WSave value="Save">
    </form>');  -- closing quote and parenthesis were missing
    END;
    /***** here is the pl_sql procedure which I use to save the form ***/
    procedure saveform3(tdescription in owa_text.vc_arr,
                        WSave in varchar2 default 'No')
    is
      len pls_integer;
    begin
      for i in 2..tdescription.count loop
        len := length(tdescription(i));
        htp.p(len);
        htp.p(tdescription(i));
      end loop;
    end;

    The maximum size of a VARCHAR2 column in Oracle 8i is 4000 bytes.
    You'll need to use a LOB type (or LONG, if you prefer the old way).
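    A minimal sketch of the LOB route, under stated assumptions (a hypothetical table FORM_TEXT with a CLOB column BODY; on 8i the pieces are appended with DBMS_LOB rather than the || operator):
    procedure saveform3(tdescription in owa_text.vc_arr,
                        WSave in varchar2 default 'No')
    is
      v_text clob;
    begin
      dbms_lob.createtemporary(v_text, true);
      -- entry 1 is the hidden "X" field, so start at 2
      for i in 2..tdescription.count loop
        dbms_lob.writeappend(v_text, length(tdescription(i)), tdescription(i));
      end loop;
      insert into form_text (body) values (v_text);  -- hypothetical table
      dbms_lob.freetemporary(v_text);
    end;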

  • Error while running spatial queries on a table with more than one geometry.

    Hello,
    I'm using GeoServer with an Oracle Spatial database, and this is the second time I have run into problems because we use tables with more than one geometry.
    When GeoServer renders objects with more than one geometry on the map, it creates a query asking for the objects where either of the two geometries interacts with the query window. This type of query always fails with an "End of TNS data channel" error.
    We are running Oracle Standard 11.1.0.7.0.
    Here is a small script to demonstrate the error. Could anyone confirm that they also have this type of error? Or suggest a fix?
    What this script does:
    1. Create table object1 with two geometry columns, geom1, geom2.
    2. Create metadata (projected coordinate system).
    3. Insert a row.
    4. Create spatial indexes on both columns.
    5. Run a SDO_RELATE query on one column. Everything is fine.
    6. Run a SDO_RELATE query on both columns. ERROR: "End of TNS data channel"
    7. Clean.
    CREATE TABLE object1 (
      id NUMBER PRIMARY KEY,
      geom1 SDO_GEOMETRY,
      geom2 SDO_GEOMETRY
    );
    INSERT INTO user_sdo_geom_metadata (table_name, column_name, srid, diminfo)
    VALUES (
      'OBJECT1',
      'GEOM1',
      2180,
      SDO_DIM_ARRAY(
        SDO_DIM_ELEMENT('X', 400000, 700000, 0.05),
        SDO_DIM_ELEMENT('Y', 300000, 600000, 0.05)
      )
    );
    INSERT INTO user_sdo_geom_metadata (table_name, column_name, srid, diminfo)
    VALUES (
      'OBJECT1',
      'GEOM2',
      2180,
      SDO_DIM_ARRAY(
        SDO_DIM_ELEMENT('X', 400000, 700000, 0.05),
        SDO_DIM_ELEMENT('Y', 300000, 600000, 0.05)
      )
    );
    INSERT INTO object1 VALUES(1, SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(550000, 450000, NULL), NULL, NULL));
    CREATE INDEX object1_geom1_sidx ON object1(geom1) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    CREATE INDEX object1_geom2_sidx ON object1(geom2) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    SELECT *
    FROM object1
    WHERE
    SDO_RELATE("GEOM1", SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), 'MASK=ANYINTERACT') = 'TRUE';
    SELECT *
    FROM object1
    WHERE
    SDO_RELATE("GEOM1", SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), 'MASK=ANYINTERACT') = 'TRUE' OR
    SDO_RELATE("GEOM2", SDO_GEOMETRY(2001, 2180, SDO_POINT_TYPE(500000, 400000, NULL), NULL, NULL), 'MASK=ANYINTERACT') = 'TRUE';
    DELETE FROM user_sdo_geom_metadata WHERE table_name = 'OBJECT1';
    DROP INDEX object1_geom1_sidx;
    DROP INDEX object1_geom2_sidx;
    DROP TABLE object1;
    Thanks for help.

    This error appears in both GeoServer and SQL*Plus.
    I set up a completely new database installation to test this, and there everything works fine. I tried it again on the previous database, but I still get the same error. I also tried restarting the database, with no luck; the error is still there. I guess something is wrong with that database installation.
    Does anyone know what could cause an error like "End of TNS data channel"?

  • Tables with more than one cell with a high rowspan (e.g. 619): such a cell cannot be printed on one page and must be split, but I don't know how InDesign can do this

    Tables with more than one cell with a high rowspan (e.g. 619): such a cell cannot be printed on one page and must be split. But I don't know how InDesign can do this.


  • How to print a graph when the internal table has more than 32 entries?

    Hi experts,
    I am trying to make a line graph using the 'GFW_PRES_SHOW' function module,
    but the internal table in my report has more than 32 entries.
    So how can I print a graph having more than 32 entries?

    Hi ricky_lv,
    According to your description, there is a main report with a subreport in it, and when the subreport spans multiple pages you want to show the column headers on each page. If that is the case, we can make the column headers visible while scrolling in the main report.
    For the details, please refer to the following steps:
    1. In design mode, click the small drop-down arrow next to Column Groups and select Advanced Mode.
    2. Go to your Row Groups pane and click on the first static member.
    3. In the properties grid, set FixedData to True.
    4. Set RepeatOnNextPage to True.
    Here is a relevant thread you can reference:
    https://social.technet.microsoft.com/Forums/en-US/e1f67cec-8fa3-4c5d-86ba-28b57fc4a211/keep-header-rows-visible-while-scrolling?forum=sqlreportingservi
    If you have any more questions, please feel free to ask.
    Thanks,
    Wendy Fu
    TechNet Community Support

  • Table with more than 35 columns

    Hello all.
    How can one work with a table with more than 35 columns in JDev 9.0.3.3?
    My other question is related to this: setting entity beans' properties from a session bean brought up the error, but when setting them from inside the EJB the bug stays clear.
    Is this right?
    Thank you

    Thank you all for the replies.
    Here's my problem:
    I have an AS400/DB2 database, a huge and old one.
    There are many COBOL programs used to communicate with this DB.
    My project is to transfer the database, with the same structure and the same contents, to a Linux/ORACLE system.
    I will not remake the COBOL programs; I will use the existing ones on the Linux system.
    So the tables of the new DB should be the same as the old ones.
    That's why I cannot make a relational DB; I have to make an exact migration.
    Unfortunately, I have some tables with more than 5000 COLUMNS.
    Now my question is:
    can I modify the parameters of the ORACLE DB to make it accept tables and views with more than 1000 columns? If not, is it possible to write a PL/SQL layer that simulates a table by inserting/updating/selecting data across many smaller tables (<1000 columns)? I want a method that makes the ORACLE DB act as if it had a table with a huge number of columns.
    I know it's crazy, but any ideas please.
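    A minimal sketch of that simulation idea, under stated assumptions (hypothetical tables WIDE_PART1/WIDE_PART2 sharing the key ID; note a view is itself limited to 1000 columns, so a logical table wider than that still has to be exposed as several narrower views):
    -- One logical record split across two physical tables.
    CREATE TABLE wide_part1 (id NUMBER PRIMARY KEY,
                             col1 VARCHAR2(10) /* ... up to ~999 more columns */);
    CREATE TABLE wide_part2 (id NUMBER PRIMARY KEY REFERENCES wide_part1 (id),
                             col1000 VARCHAR2(10) /* ... */);
    -- A view reassembles the row for SELECTs.
    CREATE OR REPLACE VIEW wide_table AS
      SELECT p1.id, p1.col1, p2.col1000
      FROM   wide_part1 p1 JOIN wide_part2 p2 ON p2.id = p1.id;
    -- An INSTEAD OF trigger routes DML on the view to the underlying tables.
    CREATE OR REPLACE TRIGGER wide_table_ins
    INSTEAD OF INSERT ON wide_table
    FOR EACH ROW
    BEGIN
      INSERT INTO wide_part1 (id, col1)    VALUES (:NEW.id, :NEW.col1);
      INSERT INTO wide_part2 (id, col1000) VALUES (:NEW.id, :NEW.col1000);
    END;
    /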

  • I want to enable a PK in a table with more than 1680 subpartitions

    Hi All,
    I want to enable a PK in a table with more than 1680 subpartitions.
    SQL> ALTER SESSION ENABLE PARALLEL DDL ;
    Session altered.
    SQL> alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK parallel 8;
    alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK parallel 8
    ERROR at line 1:
    ORA-00933: SQL command not properly ended
    SQL> alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK parallel 8 nologging;
    alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK parallel 8 nologging
    ERROR at line 1:
    ORA-00933: SQL command not properly ended
    SQL> alter table FDW.GL_JE_LINES_BASE_1 parallel 8;
    Table altered.
    SQL> alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK;
    alter table FDW.GL_JE_LINES_BASE_1 enable constraint GL_JE_LINES_BASE_1_PK
    ERROR at line 1:
    ORA-01652: unable to extend temp segment by 128 in tablespace TS_FDW_DATA
    Please advise, or tell me the best way to do this.
    Regards
    Jesus

    When you create a PK, an index is automatically created. If you want to put this index in a different tablespace, you should use the 'USING INDEX ...' option when you enable the primary key.
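    A minimal sketch, assuming a hypothetical key column JE_LINE_ID and index tablespace TS_FDW_IDX. The using_index_clause is also where PARALLEL and NOLOGGING belong, which is why appending them to ENABLE CONSTRAINT raised ORA-00933; building the index in a roomier tablespace also addresses the ORA-01652 space error.
    ALTER TABLE fdw.gl_je_lines_base_1
      ENABLE CONSTRAINT gl_je_lines_base_1_pk
      USING INDEX (
        CREATE UNIQUE INDEX fdw.gl_je_lines_base_1_pk
          ON fdw.gl_je_lines_base_1 (je_line_id)  -- hypothetical key column
          TABLESPACE ts_fdw_idx                   -- keep the index out of TS_FDW_DATA
          PARALLEL 8 NOLOGGING
      );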

  • Row chaining in a table with more than 255 columns

    Hi,
    I have a table with 1000 columns.
    I saw the following citation: "Any table with more than 255 columns will have chained rows (we break really wide tables up)."
    If I insert a row populated with only the first 3 columns (the others are null), does row chaining occur?
    I tried inserting a row as described above and no row chaining occurred.
    As I understand it, row chaining occurs in a table with 1000 columns only when the populated data exceeds
    the block size OR when more than 255 columns are populated. Am I right?
    Thanks
    dyahav

    Yesterday, I stated this on the forum: "Tables with more than 255 columns will always have chained rows." My statement needs clarification. It was based on the following:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#i4383
    "Oracle Database can only store 255 columns in a row piece. Thus, if you insert a row into a table that has 1000 columns, then the database creates 4 row pieces, typically chained over multiple blocks."
    And this paraphrase from "Practical Oracle 8i":
    V$SYSSTAT will show increasing values for CONTINUED ROW FETCH as table rows are read for tables containing more than 255 columns.
    Related information may also be found here:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c11schem.htm
    "When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row's pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row."
    http://download.oracle.com/docs/html/B14340_01/data.htm
    "For a table with several columns, the key question to consider is the (average) row length, not the number of columns. Having more than 255 columns in a table built with a smaller block size typically results in intrablock chaining.
    Oracle stores multiple row pieces in the same block, but the overhead to maintain the column information is minimal as long as all row pieces fit in a single data block. If the rows don't fit in a single data block, you may consider using a larger database block size (or use multiple block sizes in the same database). "
    Why not a test case?
    Create a test table named T4 with 1000 columns.
    With the table created, insert 1,000 rows into the table, populating the first 257 columns each with a random 3-byte string, which should result in an average row length of about 771 bytes.
    SPOOL C:\TESTME.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL2,
      COL3,
      /* ... COL4 through COL254 elided ... */
      COL255,
      COL256,
      COL257)
    SELECT
      DBMS_RANDOM.STRING('A',3),
      DBMS_RANDOM.STRING('A',3),
      DBMS_RANDOM.STRING('A',3),
      /* ... one DBMS_RANDOM.STRING('A',3) per remaining column ... */
      DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=1000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SPOOL OFF
    What are the results of the above?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue        166
    After the insert:
    NAME                      VALUE                                                
    table fetch continue        166                                                
    After the select:
    NAME                 STATISTIC#      VALUE                                     
    table fetch continue        252        332
    Another test, this time with an average row length of about 12 bytes:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME2.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL256,
      COL257,
      COL999)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SPOOL OFF
    With 100,000 rows each containing about 12 bytes, what should the 'table fetch continued row' statistic show?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue        332 
    After the insert:
    NAME                      VALUE                                                
    table fetch continue        332
    After the select:
    NAME                 STATISTIC#      VALUE                                     
    table fetch continue        252      33695
    The final test only inserts data into the first 4 columns:
    DELETE FROM T4;
    COMMIT;
    SPOOL C:\TESTME3.TXT
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    INSERT INTO T4 (
      COL1,
      COL2,
      COL3,
      COL4)
    SELECT
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3),
    DBMS_RANDOM.STRING('A',3)
    FROM
      DUAL
    CONNECT BY
      LEVEL<=100000;
    SELECT
      SN.NAME,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT *
    FROM
      T4;
    SET AUTOTRACE OFF
    SELECT
      SN.NAME,
      SN.STATISTIC#,
      MS.VALUE
    FROM
      V$MYSTAT MS,
      V$STATNAME SN
    WHERE
      SN.NAME = 'table fetch continued row'
      AND SN.STATISTIC#=MS.STATISTIC#;
    SPOOL OFF
    What should the 'table fetch continued row' statistic show?
    Before the insert:
    NAME                      VALUE                                                
    table fetch continue      33695
    After the insert:
    NAME                      VALUE                                                
    table fetch continue      33695
    After the select:
    NAME                 STATISTIC#      VALUE                                     
    table fetch continue        252      33695
    My statement "Tables with more than 255 columns will always have chained rows." needs to be clarified:
    "Tables with more than 255 columns will always have chained rows (row pieces) if a column beyond column 255 is used, but the 'table fetch continued row' statistic may only increase in value if the remaining row pieces are found in a different block."
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
    Edited by: Charles Hooper on Aug 5, 2009 9:52 AM
    Paraphrase misspelled the view name "V$SYSSTAT", corrected a couple minor typos, and changed "will" to "may" in the closing paragraph as this appears to be the behavior based on the test case.

  • Compressed tables with more than 255 columns

    Hi,
    Would anyone have a SQL query to find compressed tables with more than 255 columns?
    Thank you
    Jonu

    SELECT table_name,
           Count(column_name)
    FROM   user_tab_columns utc
    WHERE  utc.table_name IN (SELECT table_name
                              FROM   user_tables
                              WHERE  compression = 'ENABLED')
    GROUP  BY table_name
    HAVING Count(column_name) > 255;

Maybe you are looking for

  • Error during print request output. l_rc = 1

    Hi, I have a request to create an output device which can copy the spool request to a desired location on the application server. Our server is not connected to any physical server and is Windows NT. I am trying to create ACCESS TYPE 'L' with host printer n

  • How to remove the automatic Ken Burns effect in iMovie '13

    I am currently doing a stop-animation using iMovie '13 (10.0.2). When I move photos into the movie sequence, they automatically have Ken Burns applied. Obviously this is bad for a stop-animation; I need the whole photo and for it not to move! how to I r

  • Need help for Scheduling a Spool file and FTP the file

    I have one requirement like below... 1. Start Scheduling a job 2. Generate a Spool file (.csv file) 3. If Spool file generation is successful then start FTP the file Else End job 4. After successful FTP process end the job. We need to create a log fi

  • SAP RF

    Hi, I am a new user in SAP-RF transaction and following are some of the queries: 1) When I am using Tcode-LM61 after I input the delivery note No and pressing F6 and F8 keys I do the confirmation of transfer order. Now when I branch off to LM26 scree

  • Bill Plan with Sales Order item rejected

    Milestone Billing Plan created. IMG Config Sales Order Item Reason for Rejection = BIC checked (Not Relevant for Billing) Add material on the (Billing Plan) Sales Order Item. Set the Item Reason For rejection (same reason as set in IMG config for BIC