Postgres' LIMIT .. OFFSET for large table

Hi!
I have a really large table (some millions of rows) which I'd like to present on a web page. I let the user choose a limit, say 25 lines per page, and present some buttons to go one page forward or backward.
Some years ago, I did this using PostgreSQL. There's an easy way to do it there using LIMIT .. OFFSET. In Oracle, there's no such functionality.
Currently, my 'workaround' looks like this (a bit more complex in reality):
SELECT * FROM (
    SELECT
        ROW_NUMBER() OVER (ORDER BY MSG_RCV_TIME DESC) AS ROWNO,
        TO_CHAR(MSG_RCV_TIME) MSG_RCV
    FROM MSG_TABLE
    ORDER BY MSG_RCV_TIME DESC
) WHERE ROWNO BETWEEN 1 AND 10
This gives back 10 rows, which does the job. The problem is: it takes AGES! The web server falls into a timeout before even printing one line. First, Oracle has to suck in all x*1'000'000 lines just to sort out the ones it doesn't need. That can't be the solution, can it?
In this forum, I have read a few notes about PARTITION, CURSOR and such things, but I didn't really get what the use of it is.
Any hints on that? This forum is based on Oracle, too (I hope), and it's fast. There must be a solution for this.
Btw, the table I am talking about is being filled by syslog-ng, and it currently grows by 200MB per day (and it's still in the testing phase). I expect some hundred million lines to be present later.
Thanks a lot in advance
André

See Tom Kyte's site for this

Cool. Didn't know this one. How is he checking the performance of the queries?

The one comment in there that I entirely agree with is that such large result sets are meaningless to the human eye, so I would question exactly what you are trying to achieve. As Tom rightly says, nobody is ever going to scroll down to rows 999001 - 999010, even if they could.

Of course not. But you see, as an example, that if you type just one word into google's mask, it returns loads of pages. As soon as you see that your query was not really a good one, you try with more specific words, and it returns fewer pages. That's exactly what my GUI is going to do. First, it gives you an overview, then, it lets you refine the search.
Anyway: as soon as I limit the output in the innermost query, I doubt it's useful. Say I limit the number of rows to browse through to 1'000, but syslog-ng is producing 2'000 rows per minute - you'll miss the rows you were maybe looking for.
It's essential to be able to see all the records. I don't mind if nobody ever looks at pages 200'000 to 1'000'000.
Thanks again for the great link.
André (who really starts to like Oracle and its community)
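
For reference, a minimal sketch of the AskTom-style pattern being discussed, using the column names from the original post (it assumes MSG_RCV_TIME is indexed and declared NOT NULL, so Oracle can walk the index in descending order and stop early via COUNT STOPKEY instead of sorting all the rows):

SELECT msg_rcv
FROM ( SELECT a.*, ROWNUM rnum
       FROM ( SELECT TO_CHAR(MSG_RCV_TIME) msg_rcv
              FROM MSG_TABLE
              ORDER BY MSG_RCV_TIME DESC ) a
       WHERE ROWNUM <= 10 )   -- last row of the requested page
WHERE rnum >= 1               -- first row of the requested page

The decisive difference from the ROW_NUMBER() version above is the ROWNUM predicate on the inner view: it lets Oracle stop fetching at the page boundary instead of numbering every row first. In practice the page boundaries would be bind variables.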

Similar Messages

  • HS connection to MySQL fails for large table

    Hello,
    I have set up an HS connection to a MySQL database using an ODBC 3.51 DSN. My Oracle box runs version 10.2.0.1 on Windows 2003 R2. The MySQL version is 4.1.22, running on a different machine with the same OS.
    I completed the connection through a database link, which works fine in SQLPLUS when selecting small MySQL tables. However, I keep getting an out-of-memory error when selecting certain large tables from the MySQL database. Previously, I had tested the DSN and run the same SELECT in Access, and it didn't give any error. This is the error thrown by SQLPLUS:
    SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
    select * from progressnotes@mysql_rmg where "encounterID" = 224720
    ERROR at line 1:
    ORA-00942: table or view does not exist
    [Generic Connectivity Using ODBC][MySQL][ODBC 3.51
        Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL server during query
    (SQL State: S1T00; SQL Code: 2013)
    ORA-02063: preceding 2 lines from MYSQL_RMG
    I traced the HS connection and here is the result from the .trc file:
    Oracle Corporation --- THURSDAY JUN 12 2008 11:19:51.809
    Heterogeneous Agent Release
    10.2.0.1.0
    (0) [Generic Connectivity Using ODBC] version: 4.6.1.0.0070
    (0) connect string is: defTdpName=MYSQL_RMG;SYNTAX=(ORACLE8_HOA, BASED_ON=ORACLE8,
    (0) IDENTIFIER_QUOTE_CHAR="",
    (0) CASE_SENSITIVE=CASE_SENSITIVE_QUOTE);BINDING=<navobj><binding><datasources><da-
    (0) tasource name='MYSQL_RMG' type='ODBC'
    (0) connect='MYSQL_RMG'><driverProperties/></datasource></datasources><remoteMachi-
    (0) nes/><environment><optimizer noFlattener='true'/><misc year2000Policy='-1'
    (0) consumerApi='1' sessionBehavior='4'/><queryProcessor parserDepth='2000'
    (0) tokenSize='1000' noInsertParameterization='true'
    noThreadedReadAhead='true'
    (0) noCommandReuse='true'/></environment></binding></navobj>
    (0) ORACLE GENERIC GATEWAY Log File Started at 2008-06-12T11:19:51
    (0) hoadtab(26); Entered.
    (0) Table 1 - PROGRESSNOTES
    (0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]MySQL client ran out of
    (0) memory (SQL State: S1T00; SQL Code: 2008)
    (0) (Last message occurred 2 times)
    (0)
    (0) hoapars(15); Entered.
    (0) Sql Text is:
    (0) SELECT * FROM "PROGRESSNOTES"
    (0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL
    (0) server during query (SQL State: S1T00; SQL Code: 2013)
    (0) (Last message occurred 2 times)
    (0)
    (0) [A00D] Failed to open table MYSQL_RMG:PROGRESSNOTES
    (0)
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [S1000] [9013]General error in nvITrans_Commit - rc = -1. Please refer to the
    (0) log file for details.
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [S1000] [9013]General error in nvITrans_Rollback - rc = -1. Please refer to
    (0) the log file for details.
    (0) Closing log file at THU JUN 12 11:20:38 2008.
    I have read the MySQL documentation, and apparently there's a "Don't Cache Result (forward only cursors)" parameter in the ODBC DSN that needs to be checked in order to keep the results on the MySQL server side instead of the driver side, but checking that parameter doesn't work for the HS connection. Instead, the SQLPLUS session throws the following message when selecting the same large table:
    SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
    select * from progressnotes@mysql_rmg where "encounterID" = 224720
    ERROR at line 1:
    ORA-02068: following severe error from MYSQL_RMG
    ORA-28511: lost RPC connection to heterogeneous remote agent using
    SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.0.120)(PORT=1521))(CONNECT_DATA=(SID=MYSQL_RMG)))
    Curiously enough, after checking the parameter, the Access connection through the ODBC DSN seems to improve!
    Is there an additional parameter that needs to be set up in inithsodbc.ora, perhaps? These are the current HS parameters:
    # HS init parameters
    HS_FDS_CONNECT_INFO = MYSQL_RMG
    HS_FDS_TRACE_LEVEL = ON
    My SID_LIST_LISTENER entry is:
    (SID_DESC =
    (PROGRAM = HSODBC)
    (SID_NAME = MYSQL_RMG)
    (ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
    Finally, here is my TNSNAMES.ORA entry for the HS connection:
    MYSQL_RMG =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.0.120)(PORT = 1521))
    (CONNECT_DATA =
    (SID = MYSQL_RMG)
    (HS = OK)
    Your advice will be greatly appreciated,
    Thanks,
    Luis
    Message was edited by:
    lmconsite

    First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
    The root cause of the problem you describe could be related to a timeout of the ODBC driver (especially taking into account your comment that it happens only for larger tables):
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    indicates that the driver or the DB abends the connection due to a timeout.
    Check out the wait_timeout MySQL variable on the server and increase it.
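    For example (a sketch; wait_timeout is the standard MySQL variable, the value is only illustrative):

    -- on the MySQL server
    SHOW VARIABLES LIKE 'wait_timeout';
    SET GLOBAL wait_timeout = 28800;  -- seconds; affects connections opened afterwards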

  • Gather table stats taking longer for Large tables

    Version : 11.2
    I've noticed that gathering stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
    Since the row count needs to be calculated, a big table's stats collection would understandably take slightly longer (running SELECT COUNT(*) internally).
    But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats? Apart from row count and index info, what other information is gathered by gather table stats?
    Does Table size actually matter for stats collection?

    Max wrote:
    Version : 11.2
    I've noticed that gathering stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
    Since the row count needs to be calculated, a big table's stats collection would understandably take slightly longer (running SELECT COUNT(*) internally).
    But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats? Apart from row count and index info, what other information is gathered by gather table stats?
    09:40:05 SQL> desc user_tables
    Name                          Null?    Type
    ----------------------------- -------- ------------
    TABLE_NAME                    NOT NULL VARCHAR2(30)
    TABLESPACE_NAME                        VARCHAR2(30)
    CLUSTER_NAME                           VARCHAR2(30)
    IOT_NAME                               VARCHAR2(30)
    STATUS                                 VARCHAR2(8)
    PCT_FREE                               NUMBER
    PCT_USED                               NUMBER
    INI_TRANS                              NUMBER
    MAX_TRANS                              NUMBER
    INITIAL_EXTENT                         NUMBER
    NEXT_EXTENT                            NUMBER
    MIN_EXTENTS                            NUMBER
    MAX_EXTENTS                            NUMBER
    PCT_INCREASE                           NUMBER
    FREELISTS                              NUMBER
    FREELIST_GROUPS                        NUMBER
    LOGGING                                VARCHAR2(3)
    BACKED_UP                              VARCHAR2(1)
    NUM_ROWS                               NUMBER
    BLOCKS                                 NUMBER
    EMPTY_BLOCKS                           NUMBER
    AVG_SPACE                              NUMBER
    CHAIN_CNT                              NUMBER
    AVG_ROW_LEN                            NUMBER
    AVG_SPACE_FREELIST_BLOCKS              NUMBER
    NUM_FREELIST_BLOCKS                    NUMBER
    DEGREE                                 VARCHAR2(10)
    INSTANCES                              VARCHAR2(10)
    CACHE                                  VARCHAR2(5)
    TABLE_LOCK                             VARCHAR2(8)
    SAMPLE_SIZE                            NUMBER
    LAST_ANALYZED                          DATE
    PARTITIONED                            VARCHAR2(3)
    IOT_TYPE                               VARCHAR2(12)
    TEMPORARY                              VARCHAR2(1)
    SECONDARY                              VARCHAR2(1)
    NESTED                                 VARCHAR2(3)
    BUFFER_POOL                            VARCHAR2(7)
    FLASH_CACHE                            VARCHAR2(7)
    CELL_FLASH_CACHE                       VARCHAR2(7)
    ROW_MOVEMENT                           VARCHAR2(8)
    GLOBAL_STATS                           VARCHAR2(3)
    USER_STATS                             VARCHAR2(3)
    DURATION                               VARCHAR2(15)
    SKIP_CORRUPT                           VARCHAR2(8)
    MONITORING                             VARCHAR2(3)
    CLUSTER_OWNER                          VARCHAR2(30)
    DEPENDENCIES                           VARCHAR2(8)
    COMPRESSION                            VARCHAR2(8)
    COMPRESS_FOR                           VARCHAR2(12)
    DROPPED                                VARCHAR2(3)
    READ_ONLY                              VARCHAR2(3)
    SEGMENT_CREATED                        VARCHAR2(3)
    RESULT_CACHE                           VARCHAR2(7)
    09:40:10 SQL>
    Does Table size actually matter for stats collection?

    Yes.
    Handle:     Max
    Status Level:     Newbie
    Registered:     Nov 10, 2008
    Total Posts:     155
    Total Questions:     80 (49 unresolved)
    why so many unanswered questions?
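
    For what it's worth: besides row and block counts, DBMS_STATS also gathers column-level statistics (NDV, min/max, null counts) and, depending on METHOD_OPT, histograms - that is where much of the time goes on wide tables. A minimal sketch of a call that is usually faster on 11g (the table name here is hypothetical):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => USER,
        tabname          => 'BIG_TABLE',                 -- hypothetical name
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -- 11g one-pass sampling
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        cascade          => TRUE);                       -- index stats too
    END;
    /

    A fixed estimate_percent (or a full compute) forced by old scripts is a common reason a 3-million-row table takes minutes rather than seconds.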

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
    SELECT members.*
    FROM members,
        ( SELECT RID, rownum rnum
          FROM
            ( SELECT rowid as RID
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
          WHERE rownum <= 100 )
    WHERE rnum >= 1
      AND RID = members.rowid

    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
    SELECT rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate

    This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate

    or:

    SELECT /*+ first_rows(100) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate

    Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
    SELECT members.*       -- Select all data from members table
    FROM members,          -- members table added to FROM clause
        ( SELECT RID, rownum rnum
          FROM
            ( SELECT /*+ index(members, joindate_idx) */ rowid as RID   -- Hint is ignored now that I am joining in the outer query
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
          WHERE rownum <= 100 )
    WHERE rnum >= 1
      AND RID = members.rowid   -- Merge the members table on the rowid we pulled from the inner queries

    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!
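
    One avenue that may be worth a try (a sketch only; the QB_NAME/global hint mechanism is documented for 10g+, the index name is taken from the post, and the query-block name is hypothetical) is to name the inner query block so a hint placed at the top level can survive view merging:

    SELECT /*+ INDEX(@inner m joindate_idx) */ members.*
    FROM members,
        ( SELECT RID, rownum rnum
          FROM
            ( SELECT /*+ QB_NAME(inner) */ rowid as RID
              FROM members m
              WHERE last_name = 'Smith'
              ORDER BY joindate )
          WHERE rownum <= 100 )
    WHERE rnum >= 1
      AND RID = members.rowid

    Whether the optimizer honors it still depends on how the blocks are merged, so the explain plan needs re-checking either way.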

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
    Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here, and the order of records is important, your 2 queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
    SQL> spool off

    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
    The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join, but does not affect the other query.

  • How to limit rows for a table?

    Hi,
    does anyone know a good way to limit the number of rows for a table?
    thanks
    aldo

    You can limit the number of rows using a trigger. Below is a quick and dirty example that has not been fully tested.
    test@ORCL> create table rowlimit (col1 number);
    Table created.
    Elapsed: 00:00:00.34
    test@ORCL> create or replace trigger limit_rows
      2  before insert
      3  on rowlimit
      4  declare
      5    v_count number;
      6  begin
      7     select count(*)
      8     into v_count
      9     from rowlimit;
    10
    11     if v_count >= 4 then
    12       raise_application_error(-20000, 'Table can have no more than 4 rows');
    13     end if;
    14  end;
    15  /
    Trigger created.
    Elapsed: 00:00:00.09
    test@ORCL> insert into rowlimit values(1);
    1 row created.
    Elapsed: 00:00:00.03
    test@ORCL> insert into rowlimit values(2);
    1 row created.
    Elapsed: 00:00:00.00
    test@ORCL> insert into rowlimit values(3);
    1 row created.
    Elapsed: 00:00:00.00
    test@ORCL> insert into rowlimit values(4);
    1 row created.
    Elapsed: 00:00:00.00
    test@ORCL> insert into rowlimit values(5);
    insert into rowlimit values(5)
    ERROR at line 1:
    ORA-20000: Table can have no more than 4 rows
    ORA-06512: at "TEST.LIMIT_ROWS", line 9
    ORA-04088: error during execution of trigger 'TEST.LIMIT_ROWS'
    Elapsed: 00:00:00.04
    test@ORCL> commit;
    Commit complete.
    Elapsed: 00:00:00.00
    test@ORCL> select count(*) from rowlimit;
      COUNT(*)
             4
    Elapsed: 00:00:00.00
    test@ORCL>

  • ALV: how to save context space for large tables ?

    Dear collegues,
    We are displaying an ALV table that is quite large. Therefore, the corresponding DDIC structure and the WD context are large. This has an impact on performance and the load size of the program. Now we will enhance the ALV table again.
    Example: for an icon and its explanatory tooltip displayed in the ALV, context fields like "SOURCE_FIELDNAME" are required for the tooltip as well as for the icon. They need a lot of characters for each tooltip and icon.
    Question: do you have an idea how to save context space for those ALV fields?
    Best regards,
    Christian

    >We are displaying an ALV table that is quite large.
    Do you mean quite large as in a large number of columns or as in a large number of rows (or both)?  I assume that the problem is probably more related to a large number of rows.  For very large tables, you should consider using the table instead of the ALV. For very large tables you can even use a technique called context paging to keep only a subset of the data in the context memory at a time.  Here is a recent blog that I created on the topic with demonstrations of different techniques for table sharing, shared memory, and context paging when dealing with large tables in Web Dynpro ABAP:
    Web Dynpro ABAP: How Fast Can You Consume 1 Million Rows?

  • Query timeout for large table

    Dear friend,
    My view always times out because my table now has 18,00,000 (1.8 million) rows.
    Now what should I do with this table? Can anyone help me?
    Another question: for my work I need to create 5-7 reports every day, so every time I need to create views for those reports. I cannot always create procedures, because creating views is easier for me. But the views become slower day by day. My server is, I think, quite good: Xeon quad-core dual processor and 32 GB of RAM.
    Any advice will be appreciated.
    Thanks in advance

    Ya, thanks for your time. I appreciate it all the way.
    Actually I attached those to present an idea of my database. Most of the time I need to work with just 3 or 4 tables, which are LC_Profile and student_profile or the ROSC database.
    I am adding the query, but you do not need to go through all of it. Just understand how difficult my queries tend to be. My question is: is there a good way to get results faster than with the view? I need to make several reports every day. So I use views and join many tables, and need to use many WHERE clauses, CASE, CONVERT time, etc. That is why I am asking for suggestions.
    SELECT TOP (100) PERCENT dbo.ACF_LCs.YearTrim, dbo.ACF_LCs.EduYr, dbo.vw_Geocode.DivisionID, dbo.vw_Geocode.DivisionB, dbo.vw_Geocode.Division,
    dbo.vw_Geocode.DistrictID, dbo.vw_Geocode.District, dbo.vw_Geocode.DistrictB, dbo.vw_Geocode.UpazilaID, dbo.vw_Geocode.Upazila, dbo.vw_Geocode.UpazilaB,
    dbo.LCProfile.LCID, dbo.LCProfile.LCYr, dbo.LCProfile.LCNm, dbo.LCProfile.LCNmB, dbo.Vw_Teacher_Active.TeachYr, dbo.Vw_Teacher_Active.TeachEdu,
    CASE WHEN TeachEdu = 1 THEN 3000 ELSE 3000 END AS TeacherSalaryOld, dbo.LCProfile.LCAccountNo, dbo.Vw_Teacher_Active.TeachNm,
    dbo.Vw_Teacher_Active.TeachSex, dbo.vw_Bank_Branch.LCBankBr, dbo.Vw_Teacher_Active.TeachMob, dbo.LCProfile.UnionID, dbo.UnionCode.UnionB,
    dbo.LCProfile.LCVill, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID AS MDistrictID,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID AS MUpazilaID, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID AS MLCID,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.MOID,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCStatus, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LC1stVstDt,
    MonitoringROSCII.dbo.Venu_Info.VenuType, MonitoringROSCII.dbo.Venu_Info.VenuTypeOthr, MonitoringROSCII.dbo.Venu_Info.NoWindow,
    MonitoringROSCII.dbo.Venu_Info.SuffWinAir, MonitoringROSCII.dbo.Venu_Info.FreeArsWater, MonitoringROSCII.dbo.Venu_Info.HigLatrin,
    MonitoringROSCII.dbo.Venu_Info.SeatArg, MonitoringROSCII.dbo.Venu_Info.Blackboard, MonitoringROSCII.dbo.Venu_Info.DistrictID AS VDistrictID,
    MonitoringROSCII.dbo.Venu_Info.UpazilaID AS VUpazilaID, MonitoringROSCII.dbo.Venu_Info.LCID AS VLCID,
    MonitoringROSCII.dbo.Vw_UniformYes.DistrictID AS UDistrictID, MonitoringROSCII.dbo.Vw_UniformYes.UpazilaID AS UUpazilaID,
    MonitoringROSCII.dbo.Vw_UniformYes.LCID AS ULCID, MonitoringROSCII.dbo.Vw_UniformYes.RecUniformY,
    MonitoringROSCII.dbo.Teacher_Training.DistrictID AS TDistrictID, MonitoringROSCII.dbo.Teacher_Training.UpazilaID AS TUpazilaID,
    MonitoringROSCII.dbo.Teacher_Training.LCID AS TLCID, MonitoringROSCII.dbo.Teacher_Training.TcrRecFndTrn, MonitoringROSCII.dbo.LC_Info.PrsnMale,
    MonitoringROSCII.dbo.LC_Info.PrsnFemale, MonitoringROSCII.dbo.LC_Info.PrsnStdTot, RIGHT('00' + CONVERT(varchar, dbo.vw_Geocode.DivisionID), 2)
    + RIGHT('00' + CONVERT(varchar, dbo.vw_Geocode.DistrictID), 2) + RIGHT(CONVERT(varchar, dbo.vw_Geocode.UpazilaID), 2) + RIGHT('000' + CONVERT(varchar,
    dbo.Vw_Teacher_Active.LCID), 3) AS InstituteID, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCStartHr,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCEndHr, dbo.Vw_LCProfile_QStudent_LCwise2013_3.NoStudent AS NoQStudent,
    dbo.Vw_LCProfile_QStudent_LCwise2013_3.Stu13, dbo.Vw_LCProfile_QStudent_LCwise2013_3.Stu45, dbo.PO.PO_NM_E, dbo.PO.PO_NM_B,
    dbo.vw_Geocode.Status AS UpStatus, dbo.vw_Geocode.Phase, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.SpecialStatus,
    dbo.ACF_LCs.SpecialStatus AS SpecialStatusACF, MonitoringROSCII.dbo.Teacher_Profile.TcrPres, MonitoringROSCII.dbo.Teacher_Profile.TcrMtchLCProf,
    CASE WHEN NOT (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID IS NULL) AND LCStatus = 1 AND ((TcrPres = 1 AND TcrMtchLCProf = 2) OR
    TcrPres = 2) THEN 0 ELSE 3000 END AS TeacherSalary
    FROM dbo.Vw_Teacher_Active RIGHT OUTER JOIN
    dbo.PO RIGHT OUTER JOIN
    dbo.vw_LC_Functioning RIGHT OUTER JOIN
    dbo.Vw_LCProfile_QStudent_LCwise2013_3 INNER JOIN
    dbo.ACF_LCs INNER JOIN
    dbo.vw_Geocode INNER JOIN
    dbo.LCProfile ON dbo.vw_Geocode.DistrictID = dbo.LCProfile.DistrictID AND dbo.vw_Geocode.UpazilaID = dbo.LCProfile.UpazilaID ON
    dbo.ACF_LCs.DistrictID = dbo.LCProfile.DistrictID AND dbo.ACF_LCs.UpazilaID = dbo.LCProfile.UpazilaID AND dbo.ACF_LCs.LcID = dbo.LCProfile.LCID ON
    dbo.Vw_LCProfile_QStudent_LCwise2013_3.DistrictID = dbo.ACF_LCs.DistrictID AND
    dbo.Vw_LCProfile_QStudent_LCwise2013_3.UpazilaID = dbo.ACF_LCs.UpazilaID AND dbo.Vw_LCProfile_QStudent_LCwise2013_3.LCID = dbo.ACF_LCs.LcID ON
    dbo.vw_LC_Functioning.DistrictID = dbo.ACF_LCs.DistrictID AND dbo.vw_LC_Functioning.UpazilaID = dbo.ACF_LCs.UpazilaID AND
    dbo.vw_LC_Functioning.LCID = dbo.ACF_LCs.LcID LEFT OUTER JOIN
    MonitoringROSCII.dbo.Teacher_Training RIGHT OUTER JOIN
    MonitoringROSCII.dbo.Venu_Info RIGHT OUTER JOIN
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo LEFT OUTER JOIN
    MonitoringROSCII.dbo.Teacher_Profile ON MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID = MonitoringROSCII.dbo.Teacher_Profile.DistrictID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID = MonitoringROSCII.dbo.Teacher_Profile.UpazilaID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID = MonitoringROSCII.dbo.Teacher_Profile.LCID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType = MonitoringROSCII.dbo.Teacher_Profile.VisitType AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr = MonitoringROSCII.dbo.Teacher_Profile.LCVisitYr AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister = MonitoringROSCII.dbo.Teacher_Profile.Trimister ON
    MonitoringROSCII.dbo.Venu_Info.DistrictID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID AND
    MonitoringROSCII.dbo.Venu_Info.UpazilaID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID AND
    MonitoringROSCII.dbo.Venu_Info.LCID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID AND
    MonitoringROSCII.dbo.Venu_Info.VisitType = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType AND
    MonitoringROSCII.dbo.Venu_Info.LCVisitYr = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr AND
    MonitoringROSCII.dbo.Venu_Info.Trimister = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister LEFT OUTER JOIN
    MonitoringROSCII.dbo.Vw_UniformYes ON MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID = MonitoringROSCII.dbo.Vw_UniformYes.DistrictID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID = MonitoringROSCII.dbo.Vw_UniformYes.UpazilaID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID = MonitoringROSCII.dbo.Vw_UniformYes.LCID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType = MonitoringROSCII.dbo.Vw_UniformYes.VisitType AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr = MonitoringROSCII.dbo.Vw_UniformYes.LCVisitYr LEFT OUTER JOIN
    MonitoringROSCII.dbo.LC_Info ON MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID = MonitoringROSCII.dbo.LC_Info.DistrictID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID = MonitoringROSCII.dbo.LC_Info.UpazilaID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID = MonitoringROSCII.dbo.LC_Info.LCID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType = MonitoringROSCII.dbo.LC_Info.VisitType AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr = MonitoringROSCII.dbo.LC_Info.LCVisitYr AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister = MonitoringROSCII.dbo.LC_Info.Trimister ON
    MonitoringROSCII.dbo.Teacher_Training.DistrictID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID AND
    MonitoringROSCII.dbo.Teacher_Training.UpazilaID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID AND
    MonitoringROSCII.dbo.Teacher_Training.LCID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID AND
    MonitoringROSCII.dbo.Teacher_Training.VisitType = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType AND
    MonitoringROSCII.dbo.Teacher_Training.LCVisitYr = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr AND
    MonitoringROSCII.dbo.Teacher_Training.Trimister = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister ON
    dbo.ACF_LCs.DistrictID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID AND
    dbo.ACF_LCs.UpazilaID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID AND
    dbo.ACF_LCs.LcID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID ON dbo.PO.DistrictID = dbo.LCProfile.DistrictID AND
    dbo.PO.UpazilaID = dbo.LCProfile.UpazilaID ON dbo.Vw_Teacher_Active.DistrictID = dbo.LCProfile.DistrictID AND
    dbo.Vw_Teacher_Active.UpazilaID = dbo.LCProfile.UpazilaID AND dbo.Vw_Teacher_Active.LCID = dbo.LCProfile.LCID LEFT OUTER JOIN
    dbo.UnionCode ON dbo.LCProfile.UnionID = dbo.UnionCode.UnionID AND dbo.LCProfile.UpazilaID = dbo.UnionCode.UpazilaID AND
    dbo.LCProfile.DistrictID = dbo.UnionCode.DistrictID LEFT OUTER JOIN
    dbo.vw_Bank_Branch ON dbo.LCProfile.LCBankBr = dbo.vw_Bank_Branch.BranchID
    GROUP BY dbo.vw_Geocode.DivisionID, dbo.vw_Geocode.DivisionB, dbo.vw_Geocode.DistrictID, dbo.vw_Geocode.DistrictB, dbo.vw_Geocode.UpazilaID,
    dbo.vw_Geocode.UpazilaB, dbo.LCProfile.LCID, dbo.LCProfile.LCYr, dbo.LCProfile.LCNmB, dbo.Vw_Teacher_Active.TeachEdu, dbo.LCProfile.LCAccountNo,
    dbo.Vw_Teacher_Active.TeachNm, dbo.Vw_Teacher_Active.TeachSex, dbo.vw_Bank_Branch.LCBankBr, dbo.UnionCode.UnionB, dbo.vw_Geocode.Division,
    dbo.vw_Geocode.District, dbo.vw_Geocode.Upazila, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.MOID,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCStatus, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LC1stVstDt,
    MonitoringROSCII.dbo.Venu_Info.NoWindow, MonitoringROSCII.dbo.Venu_Info.SuffWinAir, MonitoringROSCII.dbo.Venu_Info.FreeArsWater,
    MonitoringROSCII.dbo.Venu_Info.HigLatrin, MonitoringROSCII.dbo.Venu_Info.SeatArg, MonitoringROSCII.dbo.Venu_Info.Blackboard,
    MonitoringROSCII.dbo.Venu_Info.DistrictID, MonitoringROSCII.dbo.Venu_Info.UpazilaID, MonitoringROSCII.dbo.Venu_Info.LCID,
    MonitoringROSCII.dbo.Venu_Info.VenuType, MonitoringROSCII.dbo.Venu_Info.VenuTypeOthr, MonitoringROSCII.dbo.Vw_UniformYes.RecUniformY,
    MonitoringROSCII.dbo.Vw_UniformYes.DistrictID, MonitoringROSCII.dbo.Vw_UniformYes.UpazilaID, MonitoringROSCII.dbo.Vw_UniformYes.LCID,
    MonitoringROSCII.dbo.Teacher_Training.DistrictID, MonitoringROSCII.dbo.Teacher_Training.UpazilaID, MonitoringROSCII.dbo.Teacher_Training.LCID,
    MonitoringROSCII.dbo.Teacher_Training.TcrRecFndTrn, dbo.LCProfile.UnionID, MonitoringROSCII.dbo.LC_Info.PrsnMale, MonitoringROSCII.dbo.LC_Info.PrsnFemale,
    MonitoringROSCII.dbo.LC_Info.PrsnStdTot, dbo.LCProfile.LCVill, dbo.Vw_Teacher_Active.TeachMob, RIGHT('00' + CONVERT(varchar, dbo.vw_Geocode.DivisionID), 2)
    + RIGHT('00' + CONVERT(varchar, dbo.vw_Geocode.DistrictID), 2) + RIGHT(CONVERT(varchar, dbo.vw_Geocode.UpazilaID), 2) + RIGHT('000' + CONVERT(varchar,
    dbo.Vw_Teacher_Active.LCID), 3), MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCStartHr,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCEndHr, dbo.Vw_LCProfile_QStudent_LCwise2013_3.NoStudent, dbo.PO.PO_NM_E, dbo.PO.PO_NM_B,
    dbo.vw_Geocode.Status, dbo.vw_Geocode.Phase, dbo.Vw_Teacher_Active.TeachYr, dbo.LCProfile.LCNm,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.SpecialStatus, dbo.ACF_LCs.YearTrim, dbo.ACF_LCs.EduYr,
    dbo.Vw_LCProfile_QStudent_LCwise2013_3.Stu13, dbo.Vw_LCProfile_QStudent_LCwise2013_3.Stu45, dbo.ACF_LCs.SpecialStatus,
    MonitoringROSCII.dbo.Teacher_Profile.TcrPres, MonitoringROSCII.dbo.Teacher_Profile.TcrMtchLCProf,
    CASE WHEN NOT (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID IS NULL) AND LCStatus = 1 AND ((TcrPres = 1 AND TcrMtchLCProf = 2) OR
    TcrPres = 2) THEN 0 ELSE 3000 END
    HAVING (dbo.ACF_LCs.YearTrim = 1) AND (dbo.ACF_LCs.EduYr = 2014) AND (dbo.LCProfile.LCYr < 2013) AND
    (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr = 2014) AND (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister = 1) AND
    (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType = 3) AND (NOT (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID IS NULL)) OR
    (dbo.ACF_LCs.YearTrim = 1) AND (dbo.ACF_LCs.EduYr = 2014) AND (dbo.LCProfile.LCYr < 2013) AND
    (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID IS NULL)
    ORDER BY dbo.vw_Geocode.DivisionID
    Another problem is:
    Let's say I have a table with id, name, place, address, timeof_attendance, and an id which is of bigint type. This table has 18,00,000 records and is increasing by 5000 records each day. From there I am finding the attendance for every day. So I have to create 4 nested views to come to a result. Now my query times out. If I delete old data then it works. This is the kind of problem I am facing.
    Please advise me.
    Thanks
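
    As a first sketch for the daily-attendance case described above (the table and index names are hypothetical, the column names are taken from the description, and this assumes SQL Server 2008+ for the date type):

    CREATE NONCLUSTERED INDEX IX_Attendance_Date
        ON dbo.Attendance (timeof_attendance)
        INCLUDE (id, name, place);

    -- one day's attendance without stacking nested views:
    SELECT id, name, place
    FROM dbo.Attendance
    WHERE timeof_attendance >= CAST(GETDATE() AS date)
      AND timeof_attendance <  DATEADD(day, 1, CAST(GETDATE() AS date));

    Keeping the range predicate sargable like this lets the optimizer seek the index instead of scanning all 18,00,000 rows through four layers of views.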

  • How to identity segment when to setup partition for large table?

    I have a table whose size is about 3G. There is a code column in this table with 20 distinct values. I am trying to create a list partition on this column. How can I assign a segment to each value of this partition? In my database, fewer than 10 segments are available. If I want better performance, should each partition be on a different segment or a different device? Should each segment be large enough to hold all the data? What happens if a segment is smaller - for example, if I only have 4 segments, each with 500M?
    If I remove or change the partitioning strategy, for example change the type to range, will the system release the partitions on the segments automatically?

    This section of the performance and tuning guide addresses all of these concerns.  Give it a good read and post questions that you have about the documentation:
    http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc00841.1570/html/phys_tune/title.htm
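
    For orientation, the shape this takes in ASE looks roughly like the following (a sketch assuming ASE 15-style syntax; the table, segment, and partition names are all hypothetical, so verify against the guide above):

    create table txn_data (code char(2) not null, payload varchar(255))
    partition by list (code)
      (p_low  values ('01', '02') on seg1,
       p_high values ('03', '04') on seg2)

    Placing partitions on segments that sit on different devices is what buys parallel I/O. Each segment only needs enough space for the rows that map to the partitions placed on it; if a segment fills up, only those partitions are affected.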

  • Synchronization not working for large table

    We are trying to synchronize an on-premise SQL table to SQL Azure, but each time it fails for different reasons. This time we encountered this exception:
    Sync failed with the exception "GetStatus failed with exception:
    SqlException Error Code: -2146232060 - SqlError Number:20, Message: The instance of SQL Server you attempted to connect to does not support encryption.
    For more information, provide tracing id ‘7ba9caff-c9d5-44c3-8f85-32e5ba0f9adb’ to customer support.
    Thanks.

    Hi,
    I am sure you have the answer by now. However, you can also check the thread below, which had a similar issue:
    http://social.msdn.microsoft.com/Forums/windowsazure/en-US/cab00a4f-603f-43f2-9a22-e0406db19a77/many-problems-with-sql-azure-data-sync
    Regards,
    Mekh.

  • Sql access for large table

    hi,
    if a table has more than 6,000,000 records, is there any way to optimise access so that it can be faster?
    i try not to use sql statements a lot. i try to manipulate data in internal tables, but the first time i also need a select statement to copy the data to an internal table.
    any advice.
    thanks

    Tips
    1) In SELECT, include all primary keys in the WHERE condition to fetch data.
    2) Declare the table without a header line and without the OCCURS statement, and use a work area to handle it.
    Ex:-
    TYPES: BEGIN OF gty_kna1,                      " General Data in Customer Master
             kunnr TYPE kna1-kunnr,                " Payer Number
             name1 TYPE kna1-name1,                " Name1
             telf1 TYPE kna1-telf1,                " Communication
             konzs TYPE kna1-konzs,                " Corporate Group
           END OF gty_kna1.
    DATA: gs_kna1 TYPE gty_kna1,                   " General Data in Customer Master
          gt_kna1 TYPE TABLE OF gty_kna1.          " General Data in Customer Master
    Note:
    •     In a SELECT statement, only the fields (field-list) which are needed are selected in the order that they reside on the database, thus network load is considerably less. The number of fields can be restricted in two ways using a field list in the SELECT clause of the statement or by using a view defined in ABAP/4 Dictionary.  The usage of view has the advantage of better reusability.
    •     SELECT SINGLE is used instead of SELECT-ENDSELECT loop when the entire key is available. SELECT SINGLE requires one communication with the database system, whereas SELECT-ENDSELECT needs two
    •     Always specify the conditions in the WHERE-clause instead of checking them with check-statements, the database system can then use an index (if possible) and the network load is considerably less.  You should not check the conditions with the CHECK statement because the contents of the whole table must be read from the database files into DBMS cache and transferred over the network. If the conditions are specified in the where clause DBMS reads exactly the needed data.
    •     Complex code is not embedded within a SELECT / ENDSELECT statement.
    •     No complex WHERE clauses, since complex where clauses are poison for the statement optimizer in any database system.
    •     For all frequently used SELECT statements, try to use an index. You always use an index if you specify (a generic part of) the index fields concatenated with logical ANDs in the Select statement's WHERE clause
    •     When loading data into Internal table, INTO TABLE OR APPENDING TABLE is used instead of a SELECT/APPEND combination. It is always faster to use the INTO TABLE version of a Select statement than to use APPEND statements.                      
    •     Use a select list with aggregate functions instead of checking and computing, when trying to find the maximum, minimum, sum and average value or the count of a database column.
    Rewards if useful...............
    Minal

  • New FAQ Entry on JVM Parameters for Large Cache Sizes

    I've posted a new [FAQ entry|http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#60] on JVM parameters for large cache sizes. The text of it is as follows:
    What JVM parameters should I consider when tuning an application with a large cache size?
    If your application has a large cache size, tuning the Java GC may be necessary. You will almost certainly be using a 64-bit JVM (i.e. -d64), the -server option, and setting your heap size with -Xmx and -Xms. Be sure that you don't set the cache size too close to the heap size, so that your application has plenty of room for its data and avoids excessive full GCs. We have found that the Concurrent Mark Sweep GC is generally the best in this environment since it yields more predictable GC results. This can be enabled with -XX:+UseConcMarkSweepGC.
    Best practice dictates that you disable System.gc() calls with -XX:+DisableExplicitGC.
    Other JVM options which may prove useful are -XX:NewSize (start with 512m or 1024m as a value), -XX:MaxNewSize (try 1024m as a value), and -XX:CMSInitiatingOccupancyFraction=55. NewSize is typically tuned in relationship to the overall heap size so if you specify this parameter you will also need to provide a -Xmx value. A convenient way of specifying this in relative terms is to use -XX:NewRatio. The values we've suggested are only starting points. The actual values will vary depending on the runtime characteristics of the application.
    You may also want to refer to the following articles:
    * Java SE 6 HotSpot Virtual Machine Garbage Collection Tuning
    * The most complete list of -XX options for Java 6 JVM
    * My Favorite Hotspot JVM Flags
    Edited by: Charles Lamb on Oct 22, 2009 9:13 AM


  • SEQUENCE Object for Small Tables Only?

    QUOTE from a recent thread: "Long term solution should be to use SEQUENCE with NO CACHE for small tables instead of identity and benefit from performance improvement for large tables."
    Thread:
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/cf63d145-7084-4371-bde0-eb3b917c7163/identity-big-jump-100010000-a-feature?forum=transactsql
    How about using SEQUENCE objects for large tables? Thanks.
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

    Well Erland, either you calm down your manager (with a martini?) or use NO CACHE.
    QUOTE: "This could cause a sequence to run out of numbers much more quickly than an IDENTITY value. It could also cause managers to become upset that values are missing, in which case they’ll need to simply get over it and accept that
    there will be numbers missing.
    If you need SQL Server to use every possible value, configure a cache setting of NO CACHE. This will cause the sequence to work much like the IDENTITY property. However, it will impact the sequence performance due to the additional metadata writes."
    LINK: Microsoft SQL Server: The Sequencing Solution
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
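    For illustration, a minimal T-SQL sketch (object names are invented; note that even NO CACHE does not guarantee gap-free numbers, since a rolled-back transaction still consumes the values it fetched):

        -- behaves much like IDENTITY, at the cost of a metadata write per value
        CREATE SEQUENCE dbo.InvoiceNumber
            AS BIGINT
            START WITH 1
            INCREMENT BY 1
            NO CACHE;

        -- usage:
        SELECT NEXT VALUE FOR dbo.InvoiceNumber;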

  • Large table

    Hi, I will have a large table with 1 million records inserted every day. The table schema has a couple of CLOBs. We are going to keep 1 year's data, so the table will eventually have about 365 million records. I have the following questions.
    1) Can this amount of data be handled by Oracle?
    2) Is there anything I can do for performance tuning, like partitioning, etc.?
    3) Any other advice for large tables? Any recommended online documents/articles?
    Thanks,
    Zhe

    Philippe Florent wrote:
    You work with RAC and Linux - do you think this OS is mature enough to build an HA solution? I got as many opinions as people I asked. I don't expect a definitive answer of course, just thoughts...

    Well, I am a bit biased, as I have used Linux since the pre-1.0 kernel days. :-)
    We have 5 RAC clusters - all running x86_64 h/w and Linux as the o/s. Linux today is equivalent to any other *nix system - from HP-UX to Solaris - and takes no backseat in terms of performance, flexibility and scalability. There seems to me to be a lot of FUD around Linux. Amongst the old *nix hands, it is seen as a snotty-nosed and immature new kid on the block. This is helped along by marketing propaganda from companies like Microsoft.
    Last year local Oracle asked me to do a Q&A session with one of their customers, as they were looking at RAC and wanted answers from a user/customer perspective and not Oracle sales. Halfway through it we got to talking about operating systems - and I was surprised to still find this idea that Linux is a hobbyist/immature type of operating system, not comparable to a "real" o/s like Solaris or HP-UX.
    There's a single fact that I use to try and counter this idea about Linux not being ready for running big commercial systems. The question: "What o/s do the vast majority of the 500 largest and fastest computer systems on this planet use?" The answer: Linux (http://www.top500.org/stats/list/35/osfam).
    455 of the world's fastest 500 computer clusters run Linux. That is 91%, up from around 84% this time last year. Not one of these runs HP-UX. Only 5 run Microsoft Windows, 2 run OpenSolaris, and 19 run AIX.
    So if Linux is the predominant choice of o/s to run these large clusters, why then would it not be "good enough" for a (many times smaller) corporate cluster? In these large clusters, everything is magnified: performance, administration, support, flexibility, robustness and stability, costs, scalability, and so on. And 91% of the time, Linux was selected to address these requirements and concerns.
    So I like to reverse the question and instead ask: "What are your reasons for not using Linux, and are these valid ones?" :-)
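    Coming back to question 2 of the original post, a minimal Oracle sketch of range partitioning by month (all names are invented for illustration; at roughly 1 million inserts per day, each partition holds about 30 million rows):

        -- monthly range partitions on the date column
        CREATE TABLE event_log (
            event_id    NUMBER NOT NULL,
            created_at  DATE   NOT NULL,
            payload     CLOB
        )
        PARTITION BY RANGE (created_at) (
            PARTITION p2011_01 VALUES LESS THAN (DATE '2011-02-01'),
            PARTITION p2011_02 VALUES LESS THAN (DATE '2011-03-01')
        );

        -- add next month's partition ahead of time:
        ALTER TABLE event_log
            ADD PARTITION p2011_03 VALUES LESS THAN (DATE '2011-04-01');

        -- enforcing the 1-year retention then becomes a fast metadata
        -- operation instead of a DELETE over ~30 million rows:
        ALTER TABLE event_log DROP PARTITION p2011_01;

    Local indexes on such a table are partitioned the same way; the Oracle VLDB and Partitioning Guide covers the trade-offs in detail.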

  • For which tables we allowed buffer

    Hi gurus,
    Please let me know for which tables buffering is allowed.
    Thank you,
    kals.

    Hi
    Table buffering
    Advantages of buffering
    Concept of buffering
    Buffering types
    Buffer synchronization
    Database access using the buffer concept
    Buffering allows you to access data quicker by letting you
    access it from the application server instead of the database.
    Advantages of buffering
    Table buffering increases the performance when the records of the table are read.
    As records of a buffered table are read directly from the local buffer of the application server on which the accessing transaction is running, time required to access data is greatly reduced. The access improves by a factor of 10 to 100 depending on the structure of the table and on the exact system configuration.
    If the storage requirements in the buffer increase due to further data, the data that has not been accessed for the longest time is displaced. This displacement takes place asynchronously at certain times which are defined dynamically based on the buffer accesses. Data is only displaced if the free space in  the buffer is less than a predefined value or the quality of the access is not satisfactory at this time.
    Entering $TAB in the command field resets the table buffers on the corresponding application server. Only use this command if there are inconsistencies in the buffer. In large systems, it can take several hours to fill the buffers. The performance is considerably reduced during this time.
    Concept of buffering
    The R/3 System manages and synchronizes the buffers on the individual application servers. If an application program accesses data of a table, the database interface determines whether this data lies in the buffer of the application server. If this is the case, the data is read directly from the buffer. If the data is not in the buffer of the application server, it is read from the database and loaded into the buffer. The buffer can therefore satisfy the next access to this data.
    The buffering type determines which records of the table are loaded into the buffer of the application server when a record of the table is accessed. There are three different buffering types.
    With full buffering, all the table records are loaded into the buffer when one record of the table is accessed.
    With generic buffering, all the records whose left-justified part of the key is the same are loaded into the buffer when a table record is accessed.
    With single-record buffering, only the record that was accessed is loaded into the buffer.
    Buffering types
    With full buffering, the table is either completely or not at all in the buffer. When a record of the table is accessed, all the records of the table are loaded into the buffer.
    When you decide whether a table should be fully buffered, you must take the table size, the number of read accesses and the number of write accesses into consideration. The smaller the table is, the more frequently it is read and the less frequently it is written, the better it is to fully buffer the table.
    Full buffering is also advisable for tables having frequent accesses to records that do not exist. Since all the records of the table reside in the buffer, it is already clear in the buffer whether or not a record exists.
    The data records are stored in the buffer sorted by table key. When you access the data with SELECT, only fields up to the last specified key field can be used for the access. The left-justified part of the key should therefore be as large as possible for such accesses. For example, if the first key field is not defined, the entire table is scanned in the buffer. Under these circumstances, a direct access to the database could be more efficient if there is a suitable secondary index there.
    With generic buffering, all the records whose generic key fields agree with this record are loaded into the buffer when one record of the table is accessed. The generic key is a left-justified part of the primary key of the table that must be defined when the buffering type is selected. The generic key should be selected so that the generic areas are not too small, which would result in too many generic areas. If there are only a few records for each generic area, full buffering is usually preferable for the table. If you choose too large a generic key, too much data will be invalidated if there are changes to table entries, which would have a negative effect on the performance.
    A table should be generically buffered if only certain generic areas of the table are usually needed for processing.
    Client-dependent, fully buffered tables are automatically generically buffered. The client field is the generic key. It is assumed that not all of the clients are being processed at the same time on one application server. Language-dependent tables are a further example of generic buffering. The generic key includes all the key fields up to and including the language field.
    The generic areas are managed in the buffer as independent objects. The generic areas are managed analogously to fully buffered tables. You should therefore also read the information about full buffering.
    Single-record buffering is recommended particularly for large tables in which only a few records are accessed repeatedly with SELECT SINGLE. All the accesses to the table that do not use SELECT SINGLE bypass the buffer and directly access the database.
    If you access a record that was not yet buffered using SELECT SINGLE, there is a database access to load the record. If the table does not contain a record with the specified key, this record is recorded in the buffer as non-existent. This prevents a further database access if you make another access with the same key.
    You only need one database access to load a table with full buffering, but you need several database accesses with single-record buffering. Full buffering is therefore generally preferable for small tables that are frequently accessed.
    Synchronizing local buffers
    The table buffers reside locally on each application server in the system. However, this makes it necessary for the buffer administration to transfer all changes made to buffered objects to all the application servers of the system.
    If a buffered table is modified, it is updated synchronously in the buffer of the application server from which the change was made. The buffers of the whole network, that is, the buffers of all the other application servers, are synchronized with an asynchronous procedure.
    Entries are written in a central database table (DDLOG) after each table modification that could be buffered. Each application server reads these entries at fixed time intervals.
    If entries are found that show a change to the data buffered by this server, this data is invalidated. If this data is accessed again, it is read directly from the database. In such an access, the table can then be loaded to the buffer again.

  • Need help in optimisation for a select query on a large table

    Hi Gurus
    Please help in optimising the code. It takes 1 hour for 3,000-4,000 records - it's very slow.
    My SELECT reads from a table which contains 10 million records, retrieving values by comparing against my internal table, which has 3,000-4,000 records.
    I am pasting the code below. Please help.
    DATA: wa_i_tab1 TYPE tys_tg_1.
    DATA: i_tab TYPE STANDARD TABLE OF tys_tg_1.
    DATA: wa_result_pkg  TYPE tys_tg_1,
          wa_result_pkg1 TYPE tys_tg_1.

    * /BIC/PZREB_SDAT contains 10 million records;
    * RESULT_PACKAGE contains 3,000-4,000 records.
    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
      FROM /BIC/PZREB_SDAT
      INTO CORRESPONDING FIELDS OF TABLE i_tab
      FOR ALL ENTRIES IN RESULT_PACKAGE
      WHERE /BIC/ZREB_SDAT = RESULT_PACKAGE-/BIC/ZREB_SDAT
        AND AGREEMENT      = RESULT_PACKAGE-AGREEMENT
        AND /BIC/ZLITEM1   = RESULT_PACKAGE-/BIC/ZLITEM1.

    SORT RESULT_PACKAGE BY AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
    SORT i_tab BY AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.

    LOOP AT RESULT_PACKAGE INTO wa_result_pkg.
      READ TABLE i_tab INTO wa_i_tab1
        WITH KEY /BIC/ZREB_SDAT = wa_result_pkg-/BIC/ZREB_SDAT
                 AGREEMENT      = wa_result_pkg-AGREEMENT
                 /BIC/ZLITEM1   = wa_result_pkg-/BIC/ZLITEM1.
      IF sy-subrc = 0.
        MOVE wa_i_tab1-/BIC/ZSETLRUN TO wa_result_pkg-/BIC/ZSETLRUN.
        wa_result_pkg1-/BIC/ZSETLRUN = wa_result_pkg-/BIC/ZSETLRUN.
        MODIFY RESULT_PACKAGE FROM wa_result_pkg1
          TRANSPORTING /BIC/ZSETLRUN.
      ENDIF.
      CLEAR: wa_i_tab1, wa_result_pkg1, wa_result_pkg.
    ENDLOOP.

    Hi,
    1) Check whether the RESULT_PACKAGE internal table contains duplicate records with respect to the fields in the WHERE condition; if so, remove them as shown below.
    2) Use INTO TABLE instead of INTO CORRESPONDING FIELDS OF TABLE.
    Refer to the code below:

    * copy and de-duplicate the driver table first
    * (RESULT_PACKAGE1 is assumed to have the same type as RESULT_PACKAGE)
    RESULT_PACKAGE1[] = RESULT_PACKAGE[].
    SORT RESULT_PACKAGE1 BY /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    DELETE ADJACENT DUPLICATES FROM RESULT_PACKAGE1
      COMPARING /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.

    * guard against an empty driver table: FOR ALL ENTRIES with an
    * empty table would select ALL 10 million rows
    IF RESULT_PACKAGE1[] IS NOT INITIAL.
      SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
        FROM /BIC/PZREB_SDAT
        INTO TABLE i_tab
        FOR ALL ENTRIES IN RESULT_PACKAGE1
        WHERE /BIC/ZREB_SDAT = RESULT_PACKAGE1-/BIC/ZREB_SDAT
          AND AGREEMENT      = RESULT_PACKAGE1-AGREEMENT
          AND /BIC/ZLITEM1   = RESULT_PACKAGE1-/BIC/ZLITEM1.
    ENDIF.

    One more thing: since you are reading from a 10-million-record table, use PACKAGE SIZE in your SELECT query.
    Also refer to the following link: For All Entries for 1 Million Records
    Regards,
    Dhina..
    Edited by: Dhina DMD on Sep 15, 2011 7:17 AM
