HS connection to MySQL fails for large table

Hello,
I have set up an HS to a MySql 3.51 dabatabe using an ODBC DNS. My Oracle box has version 10.2.0.1 running in Windows 2003 R2. MySQL version is 4.1.22 running on a different machine with the same OS.
I completed the connection through a database link, which works fine in SQL*Plus when selecting small MySQL tables. However, I keep getting an out-of-memory error when selecting certain large tables from the MySQL database. Previously, I had tested the DSN and run the same SELECT in Access, and it didn't give any error. This is the error thrown by SQL*Plus:
SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
select * from progressnotes@mysql_rmg where "encounterID" = 224720
ERROR at line 1:
ORA-00942: table or view does not exist
[Generic Connectivity Using ODBC][MySQL][ODBC 3.51
    Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL server during query
(SQL State: S1T00; SQL Code: 2013)
ORA-02063: preceding 2 lines from MYSQL_RMG
I traced the HS connection and here is the result from the .trc file:
Oracle Corporation --- THURSDAY JUN 12 2008 11:19:51.809
Heterogeneous Agent Release
10.2.0.1.0
(0) [Generic Connectivity Using ODBC] version: 4.6.1.0.0070
(0) connect string is: defTdpName=MYSQL_RMG;SYNTAX=(ORACLE8_HOA, BASED_ON=ORACLE8,
(0) IDENTIFIER_QUOTE_CHAR="",
(0) CASE_SENSITIVE=CASE_SENSITIVE_QUOTE);BINDING=<navobj><binding><datasources><da-
(0) tasource name='MYSQL_RMG' type='ODBC'
(0) connect='MYSQL_RMG'><driverProperties/></datasource></datasources><remoteMachi-
(0) nes/><environment><optimizer noFlattener='true'/><misc year2000Policy='-1'
(0) consumerApi='1' sessionBehavior='4'/><queryProcessor parserDepth='2000'
(0) tokenSize='1000' noInsertParameterization='true'
noThreadedReadAhead='true'
(0) noCommandReuse='true'/></environment></binding></navobj>
(0) ORACLE GENERIC GATEWAY Log File Started at 2008-06-12T11:19:51
(0) hoadtab(26); Entered.
(0) Table 1 - PROGRESSNOTES
(0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]MySQL client ran out of
(0) memory (SQL State: S1T00; SQL Code: 2008)
(0) (Last message occurred 2 times)
(0)
(0) hoapars(15); Entered.
(0) Sql Text is:
(0) SELECT * FROM "PROGRESSNOTES"
(0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL
(0) server during query (SQL State: S1T00; SQL Code: 2013)
(0) (Last message occurred 2 times)
(0)
(0) [A00D] Failed to open table MYSQL_RMG:PROGRESSNOTES
(0)
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
(0) (Last message occurred 2 times)
(0)
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
(0) (Last message occurred 2 times)
(0)
(0) [S1000] [9013]General error in nvITrans_Commit - rc = -1. Please refer to the
(0) log file for details.
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
(0) (Last message occurred 2 times)
(0)
(0) [S1000] [9013]General error in nvITrans_Rollback - rc = -1. Please refer to
(0) the log file for details.
(0) Closing log file at THU JUN 12 11:20:38 2008.
I have read the MySQL documentation, and apparently there's a "Don't Cache Result (forward only cursors)" option in the ODBC DSN that needs to be checked so that results are kept on the MySQL server side instead of being cached by the driver. But checking that option doesn't work for the HS connection. Instead, the SQL*Plus session throws the following message when selecting the same large table:
SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
select * from progressnotes@mysql_rmg where "encounterID" = 224720
ERROR at line 1:
ORA-02068: following severe error from MYSQL_RMG
ORA-28511: lost RPC connection to heterogeneous remote agent using
SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.0.120)(PORT=1521))(CONNECT_DATA=(SID=MYSQL_RMG)))
Curiously enough, after checking that option, the Access connection through the ODBC DSN seems to improve!
Is there an additional parameter that needs to be set up in inithsodbc.ora, perhaps? These are the current HS parameters:
# HS init parameters
HS_FDS_CONNECT_INFO = MYSQL_RMG
HS_FDS_TRACE_LEVEL = ON
My SID_LIST_LISTENER entry is:
(SID_DESC =
  (PROGRAM = HSODBC)
  (SID_NAME = MYSQL_RMG)
  (ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
)
Finally, here is my TNSNAMES.ORA entry for the HS connection:
MYSQL_RMG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.0.120)(PORT = 1521))
    (CONNECT_DATA =
      (SID = MYSQL_RMG)
    )
    (HS = OK)
  )
Your advice will be greatly appreciated,
Thanks,
Luis

First of all, please be aware that HSODBC V10 has been desupported; DG4ODBC should be used instead.
The root cause of the problem you describe could be a timeout in the ODBC driver (especially given your observation that it happens only for larger tables):
(0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
(0) Code: 2006)
indicates that the driver or the database aborts the connection due to a timeout.
Check the wait_timeout MySQL variable on the server and increase it.
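For reference, here is a minimal sketch of how to inspect and raise it on the MySQL side (28800 is only an illustrative value; SET GLOBAL affects connections opened from then on, so persist the setting in my.ini under [mysqld] if it should survive a server restart):
SHOW VARIABLES LIKE 'wait_timeout';   -- current value, in seconds
SET GLOBAL wait_timeout = 28800;      -- raise it for new connections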

Similar Messages

  • Gather table stats taking longer for Large tables

    Version : 11.2
    I've noticed that gathering stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
    Since the row count needs to be calculated, a big table's stats collection would understandably take slightly longer (it runs SELECT COUNT(*) internally).
    But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats? Apart from the row count and index info, what other information is gathered by gather table stats?
    Does table size actually matter for stats collection?

    Max wrote:
    Version : 11.2
    I've noticed that gathering stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
    Since the row count needs to be calculated, a big table's stats collection would understandably take slightly longer (it runs SELECT COUNT(*) internally).
    But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats? Apart from the row count and index info, what other information is gathered by gather table stats?
    09:40:05 SQL> desc user_tables
    Name                            Null?    Type
    TABLE_NAME                       NOT NULL VARCHAR2(30)
    TABLESPACE_NAME                        VARCHAR2(30)
    CLUSTER_NAME                             VARCHAR2(30)
    IOT_NAME                             VARCHAR2(30)
    STATUS                              VARCHAR2(8)
    PCT_FREE                             NUMBER
    PCT_USED                             NUMBER
    INI_TRANS                             NUMBER
    MAX_TRANS                             NUMBER
    INITIAL_EXTENT                         NUMBER
    NEXT_EXTENT                             NUMBER
    MIN_EXTENTS                             NUMBER
    MAX_EXTENTS                             NUMBER
    PCT_INCREASE                             NUMBER
    FREELISTS                             NUMBER
    FREELIST_GROUPS                        NUMBER
    LOGGING                             VARCHAR2(3)
    BACKED_UP                             VARCHAR2(1)
    NUM_ROWS                             NUMBER
    BLOCKS                              NUMBER
    EMPTY_BLOCKS                             NUMBER
    AVG_SPACE                             NUMBER
    CHAIN_CNT                             NUMBER
    AVG_ROW_LEN                             NUMBER
    AVG_SPACE_FREELIST_BLOCKS                   NUMBER
    NUM_FREELIST_BLOCKS                        NUMBER
    DEGREE                              VARCHAR2(10)
    INSTANCES                             VARCHAR2(10)
    CACHE                                  VARCHAR2(5)
    TABLE_LOCK                             VARCHAR2(8)
    SAMPLE_SIZE                             NUMBER
    LAST_ANALYZED                             DATE
    PARTITIONED                             VARCHAR2(3)
    IOT_TYPE                             VARCHAR2(12)
    TEMPORARY                             VARCHAR2(1)
    SECONDARY                             VARCHAR2(1)
    NESTED                              VARCHAR2(3)
    BUFFER_POOL                             VARCHAR2(7)
    FLASH_CACHE                             VARCHAR2(7)
    CELL_FLASH_CACHE                        VARCHAR2(7)
    ROW_MOVEMENT                             VARCHAR2(8)
    GLOBAL_STATS                             VARCHAR2(3)
    USER_STATS                             VARCHAR2(3)
    DURATION                             VARCHAR2(15)
    SKIP_CORRUPT                             VARCHAR2(8)
    MONITORING                             VARCHAR2(3)
    CLUSTER_OWNER                             VARCHAR2(30)
    DEPENDENCIES                             VARCHAR2(8)
    COMPRESSION                             VARCHAR2(8)
    COMPRESS_FOR                             VARCHAR2(12)
    DROPPED                             VARCHAR2(3)
    READ_ONLY                             VARCHAR2(3)
    SEGMENT_CREATED                        VARCHAR2(3)
    RESULT_CACHE                             VARCHAR2(7)
    09:40:10 SQL>
    Does table size actually matter for stats collection?
    Yes.
    Handle:     Max
    Status Level:     Newbie
    Registered:     Nov 10, 2008
    Total Posts:     155
    Total Questions:     80 (49 unresolved)
    why so many unanswered questions?
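    For completeness: beyond the row count and the user_tables columns shown above, gather_table_stats also collects column-level statistics (number of distinct values, min/max, and histograms per method_opt), and with cascade it gathers index statistics too; on wide tables those often dominate the runtime. A hedged sketch using the 11g one-pass auto sample (BIG_TABLE is a hypothetical name):
    begin
      dbms_stats.gather_table_stats(
        ownname          => user,
        tabname          => 'BIG_TABLE',                  -- hypothetical table name
        estimate_percent => dbms_stats.auto_sample_size,  -- 11g one-pass sampling, much faster than a full compute
        method_opt       => 'for all columns size auto',  -- histograms only where column usage suggests them
        cascade          => true);                        -- also gathers stats on the table's indexes
    end;
    /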

  • AIX SSH Connectivity, existence check failed for bin/bash

    Hello,
    I received this error on AIX 7, Oracle RAC 11g R2, during the Oracle RAC AIX SSH connectivity check:
    existence check failed for bin/bash on node2
    But bin/bash exists. Any idea?
    Regards,
    siyavus

    Does it really say "bin/bash"? It should have a slash in front of bin: "/bin/bash"
    Can you login to node 2 and do:
    echo $PATH
    ls -l /bin/bash

  • Synchronization not working for large table

    We are trying to synchronize an on-premises SQL table to SQL Azure, but each time it fails for different reasons. This time we encountered this exception:
    Sync failed with the exception "GetStatus failed with exception:
    SqlException Error Code: -2146232060 - SqlError Number:20, Message: The instance of SQL Server you attempted to connect to does not support encryption.
    For more information, provide tracing id ‘7ba9caff-c9d5-44c3-8f85-32e5ba0f9adb’ to customer support.
    Thanks.

    Hi,
    I am sure you have the answer by now. However, you can also have a check on the below thread which had a similar issue.
    http://social.msdn.microsoft.com/Forums/windowsazure/en-US/cab00a4f-603f-43f2-9a22-e0406db19a77/many-problems-with-sql-azure-data-sync
    Regards,
    Mekh.

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
    SELECT members.*
    FROM members,
        ( SELECT RID, rownum rnum
          FROM
              ( SELECT rowid as RID
                FROM members
                WHERE last_name = 'Smith'
                ORDER BY joindate )
          WHERE rownum <= 100 )
    WHERE rnum >= 1
        and RID = members.rowid
    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
    SELECT rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    or:
    SELECT /*+ first_rows(100) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster, as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
    SELECT members.*       -- Select all data from members table
    FROM members,          -- members table added to FROM clause
        ( SELECT RID, rownum rnum
          FROM
              ( SELECT /*+ index(members, joindate_idx) */ rowid as RID   -- Hint is ignored now that I am joining in the outer query
                FROM members
                WHERE last_name = 'Smith'
                ORDER BY joindate )
          WHERE rownum <= 100 )
    WHERE rnum >= 1
        and RID = members.rowid    -- Merge the members table on the rowid we pulled from the inner queries
    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table; there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
    Not sure what advantage you are trying to show here. But considering that we are talking about pagination query here and order of records is important, your 2 queries will not always generate output in same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
    SQL> spool off
    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOPS join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
    The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join, but does not affect the other query.
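    Putting the two together, here is a minimal sketch of the safe form of the original poster's query: the same rowid-join approach, with the outer ORDER BY the reply calls for.
    SELECT m.*
    FROM members m,
         ( SELECT rid, rownum rnum
           FROM ( SELECT rowid rid
                  FROM members
                  WHERE last_name = 'Smith'
                  ORDER BY joindate )
           WHERE rownum <= 100 ) p
    WHERE p.rnum >= 1
      AND m.rowid = p.rid
    ORDER BY m.joindate   -- guarantees the final row order regardless of the join method chosen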

  • Oracle 10g Discoverer Reports & export to xls fails for large reports

    Hi,
    We have the following configuration:
    1: RHEL 5.4
    2: Discoverer: Version 10.1.2.48.18
    3: Oracle 10g Apps Version: 10.1.2.0.2
    Issue:
    Most small reports work fine, but when large Discoverer reports are executed, the page keeps refreshing for 15-20 hours with no output; the same happens for export to xls.
    But the same reports work fine in Oracle 9i for the same data volume.
    Observations:
    When the processes are monitored on Linux with the top command, we observe that the Discoverer process dis51ws dies for large reports after 1-2 minutes, and the page keeps refreshing with no output. For 1-2 minutes it consumes 50-80% CPU utilisation, then the process disappears and the CPU is 80% idle.
    It seems that because 10g Apps is installed on RHEL 5.4, a non-certified OS may be causing the issue.
    Can anyone add more input in this regard?
    We have checked the logs; below are the log details.
    The logs below give "Logkeys: exceptions discoiv.servlet_exceptions" for this report:
    1:
    OC4J~OC4J_BI_Forms~default_island~1:
    10/04/11 12:11:29 Oracle Application Server Containers for J2EE 10g (10.1.2.0.2) initialized
    10/04/11 12:11:59 Using oracle.reports.util.EnvironmentGlobal class
    10/04/11 14:23:37 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:23:38 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114240115278
    10/04/11 14:25:42 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:25:42 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114254315439
    10/04/11 14:28:53 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:28:54 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114285615691
    10/04/11 14:29:13 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:29:13 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114291315728
    10/04/11 14:32:35 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:32:35 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114323615949
    10/04/11 14:32:48 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:32:48 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114324815982
    10/04/11 14:34:44 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:34:44 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114344616128
    10/04/11 14:34:55 Logkeys: exceptions discoiv.servlet_exceptions
    10/04/11 14:34:55 Discoverer Model - 10.1.2.48.18
    Session ID:2010041114345516155
    10/04/11 14:36:25 Tutalii: /oracle10gas/app/oracle10g/discoverer/lib/discoverer5.jar archive
    2:
    Discoverer~SessionServer~12
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Active Eul: EULADMIN
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    Calling GetData on Preference Repository
    DCSCORBAInterface::Delete called
    DCSCORBAInterface destructor called
    DCSCORBAInterface::Delete called
    DCSCORBAInterface destructor called
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    ASSERT [email protected]:436
    3:
    application.log
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Finding async request action forward
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Calling externalize
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_TOOL: dvtb xk-1ml versionzw1.0w kyxdvtbyxbisltyxbicho vzwwjyxjbisltyxjdvtby
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_VIEW: dv xk-1ml versionzw1.0w kyxdv bazw0w cszw25wyxpc vzw1wjyxjdvy
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_VIEW: lc
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_HS: dvhs
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.WorksheetModel.getStateString EXT_DS: dv_ds &qls_z!2=New GL Report.Clndr Id&arq=false&qls_x!14=Sheet 1.Closing Balance (Credit)&qls_x!3=New GL Report.Voucher No&qls_z!3=New GL Report.Frm Prd&fm=xml&qls_z!4=New GL Report.To Prd&qls_x!2=New GL Report.Prd Desc&qls_x!11=Sheet 1.Debit Amount&qls_x!4=New GL Report.Gl Voucher No&qls_x!9=Sheet 1.Opening Balance Debit&tss_s!0=New GL Report.Acct Id,lh,group,false&qls_x!1=New GL Report.Acct Sdesc&qls_z!1=New GL Report.Loc Desc&qls_x!10=Sheet 1.Opening Balance (Credit)&qls_x!12=Sheet 1.Credit Amount&aow=false&qls_x!7=New GL Report.Shrt Code&qls_x!13=Sheet 1.Closing Balance (Debit)&qls_x!6=New GL Report.Pmt Rct Dt&qls_x!8=New GL Report.Dtl Nrtn&sss=true&qls_x!5=New GL Report.Voucher Dt&qls_x!0=New GL Report.Acct Id&qls_z!0=New GL Report.Loc Id&ddsver=1
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.model.ViewerModelImpl.externalize [EXTERN_STATE]: $wksht$%24dv_ds%24%26qls_z%212%3DNew+GL+Report.Clndr+Id%26arq%3Dfalse%26qls_x%2114%3DSheet+1.Closing+Balance+%28Credit%29%26qls_x%213%3DNew+GL+Report.Voucher+No%26qls_z%213%3DNew+GL+Report.Frm+Prd%26fm%3Dxml%26qls_z%214%3DNew+GL+Report.To+Prd%26qls_x%212%3DNew+GL+Report.Prd+Desc%26qls_x%2111%3DSheet+1.Debit+Amount%26qls_x%214%3DNew+GL+Report.Gl+Voucher+No%26qls_x%219%3DSheet+1.Opening+Balance+Debit%26tss_s%210%3DNew+GL+Report.Acct+Id%2Clh%2Cgroup%2Cfalse%26qls_x%211%3DNew+GL+Report.Acct+Sdesc%26qls_z%211%3DNew+GL+Report.Loc+Desc%26qls_x%2110%3DSheet+1.Opening+Balance+%28Credit%29%26qls_x%2112%3DSheet+1.Credit+Amount%26aow%3Dfalse%26qls_x%217%3DNew+GL+Report.Shrt+Code%26qls_x%2113%3DSheet+1.Closing+Balance+%28Debit%29%26qls_x%216%3DNew+GL+Report.Pmt+Rct+Dt%26qls_x%218%3DNew+GL+Report.Dtl+Nrtn%26sss%3Dtrue%26qls_x%215%3DNew+GL+Report.Voucher+Dt%26qls_x%210%3DNew+GL+Report.Acct+Id%26qls_z%210%3DNew+GL+Report.Loc+Id%26ddsver%3D1%24dv%24xk-1ml+versionzw1.0w+kyxdv+bazw0w+cszw25wyxpc+vzw1wjyxjdvy%24wd%24false%24lc%24%24dvtb%24xk-1ml+versionzw1.0w+kyxdvtbyxbisltyxbicho+vzwwjyxjbisltyxjdvtby%24wvs%241101%24dvhs%24$cn$&vct=svd&cnk=cf_a101$ap$%26df%3D%26l%3D%26s%3D%26nc%3D%26dl%3D$expl$&sp=&node=&event=focus&state=(117)&root=63&wbt=2$prid$NEW_GL_REPORT%2F31$ctyp$viewer
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Storing state
    10/04/11 14:37:25 discoverer: [INFO] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Externalize Perf: 8ms
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Saving Attributes
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Saving ApplicationRequest
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Checking for SSO mode
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Saving errors, messages
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Returning final forward: "/ExportProgress.uix" redirect: "false"
    10/04/11 14:37:25 discoverer: [TRACE] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute ----------------------- End Request --------------------------
    10/04/11 14:37:25 discoverer: [INFO] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.framework.ApplicationController.execute Total Request Time in AppCtrl: 18
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] org.apache.struts.action.RequestProcessor.processForwardConfig processForwardConfig(ForwardConfig[name=long running operation,path=/ExportProgress.uix,redirect=false,contextRelative=false])
    10/04/11 14:37:25 discoverer: [DEBUG] [AJPRequestHandler-ApplicationServerThread-17] oracle.discoverer.applications.viewer.view.DiscovererPageBroker.isCacheable Setting cacheable to: true
    4: XML log:
    log2010041114285615691.xml
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcstim.cpp</FILE_NAME> <LINE_NUMBER>184</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:29:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:29:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="NOTIFICATION"></MSG_TYPE><MSG_LEVEL>4</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[Timer started.]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcstim.cpp</FILE_NAME> <LINE_NUMBER>162</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:29:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:29:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="TRACE"></MSG_TYPE><MSG_LEVEL>5</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[Return DCSModelInterface::SendReceiveData(kScheduleInterface, inTable, outTable)
    outTable = DCITable
         Length=0
    ]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcsmdli.cpp</FILE_NAME> <LINE_NUMBER>259</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:29:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:29:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="NOTIFICATION"></MSG_TYPE><MSG_LEVEL>4</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[DCSModelInterface::SendReceiveData(kScheduleInterface)]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS </MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcsmdli.cpp</FILE_NAME> <LINE_NUMBER>226</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:29:13 2010
    </LOG_TIME> <LOG_SIZE>0</LOG_SIZE> <EXTRA_INFO><MethodEnd duration="0.1" sizeChange="0" >
    real 0m0.1s
    user 0m0.900s
    sys 0m0.109s
    </MethodEnd></EXTRA_INFO> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:37:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="NOTIFICATION"></MSG_TYPE><MSG_LEVEL>4</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[Timer stopped.]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcstim.cpp</FILE_NAME> <LINE_NUMBER>184</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:37:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    <MESSAGE><HEADER><TSTZ_ORIGINATING>2010-04-11T14:37:13+00:00</TSTZ_ORIGINATING> <COMPONENT_ID>DISCOVER</COMPONENT_ID> <MSG_TYPE TYPE="NOTIFICATION"></MSG_TYPE><MSG_LEVEL>4</MSG_LEVEL> <HOST_ID>2010041114285615691</HOST_ID> <MODULE_ID>DCS</MODULE_ID> </HEADER> <PAYLOAD><MSG_TEXT><![CDATA[Timer started.]]></MSG_TEXT> <SUPPL_DETAIL><![CDATA[
    <MSG_GROUP>DCS</MSG_GROUP> <PROCESS_ID>15691</PROCESS_ID> <FILE_NAME>dcstim.cpp</FILE_NAME> <LINE_NUMBER>162</LINE_NUMBER> <THREAD_ID>-1283799360</THREAD_ID> <LOG_TIME>Sun Apr 11 14:37:13 2010
    </LOG_TIME> ]]>
    </SUPPL_DETAIL> </PAYLOAD> </MESSAGE>
    Regards,

    Hi,
    As per Note 466697.1, it's a memory error, and we need to increase MaxVirtualDiskMem and MaxVirtualHeapMem.
    But we have already gradually increased MaxVirtualDiskMem and MaxVirtualHeapMem to the very high values below, and the issue remains the same.
    As per the note we are getting the "Logkeys: exceptions discoiv.servlet_exceptions" error, but after that we are not getting the error below:
    Unexpected error in state machine: java.lang.OutOfMemoryError
    Hence I presume that it's a different issue rather than memory.
    Below are the pref.txt values:
    CacheFlushPercentage          = 25          # Percent of cache flushed if the cache is full. valid values 0 - 100%.
    MaxVirtualDiskMem          = 9294967296     # Maximum amount of disk memory allowed for the data cache. Should be greater than or equal to MaxVirtualHeapMem
    MaxVirtualHeapMem          = 4294967296     # Maximum amount of heap memory allowed for the data cache.
    QueryBehavior = 0          # Action to take after opening a workbook (0 = Run Query Automatically, 1 = Don't Run Query, 2 = Ask for Confirmation)
    We have also raised an SR, and all the settings below were tried as per the SR, except for applying a recent patch:
    ====================================================================
    Discoverer performance is largely determined by how well the database
    has been designed and tuned for queries.
    A well-designed database will perform significantly better than a poorly
    designed database.
    Workbook design can also affect query performance.
    1. Apply the latest Discoverer patch as documented in <<Note:237607.1>>,
    'ALERT: Required and Recommended Patch Levels For All Discoverer Versions'.
    2. Increase the maximum JVM heap memory:
    In general, the default values for the minimum heap (-Xms) memory and
    maximum heap (-Xmx) memory are sufficient.
    However, if your organization consistently runs large Discoverer queries
    then you may benefit from increasing the maximum heap memory from the
    default values.
    This is recommended if your users are typically running large queries via
    Discoverer Viewer.
    Increasing the JVM memory can help to avoid "java.lang.OutOfMemoryError"
    in Discoverer Viewer:
    Please see <<Note 563960.1>>, Best Practice: Configuring The
    OC4J_BI_Forms JVM For
    Discoverer Viewer/Portlet 10g Performance And Stability for specific
    details.
    3. Disable Query Prediction:
    Query Prediction provides an estimate of the time required to retrieve
    the information in a query.
    The Query Prediction appears before the query and consumes time.
    Edit the <oracle_home>/discoverer/util/pref.txt on the middle-tier
    server and set:
    QPPEnable=0
    Also set:
    QPPObtainCostMethod = 0
    4. Uncheck the 'Enable fantrap detection' checkbox.
    When the box is checked, every query generated by Discoverer is
    interrogated. Discoverer will detect a fan trap and rewrite the query to
    ensure that the aggregation is done at the correct level.
    Please refer to <<Note:210553.1>>, Oracle BI Discoverer: Fan Trap
    Resolution - Correct Results Everytime, for more information on fan traps.
    To disable, in Plus go to Tools -> Options -> Advanced -> Fan Trap
    settings.
    In Viewer go to Preferences and uncheck the box.
    5. Disable Materialized Views/Summaries
    In pref.txt add parameter:
    MaterializedViewRedirectionBehavior = 0
    Value equal to 0 ensures that Materialized View (MV) Redirection is
    always performed when MVs are available.
    6. Improve query performance by optimizing the SQL.
    In pref.txt modify/add following parameters:
    SQLFlatten = 0
    SQLItemTrim = 0
    SQLJoinTrim = 0
    UseOptimizerHints = 0
    If value of SQLFlatten is 1 then Discoverer will merge inline views in
    the query SQL where ever possible.
    In case of SQLItemTrim, Discoverer will remove unused folder items from
    the query SQL where possible and for SQLJoinTrim Discoverer will remove
    unnecessary joins from the query where possible.
    UseOptimizerHints will add optimizer hints to the SQL if set > 0.
    Unnecessarily making Discoverer perform these checks consumes resources
    and, rather than increasing performance, may reduce it. So unless you
    feel these checks have to be performed for your requirements, assign
    zero to these parameters.
    7. When Discoverer builds a query, Discoverer makes a database security
    check to confirm that the user has access to the tables referenced in
    the folders. Avoiding this check can save time.
    So in pref.txt, under the Database section, add:
    ObjectsAlwaysAccessible = 1
    8. Whenever you are creating conditions, ensure that you match the Case.
    This in turn can reduce the time Discoverer spends on changing the Case
    and matching.
    For example:
    Upper(Department) IN (Upper('VIDEO SALES'))
    9. Ensure that summaries are refreshed periodically in Discoverer
    Administrator.
    10. Increase the amount of memory available for the Discoverer data cache. Please refer to <<Note 245752.1>>, Explaining Oracle BI Discoverer
    Session Memory Management
    And Server Cache Settings.
    11. Performance may be enhanced by enabling OracleAS Web Cache.
    ==================================================================
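    Pulled together, the pref.txt edits suggested in that list amount to the sketch below (values as given above; pref.txt changes normally need to be applied, e.g. via the applypreferences script in the same util directory, and the Discoverer services restarted before they take effect):
    QPPEnable                           = 0
    QPPObtainCostMethod                 = 0
    MaterializedViewRedirectionBehavior = 0
    SQLFlatten                          = 0
    SQLItemTrim                         = 0
    SQLJoinTrim                         = 0
    UseOptimizerHints                   = 0
    ObjectsAlwaysAccessible             = 1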
    Thanks for your reply.
    Regards,

  • Postgres' LIMIT .. OFFSET for large table

    Hi!
    I have a really large table (some millions of rows) which I'd like to present on a web page. I let the user choose a limit, say 25 lines per page, and present some buttons to go one page forward or backwards.
    Some years ago, I have done this using PostgreSQL. There's an easy way to do it using LIMIT .. OFFSET. In Oracle, there's no such functionality.
    Currently, my 'workaround' looks like this (a bit more complex in reality):
    SELECT * FROM (
      SELECT
        ROW_NUMBER() OVER (ORDER BY MSG_RCV_TIME DESC) AS ROWNO,
        TO_CHAR(MSG_RCV_TIME) MSG_RCV
      FROM MSG_TABLE
      ORDER BY MSG_RCV_TIME DESC
    ) WHERE ROWNO BETWEEN 1 AND 10
    This gives back 10 rows, which does the job. The problem is: it takes AGES! The web server runs into a timeout before even printing one line. First, Oracle has to suck in all x*1'000'000 lines just to sort out the ones it doesn't need. That can't be the solution, can it?
    In this forum, I have read a few notes about PARTITION, CURSOR and such things, but I didn't really get what the use of it is.
    Any hints on that? This forum is based on Oracle, too (I hope), and it's fast. There must be a solution for this.
    Btw, the table I am talking about is being filled by syslog-ng, and it currently grows by 200MB per day (and it's still in the testing phase). I expect some hundred million lines to be present later.
    Thanks a lot in advance
    André

    See Tom Kyte's site for this
    Cool. Didn't know this one. How is he checking the performance of the queries?
    The one comment in there that I entirely agree with
    is that such large result sets are meaningless to the
    human eye so I would question exactly what you are
    trying to achieve. As Tom rightly says, nobody is
    ever going to scroll down to rows 999001 - 999010,
    even if they could.
    Of course not. But you see, as an example, that if you type just one word into Google's mask, it returns loads of pages. As soon as you see that your query was not really a good one, you try with more specific words, and it returns fewer pages. That's exactly what my GUI is going to do. First, it gives you an overview; then, it lets you refine the search.
    Anyway: As soon as I limit the output in the innermost query, I doubt it's useful: Say, I limit the number of rows to browse through to 1000, but syslog-ng is producing 2000 rows per minute - you'll miss the rows you were maybe looking for.
    It's essential to be able to see all the records. I don't mind if nobody ever looks at pages 200'000 to 1'000'000.
    Thanks again for the great link.
    André (who really starts to like Oracle and its community)
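    For anyone landing here: a minimal sketch of the ROWNUM variant from the Ask Tom article referenced above. It lets Oracle stop fetching after the outer ROWNUM filter (COUNT STOPKEY in the plan) instead of numbering every row first; it assumes an index on MSG_RCV_TIME that can be scanned descending and that the column is NOT NULL, otherwise a full sort is still required.
    SELECT msg_rcv
    FROM ( SELECT t.*, ROWNUM rnum
           FROM ( SELECT TO_CHAR(msg_rcv_time) msg_rcv
                  FROM msg_table
                  ORDER BY msg_rcv_time DESC ) t
           WHERE ROWNUM <= 10 )    -- stopkey: fetching halts after 10 rows
    WHERE rnum >= 1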

  • ALV: how to save context space for large tables ?

    Dear colleagues,
    We are displaying an ALV table that is quite large. Therefore, the corresponding DDIC structure and the WD context are large. This has an impact on performance and the load size of the program. Now we will enhance the ALV table again.
    Example: for an icon and its explaining tooltip that are displayed in the ALV, context fields like "SOURCE_FIELDNAME" are required for the tooltip as well as for the icon (and they need a lot of characters for each tooltip and icon).
    Question: do you have an idea how to save context space for those ALV fields?
    Best regards,
    Christian

    >We are displaying an ALV table that is quite large.
    Do you mean quite large as in a large number of columns or as in a large number of rows (or both)?  I assume that the problem is probably more related to a large number of rows.  For very large tables, you should consider using the table instead of the ALV. For very large tables you can even use a technique called context paging to keep only a subset of the data in the context memory at a time.  Here is a recent blog that I created on the topic with demonstrations of different techniques for table sharing, shared memory, and context paging when dealing with large tables in Web Dynpro ABAP:
    Web Dynpro ABAP: How Fast Can You Consume 1 Million Rows?

  • How to create a DSN-less connection to SQL Server for linked tables in Access

    Hey,
    I can't understand how to use that function.
    Here is the information you need:
    stLocalTableName: dbo_user_name
    stRemoteTableName: user_name
    stServer: sedo2015.mssql.somee.com
    stDatabase: sedo2015
    stUsername: sedo_menf_SQLLogin_1
    stPassword: 123456789
    What should that function call look like? Please write it out for me.
    '//Name : AttachDSNLessTable
    '//Purpose : Create a linked table to SQL Server without using a DSN
    '//Parameters
    '// stLocalTableName: Name of the table that you are creating in the current database
    '// stRemoteTableName: Name of the table that you are linking to on the SQL Server database
    '// stServer: Name of the SQL Server that you are linking to
    '// stDatabase: Name of the SQL Server database that you are linking to
    '// stUsername: Name of the SQL Server user who can connect to SQL Server, leave blank to use a Trusted Connection
    '// stPassword: SQL Server user password
    Function AttachDSNLessTable(stLocalTableName As String, stRemoteTableName As String, stServer As String, stDatabase As String, Optional stUsername As String, Optional stPassword As String)
        On Error GoTo AttachDSNLessTable_Err
        Dim td As TableDef
        Dim stConnect As String
        '//Delete the local table if it already exists
        For Each td In CurrentDb.TableDefs
            If td.Name = stLocalTableName Then
                CurrentDb.TableDefs.Delete stLocalTableName
            End If
        Next
        If Len(stUsername) = 0 Then
            '//Use trusted authentication if stUsername is not supplied.
            stConnect = "ODBC;DRIVER=SQL Server;SERVER=" & stServer & ";DATABASE=" & stDatabase & ";Trusted_Connection=Yes"
        Else
            '//WARNING: This will save the username and the password with the linked table information.
            stConnect = "ODBC;DRIVER=SQL Server;SERVER=" & stServer & ";DATABASE=" & stDatabase & ";UID=" & stUsername & ";PWD=" & stPassword
        End If
        '//Create the linked table definition and append it to the TableDefs collection
        Set td = CurrentDb.CreateTableDef(stLocalTableName, dbAttachSavePWD, stRemoteTableName, stConnect)
        CurrentDb.TableDefs.Append td
        AttachDSNLessTable = True
        Exit Function
    AttachDSNLessTable_Err:
        AttachDSNLessTable = False
        MsgBox "AttachDSNLessTable encountered an unexpected error: " & Err.Description
    End Function

    Thanks, many thanks!
    Look, I added that code in a form. It worked, but I can't add a record. Why?
    Private Sub Form_Open(Cancel As Integer)
        Call AttachDSNLessTable("dbo_user_name", "user_name", "sedo2015.mssql.somee.com", "sedo2015", "sedo_menf_SQLLogin_1", "123456789")
    End Sub

  • Install Fails for Large Repository

    Hi all,
    For the past 3 days I have been trying to install a new repository on my machine. Oracle runs on Linux; I am using Windows NT. Installing Designer 6i itself was no problem, but when I tried to create a large/medium repository, it gave me so many SQL errors that I had to abort the install. (A small repository was created, but I cannot create any containers due to SQL errors; "cannot insert null into some tables" was the error I faced. I have given 50M to all the tablespaces, and 100K as the initial extent and 100K as the next.) This is the first time I am creating a repository. Can anyone help me with some notes/suggestions on how to create a large/medium repository successfully, and the correct way to create the work areas/containers? Where have I gone wrong? PLEASE HELP ME.
    Thanks in anticipation,
    Girish

    Girish,
    Are the database parameters set correctly?
    compatible = 8.1.7
    max_enabled_roles = 30
    sort_area_size = 262144
    sort_area_retained_size = 65536
    hash_area_size = 1048576
    optimizer_index_caching = 50
    optimizer_index_cost_adj = 25
    shared_pool_size = 32000000
    db_block_buffers = 2000
    open_cursors = 3000
    processes = 100
    db_file_multiblock_read_count = 16   # for a 4K Oracle block size
    db_file_multiblock_read_count = 32   # for a 2K Oracle block size
    db_file_multiblock_read_count = 8    # for an 8K Oracle block size
    What version of the database are you running? - 8.1.7 is
    required for 6i release 4.
    David
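    If you want to confirm what the instance is actually using before retrying the install, one option is to query v$parameter from any JDBC client. This is only a minimal sketch; the driver class, connection URL, and credentials below are placeholders, not values from this thread:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CheckParams {
        public static void main(String[] args) throws Exception {
            // Classic driver registration for older Oracle JDBC drivers.
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "system", "password");
            Statement st = conn.createStatement();
            // Same information as "show parameter <name>" in SQL*Plus.
            ResultSet rs = st.executeQuery(
                    "SELECT name, value FROM v$parameter WHERE name IN "
                    + "('compatible', 'sort_area_size', 'shared_pool_size', "
                    + "'db_block_buffers', 'open_cursors', 'processes')");
            while (rs.next()) {
                System.out.println(rs.getString("name") + " = " + rs.getString("value"));
            }
            rs.close();
            st.close();
            conn.close();
        }
    }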

  • Is it a Limitation of Named Cache Storage? Fails for Large Volume

    I debugged the code that loads data from the database into the cache, as described in the posting: Pre-loading the Cache from Database during application start-up.
    What this code does is load 869 rows from the database into a java.util.Map using Hibernate's loadAll() method. All is fine up to this point.
    The next step is to put all the entries into the cache, i.e. contactCache.putAll(buffer). This is where it hangs for a minute, and I see an org.eclipse.jdi.TimeoutException followed by the exception stack trace below.
    IN DEFAULT CACHE SERVER JVM
    2009-10-30 10:53:44.076/1342.849 Oracle Coherence GE 3.5.2/463 <Warning> (thread=PacketPublisher, member=1): Experienced a 1390 ms communication delay (probable remote GC) with Member(Id=2, Timestamp=2009-10-30 10:31:54.697, Address=165.137.250.122:8089, MachineId=54906, Location=site:cable.comcast.com,machine:PACDCL-CJWWND1b,process:4856); 23 packets rescheduled, PauseRate=0.0010, Threshold=2080
    2009-10-30 11:06:10.060/2088.833 Oracle Coherence GE 3.5.2/463 <Error> (thread=Cluster, member=1): Attempting recovery (due to soft timeout) of Guard{Daemon=DistributedCache}
    2009-10-30 11:06:12.430/2091.203 Oracle Coherence GE 3.5.2/463 <Error> (thread=Cluster, member=1): Terminating guarded execution (due to hard timeout) of Guard{Daemon=DistributedCache}
    2009-10-30 11:06:15.657/2094.430 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
    2009-10-30 11:06:15.954/2094.727 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
    Coherence <Error>: Halting this cluster node due to unrecoverable service failure
    2009-10-30 11:06:16.671/2095.444 Oracle Coherence GE 3.5.2/463 <Error> (thread=Termination Thread, member=1): Full Thread Dump
    Thread[Cluster|Member(Id=1, Timestamp=2009-10-30 10:31:31.621, Address=165.137.250.122:8088, MachineId=54906, Location=site:cable.comcast.com,machine:PACDCL-CJWWND1b,process:5380),5,Cluster]
         java.lang.Object.wait(Native Method)
         com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
         com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
         com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
         java.lang.Thread.run(Thread.java:595)
    Thread[(Code Generation Thread 1),5,system]
    Thread[(Signal Handler),5,system]
    Thread[TcpRingListener,6,Cluster]
         java.net.PlainSocketImpl.socketAccept(Native Method)
         java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
         java.net.ServerSocket.implAccept(ServerSocket.java:450)
         java.net.ServerSocket.accept(ServerSocket.java:421)
         com.tangosol.coherence.component.net.socket.TcpSocketAccepter.accept(TcpSocketAccepter.CDB:18)
         com.tangosol.coherence.component.util.daemon.TcpRingListener.acceptConnection(TcpRingListener.CDB:10)
         com.tangosol.coherence.component.util.daemon.TcpRingListener.onNotify(TcpRingListener.CDB:9)
         com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         java.lang.Thread.run(Thread.java:595)
    Thread[PacketSpeaker,8,Cluster]
         java.lang.Object.wait(Native Method)
         com.tangosol.coherence.component.util.queue.ConcurrentQueue.waitForEntry(ConcurrentQueue.CDB:16)
         com.tangosol.coherence.component.util.queue.ConcurrentQueue.remove(ConcurrentQueue.CDB:7)
         com.tangosol.coherence.component.util.Queue.remove(Queue.CDB:1)
         com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketSpeaker.onNotify(PacketSpeaker.CDB:62)
         com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         java.lang.Thread.run(Thread.java:595)
    Thread[PacketPublisher,6,Cluster]
         java.lang.Object.wait(Native Method)
         com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
         com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher.onWait(PacketPublisher.CDB:2)
         com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
         java.lang.Thread.run(Thread.java:595)
    Thread[(VM Periodic Task),10,system]
    Thread[(Sensor Event Thread),5,system]
    Thread[(Attach Listener),5,system]
    Thread[(GC Main Thread),5,system]
    Thread[(Code Optimization Thread 1),5,system]
    Thread[Invocation:Management:EventDispatcher,5,Cluster]
         java.lang.Object.wait(Native Method)
         com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
         com.tangosol.coherence.component.util.daemon.queueProcessor.Service$EventDispatcher.onWait(Service.CDB:7)
         com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
         java.lang.Thread.run(Thread.java:595)
    Thread[Main Thread,5,main]
         java.lang.Object.wait(Native Method)
         com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:79)
    Thread[Logger@9265725 3.5.2/463,3,main]
         java.lang.Object.wait(Native Method)
         com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
         com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
         java.lang.Thread.run(Thread.java:595)
    Thread[Invocation:Management,5,Cluster]
         java.lang.Object.wait(Native Method)
         com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
         com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
         com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
         java.lang.Thread.run(Thread.java:595)
    Thread[Reference Handler,10,system]
         java.lang.ref.Reference.getPending(Native Method)
         java.lang.ref.Reference.access$000(Unknown Source)
         java.lang.ref.Reference$ReferenceHandler.run(Unknown Source)
    Thread[PacketListenerN,8,Cluster]
         java.net.PlainDatagramSocketImpl.receive0(Native Method)
         java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
         java.net.DatagramSocket.receive(DatagramSocket.java:712)
         com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
         com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
         com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
         com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         java.lang.Thread.run(Thread.java:595)
    Thread[Finalizer,8,system]
         java.lang.Thread.run(Thread.java:595)
    Thread[DistributedCache,5,Cluster]
         com.tangosol.util.Binary.<init>(Binary.java:87)
         com.tangosol.util.Binary.<init>(Binary.java:61)
         com.tangosol.io.AbstractByteArrayReadBuffer.toBinary(AbstractByteArrayReadBuffer.java:152)
         com.tangosol.io.pof.PofBufferReader.readBinary(PofBufferReader.java:3412)
         com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:2854)
         com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2600)
         com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.MapRequest.read(MapRequest.CDB:24)
         com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:123)
         com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         java.lang.Thread.run(Thread.java:595)
    Thread[PacketReceiver,7,Cluster]
         java.lang.Object.wait(Native Method)
         com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
         com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketReceiver.onWait(PacketReceiver.CDB:2)
         com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
         java.lang.Thread.run(Thread.java:595)
    Thread[PacketListener1,8,Cluster]
         java.net.PlainDatagramSocketImpl.receive0(Native Method)
         java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
         java.net.DatagramSocket.receive(DatagramSocket.java:712)
         com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
         com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
         com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
         com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         java.lang.Thread.run(Thread.java:595)
    Thread[Termination Thread,5,Cluster]
         java.lang.Thread.dumpThreads(Native Method)
         java.lang.Thread.getAllStackTraces(Thread.java:1434)
         sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         java.lang.reflect.Method.invoke(Method.java:585)
         com.tangosol.net.GuardSupport.logStackTraces(GuardSupport.java:791)
         com.tangosol.coherence.component.net.Cluster.onServiceFailed(Cluster.CDB:5)
         com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$Guard.terminate(Grid.CDB:17)
         com.tangosol.net.GuardSupport$2.run(GuardSupport.java:652)
         java.lang.Thread.run(Thread.java:595)
    2009-10-30 11:06:20.958/2099.731 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
    2009-10-30 11:06:20.958/2099.731 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
    2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
    2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
    2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): TcpRing: disconnected from member 2 due to a kill request
    2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Member 2 left service Management with senior member 1
    2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Member 2 left service DistributedCache with senior member 1
    2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Member(Id=2, Timestamp=2009-10-30 11:07:17.682, Address=165.137.250.122:8089, MachineId=54906, Location=site:cable.comcast.com,machine:PACDCL-CJWWND1b,process:4856) left Cluster with senior member 1
    2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Service guardian is 51795ms late, indicating that this JVM may be running slowly or experienced a long GC
    2009-10-30 11:07:18.073/2156.846 Oracle Coherence GE 3.5.2/463 <Info> (thread=PacketListenerN, member=1): Scheduled senior member heartbeat is overdue; rejoining multicast group.
    2009-10-30 11:07:22.696/2161.469 Oracle Coherence GE 3.5.2/463 <Error> (thread=Cluster, member=1): Attempting recovery (due to soft timeout 26277ms ago) of Guard{Daemon=TcpRingListener}
    2009-10-30 11:07:22.696/2161.469 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
    2009-10-30 11:07:22.696/2161.469 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
    2009-10-30 11:07:26.835/2165.608 Oracle Coherence GE 3.5.2/463 <Info> (thread=PacketListenerN, member=1): Scheduled senior member heartbeat is overdue; rejoining multicast group.
    2009-10-30 11:07:27.709/2166.482 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
    2009-10-30 11:07:27.709/2166.482 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
    2009-10-30 11:07:32.723/2171.496 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
    2009-10-30 11:07:32.723/2171.496 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
    2009-10-30 11:07:42.796/2181.569 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
    2009-10-30 11:07:42.843/2181.616 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
    2009-10-30 11:07:42.890/2181.663 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Service guardian is 10089ms late, indicating that this JVM may be running slowly or experienced a long GC
    2009-10-30 11:07:42.968/2181.741 Oracle Coherence GE 3.5.2/463 <Info> (thread=PacketListenerN, member=1): Scheduled senior member heartbeat is overdue; rejoining multicast group.
    2009-10-30 11:07:47.857/2186.630 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
    2009-10-30 11:07:47.935/2186.708 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
    2009-10-30 11:07:50.527/2189.300 Oracle Coherence GE 3.5.2/463 <Info> (thread=PacketListenerN, member=1): Scheduled senior member heartbeat is overdue; rejoining multicast group.
    2009-10-30 11:07:52.948/2191.721 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
    2009-10-30 11:07:52.948/2191.721 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
    - SQL Error: 1400, SQLState: 23000
    - ORA-01400: cannot insert NULL into ("CTXOWNER"."CTX_TRM_TXTS"."CTX_TRM_TXT_ID")
    - SQL Error: 1400, SQLState: 23000
    - ORA-01400: cannot insert NULL into ("CTXOWNER"."CTX_TRM_TXTS"."CTX_TRM_TXT_ID")
    Coherence <Error>: Halting this cluster node due to unrecoverable service failure
    Now I do see it is complaining that it cannot insert null values. But I wonder how it was able to insert from the database into the java.util.Map; it is only a matter of dumping from that Map into the Coherence cache, which is another Map.
    IN CACHE FACTORY VM
    Map (com.comcast.customer.contract.contract.hibernate.Term):
    2009-10-30 11:06:46.076/2095.134 Oracle Coherence GE 3.5.2/463 <Warning> (thread=PacketPublisher, member=2): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=3, Timestamp=2009-10-30 10:52:20.758, Address=165.137.250.122:8090, MachineId=54906, Location=site:cable.comcast.com,machine:PACDCL-CJWWND1b,process:2756)
    by MemberSet(Size=1, BitSetCount=2
      Member(Id=1, Timestamp=2009-10-30 10:31:31.621, Address=165.137.250.122:8088, MachineId=54906, Location=site:cable.comcast.com,machine:PACDCL-CJWWND1b,process:5380)
    Map (com.comcast.customer.contract.contract.hibernate.Term):
    Map (com.comcast.customer.contract.contract.hibernate.Term): 2009-10-30 11:06:46.887/2095.945 Oracle Coherence GE 3.5.2/463 <Error> (thread=PacketPublisher, member=2): This node appears to have become disconnected from the rest of the cluster containing 2 nodes. All departure confirmation requests went unanswered.
    Stopping cluster service.
    Map (com.comcast.customer.contract.contract.hibernate.Term): 2009-10-30 11:06:48.773/2097.831 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=2): Service Cluster left the cluster
    2009-10-30 11:06:49.257/2098.315 Oracle Coherence GE 3.5.2/463 <D5> (thread=Invocation:Management, member=2): Service Management left the cluster
    2009-10-30 11:06:49.257/2098.315 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache, member=2): Service DistributedCache left the cluster
    IN JUnit Test VM
    Coherence <Error>: Halting this cluster node due to unrecoverable service failure
    Please note that I am running the Default Cache Server VM, the Cache Factory VM, and the Eclipse JUnit Test VM on the same machine.
    Please note the same piece of code works absolutely fine when I load another object type that returns 154 rows.
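    One thing worth trying is to break the single 869-entry putAll into smaller batches, so that no single request or serialization pass has to carry the whole data set. A minimal sketch, assuming the contactCache and buffer from the post above; the batch size of 100 and the class name are arbitrary choices, not from the original code:

    import com.tangosol.net.NamedCache;

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    public class BatchedPutAll {
        /** Copies buffer into the cache in fixed-size chunks instead of one big putAll. */
        public static void putAllInBatches(NamedCache cache, Map buffer, int batchSize) {
            Map chunk = new HashMap();
            for (Iterator it = buffer.entrySet().iterator(); it.hasNext();) {
                Map.Entry entry = (Map.Entry) it.next();
                chunk.put(entry.getKey(), entry.getValue());
                if (chunk.size() >= batchSize) {
                    cache.putAll(chunk); // one network request per chunk
                    chunk.clear();
                }
            }
            if (!chunk.isEmpty()) {
                cache.putAll(chunk);     // flush the remainder
            }
        }
    }

    Calling putAllInBatches(contactCache, buffer, 100) in place of contactCache.putAll(buffer) keeps each request small, which may also give the service guardians less reason to see one long stall.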

    Thanks for the quick response.
    > So using the local scheme, you place 869 objects into that cache, correct? Does that work?
    I didn't try with the local scheme. I did try with the <read-write-backing-map> scheme; as it was giving problems, I reduced the size to 100 and changed to local-scheme. If you would like me to try with local-scheme I will do so, but it will not prove anything, as we need the Hibernate cache store to do writes.
    > Can you explain what the remaining issue is? (What part is failing?)
    There are several issues, and I am really striving to make it work. Here is the list:
    - revert back to the <read-write-backing-map> scheme so that I can pre-populate the cache from the database, so that subsequent reads and writes hit the cache instead of the database
    - pre-populate the cache during application start-up (we use Spring 2.5 and Hibernate 3.2)
    - the queryContract(contract) method is similar to a search screen, i.e. it takes a sample contract object as an argument with some attributes populated. I am using the Filter API to return the list of Contract objects based on the search parameters of the sample contract, as follows:
    Filter filter = new EqualsFilter(IdentityExtractor.INSTANCE, contract);
    Set setEntries = contractCache.entrySet(filter);
    The above code expects all the attributes of the sample contract object to be fully populated; if not, it throws a NullPointerException. For example, if the date attribute is null, a NullPointerException is thrown at the following line:
    writer.writeLong(2, this.date.getTimeInMillis());
    I greatly appreciate the inventor of the Tangosol Coherence product responding to my queries on the forum. Hopefully with his help I will be able to resolve these issues.
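    A null guard around that line is one way out. Below is a sketch, assuming a Calendar field named date at POF index 2 as in the line quoted above, with -1 used as a "no date" sentinel; writeExternal and readExternal must agree on that convention, and this is an illustration rather than the actual Term class:

    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;

    import java.io.IOException;
    import java.util.Calendar;

    public class Term implements PortableObject {
        private Calendar date;

        public void writeExternal(PofWriter writer) throws IOException {
            // -1 stands for "no date", so a null field no longer throws an NPE here.
            writer.writeLong(2, date == null ? -1L : date.getTimeInMillis());
        }

        public void readExternal(PofReader reader) throws IOException {
            long millis = reader.readLong(2);
            if (millis < 0) {
                date = null;           // sentinel seen: restore the null
            } else {
                date = Calendar.getInstance();
                date.setTimeInMillis(millis);
            }
        }
    }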

  • Building secondary index fails for large number (25,000,000) of records

    I am inserting 25,000,000 records of the type:
    Key --> Data
    [long,String,long] --> [{long,long}, {String}]
    using setSecondaryBulkLoad(true), and then building two secondary indexes on the {long,long} and {String} parts of the data portion.
     private void buildSecondaryIndex(DataAccessLayer dataAccessLayer) {
          try {
               SecondaryIndex<TDetailSecondaryKey, TDetailStringKey, TDetailStringRecord> secondaryIndex =
                    store.getSecondaryIndex(dataAccessLayer.getPrimaryIndex(), TDetailSecondaryKey.class, SECONDARY_KEY_NAME);
          } catch (DatabaseException e) {
               throw new RuntimeException(e);
          }
     }
    It fails when I build the SecondaryIndex, probably due to a Java heap space error; see the failure trace below.
    I do not face this problem when I deal with 250,000 records.
    Is there a workaround that does not require changing the JVM memory settings?
    Failure Trace:
    java.lang.RuntimeException: Environment invalid because of previous exception: com.sleepycat.je.RunRecoveryException
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.buildSecondaryIndex(TDetailStringDAOInsertTest.java:444)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.insertCellSetInOneTxn(TDetailStringDAOInsertTest.java:280)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.mainTest(TDetailStringDAOInsertTest.java:93)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at org.junit.internal.runners.TestMethodRunner.executeMethodBody(TestMethodRunner.java:99)
         at org.junit.internal.runners.TestMethodRunner.runUnprotected(TestMethodRunner.java:81)
         at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
         at org.junit.internal.runners.TestMethodRunner.runMethod(TestMethodRunner.java:75)
         at org.junit.internal.runners.TestMethodRunner.run(TestMethodRunner.java:45)
         at org.junit.internal.runners.TestClassMethodsRunner.invokeTestMethod(TestClassMethodsRunner.java:66)
         at org.junit.internal.runners.TestClassMethodsRunner.run(TestClassMethodsRunner.java:35)
         at org.junit.internal.runners.TestClassRunner$1.runUnprotected(TestClassRunner.java:42)
         at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
         at org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52)
         at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:38)
         at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
    Caused by: Environment invalid because of previous exception: com.sleepycat.je.RunRecoveryException
         at com.sleepycat.je.dbi.EnvironmentImpl.checkIfInvalid(EnvironmentImpl.java:976)
         at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:584)
         at com.sleepycat.je.txn.Txn.undo(Txn.java:713)
         at com.sleepycat.je.txn.Txn.abortInternal(Txn.java:631)
         at com.sleepycat.je.txn.Txn.abort(Txn.java:599)
         at com.sleepycat.je.txn.AutoTxn.operationEnd(AutoTxn.java:36)
         at com.sleepycat.je.Environment.openDb(Environment.java:505)
         at com.sleepycat.je.Environment.openSecondaryDatabase(Environment.java:382)
         at com.sleepycat.persist.impl.Store.openSecondaryIndex(Store.java:684)
         at com.sleepycat.persist.impl.Store.getSecondaryIndex(Store.java:579)
         at com.sleepycat.persist.EntityStore.getSecondaryIndex(EntityStore.java:286)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.buildSecondaryIndex(TDetailStringDAOInsertTest.java:441)
         ... 22 more
    Caused by: java.lang.OutOfMemoryError: Java heap space
         at java.util.HashMap.resize(HashMap.java:462)
         at java.util.HashMap.addEntry(HashMap.java:755)
         at java.util.HashMap.put(HashMap.java:385)
         at java.util.HashSet.add(HashSet.java:200)
         at com.sleepycat.je.txn.Txn.addReadLock(Txn.java:964)
         at com.sleepycat.je.txn.Txn.addLock(Txn.java:952)
         at com.sleepycat.je.txn.LockManager.attemptLockInternal(LockManager.java:347)
         at com.sleepycat.je.txn.SyncedLockManager.attemptLock(SyncedLockManager.java:43)
         at com.sleepycat.je.txn.LockManager.lock(LockManager.java:178)
         at com.sleepycat.je.txn.Txn.lockInternal(Txn.java:295)
         at com.sleepycat.je.txn.Locker.nonBlockingLock(Locker.java:288)
         at com.sleepycat.je.dbi.CursorImpl.lockLNDeletedAllowed(CursorImpl.java:2357)
         at com.sleepycat.je.dbi.CursorImpl.lockLN(CursorImpl.java:2297)
         at com.sleepycat.je.dbi.CursorImpl.fetchCurrent(CursorImpl.java:2227)
         at com.sleepycat.je.dbi.CursorImpl.getCurrentAlreadyLatched(CursorImpl.java:1296)
         at com.sleepycat.je.dbi.CursorImpl.getNextWithKeyChangeStatus(CursorImpl.java:1442)
         at com.sleepycat.je.dbi.CursorImpl.getNext(CursorImpl.java:1368)
         at com.sleepycat.je.Cursor.retrieveNextAllowPhantoms(Cursor.java:1587)
         at com.sleepycat.je.Cursor.retrieveNext(Cursor.java:1397)
         at com.sleepycat.je.SecondaryDatabase.init(SecondaryDatabase.java:182)
         at com.sleepycat.je.SecondaryDatabase.initNew(SecondaryDatabase.java:118)
         at com.sleepycat.je.Environment.openDb(Environment.java:484)
         at com.sleepycat.je.Environment.openSecondaryDatabase(Environment.java:382)
         at com.sleepycat.persist.impl.Store.openSecondaryIndex(Store.java:684)
         at com.sleepycat.persist.impl.Store.getSecondaryIndex(Store.java:579)
         at com.sleepycat.persist.EntityStore.getSecondaryIndex(EntityStore.java:286)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.buildSecondaryIndex(TDetailStringDAOInsertTest.java:441)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.insertCellSetInOneTxn(TDetailStringDAOInsertTest.java:280)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.mainTest(TDetailStringDAOInsertTest.java:93)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

    > 1. Does the speed of building the secondary index depend on the type of the data in the key? Will having integers in the secondary key, as opposed to strings, be better?
    The byte size of the key and data is significant of course, but the data type is not.
    > 2. How much are we bound by memory? Let's assume my memory setting is fixed.
    > a. I know that with the current memory settings, if I set txn on, I get a Java heap error. So will I be limited on the size of the secondary index, or will it just get really slow, swapping tree information from disk as it builds it?
    No. The out-of-memory error was caused by a very large transaction that holds locks. When using small transactions or non-transactional access, you won't have this problem. In general, like most databases, JE writes and reads information to/from disk as needed.
    > b. Is there any other way of speeding up the build of the secondary database?
    No, other than general performance tuning, nothing I know of.
    > c. Will it be more beneficial not to bulk load when the data size gets large, so that the secondary database is built incrementally?
    It's up to you whether you want to pay the price during an initial load or incrementally.
    > d. Do you think it will help to partition the original database into smaller databases using some criteria, and thus build smaller trees?
    Why? You can use deferred write or non-transactional access to load any size database.
    > The only weak point in this is that if we have to bulk load into one partition at some time, increasing its size, we may face the same problem again.
    Face what problem?
    --mark
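    To make the "small transactions or non-transactional access" advice concrete, the sketch below opens the environment and the store non-transactionally for the bulk load, so there is no single huge transaction accumulating a lock per record. The store name is made up, and setSecondaryBulkLoad(true) is kept from the original post; treat this as an illustration, not the poster's actual code:

    import java.io.File;

    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.persist.EntityStore;
    import com.sleepycat.persist.StoreConfig;

    public class BulkLoadEnvironment {
        public static EntityStore openForBulkLoad(File home) throws DatabaseException {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(false);      // no txn, so no per-record lock set growing in the heap

            Environment env = new Environment(home, envConfig);

            StoreConfig storeConfig = new StoreConfig();
            storeConfig.setAllowCreate(true);
            storeConfig.setTransactional(false);
            storeConfig.setSecondaryBulkLoad(true); // defer secondary index population, as in the post

            return new EntityStore(env, "TDetailStore", storeConfig);
        }
    }

    With this setup the getSecondaryIndex() call still triggers the index build, but record locking no longer grows with a transaction, so the 25,000,000-record case should no longer exhaust the heap for that reason.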

  • Query timeout for large table

    Dear friends,
    My view always times out because my table now has 1,800,000 rows. What should I do with this table? Can anyone help me?
    Another question: for my work I need to create 5-7 reports every day, so every time I need to create views for those reports. I cannot always create a procedure, because creating a view is easier for me. But the views become slower day by day. My server is, I think, quite good: dual quad-core Xeon processors and 32 GB of RAM.
    Any advice will be appreciated.
    Thanks in advance

    Yeah, thanks for your time; I appreciate it all.
    Actually, I attached those to give an idea of my database. Most of the time I need to work with just 3 or 4 tables, such as LC_Profile and student_profile, or the ROSC database.
    I am adding the query, but you do not need to go through all of it; just see how complex my queries tend to be. My question is: is there a good way to get results faster than with the view? I need to make several reports every day, so I use views, join many tables, and need many WHERE clauses, CASE expressions, time conversions, etc. That is why I am asking for suggestions.
    SELECT TOP (100) PERCENT dbo.ACF_LCs.YearTrim, dbo.ACF_LCs.EduYr, dbo.vw_Geocode.DivisionID, dbo.vw_Geocode.DivisionB, dbo.vw_Geocode.Division,
    dbo.vw_Geocode.DistrictID, dbo.vw_Geocode.District, dbo.vw_Geocode.DistrictB, dbo.vw_Geocode.UpazilaID, dbo.vw_Geocode.Upazila, dbo.vw_Geocode.UpazilaB,
    dbo.LCProfile.LCID, dbo.LCProfile.LCYr, dbo.LCProfile.LCNm, dbo.LCProfile.LCNmB, dbo.Vw_Teacher_Active.TeachYr, dbo.Vw_Teacher_Active.TeachEdu,
    CASE WHEN TeachEdu = 1 THEN 3000 ELSE 3000 END AS TeacherSalaryOld, dbo.LCProfile.LCAccountNo, dbo.Vw_Teacher_Active.TeachNm,
    dbo.Vw_Teacher_Active.TeachSex, dbo.vw_Bank_Branch.LCBankBr, dbo.Vw_Teacher_Active.TeachMob, dbo.LCProfile.UnionID, dbo.UnionCode.UnionB,
    dbo.LCProfile.LCVill, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID AS MDistrictID,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID AS MUpazilaID, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID AS MLCID,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.MOID,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCStatus, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LC1stVstDt,
    MonitoringROSCII.dbo.Venu_Info.VenuType, MonitoringROSCII.dbo.Venu_Info.VenuTypeOthr, MonitoringROSCII.dbo.Venu_Info.NoWindow,
    MonitoringROSCII.dbo.Venu_Info.SuffWinAir, MonitoringROSCII.dbo.Venu_Info.FreeArsWater, MonitoringROSCII.dbo.Venu_Info.HigLatrin,
    MonitoringROSCII.dbo.Venu_Info.SeatArg, MonitoringROSCII.dbo.Venu_Info.Blackboard, MonitoringROSCII.dbo.Venu_Info.DistrictID AS VDistrictID,
    MonitoringROSCII.dbo.Venu_Info.UpazilaID AS VUpazilaID, MonitoringROSCII.dbo.Venu_Info.LCID AS VLCID,
    MonitoringROSCII.dbo.Vw_UniformYes.DistrictID AS UDistrictID, MonitoringROSCII.dbo.Vw_UniformYes.UpazilaID AS UUpazilaID,
    MonitoringROSCII.dbo.Vw_UniformYes.LCID AS ULCID, MonitoringROSCII.dbo.Vw_UniformYes.RecUniformY,
    MonitoringROSCII.dbo.Teacher_Training.DistrictID AS TDistrictID, MonitoringROSCII.dbo.Teacher_Training.UpazilaID AS TUpazilaID,
    MonitoringROSCII.dbo.Teacher_Training.LCID AS TLCID, MonitoringROSCII.dbo.Teacher_Training.TcrRecFndTrn, MonitoringROSCII.dbo.LC_Info.PrsnMale,
    MonitoringROSCII.dbo.LC_Info.PrsnFemale, MonitoringROSCII.dbo.LC_Info.PrsnStdTot, RIGHT('00' + CONVERT(varchar, dbo.vw_Geocode.DivisionID), 2)
    + RIGHT('00' + CONVERT(varchar, dbo.vw_Geocode.DistrictID), 2) + RIGHT(CONVERT(varchar, dbo.vw_Geocode.UpazilaID), 2) + RIGHT('000' + CONVERT(varchar,
    dbo.Vw_Teacher_Active.LCID), 3) AS InstituteID, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCStartHr,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCEndHr, dbo.Vw_LCProfile_QStudent_LCwise2013_3.NoStudent AS NoQStudent,
    dbo.Vw_LCProfile_QStudent_LCwise2013_3.Stu13, dbo.Vw_LCProfile_QStudent_LCwise2013_3.Stu45, dbo.PO.PO_NM_E, dbo.PO.PO_NM_B,
    dbo.vw_Geocode.Status AS UpStatus, dbo.vw_Geocode.Phase, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.SpecialStatus,
    dbo.ACF_LCs.SpecialStatus AS SpecialStatusACF, MonitoringROSCII.dbo.Teacher_Profile.TcrPres, MonitoringROSCII.dbo.Teacher_Profile.TcrMtchLCProf,
    CASE WHEN NOT (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID IS NULL) AND LCStatus = 1 AND ((TcrPres = 1 AND TcrMtchLCProf = 2) OR
    TcrPres = 2) THEN 0 ELSE 3000 END AS TeacherSalary
    FROM dbo.Vw_Teacher_Active RIGHT OUTER JOIN
    dbo.PO RIGHT OUTER JOIN
    dbo.vw_LC_Functioning RIGHT OUTER JOIN
    dbo.Vw_LCProfile_QStudent_LCwise2013_3 INNER JOIN
    dbo.ACF_LCs INNER JOIN
    dbo.vw_Geocode INNER JOIN
    dbo.LCProfile ON dbo.vw_Geocode.DistrictID = dbo.LCProfile.DistrictID AND dbo.vw_Geocode.UpazilaID = dbo.LCProfile.UpazilaID ON
    dbo.ACF_LCs.DistrictID = dbo.LCProfile.DistrictID AND dbo.ACF_LCs.UpazilaID = dbo.LCProfile.UpazilaID AND dbo.ACF_LCs.LcID = dbo.LCProfile.LCID ON
    dbo.Vw_LCProfile_QStudent_LCwise2013_3.DistrictID = dbo.ACF_LCs.DistrictID AND
    dbo.Vw_LCProfile_QStudent_LCwise2013_3.UpazilaID = dbo.ACF_LCs.UpazilaID AND dbo.Vw_LCProfile_QStudent_LCwise2013_3.LCID = dbo.ACF_LCs.LcID ON
    dbo.vw_LC_Functioning.DistrictID = dbo.ACF_LCs.DistrictID AND dbo.vw_LC_Functioning.UpazilaID = dbo.ACF_LCs.UpazilaID AND
    dbo.vw_LC_Functioning.LCID = dbo.ACF_LCs.LcID LEFT OUTER JOIN
    MonitoringROSCII.dbo.Teacher_Training RIGHT OUTER JOIN
    MonitoringROSCII.dbo.Venu_Info RIGHT OUTER JOIN
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo LEFT OUTER JOIN
    MonitoringROSCII.dbo.Teacher_Profile ON MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID = MonitoringROSCII.dbo.Teacher_Profile.DistrictID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID = MonitoringROSCII.dbo.Teacher_Profile.UpazilaID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID = MonitoringROSCII.dbo.Teacher_Profile.LCID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType = MonitoringROSCII.dbo.Teacher_Profile.VisitType AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr = MonitoringROSCII.dbo.Teacher_Profile.LCVisitYr AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister = MonitoringROSCII.dbo.Teacher_Profile.Trimister ON
    MonitoringROSCII.dbo.Venu_Info.DistrictID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID AND
    MonitoringROSCII.dbo.Venu_Info.UpazilaID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID AND
    MonitoringROSCII.dbo.Venu_Info.LCID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID AND
    MonitoringROSCII.dbo.Venu_Info.VisitType = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType AND
    MonitoringROSCII.dbo.Venu_Info.LCVisitYr = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr AND
    MonitoringROSCII.dbo.Venu_Info.Trimister = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister LEFT OUTER JOIN
    MonitoringROSCII.dbo.Vw_UniformYes ON MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID = MonitoringROSCII.dbo.Vw_UniformYes.DistrictID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID = MonitoringROSCII.dbo.Vw_UniformYes.UpazilaID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID = MonitoringROSCII.dbo.Vw_UniformYes.LCID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType = MonitoringROSCII.dbo.Vw_UniformYes.VisitType AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr = MonitoringROSCII.dbo.Vw_UniformYes.LCVisitYr LEFT OUTER JOIN
    MonitoringROSCII.dbo.LC_Info ON MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID = MonitoringROSCII.dbo.LC_Info.DistrictID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID = MonitoringROSCII.dbo.LC_Info.UpazilaID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID = MonitoringROSCII.dbo.LC_Info.LCID AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType = MonitoringROSCII.dbo.LC_Info.VisitType AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr = MonitoringROSCII.dbo.LC_Info.LCVisitYr AND
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister = MonitoringROSCII.dbo.LC_Info.Trimister ON
    MonitoringROSCII.dbo.Teacher_Training.DistrictID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID AND
    MonitoringROSCII.dbo.Teacher_Training.UpazilaID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID AND
    MonitoringROSCII.dbo.Teacher_Training.LCID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID AND
    MonitoringROSCII.dbo.Teacher_Training.VisitType = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType AND
    MonitoringROSCII.dbo.Teacher_Training.LCVisitYr = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr AND
    MonitoringROSCII.dbo.Teacher_Training.Trimister = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister ON
    dbo.ACF_LCs.DistrictID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID AND
    dbo.ACF_LCs.UpazilaID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID AND
    dbo.ACF_LCs.LcID = MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID ON dbo.PO.DistrictID = dbo.LCProfile.DistrictID AND
    dbo.PO.UpazilaID = dbo.LCProfile.UpazilaID ON dbo.Vw_Teacher_Active.DistrictID = dbo.LCProfile.DistrictID AND
    dbo.Vw_Teacher_Active.UpazilaID = dbo.LCProfile.UpazilaID AND dbo.Vw_Teacher_Active.LCID = dbo.LCProfile.LCID LEFT OUTER JOIN
    dbo.UnionCode ON dbo.LCProfile.UnionID = dbo.UnionCode.UnionID AND dbo.LCProfile.UpazilaID = dbo.UnionCode.UpazilaID AND
    dbo.LCProfile.DistrictID = dbo.UnionCode.DistrictID LEFT OUTER JOIN
    dbo.vw_Bank_Branch ON dbo.LCProfile.LCBankBr = dbo.vw_Bank_Branch.BranchID
    GROUP BY dbo.vw_Geocode.DivisionID, dbo.vw_Geocode.DivisionB, dbo.vw_Geocode.DistrictID, dbo.vw_Geocode.DistrictB, dbo.vw_Geocode.UpazilaID,
    dbo.vw_Geocode.UpazilaB, dbo.LCProfile.LCID, dbo.LCProfile.LCYr, dbo.LCProfile.LCNmB, dbo.Vw_Teacher_Active.TeachEdu, dbo.LCProfile.LCAccountNo,
    dbo.Vw_Teacher_Active.TeachNm, dbo.Vw_Teacher_Active.TeachSex, dbo.vw_Bank_Branch.LCBankBr, dbo.UnionCode.UnionB, dbo.vw_Geocode.Division,
    dbo.vw_Geocode.District, dbo.vw_Geocode.Upazila, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.DistrictID,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.UpazilaID, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.MOID,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCStatus, MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LC1stVstDt,
    MonitoringROSCII.dbo.Venu_Info.NoWindow, MonitoringROSCII.dbo.Venu_Info.SuffWinAir, MonitoringROSCII.dbo.Venu_Info.FreeArsWater,
    MonitoringROSCII.dbo.Venu_Info.HigLatrin, MonitoringROSCII.dbo.Venu_Info.SeatArg, MonitoringROSCII.dbo.Venu_Info.Blackboard,
    MonitoringROSCII.dbo.Venu_Info.DistrictID, MonitoringROSCII.dbo.Venu_Info.UpazilaID, MonitoringROSCII.dbo.Venu_Info.LCID,
    MonitoringROSCII.dbo.Venu_Info.VenuType, MonitoringROSCII.dbo.Venu_Info.VenuTypeOthr, MonitoringROSCII.dbo.Vw_UniformYes.RecUniformY,
    MonitoringROSCII.dbo.Vw_UniformYes.DistrictID, MonitoringROSCII.dbo.Vw_UniformYes.UpazilaID, MonitoringROSCII.dbo.Vw_UniformYes.LCID,
    MonitoringROSCII.dbo.Teacher_Training.DistrictID, MonitoringROSCII.dbo.Teacher_Training.UpazilaID, MonitoringROSCII.dbo.Teacher_Training.LCID,
    MonitoringROSCII.dbo.Teacher_Training.TcrRecFndTrn, dbo.LCProfile.UnionID, MonitoringROSCII.dbo.LC_Info.PrsnMale, MonitoringROSCII.dbo.LC_Info.PrsnFemale,
    MonitoringROSCII.dbo.LC_Info.PrsnStdTot, dbo.LCProfile.LCVill, dbo.Vw_Teacher_Active.TeachMob, RIGHT('00' + CONVERT(varchar, dbo.vw_Geocode.DivisionID), 2)
    + RIGHT('00' + CONVERT(varchar, dbo.vw_Geocode.DistrictID), 2) + RIGHT(CONVERT(varchar, dbo.vw_Geocode.UpazilaID), 2) + RIGHT('000' + CONVERT(varchar,
    dbo.Vw_Teacher_Active.LCID), 3), MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCStartHr,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCEndHr, dbo.Vw_LCProfile_QStudent_LCwise2013_3.NoStudent, dbo.PO.PO_NM_E, dbo.PO.PO_NM_B,
    dbo.vw_Geocode.Status, dbo.vw_Geocode.Phase, dbo.Vw_Teacher_Active.TeachYr, dbo.LCProfile.LCNm,
    MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.SpecialStatus, dbo.ACF_LCs.YearTrim, dbo.ACF_LCs.EduYr,
    dbo.Vw_LCProfile_QStudent_LCwise2013_3.Stu13, dbo.Vw_LCProfile_QStudent_LCwise2013_3.Stu45, dbo.ACF_LCs.SpecialStatus,
    MonitoringROSCII.dbo.Teacher_Profile.TcrPres, MonitoringROSCII.dbo.Teacher_Profile.TcrMtchLCProf,
    CASE WHEN NOT (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID IS NULL) AND LCStatus = 1 AND ((TcrPres = 1 AND TcrMtchLCProf = 2) OR
    TcrPres = 2) THEN 0 ELSE 3000 END
    HAVING (dbo.ACF_LCs.YearTrim = 1) AND (dbo.ACF_LCs.EduYr = 2014) AND (dbo.LCProfile.LCYr < 2013) AND
    (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCVisitYr = 2014) AND (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.Trimister = 1) AND
    (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.VisitType = 3) AND (NOT (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID IS NULL)) OR
    (dbo.ACF_LCs.YearTrim = 1) AND (dbo.ACF_LCs.EduYr = 2014) AND (dbo.LCProfile.LCYr < 2013) AND
    (MonitoringROSCII.dbo.VwComplianceMonitoringBasicInfo.LCID IS NULL)
    ORDER BY dbo.vw_Geocode.DivisionID
    Another problem:
    Let's say I have a table with id, name, place, address, timeof_attendance, and an id which is of type bigint. This table has 1,800,000 records and grows by 5,000 records each day. From it I am finding the attendance for every day, so I have to create 4 nested views to come to a result. Now my query times out; if I delete old data, then it works. This is the kind of problem I am facing.
    Please advise me.
    Thanks

  • How to identify segments when setting up partitioning for a large table?

    I have a table whose size is about 3 GB. There is a code column in this table with 20 distinct values, and I am trying to create a list partition on this column. How can I assign a segment to each value of this partition? In my database, fewer than 10 segments are available. If I want the best performance, should each partition be on a different segment or a different device? Should each segment have enough space to hold all the data? What happens if a segment is smaller, for example, if I only have 4 segments, each with 500 MB?
    If I remove or change the partitioning strategy, for example change the type to range, will the system release the partitions on the segments automatically?

    This section of the performance and tuning guide addresses all of these concerns.  Give it a good read and post questions that you have about the documentation:
    http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc00841.1570/html/phys_tune/title.htm

  • SQL access for large table

    Hi,
    If a table has more than 6,000,000 records, is there any way to optimise access so that it is faster?
    I try not to use SQL statements a lot; I prefer to manipulate data in an internal table, but I still need an initial SELECT statement to copy the data into the internal table.
    Any advice?
    Thanks

    Tips:
    1) In the SELECT, include all primary keys in the WHERE condition to fetch the data.
    2) Declare the table without a header line and without the OCCURS statement, and use a work area to handle it.
    Ex:-
    TYPES: BEGIN OF gty_kna1,                          " General Data in Customer Master
             kunnr TYPE kna1-kunnr,                    " Payer Number
             name1 TYPE kna1-name1,                    " Name1
             telf1 TYPE kna1-telf1,                    " Communication
             konzs TYPE kna1-konzs,                    " Corporate Group
           END OF gty_kna1.
    DATA: gs_kna1 TYPE gty_kna1,                       " Work area
          gt_kna1 TYPE TABLE OF gty_kna1.              " Internal table without header line
    Note:
    •     In a SELECT statement, select only the fields (field list) that are needed, in the order in which they reside on the database; the network load is then considerably less. The number of fields can be restricted in two ways: using a field list in the SELECT clause of the statement, or by using a view defined in the ABAP/4 Dictionary. The use of a view has the advantage of better reusability.
    •     Use SELECT SINGLE instead of a SELECT-ENDSELECT loop when the entire key is available. SELECT SINGLE requires one communication with the database system, whereas SELECT-ENDSELECT needs two.
    •     Always specify the conditions in the WHERE clause instead of checking them with CHECK statements; the database system can then use an index (if possible) and the network load is considerably less. If you check the conditions with CHECK statements, the contents of the whole table must be read from the database files into the DBMS cache and transferred over the network; if the conditions are specified in the WHERE clause, the DBMS reads exactly the data that is needed.
    •     Do not embed complex code within a SELECT / ENDSELECT loop.
    •     Avoid complex WHERE clauses, since they are poison for the statement optimizer in any database system.
    •     For all frequently used SELECT statements, try to use an index. You use an index whenever you specify (a generic part of) the index fields, concatenated with logical ANDs, in the SELECT statement's WHERE clause.
    •     When loading data into an internal table, use INTO TABLE or APPENDING TABLE instead of a SELECT/APPEND combination. It is always faster to use the INTO TABLE version of a SELECT statement than to use APPEND statements.
    •     Use a select list with aggregate functions instead of checking and computing, when trying to find the maximum, minimum, sum, average, or count of a database column.
    Reward points if useful.
    Minal
