Store result set into cache

I want to store a result set (the output of a SELECT clause) in a cache and reuse it in a later part of the code.
Kindly help me out. (Oracle 10g)

I am inserting values into 2 tables:
'INSERT INTO ' || in_fct_table || ' NOLOGGING SELECT * FROM ' ||
in_fct_table || '_STG PARTITION(' || f_pop_partition.partition_name ||
') LOG ERRORS INTO ' || in_fct_table || '_D(''' ||
f_pop_partition.partition_name || ''') REJECT LIMIT UNLIMITED';
Duplicate rows are stored in the _D (suffix) table, and the remaining data (without duplicates) should be stored in in_fct_table (a variable holding the table name), based on a partition.
The data from the in_fct_table table then has to be populated into some other table, so for that we need to capture the data from in_fct_table.

Similar Messages

  • Can we store result set in session..??

    Can we store a result set object in the session? If yes, say I close the connection and use that result set object in some other page - what happens if I call updateRow(), deleteRow() or insertRow()?

    You can store a result set object in the session.
    However, you also have to keep that result set's connection open and dedicated to that result set.
    In that scenario you will have one DB connection per session. Not very nice, and NOT recommended.
    Once you close a ResultSet's connection you shouldn't access the ResultSet. Simple as that.
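    If the data really has to survive beyond the connection, the usual alternative is to copy the rows you need into a plain, disconnected structure before closing, and store that in the session instead of the ResultSet itself. A minimal sketch, assuming a hypothetical employees query and column names:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    // Copy the rows into a disconnected structure so the connection can be closed.
    // The table and column names here are only illustrative.
    public class DisconnectedCopy {
        public static List<Map<String, Object>> fetchEmployees(Connection conn) throws SQLException {
            List<Map<String, Object>> rows = new ArrayList<Map<String, Object>>();
            PreparedStatement ps = conn.prepareStatement("select emp_id, emp_name from employees");
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                Map<String, Object> row = new HashMap<String, Object>();
                row.put("emp_id", rs.getObject("emp_id"));
                row.put("emp_name", rs.getObject("emp_name"));
                rows.add(row);
            }
            rs.close();
            ps.close();
            return rows; // safe to put in the session; no open cursor behind it
        }
    }
    The returned list can go into the session and the connection back to the pool; of course updateRow(), deleteRow() and insertRow() are then no longer available - changes have to go through a fresh statement.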

  • Olap cache not storing result sets for certain queries

    Hi - another cache question
    My result set for a query is not getting stored in the OLAP cache - the query variable level is stored OK, but not the result set.
    Different queries store their result sets OK.
    There is nothing in the query or cube properties that should disable caching of the result set.
    There is plenty of space available, and there have not been any changes to the query, the data loaded, or the aggregates.
    Any ideas?
    cheers - Neil

    I would do a performance check in RSRT2. This will usually give you some additional information related to the OLAP cache.
    Hope that helps.

  • Caching Result Set ?

    folks
    I have a search page which returns a result set, and the results are put in the session so they can be accessed when the user clicks on the page numbers (pagination) in the results pane.
    Is there any way we can store or cache this and access it there, instead of fetching it off the session?

    You can store the data as a multi-dimensional array in JavaScript on the rendered JSP page, as a JavaScript function. It exists on the client side (browser) and you can use an onClick event on the page's button to call up various parts of the array and display them to the user. That way, your user doesn't have to submit the page back to the servlet to paginate to the next page; the data shows up immediately instead. You'll have to read up on JavaScript to learn how to do this. (Also, I assume you are storing the resultant data in some type of array and not the raw ResultSet.)
    However, if so much data is returned to the user that he needs pagination, I suggest you add filter text fields to allow him to limit what data is returned so pagination is not needed. Pagination implies there is too much data for the user to effectively use at once (no one likes scrolling down a list of 200 items). For instance, instead of displaying all names in a list, add a filter so the user can search for all last names that begin with an A, or B, etc., through Z. Then, when displayed to the user, show the list sorted by your filter criteria (lastName). If there is still too much data in the list, I suggest putting up a vertical scrollbar rather than pagination.
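    For completeness, the session-based approach the question describes usually boils down to something like the following server-side sketch; the JavaScript approach above simply moves the same slicing to the browser. The attribute name, request parameter, and page size are illustrative assumptions:
    import java.util.List;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;
    // Minimal server-side pagination over a result list already copied out of the ResultSet.
    public class SearchPager {
        private static final int PAGE_SIZE = 25;
        @SuppressWarnings("unchecked")
        public static List<String[]> currentPage(HttpServletRequest request) {
            HttpSession session = request.getSession();
            // Assumes the search servlet already stored the copied rows under "searchResults".
            List<String[]> allRows = (List<String[]>) session.getAttribute("searchResults");
            int page = 1;
            String p = request.getParameter("page");
            if (p != null) {
                page = Integer.parseInt(p);
            }
            int from = Math.min((page - 1) * PAGE_SIZE, allRows.size());
            int to = Math.min(from + PAGE_SIZE, allRows.size());
            return allRows.subList(from, to); // rows to render for this page
        }
    }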

  • 11g Client result set caching in OCI

    I'm trying out this feature in the 11.1.0.6 release. My understanding of this feature is that when enabled with the appropriate server-side init.ora parameters, a 11g OCI client connecting to the instance will cache SQL results locally in some fixed amount of RAM on the client. The idea is that network roundtrips would simply disappear in this situation.
    I'm not sure if it's working--or how to tell if it is. I have an 11g instance running and put the 11g client, including SQL*Plus, on a separate box, setting up TNS connectivity from client to server. Pretty standard stuff. I can connect fine and run the same query over and over, but I see incrementing execution counts on the database side and network traffic between the client and server, so I'm guessing that the client-side caching isn't happening. There doesn't seem to be a ton of clear documentation on this feature so I wanted to see if anyone else has kicked it around.
    Bob

    I am also facing the same issue (enabling client result set caching). I am using Oracle Database 11g Release 11.2.0.2.0.
    I have made the configuration changes below:
    1. Enabled the client result set cache by setting the server-side parameter 'client_result_cache_size' to 10485760 (10 MB)
    2. Restarted the Oracle instance after setting the above parameter
    3. Added a table annotation by executing the statement ALTER TABLE emp RESULT_CACHE (MODE FORCE). I verified that the annotation was applied by querying the user tables later.
    4. Enabled statement caching on the client side, i.e. on the JDBC driver.
    5. Used prepared statements to execute the query so that statement caching kicks in. From the driver logs I verified that execution of subsequent queries after the first one used the same statement handle.
    After executing the select prepared statement query three times I checked the CLIENT_RESULT_CACHE_STATS$ view, but this view didn't return any rows.
    As part of troubleshooting I even tried adding the /*+ RESULT_CACHE */ hint to the query, but the view still didn't give any result.
    Also, on enabling SQL trace I could see from tkprof that every execution of the query increased the number of rows fetched on the server, which indicates that the client result set caching in OCI isn't working.
    Are there any steps which I have missed?
    Thanks in advance.

  • Can i store the Result set values as a Session variable

    Hi,
    I want the result set values of a query to be used many times, and I want the same result set between different page calls.
    I want all those records fetched from the database to be available in the next page without refetching from the database.
    Can anyone help me out? It's very urgent.
    Thanks and regards,
    Ravikiran
    mail to me at : [email protected]

    "can i store the Result set values as a Session variable "
    Practically Yes u can
    but u want be able to accesses it in other pages
    u can try it and see
    the other thing u can do is store the values from the resultset in a object say vector and put vector in seesion and u can use this any where
    for e.g
    Vector v=new Vector();
    While(rs.next())
    v.addElement(rs.getString(1));
    v.addElement(rs.getString(2));
    session.putValue("myVector",v);
    now where u want to get it do this
    Vector myvec=(Vector)session.getValue("myVector");
    do do futher

  • Client side result set cache

    Hello,
    I am trying to get the client-side result set cache working, but I have no luck. :-(
    I'm using Oracle Enterprise Edition 11.2.0.1.0 and 11.2.0.2.0 as the client driver.
    Executing the query select /*+ result_cache*/ * from p_item via SQL*Plus or Toad generates a nice execution plan with a RESULT CACHE node, and v$result_cache_objects contains some rows.
    So I have checked that the server-side cache works; now I want to cache on the client side.
    My simple Java application looks like this:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.UUID;
    import oracle.jdbc.OracleConnection;
    import oracle.jdbc.OraclePreparedStatement;
    import oracle.jdbc.pool.OracleDataSource;
    public class ClientResultCacheTest {
        private static final String ID = UUID.randomUUID().toString();
        private static final String JDBC_URL = "jdbc:oracle:oci:@server:1521:ORCL";
        private static final String USER = "user";
        private static final String PASSWORD = "password";
        public static void main(String[] args) throws SQLException {
            OracleDataSource ds = new OracleDataSource();
            ds.setImplicitCachingEnabled(true);
            ds.setURL( JDBC_URL );
            ds.setUser( USER );
            ds.setPassword( PASSWORD );
            // The random ID in the comment makes this statement easy to find in v$sqlarea.
            String sql = "select /*+ result_cache */ /* " + ID + " */ * from p_item d " +
                         "where d.i_size = :1";
            for( int i = 0; i < 100; i++ ) {
                OracleConnection connection = (OracleConnection) ds.getConnection();
                connection.setImplicitCachingEnabled(true);
                connection.setStatementCacheSize(10);
                OraclePreparedStatement stmt = (OraclePreparedStatement) connection.prepareStatement( sql );
                stmt.setLong( 1, 176 );
                ResultSet rs = stmt.executeQuery();
                int count = 0;
                for( ; rs.next(); count++ );   // drain all rows, counting them
                rs.close();
                stmt.close();
                System.out.println( "Execution: " + getExecutions(connection) + " Fetched: " + count );
                connection.close();
            }
        }
        private static int getExecutions( Connection connection ) throws SQLException {
            String sql = "select executions from v$sqlarea where sql_text like ?";
            PreparedStatement stmt = connection.prepareStatement(sql);
            stmt.setString(1, "%" + ID + "%" );
            ResultSet rs = stmt.executeQuery();
            if( rs.next() == false )
                return 0;
            int result = rs.getInt(1);
            if( rs.next() )
                throw new IllegalArgumentException("not unique");
            rs.close();
            stmt.close();
            return result;
        }
    }
    The same query is executed 100 times and the statement execution count is incremented every time. I expect just 1 statement execution (client-database roundtrip) and 99 hits in the client result set cache. The view CLIENT_RESULT_CACHE_STATS$ is empty. :-(
    I'm using the Oracle documentation at http://download.oracle.com/docs/cd/E14072_01/java.112/e10589/instclnt.htm#BABEDHFF and I don't know why it doesn't work. :-(
    I'm thankful for every tip,
    André Kullmann

    I wanted to post a follow-up to (hopefully) clear up a point of potential confusion. That is, with the OCI Client Result Cache, the results are indeed cached on the client in memory managed by OCI.
    As I mentioned in my previous reply, I am not a JDBC (or Java) expert so there is likely a great deal of improvement that can be made to my little test program. However, it is not intended to be exemplary, didactic code - rather, it's hopefully just enough to illustrate that the caching happens on the client (when things are configured correctly, etc).
    My environment for this exercise is Windows 7 64-bit, Java SE 1.6.0_27 32-bit, Oracle Instant Client 11.2.0.2 32-bit, and Oracle Database 11.2.0.2 64-bit.
    Apologies if this is a messy post, but I wanted to make it as close to copy/paste/verify as possible.
    Here's the test code I used:
    import java.sql.ResultSet;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import oracle.jdbc.pool.OracleDataSource;
    import oracle.jdbc.OracleConnection;
    class OCIResultCache
    {
      public static void main(String args []) throws SQLException
      {
        OracleDataSource ods = null;
        OracleConnection conn = null;
        PreparedStatement stmt = null;
        ResultSet rset = null;
        String sql1 = "select /*+ no_result_cache */ first_name, last_name " +
                      "from hr.employees";
        String sql2 = "select /*+ result_cache */ first_name, last_name " +
                      "from hr.employees";
        int fetchSize = 128;
        long start, end;
        try
        {
          ods = new OracleDataSource();
          ods.setURL("jdbc:oracle:oci:@liverpool:1521:V112");
          ods.setUser("orademo");
          ods.setPassword("orademo");
          conn = (OracleConnection) ods.getConnection();
          conn.setImplicitCachingEnabled(true);
          conn.setStatementCacheSize(20);
          stmt = conn.prepareStatement(sql1);
          stmt.setFetchSize(fetchSize);
          start = System.currentTimeMillis();
          for (int i = 0; i < 10000; i++)
          {
            rset = stmt.executeQuery();
            while (rset.next())
            {
              // drain all rows; only the elapsed time matters here
            }
            if (rset != null) rset.close();
          }
          end = System.currentTimeMillis();
          if (stmt != null) stmt.close();
          System.out.println();
          System.out.println("Execution time [sql1] = " + (end-start) + " ms.");
          stmt = conn.prepareStatement(sql2);
          stmt.setFetchSize(fetchSize);
          start = System.currentTimeMillis();
          for (int i = 0; i < 10000; i++)
          {
            rset = stmt.executeQuery();
            while (rset.next())
            {
              // drain all rows; only the elapsed time matters here
            }
            if (rset != null) rset.close();
          }
          end = System.currentTimeMillis();
          if (stmt != null) stmt.close();
          System.out.println();
          System.out.println("Execution time [sql2] = " + (end-start) + " ms.");
          System.out.println();
          System.out.print("Enter to continue...");
          System.console().readLine();
        }
        finally
        {
          if (rset != null) rset.close();
          if (stmt != null) stmt.close();
          if (conn != null) conn.close();
        }
      }
    }
    In order to show that the results are cached on the client and thus server round-trips are avoided, I generated a 10046 level 12 trace from the database for this session. This was done using the following database logon trigger:
    create or replace trigger logon_trigger
    after logon on database
    begin
      if (user = 'ORADEMO') then
        execute immediate
        'alter session set events ''10046 trace name context forever, level 12''';
      end if;
    end;
    /
    With that in place I then did some environmental setup and executed the test:
    C:\Projects\Test\Java\OCIResultCache>set ORACLE_HOME=C:\Oracle\instantclient_11_2
    C:\Projects\Test\Java\OCIResultCache>set CLASSPATH=.;%ORACLE_HOME%\ojdbc6.jar
    C:\Projects\Test\Java\OCIResultCache>set PATH=%ORACLE_HOME%\;%PATH%
    C:\Projects\Test\Java\OCIResultCache>java OCIResultCache
    Execution time [sql1] = 1654 ms.
    Execution time [sql2] = 686 ms.
    Enter to continue...
    This is all on my laptop, so results are not stellar in terms of performance; however, you can see that the portion of the test that uses the OCI client result cache did execute in approximately half of the time as the non-cached portion.
    But, the more compelling data is in the resulting trace file which I ran through the tkprof utility to make it nicely formatted and summarized:
    SQL ID: cqx6mdvs7mqud Plan Hash: 2228653197
    select /*+ no_result_cache */ first_name, last_name
    from
    hr.employees
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute  10000      0.10       0.10          0          0          0           0
    Fetch    10001      0.49       0.54          0      10001          0     1070000
    total    20002      0.60       0.65          0      10001          0     1070000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 94 
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
           107        107        107  INDEX FULL SCAN EMP_NAME_IX (cr=2 pr=0 pw=0 time=21 us cost=1 size=1605 card=107)(object id 75241)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                   10001        0.00          0.00
      SQL*Net message from client                 10001        0.00          1.10
    SQL ID: frzmxy93n71ss Plan Hash: 2228653197
    select /*+ result_cache */ first_name, last_name
    from
    hr.employees
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.01          0         11         22           0
    Fetch        2      0.00       0.00          0          0          0         107
    total        4      0.00       0.01          0         11         22         107
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 94 
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
           107        107        107  RESULT CACHE  0rdkpjr5p74cf0n0cs95ntguh7 (cr=0 pr=0 pw=0 time=12 us)
             0          0          0   INDEX FULL SCAN EMP_NAME_IX (cr=0 pr=0 pw=0 time=0 us cost=1 size=1605 card=107)(object id 75241)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      log file sync                                   1        0.00          0.00
      SQL*Net message from client                     2        1.13          1.13
    The key differences here are the execute, fetch, and SQL*Net message values. Using the client-side cache, the values drop dramatically due to getting the results from client memory rather than round-trips to the server.
    Of course, corrections, clarifications, etc. welcome and so on...
    Regards,
    Mark

  • Stored procedure which gets a list of values separated by semicolons and returns the result set as a semicolon-separated string

    Hello,
    I am trying to write a stored procedure in which I get i_group_id as a list of groups separated by semicolons, like 1,2,14,17,23.
    I want the list of emails based on those groups, and the result set should be a list of emails separated by semicolons.
    If you know how to do that in a stored procedure, please help me. I appreciate your help.
    Thanks.
    PROCEDURE get_groups_email(i_group_id    IN VARCHAR2,
                               x_group_email_dtl_cur OUT resultcur)
    IS
    x_group_email VARCHAR2(4000):=NULL;
    BEGIN                           
       FOR i IN (SELECT   TRIM(emp.email) email
                   FROM   ems.employee emp,
                          ems.groups_employee egrp
                  WHERE   egrp.group_id IN (i_group_id)
                    AND   emp.person_id = egrp.person_id) LOOP
        x_group_email:= x_group_email || i.email ||';';
      END LOOP;
      x_group_email := RTRIM(x_group_email,';');
       OPEN x_group_email_dtl_cur FOR 
         SELECT   x_group_email
           FROM   DUAL; 
    DBMS_OUTPUT.PUT_LINE('x_group_email:' || x_group_email);                                       
    END get_groups_email;

    1013527 wrote:
    I am using Oracle 9.7.2. Not 11g.
    No database at hand to provide a working example.
    So use http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:73830657104020 to split your list into rows.
    Join that to your table of e-mail addresses,
    then take a look at http://www.sqlsnippets.com/en/topic-11787.html, maybe choosing http://www.sqlsnippets.com/en/topic-12087.html,
    and you're done.
    Regards
    Etbin
    something to play with (still NOT TESTED!)
    with
    e_mails as
    (select 1 user_id,'alpha' || chr(64) || 'domain.eu' e_mail from dual union all
    select 2,'beta' || chr(64) || 'domain.eu' from dual union all
    select 3,'gamma' || chr(64) || 'domain.eu' from dual union all
    select 4,'delta' || chr(64) || 'domain.eu' from dual union all
    select 5,'epsilon' || chr(64) || 'domain.eu' from dual union all
    select 6,'zeta' || chr(64) || 'domain.eu' from dual union all
    select 7,'eta' || chr(64) || 'domain.eu' from dual union all
    select 8,'theta' || chr(64) || 'domain.eu' from dual union all
    select 9,'iota' || chr(64) || 'domain.eu' from dual union all
    select 10,'kappa' || chr(64) || 'domain.eu' from dual union all
    select 11,'lambda' || chr(64) || 'domain.eu' from dual union all
    select 12,'mu' || chr(64) || 'domain.eu' from dual union all
    select 13,'nu' || chr(64) || 'domain.eu' from dual union all
    select 14,'xi' || chr(64) || 'domain.eu' from dual union all
    select 15,'omicron' || chr(64) || 'domain.eu' from dual union all
    select 16,'pi' || chr(64) || 'domain.eu' from dual union all
    select 17,'rho' || chr(64) || 'domain.eu' from dual union all
    select 18,'sigma' || chr(64) || 'domain.eu' from dual union all
    select 19,'tau' || chr(64) || 'domain.eu' from dual union all
    select 20,'upsilon' || chr(64) || 'domain.eu' from dual union all
    select 21,'phi' || chr(64) || 'domain.eu' from dual union all
    select 22,'chi' || chr(64) || 'domain.eu' from dual union all
    select 23,'psi' || chr(64) || 'domain.eu' from dual union all
    select 24,'omega' || chr(64) || 'domain.eu' from dual
    ),
    groups as
    (select 1 g_id,'1,15,21,17' members from dual union all
    select 2,'23,10,3,20,7,23,15,9' from dual union all
    select 3,'3,4,5,6,7,8' from dual union all
    select 4,'23,24,15,16,7,18' from dual
    )
    select g_id,
           substr(sys_connect_by_path(e_mail,';'),2) e_mail_list
      from (select g.g_id,
                   e.e_mail,
                   row_number() over (partition by g.g_id order by e.user_id) rn
              from e_mails e,
                   groups g
             where instr(','||g.members||',',','||to_char(e.user_id)||',') > 0
               and instr(','||:group_list||',',','||to_char(g.g_id)||',') > 0
           )
     where connect_by_isleaf = 1
    start with rn = 1
    connect by rn = prior rn + 1
            and g_id = prior g_id

  • Stored procedure which gets a list of values separated by semicolons and returns the result set as a semicolon-separated string

    Hello,
    I am trying to write a stored procedure in which I get i_group_id as a list of groups separated by semicolons, like 1,2,14,17,23.
    I want the list of emails based on those groups, and the result set should be a list of emails separated by semicolons.
    Can anybody please help me with that? Thanks in advance.
    PROCEDURE get_groups_email(i_group_id    IN VARCHAR2,
                               x_group_email_dtl_cur OUT resultcur)
    IS
    x_group_email VARCHAR2(4000):=NULL;
    BEGIN                            
       FOR i IN (SELECT   TRIM(emp.email) email
                   FROM   ems.employee emp,
                          ems.groups_employee egrp
                  WHERE   egrp.group_id IN (i_group_id)
                    AND   emp.person_id = egrp.person_id) LOOP
        x_group_email:= x_group_email || i.email ||';';
      END LOOP;
      x_group_email := RTRIM(x_group_email,';');
       OPEN x_group_email_dtl_cur FOR  
         SELECT   x_group_email
           FROM   DUAL;  
    DBMS_OUTPUT.PUT_LINE('x_group_email:' || x_group_email);                                        
    END get_groups_email;

    >
    Can anybody please help me for that
    >
    Not in this forum they can't. This forum, as the title says, is for SQL Developer questions only.
    Please mark this question ANSWERED and repost it in the SQL and PL/SQL forum
    https://forums.oracle.com/community/developer/english/oracle_database/sql_and_pl_sql

  • Can we pass temporary result set to the  procedure?

    Hi,
    The result set is stored in temporary storage. Can we pass that result set to the procedure?
    Thanks and regards
    Gowtham Sen.

    I'm still unclear just what this result set is... Is it a table or a cursor or something else?
    If you mean the physical result set of a SQL query, there is no such thing in Oracle. Oracle does not create and store a result set in memory containing the rows of the SQL SELECT. Creating such sets in memory does not scale. A single SELECT on such a database could kill the performance of the entire database by exhausting all memory with a single large physical result set.
    Oracle "results" live in the database cache (residing in the SGA). Rows (as data blocks) are paged into and out of this cache as demand dictates. When PL/SQL code, for example, fetches a row, the SQL engine grabs the row from the db cache (SGA) and copies it to the PGA (the private memory area of the PL/SQL process). The row may also not yet exist in the db cache - in which case it needs a physical read from disk to get the data block containing that row into the db cache (after which it is copied to the PGA).
    A PL/SQL process can do a bulk fetch - e.g. fetch 100 rows from the SQL query/cursor at a time. In that case, 100 rows are copied from the SGA db cache to the PGA.
    At no time is there a single large, unique and dedicated memory struct in the SGA that contains the complete "result set" of a SQL query.
    Once you have fetched that row, that is it. Deal is done. You cannot reverse the cursor and fetch the row again. After you have fetched the last row in that cursor, you cannot pass that cursor to another process - the cursor is now empty. That other process cannot rewind the cursor and start fetching from the 1st row again. You would need to pass the SQL to that process in order for it to create its own cursor - also keeping in mind that the rows can have changed in between, so this other process could now see different results with its cursor.
    If you want to create such a physical temporary result set that is consistent and re-usable, you can use a temporary table - and insert the results of the SELECT into this temp table for further processing. This temp table is session specific and is automatically destroyed when the session terminates.
    A comment though - it sounds like you're approaching the data warehouse processing (scrubbing, transformation and loading of data) as a row-by-row process.
    That is a flawed approach. Row-by-row processing does not scale. One should deal with data sets, especially when the volumes are large. One should also attempt to perform minimal passes through a data set. Processing a bunch of rows and then passing those rows to another process to do some processing on the same rows means multiple passes through the same data. That is very inefficient, performance- and resource-wise.

  • How to handle large result sets?

    Hi All,
    I have a large result set to be displayed to the user using JSPs. The problem is that the result set is too big, so I can't display all the records in a single push. I want to show the results page by page, say 25 per page. Now, for every page I would have to fetch data from the database, which means many database calls, which is not advisable. Or I can cache the data in a CachedRowSet to reduce database calls, but in that case you have to store all the data in memory, which is not a good solution when you have very large data sets. Can anybody suggest a solution to this problem?

    The best thing for you to do is to implement paging logic in conjunction with a scrollable result set (JDBC 2.0+).
    The logic would go like this, assuming 30 rows per page:
    - keep track of which page the user is on (e.g. page 3)
    - issue the full sql
    - scroll thru only the rows in the current page (e.g. rows 90-120)
    - copy the page's rows to value objects
    - close the resultset, statement, and connection
    In the above example, you would scroll to row 90 using rs.absolute(90).
    The efficiency comes from the fact that you're using a scrollable resultset. By using this, only the rows that you scroll thru are extracted from the database. I performed some simple testing with my data, and the scrollable resultset gave about a 10x improvement in performance.
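    A hedged sketch of that flow (the table, column, and SQL are illustrative assumptions, and the connection is left to the caller to manage):
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;
    // Fetch one page of rows using a scrollable ResultSet (JDBC 2.0+).
    public class PageFetcher {
        public static List<String> fetchPage(Connection conn, int page, int pageSize) throws SQLException {
            List<String> names = new ArrayList<String>();
            Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
            ResultSet rs = stmt.executeQuery("select last_name from employees order by last_name");
            int first = (page - 1) * pageSize + 1;
            if (rs.absolute(first)) {            // jump straight to the first row of the page
                int copied = 0;
                do {
                    names.add(rs.getString("last_name"));
                    copied++;
                } while (copied < pageSize && rs.next());
            }
            rs.close();
            stmt.close();
            return names; // value objects for this page; the caller closes the connection
        }
    }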
    Good luck!

  • Oracle Coherence implementation for JDBC result set

    Instead of ORM, will Oracle Coherence support accessing the data using normal JDBC calls?
    If it supports JDBC, what is the CacheStore implementation for this?
    Can you provide me some example of how to cache the JDBC ResultSet values and retrieve them back from the cache?

    Hi,
    I think you are mixing up several concepts.
    user13266701 wrote:
    Instead of ORM, is Oracle coherence will support for accessing the data using normal JDBC calls?
    At the moment (up to 3.5.x), there is no way to query Coherence via JDBC out-of-the-box. There were a couple of initiatives to provide such features, but they were not part of the product, only initiatives in the Incubator, and I believe they have been dropped.
    Upcoming releases may bring a replacement, although I don't think it would be outright JDBC compatible, as that would imply converting object-oriented data to a result set so that you can convert it back, which would in effect waste CPU resources.
    If it supports JDBC, what is the CacheStore implementation for this?
    CacheStores have nothing to do with how you access the cache itself. Those are internal operations invoked by Coherence itself. Don't look for a relationship between querying the cache and doing anything with the cache store. The only relationship between them is that key-based operations on the cache may lead to operations on the cache store, but whether those happen or not depends on the cache content, too.
    Can you provide me some example how to cache the JDBC ResultSet values and retrieve back from Cache?
    If I understand correctly, that is yet a third concept. If you indeed want to cache JDBC ResultSets obtained from the database, then I have to disappoint you: it is not directly possible, as the ResultSet object is an abstraction over an open database cursor, and hence it holds a database resource. Active resources cannot be cached in Coherence. You may acquire a RowSet from the JDBC driver, which would be possible to cache; however, it may not be the most efficient thing to do, due to the possibly non-optimal serialization format used by the RowSet to serialize itself.
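    To illustrate the RowSet point above, here is a minimal hedged sketch: the ResultSet is copied into a disconnected javax.sql.rowset.CachedRowSet, which is serializable and can therefore go into a Coherence NamedCache. The cache name, key, and SQL are illustrative assumptions, not a recommended design.
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.sql.rowset.CachedRowSet;
    import javax.sql.rowset.RowSetProvider;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    // Copy a ResultSet into a disconnected CachedRowSet and put it into a Coherence cache.
    public class RowSetCachingSketch {
        public static void cacheQueryResult(Connection conn) throws Exception {
            CachedRowSet rowSet = RowSetProvider.newFactory().createCachedRowSet();
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("select emp_id, email from employees");
            rowSet.populate(rs);                   // detach the data from the open cursor
            rs.close();
            stmt.close();
            NamedCache cache = CacheFactory.getCache("query-results");
            cache.put("employees-emails", rowSet); // CachedRowSet is serializable, unlike ResultSet
        }
        public static CachedRowSet readBack() {
            NamedCache cache = CacheFactory.getCache("query-results");
            return (CachedRowSet) cache.get("employees-emails");
        }
    }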
    So if I did not answer the questions you wanted to ask, would you please explain in more detail what exactly you would like to do?
    Best regards,
    Robert

  • Access Subform - Can the Subforms Source Object be defined by an SQL SP result set?

    Hi Guys,
    I can't clearly answer this question with a yes or no.
    I have an Access subform that I am populating with a recordset from a stored procedure. Fairly early on I discovered that for this to work correctly the Source Object for the subform control must be set first, and most examples (including a working version
    of my own) achieve this by defining an Access query and setting the Source Object to it.
    What I would really like to do is define the Source Object using the results of a SQL stored procedure, using ONLY code within VBA.
    Now, before anyone starts providing alternatives ("why don't you just...") I'll note that I have a semi-complex solution that makes most non-VBA-based approaches ineffective. While it does work at present with an Access query, I need
    to make the result set more dynamic, meaning that in future I will not know how many columns will be returned or their names; only the SP will have this information.
    Thanks in advance!

    Well, after much trial and error I've got something which does what I want, although I'm not thrilled that I couldn't do this via my existing ADODB connections. In any case, an example is provided below:
        Dim db As DAO.Database
        Dim qdf As New DAO.QueryDef
        Set db = CurrentDb()
       'qryMyTest refers to a dummy Access query (non pass through). 
        With db.QueryDefs("qryMyTest")
            .Connect = CurrentDb.TableDefs("tblSomeTestSQLTable").Connect
            .SQL = "exec sp_MyTestSP"
            Me.subfrmTest1.SourceObject = "Query.qryMyTest"
        End With   
        Set qdf = Nothing
    I've also marked your response as an answer, Alphonse, as it led me onto the right path.

  • How do I create an Event Handler for an Execute SQL Task in SSIS if its result set is empty

    So the precedence for my entire package executing is based on my first SELECT of my table and an updatable column. If that SELECT results in an empty result set, how do I create an Event Handler to handle an empty result set?
    A Newbie to SSIS.
    I appreciate your review and am hopeful for a reply.
    PSULionRP

    It depends upon what you want to do in the event handler. This is what you can do:
    Store the result set from the SELECT in a user variable.
    Pass this user variable to a Script Task.
    In the Script Task, do whatever you want to do, including failing the package. This can be done by failing the Script Task, which in turn fails the package, with something like:
    Dts.TaskResult = Dts.Results.Failure
    Abhinav http://bishtabhinav.wordpress.com/

  • Large query result set

    Hi all,
    At the moment we have some Java classes (not EJB - CMP/BMP) for search in
    our EJB application.
    Now we have a problem: the number of records has grown too high (millions) and
    sometimes a query results in the retrieval of millions of records. This results in
    too much memory consumption in our EJB application. What is the best way to
    address this issue?
    Any help will be highly appreciated.
    Thanks & regards,
    Parvez

    You can think of the following options:
    1) paging: read only a few thousand rows at a time and maintain an index to page
    through the complete dataset
    2) caching!
    a) you can create a serialized data file on the server to cache the result set
    and use that to browse through (see the sketch below). You may do on-the-fly
    compression/decompression while sending data to the client.
    b) an applet-based solution where the caching could be on the client side. Look at
    http://www.sitraka.com/software/jclass/cs_ims.html
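    As a rough illustration of option 2(a), here is a hedged sketch of caching a page-able copy of the results in a serialized file on the server; the row type, file location, and page size are assumptions:
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.util.ArrayList;
    import java.util.List;
    // Cache the full result (already copied into serializable row arrays) in a file on the server,
    // then read it back and slice out one page at a time.
    public class FileResultCache {
        public static void write(List<String[]> rows, File cacheFile) throws IOException {
            ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(cacheFile));
            out.writeObject(new ArrayList<String[]>(rows)); // ArrayList is serializable
            out.close();
        }
        @SuppressWarnings("unchecked")
        public static List<String[]> readPage(File cacheFile, int page, int pageSize)
                throws IOException, ClassNotFoundException {
            ObjectInputStream in = new ObjectInputStream(new FileInputStream(cacheFile));
            List<String[]> rows = (List<String[]>) in.readObject();
            in.close();
            int from = Math.min((page - 1) * pageSize, rows.size());
            int to = Math.min(from + pageSize, rows.size());
            return rows.subList(from, to);
        }
    }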
    thanks,
    Srinivas
    "chauhan" <[email protected]> wrote in message
    news:[email protected]...
    Thanks Slava Imeshev,
    We already have search criteria and a limit. When records exceed that limit
    we prompt the user that it may take some time, do you want to proceed? If
    he clicks yes then we retrieve those records. This results in a lot of memory
    consumption.
    I was thinking if there is some way that from the database I can retrieve a
    block of records at a time rather than all records of a query. I wonder how
    internet search sites work, where thousands of sites/pages match the criteria and
    the client can move back & forth on any page.
    Regards,
    Parvez
    "Slava Imeshev" <[email protected]> wrote in message
    news:[email protected]...
    Hi chauhan,
    You may want to narrow search criteria along with processing a
    limited number of resulting records. I.e. if the size of the result
    is bigger than a limit, you stop fetching results and notify the client
    that search criteria should be narrowed.
    HTH.
    Regards,
    Slava Imeshev
    "chauhan" <[email protected]> wrote in message
    news:[email protected]...
    Hi all,
    At the moment we have some java classes (not ejb - cmp/bmp) for
    search
    in
    our ejb application.
    Now we have a problem i.e. records have grown too high( millions ) and
    sometimes query results in retrieval of millions of records. It
    results
    in
    too much memory consumtion in our ejb application. What is the best
    way
    to
    address this issue.
    Any help will be highly appreciated.
    Thanks & regards,
    Parvez
