How to handle a large result set from a SQL query

Hi,
I have a question about how to handle a large result set from a SQL query.
My query returns more than a million records. However, the Query Template has a "row count" parameter: if I don't specify it, the query result contains only 100 records by default; if I do specify it, the result is capped at that number.
Is there any way to get around this row-count limit? I don't want any restriction on the number of records returned by a query.
Thanks a lot!

No human can manage that much data...in a grid, a chart, or a direct-connected link to the brain.
What you want to implement (much like other customers with similar requirements) is a drill-in and filtering model that helps the user identify and zoom in on the data of relevance, rather than forcing them to scroll through thousands or millions of records.
You can also use a time-based paging model, so that you only deal with one time "slice" per request (e.g. an hour, a day, etc.) and provide a scrolling window. This is also how applications commonly deal with large datasets.
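As a rough sketch of that time-slice idea (assuming a JDBC source; the table, column, and class names below are illustrative, not from any specific product):

import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class TimeSliceFetcher {
    // Fetch one time "slice" per request rather than the whole result set.
    // Assumes an indexed event_time column.
    public static List<String> fetchSlice(Connection conn,
                                          Timestamp windowStart,
                                          Timestamp windowEnd) throws SQLException {
        List<String> rows = new ArrayList<String>();
        PreparedStatement ps = conn.prepareStatement(
                "SELECT payload FROM events " +
                "WHERE event_time >= ? AND event_time < ? ORDER BY event_time");
        ps.setTimestamp(1, windowStart);
        ps.setTimestamp(2, windowEnd); // e.g. windowStart plus one hour
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            rows.add(rs.getString("payload")); // only this slice is ever in memory
        }
        rs.close();
        ps.close();
        return rows;
    }
}

The UI's "next"/"previous" actions then simply advance or rewind the window.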
I would suggest describing your application in more detail, and we can offer design recommendations and ideas.
- Rick

Similar Messages

  • How to handle large result sets?

    Hi All,
    I have a large result set to be displayed to the user via JSPs. The problem is that the result set is too big to display all the records at once, so I want to show the results page by page, say 25 per page. If I fetch data from the database for every page, there are going to be many database calls, which is not advisable. Alternatively, I can cache the data in a CachedRowSet to reduce database calls, but then I have to hold all the data in memory, which is not a good solution for very large data sets. Can anybody suggest a solution to this problem?

    The best thing for you to do is to implement paging logic in conjunction with a scrollable resultset (JDBC 2.0+).
    The logic would go like this, assuming 30 rows per page:
    - keep track of which page the user is on (e.g. page 3)
    - issue the full sql
    - scroll through only the rows in the current page (e.g. rows 90-119)
    - copy the page's rows to value objects
    - close the resultset, statement, and connection
    In the above example, you would scroll to row 90 using rs.absolute(90).
    The efficiency comes from the fact that you're using a scrollable resultset: only the rows that you scroll through are extracted from the database. I performed some simple testing with my data, and the scrollable resultset was about 10x faster.
    Good luck!
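    A minimal sketch of that approach (JDBC 2.0+; the table, column, and class names are illustrative):

    import java.sql.*;
    import java.util.ArrayList;
    import java.util.List;

    public class PageFetcher {
        private static final int PAGE_SIZE = 30;

        // Copies one page of rows into value objects (plain Strings here) and
        // closes the resultset and statement before returning.
        public static List<String> fetchPage(Connection conn, int page) throws SQLException {
            List<String> rows = new ArrayList<String>();
            Statement stmt = conn.createStatement(
                    ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
            ResultSet rs = stmt.executeQuery("SELECT name FROM customers ORDER BY name");
            try {
                // first row of page n is n * PAGE_SIZE + 1 (ResultSet rows are 1-based)
                if (rs.absolute(page * PAGE_SIZE + 1)) {
                    int copied = 0;
                    do {
                        rows.add(rs.getString("name"));
                        copied++;
                    } while (copied < PAGE_SIZE && rs.next());
                }
            } finally {
                rs.close();
                stmt.close();
            }
            return rows;
        }
    }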

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions?

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accommodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well equipped server,
    unfortunately my development machine is not as powerful so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GCs, namely the parallel GC.
    Regards
    Marius
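    For reference, a minimal example of the JVM options discussed above (heap size, the overhead-limit switch, and the parallel collector); the jar name and the 8g value are illustrative:

    java -Xmx8g -XX:+UseParallelGC -XX:-UseGCOverheadLimit -jar transform.jar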

  • How to execute entire result set of multiple sql statements via sp_executesql?

    I have a query that generates multiple insert statements (dynamic SQL). So when I execute it, my result set is a table of SQL insert statements (one insert statement per row in my source data table), like so:
    Select 'INSERT INTO [dbo].[Table_1] ([Col1]) VALUES (' + SrcData + ')' from SourceDataTbl
    How can I completely automate this and execute all these SQL statements via sp_executesql?
    My plan is to completely automate and execute all this via an SSIS package.
    As always, any help is greatly appreciated!
    FYI - this is a very simple version of what I am trying to do. My query probably plugs in 20+ values from the SourceDataTbl into each of the SQL insert statements.

    Ah, a small error in Visakh's post, which I failed to observe, and then I added one on my own.
    DECLARE @SQL nvarchar(max)
    SELECT @SQL =
       (SELECT 'INSERT INTO [dbo].[Table_1] ([Col1]) VALUES (' +  SrcData +
                ')' + char(10) + char(13)
        from SourceDataTbl
        FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)')
    EXEC sp_executesql @SQL
    Without ", TYPE" FOR XML returns a string when assigned to a variable. The TYPE thing produces a value of the XML data type, so that we can apply the value method and get string out of the XML.
    And why this? Because:
    DECLARE @str nvarchar(MAX)
    SELECT @str = (SELECT 'Kalle Anka & co' FOR XML PATH(''))
    SELECT @str
    SELECT @str = (SELECT 'Kalle Anka & co' FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)')
    SELECT @str
    Although the data type is string when ", TYPE" is not there, the content is still XML, so characters special to XML are entitized (for example, "&" becomes "&amp;").
    Confused? Don't worry; for what you are doing, this is mumbo-jumbo.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • JSP Servlet and convert the result set of an SQL Query To XML file

    Hi all
    I have a problem exporting my SQL query's results into an XML file. I have fixed my servlet and JSP so that I can display all the records in my database, which was the goal. Now I want to get the result set into the JSP so that I can create an XML file from it.
    This is my servlet, which calls the JSP page, and the JSP just behind it.
    //this is the servlet
    import java.io.*;
    import java.sql.*;
    import java.util.ArrayList;
    import java.util.logging.Logger;
    import javax.servlet.*;
    import javax.servlet.http.*;
    import javax.naming.*;
    import javax.sql.*;

    public class Campaign extends HttpServlet {

        private final static Logger log = Logger.getLogger(Campaign.class.getName());
        private final static String DATASOURCE_NAME = "jdbc/SampleDB";
        private DataSource _dataSource;

        public void setDataSource(DataSource dataSource) {
            _dataSource = dataSource;
        }

        public DataSource getDataSource() {
            return _dataSource;
        }

        public void init() throws ServletException {
            if (_dataSource == null) {
                try {
                    Context env = (Context) new InitialContext().lookup("java:comp/env");
                    _dataSource = (DataSource) env.lookup(DATASOURCE_NAME);
                    if (_dataSource == null)
                        throw new ServletException("`" + DATASOURCE_NAME + "' is an unknown DataSource");
                } catch (NamingException e) {
                    throw new ServletException(e);
                }
            }
        }

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException, ServletException {
            Connection conn = null;
            try {
                conn = getDataSource().getConnection();
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery("select post_id,comments,postname from app.posts");
                // out.println("The result:<br>");
                ArrayList<String> Lescomments = new ArrayList<String>();
                ArrayList<String> Lesidentifiant = new ArrayList<String>();
                ArrayList<String> Lesnoms = new ArrayList<String>();
                while (rs.next()) {
                    Lescomments.add(rs.getString("comments"));
                    Lesidentifiant.add(rs.getString("post_id"));
                    Lesnoms.add(rs.getString("postname"));
                }
                request.setAttribute("comments", Lescomments);
                request.setAttribute("id", Lesidentifiant);
                request.setAttribute("nom", Lesnoms);
                rs.close();
                stmt.close();
            } catch (SQLException e) {
                log.severe(e.getMessage());
            } finally {
                try {
                    if (conn != null)
                        conn.close();
                } catch (SQLException e) {
                }
            }
            // the parameters are correct - send the response page
            getServletContext().getRequestDispatcher("/Campaign.jsp").forward(request, response);
        }
    } // end of servlet
    // this is the JSP page that is called:
    <%@ page import="java.util.ArrayList" %>
    <%
    // retrieve the data
    ArrayList nom = (ArrayList) request.getAttribute("nom");
    ArrayList id = (ArrayList) request.getAttribute("id");
    ArrayList comments = (ArrayList) request.getAttribute("comments");
    %>
    <html>
    <head>
    <title></title>
    </head>
    <body>
    List of campaigns (here I will create the XML file; the problem is to display all rows)
    <hr>
    <table>
    <tr>
    <td>Comment</td>
    <td>
    <%
    for (int i = 0; i < comments.size(); i++) {
        out.print("<li>" + (String) comments.get(i) + "</li>\n");
    } // for
    %>
    </td>
    </tr>
    <tr>
    <td>nom</td>
    <td>
    <%
    for (int i = 0; i < nom.size(); i++) {
        out.print("<li>" + (String) nom.get(i) + "</li>\n");
    } // for
    %>
    </td>
    </tr>
    <tr>
    <td>id</td>
    <td>
    <%
    for (int i = 0; i < id.size(); i++) {
        out.print("<li>" + (String) id.get(i) + "</li>\n");
    } // for
    %>
    </td>
    </tr>
    </table>
    </body>
    </html>
    This is how I used to create an XML file in a JSP page alone, without the JSP/servlet concept:
    <%@ page import="java.sql.*" %>
    <%@ page import="java.io.*" %>
    <%
    // Identify a line-feed character to end each output line
    int iLf = 10;
    char cLf = (char) iLf;
    // Create a new empty file, which will contain the XML output
    File outputFile = new File("C:\\Users\\user\\workspace1\\demo\\WebContent\\YourFileName.xml");
    //outputFile.createNewFile();
    FileWriter outfile = new FileWriter(outputFile);
    // the header for the XML file
    outfile.write("<?xml version='1.0' encoding='ISO-8859-1'?>" + cLf);
    try {
        // Define the connection string and make a connection to the database
        Connection conn = DriverManager.getConnection("jdbc:derby://localhost:1527/SAMPLE", "app", "app");
        Statement stat = conn.createStatement();
        // Create a recordset
        ResultSet rset = stat.executeQuery("Select * From posts");
        // Expecting at least one record
        if (!rset.next()) {
            throw new IllegalArgumentException("No data found for the posts table");
        }
        outfile.write("<Table>" + cLf);
        // Walk the recordset (the first row was already fetched by the check above)
        do {
            outfile.write("<posts>" + cLf);
            outfile.write("<postname>" + rset.getString("postname") + "</postname>" + cLf);
            outfile.write("<comments>" + rset.getString("comments") + "</comments>" + cLf);
            outfile.write("</posts>" + cLf);
        } while (rset.next());
        outfile.write("</Table>" + cLf);
        // Everything must be closed
        rset.close();
        stat.close();
        conn.close();
        outfile.close();
    } catch (Exception er) {
    }
    %>

    Please state the problem you are having more clearly so we can help.
    I looked at your code, and here are a few things you might consider:
    It looks like you are putting freely typed-in comments from end users into an XML document.
    The problem with this is that the user may enter characters in his text that have special meaning
    to XML and will have to be escaped correctly. Some of these characters are the less-than, greater-than, and ampersand characters.
    You may have a similar problem displaying them on your JSP page, since there may be characters that are special to JSP as well.
    You will have to read up on how to deal with these special characters (I don't remember what the rules are). I seem to recall that
    if you use CDATA in your XML, you don't have to deal with those characters (I may be wrong).
    When you finish writing your code, test it by entering all keyboard characters to make sure they are processed, stored in the database,
    and re-displayed correctly.
    Also, it looks like you are putting business logic in your JSP page (creating an XML file).
    The JSP page is for displaying data ONLY and submitting back to a servlet. Put all your business logic in the servlet. Putting business logic in JSP is considered bad coding and will cause you many hours of headache trying to debug it. Also note: Java scriptlets in a JSP page run on the server when the page is rendered; they cannot be called from the client after the page has been delivered.
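    A minimal sketch of the escaping idea described above (the class name is illustrative); a real project would typically use a library or CDATA instead:

    public class XmlEscape {
        // Replaces the five XML special characters with their entities.
        public static String escape(String s) {
            StringBuilder sb = new StringBuilder(s.length());
            for (int i = 0; i < s.length(); i++) {
                char c = s.charAt(i);
                switch (c) {
                    case '&':  sb.append("&amp;");  break;
                    case '<':  sb.append("&lt;");   break;
                    case '>':  sb.append("&gt;");   break;
                    case '"':  sb.append("&quot;"); break;
                    case '\'': sb.append("&apos;"); break;
                    default:   sb.append(c);
                }
            }
            return sb.toString();
        }
    }

    For example, escape("5 < 6 & true") returns "5 &lt; 6 &amp; true".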

  • Web Services with Large Result Sets

    Hi,
    We have an application where a call to a web service could potentially yield a large result set. For the sake of argument, let's say that we cannot limit the result set size, e.g. by criteria narrowing or some other means.
    Have any of you handled paging when using web services? If so, can you please share your experiences, considering web services are stateless? Any patterns that have worked? I am aware of the Value List pattern but am looking for previous experiences here.
    Thanks

    Joseph Weinstein wrote:
    > Aswin Dinakar wrote:
    > > I ran the test again and I removed the ResultSet.FETCH_FORWARD and it
    > > still gave me the same OutOfMemory error.
    > > The problem to me is similar to what Slava has described. I am parsing
    > > the result set in memory, storing the results in a hash map and then
    > > emptying the post-processed results into a table.
    > > The hash map turns out to be very big and the JVM throws an OutOfMemory
    > > exception.
    > > I am not sure how I can turn this around.
    > > I can partition my query so that it returns smaller chunks or "blocks"
    > > of data each time (say a page of data or two). Then I can store a page
    > > of data in the table. The problem with this approach is that it is not
    > > exactly transactional. Recovery would be very difficult in this approach.
    > > I could do this in a try-catch block page by page, and then the catch
    > > could go ahead and delete the rows that got committed. The question then
    > > becomes: what if that transaction fails?
    > It sounds like you're committing the 'cardinal performance sin of DBMS processing':
    > shovelling lots of raw data out of the DBMS, processing it in some small way,
    > and sending it (or some of it) back. You should instead do this processing in
    > a stored procedure or procedures, so the data is manipulated where it is. The
    > DBMS was written from the ground up to be a fast, efficient set-based processor.
    > Using clever SQL will pay off greatly. Build your saw-mills where the trees are.
    > Joe
    Yes, we did think of stored procedures. Like I mentioned yesterday, some of the post-processing depends on Unicode and specific character sets. Java seemed ideally suited to this since it handles Unicode characters very well and has all these libraries we can use. Moving this to the DBMS would make it proprietary (not that we won't do it if it becomes absolutely essential), but that is one of the reasons why the post-processing happens in Java. Now that you mention it, stored procedures seem the best option.
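    A minimal sketch of the block-by-block idea from the discussion above, so the whole result set never sits in one hash map (the table, column, and method names are illustrative, and the recovery caveat raised above still applies: earlier blocks are already committed if a later one fails):

    import java.sql.*;
    import java.util.HashMap;
    import java.util.Map;

    public class BlockProcessor {
        private static final int BLOCK_SIZE = 1000;

        public static void run(Connection conn) throws SQLException {
            conn.setAutoCommit(false);
            Statement stmt = conn.createStatement();
            stmt.setFetchSize(BLOCK_SIZE); // hint the driver to fetch rows in blocks
            ResultSet rs = stmt.executeQuery("SELECT id, raw_text FROM source_rows");
            Map<String, String> block = new HashMap<String, String>();
            while (rs.next()) {
                block.put(rs.getString("id"), postProcess(rs.getString("raw_text")));
                if (block.size() >= BLOCK_SIZE) {
                    writeBlock(conn, block); // insert the processed block
                    conn.commit();           // one transaction per block
                    block.clear();           // release the block before reading on
                }
            }
            if (!block.isEmpty()) {
                writeBlock(conn, block);
                conn.commit();
            }
            rs.close();
            stmt.close();
        }

        // Placeholder for the Unicode-aware post-processing mentioned in the thread.
        private static String postProcess(String raw) {
            return raw.trim();
        }

        private static void writeBlock(Connection conn, Map<String, String> block)
                throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO target_rows (id, clean_text) VALUES (?, ?)");
            for (Map.Entry<String, String> e : block.entrySet()) {
                ps.setString(1, e.getKey());
                ps.setString(2, e.getValue());
                ps.addBatch();
            }
            ps.executeBatch();
            ps.close();
        }
    }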

  • Handling Multiple Result Sets

    Hi
    I have a problem while handling multiple result sets. To fix it I need to upgrade from JDBC version 2.0 to 4.0.
    My problem is that I don't know which jars need to be downloaded to upgrade my JDBC version.

    Mallika_23 wrote:
    > Are there any more ideas?
    1. Learn how Google works.
    2. Find the drivers.
    3. Read the documentation.
    4. Ask questions here once you have actually read the documentation.
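    For what it's worth, once you have a driver that supports multiple result sets, walking them in JDBC looks roughly like this (the procedure name usp_report is illustrative):

    import java.sql.*;

    public class MultiResultSets {
        public static void process(Connection conn) throws SQLException {
            CallableStatement cs = conn.prepareCall("{call usp_report()}");
            boolean isResultSet = cs.execute();
            while (true) {
                if (isResultSet) {
                    ResultSet rs = cs.getResultSet();
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                    rs.close();
                } else if (cs.getUpdateCount() == -1) {
                    break; // no more result sets and no more update counts
                }
                isResultSet = cs.getMoreResults();
            }
            cs.close();
        }
    }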

  • How to handle national character set datatypes in oracle?

    Hi
    Can anyone tell me how to handle national character set datatypes in oracle?
    Thanks in advance

    And for data manipulation, prefix the literal values used in the command with "N".
    The "N" indicates that the string is to be treated as Unicode Text.
    For Example: insert into TableName (ColumnName) values (N'ValueToBeInserted');

  • How can we generate result set report?

    How can we generate a result set report? That is, how can the output of one query be the input of another query? How is that done?

    Hi
    You have to use the APD (Analysis Process Designer) to use the results of one query as the input for other queries.
    Check this link
    http://help.sap.com/saphelp_nw70/helpdata/en/49/7e960481916448b20134d471d36a6b/frameset.htm
    Regards

  • I-bot not emailing when report returns large result set..

    Hi,
    I am trying to set up an i-bot to run daily and email the results to the user. Assume the report in question is Report_A.
    Report_A returns around 60000 rows of data without any filter condition. When I set up the i-bot for Report_A (no filter conditions on the report), the i-bot publishes results to the dashboard but does not deliver via email. When I introduce a filter in Report_A to reduce the data returned, everything works fine and the email is sent out successfully.
    So
    1) Is there a size limit for i-bots to deliver by email?
    2) Is there a way to increase the limits if any so the report can be emailed even when returning large result sets?
    Please let me know.

    Sorry for the late reply.
    Below is the log file for one of the i-bots. Now I am getting an error message "***kmsgPortalGoRequestHasBeenCancelled: message text not found ***" and the i-bot alert message shows as "Cancelled".
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:04.551
    [nQSError: 77006] Oracle BI Presentation Server Error: A fatal error occurred while processing the request. The server responded with: ***kmsgPortalGoRequestHasBeenCancelled: message text not found ***
    Error Codes: YLKKAV7S
    Error Codes: AGEGTYVF
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:04.553
    iBotID: /shared/_ibots/common/TM/Claims Report
    ...Trying iBot Get Response Content loop again.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:04.554
    ... Sleeping for 8 seconds.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:12.642
    [nQSError: 77006] Oracle BI Presentation Server Error: A fatal error occurred while processing the request. The server responded with: ***kmsgPortalGoRequestHasBeenCancelled: message text not found ***
    Error Codes: YLKKAV7S
    Error Codes: AGEGTYVF
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:12.644
    iBotID: /shared/_ibots/common/TM/Claims Report
    ...Trying iBot Get Response Content loop again.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:12.644
    ... Sleeping for 6 seconds.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:18.730
    [nQSError: 77006] Oracle BI Presentation Server Error: A fatal error occurred while processing the request. The server responded with: ***kmsgPortalGoRequestHasBeenCancelled: message text not found ***
    Error Codes: YLKKAV7S
    Error Codes: AGEGTYVF
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:18.734
    iBotID: /shared/_ibots/common/TM/Claims Report
    Exceeded number of request retries.

  • It is required to get the result set from the last query.

    I need this SP to return the result set from the last query.
    SET QUOTED_IDENTIFIER OFF
    GO
    SET ANSI_NULLS ON
    GO
    alter proc spQ_GetASCBillingRateIDs2
    @ScheduleID CHAR(15),
    @startdate smalldatetime,
    @enddate smalldatetime
    as
    set nocount on
    truncate table tbltmpgroup
    if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[tbltmptbltest]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
    drop table [dbo].[tbltmptbltest]
    exec sp_CreateTblTmpGroup
    insert into tbltmpgroup
    SELECT DISTINCT
    case when pd.billparent = 'N' then org.eligibleorgid
    else isnull(af.parentid, org.eligibleorgid) end as billorgid,
    pd.individualbill , pd.cobrabill, pd.billparent,
    org.eligibleorgid, org.polid, org.orgpolicyid,
    pp.planid,  pp.rateid,
    ps.ascinvoicedate,
    case when ps.ascclaimfromdate > @startdate then ps.ascclaimfromdate
    else @startdate end as premiumrundayFrom,
    case when ps.ascclaimtodate < @enddate then ps.ascclaimtodate
    else @enddate end as premiumrundayTo,
    fts.effdate, fts.termdate,
    case when fts.effdate > @startdate then fts.EffDate
    else @startdate end as ascStartDate,
    case when fts.termdate < @enddate then fts.termdate
    else @enddate end as ascEndDate
    FROM premiumschedule ps (nolock)
    inner join orgpolicy org (nolock)
    on org.ascinvoicerungroup between ps.premiumrundayfrom and ps.premiumrundayto
    inner join FundingTypeStatus fts
    on fts.orgpolicyid = org.orgpolicyid
    and fts.fundtype = 'ASC'
    and ((fts.effdate between @startdate and @enddate)
    or (fts.termdate between @startdate and @enddate)
    or (fts.effdate < @startdate and fts.termdate > @enddate))
    inner join eligibilityorg o (nolock)
    on org.eligibleorgid = o.eligibleorgid
    inner join policydef pd (nolock)
    on pd.polid = org.polid
    inner join policyplans pp (nolock)
    on pp.polid = org.polid
    inner join program p (nolock)
    on pd.programid = p.programid
    left join orgaffiliation af with (nolock)
    on org.eligibleorgid = af.childid
    WHERE ps.premiumscheduleid = @ScheduleID
    AND org.orgpolicyid <> ''
    go
    SELECT DISTINCT z.rateid, e.enrollid, z.ascstartdate, z.ascenddate
    into tbltmptbltest FROM enrollment E (nolock)
    inner join tbltmpgroup z
    on e.rateid = z.rateid
    go
    CREATE UNIQUE CLUSTERED INDEX IDXTempTable  ON tbltmptbltest(enrollid)
    create index IDXTemptableDates on tbltmptbltest(ascstartdate,ascenddate)
    go
    select distinct t.*
    from tbltmpgroup t
    where rateid in (
    select distinct t.rateid from VW_ASC_Billing)
    order by billorgid
    set nocount off
    GO
    SET QUOTED_IDENTIFIER OFF
    GO
    SET ANSI_NULLS ON
    GO

    Please post DDL, so that people do not have to guess what the keys, constraints, Declarative Referential Integrity, data types, etc. in your schema are. Learn how to follow ISO-11179 data element naming conventions and formatting rules (you have no idea).
    Temporal data should use ISO-8601 formats. Code should be in Standard SQL as much as possible and not local dialect.
    What you did post is bad SQL.
    The prefix "tbl-" is a design flaw called tibbling and we do not do it. We seldom use temp tables in an RDBMS; it is how magnetic-tape file programmers fake scratch tapes.
    If the schema is correct, then SELECT DISTINCT is almost never used.
    Your "bill_parent" looks like an assembly-language bit flag; we never use those flags in SQL.
    "Funding_Type_Status" is an absurd name for a table. A status is a state of being, not an entity. A type is an attribute property. So this table ought to be a column that is either a "funding_type" or a "funding_status" (with the time period for the state of being shown in other columns). But this hybrid is not possible in a valid data model.
    Want to try again, with DDL and some specs?
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL

  • SQL Adapter Crashes with large XML set returned by SQL stored procedure

    Hello everyone. I'm running BizTalk Server 2009 32 bit on Windows Server 2008 R2 with 8 GB of memory.
    I have a Receive Port with the Transport Type being SQL and the Receive Pipeline being XML Receive.
    I have a Send Port which processes the XML from this Receive Port and creates an HIPAA 834 file.
    Once a large file has been created (approximately 1.6 GB in XML format, 32 MB in EDI form), a second file of 1.7 GB fails to create.
    I get the following error in the Event Viewer:
    Event Type: Warning
    Event Source: BizTalk Server 2009
    Event Category: (1)
    Event ID: 5740
    Date:  10/28/2014
    Time:  7:15:31 PM
    User:  N/A
    The adapter "SQL" raised an error message. Details "HRESULT="0x80004005" Description="Unspecified error"
    Is there a way to change some BizTalk server settings to help in the processing of this large XML set without the SQL adapter crashing?
    Paul

    Could you run a SQL Profiler trace to determine whether you are facing a deadlock?
    Is your adapter running under 64 bits?
    Have you studied the possibility of using the SqlBulkInsert adapter?
    http://blogs.objectsharp.com/post/2005/10/23/Processing-a-Large-Flat-File-Message-with-BizTalk-and-the-SqlBulkInsert-Adapter.aspx

  • OBIEE Answers does not display the result set of a report query

    Hi,
    We have a pivot table type of report in Oracle Business Intelligence Enterprise Edition v.10.1.3.3.2 Answers that has the following characteristics:
         3 Pages
         3 Sections , 4 Columns
         18363 Rows in the result set
    As per the NQQuery.log, the query for this report executes successfully resulting in 18363 rows. However, nothing comes up in the display on Answers. Moreover, no error is reported. The instanceconfig.xml file has the following setting:
    <PivotView>
         <CubeMaxRecords>30000</CubeMaxRecords>
         <CubeMaxPopulatedCells>300000</CubeMaxPopulatedCells>
    </PivotView>
    Even with these settings, Answers just returns a blank page: nothing at all is displayed for the result set of the report query. Has anyone encountered this problem scenario before?
    Any help is much appreciated.
    Thanks,
    Piyush

    Hi Fiston / Pradeep,
    Thanks for your inputs. A few points to note:
    -> I am actually not getting any error message in Answers or the NQQuery.log. Moreover, I am not getting any errors related to the query governor limit being exceeded in cube generation either.
    -> I have other pivot-table reports in the same repository that work fine. In fact, the report which has this issue even works sometimes. What actually happens is that if I alter the number of sections from 3 to 4, the result set changes from 14755 rows to 18363 rows, and per the NQQuery.log the query completes successfully in both cases. However, when the result set has 14755 rows I get to see the output in Answers, whereas when the result set has 18363 rows the Answers screen just goes blank and does not show any output or error. This makes me believe that there is some parameter in instanceconfig or the NQSConfig file; I have tried a lot of changes but nothing works!
    Any help is much appreciated.
    Best Regards,
    Piyush

  • How does Index fragmentation and statistics affect the sql query performance

    Hi,
    How do index fragmentation and statistics affect SQL query performance?
    Thanks,
    Shashikala

    How do index fragmentation and statistics affect SQL query performance?
    Very simply: outdated statistics lead the optimizer to create bad plans, which in turn require more resources, and this impacts performance. If an index is fragmented (mainly the clustered index, though this holds true for non-clustered indexes as well), more time is spent finding a value, since the query has to search a fragmented index to look for the data, and the additional space increases search time.

  • How to save memory when processing large result set

    I need to dump many millions of rows of data into Excel files.
    I query the tables and open Excel to write the data in.
    The problem is that even though I chopped the result into a hundred files and close Excel completely after 65536 rows, memory usage keeps going up as the result set is looped over, and at one point it hits the heap size.
    How can I release the memory that has been used by the result set?
    Thank you

    mycoffee wrote:
    > 936517 wrote:
    > > I think resultSet.close() will do what you want (you shouldn't have to set resultSet = null when you're done with it).
    > > You can't force the garbage collector to run and reclaim memory. It uses an intelligent algorithm to do so.
    > > I question why your project is sending millions of records to Excel. Who is going to read a 10,000-page Excel document?
    > > Instead, I suggest you provide an (intelligent) filter mechanism to allow users to get a subset of the data to send to an Excel document rather than all of it. For example: instead of sending him the entire telephone book, have him search for results based on lastName and/or firstName. That will cut down on the number of records returned. Next, does the user really need all the columns of data in each record? That will cut it down further.
    > > You can search Google for 'java heap size' to increase the memory for your program. However, your 65536 limit is probably due to Excel's limitation and not your Java program.
    > Sorry I could not explain the need.
    > No, that is not the issue here. I already use the maximum heap size I can,
    > but I can handle it now: open the files and write directly to them instead of holding the data and dumping it all at once. I save all the overhead and it works fine, even though the result set still consumes almost all the memory.
    Is it possible you are using MySQL? The MySQL JDBC driver has a terrible default setup in that it keeps all the results for the result set in memory! I think some of the latest drivers finally allow you to stream results sensibly, but you have to use the correct options.
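    A minimal sketch of the streaming option mentioned above for MySQL Connector/J: a forward-only, read-only statement with fetch size Integer.MIN_VALUE makes the driver stream rows instead of buffering the whole result set (the connection details, table, and file name are illustrative):

    import java.io.PrintWriter;
    import java.sql.*;

    public class StreamDump {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb", "user", "pass");
            Statement stmt = conn.createStatement(
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            stmt.setFetchSize(Integer.MIN_VALUE); // MySQL-specific streaming hint
            ResultSet rs = stmt.executeQuery("SELECT id, payload FROM big_table");
            PrintWriter out = new PrintWriter("dump.csv");
            while (rs.next()) {
                // write each row straight to the file; nothing accumulates in memory
                out.println(rs.getString("id") + "," + rs.getString("payload"));
            }
            out.close();
            rs.close();
            stmt.close();
            conn.close();
        }
    }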
