Quotacheck: searchfs Result too large: Result too large

Aside from a 2006 post regarding this issue, I'm unsure how to resolve my scenario. We're using OS X Server's Time Machine AFP goodies, but we needed to enable quotas for users. Simple? Maybe, but not Mac style... so you head into the terminal, read some old posts on outdated forums, and use repquota, quotacheck, and quotaon...
And everything seemed to work, until you add a user (through edquota) whose quota isn't in fstab and who can't be found in repquota...
sigh...
I turned off quota checking and tried starting from scratch... and what do I get but an error whose last mention on this forum is from 2006:
sudo quotacheck -a -v
* Checking user and group quotas for /dev/rdisk4 (/Volumes/ColdStorage)
34
quotacheck: searchfs Result too large: Result too large
Any ideas of ways around? The 2006 posts seem to indicate that after attempting variations of quotacheck, I might eventually break through!

Hello,
I've run into the same issue on our setup as well. (Xserve G5 10.4.8; the data is on an Xserve RAID, RAID level 5, 1 TB, used for home directories.) I'm working with Apple to see if there is a solution to this issue or if it is a bug. In the meantime, they recommended running quotacheck with the path to the share rather than -a:
sudo quotacheck -v /Volumes/OURDATA
Using the command this way seems to work about half of the time for me, the other half still giving the same error message. I'm hoping this is a cosmetic issue with quotacheck, and not a hint of a problem with our setup.
I'll be sure to post if I find anything else out.
Matt Bryant
ACTC
Husson College and the New England School of Communications

Similar Messages

  • Unable to create report. Query produced too many results

    Hi All,
    Does someone know how to avoid the message "Unable to create report. Query produced too many results" for the Grid Report Type in PerformancePoint 2010? When the MDX query returns a large amount of data, this message appears. Is there a way to get the full result set in the grid anyway?
    I have set the data source query time-out under Central Administration - Manage Service Applications - PerformancePoint Service Application - PerformancePoint Service Application Settings to 3600 seconds.
    Here Event Viewer log error at the server:
    1. An exception occurred while running a report.  The following details may help you to diagnose the problem:
    Error Message: Unable to create report. Query produced too many results.
            Contact the administrator for more details.
    Dashboard Name:
    Dashboard Item name:
    Report Location: {3592a959-7c50-0d1d-9185-361d2bd5428b}
    Request Duration: 6,220.93 ms
    User: INTRANET\spsdshadmin
    Parameters:
    Exception Message: Unable to create report. Query produced too many results.
    Inner Exception Message:
    Stack Trace:    at Microsoft.PerformancePoint.Scorecards.Server.PmServer.ExecuteAnalyticReportWithParameters(RepositoryLocation analyticReportViewLocation, BIDataContainer biDataContainer)
       at Microsoft.PerformancePoint.Analytics.ServerRendering.OLAPBase.OlapViewBaseControl.ExtractReportViewData()
       at Microsoft.PerformancePoint.Analytics.ServerRendering.OLAPBase.OlapViewBaseControl.CreateRenderedView(StringBuilder sd)
       at Microsoft.PerformancePoint.Scorecards.ServerRendering.NavigableControl.RenderControl(HtmlTextWriter writer)
    PerformancePoint Services error code 20604.
    2. Unable to create report. Query produced too many results.
    Microsoft.PerformancePoint.Scorecards.BpmException: Unable to create report. Query produced too many results.
       at Microsoft.PerformancePoint.Scorecards.Server.Analytics.AnalyticQueryManager.ExecuteReport(AnalyticReportState reportState, DataSource dataSource)
       at Microsoft.PerformancePoint.Scorecards.Server.PmServer.ExecuteAnalyticReportBase(RepositoryLocation analyticReportViewLocation, BIDataContainer biDataContainer, String formattingDimensionName)
       at Microsoft.PerformancePoint.Scorecards.Server.PmServer.ExecuteAnalyticReportWithParameters(RepositoryLocation analyticReportViewLocation, BIDataContainer biDataContainer)
    PerformancePoint Services error code 20605.
    Thanks in advance for your help.

    Hello,
    I would like you to try the following to adjust your readerquotas.
    Change the values of the parameters listed below to a larger value. We recommend that you double the value and then run the query to check whether the issue is resolved. To do this, follow these steps:
    On the SharePoint 2010 server, open the Web.config file. The file is located in the following folder:
    \Program Files\Microsoft Office Servers\14.0\Web Services\PpsMonitoringServer\
    Locate and change the values below from 8192 to 16384.
    Open the Client.config file. The file is located in the following folder:
    \Program Files\Microsoft Office Servers\14.0\WebClients\PpsMonitoringServer\
    Locate and change the below values from 8192 to 16384.
    After you have made the changes, restart Internet Information Services (IIS) on the SharePoint 2010 server.
    <readerQuotas
        maxStringContentLength="2147483647"
        maxNameTableCharCount="2147483647"
        maxBytesPerRead="2147483647"
        maxArrayLength="2147483647"
        maxDepth="2147483647" />
    Thanks
    Heidi Tr - MSFT

  • How to handle large result sets?

    Hi All,
    I have a large result set to be displayed to the user using JSPs. The problem is that the result set is too big, so I can't display all the records in a single push. I want to show the results page by page, say 25 per page. Now for every page I have to fetch data from the database, which means there will be many database calls, which is not advisable. Or I can cache the data in a CachedRowSet to reduce database calls, but in that case I have to store all the data in memory, which is not a good solution for very large data sets. Can anybody suggest a solution to this problem?

    The best thing for you to do is to implement paging logic in conjunction with a scrollable ResultSet (JDBC 2.0+).
    The logic would go like this assuming 30 rows per page:
    - keep track of which page the user is on (e.g. page 3)
    - issue the full sql
    - scroll thru only the rows in the current page (e.g. rows 90-120)
    - copy the page's rows to value objects
    - close the resultset, statement, and connection
    In the above example, you would scroll to row 90 using rs.absolute(90).
    The efficiency comes from the fact that you're using a scrollable ResultSet. By using this, only the rows that you scroll through are extracted from the database. I performed some simple testing with my data, and the scrollable ResultSet was about 10x faster.
    Good luck!
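    Below is a minimal sketch of that page-by-page scroll under the same assumptions (30 rows per page); the table, columns, and connection handling are placeholders, not part of the original post:
    import java.sql.*;
    import java.util.*;

    public class PagedQuery {
        private static final int PAGE_SIZE = 30;

        // Returns only the rows belonging to the requested page (1-based).
        public static List<String[]> fetchPage(Connection con, int page) throws SQLException {
            List<String[]> rows = new ArrayList<>();
            try (Statement st = con.createStatement(
                     ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
                 ResultSet rs = st.executeQuery("SELECT id, name FROM orders ORDER BY id")) {
                // Jump straight to the first row of the requested page.
                if (rs.absolute((page - 1) * PAGE_SIZE + 1)) {
                    int copied = 0;
                    do {
                        // Copy the page's rows into simple value objects.
                        rows.add(new String[] { rs.getString("id"), rs.getString("name") });
                        copied++;
                    } while (copied < PAGE_SIZE && rs.next());
                }
            } // Statement and ResultSet close here; close the Connection in the caller.
            return rows;
        }
    }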

  • JDO: Query returns too many results.

    Hello,
    I want to get some DO entities from an empty table. When I execute the query I retrieve more than 2,000,000,000 results, which is definitely too much for an empty table. The DO has a one-to-many relationship to another class and a many-to-one relationship to another one.
    The .map and .jdo files of the class and of the related classes were made according to the explanations in the Development Manual chapter "Mapping Persistent Classes to Database Tables" (so including DFG and embedded attributes). Enhancing is successful, and working with the classes that don't have relationships is fine as well.
    The query looks like this:
    manager = getPersistenceFactory().getPersistenceManager();
    String filter = "version == \"" + version + "\" && postingLevel == \"" + postingLevel +
            "\" && closingPeriod == " + closingPeriod + " && bunit == \"" + bunit + "\"";
    Query query = manager.newQuery(AD_DO_Protocol.class, filter);
    Collection col = (Collection) query.execute();
    I also tried to use the extent with the same result.
    Any ideas?
    Regards,
    Jan

    Hi Guru,
    First of all, in that particular DAO method I'm not able to use the primary key, because I implement a check for whether an entity with the given attributes already exists.
    Second, I don't want (and as I think don't have) to write the mentioned Java function, for the following reason: there must be at most one result for the given query. That's a constraint implied by our business logic.
    In fact, at the given moment there must not be a single result, because the table IS EMPTY!!!
    Any other ideas?
    Thx and regards,
    Jan
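    For reference, the same filter can be written with declared JDOQL parameters instead of string concatenation. This is only a sketch (class and field names are taken from the snippet above, and cp's type is assumed from the unquoted comparison); it does not by itself explain the inflated row count:
    Query query = manager.newQuery(AD_DO_Protocol.class);
    query.setFilter("version == v && postingLevel == pl && closingPeriod == cp && bunit == bu");
    query.declareParameters("String v, String pl, int cp, String bu");  // int cp is an assumption
    Collection col = (Collection) query.executeWithArray(
            new Object[] { version, postingLevel, closingPeriod, bunit });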

  • How to handle large result set of a SQL query

    Hi,
    I have a question about how to handle the large result set of a SQL query.
    My query returns more than a million records. However, the Query Template has a "row count" parameter. If I don't specify it, it returns only 100 records in the query result by default. If I specify it, then it's limited to that specific number.
    Is there any way to get around this row count issue? I don't want any restriction on the number of records returned by a query.
    Thanks a lot!

    No human can manage that much data... in a grid, a chart, or a direct-connected link to the brain.
    What you want to implement (much like other customers with similar requirements) is a drill-in and filtering model that helps the user identify and zoom in on the data of relevance, not forcing them to scroll through thousands or millions of records.
    You can also use a time-based paging model so that you only deal with one time "slice" per request (e.g. an hour, a day, etc.) and provide a scrolling window. This is commonly how large datasets are dealt with in applications; see the sketch after this reply.
    I would suggest describing your application in more detail, and we can offer design recommendations and ideas.
    - Rick
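    A minimal sketch of that time-slice paging idea, fetching one bounded window per request instead of the whole result set; the table, column names, and connection are placeholders:
    import java.sql.*;
    import java.util.*;

    public class TimeSlicePager {
        // Returns only the rows whose timestamp falls inside [from, to).
        public static List<String> fetchSlice(Connection con, Timestamp from, Timestamp to)
                throws SQLException {
            String sql = "SELECT event_id, event_time FROM events "
                       + "WHERE event_time >= ? AND event_time < ? ORDER BY event_time";
            List<String> rows = new ArrayList<>();
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setTimestamp(1, from);
                ps.setTimestamp(2, to);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rows.add(rs.getString("event_id") + " @ " + rs.getTimestamp("event_time"));
                    }
                }
            }
            return rows; // the caller advances the window to request the next "page"
        }
    }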

  • Azure + Sync Framework + Error: Value was either too large or too small for a UInt64

    Hi,
    We have an in-house developed synchronisation service built on the Sync Framework v2.1 which has been running well for over 2 years. It pushes data from a local SQL Server 2005 database to one hosted on Azure, with some added encryption.
    The service was stopped recently, and when we try to re-start it, it fails with the error:
    System.OverflowException: Value was either too large or too small for a UInt64.
    at System.Convert.ToUInt64(Int64 value)
    at System.Int64.System.IConvertible.ToUInt64(IFormatProvider provider)
    at System.Convert.ToUInt64(Object value, IFormatProvider provider)
    at Microsoft.Synchronization.Data.SyncUtil.ParseTimestamp(Object obj, UInt64& timestamp)
    at Microsoft.Synchronization.Data.SqlServer.SqlSyncScopeHandler.GetLocalTimestamp(IDbConnection connection, IDbTransaction transaction)
    I have found Hotfix 2703853, and we are proposing to apply this to our local server, but we have found that running SELECT CONVERT(INT, @@dbts) on the local database returns 1545488692, while running the same query on the cloud database returns -2098169504, which indicates the issue is on the Azure side. Would applying the hotfix to our local server resolve the issue, or would it need to be somehow applied to the Azure server?
    Thanks in advance for any assistance!
    Chris

    Hi,
    We have now applied the Sync Framework hotfixes to our server and re-provisioned the sync service. No errors were reported and the timestamp values were all within the required range. On re-starting the service the system worked as anticipated. It has now
    been running for a week and appears to be stable. No further changes were required other than installing the hotfixes and re-provisioning the scope.
    Chris

  • "Bad Data, too many results for shortname"

    Every 10 minutes since October 13 the wikid error log has recorded the following error message:
    *"Bad Data, too many results for shortname: username"*, always naming the same user, who is not actually doing anything.
    Coinciding with that message at the same time every 10 minutes, the system log records 13 or 14 of the following:
    *"Python\[15645\]: ••• -\[NSAutoreleasePool release]: This pool has already been released, do not drain it (double release)."*
    15645 is a process owned by _teamserver, which would appear to confirm the connection between the python and wikid error messages.
    Last clue: The messages (I have determined in hindsight) started while I was setting up the server to do vacation messages. The user named in the "Bad data" messages was the user who wanted to (and did) set up a vacation message at that time. They worked, too.
    Anyone have any ideas about what is going on and how to fix it? BTW, google does not find even one page where this "Bad data" error message is mentioned.

    Thanks for your response. To answer your questions:
    "Are you using AD for your directory?" No. OD master only.
    "Are there users with duplicate shortnames in your directory system?" No, and to double check, I searched with WGM and only the one came up.
    "As a bit of background, the wiki server keeps a private index of user and group information so we can track user preferences, and the 'Bad Data, too many results for shortname: username' error is triggered when we do a lookup in that index for a particular user and we (unexpectedly) hit duplicate entries. Are you seeing an issue with using the software, or just an annoying log message?" It's hard to say (for me) what might be related. The directory- or wiki-related issues with the server include:
    • A memory issue with slapd that eventually causes it to crash (preceded by lots of "bdbdbcache: db_open(various: sn, displayname, givenname, mail, maybe others) failed: Cannot allocate memory (12)", "logging region out of memory", and "index_param failed" errors).
    • The wiki is slow, despite very light use.
    • Wake from sleep (network clients with authentication required) can be very slow. Several minutes even.
    Any suggestions you may have would be appreciated.

  • ODT error in VS2005: Value was either too large or too small for an Int32

    Using ODT's Oracle Explorer in VS2005 I connected to a 3rd party's Oracle9i database that's been around for a while. I expanded the tables node and then attempted to expand a specific table. It then displayed a popup message and never expanded the table so I could manage the columns.
    The error is:
    An error ocurred while expanding the node:
    Value was either too large or too small for an Int32
    I recreated the table, with no data, in another database (same version of oracle, different physical server) and was able to expand the table in ODT's Oracle Explorer.
    I went back to the other database in Oracle Explorer and tried to expand the table and it failed with the same error message.
    The only difference I can see is that the first table contains a lot of data (gigabytes), while the other table (the duplicate I created to reproduce the error) does not have any data.
    Here's the definition of the table, minus the actual table and field names.
    FLD6 contains jpg data from a 3rd party Oracle Forms application. The jpg data is between 100K and 20MB.
    CREATE TABLE myTable (
      FLD1  VARCHAR2(30 BYTE),
      FLD2  VARCHAR2(15 BYTE),
      FLD3  VARCHAR2(20 BYTE),
      FLD4  VARCHAR2(20 BYTE),
      FLD5  NUMBER(3),
      FLD6  LONG RAW,
      FLD7  VARCHAR2(80 BYTE),
      FLD8  DATE,
      FLD9  VARCHAR2(20 BYTE),
      FLD10 VARCHAR2(20 BYTE),
      FLD11 VARCHAR2(99 BYTE),
      FLD12 VARCHAR2(256 BYTE)
    )
    TABLESPACE myTableSpace
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
      INITIAL 2048M
      MINEXTENTS 1
      MAXEXTENTS 2147483645
      PCTINCREASE 0
      BUFFER_POOL DEFAULT
    )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    NOMONITORING;
    This is just to let the developers know I ran into a problem. I've already gotten around the issue by using an alternative tool.

    Hi,
    You can also use the Map TestTool to test your maps. It uses the BizTalk engine to execute the map. You can select a map that is deployed to the GAC and execute it.
    You can download the sample tool with the source code here:
    TestTool for BizTalk 2013
    http://code.msdn.microsoft.com/Execute-BizTalk-2013-maps-e8db7f9e
    TestTool for BizTalk 2010
    http://code.msdn.microsoft.com/Execute-a-BizTalk-map-from-26166441
    Kind regards,
    Tomasso Groenendijk
    MCTS BizTalk Server 2006, 2010

  • Web Services with Large Result Sets

    Hi,
    We have an application where a call to a web service could potentially yield a large result set. For the sake of argument, let's say that we cannot limit the result set size, i.e., by criteria narrowing or some other means.
    Have any of you handled paging when using Web Services? If so, can you please share your experiences, considering Web Services are stateless? Any patterns that have worked? I am aware of the Value List pattern but am looking for previous experiences here.
    Thanks

    Joseph Weinstein wrote:
    Aswin Dinakar wrote:
    I ran the test again and I removed the ResultSet.Fetch_Forward and it still gave me the same error, OutOfMemory.
    The problem to me is similar to what Slava has described. I am parsing the result set in memory, storing the results in a hash map and then emptying the post-processed results into a table.
    The hash map turns out to be very big and the JVM throws an OutOfMemory exception.
    I am not sure how I can turn this around. I can partition my query so that it returns smaller chunks or "blocks" of data each time (say a page of data or two pages of data), and then I can store a page of data in the table. The problem with this approach is that it is not exactly transactional; recovery would be very difficult.
    I could do this in a try/catch block page by page, and then the catch could go ahead and delete the rows that got committed. The question then becomes: what if that transaction fails?
    It sounds like you're committing the 'cardinal performance sin of DBMS processing': shovelling lots of raw data out of the DBMS, processing it in some small way, and sending it (or some of it) back. You should instead do this processing in a stored procedure or procedures, so the data is manipulated where it is. The DBMS was written from the ground up to be a fast, efficient set-based processor. Using clever SQL will pay off greatly. Build your saw-mills where the trees are.
    Joe
    Yes, we did think of stored procedures. Like I mentioned yesterday, some of the post-processing depends on Unicode and specific character sets. Java seemed ideally suited to this since it handles these Unicode characters very well and has all these libraries we can use. Moving this to the DBMS would mean we would make it proprietary (not that we won't do it if it became absolutely essential), but that's one of the reasons why the post-processing happens in Java. Now that you mention it, stored procedures seem the best option.
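    Circling back to the original paging question: below is a minimal sketch of a stateless, page-parameterized operation in the spirit of the Value List pattern the poster mentions. All names and types are illustrative, not taken from the thread.
    import java.util.List;

    public interface OrderSearchService {
        // The client asks for an explicit page; the service re-runs or re-scopes the
        // query for just that window, so no server-side session state is needed.
        OrderPage findOrders(String customerId, int pageNumber, int pageSize);
    }

    class OrderPage {
        private List<String> orderIds; // the rows for this page only
        private int pageNumber;        // echoed back so the client can ask for the next page
        private long totalMatches;     // lets the client render "page X of Y"

        public List<String> getOrderIds() { return orderIds; }
        public int getPageNumber() { return pageNumber; }
        public long getTotalMatches() { return totalMatches; }

        public void setOrderIds(List<String> ids) { this.orderIds = ids; }
        public void setPageNumber(int n) { this.pageNumber = n; }
        public void setTotalMatches(long t) { this.totalMatches = t; }
    }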

  • ExecuteQueryForObject returned too many results jdbc error

    Hi All,
    I am getting the following error when running Tomcat 6.0, with Oracle 11g installed.
    08 Dec 2010 16:23:41 ERROR [QUARTZ_Worker-5] org.quartz.core.JobRunShell - Job DCTM.DCTMServerPipe threw an unhandled Exception:
    org.springframework.jdbc.UncategorizedSQLException: SqlMapClient operation; uncategorized SQLException for SQL []; SQL state [null]; error code [0]; Error: executeQueryForObject returned too many results.; nested exception is java.sql.SQLException: Error: executeQueryForObject returned too many results.
         at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.translate(SQLStateSQLExceptionTranslator.java:121)
         at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.translate(SQLErrorCodeSQLExceptionTranslator.java:322)
         at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:212)
         at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForObject(SqlMapClientTemplate.java:271)
         at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForObject(SqlMapClientTemplate.java:265)
         at com.emc.documentum.bpm.daos.impl.AbstractIbatisBaseDaoImpl.queryForObject(AbstractIbatisBaseDaoImpl.java:119)
         at com.emc.documentum.bpm.bamengine.daos.impl.ServerConfigDaoImpl.getDBCurrentTime(ServerConfigDaoImpl.java:28)
         at com.emc.documentum.bpm.bamengine.services.server.factory.impl.ServersFactoryImpl.updatePipeServerTimezoneOffset(ServersFactoryImpl.java:56)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:301)
         at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:198)
         at $Proxy50.updatePipeServerTimezoneOffset(Unknown Source)
         at com.emc.documentum.bpm.bamengine.services.sharedservices.impl.TaskManagerServiceImpl.executePipe(TaskManagerServiceImpl.java:341)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:301)
         at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:198)
         at $Proxy52.executePipe(Unknown Source)
         at com.emc.documentum.bpm.bamengine.scheduler.impl.PipeJobImpl.execute(PipeJobImpl.java:17)
         at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
         at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:525)
    Caused by: java.sql.SQLException: Error: executeQueryForObject returned too many results.
         at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryForObject(MappedStatement.java:124)
         at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:518)
         at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:493)
         at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForObject(SqlMapSessionImpl.java:106)
         at org.springframework.orm.ibatis.SqlMapClientTemplate$1.doInSqlMapClient(SqlMapClientTemplate.java:273)
         at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:209)
         ... 23 more
    How can this be resolved? Please help !!
    Thanks, T
    Edited by: 805903 on Dec 9, 2010 3:00 AM

    Error executeQueryForObject returned too many results
    Typical Error msg:
    “SqlMapClient operation; uncategorized SQLException for SQL []; SQL state [null]; error code [0]; Error: executeQueryForObject returned too many results.; nested exception is java.sql.SQLException: Error: executeQueryForObject returned too many results”
    The error is caused by using queryForObject (which expects a single result) instead of queryForList (which should be used when expecting multiple results).
    Example of correct solution
    In DAO:
    public List<UpdatedContractRateDO> getUpdatedCtrctRate(Map paramMap) throws DataAccessException {
        return (List<UpdatedContractRateDO>) getSqlMapClientTemplate()
                .queryForList("charge.getUpdatedCtrctRate", paramMap);
    }
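    For contrast, a sketch of the failing pattern (the same mapped statement id is reused here purely for illustration); queryForObject throws the "returned too many results" SQLException shown in the stack trace as soon as the statement matches more than one row:
    public UpdatedContractRateDO getUpdatedCtrctRate(Map paramMap) throws DataAccessException {
        // Fails with "executeQueryForObject returned too many results" when the
        // mapped statement returns several rows.
        return (UpdatedContractRateDO) getSqlMapClientTemplate()
                .queryForObject("charge.getUpdatedCtrctRate", paramMap);
    }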

  • ExecuteQueryForObject returned too many results.

    Hi,
    My servlet accesses an Oracle DB. It works fine most of the time, except when I search for one particular entry.
    When I do this the following error message is displayed:
    HTTP Status 500 -
    type Exception report
    message
    description The server encountered an internal error () that prevented it from fulfilling this request.
    exception
    javax.servlet.ServletException: Error: executeQueryForObject returned too many results.
         org.apache.struts.action.RequestProcessor.processException(RequestProcessor.java:523)
         org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:421)
         org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:224)
         org.apache.struts.action.ActionServlet.process(ActionServlet.java:1194)
         org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:432)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:368)
    root cause
    java.sql.SQLException: Error: executeQueryForObject returned too many results.
         com.ibatis.sqlmap.engine.mapping.statement.GeneralStatement.executeQueryForObject(GeneralStatement.java:108)
         com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:561)
         com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:536)
         com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForObject(SqlMapSessionImpl.java:93)
         com.ibatis.sqlmap.engine.impl.SqlMapClientImpl.queryForObject(SqlMapClientImpl.java:70)
         com.bmw.urt3fms.cardata.ctrl.SearchCarAction.execute(Unknown Source)
         org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:419)
         org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:224)
         org.apache.struts.action.ActionServlet.process(ActionServlet.java:1194)
         org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:432)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
         org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:368)
    This is really strange, as all the other searches I have done worked, except this one.
    Any ideas as to why it would display this??
    Nick

    Error executeQueryForObject returned too many results
    Typical Error msg:
    “SqlMapClient operation; uncategorized SQLException for SQL []; SQL state [null]; error code [0]; Error: executeQueryForObject returned too many results.; nested exception is java.sql.SQLException: Error: executeQueryForObject returned too many results”
    The error is caused by using queryForObject (which expects a single result) instead of queryForList (which should be used when expecting multiple results).
    Example of correct solution
    In DAO:
    public List<UpdatedContractRateDO> getUpdatedCtrctRate(Map paramMap) throws DataAccessException {
        return (List<UpdatedContractRateDO>) getSqlMapClientTemplate()
                .queryForList("charge.getUpdatedCtrctRate", paramMap);
    }

  • Dimension Filter LOV - Too many results; Refine your search

    Hi,
    I am using a Dimension filter in Design Studio which uses a hierarchy dimension. In preview mode the filter is not showing any members; instead it asks me to use the search functionality. I understand there is a limit on retrieving the LOV. Is there any way I can increase the number of LOV entries?
    I need to have this functionality working; the users want to navigate the hierarchy in the filter. Any help is highly appreciated.
    Thanks
    VJ

    Hi,
    same question here: DS 1.2 - Too many results; refine your search
    Regards,
    David

  • Crop Tool Error "Cannot complete your request because resulting document would be too big."

    Hello!
    Anytime I try to crop an image, I can scale to the size I need, but I cannot complete the request. I get this error "cannot complete your request because resulting document would be too big".
    This only happens when I am trying to crop something, whether a full document or a PNG, JPG, etc.
    I am in Photoshop CC 2014, on a Windows platform.
    Any thoughts on how to fix this would be appreciated!
    Thanks,
    Nicole

  • I-bot not emailing when report returns large result set..

    Hi,
    I am trying to set up an i-bot to run daily and email the results to the user. Assuming the report in question is Report_A.
    Report_A returns around 60,000 rows of data without any filter condition. When I tried to set up the i-bot for Report_A (no filter conditions on the report), the i-bot publishes results to the dashboard but does not deliver them via email. When I introduce a filter in Report_A to reduce the data returned, everything works fine and the email is sent out successfully.
    So
    1) Is there a size limit for i-bots to deliver by email?
    2) Is there a way to increase the limits if any so the report can be emailed even when returning large result sets?
    Please let me know.

    Sorry for late reply
    Below is the log file for one of the i-bots. Now I am getting an error message "***kmsgPortalGoRequestHasBeenCancelled: message text not found ***" and the i-bot alert message shows as "Cancelled".
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:04.551
    [nQSError: 77006] Oracle BI Presentation Server Error: A fatal error occurred while processing the request. The server responded with: ***kmsgPortalGoRequestHasBeenCancelled: message text not found ***
    Error Codes: YLKKAV7S
    Error Codes: AGEGTYVF
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:04.553
    iBotID: /shared/_ibots/common/TM/Claims Report
    ...Trying iBot Get Response Content loop again.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:04.554
    ... Sleeping for 8 seconds.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:12.642
    [nQSError: 77006] Oracle BI Presentation Server Error: A fatal error occurred while processing the request. The server responded with: ***kmsgPortalGoRequestHasBeenCancelled: message text not found ***
    Error Codes: YLKKAV7S
    Error Codes: AGEGTYVF
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:12.644
    iBotID: /shared/_ibots/common/TM/Claims Report
    ...Trying iBot Get Response Content loop again.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:12.644
    ... Sleeping for 6 seconds.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:18.730
    [nQSError: 77006] Oracle BI Presentation Server Error: A fatal error occurred while processing the request. The server responded with: ***kmsgPortalGoRequestHasBeenCancelled: message text not found ***
    Error Codes: YLKKAV7S
    Error Codes: AGEGTYVF
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:18.734
    iBotID: /shared/_ibots/common/TM/Claims Report
    Exceeded number of request retries.

  • How to save memory when processing large result set

    I need to dump many millions of rows of data into Excel files.
    I query the tables and open Excel to write into.
    The problem is that even though I chopped the result into hundreds of files and close Excel completely after 65536 rows, the memory usage keeps going up as the result set is looped, and at one point it hits the heap size.
    How can I release the memory that has been used by the result set?
    Thank you

    mycoffee wrote:
    936517 wrote:
    I think resultSet.close() will do what you want (you shouldn't have to set resultSet = null when you're done with it).
    You can't force the garbage collector to run and reclaim memory; it uses an intelligent algorithm to do so.
    I question why your project is sending millions of records to Excel. Who is going to read a 10,000-page Excel document?
    Instead, I suggest you provide an (intelligent) filter mechanism to allow users to get a subset of data to send to an Excel document rather than all the data. For example: instead of sending the entire telephone book, have the user search for results based on lastName and/or firstName. That will cut down on the number of records returned. Next, does the user really need all the columns of data in each record? That will cut it down further.
    You can search Google for 'java heap size' to increase the memory for your program. However, your 65536 limit is probably due to Excel's limitation and not your Java program.
    Sorry I could not explain the need. No, that is not the issue here. I already use the maximum heap size I can, but I can handle it now: open the files and write directly to them instead of holding the data and dumping it all at once. I save all the overhead and it works fine, even though the result set still consumes almost all the memory.
    Is it possible you are using MySQL? The MySQL JDBC driver has a terrible default setup in that it keeps all results for the result set in memory! I think some of the latest drivers finally allow you to stream results sensibly, but you have to use the correct options.
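    A minimal sketch of that streaming option for MySQL Connector/J: a forward-only, read-only statement with fetch size Integer.MIN_VALUE asks the driver to stream rows instead of buffering the whole result set in memory. The URL, credentials, table, and query are placeholders.
    import java.sql.*;

    public class StreamingDump {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/mydb", "user", "password");
                 Statement st = con.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                st.setFetchSize(Integer.MIN_VALUE); // Connector/J hint: stream row by row
                try (ResultSet rs = st.executeQuery("SELECT id, payload FROM big_table")) {
                    while (rs.next()) {
                        // Write each row straight out (e.g. to a CSV/Excel writer)
                        // rather than accumulating rows in a collection.
                        System.out.println(rs.getLong("id") + "," + rs.getString("payload"));
                    }
                }
            }
        }
    }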
