PRS-600 e-reader taking too much time to load all the books

Whenever I power on my PRS-600 from a complete shutdown, it takes too much time to load all the files on the memory card. Is anyone else facing the same issue?

Hello,
Recently I have been having some serious issues with my Sony PRS-600. I am running Windows 7 64-bit and an up-to-date Reader Library 3.3.
The issue appears when transferring books from the library to the e-reader, and from the e-reader into a collection. The software becomes intolerably slow while it processes the command: the Reader Library window grays out, shows "(Not Responding)", and when I click on it, it fades to white and offers to either cancel the operation or wait for the program to respond. If I do cancel the operation, the e-reader does not seem to notice and still displays "Do not disconnect". Since I see no other way to disconnect (other than the eject option), I pull the USB plug, which causes further problems with the reader (such as wiping all of my collections, for example!).
But anyway, that's not the main issue here. The main issue is that book transfers are really slow. I need to wait a couple of minutes (or even more) just for the software to process the request; moving 1 MB of data takes as long as if it were 1 GB. Sometimes, seemingly at random, it is fast, and at other times the application is best left alone entirely while it processes the command. If I open My Computer, merely loading the e-reader's storage icons and information makes Windows Explorer "crash" (i.e., close all windows and then reopen them). Just as randomly, even creating a collection makes the software slow.
So to recap: the reader software is slow when adding and moving books.
I hope someone will help me resolve this annoyance.
Thank you,
KQ

Similar Messages

  • Taking too much time to load application

    Hi,
    I have deployed a J2EE application on Oracle Application Server 10g, version 10.1.2.0.2, but the application is taking too much time to load. After loading, everything works fast.
    I have another 10g server (same version) on which the same application loads very fast.
    When I checked the Apache error logs I found this:
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:36:36 2007] [notice] FastCGI: process manager initialized (pid 21177)
    [Thu Apr 26 11:36:37 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:36:37 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:36:37 2007] [warn] long lost child came home! (pid 9124)
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0015: recv() returns 0. There has no message available to be received and oc4j has gracefully (orderly) closed the connection.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0184: Failed to find an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0119: Failed to get an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:46:33 2007] [notice] FastCGI: process manager initialized (pid 21726)
    [Thu Apr 26 11:46:34 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:46:34 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:46:34 2007] [warn] long lost child came home! (pid 21182)
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    Please HELP ME...

    Hi, this is the solution given by your link:
    A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
    Problem
    To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
    In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of stateful firewalls.
    However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
    Solution
    The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
    Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
    Oc4jUserKeepalive on
    Oc4jConnTimeout 12000 (or a similar value)
    Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
    ajp.keepalive=true
    For example:
    java -Dajp.keepalive=true -jar oc4j.jar
    Please tell me where, or in which file, I should put the option
    java -Dajp.keepalive=true -jar oc4j.jar
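    For a full Oracle Application Server install (as opposed to a standalone OC4J launched by hand with java -jar oc4j.jar), JVM options are normally not typed on a command line at all; they are passed to the OC4J instance through OPMN. The snippet below is only an illustrative sketch of the kind of opmn.xml entry involved, assuming the OC4J instance is named "home" as in the logs above; the exact component names and the java-options already present in your opmn.xml will differ, so treat it as a pointer rather than a drop-in configuration:
    <!-- $ORACLE_HOME/opmn/conf/opmn.xml (sketch only, not a verbatim copy of your file) -->
    <ias-component id="OC4J">
      <process-type id="home" module-id="OC4J">
        <module-data>
          <category id="start-parameters">
            <!-- append -Dajp.keepalive=true to whatever java-options are already listed -->
            <data id="java-options" value="-server -Dajp.keepalive=true"/>
          </category>
        </module-data>
      </process-type>
    </ias-component>
    After editing, restart the instance (for example with opmnctl restartproc ias-component=OC4J process-type=home) so the new JVM option takes effect. The Oc4jUserKeepalive and Oc4jConnTimeout directives quoted above go on the Apache side, in mod_oc4j.conf or httpd.conf, as the documentation states. For a standalone OC4J you would instead add -Dajp.keepalive=true directly to the java command that launches oc4j.jar.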

  • Creative Cloud is taking too much time to load and is not downloading the one month trial for Photoshop I just paid money for.

    Creative Cloud is taking too much time to load and is not downloading the one month trial for Photoshop I just paid money for.

    Stop the download if it's stalled, and restart it.

  • Full DTP taking too much time to load

    Hi All ,
    I am facing an issue where a DTP is taking too much time to load data from a DSO to a cube via a process chain, and also when running it manually.
    There are 6 similar DTPs which load data for different countries (different DSOs and cubes as source and target respectively) for the last 7 days based on GI Date. All the DTPs pull almost the same number of records and finish within 25-30 minutes, but one DTP takes around 3 hours. The problem started a couple of days back.
    I have changed the parallel processes from 3 to 4 to 5 and the packet size from 50,000 to 10,000 to 100,000, but there is no improvement. I also want to mention that all the source DSOs and target cubes have the same structure, and all the transformations have field routines and end routines.
    Can you all please share some pointers which can help.
    Thanks
    Prateek

    Hi Raman,
    This is what I get when I check the report. Could this be causing issues, given that 2 rows have a ratio >= 100%?
    ETVC0006           /BIC/DETVC00069     rows:      1.484    ratio:          0  %
    ETVC0006           /BIC/DETVC0006C     rows: 15.059.600    ratio:        103  %
    ETVC0006           /BIC/DETVC0006D     rows:        242    ratio:          0  %
    ETVC0006           /BIC/DETVC0006P     rows:         66    ratio:          0  %
    ETVC0006           /BIC/DETVC0006T     rows:        156    ratio:          0  %
    ETVC0006           /BIC/DETVC0006U     rows:          2    ratio:          0  %
    ETVC0006           /BIC/EETVC0006      rows: 14.680.700    ratio:        100  %
    ETVC0006           /BIC/FETVC0006      rows:          0    ratio:          0  %
    ETVC0007           rows: 13.939.200    density:              0,0  %

  • BPC application is taking too much time to load

    Hi experts!
    I'm facing a very weird problem...
    We've developed a BPC application (app name: USM).
    This application is taking too much time to load on some computers (around 8 minutes). Yes, on SOME computers.
    There are around 100,000 records in the database, most of them coming from material master data.
    If I load the USM application on another computer, it loads smoothly. The computers' hardware is all the same, the server is more than powerful enough, and everyone is on the same network.
    I talked to the infrastructure department and we ran several tests. We ran BPC on the server itself (it loaded quickly) and on several computers (some load quickly, others don't), used both wireless and cable connections (same result either way), and checked the communication between BW and BPC, which is fine.
    Finally, I tried to load the APSHELL application in the same environment and it loaded instantly. So I guess something is wrong with my application, but if that were the case, I would expect it to happen on all computers and not only on some of them.
    Have anybody ever seen something like this?
    Thank you in advance.
    Rubens
    Edited by: Rubens Massayuki Kumori on May 12, 2011 8:43 PM
    Edited by: Rubens Massayuki Kumori on May 12, 2011 8:46 PM

    Hi Rubens,
    I would try making a couple of tests:
    1. Install the client on a machine located in the same network segment, or use a VPN that communicates with the server bypassing all security devices, just to see whether the network is the problem.
    2. Run a full optimize of the application to see whether the problem is related to the segmentation of the cubes (I don't think this is the problem, but give it a try).
    It is very weird that it happens on some computers and not on others. Also try cleaning up the local application cache on the computers that are giving you bad performance, and retry.
    hope it helps,

  • Page leads to proxy error sometimes or taking too much time to load

    Hi All,
    APEX4.0
    Web server: Apache 1.3.9 (Oracle 9iAS 10.0.1.2.2)
    The first time I try to open a page I either get the error below or the page takes 3 to 5 minutes to load. From the second time onwards it takes 5 to 8 seconds to open, as normal. I debugged the page and checked the log; the logs and execution times look normal. Why does the page take so long to load the first time, or lead to a proxy error? Has anybody had the same experience before?
    Proxy Error
    The proxy server received an invalid response from an upstream server.
    The proxy server could not handle the request GET/pls/apex/f.
    Reason : Document contains no data
    Please guide me to find and resolve this issue.
    Thanks in Advance
    Lakshmi

    Hi, this is the solution given by your link:
    A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
    Problem
    To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
    In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of stateful firewalls.
    However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
    Solution
    The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
    Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
    Oc4jUserKeepalive on
    Oc4jConnTimeout 12000 (or a similar value)
    Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
    ajp.keepalive=true
    For example:
    java -Dajp.keepalive=true -jar oc4j.jar
    Please tell me where, or in which file, I should put the option
    java -Dajp.keepalive=true -jar oc4j.jar

  • Crystal Report Taking too much time to load

    Dear Support team,
    Every time I open an existing Crystal Report it takes about 5 minutes to load (even if it is a new, blank report). Also, after one report has loaded for the first time, loading further reports takes just as long.
    Please note that I was not facing this problem earlier (same Crystal Reports version); the issue started after I came back from leave. Our IT guy told me that he had uninstalled Visual Studio in my absence (a license issue), and he cannot think of any other relevant changes.
    I would appreciate your help in resolving this issue; I work with a large number of Crystal Reports on a daily basis, and this problem is killing my time.
    Thank you for your time and support.
    Best Regards,
    Saadeddine Nahlous
    Systems Developer

    Hello again,
    I just found out more info about this problem that might help us troubleshoot it:
    My colleague and I were both using the old Crystal Reports version 10 (before SAP) until about a month ago. Then I installed the new SAP Crystal Reports 2013 trial version to test it, faced no issues there, and asked to purchase the product. One week ago my company bought two licenses for me and my colleague, which arrived in two boxes containing two DVDs, and we reinstalled the software from the DVDs. Since then we are BOTH facing this delay when opening any Crystal Report. (The delay is unbearable; it is truly about 3 minutes, without exaggeration.)
    For the record, we both have high-spec PCs.
    Please help us investigate this issue.
    Thank you

  • 0HR_PT_3 load is taking too much time when I execute the InfoPackage

    Dear All,
    I am working on the HR module.
    I am extracting data from 0HR_PT_3 from the R/3 PRD system through an InfoPackage, and it is taking a long time to load the data.
    I checked in RSA3 in the R/3 PRD system, and whenever I extract 0HR_PT_3, instead of showing records it shows the error message "Infotype 2002 not read due to lack of authorization".
    Could you please let me know if anyone has resolved this issue?
    Thanks,
    Venkat.
    Edited by: venkatnaresh7 on Jul 12, 2011 11:49 AM

    Hi,
    I am facing the same issue. Were you able to fix this?

  • Report taking too much time in the portal

    Hi friends,
    We have developed a report on the ODS and published it on the portal.
    The problem is that when users execute the report at the same time, it takes too much time, and because of this the performance is very poor.
    Is there any way to sort out this issue? For example, can we send the report to each user's e-mail ID so that they do not have to log in to the portal, or can we create the same report on the cube?
    What would be the main difference if the report were built on the cube rather than the ODS?
    Please help me.
    Thanks in advance,
    Sridath

    Hi
    Try this to improve performance of query
    Find the query runtime.
    Where do you find the query runtime?
    Note 557870 - 'FAQ BW Query Performance'
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips
    Using aggregates and compression.
    Using fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid too many characteristics in the rows.
    By using T-codes ST03 or ST03N
    Go to transaction ST03, switch to expert mode, and in the left-side menu, under the system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by e-mail. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio and the number of records transferred to the front end versus the records selected from the database.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. To check the performance of the aggregates, see the VALUATION and USAGE columns in the aggregate maintenance.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The "---"/"+++" signs are the valuation of the aggregate design and usage. "++" means the aggregate's compression is good and it is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. "--" means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate is and the more queries it satisfies; the more minus signs, the worse the evaluation of the aggregate.
    If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    In the valuation column, more plus signs mean the aggregate performs well and is worth keeping, while more minus signs mean it is better not to use that aggregate.
    In the usage column, you can see how often the aggregate has been used by queries.
    Thus you can check the performance of the aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Your query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
    To do this, implement the BW Statistics Business Content: you need to install it, feed it with data, and then use the ready-made reports for analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless by checking the tables RSDDSTATAGGRDEF*.
    Run the query in RSRT with statistics and come back; you will get a STATUID. Copy this and look it up in the table.
    This shows exactly which InfoObjects the query hits; if any one of those objects is missing from the aggregate, the aggregate is useless.
    6.
    Check table RSDDAGGRDIR in SE11. You can find the last call-up of each aggregate in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • While condition is taking too much time

    I have a query that returns around 2100 records (not many!), but when I process the result set with a while loop it takes too much time (around 30 seconds). Here is the code:
    public static GroupHierEntity load(Connection con) throws SQLException {
        internalCustomer = false;
        String customerNameOfLogger = com.photomask.framework.ControlServlet.CUSTOMER_NAME;
        // check for null/empty first so the startsWith() calls cannot throw a NullPointerException
        if (customerNameOfLogger == null || customerNameOfLogger.equals("")
                || customerNameOfLogger.equals("Unavailable")
                || customerNameOfLogger.startsWith("DPI") || customerNameOfLogger.startsWith("DUPONT")) {
            internalCustomer = true;
        }
        // System.out.println(" ***************** customer name of logger " + com.photomask.framework.ControlServlet.CUSTOMER_NAME + " internal customer " + internalCustomer);
        // show all groups to internal customers and only their own customer groups to external customers
        if (internalCustomer) {
            stmtLoad = con.prepareStatement(sqlLoad);
            ResultSet rs = stmtLoad.executeQuery();
            return new GroupHierEntity(rs);
        } else {
            stmtLoadExternal = con.prepareStatement(sqlLoadExternal);
            stmtLoadExternal.setString(1, customerNameOfLogger);
            stmtLoadExternal.setString(2, customerNameOfLogger);
            // System.out.println("***** sql " + sqlLoadExternal);
            ResultSet rs = stmtLoadExternal.executeQuery();
            return new GroupHierEntity(rs);
        }
    }

    // calling code
    GroupHierEntity ge = GroupHierEntity.load(con);
    while (ge.next()) {
        lvl = ge.getInt("lvl");
        oid = ge.getLong("oid");
        name = ge.getString("name");
        if (internalCustomer) {
            if (lvl == 2) {
                int i = getAlphaIndex(name);
                super.setAppendRoot(alphaIndex);
            }
        }
        gn = new GroupListDataNode(lvl + 1, oid, name);
        gn.setSelectable(true);
        this.addNode(gn);
        count++;
    }
    System.out.println("*** count " + count);
    ge.close();
    ========================
    Then I removed everything inside the while loop and ran it as is; it still takes the same time (30 seconds):
    while (ge.next()) {
        count++;
    }
    Why is the while condition (ge.next()) taking so much time? Is there a more efficient way of reading the result set?
    Thanks ,
    bala

    I tried all these things. The query itself is not taking much time (1 second), but resultset.next() is taking too much time. I measured the time by putting System.out.println() calls at various points to see which part takes how long.
    executeQuery() takes only 1 second. Processing the result set (moving the cursor to the next position) is what takes too much time.
    I have similar queries that return some 800 rows, and they take only 1 second.
    I suspect resultset.next(). Is there any other alternative?
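    One thing worth trying, offered as a hedged suggestion rather than a confirmed fix: ResultSet.next() often looks slow when the JDBC driver fetches rows from the database in very small batches, because each small batch costs a network round trip. Increasing the fetch size on the Statement is the standard JDBC knob for this. The sketch below assumes plain JDBC; the fetch size of 500 and the table/column names are arbitrary placeholders, not taken from the original code:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class FetchSizeExample {
        public static int countRows(Connection con) throws SQLException {
            // hypothetical query standing in for sqlLoad in the post above
            String sql = "SELECT lvl, oid, name FROM group_hierarchy";
            try (PreparedStatement stmt = con.prepareStatement(sql)) {
                // ask the driver to fetch rows in batches of 500 instead of its (often tiny) default
                stmt.setFetchSize(500);
                try (ResultSet rs = stmt.executeQuery()) {
                    int count = 0;
                    while (rs.next()) {   // far fewer round trips once rows arrive in batches
                        count++;
                    }
                    return count;
                }
            }
        }
    }
    If the 2100-row loop still takes around 30 seconds with a larger fetch size, the time is more likely being spent inside whatever the GroupHierEntity wrapper does per row (or in the query plan itself) than in the cursor movement.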

  • SSRS report taking too much time to load

    Hello all,
    I have a matrix report which was running fine until a few days ago, but now it is taking too much time to load.
    I tested the stored procedure, and it returns the same data in 5 seconds.
    When I tested on the SSRS (BIDS) side I set the dataset timeout option to 60 seconds, and it timed out.
    Then I set a 1000-second timeout, but the report just keeps showing the loading animation.
    When I check the ExecutionLog of the report server I find something like this:
    TimeDataRetrieval  TimeProcessing  TimeRendering  Status                                   RowCount
    131951             323             95             rsHttpRuntimeClientDisconnectionError    1784
    Please reply quickly; help is much appreciated.
    Thanks ..
    Dilip Patil..

    Hi Dilip,
    According to your description, your report keeps showing the loading symbol when rendering. Right?
    In this scenario, as you can see in the Execution Log, it spends a lot of time on data retrieval, and based on the status information the issue is in the connection. Since you have tested the stored procedure and it works properly, please check whether the credentials for the data source have permission to connect to the database. You can also use Test Connection when creating the data source. As you mentioned, the connection uses radio frequency, so there may be heavy network traffic while transferring the data.
    Reference:
    Troubleshooting Reports: Report Performance
    Best Regards,
    Simon Hou

  • Taking too much time using BufferedWriter to write to a file

    Hi,
    I'm using the method extractItems(), given below, to write data to a file. This method takes too much time to execute when the number of records in the enumeration is 10000 or more; to be precise, it takes around 70 minutes. The writing pauses intermittently for 20 seconds after writing a few lines, and sometimes for much longer. Has somebody faced this problem before, and if so, what could be the cause? This is very high-priority work, and it would be really helpful if someone could give me some info on this.
    Thanks in advance.
    public String extractItems() throws InternalServerException {
        try {
            String extractFileName = getExtractFileName();
            FileWriter fileWriter = new FileWriter(extractFileName);
            BufferedWriter bufferWrt = new BufferedWriter(fileWriter);
            CXBusinessClassIfc editClass = new ExploreClassImpl(className, mdlMgr);
            System.out.println("Before -1");
            CXPropertyInfoIfc[] propInfo = editClass.getClassPropertyInfo(configName);
            System.out.println("After -1");
            PrintWriter out = new PrintWriter(bufferWrt);
            System.out.println("Before -2");
            TemplateHeaderInfo.printHeaderInfo(propInfo, out, mdlMgr);
            System.out.println("After -2");
            XDItemSet itemSet = getItemsForObjectIds(catalogEditDO.getSelectedItems());
            Enumeration allitems = itemSet.allItems();
            System.out.println("the batch size : " + itemSet.getBatchSize());
            XDForm frm = itemSet.getXDForm();
            XDFormProperty[] props = frm.getXDFormProperties();
            System.out.println("Before -3");
            bufferWrt.newLine();
            long startTime, startTime1, startTime2, startTime3;
            startTime = System.currentTimeMillis();
            System.out.println("time here is--before-while : " + startTime);
            while (allitems.hasMoreElements()) {
                String aRow = "";
                XDItem item = (XDItem) allitems.nextElement();
                for (int i = 0; i < props.length; i++) {
                    String value = item.getStringValue(props[i]);
                    if (value == null || value.equalsIgnoreCase("null"))
                        value = "";
                    if (i == 0)
                        aRow = value;
                    else
                        aRow += ("\t" + value);
                }
                startTime1 = System.currentTimeMillis();
                System.out.println("time here is--before-writing to buffer --new: " + startTime1);
                bufferWrt.write(aRow.toCharArray());
                bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
                bufferWrt.newLine();
                startTime2 = System.currentTimeMillis();
                System.out.println("time here is--after-writing to buffer : " + startTime2);
            }
            startTime3 = System.currentTimeMillis();
            System.out.println("time here is--after-while : " + startTime3);
            out.close(); // added by rosmon to check extra time taken for extraction
            bufferWrt.close();
            fileWriter.close();
            System.out.println("After -3");
            return extractFileName;
        } catch (Exception e) {
            e.printStackTrace();
            throw new InternalServerException(e.getMessage());
        }
    }

    Hi fiontan,
    Thanks a lot for the response!!!
    Yeah, I know it's a lot of code, but I thought it would be more informative if the whole function was quoted.
    I am in fact using the PrintWriter to wrap the BufferedWriter, but I am not using the print() method.
    Does using the print() method save any time?
    The place where the delay occurs is the while loop shown below:
    while (allitems.hasMoreElements()) {
        String aRow = "";
        XDItem item = (XDItem) allitems.nextElement();
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]);
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i == 0)
                aRow = value;
            else
                aRow += ("\t" + value);
        }
        startTime1 = System.currentTimeMillis();
        System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
        bufferWrt.write(aRow.toCharArray());
        out.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.newLine();
        startTime2 = System.currentTimeMillis();
        System.out.println("time here is--after-writing to buffer : " + startTime2);
    }
    What exactly happens is that after a few loops it just seems to sleep for around 20 seconds, then starts off again, and it goes on like that until the records are done.
    Please do let me know if you have any idea why this is happening! This bug is giving me a scare.
    thanks in advance
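    Looking only at the code quoted above, two things that commonly amplify this kind of stall are the per-row flush() calls (which defeat the purpose of the BufferedWriter) and building each row with String concatenation (+=) in the inner loop. The sketch below shows the same loop body rewritten with a StringBuilder and a single flush after the loop; the XDItem/XDFormProperty types are taken from the post and assumed to behave as shown, so treat this as a possible cleanup rather than a confirmed fix (if the 20-second pauses come from allitems.nextElement() fetching the next batch from the backend, they will remain):
    StringBuilder row = new StringBuilder();
    while (allitems.hasMoreElements()) {
        XDItem item = (XDItem) allitems.nextElement();
        row.setLength(0);                        // reuse one buffer per row instead of String +=
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]);
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i > 0)
                row.append('\t');
            row.append(value);
        }
        bufferWrt.write(row.toString());         // no flush() here; let the BufferedWriter batch the I/O
        bufferWrt.newLine();
    }
    bufferWrt.flush();                           // flush once, after the loop
    If the pauses persist with this version, timing allitems.nextElement() on its own (without any writing) would show whether the enumeration itself is the slow part.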

  • Why it is taking too much time to kill the process?

    Hi All,
    Today one of my users ran a calc script and the process was taking too much time, so I killed the process. What I am wondering about is that even killing the process is taking too long; generally it takes no more than 2 seconds. I did this through EAS.
    After that I ran the MaxL statement
    alter system kill request 552599515;
    but it had no effect at all.
    Please reply if you have any solutions to kill this process.
    Thanks in advance.
    Ram.

    Hi Ram,
    1. Firstly, how long do your calculation scripts normally run?
    2. While it is running, you can go to the logs and monitor where exactly the script is spending its time.
    3. Sometimes it does take time to cancel a transaction (as it might be in the middle of one).
    4. MaxL is always a good way to kill it, as you did; it should be successful. Check what the logs say, and also the "sessions", which might show "terminating", and finish it off there.
    5. If nothing works, and in the worst-case scenario it keeps taking time without doing anything, log off all the users, stop the database and start it again.
    6. Do log off all the users, so that you don't corrupt any filter-related security (.sec) file.
    Be very careful if it's production (and I assume you have recent backups).
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/

  • Server0 process taking too much time

    Hi All,
    Once I start the NetWeaver server, the server0 process takes too much time.
    When I installed NetWeaver it took 13 minutes; after 2 months it was 18 minutes, then 25 minutes, and now it takes 35 minutes to turn green.
    Why is it taking so much time, and what might be the cause?
    Please give me some ideas to solve this problem.
    The server0 developer trace has this information continuously 6 to 7 times...
    [Thr 4204] *************** STISEND ***************
    [Thr 4204] STISEND: conversation_ID: 86244265
    [Thr 4204] STISEND: sending 427 bytes
    [Thr 4204] STISearchConv: found conv without search
    [Thr 4204] STISEND: send synchronously
    [Thr 4204] STISEND GW_TOTAL_SAPSEND_HDR_LEN: 88
    [Thr 4204] NiIWrite: hdl 0 sent data (wrt=515,pac=1,MESG_IO)
    [Thr 4204] STIAsSendToGw: Send to Gateway o.k.
    [Thr 4204] STIAsRcvFromGw: timeout value: -1
    [Thr 4204] NiIRead: hdl 0 recv would block (errno=EAGAIN)
    [Thr 4204] NiIRead: hdl 0 received data (rcd=3407,pac=2,MESG_IO)
    [Thr 4204] STIAsRcvFromGw: Receive from Gateway o.k.
    [Thr 4204] STISEND: data_received: CM_COMPLETE_DATA_RECEIVED
    [Thr 4204] STISEND: received_length: 3327
    [Thr 4204] STISEND: status_received: CM_SEND_RECEIVED
    [Thr 4204] STISEND: request_to_send_received: CM_REQ_TO_SEND_NOT_RECEIVED
    [Thr 4204] STISEND: ok
    [Thr 4204] STIRCV: new buffer state = BUFFER_EMPTY
    [Thr 4204] STIRCV: ok
    [Thr 4204] *************** STSEND ***************
    [Thr 4204] STSEND: conversation_ID: 86244265
    [Thr 4204] STISearchConv: found conv without search
    [Thr 4204] STSEND: new buffer state = BUFFER_DATA
    [Thr 4204] STSEND: 106 bytes buffered
    [Thr 4204] *************** STIRCV ***************
    [Thr 4204] STIRCV: conversation_ID: 86244265
    [Thr 4204] STIRCV: requested_length: 16000 bytes
    [Thr 4204] STISearchConv: found conv without search
    [Thr 4204] STIRCV: send 106 buffered bytes before receive
    [Thr 4204] STIRCV: new buffer state = BUFFER_DATA2
    [Thr 4204] *************** STISEND ***************
    then
    [Thr 4252] JHVM_NativeGetParam: get profile parameter DIR_PERF
    [Thr 4252] JHVM_NativeGetParam: return profile parameter DIR_PERF=C:\usr\sap\PRFCLOG
    This message repeats continuously.
    If there is any solution to the above problem, please let me know.
    Thanks & regards,
    Sridhar M.

    Hello Manoj,
    Thanks for your quick response. Previously the server had 4 GB of RAM, and it still has the same now.
    Yesterday I found some more information: applications deployed (through SDM) also take some memory when the J2EE server starts. Is that right?
    If there is any other possible cause, let me know.
    Thanks & Regards,
    Sridhar M.

  • ACCTIT table Taking too much time

    Hi,
    In SE16, for table ACCTIT, I entered the G/L account number and executed it in our production system; it is taking too much time to show the result.
    Thanku

    Hi,
    Here I am sending the details of the technical settings.
    Name                 ACCTIT                          Transparent Table
    Short text            Compressed Data from FI/CO Document
    Last Change        SAP              10.02.2005
    Status                 Active           Saved
    Data class         APPL1   Transaction data, transparent tables
    Size category      4       Data records expected: 24,000 to 89,000
    Thanku
