Too much time watching beachballs!  How to eliminate them?

This is a possible duplicate of a similar message I posted a few minutes ago; I'm not sure it posted, and it's not showing yet.
I have an iMac 20 inch Late 2006 (MA589WA) running OS X 10.7.3.
It has Intel Core 2 Duo
1 processor, 2 cores
L2 cache: 4 MB
Memory: 2 GB
Bus speed: 667 MHz
The memory seems to be about 80% full according to a chart I saw earlier.  I back up everything to Time Machine, and I suppose I could clean the
disk of everything but applications, making sure I had properly saved all my records, letters, papers, and songs, and see if that does it, but I'd much rather find a simpler fix from the community if any of you have had similar problems and solved them.
Many Thanks,  Bill Friedlander      [email protected]

Hi Bill, apparently it didn't post the other time.
What Apple doesn't tell you is that while it will work with only 2 GB of RAM, performance will be terrible. Even 4 GB was terrible on my iMac; 6 GB made a big difference, but I wish it could take 16 GB for 10.7+.
Free disk space is another bugaboo; free space isn't really ours to use anymore. OS X would like at least 3% free space to work mostly OK.
Open Activity Monitor in Applications > Utilities, select All Processes and sort on % CPU. Any indications there?
How much RAM and free space do you have? Also click on the Memory and Disk Usage tabs.
In the Memory tab, are there a lot of Pageouts?

Similar Messages

  • Large record set retrieved over the network takes too much time. How can I improve it?

    Hi All,
    I have an Oracle 10g server, and I am trying to fetch around 200 thousand (2 lakh) records, using a Servlet deployed on JBoss 4.0.
    These records come over the network.
    I have used the simple rs.next() method, but it takes too much time: I get only 30 records within 1 sec, so fetching all 2 lakh records takes the system more than 40 minutes, and my requirement is that they be retrieved within 40 minutes.
    Is there another way around this problem? Is there any way the ResultSet can get 1000 records in one call?
    As I read somewhere: “If we use a normal ResultSet, data isn't retrieved until you do the next call. The ResultSet isn't a memory table, it's a cursor into the database which loads the next row on request (though the drivers are at liberty to anticipate the request).”
    So if we can request around 1000 records in one call, then maybe we can reduce the time.
    Does anyone have an idea how to solve this problem?
    Regards,
    Shailendra Soni

    That's true...
    I have solved my problem by invoking setFetchSize on the ResultSet object,
    e.g. ResultSet.setFetchSize(1000).
    But that only sorted out the problem for fewer than 1 lakh records. I still want to test with more than 1 lakh records.
    Actually I had read a one nice article on net
    [http://www.precisejava.com/javaperf/j2ee/JDBC.htm#JDBC114]
    They describe solutions for this type of problem, but they don't give any examples, and without examples I can't work out how to solve it.
    They gave two solutions, i.e.:
    Fetch small amounts of data iteratively instead of fetching the whole data set at once.
    Applications generally need to retrieve huge amounts of data from the database using JDBC in operations like searching. If the client requests a search, the application might return the whole result set at once. This takes a lot of time and hurts performance. The solutions for the problem are:
    1. Cache the search data on the server side and return the data to the client iteratively. For example, if the search returns 1000 records, return the data to the client in 10 iterations, where each iteration has 100 records.
    // But I don't understand how I can do it in Java.
    2. Use stored procedures to return data iteratively. This does not use server-side caching; rather, the server-side application uses stored procedures to return small amounts of data iteratively.
    // But I don't understand how I can do it in Java.
    If you know either of these solutions, can you please give me examples of how to do it?
    Thanks in Advance,
    Shailendra
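    The first suggestion above (cache the result on the server, return it in pages) can be sketched in plain Java. This is only an illustration: the PagedCache class and its method names are hypothetical, not part of any framework.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of solution 1: cache the full search result on the
// server once, then hand it to the client one page per call.
public class PagedCache<T> {
    private final List<T> cached;  // server-side cache of the full result
    private final int pageSize;

    public PagedCache(List<T> results, int pageSize) {
        this.cached = results;
        this.pageSize = pageSize;
    }

    // Number of iterations the client needs, e.g. 1000 records / 100 = 10.
    public int pageCount() {
        return (cached.size() + pageSize - 1) / pageSize;  // ceiling division
    }

    // Page 'page' (0-based) as a slice of the cache.
    public List<T> page(int page) {
        int from = page * pageSize;
        int to = Math.min(from + pageSize, cached.size());
        return cached.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 1000; i++) rows.add(i);  // pretend search result
        PagedCache<Integer> cache = new PagedCache<>(rows, 100);
        System.out.println(cache.pageCount());     // 10
        System.out.println(cache.page(9).size());  // 100
    }
}
```

    In a Servlet, the cache would typically live in the user's session, and each client request would ask for the next page number.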

  • How do I stop TouchSmart from loading on startup? It's taking too much time

    pc taking too long to boot, touchsmart loading on every boot up, taking way too much time

    richardfit wrote:
    pc taking too long to boot, touchsmart loading on every boot up, taking way too much time
    disable inside application, use msconfig, uninstall other applications, etc.... (as I try to replicate your grammar and style)

  • White MacBook takes too much time to boot up

    Hi there.
    I got the top case (keyboard) of my MacBook repaired (white MacBook, 2006), and since then it takes nearly 7 to 10 seconds after pressing the power button before I first see the Apple logo and it boots up.
    I'm not sure, but it seems like detecting the hardware takes too much time.
    Any solution?
    PS: I'm running Snow Leopard (10.6.2)

    macbig wrote:
    OK, 32 seconds is not good. I don't think it has anything to do with a keyboard repair. You need to try a little maintenance: Reset Pram - http://docs.info.apple.com/article.html?artnum=2238
    Repair Permissions - http://support.apple.com/kb/HT1452
    Run Disk Utility: Applications > Utilities > Disk Utility > Verify Disk
    How much free space do you have on your HD? Right-click on the HD icon and select "Get Info" in the drop-down menu (if you don't have right-click capabilities, highlight your HD icon, go to File in the Apple menu, and select "Get Info"). Let us know your results.
    Yes, PRAM was the problem; I'd already tried it from this thread http://discussions.apple.com/thread.jspa?threadID=2303507&tstart=0
    By the way, thank you.

  • Taking too much time in Rules(DTP Schedule run)

    Hi,
    I am scheduling a DTP which has filters to minimize the load data.
    When I run the DTP, it takes too much time in the "rules" step (I can see the DTP monitor status package by package and step by step: "Start Routine", "Rules", "End Routine").
    It is consuming too much time in the rules mapping.
    What is the problem, and are there any solutions?
    regards,
    sree

    Hi,
    The time taken at "rules" depends on the complexity of your routine there. If it is a complex calculation, it will take time.
    Also check your DTP batch settings, i.e. how many background processes are used to perform the DTP, and the job class.
    You can find these:
    Go to the DTP, select the Goto menu and select "Settings for Batch Manager".
    In that screen, increase the number of processes from 3 to a higher number (max 9).
    Change the job class to 'A'.
    If your DTP is still running, cancel it (i.e. kill the DTP), delete the request from the cube,
    change these settings and run your DTP one more time.
    You can observe the difference.
    Reddy

  • Taking too much time (1min) to connect to database

    Hi,
    I have oracle 10.2 and 10g application server.
    It's taking too much time to connect to the database through the application (in a browser). The connection through SQL*Plus is fine.
    Please share your experience.
    Regards,
    Naseer

    Dear AnaTech,
    I am going to ask something not related to the question you already answered: how to connect Forms 6i and Developer 10g with OracleAS.
    I have installed Developer Suite 10g Ver. 10.1.2 and also Form Builder 6i. On another machine I have installed Oracle Database 10g Enterprise Edition Release 10.2.0.1.0, and on that database machine I also installed Oracle Enterprise Manager 10g Application Server Control 10.1.2.0.2.
    My database connectivity with Developer Suite Forms and Reports, and also with Forms 6i and Reports 6i, is working fine; no problem.
    My first question is that when I try to run a Form 6i form through Run From Web, I get this error: FRM-99999: error 18121 occurred, see the release notes.
    My main question is how I can control my OracleAS 10g with Forms: the basic function of OracleAS is the mid-tier, but I am not utilizing the mid-tier; I am using a two-tier environment even though I installed a three-tier environment. So please tell me how I can utilize it with three tiers.
    I hope you don't mind that I ask this question here, and if you give me your email we can discuss this in detail and I can benefit from your great expertise.
    Waiting for your great response.
    Regards,
    K.J.J.C

  • Why it is taking too much time to kill the process?

    Hi All,
    Today one of my users ran a calc script and the process was taking too much time, so I killed the process. I'm wondering about one thing: even killing the process is taking too long; generally it doesn't take more than 2 seconds. I did this through EAS.
    After that I ran the MaxL statement
    alter system kill request 552599515;
    with no effect at all.
    Please reply if you have any solutions to kill this process.
    Thanks in advance.
    Ram.

    Hi Ram,
    1. Firstly, how long do your calculation scripts normally run?
    2. While it is running, you can go to the logs and monitor where exactly the script is taking time.
    3. Sometimes it does take time to cancel a transaction (as it might be in the middle of one).
    4. MaxL is always a good way to kill, as you did. It should be successful. Check what the logs say, and also the "sessions", which might say "terminating", and finish it off.
    5. If nothing works, and in the worst-case scenario it keeps taking time without doing anything, then log off all the users and stop and restart the database.
    6. Do log off all the users, so that you don't corrupt any filter-related .sec file.
    Be very careful if it's production (and I assume you have recent backups).
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/

  • Report taking too much time in the portal

    Hi friends,
    We have developed a report on an ODS and published it on the portal.
    The problem is that when users execute the report at the same time, it takes too much time, so performance is very poor.
    Is there any way to sort out this issue? For example, can we send the report to the individual users' mail IDs
    so that they don't have to log in to the portal?
    Or can we create the same report on a cube?
    What would be the main difference between a report built on a cube versus an ODS?
    Please help me.
    Thanks in advance,
    sridath

    Hi
    Try this to improve the performance of the query.
    Find the query run-time.
    Where to find the query run-time?
    Note 557870 - 'FAQ BW Query Performance'
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions if possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid many characteristics in rows.
    You can also use T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the load time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve results faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important metrics: the aggregation ratio, and records transferred to the front end vs. records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > run the program SAP_INFOCUBE_DESIGNS.
    It will show dimension vs. fact table sizes in percent. If you mean speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. To check the performance of the aggregates, see the columns Valuation and Usage in the aggregates.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The "-"/"+" signs are the valuation of the aggregate design and usage. "++" means its compression is good and it is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. "--" means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the valuation of the aggregate.
    "-----" means the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    In the Valuation column, more plus signs mean the aggregate performs well and is worth keeping, while more minus signs mean we had better not use that aggregate.
    In the Usage column, we can see how often the aggregate has been used by queries.
    Thus we can check the performance of the aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
    By implementing the BW Statistics Business Content: you need to install it, feed it data, and analyze through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using the aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless through a process of checking the tables RSDDSTATAGGRDEF*.
    Run the query in RSRT with statistics, execute, and come back; you will get a STATUID. Copy this and check it in the table.
    This tells you exactly which InfoObjects the query hits; if any one of the objects is missing, it's a useless aggregate.
    6. Check SE11 > table RSDDAGGRDIR. You can find the last call-up in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • Data extracting to BW from R3 taking too much time

    Hi,
    We have one delta data load to an ODS from R3 that takes 4-5 hours; the job runs in R3 itself for 4-5 hours, even for 30-40 records. After this, the ODS data is updated to the cube, but since the ODS load itself takes too much time, the delta brings 0 records into the cube, so we have to update it manually.
    Also, while the load-to-ODS job is running we can't check the delta records in RSA3; it gives the error "error occurs during extraction".
    Can you please guide us on how to make this load faster, and if any index needs to be built, how to proceed on that front?
    Thanks
    Nilesh

    Rahul,
    I tried with R; it gives me a dump with the message "Result of customer enhancement: 19571 records".
    Error details are -
    Short text
        Function module " " not found.
    What happened?
        The function module " " is called,
        but cannot be found in the library.
        Error in the ABAP Application Program
        The current ABAP program "SAPLRSA3" had to be terminated because it has
        come across a statement that unfortunately cannot be executed.
    What can you do?
        Note down which actions and inputs caused the error.
        To process the problem further, contact your SAP system
        administrator.
        Using Transaction ST22 for ABAP Dump Analysis, you can look
        at and manage termination messages, and you can also
        keep them for a long time.

  • While condition is taking too much time

    I have a query that returns around 2100 records (not many!), and when I process the result set with a while loop, it takes too much time (around 30 seconds). Here is the code:
    public static GroupHierEntity load(Connection con) throws SQLException
    {
         internalCustomer = false;
         String customerNameOfLogger = com.photomask.framework.ControlServlet.CUSTOMER_NAME;
         // Note: the null check must come first, or startsWith() can throw a NullPointerException
         if ( customerNameOfLogger == null || customerNameOfLogger.equals("")
              || customerNameOfLogger.startsWith("DPI") || customerNameOfLogger.startsWith("DUPONT")
              || customerNameOfLogger.equals("Unavailable") )
         {
              internalCustomer = true;
         }
         // show all groups to internal customers and only their customer groups for external customers
         if (internalCustomer) {
              stmtLoad = con.prepareStatement(sqlLoad);
              ResultSet rs = stmtLoad.executeQuery();
              return new GroupHierEntity(rs);
         } else {
              stmtLoadExternal = con.prepareStatement(sqlLoadExternal);
              stmtLoadExternal.setString(1, customerNameOfLogger);
              stmtLoadExternal.setString(2, customerNameOfLogger);
              ResultSet rs = stmtLoadExternal.executeQuery();
              return new GroupHierEntity(rs);
         }
    }
    // The caller then iterates over the result:
    GroupHierEntity ge = GroupHierEntity.load(con);
    while (ge.next()) {
         lvl = ge.getInt("lvl");
         oid = ge.getLong("oid");
         name = ge.getString("name");
         if (internalCustomer) {
              if (lvl == 2) {
                   int i = getAlphaIndex(name);
                   super.setAppendRoot(alphaIndex);
              }
         }
         gn = new GroupListDataNode(lvl + 1, oid, name);
         gn.setSelectable(true);
         this.addNode(gn);
         count++;
    }
    System.out.println("*** count " + count);
    ge.close();
    ========================
    Then I removed everything in the while body and just ran it as below; it still takes the same time (30 secs):
    while (ge.next())
    { count++; }
    Why is the while condition (ge.next()) taking so much time? Is there any other, more efficient way of reading the result set?
    Thanks ,
    bala

    I tried all these things. The query itself is not taking much time (1 sec), but resultset.next() is taking too much time. I measured the time by putting System.out.println() at various points to see which part takes how long.
    executeQuery() takes only 1 sec. Processing the result set (moving the cursor to the next position) is what takes too much time.
    I have similar queries that return some 800 rows, and they take only 1 sec.
    I suspect resultset.next(). Any other alternative?
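    A likely explanation for executeQuery() being fast while ge.next() is slow is the driver's fetch size: Oracle's JDBC driver defaults to fetching 10 rows per network round trip, so iterating N rows costs roughly ceil(N / fetchSize) round trips. A rough model (the 140 ms latency figure is an assumption for illustration, not a measurement):

```java
// Back-of-the-envelope model of why ResultSet.next() can dominate:
// iterating N rows costs roughly ceil(N / fetchSize) network round trips.
public class FetchSizeModel {

    static long roundTrips(long rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize;  // ceiling division
    }

    // Estimated iteration time, assuming a fixed latency per round trip.
    static long estimatedMillis(long rows, int fetchSize, long latencyMs) {
        return roundTrips(rows, fetchSize) * latencyMs;
    }

    public static void main(String[] args) {
        // 2100 rows, Oracle's default fetch size of 10, ~140 ms latency (assumed):
        System.out.println(estimatedMillis(2100, 10, 140));   // 29400 ms, about 30 s
        // Same rows after setFetchSize(500): 5 round trips instead of 210.
        System.out.println(estimatedMillis(2100, 500, 140));  // 700 ms
    }
}
```

    If this model matches your setup, calling setFetchSize(500) on the PreparedStatement (or on the ResultSet before iterating), as in the earlier setFetchSize thread, should cut the 30 seconds to well under a second.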

  • Sun cluster 3.2 - resource hasstorageplus taking too much time to start

    I have a disk resource called "data" that takes too much time to start up when performing a switchover. Any idea what may control this?
    Jan 28 20:28:01 hnmdb02 Cluster.Framework: [ID 801593 daemon.notice] stdout: becoming primary for data
    Jan 28 20:28:02 hnmdb02 Cluster.RGM.rgmd: [ID 350207 daemon.notice] 24 fe_rpc_command: cmd_type(enum):<3>:cmd=<null>:tag=<hnmdb.data.10>: Calling security_clnt_connect(..., host=<hnmdb02>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<0>, ...)
    Jan 28 20:28:02 hnmdb02 Cluster.RGM.rgmd: [ID 316625 daemon.notice] Timeout monitoring on method tag <hnmdb.data.10> has been resumed.
    Jan 28 20:34:57 hnmdb02 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hastorageplus_prenet_start> completed successfully for resource <data>, resource group <hnmdb>, node <hnmdb02>, time used: 23% of timeout <1800 seconds>

    heldermo wrote:
    I have a disk resource called "data" that takes too much time to start up when performing a switchover. Any idea what may control this?
    I'm not sure how this is supposed to be related to Messaging Server. I suggest you ask your question in the Cluster forum:
    http://forums.sun.com/forum.jspa?forumID=842
    Regards,
    Shane.

  • Serial port vi taking too much time to communicate?

    I am using VISA VIs for serial communication; the read VI takes too much time to give me the bytes at the serial port. What should I do to decrease the delay?

    You wouldn't happen to be specifying a byte count to read, or setting read termination on a specific character, would you? If you are, and you don't have the right number of bytes in the serial buffer, or the termination character is not seen, then VISA Read will wait for the time specified by the VISA timeout value. To fix the first problem, use the VISA Bytes at Serial Port function to determine how many bytes to read. In the second case, use a VISA property node to set Message Based Settings: Termination Character Enable to false and Serial Settings: End Mode for Reads to none.

  • SAP Portal takes too much time to start up.

    Hi Team ,
    SAP Portal EP 7.3 is taking too much time to start up in jsmon.
    It takes around 1 hour to bring the system up. Are there any parameters to tune in the JVM or in configtool?
    Please assist here.
    Highly appreciate your inputs.
    Thanks,
    Pradeep.

    Hi Sunil,
    PFB the details:
    DB: Oracle 11.2
    How do we tune the Java stack for SAP EP 7.3?
    I have not done any tuning on our end. Please assist.
    dev_serverout:-
    J Wed Mar 26 18:15:16 2014
    J  577.193: [GC 577.197: [ParNew: 381696K->38144K(381696K), 0.9183179 secs] 707578K->400480K(2059008K), 0.9227558 secs] [Times: user=1.18 sys=0.04, real=0.93 secs]
    J Wed Mar 26 18:15:30 2014
    J  590.998: [GC 591.001: [ParNew: 381606K->38144K(381696K), 1.0207009 secs] 744570K->557213K(2059008K), 1.0249050 secs] [Times: user=0.99 sys=0.15, real=1.03 secs]
    J Wed Mar 26 18:15:38 2014
    J  599.162: [GC 599.166: [ParNew: 381550K->38144K(381696K), 0.6601464 secs] 900673K->826545K(2059008K), 0.6647236 secs] [Times: user=0.52 sys=0.23, real=0.67 secs]
    J Wed Mar 26 18:15:47 2014
    J  608.917: [GC 608.921: [ParNew: 381577K->38144K(381696K), 0.3152940 secs] 1170275K->915700K(2059008K), 0.3195879 secs] [Times: user=0.47 sys=0.08, real=0.32 secs]
    J  609.291: [GC [1 CMS-initial-mark: 877556K(1677312K)] 917998K(2059008K), 0.0856098 secs] [Times: user=0.06 sys=0.02, real=0.09 secs]
    J  609.382: [CMS-concurrent-mark-start]
    J Wed Mar 26 18:15:57 2014
    J  618.168: [GC 618.171: [ParNew: 381696K->38144K(381696K), 0.4155820 secs] 1259294K->961900K(2059008K), 0.4201296 secs] [Times: user=0.46 sys=0.05, real=0.42 secs]
    J Wed Mar 26 18:16:05 2014
    J  623.498: [CMS-concurrent-mark: 10.044/14.114 secs] [Times: user=21.20 sys=2.02, real=14.12 secs]
    J  623.504: [CMS-concurrent-preclean-start]
    J  626.383: [CMS-concurrent-preclean: 2.088/2.878 secs] [Times: user=3.48 sys=0.35, real=2.89 secs]
    J  626.389: [CMS-concurrent-abortable-preclean-start]
    J Wed Mar 26 18:16:10 2014
    J  631.387: [GC 631.391: [ParNew: 381696K->22533K(381696K), 0.2637625 secs] 1305473K->977544K(2059008K), 0.2693728 secs] [Times: user=0.38 sys=0.04, real=0.27 secs]
    J Wed Mar 26 18:16:11 2014
    J  CMS: abort preclean due to time 633.133: [CMS-concurrent-abortable-preclean: 4.064/6.741 secs] [Times: user=10.14 sys=1.76, real=6.75 secs]
    J Wed Mar 26 18:16:12 2014
    J  633.143: [GC[YG occupancy: 63189 K (381696 K)]633.149: [Rescan (parallel) , 0.0602906 secs]633.214: [weak refs processing, 0.0769317 secs]633.295: [class unloading, 0.1948185 secs]633.494: [scrub symbol & string tables, 0.1490077 secs] [1 CMS-remark: 955011K(1677312K)] 1018201K(2059008K), 0.6887325 secs] [Times: user=0.74 sys=0.01, real=0.70 secs]
    J  633.839: [CMS-concurrent-sweep-start]
    J Wed Mar 26 18:16:22 2014
    J  643.237: [GC 643.242: [ParNew: 366085K->31546K(381696K), 0.2383948 secs] 1235004K->900465K(2059008K), 0.2461795 secs] [Times: user=0.43 sys=0.01, real=0.25 secs]
    J Wed Mar 26 18:16:28 2014
    J  649.442: [CMS-concurrent-sweep: 8.471/15.601 secs] [Times: user=23.37 sys=5.04, real=15.61 secs]
    J  649.448: [CMS-concurrent-reset-start]
    J  649.474: [CMS-concurrent-reset: 0.024/0.024 secs] [Times: user=0.05 sys=0.00, real=0.03 secs]
    J  649.690: [GC 649.694: [ParNew: 375098K->26102K(381696K), 0.2501056 secs] 781384K->432388K(2059008K), 0.2543750 secs] [Times: user=0.43 sys=0.01, real=0.26 secs]
    J Wed Mar 26 18:16:36 2014
    J  657.708: [GC 657.712: [ParNew: 369654K->38144K(381696K), 0.2855292 secs] 776086K->459228K(2059008K), 0.2899376 secs] [Times: user=0.51 sys=0.01, real=0.29 secs]
    J Wed Mar 26 18:16:58 2014
    J  680.045: [GC 680.048: [ParNew: 381696K->25752K(381696K), 0.2315117 secs] 803279K->451797K(2059008K), 0.2360279 secs] [Times: user=0.43 sys=0.00, real=0.24 secs]
    J Wed Mar 26 18:20:30 2014
    J  891.356: [GC 891.360: [ParNew: 369304K->38144K(381696K), 0.4481697 secs] 797013K->480414K(2059008K), 0.4526714 secs] [Times: user=0.77 sys=0.00, real=0.45 secs]
    J Wed Mar 26 18:25:20 2014
    J  1181.876: [GC 1181.880: [ParNew: 381696K->38144K(381696K), 0.4403585 secs] 825160K->494576K(2059008K), 0.4451872 secs] [Times: user=0.72 sys=0.01, real=0.45 secs]
    J Wed Mar 26 18:31:15 2014
    J  1536.174: [GC 1536.178: [ParNew: 381696K->38144K(381696K), 0.6600226 secs] 838443K->525966K(2059008K), 0.6659761 secs] [Times: user=1.04 sys=0.01, real=0.67 secs]
    J Wed Mar 26 18:31:37 2014
    J  1557.349: [GC 1557.353: [ParNew: 381696K->38144K(381696K), 1.5205645 secs] 869518K->625864K(2059008K), 1.5256733 secs] [Times: user=2.49 sys=0.01, real=1.53 secs]
    F [Thr 70] Wed Mar 26 18:32:20 2014
    F  [Thr 70] *** LOG => State changed from 10 (Starting apps) to 3 (Running).
    F  [Thr 70] *** LOG    state real time: 1434.706 CPU time: 68.010 sys, 403.010 usr
    F  [Thr 70] *** LOG    total real time: 1603.079 CPU time: 84.960 sys, 648.540 usr
    J Wed Mar 26 18:33:10 2014
    J  1651.096: [GC 1651.101: [ParNew: 381694K->38144K(381696K), 0.5561227 secs] 969771K->660443K(2059008K), 0.5617626 secs] [Times: user=0.95 sys=0.00, real=0.56 secs]
    J Wed Mar 26 18:39:47 2014
    J  2048.550: [GC 2048.554: [ParNew: 381696K->33278K(381696K), 0.4621651 secs] 1007300K->690505K(2059008K), 0.4674345 secs] [Times: user=0.78 sys=0.01, real=0.47 secs]
    J Wed Mar 26 18:40:17 2014
    J  2077.817: [GC 2077.839: [ParNew: 376830K->38144K(381696K), 0.5596568 secs] 1035383K->718151K(2059008K), 0.5647648 secs] [Times: user=0.96 sys=0.00, real=0.59 secs]
    J Wed Mar 26 18:42:52 2014
    J  2233.875: [GC 2233.884: [ParNew: 381696K->38144K(381696K), 0.3730432 secs] 1062088K->728723K(2059008K), 0.3840431 secs] [Times: user=0.67 sys=0.01, real=0.39 secs]
    J Wed Mar 26 18:43:07 2014
    J  2248.178: [GC 2248.184: [ParNew: 381696K->38144K(381696K), 0.3272163 secs] 1072434K->740746K(2059008K), 0.3346392 secs] [Times: user=0.60 sys=0.01, real=0.34 secs]
    J Wed Mar 26 19:40:37 2014
    J  5698.207: [GC 5698.211: [ParNew: 381696K->26419K(381696K), 0.4125838 secs] 1084312K->747814K(2059008K), 0.4175661 secs] [Times: user=0.67 sys=0.00, real=0.42 secs]
    J Wed Mar 26 20:44:00 2014
    J  9501.747: [GC 9501.751: [ParNew: 369971K->35314K(381696K), 0.2025741 secs] 1091366K->756710K(2059008K), 0.2084589 secs] [Times: user=0.34 sys=0.00, real=0.21 secs]
    J Wed Mar 26 22:20:09 2014
    J  15270.427: [GC 15270.432: [ParNew: 378866K->34210K(381696K), 0.2035446 secs] 1100262K->755606K(2059008K), 0.2093321 secs] [Times: user=0.32 sys=0.01, real=0.21 secs]
    std_server0.out:-
    649.442: [CMS-concurrent-sweep: 8.471/15.601 secs] [Times: user=23.37 sys=5.04, real=15.61 secs]
    649.448: [CMS-concurrent-reset-start]
    649.474: [CMS-concurrent-reset: 0.024/0.024 secs] [Times: user=0.05 sys=0.00, real=0.03 secs]
    649.690: [GC 649.694: [ParNew: 375098K->26102K(381696K), 0.2501056 secs] 781384K->432388K(2059008K), 0.2543750 secs] [Times: user=0.43 sys=0.01, real=0.26 secs]
    657.708: [GC 657.712: [ParNew: 369654K->38144K(381696K), 0.2855292 secs] 776086K->459228K(2059008K), 0.2899376 secs] [Times: user=0.51 sys=0.01, real=0.29 secs]
    680.045: [GC 680.048: [ParNew: 381696K->25752K(381696K), 0.2315117 secs] 803279K->451797K(2059008K), 0.2360279 secs] [Times: user=0.43 sys=0.00, real=0.24 secs]
    891.356: [GC 891.360: [ParNew: 369304K->38144K(381696K), 0.4481697 secs] 797013K->480414K(2059008K), 0.4526714 secs] [Times: user=0.77 sys=0.00, real=0.45 secs]
    1181.876: [GC 1181.880: [ParNew: 381696K->38144K(381696K), 0.4403585 secs] 825160K->494576K(2059008K), 0.4451872 secs] [Times: user=0.72 sys=0.01, real=0.45 secs]
    1536.174: [GC 1536.178: [ParNew: 381696K->38144K(381696K), 0.6600226 secs] 838443K->525966K(2059008K), 0.6659761 secs] [Times: user=1.04 sys=0.01, real=0.67 secs]
    1557.349: [GC 1557.353: [ParNew: 381696K->38144K(381696K), 1.5205645 secs] 869518K->625864K(2059008K), 1.5256733 secs] [Times: user=2.49 sys=0.01, real=1.53 secs]
    1651.096: [GC 1651.101: [ParNew: 381694K->38144K(381696K), 0.5561227 secs] 969771K->660443K(2059008K), 0.5617626 secs] [Times: user=0.95 sys=0.00, real=0.56 secs]
    2048.550: [GC 2048.554: [ParNew: 381696K->33278K(381696K), 0.4621651 secs] 1007300K->690505K(2059008K), 0.4674345 secs] [Times: user=0.78 sys=0.01, real=0.47 secs]
    2077.817: [GC 2077.839: [ParNew: 376830K->38144K(381696K), 0.5596568 secs] 1035383K->718151K(2059008K), 0.5647648 secs] [Times: user=0.96 sys=0.00, real=0.59 secs]
    2233.875: [GC 2233.884: [ParNew: 381696K->38144K(381696K), 0.3730432 secs] 1062088K->728723K(2059008K), 0.3840431 secs] [Times: user=0.67 sys=0.01, real=0.39 secs]
    2248.178: [GC 2248.184: [ParNew: 381696K->38144K(381696K), 0.3272163 secs] 1072434K->740746K(2059008K), 0.3346392 secs] [Times: user=0.60 sys=0.01, real=0.34 secs]
    5698.207: [GC 5698.211: [ParNew: 381696K->26419K(381696K), 0.4125838 secs] 1084312K->747814K(2059008K), 0.4175661 secs] [Times: user=0.67 sys=0.00, real=0.42 secs]
    9501.747: [GC 9501.751: [ParNew: 369971K->35314K(381696K), 0.2025741 secs] 1091366K->756710K(2059008K), 0.2084589 secs] [Times: user=0.34 sys=0.00, real=0.21 secs]
    15270.427: [GC 15270.432: [ParNew: 378866K->34210K(381696K), 0.2035446 secs] 1100262K->755606K(2059008K), 0.2093321 secs] [Times: user=0.32 sys=0.01, real=0.21 secs]
    Thanks,
    Pradeep.
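    For what it's worth, the stop-the-world time in each of these log lines is the `real=` figure in the `[Times: ...]` block. A quick, hedged way to total those pauses is a small regex pass over the log; the class name and sample line below are illustrative, not part of the original posts:

    ```java
    import java.util.Locale;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class GcPauseSummary {
        // Matches the wall-clock pause in entries like "[Times: user=0.67 sys=0.00, real=0.42 secs]"
        private static final Pattern REAL = Pattern.compile("real=([0-9.]+) secs?");

        /** Sums every "real=X secs" pause found in the given log text. */
        static double totalRealPauseSecs(String log) {
            Matcher m = REAL.matcher(log);
            double total = 0.0;
            while (m.find()) {
                total += Double.parseDouble(m.group(1));
            }
            return total;
        }

        public static void main(String[] args) {
            String sample =
                "649.690: [GC 649.694: [ParNew: 375098K->26102K(381696K), 0.2501056 secs] "
              + "781384K->432388K(2059008K), 0.2543750 secs] [Times: user=0.43 sys=0.01, real=0.26 secs]\n"
              + "657.708: [GC 657.712: [ParNew: 369654K->38144K(381696K), 0.2855292 secs] "
              + "776086K->459228K(2059008K), 0.2899376 secs] [Times: user=0.51 sys=0.01, real=0.29 secs]";
            // 0.26 + 0.29 pauses in the two sample lines
            System.out.printf(Locale.US, "total STW pause: %.2f s%n", totalRealPauseSecs(sample));
            // prints "total STW pause: 0.55 s"
        }
    }
    ```

    In practice you would feed it the whole GC log file; if the per-pause totals climb, more heap for the young generation (or a different collector) is usually the next thing to look at.
    
    
    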

  • PDPageDrawContentsToWindowEx takes too much time

    We are using an Acrobat plugin that renders the PDF file to a bitmap in memory.
    We are using Acrobat Professional X. But same problems also appear on Acrobat 9.
    We have received several problematic PDF files from our customers that cause the rendering call,
    PDPageDrawContentsToWindowEx(), to take an unreasonably long time.
    My target resolution is 600 dpi, but I couldn't wait for the call to return; after more than 6 minutes I killed the process.
    The same PDFs render in Acrobat with slight delay (flickering and repainting) but in reasonable time.
    I have written about this problem on a previous occasion (Aug 5 2010). Since then, further problematic samples
    suggest it is linked somehow with transparency being present, but not on all PDFs with transparency.
    We have a fast computer, so the problem is somewhere in the PDF analysis.
    Trying to optimize the file didn't help.
    Checking with Preflight for PDF syntax issues also didn't find anything.

    I'll have to check the headers, but I KNOW that we exposed it to plugins in A9; it was necessary since we pulled the DrawToWindow call on the Mac (carbon vs. cocoa).
    What you are asking the SDK to do is going to be painful on large complex images.  Drawing into an HDC/Window adds SIGNIFICANT overhead to our rendering process, since we have to do all the work in our own "bit buffer" and then copy that buffer into the OS-provided HDC.  OUCH!  This is why DrawToMemory is better.
    Additionally, if you have files with complex transparency AND you want OverprintPreview, that's also going to be a VERY complex rendering pipeline, made WORSE by the need to end up in an HDC.  Forgetting separations for the moment, consider that we have to convert everything to CMYK (since you can only compute OP in CMYK), blend colors, then convert all of that to RGB.  And that's assuming SIMPLE transparency.  If you have multiple blending groups, soft masks, etc., then you just raised the bar even more!
    How long does it take Acrobat to open up the PDF and render it completely to screen?  For separations, how long does it take to do a "Flatten Transparency" operation?
    UpdateRect won't help because of the OP and then transparency flattening.
    From: Adobe Forums, Mon, 5 Dec 2011 06:45:46 -0800
    Re: PDPageDrawContentsToWindowEx takes too much time, created by Nikolay Tasev in Acrobat SDK.
    View the full discussion: http://forums.adobe.com/message/4064198#4064198

  • STAD report generation takes too much time

    Dear Experts,
    In our production system, executing the STAD report over a 10-minute interval to get the business transaction data for a single user or multiple users takes too much time, up to 500 seconds, and sometimes throws a dump due to timeout.
    We have had this issue from the beginning.
    What checks need to be done, and how can we fix the issue?
    Is it possible to schedule the report in the background to get the output, and if so, how?
    System Info:
    ECC6.0 EHP4, Kernel Release 701 Support Pck - 55.
    Windows MSCS Clustering.
    SAPOSCOL on both instance.
    Thanks,
    Jai

    Hi,
    If the load of your system is very high, 10 minutes is much too long for STAD.
    The solution that I use is to display shorter intervals (maximum 3 minutes in my most loaded system).
    As rdisp/max_wprun_time is a dynamic parameter, a possible workaround is to increase its value during the time necessary to run STAD.
    Regards,
    Olivier

Maybe you are looking for

  • Quite interesting situation....

    Hey guys, so I have an older LGA 775 Intel processor for my desktop and I am having a little situation: Modern computer monitors do not work. I have tried using a workstation graphics card (Quadro 600) and I have tried using a consumer graphics card

  • Host Agent Service critical - Paging file is too small for this operation to complete

    Hi all, at the moment I have the problem that some of my hosts get in the errorstate critical in VMM 2012 SP1. The error details in VMM are: Error (2912) An internal error has occurred trying to contact the hostname server: : . WinRM: URL: [http://ho

  • FM or BAPI for transaction F871

    Hello! There are any Function Module or BAPI for transaction F871? Thanks a lot!

  • What the hell is going on with flash lately!?!?

    I'm using firefox on Windows 8 x64 and even after the updates i can't watch a video without going nuts! IMDB trailers no longer work without freezing my browsers, youtube videos won't allow to change quality without freezing or stutering as hell, so

  • Loop through layers and determine if its a text containing layer

    Hi, I am trying to loop/iterate through all layers, determine if it's a "text-layer", if it is, set it's opacity to 0. This is what I have so far: var iapp = new Illustrator.Application(); var openoptions = new Illustrator.OpenOptions(); var idoc = i