Poor performance of my Z60M, found disabling AcSvc.exe improves performance

I have a Z60M running XP, and performance is poor because the processor is operating at nearly 100%. I found that the process AcSvc.exe was using most of the CPU. I ended the process and have an operable machine again. I downloaded the latest version of AcSvc, but that did not correct the problem.
I am tempted to disable all of the ThinkPad software and go to a plain XP system, but I have not looked into that yet.
Has anyone experienced these problems, or disabled the ThinkPad software?

Tables/Indexes last analyzed today.
I figured out that the Oracle Text indexes were syncing too frequently, and that was causing the problem. I disabled all the jobs that kicked off those index syncs and my CPU utilization dropped to almost 0%. I will work on tuning the sync interval and re-enabling the indexes for my dynamic data sources (see the sketch below).
Thank you for everyone's suggestions!
Mimi
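
For anyone hitting the same symptom, the sync interval mentioned above can often be lengthened instead of disabling the sync jobs outright. Below is a minimal JDBC sketch of that idea; the index name MY_TEXT_IDX, the connection details, and the hourly interval are illustrative assumptions rather than details from Mimi's system, and the ALTER INDEX ... REPLACE METADATA SYNC syntax should be checked against your Oracle Text version before running it anywhere that matters.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class TuneTextIndexSync {
        public static void main(String[] args) throws Exception {
            // Illustrative connection details; requires the Oracle JDBC driver on the classpath.
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
            try (Connection con = DriverManager.getConnection(url, "app_user", "app_password");
                 Statement stmt = con.createStatement()) {
                // Change the Oracle Text sync setting on a hypothetical index MY_TEXT_IDX so it
                // syncs once an hour (SYSDATE + 1/24) instead of on every job run.
                stmt.execute(
                    "ALTER INDEX my_text_idx REBUILD PARAMETERS "
                    + "('REPLACE METADATA SYNC (EVERY \"SYSDATE+1/24\")')");
            }
        }
    }

The trade-off is that new or changed rows only become searchable at the next sync, so the interval has to match how fresh the dynamic data sources need to be.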

Similar Messages

  • Upgraded both computers in the household and found Lion is too disruptive to workflow.  Do I turn in the new laptop for a pre-Lion rebuild to keep Snow Leopard, or do I return computer and get upgraded memory to improve performance of existing MacBookPro?

    Upgraded both computers in the household and found Lion is too disruptive to workflow. 
    Do I turn in the new laptop for a pre-Lion rebuild to keep Snow Leopard, or do I return the new computer and get upgraded memory to improve the performance of the existing MacBookPro? I'm mostly still happy with the existing MacBookPro, but Aperture doesn't work; the computer can't handle it.
    Another possibility is setting up a virtual machine with Snow Leopard Server software on the new computer.
    Any opinions on what would allow moving forward with the least hassle and best workflow continuity?

    Hi,
    What year and specs does the MBP have?

  • A64 Tweaker and Improving Performance

    I noticed a little utility called "A64 Tweaker" being mentioned in an increasing number of posts, so I decided to track down a copy and try it out...basically, it's a memory tweaking tool, and it actually is possible to get a decent (though not earth-shattering by any means) performance boost with it.  It also lacks any real documentation as far as I can find, so I decided to make a guide type thing to help out users who would otherwise just not bother with it.
    Anyways, first things first, you can get a copy of A64 Tweaker here:  http://www.akiba-pc.com/download.php?view.40
    Now that that's out of the way, I'll walk through all of the important settings, minus Tcl, Tras, Trcd, and Trp, as these are the typical RAM settings that everyone is always referring to when they go "CL2.5-3-3-7", so information on them is widely available, and everyone knows that for these settings, lower always = better.  Note that for each setting, I will list the measured change in my SiSoft Sandra memory bandwidth score over the default setting.  If a setting produces a change of < 10 MB/sec, its effects will be listed as "negligible" (though note that it still adds up, and a setting that has a negligible impact on throughput may still have an important impact on memory latency, which is just as important).  As for the rest of the settings (I'll do the important things on the left hand side first, then the things on the right hand side...the things at the bottom are HTT settings that I'm not going to muck with):
    Tref - I found this setting to have the largest impact on performance out of all the available settings.  In a nutshell, this setting controls how your RAM refreshes are timed...basically, RAM can be thought of as a vast series of leaky buckets (except in the case of RAM, the buckets hold electrons and not water), where a bucket filled beyond a certain point registers as a '1' while a bucket with less than that registers as a '0', so in order for a '1' bucket to stay a '1', it must be periodically refilled (i.e. "refreshed").  The way I understand this setting, the frequency (100 MHz, 133 MHz, etc.) controls how often the refreshes happen, while the time parameter (3.9 microsecs, 1.95 microsecs, etc.) controls how long the refresh cycle lasts (i.e. how long new electrons are pumped into the buckets).  This is important because while the RAM is being refreshed, other requests must wait.  Therefore, intuitively it would seem that what we want are short, infrequent refreshes (the 100 MHz, 1.95 microsec option).  Experimentation almost confirms this, as my sweet spot was 133 MHz, 1.95 microsecs...I don't know why I had better performance with this setting, but I did.  Benchmark change from default setting of 166 MHz, 3.9 microsecs: + 50 MB/sec
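    As a rough sanity check on the refresh trade-off described above (a hedged sketch using the generic DRAM refresh model; how A64 Tweaker's two numbers map onto these quantities is an assumption):
    \[
      \text{refresh overhead} \;\approx\; \frac{t_{\text{refresh}}}{t_{\text{interval}}}
    \]
    i.e. the fraction of time the RAM is unavailable is the length of one refresh divided by the time between refreshes. With illustrative numbers, a 120 ns refresh every 3.9 microsecs costs roughly 3% of the time, while the same refresh every 1.95 microsecs costs roughly 6%, which is why the "short, infrequent" intuition usually holds even when an individual board (like this one) happens to benchmark better at another combination.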
    Trfc - This setting offered the next largest improvement...I'm not sure exactly what this setting controls, but it is doubtless similar to the above setting.  Again, lower would seem to be better, but although I was stable down to '12' for the setting, the sweet spot here for my RAM was '18'.  Selecting '10' caused a spontaneous reboot.  Benchmark change from the default setting of 24:  +50 MB/sec
    Trtw - This setting specifies how long the system must wait after it reads a value before it tries to overwrite the value.  This is necessary due to various technical aspects related to the fact that we run superscalar, multiple-issue CPUs that I don't feel like getting into, but basically, smaller numbers are better here.  I was stable at '2'; selecting '1' resulted in a spontaneous reboot.  Benchmark change from default setting of 4:  +10 MB/sec
    Twr - This specifies how much delay is applied after a write occurs before the new information can be accessed.  Again, lower is better.  I could run as low as 2, but didn't see a huge change in benchmark scores as a result.  It is also not too likely that this setting affects memory latency in an appreciable way.  Benchmark change from default setting of 3:  negligible
    Trrd - This controls the delay between a row address strobe (RAS) and a second row address strobe.  Basically, think of memory as a two-dimensional grid...to access a location in a grid, you need both a row and column number.  The way memory accesses work is that the system first asserts the column that it wants (the column address strobe, or CAS), and then asserts the row that it wants (row address strobe).  Because of a number of factors (prefetching, block addressing, the way data gets laid out in memory), the system will often access multiple rows from the same column at once to improve performance (so you get one CAS, followed by several RAS strobes).  I was able to run stably with a setting of 1 for this value, although I didn't get an appreciable increase in throughput.  It is likely however that this setting has a significant impact on latency.  Benchmark change from default setting of 2:  negligible
    Trc - I'm not completely sure what this setting controls, although I found it had very little impact on my benchmark score regardless of what values I specified.  I would assume that lower is better, and I was stable down to 8 (lower than this caused a spontaneous reboot), and I was also stable at the max possible setting.  It is possible that this setting has an effect on memory latency even though it doesn't seem to impact throughput.  Benchmark change from default setting of 12:  negligible
    Dynamic Idle Cycle Counter - I'm not sure what this is, and although it sounds like a good thing, I actually post a better score when running with it disabled.  No impact on stability either way.  Benchmark change from default setting of enabled:  +10 MB/sec
    Idle Cycle Limit - Again, not sure exactly what this is, but testing showed that both extremely high and extremely low settings degrade performance by about 20 MB/sec.  Values in the middle offer the best performance.  I settled on 32 clks as my optimal setting, although the difference was fairly minimal over the default setting.  This setting had no impact on stability.  Benchmark change from default setting of 16 clks:  negligible
    Read Preamble - As I understand it, this is basically how much of a "grace period" is given to the RAM when a read is asserted before the results are expected.  As such, lower values should offer better performance.  I was stable down to 3.5 ns, lower than that and I would get freezes/crashes.  This did not change my benchmark scores much, though in theory it should have a significant impact on latency.  Benchmark change from default setting of 6.0 ns:  negligible
    Read Write Queue Bypass - Not sure what it does, although there are slight performance increases as the value gets higher.  I was stable at 16x, though the change over the 8x default was small.  It is possible, though I think unlikely, that this improves latency as well.  Benchmark change from default setting of 8x:  negligible
    Bypass Max - Again not sure what this does, but as with the above setting, higher values perform slightly better.  Again I feel that it is possible, though not likely, that this improves latency as well.  I was stable at the max of 7x.  Benchmark change from the default setting of 4x:  negligible
    Asynch latency - A complete mystery.  Trying to run *any* setting other than default results in a spontaneous reboot for me.  No idea how it affects anything, though presumably lower would be better, if you can select lower values without crashing.
    ...and there you have it.  With the tweaks mentioned above, I was able to gain +160 MB/sec on my Sandra score, +50 on my PCMark score, and +400 on my 3DMark 2001 SE score.  Like I said, not earth-shattering, but a solid performance boost, and it's free to boot.  Settings that I felt were of no use in tweaking the RAM for added performance, or which are self-explanatory, have been left out.  The above tests were performed on Corsair XMS PC4000 RAM @ 264 MHz, CL2.5-3-4-6 1T.

    Quote
    Hm...I wonder which one is telling the truth, the BIOS or A64 tweaker.
    I've wondered this myself.  From my understanding it's the next logical step from the WCREDIT programs.  I understand how a clock gen can misreport frequency, because it's probably not measuring frequency itself but rather computing a mathematical representation from a few numbers it has gathered and one clock frequency (HTT maybe?), and the unsupported dividers mess up the math...but I think the tweaker just extracts hex values straight from the registers and displays them in "English".  I mean, it could be wrong, but seeing how I watch the BIOS on the SLI Plat change the memory timings in the POST screen to values other than SPD when it's on Auto with aggressive timings disabled, I actually want to side with the A64 tweaker in this case.
    Hey, does anyone know what Tref in A64 relates to in the BIOS?  i.e. 200 1.95us = what in the BIOS?  1x4028, 1x4000...I'm just making up numbers here, but it's different than 200 1.95.  Last time I searched I didn't find anything.  Well, I found a lot, but not what I wanted...

  • Brush Lag? - Here are OpenGL settings that improve performance

    For anyone having horrible Brush lag, please try the following settings and report your findings in this thread.
    After much configuring and comparing CS4 vs CS3, I found that these settings do improve CS4's brush lag significantly. CS3 is still faster, but these settings made CS4 brush strokes a lot more responsive.
    Please try these settings and share your experiences.
    NOTE: these do not improve clone tool performance; the best way to improve clone performance right now seems to be to turn off the clone tool's overlay feature.
    Perhaps Adam or Chris from Adobe could explain what is happening here. The most significant option for improving performance appears to be "Use for Image Display - OFF". I have no idea what this feature does or does not do, but it seems to be the biggest performance hit. The next most influential setting seems to be "3D Interaction Acceleration - OFF".
    Set the following settings in Photoshop CS4 Preferences:
    OpenGL - ON
    Vsync - OFF
    3D Interaction Acceleration - OFF
    Force Bilinear Interpolation - OFF
    Advanced Drawing - ON
    Use for Image Display - OFF
    Color Matching - ON

    Hi guys,
    As I am having very few problems with my system, I thought I should post my specs and settings for comparison.
    System - Asus p5Q deluxe,
    Intel quad 9650 3ghz,
    16gb pc6400 800Mhz ram,
    loads of drives ( system drive on 10K 74gb Raptor, Vista partition on fast 500gb drive, Ps scratch on an another fast 500gb drive, the rest are storage/bk-ups ),
    Gainward 8800GTS 640mb GPU,
    30ins monitor @2560x1600 and a 24ins 1920x1200,
    Wacom tablet.
    PS CS4 x64bit
    Vista x64
    All latest drivers
    No antivirus. Indexing and Superfetch are ON, Defender is ON
    No internet connection except for updates
    No faffing around with vista processes
    Wacom Virtual HID and Wacom Mouse Monitor are disabled
    nVidia GPU set to default settings
    On this system I am able to produce massive images; the last major one was 150x600 cm @ 200 ppi, and the brushes are smooth until they get to around 700+ pixels, at which point there is a slight lag of around 1 second if I draw fast. If I take it slow, there's no lag. All UI is snappy.
    I have the following settings in Photoshop CS4 Preferences:
    Actual Ram: 14710MB
    Set to use (87%) :12790MB
    Scratch Disk
    on a separate fast 500gb - to become a 80gb ssd soon
    History state: 50
    Cache: 8
    OpenGL - ON
    Vsync - OFF
    3D Interaction Acceleration - OFF
    Force Bilinear Interpolation - OFF
    Advanced Drawing - ON
    Use for Image Display - ON
    Color Matching - ON
    I hope this helps in some way too...
    EDIT: I should also add that I defrag all my drives with third-party defrag software every night because of the large image files.

  • How to improve performance of MediaPlayer?

    I tried to use the MediaPlayer with an On2 VP6 FLV movie.
    Showing a video with a resolution of 1024x768 works.
    Showing a video with a resolution of 1280x720 and an average bitrate of 1700 kb/s leads to the video lagging a couple of seconds behind the audio. VLC, Media Player Classic, and a couple of other players have no problem with the video; only the FX MediaPlayer shows poor performance.
    Additionally, mouse events in a second stage (the first stage is used for the video) are not processed in 2 of 3 cases. If the MediaPlayer is switched off, the mouse events work reliably.
    Does somebody know a solution for these problems?
    Cheers
    masim

    Duplicate thread: How to improve performance of attached query

  • I have a MAC Pro from 2011 currently running MAC OS 10.9.5.  This weekend I cloned the MAC HD drive to a new SSD drive for improved performance.  The clone was completed successfully with no errors.  After the clone completed I successfully restarted my sy

    I have a MAC Pro from 2011 currently running MAC OS 10.9.5. This weekend I cloned the MAC HD drive to a new SSD drive for improved performance. The clone completed successfully with no errors. After the clone completed, I successfully restarted my system using the SSD as the boot device. I then successfully tested all of my products, including Photoshop CS6 and all of its plug-ins, and the key features that I frequently use. Today, while attempting to launch Photoshop CS6, a message is displayed indicating that a scratch disk cannot be found. All drives are available on the system via the Finder and Disk Utility. I can access all drives, including the old MAC HD, which is no longer the boot device. I've even attempted to launch Photoshop from the old device, yet the same error persists. Is there a way to review/edit/change Photoshop preferences if Photoshop doesn't launch? I've restarted my system several times to see if that would resolve the issue. Does anyone have any recommendations? Have you previously addressed this issue?
    Thank you, Gregg Williams

    Boilerplate text:
    Reset Preferences
    http://forums.adobe.com/thread/375776
    1) Close the program and press Ctrl+Alt+Shift/Cmd+Option+Shift during startup (not reversible)
    or
    2) Move the Folder. See:
    http://www.bugge.com/Family-and-friends/Illy/illy.html
    --OB

  • BIA to improve performance for BPS Applications

    Hi All,
    Is it possible to improve the performance of BPS applications using BIA? We are currently running applications on BI-BPS which, because of a huge range of periods, are having performance problems.
    Could you please share whether BIA would help with the read and write operations of BPS, and to what extent performance can be increased?
    An early reply would be appreciated, as the system is in really bad shape and users are grappling with poor performance.
    Rgds,
    Rajeev

    Hi Rajeev,
    If the performance issue you are facing is while running queries on the real-time (transactional) InfoCube used in BPS, then BIA can help. The closed requests from the real-time cube can be indexed in BIA. At query runtime, the analytic engine reads data from the database for the open request and from BIA for the closed, indexed requests. It combines this data with the plan buffer cache and produces the result.
    Hence, if you are facing an issue with query response time, BIA will definitely help.
    Regards,
    Praveen

  • Improving performance of zreport

    Hi,
    I have a requirement to improve the performance of a Z report.
    (1) Apart from nested loops, SELECTs within loops, and inner joins, are there any other changes I have to consider?
    (2) There are 4 nested loops, each nested only one level deep. Is it mandatory to avoid nested loops that are only one level deep?
    (3) How do I avoid SELECT ... ENDSELECT within LOOP ... ENDLOOP?
    (4) The outdated FM WS_DOWNLOAD is called to download data into Excel. Which FM should I choose to replace it?
    I will reward if it is useful.

    Hi,
    Downloading Internal Tables to Excel
    Often we face situations where we need to download internal table contents to an Excel sheet. We are familiar with the function module WS_DOWNLOAD. Though this function module downloads the contents to an Excel sheet, there cannot be any column headings, and we cannot pick out the primary key columns just by looking at the sheet.
    For this purpose, we can use the function module XXL_FULL_API. The Excel sheet generated by this function module contains the column headings, and the key columns are highlighted in a different color. Other options available with this function module include swapping two columns or suppressing a field from the Excel sheet. Simple code showing the usage of this function module is given below.
    Program code:
    REPORT excel.

    TABLES:
      sflight.

    " Header data
    DATA:
      header1 LIKE gxxlt_p-text VALUE 'Suresh',
      header2 LIKE gxxlt_p-text VALUE 'Excel sheet'.

    " Internal table for holding the SFLIGHT data
    DATA BEGIN OF t_sflight OCCURS 0.
            INCLUDE STRUCTURE sflight.
    DATA END OF t_sflight.

    " Internal table for holding the horizontal key
    DATA BEGIN OF t_hkey OCCURS 0.
            INCLUDE STRUCTURE gxxlt_h.
    DATA END OF t_hkey.

    " Internal table for holding the vertical key
    DATA BEGIN OF t_vkey OCCURS 0.
            INCLUDE STRUCTURE gxxlt_v.
    DATA END OF t_vkey.

    " Internal table for holding the online text
    DATA BEGIN OF t_online OCCURS 0.
            INCLUDE STRUCTURE gxxlt_o.
    DATA END OF t_online.

    " Internal table to hold print text
    DATA BEGIN OF t_print OCCURS 0.
            INCLUDE STRUCTURE gxxlt_p.
    DATA END OF t_print.

    " Internal table to hold SEMA data
    DATA BEGIN OF t_sema OCCURS 0.
            INCLUDE STRUCTURE gxxlt_s.
    DATA END OF t_sema.

    " Retrieve data from SFLIGHT
    SELECT * FROM sflight
             INTO TABLE t_sflight.

    " Text which will be displayed online
    t_online-line_no    = '1'.
    t_online-info_name  = 'Created by'.
    t_online-info_value = 'KODANDARAMI REDDY.S'.
    APPEND t_online.

    " Text which will be printed out
    t_print-hf      = 'H'.
    t_print-lcr     = 'L'.
    t_print-line_no = '1'.
    t_print-text    = 'This is the header'.
    APPEND t_print.
    t_print-hf      = 'F'.
    t_print-lcr     = 'C'.
    t_print-line_no = '1'.
    t_print-text    = 'This is the footer'.
    APPEND t_print.

    " Define the vertical key columns
    t_vkey-col_no   = '1'.
    t_vkey-col_name = 'MANDT'.
    APPEND t_vkey.
    t_vkey-col_no   = '2'.
    t_vkey-col_name = 'CARRID'.
    APPEND t_vkey.
    t_vkey-col_no   = '3'.
    t_vkey-col_name = 'CONNID'.
    APPEND t_vkey.
    t_vkey-col_no   = '4'.
    t_vkey-col_name = 'FLDATE'.
    APPEND t_vkey.

    " Header text for the data columns
    t_hkey-row_no   = '1'.
    t_hkey-col_no   = 1.
    t_hkey-col_name = 'PRICE'.
    APPEND t_hkey.
    t_hkey-col_no   = 2.
    t_hkey-col_name = 'CURRENCY'.
    APPEND t_hkey.
    t_hkey-col_no   = 3.
    t_hkey-col_name = 'PLANETYPE'.
    APPEND t_hkey.
    t_hkey-col_no   = 4.
    t_hkey-col_name = 'SEATSMAX'.
    APPEND t_hkey.
    t_hkey-col_no   = 5.
    t_hkey-col_name = 'SEATSOCC'.
    APPEND t_hkey.
    t_hkey-col_no   = 6.
    t_hkey-col_name = 'PAYMENTSUM'.
    APPEND t_hkey.

    " Populate the SEMA data (column types and operations)
    t_sema-col_no  = 1.
    t_sema-col_typ = 'STR'.
    t_sema-col_ops = 'DFT'.
    APPEND t_sema.
    t_sema-col_no = 2.
    APPEND t_sema.
    t_sema-col_no = 3.
    APPEND t_sema.
    t_sema-col_no = 4.
    APPEND t_sema.
    t_sema-col_no = 5.
    APPEND t_sema.
    t_sema-col_no = 6.
    APPEND t_sema.
    t_sema-col_no = 7.
    APPEND t_sema.
    t_sema-col_no = 8.
    APPEND t_sema.
    t_sema-col_no = 9.
    APPEND t_sema.
    t_sema-col_no  = 10.
    t_sema-col_typ = 'NUM'.
    t_sema-col_ops = 'ADD'.
    APPEND t_sema.

    CALL FUNCTION 'XXL_FULL_API'
      EXPORTING
    "   data_ending_at    = 54          " optional range restriction, left disabled
    "   data_starting_at  = 5           " optional range restriction, left disabled
        filename          = 'TESTFILE'
        header_1          = header1
        header_2          = header2
        no_dialog         = 'X'
        no_start          = ' '
        n_att_cols        = 6
        n_hrz_keys        = 1
        n_vrt_keys        = 4
        sema_type         = 'X'
    "   so_title          = ' '         " optional, left disabled
      TABLES
        data              = t_sflight
        hkey              = t_hkey
        online_text       = t_online
        print_text        = t_print
        sema              = t_sema
        vkey              = t_vkey
      EXCEPTIONS
        cancelled_by_user = 1
        data_too_big      = 2
        dim_mismatch_data = 3
        dim_mismatch_sema = 4
        dim_mismatch_vkey = 5
        error_in_hkey     = 6
        error_in_sema     = 7
        file_open_error   = 8
        file_write_error  = 9
        inv_data_range    = 10
        inv_winsys        = 11
        inv_xxl           = 12
        OTHERS            = 13.

    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    reward if helpful
    raam

  • Improving performance of query with View

    Hi ,
    I'm working on a stored procedure where certain records have to be eliminated; unfortunately, the tables involved in this exception query are in a different database, which will lead to a performance issue. Is there any way in SQL Server to store this query in a view, store its execution plan, and make it work like an SP? I believe it's kind of a crazy thought, but is there a better way to improve the performance of a query that accesses tables across databases?
    Thanks,
    Vishal.

    Do not try to solve problems that you have not yet confirmed to exist. There is no general reason why a query (regardless of whether it involves a view) that refers to a table in a different database (NB - DATABASE, not INSTANCE) will perform poorly.
    As a suggestion, write a working query using a duplicate of the table in the current database. Once it is working, then worry about performance. Once it is working as efficiently as it can, change the query to use the "remote" table rather than the duplicate. Then determine whether you have an issue. If you cannot get the level of performance you desire with a local table, then you most likely have a much larger issue to address. In that case, perhaps you need to change your perspective and approach to accomplishing your goal.

  • Improving performance when using LineStripArray?

    I'm rendering approximately 680 LineStripArrays to represent an airport on a situation display. I read the data from a DXF file and I imagine I can strip that down by removing parts of the airport I don't want to show.
    However, performance is poor - I'm probably hitting 50fps when I'm barely rendering anything. Apart from not displaying some LineStripArrays, what can I do to improve performance? Should I merge some LineStripArrays? Is there another geometry class I should use?
    My code is:
                   // This array will hold the vertices.
                   float[] vertices = new float[count * 3];
                   // Create the colour.
                   Color3f colour = new Color3f(1.0f, 1.0f, 1.0f);
                   // Iterate over all vertices of the polyline.
                   for (int i = 0; i < count; i++) {
                        vertices[(i * 3) + 0] = (float) vertex.getX();
                        vertices[(i * 3) + 1] = (float) vertex.getY();
                        vertices[(i * 3) + 2] = (float) vertex.getZ();
                   }
    And then:
    layerData = new LineStripArray(count, LineArray.COORDINATES, strip_counts);
    layerData.setCoordinates(0, vertices);
    Thanks
    Edited by: BobCrivens on Sep 18, 2008 7:39 AM
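    On the "should I merge some LineStripArrays?" question: collapsing the ~680 strips into a single LineStripArray (one Shape3D instead of hundreds) is usually the first thing to try, since every geometry carries per-object overhead. Here is a minimal sketch of that idea; it assumes the DXF polylines have already been read into lists of Point3f, and the names (AirportGeometry, buildMergedStrips, polylines) are illustrative rather than taken from the original code.

    import java.util.List;
    import javax.media.j3d.GeometryArray;
    import javax.media.j3d.LineStripArray;
    import javax.media.j3d.Shape3D;
    import javax.vecmath.Point3f;

    public class AirportGeometry {

        /**
         * Merges many polylines into a single LineStripArray so the scene graph
         * holds one Shape3D instead of hundreds. Each inner list is one polyline.
         */
        public static Shape3D buildMergedStrips(List<List<Point3f>> polylines) {
            int totalVertices = 0;
            int[] stripCounts = new int[polylines.size()];
            for (int s = 0; s < polylines.size(); s++) {
                stripCounts[s] = polylines.get(s).size();
                totalVertices += stripCounts[s];
            }

            // One geometry holding all strips; stripCounts tells Java 3D where each strip ends.
            LineStripArray merged = new LineStripArray(totalVertices,
                                                       GeometryArray.COORDINATES,
                                                       stripCounts);

            int index = 0;
            for (List<Point3f> polyline : polylines) {
                for (Point3f p : polyline) {
                    merged.setCoordinate(index++, p);
                }
            }
            return new Shape3D(merged);
        }
    }

    Adding the resulting Shape3D to a BranchGroup and calling compile() on that group before attaching it to the scene also gives Java 3D a chance to optimize the merged geometry further.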

    Yes, it will cause performance issues.  Whether you notice it or not may be a different story.
    LabVIEW drawing engine starts at the bottom layer and works its way up.  So, it has to redraw the image and then redraw the control when you update the control/indicator.
    It's been a while since I benchmarked this on a project, but in LabVIEW 6.1, I looked into why my tests ran so slow, and saw a 10-15% decrease in test time by removing the background decorations I used to make the window pretty.  If I didn't show the GUI feedback for the test at all (no GUI windows for each test), I saw a 30% decrease in test time.
    You will also find that better video cards will have a positive effect on this, as they redraw the screen faster.  In the same benchmark, I was able to outperform the early PXI controllers with a slower PC because NI was using a lower end video chip for their onboard graphics.

  • Improve performance on my Power PC iMac G5

    I notice my mac doesn't seem as speedy as it once was. I have had this computer for about 5 or 6 years and have filled it up with tons of crap over those years.
    My hard drive still have about 50 gig free.
    I have researched how to improve mac performance but haven't found any valuable info.
    I'm considering reformatting so I can get rid of any unneeded software and files.
    My questions are is this worth it? Once reformatted can I simply drag my applications from my backup hard drive to my computer and they will magically work again?
    I really want to save everything about my iTunes; applications, playlists, purchased music, and I even want my play counts. Is all that possible?
    Thanks for any help on my journey towards optimization.

    Look at these links
    Mac Tune-up: 34 Software Speedups
    http://www.macworld.com/article/49489/2006/02/softwarespeed.html
    52 Ways to Speed Up OS X
    http://www.imafish.co.uk/articles/post/articles/130/52-ways-to-speed-up os-x/
    Tuning Mac OS X Performance
    http://www.thexlab.com/faqs/performance.html
    11 Ways to Optimize Your Mac's Performance
    http://lowendmac.com/eubanks/07/0312.html
    The Top 7 Free Utilities To Maintain A Mac.
    http://mac360.com/index.php/mac360/comments/thetop_7_free_utilities_to_maintain_amac/
    Mac OS X: System maintenance
    http://discussions.apple.com/thread.jspa?messageID=607640
     Cheers, Tom

  • FI-CA events to improve performance

    Hello experts,
    Does anybody use the FI-CA events to improve the extraction performance for datasources 0FC_OP_01 and 0FC_CI_01 (open and cleared items)?
    It seems that these specific exits associated with BW events have been developed especially to improve performance.
    Any documentation or guide would be appreciated.
    Thanks.
    Thibaud.

    Thanks to all for the replies
    @Sybrand
    Please answer first whether the column is stored in a separate lobsegment.
    No. The table, index, LOB, and LOB index use the same tablespace. I missed adding this point (moving to a separate tablespace) as part of the table modifications.
    @Hemant
    There's a famous paper / blog post about CLOBs and Database Flashback. If I find it, I'll post the URL.
    Is this the one you are referring to
    http://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
    By moving the CLOB column to a different block size, I will test the performance improvement it gives and will share the results.
    We don't need any data from this table. The XML file contains details about fingerprints, and once the application server completes the job, the XML data is deleted from this table.
    So there is no need for backup/recovery operations on this table. The client will be able to replay the transactions if any problem occurs.
    @Billy
    We are not performing XML parsing on the DB side. We get the XML data from the client -> insert into the table -> the client selects from the table -> upon successful completion of the job on the client side, the XML data gets deleted.
    Regarding binding of the LOB from the client side, I will check on that as well to reduce round trips.
    By changing the block size, I can set db_32k_cache_size=2G and keep this table in the CACHE. If I directly put my table in the CACHE, it will age out all other operations from the buffer, which makes things worse for us.
    This insert is part of a transaction (registration of a fingerprint), and it is the only statement taking time at the moment compared to the other statements in the transaction.
    Thanks,
    Arun

  • How to preload sound into memory to improve performance?

    Hello all
    I have an application where it needs to play 4 different short wave files on some events. The wave files are small (less than 1 sec each), so they can be preloaded into memory, but I don't really know how to do that. This is my current code... Performance is really important here, so the faster users can hear the sounds, the better...
     import java.io.*;
     import javax.sound.sampled.*;
     import javax.swing.*;
     import java.awt.event.*;

     public class PlaySound implements ActionListener {

         private Clip clip = null;

         public void play(String name) {
             if (clip != null) {
                 clip.stop();
                 clip = null;
             }
             loadClip(name);
             clip.start();
         }

         private void loadClip(String fnm) {
             try {
                 AudioInputStream stream = AudioSystem.getAudioInputStream(new File(fnm + ".wav"));
                 AudioFormat format = stream.getFormat();
                 DataLine.Info info = new DataLine.Info(Clip.class, format);
                 if (!AudioSystem.isLineSupported(info)) {
                     JOptionPane.showMessageDialog(null, "Unsupported sound line", "Warning!", JOptionPane.WARNING_MESSAGE);
                 } else {
                     clip = (Clip) AudioSystem.getLine(info);
                     clip.open(stream);
                     stream.close();
                 }
             } catch (Exception e) {
                 JOptionPane.showMessageDialog(null, "loadClip E: " + e.toString(), "Warning!", JOptionPane.WARNING_MESSAGE);
             }
         }

         // Required by ActionListener; the original excerpt did not show a body for it.
         public void actionPerformed(ActionEvent e) {
         }

         public static void main(String[] args) {
             new PlaySound().play("a wav file name");
         }
     }
     I would appreciate it if someone can point out how I can preload them to improve performance... Thanks in advance!
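     Since the question is specifically about preloading, here is a minimal sketch of one way to open all the clips once up front and just rewind/restart them on demand. It uses the same javax.sound.sampled API as the code above; the class name SoundBank and the four file names are illustrative.

     import java.io.File;
     import java.util.HashMap;
     import java.util.Map;
     import javax.sound.sampled.AudioInputStream;
     import javax.sound.sampled.AudioSystem;
     import javax.sound.sampled.Clip;

     public class SoundBank {

         private final Map<String, Clip> clips = new HashMap<String, Clip>();

         /** Load every clip once, e.g. at application startup. */
         public void preload(String... names) throws Exception {
             for (String name : names) {
                 AudioInputStream stream =
                         AudioSystem.getAudioInputStream(new File(name + ".wav"));
                 Clip clip = AudioSystem.getClip();   // obtain a clip from the default mixer
                 clip.open(stream);                   // loads the audio data into memory now
                 stream.close();
                 clips.put(name, clip);
             }
         }

         /** Play a preloaded clip; rewinding avoids re-reading the file. */
         public void play(String name) {
             Clip clip = clips.get(name);
             if (clip == null) {
                 return;                              // not preloaded -- nothing to play
             }
             if (clip.isRunning()) {
                 clip.stop();
             }
             clip.setFramePosition(0);                // rewind to the beginning
             clip.start();
         }

         public static void main(String[] args) throws Exception {
             SoundBank bank = new SoundBank();
             bank.preload("beep", "click", "error", "done");  // illustrative file names
             bank.play("beep");
             Thread.sleep(1000);                      // give the audio thread time to play
         }
     }

     Because Clip.open() loads the audio data before playback, the later play() calls only have to rewind and start the line, which is usually what makes the sound fire with minimal delay.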

    The message above should be:
    OMG, me dumb you smart Florian...
    Thank you for your suggestion... It may not be the best or anything close to what I thought it would be, but it's certainly one way to do it and better than what I've got now...
    Thanks again Florian, I really appreciate it!!
    BTW, is there anything that would produce the sound faster than this?
    Message was edited by:
    BuggyVB

  • How to improve performance when there are many TextBlocks in ItemsControl items?

       Hi,
       I'm trying to find a way to improve performance in a situation where an ItemsControl uses UI and data virtualization and each item in that control has 36 TextBlocks. Basically, each item is a single string; there are that many TextBlocks so that different brushes can be assigned to different parts of the string. The performance of this construction is terrible. I have 37 items visible on the screen, and if I try to scroll up or down it scrolls into black space and then takes a second or two to show the items.
       I tried different things. For example, the most successful performance-wise was to replace the TextBlocks with Borders and then draw bitmaps. In other words, I prepared 127 bitmaps, one for each character (I need ASCII only), and used those bitmaps to set the Border.Background of each one. It improved performance about 1.5 - 2 times, but it consumed much more memory (which is not surprising, of course). The required amount of memory is so big that it throws OutOfMemoryException on the 512MB emulator but works on 1GB. As a result, I don't think it is a good solution.
       Another thing that worked perfectly was to replace the 36 TextBlocks with only 6 TextBlocks. In this case the performance improvement is about 5 - 10 times, but I lose the ability to set different colors to different parts of the string. It seems that performance degrades dramatically as the number of TextBlocks increases. Is there another technique for drawing strings where literally every character can be a different color, with decent performance?
    Thank you
    Alex

       Using Runs inside TextBlocks gives approximately the same improvement as using bitmaps (1.5 - 2 times faster), but it is not even close to the case with just a couple of TextBlocks in the ItemsControl item. Any other ideas?
    Alex

  • How to improve Performance of the Statements.

    Hi,
    I am using Oracle 10g. My problem is that when I execute and fetch records from the database, it takes a long time. I have also created statistics, but to no avail. What do I have to do now to improve the performance of SELECT, INSERT, UPDATE, and DELETE statements?
    Does it make any difference that I am using Windows XP with 1 GB of RAM on the server machine and Windows XP with 512 MB of RAM on the client machine?
    Please give me advice on how to improve performance.
    Thank u...!

    What and where to change parameters and values?
    Well, maybe my previous post was not clear enough, but if you want to keep your job, you shouldn't change anything else in the init parameters, and you shouldn't fall into Compulsive Tuning Disorder.
    Anyone who advises you to change some parameter to some value without any more information shouldn't be listened to.
    Nicolas.
