A64 Tweaker and Improving Performance

I noticed a little utility called "A64 Tweaker" being mentioned in an increasing number of posts, so I decided to track down a copy and try it out...basically, it's a memory tweaking tool, and it actually is possible to get a decent (though not earth-shattering by any means) performance boost with it.  It also lacks any real documentation as far as I can find, so I decided to make a guide type thing to help out users who would otherwise just not bother with it.
Anyways, first things first, you can get a copy of A64 Tweaker here:  http://www.akiba-pc.com/download.php?view.40
Now that that's out of the way, I'll walk through all of the important settings, minus Tcl, Tras, Trcd, and Trp, as these are the typical RAM settings that everyone is always referring to when they say "CL2.5-3-3-7", so information on them is widely available, and everyone knows that for these settings, lower always = better.  Note that for each setting, I will list the measured change in my SiSoft Sandra memory bandwidth score over the default setting.  If a setting produces a change of < 10 MB/sec, its effects will be listed as "negligible" (though note that it still adds up, and a setting that has a negligible impact on throughput may still have an important impact on memory latency, which is just as important).  As for the rest of the settings (I'll do the important things on the left hand side first, then the things on the right hand side...the things at the bottom are HTT settings that I'm not going to muck with):
Tref - I found this setting to have the largest impact on performance out of all the available settings.  In a nutshell, this setting controls how your RAM refreshes are timed...basically, RAM can be thought of as a vast series of leaky buckets (except in the case of RAM, the buckets hold electrons and not water), where a bucket filled beyond a certain point registers as a '1' while a bucket with less than that registers as a '0', so in order for a '1' bucket to stay a '1', it must be periodically refilled (i.e. "refreshed").  The way I understand this setting, the frequency (100 MHz, 133 MHz, etc.) controls how often the refreshes happen, while the time parameter (3.9 microsecs, 1.95 microsecs, etc.) controls how long the refresh cycle lasts (i.e. how long new electrons are pumped into the buckets).  This is important because while the RAM is being refreshed, other requests must wait.  Therefore, intuitively it would seem that what we want are short, infrequent refreshes (the 100 MHz, 1.95 microsec option).  Experimentation almost confirms this, as my sweet spot was 133 MHz, 1.95 microsecs...I don't know why I had better performance with this setting, but I did.  Benchmark change from default setting of 166 MHz, 3.9 microsecs: + 50 MB/sec
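To put rough numbers on the refresh trade-off, here's a quick back-of-the-envelope sketch (Python, purely illustrative).  It assumes the time parameter is the interval between refresh commands (the usual DRAM reading, which may or may not be exactly what the tweaker means) and borrows the Trfc value from the next entry as the cost of each refresh; both assumptions are mine, since A64 Tweaker has no real docs:

    # Rough estimate: fraction of time the RAM is busy refreshing instead of
    # serving reads/writes. Assumptions (mine, for illustration only):
    #   - the Tref time parameter is the interval between refresh commands
    #   - each refresh occupies roughly Trfc memory-clock cycles
    MEM_CLOCK_MHZ = 264  # the memory clock used in these tests

    def refresh_overhead_pct(interval_us, trfc_clocks):
        clock_ns = 1000.0 / MEM_CLOCK_MHZ         # one memory clock in ns
        busy_ns = trfc_clocks * clock_ns          # time one refresh blocks the bus
        return 100.0 * busy_ns / (interval_us * 1000.0)

    for interval_us in (3.9, 1.95):
        for trfc in (24, 18):
            print(f"Tref {interval_us} us, Trfc {trfc} clks: "
                  f"~{refresh_overhead_pct(interval_us, trfc):.2f}% of time refreshing")

Even in the worst case here the overhead is only a few percent, which fits with the gains being real but modest.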
Trfc - This setting offered the next largest improvement...I'm not sure exactly what this setting controls, but it is doubtless similar to the above setting.  Again, lower would seem to be better, but although I was stable down to '12' for the setting, the sweet spot here for my RAM was '18'.  Selecting '10' caused a spontaneous reboot.  Benchmark change from the default setting of 24:  +50 MB/sec
Trtw - This setting specifies how long the system must wait after it reads a value before it tries to overwrite the value.  This is necessary due to various technical aspects related to the fact that we run superscalar, multiple-issue CPUs that I don't feel like getting into, but basically, smaller numbers are better here.  I was stable at '2'; selecting '1' resulted in a spontaneous reboot.  Benchmark change from default setting of 4:  +10 MB/sec
Twr - This specifies how much delay is applied after a write occurs before the new information can be accessed.  Again, lower is better.  I could run as low as 2, but didn't see a huge change in benchmark scores as a result.  It is also not too likely that this setting affects memory latency in an appreciable way.  Benchmark change from default setting of 3:  negligible
Trrd - This controls the delay between a row address strobe (RAS) and a second row address strobe.  Basically, think of memory as a two-dimensional grid...to access a location in the grid, you need both a row and a column number.  The way memory accesses work is that the system first asserts the row that it wants (the row address strobe, or RAS), and then asserts the column that it wants (the column address strobe, or CAS).  Because of a number of factors (prefetching, block addressing, the way data gets laid out in memory), the system will often access multiple columns from the same row at once to improve performance (so you get one RAS, followed by several CAS strobes).  I was able to run stably with a setting of 1 for this value, although I didn't get an appreciable increase in throughput.  It is likely however that this setting has a significant impact on latency.  Benchmark change from default setting of 2:  negligible
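As an illustration of the grid idea (a toy model, not how the K8's controller actually maps addresses; the bit widths are made up), splitting a flat address into row and column bits looks something like this:

    # Toy model of DRAM addressing: a flat address is split into a row part
    # and a column part. The bit widths are made up for illustration; real
    # controllers also interleave banks and chip selects.
    ROW_BITS = 13
    COL_BITS = 10

    def split_address(addr):
        col = addr & ((1 << COL_BITS) - 1)                 # low bits pick the column
        row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)   # next bits pick the row
        return row, col

    for addr in (0x0000, 0x0004, 0x0008, 0x0400):
        row, col = split_address(addr)
        print(f"addr {addr:#06x} -> row {row}, col {col}")

Sequential addresses land in the same row, so the controller can open a row once and then issue several column reads against it, which is why back-to-back row strobes (the thing Trrd throttles) are comparatively rare in streaming workloads.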
Trc - I'm not completely sure what this setting controls, although I found it had very little impact on my benchmark score regardless of what values I specified.  I would assume that lower is better, and I was stable down to 8 (lower than this caused a spontaneous reboot), and I was also stable at the max possible setting.  It is possible that this setting has an effect on memory latency even though it doesn't seem to impact throughput.  Benchmark change from default setting of 12:  negligible
Dynamic Idle Cycle Counter - I'm not sure what this is, and although it sounds like a good thing, I actually posted a better score when running with it disabled.  No impact on stability either way.  Benchmark change from default setting of enabled:  +10 MB/sec
Idle Cycle Limit - Again, not sure exactly what this is, but testing showed that both extremely high and extremely low settings degrade performance by about 20 MB/sec.  Values in the middle offer the best performance.  I settled on 32 clks as my optimal setting, although the difference was fairly minimal over the default setting.  This setting had no impact on stability.  Benchmark change from default setting of 16 clks:  negligible
Read Preamble - As I understand it, this is basically how much of a "grace period" is given to the RAM when a read is asserted before the results are expected.  As such, lower values should offer better performance.  I was stable down to 3.5 ns, lower than that and I would get freezes/crashes.  This did not change my benchmark scores much, though in theory it should have a significant impact on latency.  Benchmark change from default setting of 6.0 ns:  negligible
Read Write Queue Bypass - Not sure what it does, although there are slight performance increases as the value gets higher.  I was stable at 16x, though the change over the 8x default was small.  It is possible, though I think unlikely, that this improves latency as well.  Benchmark change from default setting of 8x:  negligible
Bypass Max - Again not sure what this does, but as with the above setting, higher values perform slightly better.  Again I feel that it is possible, though not likely, that this improves latency as well.  I was stable at the max of 7x.  Benchmark change from the default setting of 4x:  negligible
Asynch latency - A complete mystery.  Trying to run *any* setting other than default results in a spontaneous reboot for me.  No idea how it affects anything, though presumably lower would be better, if you can select lower values without crashing.
...and there you have it.  With the tweaks mentioned above, I was able to gain +160 MB/sec on my Sandra score, +50 on my PCMark score, and +400 on my 3DMark 2001 SE score.  Like I said, not earth-shattering, but a solid performance boost, and it's free to boot.  Settings that I felt had no use in tweaking the RAM for added performance, or which are self-explanatory, have been left out.  The above tests were performed on Corsair XMS PC4000 RAM @ 264 MHz, CL2.5-3-4-6 1T.

Quote
Hm...I wonder which one is telling the truth, the BIOS or A64 tweaker.
I've wondered this myself.  From my understanding it's the next logical step from the WCREDIT programs.  I understand how a clock gen can misreport frequency, because it's probably not measuring frequency itself but rather deriving a mathematical representation from a few numbers it's gathered and one clock frequency (HTT maybe?), and unsupported dividers mess up the math...but I think the tweaker just extracts hex values straight from the registers and displays them in "English".  I mean, it could be wrong, but seeing how I've watched the BIOS on the SLI Plat change the memory timings in the POST screen to values other than SPD when it's on Auto with aggressive timings disabled, I actually want to side with A64 Tweaker in this case.
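For the curious, the "straight from the registers" part is entirely plausible: on the Athlon 64 the on-die memory controller shows up as an ordinary PCI device, so a tool can read the timing registers directly instead of trusting what the BIOS claims.  Here's a hypothetical Linux sketch of the idea (Python; the bus 0 / device 0x18 / function 2 address is AMD's convention for the K8 DRAM controller, and the 0x88 register offset is an assumption from memory, so treat both as illustrative, not gospel):

    # Hypothetical sketch: read a DRAM timing register on an Athlon 64 via
    # Linux's PCI config-space files. Needs root. The function-2 device and
    # the 0x88 offset are assumptions for illustration only.
    PCI_CONFIG = "/sys/bus/pci/devices/0000:00:18.2/config"
    TIMING_REG_OFFSET = 0x88  # assumed DRAM timing register offset

    with open(PCI_CONFIG, "rb") as f:
        f.seek(TIMING_REG_OFFSET)
        value = int.from_bytes(f.read(4), "little")

    print(f"raw register value: {value:#010x}")
    # A tool like A64 Tweaker would then decode bit fields of this value into
    # the familiar Tcl/Trcd/Trp/Tras numbers.

Something like that would also explain why it can disagree with what the BIOS setup screen shows.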
Hey, anyone know what Tref in A64 Tweaker relates to in the BIOS?  i.e. 200 1.95us = what in the BIOS?  1x4028, 1x4000, I'm just making up numbers here, but it's different than 200 1.95.  Last time I searched I didn't find anything.  Well, I found a lot, but not what I wanted...

Similar Messages

  • Firmware Version 4.1.106.31982 of IomegaEZ BAD PERFORMANCE and how to improve performance

Firmware Version 4.1.106.31982 of IomegaEZ has BAD PERFORMANCE!!! How can I improve performance?
I upgraded the IomegaEZ to version 4.1.106.31982, and when I enter the WEB SETUP it has BAD PERFORMANCE; it is very slow, unlike the previous firmware version. How can the performance of this version be improved? Can I return to the previous version without losing data?
    Thank You
    Ruben Arno

    Hello pcyservice
    There is NOT a way to downgrade/roll-back/revert to a previous firmware without wiping all data.  This is specifically mentioned on the firmware update page:
    " CAUTION!
    This update is not data destructive, however, ALWAYS back up your data before performing any firmware update!
    Once you have updated the firmware, you will NOT be able to revert to an older firmware version."
    I recommend you disable any unnecessary features/protocols such as Media Server and Active Folders, then reboot the unit.
    If you are still experiencing performance issues, please contact LenovoEMC support to troubleshoot further.
    LenovoEMC Contact Information is region specific. Please select the correct link then access the Contact Us at the top right:
    US and Canada: https://lenovo-na-en.custhelp.com/
    Latin America and Mexico: https://lenovo-la-es.custhelp.com/
    EU: https://lenovo-eu-en.custhelp.com/
    India/Asia Pacific: https://lenovo-ap-en.custhelp.com/
    http://support.lenovoemc.com/

  • Improving Performance between WRT610N (v1) and WET610N (v1)

    Any advice on how to improve performance of the connection between these two Wireless N devices?
    WRT610N (v1) and WET610N (v1)
We have a PS3 connected to the WET610N via Ethernet directly, and it connects the PS3 wirelessly to the Internet through the WRT610N.  Just wondering if there are some known tweaks that will improve performance between them.  One thing I think is ridiculous is that the WRT610N is showing only 30% signal strength on the WET610N.  That is ridiculous, as they are not far from each other.
    Any input is welcome!

What's the distance between your router and the bridge? Are you connected to the 5GHz wireless network or the 2.4GHz wireless network? What wireless settings have you set up on your router?

  • Audio and Video Performance Tweaking

Hey guys, I'm back with more questions.  I'm trying to improve the overall quality of my video conference experience.  My target market is the elderly and their families, so not necessarily people who are used to computers and tweaking things, so we want to make it as good as we can right out of the box.
    Video
    1.)  All of my decoded video has clearly visible blocks.  I've attached a sample screen shot to show what I mean.  Is there a deblocking filter option I'm missing somewhere?
    2.)  The encoder in flash player 10.x is still H.263 based right?  I'm not missing a secret option to use the H.264 encoder am I?
    Audio
    1.)  I've set my audio codec to speex in my audio publisher, and it looks like it defaults to a quality setting of 10 (<rtc:AudioPublisher id="audioPub" codec="{SoundCodec.SPEEX}"/>).  Am I missing any tweaks to improve the audio quality?
    Audio questions where I assume the answer is no but I ask anyway
2.)  Does the LCCS code or Flash Player prioritize audio packets over video packets, or are they just sent when they're available?  I'd like to sacrifice video for audio if it's possible.
3.)  Is there a way to better synchronize the video and audio streams?  Right now there's a lot of lag between what is said and how the lips move (this is even with peer to peer on the same private network).   I get the idea that there isn't, but I thought I'd ask.
4.)  Is there anything I can monitor to give me an idea of the current connection performance?   In other words, can I monitor something and then make my own decisions to reduce the video quality or audio quality based on available bandwidth between the two connected peers?
5.)  You secretly have echo cancellation built in but were waiting till now to tell me, right?  Just kidding.
    Thanks as usual
    -Eric

    Hironmay,
Thank you for getting back to me; I know I asked a lot in one post there.   It would be nice to have a way for audio packets to be prioritized over video, since they need to get there but you can live with dropped video frames.   I'd love to see a way to sync the audio and video up as well.  I was just reading about some research showing that if you can watch someone's mouth move while you hear them talk, it's equivalent to a 20 dB volume increase (referring to how well the person can understand you).   I have elderly users, so something like that is very important to me.
I agree with you that when I changed the captureAndHeightFactor the blocks got smaller, but they did not go away.  I was hoping there was a way to smooth out the edges of the blocks so the boundaries don't look so harsh.   I had been looking at flash.media.Video in the Flex 3.4 language reference and it talks about deblocking.  I copied the relevant text from the documentation at the end of this message.  I was wondering if there was something there I could use.   I'll play around with it some more.
    Believe me I've been very vocal about the need for echo cancellation on the Flash Forums
    Thanks for the help,
    Eric
relevant text from flash.media.Video
deblocking
property
deblocking:int
Indicates the type of filter applied to decoded video as part of post-processing. The default value is 0, which lets the video compressor apply a deblocking filter as needed.
Compression of video can result in undesired artifacts. You can use the deblocking property to set filters that reduce blocking and, for video compressed using the On2 codec, ringing.
Blocking refers to visible imperfections between the boundaries of the blocks that compose each video frame. Ringing refers to distorted edges around elements within a video image.
Two deblocking filters are available: one in the Sorenson codec and one in the On2 VP6 codec. In addition, a deringing filter is available when you use the On2 VP6 codec. To set a filter, use one of the following values:
0—Lets the video compressor apply the deblocking filter as needed.
1—Does not use a deblocking filter.
2—Uses the Sorenson deblocking filter.
3—For On2 video only, uses the On2 deblocking filter but no deringing filter.
4—For On2 video only, uses the On2 deblocking and deringing filter.
5—For On2 video only, uses the On2 deblocking and a higher-performance On2 deringing filter.
If a value greater than 2 is selected for video when you are using the Sorenson codec, the Sorenson decoder defaults to 2.
Using a deblocking filter has an effect on overall playback performance, and it is usually not necessary for high-bandwidth video. If a user's system is not powerful enough, the user may experience difficulties playing back video with a deblocking filter enabled.

  • How do I improve performance while doing pull, push and delete from Azure Storage Queue

           
    Hi,
I am working on a distributed application with Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, so it is expected to be heavily loaded most of the time. The business case is typical: pull a message from the queue, process the message, then delete the message from the queue. This module also sends back a notification to the user indicating that processing is complete. The functions/modules work fine in that they meet the logical requirements; it's a pretty typical queue scenario.
Now, coming to the problem statement: since it is envisaged that the queue will be heavily loaded most of the time, I am pushing to speed up the overall message lifetime. The faster I can clear messages, the better the overall experience for everyone, system and users.
To improve performance I ran multiple cycles of performance profiling and then improved the identified "HOT" paths/functions.
It all came down to a point where the Azure Queue pull and delete are the two most time-consuming calls. I improved the pull by batch-pulling 32 messages at a time (the maximum message count that can be pulled from an Azure queue at once, at the time of writing this question), which paid off by reducing processing time by a big margin. All good up to this point as well.
I am processing these messages in parallel so as to improve overall performance.
pseudo code:
// AzureQueue is a class encapsulating calls to Azure Storage Queue;
// assume nothing fancy inside, just vanilla calls to the queue for pull/push/delete.
var batchMessages = AzureQueue.Pull(32);
Parallel.ForEach(batchMessages, bMessage =>
{
    try
    {
        DoSomething(bMessage);   // DoSomething does some background processing
    }
    catch (Exception ex)
    {
        Log(ex);                 // log the exception
    }
    AzureQueue.Delete(bMessage); // delete as soon as processing is done
});
With this change, profiling results now show that up to 90% of the time is taken by the Azure message delete calls alone. Since it is good to delete a message as soon as processing is done, I remove it just after "DoSomething" finishes.
What I need now are suggestions on how to further improve the performance of this function when 90% of the time is being eaten up by the Azure Queue delete call itself. Is there a better, faster way to perform delete/bulk delete etc.?
With the implementation mentioned here, I get a speed of close to 25 messages/sec. Right now Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
Does it also make a difference to performance which queue delete call I am making? As of now the queue has an overloaded method for deleting a message: one which accepts a message object and another which accepts a message identifier and pop receipt. I am using the latter one here, with message identifier and pop receipt, to delete the message from the queue.
    Let me know if you need any additional information or any clarification in question.
    Inputs/suggestions are welcome.
    Many thanks.

The first thing that came to mind was to use a parallel delete at the same time you run the work in DoSomething.  If DoSomething fails, add the message back into the queue.  This won't work for every application, and work that was in the queue near the head could be pushed back to the tail, so you'd have to think about how that may affect your workload.
    Or, make a threadpool queued delete after the work was successful.  Fire and forget.  However, if you're loading the processing at 25/sec, and 90% of time sits on the delete, you'd quickly accumulate delete calls for the threadpool until you'd
    never catch up.  At 70-80% duty cycle this may work, but the closer you get to always being busy could make this dangerous.
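To put rough numbers on that worry, here is plain back-of-the-envelope arithmetic (Python, purely illustrative; the 25 msg/sec and 90% figures come from this thread, everything else follows from them):

    # Back-of-the-envelope check on fire-and-forget deletes.
    msgs_per_sec = 25.0        # current throughput, delete included
    delete_share = 0.90        # fraction of per-message time spent deleting

    per_msg_s = 1.0 / msgs_per_sec              # 40 ms total per message
    delete_s = per_msg_s * delete_share         # ~36 ms per delete
    deletes_per_worker = 1.0 / delete_s         # ~28 deletes/sec per worker

    # With deletes off the hot path, processing alone runs ~10x faster:
    new_msgs_per_sec = msgs_per_sec / (1.0 - delete_share)   # ~250/sec
    workers_needed = new_msgs_per_sec / deletes_per_worker   # ~9 workers

    print(f"each delete ~{delete_s * 1000:.0f} ms; "
          f"need ~{workers_needed:.0f} concurrent delete workers to keep up")

So a fire-and-forget pool only stays ahead if it can run roughly nine deletes concurrently; with fewer, the backlog grows exactly as described above.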
    I wonder if calling the delete REST API yourself may offer any improvements.  If you find the delete sets up a TCP connection each time, this may be all you need.  Try to keep the connection open, or see if the REST API can delete more at a time
    than the SDK API can.
    Or, if you have the funds, just have more VM instances doing the work in parallel, so the first machine handles 25/sec, the second at 25/sec also - and you just live with the slow delete.  If that's still not good enough, add more instances.
    Darin R.

  • Upgraded both computers in the household and found Lion is too disruptive to workflow.  Do I turn in the new laptop for a pre-Lion rebuild to keep Snow Leopard, or do I return computer and get upgraded memory to improve performance of existing MacBookPro?

    Upgraded both computers in the household and found Lion is too disruptive to workflow. 
Do I turn in the new laptop for a pre-Lion rebuild to keep Snow Leopard, or do I return the new computer and get upgraded memory to improve performance of the existing MacBookPro?  I'm mostly still happy with the existing MacBookPro, but Aperture doesn't work; the computer can't handle it.
    Other possibility is setting up virtual machine with Snow Leopard server software on new computer.
    Any opinions on what would allow moving forward with the least hassle and best workflow continuity?

Hi,
What year and specs does the MBP have?

  • Improve Performance of Dimension and Fact table

    Hi All,
    Can any one explain me the steps how to improve performance of Dimension and Fact table.
Thanks in advance....
    redd

    Hi!
    There is much to be said about performance in general, but I will try to answer your specific question regarding fact and dimension tables.
First of all, try to compress as many requests as possible in the fact table, and do that regularly.
    Partition your compressed fact table physically based on for example 0CALMONTH. In the infocube maintenance, in the Extras menu, choose partitioning.
    Partition your cube logically into several smaller cubes based on for example 0CALYEAR. Combine the cubes with a multiprovider.
    Use constants on infocube level (Extras->Structure Specific Infoobject properties) and/or restrictions on specific cubes in your multiprovider queries if needed.
    Create aggregates of subsets of your characteristics based on your query design. Use the debug option in RSRT to investigate which objects you need to include.
To investigate the size of the dimension tables, first use the test in transaction RSRV (Database Information about InfoProvider Tables). It will tell you the relative sizes of your dimensions in comparison to your fact table. Then go to transaction DB02 and conduct a detailed analysis on the large dimension tables. You can choose "table columns" in the detailed analysis screen to see the number of distinct values in each column (characteristic). You also need to understand the "business logic" behind these objects. The ones that have low cardinality, that is, relate to each other, should be located together. With this information at hand you can understand which objects contribute the most to the size of the dimension and separate the dimension.
    Use line item dimension where applicable, but use the "high cardinality" option with extreme care.
Generate database statistics regularly using process chains or (if you use Oracle) schedule BRCONNECT runs using transaction DB13.
    Good luck!
    Kind Regards
    Andreas

  • ABAP performance issues and improvements

    Hi All,
Please give me ABAP performance issues and improvement points.
    Regards,
    Hema

    Performance tuning for Data Selection Statement
    For all entries
FOR ALL ENTRIES creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
    The plus
    Large amount of data
    Mixing processing and reading of data
    Fast internal reprocessing of data
    Fast
    The Minus
    Difficult to program/understand
    Memory could be critical (use FREE or PACKAGE size)
    Some steps that might make FOR ALL ENTRIES more efficient:
Removing duplicates from the driver table
    Sorting the driver table
      If possible, convert the data in the driver table to ranges so a BETWEEN statement is used instead of an OR statement:
          FOR ALL ENTRIES IN i_tab
            WHERE mykey >= i_tab-low and
                  mykey <= i_tab-high.
    Nested selects
    The plus:
    Small amount of data
    Mixing processing and reading of data
    Easy to code - and understand
    The minus:
    Large amount of data
    when mixed processing isn’t needed
    Performance killer no. 1
    Select using JOINS
    The plus
    Very large amount of data
    Similar to Nested selects - when the accesses are planned by the programmer
    In some cases the fastest
    Not so memory critical
    The minus
    Very difficult to program/understand
    Mixing processing and reading of data not possible
    Use the selection criteria
    SELECT * FROM SBOOK.                   
      CHECK: SBOOK-CARRID = 'LH' AND       
                      SBOOK-CONNID = '0400'.        
    ENDSELECT.                             
    SELECT * FROM SBOOK                     
      WHERE CARRID = 'LH' AND               
            CONNID = '0400'.                
    ENDSELECT.                              
    Use the aggregated functions
    C4A = '000'.              
    SELECT * FROM T100        
      WHERE SPRSL = 'D' AND   
            ARBGB = '00'.     
      CHECK: T100-MSGNR > C4A.
      C4A = T100-MSGNR.       
    ENDSELECT.                
    SELECT MAX( MSGNR ) FROM T100 INTO C4A 
    WHERE SPRSL = 'D' AND                
           ARBGB = '00'.                  
    Select with view
    SELECT * FROM DD01L                    
      WHERE DOMNAME LIKE 'CHAR%'           
            AND AS4LOCAL = 'A'.            
      SELECT SINGLE * FROM DD01T           
        WHERE   DOMNAME    = DD01L-DOMNAME 
            AND AS4LOCAL   = 'A'           
            AND AS4VERS    = DD01L-AS4VERS 
            AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    SELECT * FROM DD01V                    
    WHERE DOMNAME LIKE 'CHAR%'           
           AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    Select with index support
    SELECT * FROM T100            
    WHERE     ARBGB = '00'      
           AND MSGNR = '999'.    
    ENDSELECT.                    
    SELECT * FROM T002.             
      SELECT * FROM T100            
        WHERE     SPRSL = T002-SPRAS
              AND ARBGB = '00'      
              AND MSGNR = '999'.    
      ENDSELECT.                    
    ENDSELECT.                      
    Select … Into table
    REFRESH X006.                 
    SELECT * FROM T006 INTO X006. 
      APPEND X006.                
    ENDSELECT
    SELECT * FROM T006 INTO TABLE X006.
    Select with selection list
    SELECT * FROM DD01L              
      WHERE DOMNAME LIKE 'CHAR%'     
            AND AS4LOCAL = 'A'.      
    ENDSELECT
    SELECT DOMNAME FROM DD01L    
    INTO DD01L-DOMNAME         
    WHERE DOMNAME LIKE 'CHAR%' 
           AND AS4LOCAL = 'A'.  
    ENDSELECT
    Key access to multiple lines
    LOOP AT TAB.          
    CHECK TAB-K = KVAL. 
    ENDLOOP.              
    LOOP AT TAB WHERE K = KVAL.     
    ENDLOOP.                        
    Copying internal tables
    REFRESH TAB_DEST.              
    LOOP AT TAB_SRC INTO TAB_DEST. 
      APPEND TAB_DEST.             
    ENDLOOP.                       
    TAB_DEST[] = TAB_SRC[].
    Modifying a set of lines
    LOOP AT TAB.             
      IF TAB-FLAG IS INITIAL.
        TAB-FLAG = 'X'.      
      ENDIF.                 
      MODIFY TAB.            
    ENDLOOP.                 
    TAB-FLAG = 'X'.                  
    MODIFY TAB TRANSPORTING FLAG     
               WHERE FLAG IS INITIAL.
    Deleting a sequence of lines
    DO 101 TIMES.               
      DELETE TAB_DEST INDEX 450.
    ENDDO.                      
    DELETE TAB_DEST FROM 450 TO 550.
    Linear search vs. binary
    READ TABLE TAB WITH KEY K = 'X'.
    READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.
    Comparison of internal tables
    DESCRIBE TABLE: TAB1 LINES L1,      
                    TAB2 LINES L2.      
    IF L1 <> L2.                        
      TAB_DIFFERENT = 'X'.              
    ELSE.                               
      TAB_DIFFERENT = SPACE.            
      LOOP AT TAB1.                     
        READ TABLE TAB2 INDEX SY-TABIX. 
        IF TAB1 <> TAB2.                
          TAB_DIFFERENT = 'X'. EXIT.    
        ENDIF.                          
      ENDLOOP.                          
    ENDIF.                              
    IF TAB_DIFFERENT = SPACE.           
    ENDIF.                              
    IF TAB1[] = TAB2[].  
    ENDIF.               
    Modify selected components
    LOOP AT TAB.           
    TAB-DATE = SY-DATUM. 
    MODIFY TAB.          
    ENDLOOP.               
    WA-DATE = SY-DATUM.                    
    LOOP AT TAB.                           
    MODIFY TAB FROM WA TRANSPORTING DATE.
    ENDLOOP.                               
    Appending two internal tables
    LOOP AT TAB_SRC.              
      APPEND TAB_SRC TO TAB_DEST. 
    ENDLOOP
    APPEND LINES OF TAB_SRC TO TAB_DEST.
    Deleting a set of lines
    LOOP AT TAB_DEST WHERE K = KVAL. 
      DELETE TAB_DEST.               
    ENDLOOP
    DELETE TAB_DEST WHERE K = KVAL.
    Tools available in SAP to pin-point a performance problem
          The runtime analysis (SE30)
          SQL Trace (ST05)
          Tips and Tricks tool
          The performance database
    Optimizing the load of the database
    Using table buffering
Using buffered tables improves performance considerably. Note that in some cases a statement cannot be used with a buffered table, so when using these statements the buffer will be bypassed. These statements are:
SELECT DISTINCT
ORDER BY / GROUP BY / HAVING clause
Any WHERE clause that contains a subquery or IS NULL expression
JOINs
A SELECT... FOR UPDATE
If you want to explicitly bypass the buffer, use the BYPASS BUFFER addition to the SELECT clause.
    Use the ABAP SORT Clause Instead of ORDER BY
The ORDER BY clause is executed on the database server, while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.
If you are not sorting by the primary key (e.g. using the ORDER BY PRIMARY KEY statement) but by another key, it could be better to use the ABAP SORT statement to sort the data in an internal table. Note, however, that for very large result sets it might not be a feasible solution, and you would want to let the database server sort it.
Avoid the SELECT DISTINCT Statement
As with the ORDER BY clause, it could be better to avoid SELECT DISTINCT if some of the fields are not part of an index. Instead, use ABAP SORT + DELETE ADJACENT DUPLICATES on an internal table to delete duplicate rows.

  • How to improve database and application performance

    Hi,
Can anybody please help me out with how we can improve database and application performance?
    Regards,
    Bhatia

bhatia wrote:
Hi,
Can anybody please help me out with how we can improve database and application performance?
Regards,
Bhatia

There is no simple answer. There is no DATABASE_FAST=TRUE initialization parameter. There are a myriad of reasons why an application is performing poorly. It could be that the application (code and data relationships) is poorly designed. It could be that individual SQL statements are poorly written. It could be that you don't have enough cpu/memory/disk bandwidth/network bandwidth.
You need to determine the root cause of poor performance and address it. If your application is poorly designed, you can tune the database until the cows come home and it won't make any difference. If you are trying to run 100k updates per second against a database hosted on hardware that only meets minimal requirements to install Oracle ... well, hopefully you get the picture.
First, go to tahiti.oracle.com. Drill down to your selected Oracle product and version. There you will find the complete doc library. Find the Performance Tuning Guide.
Second, go to amazon.com and browse titles by Tom Kyte and Cary Millsap. I particularly recommend "Effective Oracle by Design" and "Optimizing Oracle Performance", though I see a lot of new titles that look promising (I think I'll be doing some buying!)

  • Using Lightroom and Aperture, will a new ATI 5770/5870 vs. GT 120 improve performance?

I have a MP (2009, 3.3 Nehalem Quad and 16GB RAM) and wanted to improve performance in APERTURE (I see the clock wheel processing all the time) with edits; I also use Lightroom and sometimes CS5.
    Anyone with experience that can say upgrading from the GT120 would see a difference and how much approximately?
    Next, do I need to buy the 5870 or can I get the 5770 to work?
    I am assuming I have to remove the GT120 for the new card to fit?
    Thanks

    Terrible marketing. ALL ATI 5xxx work in ALL Mac Pro models. With 10.6.5 and later.
It really should be up to you to check AMD and search out reviews that compare these cards to others. Didn't you look at the specs of each, or at Barefeats? He has half a dozen benchmark tests, but the GT120 doesn't even show up or isn't in the running on most.
AMD's 5870 shows 2x the units:
    TeraScale 2 Unified Processing Architecture   
    1600 Stream Processing Units
    80 Texture Units
    128 Z/Stencil ROP Units
    32 Color ROP Units
    ATI Radeon™ HD 5870 graphics
    That should hold up well.
    Some are on the fence or don't want to pay $$
All they or you (and you've been around for more than a day!) need to do is go to the Apple Store:
    ATI Radeon HD 5870 Graphics Upgrade Kit for Mac Pro (Mid 2010 or Early 2009)
    ATI Radeon HD 5870 Upgrade

  • While browsing the cube data Excel the circle pointer starts to spin and the excel go into a not-responding state,any recommendations to improve performance in excel?

    hi,
While browsing cube data in Excel, the circle pointer starts to spin and Excel goes into a not-responding state. Any recommendations to improve performance in Excel?
I have 20 measures and 8 dimensions.
Filtering data by dimensions in Excel takes a long time.
Ex:
I browsed 15 measures in Excel and filtered data based on time (quarter-wise) and another dimension, product. It takes a long time to get the data.
    Can you please help on this issue.
    Regards,
    Samba

    Hi Samba,
What are the versions of your Office Excel and SQL Server Analysis Services? It would be helpful if you could share detailed computer resource information from when you encountered this issue.
    In addition, we don't know your cube structure and the underlying relationships. But you can take a look at the following articles to troubleshoot the performance issue:
    Improving Excel's Cube Performance:
    http://richardlees.blogspot.com/2010/04/improving-excels-cube-performance.html
    Excel Against a SSAS Cube Slow: Performance Improvement Tips:
    http://www.msbicentral.com/Blogs/tabid/131/articleType/ArticleView/articleId/136/Excel-Against-a-SSAS-Cube-Slow-Performance-Improvement-Tips.aspx
    Regards, 
    Elvis Long
    TechNet Community Support

  • New Performance Settings and Improvements in 2004s

    Hi all,
    I was wondering if there is a document or website with all performance settings that are
    new in BI 2004s (and that were not available in BW 3.5 or lower). 
    If this information is not grouped yet, I want to invite everyone to gather the 2004s specific performance information in this thread. The improvements that I'm aware  of are:
    Write-optimized DSO: http://help.sap.com/saphelp_nw70/helpdata/en/b6/de1c42128a5733e10000000a155106/frameset.htm
    BI Accelerator http://help.sap.com/saphelp_nw70/helpdata/en/06/49c0417951d117e10000000a155106/frameset.htm
    Optimization hint for logical MultiProvider partitioning (native in BI 2004s) Note 911939
    Of course there must be more.
    Best regards,
Daniël

You are partially right, and you mentioned the new settings which come along with the aforementioned settings.
Here's my take on a few others:
    <b>1)Tcode = RSODSO_SETTINGS</b>
    used for Parallel activation of ODS to improve performance
    <b>2)RSCUSTV25</b>
    Query- SQL Split
    Data mart Query Split
    <b>3)se16-->RSADMIN</b>
    DB_STATISTICS = OFF
    OSS note 938040  - as per SAP
    As of Support Package 8, the parameter is no longer used. This means that BI triggers the creation of database statistics, regardless of this parameter. The database itself determines whether the statistics are to be created or changed.
    Tool BRCONNECT to be used after loading of all cubes.
    <b>4)RSCUSTV8</b>
    Reporting Lock = X
    If you set this parameter to X, database commits are executed during master data activation. This prevents an overflow of rollback segments. Because temporary inconsistent states can occur here, reporting is locked during this time.
    If you currently load large quantities of data in which many values change, leading to problems with the rollback segments, you should set this parameter.
    <b>5)Parallel Processing of query</b>
    Se16-->RSADMIN
    QUERY_MAX_WP_DIAG = 6
    https://forums.sdn.sap.com/click.jspa?searchID=4695392&messageID=2935804
    <b>6)RSA1>Administration>Current Settings</b>
Here you get all the different independent areas in which you can work to perk up performance-related issues.
    Infoprovider
    DSO
    Aggregate Parameters
    BI system Data transfer
    OLAP cache
    Planning Locks
    System Parameters
    Batch manager
I'd love to see my name popping up in Wikipedia
    Hope it Helps
    Chetan
    @CP..

  • Improve SATA and IDE Performance on K8N Neo 4

    Hello,
I have 4 SATA hard drives [2 Western Digital 36 GB Raptors running RAID 0] and [2 Maxtor 200 GB hard drives running RAID 0] running on XP 64.  I am using the XP64 drivers off of MSI's website, and although I appear stable, performance seems a little slow.  I am concerned at the benchmarks SiSoft Sandra gave me using the File System benchmark, which rated the drive index of the Raptors at only 53 MB/sec.
My 2 IDE CD-ROM drives are also performing poorly, with skips, stutters and hangs.  I am using the MS IDE driver and both devices are running at DMA 2.
Any advice / better drivers I can use to improve performance?
    Also, does anyone know a program compatible with XP 64 where I can monitor temps and voltages?

6.66?  I also have a problem: when I first format and install Windows, I get almost everything installed the way I want it, and then I reboot and Windows decides to stop loading.  The only way to fix it is to do a repair install, and then it doesn't seem to do it again.  I have formatted maybe 6 times and it has happened each time.  Perhaps when I install XP, I should use the new drivers and not the floppy the MB came with?  I am also getting random crashes.

  • Any suggestions about this program to improve performance and effective code

    CREATE OR REPLACE PACKAGE SEODS02.ODS_ACCOUNT
    AS
/* Package Name : ODS_ACCOUNT */
    /* Description : This procedure will be called to move the data from */
    /* ACCT_ALT_ID_STG (staging) table to ACCT_ALT_ID table and EODS_ACCT */
    /* table for all new accounts */
    /**************** Change History ***********************************/
    /* Date Version Author Description */
    /* 15-04-2011 1.00 Lakshmi Draft version */
    -- Global Specifications.
    package_name_in VARCHAR2(50) :='ODS_ACCOUNT';
    v_cntl_schema VARCHAR2(30) :='SCNTL02';
    v_location INTEGER := 10;
    -- Procedure Specifications.
    PROCEDURE INSERT_SURR_ACCTS (job_name_in IN VARCHAR2,
    proc_cd_in IN VARCHAR2,
    proc_step_cd_in IN VARCHAR2,
    bch_dte_in IN DATE,
    fl_nbr_in IN NUMBER,
    verbose_log_flag_in IN INTEGER,
    pred_check_req_in IN INTEGER,
    ibd_id_in IN NUMBER,
    proc_step_status_out OUT INTEGER,
    sp_hier_inout IN OUT VARCHAR2);
    END ODS_ACCOUNT;
    CREATE OR REPLACE PACKAGE BODY SEODS02.ODS_ACCOUNT
    AS
/* Procedure Name : INSERT_SURR_ACCTS */
/* Description : This procedure will be called to move the data from */
/* ACCT_ALT_ID_STG (staging) table to ACCT_ALT_ID table */
/* and EODS_ACCT table for all new accounts. */
/* Release Date : 27 MAY 2011 */
/* Created By : C2119810 */
    PROCEDURE INSERT_SURR_ACCTS (job_name_in IN VARCHAR2,
    proc_cd_in IN VARCHAR2,
    proc_step_cd_in IN VARCHAR2,
    bch_dte_in IN DATE,
    fl_nbr_in IN NUMBER,
    verbose_log_flag_in IN INTEGER,
    pred_check_req_in IN INTEGER,
    ibd_id_in IN NUMBER,
    proc_step_status_out OUT INTEGER,
    sp_hier_inout IN OUT VARCHAR2)
    AS
    /* Local Variables Declaration*/
    v_curr_date DATE := CURRENT_DATE;
    procedure_name_in VARCHAR2(30) := 'INSERT_SURR_ACCTS';
    stat_code_in VARCHAR2(30);
    proc_step_start_out NUMBER(1);
    proc_step_upd_out NUMBER(1);
    error_msg_in VARCHAR2(1000);
    option_in VARCHAR2(30);
    v_cmit_nbr NUMBER(8);
    v_query VARCHAR2(10000);
    v_actl_inpt_cnt NUMBER(10):=0;
    v_rec_inserted_cnt_in NUMBER(10):=0;
    v_rec_errored_cnt_in NUMBER(10):=0;
    v_proc_step_upd_out NUMBER(1);
    handled_exception EXCEPTION;
    CURSOR c1 IS
    SELECT *
    FROM acct_alt_id_stg
    WHERE fl_nbr = fl_nbr_in AND
    alt_acct_rec_proc_flag = 'N' AND
    eods_acct_rec_proc_flag = 'N' AND
    to_date(bch_dte,'DD-MON-YY') <= bch_dte_in;
    TYPE sttg_cursor IS TABLE OF acct_alt_id_stg%ROWTYPE;
    sttg_array sttg_cursor;
    BEGIN
    /* Enable the logging if verbose log flag is 0*/
    IF verbose_log_flag_in = 0 THEN
    DBMS_OUTPUT.ENABLE();
    ELSE
    DBMS_OUTPUT.DISABLE();
    END IF;
    /* Start the Insert Surrogate Accounts process step after all the predecessor process steps are complete */
    ODS_CONTROL_UTILITY.PROC_STEP_START( proc_cd_in => proc_cd_in,
    proc_step_cd_in => proc_step_cd_in,
    ibd_id_in => ibd_id_in,
    bch_dte_in => bch_dte_in,
    job_name_in => job_name_in,
    verbose_log_flag_in => verbose_log_flag_in,
    pred_chk_reqd_in => pred_check_req_in,
    proc_step_stat_out => proc_step_start_out,
    sp_hier_inout => sp_hier_inout);
    IF proc_step_start_out = 0 THEN
    dbms_output.put_line('Process Step '|| proc_step_cd_in ||' started for Process '||proc_cd_in);
    error_msg_in := 'Error in reading Commit point';
    v_query := 'SELECT proc_cmit_nbr FROM '||v_cntl_schema||'.proc
    WHERE proc_id = '|| chr(39) || proc_cd_in || chr(39) ||
    ' AND ibd_id = '|| ibd_id_in;
    EXECUTE IMMEDIATE v_query INTO v_cmit_nbr;
    dbms_output.put_line('Comit point number is : '||v_cmit_nbr);
    OPEN c1;
    LOOP
    FETCH c1 BULK COLLECT INTO sttg_array LIMIT v_cmit_nbr;
    FOR i IN 1..sttg_array.COUNT LOOP
    error_msg_in := 'Error in inserting ACCT_ALT_ID table';
    INSERT INTO acct_alt_id (acct_alt_id, ibd_id, acct_alt_id_cntx_cde, eods_acct_id, data_grp_cde, crte_pgm, crte_tstp, updt_pgm, updt_tstp)
    VALUES (sttg_array(i).acct_alt_id, sttg_array(i).ibd_id, sttg_array(i).acct_alt_id_cntx_cde, seq_eods_acct_id.nextval, sttg_array(i).data_grp_cde, job_name_in, v_curr_date, job_name_in, v_curr_date);
    error_msg_in := 'Error in inserting EODS_ACCT table';
    INSERT INTO eods_acct (eods_acct_id, acct_typ_cde, data_grp_cde, updt_dte)
    VALUES (seq_eods_acct_id.currval, sttg_array(i).acct_typ_cde, sttg_array(i).data_grp_cde, v_curr_date);
    error_msg_in := 'Error in Updating process flag in ACCT_ALT_ID_STG table';
    UPDATE acct_alt_id_stg
    SET alt_acct_rec_proc_flag = 'Y',
    eods_acct_rec_proc_flag = 'Y',
    updt_pgm = job_name_in,
    updt_tstp = v_curr_date
    WHERE acct_alt_id = sttg_array(i).acct_alt_id
    AND acct_alt_id_cntx_cde = sttg_array(i).acct_alt_id_cntx_cde
    AND fl_nbr = sttg_array(i).fl_nbr;
    /*Incrementing the count of records inserted*/
    v_actl_inpt_cnt := v_actl_inpt_cnt + 1;
    v_rec_inserted_cnt_in := v_rec_inserted_cnt_in + 1;
    END LOOP;
    EXIT WHEN c1%NOTFOUND;
    END LOOP;
    CLOSE c1;
    /* Update the count of records inserted and total processed count to proc_step_exec table. */
    ODS_CONTROL_UTILITY.PROC_STEP_UPDATE (proc_cd_in => proc_cd_in,
    proc_step_cd_in => proc_step_cd_in,
    ibd_id_in => ibd_id_in,
    bch_dte_in => bch_dte_in,
    job_name_in => job_name_in,
    status_in => 'IN PROCESS',
    rec_inserted_cnt_in => v_rec_inserted_cnt_in,
    actl_inpt_cnt_in => v_actl_inpt_cnt,
    verbose_log_flag_in => verbose_log_flag_in,
    pred_chk_reqd_in => pred_check_req_in,
    proc_step_upd_out => proc_step_upd_out,
    sp_hier_inout => sp_hier_inout);
    IF proc_step_upd_out = 0 THEN
    COMMIT;
    ELSE
    error_msg_in := 'Issue in updating process step '||proc_step_cd_in||' for process '||proc_cd_in;
    option_in := 'proc_step';
    v_location := 30;
    RAISE handled_exception;
    END IF;
    ODS_CONTROL_UTILITY.PROC_STEP_UPDATE (proc_cd_in => proc_cd_in,
    proc_step_cd_in => proc_step_cd_in,
    ibd_id_in => ibd_id_in,
    bch_dte_in => bch_dte_in,
    job_name_in => job_name_in,
    status_in => 'COMPLETED',
    verbose_log_flag_in => verbose_log_flag_in,
    pred_chk_reqd_in => pred_check_req_in,
    proc_step_upd_out => v_proc_step_upd_out,
    sp_hier_inout => sp_hier_inout);
    IF v_proc_step_upd_out = 0 THEN
    COMMIT;
    dbms_output.put_line('Data has been successfully inserted into ACCT_ALT_ID and EODS_ACCT tables');
    proc_step_status_out := 0;
    ELSE
    error_msg_in := 'Issue in ending process step for process ' || proc_cd_in || ' and process step '|| proc_step_cd_in;
    option_in := 'others';
    v_location := 40;
    RAISE handled_exception;
    END IF;
    ELSE
    error_msg_in := 'Issue in starting the process step for process ' || proc_cd_in || ' and process step '|| proc_step_cd_in;
    option_in := 'others';
    RAISE handled_exception;
    END IF;
    EXCEPTION
    WHEN handled_exception THEN
    IF c1%ISOPEN THEN
    CLOSE c1;
    END IF;
    ROLLBACK;
    DBMS_OUTPUT.ENABLE();
    ODS_CONTROL_UTILITY.COMM_SP_EXCEP_HNDLR(package_name_in => package_name_in,
    procedure_name_in => procedure_name_in,
    location_in => v_location,
    error_mesg_in => error_msg_in,
    proc_cd_in => proc_cd_in,
    proc_step_cd_in => proc_step_cd_in,
    ibd_id_in => ibd_id_in,
    option_in => option_in,
    job_name_in => job_name_in,
    bch_dte_in => bch_dte_in,
    sp_hier_inout => sp_hier_inout);
    sp_hier_inout :=
    CASE
    WHEN sp_hier_inout IS NULL THEN package_name_in || '.' || procedure_name_in
    WHEN sp_hier_inout IS NOT NULL THEN package_name_in || '.' || procedure_name_in || '-->' || sp_hier_inout
    END;
    proc_step_status_out := 1;
    WHEN OTHERS THEN
    IF c1%ISOPEN THEN
    CLOSE c1;
    END IF;
    ROLLBACK;
    error_msg_in := error_msg_in||' : '||SQLCODE || ' : '||SQLERRM;
    DBMS_OUTPUT.ENABLE();
    option_in := 'proc_step';
    ODS_CONTROL_UTILITY.COMM_SP_EXCEP_HNDLR(package_name_in => package_name_in,
    procedure_name_in => procedure_name_in,
    location_in => v_location,
    error_mesg_in => error_msg_in,
    proc_cd_in => proc_cd_in,
    proc_step_cd_in => proc_step_cd_in,
    ibd_id_in => ibd_id_in,
    option_in => option_in,
    job_name_in => job_name_in,
    bch_dte_in => bch_dte_in,
    sp_hier_inout => sp_hier_inout);
    sp_hier_inout :=
    CASE
    WHEN sp_hier_inout IS NULL THEN package_name_in || '.' || procedure_name_in
    WHEN sp_hier_inout IS NOT NULL THEN package_name_in || '.' || procedure_name_in || '-->' || sp_hier_inout
    END;
    proc_step_status_out := 1;
    END INSERT_SURR_ACCTS;
    END ODS_ACCOUNT;
    /

I assume that the part taking the time is the FOR i IN 1..sttg_array.COUNT LOOP: in there you do 2 INSERTs and an UPDATE on every iteration...
If you loop 1 million times (how many?), then you do 2 million single INSERTs and 1 million single UPDATEs.
A trace with TKPROF will tell you the details.
You would be better off doing an INSERT ALL for the two inserts and a MERGE for the UPDATE. NO LOOP!!! That will reduce the code and improve performance by an order of magnitude.

  • Best methods to speed up and increase performance of Linux?

    I would like this thread to be dedicated to various speed up techniques and performance tweaks for Linux and especially arch.
    K.Mandla offers quite a bit of interesting tweaks in the guide "Howto: Set up Hardy for speed", most of it applies to all linux distros.
    http://kmandla.wordpress.com/2008/05/04 … for-speed/
    If you have come across any interesting information relating to getting faster and better performance from Linux please do not hesitate to post. Thanks.

    PrimoTurbo wrote:
    I would like this thread to be dedicated to various speed up techniques and performance tweaks for Linux and especially arch.
    K.Mandla offers quite a bit of interesting tweaks in the guide "Howto: Set up Hardy for speed", most of it applies to all linux distros.
    http://kmandla.wordpress.com/2008/05/04 … for-speed/
    If you have come across any interesting information relating to getting faster and better performance from Linux please do not hesitate to post. Thanks.
    You will most likely end up wasting a lot of time for very little gain.
    There are no miracle solutions, you either use faster apps or buy faster hardware.
    The tweaks which are truly interesting without major drawbacks usually make their way on a default system, one way or another.
    For example, for directory index on ext3 :
    http://wiki.archlinux.org/index.php/Ext … ystem_Tips : "Note: Directory indexing is activated by default in Archlinux via /etc/mke2fs.conf"
    So if you want an advice, there are areas more interesting to explore, like programming for instance. You might end up having the capability to really improve apps performance, or write your own lighter alternatives.

Maybe you are looking for

  • I have just downloaded IOS 7

I have just downloaded iOS 7 and need the new iTunes 11.1 to be able to use my iPhone with iTunes, but I need the new Mac software update OS X 10.6.8, and my Mac doesn't have or need an update. What should I do! I'm upset :'(

  • SQL Injection Produces "Wrong Name" Errors

    Hi all, By now, you're familiar with the SQL Injection attacks floatin' around out there, but what has me puzzled is how my ColdFusion servers are responding to them. For each SQL Injection attempt, CF throws an application error; this is from my APP

  • Problem with fresh PT 6 installtion

Hi all, I am very new to the portal world and just managed to install a copy of PT 6 Foundation (aka portal server). After installation, when I access server.pt I see an error message saying "Error displaying Dropdown menu tabs.". Can anyone help me un

  • Cross Company Controlling Issue

Dear Experts, My client is using two controlling areas. The first controlling area has 1 company code assigned to it and the other has 10 company codes assigned to it. Now they want to do cross-company-code controlling between these 11 company codes but a

  • Leopard Defaults to Wireless, even when Gigabit ethernet is plugged in...

This is a bit of a frustration for me, but after installing 10.5.5, it seems as though Leopard defaults to using the wireless connection instead of the "fastest available" connection. When I plug in ethernet, I stay connected via wireless, even though I