Improving performance while adding groups

Hello,
I've been monitoring my Crystal reports for a week or so and the report performance is going for a toss. I would like to narrate this in a little detail. I have created 3 groups to select dynamic parameters, and each group has a formula of its own. In my parameters I have added one parameter with 7 entities (which are hard coded); a user can select any 3 entities out of those seven when initially refreshing the document. Each of the parameter entities is bundled in a conditional formula (mentioned under formula fields) for that entity, so the user may select any entity and get the respective data for it.
For all this I have created 3 groups, and the same formula is pasted under all 3 groups. I have then set the formula groups to be selected under the Group Expert. The report works fine and yields correct data. However, when grouping on the formulas, Crystal selects all the database tables from the database fields, as these tables are mentioned under the group formula. Agreed, all fine.
But when I run the report, "Show SQL Query" selects all the database tables under the SELECT clause, which should not be the case. Due to this, even if I have selected an entity which has only 48 to 50 records, Crystal tends to select all 1,656,053 records from the database fields, which is hampering Crystal's performance big time. When I run the same query in SQL it retrieves the data in just 8 seconds, but with Crystal selecting all the records it gives me the data after 90 seconds, which is frustrating for the user.
Please suggest a workaround for this. Please help.
Thank you.

Hi,
I suspect the problem isn't necessarily just your grouping but your Record Selection Formula as well. If you do not see a complete WHERE clause, it is because your Record Selection Formula is too complicated for Crystal to translate to SQL.
The same could be said for your grouping. There are two suggestions I can offer:
1) Instead of linking the tables in Crystal, use a SQL Command and generate your query in SQL directly. You can use parameters and, at the very least, get a working WHERE clause (see the sketch below).
2) Create a Stored Procedure or view that contains the logic you need to retrieve the records.
At the very least you want to streamline the query to improve performance. Streamlining the grouping may not be possible, but my guess is it's more the record selection formula than the grouping.
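As a rough illustration of the idea behind suggestion 1 (shown in C#/ADO.NET rather than in Crystal itself, and with a hypothetical Orders table, Entity column, and connection string), the point is that a parameterized WHERE clause is evaluated by the database server, so only the matching rows ever cross the wire:

using System;
using System.Data.SqlClient;

class ServerSideFilterSketch
{
    static void Main()
    {
        // Hypothetical connection string and schema, for illustration only.
        const string connStr = "Server=myServer;Database=Reporting;Integrated Security=true";
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT * FROM Orders WHERE Entity = @Entity", conn))
        {
            // The filter travels to the database as a parameter,
            // so the server returns only the selected entity's rows.
            cmd.Parameters.AddWithValue("@Entity", "EntityA");
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Only the ~50 matching records arrive here, not 1.6 million.
                }
            }
        }
    }
}

A SQL Command in Crystal should give you the same effect: the command text you write, parameters included, is what gets sent to the server.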
Good luck,
Brian

Similar Messages

  • How do I improve performance while doing pull, push and delete from Azure Storage Queue

           
    Hi,
    I am working on a distributed application with Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, so it is expected to be heavily loaded most of the time. The business case is typical: pull a message from the queue, process the message, then delete the message from the queue. This module also sends back a notification to the user indicating processing is complete. The functions/modules work fine, in that they meet the logical requirements; it's a pretty typical queue scenario.
    Now, coming to the problem statement: since the queue is expected to be heavily loaded most of the time, I am pushing to speed up the overall message lifetime. The faster I can clear messages, the better the overall experience for everyone, system and users.
    To improve performance I ran multiple profiling cycles, then improved the identified "HOT" paths/functions.
    It all came down to the point where the Azure Queue pull and delete are the two most time-consuming calls. I improved the pull by batch-pulling 32 messages at a time (the maximum message count that can be pulled from an Azure queue at once, at the time of writing this question), which did me a favor by reducing processing time by a big margin. All good up to this point as well.
    I am processing these messages in parallel to improve overall performance.
    pseudo code:
    // AzureQueue class encapsulates calls to Azure Storage Queue.
    // Assume nothing fancy inside: vanilla calls to the queue for pull/push/delete.
    var batchMessages = AzureQueue.Pull(32);
    Parallel.ForEach(batchMessages, bMessage =>
    {
        try
        {
            // DoSomething does some background processing
            DoSomething(bMessage);
        }
        catch (Exception ex)
        {
            // Log exception
        }
        AzureQueue.Delete(bMessage);
    });
    With this change, profiling results show that up to 90% of the time is taken by the Azure message delete calls alone. Since it is good to delete a message as soon as processing is done, I remove it right after "DoSomething" finishes.
    What I need now are suggestions on how to further improve the performance of this function when 90% of the time is eaten up by the Azure Queue delete call itself. Is there a better, faster way to perform delete/bulk delete, etc.?
    With the implementation mentioned here I get a speed of close to 25 messages/sec. Right now Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
    Does it also make a difference to performance which queue delete overload I am calling? As of now the queue has an overloaded method for deleting a message: one that accepts a message object, and another that accepts a message identifier and pop receipt. I am using the latter here, with message identifier and pop receipt, to delete the message from the queue.
    Let me know if you need any additional information or any clarification in question.
    Inputs/suggestions are welcome.
    Many thanks.

    The first thing that came to mind was to run a parallel delete at the same time you run the work in DoSomething. If DoSomething fails, add the message back into the queue. This won't work for every application, and work that was near the head of the queue could be pushed back to the tail, so you'd have to think about how that may affect your workload (see the sketch below).
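    Here's a minimal sketch of that idea, written against the AzureQueue wrapper and DoSomething from your pseudo code (so Pull/Push/Delete are your assumed wrapper calls, not a real SDK surface):

    using System;
    using System.Threading.Tasks;

    static class ParallelDeleteSketch
    {
        public static void ProcessBatch()
        {
            var batch = AzureQueue.Pull(32);
            Parallel.ForEach(batch, msg =>
            {
                // Kick off the delete immediately so it overlaps the processing.
                var delete = Task.Run(() => AzureQueue.Delete(msg));
                try
                {
                    DoSomething(msg); // runs while the delete is in flight
                }
                catch (Exception)
                {
                    // Log, then compensate: the delete may already have succeeded,
                    // so push the message back (it re-enters at the tail).
                    AzureQueue.Push(msg);
                }
                delete.Wait(); // surface any delete failure before moving on
            });
        }

        static void DoSomething(QueueMessage m) { /* background processing placeholder */ }

        // Stubs standing in for the question's wrapper; assumed, not real SDK calls.
        class QueueMessage { }
        static class AzureQueue
        {
            public static QueueMessage[] Pull(int max) => new QueueMessage[0];
            public static void Delete(QueueMessage m) { }
            public static void Push(QueueMessage m) { }
        }
    }

    If the work takes longer than the delete, the final Wait costs nothing, since the delete has already completed by then.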
    Or, make a threadpool-queued delete after the work is successful. Fire and forget. However, if you're loading the processing at 25/sec, and 90% of the time sits on the delete, you'd quickly accumulate delete calls for the threadpool until you'd never catch up. At 70-80% duty cycle this may work, but the closer you get to always being busy, the more dangerous this becomes (sketched below).
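    One possible guard against exactly that runaway backlog is to push completed messages into a bounded buffer drained by a few dedicated delete workers, so producers block briefly instead of falling behind forever once the buffer fills. Again, AzureQueue and the message type below are stand-ins for the question's wrapper, not a real SDK:

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class DeleteOffloader
    {
        // Bounded so the delete backlog cannot grow without limit:
        // Add() blocks once 256 deletes are pending.
        private readonly BlockingCollection<QueueMessage> _pending =
            new BlockingCollection<QueueMessage>(boundedCapacity: 256);

        public DeleteOffloader(int workers)
        {
            for (int i = 0; i < workers; i++)
                Task.Factory.StartNew(Drain, TaskCreationOptions.LongRunning);
        }

        // Called right after processing succeeds; returns immediately
        // unless the buffer is full.
        public void Enqueue(QueueMessage m) => _pending.Add(m);

        private void Drain()
        {
            foreach (var m in _pending.GetConsumingEnumerable())
                AzureQueue.Delete(m); // the assumed wrapper call from the question
        }
    }

    // Stubs standing in for the question's wrapper types.
    class QueueMessage { }
    static class AzureQueue
    {
        public static void Delete(QueueMessage m) { }
    }

    With this in place, after DoSomething(bMessage) succeeds you would call offloader.Enqueue(bMessage) instead of AzureQueue.Delete(bMessage).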
    I wonder if calling the delete REST API yourself may offer any improvement. If you find the delete sets up a TCP connection each time, this may be all you need: try to keep the connection open, or see if the REST API can delete more at a time than the SDK API can.
    Or, if you have the funds, just have more VM instances doing the work in parallel, so the first machine handles 25/sec, the second at 25/sec also - and you just live with the slow delete.  If that's still not good enough, add more instances.
    Darin R.

  • How can we improve performance while selecting production orders from RESB

    Dear all,
    There is a performance issue in a report which compares sales orders and production orders.
    Below is the code; the slow part is reading production order data from RESB with the second SELECT statement.
    Can anybody tell me how we can improve the performance? Should we use an index, and if yes, how?
    *read sales order data
      SELECT vbeln posnr arktx zz_cl zz_qty
      INTO (itab-vbeln, itab-sposnr, itab-arktx, itab-zz_cl, itab-zz_qty)
      FROM vbap
      WHERE vbeln  = p_vbeln
      AND   uepos  = p_posnr.
        itab-so_qty = itab-zz_cl * itab-zz_qty / 1000.
        CONCATENATE itab-vbeln itab-sposnr
           INTO itab-document SEPARATED BY '/'.
        CLEAR total_pro.
    * read production order data
        SELECT aufnr posnr roms1 roanz
        INTO (itab-aufnr, itab-pposnr, itab-roms1, itab-roanz)
        FROM resb
        WHERE kdauf  = p_vbeln
        AND   ablad  = itab-sposnr+2.

    Himanshu,
    Put a breakpoint before these two SELECT statements and execute in production. This way you will find out which SELECT statement takes the most time to execute.
    In both SELECT statements the WHERE clause does not use the primary keys.
    For selecting the data from VBAP, check SAP Note 185530 and modify the SELECT statement accordingly.
    As far as table RESB is concerned, here too the WHERE clause doesn't use the primary keys; check SAP Note 187906.
    I guess not using primary keys is what is marring the performance.
    K.Kiran.

  • Error while assigning group

    Hi friends,
      1. I am getting the following error when assigning a group to a user:
      An error occurred while adding group assignments; to see the correct status, perform a new assigned groups search
      2. When uploading the template it also throws a java.null exception. The template contains user name, password, roles, mail_id, group. After uploading the template all of the above information is created except the group: when I search for the user created from the uploaded template, the group is not assigned.
      I need to send the users list ASAP with groups assigned, but it's giving problems.
      Can anyone advise me on this?
      Thanks & regards
      Sireesha.

    Hello Sireesha,
    Have you followed the standard format while uploading the text file? If not, use the format described here when preparing the groups and users:
    http://help.sap.com/saphelp_nw04/helpdata/en/ae/7cdf3dffadd95ee10000000a114084/content.htm
    Rgds
    Pradeep

  • Issue while adding DB resources to the fail safe group

    Hi,
    We are installing ERP 6.0 EHP4, Oracle 10.2.0.4, in MSCS.
    During the step "Adding the Oracle Database Resource to the fail safe group", I am getting the error below.
    28 13:21:57 ** WARNING : FS-10288: Parameter file C:\oracle\BCP\102\database\init<SID>_OFS.ora is not located on a cluster disk
    29 13:21:57 ** WARNING : FS-10404: The database uses a nonclustered disk in one of the system parameters. Value of parameter is C:\ORACLE\<SID>\102\RDBMS\AUDIT
    30 13:21:58 ** ERROR : FS-10036: The resource uses disk SAP HDD, which is also used by cluster resource SAP VIP in another group
    31 13:21:58 ** ERROR : FS-10778: The Oracle Database resource provider failed to configure the cluster resource <SID>.WORLD
    32 13:21:58 ** ERROR : FS-10890: Oracle Services for MSCS failed during the add operation
    33 13:21:58 ** ERROR : FS-10497: Starting clusterwide rollback of the operation
    34 13:21:58 FS-10488:<primary node name> : Starting rollback of operation
    35 13:21:58 > FS-10090: Rolling back Oracle Net changes on node <primary node name>
    I have one local disk C:, one shared disk Z:, and quorum disk Q:.
    Shared disk Z: is already used for the SAP<sid> group.
    Regards
    Joel

    No crossposting please!
    Error while adding oracle database resource to the fail safe group

  • How can we improve the performance while fetching data from RESB table.

    Hi All,
    Can anybody suggest the right way to improve performance while fetching data from the RESB table? Below is the SELECT statement.
    SELECT aufnr posnr roms1 roanz
        INTO (itab-aufnr, itab-pposnr, itab-roms1, itab-roanz)
        FROM resb
        WHERE kdauf  = p_vbeln
        AND   ablad  = itab-sposnr+2.
    Here I am using 'KDAUF' & 'ABLAD' in the condition. Can we use a secondary index to improve performance in this case?
    Regards,
    Himanshu

    Hi,
    Declare an internal table with only those four fields and try the below code:
    SELECT aufnr posnr roms1 roanz
    INTO TABLE itab
    FROM resb
    WHERE kdauf = p_vbeln
    AND ablad = itab-sposnr+2.
    Yes, you can also use a secondary index to improve performance in this case.
    Regards,
    Anand .
    Reward if it is useful....

  • Improving Performance of Group By clause

    I'm a developer who needs to create an aggregate, or roll-up, of a large table (tens of millions of rows) using a GROUP BY clause. One of the several items I am grouping by is a numeric column called YEAR. My DBA recommended I create an index on YEAR to improve the performance of the GROUP BY clause. I read that indexes are only used when referenced in the WHERE clause, which I do not need. Will my DBA's recommendation help? Can you recommend a technique? Thank you.

    When you select millions of rows, grouped or not, the database has to fetch each of them, so an index on the group column isn't useful.
    If you have a performance problem that cannot be solved through an index on columns used in your WHERE clause, perhaps a materialized view with the dimension(s) of your GROUP BY clause will help.

  • Multiple log groups per thread to improve performance with high redo writes

    I am reading Pro Oracle 10g RAC on Linux (good book). On p.35 the authors state that they recommend 3-5 redo log groups per thread if there is a "large" amount of redo.
    How does having more redo log groups improve performance? Does Oracle parallelize the writes?

    Redo logs are configured per instance. From experience, you need at least 3 redo log groups per thread to allow switchover and give archiving sufficient time to complete before the first redo log group is reused. When there is heavy redo log activity, the groups will switch more often, and it is important that archiving has completed before an existing redo log group is reused, or else the database/instance may hang.
    I think that is what the author is referring to: have sufficient redo log groups (based on the activity of your environment) to allow switching while leaving enough time for archiving to complete.

  • How to improve the load performance while using Datasources for the Invoice

    Hi All,
    How can I improve the load performance while using DataSources for the invoice? My invoice load (approx. 0.4 M records) is taking a very long time, nearly ~16 to 18 hrs, to update data from R/3 to 0ASA_DS01.
    If I load through a flat file it loads within ~20 min for the same amount of data.
    Please suggest how to improve load performance.
    PS: I have done the InfoPackage settings as per the OSS note.
    Regards,
    Srininivasarao.Namburi.

    Hi Srinivas,
    Please refer to my blog posting /people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction, which gives the details about the package size setting for extractors. I am sure that will be helpful in your case.
    Thanks,
    Divyesh

  • While browsing cube data in Excel, the circle pointer starts to spin and Excel goes into a not-responding state. Any recommendations to improve performance in Excel?

    Hi,
    While browsing cube data in Excel, the circle pointer starts to spin and Excel goes into a not-responding state. Any recommendations to improve performance in Excel?
    I have 20 measures and 8 dimensions.
    Filtering data by using dimensions in Excel takes a lot of time.
    Example: I browsed 15 measures in Excel and filtered the data based on time (quarter-wise) and another dimension, product. It takes a long time to get the data.
    Can you please help with this issue?
    Regards,
    Samba

    Hi Samba,
    What are the versions of your Office Excel and SQL Server Analysis Services? It would be helpful if you could share detailed information about the computer's resources when you encounter this issue.
    In addition, we don't know your cube structure and the underlying relationships, but you can take a look at the following articles to troubleshoot the performance issue:
    Improving Excel's Cube Performance:
    http://richardlees.blogspot.com/2010/04/improving-excels-cube-performance.html
    Excel Against a SSAS Cube Slow: Performance Improvement Tips:
    http://www.msbicentral.com/Blogs/tabid/131/articleType/ArticleView/articleId/136/Excel-Against-a-SSAS-Cube-Slow-Performance-Improvement-Tips.aspx
    Regards, 
    Elvis Long
    TechNet Community Support

  • What is the default group id/home/shell while adding a new account with useradd without specifying these parameters?

    What is the default group id/home/shell while adding a new account with useradd without specifying these parameters?
    Regards

    Hi,
    You can check the default values in the file below:
    /usr/sadm/defadduser
    and with this command:
    #useradd -D

  • Error while adding LDAP group

    Hi, I configured LDAP authentication on BOXI R2 SP3 on IIS. The settings are as given below; to change a setting, you click on the value to start the LDAP Configuration Wizard. I have replaced a few entries with XXXX and YYYY for security.
    LDAP Hosts: nccXXX.XXX.YYYY.XX.YY:636
    LDAP Server Type: Novell eDirectory
    Base LDAP Distinguished Name: ou=XXXXX,dc=YY
    LDAP Server Administration Distinguished Name: cn=XXX,o=YYYYY
    LDAP Referral Distinguished Name: ""
    Maximum Referral Hops: 0
    SSL Type: Server Authentication
    Server Side SSL Strength: Always accept server certificate
    Single Sign On Type: None
    When I add any new group it is not added, and I get the below error message in the logging directory for the WCA.
    Error: 2009-08-24 14:56:30, Thread:161, WriteData::_Flush catch unexcepted exception, source: System.Web, message: Specified argument was out of the range of valid values.
    Parameter name: offset, stack:    at System.Web.HttpResponseStream.Write(Byte[] buffer, Int32 offset, Int32 count)
       at BusinessObjects.Enterprise.WebComponentAdapter.WriteData._Flush(IntPtr handle)
    Can anyone help me verify whether LDAP is configured correctly before adding the group?
    Thanks,

    Resolved. It was due to the wrong LDAP group being given to me.
    Thanks,

  • How to improve slow PowerPivot performance when adding/modifying measures, calculated columns or Relationships?

    I have been using PowerPivot for a couple of months now, and whilst it is extremely quick when pulling in data to populate pivot tables, it is extremely slow to make the following kinds of changes to the Data Model:
    - Add a Measure / Calculated Field
    - Add a Calculated Column
    - Rename a Calculated Field
    - Rename a Calculated Column
    - Modify a relationship
    - Change a table's properties
    - Update a table
    In the status bar of Excel I get a very quick 'calculating', then it spends a lot of time 'reading data', then it 'finalises', after which nothing is in the status bar but it still takes approx. 45 seconds before the program becomes responsive again. This waiting time does not change depending on the action; it is the same if I rename a column as it is if I add a new measure.
    My question is what affects performance of these actions and how do I improve it?
    To give you an idea of where my data comes from, I have:
    - 7 tables that feed into the Data Model directly from within the workbook which contains the data model itself. These are a combination of static tables and tables that connect to a MySQL database.
    - 6 separate workbooks which contain static data that is updated manually periodically (copied and pasted from another source)
    - 5 separate workbooks which contain dynamic tables that are linked to our MySQL database and update when opened.
    Now I realise that this is probably where my issue is; however, I have no idea how to fix it. You do not seem to be able to connect to a MySQL database directly within the PowerPivot window itself, so there is no way to generate and update tables without first creating them either in a worksheet or a separate workbook (as far as I know). If I try to create all of the tables directly within the single workbook containing the Data Model, I get performance and crashing issues, hence why I separate tables into individual workbooks.
    Any advice on how to improve performance would be tremendously appreciated. I'm new and keen to learn, I'm aware this set-up is far from best practice.
    Hardware wise I am using:
    - Windows 8 64-bit
    - Excel 2013 64-bit
    - Intel Core i7 processor
    - 6 GB Ram
    Thanks,
    James

    Darren,
    I think the point I was making is that it's in memory, geez... BTW, what do all applications do when they run out of paged memory? If PowerPivot is using all available memory, wouldn't this force the other applications to use virtual memory, essentially writing back and forth to the disks? I think virtual memory writes to disk??, lol. Also, there are parts of the architecture of Excel 2013 that require memory when importing data into PowerPivot, and when working in SharePoint the PowerPivot data is cached to disk unless recently refreshed... But this conversation isn't helping James, who asked the question, and as much as I would love to continue, it has become a little boring.
    Hi James,
    If you download one of the ODBC MySQL Connectors (http://dev.mysql.com/downloads/connector/odbc/ - I believe yours is the first one listed for x64 systems) and connect directly to the data, you should be able to reduce the number of workbooks you're opening. These connections are automatically refreshed by default; the parts shown in red in the original post's graphic (not reproduced here) are the differences between PowerPivot 2010 and 2013.
    You should notice a lot of improvement, especially when refreshing data. Please let us know how it goes.
    After registering the ODBC driver:
    Click Add on the User DSN sheet, choose the "MySQL ODBC 5.x driver", fill in the credentials, choose a database (from the select menu) and a data source name, and you're done.
    Back in Excel, go to the PowerPivot section of the ribbon and open the PowerPivot window (the green icon on the left side). In the 'Home' section of that window you will see a small gray cylindrical symbol (the international symbol for "database") which suggests different data sources to choose from. Take the one that says "ODBC".
    In the next dialog click Create, choose the adapter, and then OK. Back in the assistant you can check the connection and proceed.
    Now you have the choice between importing the data from tables using the import assistant or a query, depending on your skill set.
    Cheers,
    Ivan
    Ivan Sanders - LinkedIn: http://www.linkedin.com/in/iasanders | Blog: http://msmvps.com/blogs/ivansanders | Twitter: @iasanders | BI in SP2013: http://shop.oreilly.com/product/0790145372703.do | SP2013 Content Packs: http://sharepointdemobuilds.codeplex.com

  • 'No detailed information exists for this person'  while adding new reviewer

    Hi All,
    This is an extremely urgent issue.
    We are getting the following message in SRM while adding a new approver to the shopping cart: 'No detailed information exists for this person'.
    We have implemented the following OSS Notes in the SRM 5.5 system, Release 700, after raising a call with SAP.
    SAP Pilot OSS Note - 1002957
    Other OSS Notes -
    999528 Adding Ad Hoc Approver fails due to comit erro...
    722357 Ad hoc: Commit control with ad hoc enhancement...
    990467 Commit Problem with AdHoc Agents...
    984582 Problems with Posting Container Data (ad Hoc agent...
    947296 Inserted/changed approver is not transferred...
    910925 SRM Workflow: Ad-hoc agent not found...
    803628 BADI workflow: You cannot add an approver
    Here are the details:
    Cannot add additional approver
    We are now on SRM 5.5, and a document attached shows the patch levels from SPAM.
    In the shop transaction it is possible to change the default approvers, but not possible to add.
    When we add, the system shows the list of available approvers and allows selection, but the "transfer" button does not actually transfer.
    When the cart is saved, the added approver shows as "no approval available".
    We are using the standard workflows for 2-step approval.
    A document is attached to show what happens and the results of the checks we have performed.
    BUS2121 also shows as OK when checked.
    Any pointers on what the possible reasons could be?
    Points for sure!!
    Regards
    - Atul

    Hi Atul,
    Here is the list.
          |-----     0000702787 EBP 4.0: Message "Enter a country"
          |-----     0000732828 E-mail messages for work items with incorrect line break
          |-----     0000735026 Memory problems at BBP_GETLIST_INDEX_FILL
          |-----     0000804317 SRM40/SUS/PO: PO inbound with error BBP_PD 147
          |-----     0000862851 PO not ordered due to different locations error
          |-----     0000902956 Changing Account assignment category BE Limit service PO
          |-----     0000911446 No reservation is created
          |-----     0000917869 SRM 5.0: PPOMV_BBP - Vendor address tab page
          |-----     0000919553 SRM 5.0: Termination of output preview for invalid partner
          |-----     0000920363 Shopping cart transfer: Sourcing indicator is not set
          |-----     0000924218 SRM 5.0: wrong error message for non existant auth. PurchOrg
          |-----     0000924811 SRM5.0: Javascript error on document screen
          |-----     0000926750 Shopping cart: Unit of measure is not restored
          |-----     0000926878 Link for expanding details in the shopping cart too small
          |-----     0000927142 SRM5.0: Browser hangs up during attachment upload
          |-----     0000938168 SRM50: Message "No default value from vendor master"
          |-----     0000938296 JavaScript runtime error in the bid invitation
          |-----     0000940805 BBPS_SC_APP_SOS cannot be enhanced
          |-----     0000941935 External requirements displayed as a reference
          |-----     0000942742 Update is cancelled if new source of supply is found
          |-----     0000944592 No location when vendor has changed
          |-----     0000944918 ECS: Find back end purchasing group as a responsible group
          |-----     0000945601 Termination when you display purchase order item detail
          |-----     0000948425 Shopping basket changeable although already approved
          |-----     0000953238 Shopping cart history in the sourcing cockpit
          |-----     0000954066 Follow-On Documents Tab does not get Highlighted
          |-----     0000954409 Error message during approval of a confirmation
          |-----     0000955377 Long runtimes for workflows with blocks
          |-----     0000956108 Empty worklist after clicking button 'Find'
          |-----     0000956723 BBPMAININT:Bus. partner display switches to maintenance mode
          |-----     0000956797 RUSRM50: Limit Purchase Order Functional Area Error Message
          |-----     0000957884 Category Management: Data disappears after program SAVE
          |-----     0000958045 BBPSC04: Delete icon not hidden despite screen variant
          |-----     0000958349 Final delivery with quantity 0
          |-----     0000959528 During PO creation wrong error message BBP_PD 428
          |-----     0000960169 Favorites not saved for 'Purchase for'
          |-----     0000961422 BBP_MON_SC: No search help for 'Created By'
          |-----     0000962093 BBP_VENDOR_CREATE: Refresh BP data
          |-----     0000962252 BBP_GETLIST_INDEX_FILL: Incorrect processing statistics
          |-----     0000962474 Error after changing partner data
          |-----     0000964132 Approval Preview for Rejected Items: Wrong User's Inbox
          |-----     0000964361 Cannot find employee of vendor
          |-----     0000964455 Country for the PO box is initial
      |-----     0000964911 Copying alert categories: container elements disappear
          |-----     0000966072 Download of material fails - set type COMM_PR_UNIT
          |-----     0000966323 Service Item: Not able to create PO in ERP backend
          |-----     0000967429 Approval buttons not updated after release
          |-----     0000967615 Performance of BBP_PD_PO_GETDETAIL may be poor
          |-----     0000968110 Transfer canceled, account assignment acc. to value/quantity
          |-----     0000968251 BBPSC02:In source of supply preferred vendor not validated
          |-----     0000969091 SRM does not support passwords longer than 8 characters
          |-----     0000982293 Ext requirements: Purchasing group data not transferred
          |-----     0000982448 Impossibility to delete an item in PO
          |-----     0000982674 Change PO quantity does not update account assign quantity
          |-----     0000983130 Search for back-end purchase orders using the requester name
          |-----     0000984184 BADI BBP_ALERTING active but its implementation isn't called
          |-----     0000985179 Delete button active for backend service entry sheets
          |-----     0000990264 Multiple confirmations cannot be posted for service & limit
          |-----     0000996104 No catalog when confirming back-end blanket purchase orders
          |-----     0001003408 BBPCF: Cannot Add Catalog Item to Confirmation for Limit PO
      |-----     0001010791 Flag Confirmation based Invoice Verification set erroneous
    Kind regards,
    Yann

  • A64 Tweaker and Improving Performance

    I noticed a little utility called "A64 Tweaker" being mentioned in an increasing number of posts, so I decided to track down a copy and try it out...basically, it's a memory tweaking tool, and it actually is possible to get a decent (though not earth-shattering by any means) performance boost with it.  It also lacks any real documentation as far as I can find, so I decided to make a guide type thing to help out users who would otherwise just not bother with it.
    Anyways, first things first, you can get a copy of A64 Tweaker here:  http://www.akiba-pc.com/download.php?view.40
    Now that that's out of the way, I'll walk through all of the important settings, minus Tcl, Tras, Trcd, and Trp, as these are the typical RAM settings that everyone is always referring to when they go "CL2.5-3-3-7", so information on them is widely available, and everyone knows that for these settings, lower always = better. Note that for each setting, I will list the measured change in my SiSoft Sandra memory bandwidth score over the default setting. If a setting produces a change of < 10 MB/sec, its effects will be listed as "negligible" (though note that it still adds up, and a setting that has a negligible impact on throughput may still have an important impact on memory latency, which is just as important). As for the rest of the settings (I'll do the important things on the left hand side first, then the things on the right hand side... the things at the bottom are HTT settings that I'm not going to muck with):
    Tref - I found this setting to have the largest impact on performance out of all the available settings.  In a nutshell, this setting controls how your RAM refreshes are timed...basically, RAM can be thought of as a vast series of leaky buckets (except in the case of RAM, the buckets hold electrons and not water), where a bucket filled beyond a certain point registers as a '1' while a bucket with less than that registers as a '0', so in order for a '1' bucket to stay a '1', it must be periodically refilled (i.e. "refreshed").  The way I understand this setting, the frequency (100 MHz, 133 MHz, etc.) controls how often the refreshes happen, while the time parameter (3.9 microsecs, 1.95 microsecs, etc.) controls how long the refresh cycle lasts (i.e. how long new electrons are pumped into the buckets).  This is important because while the RAM is being refreshed, other requests must wait.  Therefore, intuitively it would seem that what we want are short, infrequent refreshes (the 100 MHz, 1.95 microsec option).  Experimentation almost confirms this, as my sweet spot was 133 MHz, 1.95 microsecs...I don't know why I had better performance with this setting, but I did.  Benchmark change from default setting of 166 MHz, 3.9 microsecs: + 50 MB/sec
    Trfc - This setting offered the next largest improvement...I'm not sure exactly what this setting controls, but it is doubtless similar to the above setting.  Again, lower would seem to be better, but although I was stable down to '12' for the setting, the sweet spot here for my RAM was '18'.  Selecting '10' caused a spontaneous reboot.  Benchmark change from the default setting of 24:  +50 MB/sec
    Trtw - This setting specifies how long the system must wait after it reads a value before it tries to overwrite the value. This is necessary due to various technical aspects related to the fact that we run superscalar, multiple-issue CPUs that I don't feel like getting into, but basically, smaller numbers are better here. I was stable at '2'; selecting '1' resulted in a spontaneous reboot. Benchmark change from default setting of 4: +10 MB/sec
    Twr - This specifies how much delay is applied after a write occurs before the new information can be accessed.  Again, lower is better.  I could run as low as 2, but didn't see a huge change in benchmark scores as a result.  It is also not too likely that this setting affects memory latency in an appreciable way.  Benchmark change from default setting of 3:  negligible
    Trrd - This controls the delay between a row address strobe (RAS) and a second row address strobe. Basically, think of memory as a two-dimensional grid... to access a location in a grid, you need both a row and column number. The way memory accesses work is that the system first asserts the column that it wants (the column address strobe, or CAS), and then asserts the row that it wants (row address strobe). Because of a number of factors (prefetching, block addressing, the way data gets laid out in memory), the system will often access multiple rows from the same column at once to improve performance (so you get one CAS, followed by several RAS strobes). I was able to run stably with a setting of 1 for this value, although I didn't get an appreciable increase in throughput. It is likely, however, that this setting has a significant impact on latency. Benchmark change from default setting of 2: negligible
    Trc - I'm not completely sure what this setting controls, although I found it had very little impact on my benchmark score regardless of what values I specified.  I would assume that lower is better, and I was stable down to 8 (lower than this caused a spontaneous reboot), and I was also stable at the max possible setting.  It is possible that this setting has an effect on memory latency even though it doesn't seem to impact throughput.  Benchmark change from default setting of 12:  negligible
    Dynamic Idle Cycle Counter - I'm not sure what this is, and although it sounds like a good thing, I actually posted a better score when running with it disabled. No impact on stability either way. Benchmark change from default setting of enabled: +10 MB/sec
    Idle Cycle Limit - Again, not sure exactly what this is, but testing showed that both extremely high and extremely low settings degrade performance by about 20 MB/sec.  Values in the middle offer the best performance.  I settled on 32 clks as my optimal setting, although the difference was fairly minimal over the default setting.  This setting had no impact on stability.  Benchmark change from default setting of 16 clks:  negligible
    Read Preamble - As I understand it, this is basically how much of a "grace period" is given to the RAM when a read is asserted before the results are expected.  As such, lower values should offer better performance.  I was stable down to 3.5 ns, lower than that and I would get freezes/crashes.  This did not change my benchmark scores much, though in theory it should have a significant impact on latency.  Benchmark change from default setting of 6.0 ns:  negligible
    Read Write Queue Bypass - Not sure what it does, although there are slight performance increases as the value gets higher.  I was stable at 16x, though the change over the 8x default was small.  It is possible, though I think unlikely, that this improves latency as well.  Benchmark change from default setting of 8x:  negligible
    Bypass Max - Again not sure what this does, but as with the above setting, higher values perform slightly better.  Again I feel that it is possible, though not likely, that this improves latency as well.  I was stable at the max of 7x.  Benchmark change from the default setting of 4x:  negligible
    Asynch latency - A complete mystery.  Trying to run *any* setting other than default results in a spontaneous reboot for me.  No idea how it affects anything, though presumably lower would be better, if you can select lower values without crashing.
    ...and there you have it. With the tweaks mentioned above, I was able to gain +160 MB/sec on my Sandra score, +50 on my PCMark score, and +400 on my 3DMark 2001 SE score. Like I said, not earth-shattering, but a solid performance boost, and it's free to boot. Settings that I felt had no use in tweaking the RAM for added performance, or which are self-explanatory, have been left out. The above tests were performed on Corsair XMS PC4000 RAM @ 264 MHz, CL2.5-3-4-6 1T.

    Quote
    Hm...I wonder which one is telling the truth, the BIOS or A64 tweaker.
    I've wondered this myself. From my understanding it's the next logical step from the WCREDIT programs. I understand how a clock gen can misreport frequency, because it's probably not measuring the frequency itself but rather a mathematical representation of a few numbers it's gathered and one clock frequency (HTT maybe?), and the unsupported dividers mess up the math... but I think the tweaker just extracts hex values straight from the registers and displays them in "English". I mean, it could be wrong, but seeing how I've watched the BIOS on the SLI Plat change the memory timings in the POST screen to values other than SPD when it's on Auto with aggressive timings disabled, I actually want to side with A64 Tweaker in this case.
    Hey, anyone know what Tref in A64 Tweaker relates to in the BIOS? i.e. 200 1.95us = what in the BIOS? 1x4028, 1x4000 - I'm just making up numbers here, but it's different than 200 1.95. Last time I searched I didn't find anything. Well, I found a lot, but not what I wanted...
