Removing unnecessary components in RPD to improve performance

We have a large RPD file that is over 50 MB, which has made saving and making any changes terribly slow.
Is there an easy way to remove components that we do not use? Whoever set up our development environment installed a lot of unnecessary analytics, such as Sales and Pharma.

Thanks for informing me about that utility; I did not know about it. But it appears that it does not find the objects I'm trying to remove.
For example, during the installation we included several out-of-the-box analytics and subject areas/objects that we do not want (e.g. we have 30 subject areas for Pharma that we want to remove). Given that 'Remove Unused Physical Objects' does not allow us to remove these subject areas and objects, is there another way I can get rid of them?
I'm tempted to just delete them, but I'm afraid it might affect dependent objects.

Similar Messages

  • Remove unnecessary SD orders in SCM system

    Hello,
    To remove unnecessary Sales Orders regularly in the SCM system I found a program, /SAPAPO/SDORDER_DEL. But I am unable to figure out the parameters to be used when I execute this program. Can someone please let me know if there is a document which explains the fields in the tab of the program /SAPAPO/SDORDER_DEL:
    1. Delete on database
    2. Delete in SAP livecache and in Database
    3. Delete All DB Sales order data
    Suggestions are much appreciated.
    Thanks

    Hi,
    This is the Program Documentation available in system
    Short text
    ATP: Delete (SD) Orders From Database
    Purpose
    You use this report to delete the SD sales document data from the database. Normally, the order data remains in the database (it is not deleted as in liveCache).
    In order to limit the data volume, you should schedule the report to delete the orders, if necessary, and it should be dependent on the lifetime of your orders.
    Example
    For example, a sales order is completed in six weeks. Such a sales order would be archived in SAP R/3 but such an archiving concept does not exist in SAP APO. You use this report to remove the sales order from the database.
    Integration
    A location product or a location can only be deleted if SD sales documents for this location product no longer exist in the database (that is, if, for example, no more entries exist in the tables /SAPAPO/ORDADM_I or /SAPAPO/ORDPART).
    Features
    A check is normally carried out before an order is deleted to see if this order still exists in liveCache. If this is the case, the order is not deleted.
    You can also print out a list of the orders that cannot be deleted if you have set the relevant indicator.
    In exceptional cases, you can deactivate this check and nevertheless still delete the selected orders.
    If you have set the Delete all orders indicator, the order data from the database is completely deleted without any check. You should take great care when using this function.
    Note
    If you have deleted orders that are still being used for business purposes, you must start a new initial data supply or, if you know the orders, you must perform a further ATP check to transfer the data from SAP R/3 to SAP APO.

  • How do I improve performance while doing pull, push and delete from Azure Storage Queue

    Hi,
    I am working on a distributed application with Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, so it is expected to be heavily loaded most of the time. The business case is typical: pull a message from the queue, process the message, then delete the message from the queue. The module also sends back a notification to the user indicating that processing is complete. The functions/modules work fine, in that they meet the logical requirement. A pretty typical queue scenario.
    Now, coming to the problem statement. Since it is envisaged that the queue will be heavily loaded most of the time, I am pushing to speed up the overall message lifetime. The faster I can clear messages, the better the overall experience for everyone, system and users.
    To improve performance I ran multiple cycles of performance profiling and then improved the identified "HOT" paths/functions.
    It all came down to the point where the Azure Queue pull and delete are the two most time-consuming calls. I improved the pull by batch-pulling 32 messages at a time (the maximum message count that can be pulled from an Azure queue at once, at the time of writing this question), which paid off by reducing processing time by a big margin. All good up to this point as well.
    I am processing these messages in parallel so as to improve overall performance.
    Pseudo code:
        // AzureQueue class encapsulates calls to Azure Storage Queue.
        // Assume nothing fancy inside; vanilla calls to the queue for pull/push/delete.
        var batchMessages = AzureQueue.Pull(32);   // batch-pull up to 32 messages
        Parallel.ForEach(batchMessages, bMessage =>
        {
            try
            {
                // DoSomething does some background processing
                DoSomething(bMessage);
            }
            catch (Exception)
            {
                // Log exception
            }
            AzureQueue.Delete(bMessage);           // delete as soon as processing is done
        });
    With this change, profiling results show that up to 90% of the time is taken by the Azure message delete calls alone. Since it is good to delete a message as soon as processing is done, I remove it right after "DoSomething" finishes.
    What I need now are suggestions on how to further improve the performance of this function when 90% of the time is being eaten up by the Azure Queue delete call itself. Is there a better, faster way to perform delete/bulk delete, etc.?
    With the implementation mentioned here, I get a speed of close to 25 messages/sec. Right now the Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
    Does it also make a difference in performance which queue delete call I am making? As of now the queue has an overloaded method for deleting a message: one which expects a message object and another which accepts a message identifier and pop receipt. I am using the latter one here, with message identifier and pop receipt, to delete the message from the queue.
    Let me know if you need any additional information or any clarification of the question.
    Inputs/suggestions are welcome.
    Many thanks.

    The first thing that came to mind was to run the delete in parallel at the same time you run the work in DoSomething, and if DoSomething fails, add the message back into the queue. This won't work for every application, and work that was near the head of the queue could be pushed back to the tail, so you'd have to think about how that may affect your workload.
    Or, make a threadpool-queued delete after the work was successful. Fire and forget. However, if you're processing at 25/sec and 90% of the time sits on the delete, you'd quickly accumulate delete calls for the threadpool until you'd never catch up. At a 70-80% duty cycle this may work, but the closer you get to always being busy, the more dangerous this becomes.
    I wonder if calling the delete REST API yourself may offer any improvements. If you find the delete sets up a TCP connection each time, this may be all you need. Try to keep the connection open, or see if the REST API can delete more at a time than the SDK API can.
    Or, if you have the funds, just have more VM instances doing the work in parallel, so the first machine handles 25/sec, the second 25/sec as well - and you just live with the slow delete. If that's still not good enough, add more instances.
    Darin R.
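
    To make the threadpool idea concrete, here is a minimal Java-flavored sketch of the fire-and-forget delete (the thread's code is C#, but the shape is the same; Message and AzureQueue below stand in for the poster's pseudocode wrapper and are hypothetical). The bounded queue plus CallerRunsPolicy is one way to avoid the unbounded pile-up warned about above: when deletes back up, the submitting thread is forced to run the delete itself, which naturally throttles intake.

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.ThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        // Hypothetical stand-ins for the poster's wrapper; swap in the real SDK
        // call (e.g. the delete overload taking message identifier + pop receipt).
        class Message {}
        class AzureQueue {
            static void delete(Message m) { /* real queue delete goes here */ }
        }

        public class QueueDeleter {
            // Fixed pool of delete workers with a bounded backlog. If the backlog
            // fills up, CallerRunsPolicy makes the submitting (processing) thread
            // run the delete itself, so pending deletes can never grow without bound.
            private final ThreadPoolExecutor deletePool = new ThreadPoolExecutor(
                    8, 8, 0L, TimeUnit.MILLISECONDS,
                    new ArrayBlockingQueue<>(1024),
                    new ThreadPoolExecutor.CallerRunsPolicy());

            // Called right after DoSomething succeeds; returns immediately.
            public void deleteAsync(Message m) {
                deletePool.submit(() -> AzureQueue.delete(m));
            }
        }

    The pool and backlog sizes are arbitrary; tune them so that under normal load deletes drain faster than they arrive, leaving CallerRunsPolicy as the safety valve rather than the steady state.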

  • Remove unnecessary field in Web UI - not applied

    Hello, dear Experts!
    I'm trying to remove unnecessary fields from the contact view page in the Web UI.
    I performed the following steps:
    1. created a new config role key ZTMP
    2. created a new business role and bound it to my config role key
    3. assigned the business role in the organizational structure
    4. copied the <default> configuration in the BSP Workbench for my new ZTMP role key (BP_CONT/ContactDetails view)
    5. removed the unnecessary fields in my copied configuration and saved everything
    But after these steps no changes take effect in the Web UI - the fields are not removed; it still looks like the default config. Same result for BP_HEAD/AccountDetails.
    Technical data for the contact page (when I press F2 on the contact page):
    Role Key (Searched For) - ZTMP
    Role Key (Found) - ZTMP
    Comp.Usage (Searched For) - Overview
    Component Usage (Found) - <DEFAULT>
    Object Type (Searched For)  - <DEFAULT>
    Object Type (Found) - <DEFAULT>
    Maybe some "checkbox" blocks my config?

    Hi,
    The procedure to change the config is:
    1) Create a role config key.
    2) Assign it to your business role.
    3) Open the found config.
    4) Copy the config and give it your role config key. Make the required changes. Save it.
    5) Write the required code in do_config_determination if your config is not called.
    The procedure you describe is correct, and the configuration found is ZTMP, default, default. What is the subobject type found? If it is default, look at the configuration again and compare it with the fields that you are getting on screen. Maybe you forgot to save, since you have covered all the steps.
    Best regards
    Pankaj Kumar

  • Using Lightroom and Aperture, will a new ATI 5770/5870 vs. GT 120 improve performance?

    I have a MP (2009, 3.3 Nehalem Quad and 16GB RAM) and wanted to improve performance in APERTURE (I see the clock wheel processing all the time) with edits; I am also using Lightroom, and sometimes CS5.
    Can anyone with experience say whether upgrading from the GT120 would make a difference, and approximately how much?
    Next, do I need to buy the 5870 or can I get the 5770 to work?
    I am assuming I have to remove the GT120 for the new card to fit?
    Thanks

    Terrible marketing. ALL ATI 5xxx cards work in ALL Mac Pro models, with 10.6.5 and later.
    It really should be up to you to check AMD and search out reviews that compare these cards to others. Did you look at the specs of each, or at Barefeats? He has half a dozen benchmark tests, but the GT120 doesn't even show up or isn't in the running on most.
    From AMD 5870 shows 2x the units -
    TeraScale 2 Unified Processing Architecture   
    1600 Stream Processing Units
    80 Texture Units
    128 Z/Stencil ROP Units
    32 Color ROP Units
    ATI Radeon™ HD 5870 graphics
    That should hold up well.
    Some are on the fence or don't want to pay $$.
    All they or you (and you've been around for more than a day!) need to do is go to the Apple Store:
    ATI Radeon HD 5870 Graphics Upgrade Kit for Mac Pro (Mid 2010 or Early 2009)
    ATI Radeon HD 5870 Upgrade

  • Can View improve Performance?

    Hi,
    For improving performance, I have created a view in the RPD so that my fact table row count, compared with the base fact table, is reduced from 117,888 records to 3,263 records. I am using this view as a fact table, joined with the other dimension tables. But I am wondering: will this actually give better performance?
    Thanks,
    Phani.

    Hi,
    I don't think a view will help performance; instead, go for materialized views.
    Check this: http://www.biblogs.com/2008/11/28/thoughts-on-obiee-performance-optimization-diagnostics/
    Need help on OBIEE performance
    Regards,
    Srikanth

  • Improving performance when using LineStripArray?

    I'm rendering approximately 680 LineStripArrays to represent an airport on a situation display. I read the data from a DXF file and I imagine I can strip that down by removing parts of the airport I don't want to show.
    However, performance is poor - I'm probably hitting 50fps when I'm barely rendering anything. Apart from not displaying some LineStripArrays, what can I do to improve performance? Should I merge some LineStripArrays (one way to do this is sketched after my code below)? Is there another geometry class I should use?
    My code is:
                   // This array will hold the vertices.
                   float[] vertices = new float[count * 3];
                   // Create the colour.
                   Color3f colour = new Color3f(1.0f, 1.0f, 1.0f);
                   // Iterate over all vertices of the polyline
                   // (assumes the DXF reader exposes them as a list of Point3d).
                   for (int i = 0; i < count; i++) {
                        Point3d vertex = polylineVertices.get(i);
                        vertices[(i * 3) + 0] = (float) vertex.getX();
                        vertices[(i * 3) + 1] = (float) vertex.getY();
                        vertices[(i * 3) + 2] = (float) vertex.getZ();
                   }
    And then:
                   layerData = new LineStripArray(count, GeometryArray.COORDINATES, strip_counts);
                   layerData.setCoordinates(0, vertices);
    Thanks
    Edited by: BobCrivens on Sep 18, 2008 7:39 AM
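
    On the merging question above: a minimal sketch of one way to collapse many polylines into a single LineStripArray, so the scene graph holds one geometry node instead of ~680 (fewer nodes usually means less per-object overhead). The `strips` list is hypothetical; it holds each polyline's coordinates as x,y,z triples.

        import java.util.List;
        import javax.media.j3d.GeometryArray;
        import javax.media.j3d.LineStripArray;

        public class StripMerger {
            // Merge many polylines into one LineStripArray. Each entry in
            // stripVertexCounts tells Java 3D where one strip ends and the
            // next begins, so a single geometry can hold all the polylines.
            public static LineStripArray merge(List<float[]> strips) {
                int totalVertices = 0;
                int[] stripVertexCounts = new int[strips.size()];
                for (int i = 0; i < strips.size(); i++) {
                    stripVertexCounts[i] = strips.get(i).length / 3;  // x,y,z triples
                    totalVertices += stripVertexCounts[i];
                }
                LineStripArray merged = new LineStripArray(
                        totalVertices, GeometryArray.COORDINATES, stripVertexCounts);
                int offset = 0;  // vertex index where the next strip starts
                for (float[] strip : strips) {
                    merged.setCoordinates(offset, strip);
                    offset += strip.length / 3;
                }
                return merged;
            }
        }

    The trade-off is that a merged array can only be shown or hidden as a whole, so layers you want to toggle independently should stay in separate arrays.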

    Yes, it will cause performance issues.  Whether you notice it or not may be a different story.
    LabVIEW drawing engine starts at the bottom layer and works its way up.  So, it has to redraw the image and then redraw the control when you update the control/indicator.
    It's been a while since I benchmarked this on a project, but in LabVIEW 6.1, I looked into why my tests ran so slow, and saw a 10-15% decrease in test time by removing the background decorations I used to make the window pretty.  If I didn't show the GUI feedback for the test at all (no GUI windows for each test), I saw a 30% decrease in test time.
    You will also find that better video cards will have a positive effect on this, as they redraw the screen faster.  In the same benchmark, I was able to outperform the early PXI controllers with a slower PC because NI was using a lower end video chip for their onboard graphics.

  • Does syncing ipad or iPhone improve performance?

    If my iPad is acting a little buggy, will regular syncing improve performance?

    No, that's not what sync is for. It serves to make a backup of the device from which you can restore if damaged or replaced and to ensure that data such as contacts, calendars, bookmarks, etc., are synchronized across all your devices and computer.
    Given that, if correctly set up, iTunes will keep a backup of EVERYTHING, all apps included, you can proceed to remove those you are not currently using on the device, freeing up room in its storage. The backup stored in iTunes allows you to copy the app back if needed later without having to download it from Apple's servers. iTunes also lets you update all the apps on the computer, which will then update on the device(s) at the next sync. I don't know about a PC, but on my Mac the apps update much faster, and in parallel, compared to doing it on the devices.
    Lastly, sometimes the performance of the devices can be returned to the original level by making a complete backup, wiping the device clean ( Settings / General / Reset / Erase All Contents and Settings ) and restoring from iTunes' backup.

  • FI-CA events to improve performance

    Hello experts,
    Does anybody use the FI-CA events to improve the extraction performance for datasources 0FC_OP_01 and 0FC_CI_01 (open and cleared items)?
    It seems that these specific exits associated with BW events have been developed especially to improve performance.
    Any documentation or guide would be appreciated.
    Thanks.
    Thibaud.

    Thanks to all for the replies
    @Sybrand
    Please answer first whether the column is stored in a separate lobsegment.
    No. The table, index, LOB, and LOB index use the same tablespace. I missed adding this point (moving to a separate TS) as part of the table modifications.
    @Hemant
    There's a famous paper / blog post about CLOBs and Database Flashback. If I find it, I'll post the URL.
    Is this the one you are referring to
    http://laimisnd.wordpress.com/2011/03/25/lobs-and-flashback-database-performance/
    By moving the CLOB column to different block size , I will test the performance improvement it gives and will share the results.
    We don't need any data from this table. The XML file contains details about fingerprints, and once the application server completes the job, the XML data is deleted from this table.
    So there is no need for backup/recovery operations on this table. The client will be able to replay the transactions if any problem occurs.
    @Billy
    We are not performing XML parsing on the DB side. We get the XML data from the client -> insert into the table -> the client selects from the table -> upon successful completion of the job from the client, the XML data gets deleted.
    Regarding binding the LOB from the client side, I will check on that as well to reduce round trips.
    By changing the block size, I can keep db_32K_cache_size=2G and keep this table in the CACHE. If I directly put my table in the CACHE, it will age out all other operations from the buffer, which makes things worse for us.
    This insert is part of a transaction (registration of a fingerprint), and it is the only statement taking time as of now compared to the other statements in the transaction.
    Thanks,
    Arun

  • How to preload sound into memory to improve performance?

    Hello all
    I have an application that needs to play 4 different short wave files on certain events. The wave files are small (less than 1 sec each) so they can be preloaded into memory. But I don't really know how to do that... This is my current code... Performance is really important here, so the faster users can hear the sounds, the better...
    import java.io.*;
    import javax.sound.sampled.*;
    import javax.swing.*;
    import java.awt.event.*;
    public class PlaySound implements ActionListener {
         private Clip clip = null;
         public void play(String name) {
              if (clip != null) {
                   clip.stop();
                   clip = null;
              }
              loadClip(name);
              if (clip != null)       // guard in case loading failed
                   clip.start();
         }
         private void loadClip(String fnm) {
              try {
                   AudioInputStream stream = AudioSystem.getAudioInputStream(new File(fnm + ".wav"));
                   AudioFormat format = stream.getFormat();
                   DataLine.Info info = new DataLine.Info(Clip.class, format);
                   if (!AudioSystem.isLineSupported(info)) {
                        JOptionPane.showMessageDialog(null, "Unsupported sound line", "Warning!", JOptionPane.WARNING_MESSAGE);
                   } else {
                        clip = (Clip) AudioSystem.getLine(info);
                        clip.open(stream);
                        stream.close();
                   }
              } catch (Exception e) {
                   JOptionPane.showMessageDialog(null, "loadClip E: " + e.toString(), "Warning!", JOptionPane.WARNING_MESSAGE);
              }
         }
         // Required by ActionListener; not used in this snippet.
         public void actionPerformed(ActionEvent e) { }
         public static void main(String[] args) {
              new PlaySound().play("a wav file name");
         }
    }
    I would appreciate it if someone can point out how I can preload them to improve performance... Thanks in advance!

    The message above should be:
    OMG, me dumb you smart Florian...
    Thank you for your suggestion... While it's not the best or anything close to what I thought it would be, it's certainly one way to do it and better than what I've got now...
    Thanks again Florian, I really appreciate it!!
    BTW, is there anything that would produce the sound faster then this?
    Message was edited by:
    BuggyVB
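
    Since the preloading approach itself never made it into the thread, here is a minimal sketch of one way to do it with javax.sound.sampled (the class name and file names are hypothetical): open every Clip once at startup, so play() only rewinds and starts it, and no file I/O or line allocation happens on the event path.

        import java.io.File;
        import java.util.HashMap;
        import java.util.Map;
        import javax.sound.sampled.AudioInputStream;
        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.Clip;

        public class PreloadedSounds {
            private final Map<String, Clip> clips = new HashMap<String, Clip>();

            // Load each wave file once at startup; Clip.open() pulls the whole
            // (short) file into memory, so playback later needs no disk access.
            public void preload(String... names) throws Exception {
                for (String name : names) {
                    AudioInputStream stream =
                            AudioSystem.getAudioInputStream(new File(name + ".wav"));
                    Clip clip = AudioSystem.getClip();
                    clip.open(stream);
                    stream.close();
                    clips.put(name, clip);
                }
            }

            // Fast path: rewind the already-open clip and start it.
            public void play(String name) {
                Clip clip = clips.get(name);
                if (clip == null) return;   // name was never preloaded
                clip.stop();                // in case it is still playing
                clip.setFramePosition(0);   // rewind to the beginning
                clip.start();
            }
        }

    Usage would be along the lines of calling preload("beep", "alarm") once at startup and then play("beep") on each event (names hypothetical).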

  • How to improve performance of MediaPlayer?

    I tried to use the MediaPlayer with a On2 VP6 flv movie.
    Showing a video with a resolution of 1024x768 works.
    Showing a video with a resolution of 1280x720 and an average bitrate of 1700 kb/s leads to the video signal lagging a couple of seconds behind the audio signal. VLC, Media Player Classic and a couple of other players have no problem with the video; only the FX MediaPlayer shows poor performance.
    Additionally, mouse events in a second stage (the first stage is used for the video) are not processed in 2 of 3 cases. If the MediaPlayer is switched off, the mouse events work reliably.
    Does somebody know a solution for these problems?
    Cheers
    masim

    duplicate thread..
    How to improve performance of attached query

  • How to improve performance when there are many TextBlocks in ItemsControl items?

       Hi,
    I'm trying to find a way to improve performance in a situation where an ItemsControl uses UI and data virtualization and each item in that control has 36 TextBlocks. Basically each item is a single string; there are so many TextBlocks in order to assign different brushes to different parts of the string. Performance of this construction is terrible. I have 37 items visible on the screen, and if I try to scroll up or down it scrolls into black space and then takes a second or two to show the items.
       I tried different things. For example, the most successful performance-wise was to replace the TextBlocks with Borders and then draw bitmaps. In other words, I prepared 127 bitmaps, one for each character (I need ASCII only), and then used those bitmaps to set the Border.Backgrounds. It improved performance about 1.5 - 2 times, but it consumed much more memory (which is not surprising, of course). The required amount of memory is so big that it throws OutOfMemoryException on a 512MB emulator, though it works on 1GB. As a result I don't think it is a good solution.
       Another thing that worked perfectly was to replace the 36 TextBlocks with only 6 TextBlocks. In this case the performance improvement is about 5 - 10 times, but I lose the ability to set different colors for different parts of the string. It seems that performance degrades dramatically as the number of TextBlocks increases. Is there another technique to draw strings, where literally each character can be a different color, with decent performance?
    Thank you
    Alex

       Using Runs inside TextBlocks gives approximately the same improvement as using bitmaps (1.5 - 2 times faster), but it is not even close to the case with just a couple of TextBlocks in the ItemsControl item. Any other ideas?
    Alex

  • How to improve Performance of the Statements.

    Hi,
    I am using Oracle 10g. My problem is that when I execute and fetch records from the database, it takes a long time. I have also created statistics, but to no avail. What do I have to do now to improve the performance of SELECT, INSERT, UPDATE, and DELETE statements?
    Does it make any difference that I am using Windows XP with 1 GB RAM on the server machine, and Windows XP with 512 MB RAM on the client machine?
    Please give me advice on improving performance.
    Thank u...!

    What and where to change parameters and values? Well, maybe my previous post was not clear enough, but if you want to keep your job, you shouldn't change anything else in the init parameters, and you shouldn't fall into Compulsive Tuning Disorder.
    Anyone who advises you to change some parameter to some value without any more info shouldn't be listened to.
    Nicolas.

  • On my MacBook with Lion, Safari does not start and does not react immediately after trying to open it. Installing a new Safari does not help. Removing parts of Safari in the Library did not help. Where can I find and remove all components (LastSession ...)?

    How can I reset Safari with all components? On my MacBook with Lion, Safari does not start and does not react immediately after trying to open it. Installing a new Safari does not help. Removing parts of Safari in the Library does not help. Where can I find and remove all components such as LastSession and TopSites?

    The only way to reinstall Safari on a Mac running v10.7 Lion is to restore OS X using OS X Recovery.
    Instead of restoring OS X in order to reinstall Safari, try troubleshooting extensions.
    From the Safari menu bar click Safari > Preferences then select the Extensions tab. Turn that OFF, quit and relaunch Safari to test.
    If that helped, turn one extension on then quit and relaunch Safari to test until you find the incompatible extension then click uninstall.
    If it's not an extensions issue, try troubleshooting third party plug-ins.
    Back to Safari > Preferences. This time select the Security tab. Deselect:  Allow plug-ins. Quit and relaunch Safari to test.
    If that made a difference, instructions for troubleshooting plugins here.
    If it's not an extension or plug-in issue, delete the cache.
    Open a Finder window. From the Finder menu bar click Go > Go to Folder
    Type or copy paste the following
    ~/Library/Caches/com.apple.Safari/Cache.db
    Click Go then move the Cache.db file to the Trash.
    Quit and relaunch Safari to test.

  • A64 Tweaker and Improving Performance

    I noticed a little utility called "A64 Tweaker" being mentioned in an increasing number of posts, so I decided to track down a copy and try it out...basically, it's a memory tweaking tool, and it actually is possible to get a decent (though not earth-shattering by any means) performance boost with it.  It also lacks any real documentation as far as I can find, so I decided to make a guide type thing to help out users who would otherwise just not bother with it.
    Anyways, first things first, you can get a copy of A64 Tweaker here:  http://www.akiba-pc.com/download.php?view.40
    Now that that's out of the way, I'll walk through all of the important settings, minus Tcl, Tras, Trcd, and Trp, as these are the typical RAM settings that everyone is always referring to when they go "CL2.5-3-3-7", so information on them is widely available, and everyone knows that for these settings, lower always = better.  Note that for each setting, I will list the measured change in my SiSoft Sandra memory bandwidth score over the default setting.  If a setting produces a change of < 10 MB/sec, its effects will be listed as "negligible" (though note that it still adds up, and a setting that has a negligible impact on throughput may still have an important impact on memory latency, which is just as important).  As for the rest of the settings, I'll do the important things on the left hand side first, then the things on the right hand side... the things at the bottom are HTT settings that I'm not going to muck with:
    Tref - I found this setting to have the largest impact on performance out of all the available settings.  In a nutshell, this setting controls how your RAM refreshes are timed...basically, RAM can be thought of as a vast series of leaky buckets (except in the case of RAM, the buckets hold electrons and not water), where a bucket filled beyond a certain point registers as a '1' while a bucket with less than that registers as a '0', so in order for a '1' bucket to stay a '1', it must be periodically refilled (i.e. "refreshed").  The way I understand this setting, the frequency (100 MHz, 133 MHz, etc.) controls how often the refreshes happen, while the time parameter (3.9 microsecs, 1.95 microsecs, etc.) controls how long the refresh cycle lasts (i.e. how long new electrons are pumped into the buckets).  This is important because while the RAM is being refreshed, other requests must wait.  Therefore, intuitively it would seem that what we want are short, infrequent refreshes (the 100 MHz, 1.95 microsec option).  Experimentation almost confirms this, as my sweet spot was 133 MHz, 1.95 microsecs...I don't know why I had better performance with this setting, but I did.  Benchmark change from default setting of 166 MHz, 3.9 microsecs: + 50 MB/sec
    Trfc - This setting offered the next largest improvement...I'm not sure exactly what this setting controls, but it is doubtless similar to the above setting.  Again, lower would seem to be better, but although I was stable down to '12' for the setting, the sweet spot here for my RAM was '18'.  Selecting '10' caused a spontaneous reboot.  Benchmark change from the default setting of 24:  +50 MB/sec
    Trtw - This setting specifies how long the system must wait after it reads a value before it tries to overwrite the value.  This is necessary due to various technical aspects related to the fact that we run superscalar, multiple-issue CPUs that I don't feel like getting into, but basically, smaller numbers are better here.  I was stable at '2'; selecting '1' resulted in a spontaneous reboot.  Benchmark change from default setting of 4:  +10 MB/sec
    Twr - This specifies how much delay is applied after a write occurs before the new information can be accessed.  Again, lower is better.  I could run as low as 2, but didn't see a huge change in benchmark scores as a result.  It is also not too likely that this setting affects memory latency in an appreciable way.  Benchmark change from default setting of 3:  negligible
    Trrd - This controls the delay between a row address strobe (RAS) and a second row address strobe.  Basically, think of memory as a two-dimensional grid...to access a location in a grid, you need both a row and column number.  The way memory accesses work is that the system first asserts the column that it wants (the column address strobe, or CAS), and then asserts the row that it wants (row address strobe).  Because of a number of factors (prefetching, block addressing, the way data gets laid out in memory), the system will often access multiple rows from the same column at once to improve performance (so you get one CAS, followed by several RAS strobes).  I was able to run stably with a setting of 1 for this value, although I didn't get an appreciable increase in throughput.  It is likely however that this setting has a significant impact on latency.  Benchmark change from default setting of 2:  negligible
    Trc - I'm not completely sure what this setting controls, although I found it had very little impact on my benchmark score regardless of what values I specified.  I would assume that lower is better, and I was stable down to 8 (lower than this caused a spontaneous reboot), and I was also stable at the max possible setting.  It is possible that this setting has an effect on memory latency even though it doesn't seem to impact throughput.  Benchmark change from default setting of 12:  negligible
    Dynamic Idle Cycle Counter - I'm not sure what this is, and although it sounds like a good thing, I actually post a better score when running with it disabled.  No impact on stability either way.  Benchmark change from default setting of enabled:  +10 MB/sec
    Idle Cycle Limit - Again, not sure exactly what this is, but testing showed that both extremely high and extremely low settings degrade performance by about 20 MB/sec.  Values in the middle offer the best performance.  I settled on 32 clks as my optimal setting, although the difference was fairly minimal over the default setting.  This setting had no impact on stability.  Benchmark change from default setting of 16 clks:  negligible
    Read Preamble - As I understand it, this is basically how much of a "grace period" is given to the RAM when a read is asserted before the results are expected.  As such, lower values should offer better performance.  I was stable down to 3.5 ns, lower than that and I would get freezes/crashes.  This did not change my benchmark scores much, though in theory it should have a significant impact on latency.  Benchmark change from default setting of 6.0 ns:  negligible
    Read Write Queue Bypass - Not sure what it does, although there are slight performance increases as the value gets higher.  I was stable at 16x, though the change over the 8x default was small.  It is possible, though I think unlikely, that this improves latency as well.  Benchmark change from default setting of 8x:  negligible
    Bypass Max - Again not sure what this does, but as with the above setting, higher values perform slightly better.  Again I feel that it is possible, though not likely, that this improves latency as well.  I was stable at the max of 7x.  Benchmark change from the default setting of 4x:  negligible
    Asynch latency - A complete mystery.  Trying to run *any* setting other than default results in a spontaneous reboot for me.  No idea how it affects anything, though presumably lower would be better, if you can select lower values without crashing.
    ...and there you have it.  With the tweaks mentioned above, I was able to gain +160 MB/sec on my Sandra score, +50 on my PCMark score, and +400 on my 3DMark 2001 SE score.  Like I said, not earth-shattering, but a solid performance boost, and it's free to boot.  Settings that I felt had no use in tweaking the RAM for added performance, or which are self-explanatory, have been left out.  The above tests were performed on Corsair XMS PC4000 RAM @ 264 MHz, CL2.5-3-4-6 1T.

    Quote
    Hm...I wonder which one is telling the truth, the BIOS or A64 tweaker.
    I've wondered this myself.  From my understanding it's the next logical step from the WCREDIT programs.  I understand how a clock gen can misreport frequency, because it's probably not measuring frequency itself but rather computing a mathematical representation from a few numbers it has gathered and one clock frequency (HTT maybe?), and the unsupported dividers mess up the math... but I think the tweaker just extracts hex values straight from the registers and displays them in "English".  I mean, it could be wrong, but seeing how I've watched the BIOS on the SLI Plat change the memory timings in the POST screen to values other than SPD when it's on Auto with aggressive timings disabled, I actually want to side with the A64 tweaker in this case.
    Hey, anyone know what Tref in A64 relates to in the BIOS?  i.e. 200 1.95us = what in the BIOS?  1x4028, 1x4000 - I'm just making up numbers here, but it's different from 200 1.95.  Last time I searched I didn't find anything.  Well, I found A LOT, but not what I wanted...
