Does making final field static improve performance or save memory?

Are final fields given their value at compile time by the compiler?
private final String FINAL_FIELD = "31";

Actually, it's static final fields of primitive or String type that are treated specially. Other final fields only get extra compile-time checking; at run time they behave the same as non-final fields.
static final primitives are treated as compile-time constants, and the literal value is substituted at each use site; no field read is compiled into the referring class.
I consider an awareness of Java internals valuable myself. For example, if you were unaware of the above, the occasional misbehaviour of static final primitives referenced from other classes would be baffling: if your code references a static final primitive defined in another class, the value is compiled in as a literal, not as a field reference, so if the value changes in the defining class, the referring class will retain the old value until it is recompiled.
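A minimal sketch of the inlining behaviour described above (the class and field names are illustrative):

```java
public class ConstantInlining {
    static class Constants {
        // Compile-time constants: static final primitive/String fields with
        // constant initializers. javac inlines their values at use sites.
        static final int MAX_RETRIES = 3;
        static final String VERSION = "1.0";
        // NOT a compile-time constant: the initializer is a method call,
        // so referring code performs a real field read at run time.
        static final long STARTED_AT = System.nanoTime();
    }

    public static void main(String[] args) {
        // Compiled as the literal 3, not as a read of Constants.MAX_RETRIES.
        // If Constants lived in another source file and its value changed,
        // this class would keep using 3 until it was itself recompiled.
        System.out.println(Constants.MAX_RETRIES + " " + Constants.VERSION); // prints "3 1.0"
    }
}
```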

Similar Messages

  • Does making objects equal null help the gc handle memory leakage problems

    hi all,
    does setting objects to null help the GC handle memory leakage problems?
    does that help the GC collect unwanted objects??
    and how can I free memory and avoid memory leakage problems on devices??
    best regards,
    Message was edited by:
    happy_life

    Comments inlined:
    does making objects equal null help the gc handle memory leakage problems?
    To an extent, yes. During the mark phase it is easier for the GC to identify unreachable objects on the heap while doing reference analysis.
    does that help out the gc to collect unwanted objects??
    Same answer as before. Even though you nullify the reference, you cannot eliminate the reference-analysis phase of the GC, which will definitely take some time.
    and how can I free memory and avoid memory leakage problems on devices??
    There is nothing like the soft/weak reference support of J2SE as far as J2ME is concerned. Also, the user is not allowed to control GC behavior; even if you call System.gc(), you are never sure when it will trigger the GC thread. As far as possible, do not create new object instances, and try to reuse the instantiated objects.
    ~Mohan
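    For what it's worth, the classic case where nulling a reference genuinely helps is a class that manages its own storage, so that popped-off slots would otherwise keep dead objects reachable. A minimal Java SE sketch (a J2ME version would copy the array with a plain loop instead of Arrays.copyOf):

```java
import java.util.Arrays;

public class SimpleStack {
    private Object[] slots = new Object[16];
    private int size = 0;

    public void push(Object o) {
        if (size == slots.length) {
            slots = Arrays.copyOf(slots, size * 2);
        }
        slots[size++] = o;
    }

    public Object pop() {
        Object top = slots[--size];
        slots[size] = null; // clear the stale slot so the GC can reclaim the object
        return top;
    }

    public int size() {
        return size;
    }
}
```

    Without the nulling line, a popped object stays reachable through the array until its slot happens to be overwritten by a later push.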

  • Does syncing ipad or iPhone improve performance?

    If my iPad is acting a little buggy, will regular syncing improve performance?

    No, that's not what sync is for. It serves to make a backup of the device from which you can restore if damaged or replaced and to ensure that data such as contacts, calendars, bookmarks, etc., are synchronized across all your devices and computer.
    Given that, if correctly set up, iTunes will keep a backup of EVERYTHING, all apps included, you can proceed to remove those you are not currently using on the device, freeing up room in its storage. The backup stored in iTunes allows you to copy an app back if needed later without having to download it from Apple's servers. iTunes also lets you update all the apps on the computer, which will then update on the device(s) at the next sync. Don't know about a PC, but on my Mac the apps update much faster, and in parallel, compared to updating them on the devices.
    Lastly, sometimes the performance of the devices can be returned to the original level by making a complete backup, wiping the device clean ( Settings / General / Reset / Erase All Contents and Settings ) and restoring from iTunes' backup.

  • Why does isolating a long sequence improve performance dramatically?

    I have been editing a 30 minute episode for a reality-action show, for which I built a fast cut sequence with lots of effects and multiclips. By the time I got to compiling the show from shorter sequences I experienced a dramatic slowdown in editing. Autosaving cost me up to 20 seconds every ten minutes (I decided against the tempting option to set longer autosave-intervals). And copying + pasting single clips took way too long and invoked spinning balls.
    I started to wonder: why is my dual 2.3 GHz G5 with 4.5 GB of RAM gasping for air editing a 30-minute show when people were already cutting feature films on Apple G4s? Maybe (I thought) I have background processes handicapping my Mac, too many plugins possibly, or maybe the project is just too large.
    In order to eliminate a possible culprit I started experimenting, and I found that copying the long sequence to a new project makes everything fast again - even with both the large and the slim project open side by side. Project size also went down from 160 MB to 29 MB, which explains the swift autosaves that I now get.
    But why is copy and paste now so much faster? No more spinning balls; I can hammer away on my master sequence like I want to. However, I cannot access my source material directly anymore.
    For that (and to experiment further) I also copied the master clips to the test project. That slowed everything down again. So I quickly trashed everything again, except the long sequence: STILL slow copy+paste and spinning balls. Huh?
    Isn't that funny? Apparently it is not lack of memory that slows me down. It must be the spiderweb of connections with affiliate clips built by FCP that hampers my sequence. By copying the sequence I break those connections, only the links with Quicktime media remain. By adding the masterclips I seem to recreate the spiderweb again, and removing the masterclips doesn't remove the invisible spiderweb. Does that make sense?
    Who can explain to me what is really going on here? I have been editing on FCP since 1999, and am really puzzled by this. Why can't I keep this project flying the way it should on a screaming monster like my G5? Don't tell me it is outdated just because we now have Mac Pros, because I'm talking about basic editing here, not rendering effects or exporting to disk.
    Thanks for your time!
    Arnoud Kwant

    Sequences soak up RAM. Many open sequences soak up even more. Use as few as possible at a time in the Timeline window. The larger the number of edits and durations, and the more multiclips and effects you've done, the more RAM it takes up. Sounds like you're hitting the limit; that's why the slowdown. It's disk caching. Look in Activity Monitor to see the behavior...
    Multiclips collapsed might help, not sure...
    So that's why the slowdown. I'd guess it would be the same on faster CPUs too...
    I'm a short-form guy mainly. However, I finished a color-corrected 55-minute show with maybe 200 edits in it and it didn't slow my system down.
    FCP will only address about 2.7 gigs of RAM including the OS... So don't run really RAM-hogging stuff in the background either, like Photoshop.
    Jerry

  • Performance impact of final fields?

    Nowadays with functional programming styles becoming more and more popular, we are encouraged to use immutable objects wherever possible, unless there is good reason to make them mutable. This avoids unnecessary state, simplifies reasoning about the code and is considered good style (e.g. see J.Bloch's "Effective Java"). Immutable objects should of course only have final fields.
    Now, because the Java Memory Model gives special guarantees for final fields ("final really is final"), I wonder whether this style may lead to performance degradation on some architectures. Even though the JMM is seriously flawed and most VMs violate it in one way or another, one can expect that VMs try hard to at least ensure its basic ideas: "SC for DRF" and "final is final".
    The guarantee for final fields is: a final field should not change its observed value even if a reference to its containing object is obtained in a data race. But this guarantee is only needed as a safety net if untrusted code (that may not be correctly synchronized) is involved. Other than that, in a correctly synchronized program the "final guarantee" of the JMM does not buy you anything.
    But what about the performance costs? Consider an immutable class Point, with two final int fields x and y. Now, if my program creates millions of Point objects and puts them in a globally visible container, I clearly do not want the VM to execute millions of (potentially very expensive) membar instructions without reason. In the FAQs for the JMM (http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html), there is a warning that accessing volatile fields might be very expensive. This warning has since been retracted, since "volatiles are cheap on most platforms". But I expect them not to be cheap on some platforms with a weak underlying hardware memory model, and I see no reason why the situation for final fields should be much different.
    I always get an uneasy feeling when using final fields and thinking about the visibility guaranteed for them by the JMM. Has anyone observed measurable performance degradation after making fields final on some platform (IA-64?), or can anyone point me to a discussion of this topic?
    Thank you for sharing your experience!

    JoachimSauer wrote:
    Can you provide sources for those claims? I've seen many such complaints about the old JMM, but never any major complaints about the current (JSR-133) memory model.
    There are many papers about this issue. For example: "On Validity of Program Transformations in the Java Memory Model" by Jaroslav Sevcík and David Aspinall (available online). (This paper also contains a presentation of the JMM that is rigorous enough that one can actually understand it, in stark contrast to the one in the language specification.) Also look at: http://www.cl.cam.ac.uk/~pes20/weakmemory/ec2.pdf
    The current JMM forbids most optimizations that VMs routinely perform, like redundant read elimination. So the VMs more or less must violate it. Since the original JSR-133 is rather vague and informal, people (including the original authors) did not understand it and have been deluded into giving wrong proofs. Unfortunately the problems with JSR-133 are not minor glitches but seem to be very fundamental. It looks like the JMM in its current form cannot be easily corrected. The whole approach of committing actions does not seem to work.
    Pugh and Manson wrote about the old pre 1.5 JMM about 10 years ago:
    - Very, very hard to understand
    - not even the authors understood it
    - has subtle implications that forbid standard compiler optimizations
    - all existing VMs violate it - some parts should be violated
    Unfortunately all of this is also true for the current JMM. And no one seems to know how to fix it. It seems to be a very hard task to state a working MM that actually does what JSR-133 was supposed to do. The MM of the upcoming C++ standard, which originally had JSR-133 as a role model, will only give SC for DRF and say that the behaviour of incorrectly synchronized programs is undefined (meaning anything can happen). I doubt that we will see a working MM for Java anytime soon. But that's not the topic here...
    From my (admittedly limited) understanding the most important thing about final fields is that they are truly final if the reference to the instance does not escape during construction of the instance. Once you write your class to not allow "this" to escape, no other code can ever see the un-initialized final fields, no matter how badly it is written.
    This is true. Normally you don't let "this" escape from the ctor.
    How exactly do problems with volatile fields directly map to problems with final fields?
    In both cases the VM has to be very careful about inter-thread memory reordering issues.
    final or not is very unlikely to make a major performance difference. What is much more likely to influence performance is the change in design that usually comes with such a change. Immutable classes are generally instantiated much more often than mutable ones. That alone is very likely to have a much bigger influence than the final change itself.
    Suppose we have a non-final, non-volatile field x:
    Integer x; // initially null
    Thread 1: x = new Integer(42);
    Thread 2: if (x != null) System.out.println(x.intValue());
    There is a data race on x. But the JMM guarantees that thread 2 will never print 0 (it must see the value 42), since the int value boxed inside a java.lang.Integer is stored in a final field.
    For Thread 1 the VM must do 3 things:
    a. Allocate memory for the Integer object.
    b. Write the value 42 into the memory location of the final field inside Integer.
    c. Write the reference to the Integer object into the memory location of x.
    Steps b and c must not be reordered, otherwise thread 2 could see a non-null reference but an uninitialized field! So on a multicore machine with a weak memory model (a cache hierarchy that may reorder the writes b and c above), the VM must execute some memory fence or barrier instruction between b and c to prevent this reordering. There's no way around this. Such instructions are potentially very expensive (dozens of cycles), so I really wonder how this performs on those machines. I would measure it myself, but I only have access to x86(-64) machines, which have a strong MM.
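    The three steps map onto code like the following sketch (the racy publication is deliberate; class names are illustrative):

```java
final class Point {
    final int x;
    final int y;

    Point(int x, int y) {
        this.x = x; // step b: final-field writes must be frozen...
        this.y = y;
    }               // ...before the constructed reference escapes
}

public class RacyPublication {
    static Point shared; // plain field: step c publishes through a data race

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> shared = new Point(1, 2));
        Thread reader = new Thread(() -> {
            Point p = shared;
            // p may legitimately be null, but if it is non-null the JMM
            // guarantees the final fields are fully initialized: never "0,0".
            if (p != null) {
                System.out.println(p.x + "," + p.y);
            }
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}
```

    Whatever barrier the VM needs to uphold that guarantee sits between the constructor's final-field writes and the write to shared.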
    jverd wrote:
    And that correctness is far more valuable than the few cycles it costs, IMHO.
    The original motivation for the final field guarantee was security: malicious code should not be able to use race conditions to see data it is not allowed to see. In a correctly synchronized program you have sequential consistency, so you will never see a final field changing its value anyway. But if you have races inside your code, you likely have much more serious problems than the possibility of a final field changing its observed value...

  • How do I improve performance while doing pull, push and delete from Azure Storage Queue

    Hi,
    I am working on a distributed application that uses an Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, so it is expected to be heavily loaded most of the time. The business case is typical: pull a message from the queue, process it, then delete it from the queue. The module also sends a notification back to the user indicating that processing is complete. The functions/modules work fine in that they meet the logical requirements; it's a pretty typical queue scenario.
    Now, the problem statement: since the queue is expected to be heavily loaded most of the time, I am pushing to speed up the overall message lifetime. The faster I can clear messages, the better the overall experience for everyone, system and users.
    To improve performance I did multiple cycles of performance profiling and then improved the identified "HOT" paths/functions.
    It all came down to the point where the Azure Queue pull and delete are the two most time-consuming calls. I improved the pull by batch-pulling 32 messages at a time (the maximum message count that can be pulled from an Azure queue at once, at the time of writing this question), which reduced processing time by a big margin. All good up to this point as well.
    I am processing these messages in parallel so as to improve overall performance.
    pseudo code:
    // AzureQueue class encapsulates calls to Azure Storage Queue.
    // Assume nothing fancy inside: vanilla calls to the queue for pull/push/delete.
    var batchMessages = AzureQueue.Pull(32);
    Parallel.ForEach(batchMessages, bMessage =>
    {
        try
        {
            // DoSomething does some background processing.
            DoSomething(bMessage);
        }
        catch (Exception ex)
        {
            // Log the exception.
        }
        AzureQueue.Delete(bMessage);
    });
    With this change, profiling results show that up to 90% of the time is taken by the Azure message delete calls alone. As it is good to delete a message as soon as processing is done, I remove it right after "DoSomething" finishes.
    What I need now are suggestions on how to further improve the performance of this function when 90% of the time is being eaten up by the Azure Queue delete call itself. Is there a better, faster way to perform delete/bulk delete etc.?
    With the implementation mentioned here, I get a throughput of close to 25 messages/sec. Right now the Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
    Does it also make a difference to performance which queue delete overload I am calling? As of now the queue has overloaded methods for deleting a message: one which accepts a message object and another which accepts a message identifier and pop receipt. I am using the latter here, with message identifier and pop receipt, to delete the message from the queue.
    Let me know if you need any additional information or any clarification in question.
    Inputs/suggestions are welcome.
    Many thanks.

    The first thing that came to mind was to run a parallel delete at the same time you run the work in DoSomething. If DoSomething fails, add the message back into the queue. This won't work for every application, and work that was near the head of the queue could be pushed back to the tail, so you'd have to think about how that may affect your workload.
    Or, make a threadpool-queued delete after the work was successful. Fire and forget. However, if you're loading the processing at 25/sec, and 90% of the time sits on the delete, you'd quickly accumulate delete calls for the threadpool until you'd never catch up. At 70-80% duty cycle this may work, but the closer you get to always being busy, the more dangerous this becomes.
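    To sketch a bounded version of that fire-and-forget idea in Java (QueueClient here is a hypothetical stand-in for the poster's AzureQueue wrapper, not a real SDK type): a fixed pool with a bounded work queue and a caller-runs fallback, so deletes degrade gracefully under load instead of accumulating forever.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AsyncDeleter {
    /** Hypothetical stand-in for the poster's AzureQueue wrapper. */
    public interface QueueClient {
        void delete(String messageId, String popReceipt);
    }

    private final QueueClient queue;
    // Bounded queue + caller-runs policy: if deletes fall behind, the
    // submitting thread performs the delete itself instead of queueing
    // work without limit.
    private final ExecutorService pool = new ThreadPoolExecutor(
            4, 4, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(256),
            new ThreadPoolExecutor.CallerRunsPolicy());

    public AsyncDeleter(QueueClient queue) {
        this.queue = queue;
    }

    /** Fire-and-forget: returns as soon as the delete is handed off. */
    public void deleteAsync(String messageId, String popReceipt) {
        pool.execute(() -> queue.delete(messageId, popReceipt));
    }

    /** Drain outstanding deletes before shutting down the workers. */
    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
    }
}
```

    The bounded queue is what keeps the "never catch up" failure mode from being silent: once deletes back up past the limit, producers slow down to match.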
    I wonder if calling the delete REST API yourself may offer any improvements.  If you find the delete sets up a TCP connection each time, this may be all you need.  Try to keep the connection open, or see if the REST API can delete more at a time
    than the SDK API can.
    Or, if you have the funds, just have more VM instances doing the work in parallel, so the first machine handles 25/sec, the second at 25/sec also - and you just live with the slow delete.  If that's still not good enough, add more instances.
    Darin R.

  • ERROR: serializable class HelloComponent does not declare a static final

    Hi everyone! I'm sorry but I'm a newbie to Java and I'm having some, I assume, basic problems learning to compile and run with javac. Here is my code:
    import javax.swing.* ;
    import java.awt.* ;

    public class MyJava {
        /**
         * @param args
         */
        public static void main(String[] args) {
            JFrame frame = new JFrame( "HelloJava" ) ;
            HelloComponent hello = new HelloComponent() ;
            frame.add( hello ) ;
            frame.setSize( 300, 300 ) ;
            frame.setVisible( true ) ;
        }
    }

    class HelloComponent extends JComponent {
        public void paintComponent( Graphics g ) {
            g.drawString( "Hello Java it's me!", 125, 95 ) ;
        }
    }
    And here is the error:
    1. WARNING in HelloJava.java
    (at line 20)
    class HelloComponent extends JComponent {
    ^^^^^^^^^^^^^^
    The serializable class HelloComponent does not declare a static final serialVersionUID field of type long
    ANY HELP WOULD BE GREAT! THANKS! =)

    Every time I extend GameLoop, it gives me the warning
    serializable class X does not declare a static final serialVersionUID field of type long
    This is just a warning. You do not need to fix this because the class you are writing will never be serialized by the gaming engine. If you really want the warning to go away, add the code to your class:
    static final long serialVersionUID=0;

  • The serializable class SpacePainter does not declare a static final serial

    The serializable class SpacePainter does not declare a static final serialVersionUID field of type long
    What does this mean??? It appears as a warning in Eclipse, but I have no idea what it is. It happens when I create a class that extends JFrame or JComponent or JApplet. I finally got it to stop with this:
    static final long serialVersionUID = 1;

    Because your Eclipse is configured that way. You can probably filter the warning out. You don't have to declare the serialVersionUID, but you should if you actually serialize instances of the class.
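    For reference, a minimal component with the field declared (the class name is made up to echo the question):

```java
import java.awt.Graphics;
import javax.swing.JComponent;

// JComponent implements java.io.Serializable, so every subclass is
// serializable, and Eclipse asks for an explicit version field.
public class SpacePainterLike extends JComponent {
    private static final long serialVersionUID = 1L;

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.drawString("painting", 10, 20);
    }
}
```

    The value only matters if you serialize instances: it is compared at deserialization time, so bump it when you make an incompatible change to the class.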

  • Would a 256 GB SSD on a 2012 Mac Mini significantly improve Final Cut Pro X Performance?

    Would a 256 GB SSD on a 2012 Mac Mini significantly improve Final Cut Pro X Performance?
    I'm considering a 2012 Mac Mini 2.6 Quad Core with 16 GB RAM and a 256 GB SSD with external Thunderbolt drives.
    I understand that Mavericks allocates 1GB VRAM from system memory in a 2012 Mini with 8GB RAM.

    It will only improve read and write performance, and that is not as important as the processor and the GPU for Final Cut Pro.
    The Mac mini has an integrated GPU that may be fine if you are a home user, but if you are a professional user you should consider an iMac, a 15-inch MacBook Pro or a Mac Pro. You won't notice a big performance increase.

  • Does ios 7.1.1 improve battery performance ,also what is its effect on the speed of iphone 4s????

    Does iOS 7.1.1 improve battery performance, and what is its effect on the speed of the iPhone 4S?
    If it improves battery life, I want to know whether it reduces the iPhone's speed, as after iOS 7.1 my iPhone got faster but the battery drainage increased.

    Thanks modular. By speed I mean the general performance of the iPhone, which improved after iOS 7.1. I am afraid that updating to iOS 7.1.1 will cost me this perfect performance, but I also need the battery life optimization.

  • Do static methods perform better

    Sometimes, especially when refactoring (specifically when doing extract method), I create a method in a class to perform some helpful task or to neatly tuck a chunk of ugly code away somewhere that it can be easily referred to from the "actual" code. This new method doesn't actually directly cause a state change in the object, it just accepts a parameter and returns a value, or similar. In nearly every case this method is private (since it only helps out the class in which it appears).
    I can easily declare these helper methods static, or leave them unstatic. What I'm wondering is if there's any best practice in doing one or the other, whether for performance reasons or otherwise. I'd tend not to make them static because it might imply that the method should be called from a static context or something, but I wondered what others suggest in this regard.
    Thanks.

    I agree with you Dubwai, and in practice will probably not use static unless I mean it. But, since the methods don't really act on any of the class's instance members, I'm also not entirely convinced that this goes against the "design" -- because whether the method is static or not, it has the exact same effect. In fact, marking it static might even help make it obvious that this method performs no side-effects on instance members. But again, I think I'll just leave them non-static.
    I'm not saying that you shouldn't make things static. Quite the contrary. I'm saying that things should be static if they are static, meaning they don't change polymorphically or per instance. If that is so, the method should definitely be static.
    Not using instance members is a necessary condition of being a static method, but it is not nearly sufficient. Making a method static means throwing polymorphism out of the door. A method may do nothing but return a static final value and still need to be an instance method, if a subclass needs to return a different static final value. If you make a method static and then need to refactor, it can be difficult or impossible to do so, as pointed out above. If you are calling static methods as if they were instance methods, that indicates to me that they should probably just be instance methods.
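    A small sketch of the refactoring trap described above (the names are made up): a method that merely returns a constant may still need to be an instance method so a subclass can supply a different constant.

```java
public class StaticVersusInstance {
    static class Client {
        // Uses no instance state, but staying an instance method keeps
        // the door open for polymorphic overrides.
        int maxAttempts() {
            return 3;
        }
        // A static version could not be overridden, only hidden.
    }

    static class PatientClient extends Client {
        @Override
        int maxAttempts() {
            return 10;
        }
    }

    public static void main(String[] args) {
        Client c = new PatientClient();
        System.out.println(c.maxAttempts()); // dynamic dispatch prints 10
    }
}
```

    Had maxAttempts been declared static, the call site would be bound to Client's version at compile time and PatientClient could never change it.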

  • A64 Tweaker and Improving Performance

    I noticed a little utility called "A64 Tweaker" being mentioned in an increasing number of posts, so I decided to track down a copy and try it out...basically, it's a memory tweaking tool, and it actually is possible to get a decent (though not earth-shattering by any means) performance boost with it.  It also lacks any real documentation as far as I can find, so I decided to make a guide type thing to help out users who would otherwise just not bother with it.
    Anyways, first things first, you can get a copy of A64 Tweaker here:  http://www.akiba-pc.com/download.php?view.40
    Now that that's out of the way, I'll walk through all of the important settings, minus Tcl, Tras, Trcd, and Trp, as these are the typical RAM settings that everyone is always referring to when they go "CL2.5-3-3-7", so information on them is widely available, and everyone knows that for these settings, lower always = better.  Note that for each setting, I will list the measured change in my SiSoft Sandra memory bandwidth score over the default setting.  If a setting produces a change of < 10 MB/sec, its effects will be listed as "negligible" (though note that it still adds up, and a setting that has a negligible impact on throughput may still have an important impact on memory latency, which is just as important).  As for the rest of the settings (I'll do the important things on the left hand side first, then the things on the right hand side...the things at the bottom are HTT settings that I'm not going to muck with):
    Tref - I found this setting to have the largest impact on performance out of all the available settings.  In a nutshell, this setting controls how your RAM refreshes are timed...basically, RAM can be thought of as a vast series of leaky buckets (except in the case of RAM, the buckets hold electrons and not water), where a bucket filled beyond a certain point registers as a '1' while a bucket with less than that registers as a '0', so in order for a '1' bucket to stay a '1', it must be periodically refilled (i.e. "refreshed").  The way I understand this setting, the frequency (100 MHz, 133 MHz, etc.) controls how often the refreshes happen, while the time parameter (3.9 microsecs, 1.95 microsecs, etc.) controls how long the refresh cycle lasts (i.e. how long new electrons are pumped into the buckets).  This is important because while the RAM is being refreshed, other requests must wait.  Therefore, intuitively it would seem that what we want are short, infrequent refreshes (the 100 MHz, 1.95 microsec option).  Experimentation almost confirms this, as my sweet spot was 133 MHz, 1.95 microsecs...I don't know why I had better performance with this setting, but I did.  Benchmark change from default setting of 166 MHz, 3.9 microsecs: + 50 MB/sec
    Trfc - This setting offered the next largest improvement...I'm not sure exactly what this setting controls, but it is doubtless similar to the above setting.  Again, lower would seem to be better, but although I was stable down to '12' for the setting, the sweet spot here for my RAM was '18'.  Selecting '10' caused a spontaneous reboot.  Benchmark change from the default setting of 24:  +50 MB/sec
    Trtw - This setting specifies how long the system must wait after it reads a value before it tries to overwrite the value.  This is necessary due to various technical aspects related to the fact that we run superscalar, multiple-issue CPUs that I don't feel like getting into, but basically, smaller numbers are better here.  I was stable at '2'; selecting '1' resulted in a spontaneous reboot.  Benchmark change from default setting of 4:  +10 MB/sec
    Twr - This specifies how much delay is applied after a write occurs before the new information can be accessed.  Again, lower is better.  I could run as low as 2, but didn't see a huge change in benchmark scores as a result.  It is also not too likely that this setting affects memory latency in an appreciable way.  Benchmark change from default setting of 3:  negligible
    Trrd - This controls the delay between a row address strobe (RAS) and a second row address strobe.  Basically, think of memory as a two-dimensional grid...to access a location in a grid, you need both a row and column number.  The way memory accesses work is that the system first asserts the row that it wants (the row address strobe, or RAS), and then asserts the column that it wants (the column address strobe, or CAS).  Because of a number of factors (prefetching, block addressing, the way data gets laid out in memory), the system will often access multiple columns from the same row at once to improve performance (so you get one RAS, followed by several CAS strobes).  I was able to run stably with a setting of 1 for this value, although I didn't get an appreciable increase in throughput.  It is likely however that this setting has a significant impact on latency.  Benchmark change from default setting of 2:  negligible
    Trc - I'm not completely sure what this setting controls, although I found it had very little impact on my benchmark score regardless of what values I specified.  I would assume that lower is better, and I was stable down to 8 (lower than this caused a spontaneous reboot), and I was also stable at the max possible setting.  It is possible that this setting has an effect on memory latency even though it doesn't seem to impact throughput.  Benchmark change from default setting of 12:  negligible
    Dynamic Idle Cycle Counter - I'm not sure what this is, and although it sounds like a good thing, I actually post a better score when running with it disabled.  No impact on stability either way.  Benchmark change from default setting of enabled:  +10 MB/sec
    Idle Cycle Limit - Again, not sure exactly what this is, but testing showed that both extremely high and extremely low settings degrade performance by about 20 MB/sec.  Values in the middle offer the best performance.  I settled on 32 clks as my optimal setting, although the difference was fairly minimal over the default setting.  This setting had no impact on stability.  Benchmark change from default setting of 16 clks:  negligible
    Read Preamble - As I understand it, this is basically how much of a "grace period" is given to the RAM when a read is asserted before the results are expected.  As such, lower values should offer better performance.  I was stable down to 3.5 ns, lower than that and I would get freezes/crashes.  This did not change my benchmark scores much, though in theory it should have a significant impact on latency.  Benchmark change from default setting of 6.0 ns:  negligible
    Read Write Queue Bypass - Not sure what it does, although there are slight performance increases as the value gets higher.  I was stable at 16x, though the change over the 8x default was small.  It is possible, though I think unlikely, that this improves latency as well.  Benchmark change from default setting of 8x:  negligible
    Bypass Max - Again not sure what this does, but as with the above setting, higher values perform slightly better.  Again I feel that it is possible, though not likely, that this improves latency as well.  I was stable at the max of 7x.  Benchmark change from the default setting of 4x:  negligible
    Asynch latency - A complete mystery.  Trying to run *any* setting other than default results in a spontaneous reboot for me.  No idea how it affects anything, though presumably lower would be better, if you can select lower values without crashing.
    ...and there you have it.  With the tweaks mentioned above, I was able to gain +160 MB/sec on my Sandra score, +50 on my PCMark score, and +400 on my 3dMark 2001 SE score.  Like I said, not earth-shattering, but a solid performance boost, and it's free to boot.  Settings that I felt had no use in tweaking the RAM for added performance, or which are self-explanatory, have been left out.  The above tests were performed on Corsair XMS PC4000 RAM @ 264 MHz, CL2.5-3-4-6 1T.

    Quote
    Hm...I wonder which one is telling the truth, the BIOS or A64 tweaker.
    I've wondered this myself.  From my understanding it's the next logical step from the WCREDIT programs.  I understand how a clock gen can misreport frequency, because it's probably not measuring frequency itself but rather deriving a mathematical representation from a few numbers it's gathered and one clock frequency (HTT maybe?), and the non-supported dividers mess up the math.  But I think the tweaker just extracts hex values straight from the registers and displays them in "English".  I mean, it could be wrong, but seeing how I've watched the BIOS on the SLI Plat change the memory timings in the POST screen to values other than SPD when it's on Auto with aggressive timings disabled, I actually want to side with the A64 tweaker in this case.
    Hey, does anyone know what Tref in A64 Tweaker relates to in the BIOS?  i.e. 200 1.95us = what in the BIOS?  1x4028, 1x4000 (I'm just making up numbers here), but it's different from 200 1.95.  Last time I searched I didn't find anything; well, I found a lot, but not what I wanted.

  • Improving Performance

    Hi Experts,
    How can we improve the performance of a SELECT without creating a secondary index?
    In my SELECT query I am not using primary key fields in the WHERE condition,
    so I want to know how we can improve the performance.
    One more thing: if we create a secondary index, what are the disadvantages of that?
    Thanks & Regards,
    Amit.

    If you select from a table without using an appropriate index or key, then the database will perform a table scan to get the required data.  If you accept that this will be slow but must be used, then the key to improving performance of the program is to minimise the number of times it does the scan of the table.
    Often the way to do this is not what would normally be counted as good programming.
    For example, if you SELECT inside a loop or SELECT using FOR ALL ENTRIES, the system can end up doing the table scan a lot of times, because the SQL is broken up into lots of individual/small selects passed to the database one after the other.  So it may be quicker to SELECT from the table into an internal table without specifying any WHERE conditions, and then delete the rows from the internal table that are not wanted.  This way you do only a single table scan on the database to get all records.  Of course, this uses a lot of memory, which is often the trade-off.  If you have a partial key and are then selecting based on non-indexed fields, you can get all records matching the partial key and then throw away those where the remaining fields don't meet requirements.
    Andrew
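    The "fetch once, filter in memory" pattern Andrew describes can be sketched as follows. This is a minimal illustration in Java rather than ABAP, with an in-memory list standing in for the database table; the class, record, and field names are invented for the example.

    ```java
    import java.util.List;
    import java.util.stream.Collectors;

    public class SingleScanFilter {
        // Hypothetical table row with a partial key and a non-indexed field.
        record Row(String partialKey, String nonIndexedField, int value) {}

        // Stand-in for one unconditioned SELECT: a single full table scan.
        static List<Row> fetchAll() {
            return List.of(
                new Row("A", "keep", 1),
                new Row("A", "drop", 2),
                new Row("B", "keep", 3));
        }

        // Instead of issuing one small query per condition (many scans),
        // filter the in-memory copy and throw away the unwanted rows.
        static List<Row> select(String partialKey, String field) {
            return fetchAll().stream()
                .filter(r -> r.partialKey().equals(partialKey))
                .filter(r -> r.nonIndexedField().equals(field))
                .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            System.out.println(select("A", "keep").size()); // prints 1
        }
    }
    ```

    As the answer notes, the trade-off is memory: the whole table is held in the internal list so that the database is scanned only once.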

  • Third-party hardware upgrades to improve performance in AE?

    Hey Folks,
    Quick question.  Does anyone know of any third-party hardware cards that will improve performance in AE CS5?  I'm running a MacPro Quad-Core (two processor, 8 core) 3.2GHz with 16GB RAM, 4TB internal drives on a MacPro Raid Card.  Ideally I'd love to get a card that would accelerate Photoshop and Final Cut as well.  Does anyone make third-party cards that do that?  Thanks,
    Justin

    Mylenium wrote:
    No. Not since 15 years ago (ICE board for Final Effects).
    Mylenium
    Good lord. I had two--not one, but TWO--of those cool but utterly useless boat anchors.
    We were a victim of marketing.
    Bad software, unrealistic performance claims, and worthless support from a company that, when starting out, truly thought they had revolutionized the entire world of rendering effects on the NuBus Macintosh.
    All that remains of those boards is a pocketful of lovely blue anodized CNC'd heatsinks.
    bogiesan

  • Does making more classes or encapsulation affect performance?

    I have a Frame with a TabbedPane on it. There are 8 tabs. Each tab contains a JTable with lots of fields & 4 buttons.
    When I design all this stuff in JBuilder, all panels are put into one class (the ...Frame class). I want to put every tab's JPanel into a separate class (a ...JPanel class). But some say that it might decrease overall performance. That sounds like nonsense to me, but I'm not an experienced programmer, and I'd like to make it clear for myself.
    One more thing. If I set all class fields private and then write get/set methods, is field access slower in that case than if I make all fields public?
    Can anyone help me? Does anyone know what really decreases performance and what doesn't?

    ...but some say that it might decrease overall performance
    Having more classes does increase the time it takes a program to run because of the additional class loading activity. Deciding whether to create another class is one of the toughest things in o-o design, and it rests on whether the extra load time is worth it for readability and reusability. The overall impact is not that large if you are talking about five or ten classes.
    If I'm setting all class fields private & then write get/set, is the field access slower in this case than if I'm making all fields public?
    Yes it is, but it is pretty negligible, especially if you are just returning a value. The real advantage of get/set is to allow for additional processing to take place when a property is modified or queried.
    Mitch Goldstein
    Author, Hardcore JFC (Cambridge Univ Press)
    [email protected]
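    The accessor trade-off Mitch describes can be sketched like this. The class and the validation rule are hypothetical examples; the point is that a trivial getter costs next to nothing (the JIT typically inlines it), while a setter gives you a place to add processing that a public field never could.

    ```java
    public class Temperature {
        private double celsius;            // encapsulated state

        // Trivial getter: just returns the value. The JIT compiler will
        // normally inline this, so it performs about the same as reading
        // a public field directly.
        public double getCelsius() {
            return celsius;
        }

        // The real advantage of a setter: extra processing (here,
        // validation) runs whenever the property is modified.
        public void setCelsius(double c) {
            if (c < -273.15) {
                throw new IllegalArgumentException("below absolute zero");
            }
            celsius = c;
        }
    }
    ```

    With a public field, the validation above would have to be duplicated at every assignment site, or skipped entirely.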
