Is a static function faster than a non-static function?

Hi,
I am wondering about the performance of static vs. non-static methods.
To avoid object creation, I wrote some static methods in my class, but I wonder whether that actually made anything faster.
Any pointers in this direction would be helpful.
Jacinle

to mattbunch
I still haven't pinpointed the exact system bottleneck, but I do think finding a better approach as early as possible is better.

Actually, the generally accepted best practice is to start with good algorithms and data structures, write the code, test the code, profile the code, and then do this kind of nickel-and-dime optimization after specific bottlenecks have been pinpointed. You absolutely should not make a static/non-static decision based on performance considerations. In fact, the idea that non-static is slower due to object creation is rather muddy thinking. You'd either already have the object and invoke its non-static method, or you'd invoke a static method. In the static case, your design would probably be different anyway, so you can't predict whether time saved by not creating an object would be lost by executing other code paths.
The case in which you count object creation time is if you're creating an object just to call this one method on it and get that method's result, and then not using the object anymore. In that case, the method should be static--not for performance reasons, but rather because the way you are using it suggests that the appropriate design is for it to be static.
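To make that design point concrete, here is a minimal sketch (hypothetical class and method names, not from the thread): a pure computation on its arguments reads naturally as static, while behavior that depends on an object's state belongs on the instance.

public class TemperatureReading {
    private final double celsius;

    public TemperatureReading(double celsius) {
        this.celsius = celsius;
    }

    // Depends on this object's state: a natural instance method.
    public boolean isFreezing() {
        return celsius <= 0.0;
    }

    // Pure function of its arguments, touches no instance state: a natural static method.
    public static double toFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

TemperatureReading.toFahrenheit(21.0) needs no object at all, while new TemperatureReading(21.0).isFreezing() only makes sense once the object exists; the design decides, not micro-performance.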
> So many decisions have to be made in designing software. I wonder if there is any practice I can follow.
See "Thinking in Java" by Bruce Eckel, and "Practical Programming in Java" by Peter Haggar

Similar Messages

  • Problems running basic text in After Effects faster than 19fps... what exactly do I need?

    OK, so I finally upgraded my computer into the mild 21st century, and to my disappointment, I cannot seem to run anything as smoothly as I had thought.
    These are the specs for my computer:
    ASUS M5A99X EVO motherboard
    8 gigs DDR3 1600 RAM
    NVIDIA 9800GT 1 gig DDR3 GPU
    AMD Phenom II X4 B50 processor at 3.2GHz (i.e. it's an AMD Athlon II 450 X3 3.2GHz with its fourth core unlocked, which has given me no problems thus far and seems very stable)
    150 gig 7200rpm SATA 2 hard drive (OLD)
    200 gig 5400rpm SATA 2 hard drive (OLD AS SH*T)
    300 gig portable USB 2 HD (7200rpm) (2 years old)
    Basically, I can't seem to run even basic text in After Effects faster than 19 FPS.
    I've tried changing the resolution to half, and even a fourth, and that didn't work at all; in fact it made it run about 1 frame worse.
    I tried changing the OpenGL texture memory, raising and lowering it, but to no avail. I've changed the RAM usage in After Effects to use 2 gigs per core, then one gig, then turned off multi-frame rendering altogether, and nothing.
    I feel like I've tried everything in my power.
    Now the iMacs at my school run the program smooth as hell... and they aren't that much better, spec-wise, than my computer.
    Even my friend's iMac can run it smoothly, and he only has an i5 CPU at 2.4GHz. My understanding of CPUs is that those are better processors, but it's not that much better, and even still, why would that be necessary just to run text scrolling across the screen?
    Even more so, why would changing the resolution not have any effect?
    What exactly do I need to run After Effects smoothly for a basic text scroll at, say, 720p?
    I need to know what to upgrade; soon I plan to get CS6 and I would like to have a computer that can edit basic HD properly.
    What I really don't get is that I know people with laptops that run AE smoothly, and their specs are much worse than my machine's, some with only 4 gigs of RAM...
    Is there something wrong? Do I have some sort of frame limiter that's capping at 19 FPS? Is there some sort of memory leak?
    Any help would be much appreciated.
    Now the only thing I can think of that's holding me back is the crappy hard drives; everything else seems like it should at least run text in After Effects at 30 FPS.

    Thanks, that at least is enough to get me started. lol, I have a deadline tomorrow and have been burning a lot of time just trying to get this to run smoothly.
    BTW, I am running the project off of the portable drive. I switched from the old but faster SATA 2 hard drive to the portable, thinking that might increase the speed, which it didn't.
    What I might do is crack the case and just plug it straight into the computer, though I am hesitant to do so; if I were going to do that, I might as well just purchase a USB 3.0 one instead so I can get SATA 3 out of it, since those cases don't exactly just snap back together.
    When I say basic text, I mean layered text, just word after word in order. I honestly don't have any plugins that I know of (if I had the money for them I would probably have spent it on a better computer), so what I have is what came with the Master Collection.
    And when I say 19 FPS, I mean spacebar...
    NOW I KNOW that I'm not guaranteed 30 FPS when running the preview, but when I use the Mac, it previews fine... and I just looked up my CPU in comparison to the i5 in the iMac that I was referring to, and mine is actually faster according to some benchmarks. Granted, it's not faster than the vast majority of i5s and i7s, but compared to the particular ones in the computers I was referring to, mine is actually faster overall, so I figure it's not a CPU thing (unless it's an "our software only works right on Intel" thing).
    Now as far as the 3D camera, yes I am using it, but even when I run the text without a camera (i.e. the thing that you have in your comp) or any sort of 3D layering, it runs just as slow.
    The audio might be a problem. I used to have a sound card, but that died about a year ago, so I have been using onboard sound (Realtek HD something), which truly sucks in comparison to a proper sound card, but I can't imagine the iMacs have anything better; I mean the sound from the iMac kinda sucks altogether, doesn't even have any sort of virtual surround... But it could be a driver issue; Realtek is kind of ghetto in that regard.
    I will try some of the tips above (the OpenGL and the preview output and such), and thank you very much.
    *EDIT*
    OK, so with the preview output, I have "computer monitor only"? Is that what you meant?
    *EDIT*
    OK, so I did the OpenGL thing, removed it, and for a brief few seconds it started to run at a mix of 25 to 30 FPS; then, when I went to play it again, it was back at 19.

  • How can floating point division be faster than integer division?

    Hello,
    I don't know if this is a Java quirk, or if I am doing something wrong. Check out this code:
    public class TestApp {
         public static void main(String args[]) {
              long lngOldTime;
              long lngNewTime;
              long lngTimeDiff;
              int Tmp;
              lngOldTime = System.currentTimeMillis();
              for (int A = 1; A <= 20000; A++)
                   for (int B = 1; B <= 20000; B++)
                        Tmp = A / B;
              lngNewTime = System.currentTimeMillis();
              lngTimeDiff = lngNewTime - lngOldTime;
              System.out.println(lngTimeDiff);
         }
    }
    It reports that the division operations took 18,116 milliseconds.
    Now check out this code (integers replaced with doubles):
    public class TestApp {
         public static void main(String args[]) {
              long lngOldTime;
              long lngNewTime;
              long lngTimeDiff;
              double Tmp;
              lngOldTime = System.currentTimeMillis();
              for (double A = 1; A <= 20000; A++)
                   for (double B = 1; B <= 20000; B++)
                        Tmp = A / B;
              lngNewTime = System.currentTimeMillis();
              lngTimeDiff = lngNewTime - lngOldTime;
              System.out.println(lngTimeDiff);
         }
    }
    It runs in 11,276 milliseconds.
    How is it that the second code snippet could be so much faster than the first? I am using jdk1.4.2_04
    Thanks in advance!

    > I'm afraid you missed several key points. I only used longs for measuring the time (System.currentTimeMillis returns a long).

    Sorry, you are correct, I did miss that.

    > However, even if I had, double is also a 64-bit data type, so technically that would have been a fairer test. The fact that 64-bit floating-point divisions are faster than 32-bit integer divisions is what confuses me. Oh, just in case you're interested, using floats in that same snippet takes only 7,471 milliseconds to execute!

    Then the other explanation is that the HotSpot compiler is optimizing the floating-point code to use the CPU's floating-point instructions but is not optimizing the integer divide in the same way.
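    For what it's worth, a slightly fairer way to run this comparison on a modern JVM (a hypothetical harness, assuming Java 5+ for System.nanoTime) is to accumulate the results, so the JIT cannot discard the divisions as dead code, and to repeat the measurement so the later rounds run fully JIT-compiled:

    public class DivisionBench {
        // Accumulating into a sink prevents the JIT from eliminating the divisions.
        static long intPass() {
            long sink = 0;
            for (int a = 1; a <= 20000; a++)
                for (int b = 1; b <= 20000; b++)
                    sink += a / b;
            return sink;
        }

        static double doublePass() {
            double sink = 0;
            for (double a = 1; a <= 20000; a++)
                for (double b = 1; b <= 20000; b++)
                    sink += a / b;
            return sink;
        }

        public static void main(String[] args) {
            for (int round = 0; round < 3; round++) { // first round doubles as warm-up
                long t = System.nanoTime();
                long s1 = intPass();
                long intMs = (System.nanoTime() - t) / 1000000;
                t = System.nanoTime();
                double s2 = doublePass();
                long dblMs = (System.nanoTime() - t) / 1000000;
                System.out.println("int: " + intMs + " ms, double: " + dblMs
                        + " ms (sinks: " + s1 + ", " + s2 + ")");
            }
        }
    }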

  • Is this logging code faster than using a standard logging API like log4J

    Is this logging code faster than using a standard logging API like log4j or the logging API in Java 1.4?
    As you can see, my needs are extremely simple: write some stuff to a text file and write some stuff to the DOS window.
    I am thinking about using this with a multithreaded app, so all ~200 threads will be using it simultaneously.
    import java.io.*;
    import java.util.*;
    import java.text.*;

    /**
     * Tracer logs items according to the following criteria:
     * 2 = goes to text file Crawler_log.txt
     * 1 = goes to console window because it is higher priority.
     * @author Stephen
     * @version 1.0
     * @since June 2002
     */
    class Tracer {

        public static void log(int traceLevel, String message, Object value) {
            if (traceLevel == 1) {
                System.out.println(getLogFileDate(new Date()) + " >" + message + " value = " + value.toString());
            } else {
                pout.write(getLogFileDate(new Date()) + " >" + message + " value = " + value.toString());
                pout.flush();
            }
        }

        public static void log(int traceLevel, String message) {
            if (traceLevel == 1) {
                System.out.println(message);
            } else {
                pout.write(message);
                pout.flush();
            }
        }

        // public static accessor method
        public static Tracer getTracerInstance() {
            return tracerInstance;
        }

        private static String getLogFileDate(Date d) {
            String s = df.format(d);
            String s1 = s.replace(',', '-');
            String s2 = s1.replace(' ', '-');
            String s3 = s2.replace(':', '.');
            System.out.println("getLogFileDate() = " + s3);
            return s3;
        }

        // private constructor
        private Tracer() {
            System.out.println("Tracer constructor works");
            df = DateFormat.getDateTimeInstance(DateFormat.MEDIUM, DateFormat.MEDIUM);
            date = new java.util.Date();
            try {
                pout = new PrintWriter(new BufferedWriter(new FileWriter("Crawler_log" + getLogFileDate(new Date()) + ".txt", true)));
                pout.write("**************** New Log File Created " + getLogFileDate(new Date()) + "****************");
                pout.flush();
            } catch (IOException e) {
                System.out.println("**********THERE WAS A CRITICAL ERROR GETTING TRACER SINGLETON INITIALIZED. APPLICATION WILL STOP EXECUTION. ******* ");
            }
        }

        public static void main(String[] argz) {
            System.out.println("main method starts ");
            Tracer tt = Tracer.getTracerInstance();
            System.out.println("main method successfully gets Tracer instance tt. " + tt.toString());
            // the next method is where it fails - on pout.write() of log method. Why ?
            // Likely culprit: static initializers run in textual order, so the
            // "= null" field initializations below execute AFTER the Tracer
            // constructor has already assigned pout, date and df, wiping them out.
            tt.log(1, "HIGH PRIORITY");
            System.out.println("main method ends ");
        }

        // private static reference
        private static Tracer tracerInstance = new Tracer();
        private static Date date = null;
        private static PrintWriter pout = null;
        public static DateFormat df = null;
    }

    In general I'd guess that a small, custom thing will be faster than a large, generic thing with a lot of options. That is, unless the writer of the small program has done something stupid, or the writer of the large program has done something very smart.
    One problem with Java in this respect is that it is next to impossible to judge exactly how much machine-level processing a single Java statement takes. Things like JIT compilers make it even harder.
    In the end, there is really only one way to find out: test it.
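    In that spirit, a rough sketch of such a test (hypothetical harness, assuming the Tracer class above compiles; java.util.logging stands in for log4j since both are file-backed loggers):

    import java.util.logging.*;

    public class LogBench {
        public static void main(String[] args) throws Exception {
            final int N = 100000;

            // Route java.util.logging to its own file, comparable to Tracer's level 2.
            Logger jul = Logger.getLogger("bench");
            jul.setUseParentHandlers(false);
            FileHandler fh = new FileHandler("jul_bench.log");
            fh.setFormatter(new SimpleFormatter());
            jul.addHandler(fh);

            long t = System.nanoTime();
            for (int i = 0; i < N; i++)
                jul.info("message " + i);
            System.out.println("java.util.logging: " + (System.nanoTime() - t) / 1000000 + " ms");

            t = System.nanoTime();
            for (int i = 0; i < N; i++)
                Tracer.log(2, "message " + i);
            System.out.println("Tracer:            " + (System.nanoTime() - t) / 1000000 + " ms");
        }
    }

    Numbers from a harness like this are only meaningful relative to each other within one run, and a multithreaded test (the poster mentions ~200 threads) would also have to measure lock contention on the shared writer.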

  • Is Mac to Mac faster than Mac to PC?

    Someone told me something which doesn't make sense to me, but I could be wrong. Is it true that a website made with my iMac opens faster on another Mac than on, say, a Windows PC? This is what the person wrote to me. I don't know how to respond.
    She wrote:
    "I know that, but when you are working with the same exact equipment it goes faster too; it doesn't have to convert back and forth. That I know too."

    Website speed varies by web standards and web browsers used. See my FAQ* on what web standards are, and what web browsers exist:
    http://www.macmaps.com/browser.html
    Connection speed also varies widely unless you have a dedicated internet line. Not even ADSL is truly dedicated because your upstream is capped and connections on websites are as much a function of upstream as downstream traffic.
    - * Links to my pages may give me compensation.

  • Strange AIR performance; == faster than ===

    Hi.
    I have been hunting down some strange memory usage in one of our games, and tracked it down to numbers being compared to 0.0 in a loop.
    I have concentrated the observations down to a small profile snippet.
    Observations:
    1) I expected the first two profiles to take the same amount of time, as everything is typed. But comparing a Number that holds a whole number is a LOT faster (only true on desktop and Android).
    Why is this? Is the JIT-compiled version storing Numbers that are whole numbers as integers, and thus causing a lot of type conversion (and memory allocation)?
    2) I have been told (and have experienced on iOS) that the === operator is faster than == when a type coercion is possible; tests 1+2 vs. 3+4 show that this is not always the case on some platforms.
    Is this normal? Have I messed up something when building my AIR apps? If this is reproducible, can somebody with better knowledge of the internals of the AIR runtime explain when it's better to use === over == (for performance)?
    private var _unused:Number = 0;

    private function profileNumberCompare():void
    {
       var t0:int = getTimer();
       _unused += profileEqEqEq(0.1);
       var t1:int = getTimer();
       _unused += profileEqEqEq(1.0);
       var t2:int = getTimer();
       _unused += profileEqEq(0.1);
       var t3:int = getTimer();
       _unused += profileEqEq(1.0);
       var t4:int = getTimer();
       trace(t1 - t0);
       trace(t2 - t1);
       trace(t3 - t2);
       trace(t4 - t3);
    }

    private function profileEqEqEq(inc:Number):Number
    {
       var x:Number = 0.0;
       var z:Number = 0.0;
       for (var i:int = 0; i < 10000000; i++)
       {
          if (x === 0.0) z += 1;
          x += inc;
       }
       return z;
    }

    private function profileEqEq(inc:Number):Number
    {
       var x:Number = 0.0;
       var z:Number = 0.0;
       for (var i:int = 0; i < 10000000; i++)
       {
          if (x == 0.0) z += 1;
          x += inc;
       }
       return z;
    }
    Results:
    Desktop, adl  (E5530 @ 2.4GHz):
    1411
    171
    68
    71
    ipad4:
    52
    53
    53
    55
    nexus5:
    1230
    373
    148
    146

    Since your iPad is only 4 months old, I'd make an appointment at your local apple store and have them check it out.

  • Wrong Time -- 3 mins faster than my iMac

    Both my iPad and my newest generation touch are several minutes faster than the iMac they sync with. My nano is within seconds, so that works fine.

    Don't know about the iPad, but AFAICT, the current iPod touch DOES NOT TIME SYNC with either a desktop computer or over Wi-Fi. While this is absolutely absurd, as 1st generation Palms 15 years ago could sync the clock, the new iPod touch does not. You can only change the time manually, and then only to the nearest minute; there is no way I know of to set it to the second.
    What's worse, despite hours of looking, I can't find an App Store app that will perform a time sync either. I think Apple refuses to allow developers access to the internal clock.
    I hope someone proves me wrong, but unless Apple fixes this problem or allows others to do it, we will be without this very basic function.
    Message was edited by: bobjbkln

  • Vector is way faster than HashMap (why?)

    I thought that HashMap would be faster than Vector (at adding stuff)... could anyone explain to me why there is such a HUGE difference?
    Here's the code I used:
    import java.util.*;

    public class SpeedTest {
       public static void main(String[] args) {
          final int max = 1000001;
          Integer[] arrayzinho = new Integer[max];
          Arrays.fill(arrayzinho, 0, max, new Integer(1));
          Vector jota = new Vector(max, max);
          HashMap ele = new HashMap(max, 1);

          System.out.println("Adding " + (max - 1) + " elements to the array...");
          long tempo = System.currentTimeMillis();
          for (int i = 0; i < max; i++)
             arrayzinho[i] = new Integer(i);
          System.out.println("The operation took " + (System.currentTimeMillis() - tempo) + " msecs.");

          System.out.println("Adding " + (max - 1) + " elements to the Vector...");
          tempo = System.currentTimeMillis();
          for (int i = 0; i < max; i++)
             jota.add(arrayzinho[i]); // the [i] was likely eaten as italics markup in the original post
          System.out.println("The operation took " + (System.currentTimeMillis() - tempo) + " msecs.");

          System.out.println("Adding " + (max - 1) + " elements to the HashMap...");
          tempo = System.currentTimeMillis();
          for (int i = 0; i < max; i++)
             ele.put(arrayzinho[i], arrayzinho[i]);
          System.out.println("The operation took " + (System.currentTimeMillis() - tempo) + " msecs.");
       }
    }
    Of course, when adding to the HashMap, two values are entered instead of just the one added to the Vector... But even doubling the time the Vector used, the difference is huge!
    Here's some output I've got:
    1:
    Adding 1000000 elements to the array...
    The operation took 4500 msecs.
    Adding 1000000 elements to the Vector...
    The operation took 469 msecs.
    Adding 1000000 elements to the HashMap...
    The operation took 7906 msecs.
    2:
    Adding 1000000 elements to the array...
    The operation took 4485 msecs.
    Adding 1000000 elements to the Vector...
    The operation took 484 msecs.
    Adding 1000000 elements to the HashMap...
    The operation took 7891 msecs.
    and so on; the results are almost the same every time it's run. Does anyone know why?

    Note: this only times the for loop and the insert into each one... not the lookup time and the array stuff of the original.
    Test One:
    Uninitialized capacity for Vector and HashMap

    import java.util.*;

    public class SpeedTest {
        public static void main(String[] args) {
            final int max = 1000001;
            Vector jota = new Vector(); // new Vector(max,max);
            HashMap ele = new HashMap(); // new HashMap(max,1);
            Integer a = new Integer(1);

            long tempo = System.currentTimeMillis();
            for (int i = 0; i < max; ++i)
                jota.add(a);
            long done = System.currentTimeMillis();
            System.out.println("Vector Time " + (done - tempo) + " msecs.");

            tempo = System.currentTimeMillis();
            for (int i = 0; i < max; ++i)
                ele.put(a, a);
            done = System.currentTimeMillis();
            System.out.println("Map Time " + (done - tempo) + " msecs.");
        } // main
    } // SpeedTest

    Administrator@WACO //c
    $ java SpeedTest
    Vector Time 331 msecs.
    Map Time 90 msecs.
    Test Two:
    Initialize the Vector and HashMap capacity

    import java.util.*;

    public class SpeedTest {
        public static void main(String[] args) {
            final int max = 1000001;
            Vector jota = new Vector(max, max);
            HashMap ele = new HashMap(max, 1);
            Integer a = new Integer(1);

            long tempo = System.currentTimeMillis();
            for (int i = 0; i < max; ++i)
                jota.add(a);
            long done = System.currentTimeMillis();
            System.out.println("Vector Time " + (done - tempo) + " msecs.");

            tempo = System.currentTimeMillis();
            for (int i = 0; i < max; ++i)
                ele.put(a, a);
            done = System.currentTimeMillis();
            System.out.println("Map Time " + (done - tempo) + " msecs.");
        } // main
    } // SpeedTest

    Administrator@WACO //c
    $ java SpeedTest
    Vector Time 60 msecs.
    Map Time 90 msecs.
    We see that IF you know the capacity of the Vector before using it, it is BEST to create one with the needed capacity...
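    The same capacity effect is easy to reproduce with the modern replacements (a sketch, not from the thread; assumes Java 7+ and uses ArrayList, which has superseded Vector for unsynchronized use):

    import java.util.ArrayList;
    import java.util.List;

    public class CapacityDemo {
        public static void main(String[] args) {
            final int max = 1000001;
            Integer one = 1;

            long t = System.nanoTime();
            List<Integer> grown = new ArrayList<>(); // starts small, re-allocates its array as it grows
            for (int i = 0; i < max; i++)
                grown.add(one);
            System.out.println("default capacity: " + (System.nanoTime() - t) / 1000000 + " ms");

            t = System.nanoTime();
            List<Integer> sized = new ArrayList<>(max); // one allocation, no resizing
            for (int i = 0; i < max; i++)
                sized.add(one);
            System.out.println("pre-sized:        " + (System.nanoTime() - t) / 1000000 + " ms");
        }
    }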

  • Faster than 2000Hz / TCP takes too long

    1. How can we get faster than 500µs (2000Hz) on a PXI RT-8145 system? 502µs and 501µs work fine, but at 500µs or lower the system doesn't respond. We lose the connection to the RT system, and the RT System Manager doesn't respond either. It doesn't make any difference which VI we use. With 501µs we have a CPU usage of about 91%.
    2. When our program starts, it always takes 1.24 min to connect over TCP/IP. We tried all timings and loop rates, but it's always the same time.
    Message Edited by prol on 04-11-2005 07:58 AM

    Could you describe in a bit more detail your application, including the code running in your host application and the code running in LV RT on the 8145 controller?
    What type of I/O operations are you performing?
    I assume you want to communicate between the host and the LV RT application at 2000 Hz or faster? Why do you need these fast loop rates in your communication? What is the overall purpose of the application? In general, TCP/IP is not well suited to fast loop rates, i.e. sending small packets back and forth at high rates; it is designed more for streaming data, sending larger packets at high data rates in one direction. It sounds like you may be trying to set up a closed-loop controller across Ethernet, which will have a maximum loop rate around the rates you are describing.
    On your second question, please describe what you mean by 'program starts'. What is the sequence of operations on the host and RT system? Do you reboot or start either of these two systems at this time? How is your Ethernet configured? Specifically, how are IP addresses assigned to the two systems? Do you have a DHCP server on the network, or are the IP addresses statically assigned? If neither is true, Windows will take about a minute to a minute and a half to assign its own IP addresses.
    Christian Loew, CLA
    Principal Systems Engineer, National Instruments
    Please tip your answer providers with kudos.
    Any attached Code is provided As Is. It has not been tested or validated as a product, for use in a deployed application or system,
    or for use in hazardous environments. You assume all risks for use of the Code and use of the Code is subject
    to the Sample Code License Terms which can be found at: http://ni.com/samplecodelicense

  • Is Tiger's animation faster than that of Leopard?

    I've installed Mac OS X 10.5.3 and I think Tiger's animation is still faster than that of Leopard. Why?

    I was also curious about JavaScript in Director, so I ran a JS version of the same script:

    function test() {
      startTime = _system.milliseconds;
      for (potentialPrime = 1; potentialPrime <= 1000; potentialPrime++) {
        isPrime = true;
        for (i = 2; i < potentialPrime; i++) {
          n = parseInt(potentialPrime / i);
          if (n * i == potentialPrime) {
            isPrime = false;
            break;
          }
        }
      }
      trace("Total Time: " + (_system.milliseconds - startTime));
    }
    test();

    Well, it surprised me. And disappointed me too. This came in at 5x slower than the same script in Lingo!

  • Faster than millisecond

    I would like to generate a different wave function every 10 ns and send it to a function generator at the same frequency. Is there any possibility to make LabVIEW faster than 1 ms? I will appreciate any suggestion.

    LabVIEW is much faster than 1 ms, but software timing is limited to 1-2 ms. Even that cannot be guaranteed on a general-purpose OS. For deterministic timing, you need to utilize the hardware timing of your DAQ hardware, or use LabVIEW RT or FPGA, for example.
    10 ns is awfully fast. What kind of function generator do you have, and how do you communicate with it? How many points are in each 10 ns chunk of waveform? Is this microwave? How many different functions do you have? Is there a repeating pattern?
    Can you provide a bit more detail on what you are actually trying to do? It seems quite unrealistic.
    LabVIEW Champion. Do more with less code and in less time.

  • Can this class run faster than on HotSpot?

    My case on Sun HotSpot is almost 2 times faster than on JRockit. It's very strange.
    package com.telegram;

    public class byteutils {

         // Lookup table for the hex digits '0'-'9' and 'A'-'F'.
         // (The original post had { ... 54, 56, 57, 58, ... }, which skips
         // 55 ('7') and includes 58 (':'); corrected here.)
         public final static byte[] bytea = { 48, 49, 50, 51, 52, 53, 54, 55, 56,
                   57, 65, 66, 67, 68, 69, 70 };

         public byteutils() {
              super();
         }

         /*
          * convert length = 2L letters Hexadecimal String to length = L bytes
          * Examples: [01][23][45][67][89][AB][CD][EF]
          */
         public static byte[] convertBytes(String hexStr) {
              byte[] a = null;
              try {
                   a = hexStr.getBytes("ASCII");
              } catch (java.io.UnsupportedEncodingException e) {
                   e.printStackTrace();
              }
              final int len = a.length / 2;
              byte[] b = new byte[len];
              int idx = 0;
              int h = 0;
              int l = 0;
              for (int i = 0; i < len; i++) {
                   h = a[idx++];
                   l = a[idx++];
                   h = (h < 65) ? (h - 48) : (h - 55);
                   l = (l < 65) ? (l - 48) : (l - 55);
                   // if ((h < 0) || (l < 0)) return null;
                   b[i] = (byte) ((h << 4) | l);
              }
              a = null;
              return b;
         }

         public static String convertHex(byte[] arr_b) {
              if (arr_b == null)
                   return null;
              final int len = arr_b.length;
              byte[] byteArray = new byte[len * 2];
              int idx = 0;
              int h = 0;
              int l = 0;
              int v = 0;
              for (int i = 0; i < len; i++) {
                   v = arr_b[i] & 0xff;
                   l = v & 0xf;
                   h = v >> 4;
                   byteArray[idx++] = bytea[h];
                   byteArray[idx++] = bytea[l];
              }
              String r = null;
              try {
                   r = new String(byteArray, "ASCII");
              } catch (java.io.UnsupportedEncodingException e) {
                   e.printStackTrace();
              } finally {
                   byteArray = null;
              }
              return r;
         }

         public static void main(String[] argv) {
              byte[] a = new byte[0x10000];
              for (int c = 0; c < 0x10000; c++) {
                   a[c] = (byte) (c % 256);
              }
              String s = "";
              int LOOP = 10000;
              long l = System.currentTimeMillis();
              for (int i = 0; i < LOOP; i++) {
                   s = convertHex(a);
                   a = convertBytes(s);
              }
              l = System.currentTimeMillis() - l;
              double d = l / (double) LOOP;
              System.out.println("" + d + "ms.");
         }
    }

    Thanks! Your code is essentially a microbenchmark testing the performance of sun.nio.cs.US_ASCII.Decoder.decodeLoop() and encodeLoop(), with ~35% and ~30% spent in those two methods respectively. I have verified the behavior (i.e. Sun is faster than JRockit). Due to the microbenchmark nature, it may not affect a larger running program, but it may merit a closer look regardless. I have forwarded to the JRockit perf team for analysis.
    -- Henrik
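    On a side note, since Java 7 the checked-exception dance around "ASCII" can be avoided entirely by passing a Charset object; whether that narrows the HotSpot/JRockit gap would have to be measured. A minimal sketch of convertHex in that style (hypothetical variant, not from the thread):

    import java.nio.charset.StandardCharsets;

    public class byteutils2 {
        private static final byte[] HEX = "0123456789ABCDEF".getBytes(StandardCharsets.US_ASCII);

        public static String convertHex(byte[] arr_b) {
            if (arr_b == null)
                return null;
            byte[] out = new byte[arr_b.length * 2];
            int idx = 0;
            for (byte x : arr_b) {
                int v = x & 0xff;
                out[idx++] = HEX[v >> 4];
                out[idx++] = HEX[v & 0xf];
            }
            // An explicit Charset avoids the charset-name lookup and the
            // checked UnsupportedEncodingException of new String(bytes, "ASCII").
            return new String(out, StandardCharsets.US_ASCII);
        }
    }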

  • Synchronize faster than unsynchronized????

    Here's the deal: I was testing to see how much slower my code would run if I synchronized a block of code, so I fire a bunch of threads, keep track of the time it takes each thread to complete, and then add all of those times to a static variable. To my surprise, the code with synchronized blocks runs faster than when I take them out... WHAT? Why is this?
    Tad

    Without seeing your code, I can only speculate, but here's one example of a case where you could see those results:

    void meth() {
        for (int ix = 0; ix < a_really_big_number; ix++) {
            // do something really small, quick, simple
        }
    }

    Assume you have a single CPU. Say ten threads. If one thread can run that code in time T, then for ten threads to run it, it will take 10T. It doesn't matter if thread 0 runs from start to finish, then thread 1, etc., or if each thread takes turns getting one loop iteration. No matter what, it's a total of 10T: ten times as long for ten threads as for one thread.
    HOWEVER, if thread 0 runs start to finish, then thread 1, etc., then you have a total of 10 context switches (including the one to get thread 0 started in the first place). On the other hand, if each thread gets one pass through the loop body before the next thread gets a turn, you'll have 10 * a_really_big_number context switches. The overhead of those context switches could dominate the CPU time needed to actually run the loop body.
    If you synchronize the method, you're guaranteed that each thread will run start to finish, i.e., minimum context-switching overhead.
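    A small sketch in the same spirit (hypothetical, Java 8+): one shared counter, ten threads, with the loop either inside one synchronized block (long uninterrupted bursts per thread) or unsynchronized (threads interleave and, on a multicore machine, also fight over the counter's cache line). Depending on hardware, the synchronized version can indeed come out ahead:

    import java.util.concurrent.*;

    public class SyncBench {
        static final int THREADS = 10;
        static final int ITERS = 10000000;
        static long counter = 0; // intentionally racy in the unsynchronized case
        static final Object lock = new Object();

        static long run(boolean sync) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(THREADS);
            long t0 = System.nanoTime();
            for (int t = 0; t < THREADS; t++) {
                pool.execute(() -> {
                    if (sync) {
                        synchronized (lock) { // each thread runs its whole loop in one burst
                            for (int i = 0; i < ITERS; i++) counter++;
                        }
                    } else {
                        for (int i = 0; i < ITERS; i++) counter++;
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            return (System.nanoTime() - t0) / 1000000;
        }

        public static void main(String[] args) throws InterruptedException {
            System.out.println("unsynchronized: " + run(false) + " ms");
            System.out.println("synchronized:   " + run(true) + " ms");
        }
    }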

  • Is the Gig version really faster than the 100m version!?

    I just upgraded my 100 meg AEBS to the new gig version and ran a quick-n-easy benchmark, an rsync -e ssh on a 150 meg file. The server is an iMac connected via gig-e, and the MacBook C2D is connected via 802.11n (reporting a consistent 300 mbps in Network Utility, about 20 feet from the router, going through 2-4 sheets of drywall). The tests were conducted in my Chicago apartment, with at least 10 detectable 2.4GHz networks and no 5.8GHz networks that I know of.
    802.11n 5.8GHz with no backwards compatibility was by far the fastest. The fastest test I ran was 11 MBps on the copy; with 802.11a compatibility I believe it was around 8, and 2.4GHz + 802.11g compatibility was around 6. I repeated all tests a few times; the results were pretty consistent.
    These results surprised me, as I was really hoping for a bit faster. I could get 40 MBps on my Linux file server over gig-e to the iMac in previous tests. Unfortunately that machine is down until I get some replacement parts, so I couldn't use it to test the new AEBS. But I seem to remember getting 11 or 12 MBps with the Linux file server over the old AEBS with 100m and 5.8GHz no backwards compatibility.
    So how much of the performance non-difference is due to the iMac vs. the Linux file server, or to the gig-e version being no faster than the 100 meg version, remains to be seen. I'm curious if anyone else has done tests.
    If the router, or this 802.11n implementation, is the bottleneck, folks may not want to waste their money upgrading, unless they really want that 4-port (in bridge mode) gig-e switch on the back.
    Rob

    That is somewhat counterintuitive, as the 802.11n connection speed is reportedly 300 mbps. I understand the implications of protocol overhead, but 70% overhead seems a bit excessive. I guess I'm curious whether the bottleneck is:
    - in the router backplane
    - in the 802.11n protocol
    - in Apple's implementation of 802.11(draft)n
    Also - anyone else have actual benchmark data to share?
    regards
    Rob

  • The time on my pre is faster than normal

    When I used the Pre in my hometown, the time worked fine. I am now at college and still in the same time zone, so it is the same exact time as my hometown, but for some reason it runs faster than normal. So I'll set it to my computer's time, and then later on in the day it will be like an hour ahead. I don't know what to do; help please.
    Post relates to: Pre p100eww (Sprint)

    I've been having a similar problem. Here's my story:
    My wife and I got Pres on the day they went on sale. About a week later I noticed that the time was fast on both of them.
    Over the next few weeks I tried experimenting with everything I could think of. I tried every combination of setting the network time and zone sliders to on and off, but they still gained about 3 minutes a day.
    At one point I called Sprint. They quickly elevated me to the senior tech person, who was stumped. The best they could offer was to do a reset. Doing so did result in the phone grabbing the correct time, but it proved to be a non-fix... it started running fast again.
    In June we spent five days in Las Vegas. During that period, my phone kept absolutely perfect time; my wife's, however, continued to run fast. When we returned home to Seattle, my phone also resumed running fast.
    Last Friday we took the phones into a Sprint store with an on-site repair shop. They took mine in the back, told me they did some updates (not sure what this was, since all my updates were current) and told me it was fine. No such luck. By the next day I noticed it was still running fast. My wife's phone also had some minor screen cracks, so they swapped hers for a refurb unit. Believe it or not, this one started running fast as well.
    At some point I noticed that the phone grabs the correct time if I connect it via USB for a media sync.
    So, that's where I'm at: we've handled three phones and all of them run fast regardless of how the network/zone sliders are set. I really don't know what else to try but would eagerly welcome any suggestions.
