Java benchmarking

Hi everyone... here I found a benchmark of Java vs C++ that says Java is slower, but another benchmark I saw said Java is faster than C++. Which is true? Can you please give me some trustworthy benchmark sites comparing Java and C++?
This is the site I saw that says Java is slower:
http://verify.stanford.edu/uli/java_cpp.html

"All generalizations are false."
The problem with benchmarks is that they can be manipulated to say whatever the tester wants them to say. The same is true for any comparison. There are also a lot of ways to measure "faster".
For my usage, the tool of choice most of the time is Java. That said, if my customer has an investment in M$ technology (M$ servers, SQL Server, staff developers familiar with VB and ASP and such), I'm not serving that customer by pushing them toward a Java solution.
Or to offer yet another quote: "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy." Wm. Shakespeare
My suggestion on this topic is to use the tool that best fits the situation, whether it be Java, C++, or even (gasp) VB. I personally do not like to write VB and will farm that out to somebody I know who does.
The question of faster or slower is moot to most customers. The time they are most interested in shortening is development time.

Similar Messages

  • Web site with java benchmarks for various cpus?

    Is there a web site that shows Java benchmarks run on various P4 and AMD CPUs, so that it is possible to see the relative speed at which Java runs on these CPUs?

    You'll find a few if you search around, but none will be meaningful. The problem with Java, or any other virtual-machine-based language, is that you cannot design a benchmark that really has meaning in any context but the benchmark itself. First off, you're not benchmarking "Java", you're benchmarking the "JVM". I try to convince people not to waste so much time on these speed issues. Your time would be better spent learning how to write better code, which will give a much bigger speed increase than switching from a P4 to an Athlon. Developers often make mistakes and oversights in their work that slow a program down by an order of magnitude (see the sketch after this post); you're never going to see that kind of performance difference between CPUs. So get a good deal on a machine, the most "bang for your buck" if you will, go to town, and stop worrying: HotSpot is fast.
    SPARC chips are a different story because of a little thing called register coloring... but you didn't mention SPARC chips, so I won't tell that story.
    Spinoza
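
    To illustrate the kind of order-of-magnitude difference mentioned above, here is a rough sketch (the class name and loop size are made up for this example) comparing naive String concatenation in a loop with a StringBuffer. The coding choice, not the CPU, dominates the result:

     // ConcatDemo.java - hypothetical example, not from the original post.
     // Builds the same 20,000-character string two ways and prints the elapsed time of each.
     public class ConcatDemo {
          public static void main(String[] args) {
               int n = 20000;

               // Naive: each += copies the whole string built so far (roughly O(n^2) work).
               long start = System.currentTimeMillis();
               String s = "";
               for (int i = 0; i < n; i++) {
                    s += "x";
               }
               long naive = System.currentTimeMillis() - start;

               // StringBuffer: appends in place (roughly O(n) work).
               start = System.currentTimeMillis();
               StringBuffer buf = new StringBuffer();
               for (int i = 0; i < n; i++) {
                    buf.append('x');
               }
               String s2 = buf.toString();
               long buffered = System.currentTimeMillis() - start;

               System.out.println("concat: " + naive + " ms, StringBuffer: " + buffered
                    + " ms, lengths " + s.length() + "/" + s2.length());
          }
     }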

  • FPC Bench, Database API and a lot more...

    FPC Bench is a FREE Java benchmark to test and compare the performance of a phone with other phones.
    FPC Bench is a complete tool to test performance and features.
    - CPU/Memory benchmark (single threaded and multi threaded)
    - NetMeter benchmark (GPRS, EDGE, UMTS, HSDPA speed)
    - Check for total heap memory size
    - Check for free heap memory size
    - Check for full screen's maximum resolution in a Java canvas
    - Check for double buffering
    - Check for RMS size
    - Check for RMS speed (external/internal memory speed)
    - Check for your internet connection speed
    - Check for available profile/configuration
    - Check for the latest APIs:
    JSR 75: File System access API.
    JSR 82: Bluetooth/OBEX API.
    JSR 118: Mobile Information Device Profile API.
    JSR 120: Wireless Messaging API (WMA 1.1).
    JSR 135: Multimedia API (MMAPI)
    JSR 139: Connected Limited Device Configuration 1.1
    JSR 172: Web Services specification.
    JSR 177: Security and Trust Services API.
    JSR 179: Location API.
    JSR 180: SIP API.
    JSR 184: Mobile 3D Graphics.
    JSR 185: Java Tech for Wireless Industry API.
    JSR 205: Wireless Messaging API (WMA 2.0).
    JSR 209: Advanced graphics and user interface.
    JSR 211: Content Handler API.
    JSR 226: Scalable 2D vector graphics for JavaME.
    JSR 229: Payment API.
    JSR 234: Advanced Multimedia API.
    JSR 238: Mobile internationalization API.
    JSR 239: Java binding for OpenGL ES.
    JSR 248: MSA Umbrella.
    JSR 248: Fully featured MSA.
    JSR 256: Mobile Sensor API.
    JSR 257: Contactless communication API.
    This application runs on all Java Micro Edition MIDP platforms.
    We have a big database where you can check whether a phone supports an API, simply by filtering the database by the API you are interested in.
    Please help us enlarge our results database by sending us your results directly from FPC Bench, over the Internet or by SMS.
    Sending results over the Internet is much cheaper than a normal SMS.
    You can find more info about our project here:
    http://www.dpsoftware.org

  • Strange double calculation result with JDK 1.4.2

    Hi,
    I've written a small benchmark to test the power of a Sharp Zaurus PDA. Wanting to compare it with my workstation, I ran it on the PC with the JDK 1.4.2 Beta and was really surprised to discover that the double-precision calculation of Pi gave an incorrect result: Pi = 3.1413934230804443!!!
    I've tried to isolate the bug without success at the moment. It only happens when run from the Zjb program in JDK1.4.2, either from the command line or from Eclipse.
    The result is correct when run with JDK1.4.1, JDK1.4.1_01, and JDK1.1.8, which are also set up on the PC. I extracted the faulty loop and executed the snippet, but the result was OK. I added the preceding lines (running the Ackermann function to test recursion and stack management): still OK, from Eclipse and the command line.
    I think the problem must be a configuration issue on my computer: a 2xPII 350, Win2K SP3. Perhaps the 1.4.2 JVM is using an old library... I can't imagine that the Beta JVM would have such a problem.
    Or there is a bug in the program which makes the stack or the FPU registers break, but I can't find where: all other platforms I tested gave correct results.
    Could someone with a JDK1.4.2 installation test my program and post the results for the Pi calculation?
    The 10KB source is available on http://www.alterna.tv/zjb/Zjb.java
    Thanks.
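
    For reference, the loop in question computes Pi from the Basel series (the sum of 1/j^2 for j = 1, 2, ... converges to pi^2/6). Extracted into a standalone class (PiLoop is just a name made up for this extraction; the loop body is taken from the Zjb source posted below), it looks like this:

     public class PiLoop {
          public static void main(String[] args) {
               // Basel series: the sum of 1/j^2 converges to pi^2/6.
               double sum = 0.0;
               for (int j = 1; j < 1000000; j++) {
                    sum += 1.0 / ((double) j * (double) j);
               }
               double pi = Math.sqrt(sum * 6.0);
               // With this truncation the result should be roughly 3.141592,
               // nowhere near the 3.1413934 reported above.
               System.out.println("Pi = " + pi);
          }
     }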

    Yes, it was the Pentium, back when 100 MHz was top speed...
    My CPUs are not supposed to suffer from that old disease.
    But even if that were the case, a new JVM shouldn't drop software workarounds like this. Today, Intel has restarted the release of the new 3 GHz P4 after adding a software patch for the hardware defect they had detected...
    I post the code for my small program here as my Web site is frequently down this week:
     import java.awt.BorderLayout;
     import java.awt.Button;
     import java.awt.Color;
     import java.awt.Dialog;
     import java.awt.Dimension;
     import java.awt.FlowLayout;
     import java.awt.Frame;
     import java.awt.Graphics;
     import java.awt.GridLayout;
     import java.awt.Label;
     import java.awt.List;
     import java.awt.Panel;
     import java.awt.TextField;
     import java.awt.Toolkit;
     import java.awt.event.ActionEvent;
     import java.awt.event.ActionListener;
     import java.awt.event.WindowAdapter;
     import java.awt.event.WindowEvent;
     import java.io.BufferedInputStream;
     import java.io.BufferedOutputStream;
     import java.io.File;
     import java.io.FileInputStream;
     import java.io.FileOutputStream;
     import java.io.IOException;
     import java.math.BigInteger;

     /**
      * Zjb: Zaurus Java Benchmark
      * @author GenePi
      */
     class Zjb extends Frame {
          static Zjb _mainWindow;
          /** Number of benchmark runs. */
          private final TextField _runs;
          /** Results list */
          private final List _results;
          /** Wait, program is thinking... */
          private final Label _wait;
          /** Start button */
          private final Button _start;
          /** Benchmark running */
          private volatile boolean _running = false;

          /** Lay out the main window. */
          Zjb() {
               super("Zaurus java benchmark 1.0");
               setLayout(new BorderLayout());
               // Input fields
               Panel top = new Panel(new GridLayout(1, 0));
               top.add(new Label("Number of runs"));
               _runs = new TextField("1");
               top.add(_runs);
               add(top, BorderLayout.NORTH);
               // Results list
               _results = new List();
               add(_results, BorderLayout.CENTER);
               // Start button
               final Panel bottom = new Panel(new FlowLayout(FlowLayout.RIGHT));
               _wait = new Label();
               bottom.add(_wait);
               _start = new Button("Start");
               _start.addActionListener(new ActionListener() {
                    public void actionPerformed(final ActionEvent evt) {
                         if (!_running) {
                              // Clear previous results and start benchmark.
                              _results.clear();
                              _start.setLabel("Stop");
                              _wait.setText("Running...          ");
                              bottom.validate();
                              _running = true;
                         } else {
                              _start.setLabel("Start");
                              _wait.setText("");
                              _running = false;
                         }
                    }
               });
               bottom.add(_start);
               // Quit button
               final Button quit = new Button("Quit");
               quit.addActionListener(new ActionListener() {
                    public void actionPerformed(final ActionEvent evt) {
                         System.exit(0);
                    }
               });
               bottom.add(quit);
               add(bottom, BorderLayout.SOUTH);
               // Exit when main window closes
               addWindowListener(new WindowAdapter() {
                    public void windowClosing(final WindowEvent evt) {
                         System.exit(0);
                    }
               });
               Dimension dim = Toolkit.getDefaultToolkit().getScreenSize();
               setSize(dim);
               validate();
          }

          /**
           * The benchmarks.
           * @param runs Number of runs
           */
          private static void runBenchmarks(final int runs) {
               long start;
               long end;
               long totalStart;
               long totalEnd;
               // Integer arithmetic
               start = System.currentTimeMillis();
               totalStart = start;
               int resultInt = 0;
               for (int i = 0; i < runs; i++) {
                    resultInt = ackerman(3, 9);
                    // resultInt = ackerman(3, 7);
               }
               end = System.currentTimeMillis();
               _mainWindow._results.add("Integer arithmetic: " + ((end - start) / 1000.0) + " s [Ack(3,9)=" + resultInt + "]");
               if (!_mainWindow._running) {
                    return;
               }
               // Float and double
               start = System.currentTimeMillis();
               double resultDouble = 0.0;
               for (int i = 0; i < runs; i++) {
                    resultDouble = 0.0;
                    for (int j = 1; j < 1000000; j++) {
                         resultDouble += 1.0 / ((double) j * (double) j);
                    }
                    System.out.println("resultDouble=" + resultDouble);
                    resultDouble = Math.sqrt(resultDouble * 6.0);
               }
               end = System.currentTimeMillis();
               _mainWindow._results.add("Double arithmetic: " + ((end - start) / 1000.0) + " s [Pi=" + resultDouble + "]");
               if (!_mainWindow._running) {
                    return;
               }
               // Big operations
               start = System.currentTimeMillis();
               BigInteger resultBig = new BigInteger("1");
               for (int i = 0; i < runs; i++) {
                    resultBig = fact(3000);
               }
               end = System.currentTimeMillis();
               _mainWindow._results.add("Infinite arithmetic: " + ((end - start) / 1000.0) + " s [3000!=" + resultBig.toString().substring(1, 20) + "...]");
               if (!_mainWindow._running) {
                    return;
               }
               // Strings
               start = System.currentTimeMillis();
               String resultString = null;
               for (int i = 0; i < runs; i++) {
                    final String alphabet = " qwertyuioplkjhgfdsazxcvbnm0789456123./*";
                    StringBuffer buf = new StringBuffer();
                    for (int j = 0; j < 100000; j++) {
                         int pos = j % alphabet.length();
                         buf.append(alphabet.substring(pos, pos + 1));
                    }
                    resultString = buf.toString();
               }
               end = System.currentTimeMillis();
               _mainWindow._results.add("Strings: " + ((end - start) / 1000.0) + " s [" + resultString.substring(1, 20) + "...]");
               if (!_mainWindow._running) {
                    return;
               }
               // Drawing
               start = System.currentTimeMillis();
               for (int i = 0; i < runs; i++) {
                    final int size = 200;
                    Dialog dialog = new Dialog(_mainWindow, "Drawing...", true);
                    dialog.add(new TestPanel(dialog));
                    dialog.setSize(size, size);
                    dialog.show();
               }
               end = System.currentTimeMillis();
               _mainWindow._results.add("Drawing: " + ((end - start) / 1000.0) + " s");
               if (!_mainWindow._running) {
                    return;
               }
               // File copy
               start = System.currentTimeMillis();
               String resultIO = "OK";
               loopIO:
               for (int i = 0; i < runs; i++) {
                    final String tempName = "/tmp/Zjb.tmp";
                    try {
                         BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(tempName));
                         for (int j = 0; j < 1000000; j++) {
                              out.write((byte) j);
                         }
                         out.close();
                         BufferedInputStream in = new BufferedInputStream(new FileInputStream(tempName));
                         for (int j = 0; j < 1000000; j++) {
                              int r = in.read();
                              if ((byte) r != (byte) j) {
                                   resultIO = "Failed";
                                   System.err.println("Content mismatch at " + j);
                                   break loopIO;
                              }
                         }
                         in.close();
                         new File(tempName).delete();
                    } catch (IOException ioe) {
                         resultIO = "Failed";
                         System.err.println(ioe);
                         break loopIO;
                    }
               }
               end = System.currentTimeMillis();
               _mainWindow._results.add("Files I/O: " + ((end - start) / 1000.0) + " s [1MB written/read/deleted: " + resultIO + "]");
               totalEnd = end;
               _mainWindow._results.add("");
               _mainWindow._results.add("Total: " + ((totalEnd - totalStart) / 1000.0) + " s");
          }

          /**
           * Utility function: Ackerman function.
           * @param m
           * @param n
           */
          private static int ackerman(final int m, final int n) {
               if (m == 0) {
                    return (n + 1);
               } else if (n == 0) {
                    return (ackerman(m - 1, 1));
               } else {
                    return ackerman(m - 1, ackerman(m, (n - 1)));
               }
          }

          /**
           * Factorial of big numbers.
           * @param n
           * @return n!
           */
          private static BigInteger fact(final int n) {
               final BigInteger one = new BigInteger("1");
               BigInteger num = new BigInteger("1");
               BigInteger fact = new BigInteger("1");
               for (int i = 2; i <= n; i++) {
                    num = num.add(one);
                    fact = fact.multiply(num);
               }
               return fact;
          }

          /**
           * Benchmark entry point.
           * @param args Command line arguments
           */
          public static void main(String[] args) {
               _mainWindow = new Zjb();
               _mainWindow.show();
               synchronized (Zjb.class) {
                    while (true) {
                         try {
                              Zjb.class.wait(500L);
                         } catch (InterruptedException ie) {
                              // Wake
                         }
                         if (_mainWindow._running) {
                              try {
                                   runBenchmarks(Integer.parseInt(_mainWindow._runs.getText()));
                              } catch (NumberFormatException nfe) {
                                   _mainWindow._runs.setText("1");
                                   runBenchmarks(1);
                              }
                              _mainWindow._running = false;
                              _mainWindow._start.setLabel("Start");
                              _mainWindow._wait.setText("");
                         }
                    }
               }
          }
     }

     class TestPanel extends Panel {
          /** The dialog containing the panel. */
          private final Dialog _dialog;

          TestPanel(final Dialog dialog) {
               _dialog = dialog;
          }

          public void paint(final Graphics g) {
               Dimension dim = getSize();
               g.setColor(Color.white);
               g.fillRect(0, 0, dim.width, dim.height);
               for (int i = 0; i < 1000; i++) {
                    Color color = new Color((int) (Math.random() * Integer.MAX_VALUE));
                    int x = (int) (Math.random() * dim.width);
                    int y = (int) (Math.random() * dim.height);
                    int width = (int) (Math.random() * dim.width);
                    int height = (int) (Math.random() * dim.height);
                    g.setColor(color);
                    g.fillRect(x, y, width, height);
               }
               g.setColor(Color.white);
               g.fillRect(0, 0, dim.width, dim.height);
               for (int i = 0; i < 1000; i++) {
                    Color color = new Color((int) (Math.random() * Integer.MAX_VALUE));
                    int x = (int) (Math.random() * dim.width);
                    int y = (int) (Math.random() * dim.height);
                    int width = (int) (Math.random() * dim.width);
                    int height = (int) (Math.random() * dim.height);
                    g.setColor(color);
                    g.fillOval(x, y, width, height);
               }
               // Hide dialog when finished
               _dialog.hide();
          }
     }

  • Running different JRE on the same system to measure JRE Performance

    Hey there,
    I want to run several Java benchmarks on different runtime environments on the same system.
    Now, I need to know what I have to consider so that those JREs do not affect each other.
    What do I need to consider while installing the different runtime environments?
    How should I set the classpath, etc.? I guess it shouldn't be set...
    Are there any files in the operating system that could affect the different JRE versions, such as java.exe and javaw.exe in the Windows NT system directory? What do I have to do with those files?
    And what would be the proper steps to reach a meaningful result?
    Right now I am using Windows NT, but I will also try to run some tests on Linux! So, I am open for any kind of help!
    Thanks,
    Marc

    Hi Marc,
    Have you considered installing one JRE at a time and running your benchmarks? Before installing the next JRE, uninstall the existing one. That eliminates any possibility of overwriting or corrupting any of the Windows NT system directories or registry settings.
    -Sun DTS
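
    Whichever installation approach you take, it may also help to have each benchmark run print out which runtime it actually executed on, so a stray java.exe on the PATH or in the system directory can't silently skew a comparison. A minimal sketch (the class name is made up for this example) using the standard system properties:

     public class WhichJre {
          public static void main(String[] args) {
               // Print enough to identify the runtime that actually ran the benchmark.
               System.out.println("java.version = " + System.getProperty("java.version"));
               System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
               System.out.println("java.home    = " + System.getProperty("java.home"));
               System.out.println("java.vm.name = " + System.getProperty("java.vm.name"));
               System.out.println("os.name      = " + System.getProperty("os.name"));
          }
     }

    Launching each run with the full path to the desired JRE's java executable, rather than whatever java.exe is found first on the PATH, avoids the cross-contamination the original question worries about.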

  • Slow performance when multiple threads access static variable

    Originally, I was trying to keep track of the number of calls to a specific function that was called across many threads. I initially implemented this by incrementing a static variable, and noticed some pretty horrible performance. Does anyone have any ideas?
    (I know this code is "incorrect" since increments are not atomic, even with the volatile keyword.)
    Essentially, I'm running two threads that each try to increment a variable a billion times. The first time through, they increment a shared static variable. As expected, the result is wrong (1339999601 instead of 2 billion), but the funny thing is it takes about 14 seconds. The second time through, they increment a local variable and add it to the static variable at the end. This runs correctly (assuming the final increments don't interleave, which is highly improbable) and finishes in about a second.
    Why the performance hit? I'm not even using volatile (just for reference, if I make the variable volatile the runtime hits about 30 seconds).
    Again, I realize this code is incorrect; this is purely an interesting side-experiment.
     package gui;

     public class SlowExample implements Runnable {
          public static void main(String[] args) {
               SlowExample se1 = new SlowExample(1, true);
               SlowExample se2 = new SlowExample(2, true);
               Thread t1 = new Thread(se1);
               Thread t2 = new Thread(se2);
               try {
                    long time = System.nanoTime();
                    t1.start();
                    t2.start();
                    t1.join();
                    t2.join();
                    time = System.nanoTime() - time;
                    System.out.println(count + " - " + time / 1000000000.0);
                    Thread.sleep(100);
               } catch (InterruptedException e) {
                    e.printStackTrace();
               }
               count = 0;
               se1 = new SlowExample(1, false);
               se2 = new SlowExample(2, false);
               t1 = new Thread(se1);
               t2 = new Thread(se2);
               try {
                    long time = System.nanoTime();
                    t1.start();
                    t2.start();
                    t1.join();
                    t2.join();
                    time = System.nanoTime() - time;
                    System.out.println(count + " - " + time / 1000000000.0);
               } catch (InterruptedException e) {
                    e.printStackTrace();
               }
               /*
                * Results:
                * 1339999601 - 14.25520115
                * 2000000000 - 1.102497384
                */
          }

          private static int count = 0;
          public int ID;
          boolean loopType;

          public SlowExample(int ID, boolean loopType) {
               this.ID = ID;
               this.loopType = loopType;
          }

          public void run() {
               if (loopType) {
                    // billion times
                    for (int a = 0; a < 1000000000; a++)
                         count++;
               } else {
                    int count1 = 0;
                    // billion times
                    for (int a = 0; a < 1000000000; a++)
                         count1++;
                    count += count1;
               }
          }
     }
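
    For comparison, here is a hedged sketch (not from the original post; names are made up) of the same experiment using java.util.concurrent.atomic.AtomicLong. It always produces the correct total, but since both threads still hammer the same variable it behaves much more like the slow shared-counter case than the fast local-counter case:

     import java.util.concurrent.atomic.AtomicLong;

     public class AtomicExample implements Runnable {
          private static final AtomicLong count = new AtomicLong();

          public void run() {
               // A billion contended atomic increments: correct, but still slow
               // because every increment bounces the counter's cache line between CPUs.
               for (int a = 0; a < 1000000000; a++) {
                    count.incrementAndGet();
               }
          }

          public static void main(String[] args) throws InterruptedException {
               Thread t1 = new Thread(new AtomicExample());
               Thread t2 = new Thread(new AtomicExample());
               long time = System.nanoTime();
               t1.start();
               t2.start();
               t1.join();
               t2.join();
               time = System.nanoTime() - time;
               System.out.println(count.get() + " - " + time / 1000000000.0);
          }
     }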

    Peter__Lawrey wrote:
    Your computer has different types of memory
    - registers
    - level 1 cache
    - level 2 cache
    - main memory.
    - non CPU local main memory (if you have multiple CPUs with their own memory banks)
    These memory types have different speeds. How you use a variable affects which memory it is placed in. Plus you have the HotSpot compiler kicking in sometime during the run; in other words, for a while the VM is interpreting the code and then all of a sudden it is executing compiled code. Reliable micro-benchmarking in Java is not easy. See [Robust Java benchmarking, Part 1: Issues|http://www.ibm.com/developerworks/java/library/j-benchmark1.html]
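
    The HotSpot point above is the usual reason single-shot timings mislead. A minimal sketch of the common workaround, assuming warming the code up before measuring is acceptable for your purposes (method names and iteration counts here are invented for the example):

     public class WarmupTimer {
          /** The work being measured; replace with the real benchmark body. */
          static long work() {
               long sum = 0;
               for (int i = 0; i < 1000000; i++) {
                    sum += i % 7;
               }
               return sum;
          }

          public static void main(String[] args) {
               // Warm-up phase: give the JIT time to compile the hot method.
               for (int i = 0; i < 50; i++) {
                    work();
               }
               // Measurement phase: several repetitions, reported individually
               // so you can see whether the timings have stabilised.
               for (int i = 0; i < 5; i++) {
                    long start = System.nanoTime();
                    long result = work();
                    long elapsed = System.nanoTime() - start;
                    System.out.println("run " + i + ": " + elapsed / 1000000.0 + " ms (result " + result + ")");
               }
          }
     }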

  • Python vs Java - simple benchmark comparison

    Hi all,
    I recently posted a speed comparison of Python vs Java.
    Following Xentac's suggestion, I imported Psyco to see if I could get any JIT benefits in the Python scripts. I then tried the latest Java JDK6 dev binaries too. This follow-up can be found here.
    Please note that I'm fully aware of the vast limitations of micro-benchmarks like these. Still, although they are based on someone else's code, I like them because they represent the typical tasks I often carry out in both my Python and Java programming: IO, lists, hashes, for loops, etc.
    Any comments welcome, especially on how to optimise the Python code. I can already see a couple of ways that ought to improve the Java tests.

    I agree. I honestly am not trying to say that Java is better and Python is rubbish. I love em both. There's not a great deal of difference for the most part.
    I just felt that people assumed that Java was slow because they've heard it's slow, or had prior experience of the old versions.
    I have friends saying "ugh, Java is sloowwww," and so they avoid it. Yet these same people rave about Perl and Python apps. My point was simply: hate Java for other reasons, not speed!
    You may notice that the Java executable is only 63K. The bulk of Java comes from the extensive class library that ships with the runtime. People think that Java must load up all the classes or something before running, when in fact it only ever loads what it needs (the sketch below shows one way to watch that lazy loading happen). So whilst the package itself is large, and sure, it does require more memory than other languages, I just don't believe it's as bulky as some assume.
    The motivation for those benchmarks was a comment on Frugalware's IRC channel where someone tried it out, said it was slow, removed it, and preferred the Python/GTK front-end they have instead, followed by some insightful remark that Java is only for web apps! I don't mind others preferring other front-ends. But is Jacman really slow? Please tell me, because it runs like a dream on my system. It would go even quicker with pacman-optimize.
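
    As promised above, one simple way to watch the lazy class loading is the standard -verbose:class JVM flag: classes are reported as they are first used, not all up front. A tiny sketch (class name made up for this example):

     // Run with: java -verbose:class LazyLoad
     // The java.math.BigDecimal load line should only appear in the output after
     // the "before" message, i.e. when the class is first actually used.
     public class LazyLoad {
          public static void main(String[] args) {
               System.out.println("before touching BigDecimal");
               java.math.BigDecimal d = new java.math.BigDecimal("1.5");
               System.out.println("after: " + d);
          }
     }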

  • Java concurrency benchmarks - need ideas

    Hi,
    I'm doing a little research about concurrency in Java for my university and have to make a series of benchmarks.
    What has to be done is to reach conclusions like "yeah, ReentrantLock is good for more scalable locking, but if you just need to lock a simple non-nested block of code, it's better to use synchronized because of the lower overhead in today's JDK 5 and 6".
    In my benchmarks I'm trying to follow mostly the JCIP book, doing for example performance measurements of the concurrent collection classes by implementing the producer-consumer pattern with varying consumer workloads, varying numbers of consumer threads, etc. I've also measured the overhead of locking (my previous post). I will also measure mean Thread creation and start() time.
    But I don't have ideas for simple concurrency use cases that would lead me to conclusions like the above. Have you got any ideas about what should be measured?
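
    Not an answer to the use-case question, but for the lock-overhead comparison a minimal single-threaded sketch of the kind of head-to-head timing meant above might look like the following (names and iteration counts are made up; a real measurement would also need warm-up runs and contending threads before saying anything about scalability):

     import java.util.concurrent.locks.ReentrantLock;

     public class LockOverhead {
          private static final Object monitor = new Object();
          private static final ReentrantLock lock = new ReentrantLock();
          private static long counter = 0;

          public static void main(String[] args) {
               final int iterations = 10000000;

               // Uncontended synchronized block.
               long start = System.nanoTime();
               for (int i = 0; i < iterations; i++) {
                    synchronized (monitor) {
                         counter++;
                    }
               }
               System.out.println("synchronized:  " + (System.nanoTime() - start) / 1000000.0 + " ms");

               // Uncontended ReentrantLock.
               start = System.nanoTime();
               for (int i = 0; i < iterations; i++) {
                    lock.lock();
                    try {
                         counter++;
                    } finally {
                         lock.unlock();
                    }
               }
               System.out.println("ReentrantLock: " + (System.nanoTime() - start) / 1000000.0 + " ms");
          }
     }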

    Find the "Java Concurrency in Practice" -book is has some of the performance discussion that you talk about. Might be some code samples on the books website http://jcip.net/

  • Comparing performance of different Java code designs - benchmarking

    Here's the problem:
    How do I run the Java compiler (preferably Sun's javac) without getting any compile-time optimization?
    I'd like to be able to compile a number of different programs to java bytecode - without having any optimization done by the compiler.
    The metric I want to use on the design of these programs is the "total number of bytecode instructions executed".
    The designs I want to compare can be reduced to "straight-line programs" with no conditionals or loops, so I can learn a lot just by looking at the bytecodes emitted by the compiler.
    Any pointers or help greatly appreciated.
    Cheers,
    Dafydd
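
    Two points that may help, offered as assumptions to verify rather than established facts: Sun's javac performs essentially no optimization of the bytecode it emits (the old -O flag is reported to be a no-op in recent JDKs), and the emitted instruction stream can be inspected per method with javap -c. For straight-line code with no branches, the emitted and executed instruction counts coincide, so counting the javap output may already give the metric you want. A tiny class to try this on (names made up for the example):

     // Compile with:  javac StraightLine.java
     // Inspect with:  javap -c StraightLine
     // The -c output lists every bytecode instruction emitted for each method,
     // which is enough for counting instructions in straight-line code by hand or script.
     public class StraightLine {
          static int variantA(int x) {
               int a = x + 1;
               int b = a * 3;
               return b - x;
          }

          static int variantB(int x) {
               return (x + 1) * 3 - x;
          }

          public static void main(String[] args) {
               System.out.println(variantA(5) + " " + variantB(5));
          }
     }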

    CORBA is supported on Windows machines (Windows XP/2000, as far as I know) and other APIs may be bought or come included in some enterprise applications.
    RMI and CORBA are about as fast as each other. RMI-IIOP is slower than RMI and CORBA; however, it can sometimes go a little faster depending on deployment and environment.

  • Graphical benchmark for Java (J2ME) phones and PDAs: JBenchmark

    JBenchmark measures the performance of Java (J2ME) enabled phones and PDAs, by running 5 small tests, each lasting for 10 seconds:
    1. Text
    2. 2D Shapes
    3. 3D Shapes
    4. Fill Rate
    5. Animation
    Check results and download software at www.jbenchmark.com!

    Wow... I thought I bought a good phone. But reading the JBenchmark results for the Sony Ericsson T610 wasn't fun at all. Are other phones really up to 3 times faster? Anyone with experience from testing other phones?

  • PL/SQL or Java?

    I am working on a project where we need to move a lot of data (100 million records a year) into Oracle (8i).
    We have a Java program (EJB) that does the file and record integrity checks and loads the data into three Oracle tables.
    Then we run a set of SQL stored procedures to check the integrity and validity of the individual fields and store the results in yet another table. We believe this process could not be done with the original files, since they are just plain text files.
    The fourth table is then used to create a report that will help the people who submitted the data clean it, so they can send it again (and re-load it into the database). This process goes on until the data is as clean as possible. At that point the data in the original three tables is moved and converted to another database that will be used for reporting.
    This other database is normalized and read only.
    The DBA is telling us that we shouldn't use Oracle to do this checking (nor the move to the final database), but an external application written in Java or C++. Our opinion is that it would be faster, and less intensive for the network and the database, to do it in Oracle using PL/SQL.
    So the question is... who is right? Or how can I find information or benchmarks on this topic?
    Thanks in advance,
    Paco Morales

    I ran into the same issue before... there are trade-offs either way.
    If you are doing a lot of lookups then it will be a lot faster to do the validation in PL/SQL. You will see a large performance increase.
    If you do your validation on the server, you will need to increase your database resources. This is probably why the DBA is fighting your development.

  • I am trying to use an education program that needs Java applets to install and run, and it will not work with Safari. When I download IE from the web it will not install. How can I get a browser that will work on my MacBook Air for travel use of this program?

    I am trying to use an education program that needs Java applets and it will not run in Safari. IE will not install from the web. How do I get a browser that will work so I can use this program when I travel?

    Try using Firefox. IE will only run on a Mac if you run Windows on the Mac.
    Windows on Intel Macs
    There are presently several alternatives for running Windows on Intel Macs.
    Install the Apple Boot Camp software.  Purchase Windows 7 or Windows 8.  Follow instructions in the Boot Camp documentation on installation of Boot Camp, creating Driver CD, and installing Windows.  Boot Camp enables you to boot the computer into OS X or Windows.
    Parallels Desktop for Mac and Windows XP, Vista Business, Vista Ultimate, or Windows 7.  Parallels is software virtualization that enables running Windows concurrently with OS X.
    VMware Fusion and Windows XP, Vista Business, Vista Ultimate, or Windows 7.  VMware Fusion is software virtualization that enables running Windows concurrently with OS X.
    CrossOver, which enables running many Windows applications without having to install Windows.  The Windows applications can run concurrently with OS X.
    VirtualBox is a newer open-source virtualization product, similar to VMware Fusion and Parallels, that was developed by Sun.  It is not as fully developed for the Mac as Parallels and VMware Fusion.
    Note that Parallels and VMware Fusion can also run other operating systems such as Linux, Unix, OS/2, Solaris, etc.  There are performance differences between dual-boot systems and virtualization.  The latter tend to be a little slower (not much) and do not provide the video performance of a dual-boot system. See MacTech.com's Virtualization Benchmarking for comparisons of Boot Camp, Parallels, and VMware Fusion. A more recent comparison of Parallels, VMware Fusion, and VirtualBox is found at Virtualization Benchmarks: Parallels 10 vs. Fusion 7 vs. VirtualBox. Boot Camp is only available with Leopard and later. Except for CrossOver and a couple of similar alternatives like Darwine, you must have a valid installer disc for Windows.
    You must also have an internal optical drive for installing Windows. Windows cannot be installed from an external optical drive.

  • Next-Generation Java 7 Plugin Performance on Windows 7 and IE 8

    Applet load performance has historically been a black eye, to say the least, for Java. Slow load times are simply not tolerated by today's standards. I'm currently supporting an Applet that is forced to move to the Java 7 Platform. As such, we are particularly sensitive to anything new that may further hinder applet performance. To that end, I've been doing quite a bit of benchmarking lately of Applet load times using various configurations with the Java 7 Plugin on Windows 7 using IE 8.
    To capture Java Applet load times, I've simply been marking the start time in Javascript from the HTML onLoad() event, and then calling out to a similar Javascript function to mark the end time from the bottom of the init() method in the Java Applet. I subtract the two times to get a general idea of how long it takes to load the applet.
    The best load times so far (when loading the JARs from the web server) occur when caching is employed (e.g., cache_option, cache_archive, cache_version). What I've noticed, though, is that with everything else the same, applet load times at least double when I check 'Enable the next-generation Java Plug-in' in the Plugin Control Panel. When loading JAR files from the web server with caching in effect, load time is comparable to loading JAR files from the file system only when the next-generation plugin is not enabled. I assume this is because of the overhead of spinning up the plugin's external JVM process, but I'm not certain.
    Does anyone know if this is a correct assumption? And if I'm correct, are there ways to speed up the loading of an applet when caching is used with the next-generation plugin? Is this another cold-start vs. warm-start issue for the JVM?
    My goal is to have applet load times for JARs loaded from the web server, using the next-generation plugin, as fast as when the JARs could be loaded from the local file system (which apparently is no longer possible using the next-generation plugin, sadly).
    Thanks!
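
    For what it's worth, here is a rough sketch of the applet-side half of the measurement described above. It assumes the hosting page defines a JavaScript function (markAppletEnd is a made-up name) that records the end time, and it uses the netscape.javascript.JSObject bridge, so compiling it needs plugin.jar on the classpath:

     import java.applet.Applet;
     import netscape.javascript.JSObject;

     public class TimedApplet extends Applet {
          public void init() {
               // ... the applet's real initialisation work goes here ...

               // Last statement of init(): call back into the page so the
               // JavaScript side can subtract this from the onLoad() timestamp.
               try {
                    JSObject window = JSObject.getWindow(this);
                    window.call("markAppletEnd", new Object[] { new Long(System.currentTimeMillis()) });
               } catch (Exception e) {
                    // JSObject may be unavailable outside the browser (e.g. in appletviewer).
                    e.printStackTrace();
               }
          }
     }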

    Thanks Igor.
    Web Browser: IE 8.0.76
    Java Plugin: 7u3 (1.7.0_03-b05)
    OS: Windows 7 Enterprise (32-bit)
    Server: Websphere 7
    Java Applet is in O&M phase and been around a while. Rich Internet Application with file system access requirements. Currently compiled with JDK 1.5. 10 JAR files total, 6 of which are third-party JARs. 4 JARs are custom and are signed.
    JAR1.jar -> 11077 bytes
    JAR2.jar -> 14207 bytes
    JAR3.jar -> 5093 bytes
    JAR4.jar -> 22233 bytes
    JAR5.jar -> 18722 bytes
    JAR6.jar -> 17578 bytes
    JAR7.jar -> 722237 bytes
    JAR8.jar -> 90688 bytes
    JAR9.jar -> 17521 bytes
    JAR10.jar -> 50686 bytes
    JSP Page is used to render the following HTML tags for loading the applet:
    <object classid="clsid:${UID}" name="preview" width="100%" height="300" id="poc">
    <PARAM name="java_code" value="com.loadfast.Applet.class"/>
    <param name="codebase" value="/www/applet"/>
    <PARAM name="cache_option" value="Plugin"/>
    <PARAM NAME="cache_archive" VALUE="
    JAR1.jar,
                        JAR2.jar,
                        JAR3.jar,
                        JAR4.jar,
                        JAR5.jar,
                        JAR6.jar,
                        JAR7.jar,
                        JAR8.jar,
                        JAR9.jar,
                        JAR10.jar
    "/>
    <PARAM NAME="cache_version" VALUE="
                        1.0.0.11,
                        1.0.0.11,
                        1.0.0.11,
                        1.0.0.11,
                        1.0.0.11,
                        1.0.0.11,
                        1.0.0.11,
                        1.0.0.11,
                        1.0.0.11,
                        1.0.0.11"/>
    <PARAM name="type" value="application/x-java-applet"/>
    </object>
    Here's a brief synopsis of my test methodology:
    Assuming caching gives the fastest performance I'm going to get with JARs on the web server, the goal was to determine whether the classic browser plugin and the next-gen plugin offer the same performance in terms of time to load the applet.
    To test, I unchecked the 'Enable Next-Gen' plugin option in the Java Plugin. I updated the cache_version values for all JARs. I 'touched' all JAR files in the WAR (I use cygwin) and redeployed the WAR. I have a cli script that launches IE and points it at my applet. When the applet loads, a Javascript Alert box displays showing the number of milliseconds it took to load the applet. I document the time, quit the browser, and re-execute my script. I do this 10 times for each test scenario and take the average.
    The two basic test scenarios are using the Browser Plugin (not next-gen) and using the Next-Gen Plugin. That is the only variable I change between test scenarios. Here is the raw data I collected for each test scenario:
    Not Using Next-Gen Plugin (milliseconds):
    run1 run2 run3 run4 run5 run6 run7 run8 run9 run10
    1761 474 535 495 500 505 502 267 693 513
    Avg: 625ms
    Using Next-Generation Plugin (milliseconds):
    run1 run2 run3 run4 run5 run6 run7 run8 run9 run10
    5382 1529 983 1545 1622 1575 1544 1544 1545 1529
    Avg: 1880ms
    The load time of each first run indicates that caching is happening, since subsequent runs are faster. I verified that the JVM is not making HTTP requests for cached JAR files by proxying these requests with Tcpmon, just to confirm this was the case.
    I'm basically just looking for a logical explanation to account for the significant time difference that occurs from this Plugin configuration change. It seems to make logical sense to me that this can be explained by JVM Process start up time, but I'm looking for corroboration on that or another explanation.
    Thanks for any advice, help, etc. I'll start looking into JNLP and JAR index as well.

  • Use of MPSS on Solaris 9 and Java 141_03 - not getting 4M pagesizes

    Hi all,
    Anyone know how to get MPSS actually using large page sizes in 1.4 / SunOS 5.9 ??
    I have a 1.4.1_03-b02 JVM that is using the -XX:+UseMPSS option and using LD_PRELOAD=/usr/lib/mpss.so.1 and MPSSHEAP=4M, but when I use pmap -Fxs <PID> I always see 8k pages. My system is 5.9 Generic_122300-03 sun4u sparc SUNW,Sun-Fire-480R and pagesize -a gives me:
    8192
    65536
    524288
    4194304
    so 4M should be OK to use...
    The full JVM options are:
    -XX:+TraceClassUnloading -XX:+UseParallelGC -XX:+UseMPSS -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=2 -XX:MaxTenuringThreshold=3 -XX:+DisableExplicitGC -Dsun.rmi.server.exceptionTrace=true -Xloggc:gc.log -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -server -ms2560m -mx2560m -Xmn1024m -Dsun.rmi.dgc.client.gcInterval=14400000 -Dsun.rmi.dgc.server.gcInterval=14400000
    I have also tried using LD_PRELOAD_32 and LD_PRELOAD_64 but still only see 8k pages in pmap for the heap...
    Thanks for any ideas. From what I read in the docs, I should not need to do anything special to use the MPSS option on SunOS 5.9... so maybe one of my other JVM options is preventing MPSS from being used?

    OK, bug 4845026 is giving me a clue:
    Bug ID:      4845026
    Votes      1
    Synopsis      MPSS broken on JDK 1.4.1_02
    Category      hotspot:jvm_interface
    Reported Against      1.4.1_02
    Release Fixed      
    State      Closed, will not be fixed
    Related Bugs      
    Submit Date      08-APR-2003
    Description      
    I am running SPECjAppServer2002 with WebLogic Server 8.1 and JDK 1.4.1_02.
    Here is the version of JDK 1.4.1_02 that I am using:
    <gar07.4> /export/VMs/j2sdk1.4.1_02/bin/java -version
    java version "1.4.1_02"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.1_02-b06)
    Java HotSpot(TM) Client VM (build 1.4.1_02-b06, mixed mode)
    The system is a V240 with solaris S9U3:
    <gar07.5> more /etc/release
    Solaris 9 4/03 s9s_u3wos_04 SPARC
    Copyright 2003 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 16 December 2002
    After rebooting the system I use the following command line to start the appserver:
    + /export/VMs/j2sdk1.4.1_02/bin/java -server -verbose:gc -XX:+PrintGCTimeStamps
    -XX:+UseMPSS -XX:+AggressiveHeap -Xms3500m -Xmx3500m -Xmn600m -Dweblogic.oci.sel
    ectBlobChunkSize=1600 -classpath ...
    The process should have some anon segments mapped to 4M pages, but it doesn't:
    <gar07.7> ps -ef | grep java
    ecuser 541 533 12 10:30:33 ? 0:50 /export/VMs/j2sdk1.4.1_02/bin/java -server -verbose:gc -XX:+PrintGCTimeStamps -
    ecuser 566 343 0 10:31:24 pts/1 0:00 grep java
    <gar07.8> pmap -s 541 | grep 4M
    <gar07.9>
    If I do exactly the same using JDK 1.4.2 instead of JDK1.4.1_02 I am able to get
    4M pages. Here is the command line for 1.4.2:
    + /export/VMs/j2sdk1.4.2/bin/java -server -verbose:gc -XX:+PrintGCTimeStamps -XX
    :+PrintGCDetails -XX:+AggressiveHeap -Xms3500m -Xmx3500m -Dweblogic.oci.selectBl
    obChunkSize=1600 -classpath ...
    And here are my 4M pages:
    <gar07.20> pmap -s `pgrep java` | grep 4M
    1AC00000 282624K 4M rwx-- [ anon ]
    F5800000 16384K 4M rwx-- [ anon ]
    F6800000 4096K 4M rwx-- [ anon ]
    F6C00000 4096K 4M rwx-- [ anon ]
    F7000000 4096K 4M rwx-- [ anon ]
    F9C00000 4096K 4M rwx-- [ anon ]
    Without large pages the time spent in TLB misses for this benchmark is 25% (!)
    Using 4M pages that time is reduced to 3%. WLS8.1 was certified with 1.4.1_02 so
    we cannot use 1.4.2 for the benchmark.
    thanks for your help,
    Fernando Castano
    Posted Date : 2006-04-27 23:04:32.0
    Work Around      
    N/A
    Evaluation      
    Mukesh,
    Can you get someone to look into back-porting this fix? Please see the attachment below for additional info. 4845026 (P1/S1, new hotbug created) is a new bug that only exists in the JDK 1.4.1_x release. It's fixed in the 1.4.2 release by code related to bug 4737603.
    Thanks Jane & James for the heads up.
    Thanks
    Gary Collins
    Gary,
    I think the bug James referred to is
    4737603 Using MPSS with Parallel Garbage Collection doesn't yield 4mb
    pages
    which was fixed in mantis (according to the bug report).
    Looks like a simple fix to back-port.
    Jane
    xxxxx@xxxxx 2003-04-10
    This problem is partially because of bug 4737603, mainly because there is code cache mapping to large page in 1.4.1(4772288: New MPSS in mantis). This part of code will be ported into 1.4.1 from mantis.
    xxxxx@xxxxx 2003-04-18
    There are two things: MPSS wasn't used in the parallel GC collector AND it wasn't used for the code cache. Both need to be addressed.
    xxxxx@xxxxx 2003-04-21

  • WebServices and Java/Weblogic RPC Client

    Hi,
    I have a simple usability question :
    - Where would I want to use a Java client that invokes the (WebLogic) web service using RPC/SOAP, especially the static client model?
    - Probably the corollary to that would be: why wouldn't I simply invoke the EJB using the EJB interface invocation?
    In both cases, the information the developer needs to write the code is the same and the coding effort is the same (only the Properties object passed to obtain the InitialContext is populated with different values), and everything is hardcoded, i.e. there is no dynamic behavior advantage.
    I ran some quick and dirty benchmarks and the web service client is slower than the mundane EJB client by a factor of roughly 4 to 5 (duh... XML!).
    Two advantages that I can think of are :
    - Because of HTTP, firewall/port issues may be circumvented when using WebServices.
    - The thin client.jar may be easier to distribute than weblogic.jar.
    I shall deeply appreciate any insight into its utility from a business perspective (read: convincing clients).
    Thanks,
    Ajay

    It took me almost 3 seconds to find this so I can see why you would ask. http://java.sun.com/webservices/tutorial.html
