Copying arrays, performance questions

Hello there
The JDK offers several ways to copy arrays, so I ran some experiments to try to find out which would be the fastest.
I was measuring the time it takes to copy large arrays of integers. I wrote a program that allocates arrays of various sizes and copies them several times using different methods. Then I measured the time each method took using the NetBeans profiler and calculated the frequencies.
Here are the results I obtained (click for full size):  http://i.share.pho.to/dc40172a_l.png
(what I call in-place copy is just iterating through the array with a for loop and copying the values one by one)
I generated a graph from those values:  http://i.share.pho.to/049e0f73_l.png
A zoom on the interesting part: http://i.share.pho.to/a9e9a6a4_l.png
According to these results, clone() becomes faster at some point (not sure why). I've re-run these experiments a few times and it seems to always happen somewhere between 725 and 750 elements.
Now here are my questions:
- Is what I did a valid and reliable way to test performance, or are my results completely irrelevant? And if not, what would be a smarter way to do this?
- Will clone be faster than arraycopy past 750 items on any PC, or will these results be influenced by other factors?
- Is there a way to write a method that would copy the array with optimal performance using clone and arraycopy, such that the cost of using it would be insignificant compared to systematically using one method over the other?
- Any idea why clone() can become faster for bigger arrays? I know arraycopy is a native method; I didn't try to look into what it does exactly, but I can't imagine it's doing anything more complicated than copying elements from one location in memory to another... How can another method be faster than that?
(just a reminder: I'm copying primitives, not objects)
Thanks!
Message was edited by: xStardust! Added links, mr forum decided to take away my images
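
For reference, here is a minimal sketch of the copy techniques being compared, assuming a plain int[] source. The copyFast helper at the end only illustrates the size-based switch asked about in the third question; its 750-element threshold is taken from the measurements in this post and would need re-validating on each JVM and machine.

import java.util.Arrays;

public class CopyMethods {

    // 1) Manual loop (the "in-place copy" described above)
    static int[] copyWithLoop(int[] src) {
        int[] dst = new int[src.length];
        for (int i = 0; i < src.length; i++) {
            dst[i] = src[i];
        }
        return dst;
    }

    // 2) System.arraycopy into a freshly allocated target
    static int[] copyWithArraycopy(int[] src) {
        int[] dst = new int[src.length];
        System.arraycopy(src, 0, dst, 0, src.length);
        return dst;
    }

    // 3) clone()
    static int[] copyWithClone(int[] src) {
        return src.clone();
    }

    // 4) Arrays.copyOf (allocates and copies in one call)
    static int[] copyWithCopyOf(int[] src) {
        return Arrays.copyOf(src, src.length);
    }

    // Hypothetical helper for the "switch by size" question: the 750 threshold
    // comes from the measurements above and is machine/JVM dependent, so treat
    // it as an assumption rather than a constant.
    static int[] copyFast(int[] src) {
        return src.length > 750 ? src.clone() : copyWithArraycopy(src);
    }
}

For the benchmarking question itself, a dedicated harness such as JMH, which handles JVM warm-up and dead-code elimination, is generally more reliable than timing hand-rolled loops under a profiler.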

Yeh, everyone thinks that at some point. It relies, however, on you being perfect and knowing everything in advance, which you aren't, and don't (no offence, the same applies to all of us!). Time and time again, people do this up-front and discover that what they thought would be a bottleneck, isn't. Plus, the JVM is much smarter at optimizing code than you think: trust it. The best way to get good performance out of your code is to write simple, straightforward, good OO code. JVMs are at a point now where they can optimize Java to outperform equivalent C/C++ code (no, really), but since they're written by human beings, who have real deadlines and targets, the optimizations that make it into a release are the most common ones. Just write your application, and then see how it performs. Trust me on this.
Have a read of http://java.sun.com/developer/technicalArticles/Interviews/goetz_qa.html for more info and a chance to see where I plagiarized that post from :-)
Thanks for that link you gave me :)
It was useful to read.
About the time and money of programming, that is not really an issue for me at the moment since I'm doing this project for a company, but through school (it's like working, but not for money).
Of course it shouldn't take overly long, but I have time to figure out a lot of things.
For my next project I will try to focus more on building first and optimizing performance later (if it can be done with a good margin, since it seems the biggest bottlenecks are not the code but things outside the code).
@promethuuzz
The idea was to put collection objects (objects that manage the initialized ORM objects) in the request and pass them along to the JSP (this is all done through a customized MVC model).
So I wanted to see whether this method was performance heavy, so I wouldn't end up writing the entire app and finding out half of it is very performance heavy :)

Similar Messages

  • Copy Array Quick Question

    Just a quick question.
    Say I have an array A = [1,2,3,4,5]
    B = new int[5]
    I understand to copy the elements of A into B we could use a for loop.
    But can somebody explain to me why you can't just go B = A?
    Does this just cause B to point to A?
    So how do arrays work exactly? Is it like in C where each element is actually a pointer to the data, or does Java copy the data directly into the array element?

    Kayaman wrote:
    JT26 wrote: "Array A=[1,2,3,4,5] ----> Here A is a reference variable which points to memory locations A[0],A[1],A[2],A[3],A[4], which could not be continuous."
    Actually no. What you have there is a syntax error. And please don't confuse people by talking about non-continuous memory locations. That's simply wrong, and 100% irrelevant for the question at hand.
    JT26 wrote: "B = new int[5] -----> Here B is another reference variable that points to an array which could hold 5 elements."
    That is correct, basically.
    JT26 wrote: "All the five element holders of B could be physically anywhere in the memory, thus copying the elements is done via a for loop."
    No, Java is smart enough to store arrays sequentially in memory. And the smart way to copy large arrays is System.arraycopy(), or java.util.Arrays.copyOf.
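
    To illustrate the answers above, here is a small sketch (the class and variable names are made up for the example) showing that plain assignment only copies the reference, while System.arraycopy and Arrays.copyOf produce independent arrays:

    import java.util.Arrays;

    public class CopyVsAlias {
        public static void main(String[] args) {
            int[] a = {1, 2, 3, 4, 5};

            int[] alias = a;                              // no copy: both names refer to the same array
            int[] copy1 = new int[a.length];
            System.arraycopy(a, 0, copy1, 0, a.length);   // element-by-element copy (native)
            int[] copy2 = Arrays.copyOf(a, a.length);     // allocates and copies in one call

            a[0] = 99;
            System.out.println(alias[0]);   // 99 -> the alias sees the change
            System.out.println(copy1[0]);   // 1  -> real copies are unaffected
            System.out.println(copy2[0]);   // 1
        }
    }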

  • Simple performance question

    Simple performance question, put the simplest way possible: assume
    I have an int[][][][][] matrix and a boolean add. The array is several dimensions long.
    When add is true, I must add a constant value to each element in the array.
    When add is false, I must subtract a constant value from each element in the array.
    Assume this is very hot code, i.e. it is called very often. How expensive is the condition checking? I present the two scenarios.
    private void process() {
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        if (add)
                            matrix[i][ii][iii][...] += constant;
                        else
                            matrix[i][ii][iii][...] -= constant;
    }

    private void process() {
        if (add)
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] += constant;
        else
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] -= constant;
    }
    Is the second scenario worth a significant performance boost? Without understanding how the compiler generates executable code, it seems that in the first case, n^d conditions are checked, whereas in the second, only 1. It is, however, less elegant, but I am willing to do it for a significant improvement.

    erjoalgo wrote: "I guess my real question is, will the compiler optimize the condition check out when it realizes the boolean value will not change through these iterations, and if it does not, is it worth doing that micro-optimization?"
    Almost certainly not; the main reason being that
        matrix[i][ii][iii][...] +/-= constant
    is liable to take many times longer than the condition check, and you can't avoid it. That said, Mel's suggestion is probably the best.
    erjoalgo wrote: "but I will follow amickr's advice and not worry about it."
    Good idea. Saves you getting flamed with all the quotes about premature optimization.
    Winston
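
    As a concrete illustration of the trade-off being discussed, here is a small sketch using a plain 2D array (the names and the 2D shape are simplified from the poster's n-dimensional case): the condition is evaluated once before the loops instead of once per element.

    public class HoistedBranch {

        // Branch inside the innermost loop: the condition is checked for every element.
        static void processInline(int[][] matrix, boolean add, int constant) {
            for (int i = 0; i < matrix.length; i++)
                for (int j = 0; j < matrix[i].length; j++)
                    if (add)
                        matrix[i][j] += constant;
                    else
                        matrix[i][j] -= constant;
        }

        // Branch hoisted: the condition is checked once, by folding it into the operand.
        static void processHoisted(int[][] matrix, boolean add, int constant) {
            int delta = add ? constant : -constant;
            for (int i = 0; i < matrix.length; i++)
                for (int j = 0; j < matrix[i].length; j++)
                    matrix[i][j] += delta;
        }
    }

    In practice HotSpot's JIT can often hoist a loop-invariant branch itself (loop unswitching), so it is worth measuring before committing to the less readable version.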

  • BPM performance question

    Guys,
    I do understand that ccBPM is very resource hungry, but what I was wondering is this:
    Once you use BPM, does an extra step decrease the performance significantly? Or does it just need slightly more resources?
    More specifically, we have quite complex mapping in 2 BPM steps. Combining them would make the mapping less clear, but would it be worth doing so from the performance point of view?
    Your opinion is appreciated.
    Thanks a lot,
    Viktor Varga

    Hi,
    In SXMB_ADM you can set the timeout higher for the sync processing.
    Go to Integration Processing in SXMB_ADM and add the parameter SA_COMM CHECK_FOR_ASYNC_RESPONSE_TIMEOUT, set to 120 (seconds). You can also increase the number of parallel processes if you have more waiting now: SA_COMM CHECK_FOR_MAX_SYNC_CALLS from 20 to XX. It all depends on your hardware, but this helped me go from the standard 60 seconds to maybe 70 in some cases.
    Make sure that your calling system does not have a timeout below the one you set in XI, otherwise yours will go on and finish and your partner may end up sending it twice.
    When you go for BPM, the whole workflow has to come into action. So, for example, where your mapping lasts < 1 second without BPM, if you do it in a BPM the transformation step can last 2 seconds plus one second for the mapping... (that's just an example).
    So the workflow gives you many design possibilities (bridge, error handling), but it can slow down the process, and if you have thousands of messages the performance can be much worse than having the same without BPM.
    See the links below:
    see below links
    http://help.sap.com/bp_bpmv130/Documentation/Operation/TuningGuide.pdf
    http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/3.0/sap%20exchange%20infrastructure%20tuning%20guide%20xi%203.0.pdf
    BPM Performance tuning
    BPM Performance issue
    BPM performance question
    BPM performance- data aggregation persistance
    Regards
    Chilla..

  • Swing performance question: CPU-bound

    Hi,
    I've posted a Swing performance question to the java.net performance forum. Since it is a Swing performance question, I thought readers of this forum might also be interested.
    Swing CPU-bound in sun.awt.windows.WToolkit.eventLoop
    http://forums.java.net/jive/thread.jspa?threadID=1636&tstart=0
    Thanks,
    Curt

    You obviously don't understand the results, and the first reply to your posting on java.net clearly explains what you missed.
    The event queue is using Thread.wait to sleep until it gets some more events to dispatch. You have incorrectly diagnosed the sleep waiting as your performance bottleneck.

  • Xcontrol: performance question (again)

    Hello,
    I've got a little performance question regarding xcontrols. I observed rather high cpu-load when using xcontrols. To investigate it further, I built a minimal xcontrol (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this xcontrol in a test-vi and write to it at a rate of 1000 booleans / second, I get a cpu-load of about 10%. When I write directly to a boolean display element instead of the xcontrol, I have a load of 0 to 1%. The funny thing is, when I emulate the xcontrol functionality with a subvi, a subpanel and a queue (see example), I only have 0 to 1% cpu-load, too.
    Is there a way to reduce the cpu-load when using xcontrols? 
    If there isn't and if this is not a problem with my installation but a known issue, I think this would be a potential point for NI to fix in a future update of LV.
    Regards,
    soranito
    Message Edited by soranito on 04-04-2010 08:16 PM
    Message Edited by soranito on 04-04-2010 08:18 PM
    Attachments:
    XControl_performance_test.zip (60 KB)

    soranito wrote:
    Hello,
    I've got a little performance question regarding xcontrols. I observed rather high cpu-load when using xcontrols. To investigate it further, I built a minimal xcontrol (boolean type) which only writes the received boolean-value to a display-element in it's facade (see attached example). When I use this xcontrol in a test-vi and write to it with a rate of 1000 booleans / second, I get a cpu-load of about 10%. When I write directly to a boolean display element instead of the xcontrol,I have a load of 0 to 1 %. The funny thing is, when I emulate the xcontrol functionality with a subvi, a subpanel and a queue (see example), I only have 0 to 1% cpu-load, too.
    Okay, I think I understand the question now. You want to know why an equivalent xcontrol boolean consumes 10x more CPU resource than the LV base package boolean?
    Okay, try opening the project I replied with yesterday. I don't have access to LV at my desk, so let's try this. Open up your xcontrol facade.vi. Notice how I separated your data event into two events? Go to the data change VI event; when looping back the action, set isDataChanged (part of the data change cluster) to FALSE. For the data input (the one displayed on your facade.vi front panel), set that isDataChanged to TRUE. This will limit the number of times the facade will be looping. It will not drop your CPU from 10% to 0%, but it should drop a little, just enough to give you a short-term solution. If that doesn't work, just play around with the loopback statement. I can't remember the exact method.
    Yeah, I agree the xcontrol shouldn't be over-consuming system resources. I think xcontrol is still in its primitive form and I'm not sure if NI is planning on investing more time to bug-fix or even enhance it. IMO, I don't think xcontrol is quite ready for primetime yet. Just too many issues that need improvement.
    Message Edited by lavalava on 04-06-2010 03:34 PM

  • The System.arraycopy Functionality and copying array question

    When creating arrays such as String[] myStringArray (for example), is it generally good practice to use the System.arraycopy function to copy the array?
    Why isn't it good practice to use the equals sign instead? Would this work just as well?
    String[] myStringArray = new String[] { "My", " Test"};
    String[] myStringArrayCopy = new String[myStringArray.length];
    myStringArrayCopy = myStringArray;
    Or is that just going to make them the same element in memory, and if I change myStringArray in another part of the program does that mean myStringArrayCopy will change as well, since it is the same thing?

    You're right, the equals sign just assigns the new array the same reference in memory as the old array, so they are pointing to the same data.
    I'm 90% sure of that.
    If you want to work with an array without changing the original array, I'd suggest using the System.arraycopy method. If you don't mind the contents changing, then use the = sign... (but then why use a new array at all?)
    Hope this helps; if not, there are loads of more experienced people on the boards...
    Ocelot
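
    One nuance worth adding to the answer above, sketched below with made-up names: for object arrays such as String[], all of these techniques perform a shallow copy, i.e. the new array holds references to the same String objects (harmless for immutable Strings, but it matters for mutable element types).

    import java.util.Arrays;

    public class ShallowCopyDemo {
        public static void main(String[] args) {
            String[] original = { "My", " Test" };

            // Each of these allocates a new array object...
            String[] viaCopyOf    = Arrays.copyOf(original, original.length);
            String[] viaClone     = original.clone();
            String[] viaArraycopy = new String[original.length];
            System.arraycopy(original, 0, viaArraycopy, 0, original.length);

            // ...so replacing an element of the original does not affect the copies.
            original[0] = "Changed";
            System.out.println(viaCopyOf[0]);     // My
            System.out.println(viaClone[0]);      // My

            // Plain assignment, by contrast, copies only the reference.
            String[] sameArray = original;
            System.out.println(sameArray[0]);     // Changed
        }
    }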

  • Copy Performance question

    I have an Accounts dimension with some calculated values; the next step is to copy the calculated values to GL accounts.
    Next, I copy from Local Currency to US currency.
    Is it faster to copy to US before the calc, or to calc and then copy to US?
    Thanks,
    Jz

    How much memory do you have? If 2 GB, I'd recommend not upgrading.
    Creating a partition isn't hard. Restart with command - R held down and open Disk Utility.
    Step 1 - select the top level hard drive.
    Step 2 - select the Partition tab.
    Step 3 - grab the //// to slide the partition up to make room for a new partition.
    Step 4 - select the vacant space and hit the + sign. 
    Step 5- Mac OS Extended (Journaled) and add a name.
    Step 5a, click and select GUID.
    Then Apply.
    System Preferences/Startup Disk will let you choose your boot disk or hold down the option/alt key during a restart to choose.

  • Controlfile on ASM performance question

    We are seeing controlfile enqueue performance spikes. The options under consideration are to move the control file to a separate diskgroup (needs an outage), or to add some disks (from different LUNs; I prefer this approach) to the same disk group. It seems like a slow disk is causing this issue...
    2nd question: can the snapshot controlfile be placed on ASM storage?

    Following points may help:
    - Separating the control file into another diskgroup may make things even worse in case the total number of disks is insufficient in the new disk group.
    - Those control file contention issues usually have nothing to do with the storage throughput you have, but with the number of operations requiring different levels of exclusion on the control files.
    - Since multiple copies of the controlfile are updated concurrently, a possible problem is sometimes that the secondary copy of the controlfile is slower than the other. Please check that this is not the issue (different tiers of storage may cause such problems).
    Regards,
    Husnu Sensoy

  • Copying Arrays - Instance Variables - Multiple Animations

    Hi All!
    Thanks so much, in advance, as always, for your assistance!
    So, here's a site I'm working on:
    http://www.mediamackenzie.com/cmix/cmix10.html
    I have 3 quick questions:
    - I tried, when I first started making this site, to load all of the artwork images into an array and then copy the array before resizing them for their specific functions (being seen as thumbnails or as full size pics.) Unfortunately, I ran into the well known issue of Array cloning only creating a pointer to the same group of items. I tried the newArray = oldArray.slice() trick, but it didn't seem to work. Finally, I just loaded the images twice into two separate arrays, and it works, but I hate this solution. Anyone got a better one?
    - I'm trying to maintain some sort of connection between the two sets of Arrays so that, for example, when someone clicks on Thumbnail 15, Fullsize Image 15 will open up but I couldn't find anything that worked. Renaming the Instance Name dynamically didn't seem to work and adding an Instance Variable dynamically doesn't seem possible either as I can't make the Class I am working with (Sprite, in this case) dynamic ahead of time. I'm sure there's a simple method for this. Any suggestions?
    - Lastly, notice how when the site opens up, the different animations seem to interfere with each other and slow each other down (they also seem to get interference from the time taken to load the image Arrays.) Anyone got any high level suggestions for how to avoid this?
    Thanks So Much!
    and Be Well
    Graham

    I'm still stuck, but close, I think. The URL: http://www.mediamackenzie.com/cmix/cmix11.html
    Here is the code from frame 1 of my Gallery MovieClip (an instance of which is created dynamically in the main timeline):
    stop();
    import fl.transitions.Tween;
    import fl.transitions.easing.*;
    var idealH = 120;
    var idealW = idealH + 50;
    var loadH = 300;
    var loadW = loadH * 2;
    var thumb = 0;
    var loadNum = 0;
    var thumby:Sprite;
    var darkStage:Sprite;
    var loadSprite:Sprite;
    var thumbArray:Array = new Array();
    var loadArray:Array = new Array();
    var bigLoad:Loader;
    var reSized:Boolean = false;
    this[thumb] = new Loader();
    this[thumb].contentLoaderInfo.addEventListener(Event.COMPLETE, thumbComplete);
    this[thumb].load(new URLRequest("images/0.png"));
    thumbArray.push(this[thumb]);
    function thumbComplete(e:Event):void {
        trace("thumbComplete");
        thumb++;
        this[thumb] = new Loader();
        this[thumb].load(new URLRequest("images/" + thumb + ".png"));
        thumbArray.push(this[thumb]);
        this[thumb].contentLoaderInfo.addEventListener(Event.COMPLETE, thumbComplete);
        this[thumb].contentLoaderInfo.addEventListener(IOErrorEvent.IO_ERROR, thumbError);
    }
    function thumbError(e:IOErrorEvent):void {
        trace("thumbError");
        thumb = 0;
        thumbArray.pop();
        loadArray = thumbArray.slice();
        gotoAndStop(2);
    }
    Now, here is the code from frame 2:
    //stop();
    addArrows();
    thumbResize();
    loadResize();
    function thumbResize():void {
    trace("thumbResize");
        for (var batch = 0; batch < Math.ceil(thumbArray.length / 8); batch++) {
            trace("batch " + batch);
            var batchSprite: Sprite = new Sprite();
            batchSprite.x = (batchSprite.x + (idealW / 1.5) + (batch * 800));
            //batchSprite.mouseEnabled = false;
            addChild(batchSprite);
            for (var row = 0; row < 2; row++) {
                trace("     row " + row);
                for (var col = 0; col < 4; col++) {
                    trace("          col " + col);
                    trace("               thumb " + thumb);
                    //If the width of the image is greater than the ideal width, OR the height is greater than the ideal height...
                    if ( thumbArray[thumb].content.width > idealW || thumbArray[thumb].content.height > idealH) {
                        //And if the width of the image is greater than the height, apply Scaler 1...
                        if ( thumbArray[thumb].content.width > thumbArray[thumb].content.height ) {
                            //Scaler 1 is the ratio of the ideal width to the image width
                            var scaler1 = idealW / thumbArray[thumb].content.width;
                            trace("               scaler1 = " + scaler1);
                            //Apply Scaler 1 to both the width and height of the image
                            thumbArray[thumb].content.scaleX = thumbArray[thumb].content.scaleY = scaler1;
                            trace("               image width:" + thumbArray[thumb].content.width);
                            trace("               image height:" + thumbArray[thumb].content.height);
                            //Otherwise, apply Scaler 2
                        } else {
                            //Scaler 2 is the ratio of the ideal width to the image height
                            var scaler2 = idealW / thumbArray[thumb].content.height;
                            trace("               scaler2 = " + scaler2);
                            //Apply Scaler 2 to both the width and height of the image
                            thumbArray[thumb].content.scaleX = thumbArray[thumb].content.scaleY = scaler2;
                            trace("               image width:" + thumbArray[thumb].content.width);
                            trace("               image height:" + thumbArray[thumb].content.height);
                        //Otherwise... (that is, the image width and height are in both cases less than the ideal)
                    } else {
                        //And if the width of the image is greater than the heigh, apply Scaler 3
                        if ( thumbArray[thumb].content.width > thumbArray[thumb].content.height ) {
                            //Scaler 3 is the ratio of the ideal width to the image width
                            var scaler3 = idealW / thumbArray[thumb].content.width;
                            trace("               scaler3 = " + scaler3);
                            //Apply Scaler 3 to both the width and height of the image
                            thumbArray[thumb].content.scaleX = thumbArray[thumb].content.scaleY = scaler3;
                            trace("               image width:" + thumbArray[thumb].content.width);
                            trace("               image height:" + thumbArray[thumb].content.height);
                        } else {
                            //Scaler 4 is the ratio of the ideal width to the image height
                            var scaler4 = idealW / thumbArray[thumb].content.height;
                            trace("               scaler4 = " + scaler4);
                            //Apply Scaler 4 to both the width and height of the image
                            thumbArray[thumb].content.scaleX = thumbArray[thumb].content.scaleY = scaler4;
                            trace("               image width:" + thumbArray[thumb].content.width);
                            trace("               image height:" + thumbArray[thumb].content.height);
                    thumbArray[thumb].content.x = - (thumbArray[thumb].content.width / 2);
                    thumbArray[thumb].content.y = - (thumbArray[thumb].content.height / 2);
                    thumby = new Sprite();
                    thumby.addChild(thumbArray[thumb]);
                    thumby.y = (row * (idealW + (idealW / 8)));
                    thumby.x = (col * (idealW + (idealW / 8)));
                    thumby.buttonMode = true;
                    thumby.useHandCursor = true;
                    thumby.addEventListener(MouseEvent.CLICK, enLarge);
                    batchSprite.addChild(thumby);
                    thumb++;
    function loadResize():void {
        trace("loadResize");
        for (var ex = 0; ex < loadArray.length; ex++) {
            //If the width of the image is greater than the ideal width, OR the height is greater than the ideal height...
            if ( loadArray[loadNum].content.width > loadW || loadArray[loadNum].content.height > loadH) {
                //And if the width of the image is greater than the height, apply Scaler 1...
                if ( loadArray[loadNum].content.width > loadArray[loadNum].content.height ) {
                    //Scaler 1 is the ratio of the ideal width to the image width
                    var scaler1 = loadW / loadArray[loadNum].content.width;
                    //Apply Scaler 1 to both the width and height of the image
                    loadArray[loadNum].content.scaleX = loadArray[loadNum].content.scaleY = scaler1;
                    //Otherwise, apply Scaler 2
                } else {
                    //Scaler 2 is the ratio of the ideal width to the image height
                    var scaler2 = loadW / loadArray[loadNum].content.height;
                    //Apply Scaler 2 to both the width and height of the image
                    loadArray[loadNum].content.scaleX = loadArray[loadNum].content.scaleY = scaler2;
                //Otherwise... (that is, the image width and height are in both cases less than the ideal)
            } else {
                //And if the width of the image is greater than the heigh, apply Scaler 3
                if ( loadArray[loadNum].content.width > loadArray[loadNum].content.height ) {
                    //Scaler 3 is the ratio of the ideal width to the image width
                    var scaler3 = loadW / loadArray[loadNum].content.width;
                    //Apply Scaler 3 to both the width and height of the image
                    loadArray[loadNum].content.scaleX = loadArray[loadNum].content.scaleY = scaler3;
                } else {
                    //Scaler 4 is the ratio of the ideal width to the image height
                    var scaler4 = loadW / loadArray[loadNum].content.height;
                    //Apply Scaler 4 to both the width and height of the image
                    loadArray[loadNum].content.scaleX = loadArray[loadNum].content.scaleY = scaler4;
            loadArray[loadNum].content.x = - (loadArray[loadNum].content.width / 2);
            loadArray[loadNum].content.y = - (loadArray[loadNum].content.height / 2);
            loadNum++;
    function addArrows():void {
        trace("addArrows");
        var batches =  Math.ceil(thumbArray.length / 8);
        var m = 0;
        trace("batches = " + batches);
        if (batches > 1) {
            for (var k = 1; k < batches; k++) {
                var triW = 20;
                var startX = (((800 - triW) * k) + (triW * m));
                var startY = (idealW / 2);
                var tri:Sprite = new Sprite();
                tri.graphics.beginFill(0xFFFFFF);
                tri.graphics.moveTo(startX, startY);
                tri.graphics.lineTo(startX, (startY + triW));
                tri.graphics.lineTo((startX + triW), (startY + (triW/2)));
                tri.graphics.lineTo(startX, startY);
                tri.graphics.endFill();
                tri.buttonMode = true;
                tri.useHandCursor = true;
                tri.addEventListener(MouseEvent.CLICK, moveLeft);
                addChild(tri);
                var tri2:Sprite = new Sprite();
                var startX2 = (startX + (triW * 2));
                tri2.graphics.beginFill(0xFFFFFF);
                tri2.graphics.moveTo(startX2, startY);
                tri2.graphics.lineTo(startX2, (startY + triW));
                tri2.graphics.lineTo((startX2 - triW), (startY + (triW / 2)));
                tri2.graphics.lineTo(startX2, startY);
                tri2.graphics.endFill();
                tri2.buttonMode = true;
                tri2.useHandCursor = true;
                tri2.addEventListener(MouseEvent.CLICK, moveRight);
                addChild(tri2);
                m++;
    function moveLeft(event:MouseEvent):void {
        var leftTween:Tween = new Tween(this, "x", Regular.easeOut, this.x, (this.x - 800), .5, true);
    function moveRight(event:MouseEvent):void {
        var rightTween:Tween = new Tween(this, "x", Regular.easeOut, this.x, (this.x + 800), .5, true);
    function enLarge(event:MouseEvent):void {
        darkStage = new Sprite();
        darkStage.graphics.beginFill(0x000000, .75);
        darkStage.graphics.drawRect(0, 0, stage.stageWidth, stage.stageHeight);
        darkStage.addEventListener(MouseEvent.CLICK, reMove);
        darkStage.buttonMode = true;
        darkStage.useHandCursor = true;
        parent.addChild(darkStage);
        var nmbr = thumbArray.indexOf(event.currentTarget);
        bigLoad = new Loader();
        bigLoad = loadArray[nmbr];
        trace("bigLoad: " + bigLoad);
        loadSprite = new Sprite();
        loadSprite.addChild(bigLoad);
        loadSprite.x = (stage.stageWidth / 2);
        loadSprite.y = (stage.stageHeight / 2);
        loadSprite.addEventListener(MouseEvent.CLICK, reMove);
        loadSprite.buttonMode = true;
        loadSprite.useHandCursor = true;
        parent.addChild(loadSprite);
    function reMove(event:MouseEvent):void {
        parent.removeChild(darkStage);
        parent.removeChild(loadSprite);
    The function enLarge is the source of the issue (I want the enlarged image to show up and it's not.)
    Please help if you can!
    Thanks,
    Graham

  • Array Cast Question Puzzling me

    The question below puzzles me. The answer states that the result is a ClassCastException because o1 is an int[][], not an int[].
    But I thought the point of line 7 is to say "I know it is a 2D array but I want to cast it to a 1D array - I know I am losing precision here".
    Given:
    1. class Dims {
    2. public static void main(String[] args) {
    3. int[][] a = {{1,2,}, {3,4}};
    4. int[] b = (int[]) a[1];
    5. Object o1 = a;
    6. int[][] a2 = (int[][]) o1;
    7. int[] b2 = (int[]) o1;
    8. System.out.println(b[1]);
    9. } }
    What is the result?
    A. 2
    B. 4
    C. An exception is thrown at runtime
    D. Compilation fails due to an error on line 4.
    E. Compilation fails due to an error on line 5.
    F. Compilation fails due to an error on line 6.
    G. Compilation fails due to an error on line 7.
    Answer:
    C is correct. A ClassCastException is thrown at line 7 because o1 refers to an int[][], not an int[]. If line 7 was removed, the output would be 4.
    A, B, D, E, F, and G are incorrect based on the above. (Objective 1.3)

    While you could approximate casting a 2D array to a 1D array in C/C++ by just grabbing a pointer to your first array and then overrunning array bounds (relying on how C/C++ allocates 2D arrays and the lack of bounds checking), Java's strong typing and bounds checking make this impossible.
    If you want to do something similar in Java, you will need to create a new 1D array of the proper size and copy the elements stored in your 2D array into this new array. That being said, a database is almost guaranteed to be a better solution.
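
    A small sketch of the copy-based approach suggested above (the names and array contents are made up for the example): since the cast is illegal, flatten the int[][] into a newly allocated int[] instead.

    public class Flatten2D {
        // Copies all elements of a (possibly ragged) 2D array into a new 1D array.
        static int[] flatten(int[][] source) {
            int total = 0;
            for (int[] row : source) {
                total += row.length;
            }
            int[] flat = new int[total];
            int pos = 0;
            for (int[] row : source) {
                System.arraycopy(row, 0, flat, pos, row.length);
                pos += row.length;
            }
            return flat;
        }

        public static void main(String[] args) {
            int[][] a = { {1, 2}, {3, 4} };
            int[] b = flatten(a);
            System.out.println(java.util.Arrays.toString(b));   // [1, 2, 3, 4]
        }
    }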

  • MBP with 27" Display performance question

    I'm looking for advice regarding improving the performance, if possible, of my Macbook Pro and new 27'' Apple display combination.  I'm using a 13" Macbook Pro 2.53GHz with 4GB RAM and an NVIDIA GeForce 9400M graphics card, and I have 114GB of the 250GB of HD space available.  What I'm really wondering is whether this is enough spec to run the 27" display easily.  Apple says it is… and it does work, but I suspect that I'm working at the limit of what my MBP is capable of.  My main applications are Photoshop CS5 with Camera RAW and Bridge.  Everything works but I sometimes get lock-ups and things are basically a bit jerky.  Is the bottleneck my 2.53GHz processor or the graphics card?  I have experimented with the OpenGL settings in Photoshop and tried closing all unused applications.  Does anyone have any suggestions for tuning things, and is there a feasible upgrade for the graphics card if such a thing would make a difference?  I have recently started working with 21MB RAW files, which I realise isn't helping.  Any thoughts would be appreciated.
    Matt.

    I just added a gorgeous 24" LCD to my MBP setup (the G5 is not happy). The answer to your question is yes. Just go into Display Preferences and drag the menu bar over to the 24"; this will make the 24" the primary display and the MBP the secondary when connected.

  • Performance question about 11.1.2 forms at runtime

    hi all,
    Currently we are investigating a forms/reports migration from 10 to 11.
    Initially we were using v. 11.1.1.4 as the baseline for the migration. Now we are looking at 11.1.2.
    We have the impression that the performance has decreased significantly between these two releases.
    To give an example:
    A wizard screen contains an image alongside a number of items to enter details. In 11.1.1.4 this screen shows up immediately. In 11.1.2 you see the image rolling out on the canvas whilst the properties of the items seem to be set during this event.
    I saw that a number of features were added to be able to tune performance which ... need processing too.
    I get the impression that a big number of events are communicated over the network during the 'build' of the client-side view of the screen. If I recall correctly, during the migration from 6 to 9, events were bundled to be transmitted over the network so that delays couldn't come from network round-trips. I have the impression that this has been reversed and things are communicated between the client and server as they arrive, and are not bundled.
    My questions are:
    - is anyone out there experiencing the same kind of behaviour?
    - if so, is there some kind of property(ies) that exist to control the behaviour and improve performance?
    - are there properties for performance monitoring that are set but which cause the slowness as a kind of side effect, and can maybe be unset?
    Your feedback will be dearly appreciated,
    Greetings,
    Jan.

    The profile can't be changed although I suspect if there was an issue then banding the line would be something they could utilise if you were happy to do so.
    It's all theoretical right now until you get the service installed. Don't forget there's over 600000 customers now on FTTC and only a very small percentage of them have faults. It might seem like lots looking on this forum but that's only because forums are where people tend to come to complain.

  • SSD Performance Questions

    I've just ordered a MBP 2.4Ghz with 7200rpm 750GB drive, I will be adding either a single 8GB memory stick (total of 10GB) or a 16GB kit.  I'm looking for my first SSD use as a boot drive for OSX and potentially Win 7 in BootCamp and hope I can get some counsel on model selection and configuration.
    I read that SSDs suffer a performance drop (sometimes significant) as the drive fills up.  Also there are both 256GB and 240GB drives.  Ideally I'd like to avoid the frustration associated with seeing performance drop and maximize my formatted capacity.
    1)  Are there models that maintain performance over time?  Is there a specification that indicates how performance drops as the drive fills up?
    2)  Is there a performance difference (current / long-term) between 256 and 240GB drives?
    3)  Does an abundance of RAM improve performance and / or longevity? 
    4)  How much space (if any) should be kept free for swap files in OSX / Windows?
    5)  With 240 / 256GB SSDs, how much usable space is available after formatting?
    6)  Is there a difference in performance based on file format NTFS vs. HPS+?
    7)  Do I need to be concerned about major name brands (Intel, Samsung, OCZ, Kingston, etc.) being incompatible with MBPs?
    8)  Are some SSDs easier to install / configure / maintain on a MBP than others?
    9)  Are there any issues I should be aware of regarding the installation or use of a SSD that would impact my MBP's warranty?
    Based on performance, reliability and 5-year warranty, I've been attracted to the Intel 520.  I've also read good reports on the Samsung 830.  One review indicated that the 830 maintained performance over time while the 520 experienced a significantly greater drop.  True?
    I can buy either drive in the $330 range.  Here on the forum I've read many recommendations for the OWC drives and support.  For comparable performance and a 5-year warranty, it looks like the Extreme Pro would be the model to buy, however, at $460 it is 50% more expensive.  Thoughts?
    Thanks in advance!  This is the kind of purchase that I can only make once every 5-years, so I really appreciate any help.
    JD

    kayakjunkie wrote:
    I've just ordered a MBP 2.4Ghz with 7200rpm 750GB drive, I will be adding either a single 8GB memory stick (total of 10GB) or a 16GB kit.
    Your performance with the large 16GB should outweigh any drawback of the 7,200 RPM drive or even a 5,400 RPM drive; unless you start swapping, the 7,200 should be fine enough.
    The SSD is good for transferring large data sets off the machine to another SSD, but offers little benefit on the same machine with most files, as they are small, so you don't really see any benefit in most day-to-day operations. If you had low RAM then the SSD would help with a faster memory swap. As you know, SSDs wear out faster than hard drives.
    I'm looking for my first SSD use as a boot drive for OSX and potentially Win 7 in BootCamp and hope I can get some counsel on model selection and configuration.
    Here's the speed demon chart, note the fastest ones are smaller in capacity
    http://www.harddrivebenchmark.net/high_end_drives.html
    Again, unless you're transferring large data to an external SSD via Thunderbolt on a constant basis (and can afford to replace the worn-out SSDs), an SSD as a boot drive really isn't worth it for most computers if you have a large amount of RAM.
    It used to be, with 32-bit processors/OSes and 3.5GB RAM limits, that having a fast boot drive mattered day to day because of the faster memory swap, but not anymore. My 4GB with the stock 5,400 RPM drive is fast enough, but I will be getting 16GB soon for my virtual machine OSes.
    I read that SSDs suffer a performance drop (sometimes significant) as the drive fills up.  Also there are both 256GB and 240GB drives.  Ideally I'd like to avoid the frustration associated with seeing performance drop and maximize my formatted capacity.
    Hard drives do this too, because the files have to be broken up more to fit into tiny spaces.
    Hard drives also suffer a bit past 50% filled, as the sectors get smaller.
    I use my 750GB partitioned 50/50, A cloned to B, so I can option-boot either; I won't use the second 50% of the drive day to day as it's too slow for my tastes.
    IMO 250GB is too small for a drive with Windows too; the fastest 500GB SSD would be better value, a better balance of speed and onboard storage.
    1)  Are there models that maintain performance over time?  Is there a specification that indicates how performance drops as the drive fills up?
    Not that I know of.
    2)  Is there a performance difference (current / long-term) between 256 and 240GB drives?
    Not that I know of.
    3)  Does an abundance of RAM improve performance and / or longevity?
    Yes, more RAM = less swapping to the SSD means it will last longer and run faster.
    4)  How much space (if any) should be kept free for swap files in OSX / Windows?
    I would suggest 25% for a SSD should be free space, ideally 50% filled for a hard drive, but to 75% max is likely more realistic for most people.
    6)  Is there a difference in performance based on file format NTFS vs. HPS+?
    You will have little choice of format for OS X or Windows, OS X needs HFS+ and Windows needs NTFS.
    If you do a third partition (hard) then exFAT would likely be the best choice for both OS's to access.
    7)  Do I need to be concerned about major name brands (Intel, Samsung, OCZ, Kingston, etc.) being incompatible with MBPs?
    8)  Are some SSDs easier to install / configure / maintain on a MBP than others?
    Not that I know of.
    9)  Are there any issues I should be aware of regarding the installation or use of a SSD that would impact my MBP's warranty?
    Just don't break anything doing so, as one is allowed to replace the RAM/storage. However, the warranty/AppleCare doesn't cover the newly added items, of course.
    http://eshop.macsales.com/installvideos/
    Based on performance, reliability and 5-year warranty, I've been attracted to the Intel 520.  I've also read good reports on the Samsung 830.  One review indicated that the 830 maintained performance over time while the 520 experienced a significantly greater drop.  True?
    Performance isn't going to matter unless you're dealing with large amounts of data on a constant basis; a long warranty is always good. But SSDs have no moving parts that I know of, so... easy to give a 5-year warranty, IMO.
    Look here
    http://www.harddrivebenchmark.net/
    I can buy either drive in the $330 range.  Here on the forum I've read many recommendations for the OWC drives and support.  For comparable performance and a 5-year warranty, it looks like the Extreme Pro would be the model to buy, however, at $460 it is 50% more expensive.  Thoughts?
    OWC is good, but you're basically doing all the work anyway, so you can choose to install what you want if you find a faster/larger SSD someplace else.
    You need to learn how the Lion Recovery Partition works; there are no OS X install disks anymore, it's all on a partition you boot from to install Lion. If you remove the drive, you need to install Lion somehow again, right?
    Carbon Copy Cloner clones your entire Lion and Lion Recovery Partition to an external drive; you can option-boot from it and it's the same thing. Reverse-clone onto the new SSD.
    Other info you will need.
    https://support.apple.com/kb/HT4718
    https://support.apple.com/kb/dl1433
    http://osxdaily.com/2011/08/08/lion-recovery-disk-assistant-tool-makes-external-lion-boot-recovery-drives/
    An option you can choose, after you have installed Windows in Bootcamp on the SSD (as the machine won't boot a Windows disk from an external optical drive) and used WinClone to clone Bootcamp for backup, is to replace the Superdrive with a kit and place the hard drive there for partitioning and storage.
    This way the SSD stays unchanged and fast; the hard drive takes all the work of the user's files, changes, etc., and the wear and tear goes on that instead.
    The Superdrive goes into an enclosure (sold with the kit) and becomes an external optical drive.
    This modification will of course void your warranty/Applecare.
    For more information, see Bmer (Dave Merten) over at MacOwnersSupportGroup as he has done this and knows all the tricks.

  • Performance question RE: using Lightroom 2 on a local disk to access files on a server

    Hello,
    I am considering moving to a server-based platform for my digital images.  That would allow me to go from linking together 18 external hard drives (500 GB to 1 TB) to accessing a single server to get to all of my information. 
    I currently have 4 Mac Pro towers with individually licensed copies of LR2.  My "Master Catalog" has over 300,000 images in it that I am referencing.
    Are there any performance restrictions to accessing, say, 12 terabytes (and growing daily) of images on a server from one catalog?  Each of the 4 Mac Pros has a copy of the catalog that is synchronized between them.  Right now, in our current setup, we have to swap the external drives between the individual towers in order to have everyone work simultaneously. 
    Is it possible to have 4 computers with individual catalogs running simultaneously on their local disks access the same server for reading/writing information?
    I have heard a rumor that this type of constant read/write on a server is actually bad for it, and can cause corruption, but it seems to me as though this is the exact purpose of a server, so that idea strikes me as odd.
    My biggest concern about all of this is that there will be some sort of limitation (performance or otherwise) in LR2 that will keep me from implementing this server-based setup. 
    Any thoughts?

    bceugene wrote:
    Yes, it makes it slower. That's what I was saying. I don't find it hard to find things spread across two catalogs instead of one. I'd much rather have two catalogs with 150k each than one with 300,000.
    James, have you tried ACDSee Pro? It lists IPTC metadata in the traditional OS "details" mode, which in my experience is much faster and easier to use and analyze than LR. I love LR, but if I need to get an overview of the metadata in a set of files, ACDSee is much more effective and efficient. If you save to XMP automatically from LR, ACDSee dynamically updates the new XMP across its whole catalog. It's allowed me to get a handle on metadata across large sets of files in ways that are not possible with LR.
    And personally, if I had 300K raw files I would not want to additionally burden them with XMP sidecar files and the constant auto-write from LR. To me that would be worse than one large DB file!
    Don
