Long execution times for TestStand conditional statements

I have two test stations – one here, one at the factory in China that has been operating for about a year. The test program uses TestStand 3.1 and primarily calls DLLs developed in CVI. Up until a couple of months ago, both test stations performed in a similar manner; then the computer at the factory died and was replaced with a new, faster computer. Now the same test sequence at the factory takes three times as long to execute (30 min at the factory, 10 min here).
I have recorded the execution time at various points during the execution, and have found that the extra time seems to be occurring during the evaluation of conditional statements in TestStand (i.e. For loops, If statements, Case statements). For example, one particular ‘For’ evaluation takes 30 ms on the test station here, but takes 400 ms at the test station at the factory (note: this is just the evaluation of the For condition, not the execution of the steps contained within the For loop).
The actual DLL calls seem to be slightly faster with the new computer.
Also, the ‘Module Times’ reported don’t seem to match the actual module times on the factory computer. For example, for the following piece of TestStand code:
Label1
Subsequence Call
Label2
I record the execution time to the report text in both Label1 and Label2. Subtracting one from the other gives me about 18 seconds. However, the ‘Module Time’ recorded for ‘Subsequence Call’ is only 3.43 seconds.
Anybody have any ideas why the execution time is so much longer with the new computer? I always set up the computers in exactly the same way, but maybe there is a TestStand setting somewhere that I have missed? Keep in mind, both test stations are running exactly the same revision of the code.

Got some more results from the factory this morning:
1) Task Manager shows that TestExec.exe is the only process using the CPU to any significant degree. Also, the CPU Usage History shows that CPU usage never reaches 100%.
2) I sent a new test program that logs test execution time in more places. Longer execution times are seen in nearly every area of the program, but one area where this is very dramatic is the time taken to return from one particular subsequence call. In this subsequence I log the time just before the <End Group> at the end of Main. There is nothing in Cleanup. I then log the time immediately after returning from this sequence. On the test system I have here this takes approximately 160 ms. On the test system at the factory it takes approximately 14.5 seconds! The program seems to be hanging here for some reason. Every time this sequence is called the same thing happens, and for the same amount of time (and this sequence is called about 40 times in the test program, so this is killing me).
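One way to narrow down where those 14.5 seconds go is to take timestamps from inside the CVI code itself, independent of TestStand's report text and 'Module Time' bookkeeping. Below is a minimal sketch of a pair of helpers (hypothetical names, Windows high-resolution counter, not part of the existing test program) that could be built into the test DLL: call the first from the last step inside the subsequence and the second from the first step after the call returns, then compare the logged values between the two stations.

    /* timing_probe.c - hypothetical helpers for bracketing a TestStand
     * subsequence call from the CVI test DLL. */
    #include <windows.h>
    #include <stdio.h>

    static LARGE_INTEGER g_mark;  /* timestamp taken inside the subsequence   */
    static LARGE_INTEGER g_freq;  /* counter frequency (filled on first use)  */

    /* Call from the last step inside the subsequence, just before <End Group>. */
    void __declspec(dllexport) Probe_Mark(void)
    {
        if (g_freq.QuadPart == 0)
            QueryPerformanceFrequency(&g_freq);
        QueryPerformanceCounter(&g_mark);
    }

    /* Call from the first step after the subsequence call returns; appends the
     * elapsed time to a log file so the two stations can be compared. */
    void __declspec(dllexport) Probe_Elapsed(const char *label)
    {
        LARGE_INTEGER now;
        double ms;
        FILE *fp;

        if (g_freq.QuadPart == 0)
            QueryPerformanceFrequency(&g_freq);
        QueryPerformanceCounter(&now);
        ms = (double)(now.QuadPart - g_mark.QuadPart) * 1000.0 / (double)g_freq.QuadPart;

        fp = fopen("c:\\temp\\probe_times.log", "a");
        if (fp != NULL)
        {
            fprintf(fp, "%s: %.3f ms\n", label, ms);
            fclose(fp);
        }
    }

If the gap also shows up between Probe_Mark and Probe_Elapsed, the time really is being spent in the engine between the two steps rather than inside the code modules.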

Similar Messages

  • Long execution time for report MMREO050N

    Hi,
    I would like to ask for help with report MMREO050N.
    I have executed report MMREO050N in ECC 6.0 to archive 42,000 materials, but the program runs a very long time; in 24 hours the report has processed only 254 materials, and at this rate the program will run for months...
    Do you have any suggestions?
    Thanks
    Regards
         Andrea Ciocca

    Is this the first archiving program you have executed?
    I am just asking because I have seen many questions here that all started with archiving of the material master, while material master archiving is probably the very last step in an archiving cycle.
    Just execute SARA, enter MM_MATNR and click the network button.
    SAP will show you which objects should be archived prior to the material master.
    A wrong archiving sequence has an enormous effect on run time, because SAP executes several hundred checks for each material master. And if there is still dependent data for a material to be archived, then SAP reads a lot of data, writes error messages to the log, and does not archive the material.
    Even after having archived the dependent data I am still seeing a runtime of about 3 minutes per material.

  • Can I reduce the execution time for a step in TestStand?

    Hi,
    I measured the execution time of a single step in TestStand Ver 2.0. It comes to around 20 milliseconds per step. Can I reduce this execution time?
    Are there any settings available for reducing the execution time, other than result logging and exception handling?

    It's difficult to tell exactly what time you are reporting for your step. Clearly we don't have control over the time it takes your code to execute. However, we are constantly working on reducing the overhead of calling the code. In addition, you don't mention the type of step you are calling. One way to have a common reference is to use the example \Examples\Benchmarks\Benchmarks.seq. Below I have posted the results of running this sequence with tracing and result collection enabled and then disabled (a standalone C sketch of the same measurement idea follows the numbers). I have a 700 MHz, 128 MB RAM, Dell PIII laptop. In this example there is no code within the code modules. You will notice that calling a DLL has the least overhead, with a minimum of 7.459 ms with tracing and results enabled and 0.092 ms with tracing and results disabled. Although not included below, if I enable results but disable tracing I get a minimum time of 0.201 ms, a 100x improvement on your time.
    With Results and Tracing enabled.
    7.578 milliseconds per step for CVI Standard Prototype - Object File
    7.579 milliseconds per step for CVI Standard Prototype - DLL
    7.459 milliseconds per step for DLL Flexible Prototype
    8.589 milliseconds per step for DLL Flexible Prototype Numeric Limit
    9.563 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition
    10.015 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition and 4 Parameters
    7.868 milliseconds per step for ActiveX Automation
    8.892 milliseconds per step for LabVIEW Standard Prototype
    With tracing and results disabled.
    0.180 milliseconds per step for CVI Standard Prototype - Object File
    0.182 milliseconds per step for CVI Standard Prototype - DLL
    0.092 milliseconds per step for DLL Flexible Prototype
    0.178 milliseconds per step for DLL Flexible Prototype Numeric Limit
    0.277 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition
    0.400 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition and 4 Parameters
    0.270 milliseconds per step for ActiveX Automation
    1.235 milliseconds per step for LabVIEW Standard Prototype
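    To get a feel for how much of a per-step time is engine overhead (tracing, result collection) versus the code module itself, the same measurement idea can be reproduced outside TestStand: call an empty function in a tight loop and divide the elapsed time by the iteration count. A rough sketch in plain C, with illustrative names and iteration counts (this is not an NI-provided benchmark):

        /* bench_empty_call.c - per-call overhead of an empty C function,
         * analogous in spirit to the empty code modules in Benchmarks.seq. */
        #include <stdio.h>
        #include <time.h>

        static volatile int g_sink;                 /* keeps the call from being optimized away */
        static void EmptyModule(void) { g_sink++; } /* stand-in for an empty code module        */

        int main(void)
        {
            const long iterations = 1000000L;
            clock_t start, end;
            long i;
            double msPerCall;

            start = clock();
            for (i = 0; i < iterations; i++)
                EmptyModule();
            end = clock();

            msPerCall = ((double)(end - start) / CLOCKS_PER_SEC) * 1000.0 / (double)iterations;
            printf("%.6f ms per call for an empty C function\n", msPerCall);
            return 0;
        }

    Comparing that raw figure with the per-step numbers above gives a sense of how much of a step time is TestStand bookkeeping (tracing, result collection) rather than the call into the code module itself.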

  • Alert with event for delayed job and long execution time

    Dear All,
    We are planning to send an alert via email in case a job is delayed or has a long execution time.
    I have followed the steps below:
    1) Create an event to be raised when the job is delayed.
    2) Create a job chain with STEP1, Job 1, and assign the event in the Raise Event parameter.
    3) Once the job chain is delayed, it should raise the event.
    4) The above event should trigger a custom email, but I cannot set the Mail_To parameter as an IN parameter, and it is not recognized during execution.
    It ends with the error below.
    Details:
    JCS-122035: Unable to persist: JCS-102075: Mandatory parameter not set: Parameter Mail_To for job 20413
    at com.redwood.scheduler.model.SchedulerSessionImpl.writeDirtyListLocal(SchedulerSessionImpl.java:805)
    at com.redwood.scheduler.model.SchedulerSessionImpl.persist(SchedulerSessionImpl.java:757)
    Please let us know if anybody knows how to add the Mail_To parameter to the script.
    Any help is appreciated.
    Thanks in advance.
    Regards,
    Jiggi

    Dear Jiggi,
    Where will you define the execution time of a particular job? Some jobs take only 1 or 2 minutes, but some normally take more than an hour, so how will you decide the expected execution time of individual jobs?
    I think you can use the P_TO parameter for sending mail; if you want to add some output log, activate the spool output script.
    Thanks and regards
    Muhammad Asif

  • Estimate execution time for CTAS

    Hi,
    I have long been searching for a way to estimate the execution time of CTAS commands. I am a DBA. Our users run CTAS commands to load millions of rows. The commands fetch data from 4-5 very big tables, each with millions of records, process them using WHERE and GROUP BY clauses, and finally create the table. All of this is coded in the CTAS command. These CTAS statements sometimes take a long time, like 5 or 8 hours. Users frequently ask me how long a run is going to take. I use both OEM and TOAD, but I couldn't find an estimated time in these tools. I feel that there must be some way, but I don't know the method.
    Can anybody please help me in this regard?
    Thanks & Regards
    Ananda Basak

    It depends on a number of factors, chief among them how accurate your estimate needs to be, but also including things like what version of Oracle you're using, how accurate your database statistics are, etc.
    One option is to look at the TIME column in the plan. For example, if I wanted to do a CTAS to create a copy of the EMP table, the optimizer expects that to take on the order of a second. Of course, the optimizer's estimates are only estimates and are only as accurate as the database statistics that are in place. If the optimizer generates a bad plan, it's likely because it expects some operation to take much more or much less time than it does in reality, in which case the optimizer's runtime estimate is likely to be way off.
    SQL> explain plan for create table emp_copy as select * from emp;
    Explained.
    SQL> ed
    Wrote file afiedt.buf
      1  select *
      2*   from table( dbms_xplan.display() )
    SQL> /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2748781111
    | Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | CREATE TABLE STATEMENT |          |    14 |   546 |     4   (0)| 00:00:01 |
    |   1 |  LOAD AS SELECT        | EMP_COPY |       |       |            |          |
    |   2 |   TABLE ACCESS FULL    | EMP      |    14 |   546 |     3   (0)| 00:00:01 |
    -----------------------------------------------------------------------------------
    Depending on the query plan, you may be able to query the GV$SESSION_LONGOPS view to track the progress of any long-running operations in your session. If your query plan involves a lot of full table scans, sorts that take more than a few seconds, hash joins, etc., then it is likely that you'll be able to chart the progress of the query over time by watching GV$SESSION_LONGOPS change. Of course, if your query is going to do many long-running operations, you'll need a human to interpret the data a bit in order to figure out where in the plan Oracle currently is and how far along that means the entire query is.
    SELECT *
      FROM gv$session_longops
    WHERE time_remaining > 0
    If you're using 11g and you have the performance and tuning pack licensed, you could also potentially use the V$SQL_PLAN_MONITOR view.
    Justin

  • Optimize long execution time due to 'db file sequential read'

    Hi to all,
    I have a query that takes a long time to execute. Most of the time is due to 'db file sequential read'. The query is:
    SELECT * FROM Table_Name
    WHERE col1 = :some_value
    AND col2 BETWEEN :some_range_about_2_Million
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 21 | 504 | 26125 |
    | 1 | TABLE ACCESS BY INDEX ROWID| Table_Name | 21 | 504 | 26125 |
    | 2 | INDEX RANGE SCAN | Index Name | 1705K| | 4100 |
    The table is not partitioned and has around 0.2 billion records. The record set for column 'col1' is around 1,700K.
    Another index, covering col2 and col3, is available but is not being used.
    Any suggestions to optimize it?
    Regards.

    Perhaps a combined index (col2, col1) would work... or try the "parallel" hint.
    :p

  • Query Execution Time for a Query causing ORA-1555

    dear Gurus
    I have an ORA-01555 error. Earlier I used the query duration mentioned in the alert log and increased the undo retention, as I did not find the UNDOBLKS column of v$undostat high at the time the ORA-01555 occurred.
    But a new ORA-01555 is now occurring whose query duration exceeds the undo retention time.
    My question:
    1. Is it possible to accurately find the query duration anywhere besides the alert log file?

    Abhishek, since you are using an undo tablespace and have already increased the time that undo data is retained via undo_retention, you might want to consider the following ideas, which were useful with the 1555 error under manual rollback segment (rbs) management.
    1- Tune the query. The faster a query runs, the less likely a 1555 will occur.
    2- Look at the processing. If a process was reading and updating the same table while committing frequently, then under manual rbs management the process would basically create its own 1555 error rather than just being the victim of another process changing data and the rbs data being overlaid while the long-running query was still running. With undo management, the process could be generating more data than can be held for the undo_retention period, but because it is committed, Oracle has been told it doesn't really have to keep the data for rolling back a current transaction, so it gets discarded to make room for new changes.
    If you find item 2 is true, then separating the select from the update will likely eliminate the 1555. You do this by building a driving table that holds the keys of the rows to be updated or deleted, and then using that driver to control access to the target table.
    3- If the cause of the 1555 is, or may be, delayed block cleanout, then select * from the target table prior to running the long-running query.
    Realistically you might need to increase the size of the undo tablespace to hold all the change data, and set the undo_retention parameter to be longer than the job run time. Which brings us back to option 1: tune every query in the process so that the job run time is reduced to optimal.
    HTH -- Mark D Powell --
    Dear Mark,
    Thanks for the excellent advice. I found that the error is coming because of frequent commits, which is item 2 as you rightly mentioned.
    I think I need to keep a watch on the queries running. I was just trying to find the execution time for the queries, if there is any way to find the query duration without running a trace.
    regards
    abhishek

  • How to find the Execution Time for Java Code?

    * Hi everyone, I want to calculate the execution time for my process in Java.
    * The following was the output from my code:
    O/P:
    This run took 0 Hours ;1.31 Minutes ;78.36 Seconds
    * In the above output, the hours, minutes and seconds should be reported exactly, but in my code the seconds still include the minutes (they should not)...
    * Here is my code:
    import java.text.DecimalFormat;

    static long start_time;

    public static void startTime() {
        start_time = System.currentTimeMillis();
    }

    public static void endTime() {
        DecimalFormat df = new DecimalFormat("##.##");
        long end_time = System.currentTimeMillis();
        float t = end_time - start_time;
        float sec = t / 1000;
        float min = 0, hr = 0;
        if (sec > 60) {
            min = sec / 60;   // sec is never reduced, so it still shows the full elapsed time
        }
        if (min > 60) {
            hr = min / 60;    // min is likewise never reduced
        }
        System.out.println("This run took " + df.format(hr) + " Hours ;" + df.format(min) + " Minutes ;" + df.format(sec) + " Seconds");
    }

    * How do I calculate the exact timing for my process?
    * Thanks

    * Hi flounder, will the following code work correctly?
         public static void endTime() {
              DecimalFormat df = new DecimalFormat("##.##");
              long end_time = System.currentTimeMillis();
              float t = end_time - start_time;
              float sec = t / 1000;
              float min = 0, hr = 0;
              while (sec >= 60) {
                   min++;
                   sec = sec - 60;
                   if (min >= 60) {
                        min = 0; // or min = min - 60;
                        hr++;
                   }
              }
              System.out.println("This run took " + df.format(hr) + " Hours ;" + df.format(min) + " Minutes ;" + df.format(sec) + " Seconds");
         }

  • Ideal execution time for any program

    Hi,
    Is there any method to determine the ideal execution time for a program?
    If so, how do I determine it?
    I just want the maximum time that a program can take so that performance is not hampered.
    Thanks,
    Binay.

    Did you ask for the 'ideal execution time' or 'how to measure execution times'?
    The second question was answered in one of your other questions.
    Optimization:
    Run an SQL trace, go to the summary by SQL statement, and check the top 10 contributions (time = duration).
    Try to optimize them and note the minimal time per record; if it is larger than 10,000 microseconds, then you should check index usage.
    Run SE30, go to the hit list, sort by net time, again address the top 10 contributions, try to optimize, and check the coding.
    Optimize and trace again, then check the top 10 contributions again ....
    Siegfried

  • How to get execution time for a view inside a procedure?

    Hi,
    I want the execution time for all the views in my database. I tried EXECUTE IMMEDIATE, but it does not seem to work.
    It is not waiting for the view's query to complete before going to the next step.
    If I execute the same statement in SQL*Plus, it displays the correct time.
    Here is my code:
    Begin
    output_file := UTL_FILE.FOpen ('RECORDING',v_FileName, 'W', 32767);
    Open viewcur;
    Loop
    Fetch viewcur into v_view_name;
         Exit when viewcur%notfound;
         SELECT to_char(systimestamp,'DD-MON-YYYY HH24:MI:SS.FF') into v_start_time from dual;
         v_stmt := 'Select * from ' ||v_view_name ;
         Execute Immediate v_stmt;
    SELECT to_char(systimestamp,'DD-MON-YYYY HH24:MI:SS.FF') into v_end_time from dual;
    v_record_str := v_start_time||','||v_view_name||','||v_end_time;
         UTL_FILE.PUT_LINE(output_file, v_record_str);
    End Loop;
    Close viewcur;
    utl_file.fClose(output_file);
    End ;
    Oracle version: 11.1.0.6.0

    Hi,
    Running with a user with dba privileges:
    DECLARE
        CURSOR viewcur IS
            SELECT table_name
            FROM   dictionary d
            WHERE  d.table_name LIKE 'ALL_A%';
        output_file UTL_FILE.file_type;
        v_FileName  VARCHAR2(30) := 'TEST_VIEW_TIME.TXT';
        v_view_name dictionary.table_name%TYPE;
        v_start_time varchar2(30);
        v_end_time varchar2(30);
        v_record_str varchar2(200);
        v_stmt varchar2(200);
    BEGIN
        output_file := UTL_FILE.FOpen('EXT_FILES', v_FileName, 'W', 32767);
        OPEN viewcur;
        LOOP
            FETCH viewcur
                INTO v_view_name;
            EXIT WHEN viewcur%NOTFOUND;
            SELECT TO_CHAR(systimestamp, 'DD-MON-YYYY HH24:MI:SS.FF')
            INTO   v_start_time
            FROM   dual;
            v_stmt := 'Select * from ' || v_view_name;
            EXECUTE IMMEDIATE v_stmt;
            SELECT TO_CHAR(systimestamp, 'DD-MON-YYYY HH24:MI:SS.FF')
            INTO   v_end_time
            FROM   dual;
            v_record_str := v_start_time || ',' || v_view_name || ',' || v_end_time;
            UTL_FILE.PUT_LINE(output_file, v_record_str);
        END LOOP;
        CLOSE viewcur;
        utl_file.fClose(output_file);
    END;
    /
    TEST_VIEW_TIME.TXT:
    02-JUL-2009 11:48:47.953000,ALL_ARGUMENTS,02-JUL-2009 11:48:47.953000
    02-JUL-2009 11:48:47.953000,ALL_ALL_TABLES,02-JUL-2009 11:48:47.953000
    02-JUL-2009 11:48:47.953000,ALL_ASSOCIATIONS,02-JUL-2009 11:48:47.953000
    02-JUL-2009 11:48:47.953000,ALL_AUDIT_POLICIES,02-JUL-2009 11:48:47.999000
    02-JUL-2009 11:48:47.999000,ALL_AUDIT_POLICY_COLUMNS,02-JUL-2009 11:48:48.093000
    02-JUL-2009 11:48:48.093000,ALL_AWS,02-JUL-2009 11:48:48.187000
    02-JUL-2009 11:48:48.187000,ALL_AW_PS,02-JUL-2009 11:48:48.187000
    02-JUL-2009 11:48:48.187000,ALL_APPLY,02-JUL-2009 11:48:48.343000
    02-JUL-2009 11:48:48.343000,ALL_APPLY_PARAMETERS,02-JUL-2009 11:48:48.421000
    02-JUL-2009 11:48:48.421000,ALL_APPLY_KEY_COLUMNS,02-JUL-2009 11:48:48.437000
    02-JUL-2009 11:48:48.437000,ALL_APPLY_CONFLICT_COLUMNS,02-JUL-2009 11:48:48.781000
    02-JUL-2009 11:48:48.781000,ALL_APPLY_TABLE_COLUMNS,02-JUL-2009 11:48:48.828000
    02-JUL-2009 11:48:48.828000,ALL_APPLY_DML_HANDLERS,02-JUL-2009 11:48:48.890000
    02-JUL-2009 11:48:48.890000,ALL_APPLY_PROGRESS,02-JUL-2009 11:48:48.968000
    02-JUL-2009 11:48:48.968000,ALL_APPLY_ERROR,02-JUL-2009 11:48:49.015000
    02-JUL-2009 11:48:49.015000,ALL_APPLY_ENQUEUE,02-JUL-2009 11:48:49.234000
    02-JUL-2009 11:48:49.234000,ALL_APPLY_EXECUTE,02-JUL-2009 11:48:49.281000
    02-JUL-2009 11:48:49.281000,ALL_AW_PROP,02-JUL-2009 11:48:49.531000
    02-JUL-2009 11:48:49.546000,ALL_AW_OBJ,02-JUL-2009 11:48:49.578000
    02-JUL-2009 11:48:49.578000,ALL_AW_PROP_NAME,02-JUL-2009 11:48:49.609000
    02-JUL-2009 11:48:49.609000,ALL_AW_AC,02-JUL-2009 11:48:49.624000
    02-JUL-2009 11:48:49.624000,ALL_AW_AC_10G,02-JUL-2009 11:48:49.640000
    Regards,

  • I had backed up my iPhone 4s on iCloud on Jan 19. I am now trying to do another backup, but it says the time required is 7 hours. That seems too long a time for 1 GB of data stored on iCloud. Can someone help me please?

    To be honest, that sounds about right.
    For example, on my 8 Mbps (megabits) down service I get around 0.4 Mbps upload. That is the equivalent of (very approximately) 3 MB (megabytes) per minute, or 180 MB per hour. Over 7 hours that would be just over 1 GB.
    Obviously, it all depends on your connection speed, but that is certainly what I would expect, and that is why I use my computer for backing up, not iCloud.  So much quicker.
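    If you want to redo that arithmetic for your own connection, the back-of-the-envelope calculation is easy to script. Here is a throwaway C sketch using the same assumed figures as above (0.4 Mbit/s upload, roughly 1 GB of backup data); the constants are placeholders to swap for your own numbers:

        /* upload_time.c - rough estimate of how long a backup of a given size
         * takes to upload at a given link speed. */
        #include <stdio.h>

        int main(void)
        {
            const double upload_mbit_per_s = 0.4;   /* assumed upload speed, megabits/s */
            const double backup_mbyte = 1024.0;     /* assumed backup size, ~1 GB       */

            double mbyte_per_hour = (upload_mbit_per_s / 8.0) * 3600.0;  /* ~180 MB/hour */
            double hours = backup_mbyte / mbyte_per_hour;

            printf("~%.0f MB/hour -> about %.1f hours for %.0f MB\n",
                   mbyte_per_hour, hours, backup_mbyte);
            return 0;
        }

    That lands in the same ballpark as the 7-hour estimate once protocol overhead and a less-than-steady upload rate are taken into account.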

  • Execution time for Call Library Function Node

    I am experimenting with the Call Library Function Node block in LabVIEW and am curious whether it should be running faster than what I'm seeing. For testing purposes, I have compiled and transferred to my RT target the .out file from the KB article http://digital.ni.com/public.nsf/allkb/81D1172E3C28A5E4862575CC0076A230 (I'm using the VxWorks 6.1 version). The function in the .out file just multiplies two inputs together, adds a constant, and returns the result. I have put this inside a 1 kHz timed loop with a commanded period of 1 ms, and via the Tick Count (ms) block and shift registers I calculate the amount of time per loop execution. This process is apparently taking 5 ms per cycle, and to me that seems slow. Is that roughly the correct execution time for this kind of setup? I will attach my test .vi file.
    What I'm using:
    Windows 7
    LabVIEW 2009 SP1
    NI-cRIO 9024 with NI-RIO 3.4.0
    Attachments:
    test DLL.vi (31 KB)

    First off, the way you are doing timing isn't necessarily accurate, because you don't know when the Tick Count VI is being called. For example, if it gets called on one iteration after your Call Library Function Node executes, and on the next iteration it gets called before the CLFN executes, the subtraction doesn't include the CLFN call, so you aren't seeing the true time it takes for the DLL to be called (the C sketch after this reply illustrates the same placement issue in text form).
    Where it says "error" in the top left-hand corner of your loop, left-click and choose the previous iteration timing. Also, do you have the ability to choose a 1 MHz clock? Are you sure it's actually being run on the RT target and not on your PC? Running it on the PC would definitely make it difficult to execute at a 1 kHz rate.
    CLA, LabVIEW Versions 2010-2013
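    For readers more comfortable with text-based code, here is a small, hypothetical C sketch of the placement issue described above: one measurement brackets only the call being benchmarked, while the other reads the clock once per loop iteration and therefore folds the rest of the iteration's work into the result. The function names and iteration counts are illustrative only.

        /* timing_placement.c - why where you read the clock matters. */
        #include <stdio.h>
        #include <time.h>

        static volatile double g_sink;

        /* Stand-in for the code module being benchmarked. */
        static double Module(double a, double b)
        {
            int i;
            double r = 0.0;
            for (i = 0; i < 10000; i++)
                r += a * b + 1.0;
            return r;
        }

        /* Stand-in for the other work done in each loop iteration. */
        static void OtherLoopWork(void)
        {
            int i;
            for (i = 0; i < 200000; i++)
                g_sink += i * 0.5;
        }

        int main(void)
        {
            const int iterations = 1000;
            clock_t loopStart, loopEnd, a, b;
            double tightSec = 0.0, looseSec;
            int i;

            loopStart = clock();                 /* "loose": whole loop / N       */
            for (i = 0; i < iterations; i++)
            {
                a = clock();                     /* "tight": only around the call */
                g_sink = Module(3.0, 4.0);
                b = clock();
                tightSec += (double)(b - a) / CLOCKS_PER_SEC;

                OtherLoopWork();                 /* unrelated per-iteration work  */
            }
            loopEnd = clock();
            looseSec = (double)(loopEnd - loopStart) / CLOCKS_PER_SEC;

            printf("loose: %.3f ms/iteration (includes everything)\n",
                   looseSec * 1000.0 / iterations);
            printf("tight: %.3f ms/iteration (call only)\n",
                   tightSec * 1000.0 / iterations);
            return 0;
        }

    On a desktop PC the absolute numbers mean nothing for the cRIO, but the gap between the two figures is the point: the loosely placed measurement attributes the other loop work to the call being measured.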

  • Execution Time for T.Code

    Hi Experts,
    I want to know the exact execution time for a transaction code. I checked it in ST03 and ST03N, but I can't get the proper data. In ST03 I get the average and total response times, but I want the exact execution or response time.
    Waiting for your inputs.
    Regards,
    Nisit

    From SAP:
    For old versions, in transaction ST03:
    Click on the "Performance Database" tab.
    Then double-click on Total.
    It will open the dialogue box CHOOSE TIME PERIOD; select the time period and then click on the required date.
    Then click on Transaction Profile; here it will give the list of transactions executed and their execution times.
    In ST03N, you will find "TRANSACTION PROFILE" under Analysis Views; when you double-click on it, it will give the transaction codes executed for the present day.
    If you want the month's data, then go to Expert mode in ST03N; you will get the "TRANSACTION PROFILE" data under Analysis Views.
    Regards,
    Beena

  • Execution time for web reports

    Hello every one,
    How do we calculate the execution time for web reports? For query execution we go through RSRT: give the query name, press the Execute + Debug button, select the 'Statistical Data' and 'Do Not Cache' options, and press Enter; after getting the output, press the Back button and we get the duration of the query.
    But my question is: can we calculate the execution time for a web report? If so, can you please guide me?
    And can you also tell me, if there is an RRI for one report, how to calculate the execution time for those queries?
    Example: query ABC has XYZ as its drilldown report; I need to calculate the execution time for the XYZ report when called via the ABC report.
    Thanks in advance,
    Best Regards.
    NP.

    Hi,
    For reports executed in java web you can add the parameter &PROFILING=X
    to the URL in order to record the execution time. Please have a look at SAP note 1048691 for further information.
    Best regards,
    Janine

  • Anyone else having an extra-long delivery time for the late 2013 MacBook Pro 15" 2.6 GHz 1 TB model?

    As of today, Feb 15, I have been waiting for 78 days. Finland doesn't have an official Apple Store (only authorized resellers). It was my mistake because I should have bought from apple.com. But I think 2.5 months is too long, and I have no information about when I can get it.

    The low-end 15” rMBP with 8 GB of RAM and a 256 GB SSD costs $2,000.
    The low-end 15” rMBP with 8 GB of RAM and a 512 GB SSD costs $2,300.
    With 16 GB of RAM and a 512 GB SSD it costs $2,500.
    The high-end 15” rMBP with 16 GB of RAM and a 512 GB SSD costs $2,600.
    All cost $500 more with a 1 TB SSD.
    So the SSD size is not a differentiating factor unless you prefer 256 GB.
    If you think you need 16 GB of RAM and a 512 GB SSD then you might as well spend the extra $100 for the high-end model.
    The high-end model will make a difference on gaming and graphics-oriented applications.
    They tend to run other applications at about the same speed.
    Both models are rated with 8 hours of battery life.
    Benchmarks on PhotoShop comparing memory sizes:  https://discussions.apple.com/thread/5659174?tstart=30
    The speed was about 10% different between 8 GB and 16 GB models.
    On other applications your mileage may vary.
    Application benchmarks:
    http://www.macworld.com/article/2059215/15-inch-retina-macbook-pro-review-a-tale-of-two-laptops.html
    The low-end and high-end performed about the same on non-graphics-oriented applications.
    Graphics oriented benchmarks:
    On the Cinebench OpenGL benchmark the high-end was about twice as fast.
    On the Unigine Heaven benchmark they ran at about the same speed.
    On the Unigine Valley benchmark the high-end was about 1.5x as fast.
