Does Qmaster actually make a performance difference that's NOTICEABLE?

I have Apple Qmaster 2.3.1 installed on a Dual 2.7 Power Mac and on a new Intel-based i7 MacBook Pro. The FCP Suite is all installed on the Power Mac tower, and Compressor doesn't allow you to use the same serial on a second computer in the network (the documentation doesn't state it MUST be on all machines, only Qmaster).
I tried this setup with an Ethernet cable as well as with AirPort networking (computer-to-computer setup, firewall turned off, internet access turned off for security).
According to the "getting started quickly" segment of the 'Apple Qmaster and Compressor 2' PDF that came with my FCP Suite, I have installed it correctly.
I can see and submit to the cluster that originates on the Power Mac, and I can see the active service. (According to the documentation, a Compressor cluster only allows one core of the MacBook Pro to be used, and sadly only three kinds of render software can make use of all four, but NOT any Compressor or FCP rendering? Lame... or please correct me if I'm wrong.)
I did a transcode (a resize plus a change of codec) and timed it, first using just This Computer, then through the cluster.
Over a 22-minute encode in both settings, there was barely a 5% difference between the trial encodings... is this thing actually working? Is there a BETTER way to test the cluster, to see whether I'd actually benefit from keeping the laptop within cable length of the tower? (i.e., is there a verifiably FAST encoding I could try to see ANY difference; .mov to .mpv or something?)
I then downloaded Pooch and tried to build a cluster with it, but it breaks shortly after submission with an app crash... so I can't even get a performance baseline to compare with Qmaster.
Seriously... if a 5-10% increase in encoding speed is what Apple considers QUALITY performance from a virtual 6-core system via Qmaster, I am underwhelmed... so I'm thinking something must be wrong instead. I tested the cable with a high-bandwidth video iChat and a few other things; it seems to have no problem with throughput, so the cable doesn't look like the culprit.

On the i7, the firewall was also turned off (hoping this opens all available ports), the Qmaster preference pane was set to 'Services only' per the client/cluster rules in the guidelines, the Managed setting was turned off, the Network choice under the Advanced tab was set to All Interfaces, and a working cable was confirmed by a machine-to-machine connectivity test: an iChat audio/video connection.
On the client machine (the tower, submitting to the cluster) the setup was the same, except the preference pane was set to 'QuickCluster with services', Services had Managed unchecked, the QuickCluster option was set to 'Include unmanaged services from other computers', and the rest of the settings were identical.
I just heard a Pick Our Brains segment on the DP Buzz podcast that said H.264 does not multithread, which may be affecting things even when transcoding to a different size (i.e., re-encoding after resizing?), so maybe I will try a QT file output using the x264 codec, which the author loosely suggests IS multithreaded (dunno, but I'll try).
Furthermore, I re-read the manual, and pg. 15 is conflicting: "NOTE: the Compressor 2... limited to computers that have either FCP Studio or DVDSP 4 installed". Well, that's not possible without a multi-license release, since the installer checks and warns you if you try installing the same software using the same license number. It also conflicts with the install instructions later on (below).
Later on the same page the restrictions are different: "Each computer in the network will require Apple Qmaster AND/OR Compressor 2...", then it's muddled again on pg. 16 (installation): "make sure the client software (meaning Qmaster, I think) is on at least one computer", and further down, "software (either Compressor or Apple Qmaster)".
SOOOOOO, what I gathered from the majority of the conflicting specs is that at least ONE machine should have ALL of FCP and Qmaster installed, and the rest should at least have Qmaster installed. Where it says either/or, I went with the 'or' meaning: I was allowed to install only Qmaster, without the installer warning me about anything.
(I am not doing Dolby Digital Professional audio, so thankfully I don't have to install FCP/DVDSP as a result... because I don't see buying another license just to test out improved cluster processing.)
I wouldn't mind knowing if anyone has found Pooch to be a better and more flexible solution, as it supposedly works with a number of processes, including QT codecs for exporting a number of different formats... has anyone tried Pooch with iMovie, FCP, or DVDSP?

Similar Messages

  • Mac recognizes more RAM but no performance difference?

    I purchased an extra 2 GB of RAM for my MacBook Pro (Intel Core 2 Duo) for a total of 3 GB. I bought the RAM at a PC store outside of Apple, but it is built to the exact specifications of memory bought from Apple.
    I successfully installed the memory myself, and when I booted up the Mac and looked under 'About This Mac' it showed 3 GB of RAM, so hip hip hooray, I did everything OK.
    However, there was no performance difference at all. Music files, video, and even PDF files loaded no faster than when I had just 1 GB of RAM.
    I ran a memory test, plus an additional memory test with TechTool Pro, and everything seemed to be working fine.
    Does anyone know why I cannot see any performance difference? To be exact, it feels as though the speed has improved as if I'd added about 512 MB, but nowhere near 3 GB.
    I then took my 1 GB RAM stick out and inserted just the 2 GB stick (the newly purchased one), and still no performance difference occurred.
    Even the customer support person at the store where I bought the RAM said this is quite odd and that the Mac should perform much faster.
    Any suggestions or solutions?
    Thank you.
    Dorian

    +Opening up music files, to video and even PDF files loaded up no faster than when I had just 1 GB of ram.+
    The first time you open a file, loading is limited by the speed of reading from your hard drive, so the extra RAM isn't making any difference.
    +Actually to be exact, it feels as though the speed has improved about 512 MB but no where near 3GB.+
    RAM won't magically make your computer run faster. Adding more RAM simply stops the computer from slowing down by paging to the hard drive, and only if it actually uses that much memory.
    It just sounds like the typical way you use your computer requires a bit more than 512 MB but less than 1 GB, and this is really quite normal.

  • Huge performance differences between a map listener for a key and filter

    Hi all,
    I wanted to test the different kinds of map listener available in Coherence 3.3.1, as I would like to use it as an event bus. The result was that I found huge performance differences between them. In my use case, my data is time-stamped, so the full key of a datum is the key which identifies its type plus the time stamp. Unfortunately, when I add my map listener to the cache, I only know the type id, not the time stamp, so I cannot add a listener for a key; I must use a filter which tests the value of the type id. When I launch my test, I get terrible performance results. I then tried a listener for a key, which gave me much better results, but in my case I cannot use it.
    Here are my results on a 2.13 GHz dual-core machine:
    1) Map Listener for a Filter
    a) No Index
    Create (data always added; the key is composed of the type id and the time stamp)
    Cache.put
    Test 1: Total 42094 millis, Avg 1052, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 2: Total 43860 millis, Avg 1096, Total Tries 40, Cache Size 80000
    Update (data added then updated; the key is composed of the type id only)
    Cache.put
    Test 3: Total 56390 millis, Avg 1409, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 4: Total 51734 millis, Avg 1293, Total Tries 40, Cache Size 2000
    b) With Index
    Cache.put
    Test 5: Total 39594 millis, Avg 989, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 6: Total 43313 millis, Avg 1082, Total Tries 40, Cache Size 80000
    Update
    Cache.put
    Test 7: Total 55390 millis, Avg 1384, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 8: Total 51328 millis, Avg 1283, Total Tries 40, Cache Size 2000
    2) Map Listener for a Key
    Update
    Cache.put
    Test 9: Total 3937 millis, Avg 98, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 10: Total 1078 millis, Avg 26, Total Tries 40, Cache Size 2000
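    A plausible explanation for the gap (an assumption about the mechanics, not something stated in the results above): a key-based listener can be located with a single hash lookup per event, while every filter-based listener must be evaluated against each event. A self-contained plain-Java sketch of the two dispatch strategies (the classes and names are illustrative, not Coherence internals):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.IntPredicate;

// Illustrative sketch (not actual Coherence internals): dispatching one event
// to 2000 key-registered listeners is a single hash lookup, while 2000
// filter-registered listeners must each be evaluated per event.
public class ListenerDispatchSketch {
    public static void main(String[] args) {
        final int listeners = 2000;

        // Key-based registration: listener found by exact key.
        Map<Integer, Runnable> byKey = new HashMap<>();
        // Filter-based registration: every predicate must be tried.
        List<IntPredicate> byFilter = new ArrayList<>();

        for (int i = 0; i < listeners; i++) {
            final int key = i;
            byKey.put(key, () -> {});
            byFilter.add(k -> k == key);
        }

        // One event for key 42:
        int keyLookups = byKey.containsKey(42) ? 1 : 0; // O(1) work
        int filterEvaluations = 0;                      // O(n) work
        for (IntPredicate p : byFilter) {
            filterEvaluations++;
            p.test(42);
        }

        System.out.println("key lookups: " + keyLookups);
        System.out.println("filter evaluations: " + filterEvaluations);
    }
}
```

    With 2000 listeners and 80000 puts, that per-event scan adds up, which is consistent with the roughly 40x spread between Tests 1-8 and Tests 9-10.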
    Please help me find what is wrong with my code, because for now it is unusable.
    Best Regards,
    Nicolas
    Here is my code
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import com.tangosol.io.ExternalizableLite;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.EqualsFilter;
    import com.tangosol.util.filter.MapEventFilter;
     public class TestFilter {
          /**
           * To run a specific test, just launch the program with one parameter which
           * is the test index.
           */
          public static void main(String[] args) {
               if (args.length != 1) {
                    System.out.println("Usage : java TestFilter 1-10|all");
                    System.exit(1);
               }
               final String arg = args[0];
               if (arg.endsWith("all")) {
                    for (int i = 1; i <= 10; i++) {
                         test(i);
                    }
               } else {
                    final int testIndex = Integer.parseInt(args[0]);
                    if (testIndex < 1 || testIndex > 10) {
                         System.out.println("Usage : java TestFilter 1-10|all");
                         System.exit(1);
                    }
                    test(testIndex);
               }
          }

          @SuppressWarnings("unchecked")
          private static void test(int testIndex) {
               final NamedCache cache = CacheFactory.getCache("test-cache");
               final int totalObjects = 2000;
               final int totalTries = 40;
               if (testIndex >= 5 && testIndex <= 8) {
                    // Add index
                    cache.addIndex(new ReflectionExtractor("getKey"), false, null);
               }
               // Add listeners
               for (int i = 0; i < totalObjects; i++) {
                    final MapListener listener = new SimpleMapListener();
                    if (testIndex < 9) {
                         // Listen to data with a given filter
                         final Filter filter = new EqualsFilter("getKey", i);
                         cache.addMapListener(listener, new MapEventFilter(filter), false);
                    } else {
                         // Listen to data with a given key
                         cache.addMapListener(listener, new TestObjectSimple(i), false);
                    }
               }
               // Load data
               long time = System.currentTimeMillis();
               for (int iTry = 0; iTry < totalTries; iTry++) {
                    final long currentTime = System.currentTimeMillis();
                    final Map<Object, Object> buffer = new HashMap<Object, Object>(totalObjects);
                    for (int i = 0; i < totalObjects; i++) {
                         final Object obj;
                         if (testIndex == 1 || testIndex == 2 || testIndex == 5 || testIndex == 6) {
                              // Create data with key with time stamp
                              obj = new TestObjectComplete(i, currentTime);
                         } else {
                              // Create data with key without time stamp
                              obj = new TestObjectSimple(i);
                         }
                         if ((testIndex & 1) == 1) {
                              // Load data directly into the cache
                              cache.put(obj, obj);
                         } else {
                              // Load data into a buffer first
                              buffer.put(obj, obj);
                         }
                    }
                    if (!buffer.isEmpty()) {
                         cache.putAll(buffer);
                    }
               }
               time = System.currentTimeMillis() - time;
               System.out.println("Test " + testIndex + ": Total " + time + " millis, Avg " + (time / totalTries) + ", Total Tries " + totalTries + ", Cache Size " + cache.size());
               cache.destroy();
          }

          public static class SimpleMapListener implements MapListener {
               public void entryDeleted(MapEvent evt) {}
               public void entryInserted(MapEvent evt) {}
               public void entryUpdated(MapEvent evt) {}
          }

          public static class TestObjectComplete implements ExternalizableLite {
               private static final long serialVersionUID = -400722070328560360L;
               private int key;
               private long time;
               public TestObjectComplete() {}
               public TestObjectComplete(int key, long time) {
                    this.key = key;
                    this.time = time;
               }
               public int getKey() {
                    return key;
               }
               public void readExternal(DataInput in) throws IOException {
                    this.key = in.readInt();
                    this.time = in.readLong();
               }
               public void writeExternal(DataOutput out) throws IOException {
                    out.writeInt(key);
                    out.writeLong(time);
               }
          }

          public static class TestObjectSimple implements ExternalizableLite {
               private static final long serialVersionUID = 6154040491849669837L;
               private int key;
               public TestObjectSimple() {}
               public TestObjectSimple(int key) {
                    this.key = key;
               }
               public int getKey() {
                    return key;
               }
               public void readExternal(DataInput in) throws IOException {
                    this.key = in.readInt();
               }
               public void writeExternal(DataOutput out) throws IOException {
                    out.writeInt(key);
               }
               public int hashCode() {
                    return key;
               }
               public boolean equals(Object o) {
                    return o instanceof TestObjectSimple && key == ((TestObjectSimple) o).key;
               }
          }
     }
     Here is my Coherence config file
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>test-cache</cache-name>
                   <scheme-name>default-distributed</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>          
              <distributed-scheme>
                   <scheme-name>default-distributed</scheme-name>
                   <backing-map-scheme>
                        <class-scheme>
                             <scheme-ref>default-backing-map</scheme-ref>
                        </class-scheme>
                   </backing-map-scheme>
              </distributed-scheme>
              <class-scheme>
                   <scheme-name>default-backing-map</scheme-name>
                   <class-name>com.tangosol.util.SafeHashMap</class-name>
              </class-scheme>
         </caching-schemes>
     </cache-config>

     +Hi Robert,
     Indeed, only the Filter.evaluate(Object obj) method is invoked, but the object passed to it is a MapEvent. In fact, I do not need to implement EntryFilter to get a MapEvent; I could get the same result (in my last message) by writing cache.addMapListener(listener, filter, true) instead of cache.addMapListener(listener, new MapEventFilter(filter), true).+
     I believe that when the MapEventFilter delegates to your filter it always passes a value object to your filter (old or new), meaning a value will be deserialized.
     If you instead used your own filter, you could avoid deserializing the value, which usually is much larger, and go only to the key object. This would of course only be noticeable if you indeed used a much heavier cached value class.
     +"The hashCode() and equals() does not matter on the filter class" — I'm not so sure, since I noticed that these methods are implemented in the EqualsFilter class, that they are called at runtime, and that the performance results are better when you add them.+
     That interests me... In what circumstances did you see them invoked? On the storage node before sending an event, or upon registering a filtered listener?
     If the second, then I guess the listeners are stored in a hash-based map of collections keyed by filter, and indeed that might be relevant, as in that case it will cause fewer passes on the filter for multiple listeners with equal filters.
     +"DataOutput.writeInt(int) writes 4 bytes. ExternalizableHelper.writeInt(DataOutput, int) writes 1-5 bytes (or 1-6?), with numbers with small absolute values consuming fewer bytes. Similar differences exist for the long type as well, but your stamp attribute probably will be a large number..." — I tried it, but in my use case I got the same results. I guess it would become interesting if I serialized/deserialized many more objects.+
     +"Also, if Coherence serializes an ExternalizableLite object, it writes out its class name (except if it is a Coherence XmlBean). If you define your key as an XmlBean, and add your class into the classname cache configuration in ExternalizableHelper.xml, then instead of the classname, only an int will be written. This way you can spare a large percentage of the bandwidth consumed by transferring your key instance, as it has only a small number of attributes. For the value object, it might or might not be so relevant, considering that it will probably contain many more attributes. However, in case of a lite event, the value is not transferred at all." — I tried it too, and in my use case I noticed that we get objects nearly twice as light as an ExternalizableLite object, but it's slower to get them. It is very interesting to keep in mind, though, if we would like to reduce the network traffic.+
     Yes, these are minor differences at the moment.
     As for the performance of XmlBean, it is a hack, but you might try overriding the readExternal/writeExternal methods with your own usual ExternalizableLite implementation stuff. That way you get the advantages of the XmlBean classname cache and avoid its reflection-based operation, at the cost of having to extend XmlBean.
     Also, sooner or later the TCMP protocol and the distributed cache storages will also support using PortableObject as a transmission format, which enables using your own classname resolution and allows you to omit the classname from your objects. Unfortunately, I don't know when that will be implemented.
     +But finally, I guess that I found the best solution for my specific use case, which is to use a map listener for a key which has no time stamp; since the time stamp is never null, I just had to check the time stamp properly in the equals method.+
     I would still recommend using a separate key class, using a custom filter which accesses only the key and not the value, and, if possible, registering a lite listener instead of a heavy one. Try it with a much heavier cached value class, where the differences are more pronounced.
    Best regards,
    Robert
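    The key-only filter recommended above can be illustrated with a self-contained sketch. This is not the real Coherence EntryFilter API; the interface and classes below are stand-ins, meant only to show that a filter which inspects the entry's key never forces the (expensive) value to be materialized:

```java
import java.util.AbstractMap;
import java.util.Map;

// Self-contained analog (not real Coherence API) of a key-only filter:
// evaluate an entry by inspecting its key alone, so the (possibly expensive)
// value never has to be deserialized.
public class KeyOnlyFilterDemo {

    // Stand-in for a cached value whose deserialization is expensive.
    static final class LazyValue {
        boolean materialized = false;
        Object get() { materialized = true; return "heavy payload"; }
    }

    // A filter in the spirit of Coherence's EntryFilter: it receives the
    // whole entry but deliberately touches only the key.
    interface KeyOnlyFilter {
        boolean evaluateEntry(Map.Entry<Integer, LazyValue> entry);
    }

    public static void main(String[] args) {
        final int wantedTypeId = 7;
        KeyOnlyFilter filter = entry -> entry.getKey() == wantedTypeId;

        LazyValue value = new LazyValue();
        Map.Entry<Integer, LazyValue> entry =
                new AbstractMap.SimpleEntry<>(7, value);

        System.out.println(filter.evaluateEntry(entry)); // matches the key
        System.out.println(value.materialized);          // value never touched
    }
}
```

    In real Coherence code the same idea means matching on key attributes only, so the heavier value object stays serialized on the storage node.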

  • Huge VO Performance Difference (Repost with corrections)

    Performance question:
    I'm comparing the performance of two view object instance models; the structure is below:
    SalesActivityTView1
    --ActivityEntryViewLink1
    --ActivityEntryTView1
    --------ViewLink1
    --------ViewLink2
    --------ViewLink3
    --------ViewLink8
    SalesActivityTView2
    --ActivityEntryViewLink2
    --ActivityEntryTView2
    (no view links below ActivityEntryTView2)
    - both SalesActivityTView1 and SalesActivityTView2 are based on the same view object definition
    - both ActivityEntryView1 and ActivityEntryView2 are based on the same view object definition
    - both ActivityEntryViewLink1 and ActivityEntryViewLink2 are based on different view link xml definitions but the entries of both files are identical
    Using the Oracle Business Component Browser, I clicked on the ActivityEntryViewLinks (one at a time) to view both the SalesActivityTView and ActivityEntryTView screens in a master/detail form. I tried scrolling through the data using the navigation bar of SalesActivityTView and noticed a huge difference in performance.
    Scrolling through each SalesActivityTView1 entry took about 5-6 seconds (before the data refreshed), whereas with SalesActivityTView2 it took less than a second.
    Please note that I haven't opened the forms for the other view objects of the view links under ActivityEntryTView1 yet.
    Removing the eight view links under ActivityEntryTView1 is not an option for me, as I'm using the panel binding in my Swing-based application to automatically synchronize the data based on the selected ActivityEntryTView.
    I'm currently using JDeveloper 9i v9.0.3.2; the performance difference is almost the same whether I use it in three-tier or two-tier mode.
    Can anyone help me resolve this performance issue?
    -Neil

    The difference is due to view-link coordination: there are more view links in the first application module.
    Keep an eye on the OTN article "Performance tips for Swing-based BC4J applications". In particular, here is the part I have copied for you:
    Keep an Eye Out for Lazy Master/Detail Coordination Opportunities
    More sophisticated user interfaces might make use of Swing's tabs or card layouts to have a set of panels which are conditionally displayed (or displayed only when the user brings them to the foreground). While not automatic in the 9.0.3 release, BC4J does offer APIs like setMasterRowSetIterator and removeMasterRowSetIterator on any RowSet, which allow you to dynamically add and remove iterators from the list of ones that will cause that rowset to be actively coordinated by the framework. Using these APIs in a clever way (where possible, from within a server-side AM custom method, of course!) you can have your application automatically coordinate the detail queries for regions on the screen that the user can see, and suppress the active coordination for data that the user cannot currently see on the screen.
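    The lazy-coordination pattern described above can be sketched in plain Java. The subscribe/unsubscribe calls below are stand-ins for BC4J's setMasterRowSetIterator/removeMasterRowSetIterator, so treat this as an illustration of the pattern rather than the real API:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of lazy master/detail coordination: a detail is only
// refreshed while it is attached to the master, so hidden panels cost nothing.
public class LazyCoordinationSketch {

    // A "detail" that re-queries whenever an attached master row changes.
    static final class Detail {
        int refreshCount = 0;
        void refresh(int masterRow) { refreshCount++; }
    }

    static final class Master {
        private final List<Detail> coordinated = new ArrayList<>();
        void addDetail(Detail d)    { coordinated.add(d); }    // ~setMasterRowSetIterator
        void removeDetail(Detail d) { coordinated.remove(d); } // ~removeMasterRowSetIterator
        void navigateTo(int row) {
            for (Detail d : coordinated) d.refresh(row);       // active coordination
        }
    }

    public static void main(String[] args) {
        Master master = new Master();
        Detail hiddenTab = new Detail();

        // Tab is hidden: keep it detached, so scrolling the master is cheap.
        for (int row = 0; row < 5; row++) master.navigateTo(row);
        System.out.println(hiddenTab.refreshCount); // no detail queries ran

        // User brings the tab to the foreground: attach and sync once.
        master.addDetail(hiddenTab);
        master.navigateTo(5);
        System.out.println(hiddenTab.refreshCount);
    }
}
```

    Applied to the thread above, the eight view links under ActivityEntryTView1 would stay detached until their panels are actually shown.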

  • Huge performance difference for this?

    So I was optimizing a large piece of code and found that looping through a 2-dimensional array by row is a LOT faster than by column.
    If you look at the code below, the accessor for the row (double row[] = table[j];) is moved from the outer loop to the inner loop in the second test, because that version traverses the array columns-first. Since the inner loop is more expensive with this change, it should be slower. That's fine.
    But how much slower? Running it on a dual 2 GHz Pentium/Windows XP machine, the latter is 10x slower, consistently.
    Does anyone know why? I looked at the bytecode and it's not obvious why the difference is so big. Maybe the memory cache is kicking in because accessing row-wise has better locality? It's a pure guess, and it would be great if someone could offer an explanation (or, even better, suggestions to make it faster :)).
    Thanks a lot!
    public void aggrArrByRow( final int numRow, final int numCol )
    {
        final double table[][] = createArray( numRow, numCol );
        final double result[] = new double[ numCol ];
        Runnable r = new Runnable() {
            public void run() {
                for( int i=0; i<numCol; i++ ) result[i] = 0.0;
                for( int j=0; j<numRow; j++ )
                {
                    double row[] = table[j];
                    for( int i=0; i<numCol; i++ )
                        result[i] += row[i];
                }
            }
        };
        timeit( "aggrArrByRow of " + numRow + "x" + numCol + " table", r );
    }

    public void aggrArrByCol( final int numRow, final int numCol )
    {
        final double table[][] = createArray( numRow, numCol );
        final double result[] = new double[ numCol ];
        Runnable r = new Runnable() {
            public void run() {
                for( int i=0; i<numCol; i++ ) result[i] = 0.0;
                for( int i=0; i<numCol; i++ )
                    for( int j=0; j<numRow; j++ )
                    {
                        double row[] = table[j];
                        result[i] += row[i];
                    }
            }
        };
        timeit( "aggrArrByCol of " + numRow + "x" + numCol + " table", r );
    }

    public static long timeit( String label, Runnable r )
    {
        long start = System.currentTimeMillis();
        r.run();
        long end = System.currentTimeMillis();
        System.out.println( label + " took " + (end-start) + " ms" );
        return (end-start);
    }

    +But how much slower? Running it on a dual 2GHz Pentium/Windows XP, the latter is 10x slower - consistently. Does anyone know why? I looked at the byte code and it's not obvious why the difference is so big. Maybe the memory cache is kicking in here because accessing row-wise has better locality? It's a pure guess and it would be great if someone can offer an explanation (or even better suggestions to make it faster :)).+
    Yep, that's probably it. I have some experience with some BLAS libraries (ATLAS and MKL), and they typically perform matrix multiplication 8-10x faster than "naively" coded routines. Most of this advantage is gained in cache blocking. Perhaps surprisingly, relatively little of that advantage is from pipelining and other processor features (for large matrices, anyway).
    To make it faster, you could try dropping the use of Java jagged arrays and use C style arrays instead. Then you'll be guaranteed to have the entire matrix in the same area of memory, not just the rows. Or, if you have to iterate the elements multiple times, see if you can decompose the problem in such a way that you can perform multiple passes on smaller blocks of data.
    Good luck.
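    The "smaller blocks of data" suggestion above can be sketched as a blocked (tiled) traversal. For a one-pass column sum the plain row-major loop is already optimal, so the tiling only pays off when each block is revisited; the strip size below is an arbitrary illustration:

```java
// Sketch of blocked traversal: sum each column of a row-major table, walking
// the rows in strips so recently touched rows stay cache-resident.
// (The strip size of 64 rows is an arbitrary illustrative choice.)
public class BlockedColumnSum {

    static double[] columnSums(double[][] table, int blockRows) {
        final int numRow = table.length;
        final int numCol = table[0].length;
        final double[] result = new double[numCol];
        for (int j0 = 0; j0 < numRow; j0 += blockRows) {
            final int j1 = Math.min(j0 + blockRows, numRow);
            // Within a strip, still iterate row-major for locality.
            for (int j = j0; j < j1; j++) {
                final double[] row = table[j];
                for (int i = 0; i < numCol; i++) {
                    result[i] += row[i];
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        double[][] table = { {1, 2}, {3, 4}, {5, 6} };
        double[] sums = columnSums(table, 64);
        System.out.println(sums[0] + " " + sums[1]); // column totals
    }
}
```

    The same strip structure generalizes to multi-pass operations like matrix multiplication, where it is the main source of the BLAS speedups mentioned above.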

  • Is there a performance difference between Automation Plug-ins and the scripting system?

    We currently have a tool that, through the scripting system, merges and hides layers by layer group, exports them, and then moves to the next layer group. There is some custom logic and channel merging that occasionally occurs in the merging of an individual layer group. These operations occur through the scripting system (actually, through C# making direct function calls into Photoshop), and on some very large images they take ~30-40 minutes to complete.
    Is there a performance difference between doing the actions in this way as opposed to having these actions occur in an automation plug-in?
    Thanks,

    Thanks for the reply. I ended up just benchmarking the current implementation we are using (which goes through the DOM by all indications; I wasn't the original author of the code) and found that accessing each layer was taking upwards of 300 ms. I benchmarked iterating through the layers with PIUGetInfoByIndexIndex (in the Getter automation plug-in) and found that the first layer took ~300 ms, but the rest took ~1 ms each. With that information, I decided it was worthwhile to rewrite the functionality as an Automation plug-in.

  • Is the difference that significant?

    Is the difference that significant between the NEW 2.1 GHz 20" and the older 1.8 GHz 20" iMacs?
    Are the improvements in video card, bus speed, processor speed, and memory THAT worth it to get the newer machine at the higher price?
    I am going to use my machine for small graphics jobs, internet, and some gaming at home. Do you think I need to spring for the most up-to-date machine, or is a certified machine OK (I would add RAM)?
    P.S. Disk storage is not really an issue, as I have an external drive now. I just need a SuperDrive, I think... and I can ADD an iSight if I really need it.
    Opinions?
    chaz

    Chaz,
    The performance difference between a 1.8 GHz G5 and a 2.1 GHz G5 isn't going to be that great. The upgraded video card will provide a more noticeable improvement in gaming, if you are playing any games that have reached the limits of your 64 MB nVidia 5200 card. But if your current video processor is handling the games you play just fine at, say, 40 fps, then getting 52 fps out of the new card isn't going to give any noticeable difference.
    The key thing to look at is cost: what you will get for selling your old 20" 1.8 versus the actual price of the new 20" 2.1 iMac, after you add more RAM, sales tax, auction fees for your old iMac, etc. The cost difference is going to be several hundred dollars, maybe even along the lines of $600-800. So is it worth it for the small speed increase? If you have the cash to spend, then go for it.

  • Logic Pro X performance difference between iMac i7 3.5ghz (non-Retina) and the i7 4.0 ghz (Retina)?

    Hello - I'm on the verge of getting an iMac. It's for Logic Pro and some video editing with Premiere Pro (I don't *really* need the Retina screen, though). I'd just like to get an idea of the performance difference (especially in Logic Pro X) between the quad i7 3.5 GHz (non-Retina) and the quad i7 4.0 GHz (Retina).
    I use loads of plug-in instruments (including big Kontakt libraries) and effects with just a few audio tracks (I only ever record a couple of audio tracks at a time).
    Thanks!

    I owned the iMac and then returned it for the 2.6 GHz MacBook Pro. After using both, there is a noticeable speed difference between the two. Not a huge difference, but there is certainly more lag on the MacBook Pro. I found that a lot of the lag went away when I attached an external display, though. At the end of the day, I think you just need to decide whether you need the portability or not. At first I thought I would keep the iMac and get a cheap MacBook Pro to use on the road, but the thought of having multiple Lightroom catalogs sounds very annoying to me, plus all the transferring back and forth. I will also say that I've been getting a lot of freezes in Photoshop on my MacBook Pro, and weird errors like the sleep/wake failure some people have mentioned when connected to a USB device. I didn't have any of these problems with the iMac.

  • SQL Loader and Insert Into Performance Difference

    Hello All,
    I'm in a situation where I need to measure the performance difference between SQL*Loader and INSERT INTO. Say there are 10,000 records in a flat file and I want to load them into a staging table.
    I know that if I use PL/SQL UTL_FILE to do this job, performance will degrade (don't ask me why I'm going for UTL_FILE instead of SQL*Loader), but I don't know by how much. Can anybody tell me the performance difference in percentage terms (e.g., a 20% decrease) for 10,000 records?
    Thanks,
    Kannan.

    Kannan B wrote:
    Do not confuse the topic; as I said, I'm not going to use external tables. This post is about the performance difference between SQL*Loader and a simple INSERT statement.
    I don't think people are confusing the topic.
    External tables are a superior means of reading a file, as they don't require any command-line calls or external control files to be set up. All that is needed is a single external table definition, created in a similar way to any other table (just with the additional external-table information, obviously). They also eliminate the need for a 'staging' table on the database, since the data can be queried directly from the file as needed; and if the file changes, the data seen through the external table changes automatically, without re-running any SQL*Loader process.
    Who told you not to use External Tables? Do they know what they are talking about? Can they give a valid reason why external tables are not to be used?
    IMO, if you're considering SQL*Loader, you should be considering External tables as a better alternative.
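    The general reason a row-by-row UTL_FILE loop loses to a bulk path can be illustrated outside Oracle. The Python sketch below uses the built-in sqlite3 module (purely an illustration; the staging table and row counts are hypothetical, not from the thread) to compare one INSERT call per record against a single batched call over the same 10,000 rows:

```python
import sqlite3
import time

# 10,000 hypothetical flat-file records, as in the question.
rows = [(i, f"name-{i}") for i in range(10000)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staging (id INTEGER, name TEXT)")

# Row-by-row inserts (the UTL_FILE-style loop): one statement per record.
t0 = time.perf_counter()
for r in rows:
    con.execute("INSERT INTO staging VALUES (?, ?)", r)
row_by_row = time.perf_counter() - t0

con.execute("DELETE FROM staging")

# Batched path (roughly what SQL*Loader / external tables approximate):
# one call, statement prepared once and reused.
t0 = time.perf_counter()
con.executemany("INSERT INTO staging VALUES (?, ?)", rows)
bulk = time.perf_counter() - t0

print(f"row-by-row: {row_by_row:.3f}s  bulk: {bulk:.3f}s")
```

The exact percentage depends on the engine, commit frequency, and round-trip overhead, which is why the thread can't honestly give a single "20% slower" number.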

  • How do you make a new action that adds a signature?

    I want to make a new action that performs OCR and then adds a drag-and-droppable signature to the top of the document. I figured out how to do the OCR, but I don't see an option for adding a signature. How can this be done?
    I have Acrobat Pro X.

    There are two very important points:
    1. It's bundled up with other steps, so it's less work and easier to use than running each action individually.
    2. It's much easier for the intended users: someone unskilled in Acrobat who just wants to press one or two buttons and be done with it.

  • Is there any performance difference in the order of columns referencing index?

    I wish to find out whether there is any performance or efficiency difference in specifying the indexed columns first in the WHERE clause of SQL statements. That is, does the order of the columns referencing the index matter?
    E.g. id is the column that is indexed
    SELECT * FROM a where a.id='1' and a.name='John';
    SELECT * FROM a where a.name='John' and a.id='1';
    Is there any difference in efficiency between the two statements?
    Please advise. Thanks.

    There is no difference between the two statements under either the RBO or the CBO.
    sql>create table a as select * from all_objects;
    Table created.
    sql>create index a_index on a(object_id);
    Index created.
    sql>analyze table a compute statistics;
    Table analyzed.
    sql>select count(*)
      2    from a
      3   where object_id = 1
      4     and object_name = 'x';
    COUNT(*)
            0
    1 row selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=29)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'A' (Cost=1 Card=1 Bytes=29)
       3    2       INDEX (RANGE SCAN) OF 'A_INDEX' (NON-UNIQUE) (Cost=1 Card=1)
    sql>select count(*)
      2    from a
      3   where object_name = 'x'   
      4     and object_id = 1;
    COUNT(*)
            0
    1 row selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=29)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'A' (Cost=1 Card=1 Bytes=29)
       3    2       INDEX (RANGE SCAN) OF 'A_INDEX' (NON-UNIQUE) (Cost=1 Card=1)
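    The same order-independence can be demonstrated in other cost-based optimizers too. As a quick cross-check outside Oracle (an illustration using Python's built-in sqlite3, with the table and column names from the question reused as hypothetical examples), both predicate orders produce an identical query plan:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INTEGER, name TEXT)")
con.execute("CREATE INDEX a_index ON a(id)")

# Same predicates, opposite order in the WHERE clause.
plan1 = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM a WHERE id = 1 AND name = 'John'"
).fetchall()
plan2 = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM a WHERE name = 'John' AND id = 1"
).fetchall()

# The optimizer normalizes predicate order before choosing an access path,
# so both statements use the index the same way.
print(plan1 == plan2)
```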

  • AppleScript Performance Difference from Finder to System Events

    I've just made an interesting discovery. I have a folder that contains about 7,000 other folders. When I use the code "tell application "Finder" to set folder_list to folders in folder base_folder", it takes a very long time to respond, about 40 seconds. When I use "tell application "System Events" to set folder_list to every folder of folder base_folder", it seems to produce the same result in only about 2 seconds. If I add filtering criteria, like "whose name begins with search_name", the performance difference is even greater. Clearly I'll be using System Events from now on, but can anyone explain why there is such a large difference in performance? Is there anywhere I can find other performance tweaks like this?
    Note: I'm using system 10.6.5, but there is no Automator section in that forum.

    It seems you're panicking!
    First of all, run maintenance, look for system updates if there are any, and check for any stuck keys on your keyboard.
    Don't overuse Force Quit; it can corrupt application preferences. If your iTunes takes 3 hours to import your library, you can adjust some settings in the Energy Saver panel to protect the LCD.
    I think 3 hours to import a music library is not normal; can you post some more information about it?
    What kind of device are you copying from?
    When you set up your new iMac, did you import a user from your old Mac?
    Also take a look at the Spotlight corner: if there's a little dot inside the magnifying-glass icon, Spotlight is indexing your drive(s). This is normal on the system's first run, and it can slow Mac performance.

  • SQL performance difference in application and in SQL Server

    We are using Visual Studio 2010 and SQL Server 2012.
    We insert into the database using the stored procedure InsertData.
    When I timed in the program how long it takes to execute the stored procedure InsertData, it takes more than 10 ms per insert. But our SQL admin told us that it took 0 ms to do the insert on SQL Server itself, and SQL Profiler also shows 0 ms.
    What can make the program show more than 10 ms per insert when SQL Server itself reports 0 ms? In the program below, t1 is taken before .ExecuteNonQuery and t2 after. If t2 - t1 is more than 10 ms, I write the record to a log file, and I can see that almost all records took longer than 10 ms. Thank you.
    Private _command As SqlCommand
    Private _connection As SqlConnection

    _connection = New SqlConnection(_connectionString)
    _connection.Open()
    _command = New SqlCommand()
    With _command
        .Connection = _connection
        .CommandType = CommandType.StoredProcedure
        .CommandText = "InsertData"
        .Parameters.Add("@sDateTime", SqlDbType.VarChar)
        .Parameters.Add("@sContract", SqlDbType.VarChar)
        .Parameters.Add("@sData", SqlDbType.VarChar)
        .Parameters.Add("@iVol", SqlDbType.Int)
        .Parameters.Add("@fTrade", SqlDbType.Float)
    End With

    With _command
        .Parameters(0).Value = strongCastQuote.MessageDateTime
        .Parameters(1).Value = sContract
        .Parameters(2).Value = strongCastQuote.Data
        .Parameters(3).Value = strongCastQuote.Volume.ToString()
        .Parameters(4).Value = strongCastQuote.Price.ToString()
        t1 = Date.Now.ToString("dd-MMM-yyyy HH:mm:ss") & "." & Date.Now.ToString("fffffff")
        .ExecuteNonQuery()
        t2 = Date.Now.ToString("dd-MMM-yyyy HH:mm:ss") & "." & Date.Now.ToString("fffffff")
        If CDate(t2).Subtract(CDate(t1)).Milliseconds > 10 Then
            LogExtra.SendToLog(sContract & " 1.strongCastQuote.MessageDateTime = " & strongCastQuote.MessageDateTime & " t1 = " & t1 & " t2 = " & t2)
        End If
    End With
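    One plausible explanation (an assumption on my part, not confirmed in the thread) is that SQL Profiler measures only server-side execution, while the application's t1/t2 window also includes the network round trip and driver overhead. The hypothetical Python sketch below models that split; the 10 ms figure is invented to mirror the numbers in the question:

```python
import time

def server_insert():
    # What the server-side profiler times: the insert itself
    # (sub-millisecond here, simulated as a no-op).
    pass

def client_insert():
    # What the application stopwatch times: a simulated 10 ms network
    # round trip plus driver overhead, and then the insert itself.
    time.sleep(0.010)
    server_insert()

t0 = time.perf_counter()
server_insert()
server_ms = (time.perf_counter() - t0) * 1000

t0 = time.perf_counter()
client_insert()
client_ms = (time.perf_counter() - t0) * 1000

print(f"server-side: {server_ms:.2f} ms, client-side: {client_ms:.2f} ms")
```

On a real system, comparing the server's own execution statistics against the client stopwatch would separate the two components the same way; a large gap points at the round trip, not the insert.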

    Also please read this article
    http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-2.aspx   ---Minimal Logging changes in SQL Server 2008
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Graph axes assignment: performance difference between ATTR_ACTIVE_XAXIS and ATTR_PLOT_XAXIS

    Hi,
    I am using an XY graph with both x axes and both y axes. There are two possibilities when adding a new plot:
    1) PlotXY and SetPlotAttribute ( , , , ATTR_PLOT_XAXIS, );
    2) SetCtrlAttribute ( , , ATTR_ACTIVE_XAXIS, ) and PlotXY
    I tend to prefer the second method because I would assume it to be slightly faster, but what do the experts say?
    Thanks!  
    Solved!
    Go to Solution.

    Hi Wolfgang,
    thank you for your interesting question.
    First of all I want to say that, generally speaking, using the command "SetCtrlAttribute" is the best way to handle your elements. I would suggest using this command whenever possible.
    Now, to your question regarding the performance difference between "SetCtrlAttribute" and "SetPlotAttribute".
    I think the performance difference occurs because, in the background of the "SetPlotAttribute" command, another function called "ProcessDrawEvents" is executed. That function refreshes your plot again and again while it runs, whereas with "SetCtrlAttribute" the refresh is done once, after the function has finished. This might be a possible reason.
    For example, imagine a progress bar that shows the progress of installing a driver:
    "SetPlotAttribute" would show you the progress bar moving step by step until the driver installation is done.
    "SetCtrlAttribute" would just show you an empty bar at the start and a full progress bar when the installation is done.
    I think it works like that, but I can't tell you 100%; for that I would need to ask our developers.
    If you want, I can forward the question to them; this might take some time. I would also need to know which version of CVI you are using.
    Please let me know if you want me to forward your question.
    Have a nice day,
    Abduelkerim
    Sales
    NI Germany
