iMac 64-bit Performance Difference?

Is there a significant (or any) performance difference when running the iMac with the 64-bit kernel rather than the 32-bit kernel? Any reason why I would not want to run it with the 64-bit kernel? I have an early 2009 20" iMac with 4 GB of RAM and a 2.66 GHz Intel processor running Snow Leopard.
Thanks!

Running a computer in 64-bit mode will have no bearing on performance if the application software being run is 32-bit, as most current apps are. As time passes, more and more applications will be ported to 64-bit; for a timeline you would need to contact each developer and ask.
If you want to start your Mac with the 64-bit kernel, hold down the 6 and 4 keys during startup. To see which applications are running in 64-bit mode, open Activity Monitor; the Kind column shows whether each application is 64-bit.
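To illustrate the point that each application, not just the kernel, must be 64-bit to benefit, here is a minimal Java sketch showing how a single process can report its own bitness (the sun.arch.data.model property is HotSpot-specific; this is just an illustration, not an Apple tool):

```java
public class BitnessCheck {
    public static void main(String[] args) {
        // "64" on a 64-bit JVM, "32" on a 32-bit JVM; HotSpot-specific
        // property, so fall back to "unknown" on other JVMs.
        String model = System.getProperty("sun.arch.data.model", "unknown");
        System.out.println("This process is " + model + "-bit");
    }
}
```

A 32-bit process reports itself as 32-bit even when the 64-bit kernel is loaded, which is why a 64-bit kernel alone does not speed up 32-bit apps.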
Regards,
Roger

Similar Messages

  • Logic Pro X performance difference between the iMac i7 3.5 GHz (non-Retina) and the i7 4.0 GHz (Retina)?

    Hello - on the verge of getting an iMac - it's for Logic Pro and some video editing with Premiere Pro (I don't *really* need the Retina screen, though). I'd just like to get an idea of the performance difference (especially in Logic Pro X) between the quad i7 3.5 GHz (non-Retina) and the quad i7 4.0 GHz (Retina).
    I use loads of plugin instruments (incl. big Kontakt libraries) and effects with just a few audio tracks (I only ever record a couple of audio tracks at a time).
    Thanks!

    I owned the iMac and then returned it for the 2.6 MacBook Pro. After using both, there is a noticeable speed difference between the two. It is not a huge difference, but there is certainly more lag using the MacBook Pro. I found that much of the lag went away when I attached an external display, though. At the end of the day, I think you just need to decide whether you need the portability. At first I thought I would keep the iMac and then get a cheap MacBook Pro to use on the road, but the thought of having multiple Lightroom catalogs sounds very annoying to me, plus all the transferring back and forth. I will also say that I've been getting a lot of freezes in Photoshop on my MacBook Pro, and weird errors like the sleep/wake failure some people are talking about when connected to a USB device. I didn't have any of these problems with the iMac.

  • Moving from a 2.4GHz iMac to a 2.66GHz iMac... much difference?

    Hi Guys,
    I am selling my previous-gen iMac 2.4GHz and upgrading to the current 2.66GHz version. Might seem crazy, but the actual reason for the upgrade is so I can fit 8GB of RAM into it (once the new 4GB sticks of RAM are just a BIT cheaper!!!)...
    But I'm wondering, will I get better performance from this than from my current 2.4GHz machine? I'm sure it will be a little better, but will there be a big difference? The RAM is faster (DDR3) than in my current computer too; will this help with AU instruments like BFD?
    I can hopefully do this change for just a few hundred dollars (AU), so it seems to be a good time to go for it. Do you guys agree?
    If I don't switch now, but leave it a few years, my current iMac will be worth nothing and it will be too expensive to make a switch up to an 8GB RAM compatible machine...
    Thoughts?
    Cheers,
    Mike

    Hi There,
    Yeah, I thought that the performance difference might not be that huge.
    I am finding that BFD2 splutters when I am recording drums in real time when there are already a lot of other tracks playing. I assume that RAM would be the main cause of this problem, so having the option of getting 8GB further down the track would have to be a good thing... am I right?
    Any other advice or thoughts?
    A Mac Pro (alas) is out of budget for me...
    Cheers,
    Mike

  • Connect Leopard iMac to old Performa running os9c

    I was going to use my old 6116 (OS 9, Crescendo G3 card) as a file server. I cannot get my iMac (Leopard) to see the old box. Pinging the 6116 is no problem, but I keep getting an error -36 message when I try to connect. At the Performa end, sharing is all set up, and I've added my iMac to my users list. If I try to connect to the iMac from the Performa, about 3/4 of the time it won't take my password; when it does, I can see the shared folders of the iMac, but if I try to connect, it crashes the Chooser. I am just a wee bit perplexed. Anyone got any ideas?

    Hello Paul,
    I will, I really will try not to appear rude here.
    Why on earth are you trying this?
    Switch both machines on at the same time. The 6116 will go to the bathroom, examine its teeth, check its waistline, wonder what should have been done last year, think about having some breakfast and then begin work - maybe in an hour or so. The iMac will have learned to play the flute, solved a Rubik's Cube in 42 different ways and calculated Pi to 30 gazillion decimal places, all before you have had a chance to open the mail.
    " Anyone got any ideas? "
    Give the old girl a decent burial and buy another computer to use as a server!
    Regards
    Ian

  • AppleScript Performance Difference from Finder to System Events

    I've just made an interesting discovery. I have a folder that contains about 7000 other folders. When I use the code "tell application "Finder" to set folder_list to folders in folder base_folder" it takes a very long time to respond, about 40 seconds. When I use the code "tell application "System Events" to set folder_list to every folder of folder base_folder" it seems to produce the same result in only about 2 seconds. If I add filtering criteria, like "whose name begins with search_name", the performance difference is even greater. Clearly I'll be using System Events from now on, but can anyone explain why there is such a large difference in performance? Is there anywhere I can find other performance tweaks like this?
    Note: I'm using Mac OS X 10.6.5, but there is no Automator section in that forum.

    It seems you're panicking!
    First of all, run maintenance, look for system updates if any, and check for any stuck keys on your keyboard.
    Do not abuse Force Quit; it can destroy application preferences. If your iTunes takes 3 hours to import your library, you can set some options in the Energy Saver panel to protect the LCD.
    I think 3 hours to import a music library is not normal; can you post some more info about it?
    What kind of device are you copying from?
    When you set up your new iMac, did you import a user from your old Mac?
    Take a look also at the Spotlight corner: if there's a little dot inside the magnifying-glass icon, Spotlight is indexing your drive(s). This is normal on the system's first run, and it will slow the Mac's performance.

  • Is my early 2006 iMac 32-bit or 64-bit?

    I ask because Mactracker says that it's 32-bit, so does that mean that Snow Leopard would be a waste of time? I thought the whole point was that it's 64-bit, but if it runs in 32-bit mode only then I can't see the performance increase being anything to talk about.

    I get:
    "firmware-abi" = <EFI32>
    This just tells me that the EFI (firmware?) is 32-bit, but nothing about the Core Duo processor or other hardware capabilities.
    There is nothing under Hardware in System Profiler that identifies whether the hardware is 32-bit or 64-bit; it doesn't even tell me the CPU model number (in fact, there is no CPU information other than "Intel Core Duo", "1.84GHz", 2 cores and 2MB L2 cache). Mactracker says that this is a "Yonah" T2400 Intel Core Duo.
    After a bit of digging I came up with this on the Intel website:
    http://ark.intel.com/Product.aspx?id=27235
    It states that the T2400 CPU has a 32-bit instruction set, which I take as final proof that it is a 32-bit CPU and (as far as my understanding of computing goes) is not capable of running true 64-bit code in a 64-bit address space. Therefore all Snow Leopard code should run in 32-bit mode (I really don't know how 64-bit applications could run on 32-bit hardware, and I wouldn't believe it if someone said they can). I can only conclude that I should expect 32-bit performance; any performance increase over normal Leopard will come from more efficient code, not from anything to do with 64-bit.
    I think my best option would be to wait for a few updates. Perhaps by 10.6.5 they will have better 64-bit support, the iMacs will boot into the 64-bit kernel by default (I know it can be tweaked, but that's not the point), and I can expect to see much improved performance from the latest hardware and software.
    One of the things I do that takes a lot of CPU time is video crunching, whether it be converting or iMovie / iDVD. This process is what I am most interested in speeding up. I know there is no substitute for a faster CPU, but I'm hopeful that running in 64-bit mode will help with shifting such large amounts of data around the system.

  • Major performance difference - OS X vs. Windows (CC 2014)

    I've built an Action-based auto-painter, using the Art History Brush Tool plus the Tool Recording option within the Actions themselves. The development work was done on an iMac machine. Recently I've also been testing the very same Actions on a Windows PC.
    Even though the specs for the two machines are very similar - and both normally run Photoshop CC 2014 in a very comparable way speed-wise - the auto-painting done on the Windows machine is very slow. Completing the painting of a full 3000 by 2000 pixel image takes between 8 and 35 minutes using Windows. The same image can be painted in between one and a half and three minutes using the iMac. The averaged time-to-completion is approximately ten times faster for the iMac.
    The actual specifications for the machines and the software preferences set are:
    iMac
    3.4 GHz Intel i7 processor
    16 GB RAM
    conventional HDD 1 TB
    ATI Technologies AMD Radeon HD6970M 2048 MB
    OS X 10.9.4
    Photoshop CC 2014
    - set to 75% available RAM
    - set to 20 History states
    - set to 6 cache levels, tile size 1024 kB
    no other primary application open
    Windows PC
    3.3 GHz Intel i5 processor
    16 GB RAM
    SSD Intel 120 GB
    AMD Radeon HD 6950 1024 MB
    Windows 7 Pro (SP1)
    Photoshop CC 2014
    - set to 75% available RAM
    - set to 20 History states
    - set to 6 cache levels, tile size 1024 kB
    no other primary application open
    Does anyone have any suggestions on what might be causing the performance difference?

  • Is there a big performance difference between HDs?

    Hi All
      Is there a big performance difference between a 5400 and a 7200 RPM hard drive? I'm asking because I want to pick up a new iMac and am stuck between the higher-end 21.5" and the lower-end 27", and I'm trying to determine if the difference in price is worth the extra shekels.

    I was wanting to know the answer to the same question (5400 rpm vs. 7200 rpm) for a 15" standard MBP, deciding whether to get a 1TB Serial ATA drive @ 5400 rpm or a 750GB Serial ATA drive @ 7200 rpm. (Sorry to jump in.)
    For the most part, I'm a general user - web surfing/research, word processing/Excel/PowerPoint (pretty basic), etc. BUT I do like to take a lot of photos and plan on doing some editing on them (nothing advanced) and creating slideshows with my photos/music (e.g. my Europe trip photos, or a slideshow of the grandchildren as a gift to my parents).
    Some folks mention "video editing" in reference to going with the faster speed (7200 rpm) if that's what you plan on doing. But what exactly do they mean by "video editing"? Is slideshow creation the same?
    Just wondering, for my needs, whether I should go with the 750GB Serial ATA drive @ 7200 rpm or the 1TB Serial ATA drive @ 5400 rpm ($50 more, yet with more storage space, which would help with my growing photo files every year).
    Thanks

  • Mac recognizes more RAM but no performance difference?

    I purchased an extra 2 GB of RAM for my MacBook Pro Intel Core 2 Duo laptop, for a total of 3 GB. I bought the RAM at a PC store rather than from Apple, but it is built to the exact specifications of the memory bought from Apple.
    I successfully installed the memory myself, and when I booted up the Mac and looked under 'About This Mac' it showed 3 GB of RAM, so hip hip hooray, I did everything OK.
    However, there was no performance difference at all. Opening music files, video and even PDF files was no faster than when I had just 1 GB of RAM.
    I ran a memory test, and an additional memory test with TechTool Pro, and everything seemed to be working fine.
    Does anyone know why I cannot see any performance difference? Actually, to be exact, it feels as though the speed has improved as if I'd added about 512 MB, but nowhere near 3 GB.
    I then took my 1GB RAM stick out and inserted just the 2GB RAM stick (which is the newly purchased one), and no performance difference occurred.
    Even when I talked to customer support where I bought the RAM, the guy said that this is quite odd and that the Mac should perform much faster.
    Any suggestions or solutions?
    Thank you.
    Dorian

    "Opening up music files, to video and even PDF files loaded up no faster than when I had just 1 GB of ram."
    The first time you open them, it's limited by the speed of reading from your hard drive, so the extra RAM isn't making any difference.
    "Actually to be exact, it feels as though the speed has improved about 512 MB but no where near 3GB."
    RAM won't magically make your computer run faster. Adding more RAM simply stops the computer from slowing down by going to the hard drive, and only if it actually uses that much memory.
    It just sounds like the typical way in which you use your computer requires a bit more than 512 MB but less than 1 GB, and this is really quite normal.

  • What is the performance difference between an Oct 2011 i5 and a June 2012 i7 if they both have 16 GB RAM?

    What is the performance difference between an Oct 2011 i5 and a June 2012 i7 if they both have 16 GB RAM?

    At least. The Core i7 can drive up to 8 concurrent 64-bit threads, more than an i5. Plus, the 2011 models used the Sandy Bridge family of Intel chips, whereas the 2012 models use the newer Ivy Bridge, a faster, lower-latency architecture.
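A quick way to see how many concurrent hardware threads a given machine exposes is to ask the runtime. A minimal Java sketch (the count reflects logical processors, so a hyper-threaded quad-core i7 reports 8):

```java
public class CoreCount {
    public static void main(String[] args) {
        // Logical processors visible to the JVM:
        // physical cores multiplied by hyper-threads per core.
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical processors: " + logical);
    }
}
```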

  • Huge performance differences between a map listener for a key and a map listener for a filter

    Hi all,
    I wanted to test the different kinds of map listener available in Coherence 3.3.1, as I would like to use one as an event bus. The result was that I found huge performance differences between them. In my use case the data are time-stamped, so the full key of a datum consists of the key which identifies its type plus the time stamp. Unfortunately, when I add my map listener to the cache I only know the type id, not the time stamp, so I cannot add a listener for a key, only for a filter which tests the value of the type id. When I launch my test I get terrible performance results. I then tried a listener for a key, which gave me much better results, but in my case I cannot use it.
    Here are my results on a 2.13 GHz dual-core machine:
    1) Map Listener for a Filter
    a) No Index
    Create (data always added, the key is composed by the type id and the time stamp)
    Cache.put
    Test 1: Total 42094 millis, Avg 1052, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 2: Total 43860 millis, Avg 1096, Total Tries 40, Cache Size 80000
    Update (data added then updated, the key is only composed by the type id)
    Cache.put
    Test 3: Total 56390 millis, Avg 1409, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 4: Total 51734 millis, Avg 1293, Total Tries 40, Cache Size 2000
    b) With Index
    Cache.put
    Test 5: Total 39594 millis, Avg 989, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 6: Total 43313 millis, Avg 1082, Total Tries 40, Cache Size 80000
    Update
    Cache.put
    Test 7: Total 55390 millis, Avg 1384, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 8: Total 51328 millis, Avg 1283, Total Tries 40, Cache Size 2000
    2) Map Listener for a Key
    Update
    Cache.put
    Test 9: Total 3937 millis, Avg 98, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 10: Total 1078 millis, Avg 26, Total Tries 40, Cache Size 2000
    Please help me to find what is wrong with my code because for now it is unusable.
    Best Regards,
    Nicolas
    Here is my code
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import com.tangosol.io.ExternalizableLite;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.EqualsFilter;
    import com.tangosol.util.filter.MapEventFilter;
     public class TestFilter {

          /**
           * To run a specific test, just launch the program with one parameter which
           * is the test index.
           */
          public static void main(String[] args) {
               if (args.length != 1) {
                    System.out.println("Usage : java TestFilter 1-10|all");
                    System.exit(1);
               }
               final String arg = args[0];
               if (arg.endsWith("all")) {
                    for (int i = 1; i <= 10; i++) {
                         test(i);
                    }
               } else {
                    final int testIndex = Integer.parseInt(args[0]);
                    if (testIndex < 1 || testIndex > 10) {
                         System.out.println("Usage : java TestFilter 1-10|all");
                         System.exit(1);
                    }
                    test(testIndex);
               }
          }

          @SuppressWarnings("unchecked")
          private static void test(int testIndex) {
               final NamedCache cache = CacheFactory.getCache("test-cache");
               final int totalObjects = 2000;
               final int totalTries = 40;
               if (testIndex >= 5 && testIndex <= 8) {
                    // Add index
                    cache.addIndex(new ReflectionExtractor("getKey"), false, null);
               }
               // Add listeners
               for (int i = 0; i < totalObjects; i++) {
                    final MapListener listener = new SimpleMapListener();
                    if (testIndex < 9) {
                         // Listen to data with a given filter
                         final Filter filter = new EqualsFilter("getKey", i);
                         cache.addMapListener(listener, new MapEventFilter(filter), false);
                    } else {
                         // Listen to data with a given key
                         cache.addMapListener(listener, new TestObjectSimple(i), false);
                    }
               }
               // Load data
               long time = System.currentTimeMillis();
               for (int iTry = 0; iTry < totalTries; iTry++) {
                    final long currentTime = System.currentTimeMillis();
                    final Map<Object, Object> buffer = new HashMap<Object, Object>(totalObjects);
                    for (int i = 0; i < totalObjects; i++) {
                         final Object obj;
                         if (testIndex == 1 || testIndex == 2 || testIndex == 5 || testIndex == 6) {
                              // Create data with a key containing the time stamp
                              obj = new TestObjectComplete(i, currentTime);
                         } else {
                              // Create data with a key without the time stamp
                              obj = new TestObjectSimple(i);
                         }
                         if ((testIndex & 1) == 1) {
                              // Load data directly into the cache
                              cache.put(obj, obj);
                         } else {
                              // Load data into a buffer first
                              buffer.put(obj, obj);
                         }
                    }
                    if (!buffer.isEmpty()) {
                         cache.putAll(buffer);
                    }
               }
               time = System.currentTimeMillis() - time;
               System.out.println("Test " + testIndex + ": Total " + time + " millis, Avg "
                         + (time / totalTries) + ", Total Tries " + totalTries
                         + ", Cache Size " + cache.size());
               cache.destroy();
          }

          public static class SimpleMapListener implements MapListener {
               public void entryDeleted(MapEvent evt) {}
               public void entryInserted(MapEvent evt) {}
               public void entryUpdated(MapEvent evt) {}
          }

          public static class TestObjectComplete implements ExternalizableLite {
               private static final long serialVersionUID = -400722070328560360L;
               private int key;
               private long time;
               public TestObjectComplete() {}
               public TestObjectComplete(int key, long time) {
                    this.key = key;
                    this.time = time;
               }
               public int getKey() {
                    return key;
               }
               public void readExternal(DataInput in) throws IOException {
                    this.key = in.readInt();
                    this.time = in.readLong();
               }
               public void writeExternal(DataOutput out) throws IOException {
                    out.writeInt(key);
                    out.writeLong(time);
               }
          }

          public static class TestObjectSimple implements ExternalizableLite {
               private static final long serialVersionUID = 6154040491849669837L;
               private int key;
               public TestObjectSimple() {}
               public TestObjectSimple(int key) {
                    this.key = key;
               }
               public int getKey() {
                    return key;
               }
               public void readExternal(DataInput in) throws IOException {
                    this.key = in.readInt();
               }
               public void writeExternal(DataOutput out) throws IOException {
                    out.writeInt(key);
               }
               public int hashCode() {
                    return key;
               }
               public boolean equals(Object o) {
                    return o instanceof TestObjectSimple && key == ((TestObjectSimple) o).key;
               }
          }
     }
     Here is my Coherence config file
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>test-cache</cache-name>
                   <scheme-name>default-distributed</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>          
              <distributed-scheme>
                   <scheme-name>default-distributed</scheme-name>
                   <backing-map-scheme>
                        <class-scheme>
                             <scheme-ref>default-backing-map</scheme-ref>
                        </class-scheme>
                   </backing-map-scheme>
              </distributed-scheme>
              <class-scheme>
                   <scheme-name>default-backing-map</scheme-name>
                   <class-name>com.tangosol.util.SafeHashMap</class-name>
              </class-scheme>
         </caching-schemes>
     </cache-config>

     Hi Robert,
     "Indeed, only the Filter.evaluate(Object obj) method is invoked, but the object passed to it is a MapEvent." In fact, I do not need to implement EntryFilter to get a MapEvent; I could get the same result (in my last message) by writing cache.addMapListener(listener, filter, true) instead of cache.addMapListener(listener, new MapEventFilter(filter), true).
     I believe that when the MapEventFilter delegates to your filter it always passes a value object to your filter (old or new), meaning a value will be deserialized.
     If you instead used your own filter, you could avoid deserializing the value, which is usually much larger, and touch only the key object. This would of course only be noticeable if you indeed used a much heavier cached value class.
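The key-only idea above can be sketched in plain Java. Note this is a simplified, self-contained stand-in: the KeyAwareFilter interface below is hypothetical, standing in for Coherence's com.tangosol.util.filter.EntryFilter, whose evaluateEntry method receives a Map.Entry:

```java
import java.util.AbstractMap;
import java.util.Map;

// Hypothetical stand-in for Coherence's EntryFilter contract.
interface KeyAwareFilter {
    boolean evaluateEntry(Map.Entry<?, ?> entry);
}

// A filter that matches on the key alone, so the (usually much larger)
// cached value never has to be deserialized.
public class KeyOnlyFilter implements KeyAwareFilter {
    private final int wantedKey;

    public KeyOnlyFilter(int wantedKey) {
        this.wantedKey = wantedKey;
    }

    public boolean evaluateEntry(Map.Entry<?, ?> entry) {
        // Only entry.getKey() is touched; entry.getValue() is never called.
        Object key = entry.getKey();
        return key instanceof Integer && ((Integer) key).intValue() == wantedKey;
    }

    public static void main(String[] args) {
        KeyOnlyFilter f = new KeyOnlyFilter(42);
        System.out.println(f.evaluateEntry(
                new AbstractMap.SimpleEntry<Object, Object>(42, "heavy value")));
        System.out.println(f.evaluateEntry(
                new AbstractMap.SimpleEntry<Object, Object>(7, "heavy value")));
    }
}
```

The saving comes from never calling getValue(): in a real cache, that call is what triggers deserialization of the stored value.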
     "The hashCode() and equals() does not matter on the filter class." I'm not so sure, since I noticed that these methods are implemented in the EqualsFilter class, that they are called at runtime, and that the performance results are better when you add them.
     That interests me... In what circumstances did you see them invoked? On the storage node before sending an event, or upon registering a filtered listener?
     If the second, then I guess the listeners are stored in a hash-based map of collections keyed by a filter, and indeed that might be relevant, as in that case it will cause fewer passes on the filter for multiple listeners with an equal filter.
     "DataOutput.writeInt(int) writes 4 bytes. ExternalizableHelper.writeInt(DataOutput, int) writes 1-5 bytes (or 1-6?), with numbers with small absolute values consuming fewer bytes. Similar differences exist for the long type as well, but your stamp attribute will probably be a large number..." I tried it, but in my use case I got the same results. I guess it would become interesting if I serialized/deserialized many more objects.
     "Also, if Coherence serializes an ExternalizableLite object, it writes out its class name (except if it is a Coherence XmlBean). If you define your key as an XmlBean, and add your class into the classname cache configuration in ExternalizableHelper.xml, then only an int will be written instead of the classname. This way you can spare a large percentage of the bandwidth consumed by transferring your key instance, as it has only a small number of attributes. For the value object it might or might not be so relevant, considering that it will probably contain many more attributes. However, in the case of a lite event, the value is not transferred at all." I tried it too, and in my use case I noticed that we get objects nearly twice as light as an ExternalizableLite object, but it's slower to get them. It is very interesting to keep in mind, though, if we would like to reduce the network traffic.
     Yes, these are minor differences at the moment.
     As for the performance of XmlBean, it is a hack, but you might try overriding the readExternal/writeExternal methods with your own usual ExternalizableLite implementation. That way you get the advantage of the XmlBean classname cache and avoid its reflection-based operation, at the cost of having to extend XmlBean.
     Also, sooner or later the TCMP protocol and the distributed cache storages will support using PortableObject as a transmission format, which enables using your own classname resolution and allows you to omit the classname from your objects. Unfortunately, I don't know when it will be implemented.
     "But finally, I guess that I found the best solution for my specific use case, which is to use a map listener for a key which has no time stamp; since the time stamp is never null, I just had to check the time stamp properly in the equals method."
     I would still recommend using a separate key class, using a custom filter which accesses only the key and not the value, and, if possible, registering a lite listener instead of a heavy one. Try it with a much heavier cached value class, where the differences are more pronounced.
    Best regards,
    Robert
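Nicolas's workaround can be sketched in plain Java. The class below is illustrative only (the name EventKey is hypothetical, and a real Coherence key would also implement ExternalizableLite): equals() and hashCode() deliberately ignore the time stamp, so a listener registered with a stamp-less key also matches stamped keys.

```java
// Illustrative key class: the stamp is carried along but excluded from
// equality, so new EventKey(5, 0L) equals new EventKey(5, anyStamp).
public class EventKey {
    private final int typeId;
    private final long stamp; // informational only, excluded from equals/hashCode

    public EventKey(int typeId, long stamp) {
        this.typeId = typeId;
        this.stamp = stamp;
    }

    public long getStamp() {
        return stamp;
    }

    @Override
    public int hashCode() {
        return typeId; // must stay consistent with equals(): stamp excluded
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof EventKey && typeId == ((EventKey) o).typeId;
    }

    public static void main(String[] args) {
        EventKey listenerKey = new EventKey(5, 0L);
        System.out.println(listenerKey.equals(new EventKey(5, 1234567890L)));
        System.out.println(listenerKey.equals(new EventKey(6, 0L)));
    }
}
```

The trade-off is that keys differing only in their stamp collide in any hash-based map; that collision is exactly what makes the key-based listener fire, but it also means such keys overwrite each other when used as cache keys.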

  • SQL*Loader and Insert Into Performance Difference

    Hello All,
    I'm in a situation where I need to measure the performance difference between SQL*Loader and INSERT INTO. Say there are 10,000 records in a flat file and I want to load them into a staging table.
    I know that if I use PL/SQL UTL_FILE to do this job, performance will degrade (don't ask me why I'm going for UTL_FILE instead of SQL*Loader). But I don't know by how much. Can anybody tell me the performance difference in % (like a 20% decrease) in the case of 10,000 records?
    Thanks,
    Kannan.

    Kannan B wrote:
    "Do not confuse the topic. As I told you, I'm not going to use external tables. This post is to discuss the performance difference between SQL*Loader and a simple insert statement."
    I don't think people are confusing the topic.
    External tables are a superior means of reading a file as it doesn't require any command line calls or external control files to be set up. All that is needed is a single external table definition created in a similar way to creating any other table (just with the additional external table information obviously). It also eliminates the need to have a 'staging' table on the database to load the data into as the data can just be queried as needed directly from the file, and if the file changes, so does the data seen through the external table automatically without the need to re-run any SQL*Loader process again.
    Who told you not to use External Tables? Do they know what they are talking about? Can they give a valid reason why external tables are not to be used?
    IMO, if you're considering SQL*Loader, you should be considering External tables as a better alternative.

  • Is there any performance difference in the order of columns referencing index?

    I wish to find out if there is any performance or efficiency difference in specifying the columns referencing index(es) first in the WHERE clause of SQL statements. That is, does the order of the columns referencing the index matter?
    E.g. id is the column that is indexed
    SELECT * FROM a where a.id='1' and a.name='John';
    SELECT * FROM a where a.name='John' and a.id='1';
    Are there any differences in terms of efficiency between the two statements?
    Please advise. Thanks.

    There is no difference between the two statements under either the RBO or the CBO.
    sql>create table a as select * from all_objects;
    Table created.
    sql>create index a_index on a(object_id);
    Index created.
    sql>analyze table a compute statistics;
    Table analyzed.
    sql>select count(*)
      2    from a
      3   where object_id = 1
      4     and object_name = 'x';
    COUNT(*)
            0
    1 row selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=29)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'A' (Cost=1 Card=1 Bytes=29)
       3    2       INDEX (RANGE SCAN) OF 'A_INDEX' (NON-UNIQUE) (Cost=1 Card=1)
    sql>select count(*)
      2    from a
      3   where object_name = 'x'   
      4     and object_id = 1;
    COUNT(*)
    ----------
             0
    1 row selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=29)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'A' (Cost=1 Card=1 Bytes=29)
       3    2       INDEX (RANGE SCAN) OF 'A_INDEX' (NON-UNIQUE) (Cost=1 Card=1)
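    You can confirm the same thing on your own system by generating the plans directly (a sketch, reusing the table and index from the example above):

    ```sql
    -- Plan for the "indexed column first" predicate order
    EXPLAIN PLAN FOR
      SELECT COUNT(*) FROM a WHERE object_id = 1 AND object_name = 'x';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- Plan for the reversed predicate order
    EXPLAIN PLAN FOR
      SELECT COUNT(*) FROM a WHERE object_name = 'x' AND object_id = 1;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ```

    Both should show the identical RANGE SCAN on A_INDEX, regardless of predicate order.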

  • Graph axes assignment: performance difference between ATTR_ACTIVE_XAXIS and ATTR_PLOT_XAXIS

    Hi,
    I am using a xy graph with both x axes and both y axes. There are two possibilities when adding a new plot:
    1) PlotXY and SetPlotAttribute ( , , , ATTR_PLOT_XAXIS, );
    2) SetCtrlAttribute ( , , ATTR_ACTIVE_XAXIS, ) and PlotXY
    I tend to prefer the second method because I would assume it to be slightly faster, but what do the experts say?
    Thanks!  

    Hi Wolfgang,
    thank you for your interesting question.
    First of all I want to say that, generally speaking, using the command "SetCtrlAttribute" is the best way to work with your controls. I would suggest using this command whenever possible.
    Now, to your question regarding the performance difference between "SetCtrlAttribute" and "SetPlotAttribute".
    I think the performance difference occurs because, in the background of the "SetPlotAttribute" command, another function called "ProcessDrawEvents" is executed. It refreshes your plot again and again within the function, whereas with "SetCtrlAttribute" the refresh is done once, after the function has finished. This might be a possible reason.
    For example, imagine a progress bar that shows the progress of installing a driver:
    "SetPlotAttribute" would show you the progress bar moving step by step until installing the driver is done.
    "SetCtrlAttribute" would just show you an empty bar at the start and a full progress bar when the installation is done.
    I think it works like that, but I can't tell you 100%; for that I would need to ask our developers.
    If you want, I can forward the question to them; this might take some time. I would also need to know which version of CVI you are using.
    Please let me know if you want me to forward your question.
    Have a nice day,
    Abduelkerim
    Sales
    NI Germany

  • SQL Server 2008R2 vs 2012 OLTP performance difference - log flushes size different

    Hi all,
    I'm doing some performance tests against two identical virtual machines (each VM has the same virtual resources and uses the same physical hardware).
    The first VM has Windows Server 2008R2 and SQL Server 2008R2 Standard Edition;
    the second VM has Windows Server 2012R2 and SQL Server 2012 SP2 + CU1 Standard Edition.
    I'm using HammerDB (http://hammerora.sourceforge.net/) as a benchmark tool to simulate a TPC-C test.
    I've noticed a significant performance difference between SQL2008R2 and SQL2012: 2008R2 performs better. Let me explain what I've found:
    I use a third VM as a client where the HammerDB software is installed, and I run the test against the two SQL Servers (one server at a time); on SQL2008R2 I reach a higher number of transactions per minute.
    HammerDB creates a database on each database server (so the databases are identical except for the compatibility level), and then executes a sequence of queries (insert-update) simulating the TPC-C standard; the sequence is identical on both servers.
    Using perfmon on the two servers I've found a very interesting thing:
    On the disk used by the HammerDB database's log (I use separate disks for data and log) I've monitored Avg. Disk Bytes/Write and noticed that SQL2012 writes to the log in smaller packets (say an average of 3k against an average of 5k written by SQL2008R2).
    I've also checked the value of Log Flushes/sec on both servers and noticed that SQL2012 does, on average, more log flushes per second, so more log flushes of fewer bytes each...
    I've searched for any documented difference in the way log buffers are flushed to disk between 2008R2 and 2012 but found nothing.
    Can anyone point me in the right direction?

    Andrea,
    1) first of all fn_db_log exposes a lot of fields that do not exist in SQL2008R2
    This is correct, though I can't elaborate as I do not know how/why the changes were made.
    2) for the same DML or DDL the number of log record generated are different
    I thought as much (but didn't know the workload).
    I would like to read and study what these changes are! Do you have some useful links to internals docs?
    Unfortunately I cannot offer anything as the function used is currently undocumented and there are no published papers or documentation by MS on reading log records/why/how. I would assume this to all be NDA information by Microsoft.
    Sorry I can't be of more help, but you at least know that the different versions do have behavior changes.
    Sean Gallardy | Blog | Microsoft Certified Master
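    One way to quantify the flush-size difference Andrea observed, without perfmon, is to sample the log-flush counters SQL Server itself exposes (a sketch; the database name 'tpcc' is a placeholder for the HammerDB database):

    ```sql
    -- Log-flush counters from SQL Server's own DMV; dividing bytes by count
    -- gives an estimate of the average flush size for the database.
    SELECT
        instance_name AS database_name,
        MAX(CASE WHEN counter_name = 'Log Flushes/sec'
                 THEN cntr_value END) AS log_flushes,
        MAX(CASE WHEN counter_name = 'Log Bytes Flushed/sec'
                 THEN cntr_value END) AS log_bytes_flushed
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%:Databases%'
      AND counter_name IN ('Log Flushes/sec', 'Log Bytes Flushed/sec')
      AND instance_name = 'tpcc'   -- placeholder database name
    GROUP BY instance_name;
    ```

    Note that despite the "/sec" in the names, these counters are cumulative since instance startup, so sample twice during the benchmark and take the delta to get per-interval figures.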
