Huge VO Performance Difference

Performance question
I'm comparing the performance of two view object instance models. The structure is as follows:
SalesActivityTView1
--ActivityEntryViewLink1
--ActivityEntryTView1
--------ViewLink1
--------ViewLink2
--------ViewLink3
--------ViewLink8
SalesActivityTView2
--ActivityEntryViewLink2
--ActivityEntryTView2
(no view links below ActivityEntryTView2)
- both SalesActivityTView1 and SalesActivityTView2 are based on the same view object definition
- both ActivityEntryTView1 and ActivityEntryTView2 are based on the same view object definition
- ActivityEntryViewLink1 and ActivityEntryViewLink2 are based on different view link XML definitions, but the entries of both files are identical
Using the Oracle Business Component Browser, I clicked on ActivityEntryViewLinks (one at a time) to view both the SalesActivityTView and ActivityEntryTView screens in a master and detail form. I tried scrolling through the data using the navigation bar of SalesActivityTView and noticed a huge difference in performance.
Scrolling through each SalesActivityTView1 entry took about 5-6 seconds (before the data were refreshed), whereas with SalesActivityTView2 it took less than a second.
Please note that I haven't yet opened the forms for the other view objects of the view links under ActivityEntryTView1.
Removing the eight view links under ActivityEntryTView1 is not an option for me, as I'm using the panel binding in my Swing-based application to automatically synchronize the data based on the selected ActivityEntryTView.
I'm currently using JDeveloper 9i v9.0.3.2; the performance difference is about the same whether I run in three-tier or two-tier mode.
Can anyone help me resolve this performance issue?

The difference is due to view-link coordination: there are more view links in the first application module.
See the OTN article "Performance Tips for Swing-Based BC4J Applications", in particular this part, which I have copied for you:
Keep an Eye Out for Lazy Master/Detail Coordination Opportunities
More sophisticated user interfaces might make use of Swing's tabs or card layouts to have a set of panels which are conditionally displayed (or displayed only when the user brings them to the foreground). While not automatic in the 9.0.3 release, BC4J does offer APIs like setMasterRowSetIterator and removeMasterRowSetIterator on any RowSet which allow you to dynamically add and remove iterators from the list of ones that will cause that rowset to be actively coordinated by the framework. Using these APIs in a clever way (where possible, from within a server-side AM custom method, of course!) you can have your application automatically coordinate the detail queries for regions on the screen that the user can see, and suppress the active coordination for data that the user cannot currently see on the screen.
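A minimal sketch of the pattern described above, assuming the oracle.jbo interfaces from the 9.0.3 release. The view-instance and method names below are illustrative, not taken from the original post, and this will not compile without the BC4J libraries:

```java
// Sketch only: assumes the oracle.jbo API shipped with JDeveloper 9.0.3.
import oracle.jbo.ApplicationModule;
import oracle.jbo.RowSetIterator;
import oracle.jbo.ViewObject;

public class LazyDetailCoordination {

    // Intended to be called from a server-side AM custom method whenever a
    // detail region is shown or hidden, so that only the visible details
    // stay actively coordinated with their master.
    public static void setDetailActive(ApplicationModule am,
                                       String detailViewInstance,
                                       RowSetIterator masterIter,
                                       boolean visible) {
        ViewObject detail = am.findViewObject(detailViewInstance);
        if (visible) {
            // Resume master/detail coordination and refresh for the current master row.
            detail.setMasterRowSetIterator(masterIter);
            detail.executeQuery();
        } else {
            // Suppress coordination while the region is hidden.
            detail.removeMasterRowSetIterator(masterIter);
        }
    }
}
```

The idea is to invoke such a method as the user switches tabs or cards, so the eight detail view links are only coordinated while their panels are actually visible.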

Similar Messages

  • Huge performance differences between a map listener for a key and filter

    Hi all,
    I wanted to test the different kinds of map listener available in Coherence 3.3.1, as I would like to use it as an event bus. The result was that I found huge performance differences between them. In my use case the data are time-stamped, so the full key of a datum is the id which identifies its type plus the time stamp. Unfortunately, when I add my map listener to the cache I only know the type id, not the time stamp, so I cannot add a listener for a key, only for a filter which tests the value of the type id. When I launch my test I get terrible performance results; I then tried a listener for a key, which gave me much better results, but in my case I cannot use it.
    Here are my results on a 2.13 GHz dual-core machine:
    1) Map Listener for a Filter
    a) No Index
    Create (data always added, the key is composed by the type id and the time stamp)
    Cache.put
    Test 1: Total 42094 millis, Avg 1052, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 2: Total 43860 millis, Avg 1096, Total Tries 40, Cache Size 80000
    Update (data added then updated, the key is only composed by the type id)
    Cache.put
    Test 3: Total 56390 millis, Avg 1409, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 4: Total 51734 millis, Avg 1293, Total Tries 40, Cache Size 2000
    b) With Index
    Cache.put
    Test 5: Total 39594 millis, Avg 989, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 6: Total 43313 millis, Avg 1082, Total Tries 40, Cache Size 80000
    Update
    Cache.put
    Test 7: Total 55390 millis, Avg 1384, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 8: Total 51328 millis, Avg 1283, Total Tries 40, Cache Size 2000
    2) Map Listener for a Key
    Update
    Cache.put
    Test 9: Total 3937 millis, Avg 98, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 10: Total 1078 millis, Avg 26, Total Tries 40, Cache Size 2000
    Please help me to find what is wrong with my code because for now it is unusable.
    Best Regards,
    Nicolas
    Here is my code
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import com.tangosol.io.ExternalizableLite;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.EqualsFilter;
    import com.tangosol.util.filter.MapEventFilter;
    public class TestFilter {

        /*
         * To run a specific test, just launch the program with one parameter which
         * is the test index.
         */
        public static void main(String[] args) {
            if (args.length != 1) {
                System.out.println("Usage : java TestFilter 1-10|all");
                System.exit(1);
            }
            final String arg = args[0];
            if (arg.endsWith("all")) {
                for (int i = 1; i <= 10; i++) {
                    test(i);
                }
            } else {
                final int testIndex = Integer.parseInt(args[0]);
                if (testIndex < 1 || testIndex > 10) {
                    System.out.println("Usage : java TestFilter 1-10|all");
                    System.exit(1);
                }
                test(testIndex);
            }
        }

        @SuppressWarnings("unchecked")
        private static void test(int testIndex) {
            final NamedCache cache = CacheFactory.getCache("test-cache");
            final int totalObjects = 2000;
            final int totalTries = 40;
            if (testIndex >= 5 && testIndex <= 8) {
                // Add index
                cache.addIndex(new ReflectionExtractor("getKey"), false, null);
            }
            // Add listeners
            for (int i = 0; i < totalObjects; i++) {
                final MapListener listener = new SimpleMapListener();
                if (testIndex < 9) {
                    // Listen to data with a given filter
                    final Filter filter = new EqualsFilter("getKey", i);
                    cache.addMapListener(listener, new MapEventFilter(filter), false);
                } else {
                    // Listen to data with a given key
                    cache.addMapListener(listener, new TestObjectSimple(i), false);
                }
            }
            // Load data
            long time = System.currentTimeMillis();
            for (int iTry = 0; iTry < totalTries; iTry++) {
                final long currentTime = System.currentTimeMillis();
                final Map<Object, Object> buffer = new HashMap<Object, Object>(totalObjects);
                for (int i = 0; i < totalObjects; i++) {
                    final Object obj;
                    if (testIndex == 1 || testIndex == 2 || testIndex == 5 || testIndex == 6) {
                        // Create data with key with time stamp
                        obj = new TestObjectComplete(i, currentTime);
                    } else {
                        // Create data with key without time stamp
                        obj = new TestObjectSimple(i);
                    }
                    if ((testIndex & 1) == 1) {
                        // Load data directly into the cache
                        cache.put(obj, obj);
                    } else {
                        // Load data into a buffer first
                        buffer.put(obj, obj);
                    }
                }
                if (!buffer.isEmpty()) {
                    cache.putAll(buffer);
                }
            }
            time = System.currentTimeMillis() - time;
            System.out.println("Test " + testIndex + ": Total " + time + " millis, Avg "
                    + (time / totalTries) + ", Total Tries " + totalTries
                    + ", Cache Size " + cache.size());
            cache.destroy();
        }

        public static class SimpleMapListener implements MapListener {
            public void entryDeleted(MapEvent evt) {}
            public void entryInserted(MapEvent evt) {}
            public void entryUpdated(MapEvent evt) {}
        }

        public static class TestObjectComplete implements ExternalizableLite {
            private static final long serialVersionUID = -400722070328560360L;
            private int key;
            private long time;
            public TestObjectComplete() {}
            public TestObjectComplete(int key, long time) {
                this.key = key;
                this.time = time;
            }
            public int getKey() {
                return key;
            }
            public void readExternal(DataInput in) throws IOException {
                this.key = in.readInt();
                this.time = in.readLong();
            }
            public void writeExternal(DataOutput out) throws IOException {
                out.writeInt(key);
                out.writeLong(time);
            }
        }

        public static class TestObjectSimple implements ExternalizableLite {
            private static final long serialVersionUID = 6154040491849669837L;
            private int key;
            public TestObjectSimple() {}
            public TestObjectSimple(int key) {
                this.key = key;
            }
            public int getKey() {
                return key;
            }
            public void readExternal(DataInput in) throws IOException {
                this.key = in.readInt();
            }
            public void writeExternal(DataOutput out) throws IOException {
                out.writeInt(key);
            }
            public int hashCode() {
                return key;
            }
            public boolean equals(Object o) {
                return o instanceof TestObjectSimple && key == ((TestObjectSimple) o).key;
            }
        }
    }
    Here is my Coherence config file:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>test-cache</cache-name>
                   <scheme-name>default-distributed</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>          
              <distributed-scheme>
                   <scheme-name>default-distributed</scheme-name>
                   <backing-map-scheme>
                        <class-scheme>
                             <scheme-ref>default-backing-map</scheme-ref>
                        </class-scheme>
                   </backing-map-scheme>
              </distributed-scheme>
              <class-scheme>
                   <scheme-name>default-backing-map</scheme-name>
                   <class-name>com.tangosol.util.SafeHashMap</class-name>
              </class-scheme>
         </caching-schemes>
    </cache-config>

    Hi Robert,
    Indeed, only the Filter.evaluate(Object obj) method is invoked, but the object passed to it is a MapEvent. <<
    In fact, I do not need to implement EntryFilter to get a MapEvent; I could get the same result (in my last message) by writing cache.addMapListener(listener, filter, true) instead of cache.addMapListener(listener, new MapEventFilter(filter), true).
    I believe, when the MapEventFilter delegates to your filter it always passes a value object to your filter (old or new), meaning a value will be deserialized.
    If you instead used your own filter, you could avoid deserializing the value which usually is much larger, and go to only the key object. This would of course only be noticeable if you indeed used a much heavier cached value class.
    The hashCode() and equals() does not matter on the filter class <<
    I'm not so sure, since I noticed that these methods are implemented in the EqualsFilter class, that they are called at runtime, and that the performance results are better when you add them.
    That interests me... In what circumstances did you see them invoked? On the storage node before sending an event, or upon registering a filtered listener?
    If the second, then I guess the listeners are stored in a hash-based map of collections keyed by the filter, and indeed that might be relevant, as it would mean fewer passes over the filter when multiple listeners register equal filters.
    DataOutput.writeInt(int) writes 4 bytes. ExternalizableHelper.writeInt(DataOutput, int) writes 1-5 bytes (or 1-6?), with numbers of small absolute value consuming fewer bytes. Similar differences exist for the long type as well, but your stamp attribute will probably be a large number... <<
    I tried it, but in my use case I got the same results. I guess it would become interesting if I serialized/deserialized many more objects.
    Also, if Coherence serializes an ExternalizableLite object, it writes out its class name (except if it is a Coherence XmlBean). If you define your key as an XmlBean and add your class to the class-name cache configuration in ExternalizableHelper.xml, then only an int will be written instead of the class name. This way you can spare a large percentage of the bandwidth consumed by transferring your key instance, as it has only a small number of attributes. For the value object it might or might not be so relevant, considering that it will probably contain many more attributes. However, in the case of a lite event, the value is not transferred at all. <<
    I tried it too, and in my use case I noticed that we get objects nearly twice as light as an ExternalizableLite object, but it's slower to get them. Still, it is very interesting to keep in mind if we would like to reduce the network traffic.
    Yes, these are minor differences at the moment.
    As for the performance of XmlBean, it is a hack, but you might try overriding the readExternal/writeExternal methods with your usual ExternalizableLite implementation. That way you get the advantage of the XmlBean class-name cache and avoid its reflection-based operation, at the cost of having to extend XmlBean.
    Also, sooner or later the TCMP protocol and the distributed cache storages will support PortableObject as a transmission format, which enables your own class-name resolution and allows you to omit the class name from your objects. Unfortunately, I don't know when it will be implemented.
    > But finally, I guess that I found the best solution for my specific use case, which is to use a map listener for a key which has no time stamp; since the time stamp is never null, I just had to check the time stamp properly in the equals method.
    I would still recommend using a separate key class, using a custom filter which accesses only the key and not the value, and, if possible, registering a lite listener instead of a heavy one. Try it with a much heavier cached value class, where the differences are more pronounced.
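The key-only filter recommended here might look like the sketch below. It assumes the Coherence 3.3 API and the TestObjectSimple key class from the code earlier in this thread, relies on the behavior discussed above (a filter registered without a MapEventFilter wrapper receives the MapEvent itself), and will not compile without coherence.jar:

```java
import com.tangosol.util.Filter;
import com.tangosol.util.MapEvent;

public class KeyOnlyEventFilter implements Filter {
    private final int typeId;

    public KeyOnlyEventFilter(int typeId) {
        this.typeId = typeId;
    }

    public boolean evaluate(Object o) {
        // Registered directly (no MapEventFilter wrapper), so Coherence passes
        // the MapEvent itself. Reading only the key avoids deserializing the
        // old and new values, which are usually much larger.
        MapEvent evt = (MapEvent) o;
        return evt.getKey() instanceof TestFilter.TestObjectSimple
                && ((TestFilter.TestObjectSimple) evt.getKey()).getKey() == typeId;
    }

    // equals()/hashCode() let the cache service recognize identical filters
    // registered by many listeners, as discussed above.
    public boolean equals(Object o) {
        return o instanceof KeyOnlyEventFilter && typeId == ((KeyOnlyEventFilter) o).typeId;
    }

    public int hashCode() {
        return typeId;
    }
}
```

Registration would then pass true for the third argument to get a lite listener, e.g. cache.addMapListener(listener, new KeyOnlyEventFilter(i), true);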
    Best regards,
    Robert

  • Performance difference between 6490M and 6750M?

    I'm looking at a new 15" MBP. What performance difference will I likely see between the Radeon 6490M and 6750M? It's a huge jump in graphics memory, but how much will it help me? I'm not a gamer at all, but I do a fair amount of work with photos and video.
    Thanks!

    Since you don't play games, you'll notice a difference in graphic and video applications that are GPU accelerated, like Motion, and Photoshop CS4 & 5.

  • Performance difference with my MBP 15" 2010 and the new 2011 one(gaming?)

    Hey guys,
    I have a 2.67 GHz i7 2010 15" MBP (the higher-end model) that I got last year. I use it for work and for college, but I also like to play games on it.
    I recently got an Apple newsletter and noticed the huge performance boost of the newer models. I went to a review site, and my specific model nearly doubled in benchmark scores with the newer model. That doesn't seem like something that would normally happen; it seems like a rarity that's just too good to ignore. And this summer, with the semesters being over, I will probably go back to full-on gaming (probably Portal 2 and WoW for the summer).
    My question to you is, since I can't go by the opinion of one site, what is the performance difference between the 2010 model I have and the newer 2011 one? Basically, I am wondering if it's worth it to sell it for several hundred dollars less than I paid for it and get the newer model now, or wait until 2012 to get the newest model then (if it's going to be an even bigger difference in performance)?
    Excuse my poor judgment if I'm wrong; I probably seem like a maniac for using this computer for a year and already wanting the newer model. It's just that a lot of sites are making it seem like my model is fodder compared to the newer ones, and if so I might not let this opportunity pass by.

    bump for good measure!

  • Logic Pro X performance difference between iMac i7 3.5ghz (non-Retina) and the i7 4.0 ghz (Retina)?

    Hello - on the verge of getting an iMac - it's for Logic Pro and some video editing with Premiere Pro (don't *really* need the Retina screen though) - I'd just like to get an idea of the performance difference (especially in Logic Pro X) between the quad i7 3.5 GHz (non-Retina) and the quad i7 4.0 GHz (Retina)?
    I use loads of plugin instruments (incl. big Kontakt libraries) and effects with just a few audio tracks (only ever record max a couple of audio tracks at a time)
    Thanks!

    I owned the iMac and then returned it for the 2.6 GHz MacBook Pro. After using both, there is a noticeable speed difference between the two. Not a huge difference, but there is certainly more lag using the MacBook Pro. I found that lots of the lag went away when I attached an external display, though. At the end of the day, I think you just need to decide if you need the portability or not. At first I thought I would keep the iMac and then get a cheap MacBook Pro to use on the road, but the thought of having multiple Lightroom catalogs sounds very annoying to me, plus all the transferring back and forth. I will also say that I've been getting a lot of freezes in Photoshop on my MBP, and weird errors like the sleep-wake failure error some people are talking about when connected to a USB device. I didn't have any of these problems with the iMac.

  • Data in table is huge so performance is reduced

    Hi all,
    I have a table in the Oracle database where the number of records is huge (six years of data), so when I generate the report, data retrieval is slow.
    How can I improve performance so that the report can be generated quickly?
    Thanks in advance

    Thank you for polluting TWO forums with the exact same request.
    Please stick to the thread with answers in it.
    Data in table is huge so performance is reduced

  • SQL Loader and Insert Into Performance Difference

    Hello All,
    I'm in a situation where I need to measure the performance difference between SQL*Loader and INSERT INTO. Say there are 10,000 records in a flat file and I want to load them into a staging table.
    I know that if I use PL/SQL UTL_FILE to do this job, performance will degrade (don't ask me why I'm going with UTL_FILE instead of SQL*Loader), but I don't know by how much. Can anybody tell me the performance difference in percent (e.g., a 20% decrease) for 10,000 records?
    Thanks,
    Kannan.

    Kannan B wrote:
    > Do not confuse the topic; as I said, I'm not going to use external tables. This post is about the performance difference between SQL*Loader and a simple insert statement.
    I don't think people are confusing the topic.
    External tables are a superior means of reading a file as it doesn't require any command line calls or external control files to be set up. All that is needed is a single external table definition created in a similar way to creating any other table (just with the additional external table information obviously). It also eliminates the need to have a 'staging' table on the database to load the data into as the data can just be queried as needed directly from the file, and if the file changes, so does the data seen through the external table automatically without the need to re-run any SQL*Loader process again.
    Who told you not to use External Tables? Do they know what they are talking about? Can they give a valid reason why external tables are not to be used?
    IMO, if you're considering SQL*Loader, you should be considering External tables as a better alternative.

  • Is there any performance difference in the order of columns referencing index?

    I wish to find out whether there is any performance or efficiency difference in listing the indexed columns first in the WHERE clause of SQL statements. That is, does the order of the columns referencing the index matter?
    E.g. id is the column that is indexed
    SELECT * FROM a where a.id='1' and a.name='John';
    SELECT * FROM a where a.name='John' and a.id='1';
    Are there any differences in the efficiency of the two statements?
    Please advise. Thanks.

    There is no difference between the two statements under either the RBO or the CBO.
    sql>create table a as select * from all_objects;
    Table created.
    sql>create index a_index on a(object_id);
    Index created.
    sql>analyze table a compute statistics;
    Table analyzed.
    sql>select count(*)
      2    from a
      3   where object_id = 1
      4     and object_name = 'x';
    COUNT(*)
            0
    1 row selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=29)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'A' (Cost=1 Card=1 Bytes=29)
       3    2       INDEX (RANGE SCAN) OF 'A_INDEX' (NON-UNIQUE) (Cost=1 Card=1)
    sql>select count(*)
      2    from a
      3   where object_name = 'x'   
      4     and object_id = 1;
    COUNT(*)
            0
    1 row selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=29)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'A' (Cost=1 Card=1 Bytes=29)
       3    2       INDEX (RANGE SCAN) OF 'A_INDEX' (NON-UNIQUE) (Cost=1 Card=1)

  • AppleScript Performance Difference from Finder to System Events

    I've just made an interesting discovery. I have a folder that contains about 7000 other folders. When I use the code "tell application "Finder" to set folder_list to folders in folder base_folder", it takes a very long time to respond, about 40 seconds. When I use the code "tell application "System Events" to set folder_list to every folder of folder base_folder", it seems to produce the same result in only about 2 seconds. If I add filtering criteria, like "whose name begins with search_name", the performance difference is even greater. Clearly I'll be using System Events from now on, but can anyone explain why there is such a large difference in performance? Is there anywhere I can find other performance tweaks like this?
    Note: I'm using Mac OS X 10.6.5, but there is no Automator section in that forum.

    It seems you're panicking!
    First of all, run maintenance, look for system updates if any, and check for any stuck key on your keyboard.
    Do not abuse Force Quit; it can destroy application preferences. If your iTunes takes 3 hours to import your library, you can adjust some settings in the Energy Saver panel to protect the LCD.
    I think 3 hours to import a music library is not normal; can you post some other info about it?
    What kind of device are you copying from?
    Did you import a user from your old Mac when you set up your new iMac?
    Take a look also at the Spotlight corner; if there's a little dot inside the magnifying-glass icon, Spotlight is indexing your drive(s). This is normal on the system's first run, and it can slow Mac performance.

  • Graph axes assignment: performance difference between ATTR_ACTIVE_XAXIS and ATTR_PLOT_XAXIS

    Hi,
    I am using a xy graph with both x axes and both y axes. There are two possibilities when adding a new plot:
    1) PlotXY and SetPlotAttribute ( , , , ATTR_PLOT_XAXIS, );
    2) SetCtrlAttribute ( , , ATTR_ACTIVE_XAXIS, ) and PlotXY
    I tend to prefer the second method because I would assume it to be slightly faster, but what do the experts say?
    Thanks!  
    Solved!
    Go to Solution.

    Hi Wolfgang,
    thank you for your interesting question.
    First of all I want to say that, generally speaking, using the command "SetCtrlAttribute" is the best way to work with your elements. I would suggest using this command whenever possible.
    Now, to your question regarding the performance difference between "SetCtrlAttribute" and "SetPlotAttribute".
    I think the performance difference occurs because in the background of the "SetPlotAttribute" command, another function called "ProcessDrawEvents" is executed. This event refreshes your plot again and again within the function, whereas with "SetCtrlAttribute" the refresh is done once after the function has finished. This might be a possible reason.
    For example, say you have a progress bar which shows the progress of installing a driver:
    "SetPlotAttribute" would show you the progress bar moving step by step until installing the driver is done.
    "SetCtrlAttribute" would just show you an empty bar at the start and a full progress bar when the installation is done.
    I think it is like that, but I can't tell you 100%; for that I would need to ask our developers.
    If you want, I can forward the question to them; this might take some time. I would also need to know which version of CVI you are using.
    Please let me know if you want me to forward your question.
    Have a nice day,
    Abduelkerim
    Sales
    NI Germany

  • SQL Server 2008R2 vs 2012 OLTP performance difference - log flushes size different

    Hi all,
    I'm doing some performance tests against two identical virtual machines (each VM has the same virtual resources and uses the same physical hardware).
    The first VM has Windows Server 2008R2 and SQL Server 2008R2 Standard Edition;
    the second VM has Windows Server 2012R2 and SQL Server 2012 SP2 + CU1 Standard Edition.
    I'm using HammerDB (http://hammerora.sourceforge.net/) as the benchmark tool to simulate a TPC-C test.
    I've noticed a significant performance difference between SQL2008R2 and SQL2012; 2008R2 performs better. Let me explain what I've found:
    I use a third VM as the client where the HammerDB software is installed, and I run the test against the two SQL Servers (one server at a time); on SQL2008R2 I reach a higher number of transactions per minute.
    HammerDB creates a database on each database server (so the databases are identical except for the compatibility level), and then HammerDB executes a sequence of queries (insert-update) simulating the TPC-C standard; the sequence is identical on both servers.
    Using perfmon on the two servers I've found a very interesting thing:
    On the disk used by the HammerDB database's log (I use separate disks for data and log) I've monitored Avg. Disk Bytes/Write and noticed that SQL2012 writes to the log in smaller packets (say an average of 3k against an average of 5k written by SQL2008R2).
    I've also checked the value of Log Flushes/sec on both servers and noticed that SQL2012 does, on average, more log flushes per second, so more log flushes of fewer bytes each...
    I've searched for any documented difference in the way log buffers are flushed to disk between 2008R2 and 2012 but found none.
    Can anyone point me in the correct direction?

    Andrea,
    1) First of all, fn_dblog exposes a lot of fields that do not exist in SQL2008R2.
    This is correct, though I can't elaborate, as I do not know how or why the changes were made.
    2) For the same DML or DDL, the number of log records generated is different.
    I thought as much (but didn't know the workload).
    I would like to read and study what these changes are! Do you have any useful links to internals docs?
    Unfortunately I cannot offer anything, as the function used is currently undocumented and there are no published papers or documentation by MS on reading log records or why/how they changed. I would assume this is all NDA information by Microsoft.
    Sorry I can't be of more help, but you at least know that the different versions do have behavior changes.
    Sean Gallardy | Blog | Microsoft Certified Master
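
    One way to quantify the "more flushes of fewer bytes" observation above is to derive the average flush size from two counters: Log Flushes/sec together with Log Bytes Flushed/sec (which SQL Server also exposes alongside it). A minimal sketch, assuming the counter values have already been sampled with perfmon or sys.dm_os_performance_counters; the sample numbers below are hypothetical, chosen only to illustrate the 5 KB vs. 3 KB pattern described in the question:

    ```python
    def avg_log_flush_bytes(log_bytes_flushed_per_sec, log_flushes_per_sec):
        """Average size of a single log flush, derived from two sampled counters."""
        if log_flushes_per_sec == 0:
            return 0.0
        return log_bytes_flushed_per_sec / log_flushes_per_sec

    # Hypothetical sampled values illustrating the observed pattern:
    # same log throughput, but SQL2012 spreads it over more, smaller flushes.
    sql2008r2 = avg_log_flush_bytes(5_000_000, 1000)  # fewer, larger flushes
    sql2012 = avg_log_flush_bytes(5_000_000, 1666)    # more, smaller flushes
    print(sql2008r2, sql2012)
    ```

    Comparing these two derived averages over the same benchmark run makes the behavioral difference visible even when raw throughput looks similar.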

  • Major performance difference - OS X vs. Windows (CC 2014)

    I've built an Action-based auto-painter, using the Art History Brush Tool plus the Tool Recording option within the Actions themselves. The development work was done on an iMac machine. Recently I've also been testing the very same Actions on a Windows PC.
    Even though the specs for the two machines are very similar - and both normally run Photoshop CC 2014 in a very comparable way speed-wise - the auto-painting done on the Windows machine is very slow. Completing the painting of a full 3000 by 2000 pixel image takes between 8 and 35 minutes on Windows; the same image can be painted in between one and a half and three minutes on the iMac. The averaged time-to-completion is approximately ten times faster for the iMac.
    The actual specifications for the machines and the software preferences set are:
    iMac
    3.4 GHz Intel i7 processor
    16 GB RAM
    conventional HDD 1 TB
    ATI Technologies AMD Radeon HD6970M 2048 MB
    OS X 10.9.4
    Photoshop CC 2014
    - set to 75% available RAM
    - set to 20 History states
    - set to 6 cache levels, tile size 1024 kB
    no other primary application open
    Windows PC
    3.3 GHz Intel i5 processor
    16 GB RAM
    SSD Intel 120 GB
    AMD Radeon HD 6950 1024 MB
    Windows 7 Pro (SP1)
    Photoshop CC 2014
    - set to 75% available RAM
    - set to 20 History states
    - set to 6 cache levels, tile size 1024 kB
    no other primary application open
    Does anyone have any suggestions on what might be causing the performance difference?


  • Is there a performance difference between Automation Plug-ins and the scripting system?

    We currently have a tool that, through the scripting system, merges and hides layers by layer group, exports them, and then moves to the next layer group. There is some custom logic and channel merging that occasionally occurs in the merging of an individual layer group. These operations are occurring through the scripting system (actually, through C# making direct function calls into Photoshop), and there are some images where these operations take ~30-40 minutes to complete on very large images.
    Is there a performance difference between doing the actions in this way as opposed to having these actions occur in an automation plug-in?
    Thanks,

    Thanks for the reply. I ended up benchmarking the current implementation we are using (which goes through the DOM from all indications; I wasn't the original author of the code) and found that accessing each layer was taking upwards of 300 ms. I benchmarked iterating through the layers with PIUGetInfoByIndexIndex (in the Getter automation plug-in) and found that the first layer took ~300 ms, but the rest took ~1 ms. With that information, I decided it was worthwhile rewriting the functionality in an Automation plug-in.
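
    The per-call measurement described above can be reproduced with a small timing harness. This is a generic Python sketch (not Photoshop SDK code; the `get_layer_info` function is a stand-in) that times each call individually, so an expensive first call stands out from cheap subsequent ones:

    ```python
    import time

    def time_each_call(fn, n_calls):
        """Time n_calls invocations of fn individually; return per-call durations."""
        durations = []
        for i in range(n_calls):
            start = time.perf_counter()
            fn(i)
            durations.append(time.perf_counter() - start)
        return durations

    # Stand-in for layer access: the first call pays a one-time setup cost
    # (mimicking the ~300 ms first layer vs ~1 ms subsequent layers pattern).
    _cache = {}
    def get_layer_info(index):
        if not _cache:
            _cache.update({i: f"layer-{i}" for i in range(100_000)})  # one-time setup
        return _cache[index]

    timings = time_each_call(get_layer_info, 5)
    # timings[0] dominates; the remaining entries are near zero.
    ```

    Timing calls individually, rather than dividing total time by call count, is what makes a warm-up cost like this visible.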

  • IMac 64-bit Performance Difference?

    Is there a significant (or any) performance difference when running the iMac in 64-bit mode rather than 32-bit mode? Any reason why I would not want to run it with the 64-bit kernel? I have an early 2009 20" iMac with 4 GB and an Intel 2.66 processor running Snow Leopard.
    Thanks!

    Running a computer in 64-bit mode will have no bearing on performance if the application software being run is 32-bit, as most current apps are. As time passes, more and more applications will be ported to 64-bit; for a timeline you would need to contact each manufacturer and ask.
    If you want to start your Mac in 64-bit mode, hold down the 6 and 4 keys on startup. To see which applications are running in 64-bit, start Activity Monitor; the Kind column will state which applications are 64-bit.
    Regards,
    Roger

Maybe you are looking for

  • My laptop will not start - it's telling me that "the logon user interface dll csgina.dll fail to loa

    My Laptop will not start - it's a 4-year-old PC and it's running on XP. When I start I get this message: "The Logon User Interface DLL failed to load - Contact your administrator to replace the DLL or restore the original". How do I fix so I can start my w

  • One to one (mandatory participation)

    Hi, if my db has a one-to-one relationship with mandatory participation on both sides (i.e. (1,1) ---- (1,1)), then according to my relational database notes, we can put the two relations as one, i.e. schema (X U Y) = X1,...Xn,Y1,...,Yn. I assume this

  • All devices listed as wireless clients not showing up.

    I  have an airport extreme (newest version) I have my apple tv, computer and ipad showing up listed as wireless clients, however my iphone does not show up. It is connected to my network. I have tested with another iphone (same version 4S and same iO

  • Hyperlinks in spark RichEditableText issue!?!

    Hi, I found a really annoying bug in the RichEditableText control. Has anyone tried to set a fixed width and textAlign to "center" or "right" (actually anything other than "left") while the text control contains a hyperlink? If you try this you will noti

  • BIN object names

    How can you remove bin files from an object name? I have attached a screen shot this occured after creating a new user so that I would not have all of the default install tables showing up but did not have the correct role. SUBSTR(OBJECT_NAME,1 OBJEC