Performance difference between left outer join / inner join

Hi,
I've got a complex query which, among other things, accesses quite a large table. If I join this table with an inner join, the response is quite fast and the execution plan shows that nested loops are used to gather the data.
If I change the inner join to a left outer join, I get a big performance drop: the cost of the query goes from 1441 to 28544.
I don't understand why there's such a difference. For an inner join the database has to remove all the rows that don't have a match in the joined table. For a left join it can keep all the rows and just return NULL values for those that don't have a match. In my mind the left join should be faster, as it seems simpler, and the access plan could be the same, couldn't it?
Execution plan for inner join:
| Id  | Operation                        | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                 |                                |     1 |   288 |  1441   (1)| 00:00:18 |
|   1 |  HASH GROUP BY                   |                                |     1 |   288 |  1441   (1)| 00:00:18 |
|   2 |   NESTED LOOPS OUTER             |                                |     1 |   288 |  1440   (1)| 00:00:18 |
|   3 |    NESTED LOOPS                  |                                |     1 |   261 |  1438   (1)| 00:00:18 |
|   4 |     NESTED LOOPS                 |                                |   318 | 74412 |   508   (1)| 00:00:07 |
|   5 |      NESTED LOOPS                |                                |   318 | 51834 |   189   (0)| 00:00:03 |
|   6 |       TABLE ACCESS BY INDEX ROWID| RESURCE                        |     1 |   106 |     1   (0)| 00:00:01 |
|*  7 |        INDEX UNIQUE SCAN         | RESURCE_PRINCIPAL_NAME_INDEX   |     1 |       |     0   (0)| 00:00:01 |
|   8 |       TABLE ACCESS BY INDEX ROWID| TASK_USES_RESURCE              |   318 | 18126 |   188   (0)| 00:00:03 |
|*  9 |        INDEX RANGE SCAN          | TASK_USES_RESUR_IDX$$_0CDC0002 |   318 |       |     3   (0)| 00:00:01 |
|  10 |      TABLE ACCESS BY INDEX ROWID | TASK                           |     1 |    71 |     1   (0)| 00:00:01 |
|* 11 |       INDEX UNIQUE SCAN          | TASK_PK                        |     1 |       |     0   (0)| 00:00:01 |
|  12 |     TABLE ACCESS BY INDEX ROWID  | TASK_WORK_HISTORY              |     1 |    27 |     3   (0)| 00:00:01 |
|* 13 |      INDEX RANGE SCAN            | TASK_WORK_HISTORY_INDEX1       |     1 |       |     2   (0)| 00:00:01 |
|  14 |    TABLE ACCESS BY INDEX ROWID   | TASK_USES_RESURCE              |     1 |    27 |     2   (0)| 00:00:01 |
|* 15 |     INDEX UNIQUE SCAN            | TASK_USES_RESURCE_UK1          |     1 |       |     1   (0)| 00:00:01 |
For left outer join:
| Id  | Operation                       | Name                           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                |                                |   318 |  1596K|       | 28544   (2)| 00:05:43 |
|*  1 |  HASH JOIN OUTER                |                                |   318 |  1596K|  1584K| 28544   (2)| 00:05:43 |
|   2 |   VIEW                          |                                |   318 |  1580K|       |   508   (1)| 00:00:07 |
|   3 |    NESTED LOOPS                 |                                |   318 | 74412 |       |   508   (1)| 00:00:07 |
|   4 |     NESTED LOOPS                |                                |   318 | 51834 |       |   189   (0)| 00:00:03 |
|   5 |      TABLE ACCESS BY INDEX ROWID| RESURCE                        |     1 |   106 |       |     1   (0)| 00:00:01 |
|*  6 |       INDEX UNIQUE SCAN         | RESURCE_PRINCIPAL_NAME_INDEX   |     1 |       |       |     0   (0)| 00:00:01 |
|   7 |      TABLE ACCESS BY INDEX ROWID| TASK_USES_RESURCE              |   318 | 18126 |       |   188   (0)| 00:00:03 |
|*  8 |       INDEX RANGE SCAN          | TASK_USES_RESUR_IDX$$_0CDC0002 |   318 |       |       |     3   (0)| 00:00:01 |
|   9 |     TABLE ACCESS BY INDEX ROWID | TASK                           |     1 |    71 |       |     1   (0)| 00:00:01 |
|* 10 |      INDEX UNIQUE SCAN          | TASK_PK                        |     1 |       |       |     0   (0)| 00:00:01 |
|  11 |   VIEW                          |                                |  1480K|    73M|       | 23431   (2)| 00:04:42 |
|* 12 |    HASH JOIN RIGHT OUTER        |                                |  1480K|    76M|    38M| 23431   (2)| 00:04:42 |
|  13 |     TABLE ACCESS FULL           | TASK_USES_RESURCE              |  1486K|    21M|       |  2938   (2)| 00:00:36 |
|  14 |     VIEW                        |                                |  1445K|    53M|       | 15031   (2)| 00:03:01 |
|  15 |      HASH GROUP BY              |                                |  1445K|    37M|   110M| 15031   (2)| 00:03:01 |
|  16 |       TABLE ACCESS FULL         | TASK_WORK_HISTORY              |  1445K|    37M|       |  3897   (2)| 00:00:47 |

...continued
Complete execution plan for left join:
| Id  | Operation                       | Name                           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |                                                                                                                                                                                  
|   0 | SELECT STATEMENT                |                                |   318 |  1594K|       | 28544   (2)| 00:05:43 |                                                                                                                                                                                  
|*  1 |  HASH JOIN OUTER                |                                |   318 |  1594K|  1584K| 28544   (2)| 00:05:43 |                                                                                                                                                                                  
|   2 |   VIEW                          |                                |   318 |  1578K|       |   508   (1)| 00:00:07 |                                                                                                                                                                                  
|   3 |    NESTED LOOPS                 |                                |   318 | 74412 |       |   508   (1)| 00:00:07 |                                                                                                                                                                                  
|   4 |     NESTED LOOPS                |                                |   318 | 51834 |       |   189   (0)| 00:00:03 |                                                                                                                                                                                  
|   5 |      TABLE ACCESS BY INDEX ROWID| RESURCE                        |     1 |   106 |       |     1   (0)| 00:00:01 |                                                                                                                                                                                  
|*  6 |       INDEX UNIQUE SCAN         | RESURCE_PRINCIPAL_NAME_INDEX   |     1 |       |       |     0   (0)| 00:00:01 |                                                                                                                                                                                  
|   7 |      TABLE ACCESS BY INDEX ROWID| TASK_USES_RESURCE              |   318 | 18126 |       |   188   (0)| 00:00:03 |                                                                                                                                                                                  
|*  8 |       INDEX RANGE SCAN          | TASK_USES_RESUR_IDX$$_0CDC0002 |   318 |       |       |     3   (0)| 00:00:01 |                                                                                                                                                                                  
|   9 |     TABLE ACCESS BY INDEX ROWID | TASK                           |     1 |    71 |       |     1   (0)| 00:00:01 |                                                                                                                                                                                  
|* 10 |      INDEX UNIQUE SCAN          | TASK_PK                        |     1 |       |       |     0   (0)| 00:00:01 |                                                                                                                                                                                  
|  11 |   VIEW                          |                                |  1480K|    73M|       | 23431   (2)| 00:04:42 |                                                                                                                                                                                  
|* 12 |    HASH JOIN RIGHT OUTER        |                                |  1480K|    76M|    38M| 23431   (2)| 00:04:42 |                                                                                                                                                                                  
|  13 |     TABLE ACCESS FULL           | TASK_USES_RESURCE              |  1486K|    21M|       |  2938   (2)| 00:00:36 |                                                                                                                                                                                  
|  14 |     VIEW                        |                                |  1445K|    53M|       | 15031   (2)| 00:03:01 |                                                                                                                                                                                  
|  15 |      HASH GROUP BY              |                                |  1445K|    37M|   110M| 15031   (2)| 00:03:01 |                                                                                                                                                                                  
|  16 |       TABLE ACCESS FULL         | TASK_WORK_HISTORY              |  1445K|    37M|       |  3897   (2)| 00:00:47 |                                                                                                                                                                                  
Query Block Name / Object Alias (identified by operation id):                                                                                                                                                                                                                                               
   1 - SEL$1AFB0324                                                                                                                                                                                                                                                                                         
   2 - SEL$58A6D7F6 / from$_subquery$_005@SEL$8                                                                                                                                                                                                                                                             
   3 - SEL$58A6D7F6                                                                                                                                                                                                                                                                                         
   5 - SEL$58A6D7F6 / RESURCEWORKER@SEL$2                                                                                                                                                                                                                                                                   
   6 - SEL$58A6D7F6 / RESURCEWORKER@SEL$2                                                                                                                                                                                                                                                                   
   7 - SEL$58A6D7F6 / TASKUSESRESURCE@SEL$1                                                                                                                                                                                                                                                                 
   8 - SEL$58A6D7F6 / TASKUSESRESURCE@SEL$1                                                                                                                                                                                                                                                                 
   9 - SEL$58A6D7F6 / TASK@SEL$1                                                                                                                                                                                                                                                                            
  10 - SEL$58A6D7F6 / TASK@SEL$1                                                                                                                                                                                                                                                                            
  11 - SEL$7EBCC247 / TRW@SEL$3                                                                                                                                                                                                                                                                             
  12 - SEL$7EBCC247                                                                                                                                                                                                                                                                                         
  13 - SEL$7EBCC247 / TUR@SEL$4                                                                                                                                                                                                                                                                             
  14 - SEL$6        / TRW_IN@SEL$5                                                                                                                                                                                                                                                                          
  15 - SEL$6                                                                                                                                                                                                                                                                                                
  16 - SEL$6        / TWH@SEL$6                                                                                                                                                                                                                                                                             
Predicate Information (identified by operation id):                                                                                                                                                                                                                                                         
   1 - access("TRW"."RESURCE_ID"(+)="TASKUSESRESURCE"."RESURCE_ID" AND "TRW"."TASK_ID"(+)="TASK"."ID")                                                                                                                                                                                                      
   6 - access("RESURCEWORKER"."USER_PRINCIPAL_NAME"=U'jernej')                                                                                                                                                                                                                                              
   8 - access("TASKUSESRESURCE"."RESURCE_ID"="RESURCEWORKER"."ID")                                                                                                                                                                                                                                          
  10 - access("TASKUSESRESURCE"."TASK_ID"="TASK"."ID")                                                                                                                                                                                                                                                      
  12 - access("TUR"."RESURCE_ID"(+)="TRW_IN"."RESURCE_ID" AND "TUR"."TASK_ID"(+)="TRW_IN"."TASK_ID")

Jonathan, I've been to one of your workshops in Ljubljana. I'm still trying to understand everything you explained and use it, but there's much I have to learn and understand.
The way I see this query, it should first join and filter the first three tables and only then join the TRW subquery. The problem with this subquery is the TASK_WORK_HISTORY table, which is accessed in very different ways in different places, so there will always be many reads to gather the required data. For now, however, I'd be happy just to bring the performance of the left join up to that of the inner join...
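In case it helps, the query looks roughly like this (heavily simplified: the select list and the aggregate inside TRW are just placeholders, and the real TRW view also outer-joins TASK_USES_RESURCE again; only the join structure and the predicates match the plan above):

select task.id,
       resurceworker.user_principal_name,
       trw.total_work                        -- placeholder columns; real select list not shown
from   resurce resurceworker
       join task_uses_resurce taskusesresurce
            on taskusesresurce.resurce_id = resurceworker.id
       join task
            on task.id = taskusesresurce.task_id
       left outer join (
            -- placeholder contents of TRW: some aggregate over TASK_WORK_HISTORY (TWH in the plan)
            select twh.resurce_id,
                   twh.task_id,
                   sum(twh.hours_worked) as total_work    -- hypothetical aggregate
            from   task_work_history twh
            group  by twh.resurce_id, twh.task_id
       ) trw
            on  trw.resurce_id = taskusesresurce.resurce_id
            and trw.task_id    = task.id
where  resurceworker.user_principal_name = 'jernej';

Comparing the two plans, the inner-join version merges the TRW view and probes TASK_WORK_HISTORY by index for each of the ~318 driving rows (operations 12-13 in the first plan), while the outer-join version materialises the whole grouped view (about 1.4 million rows, operations 11-16) and hash-joins it, which seems to be where almost all of the 28544 cost goes.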

Similar Messages

  • What is left join / right join / outer join / inner join? Please give an example!

    What is left join / right join / outer join / inner join? Please give an example!
    Thanks

    Maybe these examples will give you an idea...
    SQL> select * from t1;
            ID
             1
             2
             3
             4
    SQL> select * from t2;
            ID
             3
             4
             5
             6
    -- LEFT OUTER JOIN
    SQL> select t1.id, t2.id
      2  from t1 LEFT OUTER JOIN t2 ON (t1.id = t2.id);
            ID         ID
             3          3
             4          4
             1
             2
    -- RIGHT OUTER JOIN
    SQL> select t1.id, t2.id
      2  from t1 RIGHT OUTER JOIN t2 ON (t1.id = t2.id);
            ID         ID
             3          3
             4          4
                        6
                        5
    -- LEFT JOIN (SAME AS LEFT OUTER JOIN)
    SQL> ed
    Wrote file afiedt.buf
      1  select t1.id, t2.id
      2* from t1 LEFT JOIN t2 ON (t1.id = t2.id)
    SQL> /
            ID         ID
             3          3
             4          4
             1
             2
    -- RIGHT JOIN (SAME AS RIGHT OUTER JOIN)
    SQL> ed
    Wrote file afiedt.buf
      1  select t1.id, t2.id
      2* from t1 RIGHT JOIN t2 ON (t1.id = t2.id)
    SQL> /
            ID         ID
             3          3
             4          4
                        6
                        5
    -- INNER JOIN (REGULAR JOIN)
    SQL> ed
    Wrote file afiedt.buf
      1  select t1.id, t2.id
      2* from t1 INNER JOIN t2 ON (t1.id = t2.id)
    SQL> /
            ID         ID
             3          3
             4          4
    -- FULL OUTER JOIN
    SQL> ed
    Wrote file afiedt.buf
      1  select t1.id, t2.id
      2* from t1 FULL OUTER JOIN t2 ON (t1.id = t2.id)
    SQL> /
            ID         ID
             3          3
             4          4
             1
             2
                        6
                        5
    6 rows selected.
    SQL>
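    If you want to try these queries yourself, a possible setup is the following (the original tables aren't posted, so this DDL is an assumption that simply reproduces the data shown above):
    create table t1 (id number);
    create table t2 (id number);
    insert into t1 values (1);
    insert into t1 values (2);
    insert into t1 values (3);
    insert into t1 values (4);
    insert into t2 values (3);
    insert into t2 values (4);
    insert into t2 values (5);
    insert into t2 values (6);
    commit;
    For completeness, Oracle's older (+) notation expresses the same outer joins: select t1.id, t2.id from t1, t2 where t1.id = t2.id(+) is equivalent to the LEFT OUTER JOIN above.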

  • Outer join/inner join

    I have 4 related tables and want to join them all but can't
    figure out how to do it. Here are the columns that need to join:
    Table 1: EventID, PromoterID
    Table 2: EventID, LeagueID
    Table 3: LeagueID
    Table 4: PromoterID
    I want to join like this:
    Table2.EventID *= Table1.EventID and
    Table2.PromoterID = Table4.PromoterID and
    Table2.LeagueID = Table3.LeagueID
    But I can't do an outer and inner join on the same table.
    Events can have multiple leagues related to them. If I use
    inner joins for everything, it turns out okay until you search for
    an event by League - if that happens and the event has more than
    one related league, the search results will only show the league
    searched by and not all the other related leagues. If you search by
    other info like month or year, all leagues related to a particular
    event show up in the search results for each event. When I try to
    do an outer join on table 2 based on eventid, I then get stuck
    trying to join table 2 and table 3.
    This is probably a simple problem but my brain is feeling too
    fried to figure it out - can anyone help? Thanks!!

    You can also pull in the eventids based on leagues if your
    SQL engine supports subqueries.
    where
    Table2.EventID = Table1.EventID and
    Table2.PromoterID = Table4.PromoterID and
    Table2.LeagueID = Table3.LeagueID
    <cfif isdefined("form.leagueid")>
    <cfif trim(form.leagueid) neq ''>
    and table2.eventid in
    (select t2a.eventid from table2 t2a
    where t2a.eventid = table2.eventid
    and t2a.leagueid = '#form.leagueid#')
    </cfif>
    </cfif>
    <cfif isdefined("form.eventid")>
    <cfif trim(form.eventid) neq ''>
    and table2.eventid = '#form.eventid#'
    </cfif>
    </cfif>
    <cfif isdefined("form.promoterid")>
    <cfif trim(form.promoterid) neq ''>
    and table2.promoterid = '#form.promoterid#'
    </cfif>
    </cfif>
    ...
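    If the SQL engine supports ANSI join syntax, you can also mix outer and inner joins on the same table directly, instead of the old *= notation. A rough sketch using the tables above (the aliases are mine, and which sides should be preserved depends on the real schema, so treat this only as the general pattern):
    select e.EventID, e.PromoterID, el.LeagueID
    from   Table1 e                                                       -- events
           inner join      Table4 p  on p.PromoterID = e.PromoterID       -- promoters
           left outer join Table2 el on el.EventID   = e.EventID          -- event-to-league links
           left outer join Table3 l  on l.LeagueID   = el.LeagueID        -- leagues
    Chaining the second LEFT OUTER JOIN off Table2 keeps events without any league in the result while still returning every related league when there are several.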

  • Difference between roll out and implementation

    Dear sapians,
    What is the major difference between a roll-out and an end-to-end implementation?
    Is it possible to roll out to any number of plants?
    Gururajan .A

    The most important thing to do if you are planning to carry out an implementation followed by several roll-outs, is to do the analysis of the business processes across all potential future sites and not just try to use one central example.
    In one implementation I joined, they had used a small site as the blueprint because it was easier to manage and used most of the areas of SAP. When it came to rolling out the solution, the first few roll-outs went well (because those sites were of a similar size to the original), but when a major site was rolled out to there were MAJOR problems. In the small initial site some users were carrying out many roles / tasks, but in the large site we were rolling out to there were several users for each role / task, and this meant a redesign of most of the processes and different configuration.
    So design for as many sites as you can and then you can roll out, don't just do the design for the initial sites then try to roll out.
    Steve B

  • Disk Utility: Differences between "Zero Out Data" and "7-Pass Erase"?

    I'm wondering if anyone knows if there's a significant difference between the "Zero Out Data" erase option in Disk Utility (specifically Disk Utility 10.5.5), and the "7-Pass Erase" and "35-Pass Erase" options in same software.
    Here's why I'm asking: I have a co-worker with an iMac G5 20" 1.8GHz with a 160GB internal hard drive. When the power supply overheated a week ago due to dust, some hard drive problems resulted. I'm trying to assess whether these are 'soft' formatting problems that can be recovered from, or 'hard' problems requiring replacement of the hard drive and/or power supply.
    Following the failure, I removed the dust and restored the iMac to serviceable form. The power supply seems to be OK now. The next thing was to attempt to recover as much data as possible from the 160GB drive, as the last full backup was a week old. Carbon Copy Cloner, shell copy via 'sudo cp -p -R -v', Finder copy, and DiskWarrior recovery all met with problems. TechTool Pro identified a huge swath of unreadable sectors during repeated surface scans. Unfortunately, these unreadable sectors were located midway in the OS X boot partition (an 80GB partition), and not in the other 80GB partition devoted to lower-priority video data.
    When I was satisfied I had backed up the data to the best of my abilities, I next set out to reformat the drive and see if the bad sectors could be eliminated or remapped out of existence. I did a "Zero Out Data" erasure in Disk Utility (with no errors during the erase), but TechTool Pro showed the bad sectors persisted in equal strength at the same location. I next executed a sixteen hour "7-Pass Erase" (again no errors, and confirming that it takes about an hour per 10GB). The next day when I ran TechTool pro, all of the sector errors had disappeared. I'm a bit perplexed as to why the "7-Pass Erase" seems to have recovered the use of the drive. Is it possible that there are simply thousands of bad sectors now remapped that I'm not seeing? [If so, how do I check for this?] TechTool Pro has not reported any S.M.A.R.T. issues to date on the drive. What am I to make of that?
    There are some related threads I've checked into, but I'm not sure how to properly assess my situation based on this information:
    <http://discussions.apple.com/thread.jspa?threadID=232007>
    <http://discussions.apple.com/thread.jspa?threadID=138559>
    <http://discussions.apple.com/thread.jspa?threadID=118455>
    Since the iMac has three weeks left on its one-year warranty, and I've already moved the user to another machine temporarily, I'm thinking that the smart thing to do is to send it in to Apple to have them look at the power supply and hard drive. That way, when it returns, even if there is still a lingering hardware problem, at least it will be covered under warranty for another 90 days.
    Any thoughts?
    iMac G5 20" 1.8GHz   Mac OS X (10.4.6)   1.25GB RAM, 160GB hard disk, SuperDrive

    HI, Bret.
    The only differences between "Zero Out Data", "7-Pass Erase", and "35-Pass Erase" are the number of times a binary zero is written to every bit on the disk. "Zero Out Data" writes a binary zero once, whereas the 7- and 35-Pass options write a zero seven and 35 times, respectively.
    Technically, one pass with Zero Out Data should be sufficient to map bad sectors out of service, a process also known as sparing. If a bad sector is encountered, it is both marked as "in use" in the directory's allocation table and added to the directory's "bad blocks file."
    My understanding is that the Surface Scan of Tech Tool Pro should identify bad sectors every time it is run unless the bad sectors have been locked out by the drive controller of the ATA drive itself. This is because Surface Scan checks the entire surface of the disk.
    What may have happened is that running "Zero Out Data" spared the bad blocks from a directory standpoint, but did not result in the drive's controller locking out those sectors, for reasons detailed in the "Surface Scan" section of the Tech Tool Pro manual. The 7-Pass Erase, however, may have resulted in the drive's controller locking out the bad sectors, which would be why Surface Scan did not pick them up afterwards.
    Given the problems you described, I concur with your plan to have Apple check the affected computer. You might also want to consider purchasing an AppleCare Protection Plan for that Mac: I recommend and buy these for all my Macs.
    For some additional information on bad sectors, see the "Bad Sectors" section of my "Resolving Disk, Permission, and Cache Corruption" FAQ.
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X
    Note: The information provided in the link(s) above is freely available. However, because I own The X Lab™, a commercial Web site to which some of these links point, the Apple Discussions Terms of Use require I include the following disclosure statement with this post:
    I may receive some form of compensation, financial or otherwise, from my recommendation or link.

  • Huge performance differences between a map listener for a key and filter

    Hi all,
    I wanted to test the different kinds of map listener available in Coherence 3.3.1, as I would like to use it as an event bus. The result was that I found huge performance differences between them. In my use case, I have data which are time-stamped, so the full key of the data is the key which identifies its type plus the time stamp. Unfortunately, when I add my map listener to the cache, I only know the type id but not the time stamp, so I cannot add a listener for a key, only for a filter which tests the value of the type id. When I launch my test, I get terrible performance results; I then tried a listener for a key, which gave me much better results, but in my case I cannot use it.
    Here are my results with a Dual Core of 2.13 GHz
    1) Map Listener for a Filter
    a) No Index
    Create (data always added, the key is composed by the type id and the time stamp)
    Cache.put
    Test 1: Total 42094 millis, Avg 1052, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 2: Total 43860 millis, Avg 1096, Total Tries 40, Cache Size 80000
    Update (data added then updated, the key is only composed by the type id)
    Cache.put
    Test 3: Total 56390 millis, Avg 1409, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 4: Total 51734 millis, Avg 1293, Total Tries 40, Cache Size 2000
    b) With Index
    Cache.put
    Test 5: Total 39594 millis, Avg 989, Total Tries 40, Cache Size 80000
    Cache.putAll
    Test 6: Total 43313 millis, Avg 1082, Total Tries 40, Cache Size 80000
    Update
    Cache.put
    Test 7: Total 55390 millis, Avg 1384, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 8: Total 51328 millis, Avg 1283, Total Tries 40, Cache Size 2000
    2) Map Listener for a Key
    Update
    Cache.put
    Test 9: Total 3937 millis, Avg 98, Total Tries 40, Cache Size 2000
    Cache.putAll
    Test 10: Total 1078 millis, Avg 26, Total Tries 40, Cache Size 2000
    Please help me to find what is wrong with my code because for now it is unusable.
    Best Regards,
    Nicolas
    Here is my code
     import java.io.DataInput;
     import java.io.DataOutput;
     import java.io.IOException;
     import java.util.HashMap;
     import java.util.Map;
     import com.tangosol.io.ExternalizableLite;
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     import com.tangosol.util.Filter;
     import com.tangosol.util.MapEvent;
     import com.tangosol.util.MapListener;
     import com.tangosol.util.extractor.ReflectionExtractor;
     import com.tangosol.util.filter.EqualsFilter;
     import com.tangosol.util.filter.MapEventFilter;
     public class TestFilter {
          /**
           * To run a specific test, just launch the program with one parameter which
           * is the test index.
           */
          public static void main(String[] args) {
               if (args.length != 1) {
                    System.out.println("Usage : java TestFilter 1-10|all");
                    System.exit(1);
               }
               final String arg = args[0];
               if (arg.endsWith("all")) {
                    for (int i = 1; i <= 10; i++) {
                         test(i);
                    }
               } else {
                    final int testIndex = Integer.parseInt(args[0]);
                    if (testIndex < 1 || testIndex > 10) {
                         System.out.println("Usage : java TestFilter 1-10|all");
                         System.exit(1);
                    }
                    test(testIndex);
               }
          }
          @SuppressWarnings("unchecked")
          private static void test(int testIndex) {
               final NamedCache cache = CacheFactory.getCache("test-cache");
               final int totalObjects = 2000;
               final int totalTries = 40;
               if (testIndex >= 5 && testIndex <= 8) {
                    // Add index
                    cache.addIndex(new ReflectionExtractor("getKey"), false, null);
               }
               // Add listeners
               for (int i = 0; i < totalObjects; i++) {
                    final MapListener listener = new SimpleMapListener();
                    if (testIndex < 9) {
                         // Listen to data with a given filter
                         final Filter filter = new EqualsFilter("getKey", i);
                         cache.addMapListener(listener, new MapEventFilter(filter), false);
                    } else {
                         // Listen to data with a given key
                         cache.addMapListener(listener, new TestObjectSimple(i), false);
                    }
               }
               // Load data
               long time = System.currentTimeMillis();
               for (int iTry = 0; iTry < totalTries; iTry++) {
                    final long currentTime = System.currentTimeMillis();
                    final Map<Object, Object> buffer = new HashMap<Object, Object>(totalObjects);
                    for (int i = 0; i < totalObjects; i++) {
                         final Object obj;
                         if (testIndex == 1 || testIndex == 2 || testIndex == 5 || testIndex == 6) {
                              // Create data with key with time stamp
                              obj = new TestObjectComplete(i, currentTime);
                         } else {
                              // Create data with key without time stamp
                              obj = new TestObjectSimple(i);
                         }
                         if ((testIndex & 1) == 1) {
                              // Load data directly into the cache
                              cache.put(obj, obj);
                         } else {
                              // Load data into a buffer first
                              buffer.put(obj, obj);
                         }
                    }
                    if (!buffer.isEmpty()) {
                         cache.putAll(buffer);
                    }
               }
               time = System.currentTimeMillis() - time;
               System.out.println("Test " + testIndex + ": Total " + time + " millis, Avg " + (time / totalTries) + ", Total Tries " + totalTries + ", Cache Size " + cache.size());
               cache.destroy();
          }
          public static class SimpleMapListener implements MapListener {
               public void entryDeleted(MapEvent evt) {}
               public void entryInserted(MapEvent evt) {}
               public void entryUpdated(MapEvent evt) {}
          }
          public static class TestObjectComplete implements ExternalizableLite {
               private static final long serialVersionUID = -400722070328560360L;
               private int key;
               private long time;
               public TestObjectComplete() {}
               public TestObjectComplete(int key, long time) {
                    this.key = key;
                    this.time = time;
               }
               public int getKey() {
                    return key;
               }
               public void readExternal(DataInput in) throws IOException {
                    this.key = in.readInt();
                    this.time = in.readLong();
               }
               public void writeExternal(DataOutput out) throws IOException {
                    out.writeInt(key);
                    out.writeLong(time);
               }
          }
          public static class TestObjectSimple implements ExternalizableLite {
               private static final long serialVersionUID = 6154040491849669837L;
               private int key;
               public TestObjectSimple() {}
               public TestObjectSimple(int key) {
                    this.key = key;
               }
               public int getKey() {
                    return key;
               }
               public void readExternal(DataInput in) throws IOException {
                    this.key = in.readInt();
               }
               public void writeExternal(DataOutput out) throws IOException {
                    out.writeInt(key);
               }
               public int hashCode() {
                    return key;
               }
               public boolean equals(Object o) {
                    return o instanceof TestObjectSimple && key == ((TestObjectSimple) o).key;
               }
          }
     }
     Here is my coherence config file
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>test-cache</cache-name>
                   <scheme-name>default-distributed</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>          
              <distributed-scheme>
                   <scheme-name>default-distributed</scheme-name>
                   <backing-map-scheme>
                        <class-scheme>
                             <scheme-ref>default-backing-map</scheme-ref>
                        </class-scheme>
                   </backing-map-scheme>
              </distributed-scheme>
              <class-scheme>
                   <scheme-name>default-backing-map</scheme-name>
                   <class-name>com.tangosol.util.SafeHashMap</class-name>
              </class-scheme>
         </caching-schemes>
     </cache-config>

     Hi Robert,

     Indeed, only the Filter.evaluate(Object obj) method is invoked, but the object passed to it is a MapEvent. <<

     In fact, I do not need to implement EntryFilter to get a MapEvent; I could get the same result (in my last message) by writing cache.addMapListener(listener, filter, true) instead of cache.addMapListener(listener, new MapEventFilter(filter), true).

     I believe that when the MapEventFilter delegates to your filter it always passes a value object to your filter (old or new), meaning a value will be deserialized. If you instead used your own filter, you could avoid deserializing the value, which usually is much larger, and go only to the key object. This would of course only be noticeable if you indeed used a much heavier cached value class.

     The hashCode() and equals() do not matter on the filter class <<

     I'm not so sure, since I noticed that these methods are implemented in the EqualsFilter class, that they are called at runtime, and that the performance results are better when you add them.

     That interests me... In what circumstances did you see them invoked? On the storage node before sending an event, or upon registering a filtered listener? If the second, then I guess the listeners are stored in a hash-based map of collections keyed by a filter, and indeed that might be relevant, as in that case it will cause fewer passes on the filter for multiple listeners with an equalling filter.

     DataOutput.writeInt(int) writes 4 bytes. ExternalizableHelper.writeInt(DataOutput, int) writes 1-5 bytes (or 1-6?), with numbers with small absolute values consuming fewer bytes. Similar differences exist for the long type as well, but your stamp attribute will probably be a large number... <<

     I tried it, but in my use case I got the same results. I guess it would become interesting if I serialized/deserialized many more objects.

     Also, if Coherence serializes an ExternalizableLite object, it writes out its class name (except if it is a Coherence XmlBean). If you define your key as an XmlBean, and add your class into the classname cache configuration in ExternalizableHelper.xml, then instead of the classname only an int will be written. This way you can spare a large percentage of the bandwidth consumed by transferring your key instance, as it has only a small number of attributes. For the value object it might or might not be so relevant, considering that it will probably contain many more attributes. However, in the case of a lite event, the value is not transferred at all. <<

     I tried it too, and in my use case I noticed that we get objects nearly twice as light as an ExternalizableLite object, but it's slower to get them. It is very interesting to keep in mind, though, if we would like to reduce the network traffic.

     Yes, these are minor differences at the moment.
     As for the performance of XmlBean, it is a hack, but you might try overriding the readExternal/writeExternal methods with your own usual ExternalizableLite implementation stuff. That way you get the advantages of the XmlBean classname cache and avoid its reflection-based operation, at the cost of having to extend XmlBean.
     Also, sooner or later the TCMP protocol and the distributed cache storages will support using PortableObject as a transmission format, which enables using your own classname resolution and allows you to omit the classname from your objects. Unfortunately, I don't know when it will be implemented.

     > But finally, I guess that I found the best solution for my specific use case, which is to use a map listener for a key which has no time stamp; since the time stamp is never null, I just had to check the time stamp properly in the equals method.

     I would still recommend using a separate key class, using a custom filter which accesses only the key and not the value, and, if possible, registering a lite listener instead of a heavy one. Try it with a much heavier cached value class where the differences are more pronounced.
     Best regards,
     Robert

  • Performance difference between 3.33 6 core and dual 2.4 quad core.

    Sorry if this seems like a silly question to the technically savvy out there, but I am looking to replace my older Mac Pro, and I was wondering what the performance difference between those two machines is.
    I am a film editor and run Final Cut as well as Avid Media Composer, plus I do effects work in After Effects. Which of the two would be better for that work? Media storage issues aside, would I get better performance for video work from the lower-GHz dual quad-core or the higher-GHz 6-core?
    Both machines are essentially the same cost, so I wonder if there is a better choice among the two, and why.
    I tried some research, but couldn't find a direct comparison or a general guide that seemed to correspond to those configuration differences.

    Hello nibford,
    For photo editing and digital imaging I would recommend the 6-Core 3.33GHz as the better buy, but then I do not use Final Cut, Avid Media Composer, or After Effects.
    The following article, from the legendary, and much quoted, Mac Performance Guide, gives an excellent insight into the new 2010 range of Mac Pros:
    http://macperformanceguide.com/Reviews-MacProWestmere.html
    Unfortunately, from your point of view, but not mine, the performance tests are predominantly photography related, but there is a comparison of Performance with Adobe After Effects on Page 28:
    http://macperformanceguide.com/Reviews-MacProWestmere-AfterEffects.html
    There is virtually no difference in performance between the 6-Core 3.33GHz and the 8-Core 2.4GHz in those tests.
    However, in the Performance with Handbrake Video Encoding on Page 29, the 6-Core 3.33GHz is only marginally slower than the 8-Core 2.93GHz, which would indicate that it would be substantially faster than the 8-Core 2.4GHz for this process:
    http://macperformanceguide.com/Reviews-MacProWestmere-Handbrake.html
    Again the 6-Core 3.33GHz outperforms the 8-Core 2.4GHz in the Cinebench tests on Page 30:
    http://macperformanceguide.com/Reviews-MacProWestmere-Cinebench.html
    The only advantage that the 8-Core 2.4GHz currently appears to have over the 6-Core 3.33GHz is that it has twice the number of memory slots, and can accommodate a maximum of 64GB RAM compared to the 32GB of the latter.
    In the future, applications might take full advantage of the 8-Core 2.4GHz's additional cores, but at the moment, in my opinion, the 6-Core's much faster processor tips the scales in its favour.
    Regards,
    Bill

  • What is the difference between Video-out and mirroring?

    What is the difference between Video-out and mirroring? I can't get iPhone 4 video to work on my TV screen
    I have just bought an MD098ZM/A (Apple 30-pin Digital AV Adapter). I am struggling to get it to show a picture on my TV. I know I'm doing something right because the audio is coming out of my TV speakers but no picture on the TV screen.
    I have used the same HDMI channel (on the TV side) with the same cable and my thunderbolt port (MacBook Air) without any trouble - and on the same app (BBC iPlayer download then full-screen mode).
    Now I note that the packaging for the MD098ZM/A says video-out on iPhone 4 but mirroring only on iPhone 4S. I only have an iPhone 4 (not the 4S). Now if the lack of iPhone 4 support for mirroring means that I can't play video material out to my TV, then in what sense is there any video-out capability at all?
    There is only safety and warranty paperwork in the Apple adapter packaging - no help information. And I haven't found further guidance online either.
    I did see a suggestion online that basic non-mirroring video-out (for this adapter) only works with some external TV sets. Is there any way of finding out which? I'm using a Sanyo CE32LD90-B LCD TV, if it helps.
    So far not doing very well.

    Now found these but have had to give up on this adapter!
    http://manuals.info.apple.com/en_US/iphone_user_guide.pdf
    http://support.apple.com/kb/HT4108

  • Graph axes assignment: performance difference between ATTR_ACTIVE_XAXIS and ATTR_PLOT_XAXIS

    Hi,
    I am using a xy graph with both x axes and both y axes. There are two possibilities when adding a new plot:
    1) PlotXY and SetPlotAttribute ( , , , ATTR_PLOT_XAXIS, );
    2) SetCtrlAttribute ( , , ATTR_ACTIVE_XAXIS, ) and PlotXY
    I tend to prefer the second method because I would assume it to be slightly faster, but what do the experts say?
    Thanks!  
    Solved!
    Go to Solution.

    Hi Wolfgang,
    thank you for your interesting question.
     First of all I want to say that, generally speaking, using the command "SetCtrlAttribute" is the best way to handle your elements. I would suggest using this command whenever it is possible.
     Now, to your question regarding the performance difference between "SetCtrlAttribute" and "SetPlotAttribute".
     I think the performance difference occurs because, in the background of the "SetPlotAttribute" command, another function called "ProcessDrawEvents" is executed. This refreshes your plot again and again within the function, whereas with "SetCtrlAttribute" the refreshing is done once after the function has finished. This might be a possible reason.
    For example you have a progress bar which shows you the progress of installing a driver:
    "SetPlotAttribute" would show you the progress bar moving step by step until installing the driver is done.
    "SetCtrlAttribute" would just show you an empty bar at the start and a full progress bar when the installing process is done.
    I think it is like that but I can't tell you 100%, therefore I would need to ask our developers.
     If you want, I can forward the question to them; this might take some time. I would also need to know which version of CVI you are using.
     Please let me know if you want me to forward your question.
    Have a nice day,
    Abduelkerim
    Sales
    NI Germany

  • Is there a performance difference between Automation Plug-ins and the scripting system?

    We currently have a tool that, through the scripting system, merges and hides layers by layer groups, exports them, and then moves to the next layer group.  There is some custom logic and channel merging that occasionally occurs in the merging of an individual layer group.  These operations are occurring through the scripting system (actually, through C# making direct function calls to Photoshop), and there are some images where these operations take ~30-40 minutes to complete on very large images.
    Is there a performance difference between doing the actions in this way as opposed to having these actions occur in an automation plug-in?
    Thanks,

    Thanks for the reply.    I ended up just benchmarking the current implementation that we are using (which goes through DOM from all indications, I wasn't the original author of the code) and found that accessing each layer was taking upwards of 300 ms.  I benchmarked iterating through the layers with PIUGetInfoByIndexIndex (in the Getter automation plug-in) and found that the first layer took ~300 ms, but the rest took ~1 ms.  With that information, I decided that it was worthwhile rewriting the functionality in an Automation plug-in.

  • Is there a big performance difference between HD's

    Hi All
      Is there a big performance difference between a 5400 and a 7200 RPM hard drive? I'm asking because I want to pick up a new iMac and am stuck between choosing the higher-end 21.5" and the lower-end 27", and am trying to determine if the difference in price is worth the extra shekels.

    I was wondering about the same question (5400rpm vs. 7200rpm) in a 15" standard MBP, deciding whether to get a 1TB serial ATA drive @ 5400rpm or a 750GB serial ATA drive @ 7200rpm. (Sorry to jump in.)
    For the most part, I'm a general user - web surfing/research, Word processing/Excel/Powerpoint (pretty basic), etc. BUT I do like to take a lot of photos and plan on doing some editing on them (nothing advanced) and creating slideshows with my photos/music (ex. my Europe trip photos or a slideshow of the grandchildren as a gift to my parents, etc.)
    Some folks mention "video editing" in reference to going with the faster speed (7200rpm) if that's what you plan on doing. But what exactly do they mean by "video editing"? Is slideshow creation the same?
    Just wondering for my needs, whether I should go with the 750GB serial ATA drive @ 7200 rpm or the 1TB serial ATA drive @ 5400rpm ($50 more yet with more storage space which would help with my increasing photo files every year).
    Thanks

  • What is the performance difference between an Oct 2011 i5 and a June 2012 i7 in performance if they both have 16GB RAM

    what is the performance difference between an Oct 2011 i5 and a June 2012 i7 in performance if they both have 16GB RAM

    At least. The Core i7 can drive up to 8 concurrent 64-bit threads, more than an i5. Plus, the 2011 models used the Sandy Bridge family of Intel chips, whereas the 2012 models use the latest Ivy Bridge, which is a faster, lower-latency architecture.

  • Difference between system.out.println() & out.println()

    Could anybody please tell me what the difference is between System.out.println() and out.println() in JSP? I have noticed that sometimes the former works and sometimes the latter, but I don't know where to use which, or the difference between the two.

    System.out is the console (or log file potentially)
    In JSPs, out is defined for you as the stream send back to the browser. e.g. the HTML the browser displays.
    System.out always works, but you might not see the output.
    out only works if a variable has been defined, like: PrintStream out.

  • Difference between " system.out.print( ) " and " system.out.println( ) "?

    Hi friends, I'm a beginner in Java; I only started today with The Complete Reference. Can you help me and tell me the difference between "System.out.print()" and "System.out.println()"?

    Rashid2753 wrote:
    hi
    Yes. But it's a good idea for new Java programmers to become accustomed to using helpful resources like the API Javadocs, because it's much faster than waiting for replies every time you have a question. For experienced developers the API Javadocs are an indispensable resource.

  • Difference between System.out.println() and out.println()

    Hi,
    In JSP we want to write the JNLP file contents to an output stream using "out.println()".
    I have the content of the JNLP file in a String.
    "System.out.println()" prints the correct JNLP file contents, but "out.println()" writes the wrong contents, which are taken from the server JNLP file.
    How to solve this problem?
    Please guide me in this.
    Thanks and Regards:
    Dheeraj

    Where is the "System.out.println()" running from? I don't think your problem has anything to do with the difference between System.out.println and out.println. Both methods print what is passed to them. It sounds like you are printing two different files because you are running in two different environments.
    JSPs run on a server and only have access to files on the server or on a network the server is on. If you are trying to print a file on a user's system, JSP can't do it.

Maybe you are looking for

  • Local video streaming and photo gallery displaying

    Hi, there. I have a macbook pro, an iPhone4, a video beamer and a wi-fi connection. I would like to know how I can, localy, stream a live video I'm shooting with my iPhone4 in order to see it in real time on the beamer (connected to the Macbook). I w

  • Drives show in disk utility, not on desktop

    A recent development. Some FW 800 drives stopped appearing.

  • Assessable value not maintained for the material X

    Dear all, i have maintained material chapter ID,Plant and materialchapter id  combination ,assessable value , cenvat determination, and vendor excise details in the transaction J1ID . Even after maintaining all these conditions , i am not able to sav

  • Workflow in SharePoint 2013: Update item in list

    I have a task list where anyone can post a suggestion for a blog. There is a string field for the status. I have a library where the author can upload the blog article when ready to send for review. Uploading the document triggers the workflow to sta

  • How Do I Preview QT Mpeg4 Video in Windows Explorer?

    I have tried everything I can to try to get QuickTime videos to be viewed at thumbnails in Windows Explorer, but only Windows AVI and DIVX videos will be previewed as thumbnails. I tried these things: 1. Installed QuickTime 7.6.2.14.0 updated this we