Query performance - difference between the EXISTS and IN operators
Hi,
I have two tables, VD and ID. Each table contains one lakh (100,000) rows.
CREATE TABLE VD (
SRNO NUMBER(12),
UNIT VARCHAR2(2)
);
CREATE TABLE ID (
SRNO NUMBER(12),
PID VARCHAR2(2),
SID VARCHAR2(20)
);
In my application I need to display the column SRNO from table VD if that SRNO exists in table ID for the given PID and SID.
Which of the two queries below has better performance?
SELECT SRNO FROM VD
WHERE SRNO IN (SELECT SRNO FROM ID WHERE PID = :A AND SID = :B);
SELECT SRNO FROM VD V
WHERE EXISTS (SELECT 'X' FROM ID Z WHERE Z.PID = :A AND Z.SID = :B AND Z.SRNO = V.SRNO);
Version : Oracle 10g
Thanks ....
Sathi
user10732947 wrote:
Please refer to :-)
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:953229842074
And which part are you referring to specifically?
It backs up what Toon already mentioned....
ops$tkyte@ORA10GR2> SELECT /* EXISTS example */
2 e.employee_id, e.first_name, e.last_name, e.salary
3 FROM employees e
4 WHERE EXISTS (SELECT 1 FROM orders o /* Note 1 */
5 WHERE e.employee_id = o.sales_rep_id /* Note 2 */
6 AND o.customer_id = 144);
Execution Plan
Plan hash value: 551415261
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 67 | 4087 | 49 (3)|
| 1 | NESTED LOOPS | | 67 | 4087 | 49 (3)|
| 2 | SORT UNIQUE | | 67 | 1139 | 14 (0)|
|* 3 | TABLE ACCESS FULL | ORDERS | 67 | 1139 | 14 (0)|
| 4 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 44 | 1 (0)|
|* 5 | INDEX UNIQUE SCAN | EMP_PK | 1 | | 0 (0)|
Predicate Information (identified by operation id):
3 - filter("O"."CUSTOMER_ID"=144)
5 - access("E"."EMPLOYEE_ID"="O"."SALES_REP_ID")
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> SELECT /* IN example */
2 e.employee_id, e.first_name, e.last_name, e.salary
3 FROM employees e
4 WHERE e.employee_id IN (SELECT o.sales_rep_id /* Note 4 */
5 FROM orders o
6 WHERE o.customer_id = 144);
Execution Plan
Plan hash value: 551415261
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 67 | 4087 | 49 (3)|
| 1 | NESTED LOOPS | | 67 | 4087 | 49 (3)|
| 2 | SORT UNIQUE | | 67 | 1139 | 14 (0)|
|* 3 | TABLE ACCESS FULL | ORDERS | 67 | 1139 | 14 (0)|
| 4 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 1 | 44 | 1 (0)|
|* 5 | INDEX UNIQUE SCAN | EMP_PK | 1 | | 0 (0)|
Predicate Information (identified by operation id):
3 - filter("O"."CUSTOMER_ID"=144)
5 - access("E"."EMPLOYEE_ID"="O"."SALES_REP_ID")
Two identical execution plans...
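For a semi-join like this, where the subquery column cannot be NULL, IN and EXISTS are logically interchangeable, which is why the optimizer is free to produce the same plan for both. A minimal sketch of that equivalence, using Python's sqlite3 in place of Oracle (the table ID is renamed to id_t here, and the sample rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Miniature versions of the poster's VD and ID tables (ID renamed to id_t).
cur.execute("CREATE TABLE vd (srno INTEGER, unit TEXT)")
cur.execute("CREATE TABLE id_t (srno INTEGER, pid TEXT, sid TEXT)")
cur.executemany("INSERT INTO vd VALUES (?, ?)", [(i, "U") for i in range(1, 6)])
cur.executemany("INSERT INTO id_t VALUES (?, ?, ?)",
                [(1, "P1", "S1"), (3, "P1", "S1"), (4, "P2", "S1")])
params = ("P1", "S1")
# IN form of the query
in_rows = cur.execute(
    "SELECT srno FROM vd WHERE srno IN "
    "(SELECT srno FROM id_t WHERE pid = ? AND sid = ?)", params).fetchall()
# EXISTS form with a correlated subquery
exists_rows = cur.execute(
    "SELECT srno FROM vd v WHERE EXISTS "
    "(SELECT 1 FROM id_t z WHERE z.pid = ? AND z.sid = ? AND z.srno = v.srno)",
    params).fetchall()
# Both forms return the same rows (srno 1 and 3)
assert in_rows == exists_rows == [(1,), (3,)]
```

The interesting cases are NOT IN vs NOT EXISTS, where NULLs in the subquery change the result.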
Similar Messages
-
Major query performance difference between Oracle 8 and 9
Hello, I have the following query
select distinct UPPER(rf.module), ruf.rpt_seq
from role_func rf, rd_url_func ruf
where role_name = 'ADMIN'
and UPPER(rf.module) not in (select UPPER(ruf.module)
from role_func rf2, rd_url_func ruf2
where UPPER(ruf2.module) = UPPER(rf.module)
and UPPER(rf2.module(+)) = UPPER(ruf2.module)
and rf2.url(+) = ruf2.url
and rf2.role_name(+) = 'ADMIN'
and rf2.url is null)
and UPPER(rf.module) = UPPER(ruf.module)
and ruf.rpt_seq = (SELECT min(rpt_seq)
FROM rd_url_func ruf3
WHERE ruf3.module = ruf.module)
order by ruf.rpt_seq;
Now on Oracle 8, this executes almost instantly. On Oracle 9, however, it takes a very long time (around 30 seconds). Both databases contain the same data, and none of the tables is large - each has only about 400 rows. Any suggestions on what could be causing this difference, or at least how I can find out the problem?
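One thing worth checking before digging into the plans: the query uses NOT IN, whose semantics differ from NOT EXISTS as soon as the subquery can return a NULL - NOT IN then returns no rows at all, and the optimizer also has fewer rewrite options for it. A small sketch of that semantic difference, using Python's sqlite3 with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (module TEXT)")
cur.execute("CREATE TABLE s (module TEXT)")
cur.executemany("INSERT INTO t VALUES (?)", [("A",), ("B",)])
# The subquery's source contains a NULL
cur.executemany("INSERT INTO s VALUES (?)", [("A",), (None,)])
not_in = cur.execute(
    "SELECT module FROM t WHERE module NOT IN (SELECT module FROM s)").fetchall()
not_exists = cur.execute(
    "SELECT module FROM t WHERE NOT EXISTS "
    "(SELECT 1 FROM s WHERE s.module = t.module)").fetchall()
# 'B' NOT IN ('A', NULL) evaluates to UNKNOWN, so NOT IN filters everything out
assert not_in == []
# NOT EXISTS has no NULL trap and returns 'B'
assert not_exists == [("B",)]
```

If the intent here is anti-join semantics, rewriting the NOT IN as NOT EXISTS is often both safer and easier for the optimizer.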
Thanks
OK, does this help:
explain plan for select distinct UPPER(rf.module), ruf.rpt_seq
from role_func rf, rd_url_func ruf
where role_name = 'ADMIN'
and UPPER(rf.module) not in (select UPPER(ruf.module)
from role_func rf2, rd_url_func ruf2
where UPPER(ruf2.module) = UPPER(rf.module)
and UPPER(rf2.module(+)) = UPPER(ruf2.module)
and rf2.url(+) = ruf2.url
and rf2.role_name(+) = 'ADMIN'
and rf2.url is null)
and UPPER(rf.module) = UPPER(ruf.module)
and ruf.rpt_seq = (SELECT min(rpt_seq)
FROM rd_url_func ruf3
WHERE ruf3.module = ruf.module)
order by ruf.rpt_seq;
select
substr (lpad(' ', level-1) || operation || ' (' || options || ')',1,30 ) "Operation",
object_name "Object"
from
plan_table
start with id = 0
connect by prior id=parent_id;
This is from Oracle 8, where it executes fast:
Operation Object
SELECT STATEMENT ()
SORT (UNIQUE)
FILTER ()
HASH JOIN ()
TABLE ACCESS (FULL) RD_URL_FUNC
TABLE ACCESS (FULL) ROLE_FUNC
FILTER ()
FILTER ()
NESTED LOOPS (OUTER)
TABLE ACCESS (FULL) RD_URL_FUNC
TABLE ACCESS (BY INDEX R ROLE_FUNC
INDEX (UNIQUE SCAN) RFUN_PK
SORT (AGGREGATE)
TABLE ACCESS (FULL) RD_URL_FUNC
This is from Oracle 9, where it executes slow:
Operation Object
SELECT STATEMENT ()
SORT (UNIQUE)
FILTER ()
SORT (GROUP BY)
FILTER ()
HASH JOIN ()
TABLE ACCESS (FULL) ROLE_FUNC
HASH JOIN ()
TABLE ACCESS (FULL) RD_URL_FUNC
TABLE ACCESS (FULL) RD_URL_FUNC
FILTER ()
FILTER ()
NESTED LOOPS (OUTER)
TABLE ACCESS (FULL) RD_URL_FUNC
TABLE ACCESS (BY INDEX ROLE_FUNC
INDEX (UNIQUE SCAN) RFUN_PK
Can someone help interpret the difference between the execution plans, and how to make Oracle use the first one on Oracle 9? -
Performance difference between After effects and Premiere Pro
Hi,
I'm sure this question has come up before and I've found lots of threads talking about After Effects performance, so I apologise if this is a repeat; I'm very new to Premiere/AE, moving from FCP.
Why is performance in Premiere Pro excellent, with real-time video playback even when applying effects, while if I import the same clip into After Effects I get 15 fps on a 720p 50 fps clip? Am I missing something - a preference maybe - or should I expect this kind of performance without completing a RAM preview first?
As a note, I'm testing this on a 2012 MacBook Air as I'm planning on doing simple cuts and adjustments on the road.
"Am I missing something?"
Yes: the difference between an edit suite and a compositing program. Seriously, simply accept things as they are and don't try to apply principles from one program to the other, since they are based on completely different workflow paradigms which in turn affect the underlying tech.
Mylenium -
Difference between User-Exits and Customer-Exits?
Hi,
Can anyone give me the difference between the user-exits and customer-exits?
Please respond at the earliest. Thanks in advance.
Hi,
USER EXITS ->
1. Introduction:
User exits (function module exits) are exits developed by SAP.
The exit is implemented as a call to a function module.
The code for the function module is written by the developer.
You do not write the code directly in the function module, but in the include that is implemented in the function module.
The naming standard for function module exits is:
EXIT_<program name><3-digit suffix>
The call to a function module exit is implemented as:
CALL CUSTOMER-FUNCTION '<3-digit suffix>'
http://www.sap-img.com/abap/a-short-tutorial-on-user-exits.htm
CUSTOMER EXITS-> t-code CMOD.
As of Release 4.6A SAP provides a new enhancement technique, the Business Add-Ins.
Among others, this enhancement technique has the advantage of
being based on a multi-level system landscape (SAP, country versions, IS solutions, partner,
customer, and so on)
instead of a two-level landscape (SAP, customer) as with the customer exits.
You can create definitions and implementations of business add-ins at any level of the system landscape.
To unify enhancements of the SAP Standard you can migrate customer exits to business add-ins.
http://help.sap.com/saphelp_nw04/helpdata/en/c8/1975cc43b111d1896f0000e8322d00/content.htm
To find the user exits for any tcode:
1. Get the development class of the tcode from SE93.
2. Go to transaction SMOD and press F4.
3. Enter the development class and press ENTER.
This will show you the exits for the tcode.
Or execute this report:
http://www.erpgenie.com/sap/abap/code/abap26.htm
which gives the list of exits for a tcode.
http://help.sap.com/saphelp_nw04/helpdata/en/bf/ec079f5db911d295ae0000e82de14a/frameset.htm
For information on Exits, check these links
http://www.sap-img.com/abap/a-short-tutorial-on-user-exits.htm
http://www.sapgenie.com/abap/code/abap26.htm
http://www.sap-img.com/abap/what-is-user-exits.htm
http://wiki.ittoolbox.com/index.php/HOWTO:Implement_a_screen_exit_to_a_standard_SAP_transaction
http://www.easymarketplace.de/userexit.php
http://www.sap-img.com/abap/a-short-tutorial-on-user-exits.htm
http://www.sappoint.com/abap/userexit.pdf
User-Exit
Regards,
Lijo Joseph
*Reward if useful. -
Query on differences between table Icons and types in smartforms
Hello,
I have a question regarding the apparent differences between tables in smartforms.
I have noticed on some of the default smartforms that are supplied the table icon is the same as on the
'Create new session' button at the top of an SAP session window. The icon on the table that I am currently working on is like a 'spreadsheet' design - a heading with columns, as shown in the current documentation. The two styles of table work differently.
Is the difference down to the fact that one was created in an older implementation of SAP?
The reason I ask is that the table I referred to initially is easier when configuring cells.
Regards
Mike.
Hello Karthik,
Thanks for taking the time to reply to my question.
The difference in the icons but with essentially the same function has always confused me since starting Smartforms.
Thank you for enlightening me.
I asked the question because the Complex node has a feature that I could have used. I have managed though to solve my problem using a table node.
Ten points awarded.
Best Regards
Mike Spear. -
[8i] Performance difference between a view and an in-line view?
I have a query with a few 'UNION ALL' statements... each chunk of the query that is joined by the 'UNION ALL' references the same in-line view, but in each chunk it is joined to different tables. If I actually create the view and reference it in each chunk, will it still run the query behind the view for each chunk, or will it only do it once? I just want to know if it will improve the performance of my query. And, I'm not talking about creating a materialized view, just a regular one.
Because of the complexity of my query, I tried out a simple (really simple) example instead...
First, I created my simple view
Then, I ran a query with a UNION ALL in it against that view
Next, I ran the same UNION ALL query, but using in-line views instead of the one I created, and these are the results I got:
(against the view I created)
890 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=RULE
1 0 UNION-ALL
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'PART'
3 2 INDEX (RANGE SCAN) OF 'PART_PK' (UNIQUE)
4 1 TABLE ACCESS (BY INDEX ROWID) OF 'PART'
5 4 INDEX (RANGE SCAN) OF 'PART_PK' (UNIQUE)
Statistics
14 recursive calls
0 db block gets
1080 consistent gets
583 physical reads
0 redo size
54543 bytes sent via SQL*Net to client
4559 bytes received via SQL*Net from client
61 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
890 rows processed
timing for: query_timer
Elapsed: 00:00:01.67
(with in-line views)
890 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=RULE
1 0 UNION-ALL
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'PART'
3 2 INDEX (RANGE SCAN) OF 'PART_PK' (UNIQUE)
4 1 TABLE ACCESS (BY INDEX ROWID) OF 'PART'
5 4 INDEX (RANGE SCAN) OF 'PART_PK' (UNIQUE)
Statistics
0 recursive calls
0 db block gets
1076 consistent gets
582 physical reads
0 redo size
54543 bytes sent via SQL*Net to client
4559 bytes received via SQL*Net from client
61 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
890 rows processed
timing for: query_timer
Elapsed: 00:00:00.70
Here, it appears that the explain plans are the same, though the statistics and timing show better performance using in-line views...
Next, I tried the same 2 queries, but using the CHOOSE hint, since the explain plans above show that it defaults to using the RBO...
Here are those results:
(hint + use view)
890 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=HINT: CHOOSE (Cost=1840 Card=1071
Bytes=57834)
1 0 UNION-ALL
2 1 TABLE ACCESS (FULL) OF 'PART' (Cost=920 Card=642 Bytes=3
4668)
3 1 TABLE ACCESS (FULL) OF 'PART' (Cost=920 Card=429 Bytes=2
3166)
Statistics
14 recursive calls
8 db block gets
12371 consistent gets
10850 physical reads
0 redo size
60726 bytes sent via SQL*Net to client
4441 bytes received via SQL*Net from client
61 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
890 rows processed
timing for: query_timer
Elapsed: 00:00:02.90
(hint + in-line view)
890 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=HINT: CHOOSE (Cost=1840 Card=1071
Bytes=57834)
1 0 UNION-ALL
2 1 TABLE ACCESS (FULL) OF 'PART' (Cost=920 Card=642 Bytes=3
4668)
3 1 TABLE ACCESS (FULL) OF 'PART' (Cost=920 Card=429 Bytes=2
3166)
Statistics
0 recursive calls
8 db block gets
12367 consistent gets
10850 physical reads
0 redo size
60726 bytes sent via SQL*Net to client
4441 bytes received via SQL*Net from client
61 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
890 rows processed
timing for: query_timer
Elapsed: 00:00:02.99
Obviously, for this simple example, using the CHOOSE hint caused worse performance than letting it default to the RBO (though the explain plans still look the same to me), but what I find interesting is that when I used the hint, the version of the query using the in-line view became at best equivalent to the one using the view, if not worse.
But, based on these results, I don't know that I can extrapolate to my complex query... or can I? I'm thinking I'm going to have to actually go through and make my views for the complex query and test it out.... -
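For what it's worth, a simple (non-materialized) view is normally just expanded into the enclosing query, so each reference to it behaves like the equivalent in-line view - the identical explain plans above are consistent with that. A minimal sketch of the result equivalence, using Python's sqlite3 (the PART table and the predicates here are invented stand-ins for the real query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE part (id INTEGER PRIMARY KEY, kind TEXT)")
cur.executemany("INSERT INTO part VALUES (?, ?)",
                [(i, "A" if i % 2 else "B") for i in range(1, 11)])
# A plain view over PART
cur.execute("CREATE VIEW part_v AS SELECT id, kind FROM part WHERE id > 2")
# UNION ALL query referencing the named view in each branch
via_view = cur.execute(
    "SELECT id FROM part_v WHERE kind = 'A' "
    "UNION ALL SELECT id FROM part_v WHERE kind = 'B' ORDER BY id").fetchall()
# Same query with the view's text repeated as an in-line view
inline = cur.execute(
    "SELECT id FROM (SELECT id, kind FROM part WHERE id > 2) WHERE kind = 'A' "
    "UNION ALL "
    "SELECT id FROM (SELECT id, kind FROM part WHERE id > 2) WHERE kind = 'B' "
    "ORDER BY id").fetchall()
# The two formulations return identical rows
assert via_view == inline
```

Whether the view's defining query runs once or per reference is a plan question, not a correctness one; only a materialized view would actually precompute the rows.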
Performance Difference Between Windows 7 and 8.1
When I was using Adobe Premiere near the end of last week, it was running pretty slow on Windows 8.1, especially with the clips I put on the timeline. It also crashed occasionally when editing 20 minutes of video and, though only sometimes, when adding text. So I went back to Windows 7, and it seemed to run faster with none of the problems I had on Windows 8.1.
My System specs are these:
AMD Athlon II X3 Processor, 2900 mhz and 3 cores
11GB's of RAM
2x 1 terabyte hard disks
1 x 60GB SSD (where my main operating system is installed)
ATI Radeon HD 4200 (Built on the motherboard)
[Please choose only a short description for the thread title.]
Message was edited by: Jim Simon
Hi Chris,
do you mean CC or CC 2014? I also work on Windows 8.1 and it seems to me that CC 2014 runs much slower than CC. I checked on Adobe's website and the technical requirements for CC 2014 are the same as for CC, but it seems like the new version needs more CPU and faster hard drives. -
Difference between Generic extraction and Generic Delta
Hi Experts,
Please give some information on the queries below:
1) Difference between Generic extraction and Generic Delta.
2) How to achieve Generic delta.
3) When we go for Generic delta instead of generic extraction.
Thank you in advance. Points will be assigned.
Thanks,
Ragu.R
Hi,
Generic delta is the delta load done using Generic DS.
Generic extraction
Usage:
1. When the standard extractors do not support the extraction you need. If SAP does not have a standard extractor for getting the data you need from R3, you have to go for a generic extractor.
2. If you create a custom object, say by combining certain base tables in R3, e.g. custom tables ZTAB1 and ZTAB2. These are not SAP-provided tables and there will not be any standard extractors, so in cases like this you have to go for generic extractors.
How:
You use transaction RSO2, and you can also set the delta based on one of three characteristics: timestamp, calendar day (calday), or pointer (a sequence number).
Once you create and activate it, the extractor will be available in ROOSOURCE (the table in R3 where all the data sources are listed).
Refer:
/people/siegfried.szameitat/blog/2005/09/29/generic-extraction-via-function-module
http://help.sap.com/saphelp_nw04/helpdata/en/3f/548c9ec754ee4d90188a4f108e0121/content.htm
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/84bf4d68-0601-0010-13b5-b062adbb3e33
Generic Extraction via Function Module
/people/siegfried.szameitat/blog/2005/09/29/generic-extraction-via-function-module
thanks,
JituK -
Huge performance differences between a map listener for a key and filter
Hi all,
I wanted to test the different kinds of map listener available in Coherence 3.3.1, as I would like to use it as an event bus. The result was that I found huge performance differences between them. In my use case, I have data which are time-stamped, so the full key of the data is the key which identifies its type plus the time stamp. Unfortunately, when I add my map listener to the cache, I only know the type id but not the time stamp; thus I cannot add a listener for a key, only for a filter which tests the value of the type id. When I launch my test I get terrible performance results. I then tried a listener for a key, which gave me much better results, but in my case I cannot use it.
Here are my results with a Dual Core of 2.13 GHz
1) Map Listener for a Filter
a) No Index
Create (data always added, the key is composed by the type id and the time stamp)
Cache.put
Test 1: Total 42094 millis, Avg 1052, Total Tries 40, Cache Size 80000
Cache.putAll
Test 2: Total 43860 millis, Avg 1096, Total Tries 40, Cache Size 80000
Update (data added then updated, the key is only composed by the type id)
Cache.put
Test 3: Total 56390 millis, Avg 1409, Total Tries 40, Cache Size 2000
Cache.putAll
Test 4: Total 51734 millis, Avg 1293, Total Tries 40, Cache Size 2000
b) With Index
Cache.put
Test 5: Total 39594 millis, Avg 989, Total Tries 40, Cache Size 80000
Cache.putAll
Test 6: Total 43313 millis, Avg 1082, Total Tries 40, Cache Size 80000
Update
Cache.put
Test 7: Total 55390 millis, Avg 1384, Total Tries 40, Cache Size 2000
Cache.putAll
Test 8: Total 51328 millis, Avg 1283, Total Tries 40, Cache Size 2000
2) Map Listener for a Key
Update
Cache.put
Test 9: Total 3937 millis, Avg 98, Total Tries 40, Cache Size 2000
Cache.putAll
Test 10: Total 1078 millis, Avg 26, Total Tries 40, Cache Size 2000
Please help me to find what is wrong with my code because for now it is unusable.
Best Regards,
Nicolas
Here is my code
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import com.tangosol.io.ExternalizableLite;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MapListener;
import com.tangosol.util.extractor.ReflectionExtractor;
import com.tangosol.util.filter.EqualsFilter;
import com.tangosol.util.filter.MapEventFilter;
public class TestFilter {

    /**
     * To run a specific test, just launch the program with one parameter which
     * is the test index.
     */
    public static void main(String[] args) {
        if (args.length != 1) {
            System.out.println("Usage : java TestFilter 1-10|all");
            System.exit(1);
        }
        final String arg = args[0];
        if (arg.endsWith("all")) {
            for (int i = 1; i <= 10; i++) {
                test(i);
            }
        } else {
            final int testIndex = Integer.parseInt(args[0]);
            if (testIndex < 1 || testIndex > 10) {
                System.out.println("Usage : java TestFilter 1-10|all");
                System.exit(1);
            }
            test(testIndex);
        }
    }

    @SuppressWarnings("unchecked")
    private static void test(int testIndex) {
        final NamedCache cache = CacheFactory.getCache("test-cache");
        final int totalObjects = 2000;
        final int totalTries = 40;
        if (testIndex >= 5 && testIndex <= 8) {
            // Add index
            cache.addIndex(new ReflectionExtractor("getKey"), false, null);
        }
        // Add listeners
        for (int i = 0; i < totalObjects; i++) {
            final MapListener listener = new SimpleMapListener();
            if (testIndex < 9) {
                // Listen to data with a given filter
                final Filter filter = new EqualsFilter("getKey", i);
                cache.addMapListener(listener, new MapEventFilter(filter), false);
            } else {
                // Listen to data with a given key
                cache.addMapListener(listener, new TestObjectSimple(i), false);
            }
        }
        // Load data
        long time = System.currentTimeMillis();
        for (int iTry = 0; iTry < totalTries; iTry++) {
            final long currentTime = System.currentTimeMillis();
            final Map<Object, Object> buffer = new HashMap<Object, Object>(totalObjects);
            for (int i = 0; i < totalObjects; i++) {
                final Object obj;
                if (testIndex == 1 || testIndex == 2 || testIndex == 5 || testIndex == 6) {
                    // Create data with key with time stamp
                    obj = new TestObjectComplete(i, currentTime);
                } else {
                    // Create data with key without time stamp
                    obj = new TestObjectSimple(i);
                }
                if ((testIndex & 1) == 1) {
                    // Load data directly into the cache
                    cache.put(obj, obj);
                } else {
                    // Load data into a buffer first
                    buffer.put(obj, obj);
                }
            }
            if (!buffer.isEmpty()) {
                cache.putAll(buffer);
            }
        }
        time = System.currentTimeMillis() - time;
        System.out.println("Test " + testIndex + ": Total " + time + " millis, Avg "
                + (time / totalTries) + ", Total Tries " + totalTries
                + ", Cache Size " + cache.size());
        cache.destroy();
    }

    public static class SimpleMapListener implements MapListener {
        public void entryDeleted(MapEvent evt) {}
        public void entryInserted(MapEvent evt) {}
        public void entryUpdated(MapEvent evt) {}
    }

    public static class TestObjectComplete implements ExternalizableLite {
        private static final long serialVersionUID = -400722070328560360L;
        private int key;
        private long time;

        public TestObjectComplete() {}

        public TestObjectComplete(int key, long time) {
            this.key = key;
            this.time = time;
        }

        public int getKey() {
            return key;
        }

        public void readExternal(DataInput in) throws IOException {
            this.key = in.readInt();
            this.time = in.readLong();
        }

        public void writeExternal(DataOutput out) throws IOException {
            out.writeInt(key);
            out.writeLong(time);
        }
    }

    public static class TestObjectSimple implements ExternalizableLite {
        private static final long serialVersionUID = 6154040491849669837L;
        private int key;

        public TestObjectSimple() {}

        public TestObjectSimple(int key) {
            this.key = key;
        }

        public int getKey() {
            return key;
        }

        public void readExternal(DataInput in) throws IOException {
            this.key = in.readInt();
        }

        public void writeExternal(DataOutput out) throws IOException {
            out.writeInt(key);
        }

        public int hashCode() {
            return key;
        }

        public boolean equals(Object o) {
            return o instanceof TestObjectSimple && key == ((TestObjectSimple) o).key;
        }
    }
}
Here is my coherence config file
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>test-cache</cache-name>
<scheme-name>default-distributed</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<distributed-scheme>
<scheme-name>default-distributed</scheme-name>
<backing-map-scheme>
<class-scheme>
<scheme-ref>default-backing-map</scheme-ref>
</class-scheme>
</backing-map-scheme>
</distributed-scheme>
<class-scheme>
<scheme-name>default-backing-map</scheme-name>
<class-name>com.tangosol.util.SafeHashMap</class-name>
</class-scheme>
</caching-schemes>
</cache-config>
Message was edited by:
user620763
Hi Robert,
Indeed, only the Filter.evaluate(Object obj) method is invoked, but the object passed to it is a MapEvent. <<
In fact, I do not need to implement EntryFilter to get a MapEvent; I could get the same result (in my last message) by writing
cache.addMapListener(listener, filter, true)
instead of
cache.addMapListener(listener, new MapEventFilter(filter), true)
I believe, when the MapEventFilter delegates to your filter it always passes a value object to your filter (old or new), meaning a value will be deserialized.
If you instead used your own filter, you could avoid deserializing the value which usually is much larger, and go to only the key object. This would of course only be noticeable if you indeed used a much heavier cached value class.
The hashCode() and equals() do not matter on the filter class. <<
I'm not so sure, since I noticed that these methods are implemented in the EqualsFilter class, that they are called at runtime, and that the performance results are better when you add them.
That interests me... In what circumstances did you see them invoked? On the storage node before sending an event, or upon registering a filtered listener?
If the second, then I guess the listeners are stored in a hash-based map of collections keyed by a filter, and indeed that might be relevant as in that case it will cause less passes on the filter for multiple listeners with an equalling filter.
DataOutput.writeInt(int) writes 4 bytes. ExternalizableHelper.writeInt(DataOutput, int) writes 1-5 bytes (or 1-6?), with numbers with small absolute values consuming fewer bytes. Similar differences exist for the long type as well, but your stamp attribute will probably be a large number... <<
I tried it, but in my use case I got the same results. I guess it would become interesting if I serialized/deserialized many more objects.
Also, if Coherence serializes an
ExternalizableLite object, it writes out its
class-name (except if it is a Coherence XmlBean). If
you define your key as an XmlBean, and add your class
into the classname cache configuration in
ExternalizableHelper.xml, then instead of the
classname, only an int will be written. This way you
can spare a large percentage of bandwidth consumed by
transferring your key instance as it has only a small
number of attributes. For the value object, it might
or might not be so relevant, considering that it will
probably contain many more attributes. However, in
case of a lite event, the value is not transferred at
all.<< I tried it too and in my use case, I noticed that
we get objects nearly twice lighter than an
ExternalizableLite object but it's slower to get
them. But it is very intersting to keep in mind, if
we would like to reduce the network traffic.
Yes, these are minor differences at the moment.
As for the performance of XMLBean, it is a hack, but you might try overriding the readExternal/writeExternal method with your own usual ExternalizableLite implementation stuff. That way you get the advantages of the xmlbean classname cache, and avoid its reflection-based operation, at the cost of having to extend XMLBean.
Also, sooner or later the TCMP protocol and the distributed cache storages will also support using PortableObject as a transmission format, which enables using your own classname resolution and allow you to omit the classname from your objects. Unfortunately, I don't know when it will be implemented.
But finally, I guess I found the best solution for my specific use case, which is to use a map listener for a key which has no time stamp; since the time stamp is never null, I just had to check the time stamp properly in the equals method.
I would still recommend to use a separate key class, use a custom filter which accesses only the key and not the value, and if possible register a lite listener instead of a heavy one. Try it with a much heavier cached value class where the differences are more pronounced.
Best regards,
Robert -
Differences between operational systems data modeling and data warehouse data modeling
Hello Everyone,
Can anybody help me understand the differences between operational systems data modeling and data warehouse data modeling?
Thanks
Hello A S!
Do you mean the difference between modelling in normal form, as in operational systems (OLTP), e.g. 3NF, and modelling an InfoCube in a data warehouse (OLAP)?
While in an OLTP system you want data tables free of redundancy and ready for transactions - writing and reading a few records often - in an OLAP system you need to read a lot of data for every query you run against the database, and you often aggregate these amounts of data.
Therefore you use a different principle for the database schema, called the star schema. You have one central table (the fact table) which holds the key figures and has keys to other tables with characteristics. These other tables are called dimension tables; they hold combinations of the characteristics. Normally you design your dimensions to be small, so access to the data is more efficient.
The star schema in SAP BI is a little more complex than explained here, but it follows the same concept.
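The star schema described above can be sketched in a few lines. This uses Python's sqlite3 with hypothetical fact and dimension names (SAP BI's extended star schema adds SID and master-data tables, which are not modelled here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Dimension table: small, holds the characteristic combinations
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
# Fact table: large, holds the key figures plus foreign keys into the dimensions
cur.execute("CREATE TABLE fact_sales (product_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO dim_product VALUES (?, ?)", [(1, "food"), (2, "tools")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(1, 10.0), (1, 5.0), (2, 7.5)])
# Typical OLAP access pattern: join fact to dimension, then aggregate
rows = cur.execute(
    "SELECT d.category, SUM(f.amount) "
    "FROM fact_sales f JOIN dim_product d ON d.product_id = f.product_id "
    "GROUP BY d.category ORDER BY d.category").fetchall()
assert rows == [("food", 15.0), ("tools", 7.5)]
```

Note how the fact table is deliberately denormalized down to keys and measures, while descriptive attributes live only in the small dimension tables - the opposite of a 3NF OLTP design.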
Best regards,
Peter -
Is there a performance difference between Automation Plug-ins and the scripting system?
We currently have a tool that, through the scripting system, merges and hides layers by layer group, exports them, and then moves to the next layer group. Some custom logic and channel merging occasionally occurs in the merging of an individual layer group. These operations go through the scripting system (actually, through C# making direct function calls into Photoshop), and there are some images where they take ~30-40 minutes to complete on very large images.
Is there a performance difference between doing the actions in this way as opposed to having these actions occur in an automation plug-in?
Thanks,
Thanks for the reply. I ended up just benchmarking the current implementation we are using (which goes through the DOM from all indications; I wasn't the original author of the code) and found that accessing each layer was taking upwards of 300 ms. I benchmarked iterating through the layers with PIUGetInfoByIndexIndex (in the Getter automation plug-in) and found that the first layer took ~300 ms, but the rest took ~1 ms. With that information, I decided it was worthwhile rewriting the functionality in an automation plug-in.
-
Difference between parallel sequence and parallel operation in a routing.
Hi Experts,
Can any one explain me with example the difference between parallel sequence and parallel operation in a routing? wHEN CAN WE USE PARALLEL OPEARTION AND PARALLEL SEQUNCE WITH COMPONENT ALLOCATION.
Regards
Deepak sharma
I think you need to modify your question... I think you are asking about parallel sequences and alternative sequences. Below are the details from the SAP site.
A parallel sequence enables you to process several operations at the same time.
You use an alternative sequence, for example, if:
- The production flow is different for certain lot-size ranges.
For instance, you can machine a workpiece on a conventional machine or on NC machines. An NC machine has a longer set-up time than a conventional machine, but the machining costs are considerably lower. Therefore whether you use NC machines will depend on the lot size.
- The production flow changes under certain business conditions.
For instance, if you have a capacity problem, you have some production steps performed externally by a vendor. -
Difference between temp tables and table variables, and which one performs better?
Hello,
Could anyone explain the difference between temp tables (#, ##) and table variables (DECLARE @V TABLE (EMP_ID INT))?
Which one is recommended for better performance?
Also, is it possible to create CLUSTERED and NONCLUSTERED indexes on a table variable?
In my case, 1-2 days of transactional data come to more than 3-4 million rows. I tried using both # temp tables and table variables, and found the table variable was faster.
Do table variables use memory or disk space?
Thanks Shiven:) If Answer is Helpful, Please Vote
Check the following link to see the differences between temp tables and table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
Temp tables and table variables both use memory and tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on it. But if there are fewer records then table variables are well suited.
On table variables explicit indexes are not allowed; if you define a PK column, a clustered index will be created automatically.
But it also depends upon the specific scenario you are dealing with - can you share it?
~manoj | email: http://scr.im/m22g
http://sqlwithmanoj.wordpress.com
MCCA 2011 | My FB Page -
Hi,
I am using an xy graph with both x axes and both y axes. There are two possibilities when adding a new plot:
1) PlotXY and SetPlotAttribute ( , , , ATTR_PLOT_XAXIS, );
2) SetCtrlAttribute ( , , ATTR_ACTIVE_XAXIS, ) and PlotXY
I tend to prefer the second method because I would assume it to be slightly faster, but what do the experts say?
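For reference, the two call orders can be sketched like this (LabWindows/CVI sketch only: panelHandle, PANEL_GRAPH, the data arrays, and the VAL_ constants are assumed from the project and CVI headers, so this will not compile outside the CVI environment):

```c
/* 1) Plot first, then move the new plot to the top x axis: */
int plot = PlotXY (panelHandle, PANEL_GRAPH, x, y, n,
                   VAL_DOUBLE, VAL_DOUBLE, VAL_THIN_LINE,
                   VAL_EMPTY_SQUARE, VAL_SOLID, 1, VAL_RED);
SetPlotAttribute (panelHandle, PANEL_GRAPH, plot,
                  ATTR_PLOT_XAXIS, VAL_TOP_XAXIS);

/* 2) Select the active x axis first, then plot: */
SetCtrlAttribute (panelHandle, PANEL_GRAPH,
                  ATTR_ACTIVE_XAXIS, VAL_TOP_XAXIS);
PlotXY (panelHandle, PANEL_GRAPH, x, y, n,
        VAL_DOUBLE, VAL_DOUBLE, VAL_THIN_LINE,
        VAL_EMPTY_SQUARE, VAL_SOLID, 1, VAL_RED);
```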
Thanks!
Solved!
Go to Solution.
Hi Wolfgang,
thank you for your interesting question.
First of all I want to say that, generally speaking, using the command "SetCtrlAttribute" is the best way to handle your elements. I would suggest using this command whenever possible.
Now, to your question regarding the performance difference between "SetCtrlAttribute" and "SetPlotAttribute".
I think the performance difference occurs because, in the background of the "SetPlotAttribute" command, another function called "ProcessDrawEvents" is executed. This refreshes your plot again and again inside the function, whereas with "SetCtrlAttribute" the refresh is done once, after the function has finished. This might be a possible reason.
For example, imagine a progress bar which shows you the progress of installing a driver:
"SetPlotAttribute" would show you the progress bar moving step by step until installing the driver is done.
"SetCtrlAttribute" would just show you an empty bar at the start and a full progress bar when the installing process is done.
I think it is like that, but I can't tell you 100%; for that I would need to ask our developers.
If you want, I can forward the question to them; this might take some time. I would also need to know which version of CVI you are using.
Please let me know if you want me to forward your question.
Have a nice day,
Abduelkerim
Sales
NI Germany -
Difference between Report Painter and ABAP Query
Can anyone please tell me the difference between Report Painter and ordinary ALV/classical reporting, and also the difference between Report Painter and ABAP Query? What will the output format be like in Report Painter? If anyone has any documents, please send them to
[email protected]
Thanks,
Joseph.
Hi,
ABAP Query is an ABAP Workbench tool that enables users without knowledge of the ABAP programming language to define and execute their own reports.
In ABAP Query, you enter texts and select fields and options to determine the structure of the reports. Fields are selected from functional areas and can be assigned a sequence by numbering.
link for abap query --
https://forums.sdn.sap.com/click.jspa?searchID=221911&messageID=2790992
The Report Painter, in contrast, enables you to report on data from various applications. It uses a graphical report structure that forms the basis for the report definition. When defining the report, you work with a structure that corresponds to the final structure of the report when the report data is output.
link for report painter --
https://forums.sdn.sap.com/click.jspa?searchID=221874&messageID=1818114
Regards,
pankaj singh