Which performs better?
Hello guys,
I am developing a scenario in which I am doing an RFC lookup through BPM. I have two ways to do it:
1. Call the RFC using a synchronous send step in BPM.
2. Use a user-defined function (UDF) to do the RFC lookup.
Can you please tell me which option is better for performance?
Thanks.
Hi Keith,
In terms of performance, your second option is much better.
1. In the case of synchronous send steps, resource consumption depends on the target system: if the target system is slow, overall performance will degrade.
Resource consumption is explained here
http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
2. Here the BPM overhead is avoided, so overall performance improves.
Have a look here
/people/arpit.seth/blog/2005/06/27/rfc-scenario-using-bpm--starter-kit
Regards,
Prateek
Similar Messages
-
Revision: 10156
Author: [email protected]
Date: 2009-09-11 09:22:09 -0700 (Fri, 11 Sep 2009)
Log Message:
Resubmitting change for SDK-22035, should perform much better.
QE notes: None
Doc notes: None
Bugs: SDK-22035
Reviewer: Paul
Tests run: Binding cyclone, performance suite
Is noteworthy for integration: No
Ticket Links:
http://bugs.adobe.com/jira/browse/SDK-22035
Modified Paths:
flex/sdk/trunk/modules/compiler/src/java/flex2/compiler/mxml/ImplementationGenerator.java
flex/sdk/trunk/modules/compiler/src/java/flex2/compiler/mxml/gen/ClassDefLib.vm
flex/sdk/trunk/modules/compiler/src/java/flex2/compiler/mxml/rep/BindingExpression.java
If you're still using Buckminster 3.6, I strongly suggest switching to 3.7 - it has a number of bug fixes and improvements. This applies to both headless and the IDE (assuming Eclipse 3.7 Indigo).
Matthew -
Which performs better: MOVE or MOVE-CORRESPONDING?
Hi SAP ABAP experts,
Which is better performance-wise:
MOVE or MOVE-CORRESPONDING?
Regards, Rajneesh
> A
>
> * access path and indexes
Indexes and hence access paths are defined when you design the data model. They are part of the model design.
> * too large numbers of records or executions
Consider a data warehouse environment - you have to deal with huge loads of data; a million records are considered "small" here. Terms like "small" or "large" depend on the context you are working in.
If you have never heard of star transformation, partitioning and parallel query, you will get lost here!
OLTP is different: you have short transactions, but a huge number of concurrent users.
You would not even consider Bitmap indexes in an OLTP environment - but maybe a design that evenly distributes data blocks over several files for avoiding hot spots on heavily used tables.
> * processing of internal tables => no quadratic coding
>
> these are the main performance issues!
>
> > Performance is defined at design time
> partly yes, but more is determined during runtime, you must check everything at least once. Many things can go wrong and will go wrong.
Sorry, it's all about the data model design - sure, you have to tune later in development, but you really can't tune successfully on a BAD data model ... you have to redesign.
If the model is good, there is still a chance the developers choose the worst access to it, but then you have the potential to tune with success because your model allows for a better access strategy.
The decisions you make in the design phase determine the potential for tuning later.
>
> * database does not what you expect
I call this the black box view: the developer is not interested in the underlying database.
Why would we have different DB vendors if they all behaved the same way? E.g., compare concurrency
and consistency implementations in various DBs - totally different. You can't simply apply your working knowledge of one database to another DB product. I learned that the hard way while implementing on INFORMIX and ORACLE... -
Selecting "better performance" vs "better battery" via desktop icon
Several months ago, I recall being able to click on the battery icon on the top right when running on battery power to select the type of battery consumption I wanted to use. I believe it was something to the effect of "better performance" and "better battery life." Now, when I click on the battery, I cannot switch my power/performance preferences via the battery icon. Did I do something, or did a software update remove this feature? How do I get it back?
Is this a Unibody MacBook Pro? It appears so. If so, this setting is controlled ONLY in the Energy Saver preference pane now, because of the logout required to switch graphics processors.
-
Suggestions for increased performance and better memory utilization for FTE
We all know that there is a pretty big downside to creating potentially thousands of DisplayObjectContainers (TextLines).
o - they are slow to create
o - they may be short lived
o - they occupy lots of memory
o - they may need to be generated frequently
Currently, there is only one way to find out all the information you need and that is to create all the TextLines for a given data set.
This means that FTE does not scale well. It becomes very slow for large data sets that need to be regenerated.
I am proposing a possible solution and hope that Adobe will consider it.
If we had the right tools we could create a sliding window of display objects. With large data sets only a fraction of the content is actually visible. So only the objects that are actually visible need to be created. There is no way to do this efficiently with FTE at the present time.
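The bookkeeping behind such a sliding window is straightforward once per-line metrics are available. As a rough illustration (a hypothetical Python model, not FTE API; the function name and the fixed line height are my own assumptions), computing which lines intersect the viewport looks like this:

```python
def visible_range(scroll_y, viewport_h, line_heights):
    """Return (first, last) indices of the lines intersecting the viewport.

    scroll_y: pixels scrolled from the top; viewport_h: viewport height;
    line_heights: per-line heights, as a TextLineInfo-style summary would supply.
    """
    first = last = None
    y = 0.0
    for i, h in enumerate(line_heights):
        # a line is visible if any part of it overlaps [scroll_y, scroll_y + viewport_h)
        if y + h > scroll_y and y < scroll_y + viewport_h:
            if first is None:
                first = i
            last = i
        y += h
    return first, last

# 1,000 lines of 14 px each, viewport 100 px tall, scrolled 350 px down:
print(visible_range(350, 100, [14.0] * 1000))  # → (25, 32)
```

Only those 8 lines would need real display objects; everything else stays as cheap metrics, which is exactly what a getTextLineInfo-style call would make possible.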
So we need a few new methods and classes that parallel what you already have.
New Method 1)
TextBlock.getTextLineInfo (width:Number, lineOffset:Number, fitSomething:Boolean) : TextLineInfo
This method returns simple information about all the lines in a text block. No display objects are generated.
class TextLineInfo
public var maxWidth:Number; // maximum width of all the textlines in the textBlock
public var totalHeight:Number; // totalHeight of all the textlines in the textBlock
public var numLines:int; // number of lines in the lineInfo Array
public var lineInfo:Array; // array of LineInfo items for each textline
class LineInfo // sample - more or less may be needed
public var rawTextLength:int;
public var textWidth:Number;
public var textHeight:Number;
public var ascent:Number;
public var descent:Number;
public var textBlockBeginIndex:int;
Now getTextLineInfo needs to be as fast as possible. Find an advanced game programmer to optimize, write it in assembler, put grease on it.... do whatever it takes to make it fast.
New Method 2)
TextBlock.createTextLines (textLineInfo:TextLineInfo, begIdx:int, endIdx:int) : Array
Creates and returns an Array of TextLine objects based on the previously created TextLineInfo. A range can be specified.
It should be obvious that the above functions will improve the situation. Since this parallels what you already have, it should not be earth-shaking to implement.
New Display Object type
Much of the time you do not need a full-blown DisplayObjectContainer for a TextLine. I suggest an additional lightweight TextLine class. A good parallel would be the difference between Sprite and Shape.
Now I have done some testing with this idea. Since you cannot implement this fully as it stands, I had to make some concessions. This sample contains 100,004 characters of data. You can resize it and it will always be fast. This sample only creates the visible portion of the display, but you may scroll the invisible portions into view. Each time the page is resized, it will jump back to the top, because of the current limitations of FTE.
The sample also contains a caret and allows the selection of an area, but no editing, copy, paste, etc., is available for this test.
If I did not do special handling for this, it would lock up for sure and be very user-unfriendly.
Now, it takes a moment to load 100k into the TextElement, so there may be a pause before you see the data. I may need to improve this. Once loaded, it performs quite well.
Without the above or similar optimizations, FTE is just not going to scale up very well at all.
Don
Jeff, I don't see how a fix for that bug means waiting for a major release. It seems it just does not work as you expected and perhaps documented. It should not break any code, should it? This seems a somewhat major improvement.
Using recreateTextLine in 10.1 I have these results so far:
My test case is 668 lines, run on my slow test machine so the timing can be picked up.
When using just createTextLine and creating all text lines:
......Using removeChildAt to first remove all the old textLines then create all textLines:
..........it takes ~670ms
......Removing all children at once by removing the container then create all textLines:
..........it takes ~570ms
Using recreateTextLine, getChildAt, then create all textLines:
..........it takes ~670ms
So recreateTextLine does not improve performance, it seems - just better memory use, I suppose.
Don -
Which SQL query performance is better
Putting this in a new thread so that I don't confuse and mix the query with my similar threads.
Based on all of your suggestions for the below query, I finally came up with 2 possible RESOLUTIONS for it.
So, which of the 2 resolutions pasted below would be best for PERFORMANCE? I mean, which will PERFORM FASTER and is more efficient?
***The original QUERY is at the bottom.
Resolution 1:
/*** Divided into 2 separate queries using UNION ALL ***/ Is UNION ALL costly?
SELECT null, null, null, null, null,null, null, null, null,
null,null, null, null, null, null,null,null, count(*) as total_results
FROM
test_person p,
test_contact c1,
test_org_person porg
WHERE p.CLM_ID ='11' and
p.person_id = c1.ref_id(+)
AND p.person_id = porg.o_person_id
and porg.O_ORG_ID ='11'
UNION ALL
SELECT lastname, firstname,person_id, middlename,socsecnumber,
birthday, U_NAME,
U_ID,
PERSON_XML_DATA,
BUSPHONE,
EMLNAME,
ORG_NAME,
EMPID,
EMPSTATUS,
DEPARTMENT,
org_relationship,
enterprise_name,
null
FROM
(
SELECT
beta.*, rownum as alpha
FROM
(
SELECT
p.lastname, p.firstname, p.person_id, p.middlename, p.socsecnumber,
to_char(p.birthday,'mm-dd-yyyy') as birthday, p.username as U_NAME,
p.clm_id as U_ID,
p.PERSON_XML_DATA.extract('/').getStringVal() AS PERSON_XML_DATA,
c1.CONTACT_DATA.extract('//phone[1]/number/text()').getStringVal() AS BUSPHONE,
c1.CONTACT_DATA.extract('//email[2]/address/text()').getStringVal() AS EMLNAME,
c1.CONTACT_DATA.extract('//company/text()').getStringVal() AS ORG_NAME,
porg.emplid as EMPID, porg.empl_status as EMPSTATUS, porg.DEPARTMENT,
porg.org_relationship,
porg.enterprise_name
FROM
test_person p,
test_contact c1,
test_org_person porg
WHERE p.CLM_ID ='11' and
p.person_id = c1.ref_id(+)
AND p.person_id = porg.o_person_id
and porg.O_ORG_ID ='11'
ORDER BY
upper(p.lastname), upper(p.firstname)
) beta
)
WHERE
alpha BETWEEN 1 AND 100
Resolution 2:
/**** here, the innermost count query is removed ****/
select *
FROM
(
SELECT
beta.*, rownum as alpha
FROM
(
SELECT
p.lastname, p.firstname, p.person_id, p.middlename, p.socsecnumber,
to_char(p.birthday,'mm-dd-yyyy') as birthday, p.username as U_NAME,
p.clm_id as U_ID,
p.PERSON_XML_DATA.extract('/').getStringVal() AS PERSON_XML_DATA,
c1.CONTACT_DATA.extract('//phone[1]/number/text()').getStringVal() AS BUSPHONE,
c1.CONTACT_DATA.extract('//email[2]/address/text()').getStringVal() AS EMLNAME,
c1.CONTACT_DATA.extract('//company/text()').getStringVal() AS ORG_NAME,
porg.emplid as EMPID, porg.empl_status as EMPSTATUS, porg.DEPARTMENT,
porg.org_relationship,
porg.enterprise_name,
COUNT(*) OVER () cnt -- this is the analytic count
FROM
test_person p,
test_contact c1,
test_org_person porg
WHERE p.CLM_ID ='11' and
p.person_id = c1.ref_id(+)
AND p.person_id = porg.o_person_id
and porg.O_ORG_ID ='11'
ORDER BY upper(p.lastname), upper(p.firstname)
) beta
)
WHERE
alpha BETWEEN 1 AND 100
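The gain in Resolution 2 comes from computing the total in the same pass as the page, instead of running the same joins twice as in Resolution 1. The analytic count can be demonstrated outside Oracle too; here is a small sqlite3 sketch (a hypothetical single-column table, not the poster's schema; window functions need SQLite 3.25+):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test_person (lastname TEXT)")
con.executemany("INSERT INTO test_person VALUES (?)",
                [("adams",), ("baker",), ("clark",), ("davis",)])

# One pass: each returned row carries the full result count in cnt,
# so no separate COUNT(*) query (Resolution 1's extra UNION ALL branch) is needed.
rows = con.execute("""
    SELECT lastname, COUNT(*) OVER () AS cnt
    FROM test_person
    ORDER BY lastname
    LIMIT 2
""").fetchall()
print(rows)  # first page of 2 rows, each tagged with the total of 4
```

The window function is evaluated over the whole result set before the page limit is applied, which is exactly the behaviour COUNT(*) OVER () gives in the Oracle query above.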
ORIGINAL QUERY
SELECT *
FROM
(
SELECT
beta.*, rownum as alpha
FROM
(
SELECT
p.lastname, p.firstname, porg.DEPARTMENT,
porg.org_relationship,
porg.enterprise_name,
(
SELECT
count(*)
FROM
test_person p, test_contact c1, test_org_person porg
WHERE
p.p_id = c1.ref_id(+)
AND p.p_id = porg.o_p_id
$where_clause$
) AS results
FROM
test_person p, test_contact c1, test_org_person porg
WHERE
p.p_id = c1.ref_id(+)
AND p.p_id = porg.o_p_id
$where_clause$
ORDER BY
upper(p.lastname), upper(p.firstname)
) beta
)
WHERE
alpha BETWEEN #startRec# AND #endRec#
I have now run the explain plans and put them below separately for each SQL. The SQL queries for each of the items are posted in the first post of this thread.
Resolution 1:
EXPLAIN PLANS SECTION
1- Original
Plan hash value: 1981931315
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 22859 | 187M| | 26722 (81)| 00:05:21 |
| 1 | UNION-ALL | | | | | | |
| 2 | SORT AGGREGATE | | 1 | 68 | | | |
| 3 | MERGE JOIN OUTER | | 22858 | 1517K| | 5290 (1)| 00:01:04 |
| 4 | MERGE JOIN | | 22858 | 982K| | 4304 (1)| 00:00:52 |
|* 5 | INDEX RANGE SCAN | test_org_person_I3 | 24155 | 542K| | 363 (1)| 00:00:05 |
|* 6 | SORT JOIN | | 22858 | 468K| 1448K| 3941 (1)| 00:00:48 |
|* 7 | TABLE ACCESS FULL | test_PERSON | 22858 | 468K| | 3716 (1)| 00:00:45 |
|* 8 | SORT JOIN | | 68472 | 1604K| 4312K| 985 (2)| 00:00:12 |
| 9 | INDEX FAST FULL SCAN | test_CONTACT_FK1 | 68472 | 1604K| | 113 (1)| 00:00:02 |
|* 10 | VIEW | | 22858 | 187M| | 21433 (1)| 00:04:18 |
| 11 | COUNT | | | | | | |
| 12 | VIEW | | 22858 | 187M| | 21433 (1)| 00:04:18 |
| 13 | SORT ORDER BY | | 22858 | 6875K| 14M| 21433 (1)| 00:04:18 |
| 14 | MERGE JOIN OUTER | | 22858 | 6875K| | 18304 (1)| 00:03:40 |
| 15 | MERGE JOIN | | 22858 | 4397K| | 11337 (1)| 00:02:17 |
| 16 | SORT JOIN | | 22858 | 3013K| 7192K| 5148 (1)| 00:01:02 |
|* 17 | TABLE ACCESS FULL | test_PERSON | 22858 | 3013K| | 3716 (1)| 00:00:45 |
|* 18 | SORT JOIN | | 24155 | 1462K| 3800K| 6189 (1)| 00:01:15 |
| 19 | TABLE ACCESS BY INDEX ROWID| test_ORG_PERSON | 24155 | 1462K| | 5535 (1)| 00:01:07 |
|* 20 | INDEX RANGE SCAN | test_ORG_PERSON_FK1| 24155 | | | 102 (1)| 00:00:02 |
|* 21 | SORT JOIN | | 68472 | 7422K| 15M| 6968 (1)| 00:01:24 |
| 22 | TABLE ACCESS FULL | test_CONTACT | 68472 | 7422K| | 2895 (1)| 00:00:35 |
Predicate Information (identified by operation id):
5 - access("PORG"."O_ORG_ID"='11')
6 - access("P"."PERSON_ID"="PORG"."O_PERSON_ID")
filter("P"."PERSON_ID"="PORG"."O_PERSON_ID")
7 - filter("P"."CLM_ID"='11')
8 - access("P"."PERSON_ID"="C1"."REF_ID"(+))
filter("P"."PERSON_ID"="C1"."REF_ID"(+))
10 - filter("ALPHA"<=25 AND "ALPHA">=1)
17 - filter("P"."CLM_ID"='11')
18 - access("P"."PERSON_ID"="PORG"."O_PERSON_ID")
filter("P"."PERSON_ID"="PORG"."O_PERSON_ID")
20 - access("PORG"."O_ORG_ID"='11')
21 - access("P"."PERSON_ID"="C1"."REF_ID"(+))
filter("P"."PERSON_ID"="C1"."REF_ID"(+))
-------------------------------------------------------------------------------
Resolution 2:
EXPLAIN PLANS SECTION
1- Original
Plan hash value: 1720299348
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 23518 | 13M| | 11545 (1)| 00:02:19 |
|* 1 | VIEW | | 23518 | 13M| | 11545 (1)| 00:02:19 |
| 2 | COUNT | | | | | | |
| 3 | VIEW | | 23518 | 13M| | 11545 (1)| 00:02:19 |
| 4 | WINDOW SORT | | 23518 | 3536K| | 11545 (1)| 00:02:19 |
| 5 | MERGE JOIN OUTER | | 23518 | 3536K| | 11545 (1)| 00:02:19 |
| 6 | MERGE JOIN | | 23518 | 2985K| | 10587 (1)| 00:02:08 |
| 7 | SORT JOIN | | 23518 | 1561K| 4104K| 4397 (1)| 00:00:53 |
|* 8 | TABLE ACCESS FULL | test_PERSON | 23518 | 1561K| | 3716 (1)| 00:00:45 |
|* 9 | SORT JOIN | | 24155 | 1462K| 3800K| 6189 (1)| 00:01:15 |
| 10 | TABLE ACCESS BY INDEX ROWID| test_ORG_PERSON | 24155 | 1462K| | 5535 (1)| 00:01:07 |
|* 11 | INDEX RANGE SCAN | test_ORG_PERSON_FK1| 24155 | | | 102 (1)| 00:00:02 |
|* 12 | SORT JOIN | | 66873 | 1567K| 4216K| 958 (2)| 00:00:12 |
| 13 | INDEX FAST FULL SCAN | test_CONTACT_FK1 | 66873 | 1567K| | 110 (1)| 00:00:02 |
Predicate Information (identified by operation id):
1 - filter("ALPHA"<=25 AND "ALPHA">=1)
8 - filter("P"."CLM_ID"='11')
9 - access("P"."PERSON_ID"="PORG"."O_PERSON_ID")
filter("P"."PERSON_ID"="PORG"."O_PERSON_ID")
11 - access("PORG"."O_ORG_ID"='11')
12 - access("P"."PERSON_ID"="C1"."REF_ID"(+))
filter("P"."PERSON_ID"="C1"."REF_ID"(+))
ORIGINAL QUERY
EXPLAIN PLANS SECTION
1- Original
Plan hash value: 319284042
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 22858 | 187M| | 21433 (1)| 00:04:18 |
|* 1 | VIEW | | 22858 | 187M| | 21433 (1)| 00:04:18 |
| 2 | COUNT | | | | | | |
| 3 | VIEW | | 22858 | 187M| | 21433 (1)| 00:04:18 |
| 4 | SORT ORDER BY | | 22858 | 6875K| 14M| 21433 (1)| 00:04:18 |
| 5 | MERGE JOIN OUTER | | 22858 | 6875K| | 18304 (1)| 00:03:40 |
| 6 | MERGE JOIN | | 22858 | 4397K| | 11337 (1)| 00:02:17 |
| 7 | SORT JOIN | | 22858 | 3013K| 7192K| 5148 (1)| 00:01:02 |
|* 8 | TABLE ACCESS FULL | test_PERSON | 22858 | 3013K| | 3716 (1)| 00:00:45 |
|* 9 | SORT JOIN | | 24155 | 1462K| 3800K| 6189 (1)| 00:01:15 |
| 10 | TABLE ACCESS BY INDEX ROWID| test_ORG_PERSON | 24155 | 1462K| | 5535 (1)| 00:01:07 |
|* 11 | INDEX RANGE SCAN | test_ORG_PERSON_FK1| 24155 | | | 102 (1)| 00:00:02 |
|* 12 | SORT JOIN | | 68472 | 7422K| 15M| 6968 (1)| 00:01:24 |
| 13 | TABLE ACCESS FULL | test_CONTACT | 68472 | 7422K| | 2895 (1)| 00:00:35 |
Predicate Information (identified by operation id):
1 - filter("ALPHA"<=25 AND "ALPHA">=1)
8 - filter("P"."CLM_ID"='1862')
9 - access("P"."PERSON_ID"="PORG"."O_PERSON_ID")
filter("P"."PERSON_ID"="PORG"."O_PERSON_ID")
11 - access("PORG"."O_ORG_ID"='1862')
12 - access("P"."PERSON_ID"="C1"."REF_ID"(+))
filter("P"."PERSON_ID"="C1"."REF_ID"(+))
-------------------------------------------------------------------------------
Edited by: user10817659 on Feb 19, 2009 11:47 PM
Edited by: user10817659 on Feb 21, 2009 12:23 AM -
Processing in 2 internal tables - which option performs better?
Hi Experts,
I have 2 internal tables.
ITAB1 and ITAB2 are both sorted by PSPHI.
ITAB1 has PSPHI and some more fields: INVOICE_DATE and AMT.
ITAB2 has PSPHI and some more fields, including an amount.
Both itab1 and itab2 will always have the same number of rows.
I need to filter data from ITAB2 based on the invoice date given on the selection screen, since ITAB2 doesn't have an invoice date field;
I am doing further processing to filter the records.
I have thought of the below processing logic and wanted to know if there is a better option performance-wise:
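As a language-neutral model of the intended logic (hypothetical field names, and assuming the two tables really are index-aligned as stated), the filtering amounts to keeping the same row positions in both tables:

```python
# ITAB1 carries the invoice date; ITAB2 is assumed index-aligned with it.
itab1 = [{"psphi": 15245, "invoice": "INV1", "invoice_date": "2011-02", "amt": 400},
         {"psphi": 15245, "invoice": "INV2", "invoice_date": "2012-02", "amt": 430}]
itab2 = [{"psphi": 15245, "psnr": "PSNR1", "matnr": "X", "amt": 430},
         {"psphi": 15245, "psnr": "PSNR2", "matnr": "Y", "amt": 400}]

sel_date = "2011-02"  # selection-screen date

# Keep only the positions whose invoice date is not after the cutoff,
# then apply the same positions to both tables so they stay aligned.
keep = [i for i, row in enumerate(itab1) if row["invoice_date"] <= sel_date]
itab1 = [itab1[i] for i in keep]
itab2 = [itab2[i] for i in keep]
```

Deleting by shared row position avoids the problem described below, where deleting by PSPHI alone removes every row with that (non-unique) key.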
loop at ITAB1 into wa where invoice_date > selection screen date. (table which has invoice date)
lv_index = sy-tabix.
read table itab2 where psphi = wa-psphi and index = lv_index.
if sy-subrc = 0.
delete itab2 index lv_index.
endif.
endloop.
Hi Madhu,
My requirement is as below; could you please advise on this?
ITAB1
Field 1 PSPHI, Field 2 INVOICE, Field 3 INVOICE_DATE, Field 4 AMT
15245, INV1, 02/2011, 400
15245, INV2, 02/2012, 430
ITAB2
Field 1 PSPHI, Field 2 PSNR, Field 3 MATNR, Field 4 AMT
15245, PSNR1, X, 430
15245, PSNR2, Y, 400
When the user enters the date 02/2011 on the selection screen,
I want to delete the data from itab1 and itab2 for invoice dates greater than 02/2011.
If I delete from ITAB1 for date > selection-screen date:
loop at itab1.
delete itab2 where psphi = itab1-psphi. "this will delete both rows in the above example, because the common field psphi can occur multiple times
endloop.
Can you advise? -
5530 software performance is better than 5800?
Guys,
I'm a 5800XM user, and I recently found that some features of the newly released 5530XM are even better than the 5800's.
1. No kinetic scrolling in the 5800. Really unhappy with this; scrolling on the 5800 is quite inconvenient, and the V11 firmware is the worst. Why is there no kinetic scrolling in the 5800?
2. Camera image quality. 3.2 MP camera with Carl Zeiss lens and dual flash, yet the image quality is not comparable with the 5530's. The 5800's image quality is really bad for me; captured images look a bit blue-ish. I watched a video on YouTube, and images captured by the 5530 camera are really much better than the 5800's; its quality is very good.
OMG, I'm sad about this. Can anyone tell me the reason for this? Nokia developers, is it a problem with the firmware or with the hardware? The 5800XM costs more than the 5530XM.
Gary Scotland wrote:
or Bean freeware also
With the following limitations (for Word Users) -
Why is Flash Player performance better than the player that comes with Adobe AIR?
I built an Adobe AIR application that plays video and works great on PC. When I started working on the tablet version, here is what I found:
1. When I played a 1 MB video file from local storage, audio and video were not in sync and very distracting (and I was not able to play a file larger than 2 MB).
I created only a video player using OSMF in the app; there is nothing else that could hamper performance.
2. When I played the same video using a web-based Flex application, it worked perfectly.
3. I tried StageWebView in Adobe AIR, and the results were equivalent to the web version.
I am using Adobe AIR 3.1, which ships with Flash Player 11.1 - so where did things go wrong?
Why can't I get the same performance in the AIR app?
IS AIR THE RIGHT CHOICE TO BUILD A MOBILE APP?
IS IT POSSIBLE TO GET NATIVE-PLAYER/WEB-VERSION-LIKE PERFORMANCE (OR CLOSE TO IT) USING ADOBE AIR?
In addition,
I tried to read about encoding and other white papers, but the basic thing that troubles me is: if the web version is fine, then why can't AIR give the same performance?
Hardware configuration: 1 GHz processor
OS: Android 4.0.3
Thank you in advance.
What games are you trying? Which browser do you use, and have you tried an alternative browser to see if they are also impacted?
-
Query performance is much better when I break the query into 2 queries
Hi
I have a VERY slow query (it takes hours):
select d.WAYBILL_NUMBER
from STOP_HEADER st, STOP_DELIVERY_WAYBILL d
where st.STOP_REFERENCE in ('4153830443')
and st.VISIT_TYPE in ('3','4')
and st.BRANCH_ID = d.BRANCH_ID
and st.ROUTE_ID = d.ROUTE_ID
and to_date(st.DATE_STARTED) = to_date(d.DATE_STARTED)
and st.STOP_NUMBER = d.STOP_NUMBER
with Plan
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 38 K 2346884
MERGE JOIN 38 K 3 M 2346884
SORT JOIN 99 K 4 M 3152
VIEW LOOMIS.V_STOP_HEADER 99 K 4 M 24
UNION-ALL
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_201001 20 380 4
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_201001 20 3
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200912 58 1 K 4
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200912 58 3
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200910 2 K 51 K 4
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200910 2 K 3
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200909 5 K 152 K 2
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200909 5 K 2
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200908 7 K 191 K 2
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200908 7 K 2
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200906 23 K 628 K 2
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200906 23 K 2
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200811 29 K 779 K 2
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200811 29 K 2
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200810 6 K 160 K 2
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200810 6 K 2
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200808 24 K 634 K 2
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200808 24 K 2
SORT JOIN 39 M 1G 2343732
VIEW LOOMIS.V_STOP_DELIVERY_WAYBILL 39 M 1G 56500
UNION-ALL
INDEX FAST FULL SCAN LOOMIS.STOPDD_PK_201001 6 M 158 M 8932
INDEX FAST FULL SCAN LOOMIS.STOPDD_PK_200912 2 M 57 M 3168
INDEX FAST FULL SCAN LOOMIS.STOPDD_PK_200910 4 M 115 M 6501
INDEX FAST FULL SCAN LOOMIS.STOPDD_PK_200909 1 M 40 M 2191
INDEX FAST FULL SCAN LOOMIS.STOPDD_PK_200908 1 M 50 M 2766
INDEX FAST FULL SCAN LOOMIS.STOPDD_PK_200906 6 M 166 M 9401
INDEX FAST FULL SCAN LOOMIS.STOPDD_PK_200811 8 M 206 M 11697
INDEX FAST FULL SCAN LOOMIS.STOPDD_PK_200810 1 M 41 M 2261
INDEX FAST FULL SCAN LOOMIS.STOPDD_PK_200808 6 M 169 M 9583
If I convert the above query into 2 queries (I simply do a manual join):
1-
select * from STOP_HEADER st
where st.STOP_REFERENCE in ('4153830443')
and st.VISIT_TYPE in ('3','4')
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 686 8
VIEW LOOMIS.V_STOP_HEADER 686 40 K 8
UNION-ALL PARTITION
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_201001 20 1 K 4
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_201001 20 3
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200912 58 3 K 4
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200912 58 3
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200910 2 K 161 K 4
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200910 2 K 3
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200909 5 K 383 K 2
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200909 5 K 2
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200908 7 K 490 K 2
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200908 7 K 2
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200906 23 K 1 M 2
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200906 23 K 2
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200811 29 K 1 M 2
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200811 29 K 2
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200810 6 K 404 K 2
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200810 6 K 2
TABLE ACCESS BY INDEX ROWID LOOMIS.STOP_HEADER_200808 24 K 1 M 2
INDEX RANGE SCAN LOOMIS.STOPHDR_IDX_STOPREF_200808 24 K 2
2-
select d.WAYBILL_NUMBER
from STOP_DELIVERY_WAYBILL d
where d.BRANCH_ID = 'ESK'
and d.ROUTE_ID = '326'
and d.DATE_STARTED = to_date('31-mar-2010')
and d.STOP_NUMBER= 27
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 9 27
VIEW LOOMIS.V_STOP_DELIVERY_WAYBILL 9 414 27
UNION-ALL
INDEX RANGE SCAN LOOMIS.STOPDD_PK_201001 1 27 3
INDEX RANGE SCAN LOOMIS.STOPDD_PK_200912 1 27 3
INDEX RANGE SCAN LOOMIS.STOPDD_PK_200910 1 27 3
INDEX RANGE SCAN LOOMIS.STOPDD_PK_200909 1 27 3
INDEX RANGE SCAN LOOMIS.STOPDD_PK_200908 1 27 3
INDEX RANGE SCAN LOOMIS.STOPDD_PK_200906 1 27 3
INDEX RANGE SCAN LOOMIS.STOPDD_PK_200811 1 27 3
INDEX RANGE SCAN LOOMIS.STOPDD_PK_200810 1 27 3
INDEX RANGE SCAN LOOMIS.STOPDD_PK_200808 1 27 3
each one takes half a second!!!
I am on 8.1.4.7 with compatible = 8.0.5.0.0; I will move to 11g in the near future.
Any help please on why the query behaves this way when I do the join?
Thanks very much
Edited by: user9145417 on 31-Mar-2010 17:24
user9145417 wrote:
I assume that this is a real challenge for Oracle performance geeks, and I ask any guru in this field to contribute to explain this weird behaviour.
Thanks
Or possibly it's not very interesting, on a version of Oracle that is at least two versions in the past for most people, using a feature (partition views) that has been deprecated by Oracle even though newer versions of the optimizer can handle "UNION ALL" views far more effectively than 8i used to - and you haven't really made much effort to supply the information needed to help others solve your problem, e.g. how about making sure that your execution plan is readable? (See the note about "code" tags below.)
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
+"Science is more than a body of knowledge; it is a way of thinking"+
+Carl Sagan+ -
Will restoring my iphone 4 make its performance any better?
My iPhone can only run one app at a time, rendering the multitasking feature useless. Is the iPhone 4 just an outdated model? Would restoring show any performance improvements?
It could, doesn't hurt to try http://support.apple.com/kb/HT1414?viewlocale=en_US&locale=en_US
-
What's the Difference Between Better Performance and Better Quality
I am making a DVD of a wedding from a tape, and I usually create an image and then transfer it to my PC to burn because my Mac Mini doesn't have a DVD burner. It is telling me that my project has exceeded 4.2 GB, which I figured would happen but dreaded. I was wondering what type of quality loss would happen if I used Best Quality? Or is there a way to make a disc image that I don't know of? Any suggestions would be helpful.
http://docs.info.apple.com/article.html?artnum=164975
If you don't have a DL SuperDrive, I don't think you can create a disk image over two hours.
Well i was wondering what type of quality loss would happen if I used Best Quality?
Best Quality is just that, best quality encoding. I use it for every project. -
Hello expert,
Is the following statement a left outer join or a right outer join? And compared with the statement "from PCM RIGHT OUTER JOIN R ON PCM.practice_state_code = R.practice_state_code", which performs better? I appreciate it very much.
PCM.practice_state_code (+) = R.practice_state_code
Hi,
843178 wrote:
Hello expert,
following statement is left outer join or right outer join?
PCM.practice_state_code (+) = R.practice_state_code
Neither. It's the old outer join notation. Using that syntax, the order in which you put the tables in the FROM clause doesn't matter.
and comparing with the statement "from PCM RIGHT OUTER JOIN R ON PCM.practice_state_code = R.practice_state_code", which performs better?
They're equal. -
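The behavioural point - the `(+)` marks the side that is allowed to be missing, exactly like the non-preserved side of an ANSI outer join - can be checked on any SQL engine. A small sqlite3 sketch with hypothetical two-row tables (sqlite has no `(+)` syntax, so only the ANSI form is shown):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE r   (practice_state_code TEXT);
    CREATE TABLE pcm (practice_state_code TEXT, descr TEXT);
    INSERT INTO r   VALUES ('CA'), ('NY');
    INSERT INTO pcm VALUES ('CA', 'Pacific');
""")

# ANSI form equivalent to PCM.practice_state_code (+) = R.practice_state_code:
# every row of R survives; missing PCM columns come back as NULL.
rows = con.execute("""
    SELECT r.practice_state_code, pcm.descr
    FROM r LEFT OUTER JOIN pcm
      ON pcm.practice_state_code = r.practice_state_code
    ORDER BY r.practice_state_code
""").fetchall()
print(rows)  # [('CA', 'Pacific'), ('NY', None)]
```

'NY' has no match in pcm yet still appears, which is what putting the `(+)` on the PCM side (or writing PCM RIGHT OUTER JOIN R) expresses.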
Difference between SELECT SINGLE & SELECT UP TO 1 ROWS
Hi,
What is the difference between SELECT SINGLE and SELECT ... UP TO 1 ROWS?
Which performs better?
Regards,
Mayank
Hi
According to the SAP performance course, SELECT ... UP TO 1 ROWS is faster than SELECT SINGLE when you are not using all the primary key fields.
The 'SELECT SINGLE' statement selects the first row in the database that it finds that fulfils the 'WHERE' clause. If this results in multiple records, then only the first one is returned, and it may therefore not be unique.
Mainly: to read data from
The 'SELECT .... UP TO 1 ROWS' statement is subtly different. The database selects all of the relevant records that are defined by the WHERE clause, applies any aggregate, ordering or grouping functions to them, and then returns the first record of the result set.
Performance In Simple Scenarios
I have done some performance testing to see if asynchronous triggers perform any better than synchronous triggers in a simple audit scenario -- capturing record snapshots at insert, update and delete events to a separate database within the same instance of SQL Server.
Synchronous triggers performed 50% better than asynchronous triggers; this was with conversation reuse and the receive queue activation turned off, so the poor performance was just in the act of forming and sending the message, not receiving and processing. This was not necessarily surprising to me, and yet I have to wonder under what conditions would we see real performance benefits for audit scenarios.
I am interested if anyone has done similar testing, and if they received similar or different results. If anyone had conditions where asynchronous triggers pulled ahead for audit scenarios, I would really like to hear back from them. I invite any comments or suggestions for better performance.
The asynchronous trigger:
Code Snippet
ALTER TRIGGER TR_CUSTOMER_INSERT ON DBO.CUSTOMER
FOR INSERT AS
BEGIN
DECLARE
@CONVERSATION UNIQUEIDENTIFIER ,
@MESSAGE XML ,
@LOG_OPERATION CHAR(1) ,
@LOG_USER VARCHAR(35) ,
@LOG_DATE DATETIME;
SELECT TOP(1)
@CONVERSATION = CONVERSATION_HANDLE ,
@LOG_OPERATION = 'I' ,
@LOG_USER = USER() ,
@LOG_DATE = GETDATE()
FROM SYS.CONVERSATION_ENDPOINTS;
SET @MESSAGE =
( SELECT
CUST_ID = NEW.CUST_ID ,
CUST_DESCR = NEW.CUST_DESCR ,
CUST_ADDRESS = NEW.CUST_ADDRESS ,
LOG_OPERATION = @LOG_OPERATION ,
LOG_USER = @LOG_USER ,
LOG_DATE = @LOG_DATE
FROM INSERTED NEW
FOR XML AUTO );
SEND ON CONVERSATION @CONVERSATION
MESSAGE TYPE CUSTOMER_LOG_MESSAGE ( @MESSAGE );
END;
The synchronous trigger:
Code Snippet
ALTER TRIGGER TR_CUSTOMER_INSERT ON DBO.CUSTOMER
FOR INSERT AS
BEGIN
DECLARE
@LOG_OPERATION CHAR(1) ,
@LOG_USER VARCHAR(15) ,
@LOG_DATE DATETIME;
SELECT
@LOG_OPERATION = 'I' ,
@LOG_USER = USER() ,
@LOG_DATE = GETDATE()
INSERT INTO SALES_LOG.DBO.CUSTOMER
SELECT
CUST_ID = NEW.CUST_ID ,
CUST_DESCR = NEW.CUST_DESCR ,
CUST_ADDRESS = NEW.CUST_ADDRESS ,
LOG_OPERATION = @LOG_OPERATION ,
LOG_USER = @LOG_USER ,
LOG_DATE = @LOG_DATE
FROM INSERTED NEW
END;Synchronous audit has to do one database write (one insert). Asynchronous audit has to do at least an insert and an update (the SEND) plus a delete (the RECEIVE) and an insert (the audit itself), so that is 4 database writes. If the destination audit service is remote, then the sys.transmission_queue operations have to be added (one insert and one delete). So clearly there is no way asynchronous audit can be on pair with synchronous audit, there are at least 3 more writes to complete. And that is neglecting all the reads (like looking up the conversation handle etc) and all the marshaling/unmarshaling of the message (usually some fairly expensive XML processing).
Within one database, the asynchronous pattern is appealing when the trigger processing is expensive (so that the extra cost of going async is negligible) and reducing the original call response time is important. It could also help if the audit operations create high contention and deferring the audit reduces this. Some more esoteric reasons arise when asynchronous processing is desired for architectural reasons, like the possibility to add a workflow triggered by the original operation and the desire to change this workflow on the fly without impact/downtime (e.g. more consumers of the async message are added, the message is shredded/dispatched to more processing apps and triggers more messages downstream, etc.).
If the audit is between different databases, even within the same instance, then the problem of availability arises (the audit table/database may be down for intervals, blocking the original operations/application).
If the audit is remote (different SQL Server instances), then using Service Broker solves the most difficult problem (communication) in addition to asynchronicity and availability; in that case the synchronous pattern (e.g. using a linked server) is really a bad choice.