Explain plan: "Select statement"
Hi,
I'm using the explain plan command to retrieve information about query execution. In particular I want to estimate query execution time.
Let's consider the following example:
"| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |"
"| 0 | SELECT STATEMENT | | 4775 | 484K| 98886 (1)| 00:19:47 |"
"| 1 | HASH GROUP BY | | 4775 | 484K| 98886 (1)| 00:19:47 |"
"|* 2 | MAT_VIEW ACCESS FULL| Materialized_view1 | 4775 | 484K| 98884 (1)| 00:19:47 |"
In the calculation of total execution time, should I consider the "SELECT STATEMENT" operation?
Can the total execution time be calculated like this: Time(SELECT STATEMENT) + Time(HASH GROUP BY) + Time(MAT_VIEW ACCESS FULL) = 3 x 19:47 = 59:21? Is that right?
Thanks
No, the expected time is 00:19:47. The Time (and Cost) figures in a plan are cumulative: each line already includes its child operations, so essentially no additional time is expected in steps 0 and 1 beyond what step 2 consumes.
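Because the figures are cumulative, you read the estimate for the whole query straight off line 0. A minimal sketch (assuming a 9i-or-later database where DBMS_XPLAN is available; some_col is a placeholder column):

```sql
-- Generate the plan for the query under test
EXPLAIN PLAN FOR
  SELECT some_col, COUNT(*)
    FROM Materialized_view1
   GROUP BY some_col;

-- Pretty-print it; the Time column on Id 0 is the total estimate,
-- not something to be summed across all plan lines
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Keep in mind the Time values are optimizer estimates derived from the cost, not measurements, so treat them only as a rough guide.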
Similar Messages
-
Problems with explain plan and statement
Hi community,
I have migrated a j2ee application from DB2 to Oracle.
First some facts of our application and database instance:
We are using oracle version 10.2.0.3 and driver version 10.2.0.3. It runs with charset Unicode 3.0 UTF-8.
Our application uses Tomcat as web container and JBoss as application server. We use only prepared statements, so whenever I say "statements" I always mean prepared statements. The application also sets the defaultNChar property to true, because every char and varchar column was created as NCHAR/NVARCHAR.
We have some JSP pages that contain lists with search forms. Every time I enter a value that returns a filled result set, the lists perform great. But every time I enter a value that returns an empty result set, the lists are 100 times slower. The JSP pages run in the Tomcat environment and submit their statements directly to the database; the connections are pooled by DBCP. So what can cause this behaviour?
To analyze this problem I started logging all statements and the filled-in search-field values and combinations executed by the lists described above. I also developed a standalone helper tool that reads the logged statements, executes them against the database and generates an explain plan for every statement. But then a strange situation appeared: every statement that performs really fast within our application is executed extremely slowly by the helper tool. So I edited some JSP pages within our application to force an explain plan from there (Tomcat environment). Executing the same statement with exactly the same code, I get two completely different explain plans.
First the statement itself:
select LINVIN.BBASE , INVINNUM , INVINNUMALT , LINVIN.LSUPPLIERNUM , LSUPPLIERNUMEXT , LINVIN.COMPANYCODE , ACCOUNT , INVINTXT , INVINSTS , INVINTYP , INVINDAT , RECEIPTDAT , POSTED , POSTINGDATE , CHECKCOSTCENTER , WORKFLOWIDEXT , INVINREFERENCE , RESPONSIBLEPERS , INVINSUM_V , INVINSUMGROSS_V , VOUCHERNUM , HASPOSITIONS , PROCESSINSTANCEID , FCURISO_V , LSUPPLIER.AADDRLINE1 from LINVIN, LSUPPLIER where LINVIN.BBASE = LSUPPLIER.BBASE and LINVIN.LSUPPLIERNUM = LSUPPLIER.LSUPPLIERNUM and LINVIN.BBASE = ? order by LINVIN.BBASE, INVINDAT DESC
Now the explain plan from our application:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 101 | 28583 | 55 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 101 | 28583 | 55 (0)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| 25 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | LINV_INVDAT | 101 | | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 1 | 148 | 1 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN | PK_177597 | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("LINVIN"."BBASE"=:1)
filter("LINVIN"."BBASE"=:1)
5 - access("LSUPPLIER"."BBASE"=:1 AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
Now the one from the standalone tool:
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 93773 | 25M| | 12898 (1)| 00:02:35 |
| 1 | SORT ORDER BY | | 93773 | 25M| 61M| 12898 (1)| 00:02:35 |
|* 2 | HASH JOIN | | 93773 | 25M| 2592K| 7185 (1)| 00:01:27 |
| 3 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 16540 | 2390K| | 332 (0)| 00:00:04 |
|* 4 | INDEX RANGE SCAN | LSUPPLIER_HAS_BASE_FK | 16540 | | | 11 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| | 6073 (1)| 00:01:13 |
|* 6 | INDEX RANGE SCAN | LINVOICE_BMDT_FK | 93709 | | | 84 (2)| 00:00:02 |
Predicate Information (identified by operation id):
2 - access("LINVIN"."BBASE"="LSUPPLIER"."BBASE" AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
4 - access("LSUPPLIER"."BBASE"=:1)
6 - access("LINVIN"."BBASE"=:1)
The sizes of the tables are: LINVIN - 383,692 rows, LSUPPLIER - 115,782 rows
As you can see, the one executed from our application is much faster than the one from the helper tool. So why does Oracle pick a completely different execution plan for the same statement? And why is the hash join much slower than the nested loop? If I understand correctly, a nested loop should only be used when the tables are pretty small.
I also tried to play with some parameters:
I set optimizer_index_caching to 100 and optimizer_index_cost_adj to 30. I also changed optimizer_mode to FIRST_ROWS_100.
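For reference, those session-level changes can be made like this (a sketch; FIRST_ROWS_100 is a valid optimizer_mode value in 10g, but note that hand-tuning these parameters usually just masks the underlying problem rather than fixing it):

```sql
ALTER SESSION SET optimizer_index_caching  = 100;
ALTER SESSION SET optimizer_index_cost_adj = 30;
ALTER SESSION SET optimizer_mode           = FIRST_ROWS_100;
```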
I would really appreciate it if somebody could help me with this issue, because I'm getting more and more distressed...
Thanks in advance,
Tobias
Edited by: tobiwan on Sep 3, 2008 11:49 PM
tobiwan wrote:
Hi again,
Here is the answer:
The reason I got two different explain plans was that the external tool uses the NLS session parameters coming from the OS, which in my case are "de/DE".
Within our application these parameters are changed to "en/US"! So if, in my external tool, I call the Java function Locale.setDefault(new Locale("en","US")) before connecting to the database, the explain plans are finally equal.

That might explain why you got two different execution plans: one plan was obviously able to avoid a SORT ORDER BY operation, whereas the second plan had to run a SORT ORDER BY operation, obviously because of the different NLS_SORT settings. An index by default uses NLS_SORT = 'binary' order, whereas ORDER BY obeys the NLS_SORT setting, which was probably set to 'GERMAN' in your "external tool" case. You can query the NLS_SESSION_PARAMETERS view to check your current NLS_SORT setting.
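A quick check of the relevant session settings might look like this (a sketch; the ALTER SESSION line shows how to force the binary sort order under which an index can satisfy the ORDER BY):

```sql
-- What the current session actually sorts and compares with
SELECT parameter, value
  FROM nls_session_parameters
 WHERE parameter IN ('NLS_SORT', 'NLS_COMP', 'NLS_LANGUAGE');

-- Force binary sorting for this session so index order matches ORDER BY
ALTER SESSION SET NLS_SORT = BINARY;
```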
For more information regarding this issue, see my blog note I've written about this some time ago:
http://oracle-randolf.blogspot.com/2008/09/getting-first-rows-of-large-sorted.html
Now let me make a guess why you observe the behaviour that it takes so long if your result set is empty:
The plan avoiding the SORT ORDER BY can return the first rows of the result set very quickly, but can take quite a while until all rows are processed, since it potentially requires a lot of loop iterations until everything has been processed. Your front end probably displays only the first n rows of the result set by default and therefore works fine with this execution plan.
Now, if the result set is empty, then depending on your data, indexes and search criteria, Oracle has to work through all the data using the inefficient NESTED LOOP approach only to find out that no data exists. Since your application attempts to fetch the first n records but none will be found, it has to wait until all data has been processed.
You can try to reproduce this by deliberately fetching all records of a query that returns data and that uses the NESTED LOOP approach... It probably takes as long as in the case when no records are found.
Note that you seem to use bind variables on 10g; due to the "bind variable peeking" functionality you might therefore end up with "unstable" plans, depending on the values "peeked" when the statement is parsed.
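On 10g you can also see which plan a cached cursor actually uses at runtime (rather than what EXPLAIN PLAN predicts), roughly like this; the LIKE pattern is a placeholder for your own statement text:

```sql
-- Find the cursor; sql_id and child_number identify each cached plan
SELECT sql_id, child_number, plan_hash_value, executions
  FROM v$sql
 WHERE sql_text LIKE 'select LINVIN.BBASE%';

-- Show the runtime plan for one of the cursors found above
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', &child_number));
```

Comparing plan_hash_value across child cursors is a quick way to spot the "unstable" plans caused by bind peeking.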
For more information, see this comprehensive description of the issue:
http://www.pythian.com/blogs/867/stabilize-oracle-10gs-bind-peeking-behaviour-by-cutting-histograms
Note that this changes in 11g with the introduction of the "Adaptive Cursor Sharing".
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/
-
Query Regarding Explain Plan on Query
Hello,
I have one big query that reports about 50,000 daily records out of roughly 2,000,000 records.
I have two databases, UAT and PROD. When I run EXPLAIN PLAN on the query in these different databases I get different plans, even though everything is the same in both databases.
In UAT it is doing Index scan where as in PROD it is doing Full TableScan. Below are the results.
In production it is not using any of the indexes present, but in UAT it is. What could be the reason behind this?
UAT Explain Plan (Please copy in Textpad for better View)
SELECT STATEMENT, GOAL = HINT: ALL_ROWS Cost=371 Cardinality=238 Optimizer=HINT: ALL_ROWS Bytes=134470
VIEW Object owner=SWNET1 Cost=371 Cardinality=238 Bytes=134470
COUNT STOPKEY
VIEW Object owner=SWNET1 Cost=371 Cardinality=238 Bytes=131376
SORT ORDER BY STOPKEY Cost=371 Cardinality=238 Bytes=54026
FILTER
HASH JOIN RIGHT ANTI Cost=370 Cardinality=238 Bytes=54026
INLIST ITERATOR
TABLE ACCESS BY INDEX ROWID Object owner=SWNET1 Object name=IS_TB_END_POINT Cost=1 Cardinality=1 Optimizer=ANALYZED Bytes=31
INDEX RANGE SCAN Object owner=SWNET1 Object name=IS_UK_EP_NAME Cost=1 Cardinality=1 Optimizer=ANALYZED
TABLE ACCESS BY INDEX ROWID Object owner=SWNET1 Object name=IS_TB_TRANSACTION Cost=368 Cardinality=253 Optimizer=ANALYZED Bytes=49588
INDEX FULL SCAN Object owner=SWNET1 Object name=IS_IX_T_DESTINATION_EP Cost=18 Cardinality=13909 Optimizer=ANALYZED
PRODUCTION Explain Plan
SELECT STATEMENT, GOAL = HINT: ALL_ROWS Cost=65702 Cardinality=1000 Optimizer=HINT: ALL_ROWS Bytes=565000
VIEW Object owner=SWNET1 Cost=65702 Cardinality=1000 Bytes=565000
COUNT STOPKEY
VIEW Object owner=SWNET1 Cost=65702 Cardinality=38739 Bytes=21383928
SORT ORDER BY STOPKEY Cost=65702 Cardinality=38739 Bytes=9646011
FILTER
HASH JOIN RIGHT ANTI Cost=63616 Cardinality=38739 Bytes=9646011
INLIST ITERATOR
TABLE ACCESS BY INDEX ROWID Object owner=SWNET1 Object name=IS_TB_END_POINT Cost=1 Cardinality=2 Optimizer=ANALYZED Bytes=64
INDEX UNIQUE SCAN Object owner=SWNET1 Object name=IS_UK_EP_NAME Cost=1 Cardinality=2 Optimizer=ANALYZED
TABLE ACCESS FULL Object owner=SWNET1 Object name=IS_TB_TRANSACTION Cost=63614 Cardinality=44697 Optimizer=ANALYZED Bytes=9699249
Index definition (same in both places)
create index IS_IX_T_DESTINATION_EP on IS_TB_TRANSACTION (T_DESTINATION_EP)
tablespace IS_XML_IND
pctfree 10
initrans 2
maxtrans 255
storage
(
initial 128M
next 128K
minextents 1
maxextents unlimited
pctincrease 0
); -
[8i] Can someone help me on using explain plan, tkprof, etc.?
I am trying to follow the instructions at When your query takes too long ...
I am trying to figure out why a simple query takes so long.
The query is:
SELECT COUNT(*) AS tot_rows FROM my_table;

It takes a good 5 minutes or so to run (best case), and the result is around 22 million (total rows).
My generic username does not (evidently) allow access to PLAN_TABLE, so I had to log on as SYSTEM to run explain plan. In SQL*Plus, I typed in:
explain plan for (SELECT COUNT(*) AS tot_rows FROM my_table);

and the response was "Explained."
Isn't this supposed to give me some sort of output, or am I missing something?
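(Side note: "Explained." is the expected response. EXPLAIN PLAN only writes rows into PLAN_TABLE; you query that table yourself to see the plan. On 8i, where DBMS_XPLAN does not exist, a common sketch is:

```sql
EXPLAIN PLAN SET STATEMENT_ID = 'q1' FOR
  SELECT COUNT(*) AS tot_rows FROM my_table;

-- Walk the plan tree; indentation shows parent/child nesting
SELECT LPAD(' ', 2 * LEVEL) || operation || ' ' || options
       || ' ' || object_name AS plan_step
  FROM plan_table
 WHERE statement_id = 'q1'
 START WITH id = 0 AND statement_id = 'q1'
CONNECT BY PRIOR id = parent_id AND statement_id = 'q1';
```

Alternatively, the supplied script @?/rdbms/admin/utlxpls.sql formats the most recent plan for you.)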
Then, the next step in the post I linked is to use tkprof. I see that it says it will output a file to a path specified in a parameter. The only problem is, I don't have access to the db's server. I am working remotely and do not have any way to remotely (or directly) access the db server. Is there any way to have the file output to my local machine, or am I just S.O.L.?

SomeoneElse used "create table as" (CTAS), which automatically gathers the stats. You can see the difference before and after stats clearly in this example.
This is the script:
drop table ttemp;
create table ttemp (object_id number not null, owner varchar2(30), object_name varchar2(200));
alter table ttemp add constraint ttemp_pk primary key (object_id);
insert into ttemp
select object_id, owner, object_name
from dba_objects
where object_id is not null;
set autotrace on
select count(*) from ttemp;
exec dbms_stats.gather_table_stats('PROD','TTEMP');
select count(*) from ttemp;

And the result:
Table dropped.
Table created.
Table altered.
46888 rows created.
COUNT(*)
46888
1 row selected.
Execution Plan
SELECT STATEMENT Optimizer Mode=CHOOSE
1 SORT AGGREGATE
2 1 TABLE ACCESS FULL PROD.TTEMP
Statistics
1 recursive calls
1 db block gets
252 consistent gets
0 physical reads
120 redo size
0 PX remote messages sent
0 PX remote messages recv'd
0 buffer is pinned count
0 workarea memory allocated
4 workarea executions - optimal
1 rows processed
PL/SQL procedure successfully completed.
COUNT(*)
46888
1 row selected.
Execution Plan
SELECT STATEMENT Optimizer Mode=CHOOSE (Cost=4 Card=1)
1 SORT AGGREGATE (Card=1)
2 1 INDEX FAST FULL SCAN PROD.TTEMP_PK (Cost=4 Card=46 K)
Statistics
1 recursive calls
2 db block gets
328 consistent gets
0 physical reads
8856 redo size
0 PX remote messages sent
0 PX remote messages recv'd
0 buffer is pinned count
0 workarea memory allocated
4 workarea executions - optimal
1 rows processed
-
Understand the output of explain plan
I am trying to understand the output of explain plan. I have two plans below and don't understand them completely.
In the SQL below I would expect the optimizer to fetch "ROWNUM < 500" first and then do the outer joins. But the explain plan below doesn't list it as step 1, so I don't really understand how to interpret the sequence from the explain plan:
select TASK0_.TASK_ID from
( select TASK0_.TASK_ID from
( select task0_.task_id from task task0_) TASK0_ where ROWNUM < 500 ) TASK0_
left outer join f_message_task task0_1_ on task0_.task_id=task0_1_.task_id
left outer join b_a_task task0_2_ on task0_.task_id=task0_2_.task_id
left outer join i_task task0_3_ on task0_.task_id=task0_3_.task_id
left outer join o_task task0_4_ on task0_.task_id=task0_4_.task_id
left outer join r_transmission_task task0_5_ on task0_.task_id=task0_5_.task_id
left outer join s_error_task task0_6_ on task0_.task_id=task0_6_.task_id
PLAN_TABLE_OUTPUT
Plan hash value: 707970537
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 499 | 19461 | 1042 (6)| 00:00:13 |
|* 1 | HASH JOIN OUTER | | 499 | 19461 | 1042 (6)| 00:00:13 |
|* 2 | HASH JOIN OUTER | | 499 | 16966 | 757 (6)| 00:00:10 |
| 3 | NESTED LOOPS OUTER | | 499 | 14471 | 589 (4)| 00:00:08 |
| 4 | NESTED LOOPS OUTER | | 499 | 12475 | 588 (4)| 00:00:08 |
| 5 | NESTED LOOPS OUTER | | 499 | 10479 | 588 (4)| 00:00:08 |
| 6 | NESTED LOOPS OUTER | | 499 | 8982 | 588 (4)| 00:00:08 |
| 7 | VIEW | | 499 | 2495 | 588 (4)| 00:00:08 |
|* 8 | COUNT STOPKEY | | | | | |
| 9 | INDEX FAST FULL SCAN| PK_TASK | 697K| 3403K| 588 (4)| 00:00:08 |
|* 10 | INDEX UNIQUE SCAN | PK_r_TRANSMISSION | 1 | 13 | 0 (0)| 00:00:01 |
|* 11 | INDEX UNIQUE SCAN | PK_b_a_TASK | 1 | 3 | 0 (0)| 00:00:01 |
|* 12 | INDEX UNIQUE SCAN | PK_s_ERROR_TASK | 1 | 4 | 0 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | PK_i_TASK | 1 | 4 | 0 (0)| 00:00:01 |
| 14 | INDEX FAST FULL SCAN | PK_o_TASK | 347K| 1695K| 161 (6)| 00:00:02 |
| 15 | INDEX FAST FULL SCAN | PK_f_MESSAGE | 392K| 1917K| 276 (4)| 00:00:04 |
Predicate Information (identified by operation id):
1 - access("TASK0_"."TASK_ID"="TASK0_1_"."TASK_ID"(+))
2 - access("TASK0_"."TASK_ID"="TASK0_4_"."TASK_ID"(+))
8 - filter(ROWNUM<500)
10 - access("TASK0_"."TASK_ID"="TASK0_5_"."TASK_ID"(+))
11 - access("TASK0_"."TASK_ID"="TASK0_2_"."TASK_ID"(+))
12 - access("TASK0_"."TASK_ID"="TASK0_6_"."TASK_ID"(+))
13 - access("TASK0_"."TASK_ID"="TASK0_3_"."TASK_ID"(+))
In the SQL below I expect ROWNUM to be applied at the end, but it gets applied first:
select * from ( select TASK0_.TASK_ID from ( select task0_.task_id from task task0_
left outer join f_message_task task0_1_ on task0_.task_id=task0_1_.task_id
left outer join b_a_task task0_2_ on task0_.task_id=task0_2_.task_id
left outer join i_task task0_3_ on task0_.task_id=task0_3_.task_id
left outer join o_task task0_4_ on task0_.task_id=task0_4_.task_id
left outer join r_t_task task0_5_ on task0_.task_id=task0_5_.task_id
left outer join s_error_task task0_6_ on task0_.task_id=task0_6_.task_id
) TASK0_ where ROWNUM < 500 ) TASK0_;
PLAN_TABLE_OUTPUT
Plan hash value: 673345378
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 499 | 6487 | 507 (1)| 00:00:07 |
| 1 | VIEW | | 499 | 6487 | 507 (1)| 00:00:07 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | NESTED LOOPS OUTER | | 501 | 19539 | 507 (1)| 00:00:07 |
| 4 | NESTED LOOPS OUTER | | 501 | 17034 | 5 (20)| 00:00:01 |
| 5 | NESTED LOOPS OUTER | | 501 | 15030 | 5 (20)| 00:00:01 |
| 6 | NESTED LOOPS OUTER | | 501 | 13026 | 5 (20)| 00:00:01 |
| 7 | NESTED LOOPS OUTER | | 501 | 11523 | 5 (20)| 00:00:01 |
| 8 | NESTED LOOPS OUTER | | 501 | 5010 | 5 (20)| 00:00:01 |
| 9 | INDEX FAST FULL SCAN| PK_TASK | 499 | 2495 | 2 (0)| 00:00:01 |
|* 10 | INDEX UNIQUE SCAN | PK_o_TASK | 1 | 5 | 1 (0)| 00:00:01 |
|* 11 | INDEX UNIQUE SCAN | PK_r_T | 1 | 13 | 0 (0)| 00:00:01 |
|* 12 | INDEX UNIQUE SCAN | PK_b_a_TASK | 1 | 3 | 0 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | PK_s_ERROR_TASK | 1 | 4 | 0 (0)| 00:00:01 |
|* 14 | INDEX UNIQUE SCAN | PK_i_TASK | 1 | 4 | 0 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | PK_f_MESSAGE | 1 | 5 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(ROWNUM<500)
10 - access("TASK0_"."TASK_ID"="TASK0_4_"."TASK_ID"(+))
11 - access("TASK0_"."TASK_ID"="TASK0_5_"."TASK_ID"(+))
12 - access("TASK0_"."TASK_ID"="TASK0_2_"."TASK_ID"(+))
13 - access("TASK0_"."TASK_ID"="TASK0_6_"."TASK_ID"(+))
14 - access("TASK0_"."TASK_ID"="TASK0_3_"."TASK_ID"(+))
15 - access("TASK0_"."TASK_ID"="TASK0_1_"."TASK_ID"(+))

Edited by: user628400 on Feb 20, 2009 12:14 PM

Please read the FAQ: http://forums.oracle.com/forums/help.jspa
And learn how to post code and explain plans using the tags. -
Understanding the COST column of an explain plan
Hello,
I executed the following query, and obtained the corresponding explain plan:
select * from isis.clas_rost where cour_off_# = 28
Description COST Cardinality Bytes
SELECT STATEMENT, GOAL = FIRST_ROWS 2 10 1540
TABLE ACCESS BY INDEX ROWID ISIS CLAS_ROST 2 10 1540
INDEX RANGE SCAN ISIS CLAS_ROST_N2 1 10
I don't understand how these cost values add up. What is the significance of the cost in each row of the explain plan output?
By comparison, here is another plan output for the following query:
select * from isis.clas_rost where clas_rost_# = 28
Description COST Cardinality Bytes
SELECT STATEMENT, GOAL = FIRST_ROWS 1 1 154
TABLE ACCESS BY INDEX ROWID ISIS CLAS_ROST 1 1 154
INDEX UNIQUE SCAN ISIS CLAS_ROST_U1 1 1
Thanks!

For the most part, you probably want to ignore the cost column. The cardinality column is generally what you want to pay attention to.
Ideally, the cost column is Oracle's estimate of the amount of work that will be required to execute a query. It is a unitless value that attempts to combine the cost of I/O and CPU (depending on the Oracle version and whether CPU costing is enabled) and to scale physical and logical I/O appropriately. As a unitless number, it doesn't really relate to something "real" like the expected number of buffer gets. It is also determined in part by initialization parameters, session settings, system statistics, etc., which may artificially increase or decrease the cost of certain operations.
Beyond that, however, cost is problematic because it is only as accurate as the optimizer's estimates. If the optimizer's estimates are accurate, that implies that the cost is reasonably representative (in the sense that a query with a cost of 200 will run in less time than a query with a cost of 20000). But if you're looking at a query plan, it's generally because you believe there may be a problem which means that you are inherently suspicious that some of the optimizer's estimates are incorrect. If that's the case, you should generally distrust the cost.
Justin -
SQLDeveloper can't generate an explain-plan when using "cube"
If I want to create an explain-plan from the following statement, I get no explain-plan:
SELECT 1
FROM dual
GROUP BY CUBE( 2, 2, 2, 2, 2, 2, 2, 2, 2 )

If I now want to create an explain-plan again, I get the following message (and still no explain-plan):
http://i.imgur.com/mGO6Z.jpg
I tried this a few times, and of course with a fresh db-session where I didn't run any statements before.
I get this with:
SQLDeveloper Version 3.0.04 Build MAIN-04.34 (i.e. production)
DB 9.2.0.1.0
Oracle Instant Client 11.1.0.6.0
In Toad this works btw.
(Of course it makes no sense to run it on this statement, we encountered this problem with a really big SQL-statement where "cube" was used in an inline-view. SQLDeveloper then wasn't able to generate an explain-plan for the whole-statement)
Regards
Markus

that is correct. I wanted to keep the login page redirect inside my class method so that I could do the check every time someone came to pages that require authentication. I wanted it in the LoadState method so I can do a check there, redirect
them to login page or just get a cookie and then pass that cookie to page to build the UI for the page
I can do what you are suggesting and have actually tried it but then I have to track which page to take the user to after they log in...
I have multiple clicks in the appbar and pages from where the user can come to these authentication-bound pages..
Suggestions?
Also, what am I doing wrong in my class method that it doesn't navigate to the login page in the LoadState method?
Thanks
mujno -
Hello,
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
I have this issue:
On the sited above database first I execute a SELECT query, no matter what is it, with EXPLAIN PLAN FOR:
EXPLAIN PLAN FOR SELECT <some query comes here>

This executes successfully.
Next, I do this to see the explain plan:
SELECT *
FROM TABLE (dbms_xplan.display());

This generates the following error, which I read from the column "PLAN_TABLE_OUTPUT" of the result set:
ERROR: an uncaught error in function display has happened; please contact Oracle support
Please provide also a DMP file of the used plan table PLAN_TABLE
ORA-00904: "OTHER_TAG": invalid identifier

I see that it says to contact Oracle support, but unfortunately in this firm I am not in a position to contact Oracle when there is an issue.
Probably it is obvious to most of you, but since I receive this error for first time, I am wondering where the reason for the error could be.
The table PLAN_TABLE exists, which I know is needed to hold the output of an EXPLAIN PLAN statement.
Generally, in this database, when I try to see the explain plan for any query, the plan shows no values for any of these parameters: Cost, CPU Cost, I/O Cost, Cardinality, and so on.
Could anyone suggest what could be changed in order to fix the problem?
One more thing: the reason is not the tool I am using (PL/SQL Developer, version 7.1.5), because against other databases there is no problem with EXPLAIN PLAN.
Thanks.

You have an invalid PLAN_TABLE that has been created by some utility or has come from a script from a lower version.
See the script $ORACLE_HOME/rdbms/admin/catplan.sql for the correct 10.2.0.4 PLAN_TABLE (script executed by SYS AS SYSDBA)
Alternatively, use $ORACLE_HOME/rdbms/admin/utlxplan.sql to create a private PLAN_TABLE
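A minimal sketch of that fix (assuming you have privileges to drop and recreate the table; the old definition is simply replaced by the one shipped with your Oracle home):

```sql
-- Remove the outdated plan table (its column list no longer matches
-- what DBMS_XPLAN expects in 10.2, hence the ORA-00904)
DROP TABLE plan_table;

-- Recreate it with the definition from your current Oracle home
@?/rdbms/admin/utlxplan.sql
```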
Hemant K Chitale -
Why two different explain plan for same objects?
Believe it or not, there are two different databases, one for processing and one for reporting, and the plan shown is different for the same query. Table structure and indexes are the same. It's 11g.
Thanks
Good explain plan, works fine:
Plan
SELECT STATEMENT ALL_ROWSCost: 12,775 Bytes: 184 Cardinality: 1
27 SORT UNIQUE Cost: 12,775 Bytes: 184 Cardinality: 1
26 NESTED LOOPS
24 NESTED LOOPS Cost: 12,774 Bytes: 184 Cardinality: 1
22 HASH JOIN Cost: 12,772 Bytes: 178 Cardinality: 1
20 NESTED LOOPS SEMI Cost: 30 Bytes: 166 Cardinality: 1
17 NESTED LOOPS Cost: 19 Bytes: 140 Cardinality: 1
14 NESTED LOOPS OUTER Cost: 16 Bytes: 84 Cardinality: 1
11 VIEW DSSADM. Cost: 14 Bytes: 37 Cardinality: 1
10 NESTED LOOPS
8 NESTED LOOPS Cost: 14 Bytes: 103 Cardinality: 1
6 NESTED LOOPS Cost: 13 Bytes: 87 Cardinality: 1
3 INLIST ITERATOR
2 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOB_FAMILY_TBL Cost: 10 Bytes: 51 Cardinality: 1
1 INDEX RANGE SCAN INDEX DSSODS.DRV_PS_JOB_FAMILY_TBL_CL_SETID Cost: 9 Cardinality: 1
5 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_JOBCODE Cost: 3 Bytes: 36 Cardinality: 1
4 INDEX RANGE SCAN INDEX DSSADM.STAN_JB_FN_IDX Cost: 2 Cardinality: 1
7 INDEX UNIQUE SCAN INDEX (UNIQUE) DSSODS.DRV_PS_JOBCODE_TBL_SEQ_KEY_RPT Cost: 0 Cardinality: 1
9 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOBCODE_TBL_RPT Cost: 1 Bytes: 16 Cardinality: 1
13 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PSXLATITEM_RPT Cost: 2 Bytes: 47 Cardinality: 1
12 INDEX RANGE SCAN INDEX DSSODS.PK_DRV_RIXLATITEM_RPT Cost: 1 Cardinality: 1
16 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_JOBCODE Cost: 3 Bytes: 56 Cardinality: 1
15 INDEX RANGE SCAN INDEX DSSADM.DIM_JOBCODE_EXPDT1 Cost: 2 Cardinality: 1
19 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOB_RPT Cost: 11 Bytes: 438,906 Cardinality: 16,881
18 INDEX RANGE SCAN INDEX DSSODS.DRV_PS_JOB_JOBCODE_RPT Cost: 2 Cardinality: 8
21 INDEX FAST FULL SCAN INDEX (UNIQUE) DSSADM.Z_PK_JOBCODE_PROMPT_TBL Cost: 12,699 Bytes: 66,790,236 Cardinality: 5,565,853
23 INDEX RANGE SCAN INDEX DSSADM.DIM_PERSON_EMPL_RCD_SEQ_KEY Cost: 1 Cardinality: 1
25 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_PERSON_EMPL_RCD Cost: 2 Bytes: 6 Cardinality: 1

This bad plan shows a merge join cartesian and a full table scan:
Plan
SELECT STATEMENT ALL_ROWSCost: 3,585 Bytes: 237 Cardinality: 1
26 SORT UNIQUE Cost: 3,585 Bytes: 237 Cardinality: 1
25 NESTED LOOPS SEMI Cost: 3,584 Bytes: 237 Cardinality: 1
22 NESTED LOOPS Cost: 3,573 Bytes: 211 Cardinality: 1
20 MERGE JOIN CARTESIAN Cost: 2,864 Bytes: 70,446 Cardinality: 354
17 NESTED LOOPS
15 NESTED LOOPS Cost: 51 Bytes: 191 Cardinality: 1
13 NESTED LOOPS OUTER Cost: 50 Bytes: 180 Cardinality: 1
10 HASH JOIN Cost: 48 Bytes: 133 Cardinality: 1
6 NESTED LOOPS
4 NESTED LOOPS Cost: 38 Bytes: 656 Cardinality: 8
2 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DIM_JOBCODE Cost: 14 Bytes: 448 Cardinality: 8
1 INDEX RANGE SCAN INDEX REPORT2.STAN_PROM_JB_IDX Cost: 6 Cardinality: 95
3 INDEX RANGE SCAN INDEX REPORT2.SETID_JC_IDX Cost: 2 Cardinality: 1
5 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DIM_JOBCODE Cost: 3 Bytes: 26 Cardinality: 1
9 INLIST ITERATOR
8 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOB_FAMILY_TBL Cost: 10 Bytes: 51 Cardinality: 1
7 INDEX RANGE SCAN INDEX REPORT2.DRV_PS_JOB_FAMILY_TBL_CL_SETID Cost: 9 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PSXLATITEM_RPT Cost: 2 Bytes: 47 Cardinality: 1
11 INDEX RANGE SCAN INDEX REPORT2.PK_DRV_RIXLATITEM_RPT Cost: 1 Cardinality: 1
14 INDEX UNIQUE SCAN INDEX (UNIQUE) REPORT2.DRV_PS_JOBCODE_TBL_SEQ_KEY_RPT Cost: 0 Cardinality: 1
16 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOBCODE_TBL_RPT Cost: 1 Bytes: 11 Cardinality: 1
19 BUFFER SORT Cost: 2,863 Bytes: 4,295,552 Cardinality: 536,944
18 TABLE ACCESS FULL TABLE REPORT2.DIM_PERSON_EMPL_RCD Cost: 2,813 Bytes: 4,295,552 Cardinality: 536,944
21 INDEX RANGE SCAN INDEX (UNIQUE) REPORT2.Z_PK_JOBCODE_PROMPT_TBL Cost: 2 Bytes: 12 Cardinality: 1
24 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOB_RPT Cost: 11 Bytes: 1,349,920 Cardinality: 51,920
23 INDEX RANGE SCAN INDEX REPORT2.DRV_PS_JOB_JOBCODE_RPT Cost: 2 Cardinality: 8

user550024 wrote:
I am really surprised that the stats for the good SQL are a little old. I just computed the stats for the bad SQL, so they are up to date.
There is something terribly wrong.

Not necessarily. Just using the default stats collection, I've seen a few cases of things suddenly going wrong. As the data increases, it gets closer to an edge case where the inadequacy of the statistics convinces the optimizer to pick a wrong plan. To fix it, I could just go into dbconsole, set the stats back to a time when they worked, and lock them. In most cases it's definitely better to figure out what is really going on, though, to give the optimizer better information to work with. Aside from the value of learning how to do it, some cases are not so simple. Also, many think the default settings of the database statistics collection may be wrong in general (in 10.2.x, at least). So much depends on your application and data that you can't make too many generalizations. You have to look at the evidence and figure it out. There is still a steep learning curve for the tools to look at the evidence. People are here to help with that.
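Restoring and then locking statistics as described can be sketched like this (SCOTT/EMP are placeholder names; DBMS_STATS.RESTORE_TABLE_STATS and LOCK_TABLE_STATS exist from 10g onward):

```sql
BEGIN
  -- Roll table stats back to a point in time when plans were good
  DBMS_STATS.RESTORE_TABLE_STATS(
    ownname         => 'SCOTT',
    tabname         => 'EMP',
    as_of_timestamp => SYSTIMESTAMP - INTERVAL '7' DAY);

  -- Prevent the automatic stats job from overwriting them
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');
END;
/
```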
Most of the time it works better than a dumb rule based optimizer, but at the cost of a few situations where people are smarter than computers. It's taken a lot of years to get to this point. -
Facing Merge Join Cartesian in the explain plan after adding gl periods
Hi All
I have added the gl_periods table to the query below, checked the explain plan, and it shows a merge join cartesian. This query is taking a long time to fetch the results.
Need help ASAP. Please let me know where I am going wrong; any suggestions will be appreciated.
SELECT gljh.period_name, gljh.ledger_id, gljh.je_source,
glcc.segment2,
SUM ( NVL (gljl.accounted_dr, 0)
- NVL (gljl.accounted_cr, 0)
) total_amt,
gljh.currency_code
FROM gl_je_headers gljh,
gl_je_lines gljl,
gl_code_combinations glcc,
gl_periods gps
WHERE 1=1
AND gljh.period_name = gps.period_name
AND gljl.period_name = gps.period_name
AND gps.period_set_name = 'MCD_MONTH_'
AND gps.start_date >= :p_from_date
AND gps.start_date <= :p_to_date
AND gljh.ledger_id = :p_ledger_id
AND gljh.je_header_id = gljl.je_header_id
AND gljl.code_combination_id = glcc.code_combination_id
AND glcc.segment2 = '10007'--get_segment2_rec.flex_value
AND gljh.currency_code <> 'STAT'
GROUP BY gljh.je_source,
gljh.period_name,
glcc.segment2,
gljh.ledger_id,
gljh.currency_code
HAVING SUM ( NVL (gljl.accounted_dr, 0)
- NVL (gljl.accounted_cr, 0)
) <> 0;
Plan
SELECT STATEMENT ALL_ROWSCost: 73,146 Bytes: 2,266 Cardinality: 22
15 FILTER
14 HASH GROUP BY Cost: 73,146 Bytes: 2,266 Cardinality: 22
13 FILTER
12 NESTED LOOPS Cost: 73,145 Bytes: 61,079 Cardinality: 593
9 NESTED LOOPS Cost: 31,603 Bytes: 1,452,780 Cardinality: 20,754
6 MERGE JOIN CARTESIAN Cost: 2,108 Bytes: 394,181 Cardinality: 9,167
2 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_PERIODS Cost: 4 Bytes: 31 Cardinality: 1
1 INDEX RANGE SCAN INDEX (UNIQUE) GL.GL_PERIODS_U2 Cost: 1 Cardinality: 64
5 BUFFER SORT Cost: 2,104 Bytes: 683,988 Cardinality: 56,999
4 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 2,104 Bytes: 683,988 Cardinality: 56,999
3 INDEX RANGE SCAN INDEX GL.GL_CODE_COMBINATIONS_N2 Cost: 155 Cardinality: 56,999
8 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_JE_LINES Cost: 18 Bytes: 54 Cardinality: 2
7 INDEX RANGE SCAN INDEX GL.GL_JE_LINES_N1 Cost: 3 Cardinality: 37
11 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_JE_HEADERS Cost: 2 Bytes: 33 Cardinality: 1
10 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_JE_HEADERS_U1 Cost: 1 Cardinality: 1
Thanks
Chandra

Lots of things come into play when you're tuning a query.
An (unformatted) execution plan isn't enough.
Tuning takes time and an understanding of how (a lot of) things work; there is no ASAP in the world of tuning.
Please post other important details, like your database version, optimizer settings, how/when table statistics are gathered, etc.
So read the following informative threads (and please take your time, this really is important stuff), and adjust your thread as needed.
That way you'll have a bigger chance of getting help that makes sense...
Your DBA should be able to help you with this as well.
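As a starting point for such a posting, a formatted plan with predicate information can be generated with DBMS_XPLAN. A sketch; the statement below is deliberately shortened from the one posted above, and 'gl_tune' is a made-up statement id:

```sql
-- EXPLAIN PLAN stores the estimated plan in PLAN_TABLE without running the query
EXPLAIN PLAN SET STATEMENT_ID = 'gl_tune' FOR
SELECT gljh.period_name, gljh.ledger_id
  FROM gl_je_headers gljh
 WHERE gljh.ledger_id = :p_ledger_id;

-- Format it, including the predicate and note sections
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'gl_tune', 'ALL'));

-- If the statement has already been executed, the actual plan from the
-- cursor cache is even more useful (NULLs mean "last statement, this session"):
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Posting the DBMS_XPLAN output (rather than an unformatted Toad export) keeps the predicate section, which is usually where the interesting information is.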
Re: HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html -
Hello ALL,
I have the following explain plan; can anybody explain the meaning of this explain plan?
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE','111'));
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost |
| 0 | SELECT STATEMENT | | 401 | 25263 | | 751K|
| 1 | MERGE JOIN SEMI | | 401 | 25263 | | 751K|
| 2 | SORT JOIN | | 17M| 820M| 2108M| 75297 |
| 3 | TABLE ACCESS FULL | TABLE1 | 17M| 820M| | 3520 |
|* 4 | SORT UNIQUE | | 275M| 3412M| 10G| 676K|
| 5 | VIEW | VW_NSO_1 | 275M| 3412M| | 3538 |
|* 6 | HASH JOIN | | 275M| 7874M| | 3538 |
|* 7 | TABLE ACCESS FULL| TABLE2 | 16 | 128 | | 2 |
|* 8 | TABLE ACCESS FULL| TABLE1 | 17M| 360M| | 3520 |
Predicate Information (identified by operation id):
4 - access("TABLE1"."POSITION"="VW_NSO_1"."$nso_col_1")
filter("TABLE1"."POSITION"="VW_NSO_1"."$nso_col_1")
6 - access("TABLE2"."VERSION_NO"="TABLE1"."VERSION_NO")
7 - filter("TABLE2"."STATIC_UPD_FLAG"='N')
8 - filter("TABLE1"."DATETIME_INSERTED">TO_DATE('0004-01-01 00:00:00',
'yyyy-mm-dd hh24:mi:ss'))
Note: cpu costing is off
26 rows selected.
SQL>

There is a section in the manual on interpreting the output of explain plan: http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/ex_plan.htm#16972 Tom Kyte also discusses interpreting the plan: http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:231814117467#7344298017927 (page down about halfway, where he starts his book excerpt).
Rows, bytes, and temp space are the cost-based optimizer's estimates of the number of rows, bytes, and temp space that will be touched (or consumed) by each operation. The cost is an internal number that has no significance to you when you're reading an explain plan; it does have some significance when you are examining an event 10046 trace.
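Those estimates can also be queried directly from the plan table. A sketch, assuming the standard PLAN_TABLE layout and the statement id '111' used above:

```sql
-- CARDINALITY, BYTES and TEMP_SPACE are per-step optimizer estimates;
-- COST is cumulative, i.e. a parent operation includes its children's cost.
SELECT id, operation, options, object_name,
       cardinality, bytes, temp_space, cost
  FROM plan_table
 WHERE statement_id = '111'
 ORDER BY id;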
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
Hello
I have a problem with a SQL statement. On the production database the plan goes over the index, and that's not so fast. On the test system the plan uses a bitmap conversion from rowids, which is faster. What can I do so that the production plan also uses the bitmap conversion from rowids?
Thanks.
roger

Select:
select * from info a
where datum = '19-Jan-2011'
and gruppen = 90
and dokument = 90
and Nummer not in (select nummer from info b
where b.datum = '19-Jan-2011'
and b.gruppen = 3
and b.dokument = 1)
Test-System:
Plan
SELECT STATEMENT ALL_ROWS  Cost: 342  Bytes: 340  Cardinality: 1
  15 HASH JOIN ANTI  Cost: 342  Bytes: 340  Cardinality: 1
    7 TABLE ACCESS BY INDEX ROWID TABLE INFO  Cost: 150  Bytes: 318  Cardinality: 1
      6 BITMAP CONVERSION TO ROWIDS
        5 BITMAP AND
          2 BITMAP CONVERSION FROM ROWIDS
            1 INDEX RANGE SCAN INDEX INFO_PERDATUM-IDX  Cost: 11  Cardinality: 3.151
          4 BITMAP CONVERSION FROM ROWIDS
            3 INDEX RANGE SCAN INDEX INFO_DOKUMENT_IDX  Cost: 134  Cardinality: 3.151
    14 TABLE ACCESS BY INDEX ROWID TABLE INFO  Cost: 192  Bytes: 22  Cardinality: 1
      13 BITMAP CONVERSION TO ROWIDS
        12 BITMAP AND
          9 BITMAP CONVERSION FROM ROWIDS
            8 INDEX RANGE SCAN INDEX INFO_PERDATUM-IDX  Cost: 11  Cardinality: 3.151
          11 BITMAP CONVERSION FROM ROWIDS
            10 INDEX RANGE SCAN INDEX INFO_DOKUMENT_IDX  Cost: 175  Cardinality: 3.151
Prod-System:
Plan
SELECT STATEMENT ALL_ROWS  Cost: 7.436  Bytes: 339  Cardinality: 1
  5 HASH JOIN RIGHT ANTI  Cost: 7.436  Bytes: 339  Cardinality: 1
    2 TABLE ACCESS BY INDEX ROWID TABLE INFO  Cost: 3.718  Bytes: 1.056  Cardinality: 48
      1 INDEX RANGE SCAN INDEX INFO_PERDATUM-IDX  Cost: 775  Cardinality: 3.801
    4 TABLE ACCESS BY INDEX ROWID TABLE INFO  Cost: 3.718  Bytes: 15.216  Cardinality: 48
      3 INDEX RANGE SCAN INDEX INFO_PERDATUM-IDX  Cost: 775  Cardinality: 3.801
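Plan differences between test and production are usually driven by different object statistics, so comparing and refreshing them is the first step. A sketch (table name taken from the plans above; the hint is only an experiment, not a fix):

```sql
-- Compare what the optimizer sees on both systems
SELECT table_name, num_rows, blocks, last_analyzed
  FROM user_tables
 WHERE table_name = 'INFO';

-- Refresh the statistics on production if they are stale or missing
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'INFO',
    cascade => TRUE);  -- also gathers index statistics
END;
/

-- As an experiment, ask the optimizer to combine multiple indexes
-- via bitmap conversion, as the test system does
SELECT /*+ INDEX_COMBINE(a) */ *
  FROM info a
 WHERE datum = '19-Jan-2011'
   AND gruppen = 90
   AND dokument = 90;
```

If the hinted plan is fast on production but the unhinted one is not, that points strongly at a statistics (or optimizer parameter) difference between the two systems.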
Inaccurate EXPLAIN PLAN...!!!
This is frightening... I've been working on a nasty query on a poorly designed data model and checking my work by hitting the Explain Plan button in SQLDeveloper. I was scratching my head over tons of hash joins and such, when all of a sudden I had a query with five tables showing an explain plan with only three!
I'll try to post the screen shot if I can.
Is there something that can cause this, and how can I fix it? Maybe the plan table is out of date?

What's the status of the plan_table?
A new 10gR2 database creates it as a GLOBAL TEMPORARY TABLE, so there shouldn't be any risk of tripping over any other sessions.
Otherwise there's always the risk of two people (or less likely, one person and multiple sessions) doing an EXPLAIN at the same time and picking up the wrong one.
If you don't have a GLOBAL TEMPORARY TABLE (and can't turn it into one), go the old-fashioned route and use EXPLAIN PLAN SET STATEMENT_ID = 'make_up_an_id' FOR SELECT....
And use that statement id when querying the explain plan -
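Spelled out, that old-fashioned route looks like this (a sketch; 'my_stmt' is a made-up statement id and the query is a placeholder):

```sql
EXPLAIN PLAN SET STATEMENT_ID = 'my_stmt' FOR
SELECT * FROM dual;

-- Pick up exactly that plan, even if other sessions
-- have written rows into the same shared plan table
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'my_stmt'));
```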
Slow query results for simple select statement on Exadata
I have a table with 30+ million rows in it which I'm trying to develop a cube around. When the cube processes (SQL Analysis), it queries back 10k rows every 6 seconds or so. I ran the same query SQL Analysis runs to grab the data in Toad and exported the results, and the timing is the same: 10k rows every 6 seconds or so.
I ran an execution plan it returns just this:
Plan
SELECT STATEMENT ALL_ROWS  Cost: 136,019  Bytes: 4,954,594,096  Cardinality: 33,935,576
  1 TABLE ACCESS STORAGE FULL TABLE DMSN.DS3R_FH_1XRTT_FA_LVL_KPI  Cost: 136,019  Bytes: 4,954,594,096  Cardinality: 33,935,576

I'm not sure if there is a setting in Oracle (I'm new to the Oracle environment) which can limit performance by connection or user, but if there is, what should I look for and how can I check it?
The Oracle version I'm using is 11.2.0.3.0 and the server is quite large as well (Exadata platform). I'm curious because I've seen SQL Server return 100k rows every 10 seconds before; I would assume an Exadata system should return rows a lot quicker. How can I check where the bottleneck is?
Edited by: k1ng87 on Apr 24, 2013 7:58 AM

k1ng87 wrote:
I've noticed the same querying speed using Toad (export to CSV)

That's not really a good way to test performance. Doing that through Toad, you are getting the database to read the data from its disks (you don't have a choice in that), shifting bulk amounts of data over your network (that could be a considerable bottleneck), letting Toad format the data into CSV (processing the data adds a little bottleneck), and then writing the data to another hard disk (more disk I/O = more bottleneck).
I don't know Exadata, but I imagine it doesn't quite incorporate all those bottlenecks.

and during cube processing via SQL Analysis. How can I check to see if it's my network speed that's affecting it?

Speak to your technical/networking team, who should be able to trace network activity/packets and see what's happening in that respect.

Is that even possible as our system resides off site, so the traffic is going through multiple networks.

Ouch... yes, that could certainly be responsible.

I don't think it's the network though, because when I run both at the same time, they both are still querying at about 10k rows every 6 seconds.

I don't think your performance measuring is accurate. What happens if you actually run the cube in Exadata rather than using Toad or SQL Analysis (which I assume is on your client machine)?
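One way to take the client and the network out of the measurement entirely is to time the scan on the server itself. A sketch (table name taken from the plan above; COUNT(*) forces the full scan but sends only one row back):

```sql
-- Runs completely inside the database: no bulk data crosses the network,
-- so the elapsed time reflects scan speed rather than client fetch speed.
SET TIMING ON
SELECT COUNT(*)
  FROM DMSN.DS3R_FH_1XRTT_FA_LVL_KPI;
```

If this finishes far faster than 6 seconds per 10k rows would predict, the bottleneck is in fetching and transporting rows to the client, not in the storage full scan.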
Select statement performance improvement.
Hi Gurus,
I am new to ABAP.
I have the below SELECT statement:
SELECT mandt msgguid pid exetimest
  INTO TABLE lt_key
  UP TO lv_del_rows ROWS
  FROM (gv_master)
  WHERE
*   msgstate IN rt_msgstate
*   AND ( adapt_stat = cl_xms_persist=>co_stat_adap_processed
*   OR adapt_stat = cl_xms_persist=>co_stat_adap_undefined )
*   AND itfaction = ls_itfaction
*   AND msgtype = cl_xms_persist=>co_async
*   AND
    exetimest LE lv_timestamp
    AND exetimest GE last_ts
    AND reorg = cl_xms_persist=>co_reorg_ini
  ORDER BY mandt itfaction reorg exetimest.
Can anyone help me improve the performance of this statement?
Here is the sql trace for the statement:
SELECT
  /*+
    FIRST_ROWS (100)
  */
  "MANDT" , "MSGGUID" , "PID" , "EXETIMEST"
FROM
  "SXMSPMAST"
WHERE
  "MANDT" = :A0 AND "EXETIMEST" <= :A1 AND "EXETIMEST" >= :A2 AND "REORG" = :A3
ORDER BY
  "MANDT" , "ITFACTION" , "REORG" , "EXETIMEST"
Execution Plan
SELECT STATEMENT ( Estimated Costs = 3 , Estimated #Rows = 544 )
4 SORT ORDER BY
( Estim. Costs = 2 , Estim. #Rows = 544 )
Estim. CPU-Costs = 15.671.852 Estim. IO-Costs = 1
3 FILTER
2 TABLE ACCESS BY INDEX ROWID SXMSPMAST
( Estim. Costs = 1 , Estim. #Rows = 544 )
Estim. CPU-Costs = 11.130 Estim. IO-Costs = 1
1 INDEX RANGE SCAN SXMSPMAST~TST
Search Columns: 2
Estim. CPU-Costs = 3.329 Estim. IO-Costs = 0
Do I need to create any new index? Do I need to remove the ORDER BY clause?
Thanks in advance.

Why is there an
UP TO lv_del_rows ROWS
together with an ORDER BY?
The database has to find all rows fulfilling the condition, but returns only the top lv_del_rows of them in sort order.
Therefore it can take a while.
For your index, always put the client field in the first position.
Actually, I am not really convinced by your logic:
itfaction reorg exetimest
itfaction is first in the sort order, so all records with the smallest itfaction will come first, but itfaction is not specified in the WHERE clause; is this really what you want?
Change the index to mandt reorg exetimest
and change the ORDER BY to mandt reorg exetimest,
then it will become fast.
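With the equality columns first and the range column last, one index can both filter the rows and deliver them already in ORDER BY order, so no extra sort is needed. A sketch in plain SQL (in an SAP system the index would actually be created through the ABAP Dictionary, and the name Z01 is made up):

```sql
-- mandt and reorg are equality predicates, exetimest is the range predicate;
-- this column order matches both the WHERE clause and the adjusted ORDER BY.
CREATE INDEX sxmspmast_z01
  ON sxmspmast (mandt, reorg, exetimest);
```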