Inlist operator vs join
Dear Sirs, I hope this question is new to this forum.
I've found the following:
select count(*) from TEST_TAB s
where hash_key = :hk
and ( R$APP in (x1, x2, ..., x10)
      or R_CALLS_LOG in (y1, y2, ..., y756) )
run time: 380 sec
select count(*) from TEST_TAB s
where hash_key = :hk
and ( R$APP in (x1, x2, ..., x10)
      or R_CALLS_LOG in (select n from BUF_TAB) )
where BUF_TAB contains the list of values from the first query
run time: 17 sec.
Description of TEST_TAB
SQL> desc TEST_TAB
Name Type Nullable Default Comments
LDN NUMBER
CID NUMBER Y
R_CALLS_LOG NUMBER Y
TRANSAC NUMBER Y
R$APP NUMBER Y
STRT DATE Y
ROW_ID ROWID Y
DEST_HASH_KEY NUMBER Y
HASH_KEY NUMBER Y
INDEXES: NO
LIST-PARTITIONED: BY "HASH_KEY"
RECORDS in tested section: ~ 1.5 mln
It seems that the INLIST operator in the WHERE clause works dramatically more slowly than a join against the same list stored in a separate table.
Is this true in all cases, or is there an optimal number of elements in the filter list below which the INLIST operator is still the better choice?
Thanks in advance.
user618999 wrote:
Is it true for all cases when the question has only one parameter (the number of items in the list), while
TEST_TAB has no indexes...
As rgeier pointed out, don't assume that one query technique is better than another in all circumstances. You can see over time what usually works, but nothing is always true. You've found a situation where the inlist iterator works less efficiently, but under other circumstances the join might not work as well.
An indexed lookup for a table join or subquery is usually a good choice for faster retrieval than a full table scan - unless you need to access all of the rows in the list, in which case it will probably be slower. Without an index an in list might be faster.
I personally don't use IN lists a lot, preferring joins or correlated EXISTS subqueries when the subqueries are properly indexed. If I remember correctly, a hard-coded IN list has a limit of 1,000 items anyway. IN lists might be more efficient if they return a very small set of rows (perhaps fewer than 5), and if the underlying query (if any) only has to be executed once to produce the list.
Adding to the confusion of tuning is the fact that recent versions of Oracle can perform query transformations before execution. That is, the query can physically change before execution if the optimizer thinks the change will work better. Thus, IN lists and table joins can sometimes be converted to each other. In some cases entire table joins can be added or eliminated.
Edited by: riedelme on Oct 1, 2009 12:55 PM
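To make the tradeoff concrete, here is a minimal sketch of the two query shapes from the original post, run against SQLite via Python. The table and column names (test_tab, buf_tab, r_app, r_calls_log) and all data are made up to mimic the poster's schema; this only demonstrates that the literal IN list and the staged-table subquery are logically equivalent, not how any particular optimizer will execute them.

```python
import sqlite3

# Toy stand-ins for TEST_TAB and BUF_TAB (names and data are illustrative)
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test_tab (hash_key INTEGER, r_app INTEGER, r_calls_log INTEGER);
    CREATE TABLE buf_tab (n INTEGER PRIMARY KEY);
""")
conn.executemany("INSERT INTO test_tab VALUES (1, ?, ?)",
                 [(i % 20, i % 1000) for i in range(5000)])
conn.executemany("INSERT INTO buf_tab VALUES (?)",
                 [(v,) for v in range(0, 1000, 2)])

in_list = ", ".join(str(v) for v in range(0, 1000, 2))

# Variant 1: hard-coded IN list (the slow 380 s shape in the post)
q1 = f"""SELECT COUNT(*) FROM test_tab
         WHERE hash_key = 1 AND (r_app IN (1, 2, 3)
                                 OR r_calls_log IN ({in_list}))"""

# Variant 2: same values staged in a table and probed via a subquery
q2 = """SELECT COUNT(*) FROM test_tab
        WHERE hash_key = 1 AND (r_app IN (1, 2, 3)
                                OR r_calls_log IN (SELECT n FROM buf_tab))"""

c1 = conn.execute(q1).fetchone()[0]
c2 = conn.execute(q2).fetchone()[0]
assert c1 == c2  # same answer; only the access path differs
```

The staged-table form also sidesteps the 1,000-item IN-list limit mentioned above, since the list lives in a table rather than in the statement text.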
Similar Messages
-
Error when using "inlist operator" in the query filter of Webi 3.1
Hi,
We are currently in the process of migrating Deski to webi (BOXI 3.1).
The problem is that Deski uses the "inlist" operator, which works fine, but after migrating to Webi the inlist operator in the query filter throws the error below:
*Error Message :*
A database error occurred. The database error text is: ORA-00907: missing right parenthesis. (WIS 10901)
Appreciate your assistance on this.
Thanks !
Regards,
Perialt
Karthik,
Yes, I am seeing an additional parenthesis in the Webi SQL query.
For example, please consider the Product table below:
SELECT
Product.ID,
Product.Name
FROM Product
WHERE
Product.Name IN ( @Prompt('4) Name:','C','Product\Name-M',multi,free) )
As a workaround in Custom SQL, if I remove the parentheses, the query below runs fine in Webi:
SELECT
Product.ID,
Product.Name
FROM Product
WHERE
Product.Name IN @Prompt('4) Name:','C','Product\Name-M',multi,free)
But I want a permanent solution. -
Formula as inlist operator parameter
hi all,
i'm using webi XI 3.1 on an OLAP universe.
My question is this:
is it possible to use a formula as a parameter of inlist operator?
like this
[measure] where([dimension] inlist([formula]))
I want to use a formula that lists the values of a dimension in every section on that dimension
thank you very much
Alessandro
It's not possible, because the InList parameters take the value as a string. You can't give a formula as the string value. Suppose you gave a formula like
measure where (dimension inlist( measure where dim=" "))
The inlist parameter takes a string value, so the entire formula is treated as a string instead of its return value.
Edited by: K.sunil on Oct 24, 2011 1:08 PM -
Mappings between table operator and joiner operator
HI all,
I am new to owb. I am trying to create a mapping in the OWB Mapping Editor where the scenario looks like :
table1 operator ---------> joiner file operator
I looked at earlier mappings' SQL code and found that there is a WHERE clause condition governing such a scenario, for example:
where table1.column value = "something", then load into the joiner operator.
But in the mapping editor, I cannot find any way to add this condition by clicking on the column name of table1. Please help.
thank you.
Hi
A joiner will make one output from 2 or more inputs. A joiner has a join condition: if you select the operator itself (not a group or an attribute within the operator), the property inspector will show a join condition for detailing the 'where' clause.
If you want to filter a single input, then use a filter operator instead. Likewise, the filter operator has a filter condition: if you select the operator itself (not a group or an attribute within the operator), the property inspector will show a filter condition for detailing the 'where' clause.
Cheers
David -
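The joiner/filter distinction above maps directly onto generated SQL: a joiner's join condition becomes the predicate linking the two inputs, while a filter condition restricts a single input. A hedged sketch in SQLite via Python (table1, table2, and the data are made up, not OWB-generated code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER, col_value TEXT);
    CREATE TABLE table2 (id INTEGER, descr TEXT);
    INSERT INTO table1 VALUES (1, 'something'), (2, 'other');
    INSERT INTO table2 VALUES (1, 'a'), (2, 'b');
""")

# The ON clause plays the role of the joiner's join condition;
# the single-input restriction belongs in a filter operator, which
# ends up as an extra WHERE predicate in the generated SQL.
rows = conn.execute("""
    SELECT t1.id, t2.descr
    FROM table1 t1
    JOIN table2 t2 ON t1.id = t2.id      -- joiner's join condition
    WHERE t1.col_value = 'something'     -- filter operator's condition
""").fetchall()
assert rows == [(1, 'a')]
```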
How to change operator of join conditions in where clause?
Hello
I have a situation... I want to change the operator between join conditions in the WHERE clause when those join conditions are not from the same join.
For example, I have the following schema:
Dim1 ------ DimA -------Fact1
Dim1-------DimB -----Fact1
So DimA and DimB are aliases of one dimension table, but the joins are different.
Now if I run this model, what I will get in the where clause of the query is:
Where Dim1 = DimA and Dim1 = DimB and DimA= Fact1 and DimB = fact1.
Is there a way I can change these "and" operator to "OR", so that the where clause would look like this: Where Dim1 = DimA and Dim1 = DimB and DimA= Fact1 OR DimB = fact1?
This is different from simply changing the join operator within the same join, because these are different joins and I'd like to control how they relate to each other..
Please help
Thanks
Sometimes business rules are complex, so there isn't always a way to simplify things. Is your issue that it's complex and error-prone, or is it performance due to the OR clauses?
One possibility that will at least make it easier to test and debug is something like this: (pseudocode)
FROM Table1 INNER JOIN Table2 ON x = y  -- etc.
CROSS APPLY
    (SELECT CASE WHEN a = b AND (c = d OR e = f) THEN 1 ELSE 0 END AS situation1,
            CASE WHEN h = i OR j = k             THEN 1 ELSE 0 END AS situation2,
            CASE WHEN l = m                      THEN 1 ELSE 0 END AS situation3
    ) AS CA_Logic_Simplifier
WHERE situation1 = 1 AND situation2 = 1 AND situation3 = 1
Although you could say, "Hey, this is basically doing the same thing as before", this approach would be far easier to test and debug, because you can at a glance look at the values for situation1, 2, 3, etc. to see where errors are being introduced.
The cross apply makes the columns situation1/2/3 "instantiated", so they are usable in the where clause. Divide and conquer. -
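SQLite has no CROSS APPLY, but the same "instantiate the flag columns, then filter on them" idea can be sketched with computed columns in a derived table. The table `facts` and its columns are invented purely to illustrate the pattern from the pseudocode above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE facts (a INT, b INT, c INT, d INT, h INT, i INT);
    INSERT INTO facts VALUES
        (1, 1, 2, 2, 5, 5),   -- satisfies both situations
        (1, 1, 2, 3, 5, 5),   -- fails situation1 (c <> d)
        (1, 1, 2, 2, 5, 6);   -- fails situation2 (h <> i)
""")

# Compute each business-rule flag once in a derived table, then filter:
# each situation column can be inspected on its own while debugging.
rows = conn.execute("""
    SELECT a, situation1, situation2
    FROM (SELECT a,
                 CASE WHEN a = b AND c = d THEN 1 ELSE 0 END AS situation1,
                 CASE WHEN h = i           THEN 1 ELSE 0 END AS situation2
          FROM facts)
    WHERE situation1 = 1 AND situation2 = 1
""").fetchall()
assert rows == [(1, 1, 1)]
```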
I've been following the
Field Engineer example project from the Windows Development Center to guide into mapping many-to-many relationships on my model. What's been bothering me is how to insert entries into many-to-many relationships.
Take my model as example:
An Organization can have many Users.
A User can belong to many Organizations.
My model class for User looks like this:
public class User
{
    public User()
    {
        this.Organizations = new HashSet<Organization>();
    }
    public int Id { get; set; }
    public string Email { get; set; }
    public string Password { get; set; }
    public string PasswordSalt { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public bool Active { get; set; }
    public string Picture { get; set; }
    public string PictureMimeType { get; set; }
    public bool Confirmed { get; set; }
    public string ConfirmationKey { get; set; }
    [JsonIgnore]
    public virtual ICollection<Organization> Organizations { get; set; }
}
My UserDTO class is this:
public class UserDTO : EntityData
{
    public string Email { get; set; }
    public string Password { get; set; }
    public string PasswordSalt { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public bool Active { get; set; }
    public string Picture { get; set; }
    public string PictureMimeType { get; set; }
    public bool Confirmed { get; set; }
    public string ConfirmationKey { get; set; }
}
My Organization class:
public class Organization
{
    public Organization()
    {
        this.Users = new List<User>();
    }
    public string Id { get; set; }
    public string Title { get; set; }
    public string LegalName { get; set; }
    public string TaxReference { get; set; }
    [JsonIgnore]
    public virtual ICollection<User> Users { get; set; }
}
My OrganizationDTO class:
public class OrganizationDTO : EntityData
{
    public string Title { get; set; }
    public string LegalName { get; set; }
    public string TaxReference { get; set; }
    public virtual ICollection<UserDTO> Users { get; set; }
}
With those classes in mind I created the controller classes, and I mapped the DTO and model classes using AutoMapper as follows:
cfg.CreateMap<Organization, OrganizationDTO>()
.ForMember(organizationDTO => organizationDTO.Id,
map => map.MapFrom(organization => MySqlFuncs.LTRIM(MySqlFuncs.StringConvert(organization.Id))))
.ForMember(organizationDTO => organizationDTO.Users,
map => map.MapFrom(organization => organization.Users));
cfg.CreateMap<OrganizationDTO, Organization>()
.ForMember(organization => organization.Id,
map => map.MapFrom(organizationDTO => MySqlFuncs.LongParse(organizationDTO.Id)))
.ForMember(organization => organization.Users,
map => map.Ignore());
Using Fluent API, I defined the relationship between these two entities using an EntityTypeConfiguration class like this:
public class UserEntityConfiguration : EntityTypeConfiguration<User>
{
    public UserEntityConfiguration()
    {
        this.Property(u => u.Id).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
        this.HasMany<Organization>(u => u.Organizations).WithMany(o => o.Users).Map(uo =>
        {
            uo.MapLeftKey("UserId");
            uo.MapRightKey("OrganizationId");
            uo.ToTable("OrganizationsUsers");
        });
    }
}
I created TableController classes to handle UserDTO and OrganizationDTO. I have no problem inserting new Users or new Organizations, but as far as I understand, the endpoints of each TableController class only allow me to add Users or Organizations individually.
How can I create an entry in the OrganizationsUsers table?
I was thinking a PATCH request could be a way to do this, but is it the right way? Do I have to define a TableController for this? How can I expose the Insert, Update, Select and Delete operations for the elements in this relationship? What would be the JSON
to be sent to the endpoints?
Hi,
if you accept lists to hold the LOV data, then here is an example : http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html#CodeCornerSamples --> sample 70
Frank -
How can we change the default inlist operator in condition pane to equalto
When a user creates a condition in a Web Intelligence report, the
default operator is "In List". Can this default be changed to "Equal
To" for all users?
No.
Sincerely,
Ted Ueda -
Change a MINUS set operation to JOINS?
Hi guys,
I have a question about an SQL statement I have to write. The complexity is high (lots of tables), but the problem can be reduced to this. Imagine I have the following query:
SELECT t.x
FROM the_table t
WHERE t.y = 'a'
MINUS
SELECT t.x
FROM the_table t
WHERE t.y = 'b'
In my original statement, I don't have only one table but many. But the issue is the same: I select a set of data and I MINUS the same statement with different criteria. I tried some OR combinations but never got the same result...
Does anyone have an idea?
Thanks,
Please post a realistic example that demonstrates your problem. As Nikolay stated, your MINUS isn't even needed, which makes what you posted useless for understanding what you need to do.
So for starters post the following:
1. What Oracle version you are using for each server.
2. How many servers are involved.
3. Is this a new process or a current process that has performance or other issues? If a current process how is it being done now? What are the issues?
4. A statement of exactly what you are trying to do; not how you think it ought to be done but what the goal is.
5. A realistic example with actual tables, columns and queries
6. How many columns are there in the result set?
A sample set of source data and the set of result data you want to produce from that source. -
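For what it's worth, a MINUS (EXCEPT in most engines, including SQLite) over the same table can usually be rewritten as an anti-join, for example with NOT EXISTS. A sketch of the equivalence on a toy version of `the_table` from the question (the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE the_table (x INT, y TEXT);
    INSERT INTO the_table VALUES (1,'a'), (2,'a'), (2,'b'), (3,'b');
""")

# Set-difference form: x values seen with y='a' minus those seen with y='b'
minus = conn.execute("""
    SELECT x FROM the_table WHERE y = 'a'
    EXCEPT
    SELECT x FROM the_table WHERE y = 'b'
""").fetchall()

# Anti-join form: keep an 'a' row only if no 'b' row shares its x.
# DISTINCT matches EXCEPT's set semantics (duplicates removed).
anti = conn.execute("""
    SELECT DISTINCT t.x
    FROM the_table t
    WHERE t.y = 'a'
      AND NOT EXISTS (SELECT 1 FROM the_table u
                      WHERE u.y = 'b' AND u.x = t.x)
""").fetchall()

assert sorted(minus) == sorted(anti) == [(1,)]
```

Whether the rewrite helps performance depends on the engine and indexes; the point is only that the two forms return the same set.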
Set operator NE in Database View creation in join condition
Hi Experts,
I have a requirement to use the NE (not equal) operator in the join condition of a database view. Could you please help me set this operator?
Join condition :
Ex : BSAK-AUGBL NE BSAK-BELNR.
As you know, the default operator is '='. I want to set NE in place of '='.
Thanks,
Hi Chinna,
Check whether there is any possibility to include more key fields like bukrs, lifnr, gjahr etc. in the WHERE condition, so that your query may run faster. Then there won't be any need to create the view.
Hope this helps.
Please reward if useful.
Thanks,
Srinivasa -
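The underlying limitation here is that database views only support equality joins; in plain SQL the BSAK-AUGBL NE BSAK-BELNR condition is just a WHERE predicate, since both fields are on the same table. A sketch against a toy BSAK in SQLite via Python (columns and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bsak (augbl TEXT, belnr TEXT);
    INSERT INTO bsak VALUES ('100','100'), ('101','200'), ('102','300');
""")

# Rows where the clearing document differs from the document number,
# i.e. the AUGBL <> BELNR condition the view cannot express:
rows = conn.execute(
    "SELECT augbl, belnr FROM bsak WHERE augbl <> belnr").fetchall()
assert sorted(rows) == [('101', '200'), ('102', '300')]
```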
Max sal using set or join operations
Hi,
How do I write a SQL query to retrieve the max sal from an emp table without using the MAX() function,
using only set operations or join conditions?
cheers..
You can use an analytic function for this. You can use either ROW_NUMBER, RANK or DENSE_RANK:
with my_emp
as
(
  select empno, ename, sal, rank() over(order by sal desc) rk
  from emp
)
select empno, ename, sal
from my_emp
where rk = 1 -
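Both the analytic form from the reply and a pure join alternative (which matches the "set or join operations only" constraint in the question) can be checked quickly in SQLite via Python; the emp rows below are the usual SCOTT-schema style toy data, not anyone's real table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (empno INT, ename TEXT, sal INT);
    INSERT INTO emp VALUES (7839,'KING',5000), (7788,'SCOTT',3000),
                           (7902,'FORD',3000), (7369,'SMITH',800);
""")

# Analytic approach from the reply: rank by sal descending, keep rank 1
ranked = conn.execute("""
    WITH my_emp AS (
        SELECT empno, ename, sal,
               RANK() OVER (ORDER BY sal DESC) AS rk
        FROM emp)
    SELECT empno, ename, sal FROM my_emp WHERE rk = 1""").fetchall()

# Join-only alternative: a row has the max sal if no row out-earns it
joined = conn.execute("""
    SELECT e.empno, e.ename, e.sal FROM emp e
    WHERE NOT EXISTS (SELECT 1 FROM emp x WHERE x.sal > e.sal)""").fetchall()

assert sorted(ranked) == sorted(joined) == [(7839, 'KING', 5000)]
```

Note that with tied salaries RANK (and the NOT EXISTS form) would return every tied row, while ROW_NUMBER would arbitrarily keep one.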
Query Degradation--Hash Join Degraded
Hi All,
I found a query degradation issue. I am on 10.2.0.3.0 (Sun OS) with optimizer_mode=ALL_ROWS.
This is a data warehouse db.
All 3 tables involved are partitioned tables (with daily partitions). Partitions are created in advance and ELT jobs load bulk data into the daily partitions.
I have checked that the CBO is not using the local indexes created on them, which I believe is appropriate, because when I used an INDEX hint the elapsed time increased.
I checked giving an index hint for each table one by one but didn't get any performance improvement.
Partitions are loaded daily, and after loading, partition-level stats are gathered with dbms_stats.
We are collecting stats at partition level (granularity=>'PARTITION'). Even after collecting global stats, there is no change in the access pattern. The stats gather command is given below.
PROCEDURE gather_table_part_stats(i_owner_name, i_table_name, i_part_name, i_estimate := DBMS_STATS.AUTO_SAMPLE_SIZE, i_invalidate IN VARCHAR2 := 'Y', i_debug := 'N')
Only SOT_KEYMAP.IPK_SOT_KEYMAP is GLOBAL. All other indexes are LOCAL.
Earlier, we were having a BIND PEEKING issue, which I fixed by introducing NO_INVALIDATE=>FALSE in the stats gather job.
Here, the partition name (20090219) is being passed through bind variables.
SELECT a.sotrelstg_sot_ud sotcrct_sot_ud,
b.sotkey_ud sotcrct_orig_sot_ud, a.ROWID stage_rowid
FROM (SELECT sotrelstg_sot_ud, sotrelstg_sys_ud,
sotrelstg_orig_sys_ord_id, sotrelstg_orig_sys_ord_vseq
FROM sot_rel_stage
WHERE sotrelstg_trd_date_ymd_part = '20090219'
AND sotrelstg_crct_proc_stat_cd = 'N'
AND sotrelstg_sot_ud NOT IN(
SELECT sotcrct_sot_ud
FROM sot_correct
WHERE sotcrct_trd_date_ymd_part ='20090219')) a,
(SELECT MAX(sotkey_ud) sotkey_ud, sotkey_sys_ud,
sotkey_sys_ord_id, sotkey_sys_ord_vseq,
sotkey_trd_date_ymd_part
FROM sot_keymap
WHERE sotkey_trd_date_ymd_part = '20090219'
AND sotkey_iud_cd = 'I'
--not to select logical deleted rows
GROUP BY sotkey_trd_date_ymd_part,
sotkey_sys_ud,
sotkey_sys_ord_id,
sotkey_sys_ord_vseq) b
WHERE a.sotrelstg_sys_ud = b.sotkey_sys_ud
AND a.sotrelstg_orig_sys_ord_id = b.sotkey_sys_ord_id
AND NVL(a.sotrelstg_orig_sys_ord_vseq, 1) = NVL(b.sotkey_sys_ord_vseq, 1);
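As an editorial aside on the query shape above: NOT IN behaves surprisingly when the subquery column can contain NULLs (it then returns no rows at all), which is one reason NOT EXISTS anti-join rewrites are often preferred. The columns here may well be NOT NULL, in which case the two forms are equivalent; the sketch below (SQLite via Python, toy tables standing in for SOT_REL_STAGE and SOT_CORRECT) only demonstrates the general hazard:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stage (sot_ud INT);
    CREATE TABLE correct (sot_ud INT);
    INSERT INTO stage VALUES (1), (2), (3);
    INSERT INTO correct VALUES (2), (NULL);   -- one NULL key
""")

not_in = conn.execute("""
    SELECT sot_ud FROM stage
    WHERE sot_ud NOT IN (SELECT sot_ud FROM correct)""").fetchall()

not_exists = conn.execute("""
    SELECT s.sot_ud FROM stage s
    WHERE NOT EXISTS (SELECT 1 FROM correct c
                      WHERE c.sot_ud = s.sot_ud)""").fetchall()

assert not_in == []                       # the NULL empties the NOT IN result
assert sorted(not_exists) == [(1,), (3,)] # the anti-join gives the intended rows
```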
During normal business hours, I found that the query takes 5-7 min (which is also not acceptable), but during high-load business hours it takes 30-50 min.
I found that most of the time is spent on HASH JOIN (direct path write temp). We have sufficient RAM (64 GB total / 41 GB available).
Below is the execution plan I got during normal business hours.
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | Writes | OMem | 1Mem | Used-Mem | Used-Tmp|
| 1 | HASH GROUP BY | | 1 | 1 | 7844K|00:05:28.78 | 16M| 217K| 35969 | | | | |
|* 2 | HASH JOIN | | 1 | 1 | 9977K|00:04:34.02 | 16M| 202K| 20779 | 580M| 10M| 563M (1)| 650K|
| 3 | NESTED LOOPS ANTI | | 1 | 6 | 7855K|00:01:26.41 | 16M| 1149 | 0 | | | | |
| 4 | PARTITION RANGE SINGLE| | 1 | 258K| 8183K|00:00:16.37 | 25576 | 1149 | 0 | | | | |
|* 5 | TABLE ACCESS FULL | SOT_REL_STAGE | 1 | 258K| 8183K|00:00:16.37 | 25576 | 1149 | 0 | | | | |
| 6 | PARTITION RANGE SINGLE| | 8183K| 326K| 327K|00:01:10.53 | 16M| 0 | 0 | | | | |
|* 7 | INDEX RANGE SCAN | IDXL_SOTCRCT_SOT_UD | 8183K| 326K| 327K|00:00:53.37 | 16M| 0 | 0 | | | | |
| 8 | PARTITION RANGE SINGLE | | 1 | 846K| 14M|00:02:06.36 | 289K| 180K| 0 | | | | |
|* 9 | TABLE ACCESS FULL | SOT_KEYMAP | 1 | 846K| 14M|00:01:52.32 | 289K| 180K| 0 | | | | |
I will attach the same for high-load business hours once the query returns. It has been executing for the last 50 minutes.
INDEX STATS (INDEXES ARE LOCAL INDEXES)
TABLE_NAME INDEX_NAME COLUMN_NAME COLUMN_POSITION NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR
SOT_REL_STAGE IDXL_SOTRELSTG_SOT_UD SOTRELSTG_SOT_UD 1 25461560 25461560 184180
SOT_REL_STAGE SOTRELSTG_TRD_DATE 2 25461560 25461560 184180
_YMD_PART
TABLE_NAME INDEX_NAME COLUMN_NAME COLUMN_POSITION NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR
SOT_KEYMAP IDXL_SOTKEY_ENTORDSYS_UD SOTKEY_ENTRY_ORD_S 1 1012306940 3 38308680
YS_UD
SOT_KEYMAP IDXL_SOTKEY_HASH SOTKEY_HASH 1 1049582320 1049582320 1049579520
SOT_KEYMAP SOTKEY_TRD_DATE_YM 2 1049582320 1049582320 1049579520
D_PART
SOT_KEYMAP IDXL_SOTKEY_SOM_ORD SOTKEY_SOM_UD 1 1023998560 268949136 559414840
SOT_KEYMAP SOTKEY_SYS_ORD_ID 2 1023998560 268949136 559414840
SOT_KEYMAP IPK_SOT_KEYMAP SOTKEY_UD 1 1030369480 1015378900 24226580
TABLE_NAME INDEX_NAME COLUMN_NAME COLUMN_POSITION NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR
SOT_CORRECT IDXL_SOTCRCT_SOT_UD SOTCRCT_SOT_UD 1 412484756 412484756 411710982
SOT_CORRECT SOTCRCT_TRD_DATE_Y 2 412484756 412484756 411710982
MD_PART
INDEX partiton stas (from dba_ind_partitions)
INDEX_NAME PARTITION_NAME STATUS BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR NUM_ROWS SAMPLE_SIZE LAST_ANALYZ GLO
IDXL_SOTCRCT_SOT_UD P20090219 USABLE 1 372 327879 216663 327879 327879 20-Feb-2009 YES
IDXL_SOTKEY_ENTORDSYS_UD P20090219 USABLE 2 2910 3 36618 856229 856229 19-Feb-2009 YES
IDXL_SOTKEY_HASH P20090219 USABLE 2 7783 853956 853914 853956 119705 19-Feb-2009 YES
IDXL_SOTKEY_SOM_ORD P20090219 USABLE 2 6411 531492 157147 799758 132610 19-Feb-2009 YES
IDXL_SOTRELSTG_SOT_UD P20090219 USABLE 2 13897 9682052 45867 9682052 794958 20-Feb-2009 YESThanks in advance.
Bhavik DesaiHi Randolf,
Thanks for the time you spent on this issue.I appreciate it.
Please see my comments below:
1. You've mentioned several times that you're passing the partition name as bind variable, but you're obviously testing the statement with literals rather than bind
variables. So your tests obviously don't reflect what is going to happen in case of the actual execution. The cardinality estimates are potentially quite different when
using bind variables for the partition key.
Yes, I intentionally used literals in my tests. I found a couple of times that the plan used by the application and the plan generated by the AUTOTRACE+EXPLAIN PLAN commands were the same, and
caused hour-long elapsed times.
As I pointed out earlier, last month we solved a couple of bind peeking issues by introducing NO_INVALIDATE=>FALSE in the stats gather procedure, which we execute just after the data
load into such daily partitions and before the start of the jobs which execute this query.
Execution plan from AWR (with parallelism on at table level, DEGREE>1). This is the plan the CBO used when the degradation occurred, and the plan used most of the time.
ELAPSED_TIME_DELTA BUFFER_GETS_DELTA DISK_READS_DELTA CURSOR(SELECT*FROMTA
1918506000 46154275 918 CURSOR STATEMENT : 4
CURSOR STATEMENT : 4
PLAN_TABLE_OUTPUT
SQL_ID 39708a3azmks7
SELECT A.SOTRELSTG_SOT_UD SOTCRCT_SOT_UD, B.SOTKEY_UD SOTCRCT_ORIG_SOT_UD, A.ROWID STAGE_ROWID FROM (SELECT SOTRELSTG_SOT_UD,
SOTRELSTG_SYS_UD, SOTRELSTG_ORIG_SYS_ORD_ID, SOTRELSTG_ORIG_SYS_ORD_VSEQ FROM SOT_REL_STAGE WHERE SOTRELSTG_TRD_DATE_YMD_PART = :B1 AND
SOTRELSTG_CRCT_PROC_STAT_CD = 'N' AND SOTRELSTG_SOT_UD NOT IN( SELECT SOTCRCT_SOT_UD FROM SOT_CORRECT WHERE SOTCRCT_TRD_DATE_YMD_PART =
:B1 )) A, (SELECT MAX(SOTKEY_UD) SOTKEY_UD, SOTKEY_SYS_UD, SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ, SOTKEY_TRD_DATE_YMD_PART FROM
SOT_KEYMAP WHERE SOTKEY_TRD_DATE_YMD_PART = :B1 AND SOTKEY_IUD_CD = 'I' GROUP BY SOTKEY_TRD_DATE_YMD_PART, SOTKEY_SYS_UD,
SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ) B WHERE A.SOTRELSTG_SYS_UD = B.SOTKEY_SYS_UD AND A.SOTRELSTG_ORIG_SYS_ORD_ID =
B.SOTKEY_SYS_ORD_ID AND NVL(A.SOTRELSTG_ORIG_SYS_ORD_VSEQ, 1) = NVL(B.SOTKEY_SYS_ORD_VSEQ, 1)
Plan hash value: 1213870831
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | | | 19655 (100)| | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,03 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,03 | PCWP | |
| 4 | PX RECEIVE | | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,03 | PCWP | |
| 5 | PX SEND HASH | :TQ10002 | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,02 | P->P | HASH |
| 6 | HASH GROUP BY | | 1 | 116 | 19655 (1)| 00:05:54 | | | Q1,02 | PCWP | |
| 7 | NESTED LOOPS ANTI | | 1 | 116 | 19654 (1)| 00:05:54 | | | Q1,02 | PCWP | |
| 8 | HASH JOIN | | 1 | 102 | 19654 (1)| 00:05:54 | | | Q1,02 | PCWP | |
| 9 | PX JOIN FILTER CREATE| :BF0000 | 13M| 664M| 2427 (3)| 00:00:44 | | | Q1,02 | PCWP | |
| 10 | PX RECEIVE | | 13M| 664M| 2427 (3)| 00:00:44 | | | Q1,02 | PCWP | |
| 11 | PX SEND HASH | :TQ10000 | 13M| 664M| 2427 (3)| 00:00:44 | | | Q1,00 | P->P | HASH |
| 12 | PX BLOCK ITERATOR | | 13M| 664M| 2427 (3)| 00:00:44 | KEY | KEY | Q1,00 | PCWC | |
| 13 | TABLE ACCESS FULL| SOT_REL_STAGE | 13M| 664M| 2427 (3)| 00:00:44 | KEY | KEY | Q1,00 | PCWP | |
| 14 | PX RECEIVE | | 27M| 1270M| 17209 (1)| 00:05:10 | | | Q1,02 | PCWP | |
| 15 | PX SEND HASH | :TQ10001 | 27M| 1270M| 17209 (1)| 00:05:10 | | | Q1,01 | P->P | HASH |
| 16 | PX JOIN FILTER USE | :BF0000 | 27M| 1270M| 17209 (1)| 00:05:10 | | | Q1,01 | PCWP | |
| 17 | PX BLOCK ITERATOR | | 27M| 1270M| 17209 (1)| 00:05:10 | KEY | KEY | Q1,01 | PCWC | |
| 18 | TABLE ACCESS FULL| SOT_KEYMAP | 27M| 1270M| 17209 (1)| 00:05:10 | KEY | KEY | Q1,01 | PCWP | |
| 19 | PARTITION RANGE SINGLE| | 16185 | 221K| 0 (0)| | KEY | KEY | Q1,02 | PCWP | |
| 20 | INDEX RANGE SCAN | IDXL_SOTCRCT_SOT_UD | 16185 | 221K| 0 (0)| | KEY | KEY | Q1,02 | PCWP | |
Other Execution plan from AWR
ELAPSED_TIME_DELTA BUFFER_GETS_DELTA DISK_READS_DELTA CURSOR(SELECT*FROMTA
1053251381 0 2925 CURSOR STATEMENT : 4
CURSOR STATEMENT : 4
PLAN_TABLE_OUTPUT
SQL_ID 39708a3azmks7
SELECT A.SOTRELSTG_SOT_UD SOTCRCT_SOT_UD, B.SOTKEY_UD SOTCRCT_ORIG_SOT_UD, A.ROWID STAGE_ROWID FROM (SELECT SOTRELSTG_SOT_UD,
SOTRELSTG_SYS_UD, SOTRELSTG_ORIG_SYS_ORD_ID, SOTRELSTG_ORIG_SYS_ORD_VSEQ FROM SOT_REL_STAGE WHERE SOTRELSTG_TRD_DATE_YMD_PART = :B1 AND
SOTRELSTG_CRCT_PROC_STAT_CD = 'N' AND SOTRELSTG_SOT_UD NOT IN( SELECT SOTCRCT_SOT_UD FROM SOT_CORRECT WHERE SOTCRCT_TRD_DATE_YMD_PART =
:B1 )) A, (SELECT MAX(SOTKEY_UD) SOTKEY_UD, SOTKEY_SYS_UD, SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ, SOTKEY_TRD_DATE_YMD_PART FROM
SOT_KEYMAP WHERE SOTKEY_TRD_DATE_YMD_PART = :B1 AND SOTKEY_IUD_CD = 'I' GROUP BY SOTKEY_TRD_DATE_YMD_PART, SOTKEY_SYS_UD,
SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ) B WHERE A.SOTRELSTG_SYS_UD = B.SOTKEY_SYS_UD AND A.SOTRELSTG_ORIG_SYS_ORD_ID =
B.SOTKEY_SYS_ORD_ID AND NVL(A.SOTRELSTG_ORIG_SYS_ORD_VSEQ, 1) = NVL(B.SOTKEY_SYS_ORD_VSEQ, 1)
Plan hash value: 3434900850
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | | | 1830 (100)| | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,03 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,03 | PCWP | |
| 4 | PX RECEIVE | | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,03 | PCWP | |
| 5 | PX SEND HASH | :TQ10002 | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,02 | P->P | HASH |
| 6 | HASH GROUP BY | | 1 | 131 | 1830 (2)| 00:00:33 | | | Q1,02 | PCWP | |
| 7 | NESTED LOOPS ANTI | | 1 | 131 | 1829 (2)| 00:00:33 | | | Q1,02 | PCWP | |
| 8 | HASH JOIN | | 1 | 117 | 1829 (2)| 00:00:33 | | | Q1,02 | PCWP | |
| 9 | PX JOIN FILTER CREATE| :BF0000 | 1010K| 50M| 694 (1)| 00:00:13 | | | Q1,02 | PCWP | |
| 10 | PX RECEIVE | | 1010K| 50M| 694 (1)| 00:00:13 | | | Q1,02 | PCWP | |
| 11 | PX SEND HASH | :TQ10000 | 1010K| 50M| 694 (1)| 00:00:13 | | | Q1,00 | P->P | HASH |
| 12 | PX BLOCK ITERATOR | | 1010K| 50M| 694 (1)| 00:00:13 | KEY | KEY | Q1,00 | PCWC | |
| 13 | TABLE ACCESS FULL| SOT_KEYMAP | 1010K| 50M| 694 (1)| 00:00:13 | KEY | KEY | Q1,00 | PCWP | |
| 14 | PX RECEIVE | | 11M| 688M| 1129 (3)| 00:00:21 | | | Q1,02 | PCWP | |
| 15 | PX SEND HASH | :TQ10001 | 11M| 688M| 1129 (3)| 00:00:21 | | | Q1,01 | P->P | HASH |
| 16 | PX JOIN FILTER USE | :BF0000 | 11M| 688M| 1129 (3)| 00:00:21 | | | Q1,01 | PCWP | |
| 17 | PX BLOCK ITERATOR | | 11M| 688M| 1129 (3)| 00:00:21 | KEY | KEY | Q1,01 | PCWC | |
| 18 | TABLE ACCESS FULL| SOT_REL_STAGE | 11M| 688M| 1129 (3)| 00:00:21 | KEY | KEY | Q1,01 | PCWP | |
| 19 | PARTITION RANGE SINGLE| | 5209 | 72926 | 0 (0)| | KEY | KEY | Q1,02 | PCWP | |
| 20 | INDEX RANGE SCAN | IDXL_SOTCRCT_SOT_UD | 5209 | 72926 | 0 (0)| | KEY | KEY | Q1,02 | PCWP | |
EXECUTION PLAN AFTER SETTING DEGREE=1 (It was also degraded)
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 129 | | 42336 (2)| 00:12:43 | | |
| 1 | HASH GROUP BY | | 1 | 129 | | 42336 (2)| 00:12:43 | | |
| 2 | NESTED LOOPS ANTI | | 1 | 129 | | 42335 (2)| 00:12:43 | | |
|* 3 | HASH JOIN | | 1 | 115 | 51M| 42334 (2)| 00:12:43 | | |
| 4 | PARTITION RANGE SINGLE| | 846K| 41M| | 8241 (1)| 00:02:29 | 81 | 81 |
|* 5 | TABLE ACCESS FULL | SOT_KEYMAP | 846K| 41M| | 8241 (1)| 00:02:29 | 81 | 81 |
| 6 | PARTITION RANGE SINGLE| | 8161K| 490M| | 12664 (3)| 00:03:48 | 81 | 81 |
|* 7 | TABLE ACCESS FULL | SOT_REL_STAGE | 8161K| 490M| | 12664 (3)| 00:03:48 | 81 | 81 |
| 8 | PARTITION RANGE SINGLE | | 6525K| 87M| | 1 (0)| 00:00:01 | 81 | 81 |
|* 9 | INDEX RANGE SCAN | IDXL_SOTCRCT_SOT_UD | 6525K| 87M| | 1 (0)| 00:00:01 | 81 | 81 |
Predicate Information (identified by operation id):
3 - access("SOTRELSTG_SYS_UD"="SOTKEY_SYS_UD" AND "SOTRELSTG_ORIG_SYS_ORD_ID"="SOTKEY_SYS_ORD_ID" AND
NVL("SOTRELSTG_ORIG_SYS_ORD_VSEQ",1)=NVL("SOTKEY_SYS_ORD_VSEQ",1))
5 - filter("SOTKEY_TRD_DATE_YMD_PART"=20090219 AND "SOTKEY_IUD_CD"='I')
7 - filter("SOTRELSTG_CRCT_PROC_STAT_CD"='N' AND "SOTRELSTG_TRD_DATE_YMD_PART"=20090219)
9 - access("SOTRELSTG_SOT_UD"="SOTCRCT_SOT_UD" AND "SOTCRCT_TRD_DATE_YMD_PART"=20090219)
2. Why are you passing the partition name as bind variable? A statement executing 5 mins. best, > 2 hours worst obviously doesn't suffer from hard parsing issues and
doesn't need to (shouldn't) share execution plans therefore. So I strongly suggest to use literals instead of bind variables. This also solves any potential issues caused
by bind variable peeking.
This is a custom application which uses bind variables to extract data from the daily partitions, so the daily data extract from the partitions is automated after the load and ELT process.
Here, the value of the bind variable is being passed through a procedure parameter. It would be a bit difficult to use literals in such an application.
3. All your posted plans suffer from bad cardinality estimates. The NO_MERGE hint suggested by Timur only caused a (significant) damage limitation by obviously reducing
the row source size by the group by operation before joining, but still the optimizer is way off, apart from the obviously wrong join order (larger row set first) in
particular the NESTED LOOP operation is causing the main troubles due to excessive logical I/O, as already pointed out by Timur.
Can I ask for alternatives to the NESTED LOOP?
4. Your PLAN_TABLE seems to be old (you should see a corresponding note at the bottom of the DBMS_XPLAN.DISPLAY output), because none of the operations have a
filter/access predicate information attached. Since your main issue are the bad cardinality estimates, I strongly suggest to drop any existing PLAN_TABLEs in any non-Oracle
owned schemas because 10g already provides one in the SYS schema (GTT PLAN_TABLE$) exposed via a public synonym, so that the EXPLAIN PLAN information provides the
"Predicate Information" section below the plan covering the "Filter/Access" predicates.
Please post a revised explain plan output including this crucial information so that we get a clue why the cardinality estimates are way off.
I have dropped the old plan and got the above execution plan (listed in the first point) with predicate information.
"As already mentioned the usage of bind variables for the partition name makes this issue potentially worse."
Is there any workaround without replacing the bind variable? I am on 10g, so 11g features will not help!
How are you gathering the statistics daily? Can you post the exact command(s) used?
gather_table_part_stats(i_owner_name,i_table_name,i_part_name,i_estimate:= DBMS_STATS.AUTO_SAMPLE_SIZE, i_invalidate IN VARCHAR2 := 'Y',i_debug:= 'N')
Thanks & Regards,
Bhavik Desai -
Illegal use of sequence operator
Hi All,
In my map I am using a sequence in the expression operator, putting the output into a joiner, and after that the data is moved to the target table. But while validating this map I get the error: "Sequence operators are used in a complex mapping which requires generation using a nested subquery. Sequences are not allowed to be used in nested subqueries." Can anyone tell me how this issue can be resolved?
Thanks n Regards,
Amrit
Hi,
You can try moving the sequence operator after the join, i.e. as a final step close to the target.
Mahesh -
Multiple Joins from one Dimension Table to single Fact table
Hi all,
I have a single fact table with attributes as such:
Action_ID
Action_Type
Date_Started
Date_Completed
and a Time Dimension to connect to the fact table. Currently the 2 date columns in my fact table are in the format 20090101 (which is the same as my key in the time dimension), if it means anything. I've tried to create multiple joins but have been getting error messages. What is the easiest way to link the time dimension with the two date columns in my fact table? Thanks.
hi..
It seems you need to use a BETWEEN operator to join the time dimension with the fact table (i.e. a non-equijoin).
If so, you should create a complex join in the physical layer by connecting the dimension with your fact.
Select columns and operators in such a way that it resembles something like the below:
timeDimension.timeKey BETWEEN FactTable.Date_Started AND FactTable.Date_Completed -
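The suggested BETWEEN join can be sanity-checked outside OBIEE. A sketch in SQLite via Python, using invented time_dim/fact rows shaped like the poster's 20090101-style integer keys (which compare correctly with BETWEEN only because yyyymmdd integers sort chronologically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE time_dim (time_key INT);
    CREATE TABLE fact (action_id INT, date_started INT, date_completed INT);
    INSERT INTO time_dim VALUES (20090101), (20090102), (20090103);
    INSERT INTO fact VALUES (1, 20090101, 20090102);
""")

# Non-equijoin: each fact row matches every time key in its date range
rows = conn.execute("""
    SELECT f.action_id, t.time_key
    FROM fact f
    JOIN time_dim t
      ON t.time_key BETWEEN f.date_started AND f.date_completed
""").fetchall()
assert sorted(rows) == [(1, 20090101), (1, 20090102)]
```

Note the range join fans each fact row out to one row per covered day; if you only need the start and completion dates themselves, two separate equijoins against aliased copies of the time dimension avoid that fan-out.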
I have a very basic question related to JPA.
I am using dict tables and connecting it to MaxDB.
In my Entity I have a select statement like this,
@NamedQuery (name = "getOperations", query = "SELECT operT.id FROM SII_ARC_Oper operT " +
"LEFT JOIN SII_ARC_Service serviceT " +
"ON operT.sid = serviceT.sid AND " +
"serviceT.duuid =:duuid")
When I call this in my Bean, I get this error,
An error occurred processing the named query >>getOperations<< with the query string >>SELECT operT.id FROM SII_ARC_Oper operT LEFT JOIN SII_ARC_Service serviceT ON operT.sid = serviceT.sid AND serviceT.duuid =:duuid<<. The exception text is: line 1: Variable 'SII_ARC_Service' not declared
SELECT operT.id FROM SII_ARC_Oper operT LEFT JOIN SII_ARC_Service serviceT ON operT.sid = serviceT.sid AND serviceT.duuid =:duuid
^
line 1: unexpected token: serviceT
SELECT operT.id FROM SII_ARC_Oper operT LEFT JOIN SII_ARC_Service serviceT ON operT.sid = serviceT.sid AND serviceT.duuid =:duuid
I am just wondering how I declare the "SII_ARC_Service" in my Entity before the named query. Or am I not declaring something else..?
Thanks
Domnic
Edited by: domnic savio on Jul 21, 2008 12:16 PM
I have some improvements now, although the problem is not solved yet.
I have the JPQL as
// The Named Query
@NamedQuery (name = "getOperations", query = "SELECT oper.id FROM SII_ARC_Oper oper " +
"LEFT JOIN oper.service serviceT " +
"WHERE serviceT.duuid =:duuid")
// The many to one relationship
@Column(name= "SERVICE_TABLE")
@ManyToOne(targetEntity=com.sap.sii.archeiver.SII_ARC_Service.class)
@JoinColumn(referencedColumnName = "SID")
private SII_ARC_Service service;
On calling the NamedQuery and passing a parameter, I get the error,
Errors have occurred during the precompilation of named queries:
An error occurred processing the named query >>getOperations<< with the query string >>SELECT operT.id FROM SII_ARC_Oper operT LEFT JOIN operT.service serviceT WHERE serviceT.duuid =:duuid<<. The exception text is: line 1: Path 'opert.service' is not association path.
SELECT operT.id FROM SII_ARC_Oper operT LEFT JOIN operT.service serviceT WHERE serviceT.duuid =:duuid
1) What association path does the compiler refer to?
2) Is the JPQL query valid without an ON clause?
Does anyone have an idea?
thanks in advance
Domnic -
Can someone explain to me why the result of this query:
select * from sys.v_$sql
where hash_value in
(select hash_value from sys.v_$sql_plan
where options = 'CARTESIAN' and operation LIKE '%JOIN%' )
order by hash_value;
is this statement:
select count(1) from all_objects where object_name = :1;
My point in running this query is to find all Cartesian join statements. I am not sure, but it may have something to do with the multiple tables used through the view "all_objects"...
Regards
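As background on why CARTESIAN steps are worth hunting at all: when a join predicate is missing, the row counts of the inputs multiply. A tiny demonstration in SQLite via Python (toy tables t1 and t2):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT);
    INSERT INTO t1 VALUES (1), (2), (3);
    INSERT INTO t2 VALUES (10), (20);
""")

# With no join predicate, every row of t1 pairs with every row of t2,
# so the result has 3 * 2 = 6 rows.
n = conn.execute("SELECT COUNT(*) FROM t1, t2").fetchone()[0]
assert n == 3 * 2
```

That said, a MERGE JOIN CARTESIAN in a plan is not always a bug: complex dictionary views like all_objects join many small tables, and the optimizer may deliberately choose a Cartesian step when it expects one input to be a single row.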