Bad exec plan when joining tables using primary keys together w/ Contains

Hello all... this is something that confuses me...
When joining the 2 tables, the execution plan shows that the domain index is accessed first, before checking whether there is a matching record in the other table using a highly selective index.
create table users
(userid varchar2(20),
name varchar2(100),
resume clob);

create table seeker
(seekerid varchar2(20),
userid varchar2(20),
jobid varchar2(20));

create index user_idx on users(userid);

create index job_resume_index on users(resume)
indextype is ctxsys.context;

create unique index seeker_unik on seeker(seekerid);

create index seeker_index on seeker(jobid,userid);
Then sample records were inserted and the text index was sync'ed.
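(For reference, the load-and-sync step was along these lines; the actual sample data isn't shown here, so the values below are made up:)
insert into users values ('U000001', 'Jane Doe', 'Senior analyst based in Texas ...');
insert into seeker values ('S000001', 'U000001', 'HJOBP000000000218627');
commit;
-- bring the pending CLOB changes into the CONTEXT index
exec ctx_ddl.sync_index('JOB_RESUME_INDEX');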
Query and Execution Plan:
select u.userid, u.name
from users u, seeker s
where s.jobid = 'HJOBP000000000218627'
and u.userid = s.userid
and contains(resume,'texas') > 1
Execution Plan
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=72)
1   0    NESTED LOOPS (Cost=3 Card=1 Bytes=72)
2   1      TABLE ACCESS (BY INDEX ROWID) OF 'USERS' (Cost=2 Card=1 Bytes=30)
3   2        DOMAIN INDEX OF 'JOB_RESUME_INDEX' (Cost=0)
4   1      INDEX (RANGE SCAN) OF 'SEEKER_INDEX' (NON-UNIQUE) (Cost=1 Card=1 Bytes=42)
The problem with this execution plan is that if the domain index returns a huge number of rows (e.g. 40k), each userid must then be checked against the SEEKER table, even though only 10 rows are ultimately returned. I can add a hint to force the use of user_idx instead of the domain index to improve performance.
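For example, something along these lines drives from SEEKER first and keeps the domain index out of the access path (the exact hint combination is only a sketch, not a tested recommendation):
select /*+ leading(s u) use_nl(u) index(s seeker_index) index(u user_idx) */
       u.userid, u.name
from seeker s, users u
where s.jobid = 'HJOBP000000000218627'
and u.userid = s.userid
and contains(u.resume,'texas') > 1;
-- with the domain index no longer the driving access path, CONTAINS is
-- evaluated only for the few rows returned by the SEEKER_INDEX range scan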
My question is: how does the database decide whether to use the domain index or another, highly selective index without the aid of a HINT? The trace file shows calls to "CTXSYS"."TEXTOPTSTATS".ODCIStatsIndexCost and "CTXSYS"."TEXTOPTSTATS".ODCIStatsFunctionCost. Are these used to compute and compare the cost of each index and decide which index to use? If so, why do they return a lower cost for the domain index when the B-tree index is more efficient? I have statistics gathered for all tables and indexes (including the DR$ tables and their indexes).
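(For reference, the association that makes those ODCIStats calls fire is visible in the data dictionary; this query is just an illustrative way to look at it:)
select *
from all_associations
where statstype_name = 'TEXTOPTSTATS';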
Thanks,
jojo

Hi,
What I'm pasting here is actually from Ch9 of Expert PL/SQL. It shows what happens during a select with CONTAINS. It does not include a join with another table, or an additional column in the where clause. Hope it helps.
================================================
By default, the Extensible Query Optimizer is enabled. When enabled, the Extensible Query Optimizer can determine the I/O and CPU cost associated with the CONTAINS predicate, find the cost of each call to the CONTAINS() function, and determine the selectivity of the CONTAINS predicate.
To see this in action, I ran the basic select shown earlier. Examining the SQL trace shows the following steps.
Step 1: Determine the I/O and CPU cost of the CONTAINS() function:
--Available online as part of contains_trace.doc
"CTXSYS"."TEXTOPTSTATS".ODCIStatsFunctionCost(
sys.ODCIFuncInfo('CTXSYS',
'CTX_CONTAINS',
'TEXTCONTAINS',
2),
cost,
sys.ODCIARGDESCLIST(
sys.ODCIARGDESC(
2, 'DOCUMENT_REPOSITORY', 'PLSQL',
'"DOCUMENT"', NULL, NULL, NULL),
sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL)),
NULL,
'constitution',
sys.ODCIENV(0,0,0,1));
Step 2: Determine the selectivity of the CONTAINS predicate
-- Available online as part of contains_trace.doc
"CTXSYS"."TEXTOPTSTATS".ODCIStatsSelectivity(
sys.ODCIPREDINFO('CTXSYS',
'CTX_CONTAINS',
'TEXTCONTAINS',
32),
sel,
sys.ODCIARGDESCLIST(
sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL),
sys.ODCIARGDESC(5, NULL, NULL, NULL, NULL, NULL, NULL),
sys.ODCIARGDESC(2, 'DOCUMENT_REPOSITORY', 'PLSQL',
"DOCUMENT"', NULL, NULL, NULL),
sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL)),
0,
NULL,
NULL,
'constitution',
sys.ODCIENV(0,0,0,1));
Step 3: Determine the I/O and CPU cost of the CONTAINS predicate
-- Available online as part of contains_trace.doc
"CTXSYS"."TEXTOPTSTATS".ODCIStatsIndexCost(
sys.ODCIINDEXINFO('PLSQL',
'EXPERT_IDX',
sys.ODCICOLINFOLIST(
sys.ODCICOLINFO('PLSQL', 'DOCUMENT_REPOSITORY',
'"DOCUMENT"', 'BFILE', NULL, NULL)),
NULL,
0,
0),
50.00000000,
cost,
sys.ODCIQUERYINFO(
2,
sys.ODCIOBJECTLIST(sys.ODCIOBJECT('SCORE', 'CTXSYS'))),
sys.ODCIPREDINFO('CTXSYS', 'CONTAINS', NULL, 0),
sys.ODCIARGDESCLIST(
sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL),
sys.ODCIARGDESC(5, NULL, NULL, NULL, NULL, NULL, NULL),
sys.ODCIARGDESC(2, 'DOCUMENT_REPOSITORY', 'PLSQL',
'"DOCUMENT"', NULL, NULL, NULL),
sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL)),
1,
NULL,
'constitution',
sys.ODCIENV(0,0,0,1));
Note: The 50.00000000 in this last call is the selectivity retrieved from step 2.
Only after these three steps complete does the query actually get executed:
-- Available online as part of contains_trace.doc
SELECT score(1), title
FROM document_repository
WHERE CONTAINS(document, 'constitution', 1) > 0
ORDER BY 1 DESC;
This is not the end of the processing though. To retrieve the records, the $I table (the index data table) is queried as follows:
-- Available online as part of contains_trace.doc
SELECT /*+ INDEX(i) */ TOKEN_FIRST,TOKEN_LAST,TOKEN_COUNT,ROWID
FROM "PLSQL"."DR$EXPERT_IDX$I" i
WHERE TOKEN_TEXT = 'CONSTITUTION'
AND TOKEN_TYPE = 0
ORDER BY TOKEN_TEXT, TOKEN_TYPE, TOKEN_FIRST;
Note: The search term is UPPERCASE when querying against the DR$EXPERT_IDX$I table because all of the tokens are stored in uppercase by default. The original SELECT was in lowercase. This automatic conversion to uppercase allows the Text index to provide case-insensitive searching.
This query returns the following result:
TOKEN_FIRST TOKEN_LAST TOKEN_COUNT ROWID
1 2 2 AAAN54AAEAAAOvEAAo
Finally, the $R table (the rowid table) is queried.
-- Available online as part of contains_trace.doc
SELECT data
FROM "PLSQL"."DR$EXPERT_IDX$R"
WHERE row_no = 0;
Only now do I see the results of my query:
SCORE(1) TITLE
55 United States Constitution
8 Bill of Rights
Keep in mind that modifications to the query, such as the addition of other columns in the where clause or the addition of operators, will result in a different trace.
===============================

Similar Messages

  • Cannot join tables used in the workbook. Join "" not found in the EUL. Atte

While trying to migrate workbooks from one instance to another using Discoverer 10.1.2.48.18, I am getting the following error:
    Unable to open the worksheet requested.
    - OracleBI Discoverer was unable to find the worksheet that was requested.
    - BIB-10310 An unknown exception occurred.
    - Cannot join tables used in the workbook. Join "" not found in the EUL. Attempt to open workbook failed.
but it's fine in one instance, and in prod I am getting the above weird error.
Does anybody know how to rectify this?

    Note:267476.1 - for those who can't get into metalink but still need to know the answer
    Subject: Discoverer Workbook Fails With 'Cannot join tables.Item dependency "" not found in EUL' After EUL Migration, Importing Or Cloning
    Doc ID:      Note:267476.1      Type:      PROBLEM
    Last Revision Date:      28-APR-2005      Status:      PUBLISHED
    The information in this document applies to:
    Oracle Discoverer - Version: 4.1 to 10.1
    This problem can occur on any platform.
    Symptoms
    Opening a workbook migrated from one instance to another results in the following errors:
    * Unable to load EUL item ...
    * Cannot join tables. Item dependency "" not found in EUL.
    * Illegal operation... DIS4USR OR DIS5USR caused an invalid page fault in module....
    If using the workbook dump utility, you may notice:
    * Found in EUL by id *** Name has changed to .. where the new name is really the wrong column
    It seems that items are matched by internal item identifiers (IDs) and not by developer keys and names.
    Changes
    Migrating from one environment to another. For example, you may have performed a database export / import of the EUL schema from DEV to TEST
    Cause
The timestamp is the same for both EULs, which can be confirmed by the following SQL statement:
    * Discoverer 4:
    SQL> select ver_eul_timestamp from eul4_versions;
* Discoverer 9.0.2, 9.0.4, 10.1, the script is:
    SQL> select ver_eul_timestamp from eul5_versions;
If the result of the SQL output is the same in both EULs, it means the timestamps of both EULs are the same. Discoverer then assumes that the EULs are exactly the same (this can happen when one of them is created via export/import of the other EUL schema using the database export/import utilities), and this is the reason it tries to match items by identifiers and not by developer keys or names.
    Solution
Change the column ver_eul_timestamp in the eul4_versions (4.1) or eul5_versions (9.0.2 - 10.1) tables to be something different in the destination EUL. The format of this column is year:month:day:seconds:milliseconds.
    Example:
    * Discoverer 4.1:
    sql> update EUL4_VERSIONS set VER_EUL_TIMESTAMP =TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS');
    sql> commit;
    * Discoverer 9.0.2:
    SQL> update EUL5_VERSIONS set VER_EUL_TIMESTAMP = TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS');
    sql> commit;
    * Discoverer 9.0.4 -10.1:
    Run the script: EUL5_id.sql as the eulowner ID
    o For 9.0.4 -- This is found on a PC with the Discoverer Administrator installed in drive:\Oracle Home\discoverer\sql
    o For 10.1.0 -- This is found on a PC with the Discoverer Administrator installed in drive:\Oracle Home\discoverer\util
    This is documented in:
    Oracle® Business Intelligence Discoverer
    Administration Guide
    10g Release 2 (10.1.2.0.0)
    Part No. B13916-02
    http://download.oracle.com/docs/pdf/B13916_02.pdf
    Chapter 4 "About copying EULs and EUL objects by exporting and importing"
    Oracle® Discoverer Administrator
    Administration Guide
    10g (9.0.4)
    Part No. B10270-01
    http://download.oracle.com/docs/pdf/B10270_01.pdf
    Chapter 3 "About copying EULs and EUL objects by exporting and importing"
    References
    Bug 3202329 - Migrating Workbooks Between Instances Shows Incorrect Column

  • Null value in detail table's primary key

    I use Business Components Data Form to define a master-detail java panel.
    I don't display the primary key column in the detail grid control.
    Then I click on "+" to add a row, enter a value (the 2nd part of the detail table's primary key) and click commit.
I get an error saying that the primary key column that's not displayed is null and the row can't be committed.
    How do I fix this error?

    My fields are varchar2's.
    -- simplified to protect the guilty:
-- (made up from memory, so sql keywords may be misspelled, but you get the idea)
    create table proj (
    proj varchar2(10) not null,
    constraint proj_pk
primary key(proj)
);
    insert into proj values ('abc');
    commit;
-- this table has legal values for the master.legal field.
-- in master, legal was changed to a ComboBoxControl from the TextFieldControl that the wizard generated.
    create table legal_values (
    proj varchar2(10) not null,
    legal varchar2(10) not null,
    display_order integer not null,
    constraint legal_values_pk
primary key (proj, legal)
);
    insert into legal_values
    values ('abc', 'one', 1);
    insert into legal_values
    values ('abc', 'two', 2);
    insert into legal_values
    values ('abc', 'three', 3);
    commit;
    create table master (
    proj varchar2(10) not null,
    masterKey varchar2(15) not null,
    legal varchar2(10) not null,
    constraint master_pk
primary key (proj, masterKey)
);
    create table detail (
    proj varchar(10) not null,
    masterKey varchar(15) not null,
    detailkey varchar(10) not null,
    display_order integer not null,
    status varchar2(12) not null, -- user can only update this field
    constraint detail_pk
primary key(proj, masterkey, detailkey)
);
    -- legal rows for detail table
    create table detail_values (
    proj varchar2(10) not null,
    detailkey varchar2(10) not null,
    display_order integer not null,
    constraint detail_values_pk
primary key(proj, detailkey)
);
    insert into detail_values values
    ('abc', 'status a', 1);
    insert into detail_values values
    ('abc', 'status b', 2);
    commit;
-- if there is a foreign key on detail back
-- to master, then you get the
-- "mutating table" error when the trigger
-- fires.
create or replace trigger master_tr
after insert on master
for each row
begin
insert into detail
(proj, masterKey, detailKey, display_order, status)
select :new.proj, :new.masterkey, detailkey, display_order, ' '
from detail_values
where proj = :new.proj;
end;
/
-- detail.status is a ComboBoxControl getting its values out of another db table too.
    -- there's also "after update" triggers on master and detail tables that copy the record to master_history and "detail_history" tables if the corresponding table is modified. Thus we have a history of the changes made to the records. The 2 history tables have 2 additional fields than their parents, "modified_date date default sysdate not null" and "modified_by varchar2(30) default user not null" that record when and who made the change. There's also a change type field that records where the change is an insert, delete or update (there 3 triggers per table , so we can correctly record change type as being insert, delete or update into the change_type field).
    If you want to make it more real to life, the status value is initially blank, then can be changed to a legal status value (complete or not complete), but can not be changed back to blank.
    ***P.S. I have to have this ready for production by monday.****
P.P.S. After you reply and change to ComboBoxControls in 2 places, try changing the table names and/or the column names (say, to follow the DBA's naming convention) or add another field to the composite primary key (and unstated foreign key) (using drop table/create table). Now try getting the existing code to run. No writing down the old/new table (or column) name pairs. Just run the code and try to fix the code based on the error messages. Are the error messages giving you adequate information to find the problem?

  • How do I use Primary Key and RowID in Materialized View Logs and MVs

    How do I use Primary Key and RowID in Materialized View Logs and Materialized Views????
I don't understand the difference between Primary Key and RowID in Materialized View Logs. Besides, I could choose both Primary Key and RowID.
When do I have to use RowID? Why? And Primary Key? And both Primary Key and RowID?
    Thank you very much!

    Yes, I have already read it...
But for example I don't understand:
    This is the example 8-1
CREATE MATERIALIZED VIEW LOG ON products
WITH SEQUENCE, ROWID
(prod_id, prod_name, prod_desc, prod_subcategory, prod_subcat_desc,
prod_category, prod_cat_desc, prod_weight_class, prod_unit_of_measure,
prod_pack_size, supplier_id, prod_status, prod_list_price, prod_min_price)
INCLUDING NEW VALUES;
But if I create a Materialized View Log with TOAD and I choose a KEY field, I receive the error:
    ORA-12026: invalid filter column detected
    Then I have to take out the Key (in the above example prod_id)
    And then the script is
CREATE MATERIALIZED VIEW LOG ON products
WITH ROWID, SEQUENCE, PRIMARY KEY!!!!!!!!!!!!!!!!!!!!
(prod_id, prod_name, prod_desc, prod_subcategory, prod_subcat_desc,
prod_category, prod_cat_desc, prod_weight_class, prod_unit_of_measure,
prod_pack_size, supplier_id, prod_status, prod_list_price, prod_min_price)
INCLUDING NEW VALUES;
    I have PRIMARY KEY in the definition (I don’t choose it) and I don’t have the prod_id field
    Why is it????
Note: If I execute the script to create the MV Log manually, the PRIMARY KEY option is NOT in the script and the prod_id field is not in the script either.
    And on the other hand,
    What is this:
    CREATE MATERIALIZED VIEW LOG ON sales
    WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON times
    WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON customers
    WITH ROWID;
Do these MATERIALIZED VIEW LOGs contain any fields?
Or do they contain the primary key fields of these tables (sales, times and customers)? Then why is it ROWID instead of PRIMARY KEY?
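(One way to see exactly which columns such a log records is to look at the generated MLOG$_ table; this is only an illustration, run in the schema that owns SALES:)
select column_name, data_type
from user_tab_columns
where table_name = 'MLOG$_SALES'
order by column_id;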
    Thanks!

  • Acrobat X Pro 10.1.12 under Yosemite crashes when trying to use arrow keys to navigate.

    Acrobat X Pro 10.1.12 Mac
    Yosemite OS X 10.10.1
    Acrobat now crashes when attempting to use arrow keys to navigate. This is true on both MacBook Pro 15 retina and Mac Pro.

    I ended up upgrading to Acrobat XI. This works, but Adobe is up to its old tricks changing the interface from what we're used to using. They now have decided to hide all the tools. You have to delve deep into the program to find the tools, then set up your own tool menus. Lord help you finding a tool that you use too occasionally to put on a custom menu.
    Adobe has been doing this for years now. In Photoshop of olden days, it was simple to set up a brush, say, that you used frequently. Quite a few updates ago they made that vastly more difficult. And they've replaced words with happy little icons, that are not at all intuitive to use. I challenge you, too, to look at the icon for the airbrush and tell me whether it is on or off. (Sorry, this is mainly about Photoshop, but what they've done to the latest Acrobat is just as bad.)
    Of course, their ultimate cuteness in making things difficult for the user is their essential abandonment of a Help system. You can't do a simple look-up of a function -- you have to go to an internet site and sort through a bunch of forums and user comments to get any information.
    JEEZ Adobe!

  • Table and Primary Key Case Sensitive in Automated Row Fetch

    Why are the table and primary key fields case sensitive in the Automated Row Fetch process? I'm debugging an error in this process and I'm not sure what the case should be.

    Russ - It's a defect of sorts. Use upper case in the process definition. I don't think these processes will work with tables or columns whose names are not known to the database as upper-case values, i.e., defined using double-quoted non-upper case.
    Scott
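To illustrate the upper-case point with plain SQL (the table names here are made up):
create table demo_tab (id number primary key);    -- stored in the dictionary as DEMO_TAB
create table "demo_tab" (id number primary key);  -- stored exactly as typed, case-sensitive
select table_name from user_tables
where upper(table_name) = 'DEMO_TAB';             -- returns both DEMO_TAB and demo_tab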

Creating a unique index instead of using a primary key index

    Hi ,
I heard in a debate that sometimes it's better to create a unique index on a column and use it instead of using a primary key index in Oracle. I couldn't understand the reason properly.
Can anyone please help me with this topic, if it is a valid one?
    Thanks in advance

    Hi,
They are NOT exactly identical.
1. A unique key can have NULL values whereas a primary key can't.
2. A primary key is fundamentally a key that does not change; updating a primary key is not a good idea.
    SQL> drop table test;
    Table dropped.
    SQL> create table test ( a number(2));
    Table created.
    SQL> ed
    Wrote file afiedt.buf
      1* create unique index test_idx on test(a)
    SQL> /
    Index created.
    SQL> ed
    Wrote file afiedt.buf
      1* insert into test values(NULL)
    SQL> /
    1 row created.
    SQL> drop table test;
    Table dropped.
    SQL>
    SQL> create table test ( a number(2) primary key);
    Table created.
    SQL> insert into test values(NULL);
    insert into test values(NULL)
    ERROR at line 1:
    ORA-01400: cannot insert NULL into ("HR"."TEST"."A")
SQL>
Cheers,
    Edited by: Avinash Tripathi on Nov 24, 2009 11:17 AM

How to add a new data element for an existing table field (primary key field)

    Hi Experts,
How do I add a new data element for an existing table field (a primary key field)?
For this field there are no foreign key relationships and not even a check table.
While activating the table it is giving the message below.
Can anyone help solve this, with the steps to add a new data element for an existing primary key field of a table?
    Check table (NAMING SPACE/TABLE NAME(EX:/TC/VENDOR)) (username/19.02.10/03:29)           
    Primary key change not permitted for value table /TC/VENDOR
    Check on table  /TC/VENDOR resulted in errors              
    Thanks
    Ravi

    Hi,
The easiest way is to download the table, e.g. into an Excel or text file (if possible). Drop the table from the database, build your table with the new key field, then build the database table again and fill it.
You can also do it within the database via a new table: copy the data into a new table, drop the old one, build the enhanced one and fill it, and afterwards drop your (temporary) table.
    Maybe there are other ways, but this works.
    Success,
    Rob

  • Every table need primary key in toplink mapping to fetch the data

    Hi everyone,
We are working on TopLink mapping. Is it necessary that every table have a primary key? If yes, give me an example.

    Hi everyone,
Alterations in the database will not be automatically reflected in TopLink, so is there any alternative?
    with regards
    abusufian

  • Null in Composite Primary Key and "Primary keys must not contain null"

    Hello all.
I'm a newbie concerning JPA/EJB3, but I was wondering whether TopLink doesn't support composite primary keys with null in some field (something perfectly right in any RDBMS).
    I used JDeveloper (I'm using Oracle 10g database and JDeveloper 10.1.3.2.) wizards to generate JPA classes and I checked out generated files (with annotations), so they should be right (by the way, other O-R mappings for my model are working right, but this one).
    I'm getting the next error:
    Exception Description: The primary key read from the row [DatabaseRecord(
         TSUBGRUPOSLDI.CD_GRUP => 01
         TSUBGRUPOSLDI.CD_SUBGRUP => null
         TSUBGRUPOSLDI.CG_POBL => 058
         TSUBGRUPOSLDI.CG_PROV => 28
         TSUBGRUPOSLDI.DSCR => Sanidad)] during the execution of the query was detected to be null. Primary keys must not contain null.
    Compound primary key is (CD_GRUP, CD_SUBGRUP). No foreign keys, no joins (only a NamedQuery: "select o from ..."). It's the simplest case!
    I checked out that everything runs ok if there's no "null" value in CD_SUBGRUP.
    After some research (this and other forums) I'm beginning to believe that it's not supported, but not sure.
Am I doing something wrong? If not, what is the reason for not supporting this? Will it be supported in the future?
    Thanks in advance.

    Null is a special value and in many databases is not comparable to another null value (hence the isNull operator) and may pose problems when used to uniquely identify an Entity. TopLink does not support null values within a composite PK. As the nullable column is most likely not designated as a PK within your database table (many databases do not allow this) I recommend updating the Entity PKs to match that of the database.
--Gordon
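A quick illustration of the point that null never compares equal to null (plain SQL against DUAL):
select count(*) from dual where null = null;    -- 0: the predicate is unknown, so the row is filtered out
select count(*) from dual where null is null;   -- 1: IS NULL is the only test that treats a null as a match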

  • Named query, primary keys must not contain null

    We have an question on the named query. Here is the scenario:
    Class JoinClassC (id, name, desc, comment) is mapped to two tables Join_Table_A (id, name, description) and Join_Table_B (id, name, comments); Join_Table_A is the primary table; two tables are associated via primary key.
    Defined a named query (findByName, ReadAllQuery/SQL) for JoinClassC descriptor. The sql is:
    SELECT a.ID,a.NAME,a.DESCRIPTION,b.COMMENTS FROM JOIN_TABLE_A a, JOIN_TABLE_B b WHERE a.NAME=#name AND a.ID=b.ID AND a.NAME=b.NAME
    Getting QueryException while executing the query. Here is the trace:
    [3/2/05 13:50:24:795 EST] f8b62aa SystemOut O 2005.03.02 01:50:24.795--ServerSession(738697921)--Thread[Servlet.Engine.Transports : 4,5,main]--Connection(1095574161)--SELECT a.ID,a.NAME,a.DESCRIPTION,b.COMMENTS FROM JOIN_TABLE_A a, JOIN_TABLE_B b WHERE a.NAME='persistence' AND a.ID=b.ID AND a.NAME=b.NAME
    2005.03.02 01:50:24.835--ClientSession(1127670417)--Thread[Servlet.Engine.Transports : 4,5,main]--Exception [TOPLINK-6044] (OracleAS TopLink - 10g (9.0.4.4) (Build 040627)): oracle.toplink.exceptions.QueryException
    Exception Description: The primary key read from the row [DatabaseRow(
         USER04.JOIN_TABLE_B.ID => 2
         USER04.JOIN_TABLE_A.NAME => persistence
         USER04.JOIN_TABLE_A.DESCRIPTION => FJF Persistence Fram
         USER04.JOIN_TABLE_B.COMMENTS => The Persistence Fram)] during the execution of the query was detected to be null. Primary keys must not contain null.
    Query: ReadAllQuery(com.ford.it.persistence.unittest.dom.JoinClassC)Local Exception Stack:
    Exception [TOPLINK-6044] (OracleAS TopLink - 10g (9.0.4.4) (Build 040627)): oracle.toplink.exceptions.QueryException
    Exception Description: The primary key read from the row [DatabaseRow(
         USER04.JOIN_TABLE_B.ID => 2
         USER04.JOIN_TABLE_A.NAME => persistence
         USER04.JOIN_TABLE_A.DESCRIPTION => FJF Persistence Fram
         USER04.JOIN_TABLE_B.COMMENTS => The Persistence Fram)] during the execution of the query was detected to be null. Primary keys must not contain null.
    Query: ReadAllQuery(com.ford.it.persistence.unittest.dom.JoinClassC)
         at oracle.toplink.exceptions.QueryException.nullPrimaryKeyInBuildingObject(QueryException.java:542)
    Adding b.ID to the sql select will solve the problem.
Question: does the primary key have to be present in the SQL select to run this kind of query? Is there any way to work around the restriction?
    Your help would be greatly appreciated.
    Haiwei

    The problem is that you need to select both primary keys for the multiple table descriptor.
    i.e.
    SELECT a.ID,a.NAME,a.DESCRIPTION,b.ID, b.COMMENTS FROM JOIN_TABLE_A a, JOIN_TABLE_B b WHERE a.NAME=#name AND a.ID=b.ID AND a.NAME=b.NAME
Your SQL needs to return the same data that TopLink would have selected, which includes the primary key for both tables. TopLink expects both the a.ID and b.ID fields in the select; because you only selected one and the field names are the same, TopLink thought the ID field was for b.ID, not a.ID. If you select both it will be OK.

  • Bad execution plans when using parameters, subquery and partition by

    We have an issue with the following...
    We have this table:
    CREATE TABLE dbo.Test (
    Dt date NOT NULL,
    Nm int NOT NULL,
CONSTRAINT PK_Test PRIMARY KEY CLUSTERED (Dt)
);
    This table can have data but it will not matter for this topic.
    Consider this query (thanks Tom Phillips for simplifying my question):
    declare @date as date = '2013-01-01'
    select *
    from (
    select Dt,
    row_number() over (partition by Dt order by Nm desc) a
    from Test
    where Dt = @date
    ) x
    go
    This query generates an execution plan with a clustered seek.
    Our queries however needs the parameter outside of the sub-query:
    declare @date as date = '2013-01-01'
    select *
    from (
    select Dt,
    row_number() over (partition by Dt order by Nm desc) a
    from Test
    ) x
    where Dt = @date
    go
    The query plan now does a scan followed by a filter on the date.
    This is extremely inefficient.
    This query plan is the same if you change the subquery into a CTE or a view (which is what we use).
    We have this kind of setup all over the place and the only way to generate a good plan is if we use dynamic sql to select data from our views.
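(The dynamic-SQL workaround looks roughly like this; the view name is made up, and the date is inlined as a literal, which is what gets the seek back:)
declare @date date = '2013-01-01';
declare @sql nvarchar(max) =
    N'select * from dbo.TestView where Dt = ''' +
    convert(nvarchar(10), @date, 120) + N'''';
exec (@sql);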
    Is there any kind of solution for this?
    We have tested this (with the same result) on SQL 2008R2, 2012 SP1, 2014 CU1.
    Any help is appreciated.

    Hi Tom,
Parameter sniffing is a different problem. A query plan is made based on a value which may be really off in the data distribution. We do have this problem as well, e.g. when searching for today's data when the statistics think the table only has yesterday's data (a problem we have a lot involves the lack of a sufficient number of buckets in statistics objects).
    This problem is different.
    - It doesn't matter what the parameter value is.
    - You can reproduce this example with a fresh table. I.e. without statistics.
    - No matter what the distribution of data (or even if the statistics are correct), there is absolutely no case possible (in my example) where doing a scan followed by a filter should have better results than doing a seek.
    - You can't change the behavior with hints like "option(recompile)" or "optimize for".
    This problem has to do with the specific combination of "partition by" and the filter being done outside of the immediate query. Try it out, play with the code, e.g. move the where clause inside the subquery. You'll see what I mean.
    Thanks for responding.
    Michel

  • Dynamic SQL Joining between tables and Primary keys being configured within master tables

    Team , Thanks for your help in advance !
I'm looking to code dynamic SQL which should refer to master tables for table names and primary keys, and then join for insertion into target tables.
    EG:
    INSERT INTO HUB.dbo.lp_order
    SELECT *
    FROM del.dbo.lp_order t1
    where not exists ( select *
from hub.dbo.lp_order t2
    where t1.order_id = t2.order_id )
    SET @rows = @@ROWCOUNT
    PRINT 'Table: lp_order; Inserted Records: '+ Cast(@rows AS VARCHAR)
    -- Please note Databse names are going to remain the same but table names and join conditions on keys
    -- should vary for each table(s) being configured in master tables
    Sample of Master configuration tables with table info and PK Info :
Table Info
Table_info_ID    Table_Name
1                lp_order
7                lp__transition_record

Table_PK_Info
Table_PK_Info_ID    Table_info_ID    PK_Column_Name
2                   1                order_id
8                   7                transition_record_id
    There can be more than one join condition for each table
    Thanks you !
    Rajkumar Yelugu

    Hi Rajkumar,
I am glad to hear that you figured the question out by yourself.
There's a flaw in the WHILE loop in your sample code; just in case you hadn't noticed it, please see below.
--In this case, it goes into an infinite loop
DECLARE @T TABLE(ID INT)
INSERT INTO @T VALUES(1),(3),(2)
DECLARE @ID INT
SELECT @ID = MIN(ID) FROM @T
WHILE @ID IS NOT NULL
BEGIN
    PRINT @ID
    -- once @ID reaches the maximum value, this SELECT matches no row,
    -- @ID keeps its old value and the loop never ends
    SELECT @ID = ID FROM @T WHERE ID > @ID
END
    So a cursor would be the appropriate option in your case, please reference below.
DECLARE @Table_Info TABLE
(
Table_info_ID INT,
Table_Name VARCHAR(99)
)
INSERT INTO @Table_Info VALUES(1,'lp_order'),(7,'lp__transition_record');
DECLARE @Table_PK_Info TABLE
(
Table_PK_Info_ID INT,
Table_info_ID INT,
PK_Column_Name VARCHAR(99)
)
INSERT INTO @Table_PK_Info VALUES(2,1,'order_id'),(8,7,'transition_record_id'),(3,1,'order_id2')
    DECLARE @SQL NVarchar(MAX),
    @ID INT,
    @Table_Name VARCHAR(20),
    @whereCondition VARCHAR(99)
    DECLARE cur_Tabel_Info CURSOR
    FOR SELECT Table_info_ID,Table_Name FROM @Table_Info
    OPEN cur_Tabel_Info
    FETCH NEXT FROM cur_Tabel_Info
    INTO @ID, @Table_Name
    WHILE @@FETCH_STATUS = 0
    BEGIN
    SELECT @whereCondition =ISNULL(@whereCondition+' AND ','') +'t1.'+PK_Column_Name+'='+'t2.'+PK_Column_Name FROM @Table_PK_Info WHERE Table_info_ID=@ID
    SET @SQL = 'INSERT INTO hub.dbo.'+@Table_Name+'
    SELECT * FROM del.dbo.'+@Table_Name+' AS T1
    WHERE NOT EXISTS (
    SELECT *
    FROM hub.dbo.'+@Table_Name+' AS T2
    WHERE '+@whereCondition+')'
    SELECT @SQL
    --EXEC(@SQL)
    SET @whereCondition = NULL
    FETCH NEXT FROM cur_Tabel_Info
    INTO @ID, @Table_Name
END
CLOSE cur_Tabel_Info
DEALLOCATE cur_Tabel_Info
If you had already noticed and fixed the flaw, sharing your answer is always welcome.
    If you have any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • SQVI - Join tables using similar fields (but not same)

    Hi folks!
I would like to know if there is a way to do a join between two tables through fields that are similar but not exactly the same. For example, I need to join tables EKKO and CDHDR, where the PO numbers exist in both tables (EKKO-EBELN and CDHDR-OBJECTID), but the fields are different. Is it possible to achieve such a join?
    Appreciate your help.
    Regards,
    Anwar.

> Mohammad Anwar wrote:
> though their data type is the same (CHAR).
Unfortunately it is not enough... You have to select separately from the two tables and merge the data somehow in the program.

  • JOINED inheritance; no primary key joins being generated?

    I'm working with the Sun Application Server version 9. I'm using the javax.persistence APIs in conjunction with EJB3. I've told the app server to generate tables automatically (via toplink; I've set it to drop-and-create tables).
    I have a three-level-deep inheritance hierarchy (like Person extends Party extends BaseObject). BaseObject has its @Inheritance strategy set to JOINED. Everything comes out OK, but I am not seeing any foreign keys being created between the subclass primary keys.
    That is, there is no foreign key that enforces that a Person's PK must refer to a Party's PK, which must, in turn, refer to a BaseObject's PK.
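In SQL terms, what I expected the JOINED strategy to generate is something along these lines (the table and column names are just my guess from the entity names):
alter table PERSON add constraint FK_PERSON_PARTY
  foreign key (ID) references PARTY (ID);
alter table PARTY add constraint FK_PARTY_BASEOBJECT
  foreign key (ID) references BASEOBJECT (ID);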
    I've also tried using the @PrimaryKeyJoin annotation, but that doesn't seem to affect the outcome.
    So far this is not a big deal, but I'd like to know if it is a bug in Glassfish/Sun App Server or whether this is by design.
    Thanks,
    Laird

    OK,
My apologies; it seems that, in the aim of simplifying the explanation of what is occurring, I missed some important pieces of information. It seems as though the redundant joins only occur when doing a mapping that contains a constant value, i.e.
    <field name="classB">
    <jdbc-field-map type="one-one" column.columnA="columnA"
    column.columnB="XXXXXXXX" />
    </field>
And looking at Section 7.5 of the reference manual, the SQL generated has the redundant join also. Can someone explain why the join is necessary in these circumstances? All the values of the join columns in tableB are known, so why does tableA need to be referenced at all?
