Denormalize many-to-many

I'm trying to create a denormalized view of my data that uses a typical join table to accomplish a many-to-many structure. The three tables involved are:
CREATE TABLE  "POC_SUBNETS"
   (     "ID" NUMBER,
     "OCTET1" NUMBER,
     "OCTET2" NUMBER,
     "OCTET3" NUMBER,
     "OCTET4" NUMBER,
     "SLASH" VARCHAR2(30),
     "MASK" VARCHAR2(30),
     "DESCRIPTION" VARCHAR2(255),
     "ASSIGNED" VARCHAR2(3),
      CONSTRAINT "POC_SUBNETS_PK" PRIMARY KEY ("ID") ENABLE,
      CONSTRAINT "POC_SN_ASSGN_CON" CHECK ( "ASSIGNED" in ('YES', 'NO')) ENABLE
CREATE TABLE  "POC_CONTACTS"
   (     "UNID" VARCHAR2(30),
     "FNAME" VARCHAR2(30),
     "LNAME" VARCHAR2(30),
     "DESK_PHONE" VARCHAR2(30),
     "CELL_PHONE" VARCHAR2(30),
     "PAGER" VARCHAR2(30),
     "OTHER_PHONE" VARCHAR2(30),
     "EMAIL" VARCHAR2(50),
     "ORG_ID" VARCHAR2(10),
      CONSTRAINT "POC_CONTACTS_PK" PRIMARY KEY ("UNID") ENABLE,
      CONSTRAINT "POC_CONTACTS_CON" UNIQUE ("FNAME", "LNAME") ENABLE
CREATE TABLE  "POC_SUBNET_CONTACTS"
   (     "SUBNET_ID" NUMBER,
     "UNID" VARCHAR2(30),
     "CTYPE" VARCHAR2(30),
      CONSTRAINT "POC_SUBNET_CONTACTS_PK" PRIMARY KEY ("SUBNET_ID", "UNID", "CTYPE") ENABLE
   );
Obviously, this structure allows each subnet to have many contacts and each contact to cover many subnets, but each entry in poc_subnet_contacts must be a unique combination of subnet_id, unid, and ctype.
Well, now that we've satisfied the requirement to allow this arrangement, the users want a denormalized view of the data in which each subnet shows its zero to n contacts on the same row as the subnet. Something like this:
subnet_id, unid1, unid2, unid3, unidn, etc...
In truth, I can limit the number of contacts shown per subnet to 3, but I'm curious what the best way of accomplishing this denormalized view is. Here's something that I came up with:
select id, octet1, octet2, octet3, octet4,
(select unid from poc_subnet_contacts where id = subnet_id and ctype = 'LAN') LAN_UNID,
(select unid from poc_subnet_contacts where id = subnet_id and ctype = 'SEC') SEC_UNID,
(select unid from poc_subnet_contacts where id = subnet_id and ctype = 'NSC') NSC_UNID,
(select unid from poc_subnet_contacts where id = subnet_id and ctype = 'FAC') FAC_UNID
from poc_subnets;
This has the obvious limitation that a contact only appears if it is of one of those specific types, and contacts of those types may not always exist.
There could also be multiple contacts of the same type that I would like to show, if possible. This situation has caused the dreaded "ORA-01427: single-row subquery returns more than one row" error.
Would a materialized view help in this situation?
Earl
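For reference, a conditional-aggregation rewrite along these lines avoids ORA-01427 by grouping per subnet and collapsing any duplicates with MAX. This is only a sketch built on the tables above, not necessarily the suggestion referred to in the follow-up below:
-- One row per subnet; MAX picks a single UNID per contact type (arbitrarily, if several exist).
select s.id, s.octet1, s.octet2, s.octet3, s.octet4,
       max(case when sc.ctype = 'LAN' then sc.unid end) as lan_unid,
       max(case when sc.ctype = 'SEC' then sc.unid end) as sec_unid,
       max(case when sc.ctype = 'NSC' then sc.unid end) as nsc_unid,
       max(case when sc.ctype = 'FAC' then sc.unid end) as fac_unid
  from poc_subnets s
  left join poc_subnet_contacts sc on sc.subnet_id = s.id
 group by s.id, s.octet1, s.octet2, s.octet3, s.octet4;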

biswabijay,
Thanks for the reply. What you've suggested works similarly to my original, presumably without the problem of a single-row subquery returning multiple rows because of the use of MAX. That's a step in the right direction. Thank you for that.
What about the idea of being able to more dynamically add columns and the case where I might want to show multiple contacts of the same type on one row? Any ideas?
In Excel and Access this is called a crosstab query. Is there anything similar in Oracle?
Earl
P.S. I apologize in advance for mentioning the 'E' and 'A' words here on the Oracle forums. And no, I don't want to use 'E' OR 'A'. Thank you.
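On the crosstab question, and assuming Oracle 11g Release 2 or later, LISTAGG is one way to show several contacts of the same type on a single row (a sketch built on the tables above; on 11g the PIVOT clause also covers the fixed-column crosstab case):
-- All contacts of each type concatenated into one column per subnet;
-- LISTAGG ignores the NULLs produced by the non-matching CASE branches.
select s.id, s.octet1, s.octet2, s.octet3, s.octet4,
       listagg(case when sc.ctype = 'LAN' then sc.unid end, ',')
           within group (order by sc.unid) as lan_unids,
       listagg(case when sc.ctype = 'SEC' then sc.unid end, ',')
           within group (order by sc.unid) as sec_unids
  from poc_subnets s
  left join poc_subnet_contacts sc on sc.subnet_id = s.id
 group by s.id, s.octet1, s.octet2, s.octet3, s.octet4;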

Similar Messages

  • Self referencing table many to many relationship

    I am in a bit of a logic pickle, and I was wondering if someone could help me out.
    I have a table in a database I am designing called document; it holds information, surprisingly, on documents in our DMS.
    I want to create the notion that a document can have multiple related documents, and those documents can be the related documents for many documents.
    So it is a self-referencing table. I have done these before, so no big deal, but this time it is a many-to-many relation, which it wasn't before.
    Maybe something like:
    document
    docid (pk)
    related_doc
    docid (pk) (fk to document.docid)
    related_docid (pk) (fk to document.docid)
    Does anyone have any experience with this or any advice I might find useful?
    Thanks!

    A junction table can be used to resolve a many-to-many relationship, as in your example. There are two PK/FK relationships between the document and related_doc tables. This will prevent denormalization of data.
    The other option could be to have just one table with two columns (parent_doc_id and child_doc_id) and have a PK constraint on both the columns - just like bill-of-materials.
    But I think the approach you have in your posting will work well.
    Shakti
    http://www.impact-sol.com
    Developers of Guggi Oracle - Tool for DBAs and Developers
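    As a sketch of the junction-table approach described above (names taken from the posting; the NUMBER key type is an assumption):
    create table document (
        docid  number primary key
        -- other document columns
    );
    create table related_doc (
        docid          number not null references document (docid),
        related_docid  number not null references document (docid),
        constraint related_doc_pk primary key (docid, related_docid)
    );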

  • Hibernate - Spring - problem with mapping (many-to-many)

    Hello,
    I want to map the following situation. I have a table called EDUCATION and a table called SCHOOLS. Between those tables I have an associative table called EDUCATION_SCHOOLS. The (useful) fields:
    EDUCATION:
    id (long) - PK
    name (varchar)
    versionNr (long)
    SCHOOLS:
    id (long) - PK
    name (varchar)
    versionNr (long)
    EDUCATION_SCHOOLS:
    id (long) - PK
    education_id (long) (FK to EDUCATION.id)
    school_id (long) (FK to SCHOOLS.id)
    name (varchar)
    versionNr (long)
    There is a unique constraint between EDUCATION_SCHOOLS.education_id and EDUCATION_SCHOOLS.school_id.
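    For reference, the junction table described above corresponds roughly to DDL like this (a sketch only; the VARCHAR2 length is an assumption, and EDUCATION and SCHOOLS each have id, name and versionNr as listed):
    create table education_schools (
        id            number primary key,
        education_id  number not null references education (id),
        school_id     number not null references schools (id),
        name          varchar2(100),
        versionnr     number,
        constraint education_schools_uk unique (education_id, school_id)
    );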
    What I want to be able to do:
    EDUCATION: select, update, insert
    SCHOOLS: select, update, insert
    EDUCATION_SCHOOLS: select, update (only the non-FK fields), insert
    I never want to delete anything in those tables. (and it's never ever going to be an option either)
    Hibernate version:
    Hibernate-Version: 3.0.5
    Mapping documents:
    Education:
    <hibernate-mapping>
         <class name="##.Education" table="EDUCATION">
              <id name="id" column="ID" type="java.lang.Long">
                   <generator class="sequence">
                        <param name="sequence">EDUCATION_SEQ</param>
                   </generator>
              </id>
              <version name="versionNr" column="VERSIONNR" type="long"/>
              <property name="name" column="NAME" type="string" />
            <set name="SCHOOLS" table="EDUCATION_SCHOOLS">
                <key column="EDUCATION_ID" />
                <many-to-many class="##.Schools" column="SCHOOL_ID" lazy="false" />
            </set>
         </class>
    </hibernate-mapping>
    Schools:
    <hibernate-mapping>
         <class name="##.Schools" table="SCHOOLS">
              <id name="id" column="ID" type="java.lang.Long">
                   <generator class="sequence">
                        <param name="sequence">SCHOOLS_SEQ</param>
                   </generator>
              </id>
              <version name="versionNr" column="VERSIONNR" type="long"/>
              <property name="name" column="NAAM_NAME" type="string" />
            <set name="educations" table="EDUCATION_SCHOOLS" inverse="true" cascade="none">
                <key column="SCHOOL_ID" />
                <many-to-many class="##.Schools" column="SCHOOL_ID" lazy="proxy"/>
            </set>
         </class>
    </hibernate-mapping>
    Education_schools:
    <hibernate-mapping>
    <class name="##.EducationSchools" table="EDUCATION_SCHOOLS">
               <id name="id" column="ID" type="java.lang.Long" unsaved-value="0">
                   <generator class="sequence">
                        <param name="sequence">SEQ_EDUCATION_SCHOOLS</param>
                   </generator>
              </id>
              <version name="versionNr" column="VERSIONNR" type="long" />
              <many-to-one name="education" class="##.Education" cascade="none" lazy="proxy"
                           column="EDUCATION_ID" not-null="true"/>
            <many-to-one name="schools" class="##.Schools" cascade="none" lazy="proxy"
                        column="SCHOOL_ID" not-null="true"/>  
         </class>
    </hibernate-mapping>
    Name and version of the database you are using:
    Oracle XE 10g
    I am able to:
    EDUCATION: select, insert, update
    SCHOOLS: select, insert, update
    EDUCATION_SCHOOLS: select
    Problems:
    EDUCATION_SCHOOLS: when I try to insert, I sometimes get unique constraint violations (when I should get them, i.e. I'm trying to insert something that already exists, but how do I stop Hibernate from inserting?).
    EDUCATION_SCHOOLS: when I try to update, sometimes it works, but often I get:
    23:03:55,484 [http-8081-1] ERROR be.vlaanderen.lne.vea.epb.ui.struts.EpbExceptionHandler - org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Object of class [##.EducationSchools] with identifier [null]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [##.EducationSchools#<null>]
    ex.getMessage() Object of class [##.EducationSchools] with identifier [null]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [##.EducationSchools#<null>]
    org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Object of class [##.EducationSchools] with identifier [null]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [##.EducationSchools#<null>]
    Caused by: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect):
    As you can see from the stack trace, I use Spring for the transactionManager (org.springframework.orm.hibernate3.HibernateTransactionManager), in which I use a sessionFactory (org.springframework.orm.hibernate3.LocalSessionFactoryBean).
    In my DAO, I try to save with the regular this.getHibernateTemplate().saveOrUpdate that has always worked for me.
    Another problem I have:
    when i update "name" in EDUCATION, the records with that ID are delete from EDUCATION_SCHOOLS ...
    As I am experiencing 3 different problems, I'm pretty sure something is wrong in the mapping files .. however I fail to find out what .. Any input would be greatly appreciated.
    (I translated some class/table names; that's what the ## placeholders stand for)
    Edited by: Bart_Blommaerts on Jul 29, 2008 11:53 PM

    Thank you for your input.
    When I try what you suggest, I still get the same error:
    16:39:30,406 [http-8081-1] ERROR ###.EpbExceptionHandler - org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Object of class [###.EducationSchools] with identifier [2]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [###.EducationSchools#2]
    ex.getMessage() Object of class [###.EducationSchools] with identifier [2]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [###.EducationSchools#2]
    org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Object of class [###.EducationSchools] with identifier [2]: optimistic locking failed; nested exception is org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [###.EducationSchools#2]
    Caused by: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [###.EducationSchools#2]
    If you had to map the database tables I'm trying to map, how would you do it?

  • Many-to-many performance issue

    I realize that many-to-many joins have been discussed before (yes, I looked through many threads), but I'm having a slight variation on the issue. Our data warehouse has been functioning for a couple of years now, but we're now experiencing a dramatic degradation in report performance. I'll tell you everything I know and what I've tried. My hope is that someone will have an idea that hasn't occurred to me yet.
    The troubling data links deal with accounts and account_types. Each transaction will have one account, but each account can have multiple account_types and each account_type is made up of multiple accounts. It ends up looking like this:
    Transaction_cube --< account_dimension >--< account_type_table
    Given the many-to-many relationship between account and account_type, this is the only architecture I could come up with that will maintain data integrity in the transaction cube.
    I know that this is the cause of the performance issues because the reports run normally when this is removed. The volume of data obviously increases over time, but the problem appeared very suddenly -- not a gradual degradation that one would expect from a volume issue. The cube is partitioned by year and we're a little below last year's growth.
    The other fact to throw in is that the account_type table did increase in size by an additional 30% when we first noticed the problem. However, the business was able to go back and remove half of the account_types (unused types) so now the table has fewer rows than it had before we noticed the problem (~15k rows in the account_type table).
    We have tried pinning the table so that it remains in memory, but that did not help. I tried creating a materialized view combining accounts and account_types, with a similar lack of improvement. I've tried adding indexes, but there is still a full-table scan. All database objects are analyzed nightly after the data load is completed.
    I'm fresh out of ideas at this point. Any suggestions and/or ideas would be greatly appreciated.

    I've thought about that. What it would mean would be aprox. 20 additional columns for each of the different account_types. Unfortunately, that would also mean that all the reports that use the account_type would have to have a condition:
    WHERE acct_type1 = 'Income Stmt.' OR acct_type2 = 'Income Stmt.' OR ...
    Since the account_types are not set up in a hierarchy and there must be only one row per account, I'm not sure that this is a feasible solution.
    Thank you for the suggestion.

  • Master - detail block (many-to-many)

    Hello, I have 2 tables:
    1-(parts)its primary key : p_id
    2-(suppliers)its primary key : s_id
    and they have a many-to-many relationship between them.
    the table Resulted from the relationship is(part_supp)
    How can I create a master-detail block based on this relationship? In other words, which of the tables will be the master and which will be the detail?
    I am using Forms 6i.
    please help.....

    M3ATH wrote:
    Hello, , I have 2 tables:
    1-(parts)its primary key : p_id
    2-(suppliers)its primary key : s_id
    and they have many-to-many relationship between them.
    A many-to-many relation between two tables isn't a good database design. You have to add a junction table between them.
    the table Resulted from the relationship is (part_supp)
    Is this your junction table (part_supp)?
    >
    How can I create master-detail block based on this relationship. In other words, which of the tables will be the master and which will be the detail.
    If part_supp is your junction table between parts and suppliers, then try this:
    Use the junction table as a hidden block, then establish a standard Forms relation between
    Parts and (hidden) part_supp, then between part_supp and suppliers.
    Hope this helps

  • How to resolve many-to-many join by 2 one-to-many joins

    Hi,
       I was asked many times how to resolve a many-to-many relationship between two tables. I read that you can use two one-to-many relationships to resolve this. Can someone explain when a many-to-many relationship occurs between two tables and how to resolve it, with practical examples? Is there any article on this?
    Regards,
    Nanda Kishore

    Hi,
    Please check below link.
    http://www.forumtopics.com/busobj/viewtopic.php?p=859029&sid=20d79e3df07b0d8b41aadfbd902bb6b2
    http://blog.oaktonsoftware.com/2011/04/bridge-tables-and-many-to-many.html
    Thanks,
    Amit
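    As a concrete illustration of the bridge-table pattern those links describe (hypothetical STUDENT/COURSE/ENROLLMENT names, not from the original question): a student can take many courses and a course has many students, so the bridge table carries one foreign key to each side and the many-to-many join becomes two one-to-many joins.
    create table student (student_id number primary key, student_name varchar2(50));
    create table course  (course_id  number primary key, course_name  varchar2(50));
    -- Bridge (junction) table: one row per student/course pair.
    create table enrollment (
        student_id  number not null references student (student_id),
        course_id   number not null references course (course_id),
        constraint enrollment_pk primary key (student_id, course_id)
    );
    -- The many-to-many query then reduces to two one-to-many joins through the bridge:
    select s.student_name, c.course_name
      from student s
      join enrollment e on e.student_id = s.student_id
      join course     c on c.course_id  = e.course_id;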

  • Many to Many Relationships in ejb.....Urgent Pls help

    hi all,
    I am working in EJB. Now we are coding for a many-to-many relationship:
    Company and Salary register
    One Company can have many salary registers, and more than one company can have one salary register.
    Company and Salary register are the two tables holding their details, and the intermediate table holding the PKs of company and salary register is the Company_salary_register table. I want to add values to this table through a separate interface.
    How do I model it? I have given the add method for this company_salary_register table in the company bean. I am getting the value of this one as a collection, but it is asking me to add the values as instances of LocalSalary (interface and entity bean). If I typecast it to LocalSalary, a ClassCastException occurs.
    Please help me. Even material on modelling many-to-many relationships in EJB would be enough.
    thanx in advance,

    THaaaaaanks alottt for the quick reply...
    I used the first way to insert the document and i was successful...
    but thing is that..
    when i do
    SQL> desc message_type
    message_type is NOT FINAL
    Name           Null?    Type
    -------------  -------  --------------------
    SYS_XDBPD$              XDB.XDB$RAW_LIST_T
    to                      VARCHAR2(4000 CHAR)
    from                    VARCHAR2(4000 CHAR)
    subject                 VARCHAR2(4000 CHAR)
    body                    VARCHAR2(4000 CHAR)
    i can see the structure...
    and when i
    select count(*) from messages
    it says count is 1.
    so how to know where the data is ..like below in the data :
    'to' it has '[email protected]'
    and for 'from' it has '[email protected]'..and so on..
    <message>
    <to>[email protected]</to>
    <from>[email protected]</from>
    <subject>Question</subject>
    <body>Does this demo work?</body>
    </message>
    so what is the command to get that data...
    because when i do
    select * from message_type..it says
    select * from message_type
    ERROR at line 1:
    ORA-04044: procedure, function, package, or type is not allowed here
    Plss help me outt.......this will be my final step...
    Thankssssssssssssssss

  • Creating data in a many-to-many-relationship

    Hello,
    we really have problems in implementing a JClient dialog based on BC4J for creating data in a many to many relationship - especially with cascade delete on both sides.
    Simplified our tables look like:
    create table A_TABLE (
        A_ID VARCHAR2(5) not null,
        A_NAME VARCHAR2(30) not null,
        constraint PK_A_TABLE primary key (A_ID),
        constraint UK_A_TABLE unique (A_NAME)
    );
    create table B_TABLE (
        B_ID VARCHAR2(5) not null,
        B_NAME VARCHAR2(30) not null,
        constraint PK_B_TABLE primary key (B_ID),
        constraint UK_B_TABLE unique (B_NAME)
    );
    create table AB_TABLE (
        A_ID VARCHAR2(5) not null,
        B_ID VARCHAR2(5) not null,
        constraint PK_AB_TABLE primary key (A_ID, B_ID),
        constraint FK_AB_A foreign key (A_ID) references A_TABLE (A_ID) on delete cascade,
        constraint FK_AB_B foreign key (B_ID) references B_TABLE (B_ID) on delete cascade
    );
    Could JDev Team please provide a BC4J/JClient sample that performs the following task:
    The dialog should use A_TABLE as master and AB_TABLE as detail. The detail displays the names associated with the IDs. Next to AB_TABLE should be a view of B_TABLE which only displays rows that are currently not in AB_TABLE. Two buttons are used for adding and removing rows in AB_TABLE. After adding or removing rows in the intersection the B_TABLE view should be updated. The whole thing should work in the middle and client tier. This means no database round trips after each add/remove, no posts for AB_TABLE and no query reexecution for B_TABLE until commit/rollback.
    This is a very common scenario: for an item group (A_TABLE) one can select and deselect items (AB_TABLE) from a list of available items (B_TABLE). Most of JDeveloper's wizards use this. They can handle multi/single selections, selections from complex structures like trees, and so on. OK, the wizards are not based on BC4J, are they? How can we do it with BC4J?
    Our main problems are:
    1. Updating the view of B_TABLE after add/remove reflecting the current selection
    2. A good strategy for displaying the names instead of the IDs (subqueries or joining the three tables)
    3. A JBO-27101 DeadEntityAccessException when removing an existing row from AB_TABLE and adding it again
    Other problems:
    4. We get a JBO-25030 InvalidOwnerException when creating a row in AB_TABLE. This is caused by the composition. We work around this using createAndInitRow(AttributeList) on the view object (instead of create()). This is our add-Action:
    ViewObject abVO = panelBinding.getApplicationModule().findViewObject("ABView");
    ViewObject bVO = panelBinding.getApplicationModule().findViewObject("BView");
    Row bRow = bVO.getCurrentRow();
    NameValuePairs attribList = new NameValuePairs(
    new String[]{"BId"}, new Object[]{bRow.getAttribute("BId")});
    Row newRow = abVO.createAndInitRow(attribList);
    abVO.insertRow(newRow);
    5. After inserting the new row the NavigationBar has enabled commit/rollback buttons and AB_TABLE displays the row. But performing a commit does nothing. With the following statement after insertRow(newRow) the new row is created in the database:
    newRow.setAttribute("BId", bRow.getAttribute("BId"));
    Please give us some help on this subject.
    Best regards
    Michael Thal

    <Another attempt to post a reply. >
    Could JDev Team please provide a BC4J/JClient sample
    that performs the following task:
    The dialog should use A_TABLE as master and AB_TABLE
    as detail. The detail displays the names associated
    with the IDs. Next to AB_TABLE should be a view of
    B_TABLE which only displays rows that are currently
    not in AB_TABLE. Two buttons are used for adding and
    removing rows in AB_TABLE. After adding or removing
    rows in the intersection the B_TABLE view should be
    updated. The whole thing should work in the middle
    and client tier. This means no database round trips
    after each add/remove, no posts for AB_TABLE and no
    query reexecution for B_TABLE until commit/rollback.
    This is a very common szenario: For an item group
    (A_TABLE) one can select and deselect items
    (AB_TABLE) from a list of available items (B_TABLE).
    Most of JDeveloper4s wizards use this. They can
    handle multi/single selections, selections from
    complex structures like trees and so on. Ok, the
    wizards are not based on BC4J - or? How can we do it
    with BC4J?
    Our main problems are:
    1. Updating the view of B_TABLE after add/remove reflecting the current selection
    You should be able to use insertRow() to insert the row into the proper collection.
    However, to remove a row only from the collection, you need to add a method on the VO subclasses (and perhaps export this method so that the client side can see it) to unlink a row from a collection (but not remove the associated entities from the cache).
    This new method should use the ViewRowSetImpl.removeRowAt() method to remove the row entry at the given index from its collection. Note that this is an absolute index and not a range index in the collection.
    2. A good strategy for displaying the names instead of the IDs (subqueries or joining the three tables)
    You should join the three tables by using reference (and perhaps readonly) entities.
    3. A JBO-27101 DeadEntityAccessException when removing an existing row from AB_TABLE and adding it again
    This is happening due to the remove() method on the Row, which marks the row as removed. Attempts to add this row into another collection will throw a DeadEntityAccessException.
    You may remove the row from its collection, then call Row.refresh on it to revert the entity back to the undeleted state.
    >
    Other problems:
    4. We get a JBO-25030 InvalidOwnerException when creating a row in AB_TABLE. This is caused by the composition. We work around this using createAndInitRow(AttributeList) on the view object (instead of create()). This is our add-Action:
    ViewObject abVO = panelBinding.getApplicationModule().findViewObject("ABView");
    ViewObject bVO = panelBinding.getApplicationModule().findViewObject("BView");
    Row bRow = bVO.getCurrentRow();
    NameValuePairs attribList = new NameValuePairs(
    new String[]{"BId"}, new Object[]{bRow.getAttribute("BId")});
    Row newRow = abVO.createAndInitRow(attribList);
    abVO.insertRow(newRow);
    This is a handy approach. Note that the BC4J framework does not support dual composition, where the same detail can be owned by two or more masters. In those cases, you also need to implement post ordering to post the masters before the detail (and reverse ordering for deletes).
    >
    5. After inserting the new row the NavigationBar has enabled commit/rollback buttons and AB_TABLE displays the row. But performing a commit does nothing. With the following statement after insertRow(newRow) the new row is created in the database:
    newRow.setAttribute("BId", bRow.getAttribute("BId"));
    This bug in JDev 903 was fixed and a patch set (9.0.3.1) is (I believe) available now via MetaLink.
    >
    >
    Please give us some help on this subject.
    Best regards
    Michael Thal

  • ADF Newbie: How can I create a many-to-many row from an action method?

    Hi all,
    First of all, this is my scenario:
    - JDeveloper 10.1.3.0.4
    - My application is running ADF + Faces + Toplink + EJB Facade
    - My database model includes some many-to-many relationships and some one-to-many relationships.
    I have developed several ADF pages displaying some CRUD operations over my master tables and my master-detail tables without any problem.
    I am now trying to implement some CRUDs over my many-to-many relationship tables.
    Let's call the tables involved master1, master2, and relationship (the many-to-many table).
    Because usability reasons, my user wants me to implement an interface where he can:
    -Select a row from the master1 table
    -Select a row from the master2 table
    -Click a button called "Create relationship"
    I need to implement the code inside the action button to add my new record to the relationship table, and honestly, I have not found any clue.
    Please, give me some directions to solve my problem.
    Thank you in advance for your help...

    The key here is to work with the intersection table for your MtM relationship.
    The way I do it is to have the user view Master Table 1 and click on the Add Relationship button; this button is a link to an add form for the intersection table. On the add form, the Master Table 1 ID is already filled in and I hide it. I then set up Master Table 2 as an LOV on the page. The user selects the item from the LOV and clicks on the commit button. I refresh my many-to-many view and all is grand.
    Using ADF/JSF, I have made it even nicer by turning the add relationship page into a dialog page, that way the user can see the previous page while adding the new relationship.
    Kelly

  • Many-to-many Performance Problem (Using FAQ Template)

    Having read "HOW TO: Post a SQL statement tuning request - template posting" I have gathered:
    I have included some background information at the bottom of the post
    The following SQL statement has been identified as performing poorly. It takes ~160 seconds to execute, but similar SQL statements (shown below the first statement) execute in ~1 second.
    SQL taking 160 seconds:
    SELECT
    a.*
    FROM
    table_a a
    INNER JOIN table_a_b ab ON a.id = ab.media_fk
    WHERE
    ab.channel_fk IN (7, 1);
    SQL taking ~1 second or less:
    ab.channel_fk IN (7);
    Or even:
    ab.channel_fk IN (6, 9, 170, 89);
    The purpose of the SQL is to return rows from table_a that are associated with table_b (not in the SQL) through the junction table table_a_b.
    The version of the database is 10.2.0.4.0
    These are the parameters relevant to the optimizer:
    show parameter optimizer;
    NAME                                               TYPE        VALUE
    optimizer_dynamic_sampling                         integer     2
    optimizer_features_enable                          string      10.2.0.4
    optimizer_index_caching                            integer     0
    optimizer_index_cost_adj                           integer     100
    optimizer_mode                                     string      ALL_ROWS
    optimizer_secure_view_merging                      boolean     TRUE
    show parameter db_file_multi;
    NAME                                               TYPE        VALUE
    db_file_multiblock_read_count                      integer     16
    show parameter db_block_size;
    NAME                                               TYPE        VALUE
    db_file_multiblock_read_count                      integer     16
    select sname, pname, pval1, pval2 from sys.aux_stats$;
    SNAME                          PNAME                          PVAL1                  PVAL2
    SYSSTATS_INFO                  STATUS                                                COMPLETED
    SYSSTATS_INFO                  DSTART                                                07-18-2006 23:19
    SYSSTATS_INFO                  DSTOP                                                 07-25-2006 23:19
    SYSSTATS_INFO                  FLAGS                          0
    SYSSTATS_MAIN                  SREADTIM                       5.918
    SYSSTATS_MAIN                  MREADTIM                       7.889
    SYSSTATS_MAIN                  CPUSPEED                       1383
    SYSSTATS_MAIN                  MBRC                           8
    SYSSTATS_MAIN                  MAXTHR                         1457152
     SYSSTATS_MAIN                  SLAVETHR                       -1
    Here is the output of EXPLAIN PLAN:
    PLAN_TABLE_OUTPUT
    Plan hash value: 3781163428
    | Id  | Operation             | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |                    |  1352K|   771M|       | 60042   (3)| 00:05:56 |
    |*  1 |  HASH JOIN            |                    |  1352K|   771M|    27M| 60042   (3)| 00:05:56 |
    |*  2 |   INDEX FAST FULL SCAN| SYS_IOT_TOP_316310 |  1352K|    11M|       |  1816   (4)| 00:00:11 |
    |   3 |   TABLE ACCESS FULL   | TABLE_A            |  2190K|  1230M|       | 32357   (4)| 00:03:12 |
    Predicate Information (identified by operation id):
       1 - access(""AB"".""MEDIA_FK""=""A"".""ID"")
       2 - filter(""AB"".""CHANNEL_FK""=1 OR ""AB"".""CHANNEL_FK""=7)
    Note
        - 'PLAN_TABLE' is old version
    For reference, the EXPLAIN PLAN when using
    ab.channel_fk IN (6, 9, 170, 89);
    which executes in ~1 second is:
    PLAN_TABLE_OUTPUT
    Plan hash value: 794334170
    | Id  | Operation          | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |           |   143K|    81M|       | 58982   (3)| 00:05:50 |
    |*  1 |  HASH JOIN         |           |   143K|    81M|  2952K| 58982   (3)| 00:05:50 |
    |   2 |   INLIST ITERATOR  |           |       |       |       |            |          |
    |*  3 |    INDEX RANGE SCAN| C_M_INDEX |   143K|  1262K|       |  1264   (1)| 00:00:08 |
    |   4 |   TABLE ACCESS FULL| TABLE_A   |  2190K|  1230M|       | 32357   (4)| 00:03:12 |
    Predicate Information (identified by operation id):
       1 - access(""AB"".""MEDIA_FK""=""A"".""ID"")
       3 - access(""AB"".""CHANNEL_FK""=6 OR ""AB"".""CHANNEL_FK""=9 OR
                  ""AB"".""CHANNEL_FK""=89 OR ""AB"".""CHANNEL_FK""=170)
    Note
        - 'PLAN_TABLE' is old version
    Here is the output of SQL*Plus AUTOTRACE including the TIMING information:
    SQL> set autotrace traceonly arraysize 100;
    SQL> SELECT
      2  a.*
      3  FROM
      4  table_a a
      5  INNER JOIN table_a_b ab ON a.id = ab.media_fk
      6  WHERE
      7  ab.channel_fk IN (7, 1);
    1336148 rows selected.
    Execution Plan
    Plan hash value: 3781163428
    | Id  | Operation             | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |                    |  1352K|   771M|       | 60042   (3)| 00:05:56 |
    |*  1 |  HASH JOIN            |                    |  1352K|   771M|    27M| 60042   (3)| 00:05:56 |
    |*  2 |   INDEX FAST FULL SCAN| SYS_IOT_TOP_316310 |  1352K|    11M|       |  1816   (4)| 00:00:11 |
    |   3 |   TABLE ACCESS FULL   | TABLE_A            |  2190K|  1230M|       | 32357   (4)| 00:03:12 |
    Predicate Information (identified by operation id):
       1 - access("AB"."MEDIA_FK"="A"."ID")
       2 - filter("AB"."CHANNEL_FK"=1 OR "AB"."CHANNEL_FK"=7)
    Note
       - 'PLAN_TABLE' is old version
    Statistics
          10586  recursive calls
              0  db block gets
         200457  consistent gets
         408343  physical reads
              0  redo size
      498740848  bytes sent via SQL*Net to client
         147371  bytes received via SQL*Net from client
          13363  SQL*Net roundtrips to/from client
             49  sorts (memory)
              0  sorts (disk)
         1336148  rows processed
    The TKPROF output for this statement looks like the following:
    TKPROF: Release 10.2.0.4.0 - Production on Mon Oct 1 12:23:21 2012
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Trace file: ..._ora_4896.trc
    Sort options: default
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    ALTER SYSTEM SET TIMED_STATISTICS = TRUE
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.03          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.03          0          0          0           0
    Misses in library cache during parse: 0
    Parsing user id: 21
    SELECT
    a.*
    FROM
    table_a a
    INNER JOIN table_a_b ab ON a.id = ab.media_fk
    WHERE
    ab.channel_fk IN (7, 1)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2     27.23     163.57     179906     198394          0          16
    total        4     27.25     163.58     179906     198394          0          16
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 21
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.01       0.00          0          0          0           0
    Execute      2      0.00       0.03          0          0          0           0
    Fetch        2     27.23     163.57     179906     198394          0          16
    total        6     27.25     163.62     179906     198394          0          16
    Misses in library cache during parse: 1
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        0      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
        2  user  SQL statements in session.
        0  internal SQL statements in session.
        2  SQL statements in session.
    Trace file: ..._ora_4896.trc
    Trace file compatibility: 10.01.00
    Sort options: default
           1  session in tracefile.
           2  user  SQL statements in trace file.
           0  internal SQL statements in trace file.
           2  SQL statements in trace file.
           2  unique SQL statements in trace file.
          46  lines in trace file.
          187  elapsed seconds in trace file.
    The DBMS_XPLAN.DISPLAY_CURSOR output:
    select * from table(dbms_xplan.display_cursor('474frsqbc1n4d', null, 'ALLSTATS LAST'));
    PLAN_TABLE_OUTPUT
    SQL_ID  474frsqbc1n4d, child number 0
    SELECT /*+ gather_plan_statistics */ c.* FROM table_a c INNER JOIN table_a_b ab ON c.id = ab.media_fk WHERE ab.channel_fk IN (7, 1)
    Plan hash value: 3781163428
    | Id  | Operation             | Name               | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
    |*  1 |  HASH JOIN            |                    |      1 |   1352K|   1050 |00:00:40.93 |     198K|    182K|    209K|    29M|  5266K| 3320K (1)|
    |*  2 |   INDEX FAST FULL SCAN| SYS_IOT_TOP_316310 |      1 |   1352K|   1336K|00:00:01.34 |   10874 |      0 |      0 |       |       |          |
    |   3 |   TABLE ACCESS FULL   | TABLE_A            |      1 |   2190K|   2267K|00:02:45.56 |     187K|    182K|      0 |       |       |          |
    Predicate Information (identified by operation id):
       1 - access(""AB"".""MEDIA_FK""=""C"".""ID"")
        2 - filter((""AB"".""CHANNEL_FK""=1 OR ""AB"".""CHANNEL_FK""=7))
    Thank you for reading. I'm looking forward to suggestions on how to improve the performance of this statement.
    h3. Background
    Many years ago my company made the decision to store many-to-many relationships in our database using pipe delimited fields. An example field value:
    '|ABC|XYZ|VTR|DVD|'
    Each delimited value refers to a unique 'short code' in TABLE_B (there is also a true numeric foreign key in TABLE_B, which is what I'm using in the junction table). We regularly search this column with SQL in the following style:
    WHERE
    INSTR(pipedcolumn, '|ABC|') > 0
    OR INSTR(pipedcolumn, '|XYZ|') > 0
    ...
    Appropriate indexes have been created over the years to make this process as fast as possible.
    We now have an opportunity to fix some of these design mistakes and implement junction tables to replace the piped field. Before this we decided to take a copy of a database from a customer with the largest record set and test. I created a new junction table:
    TABLE_A_B DDL:
        CREATE TABLE TABLE_A_B (
            media_fk NUMBER,
            channel_fk NUMBER,
            PRIMARY KEY (media_fk, channel_fk),
            FOREIGN KEY (media_fk) REFERENCES TABLE_A (ID),
            FOREIGN KEY (channel_fk) REFERENCES TABLE_B (ID)
        ) ORGANIZATION INDEX COMPRESS;
        CREATE INDEX C_M_INDEX ON TABLE_A_B (channel_fk, media_fk) COMPRESS;
    Then, parsing out the pipe-delimited field, I populated this new table.
    I then compared the performance of the following SQL:
    SELECT
    a.*
    FROM
    table_a a
    INNER JOIN table_a_b ab ON a.id = ab.media_fk
    WHERE
    ab.channel_fk IN (x, y, n); -- Can be Many Minutes
    --vs.
    SELECT
    a.*
    FROM
    table_a a
    WHERE
    INSTR(OWNERS,'|x|')    >0
    OR INSTR(OWNERS,'|y|')    >0
    OR INSTR(OWNERS,'|n|')    >0; -- About 1 second seemingly regardless
    When x, y, n are values that occur less frequently in TABLE_A_B.CHANNEL_FK the performance is comparable. However, once the frequency of x, y, n increases the performance suffers. Here is a summary of the CHANNEL_FK data in TABLE_A_B:
    --SQL For Summary Data
    SELECT channel_fk, count(channel_fk) FROM table_a_b GROUP BY channel_fk ORDER BY COUNT(channel_fk) DESC;
    CHANNEL_FK             COUNT(CHANNEL_FK)
    7                      780741
    1                      555407
    2                      422493
    3                      189493
    169                    144663
    9                      79457
    6                      53051
    171                    28401
    170                    19857
    49                     12603
    ...
    I've noticed that once I use any combination of values which occur more than about 800,000 times (i.e. IN (7, 1) = 780741 + 555407 = 1336148) then I get performance issues.
    I'm finding it very difficult to accept that the old pipe delimited fields are a better solution (ignoring everything other than this search criteria!).
    Thank you for reading this far. I truly look forward to suggestions on how to improve the performance of this statement.
    Edited by: user1950227 on Oct 1, 2012 12:06 PM
    Renamed link table in DDL.

    Possibly not, I followed the instructions as best as I could but may have missed things.
    h5. 1. DDL for all tables and indexes?
    h6. - TABLE_A_B is described above and has a total of 2,304,642 rows. TABLE_A and TABLE_B are described below.
    h5. 2. row counts for all tables?
    h6. - See below
    h5. 3. row counts for the predicates involved?
    h6. - Not sure what you're asking for; I have a summary of the data in TABLE_A_B above. Could you clarify, please?
    h5. 4. Method and command used to collect stats on the tables and indexes?
    h6. - For the stats I collected above I have included the command used to collect the data. If you are asking for further data I am happy to provide it but need more information. Thanks.
    TABLE_A has 2,267,980 rows. The DDL that follows has been abbreviated; only the column involved is described.
    --  DDL for Table TABLE_A
      CREATE TABLE "NS"."TABLE_A"
       (     "ID" NUMBER
         --Lots more columns
       ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
      STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "CUSTOMNAMESPACE" ;
    --  DDL for Index ID_PK
      CREATE UNIQUE INDEX "NS"."MI_PK" ON "NS"."TABLE_A" ("ID")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 16384 NEXT 29458432 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSTEM" ;
    --  Constraints for Table TABLE_A
      ALTER TABLE "NS"."TABLE_A" ADD CONSTRAINT "MI_PK" PRIMARY KEY ("ID")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 16384 NEXT 29458432 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
      ALTER TABLE "NS"."TABLE_A" MODIFY ("ID" NOT NULL ENABLE);TABLE_B has 22 rows. The DLL that follows has been abbriviated, only the column involved is described.
    --  DDL for Table TABLE_B
      CREATE TABLE "NS"."TABLE_B"
         "ID" NUMBER
      --Lots more columns
       ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
      STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "CUSTOMNAMESPACE" ;
    --  DDL for Index CID_PK
      CREATE UNIQUE INDEX "NS"."CID_PK" ON "NS"."TABLE_B" ("ID")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSTEM" ;
    --  Constraints for Table TABLE_B
      ALTER TABLE "NS"."TABLE_B" ADD CONSTRAINT "CID_PK" PRIMARY KEY ("ID")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
      ALTER TABLE "NS"."TABLE_B" MODIFY ("ID" NOT NULL ENABLE);Edited by: davebcast on Oct 1, 2012 8:51 PM
    Index name incorrect
    Edited by: davebcast on Oct 1, 2012 8:52 PM

  • Many-to-many question

    Just asking for some help with some search terms, as I know that this topic has been argued out somewhere. I just haven't been able to find it:
    If I'm setting up a cross-reference/junction table in order to fulfill a many-to-many requirement, should I create a technical key as a primary key with a unique index on the combined foreign keys? Or should I make the combination of the foreign keys the primary key?
    Our newest standards force a technical key on all tables, and one of the developers is asking why that's necessary on a cross-reference table. I've searched on "cross reference table primary key" and "junction table primary key". Most answers that suggest a technical key do so because of database performance issues it can solve (I haven't yet found that suggested as a reason for Oracle).
    --=Chuck

    >
    If I'm setting up a cross-reference/junction table in order to fulfill a many-to-many requirement, should I create a technical key as a primary key with a unique index on the combined foreign keys? Or should I make the combination of the foreign keys the primary key?
    >
    Assuming that each of your foreign keys is a surrogate key and not an actual data value, I have to agree with Dan on this one.
    The two primary attributes that a key value should have are: 1) it should be unique and 2) it should have no inherent meaning.
    By 'no inherent meaning' I mean it should NOT be a data value or have any meaning whatsoever attached to its value. If the value were to change it would have no impact at all on the actual data though naturally the value has to be changed the same everywhere it is used.
    A classic example in the U.S. is social security number. It should NOT be used as a key even though it might be considered a 'natural' key.
    The value is a data item that has meaning and is subject to change if for no other reason than typographical error. It is the problems involved in making changes to data items used as keys that surrogate keys are intended to prevent.
    A surrogate key (even a sequence) has no inherent meaning. If a value of 2 is being used in a system and, unknown to you, someone changes the value to a 3 everywhere in your system that it is used you will never notice it.
    I have seen no standard suggesting that multi-column surrogate keys are inappropriate. Your intersect table can use the combination of the foreign keys as the primary key and fulfill the contract of using surrogate keys.
    Sven suggested this
    >
    For example you later discover that you want a detail table for your joined table. Then the detail table would need only one column as a foreign key constraint, instead of two (which can have major performance implications).
    >
    If it was desired to meet that future requirement by using one column, you can easily do so by adding a new column specifically for that purpose without any impact whatsoever on the existing system.
    As for this possible architecture change
    >
    Or you add some third column to include into the uniqueness logic.
    >
    Well, that requirement is a fundamental change to your existing one. The existing requirement is that the combination of the two foreign keys should be unique. It is not clear if Sven was suggesting that that requirement is no longer valid, or rather that there is now an ADDITIONAL requirement that those two values together with a third should also be unique.
    If you need to implement uniqueness for the two values you currently have as well as for those combined with a third value that would be better implemented by creating yet another INTERSECT table specifically for that purpose and leave the existing intersect table in place. That architecture approach expands the data model without disrupting the existing one.
    The main design goal, in my opinion, should always be to produce a design which most easily meets current requirements without foreclosing the ability to meet unknown future requirements.
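    In DDL terms, the two options under discussion look roughly like this (a sketch with hypothetical A/B names; the second table is given a different name only so both variants can coexist in the sketch):
    create table a (a_id number primary key);
    create table b (b_id number primary key);
    -- Option 1: the combination of the foreign keys is the primary key.
    create table a_b (
        a_id  number not null references a (a_id),
        b_id  number not null references b (b_id),
        constraint a_b_pk primary key (a_id, b_id)
    );
    -- Option 2: a technical (surrogate) key, plus a unique constraint on the combined foreign keys.
    create table a_b_surrogate (
        a_b_id  number primary key,
        a_id    number not null references a (a_id),
        b_id    number not null references b (b_id),
        constraint a_b_surrogate_uk unique (a_id, b_id)
    );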

  • Many-to-many relationship - show all values

    Hi,
    I'm building a cube in 2008R2 and have a many-to-many relationship through a bridging table but when displaying the results it is effectively an inner join and I would like to see a full outer join with the unknown rows set to "unknown".
    I've managed to achieve this by doing a full outer join in the view that creates my bridging table and having an unknown member in my dimensions but then when I added in a dimension that wasn't directly related to the bridging table the unknown rows
    were removed again. I was able to get past this by adding a row to my fact table with all the keys set to unknown and the metrics set to zero.
    Whilst this works it really does not seem like an ideal solution, especially as previously empty metrics are now returning a zero.
    Is there any way to achieve this in SSAS? Perhaps in the way unknown members are processed?
    Thanks,

    cccparkhill,
    What you did is the only way to achieve what you want.
    In cubes you connect Dimensions through Facts. By definition, dimensions answer "How do business people describe the data that results from the business process?" and facts answer "What are we measuring?". With that in mind, we can understand that normally and usually the relation between the dimensions and the facts represents incidents that happened in the real world. The way you approach, think about, and deal with dimensions and facts in cubes is different from dealing with tables in a normal relational transactional database.
    Personally I do what you did when I want to achieve "Left join"
    Please mark this reply as the answer
    or vote as helpful, as appropriate, to make it useful for other readers

  • Many to Many with Custom Objects 4 and up

    Hello,
    Pls. I need to define a new CO4 that will have a many-to-many relationship with Contact.
    From the OD help, pg. 775, I understand that I should use another CO (e.g. CO5) to be the intersection. The help file indicates how to rename the objects...and put them in the related info section... but I don't know what FIELDS (if any) I should define in the intersection CO5.
    Pls. Anyone could help ?
    Txs. a lot.
    Antonio

    Yes. You have to define them in CO5 Layout.
    You should see Name, Contact and Custom Object 04 (you may have renamed it). Not the Id field.
    Then when you add the CO5 layout to the Contact Related section, you can add a new association between Contact and CO4.
    Don't forget to enable the Access Profiles in Contact, CO4 and CO5.

  • Many-to-Many CMR

    hi all,
    I defined two entity beans, Quiz and Question, with a many-to-many CMR, and
    a table named Quiz_questions_Question_quizesTable is created in the database after the deployment process. The resulting table's columns consist of the primary key of Quiz and the primary key of Question, as expected. What I want to do is add one more column to this table that shows the sequence number of the related question in the related quiz. Is it possible? And how? Or what should I do?
    thanks in advance..

    "what i want to do is adding one more column to this table that shows the sequence number of the related question in the related quiz."
    Makes sense, but then the association table is no longer just an association table.
    Perhaps a way to get you in the right frame of mind is, "how would you model this with just objects?"
    Basically a CMR just represents an association between two objects. You would see in your revised object model that you do not have a simple n-to-n association between Quiz objects and Question objects.
    [Quiz](0..n)---->(1..n)[Question]
    The above (bastardized UML) model does not represent the enumeration of Question objects within the scope of a single Quiz object.
    Perhaps the following is more accurate:
    [Quiz](1)---->(1..n)[QuizQuestion](0..n)---->(1)[Question]
    or
                         [Question]
                              A
                              |
                              |
    [Quiz](1)---->(1..n)[QuizQuestion]
    It is only after you have your object model that you should worry about the entity bean stuff.
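    In relational terms, promoting the association to its own QuizQuestion entity with the extra attribute might map to a table like this (a sketch; table and column names are assumptions, not the container-generated ones):
    create table quiz_question (
        quiz_id      number not null references quiz (quiz_id),
        question_id  number not null references question (question_id),
        seq_no       number not null, -- position of the question within the quiz
        constraint quiz_question_pk primary key (quiz_id, question_id),
        constraint quiz_question_uk unique (quiz_id, seq_no)
    );
    The EJB side would then map QuizQuestion as its own CMP entity with a CMR to Quiz and a CMR to Question, as the revised object model above suggests.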

  • Many to Many Relationship

    In PowerPivot we are facing a problem resolving the many-to-many relationship between tables.
    Current table schema is:
    We have 4 tables:
    Bills - columns: Bill No (primary key)
    Billing Fee Table - Fee ID (primary key), Bill Type, Sub Bill Type
    FeeList Table - FeeList ID (primary key), Fee ID (foreign key to Billing Fee Table), Bill No (foreign key to Bills)
    Payments Table - Pay ID (primary key), Bill No (foreign key to Bills)
    Bills is the primary table for FeeList Table and Payments Table - Bill No is the primary key.
    Billing Fee Table is the primary table for FeeList Table - Fee ID is the primary key.
    To create a pivot table using Payments Table, I need Bill Type and Sub Bill Type. Hence I fetched them from Billing Fee Table into FeeList Table based on the Fee ID column using the RELATED function.
    As FeeList Table and Payments Table have duplicate Bill Nos, it is not possible to get Bill Type and Sub Bill Type into Payments Table. I am trying to use the LOOKUPVALUE function to get the data based on the Bill No, as no direct relationship exists between these two tables.
    I have tried using the
    =LOOKUPVALUE('FeeList Table'[Sub Bill Type], 'FeeList Table'[Bill No], 'Payments Table'[Bill No])
    function, but LOOKUPVALUE returns an error if a duplicate is found.
    Error is as below:
    Calculation error in column 'Payments Table'[Sub Bill Type]: A table of multiple values was supplied where a single value was expected.
    Kindly assist to resolve this issue.
    Thanks

    Hi Rekha,
    What should your LOOKUPVALUE return if multiple rows are found in the FeeList Table? By the way, it should work if it really finds "duplicates" (i.e. multiple rows with the same return value) but fails if it finds multiple rows with different values.
    If you, for example, want to return the first entry that was found, you could try (I have not checked in PP for typos):
    CALCULATE(MIN('FeeList Table'[Sub Bill Type]),
      FILTER('FeeList Table',
         'FeeList Table'[Bill No] = 'Payments Table'[Bill No]))
    Edit: a very nice way to handle Many-to-Many relationships is explained here:
    http://blog.gbrueckl.at/2012/05/resolving-many-to-many-relationships-leveraging-dax-cross-table-filtering/
    Julian Wissel | BI for NAV @ http://en.navbi.com | Blog @ blog.navida.eu
