Many-to-many relationship performance problem

Hi:
I have a model that uses a bridge table to resolve a many-to-many relationship. Here it is:
dimension 1 -< fact table >- dimension 2 -< bridge table >- additional data table
Now, when I drive from the additional data table with a specific value and join to the bridge table and dimension 2, the performance is fine. But, as soon as I add the fact table to the query, the query never returns. I'm in a development environment, so my fact table has 220K records, and the bridge table has 200K records. The dimension 2 and additional data tables have hundreds of records. In other words, it's not much data. I have indexes and referential constraints on all relevant columns.
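
For illustration, a minimal sketch of the query shape being described, using hypothetical table and column names (fact_t, bridge_t, dim2, addl_data and their columns are not from the model above):

    SELECT f.*
    FROM   addl_data ad
    JOIN   bridge_t  b  ON b.addl_data_id = ad.id
    JOIN   dim2      d2 ON d2.id          = b.dim2_id
    JOIN   fact_t    f  ON f.dim2_id      = d2.id   -- adding this join is where the query stops returning
    WHERE  ad.some_value = :value;

Comparing the execution plan of this statement with and without the final join (EXPLAIN PLAN or AUTOTRACE) is usually the quickest way to see what changes when the fact table is added.
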
Can anyone suggest what is happening to cause such a performance hit when I add the fact table to the query?
Thanks for any suggestions!

sybrand_b wrote:
The way you write it yes, but there is one minor detail.
You can have a 0, 1, or many relationship: one employee has zero, one, or many phone numbers,
but you cannot have the opposite, as it doesn't make sense.
0, 1, or many phone numbers belong to 1 employee.
It is customary to use 0, 1, or m when the relationship is optional.
Sybrand Bakker
Senior Oracle DBA

Is this correct now?
one to many: one employee has 0, 1, or multiple phone numbers
many to one: 1 or multiple phone numbers belong to one employee
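
As a concrete illustration of that multiplicity (a minimal sketch; table and column names are made up for this example), the foreign key on the phone table is what makes each phone number belong to exactly one employee, while an employee may have zero, one, or many phone numbers:

    CREATE TABLE employee (
      emp_id   NUMBER PRIMARY KEY,
      emp_name VARCHAR2(60) NOT NULL
    );

    CREATE TABLE phone_number (
      phone_id NUMBER PRIMARY KEY,
      emp_id   NUMBER NOT NULL REFERENCES employee (emp_id),  -- each phone row belongs to one employee
      phone_no VARCHAR2(20) NOT NULL
    );

If the relationship were optional on the phone side, emp_id would simply be declared without NOT NULL.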

Similar Messages

  • Why do I have so many v4 performance problems?

    Virtually every time I click on something the performance is horrendous: 15+ second response times, the revolving circle, or a "not responding" message.
    PDF files don't display
    YouTube videos can't be viewed

    Intermittent problems are often a result of interference. Interference can be caused by other networks in the neighbourhood or from household electrical items.
    You can download and install iStumbler (NetStumbler for Windows users) to help you see which channels are used by neighbouring networks so that you can avoid them, but iStumbler will not see household items.
    Refer to your router manual for instructions on changing your wifi channel or adjusting your multicast rate.
    There are other types of problems that can affect networks, but this is by far the most common, hence worth mentioning first.

  • Problem with 1-to-many relationship between entity beans

    Hi All!
    I have two tables, TMP_GROUP and TMP_EMPLOYEE, with the following fields:
    TMP_GROUP: ID, CAPTION, COMMENT, STATUS.
    TMP_EMPLOYEE: ID, LOGIN, GROUP_ID.
    For these tables I created two entity beans, GROUP and EMPLOYEE, respectively.
    The relationship looks like this:
    descriptor ejb.xml:
    <ejb-relation>
                <description>description</description>
                <ejb-relation-name>employeesOfGroup</ejb-relation-name>
                <ejb-relationship-role>
                    <ejb-relationship-role-name>com.mypackage.GroupBean</ejb-relationship-role-name>
                    <multiplicity>One</multiplicity>
                    <relationship-role-source>
                        <ejb-name>GroupBean</ejb-name>
                    </relationship-role-source>
                    <cmr-field>
                        <cmr-field-name>employees</cmr-field-name>
                        <cmr-field-type>java.util.Collection</cmr-field-type>
                    </cmr-field>
                </ejb-relationship-role>
                <ejb-relationship-role>
                    <ejb-relationship-role-name>com.mypackage.EmployeeBean</ejb-relationship-role-name>
                    <multiplicity>Many</multiplicity>
                    <relationship-role-source>
                        <ejb-name>EmployeeBean</ejb-name>
                    </relationship-role-source>
                </ejb-relationship-role>
            </ejb-relation>
    descriptor persistent.xml:
    <table-relation>
                   <table-relationship-role
                        key-type="PrimaryKey">
                        <ejb-name>GroupBean</ejb-name>
                        <cmr-field>employees</cmr-field>
                   </table-relationship-role>
                   <table-relationship-role
                        key-type="NoKey">
                        <ejb-name>EmployeeBean</ejb-name>
                        <fk-column>
                             <column-name>GROUP_ID</column-name>
                             <pk-field-name>ejb_pk</pk-field-name>
                        </fk-column>
                   </table-relationship-role>
              </table-relation>
    Now I implement the business method:
    public Long addEmployee(String login, long groupId) {
              Long result;
              try {
                   EmployeeLocal employee = employeeHome.create(login);
                   GroupLocal group =
                        groupHome.findByPrimaryKey(new Long(groupId));
                   Collection employees = group.getEmployees();
                   employees.add(employee);
                   result = (Long) employee.getPrimaryKey();
              } catch (CreateException ex) {
                   result = new Long(0);
              } catch (FinderException ex) {
                   result = new Long(0);
              }
              return result;
    }
    When I call this method from a web service, the following exception is raised:
    com.sap.engine.services.ejb.exceptions.BaseTransactionRolledbackLocalException: Exception in method com.mypackage.GroupLocalHomeImpl0.findByPrimaryKey(java.lang.Object).
    P.S.
    1) I have transaction attribute set to "Required" for all methods of all beans
    2) I have unique index for each table:
    TMP_GROUP_I1: CAPTION
    TMP_EMPLOYEE_I1: LOGIN (however I think GROUP_ID must be added here too)
    3) I tried a many-to-many relationship with these tables and it works fine
    4) I tried another implementation of the addEmployee method with
    EmployeeLocal employee = employeeHome.create(login, groupId);
    without using the GroupLocal cmr-field and the GroupLocalHome findByPrimaryKey method; the result is the same error.
    Can somebody help me with this problem?
    Thanks in advance.
    Best regards, Abramov Andrey.

    gimbal2 wrote:
    1: The @JoinColumn on the listOfDepartments collection in Company is wrong. It should be something like this for a bidirectional relationship:
    @OneToMany(mappedBy="company") // company is the matching property name in Department
    private List<Department> listOfDepartment = new ArrayList<Department>();
    Note that I removed the fetch configuration; one-to-many is fetched lazily by design. Saves some clutter, eh.
    2: use a Set instead of a List (Hibernate doesn't like lists much in entities and will break when you create slightly more complex entity relations)
    3: don't just slap cascades on collections, especially of type ALL. Do it with care. In the many years I've been using JPA I've only had to cascade deletes in very specific situations, maybe once or twice. I never needed any of the other cascade types; they just invite sloppy code if you ask me. When working with persistence you should apply a little precision.

    I made all the changes you mentioned, but compID in the department table still shows a null value...

  • Problem with 1:many relationship between entity beans.

    Hi All!
    I have two tables, TMP_GROUP and TMP_EMPLOYEE, with the following fields:
    TMP_GROUP: ID, CAPTION, COMMENT, STATUS.
    TMP_EMPLOYEE: ID, LOGIN, GROUP_ID.
    For these tables I created two entity beans, GROUP and EMPLOYEE, respectively.
    The relationship looks like this:
    descriptor ejb.xml:
    <ejb-relation>
                <description>description</description>
                <ejb-relation-name>employeesOfGroup</ejb-relation-name>
                <ejb-relationship-role>
                    <ejb-relationship-role-name>com.mypackage.GroupBean</ejb-relationship-role-name>
                    <multiplicity>One</multiplicity>
                    <relationship-role-source>
                        <ejb-name>GroupBean</ejb-name>
                    </relationship-role-source>
                    <cmr-field>
                        <cmr-field-name>employees</cmr-field-name>
                        <cmr-field-type>java.util.Collection</cmr-field-type>
                    </cmr-field>
                </ejb-relationship-role>
                <ejb-relationship-role>
                    <ejb-relationship-role-name>com.mypackage.EmployeeBean</ejb-relationship-role-name>
                    <multiplicity>Many</multiplicity>
                    <relationship-role-source>
                        <ejb-name>EmployeeBean</ejb-name>
                    </relationship-role-source>
                </ejb-relationship-role>
            </ejb-relation>
    descriptor persistent.xml:
    <table-relation>
                   <table-relationship-role
                        key-type="PrimaryKey">
                        <ejb-name>GroupBean</ejb-name>
                        <cmr-field>employees</cmr-field>
                   </table-relationship-role>
                   <table-relationship-role
                        key-type="NoKey">
                        <ejb-name>EmployeeBean</ejb-name>
                        <fk-column>
                             <column-name>GROUP_ID</column-name>
                             <pk-field-name>ejb_pk</pk-field-name>
                        </fk-column>
                   </table-relationship-role>
              </table-relation>
    Now I implement the business method:
    public Long addEmployee(String login, long groupId) {
              Long result;
              try {
                   EmployeeLocal employee = employeeHome.create(login);
                   GroupLocal group =
                        groupHome.findByPrimaryKey(new Long(groupId));
                   Collection employees = group.getEmployees();
                   employees.add(employee);
                   result = (Long) employee.getPrimaryKey();
              } catch (CreateException ex) {
                   result = new Long(0);
              } catch (FinderException ex) {
                   result = new Long(0);
              }
              return result;
    }
    When I call this method from a web service, the following exception is raised:
    com.sap.engine.services.ejb.exceptions.BaseTransactionRolledbackLocalException: Exception in method com.mypackage.GroupLocalHomeImpl0.findByPrimaryKey(java.lang.Object).
    P.S.
    1) I have transaction attribute set to "Required" for all methods of all beans
    2) I have unique index for each table:
    TMP_GROUP_I1: CAPTION
    TMP_EMPLOYEE_I1: LOGIN (however I think GROUP_ID must be added here too)
    3) I tried a many-to-many relationship with these tables and it works fine
    4) I tried another implementation of the addEmployee method with
    EmployeeLocal employee = employeeHome.create(login, groupId);
    without using the GroupLocal cmr-field and the GroupLocalHome findByPrimaryKey method; the result is the same error.
    Can somebody help me with this problem?
    Thanks in advance.
    Best regards, Abramov Andrey.

    I have posted excerpts from my orion-ejb-jar.xml file in this posting: Problem mapping a 1:M relationship between two entity EJBs w/ a compound PK
    Sorry for the duplicate postings, but I was getting errors on the submission.
    April

  • Many-to-many Performance Problem (Using FAQ Template)

    Having read "HOW TO: Post a SQL statement tuning request - template posting", I have gathered the following.
    I have included some background information at the bottom of the post.
    The following SQL statement has been identified as performing poorly. It takes ~160 seconds to execute, but similar SQL statements (shown below the first statement) execute in ~1 second.
    SQL taking 160 seconds:
    SELECT
    a.*
    FROM
    table_a a
    INNER JOIN table_a_b ab ON a.id = ab.media_fk
    WHERE
    ab.channel_fk IN (7, 1);
    SQL taking ~1 second or less:
    ab.channel_fk IN (7);
    Or even:
    ab.channel_fk IN (6, 9, 170, 89);
    The purpose of the SQL is to return rows from table_a that are associated with table_b (not in SQL) through the junction table table_a_b.
    The version of the database is 10.2.0.4.0
    These are the parameters relevant to the optimizer:
    show parameter optimizer;
    NAME                                               TYPE        VALUE
    optimizer_dynamic_sampling                         integer     2
    optimizer_features_enable                          string      10.2.0.4
    optimizer_index_caching                            integer     0
    optimizer_index_cost_adj                           integer     100
    optimizer_mode                                     string      ALL_ROWS
    optimizer_secure_view_merging                      boolean     TRUE
    show parameter db_file_multi;
    NAME                                               TYPE        VALUE
    db_file_multiblock_read_count                      integer     16
    show parameter db_block_size;
    NAME                                               TYPE        VALUE
    db_file_multiblock_read_count                      integer     16
    select sname, pname, pval1, pval2 from sys.aux_stats$;
    SNAME                          PNAME                          PVAL1                  PVAL2
    SYSSTATS_INFO                  STATUS                                                COMPLETED
    SYSSTATS_INFO                  DSTART                                                07-18-2006 23:19
    SYSSTATS_INFO                  DSTOP                                                 07-25-2006 23:19
    SYSSTATS_INFO                  FLAGS                          0
    SYSSTATS_MAIN                  SREADTIM                       5.918
    SYSSTATS_MAIN                  MREADTIM                       7.889
    SYSSTATS_MAIN                  CPUSPEED                       1383
    SYSSTATS_MAIN                  MBRC                           8
    SYSSTATS_MAIN                  MAXTHR                         1457152
    SYSSTATS_MAIN                  SLAVETHR                       -1
    Here is the output of EXPLAIN PLAN:
    PLAN_TABLE_OUTPUT
    Plan hash value: 3781163428
    | Id  | Operation             | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |                    |  1352K|   771M|       | 60042   (3)| 00:05:56 |
    |*  1 |  HASH JOIN            |                    |  1352K|   771M|    27M| 60042   (3)| 00:05:56 |
    |*  2 |   INDEX FAST FULL SCAN| SYS_IOT_TOP_316310 |  1352K|    11M|       |  1816   (4)| 00:00:11 |
    |   3 |   TABLE ACCESS FULL   | TABLE_A            |  2190K|  1230M|       | 32357   (4)| 00:03:12 |
    Predicate Information (identified by operation id):
       1 - access("AB"."MEDIA_FK"="A"."ID")
       2 - filter("AB"."CHANNEL_FK"=1 OR "AB"."CHANNEL_FK"=7)
    Note
       - 'PLAN_TABLE' is old version
    For reference, the EXPLAIN PLAN when using
    ab.channel_fk IN (6, 9, 170, 89);
    which executes in ~1 second is:
    PLAN_TABLE_OUTPUT
    Plan hash value: 794334170
    | Id  | Operation          | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |           |   143K|    81M|       | 58982   (3)| 00:05:50 |
    |*  1 |  HASH JOIN         |           |   143K|    81M|  2952K| 58982   (3)| 00:05:50 |
    |   2 |   INLIST ITERATOR  |           |       |       |       |            |          |
    |*  3 |    INDEX RANGE SCAN| C_M_INDEX |   143K|  1262K|       |  1264   (1)| 00:00:08 |
    |   4 |   TABLE ACCESS FULL| TABLE_A   |  2190K|  1230M|       | 32357   (4)| 00:03:12 |
    Predicate Information (identified by operation id):
       1 - access(""AB"".""MEDIA_FK""=""A"".""ID"")
       3 - access("AB"."CHANNEL_FK"=6 OR "AB"."CHANNEL_FK"=9 OR
                  "AB"."CHANNEL_FK"=89 OR "AB"."CHANNEL_FK"=170)
    Note
       - 'PLAN_TABLE' is old version
    Here is the output of SQL*Plus AUTOTRACE including the TIMING information:
    SQL> set autotrace traceonly arraysize 100;
    SQL> SELECT
      2  a.*
      3  FROM
      4  table_a a
      5  INNER JOIN table_a_b ab ON a.id = ab.media_fk
      6  WHERE
      7  ab.channel_fk IN (7, 1);
    1336148 rows selected.
    Execution Plan
    Plan hash value: 3781163428
    | Id  | Operation             | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |                    |  1352K|   771M|       | 60042   (3)| 00:05:56 |
    |*  1 |  HASH JOIN            |                    |  1352K|   771M|    27M| 60042   (3)| 00:05:56 |
    |*  2 |   INDEX FAST FULL SCAN| SYS_IOT_TOP_316310 |  1352K|    11M|       |  1816   (4)| 00:00:11 |
    |   3 |   TABLE ACCESS FULL   | TABLE_A            |  2190K|  1230M|       | 32357   (4)| 00:03:12 |
    Predicate Information (identified by operation id):
       1 - access("AB"."MEDIA_FK"="A"."ID")
       2 - filter("AB"."CHANNEL_FK"=1 OR "AB"."CHANNEL_FK"=7)
    Note
       - 'PLAN_TABLE' is old version
    Statistics
          10586  recursive calls
              0  db block gets
         200457  consistent gets
         408343  physical reads
              0  redo size
      498740848  bytes sent via SQL*Net to client
         147371  bytes received via SQL*Net from client
          13363  SQL*Net roundtrips to/from client
             49  sorts (memory)
              0  sorts (disk)
         1336148  rows processed
    The TKPROF output for this statement looks like the following:
    TKPROF: Release 10.2.0.4.0 - Production on Mon Oct 1 12:23:21 2012
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Trace file: ..._ora_4896.trc
    Sort options: default
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    ALTER SYSTEM SET TIMED_STATISTICS = TRUE
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.03          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.03          0          0          0           0
    Misses in library cache during parse: 0
    Parsing user id: 21
    SELECT
    a.*
    FROM
    table_a a
    INNER JOIN table_a_b ab ON a.id = ab.media_fk
    WHERE
    ab.channel_fk IN (7, 1)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2     27.23     163.57     179906     198394          0          16
    total        4     27.25     163.58     179906     198394          0          16
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 21
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.01       0.00          0          0          0           0
    Execute      2      0.00       0.03          0          0          0           0
    Fetch        2     27.23     163.57     179906     198394          0          16
    total        6     27.25     163.62     179906     198394          0          16
    Misses in library cache during parse: 1
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        0      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
        2  user  SQL statements in session.
        0  internal SQL statements in session.
        2  SQL statements in session.
    Trace file: ..._ora_4896.trc
    Trace file compatibility: 10.01.00
    Sort options: default
           1  session in tracefile.
           2  user  SQL statements in trace file.
           0  internal SQL statements in trace file.
           2  SQL statements in trace file.
           2  unique SQL statements in trace file.
          46  lines in trace file.
          187  elapsed seconds in trace file.
    The DBMS_XPLAN.DISPLAY_CURSOR output:
    select * from table(dbms_xplan.display_cursor('474frsqbc1n4d', null, 'ALLSTATS LAST'));
    PLAN_TABLE_OUTPUT
    SQL_ID  474frsqbc1n4d, child number 0
    SELECT /*+ gather_plan_statistics */ c.* FROM table_a c INNER JOIN table_a_b ab ON c.id = ab.media_fk WHERE ab.channel_fk IN (7, 1)
    Plan hash value: 3781163428
    | Id  | Operation             | Name               | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
    |*  1 |  HASH JOIN            |                    |      1 |   1352K|   1050 |00:00:40.93 |     198K|    182K|    209K|    29M|  5266K| 3320K (1)|
    |*  2 |   INDEX FAST FULL SCAN| SYS_IOT_TOP_316310 |      1 |   1352K|   1336K|00:00:01.34 |   10874 |      0 |      0 |       |       |          |
    |   3 |   TABLE ACCESS FULL   | TABLE_A            |      1 |   2190K|   2267K|00:02:45.56 |     187K|    182K|      0 |       |       |          |
    Predicate Information (identified by operation id):
       1 - access("AB"."MEDIA_FK"="C"."ID")
       2 - filter(("AB"."CHANNEL_FK"=1 OR "AB"."CHANNEL_FK"=7))
    Thank you for reading. I'm looking forward to suggestions on how to improve the performance of this statement.
    h3. Background
    Many years ago my company made the decision to store many-to-many relationships in our database using pipe delimited fields. An example field value:
    '|ABC|XYZ|VTR|DVD|'
    Each delimited value refers to a unique 'short code' in TABLE_B (there is also a true numeric foreign key in TABLE_B, which is what I'm using in the junction table). We regularly search using these columns with the following style of SQL:
    WHERE
    INSTR(pipedcolumn, '|ABC|') > 0
    OR INSTR(pipedcolumn, '|XYZ|') > 0
    ...
    Appropriate indexes have been created over the years to make this process as fast as possible.
    We now have an opportunity to fix some of these design mistakes and implement junction tables to replace the piped field. Before this we decided to take a copy of a database from a customer with the largest record set and test. I created a new junction table:
    TABLE_A_B DDL:
        CREATE TABLE TABLE_A_B (
            media_fk NUMBER,
            channel_fk NUMBER,
            PRIMARY KEY (media_fk, channel_fk),
            FOREIGN KEY (media_fk) REFERENCES TABLE_A (ID),
            FOREIGN KEY (channel_fk) REFERENCES TABLE_B (ID)
        ) ORGANIZATION INDEX COMPRESS;
        CREATE INDEX C_M_INDEX ON TABLE_A_B (channel_fk, media_fk) COMPRESS;
    Then, parsing out the pipe-delimited field, I populated this new table.
    I then compared the performance of the following SQL:
    SELECT
    a.*
    FROM
    table_a a
    INNER JOIN table_a_b ab ON a.id = ab.media_fk
    WHERE
    ab.channel_fk IN (x, y, n); -- Can be Many Minutes
    --vs.
    SELECT
    a.*
    FROM
    table_a a
    WHERE
    INSTR(OWNERS,'|x|')    >0
    OR INSTR(OWNERS,'|y|')    >0
    OR INSTR(OWNERS,'|n|')    >0; -- About 1 second, seemingly regardless
    When x, y, n are values that occur less frequently in TABLE_A_B.CHANNEL_FK, the performance is comparable. However, once the frequency of x, y, n increases, the performance suffers. Here is a summary of the CHANNEL_FK data in TABLE_A_B:
    --SQL For Summary Data
    SELECT channel_fk, count(channel_fk) FROM table_a_b GROUP BY channel_fk ORDER BY COUNT(channel_fk) DESC;
    CHANNEL_FK             COUNT(CHANNEL_FK)
    7                      780741
    1                      555407
    2                      422493
    3                      189493
    169                    144663
    9                      79457
    6                      53051
    171                    28401
    170                    19857
    49                     12603
    ...
    I've noticed that once I use any combination of values which together occur more than about 800,000 times (i.e. IN (7, 1) = 780741 + 555407 = 1336148), I get performance issues.
    I'm finding it very difficult to accept that the old pipe delimited fields are a better solution (ignoring everything other than this search criteria!).
    Thank you for reading this far. I truly look forward to suggestions on how to improve the performance of this statement.
    Edited by: user1950227 on Oct 1, 2012 12:06 PM
    Renamed link table in DDL.
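
    One detail worth checking that is not raised in the thread: the join form returns one row per matching junction row (the AUTOTRACE run above returned 1336148 rows, exactly the sum of the channel 7 and channel 1 counts), whereas the old INSTR predicates return each TABLE_A row at most once. A semi-join matches the INSTR semantics; a sketch, assuming duplicate TABLE_A rows are not actually wanted:

        SELECT a.*
        FROM   table_a a
        WHERE  EXISTS (SELECT 1
                       FROM   table_a_b ab
                       WHERE  ab.media_fk   = a.id
                       AND    ab.channel_fk IN (7, 1));

    Whether this changes the plan is plan- and data-dependent, but it at least removes any duplicate rows from the result and from the SQL*Net round trips.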

    Possibly not; I followed the instructions as best I could but may have missed things.
    h5. 1. DDL for all tables and indexes?
    h6. - TABLE_A_B is described above and has a total of 2,304,642 rows. TABLE_A and TABLE_B are described below.
    h5. 2. row counts for all tables?
    h6. - See below
    h5. 3. row counts for the predicates involved?
    h6. - Not sure what you're asking for; I have a summary of the data in TABLE_A_B above. Could you clarify please?
    h5. 4. Method and command used to collect stats on the tables and indexes?
    h6. - For the stats I collected above I have included the command used to collect the data. If you are asking for further data I am happy to provide it but need more information. Thanks.
    TABLE_A has 2,267,980 rows. The DDL that follows has been abbreviated; only the column involved is described.
    --  DDL for Table TABLE_A
      CREATE TABLE "NS"."TABLE_A"
       (     "ID" NUMBER
         --Lots more columns
       ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
      STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "CUSTOMNAMESPACE" ;
    --  DDL for Index ID_PK
      CREATE UNIQUE INDEX "NS"."MI_PK" ON "NS"."TABLE_A" ("ID")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 16384 NEXT 29458432 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSTEM" ;
    --  Constraints for Table TABLE_A
      ALTER TABLE "NS"."TABLE_A" ADD CONSTRAINT "MI_PK" PRIMARY KEY ("ID")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 16384 NEXT 29458432 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
      ALTER TABLE "NS"."TABLE_A" MODIFY ("ID" NOT NULL ENABLE);
    TABLE_B has 22 rows. The DDL that follows has been abbreviated; only the column involved is described.
    --  DDL for Table TABLE_B
      CREATE TABLE "NS"."TABLE_B"
       (     "ID" NUMBER
      --Lots more columns
       ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
      STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "CUSTOMNAMESPACE" ;
    --  DDL for Index CID_PK
      CREATE UNIQUE INDEX "NS"."CID_PK" ON "NS"."TABLE_B" ("ID")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSTEM" ;
    --  Constraints for Table TABLE_B
      ALTER TABLE "NS"."TABLE_B" ADD CONSTRAINT "CID_PK" PRIMARY KEY ("ID")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
      ALTER TABLE "NS"."TABLE_B" MODIFY ("ID" NOT NULL ENABLE);
    Edited by: davebcast on Oct 1, 2012 8:51 PM
    Index name incorrect
    Edited by: davebcast on Oct 1, 2012 8:52 PM
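
    Since question 4 above asks about the method used to gather statistics, one inexpensive check (a hedged suggestion, not something established in the thread) is to re-gather stats on the newly populated junction table and its parent. The row estimates in the posted plans already look close to the actuals, so this may well not be the fix, but it rules out stale statistics cheaply; the owner name below is taken from the TABLE_A DDL and the options are assumptions:

        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(ownname => 'NS', tabname => 'TABLE_A_B', cascade => TRUE);
          DBMS_STATS.GATHER_TABLE_STATS(ownname => 'NS', tabname => 'TABLE_A',   cascade => TRUE);
        END;
        /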

  • Is there any performance problem with the creation of many Y or Z programs?

    Hi,
    Is there any performance problem with the creation of many Y or Z programs? Please give me some clarity regarding this.
    regards
    ganesh

    Ganesh,
    Can you please mention the context and the purpose of creating these custom programs? And which application are you referring to?
    Regards,
    Rohit

  • Many to many relationship problem

    Hi folks!
    I am having the following problem when mapping many-to-many relationships: "More than one writable many-to-many mapping cannot use the same relation table".
    This is what I am doing: I have a class Person and a class Meeting. One Person has many meetings and one Meeting has many participants (person).
    I did the M-to-M relationship for the class Person and it worked fine in the workbench. But when I tried to do it in the Meeting class the workbench complained with that error.
    I believe it might be a problem with the source and target references but I couldn't figure out how to solve it.
    If I check the intermediate table, there are only 2 references created.
    Can anyone help?

    Hello,
    Firstly, think about object model consistency (without TopLink or any other persistence layer at all): should you be changing one side (e.g. Person), adding a meeting, without changing the other side? Is that object model self-consistent? It doesn't appear so to me - I can navigate from meeting to person and not get the same results as navigating from person to meeting. All this is irrespective of the persistence layer.
    TopLink, first and foremost, expects that your object model is self-consistent. If it is, then you will find that the problem you indicate is no longer a problem - if I update a person with a new meeting, I change the meeting-to-person relationship as well. This means that although one direction is read-only to TopLink, the other is not - so the data will be written.
    I hope this helps,
    Christian
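
    For context, a rough sketch of the kind of relation table both mappings end up sharing in this scenario (table and column names are assumptions, not taken from the thread). Because both the Person-to-Meeting and Meeting-to-Person mappings would write the same rows, the workbench insists that only one direction be writable:

        CREATE TABLE person_meeting (
          person_id  NUMBER NOT NULL REFERENCES person (id),
          meeting_id NUMBER NOT NULL REFERENCES meeting (id),
          PRIMARY KEY (person_id, meeting_id)
        );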

  • One-to-many relationship: problem with several tables on the one side...

    Hello
    I'm having problems developing a database for a content management system. Apart from details, I've got one main table that holds the tree structure of the content ("resources") and several other tables that contain data of a particular datatype ("documents", "images", etc.). Now, there's a one-to-many relationship between the "resources" table and all the datatype tables - that is, in the "resources" table there's a "resource_id" column, a foreign key referencing the "id" columns in the datatype tables.
    The problem is that this design is deficient. I can't tell from the "resource_id" column which datatype table to get the data from. It seems to me that a one-to-many relationship only works with two tables. If the data on the one side of the relationship is contained in several tables, problems arise.
    Anybody knows a solution? I would be obliged.
    Regards
    Havocado

    Hi;
    A simple way may be to create a view on the referenced tables:
    Connected to Oracle Database 10g Express Edition Release 10.2.0.1.0
    Connected as hr
    SQL>
    SQL> drop table resources;
    Table dropped
    SQL> create table resources(id number, name varchar2(12));
    Table created
    SQL> insert into resources values(1,'Doc....');
    1 row inserted
    SQL> insert into resources values(2,'Img....');
    1 row inserted
    SQL> drop table documents;
    Table dropped
    SQL> create table documents(id number, resource_id number,type varchar2(12));
    Table created
    SQL> insert into documents values(1,1,'txt');
    1 row inserted
    SQL> drop table images;
    Table dropped
    SQL> create table images(id number, resource_id number,path varchar2(24));
    Table created
    SQL> insert into images values(1,2,'/data01/images/img01.jpg');
    1 row inserted
    SQL> create or replace view vw_resource_ref as
      2    select id, resource_id, type, null as path from documents
      3      union
      4     select id, resource_id, null as type, path from images;
    View created
    SQL> select * from resources r inner join vw_resource_ref rv on r.id = rv.resource_id;
            ID NAME                 ID RESOURCE_ID TYPE         PATH
             1 Doc....               1           1 txt         
             2 Img....               1           2              /data01/images/img01.jpg
    SQL>
    Regards....

  • One-to-many relationships problem

    Hi,
    I'm having some problems with one-to-many relationships.
    I've a two way relationship beetwen Person and Document.
    Person has a collection of Documents (1-n).
    Document has an instance for Person.
    In my database schema Document has a FK for person.
    The problem occurs when I'm trying to save the Person object
    with a collection of Documents.
    Oracle gives me the following message:
    ORA-00001: unique constraint (SMS.PK_PERSON) violated
    In my kodo.properties file I put:
    kodo.jdbc.ForeignKeyConstraints: true
    It seems that when the collection of Documents is persisted, Kodo tries to
    save a new Person object for each Document.
    When I have just one instance of Document in my collection, Kodo persists
    it OK, but when I have more instances I get
    that problem.
    Can somebody help me?
    Thanks in advance,
    Joan Caffee

    You need to make sure each Document references the same Person object in
    the JVM. JDO persists your object model as-is. Each distinct JVM
    object becomes a distinct database record. You can't have multiple JVM
    objects in the same PersistenceManager representing the same database
    record.

  • Performance problem with many lookups

    Hello,
    In my corporate system, I have a database table with many relationships. The JHeadstart Group used to represent it has 13 lookups, some of them are choices and some are lovs. All lookups are based on "free" lookup VOs, without any kind of View Link relationships.
    So the initial model was:
    One "main" view object representing one entity object, with many (13) lookups to set some attributes.
    It was working just fine while under development, but when we put it on production servers with real data, it started running very, very slowly when querying the main UIX table. In create mode it was working fine. I realized that it could be all of those lookups with all that data (some of the lookup tables have >300 entries). If I understood correctly, those attributes associated with lookups are "rendered" by matching.
    So I changed the model to:
    One "main" view object representing all possible entity objects, to fetch all attributes coming from relationships directly in that view object, and many lookups used just in create mode.
    It worked fine, and now response times are acceptable. But I still have difficulty with the detail disclosure feature, where I'm still getting up to 15-20s to render the details. In detail disclosure, I think it uses lookups to render attributes and so it slows down again.
    Is there anything I can do to speed up these queries?
    Thanks,
    Eduardo
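
    A rough illustration of the reshaped model described above (a sketch only; table and column names are invented, since the original post names none): instead of resolving each of the 13 lookups per row at render time, the main view object's query outer-joins the lookup tables once, so the display values come back with the row:

        SELECT m.id,
               m.code,
               s.description AS status_desc,   -- previously a separate status lookup
               t.description AS type_desc      -- previously a separate type lookup
        FROM   main_table m
        LEFT JOIN status_lu s ON s.id = m.status_id
        LEFT JOIN type_lu   t ON t.id = m.type_id;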

    I've changed some choices to lovs and now it's working fine.
    Eduardo

  • Messaging performance problem on RAID disk with many IMAP users

    The performance problem can be solved by tuning the RAID disk system.

    The performance problem can be solved by tuning the RAID disk system.

  • Creating data in a many-to-many-relationship

    Hello,
    we really have problems implementing a JClient dialog based on BC4J for creating data in a many-to-many relationship - especially with cascade delete on both sides.
    Simplified, our tables look like:
    create table A_TABLE (
    A_ID VARCHAR2(5) not null,
    A_NAME VARCHAR2(30) not null,
    constraint PK_A_TABLE primary key (A_ID),
    constraint UK_A_TABLE unique (A_NAME)
    );
    create table B_TABLE (
    B_ID VARCHAR2(5) not null,
    B_NAME VARCHAR2(30) not null,
    constraint PK_B_TABLE primary key (B_ID),
    constraint UK_B_TABLE unique (B_NAME)
    );
    create table AB_TABLE (
    A_ID VARCHAR2(5) not null,
    B_ID VARCHAR2(5) not null,
    constraint PK_AB_TABLE primary key (A_ID, B_ID),
    constraint FK_AB_A foreign key (A_ID) references A_TABLE (A_ID) on delete cascade,
    constraint FK_AB_B foreign key (B_ID) references B_TABLE (B_ID) on delete cascade
    );
    Could JDev Team please provide a BC4J/JClient sample that performs the following task:
    The dialog should use A_TABLE as master and AB_TABLE as detail. The detail displays the names associated with the IDs. Next to AB_TABLE should be a view of B_TABLE which only displays rows that are currently not in AB_TABLE. Two buttons are used for adding and removing rows in AB_TABLE. After adding or removing rows in the intersection the B_TABLE view should be updated. The whole thing should work in the middle and client tier. This means no database round trips after each add/remove, no posts for AB_TABLE and no query reexecution for B_TABLE until commit/rollback.
    This is a very common scenario: for an item group (A_TABLE) one can select and deselect items (AB_TABLE) from a list of available items (B_TABLE). Most of JDeveloper's wizards use this. They can handle multi/single selections, selections from complex structures like trees, and so on. OK, the wizards are not based on BC4J - or are they? How can we do it with BC4J?
    Our main problems are:
    1. Updating the view of B_TABLE after add/remove reflecting the current selection
    2. A good strategy for displaying the names instead of the IDs (subqueries or joining the three tables)
    3. A JBO-27101 DeadEntityAccessException when removing an existing row from AB_TABLE and adding it again
    Other problems:
    4. We get a JBO-25030 InvalidOwnerException when creating a row in AB_TABLE. This is caused by the composition. We work around this using createAndInitRow(AttributeList) on the view object (instead of create()). This is our add-Action:
    ViewObject abVO = panelBinding.getApplicationModule().findViewObject("ABView");
    ViewObject bVO = panelBinding.getApplicationModule().findViewObject("BView");
    Row bRow = bVO.getCurrentRow();
    NameValuePairs attribList = new NameValuePairs(
    new String[]{"BId"}, new Object[]{bRow.getAttribute("BId")});
    Row newRow = abVO.createAndInitRow(attribList);
    abVO.insertRow(newRow);
    5. After inserting the new row the NavigationBar has enabled commit/rollback buttons and AB_TABLE displays the row. But performing a commit does nothing. With the following statement after insertRow(newRow) the new row is created in the database:
    newRow.setAttribute("BId", bRow.getAttribute("BId"));
    Please give us some help on this subject.
    Best regards
    Michael Thal
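
    On the database side, the "B_TABLE rows not currently in AB_TABLE" requirement can be expressed as an anti-join (a sketch only; the bind variable for the current A_TABLE row is an assumption). Keeping this list consistent with uncommitted middle-tier rows, without re-querying, is the part that has to be solved in the BC4J layer:

        SELECT b.B_ID, b.B_NAME
        FROM   B_TABLE b
        WHERE  NOT EXISTS (SELECT 1
                           FROM   AB_TABLE ab
                           WHERE  ab.B_ID = b.B_ID
                           AND    ab.A_ID = :current_a_id);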

    <Another attempt to post a reply. >
    Could JDev Team please provide a BC4J/JClient sample
    that performs the following task:
    The dialog should use A_TABLE as master and AB_TABLE
    as detail. The detail displays the names associated
    with the IDs. Next to AB_TABLE should be a view of
    B_TABLE which only displays rows that are currently
    not in AB_TABLE. Two buttons are used for adding and
    removing rows in AB_TABLE. After adding or removing
    rows in the intersection the B_TABLE view should be
    updated. The whole thing should work in the middle
    and client tier. This means no database round trips
    after each add/remove, no posts for AB_TABLE and no
    query reexecution for B_TABLE until commit/rollback.
    This is a very common szenario: For an item group
    (A_TABLE) one can select and deselect items
    (AB_TABLE) from a list of available items (B_TABLE).
    Most of JDeveloper4s wizards use this. They can
    handle multi/single selections, selections from
    complex structures like trees and so on. Ok, the
    wizards are not based on BC4J - or? How can we do it
    with BC4J?
    Our main problems are:
    1. Updating the view of B_TABLE after add/remove
    reflecting the current selection
    You should be able to use insertRow() to insert the row into the proper collection.
    However, to remove a row only from the collection, you need to add a method on the VO subclasses (and perhaps export this method so that the client side can see it) to unlink a row from a collection (but not remove the associated entities from the cache).
    This new method should use the ViewRowSetImpl.removeRowAt() method to remove the row entry at the given index from its collection. Note that this is an absolute index and not a range index in the collection.
    2. A good strategy for displaying the names instead
    of the IDs (subqueries or joining the three tables)
    You should join the three tables by using reference (and perhaps readonly) entities.
    3. A JBO-27101 DeadEntityAccessException when
    removing an existing row from AB_TABLE and adding it
    again
    This is happening due to the remove() method on the Row, which marks the row as removed. Attempts to add this row into another collection will throw a DeadEntityAccessException.
    You may remove the row from its collection, then call Row.refresh on it to revert the entity back to the undeleted state.
    >
    Other problems:
    4. We get a JBO-25030 InvalidOwnerException when
    creating a row in AB_TABLE. This is caused by the
    composition. We work around this using
    createAndInitRow(AttributeList) on the view object
    (instead of create()). This is our add-Action:
    ViewObject abVO =
         panelBinding.getApplicationModule().findViewObject("ABView");
    ViewObject bVO =
         panelBinding.getApplicationModule().findViewObject("BView");
    Row bRow = bVO.getCurrentRow();
    NameValuePairs attribList = new NameValuePairs(
         new String[]{"BId"}, new Object[]{bRow.getAttribute("BId")});
    Row newRow = abVO.createAndInitRow(attribList);
    abVO.insertRow(newRow);
    This is a handy approach. Note that the Bc4j framework does not support dual composition where the same detail can be owned by two or more masters. In those cases, you also need to implement post ordering to post the masters before the detail (and reverse ordering for deletes).
    >
    5. After inserting the new row the NavigationBar has
    enabled commit/rollback buttons and AB_TABLE displays
    the row. But performing a commit does nothing. With
    the following statement after insertRow(newRow) the
    new row is created in the database:
    newRow.setAttribute("BId", bRow.getAttribute("BId"));
    This bug in JDev 903 was fixed and a patch set (9.0.3.1) is (I believe) available now via MetaLink.
    >
    >
    Please give us some help on this subject.
    Best regards
    Michael Thal

  • Hierarchical query with many-to-many relationship

    I have read with interest the creative solutions to complex hierarchical queries posted previously; they have been instructive but have not quite addressed this scenario.
    We have a hierarchy table H, with columns for ID, name, parentID, and other attributes.
    Within this table are a number of independent hierarchies, each existing for a different purpose.
    We have a master list of hierarchies in table T which describes the purpose of each hierarchy, provides some default attributes which the nodes can inherit, and stores a unique id for each hierarchy and a pointer to the root node of the corresponding hierarchy in table H.
    We have a master list of items M, with identically named columns to those in H, along with many other attributes.
    The members of table M ALL belong to EACH of the Hierarchies. So we have a link table I to define the intersection of H and M.
    So the leaf nodes of H are really containers for the list of elements from M which may be attached to them.
    The universe of M is very volatile, with new members being added, old ones deleted, and existing ones being reclassified frequently from node to node within each hierarchy. Since the hierarchies have to be built to handle every possible scenario, so that the members of M can always find a suitable node to reside in, quite often, in fact more often than not, the majority of leaf nodes for each hierarchy are empty at any given moment.
    Therefore, although we always know the root sector of a given hierarchy and can traverse downwards from there, if we worked our way up from the intersection table, we could eliminate up to 70% of the nodes of any given hierarchy from further consideration, as they don't need to be (in fact, must not be) included in reports.
    As implied by the above, rows in M are structurally similar (in terms of columns, but not in any real world sense) and are a superset of rows in H. But combining them into the one table doesn't seem to help the reporting process due to the many-to-many relationship which prevents the ID/parentID relationship from being carried through to this level.
    There are a number of other considerations of which the most pertinent is that the people using this database generally have an interest in only a subset of the master list of items in M. This relationship is also dynamic but important enough and rigid enough that another link table P exists to combine the Users in table U with the subset of M in which they are interested. (The users are also grouped into hierarchies of a totally different nature, but this aspect is secondary for now.)
    The reporting is reasonably straightforward for any single combination of User and Hierarchy; they want to see all the items they are interested in, listed in hierarchical sequence, totalled on change of level with the individual items M listed beneath the nodes of H. This is unfortunately required in real time, so retrieval performance is paramount.
    Some statistics might help to determine the optimum approach:
    The largest hierarchy has 10,000 nodes. The smallest about 100.
    The largest would have 70% or more of its nodes unused at any point in time, and even the smallest would have 25% unused.
    The hierarchies tend to be broad rather than deep, the maximum number of levels being about 5; but the larger ones should be twice as deep as this if performance was not compromised.
    There are dozens of hierarchies, but it may be possible to sharply reduce this number by exploiting the Order Siblings By clause.
    The number of rows in M varies between 500,000 and 50,000; depending upon how long historical data is retained on-line (and performance permitting, it would be retained indefinitely).
    The number of users varies between 1000 and 2000 but the range of M in which they are interested varies greatly; from as few as 100 to as many as 10,000+. So it is almost always worth beginning by eliminating the items in which they are not interested, implying once again that the hierarchy should be traversed upwards rather than down from the root.
    The current system is very old and survives by a tactic of building what are essentially materialised views of the database structure for each user overnight using, ahem, non-relational technology. This is inefficient and not easily scaled (but it works) and hence this redevelopment project needs to (a) work, and (b) work better and faster.
    I am happy to provide some DDL scripts if that helps explain the problem better than this narrative.
    I can't help feeling that the solution lies in somehow extending the hierarchical query past the many-to-many link table so that the Master list can be merged directly into the hierarchy such that the M items become the leaf nodes rather than the design outlined above - but I don't know how to do that. But I am sure everyone reading this does! :)
    All advice appreciated. Database version is not an issue; we are currently using version 10XE for experimentation, but production usage could be on 11 if that contains helpful features.
    Thank you
    CS
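
    One possible direction, sketched only from the narrative above (the table and column names H(ID, NAME, PARENT_ID), I(H_ID, M_ID) and M(ID, NAME) are assumptions): union the hierarchy's parent/child edges with the link-table edges so the M rows become leaf nodes, then run a single CONNECT BY over the combined edge set. A filter on a user's items of interest (the P link table) would go in the M branch of the union:

        WITH edges AS (
          SELECT 'H' || id        AS node_key,
                 'H' || parent_id AS parent_key,
                 name,
                 'NODE' AS node_type
          FROM   h
          UNION ALL
          SELECT 'M' || m.id, 'H' || i.h_id, m.name, 'ITEM'   -- items become children of their leaf node
          FROM   m
          JOIN   i ON i.m_id = m.id
        )
        SELECT LPAD(' ', 2 * (LEVEL - 1)) || name AS tree, node_type
        FROM   edges
        START WITH node_key = 'H' || :root_id
        CONNECT BY PRIOR node_key = parent_key;

    The prefixes keep H and M keys from colliding; whether this performs acceptably against the stated volumes would need testing, which is presumably where the requested DDL and sample data come in.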

    Hi,
    ChrisS. wrote:
    I am happy to provide some DDL scripts if that helps explain the problem better than this narrative.
    Yes, please do.
    The problem seems interesting, I'm sure many people here (including myself) are willing to help you in this matter.
    So yes, post DDL for the tables, as well as INSERTs to populate them with representative data. Please also include the output you require along with detailed explanations about the logic to get it.
    Don't forget to put lines of code between {code} tags in order to preserve formatting and readability, like this:
    SELECT sysdate FROM dual;
    Thanks.

  • How to design a many-to-many relationship between the fact and dimension

    There is a problem in my project, which is the subject line. I want to know how to implement this in OWB. I use Warehouse Builder 10. Thanks.

    You may design and load whatever db model you want to.
    But if you set a unique key, you may find some integrity issues. I wouldn't do a many-to-many relationship between facts and dimensions. This could cause you lots of headaches when users start to submit queries using these tables. You'll probably face performance issues.
    Regards,
    Marcos
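
    A common alternative, in the spirit of the bridge-table approach in the first post on this page (a sketch only; all names below are hypothetical): keep the fact-to-dimension join one-to-many and push the many-to-many into a separate bridge table keyed by a group id that the fact table carries:

        CREATE TABLE some_dim (
          dim_id   NUMBER PRIMARY KEY,
          dim_name VARCHAR2(60)
        );

        CREATE TABLE dim_group_bridge (
          group_id NUMBER NOT NULL,                              -- the grain the fact table joins to
          dim_id   NUMBER NOT NULL REFERENCES some_dim (dim_id),
          weight   NUMBER DEFAULT 1,                             -- optional allocation factor
          PRIMARY KEY (group_id, dim_id)
        );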

  • One to many relationship leads to recursion?

    Hi,
    I've created a web service function which returns an object retrieved from a database using the persistence API (the class was generated by the NetBeans 5.5 "Entity Class from Database" feature).
    I can successfully return an object which has no foreign keys. However, when I try to retrieve an entity which is on either side of a 1:N relationship, I receive a StackOverflowError.
    Here's my sample schema (I'm using Derby bundled with SJAS 9 for testing)
    create table Address (
       Id int not null,
       StreetName varchar(255),
       City varchar(255),
       PostalCode varchar(255),
       Party int,
       primary key (Id)
    );
    create table Party (
       Id int,
       Name varchar(255),
       primary key(Id)
    );
    alter table Address add constraint foreign key (Party) references Party;
    So as you can see, one Party might have many addresses.
    I can retrieve either an Address or Party from the database using something like:
    em.createNamedQuery("Party.findById")
       .setParameter("id", 1)
       .getSingleResult();
    When I try to return it, I get a StackOverflowError in the server log. I'm using the default TopLink provider so, enabling full logging on this, I can see the SQL.
    When retrieving an address, it selects all of the address fields and correctly binds the supplied Id value. It then performs a select on the Party table using the Address.Party value. This in turn causes it to attempt to retrieve an Address. After three queries, the error is thrown.
    I can't see why the above schema should cause this problem. Where an Address is requested, I would expect it to retrieve the address fields, including the value of the Party field but not the Party object itself. On the other hand. when a Party is retrieved, I would expect it to recurse through and return a collection of Address objects.
    Have I misunderstood this or am I missing some fundamental point about JAX-WS?
    Thanks for your help,
    -Phil

    You have to set both sides of the relations...
    for example to do this in a more seamless manner:
    public class UserProfile {
         public void addLog (UserActivityLog activity) {
              logs.add (activity);
              if (!equals (activity.getUserProfile ()))
                   activity.setUserProfile (this);
         }
    }
    Now this is the most primitive of ways to maintain two sided relations.
    There are a lot of other patterns to get around this.
    If you want to set the UserActivityLog to a specific UserProfile without
    having a firm reference to the UserProfile, you should get the object
    from the database/persistenceManager... as you are using application
    identity, you can do something like pm.getObjectById.
    JDO does not do magic references. If you set something to null, it will
    remain null until you set it.
    Srini wrote:
    Hi Guys
    I am trying to create a one to many relationship between 2 JDOs.
    UserProfile - User JDO having user info
    UserActivityLog - JDO having user log info
    UserProfile -- UserActivityLog
    1 many
    UserProfile - has a collection of UserActivityLog objects
    UserActivityLog - has a reference to UserProfile
    Qn 1:
    I created a UserProfile along with a collection of 2 UserActivityLog
    objects.
    Navigation:
    Now given a UserProfile object,i am able to get the UserActivityLog objects.
    But given a UserActivityLog object ,i get a NULL UserProfile object.
    Issues:
    Why is this happening??? Do I need to set the reference for "UserProfile" in
    UserActivityLog before inserting???
    If so, then how do I add a UserActivityLog to an existing
    UserProfile??? Because if I create a UserActivityLog with a reference to
    UserProfile, then it will throw a Duplicate Key violation for UserProfile as
    it is already persisted in the DB.
    It will be great if you can direct me to some one to many relationship
    examples already implemented.
    Thanks
    Srini
    Stephen Kim
    [email protected]
    SolarMetric, Inc.
    http://www.solarmetric.com
