Many-to-many performance issue

I realize that many-to-many joins have been discussed before (yes, I looked through many threads), but I'm having a slight variation on the issue. Our data warehouse has been functioning for a couple of years now, but we're now experiencing a dramatic degradation in report performance. I'll tell you everything I know and what I've tried. My hope is that someone will have an idea that hasn't occurred to me yet.
The troublesome relationship involves accounts and account_types. Each transaction has one account, but each account can have multiple account_types, and each account_type is made up of multiple accounts. It ends up looking like this:
Transaction_cube --< account_dimension >--< account_type_table
Given the many-to-many relationship between account and account_type, this is the only architecture I could come up with that will maintain data integrity in the transaction cube.
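For illustration, here is a minimal sketch of one common reading of that design as a bridge (junction) table; every name below is hypothetical, since the post doesn't give the actual DDL:
    -- Hypothetical sketch of the account/account_type bridge
    CREATE TABLE account_type_bridge (
        account_id      NUMBER REFERENCES account_dimension (account_id),
        account_type_id NUMBER REFERENCES account_type_table (account_type_id),
        PRIMARY KEY (account_id, account_type_id)
    );
Each transaction still carries a single account_id, so integrity in the cube is preserved; rows only fan out when a report joins through the bridge to account_type.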
I know that this is the cause of the performance issues because the reports run normally when this is removed. The volume of data obviously increases over time, but the problem appeared very suddenly -- not a gradual degradation that one would expect from a volume issue. The cube is partitioned by year and we're a little below last year's growth.
The other fact to throw in is that the account_type table did increase in size by an additional 30% when we first noticed the problem. However, the business was able to go back and remove half of the account_types (unused types) so now the table has fewer rows than it had before we noticed the problem (~15k rows in the account_type table).
We have tried pinning the table so that it remains in memory, but that did not help. I tried creating a materialized view combining accounts and account_types, with a similar lack of improvement. I've tried adding indexes, but there is still a full-table scan. All database objects are analyzed nightly after the data load is completed.
I'm fresh out of ideas at this point. Any suggestions and/or ideas would be greatly appreciated.

I've thought about that. It would mean approximately 20 additional columns, one for each of the different account_types. Unfortunately, it would also mean that all the reports that use the account_type would have to have a condition:
WHERE acct_type1='Income Stmt.' OR acct_type2='Income Stmt.' OR ....
Since the account_types are not set up in a hierarchy and there must be only one row per account, I'm not sure that this is a feasible solution.
Thank you for the suggestion.

Similar Messages

  • Voyager Performance Issues

We are having slow response times with BO Voyager. I'm not sure if this is due to the distance between the client session (US) and the physical location of the server (Europe). Has anyone else experienced this before?
Are there any general performance improvement measures that can be adopted/employed for Voyager, as there don't seem to be any obvious measures that can be adopted?
Is the performance entirely dependent on the speed of the OLAP connections supporting the Voyager workspaces?
    Thanks

    Hello Shariff
You don't mention what OLAP server you are using or any specifics about performance. Are your comments based on a comparison against another tool? The network distance won't help. However, are all components located in Europe (OLAP server, BOE and MDAS) or just the OLAP server?
    The following are general guidelines that we follow when tracking down performance issues. I am assuming you are using Microsoft Analysis Services.
    Many of the performance issues raised against Voyager turn out to be problems with cube design. Microsoft has fairly extensive resources talking about best practices and performance tuning so it is always good to make sure customers are aware and are using these best practices. This post contains links to some of the resources:
    MSAS 2005
    [http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/ssasqptb.mspx |http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/ssasqptb.mspx] and
    [http://download.microsoft.com/download/8/5/e/85eea4fa-b3bb-4426-97d0-7f7151b2011c/SSAS2005PerfGuide.doc|http://download.microsoft.com/download/8/5/e/85eea4fa-b3bb-4426-97d0-7f7151b2011c/SSAS2005PerfGuide.doc]
    MSAS 2008
    [http://www.microsoft.com/downloads/details.aspx?FamilyID=3be0488d-e7aa-4078-a050-ae39912d2e43&DisplayLang=en|http://www.microsoft.com/downloads/details.aspx?FamilyID=3be0488d-e7aa-4078-a050-ae39912d2e43&DisplayLang=en]
As for cube optimization, here's the advice from Microsoft about how to speed up cube response time:
    These are the two whitepapers that I recommend for background on performance tuning SSAS 2005:
•         [Identifying and Resolving MDX Bottlenecks|http://tinyurl.com/33uxob]: This was produced by the SQL Customer Advisory Team, with input from a variety of sources. This is very focused on query tuning. It will explain how to determine whether a bottleneck is in the storage engine (responsible for retrieving data from partitions and aggregations) or in the formula engine (responsible for pretty much everything else). It explains relevant information about these components and how they work, and provides guidance on tuning issues in either component. Frankly, tuning problems in the storage engine is a much simpler task than tuning problems in the formula engine.  <URL:  http://sqlcat.com/whitepapers/archive/2007/12/16/identifying-and-resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx >
•         [SQL Server 2005 Analysis Services Performance Guide|http://tinyurl.com/yr5hrv]: This predates the previous whitepaper and discusses performance more generally, including query and processing performance. It provides some very important guidance, especially with relation to cube design, that will help to achieve better performance with SSAS.
Once you have identified the query that's running very slowly, download Microsoft Business Intelligence Developer Studio Helper ([http://www.codeplex.com/bidshelper |http://www.codeplex.com/bidshelper]) and MANUALLY modify the aggregation at the level that takes the longest time.  Here's the Microsoft step-by-step guide to BIDS: <see attachment>.
    Here is the general strategy we want to pursue to see if we can get Voyager to perform faster:
First it would be good to get a better quantification of the problem. For example, are they experiencing Voyager being in general "x" times slower than Excel or another BI tool? And does this mean:
•             Most operations in Excel take less than a second and most Voyager operations take 5 or 6 seconds
•             Operations that take 10 seconds in Excel take about a minute in Voyager
•             Operations that take a minute in Excel take five or six minutes in Voyager
•             All of the above
The first answer is most likely to be Voyager's scalable N-tier environment versus Excel's 2-tier environment. The others more likely suggest problems with the way that Voyager is retrieving the data from the OLAP server. However, these are not hard and fast rules, as the reason for a performance difference is often workflow specific.
The second step would be to do a sanity check on their environment. For example, make sure they haven't deployed the web app server and MDAS server on a VM with only 500 megs of RAM. The things to check would be CPU and memory usage:
•             On the browser
•             On the web application server machine
•             On the MDAS machine
•             On the Analysis Services machine (though this can be assumed to be OK if the performance of other tools is acceptable)
The objective here would be to see if they have an under-resourced environment. Unfortunately we don't have a Voyager sizing guide, so I cannot give proactive recommendations about what things should look like, other than to say obvious things like 'if your MDAS machine is running out of memory with one user, you need more memory'. Our very general comment is that one MDAS can support 15 queries.
The final step would be to identify any specific workflows that demonstrate the problem, in particular any workflows that demonstrate an extreme difference between Voyager and the other products, especially if it's a high-value workflow for the customer. The objective here would be to do some detailed profiling to see if there is an opportunity for a bug fix which could be released as a patch.
It would also be good to know which version of Voyager they are using (which fix packs etc.), which browser they are using, and their deployment environment (OS, how many machines, CPU speed, memory). There are various fixes that give better performance for specific workflows that have gone into Voyager over time, so (without specific knowledge of what the problems are) it would be great to see if an XI 3.1 version of Voyager meets their expectations.
General performance problems are unlikely to be fixed with bug fixes, and having a better system configuration will only take you so far before you hit the limits of what you can get out of an N-tier system as opposed to a 2-tier thick-client one. Historically we have tended to find very specific problems which we were able to identify the root cause of and fix by issuing patches.
    I hope this helps somewhat.
    Regards

  • Discoverer 9.0.4.45.02 Performance issues

I see various performance issues with this version of Discoverer, like:
1. Reports taking a long time, e.g. it takes 25 minutes to display 100 records
2. CPU spikes while exporting reports to an Excel spreadsheet using Discoverer Viewer / Plus.
I have read many whitepapers on performance issues, but nothing is very specific to Discoverer 10g. Has anybody encountered similar issues?

    It depends...
    What kind of host do you have?
1) Have you checked memory consumption - perhaps you're swapping a lot?
2) Have you checked the CPU consumption - perhaps you need a better CPU or more CPUs.
    3) How about network performance?
    Regards,
    Martin Malmstrom

  • Can someone help me diagnose a strange stored procedure performance issue please?

    I have a stored procedure (posted below) that returns message recommendations based upon the Yammer Networks you have selected. If I choose one network this query takes less than one second. If I choose another this query takes 9 - 12 seconds.
    /****** Object: StoredProcedure [dbo].[MessageView_GetOutOfContextRecommendations_LargeSet] Script Date: 2/18/2015 3:10:35 PM ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE PROCEDURE [dbo].[MessageView_GetOutOfContextRecommendations_LargeSet]
    -- Parameters
    @UserID int,
    @SourceMessageID int = 0
    AS
    BEGIN
    -- variable for @HomeNeworkUserID
    Declare @HomeNeworkUserID int
    -- Set the HomeNetworkID
    Set @HomeNeworkUserID = (Select HomeNetworkUserID From NetworkUser Where UserID = @UserID)
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON
    -- Begin Select Statement
    Select Top 40 [CreatedDate],[FileDownloadUrl],[HasLinkOrAttachment],[ImagePreviewUrl],[LikesCount],[LinkFileName],[LinkType],[MessageID],[MessageSource],[MessageText],[MessageWebUrl],[NetworkID],[NetworkName],[PosterEmailAddress],[PosterFirstName],[PosterImageUrl],[PosterName],[PosterUserName],[PosterWebUrl],[RepliesCount],[Score],[SmallIconUrl],[Subjects],[SubjectsCount],[UserID]
    -- From View
    From [MessageView]
    -- Do Not Return Any Messages That Have Been Recommended To This User Already
    Where [MessageID] Not In (Select MessageID From MessageRecommendationHistory Where UserID = @UserID)
    -- Do Not Return Any Messages Created By This User
    And [UserID] != @UserID
    -- Do Not Return The MessageID
    And [MessageID] != @SourceMessageID
    -- Only return messages for the Networks the user has selected
    And [NetworkID] In (Select NetworkID From NetworkUser Where [HomeNetworkUserID] = @HomeNeworkUserID And [AllowRecommendations] = 1)
    -- Order By [MessageScore] and [MessageCreatedDate] in reverse order
    Order By [Score] desc, [CreatedDate] desc
END
The actual execution plan shows up the same; there are more messages on the network that is slow (2,800 versus 1,500), but it is ten times slower on the slow network. Is the fact that I am doing a Top 40 what makes it slow? My first guess was to take the Order By off, and that didn't seem to make any difference. The execution plan is below: 62% of the query goes to looking up IX_Message.Score, which is the clustered index, so I thought this would be fast. Also, the Clustered Index Seek for User.UserID takes 26%, which seems high for what it is doing.
    I have indexes on every field that is queried on so I am kind of at a loss as to where to go next.
    It just seems strange because it is the same view being queried in both cases.
    I tried to run the SQL Server Tuning Wizard but it doesn't run on Azure SQL, and my problem doesn't occur on the data in my local database.
Thanks for any guidance. I know a lot of the slowness is due to the lower-tier Azure SQL we are using; many of the performance issues weren't noticed when we were on the full SQL Server. But the other networks work extremely fast, so it has to be something to do with having more rows.
    In case you need the SQL for the View that I am querying it is:
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE VIEW [dbo].[MessageView]
    AS
    SELECT M.UserID, M.MessageID, M.NetworkID, N.Name AS NetworkName, M.Subjects, M.SubjectsCount, M.RepliesCount, M.LikesCount, M.CreatedDate, M.MessageText, M.HasLinkOrAttachment, M.Score, M.WebUrl AS MessageWebUrl, U.UserName AS PosterUserName,
    U.Name AS PosterName, U.FirstName AS PosterFirstName, U.ImageUrl AS PosterImageUrl, U.EmailAddress AS PosterEmailAddress, U.WebUrl AS PosterWebUrl, M.MessageSource, M.ImagePreviewUrl, M.LinkFileName, M.FileDownloadUrl, M.LinkType, M.SmallIconUrl
    FROM dbo.Message AS M INNER JOIN
    dbo.Network AS N ON M.NetworkID = N.NetworkID INNER JOIN
    dbo.[User] AS U ON M.UserID = U.UserID
    GO
The Network table has an index on NetworkID, but it is non-clustered; I don't think that is the culprit, though.
    Corby

I marked your response as the answer because you gave me information I didn't have about the sort. I ended up rewriting the query to use joins instead of the In's, and it improved dramatically: about one second on a very minimal Azure SQL database, where before it was 12 seconds on one network. We didn't notice the problem at all before we moved to Azure SQL; it was about one to three seconds at most.
    Here is the updated way that was much more efficient:
    CREATE PROCEDURE [dbo].[Procedure Name]
    -- Parameters
    @UserID int,
    @SourceMessageID int = 0
    AS
    BEGIN
    -- variable for @HomeNeworkUserID
    Declare @HomeNeworkUserID int
    -- Set the HomeNetworkID
    Set @HomeNeworkUserID = (Select HomeNetworkUserID From NetworkUser Where UserID = @UserID)
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON
;With cteMessages As (
    -- Begin Select Statement
    Select (Fields List)
    -- Join to Network Table
    From MessageView mv Inner Join NetworkUser nu on MV.NetworkID = nu.NetworKID -- Only Return Networks This User Has Selected
    Where nu.HomeNetworkUserID = @HomeNeworkUserID And AllowRecommendations = 1
    -- Do Not Return Any Messages Created By This User
    And mv.[UserID] != @UserID
    -- Do Not Return The MessageID
    And mv.[MessageID] != @SourceMessageID
), cteHistoryForThisUser As (
Select MessageID From MessageRecommendationHistory Where UserID = @UserID
)
    -- Begin Select Statement
    Select Top 40 (Fields List)
    -- Join to Network Table
    From cteMessages m Left Outer Join cteHistoryForThisUser h on m.MessageID = h.MessageID
    -- Do Not Return Any Items Where User Has Already been shown this Message
    Where h.MessageID Is Null
    -- An Order By Is Needed To Get The Best Content First
    Order By Score Desc
    END
    GO
The Left Outer Join to test for null was the biggest improvement, but it also helped to join to the NetworkUser table instead of doing the In subquery.
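For comparison, here is a hedged sketch of the same anti-join written with NOT EXISTS (table and column names as in the post; untested). It usually compiles to the same anti-join plan as the LEFT JOIN / IS NULL pattern, and unlike NOT IN it behaves safely if MessageID were ever nullable:
    -- Sketch: anti-join via NOT EXISTS instead of LEFT JOIN / IS NULL
    -- (the NetworkUser selection filter from the procedure still applies)
    SELECT TOP 40 mv.*
    FROM MessageView mv
    WHERE mv.UserID != @UserID
      AND mv.MessageID != @SourceMessageID
      AND NOT EXISTS (SELECT 1
                      FROM MessageRecommendationHistory h
                      WHERE h.UserID = @UserID
                        AND h.MessageID = mv.MessageID)
    ORDER BY mv.Score DESC, mv.CreatedDate DESC;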

  • Many-to-many Performance Problem (Using FAQ Template)

Having read "HOW TO: Post a SQL statement tuning request - template posting", I have gathered the following:
    I have included some background information at the bottom of the post
The following SQL statement has been identified as performing poorly. It takes ~160 seconds to execute, but similar SQL statements (shown below the first statement) execute in ~1 second.
    SQL taking 160 seconds:
    SELECT
    a.*
    FROM
    table_a a
    INNER JOIN table_a_b ab ON a.id = ab.media_fk
    WHERE
ab.channel_fk IN (7, 1);
SQL taking ~1 second or less:
ab.channel_fk IN (7);
Or even:
ab.channel_fk IN (6, 9, 170, 89);
The purpose of the SQL is to return rows from table_a that are associated with table_b (not in the SQL) through the junction table table_a_b.
    The version of the database is 10.2.0.4.0
    These are the parameters relevant to the optimizer:
    show parameter optimizer;
    NAME                                               TYPE        VALUE
    optimizer_dynamic_sampling                         integer     2
    optimizer_features_enable                          string      10.2.0.4
    optimizer_index_caching                            integer     0
    optimizer_index_cost_adj                           integer     100
    optimizer_mode                                     string      ALL_ROWS
    optimizer_secure_view_merging                      boolean     TRUE
    show parameter db_file_multi;
    NAME                                               TYPE        VALUE
    db_file_multiblock_read_count                      integer     16
    show parameter db_block_size;
    NAME                                               TYPE        VALUE
    db_file_multiblock_read_count                      integer     16
    select sname, pname, pval1, pval2 from sys.aux_stats$;
    SNAME                          PNAME                          PVAL1                  PVAL2
    SYSSTATS_INFO                  STATUS                                                COMPLETED
    SYSSTATS_INFO                  DSTART                                                07-18-2006 23:19
    SYSSTATS_INFO                  DSTOP                                                 07-25-2006 23:19
    SYSSTATS_INFO                  FLAGS                          0
    SYSSTATS_MAIN                  SREADTIM                       5.918
    SYSSTATS_MAIN                  MREADTIM                       7.889
    SYSSTATS_MAIN                  CPUSPEED                       1383
    SYSSTATS_MAIN                  MBRC                           8
    SYSSTATS_MAIN                  MAXTHR                         1457152
SYSSTATS_MAIN                  SLAVETHR                       -1
Here is the output of EXPLAIN PLAN:
    PLAN_TABLE_OUTPUT
    Plan hash value: 3781163428
    | Id  | Operation             | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |                    |  1352K|   771M|       | 60042   (3)| 00:05:56 |
    |*  1 |  HASH JOIN            |                    |  1352K|   771M|    27M| 60042   (3)| 00:05:56 |
    |*  2 |   INDEX FAST FULL SCAN| SYS_IOT_TOP_316310 |  1352K|    11M|       |  1816   (4)| 00:00:11 |
    |   3 |   TABLE ACCESS FULL   | TABLE_A            |  2190K|  1230M|       | 32357   (4)| 00:03:12 |
    Predicate Information (identified by operation id):
   1 - access("AB"."MEDIA_FK"="A"."ID")
   2 - filter("AB"."CHANNEL_FK"=1 OR "AB"."CHANNEL_FK"=7)
    Note
   - 'PLAN_TABLE' is old version
For reference, the EXPLAIN PLAN when using
ab.channel_fk IN (6, 9, 170, 89);
which executes in ~1 second is:
    PLAN_TABLE_OUTPUT
    Plan hash value: 794334170
    | Id  | Operation          | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |           |   143K|    81M|       | 58982   (3)| 00:05:50 |
    |*  1 |  HASH JOIN         |           |   143K|    81M|  2952K| 58982   (3)| 00:05:50 |
    |   2 |   INLIST ITERATOR  |           |       |       |       |            |          |
    |*  3 |    INDEX RANGE SCAN| C_M_INDEX |   143K|  1262K|       |  1264   (1)| 00:00:08 |
    |   4 |   TABLE ACCESS FULL| TABLE_A   |  2190K|  1230M|       | 32357   (4)| 00:03:12 |
    Predicate Information (identified by operation id):
   1 - access("AB"."MEDIA_FK"="A"."ID")
   3 - access("AB"."CHANNEL_FK"=6 OR "AB"."CHANNEL_FK"=9 OR
              "AB"."CHANNEL_FK"=89 OR "AB"."CHANNEL_FK"=170)
    Note
   - 'PLAN_TABLE' is old version
Here is the output of SQL*Plus AUTOTRACE including the TIMING information:
    SQL> set autotrace traceonly arraysize 100;
    SQL> SELECT
      2  a.*
      3  FROM
      4  table_a a
      5  INNER JOIN table_a_b ab ON a.id = ab.media_fk
      6  WHERE
      7  ab.channel_fk IN (7, 1);
    1336148 rows selected.
    Execution Plan
    Plan hash value: 3781163428
    | Id  | Operation             | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |                    |  1352K|   771M|       | 60042   (3)| 00:05:56 |
    |*  1 |  HASH JOIN            |                    |  1352K|   771M|    27M| 60042   (3)| 00:05:56 |
    |*  2 |   INDEX FAST FULL SCAN| SYS_IOT_TOP_316310 |  1352K|    11M|       |  1816   (4)| 00:00:11 |
    |   3 |   TABLE ACCESS FULL   | TABLE_A            |  2190K|  1230M|       | 32357   (4)| 00:03:12 |
    Predicate Information (identified by operation id):
       1 - access("AB"."MEDIA_FK"="A"."ID")
       2 - filter("AB"."CHANNEL_FK"=1 OR "AB"."CHANNEL_FK"=7)
    Note
       - 'PLAN_TABLE' is old version
    Statistics
          10586  recursive calls
              0  db block gets
         200457  consistent gets
         408343  physical reads
              0  redo size
      498740848  bytes sent via SQL*Net to client
         147371  bytes received via SQL*Net from client
          13363  SQL*Net roundtrips to/from client
             49  sorts (memory)
              0  sorts (disk)
  1336148  rows processed
The TKPROF output for this statement looks like the following:
    TKPROF: Release 10.2.0.4.0 - Production on Mon Oct 1 12:23:21 2012
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Trace file: ..._ora_4896.trc
    Sort options: default
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    ALTER SYSTEM SET TIMED_STATISTICS = TRUE
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.03          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.03          0          0          0           0
    Misses in library cache during parse: 0
    Parsing user id: 21
    SELECT
    a.*
    FROM
    table_a a
    INNER JOIN table_a_b ab ON a.id = ab.media_fk
    WHERE
    ab.channel_fk IN (7, 1)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2     27.23     163.57     179906     198394          0          16
    total        4     27.25     163.58     179906     198394          0          16
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 21
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.01       0.00          0          0          0           0
    Execute      2      0.00       0.03          0          0          0           0
    Fetch        2     27.23     163.57     179906     198394          0          16
    total        6     27.25     163.62     179906     198394          0          16
    Misses in library cache during parse: 1
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        0      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
        2  user  SQL statements in session.
        0  internal SQL statements in session.
        2  SQL statements in session.
    Trace file: ..._ora_4896.trc
    Trace file compatibility: 10.01.00
    Sort options: default
           1  session in tracefile.
           2  user  SQL statements in trace file.
           0  internal SQL statements in trace file.
           2  SQL statements in trace file.
           2  unique SQL statements in trace file.
          46  lines in trace file.
     187  elapsed seconds in trace file.
The DBMS_XPLAN.DISPLAY_CURSOR output:
    select * from table(dbms_xplan.display_cursor('474frsqbc1n4d', null, 'ALLSTATS LAST'));
    PLAN_TABLE_OUTPUT
    SQL_ID  474frsqbc1n4d, child number 0
    SELECT /*+ gather_plan_statistics */ c.* FROM table_a c INNER JOIN table_a_b ab ON c.id = ab.media_fk WHERE ab.channel_fk IN (7, 1)
    Plan hash value: 3781163428
    | Id  | Operation             | Name               | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
    |*  1 |  HASH JOIN            |                    |      1 |   1352K|   1050 |00:00:40.93 |     198K|    182K|    209K|    29M|  5266K| 3320K (1)|
    |*  2 |   INDEX FAST FULL SCAN| SYS_IOT_TOP_316310 |      1 |   1352K|   1336K|00:00:01.34 |   10874 |      0 |      0 |       |       |          |
    |   3 |   TABLE ACCESS FULL   | TABLE_A            |      1 |   2190K|   2267K|00:02:45.56 |     187K|    182K|      0 |       |       |          |
    Predicate Information (identified by operation id):
   1 - access("AB"."MEDIA_FK"="C"."ID")
   2 - filter(("AB"."CHANNEL_FK"=1 OR "AB"."CHANNEL_FK"=7))
Thank you for reading. I'm looking forward to suggestions on how to improve the performance of this statement.
h3. Background
    Many years ago my company made the decision to store many-to-many relationships in our database using pipe delimited fields. An example field value:
'|ABC|XYZ|VTR|DVD|'
Each delimited value refers to a unique 'short code' in TABLE_B (there is also a true numeric foreign key in TABLE_B, which is what I'm using in the junction table). We regularly search these columns with the following style of SQL:
    WHERE
    INSTR(pipedcolumn, '|ABC|') > 0
OR INSTR(pipedcolumn, '|XYZ|') > 0
...
Appropriate indexes have been created over the years to make this process as fast as possible.
We now have an opportunity to fix some of these design mistakes and implement junction tables to replace the piped fields. Before doing this, we decided to take a copy of a database from the customer with the largest record set and test against it. I created a new junction table:
    TABLE_A_B DDL:
        CREATE TABLE TABLE_A_B (
            media_fk NUMBER,
            channel_fk NUMBER,
            PRIMARY KEY (media_fk, channel_fk),
            FOREIGN KEY (media_fk) REFERENCES TABLE_A (ID),
            FOREIGN KEY (channel_fk) REFERENCES TABLE_B (ID)
        ) ORGANIZATION INDEX COMPRESS;
    CREATE INDEX C_M_INDEX ON TABLE_A_B (channel_fk, media_fk) COMPRESS;
Then, parsing out the pipe delimited field, I populated this new table.
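For reference, a hedged sketch of one way the junction table could be populated from the piped column, assuming the short codes live in a TABLE_B column here called SHORT_CODE (a hypothetical name; the post doesn't show this step):
    -- Sketch only: unpivot the piped OWNERS column into the junction table
    INSERT INTO table_a_b (media_fk, channel_fk)
    SELECT a.id, b.id
    FROM table_a a
    JOIN table_b b
      ON INSTR(a.owners, '|' || b.short_code || '|') > 0;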
    I then compared the performance of the following SQL:
    SELECT
    a.*
    FROM
    table_a a
    INNER JOIN table_a_b ab ON a.id = ab.media_fk
    WHERE
    ab.channel_fk IN (x, y, n); -- Can be Many Minutes
    --vs.
    SELECT
    a.*
    FROM
    table_a a
    WHERE
    INSTR(OWNERS,'|x|')    >0
    OR INSTR(OWNERS,'|y|')    >0
OR INSTR(OWNERS,'|n|')    >0; -- About 1 second seemingly regardless
When x, y, n are values that occur less frequently in TABLE_A_B.CHANNEL_FK the performance is comparable. However, once the frequency of x, y, n increases, the performance suffers. Here is a summary of the CHANNEL_FK data in TABLE_A_B:
    --SQL For Summary Data
    SELECT channel_fk, count(channel_fk) FROM table_a_b GROUP BY channel_fk ORDER BY COUNT(channel_fk) DESC;
    CHANNEL_FK             COUNT(CHANNEL_FK)
    7                      780741
    1                      555407
    2                      422493
    3                      189493
    169                    144663
    9                      79457
    6                      53051
    171                    28401
    170                    19857
    49                     12603
...
I've noticed that once I use any combination of values which occur more than about 800,000 times in total (i.e. IN (7, 1) = 780741 + 555407 = 1336148) I get performance issues.
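A hedged side note on that comparison: the join form returns one row per matching junction row, so a TABLE_A row that belongs to both channel 7 and channel 1 comes back twice, while the INSTR version returns it once. A semi-join rewrite (untested sketch) makes the two forms semantically equivalent and gives the optimizer a different shape to cost:
    -- Sketch only: semi-join instead of a row-multiplying inner join
    SELECT a.*
    FROM table_a a
    WHERE EXISTS (SELECT 1
                  FROM table_a_b ab
                  WHERE ab.media_fk = a.id
                  AND ab.channel_fk IN (7, 1));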
    I'm finding it very difficult to accept that the old pipe delimited fields are a better solution (ignoring everything other than this search criteria!).
    Thank you for reading this far. I truly look forward to suggestions on how to improve the performance of this statement.

Possibly not; I followed the instructions as best I could but may have missed things.
    h5. 1. DDL for all tables and indexes?
    h6. - TABLE_A_B is described above and has a total of 2,304,642 rows. TABLE_A and TABLE_B are described below.
    h5. 2. row counts for all tables?
    h6. - See below
    h5. 3. row counts for the predicates involved?
h6. - Not sure what you're asking for; I have a summary of the data in TABLE_A_B above. Could you clarify please?
    h5. 4. Method and command used to collect stats on the tables and indexes?
    h6. - For the stats I collected above I have included the command used to collect the data. If you are asking for further data I am happy to provide it but need more information. Thanks.
TABLE_A has 2,267,980 rows. The DDL that follows has been abbreviated; only the column involved is described.
    --  DDL for Table TABLE_A
      CREATE TABLE "NS"."TABLE_A"
       (     "ID" NUMBER
         --Lots more columns
       ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
      STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "CUSTOMNAMESPACE" ;
--  DDL for Index MI_PK
      CREATE UNIQUE INDEX "NS"."MI_PK" ON "NS"."TABLE_A" ("ID")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 16384 NEXT 29458432 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSTEM" ;
    --  Constraints for Table TABLE_A
      ALTER TABLE "NS"."TABLE_A" ADD CONSTRAINT "MI_PK" PRIMARY KEY ("ID")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 16384 NEXT 29458432 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
      ALTER TABLE "NS"."TABLE_A" MODIFY ("ID" NOT NULL ENABLE);TABLE_B has 22 rows. The DLL that follows has been abbriviated, only the column involved is described.
    --  DDL for Table TABLE_B
      CREATE TABLE "NS"."TABLE_B"
         "ID" NUMBER
      --Lots more columns
       ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
      STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "CUSTOMNAMESPACE" ;
    --  DDL for Index CID_PK
      CREATE UNIQUE INDEX "NS"."CID_PK" ON "NS"."TABLE_B" ("ID")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSTEM" ;
    --  Constraints for Table TABLE_B
      ALTER TABLE "NS"."TABLE_B" ADD CONSTRAINT "CID_PK" PRIMARY KEY ("ID")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 16384 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 505
      PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "SYSTEM"  ENABLE;
      ALTER TABLE "NS"."TABLE_B" MODIFY ("ID" NOT NULL ENABLE);Edited by: davebcast on Oct 1, 2012 8:51 PM

Short dump in ALV (too many parameters in PERFORM)

I'm getting a problem in this program again.
I'm getting a short dump: "too many parameters in PERFORM".
    <CODE>Report Z_50840_ALV
    Line-size 80
    Line-count 64
    Message-id ZZ
    No Standard Page Heading.
* Copyright statement *
* @ copyright 2007 by Intelligroup Inc. *
* Program Details *
* Program Name: Z_50840_ALV
* Date : 19.07.2007
* Author : Vasudevaraman V
* Description : Test Program
* Transport No:
* Change Log *
* Date :
* Author :
* Description :
* Transport No:
* Tables *
    Tables: vbrk.
* Type Pools *
    Type-Pools: SLIS.
* Variables *
    Data: GV_REPID TYPE SY-REPID.
* Structures *
    Data: BEGIN OF GIT_VBRK OCCURS 0,
    VBELN LIKE VBRK-VBELN, "Billing Document
    FKART LIKE VBRK-FKART, "Billing Type
    KNUMV LIKE VBRK-KNUMV, "Number of the document condition
    BUKRS LIKE VBRK-BUKRS, "Company code
    NETWR LIKE VBRK-NETWR, "Net value in document currency
    WAERK LIKE VBRK-WAERK, "SD document currency in basic list
    END OF GIT_VBRK,
    GIT_FCAT TYPE SLIS_T_FIELDCAT_ALV,
    WA_FCAT TYPE slis_fieldcat_alv,
    GIT_EVENTS TYPE SLIS_T_EVENT,
    WA_EVENTS TYPE SLIS_ALV_EVENT.
* Field Symbols *
    Field-symbols: <fs_xxxx>.
* Selection Screen *
    SELECTION-SCREEN BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
    SELECT-OPTIONS: S_VBELN FOR VBRK-VBELN.
    PARAMETERS: LISTDISP RADIOBUTTON GROUP G1,
    GRIDDISP RADIOBUTTON GROUP G1 DEFAULT 'X'.
    SELECTION-SCREEN END OF BLOCK B1.
* Initialization *
    Initialization.
    GV_REPID = SY-REPID.
* At Selection Screen *
    At selection-screen.
* Start Of Selection *
    Start-of-selection.
    SET PF-STATUS 'ABC'(001).
    PERFORM GET_BILLING_DETAILS.
    PERFORM FIELD_CATALOGUE.
    PERFORM GET_EVENTS.
* End Of Selection *
    End-of-selection.
    PERFORM DISPLAY_BILLING_DETAILS.
* Top Of Page *
    Top-of-page.
* End Of Page *
    End-of-page.
    *& Form GET_BILLING_DETAILS
* text
* --> p1 text
* <-- p2 text
    FORM GET_BILLING_DETAILS .
    SELECT VBELN
    FKART
    KNUMV
    BUKRS
    NETWR
    WAERK
    FROM VBRK
    INTO TABLE GIT_VBRK
    WHERE VBELN IN S_VBELN.
    IF SY-SUBRC = 0.
    SORT GIT_VBRK BY VBELN.
    ENDIF.
    ENDFORM. " GET_BILLING_DETAILS
    *& Form FIELD_CATALOGUE
* text
* --> p1 text
* <-- p2 text
    FORM FIELD_CATALOGUE .
CALL FUNCTION 'REUSE_ALV_FIELDCATALOG_MERGE'
EXPORTING
I_PROGRAM_NAME = GV_REPID
I_INTERNAL_TABNAME = 'GIT_VBRK'
* Optional parameters without local actuals must stay commented out,
* otherwise the call does not compile:
* I_STRUCTURE_NAME = I_STRUCTURE_NAME
I_CLIENT_NEVER_DISPLAY = 'X'
I_INCLNAME = GV_REPID
I_BYPASSING_BUFFER = 'X'
I_BUFFER_ACTIVE = ' '
CHANGING
CT_FIELDCAT = GIT_FCAT
EXCEPTIONS
INCONSISTENT_INTERFACE = 1
PROGRAM_ERROR = 2
OTHERS = 3.
    IF SY-SUBRC <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    ENDFORM. " FIELD_CATALOGUE
    *& Form DISPLAY_BILLING_DETAILS
* text
* --> p1 text
* <-- p2 text
    FORM DISPLAY_BILLING_DETAILS .
    IF LISTDISP = 'X'.
CALL FUNCTION 'REUSE_ALV_LIST_DISPLAY'
EXPORTING
I_INTERFACE_CHECK = ' '
I_BYPASSING_BUFFER = 'X'
I_BUFFER_ACTIVE = ' '
I_CALLBACK_PROGRAM = GV_REPID
I_CALLBACK_PF_STATUS_SET = ' '
I_CALLBACK_USER_COMMAND = ' '
* Optional parameters without local actuals are commented out below
* I_STRUCTURE_NAME = I_STRUCTURE_NAME
* IS_LAYOUT = IS_LAYOUT
IT_FIELDCAT = GIT_FCAT
* IT_EXCLUDING = IT_EXCLUDING
* IT_SPECIAL_GROUPS = IT_SPECIAL_GROUPS
* IT_SORT = IT_SORT
* IT_FILTER = IT_FILTER
* IS_SEL_HIDE = IS_SEL_HIDE
I_DEFAULT = 'X'
I_SAVE = ' '
* IS_VARIANT = IS_VARIANT
IT_EVENTS = GIT_EVENTS
* IT_EVENT_EXIT = IT_EVENT_EXIT
* IS_PRINT = IS_PRINT
* IS_REPREP_ID = IS_REPREP_ID
I_SCREEN_START_COLUMN = 0
I_SCREEN_START_LINE = 0
I_SCREEN_END_COLUMN = 0
I_SCREEN_END_LINE = 0
* IR_SALV_LIST_ADAPTER = IR_SALV_LIST_ADAPTER
* IT_EXCEPT_QINFO = IT_EXCEPT_QINFO
* I_SUPPRESS_EMPTY_DATA = ABAP_FALSE
* IMPORTING
* E_EXIT_CAUSED_BY_CALLER = E_EXIT_CAUSED_BY_CALLER
* ES_EXIT_CAUSED_BY_USER = ES_EXIT_CAUSED_BY_USER
TABLES
T_OUTTAB = GIT_VBRK
EXCEPTIONS
PROGRAM_ERROR = 1
OTHERS = 2.
    IF SY-SUBRC <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    ELSE.
CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
EXPORTING
I_INTERFACE_CHECK = ' '
I_BYPASSING_BUFFER = 'X'
I_BUFFER_ACTIVE = ' '
I_CALLBACK_PROGRAM = GV_REPID
I_CALLBACK_PF_STATUS_SET = ' '
I_CALLBACK_USER_COMMAND = 'USER_COMMAND'
I_CALLBACK_TOP_OF_PAGE = ' '
I_CALLBACK_HTML_TOP_OF_PAGE = ' '
I_CALLBACK_HTML_END_OF_LIST = ' '
* Optional parameters without local actuals are commented out below
* I_STRUCTURE_NAME = I_STRUCTURE_NAME
I_BACKGROUND_ID = ' '
* I_GRID_TITLE = I_GRID_TITLE
* I_GRID_SETTINGS = I_GRID_SETTINGS
* IS_LAYOUT = IS_LAYOUT
IT_FIELDCAT = GIT_FCAT
* IT_EXCLUDING = IT_EXCLUDING
* IT_SPECIAL_GROUPS = IT_SPECIAL_GROUPS
* IT_SORT = IT_SORT
* IT_FILTER = IT_FILTER
* IS_SEL_HIDE = IS_SEL_HIDE
I_DEFAULT = 'X'
I_SAVE = ' '
* IS_VARIANT = IS_VARIANT
IT_EVENTS = GIT_EVENTS
* IT_EVENT_EXIT = IT_EVENT_EXIT
* IS_PRINT = IS_PRINT
* IS_REPREP_ID = IS_REPREP_ID
I_SCREEN_START_COLUMN = 0
I_SCREEN_START_LINE = 0
I_SCREEN_END_COLUMN = 0
I_SCREEN_END_LINE = 0
I_HTML_HEIGHT_TOP = 0
I_HTML_HEIGHT_END = 0
* IT_ALV_GRAPHICS = IT_ALV_GRAPHICS
* IT_HYPERLINK = IT_HYPERLINK
* IT_ADD_FIELDCAT = IT_ADD_FIELDCAT
* IT_EXCEPT_QINFO = IT_EXCEPT_QINFO
* IR_SALV_FULLSCREEN_ADAPTER = IR_SALV_FULLSCREEN_ADAPTER
* IMPORTING
* E_EXIT_CAUSED_BY_CALLER = E_EXIT_CAUSED_BY_CALLER
* ES_EXIT_CAUSED_BY_USER = ES_EXIT_CAUSED_BY_USER
TABLES
T_OUTTAB = GIT_VBRK
EXCEPTIONS
PROGRAM_ERROR = 1
OTHERS = 2.
    IF SY-SUBRC <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    ENDIF.
    ENDFORM. " DISPLAY_BILLING_DETAILS
    *& Form GET_EVENTS
* text
* --> p1 text
* <-- p2 text
    FORM GET_EVENTS .
CALL FUNCTION 'REUSE_ALV_EVENTS_GET'
EXPORTING
I_LIST_TYPE = 0
IMPORTING
ET_EVENTS = GIT_EVENTS
EXCEPTIONS
LIST_TYPE_WRONG = 1
OTHERS = 2.
    IF SY-SUBRC <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    LOOP AT GIT_EVENTS INTO WA_EVENTS.
    CASE WA_EVENTS-NAME.
    WHEN 'USER_COMMAND'.
    WA_EVENTS-FORM = 'USER_COMMAND'.
    ENDCASE.
    MODIFY GIT_EVENTS FROM WA_EVENTS INDEX SY-TABIX.
    ENDLOOP.
    ENDFORM. " GET_EVENTS
    FORM USER_COMMAND.
    WRITE :/ 'USER_COMMAND'.
    ENDFORM.</CODE>.
    REGARDS,
    SURAJ

I have run the program in my system and I get the following display instead of a dump.
    Bill.Doc.  BillT Doc.cond.  CoCd             Net value Curr.
    90000763   B2    0000002800 1000                 0.00  DEM
    90005177   F2    0000012141 1000             5,500.00  DEM
    90005178   F2    0000012144 1000            32,838.00  DEM
    90005179   F2    0000012146 1000             6,100.00  DEM
    90005180   F2    0000012147 1000             6,100.00  DEM
    90005182   S1    0000012226 1000             5,500.00  DEM
    90005183   S1    0000012227 1000            32,838.00  DEM
    90005184   S1    0000012228 1000             6,100.00  DEM
    90005185   S1    0000012229 1000             6,100.00  DEM
    90005186   F2    0000012230 1000             6,100.00  DEM
    90005187   F2    0000012231 1000             6,100.00  DEM
    90005188   F2    0000012232 1000            32,778.00  DEM
    90005189   F2    0000012233 1000            34,354.00  DEM
    90005190   F2    0000012234 1000            19,991.00  DEM
    90005191   F2    0000012235 1000            19,719.00  DEM
    90005192   F2    0000012236 1000            43,004.00  DEM
    90005193   F2    0000012237 1000             9,242.00  DEM
    90005194   F2    0000012238 1000            12,156.00  DEM
    90005195   F2    0000012239 1000             7,294.00  DEM
    90005196   F2    0000012240 1000             9,694.00  DEM
    90005197   F2    0000012241 1000            32,838.00  DEM
    90005198   F2    0000012242 1000             9,352.00  DEM
    90005199   F2    0000012243 1000            13,013.00  DEM

  • OLAP issue with MANY TO MANY mapping

    Hi All,
We have a requirement where we have to pull specific measures & associated dimension data from an OLAP cube into SQL tables. The source cube has almost 80% many-to-many mappings.
When we pull this data into SQL tables, whether by writing MDX or DMX, the dimension-level metric values do not match what OLAP browsing provides.
When we pull the measure with only regular dimensions, the metric values match exactly across all dimensions. The mismatch appears as soon as at least one many-to-many dimension is part of the MDX or DMX query. Beyond this, we have pulled all the intermediate facts
& dimensions involved in the many-to-many mapping into SQL tables & tried a number of JOINs, but the metric values haven't matched up.
We are very close to our delivery dates & are not sure of any resolution. Could you please guide us on next steps here?
Thanks in advance.


Many extents decrease performance?

Hello,
Is it true that if a segment is fragmented into many extents, performance decreases?
If yes, what is the threshold?
If 10 is the threshold, does this query:
    SELECT SEGMENT_NAME, BYTES, BLOCKS, EXTENTS FROM DBA_SEGMENTS
    WHERE OWNER='xxxxx' AND EXTENTS>10 ;
    show "bad" segment?
    Thanks in advance

Multiple extents would only affect the performance of full table scans, since single-block IO is used for keyed access to an index and for indexed access to a table block from an indexed key.
With a full table scan, the number of extents only affects IO performance if the extent size is not an even multiple of the multiblock IO size, since an extra IO is then required to pick up the odd blocks.
Example: a 64K multiblock IO size. If the table is in 100 64K extents, it will take 100 IO requests to read the table. If the table is in a single 6400K extent, it will also take 100 IO requests to read the table. Since 100 = 100, the performance is the same.
I did not follow the link, but there should be a proof at Tom's site. It isn't very hard to run this test and check the IO statistics oneself.
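A minimal sketch of that self-test (session statistics before and after a forced full scan; the table name is hypothetical):
    -- snapshot the session's IO statistics
    SELECT sn.name, ms.value
    FROM v$mystat ms JOIN v$statname sn ON sn.statistic# = ms.statistic#
    WHERE sn.name IN ('physical reads', 'physical read IO requests');
    -- force a full scan of the test table, then re-run the query above and diff
    SELECT /*+ FULL(t) */ COUNT(*) FROM test_table t;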
    HTH -- Mark D Powell --

  • Many-to-many relationship performance problem

    Hi:
I have a model that uses a bridge table to resolve a many-to-many relationship. Here it is:
    dimension 1 -< fact table >-dimension 2-< bridge table >-additional data table
Now, when I drive from the additional data table with a specific value and join to the bridge table and dimension 2, the performance is fine. But as soon as I add the fact table to the query, the query never returns. I'm in a development environment, so my fact table has 220K records and the bridge table has 200K records. The dimension 2 and additional data tables have hundreds of records. In other words, it's not much data. I have indexes and referential constraints on all relevant columns.
    Can anyone suggest what is happening to cause such a performance hit when I add the fact table to the query?
    Thanks for any suggestions!
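To make the shape concrete, here is a hedged sketch of the join chain being described; all object names are hypothetical, since the post doesn't include the SQL:
    SELECT f.*
    FROM additional_data ad
    JOIN bridge_table b ON b.additional_id = ad.id
    JOIN dimension_2 d2 ON d2.id = b.dim2_id
    JOIN fact_table f ON f.dim2_id = d2.id
    WHERE ad.some_column = :val;
Driving from additional_data through the bridge into dimension_2 is the fast part; adding the final join to fact_table is the step that never returns.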

    sybrand_b wrote:
    The way you write it yes, but there is one minor detail.
    You can have a 0,1 or many relationship: one employee has zero, one or many phone numbers
    but you can not have the opposite, as it doesn't make sense.
    0, 1 or many phone numbers belong to 1 employee.
    It is customary to use 0,1 or m when the relationship is optional.
    Sybrand Bakker
Senior Oracle DBA
Is this correct now?
    one to many : one employee has 0,1 or multiple phone numbers
    many to one : 1 or multiple phone numbers to one employee

  • Many to Many Retrieval Issue

While I understand the limitation of having both the child and parent mappings be updatable, I am having trouble getting the child (read-only mapping) to retrieve the appropriate objects. Is there some trick to make this happen? The writable mapping is able to retrieve them, but the read-only one just doesn't populate.
    Thanks,
    Michael

Ok, I think I am closer to the real issue... your example provides a framework that functions as it should.
    However, what I am doing is:
    (FYI: My parents can update their children in the M-M mapping)
    1. Copying a child object using a copy constructor.
The constructor retains all existing relationships to its parent and children (both many-to-many) by looping through them and calling the appropriate setParent and setChild methods that you recommended.
    ...this works properly
    2. Then I assign a new (already persisted) Parent object to the newly created child. When I persist the child, I get the following error after fully validating the uow:
    Exception occured: LOCAL EXCEPTION STACK:
    EXCEPTION [TOPLINK-7056] (TopLink - 9.0.3.2 (Build 429)): oracle.toplink.exceptions.ValidationException
    EXCEPTION DESCRIPTION: The wrong object was registered into the UnitOfWork. The object [com.wachovia.retail.dal.business.loanreview.Element@9815354] should be the object from the parent cache
    Can you explain why this is happening?
    FYI, if I attempt to assign a new (already persisted) child, it works fine.

Mapping issue, "many to many" mapping

    Hi specialists!
    I have to map this structure:
    source
    segment_1   (1...n)
    key_1
    aaa
    bbb
    segment_2   (1...n)
    key_2
    ccc
    ddd
    Target:
    segment_2
    ccc
    ddd
    The condition is segment_1=>key1 = segment_2=>key2
    The problem is that I need to check each segment_1 with each segment_2.
    Is it possible using standard mapping functions?

    Dani_K,
    This can be done with standard mapping if your source.segment_1.key_1 is unique.   If it is not, you could end up with a many to many mapping which would be difficult no matter how you map. 
    Try this:
    For segment_2
    source.segment_1.key_1 (right click, context, source) ---> sort --->
    source.segment_2.key_2 (right click, context, source) ---> sort --->
    equalsS --->   ifWithoutElse --->
    source.segment_2  (then)               
    removeContexts  (after the If Then)--->   target.segment_2
    For ccc
    source.segment_1.key_1 (right click, context, source) ---> sort --->
    source.segment_2.key_2 (right click, context, source) ---> sort --->
    equalsS --->   ifWithoutElse --->
    source.segment_1.ccc  (right click, context, source)     (then)               
    removeContexts  (after the If Then)--->  SplitByValue(Each Value) ---> target.segment_2.ccc
    For ddd
    source.segment_1.key_1 (right click, context, source) ---> sort --->
    source.segment_2.key_2 (right click, context, source) ---> sort --->
    equalsS --->   ifWithoutElse --->
    source.segment_1.ddd (right click, context, source)     (then)               
    removeContexts  (after the If Then)--->   SplitByValue(Each Value) ---> target.segment_2.ddd
    Ideally, the segment_1.Key_1 and segment_2.key_2 are sorted before you process the message.  If so, remove the sort above.

  • Toplink - Many-to-Many Mapping issue

    Hi
    I have the following tables
    User
    Place
    Team
User : uid, name (where uid is the primary key)
Place : uid, pid (where uid and pid form a composite primary key)
    Team : tid , name , pid( where tid is primary key)
    Here the following mappings :
one user belongs to many places
one user has many teams (by place)
a user has many teams
I have created a many-to-many mapping between the place (or user) and team tables using the intermediate table user_team (uid, tid). But when I try to retrieve the teams for a user id, I get all the teams. I need to filter the teams by place id instead of getting all of them.
For example:
If a user belongs to place id 101, then I need to get only the teams belonging to place id 101. But I am getting the teams of all places (101, 102) for the same user id.
Please help me with how I can create a valid mapping between all the tables to retrieve valid data. I am using TopLink Workbench (10g - 9.0.4).

Your model seems confused; you may wish to re-think it. It seems odd that a place would have a user id; it seems more like a m-m join table between user and team, but then it should have a team id, not a place id.
You can use a selectionCriteria() on a ManyToManyMapping to define complicated m-m joins, but you would probably be better off re-thinking your model. What object model are you trying to achieve, exactly? Go from there, then define your data model.
    -- James : http://www.eclipselink.org

  • Many to many relationship issue

    Guys,
How do we deal with the situation where we have a many-to-many relationship between 2 different characteristics?
    Thanks,
    RG

    Hi,
    I assume you are talking about data modeling.
When you are required to decide how to model two characteristics with a many-to-many relationship:
1. If you keep them in the same dimension table, the size of the dimension table will be larger.
Let's say you have 100 customers & 100 materials.
Now, if you keep them in the same dimension,
C1 can occur in combination with M1 through M100, and similarly for the other customers.
So the total number of rows in the dimension table will be 100 x 100 = 10,000!!!
It will again be comparable in size to the fact table, which is not recommended.
2. You should always try to define them in different dimensions.
Now, try keeping them in different dimensions: in this case the Customer dimension will have only 100 rows & the Material dimension will have only 100 rows - very small compared to the fact table size.
    Hope this helps
    Sriman

  • Report performance Issue in BI Answers

    Hi All,
We have a performance issue with reports. A report runs for more than 10 minutes; we took the query from the session log and ran it against the database, where it took no more than 2 minutes. We have verified proper indexes on the where-clause columns.
Could anyone suggest how to improve the performance in BI Answers?
    Thanks in advance,

I hope you don't have many case statements and complex calculations that you do in Answers.
The next thing you need to monitor is how many rows of data you are trying to retrieve with the query. If the volume is huge, then it takes time to do the formatting in Answers, as you are going to dump huge volumes of data. A database (like Teradata) initially returns something like 1-2,000 records; if you hit "show all records", then even the database is going to take a fair amount of time if you are dumping many records.
    hope it helps
    thanks
    Prash

Interested in performance issues? Read this! If you can explain it, you're a master Jedi!

    This is the question we will try to answer...
What is the hardware bottleneck of Adobe Premiere Pro CS6?
I used PPBM5 as a benchmark testing template.
All the data and logs have been collected using performance counters.
First of all, let me describe my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
EVGA Nvidia GTX 680   // Yes, I created bench stats for both cards
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6GB/s 5 disks with Fastpath Chip Installed (RAID 0)
    I have other RAID installed, but not relevant for the present post...
    PSU
Corsair 1000 Watts
After many days of tests, I want to share my results with the community and comment on them.
    CPU Introduction
I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I've logged all results precisely in a graph (see picture 1).
Intro : I tested my E5 Xeon 2687W (8 cores with Hyper-Threading - 16 threads) to know if programs can use all of it.  I used Prime95 to get the result.  // I know this seems ordinary, but you will understand soon...
    The result : Yes, I can get 100% of my CPU with 1 program using 20 threads in parallel.  The CPU gives everything it can !
Comment : I put the 3 IOs (CPU, disk, RAM) on the graph of my computer during the test...
    (picture 1)
    Disk Introduction
I tested my disk and pushed it to maximum speed to understand where the limit is, and I've logged all results precisely in a graph (see picture 2).
Intro : I tested my 556GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6GB/s, 5 disks with Fastpath chip installed) to know if I can reach the maximum % disk usage (0% idle time)
The result : As you can see in picture 2, yes, I can get the max out of my drive at ~1.2 GB/sec read/write, steady !
Comment : I put the 3 IOs (CPU, disk, RAM) on the graph of my computer during the test to see the impact of transferring many GB of data during ~10 sec...
    (picture 2)
Now I know my limits !  It's time to go deeper into the subject !
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result :
My CPU is not used at 100%; it hovers around 50%
My disk is totally idle !
All process usage is idle except the Adobe Media Encoder process
The transfer rate seems to be a wave (up and down), probably caused by (encode time....  write.... encode time.... write...)  // It's ok, ~5 MB/sec transfer rate !
CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
RAM: more than enough !  39 GB of RAM free after the test !  // Excellent
~65 threads opened by Adobe Media Encoder (good, threads are a sign that the program tries to use many cores !)
GPU load on the card seems to be a wave as well (up and down), ~40% GPU usage during the encoding process.
GPU RAM usage is 1.2 GB (but no problem with the GTX 680, and no problem with the Quadro 6000 and its 6 GB of RAM !)
Comment/Question : CPU is free (50%), disks are free (99%), GPU is free (60%), RAM is free (62%); my computer is not pushed to its limit during the encoding process.  Why ????  Is there some time delay in the encoding process ?
Other : the Quadro 6000 & GTX 680 give the same result !
    (picture 3)
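    To watch a single process the way I watched Adobe Media Encoder, a minimal per-process monitor could look like this (psutil again; the process-name match and the one-second interval are my choices, and on Windows you may need to run it elevated to read another process's I/O counters):

        # watch_proc.py -- per-process CPU, thread count and write rate, once per second.
        import time

        import psutil

        def find(name):
            for p in psutil.process_iter(["name"]):
                if p.info["name"] and name.lower() in p.info["name"].lower():
                    return p
            raise SystemExit(name + " is not running")

        proc = find("Adobe Media Encoder")    # adjust to the exact process name
        prev = proc.io_counters()
        proc.cpu_percent(None)                # prime the CPU counter
        while proc.is_running():
            time.sleep(1.0)
            cur = proc.io_counters()
            # divide by logical core count to get a whole-machine percentage
            print(f"cpu {proc.cpu_percent(None) / psutil.cpu_count():5.1f}%  "
                  f"threads {proc.num_threads():3d}  "
                  f"write {(cur.write_bytes - prev.write_bytes) / 2**20:7.1f} MB/s")
            prev = cur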
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) with Adobe Media Encoder onto my RAID 0 LSI array.
    The result:
    My CPU is not used at 100%
    My disk waves up and down again, but stays far, far from its limit!
    Every process is idle except Adobe Media Encoder
    The transfer rate waves up and down again, probably caused by the cycle (buffer... write... buffer... write...). // That's fine, ~375 MB/s peak transfer rate! Easy!
    CPU power management gives the CPU 100% of its clock during encoding (good: the clock stays stable throughout).
    RAM: more than enough! 40.5 GB free after the test! // Excellent
    ~48 threads opened by Adobe Media Encoder (good: many threads is a sign the program tries to use many cores!)
    GPU load = 0 (the GPU is irrelevant for this kind of export)
    GPU RAM usage is 400 MB (not used for encoding)
    Comment/Question: CPU is free (65%), disks are free (60%), GPU is free (100%), RAM is free (63%); my computer is not pushed to its limit during encoding. Why???? Is there some delay built into the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (Direct in RAMDrive)
    I rendered the same sequence (Disk Test) with Adobe Media Encoder directly onto my RAM drive.
    Comment/Question: Look at the transfer rate in picture 5. It is exactly the same speed as with my RAID 0 LSI controller. Impossible! In the same picture, look at the transfer rate the RAM drive itself can reach (> 3.0 GB/s steady), without ever going under 30% disk usage. CPU is idle (70%), disk is idle (100%), GPU is idle (100%) and RAM is free (63%). If the export runs no faster on a device almost three times faster, the bottleneck cannot be storage; it has to be upstream in the encoder. // These results leave me REALLY confused. It smells like a bug, a big problem with hardware and I/O usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result:
    My CPU is not used at 100%
    My disk is totally idle!
    Every process is idle except Adobe Media Encoder
    The transfer rate waves up and down again, probably caused by the cycle (encode... write... encode... write...). // That's fine, ~2 MB/s transfer rate! A real joke!
    CPU power management gives the CPU 100% of its clock during encoding (good: the clock stays stable throughout).
    RAM: more than enough! 40 GB free after the test! // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's fine in a multi-threaded app!)
    GPU load = 100% (this uses my GPU to the maximum)
    GPU RAM usage is 1 GB
    Comment/Question: CPU is free (70%), disks are free (98%), GPU is loaded (MAX), RAM is free (63%); only the GPU is pushed to its limit during this export. So for this kind of encoding, the speed is bounded by the slowest resource, here the GPU. (A small GPU-logging sketch follows picture 6.)
    Other: The Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower).
    (picture 6)
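    If you want to log GPU load over time the same way, nvidia-smi can be polled from a script. A minimal sketch (the one-second interval is my choice; note that some GeForce cards and older drivers report N/A for the utilization field):

        # gpu_log.py -- poll GPU utilization and memory use via nvidia-smi.
        import subprocess
        import time

        QUERY = ["nvidia-smi",
                 "--query-gpu=name,utilization.gpu,memory.used",
                 "--format=csv,noheader"]

        while True:
            # one line per installed GPU, e.g. "Quadro 6000, 40 %, 1229 MiB"
            print(subprocess.check_output(QUERY, text=True).strip())
            time.sleep(1.0)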
    Encoding a single Full HD AVCHD clip to H.264 Result (Premiere Pro CS6)
    You can see the result in the picture.
    Comment/Question: CPU is free (55%), disks are free (99%), GPU is free (90%), RAM is free (65%); my computer is not pushed to its limit during encoding. Why???? Adobe Premiere seems to have a problem with thread management; my hardware sits idle! I understand that AVCHD can be very hard to decode, but where is the waste? My computer is willing, but the software is not! (See the decode cross-check sketch after picture 7.)
    (picture 7)
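    Out of curiosity, you can separate "how fast can this machine decode AVCHD at all" from "how fast does Premiere decode it". A minimal cross-check sketch using ffmpeg, which is completely independent of Adobe's pipeline; the clip path is hypothetical:

        # decode_bench.py -- how fast can this machine decode the AVCHD clip,
        # independent of Premiere? A null sink means only the decoder runs.
        import subprocess
        import time

        CLIP = r"D:\media\clip.mts"   # hypothetical path to the AVCHD source

        t0 = time.perf_counter()
        subprocess.run(
            ["ffmpeg", "-v", "error", "-benchmark", "-i", CLIP, "-f", "null", "-"],
            check=True,
        )
        print(f"pure decode took {time.perf_counter() - t0:.1f} s "
              "- watch CPU% while it runs to see what decode alone can use")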
    Render composition using 3D Raytracer in After Effects CS6
    You can see the result in the picture.
    Comment: The GPU seems to be the bottleneck in After Effects. CPU is free (99%), disks are free (98%), memory is free (60%); the exact numbers depend on the settings and the type of project.
    Other: The Quadro 6000 & GTX 680 take the same time to render the composition.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance right now. The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card. Both cards give very similar results (I will probably return my GTX 680 since I don't really get any better performance from it). I have not tried a Tesla card alongside my Quadro, but as it stands neither Premiere Pro nor After Effects uses multiple GPUs. I tried using both cards together (GTX & Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro, I'm speechless! Premiere Pro cannot extract the maximum performance from my computer -- not just 10% or 20% short, but about 60% left unused on average. I'm a programmer; multi-threaded apps are hard to get right and I can sympathize with Adobe's programmers. But if anybody has a comment about this post, a trick, or any kind of solution, please post it. This looks like a bug...
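    One possible explanation (my speculation, not something Adobe has confirmed): if part of the encode pipeline is serial, Amdahl's law caps the speedup at 1 / ((1 - p) + p/N), where p is the parallel fraction of the work and N the number of logical cores, and average CPU utilization then tops out at speedup / N. On 16 logical cores, p of roughly 0.93 already caps average utilization near 50%, which matches what I measured. A toy calculation:

        # amdahl.py -- utilization ceiling when part of a pipeline is serial.
        def max_speedup(p, n):
            """Amdahl's law: p = parallel fraction of the work, n = cores."""
            return 1.0 / ((1.0 - p) + p / n)

        for p in (0.5, 0.93, 0.99):
            s = max_speedup(p, 16)
            print(f"parallel fraction {p:.2f}: speedup {s:5.2f}x "
                  f"-> ~{100 * s / 16:.0f}% average CPU on 16 logical cores")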
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D ray tracing in AE is not very good at using all CUDA cores. In fact it is lousy: it uses only very few cores, the threading is pretty bad, and it does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in that timeline, in contrast to the MPEG2-DVD test, where rescaling is going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above, which is why my own Disk I/O result is only 33 seconds with the current test; but when I extend that timeline to 3 hours, the direct export method gives me 22 seconds, even though the amount of data to be written, 37,092 MB, has increased threefold -- an effective write speed of 1,686 MB/s.
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02
