CBO: OWB Dimension Performance Issue (DIMENSION_KEY = DIM_LEVEL_ID)

Hi,
In my opinion the OWB dimensions are very useful, but sometimes there are performance issues.
I work with OWB dimensions quite a lot, and with big dimensions (> 100,000 rows) I often get performance problems when OWB generates the code to load (merge step) or look up these dimensions.
OWB dimensions have a PK on DIMENSION_KEY, and the level surrogate IDs are equal to the DIMENSION_KEY if the row is an element of that level (and not a parent hierarchy element).
I have hunted the problem down to the condition DIMENSION_KEY = (DETAIL_)LEVEL_SURROGATE_ID. OWB uses it to select only the rows with (detail-)level attributes.
But it seems the CBO is not able to predict the cardinality of that condition correctly: it always assumes the condition returns 1 row. So I assume that condition is the reason for the "bad" execution plans, e.g. a
"NESTED LOOPS OUTER" with an inline view of cardinality = 1.
Example:
SELECT COUNT(*) FROM DIM_KONTO_TAB WHERE DIMENSION_KEY = KONTO_ID;
--2506194
Explain plan for:
SELECT DIMENSION_KEY, KONTO_ID
FROM DIM_KONTO_TAB WHERE DIMENSION_KEY = KONTO_ID;
| Id | Operation                   | Name          | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|  0 | SELECT STATEMENT            |               |    1 |    12 | 12568  (3) | 00:00:01 |       |       |
|  1 |  PARTITION HASH ALL         |               |    1 |    12 | 12568  (3) | 00:00:01 |     1 |     8 |
|* 2 |   TABLE ACCESS STORAGE FULL | DIM_KONTO_TAB |    1 |    12 | 12568  (3) | 00:00:01 |     1 |     8 |
Predicate Information (identified by operation id):
   2 - storage("DIMENSION_KEY"="KONTO_ID")
       filter("DIMENSION_KEY"="KONTO_ID")
Or, for loading an SCD2 dimension:
|* 12 |  FILTER                       |                      |      |       |         |          | Q1,01 | PCWC |
|  13 |   NESTED LOOPS OUTER          |                      | 328K | 3792M | 3968 (2)| 00:00:01 | Q1,01 | PCWP |
|  14 |    PX BLOCK ITERATOR          |                      |      |       |         |          | Q1,01 | PCWC |
|  15 |     TABLE ACCESS STORAGE FULL | OWB$KONTO_STG_D35414 | 328K | 2136M |   27 (4)| 00:00:01 | Q1,01 | PCWP |
|  16 |    VIEW                       |                      |    1 |  5294 |         |          | Q1,01 | PCWP |
|* 17 |     TABLE ACCESS STORAGE FULL | DIM_KONTO_TAB        |    1 |   247 |  489 (2)| 00:00:01 | Q1,01 | PCWP |
I have tried a lot:
- Statistics are gathered often, with monitoring information and (frequency) histograms on the condition columns.
- Created extended statistics: DBMS_STATS.CREATE_EXTENDED_STATS(USER, 'DIM_KONTO_TAB', '(DIMENSION_KEY, KONTO_ID)')
- Created a combined index on DIMENSION_KEY, LEVEL_SURROGATE_ID.
- Read a lot.
- Hinted the queries in OWB (but it seems the inline view is too complex to use a hash join).
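For reference, a minimal sketch of the extended-statistics attempt from the list above, using the table and columns from my example (the 254-bucket histogram size is an assumption, not what OWB generates):

```sql
-- Create a column group on the two compared columns, then regather
-- statistics with a histogram on that group so the optimizer has
-- combined information about the pair.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         USER, 'DIM_KONTO_TAB', '(DIMENSION_KEY, KONTO_ID)')
FROM   dual;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'DIM_KONTO_TAB',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO ' ||
                  'FOR COLUMNS SIZE 254 (DIMENSION_KEY, KONTO_ID)');
END;
/
```

Note that even with the column group in place, the estimate for the DIMENSION_KEY = KONTO_ID predicate stayed at 1 row for me.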
Next step:
- Tracing the optimizer (CBO events).
Does someone have an idea how to help the CBO get the cardinality right?
If you need more Information, please tell me.
Thanks a lot.
Moritz

Hi Patrick,
For a relational dimension, these values must be unique within the level. They are not required to be numeric IDs (although a numeric ID best follows surrogate-key best practice).
If you use the same sequence for the whole dimension, you have ensured that each entry in the entire dimension is unique, which means you can move your data as-is into OLAP solutions. We will do this as well in the next major release.
Hope that helps,
Jean-Pierre

Similar Messages

  • Owb client+performance+x-windows

    Hi,
    We are using 11.2.0.1 and finding the performance of OWB slow, e.g. opening mappings, the Control Center, etc.
    The machine has enough memory (2.5 GB; Task Manager shows owb.exe using about 400 MB, with about 1.5 GB in use on the PC at any time across all applications), and we have ruled out network latency, since other client/server applications accessing the same machine and database perform fine.
    The server is located in the same building as the client and is a UNIX server.
    We are using the cumulative patch recommended by Oracle for performance reasons - still the same problem.
    Anybody have any other tips to get the Design Center/Control Center to open quickly, etc.?
    We have used previous suggestions such as Tools > Optimize Repository and purging of the audit tables.
    Another question:
    Is it possible to run the OWB client (i.e. open/amend/deploy/run mappings etc. with the windowed interface) directly on the UNIX server (e.g. using X Windows), as opposed to the client Windows PC?
    Many thanks

    Hi
    There are patches with performance-related fixes in this area; if you can upgrade to 11.2.0.2 that is advised, otherwise there is a mega patch for 11.2.0.1:
    BEST --> OWB 11.2.0.2 + megapatch v3 (12874883)
    ALTERNATE --> OWB 11.2.0.1 + patch for bug 10270220: Mega Patch v2 (supersedes patch 9802120)
    Cheers
    David

  • 10.2 OWB Dimension "binding" erases all Table metadata!

    I need some help with dimension definitions in OWB. I have a dimension that I created and deployed as ROLAP, which includes its corresponding table.
    I added a number of indexes on the table and some default values on the columns - trying to avoid NULLs in the dimension fields.
    I then realized I needed one additional attribute in the dimension. I put the additional attribute in, selected "Bind" and poof - no more indexes, constraints or default values. It appears to have completely recreated the table metadata.
    Also, the column ordering of the corresponding table for the dimension appears to be random.
    Is there some better way to control this? I'm sure I'll have to add future attributes to existing dimensions!
    TIA,
    Mike

    OK, after perusing the manual for a while, I found that to add columns to an existing dimension I need to:
    1. Go into the dimension object editor, select the Storage tab and set it to "Manual" instead of Star.
    2. Add the columns to both the dimension and the table.
    3. Right-click the dimension to show the "detailed view".
    4. Manually map the column.
    5. Deploy (after juggling the creation type of the dimension and table).
    I don't find it at all intuitive, and if you forget, you wipe out your table metadata for the dimension - but at least it works.

  • OWB Repository Performance, Best Practice

    Hi
    We are considering installing the OWB repository in its own database, dedicated solely to the design repository, to achieve maximum performance in the Design Center.
    Does anyone have knowledge of best practice in setting up the database for an OWB repository (db parameters, block size and so on)?
    We are currently using Release 11.1.
    BR
    Klaus

    You can find all this information in the documentation, right here:
    http://download.oracle.com/docs/cd/B31080_01/doc/install.102/b28224/reqs01.htm#sthref48
    You will find all the initialization parameters for the runtime instance and for the design instance.
    Success
    Nico

  • Merged Dimension Performance vs. Multiple SQL Statements via Contexts

    Hi there,
    If you have a Webi report and you select two measures, each from a different context, along with some dimensions, and it generates two separate SQL statements combined via a "Join", does that join happen outside the scope of the Webi Processing Server?
    If it happens within the Webi Processing Server's memory, how is the processing different, with respect to performance, from having two separate queries in your report and then merging the dimensions?
    Thanks,
    Allan

    You can use the code as per your requirement, but you need to do some performance tuning:
    http://biemond.blogspot.com/2010/08/things-you-need-to-do-for-owsm-11g.html

  • OWB Dimensions

    DBME:Oracle 10gR2
    OS:WindowsXp
    hello,
    I am using OWB 10gR2 and working with dimensions. I know the basics of DWH, but while reading the 10gR2 OWB user guide I got confused by some concepts.
    Can anyone send me a good link where I can read about dimensions using OWB 10gR2? It should include detail.
    thanks
    tanveer

    Hi,
    I hope this link helps:
    http://download-uk.oracle.com/docs/cd/B31080_01/doc/owb.102/b28223/ref_dim_objects.htm

  • BPC limitations Dimensions/Performance

    What is the limitation on the number of dimensions BPC (OutlookSoft) can have? And how is performance affected by more or fewer dimensions being used in BPC Excel? Is there any performance measure in place?
    Thanks

    For SAP BPC 5.1, SAP recommends a max of 20 dimensions per application. I had an application which had 15 dimensions, and one of the biggest templates (Excel) had 5 dimensions in row expansion and 1 dimension (time) in column expansion. The template had close to 1000 rows of data and used to take close to 6 minutes to open. This has been my worst experience.
    I am not sure if there is a performance metrics published by SAP but you might find something in their implementation guides.
    Thanks,
    Ameya Kulkarni

  • OWB Dimension Key

    Hi Everyone,
    I have a simple question for you guys: can anyone explain to me why the PK from OWB on a Type 1 SCD always starts at the number of rows that I insert? If I'm loading a dimension for the first time, be it INSERT/UPDATE or UPDATE/INSERT, and I'm loading n rows, the sequence starts at n instead of 1. I know that I shouldn't bother with it since it is only a PK, but I would like to understand what is being done in the background.
    Thank you
    Jacques

    Hi
    OWB creates the sequence to start at 1; when deployed it is OK, but when I run my mapping, it is as if it caches keys for all rows and then generates new ones when inserting. I started my sequence at minus n (-n) and then it started at 1 on the insert. Just curious to know what is happening under the hood.
    Thanks for your reply
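    For what it's worth, the behavior described above matches how a set-based MERGE consumes sequence values: NEXTVAL is evaluated once per source row, even for rows that end up in the UPDATE branch, so gaps of roughly one full load's worth of values appear. A hypothetical sketch (the table, column and sequence names are invented, not what OWB generates):

    ```sql
    -- Hypothetical Type 1 SCD merge: dim_seq.NEXTVAL is drawn for
    -- every row of stg_tab, so after a first load of n rows the
    -- sequence already stands around n even though some values
    -- were never used for an insert.
    MERGE INTO dim_tab d
    USING stg_tab s
    ON    (d.business_key = s.business_key)
    WHEN MATCHED THEN
      UPDATE SET d.attr = s.attr
    WHEN NOT MATCHED THEN
      INSERT (dimension_key, business_key, attr)
      VALUES (dim_seq.NEXTVAL, s.business_key, s.attr);
    ```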

  • Performance issue!

    Hi friends,
    I am using ORACLE 8i, Weblogic 7.0 and Oracle thin
    driver.
    I have a table with 60 columns and 1 million rows.
    I am showing data page by page (50 records per page).
    The UI is PowerBuilder (PB), and through an HTTP servlet I am
    sending the response (as XML).
    The PB has a scroll bar which can be dragged to any place
    to see the relevant data.
    Any design help will be highly appreciated.
    Time taken to send the response is the crux of the
    issue. The data is read-only.
    The user can wish to see the page based on various
    criteria, such as:
    a) ordering the data on a particular column
    b) mixing two or more columns to prepare the selection
    criteria
    .....and so on.
    Thanks !

    Use demand pagination. What it means is that you only query the DB for the required set of data.
    That is: let us say you are on the 3rd page displaying rows 21-30 (the number of rows displayed per page is 10). When the user wishes to see the next 10 records, you only query the DB for rows 31-40.
    You can achieve this using Oracle subqueries or VARRAYs.
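    A sketch of the subquery approach, compatible with Oracle 8i (the table and column names are placeholders):

    ```sql
    -- Fetch only rows 31-40 of the ordered result: the innermost
    -- query orders the data, the middle query caps the result at the
    -- page's last row, and the outer query trims the leading rows.
    SELECT *
    FROM  ( SELECT t.*, ROWNUM rn
            FROM  ( SELECT * FROM big_table ORDER BY sort_col ) t
            WHERE ROWNUM <= 40 )
    WHERE rn >= 31;
    ```

    The inner ROWNUM filter lets Oracle stop fetching after row 40, which is what makes this cheap compared to materializing the full result set.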

  • DISTINCT operator performance issue

    Hi guys,
    I am facing a performance issue in a query which uses DISTINCT. This is my query:
    SELECT /*+ ORDERED USE_NL_WITH_INDEX(c DIMENSION_KEY_PK) */
           DISTINCT f.*, c.client_ids
    FROM   FACT_TAB f, DIM_TAB c
    WHERE  f.client = c.dimension_key;
    FACT_TAB = fact table with a bitmap index on the client column (10,000,000 records).
    DIM_TAB = dimension table with dimension_key as primary key (100,000 records).
    When I select only fact-table columns in the above query, it executes within a second. But when I execute the above query, it takes more than 15 minutes.
    How can I improve the above query? Any suggestions or tips would be helpful.
    Thanks in advance.

    Hi myers,
    You are absolutely right, there is no purpose in using DISTINCT, because I have found there are no duplicates in the fact table, nor in the dimension.
    BUT there is another problem after this: I am joining these two tables with another table (an inline view) which gives me duplicate data, so I need the DISTINCT operator for that purpose. The time dimension is also used in this query now. Here is my new query:
    SELECT /*+ ORDERED USE_NL_WITH_INDEX(c DIMENSION_KEY_PK) */
           DISTINCT f.*, c.client_ids
    FROM   FACT_TAB f, DIM_TAB c, DIM_TIME_TAB t,
           (SELECT id, start_date, end_date FROM tab3) tab
    WHERE  f.client = c.dimension_key
    AND    f.time  = t.dimension_key
    AND    f.tabid = tab.id
    AND    t.day_start_date >= tab.start_date
    AND    (t.day_start_date <= tab.end_date OR tab.end_date IS NULL);
    Thanks

  • Performance issue with MSEG table

    Hi all,
    I need to fetch materials (MATNR) based on the service order number (AUFNR) in the selection screen, but there is a performance issue with this. How can I overcome it?
    Regards ,
    Amit

    Hi,
    There could be various reasons for a performance issue with MSEG:
    1) Database statistics of the tables and indexes are not up to date; because of this, the wrong index is chosen during execution.
    2) Improper indexes: there is no index with the fields mentioned in the WHERE clause of the statement. Because of this, the CBO may have chosen a wrong index and done a range scan.
    3) An optimizer bug in Oracle.
    4) The table is very large; consider archiving.
    Better to switch on an ST05 trace before you run the statement; it will give more detailed information about where exactly the time is being spent during execution.
    Hope this helps
    dileep

  • Socket based application - Performance Issues - Suggestions Needed

    Hi All,
    We have an application which basically has been developed using core java. Here is a high level information about the application:
    a) It opens a serversocket which allows clients to connect to it.
    b) For every new client connection, a separate thread is created and this thread deals with requests from clients, processing the data and replying back to clients.
    c) Each socket is polled continuously and the socket timeout is 2 seconds. If there is a timeout, we handle the situation and the socket is read again. So basically each socket is read every 2 seconds. If the number of timeouts reaches a configurable value, we close the connection and the thread is dropped as well.
    d) In production, three instances of this application are running behind a Cisco load balancer. It has been there for the last 5 years.
    There have always been some minor performance issues, and we have sorted them out using different types of garbage collectors, by introducing hardware load balancers, and by upgrading the code for new Java versions. It is currently running on 1.4.2.
    Today, while googling, I came across the following on the BEA website, which says that core Java sockets are not as efficient as native APIs. BEA has implemented its own APIs for WebLogic. My queries are:
    a) Are there any better Java socket/network APIs (for Solaris - I know Java is platform independent, but there could be libs which use native libs) which are much more efficient than core Java?
    b) We are getting the InputStream/OutputStream and creating DataInputStream/DataOutputStream objects to read the data byte by byte. Each byte can carry different information, which is why this is required. Is there any better way of getting the info than what we are currently doing?
    c) As I mentioned, we are continuously polling the socket for read operations with a timeout value of 2 seconds. What is better from a performance point of view: (1) frequent read operations with a smaller timeout value, or (2) less frequent read operations with a larger timeout value? (3) Any better idea?
    Please suggest a few things or pointers which I could use to improve the performance of the application. Many thanks.
    Thanks, Akhil
    From BEA website:-
    "Although the pure-Java implementation of socket reader threads is a reliable and portable method of peer-to-peer communication, it does not provide the best performance for heavy-duty socket usage in a WebLogic Server cluster. With pure-Java socket readers, threads must actively poll all opened sockets to determine if they contain data to read. In other words, socket reader threads are always "busy" polling sockets, even if the sockets have no data to read. This unnecessary overhead can reduce performance."

    My recommendations:
    - Always use a BufferedInputStream and BufferedOutputStream around the socket streams
    - Increase the socket send and receive buffers to at least 32k if you are on a Windows platform where the default is a ridiculous 8k, which hasn't been enough for about 15 years.
    - Your 2-second timeout is far too short. Increase it to at least 10 seconds.
    - Your strategy of counting up to N short timeouts of S seconds each is completely pointless. Change it to one single timeout of N*S seconds. There is nothing to be gained by the complication you have introduced to this.

  • ** JDBC Receiver - Oracle Stored Procedure - Large Records - Performance

    Hi friends,
    In my File-to-JDBC scenario, I use an Oracle SP. I designed my target structure as described on help.sap.com. In this scenario, the sender file sends a large number of records, and we have to update those records in the Oracle table. As per this requirement, I did the mapping. I tested one file with 4 records; in SXMB_MONI, the mapping works fine. I have given the target payload below. The message is processed successfully. (I have not yet created the SP in the database, so I am unable to check the updating of records in the table.)
    My doubts are:
    1) Is the target payload correct?
    2) For each <STATEMENT> tag, will XI establish connectivity to JDBC and update the record? If so, in real time, if we send a large number of records, e.g. 50 thousand, will a performance issue arise or not?
    3) How to solve the problem mentioned in point 2 (lookup procedure etc.)?
    Kindly reply friends. (If you  have faced this problem ... kindly reply how to solve this issue)
    Target Payload:
    <?xml version="1.0" encoding="utf-8"?>
    <ns1:PSABCL_Mumbai xmlns:eds="http://sdn.sap.com/sapxsl" xmlns:ns0="http://abc.xyz.com" xmlns:ns1="http://abc.xyz.com/ABCL/Finance">
    <STATEMENT>
    <SP_ABCL ACTION="EXECUTE">
    <IF_ROW_STAT>FOR_IMPORT</IF_ROW_STAT><CON_FST_NAME>John</CON_FST_NAME><CON_LAST_NAME>Test001915</CON_LAST_NAME><CON_MID_NAME>W</CON_MID_NAME>
    </SP_ABCL>
    </STATEMENT>
    <STATEMENT>
    <SP_ABCL ACTION="EXECUTE">
    <IF_ROW_STAT>FOR_IMPORT</IF_ROW_STAT><CON_FST_NAME>Josephine</CON_FST_NAME><CON_LAST_NAME>Walker</CON_LAST_NAME><CON_MID_NAME>Rose</CON_MID_NAME>
    </SP_ABCL>
    </STATEMENT>
    <STATEMENT>
    <SP_ABCL ACTION="EXECUTE">
    </SP_ABCL>
    </STATEMENT>
    <STATEMENT>
    <SP_ABCL ACTION="EXECUTE">
    </SP_ABCL>
    </STATEMENT>
    </ns1:PSABCL_Mumbai>
    Thanking You.
    Kind Regards,
    Jegathees P.

    Hi,
    The structure should be -
    <MsgType Name>
    <StatementName>
    <storedProcedureName action = "EXECUTE">
    <table>
    <List of Parameters isInput = "true" type = "STRING">
    Map the table node to the stored procedure name.
    Also,
    For each statement, XI would make a database call. For better performance, do not check the checkbox in the communication channel: "Open a new connection to the database for each message".
    Also, another solution would be to collect all the data in the mapping into a comma- or pipe-separated string and have the statement node created only once. This way, even though you have 5000 records, they will all be given to the SP in one DB call. You can also arrange the mapping so that you do not send more than X records to the database in a single call. We are using an XML created in code in a UDF for this. The SP has to take care of splitting the comma- or pipe-separated values, or the XML sent as a string input parameter.
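    A rough sketch of the stored-procedure side of that idea, assuming a pipe-separated input string (the procedure, table and column names here are invented for illustration):

    ```sql
    -- Hypothetical SP: receives one pipe-separated string carrying
    -- many records and splits it value by value, so the middleware
    -- makes a single DB call per message instead of one per record.
    CREATE OR REPLACE PROCEDURE sp_load_rows(p_rows IN VARCHAR2) IS
      v_rest VARCHAR2(32767) := p_rows;
      v_item VARCHAR2(4000);
      v_pos  PLS_INTEGER;
    BEGIN
      WHILE v_rest IS NOT NULL LOOP
        v_pos := INSTR(v_rest, '|');
        IF v_pos > 0 THEN
          v_item := SUBSTR(v_rest, 1, v_pos - 1);   -- value before the next pipe
          v_rest := SUBSTR(v_rest, v_pos + 1);      -- remainder after the pipe
        ELSE
          v_item := v_rest;                         -- last value, no pipe left
          v_rest := NULL;
        END IF;
        INSERT INTO target_tab (col1) VALUES (v_item);
      END LOOP;
    END;
    /
    ```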
    VJ

  • 11g (11.2.0.1) - dimension operator very slow on incremental load

    The dimension operator is very slow in processing incremental loads on 11.2.0.1. Have applied the cumulative patch - still the same issue.
    Statistics have also been gathered.
    The initial load into the empty dimension performs fine (thousands of records in < 1 min); the incremental load has been running over 10 mins and still has not loaded 165 records from the staging table.
    Any ideas?
    Saw this in 10.2.0.4 and applied a patch which cured the issue there.

    Hi,
    Thanks for the excellent suggestion.
    Other mappings which maintain SCD Type 2 using the dimension operator behave similarly to 10g. Have raised the issue with this particular mapping with Oracle - awaiting a response.
    One question: looking at the mappings which maintain SCD Type 2s, they appear to join on the dimension key and the surrogate IDs.
    What is best practice regarding indexing of such a dimension? E.g. is it recommended to index the dimension key and the surrogate IDs along with the natural/business keys?
    Thanks

  • Alternative for result from other query  and merge dimension option option

    Hi Everyone ,
    I am developing a Webi report over a BEx query.
    The actual scenario is that the output of one Webi report should be the input of the other.
    E.g.:
    Table 1
    2010        Cus 1
    2010        Cus 2
    2011        Cus 3
    Table 2
    Cus 1    m1   100
    Cus 2    m2   200
    Cus 3    m1   400
    Report 1 design:
    The first report is created using table 1 and prompts for the year.
    Report 2 design:
    The second report is created using table 2 and prompts for the customer.
    So when I run the first report, it asks for the year parameter; if I select 2010, the report returns Cus 1 and Cus 2,
    and this output should be the input for report 2.
    So the output will be 100+200=300.
    NOTE: 1. "Result from other query" is not working in the Webi filter pane, since I am building on an OLAP universe.
          2. Merge dimension performance is very slow.
    Any Solution ?
    Regards,
    Kannan.B

    Hi,
    Thanks for your reply.
    As you said, if I give a hyperlink to the other report:
    E.g. the user selected Tamil Nadu, report 1 opened, and then he has to click some cell or hyperlink cell to view the actual report (the 2nd report).
    Suppose the user clicked that hyperlink cell, the 2nd report opened, and he is viewing the data for Tamil Nadu, and then he decides to see the report for
    Andhra Pradesh. According to this logic he has to go back to the first report, refresh the data for Andhra, and from there come to the 2nd report.
    In total, 4 screens will be opened to see the two states' reports.
    So, is there some other alternative...?
