Measure Aggregated at Logical Level causes poor performance

OBIEE 11g
We've recently implemented a new measure which involves aggregating a Period expenditure figure up to the Fiscal Year level.
I duplicated the existing Period_Expenditure measure, renamed it to FY_Expenditure, and then changed the logical level for the Time dimension to Fiscal Year.
I get the figures I expected; however, the OBI SQL generated wasn't what I expected.
We now have the main SQL as before, but it is now left outer joined to a second SQL statement that does the aggregation, which is pretty expensive in terms of logical I/O. Elapsed time went from 2 seconds to 20 seconds.
I would have expected the BI server to be clever enough to use analytic functions to solve this problem - has anyone else had similar issues when using aggregates like this?
Thanks in advance,
Matt

I'm going to answer my own question here, as I've managed to build a simplified solution; it may help others.
Basically, the analysis this is running for is:
select measures, FY_Measure
from fact_table
where period = 'MMYYYY';
Because we don't have the full set of data within this SQL to provide coverage for the FY_Measure (i.e. we're at period-level granularity), OBIEE is smart enough to create a second query to get the data required for the year.
If we change the analysis to be:
select measures, FY_Measure
from fact_table
where FY = '2012';
It will quite happily use analytic functions to satisfy the measure - fairly straightforward really.
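For illustration, the two shapes of physical SQL look roughly like this - a sketch only, with hypothetical table and column names (time_dim, time_key, expenditure), not the exact SQL the BI server emits:

-- fiscal-year-level filter: the FY measure can be satisfied with an
-- analytic (window) function over the rows already being fetched
select t.period,
       sum(f.expenditure) as period_expenditure,
       sum(sum(f.expenditure)) over (partition by t.fiscal_year) as fy_expenditure
from   fact_table f
join   time_dim t on t.time_key = f.time_key
where  t.fiscal_year = '2012'
group by t.fiscal_year, t.period;

-- period-level filter: the filtered rows don't cover the full year,
-- so a second aggregate query is generated and left outer joined back
select d.period, d.period_expenditure, y.fy_expenditure
from  (select t.fiscal_year, t.period, sum(f.expenditure) as period_expenditure
       from   fact_table f
       join   time_dim t on t.time_key = f.time_key
       where  t.period = 'MMYYYY'
       group by t.fiscal_year, t.period) d
left outer join
      (select t.fiscal_year, sum(f.expenditure) as fy_expenditure
       from   fact_table f
       join   time_dim t on t.time_key = f.time_key
       group by t.fiscal_year) y
on    y.fiscal_year = d.fiscal_year;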

Similar Messages

  • Web Items cause poor performance?

    Hi Experts!
    I have a web report that takes approximately 60 seconds to run. It has 2 queries, but they are set to run only on navigation, and in RSRT only 1 query runs on query execution.
    In RSRT, the report information says that the query takes about 7 seconds.
    Therefore, I am assuming that the other [approx] 50 seconds are being lost at the presentation layer.
    Within my report I have a number of Web Items. Stripping the web items (tabs, containers, etc.) down to just the bare minimum (an analysis table) and removing the other query entirely, the report now returns in approximately 14 seconds.
    Therefore, I am wondering:
    Does anyone know if the number of Web Items has a detrimental effect on report performance?
    Does anyone know of a way to speed up this part?
    Is it something to do with the processing of the hardware that could slow this down?
    Has anyone experienced this before? If so, are there any pointers on how this can be resolved?
    Many thanks!
    Dave
    Forum points always awarded for helpful answers!

    CSM
    Thanks - but I know what Web Items are; I am interested in whether they have an impact on report performance (i.e. does using 12 web items with 1 query reduce report performance?). My theory is that it does, but if so, I would like to know why and whether there is any way to improve performance - even if this means upgrading the hardware.
    I am trying to find out whether anyone has come across similar issues on their projects and, if they have, how they overcame them.
    Thanks anyway!

  • In version 31.2.0 I get the error "Error: Please do not load stuff in the multimessage browser directly" in the console - is this causing poor performance?

    Windows 8.1
    Error message at startup
    Timestamp: 26/11/2014 21:33:29
    Error: Please do not load stuff in the multimessage browser directly, use the SummaryFrameManager instead.
    Source File: resource://gre/modules/summaryFrameManager.js
    Line: 85
    What does this mean?

    If you keep getting a 'red' message it means that the HDD is faulty and will have to be replaced. You will have to format the new HDD in Disk Utility > Erase, and then install OS X and restore your data from Time Machine.
    Ciao.

  • Setting aggregation content for logical level in 11g

    Hi Guys,
    When working with horizontal and vertical federation in OBIEE 11g with multiple data sources - in my case, Essbase and an RDBMS:
    1) I pulled the columns and dragged them into the concerned table.
    2) The related hierarchies have been defined.
    3) When trying to go to one of the LTSs and set the logical level aggregation, I am not able to see the corresponding level columns, nor do I get the Get Levels option to retrieve them. Where am I going wrong?
    When I try to join a fact by pulling it onto the fact... I can see the levels in the Content tab, but when I try to define levels and check it, it gives me the error "There are no levels matching the BI algorithm".
    Any answers would be appreciated.
    TIA,
    KK
    Edited by: Kranthi.K on Sep 5, 2011 2:52 AM

    It is autocreated; I didn't customize it. I'm dropping the RDBMS table onto the Essbase cube dimension table and I'm not getting the RDBMS content levels that should be defined in the LTS of the table, even though the RDBMS table has a level-based hierarchy - still no success.
    Any more ideas?
    UPDATED POST
    Deepak, it was not helpful, as I have gone through that document before. I'm trying all scenarios to figure out where it is actually going wrong.
    If I don't find the path, I will let you know what I'm trying to do so you can help me out.
    UPDATED POST-2
    Any more pointers from the experts.
    Edited by: Kranthi.K on Sep 6, 2011 7:01 AM

  • OBIEE 11g - Parent-child hierarchy with a fact table measure (poor performance)

    Hi
    I have a star model with a parent-child hierarchy joined to the fact table.
    When I create a new analysis and add the hierarchy column together with any other field from any dimension (in selected columns or filters), everything works fine and the hierarchy displays every level very quickly.
    But when I add a measure from the fact table, hierarchy performance drops dramatically and I have to wait 20-30 seconds to see each new level displayed.
    Taking a look at the SQL that is issued, it seems to be all right:
    The measure is a COUNT(DISTINCT factable.field) and the group by is done on the Parent and Child fields of the hierarchy table.
    Is there any other way to set up this hierarchy? Why are the measures reducing performance?
    Any comment will be helpful.
    Thanks in advance.
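    For context, the generated query described above is roughly of this shape (a sketch only, with hypothetical table and column names):
    select h.parent_key, h.child_key, count(distinct f.some_field) as measure
    from   hierarchy_table h
    join   fact_table f on f.member_key = h.child_key
    group by h.parent_key, h.child_key;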

    Try these
    Use Oracle Enterprise Manager (EM) URL to monitor end to end OBIEE real time performance: http://<server>:7001/em
    In Oracle Business Intelligence 11g, the perfmon URL is still valid to use i.e. http://<server>:9704/analytics/saw.dll?Perfmon
    Check these
    http://www.rittmanmead.com/files/biforum2012/ranka_performance.pdf
    http://docs.oracle.com/cd/E17904_01/web.1111/e13814/jvm_tuning.htm
    https://blogs.oracle.com/pa/entry/obiee_ibm_jdk_tuning_for
    Support note OBIEE 11g Infrastructure Performance Tuning Guide Doc ID 1333049.1
    If this helps, mark it and update back :)

  • Quicktime 7.1.2 causing poor Rosetta performance

    Has anyone else noticed considerably poor performance from any PowerPC apps (such as major lag when typing in Microsoft Word) and general system sluggishness after installing the QuickTime 7.1.2 update?
    I have.

    Not here - although I am using Office v.X rather than 2004, and I haven't installed Mac OS X 10.4.7 either.

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. What I often struggle with is the Logical Levels (in the Content tab), where the level of each dimension is set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates - getting them to work/appear with different dimensions. (Using the menu "More" - "Get Levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table, either at the Detail or Total level? I can see the use of the logical levels when using aggregate fact tables (at quarter, month etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level.
    It is not necessary to connect all dimensions; it depends on the report you are creating. But as a best practice we should map all of them at the Detail level when join conditions are defined in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.ProductName level then you should use the detail level; otherwise use the total level (at the Product / Employee level).
    Get Levels (available only for fact tables) changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
    Source: Admin guide (Get Levels definition)
    thanks,
    Saichand.v

  • Poor performance and high number of gets on seemingly simple insert/select

    Versions & config:
    Database : 10.2.0.4.0
    Application : Oracle E-Business Suite 11.5.10.2
    2 node RAC, IBM AIX 5.3
    Here's the insert / select; I'm struggling to explain why it takes 6 seconds, and why it needs to get > 24,000 blocks:
    INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
      NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
      WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
      WIA.ITEM_TYPE = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          4           0
    Execute      2      3.44       6.36          2      24297        198          36
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.44       6.36          2      24297        202          36
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Also from the tkprof output, the explain plan and waits - virtually zero waits:
    Rows     Execution Plan
          0  INSERT STATEMENT   MODE: ALL_ROWS
          0   TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
          0    INDEX   MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      library cache lock                             12        0.00          0.00
      gc current block 2-way                         14        0.00          0.00
      db file sequential read                         2        0.01          0.01
      row cache lock                                 24        0.00          0.01
      library cache pin                               2        0.00          0.00
      rdbms ipc reply                                 1        0.00          0.00
      gc cr block 2-way                               4        0.00          0.00
      gc current grant busy                           1        0.00          0.00
    ********************************************************************************
    The statement was executed 2 times. I know from slicing up the trc file that:
    exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
    exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
    If I run just the select portion of the statement, using bind values from exe #2, I get small number of gets (< 10), and < 0.1 secs elapsed.
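    (That is, the select portion run standalone with the exe #2 literal substituted, along these lines:)
    SELECT WIA.NAME, WIA.TEXT_DEFAULT, WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT
      FROM WF_ITEM_ATTRIBUTES WIA
     WHERE WIA.ITEM_TYPE = 'OEOL';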
    If I make the insert into an empty, non-partitioned table, I get :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.01       0.08          0        137         53          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.01       0.08          0        137         53          25
    and same explain plan - using index range scan on WF_Item_Attributes_PK.
    This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.10         10         27        136          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.10         10         27        136          25
    So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
    I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
    further info on the objects concerned:
    query source table :
    WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
    WF_Item_Attributes tbl : non-partitioned, 160 blocks
    insert destination table:
    WF_Item_Attribute_Values:
    range partitioned on Item_Type, and hash sub-partitioned on Item_Key
    Both executions of the insert hit the partition with the most data: 127,691 blocks total; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
    WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
    Bind values:
    exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
    exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
    The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
    thanks and regards
    Ivan

    hi Sven,
    Thanks for your input.
    1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
    2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
    3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
    ============= From DBA_Part_Tables : Partition Type / Count =============
    PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
    RANGE   HASH                 77 APPS_TS_TX_DATA
    1 row selected.
    ============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
    Partition Name       TS Name         High Value           High Val Len
    WF_ITEM1             APPS_TS_TX_DATA 'A1'                            4
    WF_ITEM2             APPS_TS_TX_DATA 'AM'                            4
    WF_ITEM3             APPS_TS_TX_DATA 'AP'                            4
    WF_ITEM47            APPS_TS_TX_DATA 'OB'                            4
    WF_ITEM48            APPS_TS_TX_DATA 'OE'                            4
    WF_ITEM49            APPS_TS_TX_DATA 'OF'                            4
    WF_ITEM50            APPS_TS_TX_DATA 'OK'                            4
    WF_ITEM75            APPS_TS_TX_DATA 'WI'                            4
    WF_ITEM76            APPS_TS_TX_DATA 'WS'                            4
    WF_ITEM77            APPS_TS_TX_DATA MAXVALUE                        8
    77 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_TYPE                                    1
    1 row selected.
    PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
    ============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
    Partition Name       SUBPARTITION_NAME              TS Name         High Value           High Val Len
    WF_ITEM49            SYS_SUBP3326                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3328                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3332                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3331                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3330                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3329                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3327                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3325                   APPS_TS_TX_DATA                                 0
    8 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_KEY                                     1
    1 row selected.
    from DBA_Segments - just for partition WF_ITEM49  :
    Segment Name                        TSname       Partition Name       Segment Type     BLOCKS     Mbytes    EXTENTS Next Ext(Mb)
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3332         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3331         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3330         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3329         TblSubPart        16112    125.875       1007         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3328         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3327         TblSubPart        16224     126.75       1014         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3326         TblSubPart        16208    126.625       1013         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3325         TblSubPart        16128        126       1008         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3332         IdxSubPart        59424     464.25       3714         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3331         IdxSubPart        59296     463.25       3706         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3330         IdxSubPart        59520        465       3720         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3329         IdxSubPart        59104     461.75       3694         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3328         IdxSubPart        59456      464.5       3716         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3327         IdxSubPart        60016    468.875       3751         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3326         IdxSubPart        59616     465.75       3726         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3325         IdxSubPart        59376    463.875       3711         .125
    sum                                                                                               4726.5
    [the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
    The Tablespaces used for all subpartitions are UNIFORM extent mgmt, AUTO segment_space_management; LOCAL extent mgmt.
    regards
    Ivan
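    (For anyone wanting to pull the same per-subpartition figures, a query along these lines against the data dictionary should do it - a sketch, not the exact @q_tabsubpart script used above:)
    select segment_name, partition_name, segment_type, tablespace_name,
           blocks, round(bytes/1024/1024, 3) as mbytes, extents
    from   dba_segments
    where  segment_name in ('WF_ITEM_ATTRIBUTE_VALUES', 'WF_ITEM_ATTRIBUTE_VALUES_PK')
    and    partition_name in (select subpartition_name
                              from   dba_tab_subpartitions
                              where  table_name = 'WF_ITEM_ATTRIBUTE_VALUES'
                              and    partition_name = 'WF_ITEM49');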

  • Poor performance and overheat

    Story
    I've been using arch for the past four months, dual booting with my old Windows XP. As I'm very fond of Flash games and make my own programs with a cross-platform language, I've found few problems with the migration. One of them was the Adobe Flash Player performance, which was stunningly bad. But everyone was saying that was normal, so I left it as is.
    However, one particular error always worried me: a seemingly randomly triggered siren sound coming from the motherboard speaker. Thinking it was an alarm about some fatal kernel error, I had mostly been solving it with reboots.
    But then it happened. While playing a graphics-intensive game on Windows shortly after rebooting from Arch, the same siren sound started. It felt like a slap across the face: it was not a kernel error, it was a motherboard overheat alarm.
    The Problem
    Since the computer was giving overheat signs, I started looking at things from another angle. I noticed that some tasks take unusually long in Arch (e.g. building things from source, Firefox / OpenOffice startup, any graphics-intensive program), especially anything involving the Flash Player.
    A great example is the game Penguinz, which runs flawlessly in Windows but is unbearably slow in Arch - so slow that it alone caused said overheat twice. And while trying to record another Flash game using XVidCap, things went so badly that the game halved its FPS and started ignoring key presses.
    Tech Info
    Dual Core 3.2 processor
    1 gb RAM
    256 mb Geforce FX 5500 video card
    Running Openbox
    Using proprietary NVIDIA driver
    TL;DR: poor performance on some tasks. Flash Player is so slow that it overheats the CPU and makes me cry. It's fine on Windows.
    Off the top of my head I can think of some possible reasons: a bad video driver, an unwanted background application messing things up, known Flash Player performance problems, or an ActionScript Linux/Arch-only bug.
    Where do you think is the problem?

    jwcxz wrote:Have you looked at your process table for any program with abnormal CPU usage?  That seems like the logical place to start.  You shouldn't be getting poor performance in anything with that system.  I have a 2.0GHz Core 2 Duo and an Intel GMA 965 and I've never had any problems with Flash.  It's much better than it used to be.
    Pidgin scared me for a while because it froze for no apparent reason. After fixing this, the process table contains these two:
    %CPU
    Firefox: 80%~100%
    X: 0~20%
    It's a graphics-intensive test, so I think the X usage is normal. It might be some oddity in the Firefox + Linux + Flash combination, maybe a conflict. I'll try another browser.
    EDIT:
    Did a Javascript benchmark to test both systems and browsers.
    Windows XP + Firefox = 4361.4ms
    Arch + Firefox = 5146.0ms
    So it's actually a lot slower without even taking Flash into account. If someone knows a platform-independent benchmark to test both systems completely, and not only the browser, feel free to point it out.
    I think something is already wrong here and the lack of power-saving systems only aggravated the problem, causing the overheating.
    EDIT2:
    Browser performance fixed: migrated to Midori. Flash is still slower than on Windows, but now it's bearable. Pretty neat browser too - it goes better with the Arch Way. It shouldn't fix the temperature, however.
    Applied B's idea, but haven't tested it yet. I'm not in the mood for playing Flash games for two straight hours today.
    Last edited by BoppreH (2009-05-03 04:25:20)

  • Poor performance with WebI and BW hierarchy drill-down...

    Hi
    We are currently implementing a large HR solution with BW as the backend and WebI and Xcelsius as the frontend. As part of this we are experiencing very poor performance when doing drill-down in WebI on a BW hierarchy.
    In general we see acceptable performance during selection of data and traditional WebI filtering; however, when using the BW hierarchy for navigation within WebI, response times increase significantly.
    The general solution setup is as follows:
    1) Business Content version of the personnel administration InfoProvider - 0PA_C01. The InfoProvider contains 30,000 records.
    2) MultiProvider to act as a semantic Data Mart layer in BW.
    3) BEx query to act as the Data Mart query and metadata exchange for BOE. All key figure restrictions and calculations are done in this Data Mart query.
    4) Traditional BO OLAP universe mapped 1:1 to the BEx Data Mart query. No calculations etc. are done in the universe.
    5) WebI report with a limited set of objects included in the WebI query.
    As we are aware that performance is a very subjective issue, we have created several test scenarios with different dataset sizes, various filter criteria and different modeling techniques in BW. Furthermore, we have tried to apply various traditional BW performance tuning techniques, including aggregates, physical partitioning and pre-calculation - all without any luck (pre-calculation doesn't seem to work at all, as WebI apparently isn't using the BW OLAP cache).
    The best result we can get is with a completely stripped WebI report without any variables etc. and a total dataset of 1,000 records transferred to WebI. Even in this scenario we can't get each navigational step (when using drill-down on the Organizational Unit hierarchy - 0ORGUNIT) to perform faster than 15-20 seconds. That is, each navigational step takes 15-20 seconds with only 1,000 records in the WebI cache when using drill-down on the org. unit hierarchy!
    Running the same BEx query from BEx Analyzer with a full dataset of 30,000 records at the lowest level of detail returns 1-2 seconds per navigational step, which rules out a BW modeling issue.
    As our productive scenario obviously involves a far larger dataset, as well as separate data from the CATS and PT InfoProviders, we are very worried about whether we will ever be able to use hierarchy drill-down from WebI.
    The question is whether there are any known performance issues related to the use of BW hierarchy drill-down from WebI and, if so, whether there are any ways to get around them.
    As an alternative we are currently considering changing our reporting strategy by creating several more highly aggregated reports to avoid hierarchy navigation altogether. However, we still need to support specific divisions and their need to navigate the WebI dataset without limitations, which makes this issue critical.
    Hope that you are able to help.
    Thanks in advance
    /Frank
    Edited by: Mads Frank on Feb 1, 2010 9:41 PM

    Hi Henry, thank you for your suggestions, although I don't agree that 20 seconds is pretty good for that navigation step. The same query executed with BEx Analyzer takes only 1-2 seconds to do the drill-down.
    Actions
    Suppress unassigned nodes in RSH1: Magic!! This was the main problem!!
    Tick 'use structure elements' in RSRT: Done.
    Enable query stripping in WebI: Done.
    Upgrade your BW to SP09: Does SP09 include improvements related to this point?
    Use more runtime query filters: Not possible - it is a very simple query.
    Others:
    RSRT combination H-1-3-3-1 (Expand nodes / Permanent Cache BLOB)
    Uncheck preliminary hierarchy presentation in the query; only selected.
    Check "Use query drill" in WebI properties.
    Sorry for this mixed message, but while I was answering I tried what you suggested about suppressing unassigned nodes and it works perfectly. This was causing the bottleneck!! Incredible...
    Thanks a lot
    J.Casas

  • Poor performance and voltage fluctuation.

    I'm running two 280Xs in CrossFire, which I upgraded to from two 6970s. When I'm playing DayZ my FPS never goes above 30; with the 6970s I was well into the 70s. I never changed my video settings when I went from the 6970s to the 280Xs, and for the most part I have a lot of the graphics settings low or disabled. Borderlands 2 is a nightmare on the 280X - I drop down into the 10 fps area, and this never happened on my 6970s.
    During games my core clock fluctuates between 500MHz and 1020MHz. I have ULPS disabled as well.
    My power supply is an Antec HCG-900.
    Happened on all these drivers 14.4, 14.6 Beta, and 14.6 RC (currently installed).

  • If Logical level mappings are blank in an LTS , what does it mean?

    Hi All,
    I was trying to find this out in the forum but couldn't.
    In my LTS, on the Content tab, if I don't specify any "Aggregation content, group by" information for the dimensions associated with my fact table and leave it blank, what does that mean? Would OBIEE treat this as the detail level by default? To test this, I created a logical fact and, for its LTS, didn't associate any logical level for the dimensions. The query didn't give me any error and worked fine, using the correct join details.
    Also, when do we associate an LTS with the Total level for a dimension, and what is the use? As I understand it, this means all measures from that LTS would be calculated at the total level of the dimension, and we can already do that by creating a level-based measure and associating it with the total level of the dimension.
    I'll be thankful if somebody could explain this to me in detail or direct me to relevant documentation or blogs.
    Thanks,
    Ronny

    Hi,
    When you have only one Logical Table Source (LTS):
    If you don't specify the logical levels, by default it is treated as the detail level of the logical levels mapped to the fact table.
    When you have multiple Logical Table Sources (LTS):
    The concept of multiple LTSs comes into the picture when you have multiple sources. The BI server picks the optimum logical source depending upon the fields (columns) selected, and also depending on the number of elements set while creating the dimension hierarchies. Refer to http://gerardnico.com/wiki/dat/obiee/level_number_element
    For example, suppose you have two logical table sources, Fact 1 and Fact 2, and logical levels (dimension hierarchies created for two dimensions, Dim 1 and Dim 2).
    The Fact 1 source's logical level is mapped only to Dim 1, and Fact 2 is mapped to Dim 2.
    (You need to decide which fact source to hit depending upon the dimensions used in the report.)
    Then, in the report, when you select Dim 1 (by means of column selectors/filters), the Fact 1 source will be hit, and the same appears in the query.
    When you select Dim 2 (by means of column selectors/filters), the Fact 2 source will be hit, and the same appears in the query.
    In another situation you might have mapped Fact 1 to Dim 1 at the first level of aggregation and Fact 2 to Dim 1 at the detailed level. Then the BI server picks depending upon the user's request: if the requested data can be pulled from the detailed level (which tends to be optimal), it picks from the second source.
    So it's always better to set the logical levels and check the query to see whether it is hitting the correct sources according to the requirements (see the sketch after this reply).
    Regards,
    MuRam
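    To make the source selection concrete, the logical requests would look roughly like this (a sketch in the same pseudo-SQL style used earlier on this page, with hypothetical column names):
    -- only Dim 1 columns plus the measure requested: the Fact 1 LTS is chosen
    select dim1_column, measure from subject_area;
    -- only Dim 2 columns plus the measure requested: the Fact 2 LTS is chosen
    select dim2_column, measure from subject_area;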

  • Poor performance of Report Writer reports (Special Ledger Library)

    Greetings - We are running into problems with poor performance of reports that are written with the SAP Report Writer. The problem appears to be caused when SAP is using the primary-key index in our Special Purpose ledger (where the reports are generated). The index contains object fields that cannot be added to the report library (COBJNR, SOBJNR, ROBJNR). We have created alternate indices, but they are not being picked up with the Report Writer reports.
    Are there any configurable or technical settings that we can work with in order to force the use of a specific index for a report? It seems logical that SAP would find the most efficient index to use, but with the reports that we are looking at, this does not appear to be the case.
    Any help that can be offered will be greatly appreciated...We are currently using version 4.6C, but are planning an upgrade to ECC 6.0 later this year.
    Thanks in advance -

    Arjun,
    Where / which files contain these parameters? We cannot find them all.
    Tomcat - Java properties; try again (you can tune the values below according to your system memory):
    -XX:PermSize=256m
    -XX:MaxPermSize=256m
    -XX:NewSize=171m
    -XX:MaxNewSize=171m
    -XX:SurvivorRatio=2
    -XX:TargetSurvivorRatio=90
    -XX:+DisableExplicitGC
    -XX:+UseTLAB
    As a general update, it looks like we need to use the monitoring tools that are installed by default; we are now in the process of installing the database etc.
    Cheers

  • N580GTX Poor Performance

    I recently bought a 580GTX Lightning and out of the box I was experiencing fairly poor performance, especially in DX11 games and benchmarks. This was with the MSI OC clocks (832MHz core etc.). I down-clocked to standard 580 values and performance immediately increased to the levels you would expect a 580 card to be capable of. It seemed at the time as if the GPU voltage was set too low...
    While playing around with the card, I flipped the BIOS dip switch to the LN2 setting and found that the clock settings were, by default, the standard 580GTX settings. Performance was in this case again poor, despite the lower clocks. I then increased the clocks to the MSI OC values and performance jumped again to where a 580GTX card should be. Odd.
    Next I flipped the BIOS back to the original, restarted and left the OC at the MSI values. Performance remains excellent.
    At the moment it seems I get good performance when I am using Afterburner and have the "Apply Overclocking at System Startup" option applied.
    My question is why is this? Will this card only work correctly when used in conjunction with Afterburner? Any thoughts on why this is?
    GPU BIOS: 70.10.17.00.06
    PSU: Silverstone Strider Gold 750W
    Nvidia Drivers: 270.61
    OS: Windows 7 SP1

    Quote
    My question is why is this? Will this card only work correctly when used in conjunction with Afterburner? Any thoughts on why this is?
    AB is just a software tool that allows you to manipulate the clocks and voltages, easily. Nothing more.
    Quote
    At the moment it seems I get good performance when I am using Afterburner and have the "Apply Overclocking at System Startup" option applied.
    That setting will apply an overclock that you manually set and then saved as a user profile within Afterburner. If you did not save a user profile, then it will use the same settings as your card's factory clocks, i.e. it will not apply anything.
    Quote
    I recently bought a 580GTX Lightning and out of the box I was experiencing fairly poor performance especially in DX11 games and benchmarks.
    Poor performance is relative. It needs to be quantified, and a measuring standard is needed, as well as comparisons to the same or similar cards to establish a consistent baseline.

  • CRIO Poor Performance - Where have my MIPS gone?

    I have a cRIO based system that is used to control a motor for a particular application. The application has been developed and enhanced over the years and is currently using about 50% of the CPU. The RT Controller is a cRIO-9012. I have recently been asked to add a 1 kHz (or more) loop function to the cRIO application. I can only achieve a maximum loop rate of about 200 Hz. When I told the customer this, he asked how fast my controller was, to which I replied 400 MHz. “Where is all the CPU power going?”, he asked. He's now thinking of replacing the cRIO Controller with an mbed with C code to get the performance he requires, which is a pity since I'd like to continue developing the application in LabVIEW.
    Following on from his question, “where is all the CPU power going?”, I decided to write a simple application to test the cRIO 9012's performance. Below is the code I used to perform the evaluation:
    With just the bottom loop running, which reports CPU load over the cRIO Controller's serial port, I have a CPU load of 7.0%. This is the baseline.
    I then added the "execution" loops shown above the bottom loop, one at a time, and recorded the CPU load. Here are the results:
    1 Loops - 18.3% load (11.3% extra)
    2 Loops - 29.4% load
    3 Loops - 45.5% load
    4 Loops - will not run!
    I have two problems/concerns.
    Concern 1
    The cRIO 9012 has a 400 MHz processor, which has 760 MIPS of processing power. The rate of the simple loop is 2 kHz and each loop takes about 11% of the CPU power. That is, each loop uses up 83.6 MIPS and each loop iteration uses up 41,800 instruction cycles. Where are the 41,800 instructions going? Even if there was a context switch after each loop iteration, this would account for 150 to 200 instruction cycles. Each loop is only doing an integer increment, timing check, compare and branch. These should only take up about 4 instruction cycles (8 if you want to be generous). If this was programmed in C, you could get bare metal performance that allows a single loop rate of something like 40 MHz or with an RTOS something like 2 MHz. Instead, my maximum loop rate is something like 20 kHz.
    Where are the "wasted" 41,600 instructions per loop iteration going? This is only 0.5% efficient!
    Concern 2
    Why does adding the 4th "execution" loop cause the application to halt (or at least not send data over the serial port)?
    I like programming on the desktop using LabVIEW and I like programming the FPGA using LabVIEW. The RT Controller is however becoming an embarrassment. Is it really the case that the best additional loop rate I can add to an existing application that already uses 50% of the cRIO Controller's CPU can only be 200 Hz maximum?

    Thanks for all the feedback.
    MajorTom,
    Changing to timed loops instead of while loops makes the performance worse. For 2 Loops, rather than a CPU load figure of 29.4% (22.4% after removing base load) it shoots up to 78.4% (71.4% after removing base load). That is, it runs about 3 times slower, which takes the "efficiency" down from 0.5% to 0.15% efficient.
    TimothyA,
    I tried making the "execution" loops subVIs (with Preferred Execution System = other 1, with the top level = other 2) and that solved the four "execution" loops problem. Thanks, one of my concerns is now resolved (I'll mark it as such once the conversation quietens down).
    The execution time is still large with the 2 loops taking 29.1% of the CPU, which is the same as before.
    I tried using the "reduced" us wait next multiple CLN.vi, but it appears to have been saved in LabVIEW 2014 and I'm using LabVIEW 2013. Any chance of resaving it as LabVIEW 2013?
    crossrulz,
    Thank you for pointing me to the table in the CompactRIO Developer's Guide. I assume you're talking about Figure 3.5, Priorities and Execution Systems available in LabVIEW Real-Time. I didn't realise that execution systems are limited in the number of threads (it would be nice to get a warning when this happens). This will make for interesting reading and experimentation.
    All,
    I've tried various loop-rate-limiting approaches, including Timed Loops and While Loops with RT Wait Until Next Multiple, RT Wait, Wait Until Next Multiple and Wait, and the best performance comes from the two RT waits. The worst performance was from the Timed Loops.
    In summary, I've solved the problem of how to run more parallel loops, but I still get very poor performance, with each loop iteration taking about 41,800 instructions when it "should" take more like 200 instruction cycles. All I want is to be able to run a loop at at least 1 kHz on my cRIO-9012 when I already have an application that takes up about 50% of the CPU. I have the threat of the code being moved to an mbed using C, which I'm trying to resist. Surely a 400 MHz controller can run a 1 kHz loop without taking up more than 10% of the CPU.

Maybe you are looking for

  • Error while creating Multiple Items Shopping Cart with Asset Assignment

    Hi All, I am creating Shopping Cart with multiple items for different Assets. Example: I have Two Assets (Asset A with the budget of $1000 and Asset B with the Budget of $1500) in the back end system. Asset A assigned to Order 102649996 (value - $100

  • Sharing files with other mac users on iCloud

    May I share files I store on iCloud with other users?

  • Servlets require downloading ? While .jsp files do not ?

    I've deployed my web application and parts of my web application are running servlets which I have declared properly in my web.xml. Currently my JSP files are able to run perfectly, but my servlet classes are loaded correctly, but my web browser

  • Dynamic Logo.

    Hi, I have a requirement where the logo should change dynamically depending upon the customer in the Packing Slip report. Would appreciate it if anyone can share their experience. Thanks.

  • Extract audio for transfer to CD

    Is it possible to extract the audio portion from an imported videotape (a school concert) and burn the audio to a CD? How... from a newbie. Thanks.