Performance Question on Time dimension.

Hi all,
I have a cube with a date dimension, which represents a snapshot date for open orders on the system (there are 5 other dimensions apart from this one); it is updated daily.
Should I:
(1) build all dates for the foreseeable future
(2) add each new snapshot date day by day.
Each seems to have pros and cons: (1) avoids restructuring on every dimension build, but the calculation seems to take longer; (2) restructures the outline every time, which takes longer, but the calculation is quicker.
I'd value anyone's thoughts or experience on this.
Essbase 11.1.1 on Red Hat Linux.
Thanks

Why not just build the dimension ahead of time (every two years?) and incorporate a defrag routine in your data load -- I'm guessing that something is fragmenting your db, although a sparse restructure shouldn't be what fixes that.
This would be easy to test -- try the first approach and look at the .PAG file size, then try the second and look at the same. Remember, a sparse restructure causes Essbase to rewrite the .IND file -- the .PAG file just gets added to as new sparse member data comes in. That behavior should be the same for the .PAG file whether the members exist in advance or not -- Essbase doesn't create blocks, or rewrite existing ones, until there's data.
Or is it possible that you're using Intelligent Calc and Essbase is only calculating the dirty blocks? That sort-of, kind-of makes sense, although I have to wonder why the number of dirty blocks would differ between the two approaches.
Regards,
Cameron Lackpour

Similar Messages

  • Project Analytics (PA) Specific Question regarding Time Dimensions

    Hello PA experts, I am having a difficult time distinguishing between all the date options we are presented with in PA Answers.
    I see GL Cal, Enterprise Cal, Project Cal, and then I see additional Secondary Dates listed as Accrued / Released Date.
    What I am looking for is something basic, but I am not sure which is the right date.
    I need the date when certain invoices were created / hours were entered into our PA system.
    I have brought in all these dates to see which one makes sense, but at this point they all show different dates.
    Any help is appreciated.
    Regards,

    Probably the best way to work this out is to refer to the OBIA 7.9.6 Data Lineage Worksheet available on Metalink, which will show you which EBS date fields the OBIA model date columns refer to. I find this a lot easier than viewing the mappings in Informatica (although still go in there to confirm). It may be that you need to map this extra date column into the model if it's not already there.
    Regards,
    Imran

  • Performance Point Filter Scorecard by Time Dimension (without Time Intelligence)

    Hello,
    I am using PerformancePoint 2010 and want to build a PerformancePoint dashboard with a scorecard. My requirement is to provide a list of years as a filter for a scorecard.
    In the scorecard I have a KPI that I want to filter by the Time Dimension. I can do this when I use "Time Intelligence" and "Time Intelligence Connection", but I want to use the "Member Selection Filter" to filter the scorecard KPIs by year (selected from a list).
    I created a "Member Selection Filter" with a certain Time Dimension and put the filter on a dashboard. In my KPI mapping source I added a new dimension filter and chose the same Time Dimension as in the "Member Selection Filter".
    When I try to connect the "Member Selection Filter" to my Scorecard, I can only choose "Current Date Time". As far as I know, "Current Date Time" has to be used when a Time Intelligence filter is in use.
    How do I connect the Scorecard to the "Member Selection Filter"?
    Thank you very much

    Hi ShebUK,
    I don't think we can achieve this requirement on the front end, but you can discuss this issue at the following forum:
    http://social.technet.microsoft.com/Forums/en-US/home?category=performancepointserver
    I'm not familiar with PerformancePoint Services and am not sure that we can implement a linked dimension to achieve this. A linked dimension is one that exists in one Analysis Services database but is reused in another Analysis Services database of the same version and compatibility level. For more information, please see:
    Define Linked Dimensions: http://msdn.microsoft.com/en-us/library/ms175648.aspx
    Hope this helps. 
    Regards,  
    Elvis Long
    TechNet Community Support

  • Universe Design Question - Time dimension

    Hi,
    Please help me with universe design question here.
    I have to make a line graph:
    Y-axis: time dimension (HH24:MI:SS)
    X-axis: date dimension (YYYYMM)
    The data is like this
    progname end_date
    abcd 2011/01/23 13:01:20
    abcd 2011/01/24 13:30:20
    abcd 2011/01/25 13:34:20
    abcd 2011/01/23 13:34:20
    abcd 2011/01/25 13:45:20
    abcd 2011/01/26 14:34:20
    I need to take the time portion for each month, average it, and plot that on the graph.
    I just wonder how to get the average of a time. The tool seems unable to average times, as a time is not a number.
    I converted the time portion into seconds and did the average, but my requirement is to show a time on the line graph, not seconds.
    Please share some ideas.
    Thanks.

    Hi Afzal,
       I know how to segregate the date part and the time part. I just wonder how to get the average of a time. The tool seems unable to average times, as a time is not a number.
    I converted the time portion into seconds and did the average, but my requirement is to show a time on the line graph, not seconds.
    Please share some ideas.
    Thanks.
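    In case it helps, the conversion can also be pushed down to the SQL level in plain Oracle syntax (a sketch; the table name prog_log is an assumption): average the seconds past midnight per month, then format the average back as HH24:MI:SS.
    -- Average time-of-day per month, formatted back to HH24:MI:SS
    SELECT TO_CHAR(end_date, 'YYYYMM') AS yyyymm,
           TO_CHAR(TRUNC(SYSDATE)
                     + NUMTODSINTERVAL(
                         AVG(TO_NUMBER(TO_CHAR(end_date, 'SSSSS'))), 'SECOND'),
                   'HH24:MI:SS') AS avg_end_time
    FROM   prog_log
    GROUP BY TO_CHAR(end_date, 'YYYYMM');
    Whether the charting layer will accept a string for the Y-axis is another matter; plotting the averaged seconds and using this string as the data label is one option.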

  • Time dimension question

    Hi,
    I just created a time dimension from a table that contains all the info I need. Once it was processed, I went to the browser to finalize the viewing. I have a time period from 2009 to 2025, and once I expand the levels, I see an "Unknown" level right under the year 2025. The table I use for this dimension is not missing any rows at all; all the columns I need are properly populated, with no missing info.
    What can cause that, and how do I suppress this unknown level?
    Thanks

    Hi a,
       Analysis Services has a built-in member for each dimension called “Unknown.” This is to simplify the process of dealing with facts that have the property of Unknown for a particular dimension. If a dimension member comes to the fact table after failing a lookup in the SSIS package and contains a null for the surrogate key, Analysis Services assigns it to this special Unknown member and moves forward.
        The dimension has a property called UnknownMember that describes the usage of this unknown member. It can be set to Visible, Hidden, or None. Next, the dimension attribute with the usage of Key has a property called NullProcessing that is set to UnknownMember. You can suppress the visibility of the Unknown member in this way. Hope this helps.
    Regards
    Venkata Koppula
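    For what it's worth, orphans like that usually trace back to fact keys that fail the dimension lookup; a quick relational check (a sketch -- the FactSales/DimDate table and column names are assumptions) is:
    -- Find fact rows whose date key has no match in the time dimension
    SELECT f.DateKey, COUNT(*) AS orphan_rows
    FROM   FactSales f
           LEFT JOIN DimDate d ON d.DateKey = f.DateKey
    WHERE  d.DateKey IS NULL
    GROUP BY f.DateKey;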

  • Simple performance question

    Simple performance question, put the simplest way possible: assume
    I have an int[][][][][] matrix and a boolean add. The array is several dimensions long.
    When add is true, I must add a constant value to each element in the array.
    When add is false, I must subtract a constant value from each element in the array.
    Assume this is very hot code, i.e. it is called very often. How expensive is the condition checking? I present the two scenarios.
    private void process() {
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        if (add)
                            matrix[i][ii][iii][...] += constant;
                        else
                            matrix[i][ii][iii][...] -= constant;
    }

    private void process() {
        if (add) {
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] += constant;
        } else {
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][...] -= constant;
        }
    }
    Is the second scenario worth a significant performance boost? Without understanding how the compiler generates executable code, it seems that in the first case n^d conditions are checked, whereas in the second, only 1. The second is, however, less elegant, but I am willing to do it for a significant improvement.

    erjoalgo wrote:
    "I guess my real question is, will the compiler optimize the condition check out when it realizes the boolean value will not change through these iterations, and if it does not, is it worth doing that micro optimization?"
    Almost certainly not; the main reason being that
    matrix[i][ii][iii][...] +/-= constant
    is liable to take many times longer than the condition check, and you can't avoid it. That said, Mel's suggestion is probably the best.
    "but I will follow amickr advice and not worry about it."
    Good idea. Saves you getting flamed with all the quotes about premature optimization.
    Winston

  • Create time dimension table in repository without data warehouse

    Hi,
    I want to implement only a BI repository solution for my customer (no data warehousing). Is it possible to transform the data with the repository tools, so that the time columns in the fact tables are categorized by the "time dimension" table?
    To be more explanatory:
    The "Sales" table has the "time of sale" column. It contains the timestamp when the sale was performed. I have imported this table in "physical layer" of the repository. Now I want to create a new "time dimension" table, something like:
    CREATE TABLE dimension_time (
    Day_Key INT NOT NULL PRIMARY KEY,
    Day_Timestamp DATETIME NOT NULL,
    Day_Name NVARCHAR(32) NOT NULL,
    Day_Text NVARCHAR(32) NOT NULL,
    ...
    );
    INSERT INTO dimension_time VALUES (20110101, {d '2011-01-01'}, '1/1', 'January 1', 'Saturday', 0, 6, 1, 1, 185, 1, 201052, 'W52', 'Week 52', 52, 201101, '01', 'January', 1, 7, 1004, 'Winter', 'Winter', 20111, 'Q1', '1st Quarter', 1, 20103, 'Q3', '3rd Quarter', 3, 20111, 'S1', '1st Semester', 1, 20102, 'S2', '2nd Semester', 2, 2011, '2011', '2011', 2010, '10/11', '2010/2011', 0);
    INSERT INTO dimension_time VALUES (20110102, {d '2011-01-02'}, '2/1', 'January 2', 'Sunday', 0, 7, 2, 2, 186, 2, 201052, 'W52', 'Week 52', 52, 201101, '01', 'January', 1, 7, 1004, 'Winter', 'Winter', 20111, 'Q1', '1st Quarter', 1, 20103, 'Q3', '3rd Quarter', 3, 20111, 'S1', '1st Semester', 1, 20102, 'S2', '2nd Semester', 2, 2011, '2011', '2011', 2010, '10/11', '2010/2011', 0);
    and afterwards add a new column to the "sales" fact table for the "time dimension ID", and through the repository populate this column based on the "time of sale" column and the corresponding "time dimension ID".
    I know that an ETL process could do this, but I do not want to go for data warehousing (it is not real-time, needs more resources, etc.).
    Is it possible to perform such an action with the repository only?
    Thank you.

    Hi,
    I can do it, but this would be useful only for creating the "time dimension" table. The "sales" fact table also needs to be altered (so that the "time" column contains not the time value but the ID of the corresponding time in the "time dimension" table).
    I know that in a DW this procedure is done automatically by the ETL process.
    My question is: does the repository have any tool similar to this?
    Thank you.
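    For what it's worth, one common workaround is to avoid altering the fact table and instead derive the key in a database view (or a derived physical column) that is then imported into the physical layer. A minimal sketch in Oracle syntax, assuming the fact table is SALES with a TIME_OF_SALE timestamp and a Day_Key in the YYYYMMDD style used above:
    -- Hypothetical names; derives the time dimension key from the timestamp
    CREATE OR REPLACE VIEW sales_v AS
    SELECT s.*,
           TO_NUMBER(TO_CHAR(s.time_of_sale, 'YYYYMMDD')) AS day_key
    FROM   sales s;
    The join to dimension_time.Day_Key is then defined on the view, with no ETL process and no change to the fact table itself.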

  • Display all dates between date range (Time Dimension left outer join Fact)

    All,
    I have done some searching around this issue, but within all the posts regarding date variables, date prompts and date filtering I haven't seen one exactly answering my issue (maybe one does and I just don't have my head around it correctly yet).
    My report requirement is to allow a user to select a start day and an end day. The report should show all activity between those two days - AND display 0/null on days where there is no activity. That second part is where I am getting hung up.
    The tables in question are:
    TimeDim
    EventFact
    CustomerDim
    My BMM is setup as follows:
    TimeDim left outer join EventFact
    CustomerDim inner join EventFact
    If I run a report selecting DAY from TimeDim and MEASURE1 from EventFact with day range 1/1/2010 - 12/31/2010, I get a record for every day, and it looks perfect because of the left outer join between TimeDim and EventFact.
    But if I add in a field from CustomerDim and select TimeDim.DAY, CustomerDim.CUSTNAME, EventFact.MEASURE1, OBIEE only returns records for the days that have EventFact records.
    This is because TimeDim is still outer joined to EventFact, but adding in CustomerDim makes OBIEE set up an inner join between those tables, which then causes data to be returned only where EventFact data exists.
    There is a way around this in this simple case: define the relationship between CustomerDim and EventFact as an outer join as well. This gives the desired effect (though an outer join is not the true relationship between these two tables), and as I add additional dimensions and additional logical sources to a single dimension in the BMM, it gets complicated and messy.
    Ive also messed with setting the driving table in the relationship, etc.. but it has not given the desired effect.
    Has anyone ever encountered the need to force the display of all dates within a specified range against a fact table that may not have an entry for every date?
    Thanks in advance.
    K
    Edited by: user_K on Apr 27, 2010 11:32 AM

    It worked!!! Even my time drill-downs and date-based filtering still work!
    That is awesome. Never would have thought of that intuitively.
    Now, just need a little help understanding how it works. When I run my report and check the logs I can see that two queries are issued:
    Query 1: Joins the fact table to all the associated dimensions (I even changed all the relationships to inner joins, which is what they truly are) and calculates the original measure. If I copy and paste this query into SQL Developer it runs fine, but it only returns the rows that joined to the time dimension - which is what was happening before. It is correct, but I wanted a record for every time dimension record.
    Query 2: Looks like the following:
    select sum(0)
    from timedim
    where date between <dateprompt1> and <dateprompt2>
    group by month  -- month is the time dimension level specified in Query 1, so it knows to aggregate to the month level as was done in Query 1
    Final question: so what is OBIEE ultimately doing - does it issue these two requests and then perform a full outer join or something to bring them together? I couldn't see anywhere in the log a complete query that I could just run to see a result similar to what I was getting in Answers.
    Thanks for all the help .. Id give more points if I could.
    K
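    For anyone trying to picture what happens to those two queries: the stitched result resembles a left outer join of the filtered time dimension onto the aggregated fact query. A rough sketch in SQL (the table and column names are assumptions; this is not what OBIEE literally generates):
    SELECT t.day, q.custname, NVL(q.measure1, 0) AS measure1
    FROM   timedim t
           LEFT OUTER JOIN (SELECT f.day_key, c.custname,
                                   SUM(f.measure1) AS measure1
                            FROM   eventfact f
                                   JOIN customerdim c ON c.cust_key = f.cust_key
                            GROUP BY f.day_key, c.custname) q
             ON q.day_key = t.day_key
    WHERE  t.day BETWEEN :start_day AND :end_day;
    Query 2's sum(0) over the time dimension is what supplies the missing dates; the BI server then stitches the two result sets together on the time level.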

  • Populating the time dimension in ODI

    I need to populate my time dimension in ODI. I read a solution in this forum suggesting to create a time table/view in the source schema, reverse it in ODI and then use it as the source to populate the time dimension. Is there another way to do this? One way I thought of was to use the ORDERDATE field in my ORDERS table (my source table in Oracle) and map it to my time dimension in SQL Server via an interface. But I also have DUEDATE, SHIPDATE and PAYDATE fields in my ORDERS table, and this approach would mean that I have to map them to the time dimension through separate interfaces as well. I have created a procedure in the source schema (Oracle) and want to use it in ODI to populate the time dimension, but I am not sure if that is possible in ODI. Could anyone help me with this please?
    Regards,
    Neel

    Hi Neelab,
    Sorry for my delay in replying; I had no time these last days...
    To get the four distinct keys from your time dimension, just add four instances of the dimension table to the interface, each one joined to one of the columns.
    I believe that you load your time dimension from some table other than PRJ_TBL_TRANSACTION, because you have the HolidayType column in your time dimension...
    A view is one possible solution to load the time table, but it depends on how the query performs.
    A way to do it in ODI is:
    - Create 4 interfaces, one for each date column, to load 1 single table with 1 single date column. Don't worry about duplicated values at this point; you can just use the "IKM Control Append", which performs better, but check the "Distinct" box (Flow tab) in each interface.
    - Create a last interface with this temp table as source and the time dimension table as target. Here use the "IKM Incremental Update", set the "Update" option to "NO", and check the "Distinct" box.
    As this table will have no more than roughly 7,300 records for the last 20 years, it will be a small table where you shouldn't have performance problems.
    These are some possible solutions, but I would like to add another way of thinking about it.
    From the table that you show here, you have a simple time table with no special features, so let me suggest another way.
    - In the current approach you will join, but you won't get the records that "fail" the join, since they will be excluded if a date does not exist in the time dimension.
    My suggestion:
    - Load the time dimension table from your source table
    - As the PK of the time dimension table, use the "Julian day"
    - In the ODI target fact table (datastore), create 4 reference constraints (one per column) to the time dimension
    - In the interface, do not use the dimension as a source; transform the 4 dates to Julian and let the 4 constraints take care of whether they exist in the dimension table.
    OR
    - Look for the minimum "possible" date at your company
    - Populate your time dimension with every single day from then until a future date (Dec 31, for instance)
    - Create a process to populate the further future dates, executed at an interval you decide (once a year, once a month, as you wish), depending on how far ahead the dates are populated
    - Use the "Julian date" as the PK
    - In the interface, just transform any date to a "Julian date"; it will be in the time dimension, since it is naturally unique.
    You could substitute "YYYYMMDD" for the Julian date; that is a unique value too.
    I presented 2 ways to be considered; each one could be used depending on how important it is for the business to know whether a date was loaded or not.
    Someone could argue that loading the dates from the source, as opposed to pre-loading all dates, helps to find days that failed to load. But as there are 4 source date columns (and we are talking about just one source table so far), a loaded date can easily match a date from a failed load, so there is little value in using the time dimension dates to analyze this possibility.
    I favor the full time dimension load.
    Does that make sense and/or help you?
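    If you go the pre-populated route, a row-per-day load is a one-statement job in Oracle. A sketch, assuming a TIME_DIM(DAY_KEY, DAY_DT) target and the YYYYMMDD key style mentioned above:
    -- Generate one row per day between the two endpoint dates
    INSERT INTO time_dim (day_key, day_dt)
    SELECT TO_NUMBER(TO_CHAR(DATE '2000-01-01' + LEVEL - 1, 'YYYYMMDD')),
           DATE '2000-01-01' + LEVEL - 1
    FROM   dual
    CONNECT BY LEVEL <= DATE '2030-12-31' - DATE '2000-01-01' + 1;
    Adjust the endpoints to your company's minimum date and chosen horizon.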

  • Time Dimension- Wk and Month

    Hi,
    Our user would like to see sales performance by time, which rolls up along different hierarchy paths, with measures that use dynamic time series, such as MTD and YTD.
    TIME:
    1) Year --> Qtr --> Month
    2) Year --> Week of year
    My question is: since I have two different hierarchy paths, should I put these time dimensions in one cube? The cube would then contain the following dimensions: TIMEinMonth, TIMEinWeek, Measures, Prod, Geography. Or should I build two cubes: one with the time dimension in months, and another with the time dimension in weeks?
    It looks like the first alternative has lower space requirements overall, but I can't tag both TIMEinMonth and TIMEinWeek as the "Time" dimension in the OLAP outline. Can I still have YTD for both time dimensions? What would be the best design?
    Thanks.
    Sam

    Why don't you try the two different time hierarchies under the same dimension?
    TIME dimension:
    Time --> Year_Month --> Qtr --> Month
         --> Year_Week --> Week of year
    Do not aggregate Year_Week up to Year_Month, and you will have the Year_Month data at the Time level! This way your cube would contain only the following dimensions: TIME, Measures, Prod, Geography. I have used this solution several times, but honestly I haven't tested the dynamic time series functions much with this architecture.

  • Populating the Time Dimension

    Ok, the Oracle Enterprise Manager was kind enough to automatically create the often-used Time dimension for me. It even created the associated lookup table for me.
    Question:
    Is there a facility/function/feature that populates the week - month - year levels in this new lookup table, or do I have to do it "by hand"?
    Thanks for your time.
    GGA

    Reposted.
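    For what it's worth, if the lookup table has a date column plus empty level columns, the levels can be filled by hand with a single UPDATE. A sketch in Oracle syntax (TIME_LOOKUP and all column names here are guesses):
    -- Derive week/month/year attributes from the date column
    UPDATE time_lookup
    SET    week_of_year = TO_NUMBER(TO_CHAR(day_dt, 'IW')),
           month_name   = TO_CHAR(day_dt, 'FMMonth'),
           year_num     = EXTRACT(YEAR FROM day_dt);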

  • Slicer Time Dimension Issue with Cube Functions

    Hi,
    Hoping someone can help me figure out right approach here.
    Summary:
    Using Excel 2013 connected to an SSAS cube as the data source, and cube functions with slicers to create a dashboard.
    Have following time dimension slicers; Fiscal Year, Fiscal Quarter, Fiscal Month, Fiscal Week & Date, that are used to slice data based on user selection, along
    with a sales measure.
    Below is example of Slicer name and CubeMember function for each:
    Slicer_Fiscal_Year: 
    =CUBEMEMBER("Cube","[Date].[Fiscal Year].&[2015]")
    Slicer_Fiscal_Quarter: 
    =CUBEMEMBER("Cube","[Date].[Fiscal Quarter].[All]")
    Slicer_Fiscal_Month: 
    =CUBEMEMBER("Cube","[Date].[Fiscal Month].&[201408]")
    Slicer_Fiscal_Week: 
    =CUBEMEMBER("Cube","[Date].[Fiscal Week].&[201509]")
    Slicer_Date: 
    =CUBEMEMBER("Cube","[Date].[Date].[All]")
    Problem:
    What I am trying to do is to build a table with cube functions that takes the lowest grain of the slicer time dimension selected, shows the current member, plus
    the prior 7 so I can have an 8 period trending view table that I will build a chart from. In the above example that would mean that it would look at Slicer_Fiscal_Week since that is lowest grain that has an attribute other than All, and then show me the prior
    7 periods. In this case 201509 means Week 9, so I would want to show in table Week 9 back to Week 2. But if Slicer_Fiscal_Week was set to All, along with Slicer_Date, then Fiscal Month would be lowest grain, so I would want to show Fiscal Months from August
    (201408) back to January 2014. I know how to use CubeRankedMember to pull the value from what is selected in the slicer, the problem is figuring out how to pass the lowest grain time dimension so that I can use lag or some other MDX function to get the previous
    periods.
    Any help on this would be greatly appreciated.

    Hello,
    Thank you for your question.
    I am trying to involve someone familiar with this topic to further look at this issue.
    George Zhao
    TechNet Community Support

  • ERROR while loading time dimension table

    I need to load a time dimension from CSV into an Oracle table; while loading, I got the error below.
    My source data type is date and the target is date.
    ODI-1226: Step sample day fails after 1 attempt(s).
    ODI-1240: Flow sample day fails while performing a Loading operation. This flow loads target table W_SAMPLE_DATE.
    ODI-1228: Task SrcSet0 (Loading) fails on the target ORACLE connection Target_Oracle.
    Caused By: java.sql.SQLException: ORA-30088: datetime/interval precision is out of range
    The error occurs while creating the C$ table:
    create table WORKSCHEMA.C$_0W_SAMPLE_DATE (
         C3_ROW_WID     NUMBER(10) NULL,
         C1_CALENDAR_DATE     TIMESTAMP() NULL,
         C2_DAY_DT     TIMESTAMP() NULL
    )

    Check the source data and use the correct conversion function, e.g. TO_DATE(SRC.DATE, 'MM/DD/YYYY'); use NVL if required.
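    A sketch of what that mapping could look like (SRC_ORDERS and the column names are placeholders; adjust the format mask to match the CSV):
    SELECT TO_DATE(NVL(src.calendar_date_txt, '01/01/1900'), 'MM/DD/YYYY') AS calendar_date
    FROM   src_orders src;
    Note also that the TIMESTAMP() with an empty precision in the generated DDL above is likely what raises ORA-30088; making sure the ODI datatype mapping generates TIMESTAMP or TIMESTAMP(6) avoids that.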

  • Creating Time dimension in BW data model. - like seen in logical data model

    Hello all,
    I have been struggling with this thing and I am looking for some help from anyone on this forum.
    We are trying to create a logical data model of our BW system. We are going live next month with the Student module for universities. We have multiple InfoCubes and DSOs, and since there is so much crossover between them, most of the reporting is done on InfoSets.
    One of the things we were considering: is it possible to create something like a common time dimension table for every InfoProvider? Basically, when we provide the reports to the end users, can we give them a drop-down menu that offers a reporting time frame rather than a free date selection?
    Example: can we create drop-down entries like "current month", "last month", "three months ago", "four months ago", "five months ago", "one year ago", "two years ago"? Can we make data slices like these in our cube and deliver them to the end user?
    We have a few date InfoObjects in our cube, like receipt date, decision date, cancellation date and the like.
    Please let me know if any one has done any similar thing, it will be very helpful.
    Thank you so much in advance.

    If you add your common time dimension to your data model, first identify, for each InfoProvider, the time against which "current month" and the other frames should be applied, and map it to your dimension.
    Just a question... are you not using the time dimension in the cubes? Ideally that should be your time dimension linking them all.
    When you use a time dimension that carries "current month", "current year" and so on, you will have to address its historization as well (because the current month now will not be so current after 2 months).
    So in the data load procedure these values need to change every day (meaning drop and reload), plus routines to populate the values based on the reporting date.
    Edited by: hemant vyas on May 6, 2009 1:56 PM
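    The bucketing logic itself is simple to express. In generic SQL terms (just to illustrate the rule such a routine would implement - this is not BW ABAP, and the table/column names are assumed):
    -- Label each record with a relative period computed from today's date
    SELECT time_frame, COUNT(*) AS docs
    FROM  (SELECT CASE MONTHS_BETWEEN(TRUNC(SYSDATE, 'MM'), TRUNC(receipt_date, 'MM'))
                    WHEN 0 THEN 'Current month'
                    WHEN 1 THEN 'Last month'
                    WHEN 2 THEN 'Two months ago'
                    ELSE 'Older'
                  END AS time_frame
           FROM   applications)
    GROUP BY time_frame;
    It also shows why the stored labels go stale: the bucket depends on today's date, hence the daily drop-and-reload the routine has to perform.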

  • How to know on which time dimension level we are ?

    Hello,
    I would like to know: is there a variable or a means of knowing dynamically which time dimension level we are on, in order to use that in a CASE WHEN clause?
    By using a sort of aggregation table in which one of the columns contains the name of the level, I can know which level I am on, but I can't use that for drill-down. What I mean:
    Tab1 :
    'Year' as typelevel, year
    Tab2 :
    'Month' as typelevel, year, month
    In BMM, I have made one logical table with as Source tab1 and tab2 and as columns typelevel, year and month.
    tab1 has in content column the year level
    tab2 has in content column the month level.
    So when in Answers I retrieve
    typelevel, year
    the result is : 'Year', 2008
    and when I request : typelevel, year, month
    the result is : 'Month', 2008, 1
    But if I want to drill from year to month in order to have :
    'Year', 2008
    and then after drill
    'Month', 2008, 1
    it is impossible, as a filter on typelevel='Year' is added at the month level, so it retrieves 0 rows.
    If someone has an idea on how to do that it would be very great.
    Thanks in advance for your help.

    Hi Supriya,
    OOTB I think you can use SharePoint designer, but I would suggest  custom code to iterate to all pages, and get the lists that are associated with these pages.
    http://stackoverflow.com/questions/633633/sharepoint-how-can-i-find-all-the-pages-that-host-a-particular-web-part
    another one would be if those lists were never used and you can check for list with empty data.
    I would use Get-SPLists to get all of the lists to check for zero items.
    http://blogs.technet.com/b/heyscriptingguy/archive/2010/09/15/use-windows-powershell-to-manage-lists-in-sharepoint-2010.aspx
     http://sharepointrelated.com/2011/11/28/get-all-sharepoint-lists-by-using-powershell/
    Hope this helps!
    Ram - SharePoint Architect
    Blog - SharePointDeveloper.in
    Please vote or mark your question answered, if my reply helps you
