Semiweekly Time Series Creation for DP

Hi,
I want to create a semiweekly (twice a week) time series for DP.
It would span 2 years (i.e. 1 year in the past and 1 year in the future).
So a total of 208 entries (2 buckets/week × 52 weeks × 2 years) need to be made for this time series.
My understanding of the method is:
1) Create a Fiscal Year Variant.
2) Create a Storage Bucket Profile based on the Fiscal Year Variant.
3) Create a Time Series Object based on the Storage Bucket Profile.
4) In the new Planning Area, provide the Time Series Object.
Correct me if my understanding is wrong.
Also can you provide the T-Code or Menu Path for the first three activities?
Regards,
Vikas

The menu paths:
(1) Fiscal Year Variants
SPRO > APO > Supply Chain Planning > Demand Planning > Basic Settings > Maintain Fiscal Year Variants
(2) Storage Bucket Profiles
SPRO > APO > Supply Chain Planning > Demand Planning > Basic Settings > Define Storage Bucket Profiles
(3) By "time series object" I assume you mean the planning bucket profile:
SPRO > APO > Supply Chain Planning > Demand Planning > Basic Settings > Define Planning Bucket Profiles
(4) Define Planning Area: /n/sapapo/msdp_admin
Here you assign the storage bucket profile.
(5) Define Planning Book
Here you assign the planning bucket profile.
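
As a rough illustration of the bucket arithmetic (2 buckets/week × 52 weeks/year × 2 years = 208), here is a minimal Oracle SQL sketch that generates 208 semiweekly bucket start dates around today; the Monday/Thursday split and the SYSDATE anchor are my assumptions, not APO settings:

-- Generate 208 semiweekly bucket start dates: one year back to one year ahead.
SELECT TRUNC(SYSDATE, 'IW') - 364                         -- Monday, 52 weeks back
       + FLOOR((LEVEL - 1) / 2) * 7                       -- advance one week per pair
       + CASE MOD(LEVEL - 1, 2) WHEN 0 THEN 0 ELSE 3 END  -- 2nd weekly bucket: Thursday
       AS bucket_start
FROM dual
CONNECT BY LEVEL <= 208;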

Similar Messages

  • Problem in creating time series objects for a planning area

    Hi Friends,
    I am working on SCM 4.0 APO. I am trying to create the time series objects for a 'Z' planning area which I have created. This planning area is linked to a planning object structure which contains only three characteristics: 9amatnr, 9alocno & zplnpart. But the creation of the time series objects is taking a long time; the screen just remains the same.
    Has anybody encountered such a problem?
    I request you all to shed some light on what could be wrong or what is causing the delay in creating them.
    All helpful answers will be suitably rewarded.
    Thanks
    mvd

    Hi mvd,
    I have a couple of questions:
    1) how many key figures does your planning area have?
    2) what is the length of your time series horizon?
    3) how many characteristic combinations are currently in the Planning Object Structure, and how many were newly created?
    I am asking because these aspects might affect the running time of the time series creation. Have you also tried creating the time series in the background? How long did it take there?
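    For a rough sense of the volumes involved (illustrative numbers only, not from your system): 10 key figures × 50,000 characteristic combinations × 730 daily storage buckets would mean about 365 million time series values to create, which is why long runtimes are common and background processing is usually preferred.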

  • Time Series Objects for a Planning Area

    Hi all,
    Can anyone let me know why we create Time Series Objects for a Planning Area?
    What are their role & significance?
    Regards,
    Vishal.S.Pandya

    Time series is usually a Demand Planning concept (it is used in SNP as well, but in SNP it is predominantly the order series that plays the main role).
    A time series is a general concept in statistics (and forecasting) wherein the value of a key figure is represented in a time bucket.
    A time series gives you an idea of the gradual change in values over time, and of the relation of the future to the past.
    A planning area in APO (and other tools) tries to represent this as a two-dimensional model, with time on the columns and key figures in the rows. The values that you load into the cells formed by the above are based on the characteristic values you choose from your MPOS, which is linked to the way the values are stored in the planning area.
    The planning area stores data for each key figure in the smallest unit of time (technical storage buckets) and at the lowest level of characteristic value combination.
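    As a loose illustration of that storage model, here is a relational sketch; liveCache is not a SQL database, so this table is only an analogy and all names are invented:

    -- One value per characteristic value combination, key figure and storage bucket.
    CREATE TABLE planning_area_analogy (
      cvc_id       NUMBER       NOT NULL,  -- lowest-level characteristic value combination
      key_figure   VARCHAR2(20) NOT NULL,  -- a row of the planning grid
      bucket_start DATE         NOT NULL,  -- smallest technical storage bucket (a column)
      kf_value     NUMBER,
      PRIMARY KEY (cvc_id, key_figure, bucket_start)
    );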

  • How to use time series functions for this requirement

    We are on OBIEE 11g (11.1.1.5).
    Any pointers and links are really helpful.
    I have a requirement where YTD sales are displayed in a column. We want to compare that number to what it was last year at the same time. I tried to use TODATE and AGO and it did not work.
    || is used for column separation below.
    ProductGroup || YTD Sales Rev||YTD Sales Rev 1 Year Ago||YTD Sales Rev 2 Years Ago
    Climate Solutions || 1.7 mil (1/1/2011 - 7/1/2011) || 1.45 mil (1/1/2010 - 7/1/2010) || 1.15 mil (1/1/2009 - 7/1/2009)
    Thanks in advance
    kris

    Kris,
    Basically you want to combine the two time series functions here, i.e. TODATE & AGO.
    Here you go:
    You can first create a YTD logical column in the BMM, say YTD Sales Rev. Then create another logical column with the calculation AGO(YTD Sales Rev, YEARDIM, 1) or AGO(YTD Sales Rev, MONTHDIM, 12).
    Similarly, you create the other measures. OBIEE 11g has time series functions in Answers as well, but I'd say try to create these in the RPD, as you need to calculate one logical column based on another. Better to push it to the RPD side.
    Hope this helps
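    For reference, a minimal sketch of the two logical column expressions (the TODATE part is my assumption to match the AGO example above; dimension and column names are invented):

    -- 1) YTD logical column ("YTD Sales Rev"):
    --      TODATE("Sales"."Sales Rev", "YearDim"."Year")
    -- 2) YTD one year ago, built on top of column 1:
    --      AGO("Sales"."YTD Sales Rev", "YearDim"."Year", 1)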

  • Dynamic Series creation for Plot Chart

    Hi,
    I am using a plot chart which takes data from an external XML file.
    Based on certain fields in the data, I create 3 different plot series.
    I use a Combo box to show different plot series based on the selected value from it.
    I am placing another Combo box.
    How can I do the following task:
    I need to create some dynamic series based on some node elements in the series for which the chart is currently shown.
    I then create another Combo box to show the above generated plot series based on the selected value from it.


  • Time series for cvc's

    hello
    In any case (whether new CVCs are added or a new period is to be included) we create time series for the planning area. I am not aware why the Adjust Time Series option is provided for the Planning Object Structure at the time of creation of CVCs.
    Any thoughts please
    regards
    KK

    Hi
    Yes, you are correct: we have background jobs that create time series for the planning area, and these execute at whatever frequency we set. But if you create CVCs and check them before the background program has run, we might see an inconsistency. To avoid this, it is always advisable to set the Adjust Time Series Objects indicator, which immediately updates the time series objects for the new CVCs in all planning areas based on this master planning object structure.
    SAP recommends that you always update all planning areas immediately, which is why we should use this indicator.
    Alternatively, you can leave the indicator unselected for new characteristic value combinations and adjust the time series objects at a later date by choosing Adjust Time Series from the context menu for the planning object structure in S&DP Administration. This has some advantages, e.g. you can create a variant and schedule the job to run later, when there is little system activity.
    I hope this helps to answer your question
    Thanks
    Amol

  • Time series functions are not working in OBIEE for ESSBASE data source

    Hi All,
    I am facing a problem in OBIEE: I am getting error messages for measure columns with time series functions (Ago, ToDate and PeriodRolling) in both the RPD and Answers.
    The error is "Target database does not support Ago operation".
    But I am aware that OBIEE supports time series functions for the Essbase data source.
    We are using Hyperion 9.3.1 as the data source and OBIEE 11.1.1.5.0 as the reporting tool.
    Appreciate your help.
    Thanks,
    Aravind

    Hi,
    This is because the time series functions are not supported for fragmented content; see this content from Oracle Support:
    The error occurs due to the fact the fragmented data sources are used on some Time series measures. Time series measures (i.e. AGO) are not supported on fragmented data sources.
    Confirmation is documented in the following guide - Creating and Administering the Business Model and Mapping Layer in an Oracle BI Repository > Process of Creating and Administering Dimensions
    Ago or ToDate functionality is not supported on fragmented logical table sources. For more information, refer to “About Time Series Conversion Functions” on page 197.
    Regards,
    Gianluca

  • Best Partition for Time Series

    Hi All,
    I have the following tables in my DB:
    CREATE TABLE READING_DWR (
      ID         VARCHAR(20) NOT NULL,
      MACHINE_ID VARCHAR(20),
      DATE_ID    NUMBER,
      TIME_ID    NUMBER,
      READING    NUMBER
    );
    CREATE TABLE DATE_DIMENSION (
      DATE_ID         NUMBER NOT NULL,
      DATE_VALUE      DATE   NOT NULL,
      DAY             VARCHAR(10),
      DAY_OF_WEEK     INTEGER,
      DAY_OF_MONTH    INTEGER,
      DAY_OF_YEAR     INTEGER,
      PREVIOUS_DAY    DATE,
      NEXT_DAY        DATE,
      WEEK_OF_YEAR    INTEGER,
      MONTH           VARCHAR(10),
      MONTH_OF_YEAR   INTEGER,
      QUARTER_OF_YEAR INTEGER,
      YEAR            INTEGER
    );
    CREATE TABLE TIME_DIMENSION (
      TIME_ID  NUMBER NOT NULL,
      HOUR     VARCHAR(3),
      MINUTE   VARCHAR(3),
      SECOND   VARCHAR(3),
      INTERVAL NUMBER
    );
    Referential constraints:
    STG_READING(DATE_ID) -> DATE_DIMENSION(DATE_ID)
    STG_READING(TIME_ID) -> TIME_DIMENSION(TIME_ID)
    READING_DWR contains the time series data for a particular machine.
    What is the best way to partition READING_DWR to improve the performance of my select query?

    Thanks for posting the additional information. I think I have a better understanding of what you are trying to do.
    As I suspected partitioning has nothing to do with it.
    >
    Now where the first value is null , i have to get the record from the READING_DWR , where the time is less then 10:00 for a particular machIne
    >
    If I understand what you are trying to do correctly, it is something like this. Please correct anything that is wrong.
    1. READING_DWR is a history table - for each machine_id there is a datetime value and an amount which represents a 'total_to_date' value
    2. STG_READING is a stage table - this table has new data that will be (but hasn't been) added to the READING_DWR table. All data in this table has a later datetime value than any data in the READING_DWR table. You know what the date cutoff is for each batch; in your example the earliest date is 10:00
    3. You need to report on all records from STG_READING (which has 'total_to_date') and determine the 'incremental-value'; that is, the increase of this value from the preceding value.
    4. For the first record (earliest datetime value) in the record set for each machine_id the preceding value will be the value of the READING_DWR table for that machine_id for the record that has the latest datetime value.
    5. Your problem is how to best meet the requirement of step 4 above: that is, getting and using the proper record from the READING_DWR table.
    If the above is correct then basically you need to optimize the 'getting' since you already posted code that uses the LAG (1 record) function to give you the data you need; you are just missing a record.
    So where you show output that was from only the STG table
    >
    Now the output will be
    =======================
    Time Reading lag
    10:00 200 null
    10:15 220 200
    10:20 225 220
    10:30 230 225
    >
    If you include the DWR record (and no other changes) the output might look like
    >
    Time Reading lag
    08:23 185 null
    10:00 200 185
    10:15 220 200
    10:20 225 220
    10:30 230 225
    >
    The above output is exactly what you want, except for the extra first record. I assume you already know how to eliminate one record from a result set.
    So the process for what you need, in pseudo-code, basically boils down to:
    WITH ALL_RECORDS_NEEDED AS (
      SELECT machine_id, last_record_data FROM READING_DWR
      UNION ALL
      SELECT * FROM STG_READING
    )
    SELECT lag_query_goes_here FROM ALL_RECORDS_NEEDED
    Then either ignore or remove the earliest record for each machine_id, since it came from READING_DWR and will have a NULL for the lag value. If you add a flag column to each query to indicate where the data came from (e.g. 'R' for READING_DWR and 'S' for STG_READING) then you can just use the records with a flag of 'S' in a report query or outer query.
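    For concreteness, a runnable version of that pseudo-code might look like this (reading_dwr_v and stg_reading_v are assumed views that expose a proper DATE value, as discussed below; the column names are assumptions):

    WITH all_records_needed AS (
      -- latest history row per machine, flagged 'R'
      SELECT machine_id, reading_dt, reading, 'R' AS src
      FROM (SELECT machine_id, reading_dt, reading,
                   ROW_NUMBER() OVER (PARTITION BY machine_id
                                      ORDER BY reading_dt DESC) AS rn
            FROM reading_dwr_v)
      WHERE rn = 1
      UNION ALL
      -- all staged rows, flagged 'S'
      SELECT machine_id, reading_dt, reading, 'S' AS src
      FROM stg_reading_v
    )
    SELECT machine_id, reading_dt, reading,
           reading - LAG(reading) OVER (PARTITION BY machine_id
                                        ORDER BY reading_dt) AS incremental_value,
           src
    FROM all_records_needed;
    -- In the report or outer query, keep only the rows where src = 'S'.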
    So now the problem is reduced to two things:
    1. Efficiently finding the records needed from the READING_DWR table
    2. Combining the one DWR record with the staging records.
    For #1 since you want the latest date for each machine_id then an index COULD help. You said you have an index
    >
    index on READING_DWR---MACHINE_ID,DATE_ID,TIME_ID
    >
    But for a query to find the latest date you want DATE_ID and TIME_ID to be in descending order.
    The problem here is that you have seriously garbaged up your data by using numbers for dates and times - requiring
    >
    TO_DATE(DATE_ID||''||LPAD(time_id,6,0),'YYYYMMDDHH24MISS'))
    >
    to make it useful.
    This is a VERY BAD IDEA. If at all possible you should correct it. The best way to do that is to use a DATE column in both tables and convert the data to the proper date values when you insert it.
    If that is not possible then you should create a VIRTUAL column using your TO_DATE functionality so that you can index and query the virtual column as if it were a date.
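    A sketch of that virtual-column approach, assuming the NUMBER formats shown above (verify that the exact expression is accepted as deterministic on your Oracle version):

    ALTER TABLE reading_dwr ADD (
      reading_dt DATE GENERATED ALWAYS AS (
        TO_DATE(TO_CHAR(date_id) || LPAD(TO_CHAR(time_id), 6, '0'),
                'YYYYMMDDHH24MISS')
      ) VIRTUAL
    );
    -- Index it (with machine_id leading) so the latest-date lookup per machine is cheap:
    CREATE INDEX reading_dwr_dt_ix ON reading_dwr (machine_id, reading_dt DESC);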
    For #2 (combining the one DWR record with the staging records) you can either just union the two queries together (as in my pseudo-code) or extract a copy of the DWR record and insert it into the staging table.
    In short, query ALL of the DWR records you need (one for each machine_id) separately as a batch and then combine them with the STG records. Don't look them up one at a time like your posted code is trying to do.
    If your process is something like this and perhaps run every 15 minutes
    1. truncate the stage table
    2. run my report
    3. add stage records to the history table
    Then I would modify the process to use the 15-minute 'dead' time between batches to extract the DWR records needed for the next batch into a holding table. Once you do step 3 above (update the history table) you can run this query and have the records preprocessed for your next batch and report.
    I would use a new holding table for this purpose rather than have the staging table serve a double purpose. You never know when you might need to redo the staging table load; this means truncating the table, which would wipe out the DWR staged records.
    Anyway - with all of the above you should be able to get it working and performing.

  • Time Series Storage Design

    Hi, I've got the unenviable task of rewriting the data storage back end for a very complex legacy system which analyses time series data for a range of different data sets. What I want to do is bring this data kicking and screaming into the 21st century by putting it into a database. While I have worked with databases for many years, I've never really had to put large amounts of data into one, and certainly never had to make sure I can get large chunks of that data back very quickly.
    The data is shaped like this: multiple data sets (about 10 normally), each with up to 100k rows, with each row containing up to 300 data points (a grand total of about 300,000,000 data points). In each data set all rows contain the same number of points, but not all data sets contain the same number of points as each other. I will typically need to access a whole data set at a time, but I need to be able to address individual points (or at least rows) as well.
    My current thinking is that storing each data point separately, while great from an access point of view, probably isn't practical from a speed point of view. Combined with the fact that most operations are performed on a whole row at a time, I think row-based storage is probably the best option.
    Of the row-based storage solutions I think I have two options: multiple columns, or array-based. I'm favouring a single column holding an array of data points, as it fits well with the requirement that different data sets can have different numbers of points. If I have separate columns I'm probably into multiple tables for the data and dynamic table/column creation.
    To make sure this solution is fast I was thinking of using Hibernate with caching turned on. Alternatively, I've used JBoss Cache with great results in the past.
    Does this sound like a solution that will fly? Have I missed anything obvious? I'm hoping someone might help me check over my thinking before I commit serious amounts of time to this...
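    If you go the single-array-column route, a minimal relational sketch (Oracle syntax here; the names and the 300-point cap are assumptions taken from your description):

    CREATE TYPE point_array AS VARRAY(300) OF NUMBER;
    /
    CREATE TABLE series_row (
      dataset_id NUMBER NOT NULL,      -- one of the ~10 data sets
      row_no     NUMBER NOT NULL,      -- up to ~100k rows per data set
      points     point_array,          -- up to 300 points stored with the row
      PRIMARY KEY (dataset_id, row_no)
    );

    Reading a whole data set then becomes a single range scan on dataset_id, while addressing an individual point means unpacking one row's array in the application.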

    Hi,
    Time series key figure:
    Basically, a time series key figure is used in Demand Planning only. Whenever you create a key figure & add it to a DP planning area, it is automatically converted into a time series key figure. Whenever you activate the planning area, you activate each key figure of the planning area with the time series planning version.
    There is one more type of key figure, i.e. the order series key figure, which is mainly used in the SNP planning area.
    Storage bucket profile:
    The SBP is used to create space in liveCache for a periodicity, e.g. from 2003 to 2010. Whenever you create an SBP it occupies space in liveCache for the respective periodicity, which the planning area can use to store its data. So the storage bucket profile is used for storing the data of the planning area.
    Time/Planning bucket profile:
    Basically, the TBP is used to define the periodicity of the data view. If you want to see the data view in yearly, monthly, weekly & daily buckets, you define that in the TBP.
    Hope this will help you.
    Regards
    Sujay

  • Time Series initialization dates with fiscal periods!

    Dear Experts
    Problem:
    I cannot initialize the planning area:
    Period YYYY00X is invalid for periodicity P FYV
    Configuration
    Storage Bucket Profile: Week, Month, Quarter, Year, Posting Period (FYV) all checked.
    Horizon
    Start: 24.02.2013
    End: 31.12.2104
    No Time stream defined.
    Fiscal Year Variant (Edate = period end date within the given month; FP = fiscal period, 1-12 in every year):
    2012: Month 1-12; Edate 28, 25, 31, 28, 26, 30, 28, 25, 29, 27, 24, 31
    2013: Month 1-12; Edate 26, 23, 30, 27, 25, 29, 27, 24, 28, 26, 23, 31
    2014: Month 1-12; Edate 25, 22, 29, 26, 24, 28, 26, 23, 27, 25, 22, 31
    2015: Month 1, 2, 4, 5, 5, 7, 8, 8, 10, 10, 11, 12; Edate 31, 28, 4, 2, 30, 4, 1, 29, 3, 31, 28, 31
    Question
    What dates should I enter in the planning area initialization start and end to initialize for the maximum duration, given the settings above?
    I tried a few dozen but none were accepted. For a start I tried the same dates as in the Horizon of the storage bucket profile. But given the kind of error text I get, I cannot decipher what dates I am expected to enter for time series creation.
    Thanks
    BS

    Thanks Mitesh,
    No, it's 2014.
    Here is what worked.
    Storage Bucket Horizon
    Start: 24.02.2013
    End: 22.11.2014
    Time Series Initialization
    Start: 01.02.2013
    End Date: 31.12.2014
    The fiscal year variant is what I pasted above.
    I thought time series could only be initialized for a subset of the periods in the storage bucket profile! This is my first experience of this kind, where the initialization period is larger than the storage horizon. Is this the case? I have always gone by the book, and this is just a chance discovery on my part.
    I was aware, from SAP Notes, of the need to have one more year on either side in the fiscal year variant for it to be used for initialization, hence the range of dates I tried.
    Appreciate your comments on this.
    All dates in dd/mm/yyyy.
    Thanks
    BS

  • Time series error

    Hi friends,
    I got this error. I am new to 11g.
    Can anyone please help me out with this?
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 22040] To use Ago function, the storage level of the query ('[TIMES.TIME_ID]') must be a static level. (HY000)

    Please mark this question ANSWERED and repost it in the OBIS forum, as this is a question about Siebel Analytics, which is now part of Oracle's BI.
    https://forums.oracle.com/forums/category.jspa?categoryID=147
    That error is because you have not set up your time dimension properly. Did you perhaps use the OBIEE extract() function to derive your dimension?
    If so, you should create a view or table instead and the error should go away.
    See this article, 'OBI EE, Time Dimensions and Time-Series Calculations', for an example of how to set up your dimensions properly.
    http://www.rittmanmead.com/2007/04/obi-ee-time-dimensions-and-time-series-calculations/
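    For example, a minimal sketch of such a view (assuming TIMES.TIME_ID is a DATE column, as in Oracle's SH sample schema; adapt the names to your model):

    CREATE OR REPLACE VIEW times_dim_v AS
    SELECT time_id,
           EXTRACT(YEAR FROM time_id) AS cal_year,      -- static year level
           TO_CHAR(time_id, 'YYYY"Q"Q') AS cal_quarter, -- e.g. 2011Q3
           TO_CHAR(time_id, 'YYYY-MM') AS cal_month
    FROM times;
    -- Map this view as the physical source of the time dimension so that each
    -- level key is a real column, giving AGO a static storage level to work with.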

  • Time series and Order series questions

    Hi guys - I need some help in understanding/visualizing some basic APO concepts. I do not want to move further without understanding these concepts completely. I did read SAP Help and a couple of APO books, but none gave me a complete understanding of this very basic concept.
    1. Data is stored in liveCache in 3 different ways: time series, order series and ATP time series. For now I am concentrating on just time series and order series. Can someone help me understand, with an example, how data is stored in time series and how it is stored in order series? I read that data which is not order related is called time series data and data which is order related is called order series data.
    My query is: even in DP, time series data is stored with respect to product and location and is transferred to SNP. In SNP too, data is processed with respect to product and location. So what is the difference between time series data and order series data?
    2. What are time series key figures and what are order series key figures? I read that safety stock, for example, is a time series key figure. Why is it not an order series key figure? What makes a key figure time series or order series? Can someone explain this in detail with an example or numbers?
    3. There is a stock category group in the SNP tab of the location master (LOC3). Stock category should be product related, right? How is this related to location, and what does this field mean in the location master?
    Thanks a lot for your help in advance. Please let me know if I am not clear on any of the questions.

    Hi,
    Time series: Data is stored in buckets with no reference to orders. (If you place the mouse on time series data and right-click to display details, you will not find any order information.)
    Suitable for tactical and aggregated planning; usually used in Demand Planning.
    Prerequisites: 1. You need to create time series objects for the planning area.
    2. When creating the planning area, you should not make any entries for the key figure in the fields InfoCube, category and category group.
    3. When creating the planning area, any entry you make in the field Key figure semantics is prefixed with TS. (Optional entry.)
    Order series: Data is stored in buckets with reference to orders. (If you place the cursor on order series data and right-click to display details, you will find the order details.)
    Useful for operative planning.
    You will have real-time integration with R/3.
    Prerequisites: 1. You need to create time series objects for the planning area (even though you are using order series).
    2. When creating the planning area, specify a category or category group, or enter a key figure semantics with the prefix LC.
    3. When creating the planning area, you should not make an entry for the key figure in the field InfoCube.
    Thanks,
    nandha

  • Time series inconsistency due to change in Fiscal variant/Storage bucket

    Hi All,
    We got into a situation where running the time series consistency check on a DP planning area fails, with the message pointing to 'changes to fiscal year variant'. We are unable to track any changes to the fiscal year variant, though there is a possibility that the storage bucket horizon could have been changed. Is there a way to correct/sync this by running a report?
    We are trying to avoid re-initialization here, though this is an option, for the obvious reason of having to back up the data in production. Is there an alternative solution to fix this inconsistency?
    Cheers!

    Dear Ashok,
    You should never change a FYV when it is used in a storage bucket profile that is used in a planning area and time series objects have already been created. It is no problem to maintain the FYV for an additional time range, but changes to and deletion of existing periods should never be made while the FYV is actively used.
    If you want to make any changes to buckets in your FYV or delete buckets, you should always make a COMPLETE backup of ALL your data from the planning area into an InfoCube, then delete the time series objects for the planning area (and with this all the data in liveCache) and then change the FYV. After that, you can create time series objects again and reload the data from the backup InfoCube into the planning area. If you follow these steps, you will not risk losing any data: the data in the InfoCube is the backup you can reload.
    As some processes check the FYV some time before and after it is used, it is highly recommended to maintain the FYV at least 2-3 years into the past and into the future. E.g. if you create time series objects from 2001 onwards, you should maintain your FYV at least back to 1999, and if you create time series objects until 2006, you should maintain your FYV at least until 2008. You might never experience problems if you do not maintain the FYV over this time range, but some processes do additional checks and then problems can occur.
    Regards,
    Rakesh

  • Error in Creating Time series objects in APO 5.0

    Hi, we are implementing APO 5.0. I am trying to create time series objects for Planning Area 9ASNP05, but I am getting the following runtime error.
    We have used Support Package 10
    Runtime Errors         PERFORM_TOO_MANY_PARAMETERS
    Exception              CX_SY_DYN_CALL_PARAM_NOT_FOUND
    The exception, which is assigned to class 'CX_SY_DYN_CALL_PARAM_NOT_FOUND', was
    not caught in procedure "RSS_TEMPLATE_INSTANTIATE" "(FUNCTION)", nor was it propagated by a RAISING clause.
    Since the caller of the procedure could not have anticipated that the
    exception would occur, the current program is terminated.
    The reason for the exception is: A PERFORM was used to call the routine "INSTANTIATE" of the program
      "GP_MET_RSSG_HEADER_COMMENT".
    This routine contains 15 formal parameters, but the current call
    contains 16 actual parameters.
    Any ideas?

    Hi
    I am getting exactly the same error on SCM 5.0 SR2, running on Windows 2003 64-bit & SQL 2000.
    Conditions for error:
    Using transaction:
    /SAPAPO/MSDP_ADMIN
    Select: Planning Object Structures
    Short Dump occurs if you either:
    1. Attempt to deactivate an active Planning Object Structure
    2. Attempt to create a Characteristic Combination
    Gives a runtime error shortdump PERFORM_TOO_MANY_PARAMETERS
    Error analysis:
        An exception occurred that is explained in detail below.
        The exception, which is assigned to class 'CX_SY_DYN_CALL_PARAM_NOT_FOUND', was
         not caught in
        procedure "RSS_TEMPLATE_INSTANTIATE" "(FUNCTION)", nor was it propagated by a
         RAISING clause.
        Since the caller of the procedure could not have anticipated that the
        exception would occur, the current program is terminated.
        The reason for the exception is:
        A PERFORM was used to call the routine "INSTANTIATE" of the program
         "GP_MET_RSSG_HEADER_COMMENT".
        This routine contains 15 formal parameters, but the current call
        contains 16 actual parameters.
    Has anyone seen this before?
    Did you find a solution, Bhavesh?
    Thanks.

  • Fragmentation with Time series calculation

    Hi,
    I have 3 tables:
    Table 1 for Region 1, Frag Logic: Region = Region 1
    Table 2 for Region 2, Frag Logic: Region = Region 2
    Table 3 for Region 3, Frag Logic: Region = Region 3
    We use fragmentation to query based on the region. Now I have duplicated Table 1 to calculate the previous date for time series:
    Table 1 for Region 1, Frag Logic: Region = Region 1
    Table 1 for Region 1 Prev: ?
    Table 2 for Region 2, Frag Logic: Region = Region 2
    Table 2 for Region 2 Prev: ?
    Table 3 for Region 3, Frag Logic: Region = Region 3
    Table 3 for Region 3 Prev: ?
    After adding the Prev tables I am almost lost. Do I need to add the fragmentation content for the Prev tables as well, or is it not needed since the main and Prev tables are created as aliases from the same base tables? Please share your thoughts.

    Using time series functions on top of fragmentation in OBIEE is identified as a bug:
    http://obieetalk.com/time-series-and-fragmentation
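    One workaround people use (my assumption, not taken from the linked page) is to avoid fragmentation for the time series measures altogether by exposing the regions through one combined physical source, e.g. a UNION ALL view, so the AGO/TODATE measures map to a single, unfragmented logical table source:

    -- region1_sales etc. are hypothetical physical tables standing in for
    -- Table 1/2/3 from the post.
    CREATE OR REPLACE VIEW all_regions_sales_v AS
    SELECT 'Region 1' AS region, s.* FROM region1_sales s
    UNION ALL
    SELECT 'Region 2' AS region, s.* FROM region2_sales s
    UNION ALL
    SELECT 'Region 3' AS region, s.* FROM region3_sales s;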
