Time series analysis in Numbers

Has anyone created a chart showing time series analysis in Numbers? I could not find a way to change the axis to reflect the right data series.

Hi sanjay,
Yes, Numbers has a different style. Instead of a single large, multi-purpose table, Numbers uses several small tables, each with a purpose.
To plot a time series (or any Category chart), the X values must be in a Header Column. Here is a database of measurements taken over time as a tree grows:
That database can be left alone; there's no need to juggle it. You can even lock it to prevent accidental edits.
A table to pull the data and graph it:
Formula in B1:
=Tree Data::B1
Formula in B2 (and Fill Down):
=Tree Data::B2
For the next graph, pull some other data.
(Scatter plots do not require the X data to be in a Header Column. Command-click on each column to choose which to plot.)
Regards,
Ian.

Similar Messages

  • Time series analysis

    I'm studying at university and currently working on a time series analysis using Oracle 10g R2. The aim of my analysis is to compare two time series tables; each table contains two columns, the first holding the date and the second the value (price). The standard functionality within Oracle (including the statistical functions) doesn't support time series analysis.
    I'm searching for PL/SQL code or a script that supports the analysis I'm doing, such as cross-correlation. Any help in this regard is highly appreciated.
    Thanks in advance

    Well, maybe your real problem is more complex, but on the provided dataset, wouldn't this be sufficient?
    with table1 as (
            select DATE '2007-03-30' dt, 72.28 price from dual union all
            select DATE '2007-03-29', 72.15 from dual union all
            select DATE '2007-03-28', 72.13 from dual union all
            select DATE '2007-03-27', 71.95 from dual union all
            select DATE '2007-03-26', 72.00 from dual union all
            select DATE '2007-03-23', 72.00 from dual union all
            select DATE '2007-03-22', 72.02 from dual union all
            select DATE '2007-03-21', 71.13 from dual union all
            select DATE '2007-03-20', 70.75 from dual union all
            select DATE '2007-03-19', 70.38 from dual),
    table2 as (
            select DATE '2007-03-30' dt, 33.28 price from dual union all
            select DATE '2007-03-29', 31.73 from dual union all
            select DATE '2007-03-28', 33.74 from dual union all
            select DATE '2007-03-27', 32.21 from dual union all
            select DATE '2007-03-26', 32.50 from dual union all
            select DATE '2007-03-23', 33.79 from dual union all
            select DATE '2007-03-22', 34.04 from dual union all
            select DATE '2007-03-21', 32.18 from dual union all
            select DATE '2007-03-19', 38.10 from dual union all
            select DATE '2007-03-20', 42.15 from dual)
    select t1.dt, t1.price p1, t2.price p2,
           corr(t1.price, t2.price) over () correlation
    from table1 t1, table2 t2
    where t1.dt = t2.dt
    /
    DT                          P1         P2 CORRELATION
    30.03.2007 00:00:00      72.28      33.28  -.73719325
    29.03.2007 00:00:00      72.15      31.73  -.73719325
    28.03.2007 00:00:00      72.13      33.74  -.73719325
    27.03.2007 00:00:00      71.95      32.21  -.73719325
    26.03.2007 00:00:00         72       32.5  -.73719325
    23.03.2007 00:00:00         72      33.79  -.73719325
    22.03.2007 00:00:00      72.02      34.04  -.73719325
    21.03.2007 00:00:00      71.13      32.18  -.73719325
    20.03.2007 00:00:00      70.75      42.15  -.73719325
    19.03.2007 00:00:00      70.38       38.1  -.73719325
    which shows a rather negative correlation: as prices in table 1 rise, prices in table 2 decrease.
    Best regards
    Maxim
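    For readers outside Oracle, the same Pearson correlation can be recomputed with a short Python sketch (illustrative only; the figures are taken from the dataset above, and the series are assumed to be already matched on date):

    ```python
    # Pearson correlation of the two aligned price series from the example above.
    from statistics import correlation  # stdlib, Python 3.10+

    p1 = [72.28, 72.15, 72.13, 71.95, 72.00, 72.00, 72.02, 71.13, 70.75, 70.38]
    p2 = [33.28, 31.73, 33.74, 32.21, 32.50, 33.79, 34.04, 32.18, 42.15, 38.10]

    r = correlation(p1, p2)
    print(round(r, 8))
    ```

    This matches the CORRELATION column returned by the Oracle `corr(...) over()` analytic function.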

  • Administrator design star schema for "time series analysis"

    Hi all,
    I need to develop a set of dashboards with reports displaying a set of customer properties at
    the last ETL period (this month) and, for these customers, showing their properties in the "past"
    (this month - "n").
    I have a fact table with cust_id and the classic dimensions cust, period, product and so on...
    My question is how to design the model to support these analyses, or
    how to use an Administration Tool function to retrieve a snapshot of my customers in the past.
    Here is a specific user request:
    find the revenue of all customers that have status = 1 this month
    and, only for those customers that had status != 1, show the revenue "in the past".
    Any suggestions?
    Ugo

    http://gerardnico.com/wiki/dat/obiee/function_time

  • How do I add moving averages to my time series graph? Is there a function in Numbers on the iMac?

    I've created a time series graph in Numbers on my iMac, but I need to add moving averages. Is there a function, and if so, where is it? And if not, is there a way to work around it?

    Badunit,
    Here is an example plot, with the data sorted so the most recent data is at the top of the table...
    You can see the moving average (of 20) is plotted from right to left.
    The moving average calculation is wrong here; it should have been calculated and presented from oldest to most recent.
    Here is the same data, with the table sorted with the oldest data at the top.
    The moving average is again plotted from right to left, and shows the correct moving average for the most recent data.
    That is, it is calculated from oldest to most recent, with the last moving-average data point plotted on "today's" date.
    What I want is my table displayed with the most recent data at the top (the top table), and the moving average calculated and displayed as in the bottom graph.
    Edit: thinking about this some more,
    I need an option to tell Numbers to do the moving average calculation from the bottom of the table up, not from the top of the table down.
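    Outside Numbers, the bottom-up calculation described above amounts to: reverse into chronological order, take the trailing average, then reverse back so the result lines up with a newest-first table. A minimal Python sketch (the sample data and window size are made up):

    ```python
    def moving_average_newest_first(values, window):
        """Values are stored newest-first; compute the trailing moving
        average in chronological order, then return it newest-first so
        it lines up with the original table."""
        chrono = values[::-1]  # oldest -> newest
        ma = [
            sum(chrono[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(chrono))
        ]
        return ma[::-1]        # back to newest-first

    # Example: 6 readings, newest first (i.e. 10, 20, ..., 60 over time),
    # with a moving-average window of 3.
    readings = [60, 50, 40, 30, 20, 10]
    print(moving_average_newest_first(readings, 3))
    ```

    The returned list is shorter than the input by window - 1 points, matching how a trailing moving average only becomes defined once a full window of history exists.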

  • Time Series Graph Show Inappropriate Data for Continuous Analysis

    Hi All,
    I have marked Month as the Chronological Key in my BMM layer, but I am still unable to view the data correctly in my time series graph: it shows "Inappropriate Data for Continuous Analysis" when creating the graph. Can anybody help me out with this?
    Thanks

    What data type is your key? The chronological key is required for the time series formulas (AGO etc.).
    The time series chart requires a date or datetime data type to work - perhaps a new column with the first day of the month/period would help?
    Regards,
    Robert

  • SAP HANA One and Predictive Analysis Desktop - Time Series Algorithms

    I have been working on a Proof-of-Concept project linking the SAP Predictive Analysis Desktop application to the SAP HANA One environment.
    I have modeled the data using SAP HANA Studio -- creating Analytic Views, Hierarchies, etc. -- following the HANA Academy videos. This has worked very well for performing historical analysis and reporting through the desktop application.
    However, I cannot get the Predictive Analysis algorithms -- specifically the Time Series algorithms -- to work using the desktop tool. It always errors out and points to the IndexTrace for more information, but it is difficult to pinpoint the exact cause of the issue. The HANA Academy only has videos on Time Series algorithms using SQL statements, which will not work for my user community since they will have to constantly tweak the data and algorithm configuration.
    In my experience so far with Predictive Analysis Desktop and the predictive algorithms, there is a drastic difference between working with local .CSV / Excel files and connecting to a HANA instance. The configuration options for the Time Series algorithms differ depending on the data source, which seems to be causing the issue. For instance, when working with a local file, the Triple Exponential Smoothing configuration allows you to specify which Date field to use for the calculation. Once the data source is switched to HANA, it no longer allows the Date field to be specified. Using the exact same data set, the algorithm works with the local file but fails with HANA.
    From my research thus far, everyone seems to be using PA with local files or running the predictive algorithms directly in HANA using SQL. I cannot find much of anything useful about combining PA Desktop with HANA.
    Does anyone have any experience using the Time Series algorithms in PA Desktop with a HANA instance? Is there any documentation on how to structure the data in HANA so that it can be properly used in PA Desktop?
    HANA Info:
    HANA One Version: Rev 52.1
    HANA Version: 1.00.66.382664
    Predictive Analysis Desktop Info:
    Version: 1.0.11
    Build: 708
    Thanks in advance --
    Brian

    Hi,
    If you use a CSV or XLS data source, you will be using a Native Algorithm or R
    Algorithm in SAP Predictive Analysis.
    When you connect to HANA, SAP Predictive Analysis uses a PAL Algorithm, which runs
    on the HANA server.
    Coming to your question regarding the difference:
    in the SAP PA Native Algorithm, we can provide the Date variable, and the algorithm
    picks the seasonal information from the Date column. Neither R nor SAP HANA PAL
    supports a Date column, so we need to configure the seasonal information in the
    algorithm properties.
    R Properties
    1) Period: the periodicity of the data.
    Monthly: 12
    Quarterly: 4
    Custom: use this for weekly, daily, or hourly data.
    2) Start Year: the starting year.
    The start year is not used by the algorithm when calculating the time series, but it helps
    PA generate the visualization (time series chart) by simulating year and
    periodicity information.
    3) Starting Period:
    if your data is quarterly and your recordings start in Q2, enter 2 as the
    start period.
    Example:
    if the data periodicity is monthly and the data starts in Feb 1979, provide the following:
    Period: 12
    Start Year: 1979
    Start Period: 2
    PAL Properties: same as the properties defined for R.
    Thanks
    Ashok
    [email protected]
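    The Period / Start Year / Starting Period settings described above essentially reconstruct a time axis from positional data. A small Python sketch of how that simulation might work (the function name is hypothetical, not part of SAP PA):

    ```python
    def simulate_periods(n, period, start_year, start_period):
        """Generate (year, period-in-year) labels for n observations.
        E.g. monthly data starting Feb 1979 -> period=12,
        start_year=1979, start_period=2."""
        labels = []
        year, p = start_year, start_period
        for _ in range(n):
            labels.append((year, p))
            p += 1
            if p > period:       # roll over into the next year
                p, year = 1, year + 1
        return labels

    # Monthly data starting Feb 1979, 13 observations:
    print(simulate_periods(13, 12, 1979, 2))
    ```

    The thirteenth observation lands on Feb 1980, which is what the chart axis would show once year and periodicity are configured.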

  • Can Numbers handle time series?

    I have a time series and I want to display the data in a chart. The intervals between the points in time vary. I found no option to display the values in a diagram so that values close in time are displayed close to each other and greater gaps are displayed further apart. Unfortunately, all values are displayed at a constant interval.
    Is there a solution, other than creating a table with all possible dates and inserting values only in rows where data is available?
    Martin

    In the old Numbers you can use an X-Y scatter plot with dates for the X axis. Numbers 3.0 seems to accept the dates but displays them incorrectly on the scatter chart, as large numbers rather than dates.
    Please do give Apple feedback on this, via the menu Numbers > Provide Numbers Feedback.
    SG

  • How do you analyse a binary time series using labview?

    I am new to time series data analysis and LabVIEW. I was wondering if anyone could help me out with analyzing a binary time series (1 or 0 output) using LabVIEW. Specifically, I want to find the period/frequency of an
    oscillating signal. Can I use the Walsh transform? If so, how do I convert 'sequency' to a time period? Are there any beginner's textbooks out there? I would really appreciate it if anyone could help me out with this.

    Your comment about an indicator getting "clogged up with data" doesn't make any sense. The contents of a string or array indicator can get large and slow things down, but a numeric indicator cannot get clogged up. If the data stops making sense, then you are incorrectly reading the instrument and converting that data to a numeric. Given your comments about the device transmit buffer, I suspect you have occasionally set the byte count too low, leaving unread bytes that you then read the next time. As long as the instrument is fairly new, it will send out a termination character (typically EOI) that will terminate the read. You can then set the read count to some arbitrarily high number to ensure you've got the entire transmit buffer contents. It's also possible that you periodically have an error condition where the instrument is sending unexpected information. For example, if it normally sends a floating point number as a result and then it sends an error message string, you might not be interpreting it correctly.
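    On the original question of finding the period of a binary signal: as an alternative to the Walsh transform, the period of a clean 0/1 oscillation can be estimated from its rising edges. A minimal sketch in Python (not LabVIEW, but the same logic maps onto array primitives there; the sample rate and test signal are made-up values):

    ```python
    def estimate_period(signal, sample_rate):
        """Estimate the period (in seconds) of a binary signal by
        averaging the spacing between rising edges (0 -> 1)."""
        edges = [
            i for i in range(1, len(signal))
            if signal[i - 1] == 0 and signal[i] == 1
        ]
        if len(edges) < 2:
            raise ValueError("need at least two rising edges")
        gaps = [b - a for a, b in zip(edges, edges[1:])]
        return (sum(gaps) / len(gaps)) / sample_rate

    # Square wave: 4 samples high, 4 samples low, sampled at 100 Hz,
    # so the true period is 8 samples = 0.08 s.
    sig = [1, 1, 1, 1, 0, 0, 0, 0] * 5
    print(estimate_period(sig, 100))
    ```

    Frequency is then simply 1 divided by the returned period. For noisy signals you would want debouncing or autocorrelation instead of raw edge detection.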

  • Time Series Storage Design

    Hi, I've got the unenviable task of rewriting the data storage back end for a very complex legacy system which analyses time series data for a range of different data sets. What I want to do is bring this data kicking and screaming into the 21st century by putting it into a database. While I have worked with databases for many years, I've never really had to put large amounts of data into one, and certainly never had to make sure I can retrieve large chunks of that data very quickly.
    The data is shaped like this: multiple data sets (about 10, normally), each with up to 100k rows, with each row containing up to 300 data points (a grand total of about 300,000,000 data points). In each data set all rows contain the same number of points, but not all data sets contain the same number of points as each other. I will typically need to access a whole data set at a time, but I need to be able to address individual points (or at least rows) as well.
    My current thinking is that storing each data point separately, while great from an access point of view, probably isn't practical from a speed point of view. Combined with the fact that most operations are performed on a whole row at a time, I think row-based storage is probably the best option.
    Of the row-based storage solutions I think I have two options: multiple columns, or array-based. I'm favouring a single column holding an array of data points, as it fits well with the requirement that different data sets can have different numbers of points. If I have separate columns, I'm probably looking at multiple tables for the data and dynamic table/column creation.
    To make sure this solution is fast I was thinking of using hibernate with caching turned on. Alternatively I've used JBoss Cache with great results in the past.
    Does this sound like a solution that will fly? Have I missed anything obvious? I'm hoping someone might help me check over my thinking before I commit serious amounts of time to this...
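    One way to sanity-check the "array in a single column" idea before committing: pack each row of points into a binary blob and verify the round trip. A sketch in Python (illustrative only; not tied to any particular database, Hibernate, or JBoss Cache):

    ```python
    import struct

    def pack_row(points):
        """Serialize one row of float data points into a compact blob
        suitable for a single BLOB column (8 bytes per double)."""
        return struct.pack(f"<{len(points)}d", *points)

    def unpack_row(blob):
        """Recover the row; the point count is implied by blob length."""
        return list(struct.unpack(f"<{len(blob) // 8}d", blob))

    row = [1.5, -2.25, 300.0, 0.0]
    blob = pack_row(row)
    assert unpack_row(blob) == row
    print(len(blob))  # 8 bytes per point
    ```

    At 300 points per row that is about 2.4 KB per BLOB, which keeps a 100k-row data set to a few hundred MB and lets different data sets carry different point counts without schema changes. The trade-off is that individual points can no longer be addressed in SQL, only whole rows.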

    Hi,
      Time Series Key Figure:
            Basically, a time series key figure is used in Demand Planning only. Whenever you create a key figure and add it to a DP planning area, it is automatically converted into a time series key figure. When you activate the planning area, you activate each key figure of the planning area with the time series planning version.
           There is one more type of key figure, the order series key figure, which is mainly used in an SNP planning area.
    Storage Bucket Profile:
          The SBP is used to create space in liveCache for a periodicity, e.g. from 2003 to 2010. Whenever you create an SBP, it occupies space in liveCache for the respective periodicity, which the planning area can use to store its data. So the storage bucket profile is used for storing the data of the planning area.
    Time/Planning Bucket Profile:
         Basically, the TBP is used to define the periodicity of the data view. If you want to see the data view in yearly, monthly, weekly, or daily buckets, you have to define that in the TBP.
    Hope this helps.
    Regards
    Sujay

  • Error in Source System, Time series does not exist

    Hi Guys,
    I am loading data from an APO system and I am getting the below error after scheduling the InfoPackages. Can you analyze it and let me know your suggestions?
    Error Message: Time series does not exist,
    Error in Source System
    I have pasted the status message below:
    Diagnosis
    An error occurred in the source system.
    System Response
    Caller 09 contains an error message.
    Further analysis:
    The error occurred in Extractor .
    Refer to the error message.
    Procedure
    How you remove the error depends on the error message.
    Note
    If the source system is a Client Workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory, that it is not being processed at the moment, and restart the request.
    Thanks,
    YJ

    Hi,
    You had better search for the notes with the message "Time series does not exist". You will get nearly 18 notes. Go through each note, see its relevance to your problem, and do the needful as mentioned in the note.
    A few notes are:
    528028, 542946, 367951, 391403, 362386.
    With rgds,
    Anil Kumar Sharma .P

  • How to use dynamic time series in formula

    I have Time dimension with active Dynamic Time Series for Y-T-D.
    I'd like to use it in a member formula - e.g. IF (@ISMBR("Y-T-D(Aug)")) - but this doesn't work; I get the error message "Unknown variable or member name".
    Any idea how to work with dynamic time series members in formula?
    Still using Essbase 6.5 X-(.

    As Glenn mentioned, I also don't believe you can reference a DTS member in a formula; you would have to calculate the YTD yourself. Take a look at the @PTD function to calculate period-to-date numbers; I believe this function was available in v6.
    Having said that, we're assuming you want to calculate a PTD number, but your example was based on an IF @ISMBR(). So the question becomes: what are you trying to do? Given what I assume your time dimension looks like, what would be the difference between @ISMBR("Y-T-D(Aug)") and @ISMBR("Aug")?

  • Discoverer 4i - Time Series Data type support

    Does Discoverer 4i support a time-series data type, i.e. the ability to store an entire string of
    numbers representing, for example, daily or weekly data points?
    Thanks & Regards,
    Deepti

    Hi O G-M,
    Each model must contain one numeric or date column that is used as the case series, which defines the time slices that the model will use. The data type of the key time column can be either a datetime data type or a numeric data type. However, the column must contain continuous values, and the values must be unique for each series. The case series for a time series model cannot be stored in two columns, such as a Year column and a Month column. For more information, please see:
    http://msdn.microsoft.com/en-us/library/ms174923(v=sql.100).aspx
    Thanks,
    Eileen
    Eileen Zhao
    TechNet Community Support

  • Median aggregation on various time series

    I have a requirement to compare a measure, the median days to project milestones, across these time brackets: current quarter, previous quarter, previous two quarters, YTD (year to date), YTD excluding the current quarter, previous year, previous two years, and all years up to the previous year.
    I have built a time dimension of All, Year, Quarter, Month, Date. With time functions and logical levels, I can get the median days for most of these brackets, except: previous two quarters, previous two years, YTD excluding the current quarter, and all years (up to 2009). I can get these median-day aggregations using filters on separate requests, but then I cannot get all these fields in one request, side by side.
    Is there a way to build the time dimension so I can use the time function for these "peculiar" time periods?
    Thanks,
    Shining

    It may be worth looking at this post -
    http://epmandbitech.blogspot.com/2010/12/obiee-11g-and-microsoft-analysis.html
    I am not sure what you want to achieve, but it should be possible with time series too.
    Rgds

  • SQL for Time Series Functions AGO and YTD

    When we use a time series function such as AGO or TODATE, OBIEE creates 2 physical queries. One query reads the calendar table. The other query reads the fact table without any date filter in the WHERE clause. Then the results of the 2 queries are stitched together. The query on the fact table returns a lot of rows because there is no filter on date.
    Is there a way to force OBIEE to put a filter on the date when performing the physical query on the fact table when using AGO or TODATE?
    Thanks,
    Travis
    v11.1.1.6

    We do have a date filter on the analysis. We need the analysis to show sales for a certain month and sales for that month a year ago, so we use the AGO function. However, it is really slow because it does a physical query on the sales table without filtering on date and then filters the results of that physical query by the dates from the physical query on the calendar table.
