Time series analysis

I'm studying at university and currently working on a time series analysis with Oracle 10g R2. The aim of my analysis is to compare two time series tables; each table contains two columns, the first holding the date and the second the value (price). Oracle's standard functionality (including the statistical functions) doesn't cover the time series analysis I need.
I'm looking for PL/SQL code or a script that supports analyses such as cross-correlation. Any help in this regard is highly appreciated.
Thanks in advance

Well, maybe your real problem is more complex, but on the provided dataset, would this not be sufficient?
SQL> with table1 as (
        select DATE '2007-03-30' dt, 72.28 price from dual union all
        select DATE '2007-03-29', 72.15 from dual union all
        select DATE '2007-03-28', 72.13 from dual union all
        select DATE '2007-03-27', 71.95 from dual union all
        select DATE '2007-03-26', 72.00 from dual union all
        select DATE '2007-03-23', 72.00 from dual union all
        select DATE '2007-03-22', 72.02 from dual union all
        select DATE '2007-03-21', 71.13 from dual union all
        select DATE '2007-03-20', 70.75 from dual union all
        select DATE '2007-03-19', 70.38 from dual),
     table2 as (
        select DATE '2007-03-30' dt, 33.28 price from dual union all
        select DATE '2007-03-29', 31.73 from dual union all
        select DATE '2007-03-28', 33.74 from dual union all
        select DATE '2007-03-27', 32.21 from dual union all
        select DATE '2007-03-26', 32.50 from dual union all
        select DATE '2007-03-23', 33.79 from dual union all
        select DATE '2007-03-22', 34.04 from dual union all
        select DATE '2007-03-21', 32.18 from dual union all
        select DATE '2007-03-20', 42.15 from dual union all
        select DATE '2007-03-19', 38.10 from dual)
     select t1.dt, t1.price p1, t2.price p2,
            corr(t1.price, t2.price) over () correlation
     from table1 t1, table2 t2
     where t1.dt = t2.dt
     /
DT                          P1         P2 CORRELATION
30.03.2007 00:00:00      72.28      33.28  -.73719325
29.03.2007 00:00:00      72.15      31.73  -.73719325
28.03.2007 00:00:00      72.13      33.74  -.73719325
27.03.2007 00:00:00      71.95      32.21  -.73719325
26.03.2007 00:00:00         72       32.5  -.73719325
23.03.2007 00:00:00         72      33.79  -.73719325
22.03.2007 00:00:00      72.02      34.04  -.73719325
21.03.2007 00:00:00      71.13      32.18  -.73719325
20.03.2007 00:00:00      70.75      42.15  -.73719325
19.03.2007 00:00:00      70.38       38.1  -.73719325

which shows a rather negative correlation: as prices in table 1 rise, prices in table 2 tend to decrease.
Best regards
Maxim
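The CORR analytic above returns only the lag-0 Pearson coefficient. The cross-correlation the original poster asked about also looks at shifted alignments of the two series; a minimal sketch of that computation outside the database (function names are my own, series assumed equal-length and date-aligned):

```python
from statistics import mean

def pearson(xs, ys):
    # Plain Pearson correlation coefficient of two equal-length series.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def cross_correlation(xs, ys, max_lag):
    # Correlate xs against ys shifted by each lag in [-max_lag, max_lag];
    # lag 0 reproduces what Oracle's CORR(...) OVER () computes.
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = xs[lag:], ys[:len(ys) - lag]
        else:
            a, b = xs[:len(xs) + lag], ys[-lag:]
        out[lag] = pearson(a, b)
    return out
```

The same lag-k pairing could also be expressed in SQL with the LAG/LEAD analytic functions before applying CORR.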

Similar Messages

  • Time series analysis in Numbers

    Has anyone done any chart showing time series analysis in Numbers? I could not find a way to change the axis to reflect the right data series. 

    Hi sanjay,
    Yes, Numbers has a different style. Instead of a single large, multi-purpose table, Numbers uses several small tables, each with a purpose.
    To plot a time series (or any Category graph) the X values must be in a Header Column. Here is a database of measurements over time as a tree grows:
    That database can be left alone. No need to juggle with it. You can even lock it to prevent accidental edits.
    A table to pull data and graph them:
    Formula in B1
    =Tree Data::B1
    Formula in B2 (and Fill Down)
    =Tree Data::B2
    Next graph, pull some other data
    (Scatter Plots do not require X data to be in a Header Column. Command click on each column to choose.)
    Regards,
    Ian.

  • Administrator design star schema for "time series analysis"

    Hi all,
    I need to develop a set of dashboards with reports that display a set of customer properties at
    the last ETL period (this month) and, for these customers, show their properties in the "past"
    (this month - "n").
    I have a fact table with cust_id and the classic dimensions cust, period, product and so on...
    My question is how to design the model for these analyses, or whether to
    use an Administrator function to retrieve a snapshot of my customers in the past.
    Here is a specific user request:
    Find all revenue for customers that this month have status = 1
    and, only for those customers, show the revenue "in the past" when they had status != 1.
    Any suggestion?
    Ugo

    http://gerardnico.com/wiki/dat/obiee/function_time

  • Time Series Graph Show Inappropriate Data for Continuous Analysis

    Hi All,
    I have marked Month as the chronological key in my BMM layer, but I still cannot view the data correctly in my time series graph: it shows "Inappropriate data for continuous analysis" when I create the graph. Can anybody help me out with this?
    Thanks

    What data type is your key? The chronological key is required for the time series formulas (AGO etc.).
    The time series chart requires a date or datetime data type to work - perhaps a new column with the first of the month/period would help?
    Regards,
    Robert
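The suggested fix above - deriving a first-of-month date column from a month key - is a one-line transformation. A minimal sketch, assuming the key is a numeric YYYYMM value (the actual key format in the poster's model is unknown):

```python
from datetime import date

def month_key_to_date(key):
    # Convert a numeric YYYYMM key (e.g. 201403) to the first day of
    # that month, giving the time series chart a true date axis.
    year, month = divmod(int(key), 100)
    return date(year, month, 1)
```

The equivalent derivation can be done in the physical or logical layer with date functions instead of in client code.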

  • SAP HANA One and Predictive Analysis Desktop - Time Series Algorithms

    I have been working on a Proof-of-Concept project linking the SAP Predictive Analysis Desktop application to the SAP HANA One environment.
    I have modeled that data using SAP HANA Studio -- created Analytic views, Hierarchies, etc. -- following the HANA Academy videos.  This has worked very well in order to perform the historical analysis and reporting through the Desktop Application. 
    However, I cannot get the Predictive Analysis algorithms -- specifically the Time Series algorithms -- to work appropriately using the Desktop tool. It always errors out and points to the IndexTrace for more information, but it is difficult to pinpoint the exact cause of the issue.  The HANA Academy only has videos on Time Series Algorithms using SQL statements which will not work for my user community since they will have to constantly tweak the data and algorithm configuration. 
    In my experience so far with Predictive Analysis desktop and the Predictive Algorithms, there is a drastic difference between working with Local .CSV / Excel files and connecting to a HANA instance.  The configuration options for using the Time Series Algorithms are different depending upon the data source, which seems to be causing the issue.  For instance, when working with a local file, the Triple Exponential Smoothing configuration allows for the specification of which Date field to use for the calculation.  Once the data source is switched to HANA, it no longer allows for the Date field to be specified.  Using the exact same data set, the Algorithm using the local file works but the HANA one fails. 
    From my research thus far, everyone seems to be using PA with local files or running the Predictive Algorithms directly in HANA using SQL. I cannot find much of anything useful related to combining PA Desktop with HANA. 
    Does anyone have any experience utilizing the Time Series Algorithms in PA Desktop with a HANA instance?   Is there any documentation of how to structure the data in HANA so that it can be properly utilized in PA desktop? 
    HANA Info:
    HANA One Version: Rev 52.1
    HANA Version: 1.00.66.382664
    Predictive Analysis Desktop Info:
    Version: 1.0.11
    Build: 708
    Thanks in advance --
    Brian

    Hi,
    If you use a CSV or XLS data source, you will be using a Native Algorithm or R Algorithm in SAP Predictive Analysis.
    When you connect to HANA, SAP Predictive Analysis uses a PAL Algorithm, which runs on the HANA server.
    Coming to your question regarding the difference:
    In a SAP PA Native Algorithm we can provide the Date variable, and the algorithm
    picks the seasonal information from the Date column. Neither R nor SAP HANA PAL
    supports a Date column, so we need to configure the seasonal information in the
    algorithm properties.
    R Properties
    1) Period : you need to mention the periodicity of the Data
    Monthly : (12)
    Quarter : (4)
    Custom : you can use it for week or Daily or hourly.
    2) Start Year: you need to mention the start year.
    The start year is not used by the algorithm when calculating the time series, but it helps
    PA generate the visualization (time series chart) by simulating year and
    periodicity information.
    3) Starting Period:
    if your data is quarterly and your recordings start from Q2, mention 2 as the
    start period.
    Example.
    If the data periodicity is monthly and the data starts from Feb 1979, we need to provide the following information:
    Period: 12
    Start year: 1979
    start Period: 2
    PAL Properties: same as the properties defined for R.
    Thanks
    Ashok
    [email protected]
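The Period / Start Year / Starting Period settings described above simply tell the tool how to label a flat series of observations. A small sketch of that mapping (my own function, not part of SAP PA), using the Feb 1979 monthly example:

```python
def period_labels(n, period, start_year, start_period):
    # Generate (year, period-within-year) labels for n observations,
    # mirroring the Period / Start Year / Starting Period settings:
    # e.g. period=12, start_year=1979, start_period=2 labels monthly
    # data beginning in Feb 1979.
    labels = []
    year, p = start_year, start_period
    for _ in range(n):
        labels.append((year, p))
        p += 1
        if p > period:         # roll over into the next year
            p, year = 1, year + 1
    return labels
```

This is why the algorithm itself does not need the start year: it only affects how the chart axis is labeled, not the smoothing calculation.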

  • How do you analyse a binary time series using labview?

    I am new to time series data analysis and LabVIEW. I was wondering if anyone could help me out with analyzing a binary time series (1 or 0 output) using LabVIEW. Specifically, I want to find the period/frequency of an oscillating signal. Can I use the Walsh transform? If so, how do I convert 'sequency' to a time period? Are there any beginner's textbooks out there? I would really appreciate it if anyone could help me out with this.

    Your comment about an indicator getting "clogged up with data" doesn't make any sense. The contents of a string or array indicator can get large and slow things down, but a numeric indicator cannot get clogged up. If the data stops making sense, then you are incorrectly reading the instrument and converting that data to a numeric. Given your comments about the device transmit buffer, I suspect you have occasionally set the byte count too low, leaving unread bytes that you then read the next time. As long as the instrument is fairly new, it will send a termination character (typically EOI) that will terminate the read. You can then set the read count to some arbitrarily high number to ensure you've got the entire transmit buffer contents. It's also possible that you periodically have an error condition where the instrument is sending unexpected information. For example, if it normally sends a floating point number as a result and then it sends an error message string, you might not be interpreting it correctly.
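For the original question - finding the period of a 1/0 signal - autocorrelation is a common alternative to the Walsh transform, and its lag axis is already in sample units (multiply by the sampling interval to get seconds). A minimal sketch in Python (LabVIEW ships equivalent autocorrelation VIs):

```python
def estimate_period(bits):
    # Estimate the period (in samples) of a 0/1 signal: the lag with
    # the highest normalized autocorrelation, lag 0 excluded.
    n = len(bits)
    m = sum(bits) / n
    centered = [b - m for b in bits]          # remove the DC offset
    denom = sum(c * c for c in centered)
    best_lag, best_r = None, float("-inf")
    for lag in range(1, n // 2):
        r = sum(centered[i] * centered[i + lag]
                for i in range(n - lag)) / denom
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag
```

With a 1 kHz sampling rate, an estimated period of 3 samples would mean 3 ms, i.e. roughly 333 Hz.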

  • Error in Source System, Time series does not exist

    Hi Guys,
    I am loading data from an APO system and I am getting the error below after scheduling the InfoPackages. Can you analyze it and let me know your suggestions?
    Error Message: Time series does not exist,
    Error in Source System
    I have pasted the status message below:
    Diagnosis
    An error occurred in the source system.
    System Response
    Caller 09 contains an error message.
    Further analysis:
    The error occurred in Extractor .
    Refer to the error message.
    Procedure
    How you remove the error depends on the error message.
    Note
    If the source system is a Client Workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory, that it is not being processed at the moment, and restart the request.
    Thanks,
    YJ

    Hi,
    You should search for the notes with the message "Time series does not exist". You will get nearly 18 notes. Go through each note, see its relevance to your problem, and do the needful as mentioned in the note.
    A few of the notes are:
    528028, 542946, 367951, 391403, 362386.
    With rgds,
    Anil Kumar Sharma .P

  • Median aggregation on various time series

    I have a requirement to compare a measure, project milestone median days, across these time brackets: current quarter, previous quarter, previous two quarters, YTD (year to date), YTD excluding the current quarter, previous year, previous two years, and all years up to the previous year.
    I have built a time dimension of All, Year, Quarter, Month, Date. With time functions and logical levels I can get median days for most of these brackets, except: previous two quarters, previous two years, YTD excluding the current quarter, and all years (up to 2009). I can get these median aggregations by using filters on separate requests, but then I cannot get all the fields in one request, side by side.
    Is there a way to build the time dimension so I can use the time functions for these "peculiar" time periods?
    Thanks,
    Shining

    It may be worth looking at this post -
    http://epmandbitech.blogspot.com/2010/12/obiee-11g-and-microsoft-analysis.html
    I am not sure what you want to achieve, but it should be possible against time series too.
    Rgds
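Each of the "peculiar" brackets above is just a median over a date range, so one fallback (outside OBIEE) is to evaluate each bracket as a half-open interval over the same fact rows and present the results side by side. A minimal sketch (my own helper, illustrative column shapes):

```python
from statistics import median
from datetime import date

def bracket_median(rows, start, end):
    # Median of the measure for rows whose date falls in [start, end).
    # Calling this once per bracket (previous two quarters, YTD minus
    # current quarter, ...) yields the side-by-side columns requested.
    vals = [v for d, v in rows if start <= d < end]
    return median(vals) if vals else None
```

Half-open intervals make brackets like "YTD excluding the current quarter" easy: end the range at the first day of the current quarter.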

  • Time Series Storage Design

    Hi, I've got the unenviable task of rewriting the data storage back end for a very complex legacy system which analyses time series data for a range of different data sets. What I want to do is bring this data kicking and screaming into the 21st century by putting it into a database. While I have worked with databases for many years, I've never really had to put large amounts of data into one, and certainly never had to make sure I can get large chunks of that data back very quickly.
    The data is shaped like this: multiple data sets (about 10 normally) each with up to 100k rows with each row containing up to 300 data points (grand total of about 300,000,000 data points). In each data set all rows contain the same number of points but not all data sets will contain the same number of points as each other. I will typically need to access a whole data set at a time but I need to be able to address individual points (or at least rows) as well.
    My current thinking is that storing each data point separately, while great from an access point of view, probably isn't practical from a speed point of view. Combined with the fact that most operations are performed on a whole row at a time, I think row-based storage is probably the best option.
    Of the row based storage solutions I think I have two options: multiple columns and array based. I'm favouring a single column holding an array of data points as it fits well with the requirement that different data sets can have different numbers of points. If I have separate columns I'm probably into multiple tables for the data and dynamic table / column creation.
    To make sure this solution is fast I was thinking of using hibernate with caching turned on. Alternatively I've used JBoss Cache with great results in the past.
    Does this sound like a solution that will fly? Have I missed anything obvious? I'm hoping someone might help me check over my thinking before I commit serious amounts of time to this...
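The single-array-column option described above amounts to serializing each row of points into one blob. A minimal round-trip sketch (my own helpers; the actual ORM/column type would depend on the database chosen):

```python
import array

def pack_row(points):
    # Serialize one row of float data points into a compact byte blob
    # (8 bytes per point) suitable for a single BLOB/array column.
    return array.array("d", points).tobytes()

def unpack_row(blob):
    # Restore the row; individual points remain addressable by index
    # after unpacking, satisfying the per-point access requirement.
    a = array.array("d")
    a.frombytes(blob)
    return a.tolist()
```

At 8 bytes per point, 300 million points is roughly 2.4 GB of raw payload, which is a useful sanity check when sizing caches.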

    Hi,
      Time Series Key Figure:
            Basically a time series key figure is used in Demand Planning only. Whenever you create a key figure and add it to a DP planning area, it is automatically converted into a time series key figure. When you activate the planning area, you activate each key figure of the planning area with the time series planning version.
           There is one more type of key figure, the order series key figure, which is mainly used in an SNP planning area.
    Storage Bucket Profile:
          The SBP is used to create space in liveCache for a periodicity, e.g. from 2003 to 2010. When you create an SBP, it occupies space in liveCache for the respective periodicity, which the planning area can use to store its data. So the storage bucket profile is used for storing the data of the planning area.
    Time/Planning Bucket Profile:
         Basically the TBP is used to define the periodicity of the data view. If you want to see the data view in yearly, monthly, weekly and daily buckets, you have to define that in the TBP.
    Hope this will help you.
    Regards
    Sujay

  • SQL for Time Series Functions AGO and YTD

    When we use a time series function such as AGO or TODATE, OBIEE creates 2 physical queries. One query reads the calendar table. The other query reads the fact table without any date filter in the WHERE clause. Then the results of the 2 queries are stitched together. The query on the fact table returns a lot of rows because there is no filter on date.
    Is there a way to force OBIEE to put a filter on the date when performing the physical query on the fact table when using AGO or TODATE?
    Thanks,
    Travis
    v11.1.1.6

    We do have a date filter on the analysis. We need the analysis to show sales for a certain month and sales for that month a year ago, so we use the AGO function. However, it is really slow because it does a physical query on the sales table without filtering on date and then filters the results of that physical query by the dates from the physical query on the calendar table.
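The behaviour the poster wants - fetch only the requested month and its year-ago counterpart, rather than scanning the whole fact table and filtering afterwards - can be illustrated with a simple keyed lookup. A sketch under assumed shapes (a mapping keyed by (year, month); not OBIEE's actual query plan):

```python
def sales_with_year_ago(monthly_sales, year, month):
    # monthly_sales maps (year, month) -> sales total. Looking up only
    # the two keys needed is the "filtered" plan; AGO as described above
    # instead reads every period and stitches results afterwards.
    return (monthly_sales.get((year, month)),
            monthly_sales.get((year - 1, month)))
```

In SQL terms this corresponds to a self-join of the fact table with the date filter applied to both sides, which is a common hand-written workaround when AGO performs poorly.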

  • Time-series / temporal database - design advice for DWH/OLAP???

    I am faced with the task of designing a DWH as effectively as possible for time series data analysis. Are there special design advices or best practices available, or can ordinary DWH/OLAP design concepts be used? I ask because I have seen the term 'time series database' in the academic literature (but without further references), and I have also heard the term 'temporal database' (which, as far as I have heard, is not just a matter of logging data changes etc.).
    So it would be very nice if someone could give me some hints about this type of design problem.

    Hi Frank,
    Thanks for that - after 8 years of working with Oracle Forms and then the same again with ADF, I still sometimes find it hard with ADF to understand the best approach to a particular problem - there are so many different ways of doing things / where to put the code / how to call it, etc.! Things seemed so much simpler back in the Forms days!
    Chandra - thanks for the information but this doesn't suit my requirements - I originally went down that path thinking/expecting it to be the holy grail but ran into all sorts of problems as it means that the dates are always being converted into users timezone regardless of whether or not they are creating the transaction or viewing an earlier one. I need the correct "date" to be stored in the database when a user creates/updates a record (for example in California) and this needs to be preserved for other users in different timezones. For example, when a management user in London views that record, the date has got to remain the date that the user entered, and not what the date was in London at the time (eg user entered 14th Feb (23:00) - when London user views it, it must still say 14th Feb even though it was the 15th in London at the time). Global settings like you are using in the adf-config file made this difficult. This is why I went back to stripping all timezone settings back out of the ADF application and relied on database session timezones instead - and when displaying a default date to the user, use the timestamp from the database to ensure the users "date" is displayed.
    Cheers,
    Brent

  • Error: Inappropriate data for continuous time axis while displaying in Time-series Line in OBIEE

    I wanted to use a time series line in my analysis, but I couldn't work out what kind of data is required for this graph. It shows "inappropriate data for continuous time axis". I had total amount on the vertical axis and time (year) on the horizontal axis.

    Check this, and if it is not helpful then follow the 2 links at the bottom:
    OBIEE - Time Series Conversion Functions : AGO and TODATE | GerardNico.com (BI, OBIEE, OWB, DataWarehouse)
    ~ http://cool-bi.com

  • Error in Creating Time series objects in APO 5.0

    Hi, we are implementing APO 5.0. I am trying to create time series objects for Planning Area 9ASNP05, but I get the following runtime error.
    We are on Support Package 10.
    Runtime Errors         PERFORM_TOO_MANY_PARAMETERS
    Exception              CX_SY_DYN_CALL_PARAM_NOT_FOUND
    The exception, which is assigned to class 'CX_SY_DYN_CALL_PARAM_NOT_FOUND', was
    not caught in procedure "RSS_TEMPLATE_INSTANTIATE" "(FUNCTION)", nor was it propagated by a RAISING clause.
    Since the caller of the procedure could not have anticipated that the
    exception would occur, the current program is terminated.
    The reason for the exception is: A PERFORM was used to call the routine "INSTANTIATE" of the program
      "GP_MET_RSSG_HEADER_COMMENT".
    This routine contains 15 formal parameters, but the current call
    contains 16 actual parameters.
    Any suggestions?

    Hi
    I am getting exactly the same error on SCM 5.0 SR2, running on Windows 2003 64-bit & SQL 2000.
    Conditions for error:
    Using transaction:
    /SAPAPO/MSDP_ADMIN
    Select: Planning Object Structures
    Short Dump occurs if you either:
    1. Attempt to deactivate an active Planning Object Structure
    2. Attempt to create a Characteristic Combination
    Gives a runtime error shortdump PERFORM_TOO_MANY_PARAMETERS
    Error analysis:
        An exception occurred that is explained in detail below.
        The exception, which is assigned to class 'CX_SY_DYN_CALL_PARAM_NOT_FOUND', was
         not caught in
        procedure "RSS_TEMPLATE_INSTANTIATE" "(FUNCTION)", nor was it propagated by a
         RAISING clause.
        Since the caller of the procedure could not have anticipated that the
        exception would occur, the current program is terminated.
        The reason for the exception is:
        A PERFORM was used to call the routine "INSTANTIATE" of the program
         "GP_MET_RSSG_HEADER_COMMENT".
        This routine contains 15 formal parameters, but the current call
        contains 16 actual parameters.
    Has anyone seen this before?
    Did you find a solution Bhavesh ?
    Thanks.

  • Power Spectrum Density conversion to Time Series Data

    Hi,
    This may seem an odd request, but is there a way to convert power spectral density data back to the time series data that generated it in the first place? I have lost the original time series data but still have the PSD, and I need the time series to do other analysis.
    Thanks,
    Rhys Williams

    Hate to be the bearer of bad news, but there are an infinite number of time series that will generate a given PSD. You lose all phase information upon taking the PSD. For this reason I almost always save time-domain data, or at least complex FFT values.
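The phase-loss point above can be demonstrated directly: two different time series (a sine and the same sine phase-shifted) produce identical power spectra. A small sketch using a naive DFT (illustrative only; a real analysis would use an FFT library):

```python
import math

def psd(xs):
    # Power spectrum via a naive DFT: |X[k]|^2 per bin. The complex
    # phase is discarded here, which is exactly the information that
    # cannot be recovered from a PSD.
    n = len(xs)
    out = []
    for k in range(n):
        re = sum(x * math.cos(2 * math.pi * k * i / n)
                 for i, x in enumerate(xs))
        im = -sum(x * math.sin(2 * math.pi * k * i / n)
                  for i, x in enumerate(xs))
        out.append(re * re + im * im)
    return out

n = 32
a = [math.sin(2 * math.pi * 3 * i / n) for i in range(n)]        # sine
b = [math.sin(2 * math.pi * 3 * i / n + 1.0) for i in range(n)]  # shifted
```

Here `a != b`, yet `psd(a)` and `psd(b)` agree to numerical precision, so no inversion from PSD back to a unique time series is possible.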
