Data Mining Blog: New post on Time Series Multi-step Forecasting
I've posted the third part of the Time Series Forecasting series. It covers:
- How to use the SQL MODEL clause for multi-step forecasting
- Many example queries
- Two applications: a classical time series dataset and an electric load forecasting competition dataset
- Accuracy comparison with a large number of other techniques
http://oracledmt.blogspot.com/2006/05/time-series-forecasting-3-multi-step.html
--Marcos
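The multi-step scheme the post covers boils down to applying a one-step model recursively, feeding each prediction back as the lagged input for the next horizon. A minimal sketch of that idea, with made-up AR(1) coefficients and data rather than anything fitted or taken from the post:

```python
def multi_step_forecast(history, n_steps, phi=0.5, intercept=1.0):
    """Multi-step forecast from a one-step AR(1) model:
    each prediction becomes the lagged input for the next step."""
    preds = []
    last = history[-1]
    for _ in range(n_steps):
        nxt = intercept + phi * last  # one-step-ahead prediction
        preds.append(nxt)
        last = nxt                    # feed the prediction back in
    return preds

# three steps ahead from the last observed value
forecast = multi_step_forecast([4.0, 4.5, 5.0], 3)
```

The SQL MODEL clause version in the post expresses the same recursion declaratively, with each forecast cell referencing the previously computed cell.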
Many biological databases can be queried directly via the Structured Query Language; SQL is at the heart of biological databases. Oracle Data Miner loads data from flat files into the database. Although we could create a numeric ID for the genes while loading the data and remove some columns and rows, it is better to load the data directly with SQL*Loader.
http://www.genebyte.firm.in/
Edited by: 798168 on Sep 28, 2010 11:00 PM
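The numeric-ID idea mentioned above, assigning an ID to each distinct gene as rows are loaded from a flat file, can be sketched as follows. The tab-separated layout and gene names here are assumptions for illustration, not a real schema:

```python
import csv
import io

def load_with_gene_ids(flat_file):
    """Read a delimited flat file and assign a stable numeric ID
    to each distinct gene name as rows are loaded."""
    gene_ids = {}  # gene name -> numeric ID, in first-seen order
    rows = []
    for record in csv.reader(flat_file, delimiter='\t'):
        gene = record[0]
        if gene not in gene_ids:
            gene_ids[gene] = len(gene_ids) + 1
        rows.append((gene_ids[gene], *record[1:]))
    return gene_ids, rows

# illustrative two-column file: gene name, expression value
data = io.StringIO("BRCA1\t0.91\nTP53\t0.44\nBRCA1\t0.87\n")
ids, rows = load_with_gene_ids(data)
```

With SQL*Loader the same mapping would typically be done afterwards with a lookup table and a sequence, rather than during the load.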
Similar Messages
-
Data Mining model usage in real time application
Does Oracle have a product similar to IBM's Intelligent Miner for Scoring? Or are there any technical white papers on how to use analytical models, exported via PMML, in an Oracle data warehouse environment? I read on the Data Mining Group web pages that Oracle is a member there, but I could not find any documentation on how to use analytical models exported with PMML in an Oracle environment.
Another way to do this is to specify your shared variable to be a custom control and create a control that simply has a 2-D array in it. I've attached a zipped LabVIEW 8.2 project that shows both methods. Enjoy!
Becky
Becky Linton
Field Engineer - Michigan
National Instruments
Attachments:
2DArraySharedVariable.zip 110 KB -
hello,
I created a mining model and trained it with some data from a SQL table. Now I want the mining model's data to be updated whenever I change a record in that SQL table. Please provide help.
Thanks in Advance
Shalini Rathore
Hi, thanks for the reply!
Just to follow up on what we did to disable the delete function for maintaining table records: we hid the Delete button by adding a "MODULE disable_delete" statement in Screen Painter. So now only adding records to the table is allowed.
Thanks,
Jenny -
Blog/News posting using Dreamweaver/Contribute
This is a general question and I hope I'm posting in the
right place.
I have a client who wants a site that can be maintained using
Contribute. I figured, this being the case, I'd be best to set it
up as static HTML files using Dreamweaver, rather than the hand-coded
ASP I usually use. I'm familiar with storing page content in
a database as XML and using XSL to generate HTML. Obviously this is
not going to work with Contribute.
I understand the process of creating pages using Dreamweaver
and/or Contribute. What I don't understand is how I can replicate
the dynamic news item menus etc that I can produce so easily with
ASP and a database. Is it possible to allow the user to create new
pages through Contribute and have links to these automatically
updated on the home page, or would they have to also add a link to
the home page? Coming from my background, that seems like a step
backwards to me, so I hope there is a neat way of doing it.
Forgive my ignorance if it's a common practice. This is my
first attempt at a site with CS. I've done some training on CS work
flow but it didn't cover dynamic content, blogs etc. I appreciate
any advice that comes my way or please let me know if I should post
somewhere else.
Thanks for the reply.
>Can you do something in asp
Yeah I can. Currently what I do for existing sites is that
all content is stored in the database. I have some special code
that grabs the current news headlines (based on some rules) and I
automatically build a news menu for the home page. Parsing
directories would be a bit painful I think but sounds like that's
the only option.
I'm still trying to get it straight in my head how Contribute
will integrate into the whole thing. I think I'm right in saying
that Contribute will only work with flat HTML files, not anything
built on the fly server-side like ASP. So all the maintainable
content needs to be in a flat file and any pages with dynamic
content will not be maintainable by the client.
>Will a real serverside blog suit the clients needs?
Could be but I guess that depends how easily I can integrate
it into the site with regard to appearance and navigation. They're
not going to want a sub-site with different look and feel. It also
gives them another maintenance interface to deal with for the blog
content and the reason they're insisting on Contribute is that they
already use it for their other site. I suppose RSS would be the way
to keep the news links up to date in that case. A home page with an
RSS feed back to the blog.
Thanks for the input though. I'm trying to get up to speed
quickly with this problem as they want a quote and it's the blind
leading the blind at the moment I'm afraid. -
I set up my new macbook from an old macbook via time machine and an external WD Harddrive.
The setup partitioned my macbook to two user id's, now I need to back up.
The old user id is backing up everything again and the new user id wants to create a new backup to my WD Harddrive like it is not used.
I don't believe that my WD was partitioned.
Help! How can I continue using my WD and back up both user IDs' information without losing anything?
Welcome to the Apple Support Communities
When you turn on the new iMac for the first time, Setup Assistant will ask you to restore a backup, so connect the external disk and follow steps to restore all your files to your new iMac. Your new Mac will have the same settings and programs as your old computer.
In other cases, I would recommend restoring the whole backup without using Migration Assistant or Setup Assistant, but a Late 2012 iMac uses a special OS X build, so the OS X version that you're using on your old Mac won't work on the new one. For more information, see http://pondini.org/OSX/Home.html
HIJRI DATE TO GREGORIAN(NEW POST)
Hi, I'm trying to query this using SQL. The table containing the column is TEST_DATE, with the following data:
HIJRI_DATE
17/11/1431
18/11/1431
19/11/1431
How can I convert this data to Gregorian in a SQL statement?
Thanks.
This may help:
SQL> WITH tbl AS (SELECT to_date('17/11/1431','DD/MM/YYYY','NLS_CALENDAR=''Arabic Hijrah''') dt FROM DUAL UNION ALL
2 SELECT to_date('18/11/1431','DD/MM/YYYY','NLS_CALENDAR=''Arabic Hijrah''') FROM DUAL UNION ALL
3 SELECT to_date('19/11/1431','DD/MM/YYYY','NLS_CALENDAR=''Arabic Hijrah''') FROM DUAL
4 )
5 SELECT dt,to_char(dt,'dd/mm/yyyy','NLS_CALENDAR=Gregorian')
6 FROM tbl;
DT TO_CHAR(DT
25-OCT-10 25/10/2010
26-OCT-10 26/10/2010
27-OCT-10 27/10/2010
SQL> WITH tbl AS (SELECT to_date('17/11/1431','DD/MM/YYYY','NLS_CALENDAR=Gregorian') dt FROM DUAL UNION ALL
2 SELECT to_date('18/11/1431','DD/MM/YYYY','NLS_CALENDAR=Gregorian') FROM DUAL UNION ALL
3 SELECT to_date('19/11/1431','DD/MM/YYYY','NLS_CALENDAR=Gregorian') FROM DUAL
4 )
5 SELECT dt ,to_char(dt,'dd/mm/yyyy','NLS_CALENDAR=''Arabic Hijrah''')
6 FROM tbl;
DT TO_CHAR(DT
17-NOV-31 11/03/0835
18-NOV-31 12/03/0835
19-NOV-31 13/03/0835 -
Time series functions are not working in OBIEE for ESSBASE data source
Hi All,
I am facing a problem in OBIEE as I am getting error messages for measure columns with Time series functions(Ago,ToDate and PeriodRolling) in both RPD and Answers.
Error is "Target database does not support Ago operation".
But I am aware that OBIEE supports time series functions for the Essbase data source.
I am using Hyperion 9.3.1 as the data source and OBIEE 11.1.1.5.0 as the reporting tool.
Appreciate your help.
Thanks,
Aravind
Hi,
It is because the time series functions are not supported for fragmented content; see this note from Oracle Support:
The error occurs because fragmented data sources are used with some time series measures. Time series measures (i.e. AGO) are not supported on fragmented data sources.
Confirmation is documented in the following guide - Creating and Administering the Business Model and Mapping Layer in an Oracle BI Repository > Process of Creating and Administering Dimensions
Ago or ToDate functionality is not supported on fragmented logical table sources. For more information, refer to “About Time Series Conversion Functions” on page 197.
Regards,
Gianluca -
Scatter plot using time series function - Flash charting
Apex 3 + XE + XP
I am trying to build a time series scatter plot chart using flash chart component.
Situation :
On each scout date counts are taken within each crop. I want to order them by scout dates and display them in a time series chart. Each series represents different crop.
I am posting the two series queries I used
Queries:
Series 1
select null LINK, "SCOUTDATES"."SCOUTDATE" LABEL, INSECTDISEASESCOUT.AVERAGECOUNT as "AVERAGE COUNT" from "COUNTY" "COUNTY",
"FIELD" "FIELD",
"VARIETYLIST" "VARIETYLIST",
"INSECTDISEASESCOUT" "INSECTDISEASESCOUT",
"SCOUTDATES" "SCOUTDATES",
"CROP" "CROP"
where "SCOUTDATES"."CROPID"="CROP"."CROPID"
and "SCOUTDATES"."SCOUTID"="INSECTDISEASESCOUT"."SCOUTID"
and "CROP"."VARIETYID"="VARIETYLIST"."VARIETYLISTID"
and "CROP"."FIELDID"="FIELD"."FIELDID"
and "FIELD"."COUNTYID"="COUNTY"."COUNTYID"
and "INSECTDISEASESCOUT"."PESTNAME" ='APHIDS'
and "VARIETYLIST"."VARIETYNAME" ='SUGARSNAX'
and "COUNTY"."COUNTNAME" ='Kings' AND CROP.CROPID=1
order by SCOUTDATES.SCOUTDATE ASC
Series 2:
select null LINK, "SCOUTDATES"."SCOUTDATE" LABEL, INSECTDISEASESCOUT.AVERAGECOUNT as "AVERAGE COUNT" from "COUNTY" "COUNTY",
"FIELD" "FIELD",
"VARIETYLIST" "VARIETYLIST",
"INSECTDISEASESCOUT" "INSECTDISEASESCOUT",
"SCOUTDATES" "SCOUTDATES",
"CROP" "CROP"
where "SCOUTDATES"."CROPID"="CROP"."CROPID"
and "SCOUTDATES"."SCOUTID"="INSECTDISEASESCOUT"."SCOUTID"
and "CROP"."VARIETYID"="VARIETYLIST"."VARIETYLISTID"
and "CROP"."FIELDID"="FIELD"."FIELDID"
and "FIELD"."COUNTYID"="COUNTY"."COUNTYID"
and "INSECTDISEASESCOUT"."PESTNAME" ='APHIDS'
and "VARIETYLIST"."VARIETYNAME" ='SUGARSNAX'
and "COUNTY"."COUNTNAME" ='Kings' AND CROP.CROPID=4
order by SCOUTDATES.SCOUTDATE ASC
Problem
As you can see, the observations are ordered by scout date. However, when the chart appears, the dates don't appear in order: the chart displays the data from crop 1 followed by the crop 4 data, which is not exactly a time series chart. Does the flash chart support time series, or does it have no clue that the data type is a date and should progress in order on the chart? I tried using to_char(date,'j') to convert them and applying the same principle, but it did not help either.
Any suggestions ?
Message was edited by:
tarumugam
Message was edited by:
aru
Arumugam,
All labels are treated as strings, so APEX will not compare them as dates.
There are two workarounds to get all your data in the right order:
1) Combine the SQL statements into single-query multi-series format, something like this:
select null LINK,
"SCOUTDATES"."SCOUTDATE" LABEL,
decode(CROP.CROPID,1,INSECTDISEASESCOUT.AVERAGECOUNT) as "Crop 1",
decode(CROP.CROPID,4,INSECTDISEASESCOUT.AVERAGECOUNT) as "Crop 4"
from "COUNTY" "COUNTY",
"FIELD" "FIELD",
"VARIETYLIST" "VARIETYLIST",
"INSECTDISEASESCOUT" "INSECTDISEASESCOUT",
"SCOUTDATES" "SCOUTDATES",
"CROP" "CROP"
where "SCOUTDATES"."CROPID"="CROP"."CROPID"
and "SCOUTDATES"."SCOUTID"="INSECTDISEASESCOUT"."SCOUTID"
and "CROP"."VARIETYID"="VARIETYLIST"."VARIETYLISTID"
and "CROP"."FIELDID"="FIELD"."FIELDID"
and "FIELD"."COUNTYID"="COUNTY"."COUNTYID"
and "INSECTDISEASESCOUT"."PESTNAME" ='APHIDS'
and "VARIETYLIST"."VARIETYNAME" ='SUGARSNAX'
and "COUNTY"."COUNTNAME" ='Kings'
AND CROP.CROPID in (1,4)
order by SCOUTDATES.SCOUTDATE ASC
2) Union the full domain of labels into your first query. Then the sorting will be applied to the full list, and the values of the second series will be associated with the matching labels from the first.
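The decode-based single query works because it pivots the rows into one column per series keyed by the shared date label, so both series ride on the same sorted label axis. A rough sketch of that pivot; the row shapes, labels, and series keys are illustrative assumptions:

```python
def pivot_series(rows, series_keys):
    """Pivot (label, series_key, value) rows into one row per label
    with a column per series, so all series share one sorted label axis."""
    by_label = {}
    for label, key, value in rows:
        by_label.setdefault(label, {k: None for k in series_keys})[key] = value
    # one combined, label-sorted result set, like the single-query approach
    return [(label, *(cols[k] for k in series_keys))
            for label, cols in sorted(by_label.items())]

# (scout date, crop id, average count) rows from both series, unordered
rows = [("2007-05-02", 4, 9.0), ("2007-05-01", 1, 3.5),
        ("2007-05-01", 4, 7.2), ("2007-05-02", 1, 2.0)]
pivoted = pivot_series(rows, [1, 4])
```

The decode(CROP.CROPID, ...) expressions in the SQL above play exactly the role of the per-series columns here.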
- Marco -
Voucher Number in new posting period
hello,
I have created a new posting period and series for financial year 2011-12.
In the previous year I created 600 vouchers. My issue is that when I am creating a new
voucher, the voucher number shows 601. I want the voucher numbers for the new series to
start from 1, but it seems like the numbering series doesn't affect the voucher number,
because there is no series creation for journal vouchers. In this case, how can I solve my problem?
Please help me on this.
thanks
Annu
Hi Annu,
You created the Posting Period, but have you created a Period Indicator and assigned this indicator to the created Posting Period?
If you have done this, then when you define the new numbering series starting from one, just assign the same indicator to the new numbering series as well and lock the old series.
Hope this will work....
Regards,
Rahul -
Data mining Algorithms in Essbase
Hi,
Just wondering if anyone has used data mining algorithms provided within Essbase. Any thoughts or pointers towards more information will be helpful..
Thanks in advance!
In a 2009 presentation at ODTUG Kscope titled "Little Used Features of Essbase", I went through how to use data mining. It is available on the ODTUG website. I do know that nothing has been done with the data mining modules in a long time, as the team was disbanded since Oracle has other tools for data mining.
-
Time series functions are not working for fragmented logical table sources?
If i remove that fragmented logical table sources, then its working fine.
if any body know the reason, please let me know.
thanks and regards,
krishna
Hi,
It is because the time series functions are not supported for fragmented content; see this note from Oracle Support:
The error occurs because fragmented data sources are used with some time series measures. Time series measures (i.e. AGO) are not supported on fragmented data sources.
Confirmation is documented in the following guide - Creating and Administering the Business Model and Mapping Layer in an Oracle BI Repository > Process of Creating and Administering Dimensions
Ago or ToDate functionality is not supported on fragmented logical table sources. For more information, refer to “About Time Series Conversion Functions” on page 197.
Regards,
Gianluca -
All,
I'd like to know how to model a time series table to submit it to Oracle Data Mining. I've already tried a reverse pivot, but is there something different?
Regards,
Paulo de Tarso Costa de Sousa
If you are trying to do something like create a demand forecasting model like ARIMA, ODM does not have explicit support for this type of modeling. Data mining is usually more about creating a "prediction" rather than a forecast. You may want to look at Oracle's OLAP capabilities for this.
If you are trying to include variables that contain the element of "time", such as "blood pressure before" and "blood pressure after", you can include these as variables (attributes) in the model. ODM has no real limit on the number of variables it can include in the model, so you don't have to worry about creating too many of them (usually).
You may want to "clump" the data so as to create a set of variables at certain checkpoints in time, like the "before" and "after" approach above. Rather than entering, for example, the measurement off an instrument every 10 seconds (which would ordinarily create new variables for each time period), you may want to only detect "events"; that is, only record the amount of time between events, a sort of Mean Time Between Failure (MTBF) type of modeling.
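The event-based "clumping" above, keeping only the elapsed time between threshold crossings instead of every raw reading, might look like this; the threshold and timestamps are invented for illustration:

```python
def event_intervals(samples, threshold):
    """From (timestamp, value) samples, keep only the readings that
    cross the threshold ('events') and return the elapsed time between
    consecutive events, an MTBF-style representation."""
    event_times = [t for t, v in samples if v >= threshold]
    return [b - a for a, b in zip(event_times, event_times[1:])]

# raw instrument readings; only three exceed the threshold
samples = [(0, 1.0), (10, 5.2), (20, 0.9), (30, 6.1), (70, 7.0)]
gaps = event_intervals(samples, threshold=5.0)
```

The resulting gaps become a small, fixed set of model attributes instead of one attribute per raw time period.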
Hope this helps with your thinking about how to approach your problem -
Document Series Error after creation of a new Posting Period
Hi
While Creation of a new Document Series for user created UDO (after creation of new Posting Period
with Period Indicator) on update its showing a message box
"Application Error occurred, Dump file created in path C\Program Files\SAP\..\Log......"
And soon SAP B1 gets exit.
Can anyone help me to come out of this problem?
Hi,
Whenever a dump file is created, it is high time to log a message to SAP Support. This problem is beyond what the forum can handle.
Thanks,
Gordon -
Issue in new macro calculating values in time series in CVC
Hi friends.
I'm new in APO.
I have a problem with a new macro in CVC which calls up a FM to calculate and store data.
This new macro calculates the selected data in the time series (TS); e.g., if we select 3 days to calculate with this macro, the first selected day in the TS is ignored.
We created this macro to do this calculation when the user enters some manual values and wants them to be calculated at the delivery points in CVC, by TS.
This macro calls up my Z function which internally calls up a standard FM '/SAPAPO/TS_DM_SET' (Set the TS Information).
Sometimes this FM raises error 6 (invalid_data_status = 6), but only when I use the 'fcode_check' routine together with it.
After that, we call the FM '/SAPAPO/MSDP_GRID_REFRESH' in mode 1 and the 'fcode_check' routine in program '/sapapo/saplmsdp_sdp'.
At first I thought it could be dirty global variables in the standard FM, so I put it inside a program and called it with SUBMIT AND RETURN. But now I think it is not that kind of error, because that did not work; it inverted the results, and now only the first line of the TS gets stored and changed in CVC.
It's a crazy issue. Please friends. Guide me for a solution!
thanks.
Glauco
Hi friend. The issue is still without a correct solution.
A friend changed the macro, adding another step that calls the same function. Now the macro has two calls in sequence to the same FM, and it works, but we can't understand why.
It seems like dirty memory in liveCache.
Nobody knows how to solve this!
Glauco. -
Read optimization time-series data
I am using Berkeley DB JE to store fairly high frequency (10hz) time-series data collected from ~80 sensors. The idea is to import a large number of csv files with this data, and allow quick access to time ranges of data to plot with a web front end. I have created a "sample" entity to hold these sampled metrics, indexed by the time stamp. My entity looks like this.
import java.util.LinkedHashMap;
import java.util.Map;

import com.sleepycat.persist.model.Entity;
import com.sleepycat.persist.model.PrimaryKey;

@Entity
public class Sample {
// Unix time; seconds since Unix epoch
@PrimaryKey
private double time;
private Map<String, Double> metricMap = new LinkedHashMap<String, Double>();
}
As you can see, there is quite a large amount of data for each entity (~70-80 doubles), and I'm not sure storing them in this way is best. This is my first question.
I am accessing the db from a web front end. I am not too worried about insertion performance, as this doesn't happen that often, and generally all at one time in bulk. For smaller ranges (~1-2 hr worth of samples) the read performance is decent enough for web calls. For larger ranges, the read operations take quite a while. What would be the best approach for configuring this application?
Also, I want to define the granularity of samples. Basically, if the number of samples returned by a query is very large, I want to only return a fraction of the samples. Is there an easy way to count the number of entities that will be iterated over with a cursor without actually iterating over them?
Here are my current configuration params.
environmentConfig.setAllowCreateVoid(true);
environmentConfig.setTransactionalVoid(true);
environmentConfig.setTxnNoSyncVoid(true);
environmentConfig.setCacheModeVoid(CacheMode.EVICT_LN);
environmentConfig.setCacheSizeVoid(1000000000);
databaseConfig.setAllowCreateVoid(true);
databaseConfig.setTransactionalVoid(true);
databaseConfig.setCacheModeVoid(CacheMode.EVICT_LN);
Hi Ben, sorry for the slow response.
> as you can see, there is quite a large amount of data for each entity (~70 - 80 doubles), and I'm not sure storing them in this way is best. This is my first question.
That doesn't sound like a large record, so I don't see a problem. If the map keys are repeated in each record, that's wasted space that you might want to store differently.
> For larger ranges, the read operations take quite a while. What would be the best approach for configuring this application?
What isolation level do you require? Do you need the keys and the data? If the amount you're reading is a significant portion of the index, have you looked at using DiskOrderedCursor?
> Also, I want to define granularity of samples. Basically, if the number of samples returned by a query is very large, I want to only return a fraction of the samples. Is there an easy way to count the number of entities that will be iterated over with a cursor without actually iterating over them?
Not currently. Using the DPL, reading with a key-only cursor is the best available option. If you want to drop down to the base API, you can use Cursor.skipNext and skipPrev, which are further optimized.
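Since there is no cheap pre-count, one workaround for the granularity question is to downsample after (or while) scanning: keep every k-th sample so the result stays under a target size. A minimal language-neutral sketch, with an arbitrary target size:

```python
def downsample(samples, max_points):
    """Return at most max_points samples by keeping every k-th one,
    where k is derived from the input size."""
    if len(samples) <= max_points:
        return list(samples)
    step = -(-len(samples) // max_points)  # ceiling division
    return samples[::step]

# 10 samples thinned down to a budget of 4 points for plotting
points = downsample(list(range(10)), 4)
```

Applied to the cursor scan, the same logic means skipping step-1 records between each one you materialize, which is where skipNext/skipPrev help.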
> environmentConfig.setAllowCreateVoid(true);
Please use the method names without the Void suffix -- those are just for bean editors.
--mark