Handling Time Dimensions with Monthly and Weekly Analysis
I'm charged with building a cube that can handle both monthly and weekly analysis. Users want to view projections at a weekly grain, but looking back at history they want to be able to tie back to the general ledger at a monthly level.
Is it overkill to load data down to the daily level in order to support roll-ups to both the month and the week? What is the best way to handle this kind of situation?
Having the data come in two different formats (one for months, one for weeks) versus having it come in daily buckets and constructing the months/weeks within Essbase is a call I wouldn't make without understanding the needs first.
Assuming there is no need for data at the daily level, I would weigh the choice based on performance. If the performance is not too outrageous, taking the data in at a daily grain makes it easier to reconstruct later. If performance is an issue (or becomes an issue), I would still be tempted to load a daily cube at level 0, summarize the months and weeks there, and then feed the main cube from that.
So many ways to do this... and for every way there is a reason you might want to try something else. I'd start with the easy approach, and only go "grand" if you have to (meaning: load it daily, summarize in Essbase, and revisit the detail/summary cube split idea if performance tanks).
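For what it's worth, the reason the daily grain supports both views is that a single day-level feed reconciles to both calendars: every day belongs to exactly one calendar month and exactly one ISO week. A quick sketch of the double roll-up (hypothetical data and names, not from the thread):

```python
from datetime import date, timedelta

def rollups(start, end):
    """Roll a daily measure up to both calendar-month and ISO-week buckets."""
    month_totals, week_totals = {}, {}
    d = start
    while d <= end:
        amount = 1  # stand-in for the day's fact value
        month_totals[(d.year, d.month)] = month_totals.get((d.year, d.month), 0) + amount
        iso_year, iso_week, _ = d.isocalendar()
        week_totals[(iso_year, iso_week)] = week_totals.get((iso_year, iso_week), 0) + amount
        d += timedelta(days=1)
    return month_totals, week_totals

# Both roll-ups cover the same daily records, so their grand totals reconcile.
months, weeks = rollups(date(2023, 1, 1), date(2023, 12, 31))
```

Both totals tie back to the same 365 daily records, which is exactly the reconciliation property a GL tie-out needs.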
Similar Messages
-
Modelling Time Dimension with Fact Table containing Start Date and End Date
Hi Gurus,
I have a time dimension with Year till Date. I have a fact table which consists of Start Date, End Date, Person ID, Department ID.
How do I design the Time dimension against the fact table for the scenario below?
In the dashboard I have Start Month and End Month as prompts.
In the report I need to display Count(Person ID) between Start Date and End Date, trended along the months.
For instance, if I select Jan-2009 as the start date and Apr-2009 as the end date, I need to display Count(Person ID) for Jan-2009, Feb-2009, Mar-2009 and Apr-2009.
I cannot connect the Time dimension with only the Start Date or only the End Date to get the trend along the months.
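Outside OBIEE, the counting requirement itself is an interval-overlap test: a person counts toward a month if the [Start Date, End Date] interval overlaps that month. A sketch in Python with made-up fact rows (every name and value here is hypothetical, not from the thread):

```python
from datetime import date

# Hypothetical fact rows: (person_id, start_date, end_date)
facts = [
    (1, date(2009, 1, 10), date(2009, 3, 5)),
    (2, date(2009, 2, 1),  date(2009, 4, 20)),
    (3, date(2009, 4, 2),  date(2009, 4, 2)),
]

def month_bounds(year, month):
    """Return the first day of the month and the first day of the next month."""
    first = date(year, month, 1)
    next_first = date(year + (month == 12), month % 12 + 1, 1)
    return first, next_first

def active_count(year, month):
    """Count persons whose inclusive [start, end] interval overlaps the month."""
    first, next_first = month_bounds(year, month)
    return sum(1 for _, s, e in facts if s < next_first and e >= first)
```

With these sample rows, active_count(2009, 2) counts persons 1 and 2, since both intervals overlap February.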
Please advise on the issue I am having.
Hi,
Thanks for the response. In fact, I tried using a complex join in the physical layer: I joined the Time table to the fact table with >=, took an alias of the Time table, and joined that to the fact table with <=. Coming to the BMM, I do not know how to design this: if I merge the time dimension and its alias into a single table the values will not be correct, and if I keep them as separate columns I cannot show the trend, since they are different columns.
Can you please let me know where I am going wrong?
Thanks -
Is there a way to change the intervals at which Time Machine performs backups (e.g., weekly instead of the default of hourly for the past 24 hours, daily for the past month, and weekly for everything older)?
You can edit the interval in Console or install a Pref Pane
called TimeMachineScheduler (Leopard or higher) free from:
http://www.klieme.com/TimeMachineScheduler.html
Good luck, Tom -
How to create a working day/holiday attribute in Time Dimension with OWB ?
Hello everybody,
I am trying, with no success, to create a Time Dimension (with the wizard it would be easier...) with a working day or holiday attribute (0 or 1), to be able to calculate measures only on working time!!! I am totally sure it's possible, obviously, but I have found nothing on the forums... Could you help me?
I use OWB 11g. I need all fiscal year/quarter/month/week and all calendar year/quarter/month/week attributes, so without any wizard it would be quite impossible... thank you for your help!
NB: of course I know how to define whether a day is a working one or not, but I currently can't add any columns to a Time dim created with the wizard...
Francois
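Not an OWB answer, but the attribute logic itself is a one-liner once a holiday list exists; the difficulty is only attaching the extra column to the wizard-generated dimension. A sketch (the holiday set below is a made-up stand-in for a real holiday table):

```python
from datetime import date

# Hypothetical holiday calendar; a real time dimension would source this
# from a holiday table, not a hard-coded set.
HOLIDAYS = {date(2009, 1, 1), date(2009, 12, 25)}

def is_working_day(d):
    """1 for Monday-Friday and not a listed holiday, else 0."""
    return int(d.weekday() < 5 and d not in HOLIDAYS)
```

For example, 1 Jan 2009 is a Thursday but flags as 0 because it is in the holiday set, while 2 Jan 2009 (Friday) flags as 1.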
Edited by: [email protected] on Jun 15, 2009 8:24 AM
Hi,
First of all, thanks everyone for your help.
I did several tests this morning.
First of all, I tried with time_from = 000000 and time_to = 235959 (23:59:59 CET), and the activity was created with Time From = 04:00:00 and Time To = 03:59:59.
Strange, no?
I also tried with 230000 for both time from and time to, but the activity was created with Time From = 03:00:00 and Time To = 03:00:00.
I cannot understand the logic behind this...
Thanks,
Johnny Baillargeaux -
Time dimension with Hourly base time periods
Hi all
I need to analyze data at Hour, Day, Month, and Year levels. The data in the fact and dimension tables are at the 'Hour' level with DATE datatype, such as:
02-SEP-10 10:00:00 AM
02-SEP-10 11:00:00 AM
To use Time-Series type calculations, I understand that I have to create an OLAP dimension of type 'TIME' (and not 'USER') and map it to the populated relational time dimension table.
1) Can I have the primary key for 'Hour' level as the actual base level value of datatype DATE (eg. 02-SEP-10 10:00:00 AM) ?
2) For the END_DATE and TIME_SPAN attributes at the 'Hour' level, what should I use?
The documentation is only available for minimum 'Day' level hierarchies, which allows setting END_DATE and TIME_SPAN to the actual 'Day' value and 1, respectively.
3) For the END_DATE and TIME_SPAN attributes at the 'Month' level, do I need to supply the last-date-of-each-month and number-of-days-in-that-month, respectively?
Please bear in mind that I am relatively new to Oracle OLAP. Any assistance will be appreciated.
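On question 3, one common convention (my assumption, not a quote from the Oracle OLAP documentation) is exactly what the poster guesses: END_DATE is the last day of the period and TIME_SPAN its length in days, with sub-day levels using fractional spans. Sketched in Python:

```python
import calendar
from datetime import date

def month_end_and_span(year, month):
    """END_DATE = last day of the month; TIME_SPAN = number of days in it."""
    span = calendar.monthrange(year, month)[1]
    return date(year, month, span), span

def hour_end_and_span(ts):
    """At an hourly grain, END_DATE can be the hour itself and
    TIME_SPAN a fraction of a day (1/24)."""
    return ts, 1 / 24
```

So February 2010 would carry END_DATE 28-FEB-2010 and TIME_SPAN 28, and a leap-year February a span of 29.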
Cheers.
Thank you Szilard and Adnan for the very prompt and informative responses.
I managed to follow the advice on the oracleolap.blogspot link and created a time dimension with members at Hour level loaded into the dimension in character format: TO_CHAR(hour_id, 'DD-MON-YYYY HH24')
The problem now is that maintenance (loading) of the dimension takes an abnormally long time (over 1 hour), as opposed to when the members were loaded in DATE format (5 minutes). The mapping table only has 10,000 entries.
Why is there such a big difference? Is it normal? Is there a way to speed up the maintenance time?
FYI, I have not created any indexes on any of the attributes.
My platform is:
11.1.0.7.0 DB
11.1.0.7.0B Client -
OWB 10.2 - How are you handling time dimensions?
Hi all,
I am struggling with what should be a simple thing to do. I want to create a time dimension and then have many "roles" or aliases for the time dimension WITH UNIQUE COLUMN NAMES across all of the roles.
When the time dimensions are deployed to Discoverer, I want every one of them to have unique names and the column names within the time dimension have a unique prefix so that report users know which date column is from which table or dimension.
Here's what I've done and failed at:
1. Use the time dimension wizard - I can create any number of dimensions and corresponding tables BUT all of them have the same column names and I would have to manually change each and every one of them to get unique names (which may not even be possible with the wizard). Also, because I require ISO weeks, I can't really use the wizard at all.
2. Manually create a time dimension (that supports ISO weeks) and create multiple "roles" for it:
Thanks to a bug, I cannot exceed 4 roles without OWB crashing. Even with those 4 roles, when deployed to Discoverer, every attribute within the item folders has the same name. When I drag them to a report, there is no way to tell one from another. Is there some way I could do this without having to manually rename hundreds of columns?
3. I wrote an elaborate SQLPlus script to copy and prefix dimensions and tables from a base dimension and table. When I then import the Dimension to OWB, the metadata for business identifier and surrogate identifier is not there and any cubes using those dimensions do not work with these attributes missing. I can't find a way to cleanly reverse engineer these into OWB.
I have a cube with 12 dates - each of which should be a foreign key to a date dimension.
How can I have all of these be uniquely named with uniquely named columns?
How can I make it easy for my reporting users to select dates onto their reports?
I hope I am missing an obvious solution, because so far, I cannot see where Oracle Warehouse Builder supports such a basic data warehousing concept.
Well, since I'm the only one obsessed with time dimensions I guess, here is my ugly workaround:
1. I create a base time dimension in OWB which I named 'ATLAS_TIME_DIM'
2. I run the OMB script below which clones the dimension and table and renames the columns and attributes with a user supplied suffix. The end result is I get a full copy of the time dimension metadata with uniquely named columns.
You then have to deploy the objects, and with SQLPlus, ensure that the table gets its data copied from your original table. Hope it helps someone until we have better Time dimension support from OWB.
OMBCONNECT repos/password@SERVERNAME:1521:sidname
# Prompt for new Dimension name and prefix
puts -nonewline "Please enter name for new Dimension: "
gets stdin newDim
puts -nonewline "Enter Prefix for Dimension table columns: "
gets stdin dimPrefix
# Change into ATLAS_DW module in project ATLAS_DW
OMBCC 'ATLAS_DW'
OMBCC 'ATLAS_DW'
# Copy the ATLAS_TIME_DIM to this dimension
OMBCOPY DIMENSION 'ATLAS_TIME_DIM' TO '$newDim'
# Set the business name
OMBALTER DIMENSION '$newDim' \
SET PROPERTIES (BUSINESS_NAME) VALUES ('$newDim')
# Unbind the dimension from original table
OMBALTER DIMENSION '$newDim' \
DELETE BINDING
# Bind to new table
OMBALTER DIMENSION '$newDim' \
IMPLEMENTED BY SYSTEM STAR
# Add a prefix to all of the Dimension attributes
set attrList [OMBRETRIEVE DIMENSION '$newDim' GET DIMENSION_ATTRIBUTES]
foreach attrName $attrList {
    OMBALTER DIMENSION '$newDim' \
        MODIFY DIMENSION_ATTRIBUTE '$attrName' RENAME TO '$dimPrefix\_$attrName'
}
# Add a prefix to all level attributes of the Dimension
set levelList [OMBRETRIEVE DIMENSION '$newDim' GET LEVELS]
foreach levelName $levelList {
    set levelAttrList [OMBRETRIEVE DIMENSION '$newDim' \
        LEVEL '$levelName' GET LEVEL_ATTRIBUTES]
    foreach levelAttr $levelAttrList {
        OMBALTER DIMENSION '$newDim' MODIFY \
            LEVEL_ATTRIBUTE '$levelAttr' OF LEVEL '$levelName' \
            SET PROPERTIES (BUSINESS_NAME) VALUES ('$dimPrefix\_$levelAttr')
        OMBALTER DIMENSION '$newDim' MODIFY \
            LEVEL_ATTRIBUTE '$levelAttr' OF LEVEL '$levelName' \
            RENAME TO '$dimPrefix\_$levelAttr'
    }
}
# Add a prefix to all of the table columns except DIMENSION_KEY
set columnList [OMBRETRIEVE TABLE '$newDim' GET COLUMNS]
foreach colName $columnList {
    if { $colName == "DIMENSION_KEY" } {
        puts "$colName"
    } else {
        OMBALTER TABLE '$newDim' \
            MODIFY COLUMN '$colName' SET PROPERTIES (BUSINESS_NAME) VALUES ('$dimPrefix\_$colName')
        OMBALTER TABLE '$newDim' \
            MODIFY COLUMN '$colName' RENAME TO '$dimPrefix\_$colName'
        puts "$dimPrefix\_$colName"
    }
}
OMBSAVE
OMBDISCONNECT
Message was edited by:
mike_fls -
Hi all, thanks in advance.
I need help in pivoting data.
SELECT ID,MONTH,COUNT FROM TABLE_PIVOT ORDER BY ID,MONTH
ID MONTH COUNT
10 10/01/2009 60
10 11/01/2009 80
10 12/01/2009 78
10 01/01/2010 81
10 02/01/2010 73
10 03/01/2010 84
10 04/01/2010 100
10 05/01/2010 107
10 06/01/2010 90
10 07/01/2010 0
10 08/01/2010 0
10 09/01/2010 73
20 10/01/2010 71
20 11/01/2010 76
20 12/01/2010 79
20 01/01/2011 79
20 02/01/2011 81
20 03/01/2011 88
20 04/01/2011 97
20 05/01/2011 87
20 06/01/2011 97
I tried to pivot with the query below:
SELECT ID,
SUM(DECODE(to_char(month,'MM'),'01',count,0)) " Jan",
SUM(DECODE(to_char(month,'MMYY'),'02',count,0)) Feb,
SUM(DECODE(to_char(month,'MM'),'03',count,0)) Mar,
SUM(DECODE(to_char(month,'MM'),'04',count,0)) Apr,
SUM(DECODE(to_char(month,'MM'),'05',count,0)) May,
SUM(DECODE(to_char(month,'MM'),'06',count,0)) June,
SUM(DECODE(to_char(month,'MM'),'07',count,0)) July,
SUM(DECODE(to_char(month,'MM'),'08',count,0)) August,
SUM(DECODE(to_char(month,'MM'),'09',count,0)) September,
SUM(DECODE(to_char(month,'MM'),'10',count,0)) October,
SUM(DECODE(to_char(month,'MM'),'11',count,0)) November,
SUM(DECODE(to_char(MONTH,'MM'),'12',count,0)) December
FROM Table_PIVOT
GROUP BY ID
ORDER BY ID
ID Jan FEB MAR APR MAY JUNE JULY AUGUST SEPTEMBER OCTOBER NOVEMBER DECEMBER
10 81 0 84 100 107 90 0 0 73 60 80 78
20 79 0 88 97 87 97 0 0 0 71 76 79
I want the output to display the column names with Month and Year, like below:
ID Oct-2009 Nov-2009 Dec-2009 Jan-2010 Feb-2010 ................... OCT-2010 NOV-2010 DEC-2010 JAN-2011 FEB-2011 ......
10 60 80 78 81 73 ...................
20 71 76 79 79 81
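Because the Mon-YYYY column set depends on the data, a fixed DECODE list cannot produce it; the SQL itself would have to be built dynamically. The pivot logic, sketched in Python on a subset of the posted rows (the variable names are mine, not from the thread):

```python
from collections import defaultdict
from datetime import date

# A subset of the TABLE_PIVOT rows: (id, month, count)
rows = [
    (10, date(2009, 10, 1), 60),
    (10, date(2009, 11, 1), 80),
    (20, date(2010, 10, 1), 71),
]

# Column labels like 'Oct-2009', ordered chronologically.
periods = sorted({m for _, m, _ in rows})
labels = [m.strftime('%b-%Y') for m in periods]

# pivot[id][label] holds the count for that id in that month-year.
pivot = defaultdict(dict)
for id_, m, cnt in rows:
    pivot[id_][m.strftime('%b-%Y')] = cnt
```

In Oracle the same effect needs dynamically generated SQL (or a pipelined function, as discussed later in the thread), since the column list is not known until the data is read.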
CREATE TABLE "TABLE_PIVOT"
( "ID" NUMBER,
"MONTH" DATE,
"COUNT" NUMBER
);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (10,to_timestamp('10/01/2009','MM/DD/YYYY'),60);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (10,to_timestamp('11/01/2009','MM/DD/YYYY'),80);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (10,to_timestamp('12/01/2009','MM/DD/YYYY'),78);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (10,to_timestamp('01/01/2010','MM/DD/YYYY'),81);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (10,to_timestamp('02/01/2010','MM/DD/YYYY'),73);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (10,to_timestamp('03/01/2010','MM/DD/YYYY'),84);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (10,to_timestamp('04/01/2010','MM/DD/YYYY'),100);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (10,to_timestamp('05/01/2010','MM/DD/YYYY'),107);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (10,to_timestamp('06/01/2010','MM/DD/YYYY'),90);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (10,to_timestamp('07/01/2010','MM/DD/YYYY'),0);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (10,to_timestamp('08/01/2010','MM/DD/YYYY'),0);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (10,to_timestamp('09/01/2010','MM/DD/YYYY'),73);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (20,to_timestamp('10/01/2010','MM/DD/YYYY'),71);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (20,to_timestamp('11/01/2010','MM/DD/YYYY'),76);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (20,to_timestamp('12/01/2010','MM/DD/YYYY'),79);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (20,to_timestamp('01/01/2011','MM/DD/YYYY'),79);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (20,to_timestamp('02/01/2011','MM/DD/YYYY'),81);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (20,to_timestamp('03/01/2011','MM/DD/YYYY'),88);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (20,to_timestamp('04/01/2011','MM/DD/YYYY'),97);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (20,to_timestamp('05/01/2011','MM/DD/YYYY'),87);
Insert into TABLE_PIVOT (ID,MONTH,COUNT) values (20,to_timestamp('06/01/2011','MM/DD/YYYY'),97);
COMMIT;
Hi,
user1849 wrote:
Any Sample code is appreciated
I didn't see any solution for the following one in your link.
I think Centinul was specifically referring to:
Help for a query to add columns
but other links from
SQL and PL/SQL FAQ
may help you more.
>
Re: How to pipeline a function with a dynamic number of columns?
Posted: May 9, 2006 2:58 PM in response to: Billy Verreynne Reply
Interesting stuff! It's going to take me awhile to digest it.
For what it's worth, I was trying to build a pivoting function that would take a MYTABLE table like this:
YEAR CITY X Y
2000 BAL 95 96
2000 BOS 101 101
2001 BAL 92 94
2001 BOS 101 101
2002 BAL 98 98
2002 BOS 98 99
2003 BAL 95 96
2003 BOS 105 104
and allow me to do something like:
CREATE VIEW MYPIVOT
AS
SELECT *
FROM TABLE (PIVOT(MYTABLE, [with other params]))
and get the following view MYPIVOT on the table:
YEAR BOS_X BOS_Y BAL_X BAL_Y
2000 101 101 95 96
2001 101 101 92 94
2002 98 99 98 98
2003 105 104 95 96
Where the number of distinct CITY values will vary over time. I am able to build the query I need dynamically, but since the CITY data values in the original table change, the columns are not necessarily static from invocation to invocation. Therefore I didn't want to just create a view using the dynamic SQL once, because it may need to be re-created the next time I access the view. I wanted to be able to access the pivoted data on demand.
A pipelined function is your best bet for that.
What I couldn't do was execute the query and treat it as a pipelined function, hence my original question.
Sorry, I don't understand.
I'll dig into the code above to see if it does what I wanted, but if someone has a better suggestion on how to approach this, I'd be interested in hearing it.
A completely different approach is String Aggregation, where you would get output like this:
YEAR TXT
BOS_X BOS_Y BAL_X BAL_Y
2000 101 101 95 96
2001 101 101 92 94
2002 98 99 98 98
2003 105 104 95 96
Note that this output contains 6 rows and 2 columns. On the first row, year is NULL and txt=' BOS_X BOS_Y BAL_X BAL_Y'. You can do this in pure, static SQL, without knowing the number of cities in advance. -
Return the end of week date using month and week number
Hi all,
I am trying to find the end-of-week date using a given month (201401, 201402... i.e., 2014 is the year and 01, 02 are the months) and a week number (01, 02, 03...) of the year. For example, if my month is 201402 and the week number is 06, then it should return the week date
as 2014-02-08. Can you please help me write a SQL statement for this scenario?
Thanks in advance,
Nikhil
Current month is irrelevant:
with dt as (
select dateadd(day, (6-1)*7, '2014-01-01') as xwk
)
select dateadd(day, 7-datepart(weekday, dt.xwk), xwk)
from dt;
Change the "6" in the with statement to the week of interest and 2014 to the year of interest... -
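The same calculation can be cross-checked outside the database. A Python sketch of the logic above (assuming the default DATEFIRST, i.e. weeks run Sunday through Saturday and week 1 is the week containing January 1; the function name is mine):

```python
from datetime import date, timedelta

def end_of_week(year, week):
    """Saturday that closes week N, with weeks starting on Sunday and
    week 1 being the week that contains January 1."""
    d = date(year, 1, 1) + timedelta(weeks=week - 1)
    # Advance to the Saturday on or after d (weekday(): Monday=0 ... Saturday=5).
    return d + timedelta(days=(5 - d.weekday()) % 7)

# end_of_week(2014, 6) -> date(2014, 2, 8), matching the thread's example.
```

This reproduces the 2014-02-08 answer the poster expected for week 06 of 2014.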
Time Dimension 2 PARENTH2 and PRIOR,NEXT
Hi all,
We have the standard TIME hierarchy from AppShell (MONTH, QUARTER, YEAR). But in some cases users want to insert data detailed by week, so we decided to add additional week-level IDs and put them in a second hierarchy. Everything is OK in the system.
But when we try to write script logic with PRIOR or NEXT that uses data from the previous year, it doesn't work. When we use PRIOR or NEXT with current-year months, everything is OK.
Example of logic.
*XDIM_MEMBERSET TIME = PRIOR, %TIME_SET%, %YEAR%.DEC
*CALC_EACH_PERIOD
*WHEN P_ACCT
*IS A01
*REC(EXPRESSION=%VALUE% - GET(P_ACCT="A02",TIME=PRIOR), P_ACCT = A03)
*ENDWHEN
*ENDWHEN
When we save data on January it doesn't work, although we have data on December of the previous year for A02 and on January for A01.
Maybe you have some advice regarding the second TIME dimension hierarchy; I think the problem is there.
Best regards,
iaroslav
Hi,
Here is fragment of debuglogic.
Start time --->7:29:17 PM - Date:9/28/2011 (build code:7.5.107)
User:T2RU\bpc.sysadmin
Appset:DEVTELE2
App:BUDGETS
Logic mode:0
Logic by:
Scope by:CATEGORY,INPUTCURRENCY,P_BF,P_BUDDIM,P_BUDDIM2,P_BUSTYPE,P_CC,P_CITY,P_DATASRC,P_PROJECT,TIME
Data File:
Debug File:D:\PC_MS\DATA\WebFolders\DEVTELE2\BUDGETS\PrivatePublications\bpc.sysadmin\TempFiles\DebugLogic.log
Logic File:
Selection:DIMENSION:CATEGORY|BU|DIMENSION:INPUTCURRENCY|RUR|DIMENSION:P_BF|IBF19|DIMENSION:P_BUDDIM|NO_BUDDIM|DIMENSION:P_BUDDIM2|NO_BUDDIM2|DIMENSION:P_BUSTYPE|NO_PRODUCT|DIMENSION:P_CC|3_KEM_FF_FA|DIMENSION:P_CITY|KEM|DIMENSION:P_DATASRC|CCDATA|DIMENSION:P_PROJECT|PR_PL|DIMENSION:TIME|2012.JAN|
Run mode:1
Query size:0
Delim:,
Query type:0
Simulation:0
Calc diff.:1
Formula script:
Max Members:
Test mode:0
Is Modelling:1
Query Type:0
Max members:
Region:
DIMENSION:CATEGORY
BU
DIMENSION:INPUTCURRENCY
RUR
DIMENSION:P_ACCT
----- There a lot of accounts <All> accounts and I dont show it all
DIMENSION:P_BF
IBF19
DIMENSION:P_BUDDIM
NO_BUDDIM
DIMENSION:P_BUDDIM2
NO_BUDDIM2
DIMENSION:P_BUSTYPE
NO_PRODUCT
DIMENSION:P_CC
3_KEM_FF_FA
DIMENSION:P_CITY
KEM
DIMENSION:P_DATASRC
CCDATA
DIMENSION:P_PROJECT
PR_PL
DIMENSION:TIME
2012.JAN
DIMENSION:P_CC
3_KIR_FF_FA
DIMENSION:CATEGORY
BU
DIMENSION:P_DATASRC
CCDATA
DIMENSION:INPUTCURRENCY
RUR
DIMENSION:P_BF
IBF19, IBF15
DIMENSION:P_BUDDIM
NO_BUDDIM
DIMENSION:P_BUSTYPE
NO_PRODUCT
DIMENSION:P_BUDDIM2
NO_BUDDIM2
DIMENSION:TIME
PRIOR, %YEAR%.JAN, %YEAR%.FEB, %YEAR%.MAR, %YEAR%.APR, %YEAR%.MAY, %YEAR%.JUN, %YEAR%.JUL, %YEAR%.AUG, %YEAR%.SEP, %YEAR%.OCT, %YEAR%.NOV, %YEAR%.DEC
DIMENSION:P_ACCT
<ALL>
Loading TIME.TIMEID
Time to load properties:0.0 sec.
select P_ACCT,P_BF,TIMEID,SIGNEDDATA
into #tblTempLogic_266166
from tblFactBUDGETS
WHERE CATEGORY=N'BU' AND INPUTCURRENCY=N'RUR' AND P_BF in (N'IBF19',N'IBF15') AND P_BUDDIM=N'NO_BUDDIM' AND P_BUDDIM2=N'NO_BUDDIM2' AND P_BUSTYPE=N'NO_PRODUCT' AND P_CC=N'3_KIR_FF_FA' AND P_CITY=N'KIR' AND P_DATASRC=N'CCDATA' AND P_PROJECT=N'PR_PL' AND TIMEID in (N'20120400',N'20120800',N'20121200',N'10000425',N'20120200',N'20120100',N'20120700',N'20120600',N'20120300',N'20120500',N'20121100',N'20121000',N'20120900')
insert into #tblTempLogic_266166 (P_ACCT,P_BF,TIMEID,SIGNEDDATA)
select P_ACCT,P_BF,TIMEID,SIGNEDDATA
from tblFactWBBUDGETS
WHERE CATEGORY=N'BU' AND INPUTCURRENCY=N'RUR' AND P_BF in (N'IBF19',N'IBF15') AND P_BUDDIM=N'NO_BUDDIM' AND P_BUDDIM2=N'NO_BUDDIM2' AND P_BUSTYPE=N'NO_PRODUCT' AND P_CC=N'3_KIR_FF_FA' AND P_CITY=N'KIR' AND P_DATASRC=N'CCDATA' AND P_PROJECT=N'PR_PL' AND TIMEID in (N'20120400',N'20120800',N'20121200',N'10000425',N'20120200',N'20120100',N'20120700',N'20120600',N'20120300',N'20120500',N'20121100',N'20121000',N'20120900')
and SOURCE = 0
insert into #tblTempLogic_266166 (P_ACCT,P_BF,TIMEID,SIGNEDDATA)
select P_ACCT,P_BF,TIMEID,SIGNEDDATA
from tblFAC2BUDGETS
WHERE CATEGORY=N'BU' AND INPUTCURRENCY=N'RUR' AND P_BF in (N'IBF19',N'IBF15') AND P_BUDDIM=N'NO_BUDDIM' AND P_BUDDIM2=N'NO_BUDDIM2' AND P_BUSTYPE=N'NO_PRODUCT' AND P_CC=N'3_KIR_FF_FA' AND P_CITY=N'KIR' AND P_DATASRC=N'CCDATA' AND P_PROJECT=N'PR_PL' AND TIMEID in (N'20120400',N'20120800',N'20121200',N'10000425',N'20120200',N'20120100',N'20120700',N'20120600',N'20120300',N'20120500',N'20121100',N'20121000',N'20120900')
select tmpTable.P_ACCT,tmpTable.P_BF,tmpTable.TIMEID,sum(SIGNEDDATA) as SIGNEDDATA
from #tblTempLogic_266166 as tmpTable
group by tmpTable.P_ACCT,tmpTable.P_BF,tmpTable.TIMEID
drop table #tblTempLogic_266166
I saw that TIMEID 10000425 is incorrect - it's 2012.DEC.WEEK3 from the second hierarchy.
Thanks,
Best regards,
Iaroslav -
How to create Base Dimensions with MaxL and Text File?
Hi,
Doing a scratch rebuild of a cube every month. I don't want to have a 'dummy' outline with base dimensions to copy over every build; instead I want to build from a text file somehow. Thus my plan is to: 1) Delete the existing app/db, 2) Create a new blank app/db, 3) Create the base dimensions in the outline via text file, and 4) Build the entire outline via a text file. I'm stuck on #3: how to get the 'base dimensions' built via text file. I need:
ACCOUNTS
PERIOD
VALUE
VIEWS
SCENARIO
CUSTOM4
YEAR
CUSTOM3
CUSTOM2
ENTITY
CUSTOM1
I see this MaxL, but it uses a 'rules file' and I have never built a rules file to create base dims, so I'm confused about whether it's possible or not...
import database sample.basic dimensions
from data_file '/data/calcdat.txt'
using rules_file '/data/rulesfile.rul'
on error append to '/logs/dimbuild.log';
We rebuild our Departments and Organization from an enterprise hierarchy master each week.
The way we implemented (what you call #3) was to not do #1 and #2, but to have a "destructive" load rule for each of these dimensions using a text file. (in the "Dimension Build Settings" for the load rule, select "Remove Unspecified" to make it destructive)
The text file just has the dimension name (parent) and any children we needed defined in a parent/child relationship. For instance
"Sales Departments" "0100-All Departments"
This essentially works the same as deleting the app because the destructive load rules will drop all the blocks of data that were unspecified.
Then we run our SQL load rule to build the rest of the dimensions from the Location Master.
We perform a level-0 export prior to this process, then reload the level-0 data and execute all the consolidation scripts to get the data back (now in a current enterprise defined hierarchy) -
Finding a common Time Dimension in DSO and InfoCube
Hi,
I am working on creating a MultiProvider which is a combination of 2 COPA InfoCubes, Open Orders DSO and a Shipment Data DSO.
I want to use the 'Requested Delivery Period' from Open Orders DSO and a custom field ZSHIPMONTH from the Shipments Data DSO for a common Period dimension in the MultiProvider but that does not look like an option when I tried to assign the 0FISCPER in the MultiProvider to these fields in the DSO. I do not see the fields from the DSOs as options for the period field here, I only see the 0FISCPER from the two COPA InfoCubes as seen in the screenshot attached.
The only alternative I can think of is to create separate InfoCubes on top of these two DSOs and then use a MultiProvider.
I learned that reporting is allowed on top of DSOs from 7.3 onwards; is there any easier way to get a common time dimension across these InfoCubes and DSOs for a MultiProvider?
Hi,
in both DSO and Cube the data is stored as
Sales Doc1 M1 10 100
Sales Doc1 M2 5 50
Sales Doc2 M1 20 200.
The material M1 in sales doc1 does not get overwritten with M1 in sales doc 2 in DSO since the data is updated based on sales doc no.
If you try to view the contents of the data target, or analyse the data at reporting level with all these fields, then it displays as above.
But if you ignore the sales document field, then the data is displayed as:
M1 20 300
M2 5 50.
In the Cube, suppose you don't have the sales doc; then
Material | Qty | Amount
M1 30 300
M2 5 50
and in DSO.
Material | Qty | Amount
M1 20 100
M2 5 50
Thanks,
Sandeep -
OLAP time dimension - how are the weeks allocated
Could someone please help me understand what logic Oracle uses for its OLAP time dimension based on weeks.
I am using Oracle OLAP 11.2.
I have a Time dimension that has the following hierarchy:-
YEAR
QUARTER
MONTH
WEEK
For calculating the week ID I make use of the ISO week and year, i.e. IYYYIW.
For calculating the end date I use the following:- NEXT_DAY( l_date, 'MON' ) -1
i.e. the weeks end date is the Sunday.
According to me this is the required result.
The problem is that some months have only 3 weeks allocated, which makes no sense to me.
I cannot understand the logic used in allocating the weeks.
The following is an example:-
the following weeks were allocated to the month February
201306 (end date= 10-2-2013)
201307 (end date= 17-2-2013)
201308 (end date= 24-2-2013)
but the following week was allocated to January, which makes no sense to me;
I would have expected it to be found in February:
201305 (end date= 3-2-2013)
Week 201309 (end date= 3-3-2013) was allocated to March, which according to me is correct.
Another example is week *201030 (end date= 1-8-2010)*, which is allocated to July while I would have expected August.
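One plausible explanation for these allocations (an assumption on my part, not documented Oracle behavior) is a majority-of-days rule: a week is assigned to whichever month contains most of its seven days. A quick check in Python:

```python
from datetime import date, timedelta

def week_days_by_month(week_end):
    """Count how many days of the 7-day week ending on week_end fall in each month."""
    counts = {}
    for i in range(7):
        d = week_end - timedelta(days=i)
        key = (d.year, d.month)
        counts[key] = counts.get(key, 0) + 1
    return counts

# Week 201305 ends on Sunday 3-2-2013: four of its seven days are in January,
# which would explain why it lands in January rather than February.
print(week_days_by_month(date(2013, 2, 3)))
```

The same rule puts week 201030 (ending 1-8-2010) in July, since six of its days fall in July - consistent with both surprising cases in the question.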
I would have thought that it uses the end date that is placed in the mapping to determine the month to place it in.
Could someone please explain what the reason for this could be?
Oracle OLAP model/design (in this case, at least) cannot compensate to cover up existing flaws in the relational model/design.
I don't think this is a valid hierarchy, even considering the relational model/design.
Weeks do not fit in below Months (calendar months).
You can force fit them by making Weeks the source of truth and making all Months logical Months like "Business Month", "Business Quarter", "Business Year" etc. which will be composed of whole weeks but differ significantly from the calendar definition of Month, Quarter, Year etc.
You are better off modeling time as composed of two hierarchies:
H1: DAY -> MONTH -> QUARTER -> YEAR and
H2: DAY -> WEEK
Alternately, if you don't want to introduce a DAY level (lower granularity), you can split the Time dimension into two different dimensions, and also modify your star schema to add an additional time dimension key to the fact table - one key pointing to WEEK and another pointing to MONTH (independently).
TIME_MO: MONTH -> QUARTER -> YEAR
TIME_WK: WEEK
The fact data will need to be split up to identify the correct MONTH-WEEK combination for each relational record.
E.g:
The fact table should have 2 dimension FKs (WK_KEY as well as MO_KEY), so that a fact figure of 1000 evaluated over 7 days is split up: 300 from the 29th-31st is attached to the previous month's MO_KEY, and another record of 700 covering days 1-4 is tied to the same WK_KEY but the next month's MO_KEY.
HTH
Shankar -
Order changes in Monthly and Weekly Data View
Hi,
I need some of your suggestions for my problem below:
I have a planning book with two data views (one with monthly buckets and the other with weekly buckets). Now I create three forecast orders in the weekly data view as below:
Quantity Date
10 05.11.2007
20 12.11.2007
30 19.11.2007
So I can see the sum as 60 in the monthly data view. Now I try to modify (deduct by 5) the forecast order from the monthly data view. The sum 60 changes to 55, but when I look at the details of the order it looks strange to me:
11 26.11.2007
13 19.11.2007
13 12.11.2007
13 05.11.2007
5 01.11.2007
Please let me know: what is the logic behind this?
But if I perform the above calculation for the category 'HG' I get the result like this:
Weekly Buckets
10 05.11.2007
20 12.11.2007
30 19.11.2007
Monthly Buckets
55 01.11.2007
Thanks,
Siva.
Message was edited by:
sivaprakash pandian
Hi Siva,
This is possibly because of the settings in the planning area for the time disaggregation of the key figure.
Can you go to Planning Area administration and see what the setting is in the key figure disaggregation tab?
I think it needs to be 'P' for proportional distribution, which will use previously existing proportions for new quantities.
Instead, in your case it is ignoring previously existing proportions and distributing afresh. I am not sure what type that is (K?);
F1 on the time-based disaggregation field will give you a better idea.
If you want to know why the 11 and 5 are at the ends and 13 is in the middle buckets: the storage buckets for the weeks have 6, 7, 7, 7 and 2 days, and 55/30 multiplied by these day counts is how the numbers are calculated.
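The day-weighted idea can be sketched as follows (the rounding below is my guess and does not reproduce SAP's exact 5/13/13/13/11 split, so treat it as the principle only, not the actual APO algorithm):

```python
def disaggregate(total, day_weights):
    """Spread a monthly total across weekly buckets in proportion to the
    number of days each bucket contributes, rounding to whole units and
    pushing any rounding remainder into the last bucket."""
    whole = sum(day_weights)
    parts = [round(total * w / whole) for w in day_weights]
    parts[-1] += total - sum(parts)  # keep the bucket sum equal to the total
    return parts

# A 55 split across hypothetical November buckets of 4, 7, 7, 7 and 5 days:
print(disaggregate(55, [4, 7, 7, 7, 5]))  # -> [7, 13, 13, 13, 9]
```

Whatever the exact rounding, the invariant is that the bucket values always sum back to the monthly total, which is why the monthly view shows 55.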
there is also some help here
http://help.sap.com/saphelp_scm50/helpdata/en/73/3e1167347111d398290000e8a49608/frameset.htm -
Advice needed: join fact to time dimension with between clause
Hi All,
I've got one dimension and two fact tables. The one dimension could serve as a time dimension (it is not specifically marked as a time dimension).
My Tables look like this (simplified)
Dim1:
date_pk (unique identifier)
date
month
year
fact1:
iid_date (foreign key to date_pk)
fact1_amount
Fact2:
begin_date
end_date
fact2_amount
In the physical layer i have a complex join between fact 1 and dim1 (date_pk = idd_date) and a complex join between fact2 and dim1 (dim1.date between fact2.begin_date and fact2.end_date and dim1.date <= CURRENT_DATE). In the business model i have complex joins between fact1 and dim1 and a complex join between fact2 and dim1 without further specification.
What I would like to achieve in Answers is that I select a Year and a Month from Dim1, add fact1_amount and fact2_amount to the same report, and have the report show both amounts. I would like some advice on how to set this up. Furthermore, how do I add a drill from year to month to date, and what should I do when I want to add more facts joined to the same Dim1?
Any Advice is greatly appreciated
Gilles
Hello MMA1709,
You're right, this setup works!
But...
When you add an hierarchy and mark it as a time dimension it doesn't work anymore. It gives the following error in the consistency checker:
[38086] Physical table Dim1 in the time dimenison table source Dim1 is part of a complex join condition
And that means you cannot use any time-based calculations (AGO and TODATE). When I just create a hierarchy and do not mark it as a time dimension, the hierarchy works well.
Any suggestions?