SSM - Doing a Weighted Average in Time Series Consolidation
Hi,
We have a requirement wherein the Yearly/YTD view of a scorecard has to be a weighted average of the quarterly values.
Please let me know if there is any way to implement this while doing a time series consolidation.
Thanks,
Peeyush
Hi Peeyush,
You can use the following standard methods for time consolidation:
SUM,
FIRST DATA VALUE,
LAST DATA VALUE,
INCLUSIVE AVERAGE,
EXCLUSIVE AVERAGE,
WEIGHTED ON ANOTHER MEASURE
The command in IDQL to set the time consolidation for a measure to WEIGHTED is the following:
SET VAR kpi1 WEIGHTED kpi2
You can also use the Measure Properties dialog box (in the tab 'Numeric') to set this property. You may need to first issue the command SET SHORT in IDQL in order to correctly choose a variable selected from the dropdown list.
If the measure kpi1 is monthly, for example, then the quarterly and yearly values will now display as weighted average with kpi2.
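The arithmetic behind WEIGHTED consolidation can be sketched outside of SSM (a hypothetical Python illustration of the idea behind `SET VAR kpi1 WEIGHTED kpi2`, not IDQL; the function name and quarterly inputs are assumptions):

```python
def yearly_weighted_average(kpi1_quarters, kpi2_quarters):
    """Yearly value of kpi1 as an average of its quarterly values,
    weighted by a second measure kpi2 (e.g. volumes)."""
    weighted_sum = sum(v * w for v, w in zip(kpi1_quarters, kpi2_quarters))
    return weighted_sum / sum(kpi2_quarters)
```

With equal weights this reduces to a plain average of the quarters; with all the weight in one quarter it returns that quarter's value.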
Hope this helps!
Best regards!
Ricardo
Similar Messages
-
How to do an average on time series data?
I need to generate average hold times for various stock of companies as follows:
The data looks like:
stock timestamp (sec) quantity
GOOG 12459.6 -100 <-- SALE
GOOG 12634.0 +100 <-- PURCHASE
GOOG 12636.2 +200
GOOG 12464.8 -100
GOOG 12568.3 -300
GOOG 12678.0 +200
The rules are
1. begin and end day with balance 0
2. can short sell, i.e. can sell shares even if balance is currently 0
3. hold time is defined as number of seconds stock was held before it was sold
4. first stock purchased are sold first
I need to generate the average hold times seconds per share. I'd prefer to do this using SQL and NOT a procedure.
Any tips on how to go about calculating this? I have looked at various analytic functions, but still not sure.
Thank you.
I'm afraid you might be after something like below:
This is a simplified scenario where the quantity balance always reaches 0 before changing sign (not very probable in real life).
"Simple examples are reserved for the lecturer" was a pretty common phrase in my university days.
I don't know how to deal with the general case yet.
select * from trade_0 order by position,time
TIME  POSITION  DIRECTION  QUANTITY
   8  GOOG      S               100
  13  GOOG      B                20
  16  GOOG      B                30
  17  GOOG      B                30
  19  GOOG      B                20
  22  GOOG      B                20
  25  GOOG      B                30
  26  GOOG      B                20
  30  GOOG      B                30
  34  GOOG      B                20
  38  GOOG      B                30
  41  GOOG      S               150
   7  YHOO      S                10
  12  YHOO      S                20
  15  YHOO      S                30
  16  YHOO      S                40
  18  YHOO      S                60
  21  YHOO      S                30
  24  YHOO      S                10
  25  YHOO      B               100
  29  YHOO      B               300
  33  YHOO      S               100
  37  YHOO      S                80
  40  YHOO      S                20
Your condition 4 (first stock purchased is sold first) requires a procedural solution, so the MODEL clause must be used if you want to do it in SQL.
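The FIFO matching itself can be sketched procedurally (a hypothetical Python sketch of the matching idea, not the poster's SQL; note that this quantity-weighted average differs from the row-based average the MODEL query computes):

```python
from collections import deque

def average_hold_time(trades):
    """FIFO-match buys against sells (short sales allowed) and return
    the quantity-weighted average hold time in seconds.
    trades: list of (time, direction, quantity), direction 'B' or 'S'.
    Assumes the day starts and ends with a zero balance."""
    buys, sells = deque(), deque()   # open (time, qty) lots
    total_secs = 0.0
    total_qty = 0
    for time, direction, qty in sorted(trades):
        queue_same, queue_opp = (buys, sells) if direction == 'B' else (sells, buys)
        # match against the oldest opposite lots first (FIFO)
        while qty and queue_opp:
            t0, q0 = queue_opp[0]
            matched = min(qty, q0)
            total_secs += matched * abs(time - t0)
            total_qty += matched
            qty -= matched
            if matched == q0:
                queue_opp.popleft()
            else:
                queue_opp[0] = (t0, q0 - matched)
        if qty:                      # leftover opens a new lot
            queue_same.append((time, qty))
    return total_secs / total_qty
```

On the GOOG data below this returns 10.2 seconds per share, slightly different from the 10.4 in the MODEL output, because the MODEL query averages per-row time differences rather than weighting by matched quantity.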
Model Men, bear with me, please !
select m.*,
avg(abs(x_time - decode(kind,'B',time_b,time_s))) over (partition by position
order by rn rows between unbounded preceding
and unbounded following
) average
from (select *
from (select nvl(b.position,s.position) position,
nvl(b.rn,s.rn) rn,
nvl(b.cnt,0) cnt_b,
nvl(s.cnt,0) cnt_s,
b.time time_b,
s.time time_s,
b.quantity qty_b,
s.quantity qty_s
from (select time,position,quantity,
row_number() over (partition by position order by time) rn,
count(*) over (partition by position) cnt
from trade_0
where direction = 'B'
) b
full outer join
(select time,position,quantity,
row_number() over (partition by position order by time) rn,
count(*) over (partition by position) cnt
from trade_0
where direction = 'S'
) s
on b.position = s.position
and b.rn = s.rn
)
model
partition by (position)
dimension by (rn)
measures (0 loc,
case when cnt_b >= cnt_s then 'B' else 'S' end kind,
time_b,
time_s,
qty_b,
qty_s,
0 qty_left,
0 x_time
)
rules iterate (1000000) until (loc[iteration_number] is null)
(
loc[0] = nvl2(loc[0],loc[0],1),
qty_left[loc[0]] = case when iteration_number > 0
then qty_left[loc[0]] + case when kind[iteration_number] = 'B'
then qty_b[iteration_number]
else qty_s[iteration_number]
end
else 0
end,
x_time[iteration_number] = case when kind[iteration_number] = 'B'
then time_s[loc[0]]
else time_b[loc[0]]
end,
loc[0] = loc[0] + case when qty_left[loc[0]] = case when kind[iteration_number] = 'B'
then qty_s[loc[0]]
else qty_b[loc[0]]
end
then 1
else 0
end
)
) m
where kind is not null
order by position,rn
POSITION  RN  LOC  KIND  TIME_B  TIME_S  QTY_B  QTY_S  QTY_LEFT  X_TIME  AVERAGE
GOOG       1    0  B         13       8     20    100       100       8     10.4
GOOG       2    0  B         16      41     30    150       150       8     10.4
GOOG       3    0  B         17             30                        8     10.4
GOOG       4    0  B         19             20               0        8     10.4
GOOG       5    0  B         22             20               0       41     10.4
GOOG       6    0  B         25             30               0       41     10.4
GOOG       7    0
-
Dynamic Time Series - Consolidation
Hello,
I verified that the time balance attribute allows controlling member consolidation for different time periods. I have Time and Accounts dimensions in the outline. My Time dimension is divided into Year and quarters, without months.
But what is the default period for TBFIRST, TBLAST and TBAVERAGE in this case?
I guess that TBFIRST will be the first and second quarter of the time dimension, TBLAST the remaining ones; after all, TBAVERAGE will be using the consolidation of the year 2006. Am I right?
Thanks in advance,
Wallace
Yeah, you're basically right with Time Balance metrics...
If you were looking for a TBAverage, this could be useful in Quarter or YTD rollups depending on what kind of measure you are working with.
-
Oracle 8i hase Time series for defining calendars and other functions. How does Oracle 10g/11g support Time series features. I could not find any information about Time Series in the 10g/11g documentation.
Thanks a lot for the responses.
I looked at the 11g Pivot operator and is altogether a new feature compared to the Time series of 8i.
I would like to explain with an example.
1) The following query creates a table named stockdemo_calendars and defines a calendar
named BusinessDays. The BusinessDays calendar includes Mondays through Fridays,
but excludes 28-Nov-1996 and 25-Dec-1996. Explanatory notes follow the example.
CREATE TABLE stockdemo_calendars of ORDSYS.ORDTCalendar (
  name CONSTRAINT calkey PRIMARY KEY);

INSERT INTO stockdemo_calendars VALUES(
  ORDSYS.ORDTCalendar(
    0,
    'BusinessDays',
    4,
    ORDSYS.ORDTPattern(
      ORDSYS.ORDTPatternBits(0,1,1,1,1,1,0),
      TO_DATE('01-JAN-1995','DD-MON-YYYY')),
    TO_DATE('01-JAN-1990','DD-MON-YYYY'),
    TO_DATE('01-JAN-2001','DD-MON-YYYY'),
    ORDSYS.ORDTExceptions(TO_DATE('28-NOV-1996','DD-MON-YYYY'),
                          TO_DATE('25-DEC-1996','DD-MON-YYYY')),
    ORDSYS.ORDTExceptions()));
-------------- How can I create such calendars in 11g?
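For illustration only, the semantics of such a calendar (a weekly bit pattern plus explicit exception dates) can be sketched outside the database (hypothetical Python; the function name and arguments are assumptions, not an Oracle API):

```python
import datetime

def is_business_day(day, pattern=(0, 1, 1, 1, 1, 1, 0), exceptions=()):
    """Membership test mimicking the 8i BusinessDays calendar above:
    a weekly on/off pattern (Sunday..Saturday) plus exception dates."""
    if day in exceptions:
        return False
    # datetime.weekday(): Monday=0..Sunday=6; shift to Sunday-first indexing
    return bool(pattern[(day.weekday() + 1) % 7])
```

In 11g without the Time Series cartridge, the usual approach is a materialized calendar table built by a similar rule.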
2) For example, the following statement returns the last closing prices for stock
SAMCO for the months of October, November, and December of 1996:
select * from the
  (select cast(ORDSYS.TimeSeries.ExtractTable(
                 ORDSYS.TimeSeries.ScaleupLast(
                   ts.close,
                   sc.calendar,
                   to_date('01-OCT-1996','DD-MON-YYYY'),
                   to_date('01-JAN-1997','DD-MON-YYYY'))) as ORDSYS.ORDTNumTab)
     from tsdev.stockdemo_ts ts, tsdev.scale sc
    where ts.ticker = 'SAMCO'
      and sc.name = 'MONTHLY');
This example might produce the following output:
TSTAMP VALUE
01-OCT-96 42.375
01-NOV-96 38.25
01-DEC-96 39.75
3 rows selected.
--------------------- How can I get the above output without Time Series functions and calendars in Oracle 11g? -
How does xMII calculate a time weighted average?
I am in the process of doing data validation for tag query result sets from IP21 process data. The comparison is against an IP21 add-in to Excel which generates numbers that don't correspond to the numbers generated by xMII. The add-in appears to be generating values according to a time-weighted average, but it is returning different numbers. The differences are not large, but it is a virtual certainty that a plant manager or process engineer will take issue. So we (I and my client) would like to know how to explain the difference in the calculation results. We suspect that it involves interpolation for missing data, but would like to know for sure. We are also checking the IP21 algorithm(s) to make sure we understand how it works. Any insight would be appreciated.
Rick/Jeremy,
I am still not getting the same values. My correlation testing is in Excel 2003. In xMII I have the number format set to 0.000000. Typical values in the result set are coming in with five decimal place accuracy. I am using one hour's data at 1 minute intervals with date accuracy to the second and the data is compressed to 52 actual datapoints. The time range is 11/07/2007 00:00:00 to 11/07/2007 01:00:00. It is in HistoryEvent mode. The row count is set to 20000.
When I run the TWA from xMII, I get 22773.576740.
When I ran my first algorithm in Excel, I get 22767.37321. When I made the change to what Rick said, I get closer but am still off and get 22769.35835. Can either of you (or anyone else) explain the delta? I have rerun both calculations twice on all three sides and get the same results.
Here is my data:
DateTime ActualValue
11/07/2007 00:00:01 23841.572266
11/07/2007 00:01:01 23608.240234
11/07/2007 00:02:01 23238.177734
11/07/2007 00:03:01 23373.197266
11/07/2007 00:04:01 23244.857422
11/07/2007 00:05:02 23408.943359
11/07/2007 00:06:01 23138.939453
11/07/2007 00:07:01 23325.080078
11/07/2007 00:08:01 23433.609375
11/07/2007 00:10:01 22783.115234
11/07/2007 00:11:01 22642.421875
11/07/2007 00:12:01 22226.654297
11/07/2007 00:13:01 22741.773438
11/07/2007 00:14:01 22611.898438
11/07/2007 00:16:01 22810.580078
11/07/2007 00:17:01 22404.023438
11/07/2007 00:18:01 22627.966797
11/07/2007 00:19:01 22672.619141
11/07/2007 00:20:02 22268.031250
11/07/2007 00:21:01 21770.136719
11/07/2007 00:22:01 22877.025391
11/07/2007 00:23:01 22565.330078
11/07/2007 00:24:01 22781.949219
11/07/2007 00:25:01 22324.345703
11/07/2007 00:26:01 22170.953125
11/07/2007 00:27:01 21960.464844
11/07/2007 00:30:01 22495.685547
11/07/2007 00:31:01 21899.714844
11/07/2007 00:32:01 22487.603516
11/07/2007 00:33:01 22723.382812
11/07/2007 00:34:01 22461.189453
11/07/2007 00:35:01 22823.869141
11/07/2007 00:36:01 22599.085938
11/07/2007 00:37:01 22902.572266
11/07/2007 00:41:01 22894.148438
11/07/2007 00:42:01 22724.144531
11/07/2007 00:43:01 22659.437500
11/07/2007 00:44:01 23105.324219
11/07/2007 00:45:01 22634.763672
11/07/2007 00:46:01 22929.582031
11/07/2007 00:47:01 22475.185547
11/07/2007 00:48:01 22739.755859
11/07/2007 00:50:01 22864.958984
11/07/2007 00:51:01 22556.917969
11/07/2007 00:52:01 22794.998047
11/07/2007 00:53:02 22934.748047
11/07/2007 00:54:01 23206.296875
11/07/2007 00:55:03 22412.509766
11/07/2007 00:56:02 22935.718750
11/07/2007 00:57:01 23924.460938
11/07/2007 00:58:01 22918.369141
11/07/2007 00:59:02 23048.748047
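For reference, one common interpretation of a time-weighted average over irregularly spaced samples is a zero-order hold, where each value is weighted by the time until the next sample. This is only a sketch of one possible algorithm; IP21 and xMII may interpolate differently (e.g. linearly), which is exactly the kind of thing that would explain the delta:

```python
def time_weighted_average(samples, t_end):
    """Zero-order-hold time-weighted average: each value is held
    until the next sample time; the last value is held to t_end.
    samples: list of (t_seconds, value), sorted by time."""
    total = 0.0
    for (t0, v0), (t1, _) in zip(samples, samples[1:]):
        total += v0 * (t1 - t0)
    t_last, v_last = samples[-1]
    total += v_last * (t_end - t_last)
    return total / (t_end - samples[0][0])
```

Replacing the hold with linear interpolation between samples gives a slightly different number on the same data, on the order of the discrepancies described above.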
Message was edited by: MAppleby
Michael Appleby -
Averaging FFT of mutiple time series to get averaged fft results
Hi
I have multiple time series for the same set of measurements done at different times. They are not all of the same length, but the sampling frequency is the same for all of them. When I do an FFT of an individual time series I get magnitude & phase.
Is it correct to do a linear average of the FFT magnitudes and phases, or is it necessary to convert magnitude and phase into real and imaginary parts, average the real and imaginary parts for each frequency, and then transform them back into magnitude and phase?
To me the latter appears correct. I just want to confirm whether there is a simpler way to achieve this.
Thanks in advance
Isha
If the data sets are of different lengths, then the FFTs will have different frequency resolutions and cannot be averaged by any simple technique. I suggest that you determine the longest possible data set and then zero-pad all data sets to that length before doing the FFTs. Or take subsets of the data where all the subsets are of the same length. After all the data sets are adjusted to the same lengths, I think you can average the magnitudes and phases separately.
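The complex (real/imaginary) averaging the question describes can be sketched as follows (hypothetical Python using only the standard library; equal-length spectra assumed):

```python
import cmath

def average_spectra(mags_list, phases_list):
    """Coherently average several spectra: convert each (magnitude,
    phase) bin to a complex value, average the complex values per
    frequency bin, then convert back.  All spectra must have the
    same length (same frequency resolution)."""
    n = len(mags_list)
    avg_mag, avg_phase = [], []
    # zip(*...) groups the k-th bin of every spectrum together
    for bins in zip(*[
            [cmath.rect(m, p) for m, p in zip(mags, phases)]
            for mags, phases in zip(mags_list, phases_list)]):
        z = sum(bins) / n
        avg_mag.append(abs(z))
        avg_phase.append(cmath.phase(z))
    return avg_mag, avg_phase
```

Note the difference from averaging magnitudes directly: two equal-magnitude, opposite-phase bins average to zero coherently, but to the full magnitude if magnitudes are averaged separately.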
Lynn -
Error in Source System, Time series does not exist
Hi Guys,
I am loading data from the APO system and I am getting the below error after scheduling the InfoPackages. Can you analyze and let me know your suggestions?
Error Message: Time series does not exist,
Error in Source System
I have pasted the status message below:
Diagnosis
An error occurred in the source system.
System Response
Caller 09 contains an error message.
Further analysis:
The error occurred in Extractor .
Refer to the error message.
Procedure
How you remove the error depends on the error message.
Note
If the source system is a Client Workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory, that it is not being processed at the moment, and restart the request.
Thanks,
YJ
Hi,
You'd better search for the notes with the message "Time series does not exist". You will get nearly 18 notes. Go through each note, see the relevance to your problem, and do the needful as mentioned in the note.
Few notes are:
528028,542946,367951,391403,362386.
With rgds,
Anil Kumar Sharma .P -
I've created a time series graph in Numbers on my iMac, but I need to add moving averages. Is there a function, and if so where is it? If not, is there a way to work around it?
Badunit,
Here is an example plot, data sorted from most recent data at top of the table...
You can see the moving average (of 20) is plotted from right to left.
The Moving Average calculation is now wrong, and should have been calculated and presented from oldest to most recent.
Here is the same data, with table sorted from oldest data at the top of the table.
The moving average is also plotted from right to left, and shows the correct Moving Average for the most recent data.
That is, it is calculated from oldest to most recent, with the last Moving Average data point plotted on today's date.
What I want to see is my table displayed from most recent at the top (the top table), and moving average calculated and displayed as per the bottom graph.
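The desired behavior can be sketched outside Numbers (a hypothetical Python illustration: compute the trailing moving average in chronological order, independent of how the table is displayed):

```python
def moving_average(values, window=20):
    """Trailing moving average computed oldest-to-newest.
    values must be in chronological order (oldest first); if the
    table is sorted newest-first, reverse it before calling.
    Returns None for positions without enough history."""
    out = []
    running = 0.0
    for i, v in enumerate(values):
        running += v
        if i >= window:
            running -= values[i - window]   # drop the value leaving the window
        out.append(running / window if i >= window - 1 else None)
    return out
```

Sorting the displayed table newest-first is then purely a presentation choice; the calculation order stays fixed.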
Edit: So, thinking about this some more,
I need an option to tell Numbers to do the Moving Average calculation from the bottom of the table up, not from the top of the table down. -
TB Average and Dynamic Time series
Hi,
In our outline, we have the Time dimension tagged as Dynamic Time Series.
There are a few measures which have member formulas attached; they are dynamic calc, two pass.
Among these, the measures tagged as TB Average and Skip Missing return zero for QTD/YTD on retrieval.
But other measures which are not TB Average and Skip Missing return values for QTD and YTD.
Please help!!
Just an update on the above query:
I was able to see numbers when I used the attribute dimension.
For example:
We have 5 standard dimensions and 2 attribute dimensions which are attached to the LOB standard dimension.
In the Excel add-in, when I retrieve without the attribute dimension I don't see numbers, and when I put the attribute dimension in the header, I see numbers.
Why is this attribute dimension making a difference? -
Time Series Function not doing it right
Hello guys
I have set up the time dimension hierarchy in Year - Quarter - Month - Day order.
I have a measure called 'Forward Amt' which needs a month-to-date calculation applied, therefore I am using the time series functions.
I have copied the same measure, renamed it 'MTD Forward Amt', and defined the TODATE function according to the syntax:
TODATE("Forward Details"."Forward fact"."Forward Amt", "Forward Details"."DatesDim"."Dates Month")
I set everything according to the standard steps
In the presentation layer, however, when I run reports using date = 03/10/2010, I get:
Month Forward Amt MTD Forward Amt
3 3000000
The $3000000 is actually the correct value for the date 03/10/2010, but the MTD Forward Amt is empty.
For further investigations, I decided to take out the date column and date filter, then I got:
Month Forward Amt MTD Forward AMT
1 500000000
2 10000000
3 349000000
4 500000000
5 10000000
6 349000000
I don't think this is the correct result of the TODATE function at month level.
Could anybody suggest any approach for me to investigate this behavior?
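For reference, what a month-to-date accumulation should produce can be sketched (hypothetical Python, not OBIEE's implementation of TODATE):

```python
def month_to_date(daily):
    """Month-to-date running total, reset at each month boundary.
    daily: chronologically sorted list of ((year, month, day), amount)."""
    out = []
    running = 0.0
    current = None
    for (y, m, d), amt in daily:
        if (y, m) != current:          # new month: reset the accumulator
            current, running = (y, m), 0.0
        running += amt
        out.append(((y, m, d), running))
    return out
```

Comparing the report's numbers against this kind of reference often shows whether the grain of the fact table or the level key is the problem.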
ThanksI’ve got Live Trace so I can’t vouch for Image Trace, but if what Monika says is right (and it usually is) you can select the areas with black fill and lock them. Then delete the stuff that isn’t coloured.
I believe that Image Trace has the possibility of producing stacks rather than compounds (like Streamline in the old days).
You should definitely go for stacks. They make editing much easier. -
Scatter plot using time series function - Flash charting
Apex 3 + XE + XP
I am trying to build a time series scatter plot chart using flash chart component.
Situation :
On each scout date counts are taken within each crop. I want to order them by scout dates and display them in a time series chart. Each series represents different crop.
I am posting the two series queries I used
Queries:
Series 1
select null LINK, "SCOUTDATES"."SCOUTDATE" LABEL, INSECTDISEASESCOUT.AVERAGECOUNT as "AVERAGE COUNT" from "COUNTY" "COUNTY",
"FIELD" "FIELD",
"VARIETYLIST" "VARIETYLIST",
"INSECTDISEASESCOUT" "INSECTDISEASESCOUT",
"SCOUTDATES" "SCOUTDATES",
"CROP" "CROP"
where "SCOUTDATES"."CROPID"="CROP"."CROPID"
and "SCOUTDATES"."SCOUTID"="INSECTDISEASESCOUT"."SCOUTID"
and "CROP"."VARIETYID"="VARIETYLIST"."VARIETYLISTID"
and "CROP"."FIELDID"="FIELD"."FIELDID"
and "FIELD"."COUNTYID"="COUNTY"."COUNTYID"
and "INSECTDISEASESCOUT"."PESTNAME" ='APHIDS'
and "VARIETYLIST"."VARIETYNAME" ='SUGARSNAX'
and "COUNTY"."COUNTNAME" ='Kings' AND CROP.CROPID=1
order by SCOUTDATES.SCOUTDATE ASC
Series 2:
select null LINK, "SCOUTDATES"."SCOUTDATE" LABEL, INSECTDISEASESCOUT.AVERAGECOUNT as "AVERAGE COUNT" from "COUNTY" "COUNTY",
"FIELD" "FIELD",
"VARIETYLIST" "VARIETYLIST",
"INSECTDISEASESCOUT" "INSECTDISEASESCOUT",
"SCOUTDATES" "SCOUTDATES",
"CROP" "CROP"
where "SCOUTDATES"."CROPID"="CROP"."CROPID"
and "SCOUTDATES"."SCOUTID"="INSECTDISEASESCOUT"."SCOUTID"
and "CROP"."VARIETYID"="VARIETYLIST"."VARIETYLISTID"
and "CROP"."FIELDID"="FIELD"."FIELDID"
and "FIELD"."COUNTYID"="COUNTY"."COUNTYID"
and "INSECTDISEASESCOUT"."PESTNAME" ='APHIDS'
and "VARIETYLIST"."VARIETYNAME" ='SUGARSNAX'
and "COUNTY"."COUNTNAME" ='Kings' AND CROP.CROPID=4
order by SCOUTDATES.SCOUTDATE ASC
Problem
As you can see, the observations are ordered by scout date. However, when the chart appears, the dates don't appear in order. The chart displays the data from crop 1 followed by the crop 4 data, which is not exactly a time series chart. Does flash chart support time series, or does it not know that the data type is a date and should be progressive in the chart? I tried to use to_char(date,'j') to convert them and apply the same principle, but it did not help either.
Any suggestions ?
Message was edited by:
tarumugam
Message was edited by:
aru
Arumugam,
All labels are treated as strings, so APEX will not compare them as dates.
There are two workarounds to get all your data in the right order:
1) Combine the SQL statements into single-query multi-series format, something like this:
select null LINK,
"SCOUTDATES"."SCOUTDATE" LABEL,
decode(CROP.CROPID,1,INSECTDISEASESCOUT.AVERAGECOUNT) as "Crop 1",
decode(CROP.CROPID,4,INSECTDISEASESCOUT.AVERAGECOUNT) as "Crop 4"
from "COUNTY" "COUNTY",
"FIELD" "FIELD",
"VARIETYLIST" "VARIETYLIST",
"INSECTDISEASESCOUT" "INSECTDISEASESCOUT",
"SCOUTDATES" "SCOUTDATES",
"CROP" "CROP"
where "SCOUTDATES"."CROPID"="CROP"."CROPID"
and "SCOUTDATES"."SCOUTID"="INSECTDISEASESCOUT"."SCOUTID"
and "CROP"."VARIETYID"="VARIETYLIST"."VARIETYLISTID"
and "CROP"."FIELDID"="FIELD"."FIELDID"
and "FIELD"."COUNTYID"="COUNTY"."COUNTYID"
and "INSECTDISEASESCOUT"."PESTNAME" ='APHIDS'
and "VARIETYLIST"."VARIETYNAME" ='SUGARSNAX'
and "COUNTY"."COUNTNAME" ='Kings'
AND CROP.CROPID in (1,4)
order by SCOUTDATES.SCOUTDATE ASC
2) Union the full domain of labels into your first query. Then the sorting will be applied to the full list, and the values of the second series will be associated with the matching labels from the first.
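The first workaround can be sketched end-to-end (a hypothetical SQLite/Python illustration of the same decode/pivot idea, using CASE in place of Oracle's decode; the table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE scout (scoutdate TEXT, cropid INTEGER, avgcount REAL)")
con.executemany("INSERT INTO scout VALUES (?,?,?)", [
    ("2007-06-01", 1, 5.0), ("2007-06-01", 4, 2.0),
    ("2007-06-08", 1, 7.5), ("2007-06-08", 4, 3.5),
])
# one query, one ORDER BY: each crop becomes its own column,
# so both series share a single, correctly sorted label axis
rows = con.execute("""
    SELECT scoutdate,
           SUM(CASE WHEN cropid = 1 THEN avgcount END) AS crop1,
           SUM(CASE WHEN cropid = 4 THEN avgcount END) AS crop4
    FROM scout
    GROUP BY scoutdate
    ORDER BY scoutdate
""").fetchall()
```

Because the chart receives one row per label, the label order is decided once by the single ORDER BY instead of per series.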
- Marco -
Leading Zeros of Consumption in historical time series are not considered
We are using the VM MRP type and the weighted average model to calculate the forecast value from historical consumption values. I have 36 historical periods (months) but consumption incurred only in the last period, i.e. "1". I have chosen weighting group 01 (20% for months 1-12, 30% for months 13-24, 50% for months 25-36). With these settings it should calculate the basic value as 1*0.5/12 = 0.042, but it gives "1" as the basic value, not considering the leading zero consumption values in the historical time series. If I put a value in the first period it calculates correctly and considers the leading zeros.
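The expected calculation can be sketched (hypothetical Python; weighting group 01 assumed as 20%/30%/50% per year, oldest year first, spread evenly over each year's 12 months):

```python
def weighted_forecast(history, weights):
    """Basic value as a weighted average over the full history,
    including leading zero periods.
    history: 36 monthly consumption values, oldest first.
    weights: 36 per-year weights (each year's share, repeated
    for its 12 months), summing to 12 so that w/12 sums to 1."""
    assert len(history) == len(weights) == 36
    # each year's weight share is spread over its 12 months
    return sum(v * w / 12 for v, w in zip(history, weights))
```

With consumption only in the last month, this yields 1*0.5/12 = 0.042, the value the forecast should produce when leading zeros are counted.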
We have implemented notes 1113276, 1145764 & 1113277 and have chosen CV_OPTION="X" in the interface per these notes' instructions, but executing the forecast still does not consider the leading zeros.
We are using ECC 6.0 MM with support pack 602.
Need urgent help in this regard.
Have you followed the steps as per SAP Note 1113276? Just check the below link with the same subject discussed; see if it helps you:
[Forecast with consumption zero are not considered |Forecast with consumption zero are not considered] -
Introduction
In SQL Server Reporting Services (SSRS), you may need an average value line on a column chart.
By adding an average score line to the chart, you can see clearly which students' scores are above the average and which are below. This document demonstrates how to add an average line to series groups on an SSRS column chart.
Solution
To achieve this requirement, you can add another value field to the chart and change its chart type to line. Set the value to the average of the series group, and use an expression to make the line show only once. The detailed steps follow.
Click the chart to display the Chart Data pane.
Add Score field to the Values area.
Right-click the new inserted Score1 and select Change Chart Type. And then change chart type to line chart in the Select Chart Type window.
Change the Category Group name to Subject. Here is a screenshot for your reference.
Right-click the new inserted Score1 and select Series Properties.
Click the expression button on the right of Value field textbox, type the expression below:
=Avg(Fields!Score.Value,"Subject")
Click Visibility in the left pane, select “Show or hide based on an expression”, and type in the expression below:
=IIF(Fields!Name.Value="Rancy",FALSE,TRUE)
Name in the expression is one of the students. Only one line is then displayed by using this expression.
Click Legend in the left pane, type Average_Score to the Custom legend text box.
The report then displays the column chart with the Average_Score line.
Applies to
Microsoft SQL Server 2005
Microsoft SQL Server 2008
Microsoft SQL Server 2008 R2
Microsoft SQL Server 2012
Please click to vote if the post helps you. This can be beneficial to other community members reading the thread.
Thanks,
Is this a supported scenario, or does it use unsupported features?
For example, can we call exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='b64ce7ec-d598-45cd-bbc2-ea202e0c129d'
in a supported way?
Thanks! Josh -
How to apply time series function on physical columns in OBIEE RPD
Hi,
I know how to apply the time series functions (AGO and TODATE) using existing logical columns as the source. I have set the chronological key and created a time dimension. In the expression builder for such a column, the Time dimension appears at the top so that we can use its levels.
But I couldn't apply a time series function when I have to create a logical column using physical columns. In the expression builder there, the Time dimension does not appear in the list, and neither can I use any column from the time dimension. Please let me know a way to do it.
Thanks.
Time series functions are - by design and purpose - only valid for derived logical columns and not usable inside physical mappings.
If you want / need to do it on a physical level, then abandon the time series functions and do it the old-school way with multiple LTS instances.