Hierarchy Aggregation
Hi guys,
This is my requirement: I have to load a hierarchy on material numbers.
matnr level kf
12234 1 10
23325 2 24
32342 1 32
I have to aggregate the data based on the hierarchy level.
How is it possible? Can the output be like this (and what do I put in the rows
section in BEx Query Designer to get the levels as below)?
matnr level kf
12234 1 10
32342 1 32
result 42
23325 2 24
Help me out guys, throw any ideas my way.
Your help will be greatly appreciated.
Well, exception aggregation might not fully solve your issue. You can add a characteristic for the hierarchy level and specify the exception aggregation 'first value' for your key figure, selecting the hierarchy level characteristic as the exception characteristic. For your example this works, as it adds up all key figure values for the minimum hierarchy level:
12234 1 10
32342 1 32
result 42
23325 2 24
But if you require a result row for each hierarchy level it will not work, as you can only specify 'first value', 'last value', or something like 'min' or 'max'.
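A rough way to see what is happening (a plain-Python sketch over the example data from the question, not BEx itself):

```python
# Sketch of the desired aggregation: sum the key figure over the rows
# of the minimum hierarchy level, which is what the 'first value'
# exception aggregation over the level characteristic produces for
# this particular data set.
rows = [
    {"matnr": "12234", "level": 1, "kf": 10},
    {"matnr": "23325", "level": 2, "kf": 24},
    {"matnr": "32342", "level": 1, "kf": 32},
]

min_level = min(r["level"] for r in rows)
result = sum(r["kf"] for r in rows if r["level"] == min_level)
print(result)  # 42, matching the 'result' row in the question
```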
Best regards,
Björn
Similar Messages
-
Ragged Hierarchy - aggregation problem
I built a dimension with a ragged hierarchy as posted in http://oracleolap.blogspot.com/2008/01/olap-workshop-4-managing-different.html
in the "Skip, Ragged and Ragged-Skip Level Hierarchies" section.
I use the SCOTT schema for testing.
_1- build dimension emp with 4 levels using this data_
==> these data come from the relation between the EMPNO and MGR columns of the EMP table
LVL1_CODE, LVL1_DESC, LVL2_CODE, LVL2_DESC, LVL3_CODE, LVL3_DESC, LVL4_CODE, LVL4_DESC
7839, KING
7839, KING , 7566, JONES
7839, KING, 7566, JONES, 7788, SCOTT
7839, KING, 7566, JONES, 7788, SCOTT, 7876, ADAMS
7839, KING, 7566, JONES, 7902, FORD
7839, KING , 7566, JONES, 7902, FORD, 7369, SMITH
7839, KING , 7698, BLAKE
7839, KING , 7698, BLAKE, 7499, ALLEN
7839, KING , 7698, BLAKE, 7521, WARD
7839, KING , 7698, BLAKE, 7654, MARTIN
7839, KING , 7698, BLAKE, 7844, TURNER
7839, KING , 7698, BLAKE, 7900, JAMES
7839, KING , 7782, CLARK
7839, KING, 7782, CLARK, 7934, MILLER
_2- build the salary cube using this data_
EMPNO SAL
7369 800
7499 1600
7521 1250
7566 2975
7654 1250
7698 2850
7782 2450
7788 3000
7839 5000
7844 1500
7876 1100
7900 950
7902 3000
7934 1300
The total sum of salary at the top of the hierarchy, "KING", is $9,750, while the correct value must be $29,025.
I notice that, for any node in the hierarchy that has children, the salary sum is the summation of its children only, without the node's own value.
So what is the problem?
I can see the above data, and it looks like you are loading some values at higher levels, e.g. for empno 7566. In a DWH you load data at the leaf level, and the OLAP engine performs the aggregation (solve) and stores data at the higher levels. What you are seeing is correct, as any node's value is equal to the sum of the values of its children.
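Both totals mentioned in this thread ($9,750 observed in the cube, $29,025 expected) can be reproduced with a short Python sketch of the two aggregation rules (illustrative only; the function names are mine):

```python
# EMP salaries and the EMPNO -> MGR relation from the post.
sal = {7369: 800, 7499: 1600, 7521: 1250, 7566: 2975, 7654: 1250,
       7698: 2850, 7782: 2450, 7788: 3000, 7839: 5000, 7844: 1500,
       7876: 1100, 7900: 950, 7902: 3000, 7934: 1300}
mgr = {7566: 7839, 7788: 7566, 7876: 7788, 7902: 7566, 7369: 7902,
       7698: 7839, 7499: 7698, 7521: 7698, 7654: 7698, 7844: 7698,
       7900: 7698, 7782: 7839, 7934: 7782}

def leaf_only(emp):
    """OLAP-style rollup: an inner node is the sum of its children's
    aggregates; its own loaded value is ignored."""
    children = [e for e, m in mgr.items() if m == emp]
    if not children:
        return sal[emp]
    return sum(leaf_only(c) for c in children)

def self_plus_descendants(emp):
    """What the poster expected: every node also contributes its own salary."""
    return sal[emp] + sum(self_plus_descendants(c)
                          for c, m in mgr.items() if m == emp)

print(leaf_only(7839))              # 9750, the value observed in the cube
print(self_plus_descendants(7839))  # 29025, the value the poster expected
```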
Thanks,
Brijesh -
CVC - Hierarchy aggregated level - dis aggregation level
Dear Experts,
I have a simple question:
I just wonder how the system recognizes which characteristic is the higher level and which is the lower level.
I know we assign location and product in the planning area so that the system can recognize them,
but say I have region and brand: how does the system recognize an order like
Profit center -> Region -> Location -> Sales org -> Product and so on?
Regards
Raj
Hi Raj,
The system finds the aggregates based on the characteristics in the MPOS.
I can explain you via an example:
Let us say you have an MPOS with only 3 characteristics (P, L1, L2) and these CVCs:
P L1 L2
A B C
A B D
The above 2 rows (excluding the header) are your CVCs.
A-B-C and A-B-D are the detailed level, and the key figures are stored accordingly.
Now, in your shuffler you selected only A-B as your selection, which includes C and D as well, so:
Key figures for selection A-B (aggregated) = KFs of A-B-C + KFs of A-B-D
Guess this explains.
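The roll-up JB describes can be pictured in a few lines of Python (purely illustrative; the key figure values are invented):

```python
# Key figures stored at the detailed CVC level (values invented).
cvcs = {("A", "B", "C"): 100, ("A", "B", "D"): 50}

def kf_for_selection(prefix):
    """Aggregate: sum the key figures of every CVC the selection covers."""
    return sum(v for cvc, v in cvcs.items() if cvc[:len(prefix)] == prefix)

print(kf_for_selection(("A", "B")))  # 150 = KF(A-B-C) + KF(A-B-D)
```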
Regards
JB -
Reading aggregated data from a cube/multiprovider
Hi BI people
My project is currently looking for a function module that reads aggregated data from a cube/multiprovider.
I already have a function module that reads data from a cube and returns it in a flat format. I have debugged this, but have not found any flags that can enable the OLAP functionality needed to perform the aggregation. The function module is "RSDRI_INFOPROV_READ_RFC".
The situation is that I need to apply the aggregation logic of a profit center hierarchy to the data I read from RSDRI_INFOPROV_READ_RFC. This means manually replicating the OLAP engine functionality (key figure exception aggregation, etc.), and that is not an option with the available time/budget.
Please have a look at the example below:
Say that I have a profit center hierarchy as displayed below (with postable nodes).
PC1 - $10
|---- PC2 - $30
|---- PC3 - $20
The data I'm getting back from the function module RSDRI_INFOPROV_READ_RFC looks like this:
PC1 $10
PC2 $30
PC3 $20
But I need the data aggregated. An aggregation utilizing the hierarchy above will make the data look like this:
PC1 $60
PC2 $30
PC3 $20
Instead of building an aggregation program, it would be useful if it were possible to extract already-aggregated data.
Any comments appreciated.
Regards
Martin
Thx Olivier,
The problem is that I need a function module that can apply the OLAP aggregation for a hierarchy to the data output from RSDRI_INFOPROV_READ_RFC.
... or the best alternative would be if there were an FM/class that could provide me with the hierarchy aggregation of the data.
/Martin -
Can we Join 2 fact tables ?
Hi All,
We are trying to join 2 fact tables using a logical dimension. Let me explain my structure: fact1 joins dim1, fact2 joins dim2, and there is another dimension, dim3, which is joined to fact1. dim3 contains only the columns for joining fact1 and fact2.
When I pick columns from fact1 and dim3, dim1, dim2 it works fine. Fact2 with dim1, dim2, dim3 also works fine, but it does not work when using columns from both fact1 and fact2.
Here we are using couple of materialized views both on Oracle Apps tables and some Custom tables.
Can any one have some kind of solution when using multiple fact tables.
Thanks.
You can join them with the same joins as defined in your physical layer.
But you need to set the dimension hierarchies and aggregation content levels in the BMM layer for sure.
In the reports, set the aggregation rule to Server Complex Aggregate.
Regards,
Darwin
Edited by: Darwin S on Dec 24, 2009 1:13 PM -
Reading config data in a project panel for a jDev 10 Extension
I have made it past creating config and project setting panels as a JDeveloper extension. My question is: how can I read the config data when the project panel is displayed? 1) I would like to use the config data as defaults when the project is first created, and/or 2) when the panel extension is first accessed. Thanks!
-
WebI issue with hierarchy display and aggregation
Trying to wrangle what looks like a defect in WebI's handling of hierarchy display and aggregation. We just completed an update cycle and are running BOBJ 4.1 SP4.
The hierarchy is a standard FM Commitment Item hierarchy in which both the nodes and leaves are Commitment Items (i.e. it uses InfoObject nodes, not text nodes). An example of one of these nodes looks like this in BW:
Cmmt_Item A - Node
Cmmt_Item B - Leaf
Cmmt_Item C - Leaf
Let's pretend Commitment Item A has $50 posted to it, B has $20 and C has $30. Analysis for OLAP handles this by adding a virtual leaf line to distinguish postings that are on the parent node like so:
Cmmt_Item A - Node $100
Cmmt_Item A - Leaf $50
Cmmt_Item B - Leaf $20
Cmmt_Item C - Leaf $30
So you see both the total for the node ($100) and a line for each Commitment Items with KFs posted to them. Our users like this. They can easily see the aggregation and the breakdown.
WebI, on the other hand, will display it like this:
Cmmt_Item A - Node $150
Cmmt_Item B - Leaf $20
Cmmt_Item C - Leaf $30
It doesn't create a separate line for the value of the parent node, but it does add its value into the aggregate. Twice. Modifying the table with the 'avoid duplicate row aggregation' checkbox yields output like this:
Cmmt_Item A - Node $100
Cmmt_Item A - Node $50
Cmmt_Item B - Leaf $20
Cmmt_Item C - Leaf $30
We're about halfway there. While the top row now shows the correct aggregation, and a new line shows the distinct amount on the parent node, that new line appears on the same level as the parent. It's no longer clear that there's an aggregate and a breakdown. And attempting to expand or collapse a node will now crash the report with one of those 'Error 16' messages.
Has anyone encountered this issue with hierarchies in WebI? This report was built from scratch in 4.1, so I'm not sure whether this affects older versions, or whether it would affect any hierarchy that uses InfoObject nodes instead of text nodes.
Without a fix, the simplest workaround I can think of would be to restructure the hierarchy. It can't use postable nodes, so Cmmt_Item A - Node from my example would need to be converted into a text node and the postable characteristic added as a child on the same level as the B and C leaves.
This looks like it would affect anyone using hierarchies with postable nodes in a WebI report.
Another oddity in WebI's behavior here: even though the postable nodes show incorrect sums, the sum at the root node is correct. So, extending my examples from the original post:
Root Node $100
Cmmt_Item A - Node $150
Cmmt_Item B - Leaf $20
Cmmt_Item C - Leaf $30 -
Aggregation Issue when we use Hierarchy InfoObject in the Bex Query.
Hi All,
I have created a BEx query with some characteristics and one hierarchy InfoObject in Rows, plus RKFs. I haven't used any exception aggregation objects in the RKFs, but when I execute the query, the overall result shows an exception aggregation based on the hierarchy object.
My problem is briefly illustrated here.
OrgUnitHierarchy EmpID RKF
Root 1 1
RootA1 1 1
RootA2 1 1
Root 2 1
RootB1 2 1
RootB2 2 1
Root 3 1
RootC1 3 1
RootC2 3 1
Over all result 3
In the above example the sum of the RKF is 9, but it's showing only 3. When I connect this to Crystal Reports, the sum of the RKF shows 9. Please help me with which one is correct, and why it's not aggregating the child nodes.
Is there any config that needs to be done to aggregate all the nodes of the hierarchy? Thanks for your support in advance.
Regards,
Shiva
Hi,
is this related to BEx Analyzer or BEx Web Reporting? If so, I would suggest posting this in the BEx Suite forum, as this forum is for the SAP Integration Kit from BusinessObjects.
Ingo -
Use of filters and aggregations based on hierarchy nodes in an update rule
Hello,
I need to calculate some indicators from an ODS (BW 3.5) that contains raw data into another one that will contain the indicators. These figures are the results of filters and aggregations based on hierarchy nodes (for example: all sales accounts under a node).
In fact, this is typically a query, but I want to store these figures, so I need to compute them during the load.
I understood I have to use a start routine. I never did that before.
Could you provide me with easy-to-understand-for-newbies examples of:
- filtering data based on the value of an InfoObject (the value must be empty, for example)
- filtering and aggregation of data based on membership of hierarchy nodes (all sales figures, ...)
- aggregation of the key figures based on different characteristics after filtering them
Well, I am asking a lot ...
Thank you very much
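Purely as a sketch of the three operations asked about (plain Python, not ABAP; in BW 3.5 this logic would live in a start routine, and all field names and the node membership set below are invented):

```python
data = [
    {"account": "A1", "doc_type": "",  "region": "N", "amount": 10},
    {"account": "A2", "doc_type": "X", "region": "N", "amount": 20},
    {"account": "A3", "doc_type": "",  "region": "S", "amount": 30},
]
# Accounts under the hierarchy node of interest (e.g. all sales accounts);
# in BW you would read these from the hierarchy tables.
accounts_under_node = {"A1", "A3"}

# 1. Filter on an InfoObject value (keep rows where doc_type is empty).
filtered = [r for r in data if r["doc_type"] == ""]
# 2. Keep only rows whose account belongs to the hierarchy node.
filtered = [r for r in filtered if r["account"] in accounts_under_node]
# 3. Aggregate the key figure by a characteristic (here: region).
totals = {}
for r in filtered:
    totals[r["region"]] = totals.get(r["region"], 0) + r["amount"]
print(totals)  # {'N': 10, 'S': 30}
```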
Thomas
Please go through the following link to learn more about aggregates:
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e55aaca6-0301-0010-928e-af44060bda32
Also go through the very detailed documentation:
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/67efb9bb-0601-0010-f7a2-b582e94bcf8a
Regards,
Mahesh -
Exception (Aggregation) in Hierarchy nodes results
Hi All,
We have a requirement to sum values (as results at hierarchy levels) based on the value of a specific characteristic. The example below is what we want to achieve. Country, product group, and material are all characteristics, and this is viewed as a hierarchy in the query. Under each material there is a calculation component that shows the values of Sales Qty, Price/Unit, and Amount.
What we want is to sum up ONLY the Amount value and present it at the hierarchy nodes, so that we can see how much sales we did by product group and subsequently by country as a whole. If we drill across we can achieve this, but we want to drill down, as we have the time characteristics and variances in the columns; and no, we do not want users to drill across the calculation components. This query is used in Integrated Planning for planning purposes, integrated with BO, and that is the structure of the planning layout we want. "Calculation component" is an InfoObject that contains 'Sales Qty', 'Price/Unit' and 'Sales Amount' as master data values.
I have tried exception aggregation on the calc component InfoObject with 'LAST VALUE', and it did capture only the Sales Amount value as the result for the Calc Components node. But the other result nodes (state, country, and region) also take the LAST VALUE, which is not what I want. What I want is essentially the summation of all Sales Amount values over the calculation components by state, country, and region.
Last Year | Variance | Current Year | Jan | Feb | Mar |......
Region X 200 (sum of all country's sales amount)
->Country A 200 (sum of all states' sales amount)
-> State Y1 80 (sum of all components's sales amount)
->Calc Components Z1 30
- Sales Qty 10
- Price/Unit 3
- Sales Amount 30
->Calc Components Z2 50
- Sales Qty 5
- Price/Unit 10
- Sales Amount 50
-> State Y2 120
->Calc Components Z1 40
- Sales Qty 10
- Price/Unit 4
- Sales Amount 40
->Calc Components Z2 80
- Sales Qty 10
- Price/Unit 8
- Sales Amount 80
any help would be appreciated...
eddie
Please try the steps below; I hope they work for you.
Instead of calculated key figures, try to use a local formula, because I think sometimes a CKF will not give the expected result when you use exception aggregation (not sure though; I have only used local formulas).
1. Create a first local key figure (say F1) for the calculation and use exception aggregation 'Average' with reference characteristic 'PDU'.
2. Create a second key figure (say F2) using the first key figure (F1) and use exception aggregation 'Average' with reference characteristic 'Sales Document', if you need the values based on the average of the sales documents, and hide the first key figure (F1).
I consider Sales Document the first level and PDU the second (i.e. lowest) level, because you need to consider this sequence as well when you design the exception aggregation.
Likewise, you can create nested aggregations based on your requirements.
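The nested "average over PDU, then average over Sales Document" aggregation can be sketched like this (plain Python; the numbers are invented for illustration):

```python
from statistics import mean

rows = [  # (sales_document, pdu, value)
    ("D1", "P1", 10), ("D1", "P1", 20), ("D1", "P2", 30),
    ("D2", "P1", 40), ("D2", "P2", 60),
]

# F1: average per (sales document, PDU) - the lowest level first.
f1 = {}
for doc, pdu, v in rows:
    f1.setdefault((doc, pdu), []).append(v)
f1 = {k: mean(vs) for k, vs in f1.items()}

# F2: average the F1 results per sales document - the next level up.
f2 = {}
for (doc, _), v in f1.items():
    f2.setdefault(doc, []).append(v)
f2 = {doc: mean(vs) for doc, vs in f2.items()}
print(f2)  # {'D1': 22.5, 'D2': 50}
```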
Please do let me know if it works.
Thanks. -
Aggregation On Value Based Hierarchy
Hi
I am having a problem with aggregation on a value-based hierarchy.
I have a table which serves as both my fact and my dimension table.
It is as follows:
ID Name MID Salary
0 All
1 A 0 10000
2 B 1 9000
3 C 1 9000
4 D 1 9000
5 E 2 8000
6 F 2 8000
I created a value-based dimension named EMPLOYEE, with ID as the child and MID as the parent.
I created a cube EMP_SALARY with a measure Salary mapped to the Salary column of EMPLOYEE.
My expectation is to see the total salary at every level, including the salary at that level itself.
So let us take employee B as an example. He is the manager of employees E and F.
So what I would like to see at level B is sal of B + sal of E + sal of F = 9000 + 8000 + 8000 = 25000.
But what I get from the cube is 9000. Is the above possible? If so, please provide me with suggestions.
I can achieve the same with the following SQL query:
select e1.id, rpad('*', 2*level, '*') || e1.name, e1.sal,
  ( select sum(e2.sal)
    from test_emp e2
    start with e2.id = e1.id
    connect by prior e2.id = e2.mid
  ) sum_sal
from test_emp e1
start with e1.mid is null
connect by prior e1.id = e1.mid;
The same basic problem, along with a solution, was discussed in the following thread:
Re: Value Based Dimension causing Aggregation problems -
Dimension table to support Hierarchy and the aggregation functions
Hello expert,
Now I seem to know why those aggregation functions, e.g. SUM and COUNT, fail whenever the report is executed.
I have a fact table REJECT_FACT that contains all the information about rejects:
Reject ID
Reject Category
Reject Code
Reject Desc
Site Desc
Site Code
Region Desc
Age Group
Reject Date
So I created an alias REJECT_DIM based on REJECT_FACT. After several trials, I think the aggregation functions do not work with the alias, because after I remove REJECT_DIM the aggregation seems to work.
Is my concept right? Or am I missing something? I thought the data model for a data warehouse should be simple; why do we need to create many dimension tables to support the hierarchy?
Hello expert,
Thank you very much for your reply.
Actually the data model is very simple. There is only one physical table REJECT_FACT. The structure is as follows:
Reject ID (NUMBER)
Reject Category (VARCHAR2)
Reject Code (VARCHAR2)
Reject Code Desc (VARCHAR2)
Site Desc (VARCHAR2)
Site Code (VARCHAR2)
Region Desc (VARCHAR2)
Age Group (VARCHAR2)
Reject Date (DATE)
The hierarchy required is as follows:
Reject Category -> Reject Code Desc -> Site Desc -> Region Desc -> Age Group -> Reject Date.
I want to produce a count at each hierarchy level.
How do I populate the hierarchy structure effectively?
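One way to picture "a count at each hierarchy level" (illustrative Python; the column values below are invented, and the drill path is truncated to three levels):

```python
from collections import Counter

path = ["reject_category", "reject_code_desc", "site_desc"]  # truncated drill path
facts = [
    {"reject_category": "QC", "reject_code_desc": "Scratch", "site_desc": "Site A"},
    {"reject_category": "QC", "reject_code_desc": "Dent",    "site_desc": "Site A"},
    {"reject_category": "RX", "reject_code_desc": "Scratch", "site_desc": "Site B"},
]

# Count the fact rows per member at every depth of the hierarchy.
for depth in range(1, len(path) + 1):
    key_cols = path[:depth]
    counts = Counter(tuple(r[c] for c in key_cols) for r in facts)
    print(key_cols, dict(counts))
```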
Thanks...... -
OBIEE BI Answers: Wrong Aggregation Measures on top level of hierarchy
Hi to all,
I have the following problem. I hope I'm clear in my English, because it's a bit complicated to explain.
I have following fact table:
Drug Id Ordered Quantity
1 9
2 4
1 3
2 2
and following Drug Table:
Drug Brand Id Brand Description Drug Active Ingredient Id Drug Active Ingredient Description
1 Aulin 1 Nimesulide
2 Asprina 2 Acetilsalicilico
In AWM i've defined a Drug Dimension based on following hierarchy: Drug Active Ingredient (parent) - Drug Brand Description (leaf) mapped as:
Drug Active Ingredient = Drug Active Ingredient Id of my Drug Table (LONG DESCRIPTION Attribute=Drug Active Ingredient Description)
Drug Brand Description = Drug Brand Id of my Drug Table (LONG DESCRIPTION Attribute = Drug Brand Description)
In my cube I've mapped the leaf level Drug Brand Description to the Drug Id of my fact table. In AWM the Drug dimension is mapped with the Sum aggregation operator.
If I select Drug Active Ingredient (the parent of my hierarchy) and Ordered Quantity in Answers, I see the following result:
Drug Active Ingredient Description Ordered Quantity
Acetilsalicilico 24
Nimesulide 12
instead of the correct values:
Drug Active Ingredient Description Ordered Quantity
Acetilsalicilico 12
Nimesulide 6
EXACTLY double! But if I drill down into Drug Active Ingredient Description 'Acetilsalicilico', I see correctly:
Drug Active Ingredient Description Drug Brand Description Ordered Quantity
Acetilsalicilico
- Aspirina 12
Total 12
The wrong aggregation occurs only at the top level of the hierarchy; aggregation at the lower level is correct. Maybe Answers also sums the Total row? Why?
I'm frustrated. I beg for your help, please!
Giancarlo
Hi,
in NQSConfig.ini I can't find the Cache Section. I'm posting the whole file; tell me what I must change. I know your patience is almost at its limit, but I'm a new user of OBIEE.
# NQSConfig.INI
# Copyright (c) 1997-2006 Oracle Corporation, All rights reserved
# INI file parser rules are:
# If values are in literals, digits or _, they can be
# given as such. If values contain characters other than
# literals, digits or _, values must be given in quotes.
# Repository Section
# Repositories are defined as logical repository name - file name
# pairs. ODBC drivers use logical repository name defined in this
# section.
# All repositories must reside in OracleBI\server\Repository
# directory, where OracleBI is the directory in which the Oracle BI
# Server software is installed.
[ REPOSITORY ]
#Star = samplesales.rpd, DEFAULT;
Star = Step3.rpd, DEFAULT;
# Query Result Cache Section
[ CACHE ]
ENABLE = YES;
// A comma separated list of <directory maxSize> pair(s)
// e.g. DATA_STORAGE_PATHS = "d:\OracleBIData\nQSCache" 500 MB;
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
MAX_CACHE_ENTRY_SIZE = 1 MB;
MAX_CACHE_ENTRIES = 1000;
POPULATE_AGGREGATE_ROLLUP_HITS = NO;
USE_ADVANCED_HIT_DETECTION = NO;
MAX_SUBEXPR_SEARCH_DEPTH = 7;
// Cluster-aware cache
// GLOBAL_CACHE_STORAGE_PATH = "<directory name>" SIZE;
// MAX_GLOBAL_CACHE_ENTRIES = 1000;
// CACHE_POLL_SECONDS = 300;
// CLUSTER_AWARE_CACHE_LOGGING = NO;
# General Section
# Contains general server default parameters, including localization
# and internationalization, temporary space and memory allocation,
# and other default parameters used to determine how data is returned
# from the server to a client.
[ GENERAL ]
// Localization/Internationalization parameters.
LOCALE = "Italian";
SORT_ORDER_LOCALE = "Italian";
SORT_TYPE = "binary";
// Case sensitivity should be set to match the remote
// target database.
CASE_SENSITIVE_CHARACTER_COMPARISON = OFF ;
// SQLServer65 sorts nulls first, whereas Oracle sorts
// nulls last. This ini file property should conform to
// that of the remote target database, if there is a
// single remote database. Otherwise, choose the order
// that matches the predominant database (i.e. on the
// basis of data volume, frequency of access, sort
// performance, network bandwidth).
NULL_VALUES_SORT_FIRST = OFF;
DATE_TIME_DISPLAY_FORMAT = "yyyy/mm/dd hh:mi:ss" ;
DATE_DISPLAY_FORMAT = "yyyy/mm/dd" ;
TIME_DISPLAY_FORMAT = "hh:mi:ss" ;
// Temporary space, memory, and resource allocation
// parameters.
// You may use KB, MB for memory size.
WORK_DIRECTORY_PATHS = "C:\OracleBIData\tmp";
SORT_MEMORY_SIZE = 4 MB ;
SORT_BUFFER_INCREMENT_SIZE = 256 KB ;
VIRTUAL_TABLE_PAGE_SIZE = 128 KB ;
// Analytics Server will return all month and day names as three
// letter abbreviations (e.g., "Jan", "Feb", "Sat", "Sun").
// To use complete names, set the following values to YES.
USE_LONG_MONTH_NAMES = NO;
USE_LONG_DAY_NAMES = NO;
UPPERCASE_USERNAME_FOR_INITBLOCK = NO ; // default is no
// Aggregate Persistence defaults
// The prefix must be between 1 and 8 characters long
// and should not have any special characters ('_' is allowed).
AGGREGATE_PREFIX = "SA_" ;
# Security Section
# Legal value for DEFAULT_PRIVILEGES are:
# NONE READ
[ SECURITY ]
DEFAULT_PRIVILEGES = READ;
PROJECT_INACCESSIBLE_COLUMN_AS_NULL = NO;
MINIMUM_PASSWORD_LENGTH = 0;
#IGNORE_LDAP_PWD_EXPIRY_WARNING = NO; // default is no.
#SSL=NO;
#SSL_CERTIFICATE_FILE="servercert.pem";
#SSL_PRIVATE_KEY_FILE="serverkey.pem";
#SSL_PK_PASSPHRASE_FILE="serverpwd.txt";
#SSL_PK_PASSPHRASE_PROGRAM="sitepwd.exe";
#SSL_VERIFY_PEER=NO;
#SSL_CA_CERTIFICATE_DIR="CACertDIR";
#SSL_CA_CERTIFICATE_FILE="CACertFile";
#SSL_TRUSTED_PEER_DNS="";
#SSL_CERT_VERIFICATION_DEPTH=9;
#SSL_CIPHER_LIST="";
# There are 3 types of authentication. The default is NQS
# You can select only one of them
#----- 1 -----
#AUTHENTICATION_TYPE = NQS; // optional and default
#----- 2 -----
#AUTHENTICATION_TYPE = DATABASE;
# [ DATABASE ]
# DATABASE = "some_data_base";
#----- 3 -----
#AUTHENTICATION_TYPE = BYPASS_NQS;
# Server Section
[ SERVER ]
SERVER_NAME = Oracle_BI_Server ;
READ_ONLY_MODE = NO; // default is "NO". That is, repositories can be edited online.
MAX_SESSION_LIMIT = 2000 ;
MAX_REQUEST_PER_SESSION_LIMIT = 500 ;
SERVER_THREAD_RANGE = 40-100;
SERVER_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
DB_GATEWAY_THREAD_RANGE = 40-200;
DB_GATEWAY_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
MAX_EXPANDED_SUBQUERY_PREDICATES = 8192; // default is 8192
MAX_QUERY_PLAN_CACHE_ENTRIES = 1024; // default is 1024
MAX_DRILLDOWN_INFO_CACHE_ENTRIES = 1024; // default is 1024
MAX_DRILLDOWN_QUERY_CACHE_ENTRIES = 1024; // default is 1024
INIT_BLOCK_CACHE_ENTRIES = 20; // default is 20
CLIENT_MGMT_THREADS_MAX = 5; // default is 5
# The port number specified with RPC_SERVICE_OR_PORT will NOT be considered if
# a port number is specified in SERVER_HOSTNAME_OR_IP_ADDRESSES.
RPC_SERVICE_OR_PORT = 9703; // default is 9703
# If port is not specified with a host name or IP in the following option, the port
# number specified at RPC_SERVICE_OR_PORT will be considered.
# When port number is specified, it will override the one specified with
# RPC_SERVICE_OR_PORT.
SERVER_HOSTNAME_OR_IP_ADDRESSES = "ALLNICS"; # Example: "hostname" or "hostname":port
# or "IP1","IP2":port or
# "hostname":port,"IP":port2.
# Note: When this option is active,
# CLUSTER_PARTICIPANT should be set to NO.
ENABLE_DB_HINTS = YES; // default is yes
PREVENT_DIVIDE_BY_ZERO = YES;
CLUSTER_PARTICIPANT = NO; # If this is set to "YES", comment out
# SERVER_HOSTNAME_OR_IP_ADDRESSES. No specific NIC support
# for the cluster participant yet.
// Following required if CLUSTER_PARTICIPANT = YES
#REPOSITORY_PUBLISHING_DIRECTORY = "<dirname>";
#REQUIRE_PUBLISHING_DIRECTORY = YES; // Don't join cluster if directory not accessible
DISCONNECTED = NO;
AUTOMATIC_RESTART = YES;
# Dynamic Library Section
# The dynamic libraries specified in this section
# are categorized by the CLI they support.
[ DB_DYNAMIC_LIBRARY ]
ODBC200 = nqsdbgatewayodbc;
ODBC350 = nqsdbgatewayodbc35;
OCI7 = nqsdbgatewayoci7;
OCI8 = nqsdbgatewayoci8;
OCI8i = nqsdbgatewayoci8i;
OCI10g = nqsdbgatewayoci10g;
DB2CLI = nqsdbgatewaydb2cli;
DB2CLI35 = nqsdbgatewaydb2cli35;
NQSXML = nqsdbgatewayxml;
XMLA = nqsdbgatewayxmla;
ESSBASE = nqsdbgatewayessbasecapi;
# User Log Section
# The user log NQQuery.log is kept in the server\log directory. It logs
# activity about queries when enabled for a user. Entries can be
# viewed using a text editor or the nQLogViewer executable.
[ USER_LOG ]
USER_LOG_FILE_SIZE = 10 MB; // default size
CODE_PAGE = "UTF8"; // ANSI, UTF8, 1252, etc.
# Usage Tracking Section
# Collect usage statistics on each logical query submitted to the
# server.
[ USAGE_TRACKING ]
ENABLE = NO;
//==============================================================================
// Parameters used for writing data to a flat file (i.e. DIRECT_INSERT = NO).
STORAGE_DIRECTORY = "<full directory path>";
CHECKPOINT_INTERVAL_MINUTES = 5;
FILE_ROLLOVER_INTERVAL_MINUTES = 30;
CODE_PAGE = "ANSI"; // ANSI, UTF8, 1252, etc.
//==============================================================================
DIRECT_INSERT = YES;
//==============================================================================
// Parameters used for inserting data into a table (i.e. DIRECT_INSERT = YES).
PHYSICAL_TABLE_NAME = "<Database>"."<Catalog>"."<Schema>"."<Table>" ; // Or "<Database>"."<Schema>"."<Table>" ;
CONNECTION_POOL = "<Database>"."<Connection Pool>" ;
BUFFER_SIZE = 10 MB ;
BUFFER_TIME_LIMIT_SECONDS = 5 ;
NUM_INSERT_THREADS = 5 ;
MAX_INSERTS_PER_TRANSACTION = 1 ;
//==============================================================================
# Query Optimization Flags
[ OPTIMIZATION_FLAGS ]
STRONG_DATETIME_TYPE_CHECKING = ON ;
# CubeViews Section
[ CUBE_VIEWS ]
DISTINCT_COUNT_SUPPORTED = NO ;
STATISTICAL_FUNCTIONS_SUPPORTED = NO ;
USE_SCHEMA_NAME = YES ;
USE_SCHEMA_NAME_FROM_RPD = YES ;
DEFAULT_SCHEMA_NAME = "ORACLE";
CUBE_VIEWS_SCHEMA_NAME = "ORACLE";
LOG_FAILURES = YES ;
LOG_SUCCESS = NO ;
LOG_FILE_NAME = "C:\OracleBI\server\Log\CubeViews.Log";
# MDX Member Name Cache Section
# Cache subsystem for mapping between unique name and caption of
# members for all SAP/BW cubes in the repository.
[ MDX_MEMBER_CACHE ]
// The entry to indicate if the feature is enabled or not, by default it is NO since this only applies to SAP/BW cubes
ENABLE = NO ;
// The path to the location where cache will be persisted, only applied to a single location,
// the number at the end indicates the capacity of the storage. When the feature is enabled,
// administrator needs to replace the "<full directory path>" with a valid path,
// e.g. DATA_STORAGE_PATH = "C:\OracleBI\server\Data\Temp\Cache" 500 MB ;
DATA_STORAGE_PATH = "C:\OracleBIData\cache" 500 MB;
// Maximum disk space allowed for each user;
MAX_SIZE_PER_USER = 100 MB ;
// Maximum number of members in a level will be able to be persisted to disk
MAX_MEMBER_PER_LEVEL = 1000 ;
// Maximum size for each individual cache entry size
MAX_CACHE_SIZE = 100 MB ;
# Oracle Dimension Export Section
[ ORA_DIM_EXPORT ]
USE_SCHEMA_NAME_FROM_RPD = YES ; # NO
DEFAULT_SCHEMA_NAME = "ORACLE";
ORA_DIM_SCHEMA_NAME = "ORACLE";
LOGGING = ON ; # OFF, DEBUG
LOG_FILE_NAME = "C:\OracleBI\server\Log\OraDimExp.Log"; -
Last Aggregation Method not getting applied for Time hierarchy(Y, Q & M)
Hi,
OBIEE: 11.1.1.6.2
In my application there is a measure (closing balance) that we wish to pick as LAST whenever we associate this measure with the time hierarchy. In order to achieve this, we have set the aggregation method as:
Logical Column Window > Aggregation tab > Based on Dimensions (checked) > Under Logical Dimension
Other: SUM(Measure)
Time Hierarchy: LAST(Measure)
Now what happens is that when we view this measure against the time hierarchy, the LAST value is not picked up. Example:
Time-Hierarchy:
2008
2008-Q4
Oct-2008
Nov-2008
Dec-2008
What's happening is that, instead of picking the Dec-2008 value, the 2008-Q4 value is displayed as the Oct-2008 value. This means OBIEE is sorting the months under the quarter node in alphabetical order (D, N, O) and then picking the LAST value as 'Oct-2008'.
Kindly suggest what can be done to pick the right node value of Dec-2008.
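The sorting behaviour described above can be demonstrated in a couple of lines (plain Python; the balances are invented). One common remedy in OBIEE is to give the month level a chronological key, so that "last" is evaluated in date order rather than by member name:

```python
from datetime import datetime

q4 = {"Oct-2008": 110, "Nov-2008": 120, "Dec-2008": 130}  # invented balances

last_alpha = max(q4, key=str)  # alphabetical: D < N < O, so Oct wins
last_chrono = max(q4, key=lambda m: datetime.strptime(m, "%b-%Y"))
print(last_alpha, q4[last_alpha])    # Oct-2008 110, the wrong value observed
print(last_chrono, q4[last_chrono])  # Dec-2008 130, the value wanted
```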
Thanks
Prashant
Hi,
check whether you have any data for Dec-2008 in the fact table.
At which level did you put the week, month, and year in the hierarchy? Check the session query to see whether it is applying any filter or ORDER BY.
Mark if this helps.
Thanks
Edited by: Rupesh Shelar on Apr 7, 2013 10:03 PM
ALV as Hierarchy with aggregation
Hi
I have 5 columns: H1, H2, H3, SUM1, SUM2
For columns SUM1, SUM2:
lo_field = lo_value->if_salv_wd_field_settings~get_field( <fs_col>-id ).
lo_field->if_salv_wd_aggr~create_aggr_rule( ).
lo_aggr_rule = lo_field->if_salv_wd_aggr~get_aggr_rule( ).
lo_aggr_rule->set_aggregation_type( if_salv_wd_c_aggregation=>aggrtype_total ).
For columns H1, H2, H3:
<fs_col>-r_column->if_salv_wd_column_hierarchy~set_hierarchy_column( value = abap_true ).
If I create a sort rule for H1, H2, H3, the automatic hierarchy sort is broken.
How can I create aggregation on a hierarchy ALV?
Hi Arjun,
It still doesn't work. When I debug it, the system sets the column hierarchy to "false"...
The code is like this now:
DATA:
l_ref_cmp_usage TYPE REF TO if_wd_component_usage,
l_ref_interfacecontroller TYPE REF TO iwci_salv_wd_table,
l_value TYPE REF TO cl_salv_wd_config_table.
l_ref_cmp_usage = wd_this->wd_cpuse_salv( ).
IF l_ref_cmp_usage->has_active_component( ) IS INITIAL.
l_ref_cmp_usage->create_component( ).
ENDIF.
l_ref_interfacecontroller = wd_this->wd_cpifc_salv( ).
l_value = l_ref_interfacecontroller->get_model( ).
* cl_salv_wd_model_table_util=>if_salv_wd_table_util_stdfuncs~set_all(
* r_model = l_value ).
l_value->if_salv_wd_std_functions~set_hierarchy_allowed( abap_true ).
l_value->if_salv_wd_table_settings~set_display_type(
if_salv_wd_c_table_settings=>display_type_hierarchy ).
DATA: lr_column TYPE REF TO cl_salv_wd_column.
lr_column = l_value->if_salv_wd_column_settings~get_column( 'CNOID' ).
lr_column->if_salv_wd_column_hierarchy~set_hierarchy_column( abap_true ).
lr_column = l_value->if_salv_wd_column_settings~get_column( 'STOID' ).
lr_column->if_salv_wd_column_hierarchy~set_hierarchy_column( abap_true ).
l_value->if_salv_wd_table_hierarchy~set_last_hier_column_as_leaf( abap_true ).