No rows to 0CPH_* cubes
I have a problem,
I have activated Business Content 0CCMS according to instructions in
http://help.sap.com/saphelp_nw2004s/helpdata/en/43/2017610d472be9e10000000a1553f7/frameset.htm
Everything seems to be ok, data flows nicely to PSA, and data in PSA looks fine and complete, BUT...
The update rules 0CCMS_CPH_DATA_DELTAINIT and 0CCMS_CPH_DATA_DELTA delete all rows, with status "green".
Among other things, we have tried commenting out these rows:
*DELETE DATA_PACKAGE
*WHERE CCM_RES NE CPH_BI_BCT_RES_HOUR.
but that has no effect on the result.
Any ideas?
Thanks in advance,
Tero
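The commented-out ABAP above removes every row whose CCM_RES field differs from the hourly resolution constant. A minimal Python sketch of that semantics (the field name comes from the snippet; the constant's value and the dict layout are illustrative assumptions):

```python
# Mimic "DELETE DATA_PACKAGE WHERE CCM_RES NE CPH_BI_BCT_RES_HOUR":
# only rows whose resolution matches the hourly constant survive.
CPH_BI_BCT_RES_HOUR = "H"  # placeholder; the real value lives in the ABAP include

def filter_data_package(data_package):
    """Drop rows whose CCM_RES field is not the hourly resolution."""
    return [row for row in data_package if row["CCM_RES"] == CPH_BI_BCT_RES_HOUR]

rows = [{"CCM_RES": "H", "value": 1}, {"CCM_RES": "D", "value": 2}]
print(filter_data_package(rows))  # only the hourly row is kept
```

If the incoming resolution never equals the constant (e.g. because the transfer rules feed a different resolution), every row is deleted with a "green" status, which matches the symptom described above.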
I forgot I left this open... Anyway, here is the solution SAP provided: some parts of the Business Content were not activated correctly. In BI 7.0, use transaction RSA1OLD; otherwise it may be difficult to follow the solution.
br,
Tero
I found a reply from our development team for a similar issue from another
customer:
======================
The content activation tool (RSORBCT) is usually unable to find
transfer rules for the CCMS datasources (maybe because the
datasources were changed in PI-BASIS support packages in the past).
To get the correct transfer rules, please do the following:
1. Go to RSA1 -> InfoSources. Check the InfoSources 0CCMS_CPH_DATA,
0CCMS_CPH_ONLINE_DATA, 0CCMS_CPH_LIFETIME_DATA,
0CCMS_MTE_TIMEDEP_MASTERDATA and 0CCMS_MTE_STATIC_MASTERDATA: do
they already have any transfer rules? If yes, then delete them by
choosing "delete source system assignment" from the context
menu of the rule. If you have a new BI_CONT support package, you need
to do the same with the InfoSources 0CCMS_CPH_DATA_ALTERNATIV,
0CCMS_CPH_DATA_DELTAINIT, 0CCMS_CPH_DATA_DELTAREPEAT and
0CCMS_CPH_DATA_DELTA.
2. Go to RSA1 -> Source Systems. Choose your source system. In the
datasource overview below the tree node "PI Basis" you will find all
CCMS datasources. They should have a button with a red minus (if
there is a green plus, then step 1 wasn't successful and the following
steps will not work).
3. Choose "assign infosource..." from the context menu of datasource
0CCMS_MTE_STATIC_MASTER_DATA. A popup with a suggestion for an
assignment from the content will appear -> press the OK button. Then
the content activation tool will appear with preselected objects ->
press the install button and choose "install" from the menu. When the
content activation is finished, leave the screen with Shift-F3.
(If there is no suggestion in the popup, you have to (re-)activate
the source system (note 301192) and replicate the datasources.)
4. Repeat step 3 with all other CCMS datasources.
5. Delete all data from all CPH InfoProviders (cubes 0CPH_15M, 0CPH_DAY,
0CPH_HOUR and 0CPH_MIN, ODS 0CPH_PODS).
6. Delete all data from DB table CCMSBI_CPH_GUIDS.
7. Reset the delta upload from ODS 0CPH_PODS to the cubes:
- RSA1 -> InfoProvider -> right-click on 0CPH_PODS -> choose "Update ODS
Data in Data Target..." from the context menu
- confirm all popups until you get an InfoPackage maintenance screen
- choose "Scheduler" -> "Initialization Options for Source System"
from the menu
- delete all entries in the following popup.
8. All update and transfer rules should now be correct and the CPH data
upload will work.
Similar Messages
-
How to count the number of rows in a cube!!??
Experts,
Can somebody tell me how to count the number of rows in my cube when I am using LISTCUBE?
Thanks
Ashwin
Hi,
have a look at this thread too:
Number of Records in Cube
regards -
Selective deletion of a row from a cube
Hi Friends,
I need to delete a row from a cube through selective deletion from the Manage tab. I got stuck while selecting the criteria for that deletion.
My questions are:
1) What criteria do I need to give to delete the row, apart from the business-related information?
2) What is the data packet SID? What do I have to enter in that column?
3) What is the request ID SID? What do I have to enter in that column?
Please help me on the above issue.
regards,
Mahesh
Hello,
Of course, you do it for example from the InfoProvider: right-click -> Manage -> Contents (first tab) -> Selective Deletion.
Then you get a tab where you can schedule the job according to the selections in the Selective Deletion tab, where you can set a filter for the characteristics included in the cube.
You can select a single characteristic value, a range, or multiple values.
If you want to delete a record, then you have to know the key of those records. For example, if you have
customer
company
month
if you select only month, then you delete everything for that month, but if you restrict to month and customer, then you delete only that combination.
The request SID is the identifier of the request, which you see on the Manage tab of the cube. If you select on that, you will delete the whole request, i.e. all records loaded with that request.
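The filter semantics Mike describes can be sketched as follows: a record is deleted only when it matches all selected characteristic values (field names here are hypothetical, for illustration only):

```python
def selective_delete(rows, criteria):
    """Remove rows matching ALL given characteristic values (selective deletion)."""
    return [r for r in rows
            if not all(r.get(k) == v for k, v in criteria.items())]

data = [
    {"customer": "C1", "company": "X", "month": "2024-01"},
    {"customer": "C2", "company": "X", "month": "2024-01"},
    {"customer": "C1", "company": "X", "month": "2024-02"},
]
# Filter on month alone: every record of that month goes.
print(len(selective_delete(data, {"month": "2024-01"})))                     # 1 row left
# Filter on month AND customer: only that combination goes.
print(len(selective_delete(data, {"month": "2024-01", "customer": "C1"})))   # 2 rows left
```

The fewer characteristics you restrict on, the more records the deletion removes, which is exactly the month vs. month-plus-customer distinction above.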
http://help.sap.com/saphelp_nw04/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm
Best Regards
Mike -
No rows inserted into cube REL_EXPENSE
Dear all,
This week I've spent some hours walking through the online training for Oracle Warehouse Builder 10g on Oracle Technet which can be found here:
http://www.oracle.com/technology/obe/admin/owb10gr2_gs.htm
Everything went fine until I had to load data into cube REL_EXPENSE. Somehow the mapping that I created (REL_EXPENSE_MAP) does not insert any rows into table REL_EXPENSE.
The lesson I'm referring to which builds the mapping for the cube is:
http://www.oracle.com/technology/obe/10gr2_owb/owb10gr2_gs/owb/lesson4/etl-mappings.htm
Additional information:
-- The target table (REL_EXPENSE) that is needed for loading the data is created successfully.
-- The external table which I created during an earlier lesson was created without any problems. If I SELECT from the table (EXPENSE_DATE) I see the records from the CSV file in the table.
-- I've tried to debug the mapping and I could see the rows go through the External Table Operator and through the Expression operator (which converts dates to numbers). The records won't get inserted into the target table by the Cube Operator though.
Since I'm new to OWB I am not able to solve this problem on my own, so I hope you guys can give me a hint where to look so I can solve the problem.
Thanks in advance!
Kind regards,
Theo
Insert into ....
INSERT
INTO "IIA_DATA"."FAITS_EX_POST"@"DIIAV3@EMP_IIA_DATA"
( "NOMBRE" ,
"DIM_EX_POST" ,
"DIM_SERVICE_EX_POST",
"DIM_TEMPS_EX_POST" )
(SELECT to_number( "V_FAITS"."R_VALEUR" ) "NOMBRE" ,
"DIM_EX_POST"."DIMENSION_KEY" "DIMENSION_KEY" ,
"DIM_SERVICE"."DIMENSION_KEY" "DIMENSION_KEY_2",
"DIM_TEMPS"."DIMENSION_KEY" "DIMENSION_KEY_1"
FROM "V_FAITS" "V_FAITS" ,
"V_DIMENSIONS_FAITS" "V_DIMENSIONS_FAITS" ,
"IIA_DATA"."DIM_TEMPS"@"DIIAV3@EMP_IIA_DATA" "DIM_TEMPS" ,
"IIA_DATA"."DIM_SERVICE"@"DIIAV3@EMP_IIA_DATA" "DIM_SERVICE",
"IIA_DATA"."DIM_EX_POST"@"DIIAV3@EMP_IIA_DATA" "DIM_EX_POST"
WHERE ( "V_FAITS"."I_ID" BETWEEN 1 AND 142 )
AND ( "V_FAITS"."R_ID" = "V_DIMENSIONS_FAITS"."R_ID" )
AND ( "V_FAITS"."MONTH_CAL_CODE_ID" = "DIM_TEMPS"."CALENDAR_MONTH_CAL_MONTH_CODE" )
AND ( "V_FAITS"."R_VALEUR_AXE" = "DIM_SERVICE"."SERVICE_ACS" )
AND ( (to_number("DIM_EX_POST"."COMPTABILISATION_ID_FONC" )) = "V_DIMENSIONS_FAITS"."NE_ID" )
)
Output:
0 rows inserted -
AWM adding very less rows in cube as compared to source fact table
AWM is adding just 4 lakh (400,000) rows to the cube, while there are above 10 billion rows in the fact table. I have applied no aggregation rule at any dimension in the cube, and the number of rejected records is zero. I am confused about what the possible reason could be.
Take a look at the SQL query generated by OLAP during cube load.
You can look at it in AWM cube mapping, OR in the CUBE_BUILD_LOG table.
Maybe a SUM..GROUP BY ... is happening during cube load and that is why 1 billion rows are becoming 400,000 rows.
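That collapse is easy to reproduce. A minimal sqlite sketch (table and column names invented for illustration) of how SUM ... GROUP BY during a cube load turns many fact rows into one row per dimension combination:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fact (day INTEGER, country TEXT, amount INTEGER)")
# Twelve detail rows, but only six distinct (day, country) combinations.
rows = [(1, "US", 10), (1, "US", 20), (1, "IND", 5), (1, "IND", 5),
        (1, "AUS", 7), (2, "US", 1), (2, "US", 2), (2, "US", 3),
        (2, "IND", 4), (2, "IND", 6), (2, "AUS", 8), (2, "AUS", 9)]
con.executemany("INSERT INTO fact VALUES (?, ?, ?)", rows)

# What the cube load effectively runs: one aggregated row per leaf combination.
loaded = con.execute(
    "SELECT day, country, SUM(amount) FROM fact GROUP BY day, country"
).fetchall()
print(len(loaded))  # 6 rows reach the cube, not 12
```

If the fact table has billions of rows but only a few hundred thousand distinct leaf-level dimension combinations, the loaded row count in CUBE_BUILD_LOG will look "small" even though no data was lost.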
Are the numbers correct when you look at the data at top nodes? -
MDX query with DMV to get all cubes and aggregation row count on SSAS engine
Hi All,
How can I get all cube names on a SSAS engine server and count of number of aggregation rows in each cube ?
I got a DMV where it shows all catalogs names and description but where can I found aggregation row count of each cube.
Please let me know, thanks in advance.
Maruthi...
Hi Maruthi,
Please check below link, hope this will help you.
SSAS 2008 CTP6 - new DMV $SYSTEM.DISCOVER_OBJECT_ACTIVITY
lists the memory usage and CPU time for each object i.e. cube, dimension, cache, measure, partition, etc. They also show which aggregations were hit or missed, how many times these objects were read, and how many rows were returned by them:
Discover_object_memory_usage and discover_object_activity
select * from $system.discover_object_memory_usage
select * from $system.discover_object_activity
Thanks
Suhas
Mark as Answer if this resolves your problem or "Vote as Helpful" if you find it helpful.
My Blog
Follow @SuhasKudekar -
hi
How to load measures to a cube.
I created a cube, validated and deployed successfully.
I am loading only a single measure from a table, but it does not load anything.
Even the keys of the dimensions are not in the database table.
Do I need to map only the measure to the proper row in the cube, or do the dimension keys also need to be mapped?
regards
Arif
P.S. I am unable to include a screen dump in the forum, how to do it?
Edited by: badwanpk on Nov 4, 2008 2:36 PM
Yes, you need to build a row for insertion into the fact table by joining your source tables.
At the moment you are building a Data Mart/Star Schema (fact and dimension tables i.e. a cube), this Star Schema will be used for reporting i.e. no need to go back to the source. Ultimately, this could grow into a Data Warehouse that has dimensions and facts populated from many sources allowing reporting across your whole organisation rather than a single application. The dimensions are likely to contain far more descriptive i.e. text attributes than the source systems and will be used to constrain your reporting queries.
I suggest reading some articles/books by Ralph Kimball; they should give you a good overview of Data Warehousing and dimensional modelling.
http://www.kimballgroup.com/html/articles.html
Si -
Delta with reversal into cube from LO extractors
Hi SDN,
In our LO extraction design we have a write-optimized DSO used as the EDW first layer in BI. Data is then sent to Business Content InfoCubes such as 0PUR_C01 (though we are using copies of them), and instead of Update Rules/Transfer Rules we use PSA, a transformation, the write-optimized DSO, another transformation, and then the cube. The DataSource and all other objects in the data flow are BI 7.
What we are not clear about is the role of ROCANCEL when passed to both STORNO and RECORDMODE, and its impact on the cube after the write-optimized DSO. Some key figures are reversed for 'R' but others are not.
We had thought that 2LIS_02_ITM via a Write Optimized DSO will pass on STORNO and RECORDMODE to the cube which should have reversed all key figures in the cube if the value is 'R'. But it is not happening. Whatever key figures were reversed in the source system, they remained reversed, others are still there and adding another row in the cube with some positive values.
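The expectation described here, that a reversal indicator 'R' in the record mode flips the sign of additive key figures before they hit the cube, can be sketched as follows (field names are illustrative, not the actual transformation code):

```python
def apply_recordmode(record):
    """Negate additive key figures when the record is a reversal ('R')."""
    if record["RECORDMODE"] == "R":
        record = dict(record, quantity=-record["quantity"],
                      amount=-record["amount"])
    return record

delta = [
    {"RECORDMODE": "", "quantity": 10, "amount": 100.0},   # original posting
    {"RECORDMODE": "R", "quantity": 10, "amount": 100.0},  # its reversal
]
cube_delta = [apply_recordmode(r) for r in delta]
print(sum(r["amount"] for r in cube_delta))  # 0.0: the reversal cancels the original
```

The symptom in the thread (positive rows accumulating in the cube) is what you get when this negation is not applied for some key figures, so the 'R' record adds instead of cancelling.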
Can anyone shed light on this or provide reference to documentation regarding delta processing in the cube, especially from LO extractors like 2LIS_02_ITM, etc.?
Thanks
SM
Friends,
Thanks for the reply.
My question is more fundamental.
In 3.5, there was expected to be no DSO between the source system and the cube. How were update, add, and delete handled then?
As I see it, a write-optimized DSO is no hindrance, because essentially the data comes into the cube as if there were no DSO in the middle. I do not see SAP suggesting a DSO anywhere between the source and Business Content cubes to handle delta, especially in our case dealing with 2LIS_02_ITM, SCL, etc.
Are you saying that no delta management is possible without inserting a DSO before the cube? Please realize that WO DSO only brings data request by request without any summation and overwrite confusion.
What puzzles me is the lack of comprehensive, unified, and detailed information, either in the documentation or in a blog.
Please advise.
SM -
Hi,
I have a problem with my report. All sales data, cost data, and GL data of a project are on a single row, but paid invoices are displayed on a separate line, while they should be on the same line as the project.
The paid invoices have the project on the data rows in the cube but appear on a separate line.
Any ideas will help.
This is a very open question, but I cannot explain it in detail; it is a complex report.
Kind Regards
Niren
Hi V,
It is not a one to one mapping but paid invoices are calculated as
(Cumulative BALANCE) Cleared Document type (RV) - (Cumulative BALANCE) (Open Document type (DR) - Open Document type (DG))
Kind Regards
Niren -
Date parameter in report whose source is SSAS cube shows all dates in DimDate?
My cube consists of a FactTable that has a foreign key to [DimDate], through column [DateKey]. My DateKey has dates from
20010101 to 20201231. My fact table only has data from today (20141017).
In SSRS, I add the dimension Dim Date, Date Key as parameter. When I run the report, everything runs great, the only problem being that the date dropdown shows all the
DateKeys from [DimDate] (20010101 to 20201231).
How can I show in the dropdown parameter only the DateKeys that have actual data? In this case, the parameter would display only 20141017.
Thanks.
VM
Thanks, but I don't think you read the whole question.
I'm using as datasource an SSAS cube. The query that populates the parameter looks like this:
WITH
MEMBER [Measures].[ParameterCaption] AS
[Dim Date].[Date Key].CURRENTMEMBER.MEMBER_CAPTION
MEMBER [Measures].[ParameterValue] AS
[Dim Date].[Date Key].CURRENTMEMBER.UNIQUENAME
MEMBER [Measures].[ParameterLevel] AS
[Dim Date].[Date Key].CURRENTMEMBER.LEVEL.ORDINAL
SELECT
{[Measures].[ParameterCaption], [Measures].[ParameterValue], [Measures].[ParameterLevel]} ON COLUMNS,
[Dim Date].[Date Key].ALLMEMBERS ON ROWS
FROM [Sales cube]
VM -
How to get sum distinct in the cube. Is it possible.
Here is the scenario.
One report has many countries on it but only one amount.
For a particular day we have the following data in the fact.
TRANSACTION_DAY_NO   Country   Total Amount
19900101             US        34
19900101             IND       35
19900101             IND       36
19900101             AUS       37
19900101             UNKNOWN   38
19900101             UNKNOWN   39
19900101             UNKNOWN   40
19900101             UNKNOWN   41
19900101             UNKNOWN   42
19900101             UNKNOWN   43
19900101             US        43
19900101             IND       42
There are 2 dimensions on the cube.
Date, Country.
I am not sure how to build a cube on this data.
with t as (
select 19900101 transaction_day_no, 'US' country_no, 34 total_amount from dual union all
select 19900101, 'IND', 35 from dual union all
select 19900101, 'IND', 36 from dual union all
select 19900101, 'AUS', 37 from dual union all
select 19900101, 'UNKNOWN', 38 from dual union all
select 19900101, 'UNKNOWN', 39 from dual union all
select 19900101, 'UNKNOWN', 40 from dual union all
select 19900101, 'UNKNOWN', 41 from dual union all
select 19900101, 'UNKNOWN', 42 from dual union all
select 19900101, 'UNKNOWN', 43 from dual union all
select 19900101, 'US', 43 from dual union all
select 19900101, 'IND', 42 from dual
)
select transaction_day_no, country_no, sum(distinct total_amount) from t
group by cube(transaction_day_no, country_no);
I am using AWM. I have tried to build the cube by selecting the following aggregation operators:
max for country_no and
sum for transaction_day_no.
But I am getting incorrect results.
If I select sum for both country_no and transaction_day_no, then I also get incorrect results.
Please help me solve this issue.
thanks
Thanks for all your replies.
The problem is that I have duplicates, because:
One report can have many customers.
One customer can have many countries.
One customer can have many reports.
If I include the report number in the above data and do a sum on both day and report_number and max for everything else, then everything is fine and I am getting correct results.
But if I take out the report dimension, then I am stuck.
Also, the problem is that I can't have one big dimension for the reports, as the number of reports is in excess of 300M.
We have tried to solve this issue with the following:
Dummy cube:
this has all the combinations of all the dimensions in the fact table, with the report dimension as only one row (-1).
Report dimension for each quarter (34M rows each):
a quarter cube is built for each.
Then add the values from all the quarter cubes to the dummy cube.
Tried this for 2 quarters and it is working fine; the results are correct as well.
The only problem is that it takes a long time to build the cube because of the report dimension.
I am trying to find a way to remove the report dimension but still use it, as we only use the report dimension at level 'ALL'.
But if we do aggregation at the 'ALL' level the answers are wrong again.
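The underlying difficulty is that SUM(DISTINCT) is not distributive: you cannot pre-aggregate per report (or per quarter cube) and then add the partial totals, because distinctness has to be enforced over the whole group. A small sketch of why, with invented amounts:

```python
def sum_distinct(values):
    """SUM(DISTINCT ...) over a list of amounts."""
    return sum(set(values))

# Amounts for one day, split across two reports; 35 appears in both.
report_a = [34, 35, 35]
report_b = [35, 36]

# Correct: distinct enforced over the whole day.
whole_day = sum_distinct(report_a + report_b)                  # 34 + 35 + 36 = 105
# Wrong: per-report distinct totals added together.
per_report = sum_distinct(report_a) + sum_distinct(report_b)   # 69 + 71 = 140
print(whole_day, per_report)  # the 35 shared across reports is double-counted
```

This is why aggregating the quarter cubes at the 'ALL' level of the report dimension gives wrong answers: the duplicates that span partitions get counted once per partition.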
Thanks for looking into this and taking time to reply.
Regards
Alvinder -
Hi,
We have a new implementation of Enterprise XI 3.1 and are trying to determine what sets the maximum number of rows that can successfully be generated using Webi. Is there a hard limit in Webi (assuming the Universe max rows setting is deactivated)?
Also, is there a limit for Live Office extractions into Excel? What are the main drivers for this limit?
Finally, all other factors being equal, are there different limits for the number of rows a query can return successfully in Webi vs Deski vs BEx? How does the performance of these three tools compare to each other in terms of speed?
Thanks!
Darryl
Thanks for your reply! I also got a response back from SAP / BObj that I'll share:
Q) We have a new implementation of Enterprise XI 3.1 and are trying to determine what determines the maximum number of rows that can successfully be generated using Webi. Is there a hard limit that Webi has (assuming the Universe max rows are deactivated)?
=> There is a limit to the number of pages Web Intelligence reports can have. This is because there are a maximum number of pages (per report) the server can process. This number depends on the paper size chosen. Generally, it is approximately 590000 inches vertically by 590000 inches horizontally. Therefore, if you choose a smaller paper size, you get more pages but may not see all the pages of the report.
Resolution:
To work around this behavior, you can either make each line smaller to get more records, or split your report into more than one report to see all your data.
Q) Is there a limit for Live Office extractions into Excel? What are the main drivers for this limit?
=> By default it is set to 512 rows and columns; however, you can change this by modifying the Liveoffice_config.properties file. You can locate this file in /dswsbobje/lib/dsws_liveoffice_provider.jar.
Q) Are there different limits for the number of rows a query can return successfully in Webi vs. Deski?
=> Although there is no official limit on the number of rows that can be handled by DeskI, it is really not meant to be used like a database, i.e. for returning millions of rows. The purpose of BusinessObjects reporting tools is to get useful information, which is not possible with millions of rows.
=> It has been observed that when the number of rows increases beyond 1.5 million approximately, it starts affecting the Desktop Intelligence performance. This behavior may vary from machine to machine depending upon the available resources and programs (processes) running in the background.
=> The limit relates to the number of rows that can actually be displayed in the report, not to how many can be loaded into the data cube. All the rows get loaded into the cube; the problem may arise when it comes to rendering them in the report.
Example: Report A returns 2 million rows into the cube however due to the report design there are only 200 rows in the actual report page and can be run successfully. Report B tries to render all 2 million rows into the actual report page and can fail.
Workaround:
Use filters in the report so that it retrieves fewer rows, which will also help to display only useful data.
I have created a cube using Analytic workspace manager (oracle 10G R2) which is to be used (via a view) in OBIEE.
One of the dimensions (event_dim) is extremely sparse (it has been marked as sparse and is at the appropriate level in the cube's dimensional hierarchy).
In general, when I query the cube (via the view) at high levels of aggregation, the performance is good, but when I query the cube at the lowest level of granularity for the event_dim dimension, the performance is extremely poor (more than a minute to return).
The problem seems to be that the view is returning data for all possible rows in the cube even if most of the measures are NA (i.e null since there is no data present).
For example if I run a query against the cube with no filter on the measures I get around 20,000 rows returned - obviously this takes a while. If I then put a 'my_measure > 0' clause on the query I get 2 rows back (which is correct). However this still takes more than a minute to return - I assume that this is because the query is having to process the 20,000 rows to find the two that actually have data.
Is there any way to control this - I never need to see the NA data so would like to be able to disable this in either the cube or the view - and hence improve performance.
Note: I cannot use the compression option since I need to be able to override the default aggregation plan for certain dimension/measure combinations and it appears that compression and overriding the plan are incompatible (AWM gives the error "Default Aggregation Plan for Cube is required when creating cube with the Compression option").
Thanks,
Chris
I have seen this in some examples/mails. I haven't tried it out myself. :)
Try using an OLAP_CONDITION filter with the appropriate entry-point option (1) on the OLAP_TABLE-based query to restrict the output of the query to values with meas > 0. This condition can be added as part of a specific query or as part of the OLAP_TABLE view definition (applicable to all queries). Hopefully this way there is no need to customize the limitmap variable to suit cube implementation internals like compression, partitioning, presummarization, global composite, etc.
NOTE1: The olap_condition entry point 1 pushes the measure-based dimension filter into the cube before fetching results from it. Hopefully this will help speed up the retrieval of results. This works well when the restriction needs to apply across one dimension (Time or Product alone); then only one olap_condition is sufficient.
SELECT ...
FROM <olap_table_based_view>
where ...
and olap_condition(olap_calc, ' limit time KEEP sales_sales > 0', 1)=1
--and olap_condition(olap_calc, ' limit time KEEP any(sales_sales, product) > 0', 1)=1
NOTE2:
For cases where both time and product (and more dimensions) need to be restricted, we can use two olap_conditions to restrict data to the set of times and products where some data exists, but you could still end up with a specific row (a cross-combination of product and time) with a zero value. You may want to bolster the pre-fetch filtering done by olap_condition with a regular SQL filter referencing the external measure column (and sales_view_col > 0), which is applied to the results after they are fetched from the cube.
E.g:
SELECT ...
FROM <olap_table_based_view>
where ...
and olap_condition(olap_calc, ' limit product KEEP any(sales_sales, time) > 0', 1)=1
and olap_condition(olap_calc, ' limit time KEEP any(sales_sales, product) > 0', 1)=1
and sales_view_col >0
HTH
Shankar -
Urgent question empty rows NonEmpty calculation
Hi all,
The query below works in cube A. I'm aligning my current reports with my new cube, which is exactly the same but with some different namings.
WITH
MEMBER [Measures].[ParameterCaption] AS
[Product].[Product].CURRENTMEMBER.MEMBER_CAPTION
MEMBER [Measures].[ParameterValue] AS
[Product].[Product].CURRENTMEMBER.UNIQUENAME
MEMBER [Measures].[ParameterLevel] AS
[Product].[Product].CURRENTMEMBER.LEVEL.ORDINAL
SELECT
{
[Measures].[ParameterCaption],
[Measures].[ParameterValue],
[Measures].[ParameterLevel]
} ON COLUMNS,
NonEmpty(
[Product].[Product].ALLMEMBERS,
NonEmpty(
[Draw].[Draw calendar - Week].[Draw date],
{[Measures].[Sales amount - vouchers]}
)
) ON ROWS
FROM
[New Cube]
The query above works for my old cube, and it also works in my new cube if I use each NonEmpty separately, i.e. NonEmpty(products, sales) or NonEmpty(draw date, sales) on its own. When I combine both as above, it gives me no results.
Anyone has an idea what might cause this?
Thanks!
When I browse the cube it works, so I assume it must be the MDX query, although I didn't change anything...
-
SQL> Connect Scott/Tiger
Connected.
SQL> Select Deptno,
2 Decode(Ename,Null,'T O T A L',Ename),
3 SUM(Sal)
4 From Emp
5 Group By ROLLUP(Deptno,Ename);
DEPTNO DECODE(ENA SUM(SAL)
10 AWAIS 5000
10 CLARK 2450
10 MILLER 1300
10 T O T A L 8750
20 23132121 1000
20 ADAMS 1100
20 FORD 3000
20 JONES 2975
20 SCOTT 3000
20 SMITH 800
20 T O T A L 11875
30 ALLEN 1600
30 BLAKE 2850
30 JAMES 950
30 MARTIN 1250
30 TURNER 1500
30 WARD 1250
30 T O T A L 9400
T O T A L 30025
19 rows selected.
The ROLLUP functionality is perfect.
But what is this?
SQL> Select Deptno,
2 Decode(Ename,Null,'T O T A L',Ename),
3 SUM(Sal)
4 From Emp
5 Group By CUBE(Deptno,Ename);
DEPTNO DECODE(ENA SUM(SAL)
10 AWAIS 5000
10 CLARK 2450
10 MILLER 1300
10 T O T A L 8750
20 23132121 1000
20 ADAMS 1100
20 FORD 3000
20 JONES 2975
20 SCOTT 3000
20 SMITH 800
20 T O T A L 11875
30 ALLEN 1600
30 BLAKE 2850
30 JAMES 950
30 MARTIN 1250
30 TURNER 1500
30 WARD 1250
30 T O T A L 9400
23132121 1000
ADAMS 1100
ALLEN 1600
DEPTNO DECODE(ENA SUM(SAL)
AWAIS 5000
BLAKE 2850
CLARK 2450
FORD 3000
JAMES 950
JONES 2975
MARTIN 1250
MILLER 1300
SCOTT 3000
SMITH 800
TURNER 1500
WARD 1250
T O T A L 30025
34 rows selected.
CUBE is selecting 34 rows. Why?
I am not understanding the functionality of CUBE.
Can anyone explain it?
Thanks in Advance...
Hi,
With CUBE, you create subtotals in every dimension. Your example does not make much sense for CUBE, since ename is unique across all departments.
Look at this. Also note the GROUPING function, which distinguishes pure null values from nulls caused by grouping (giving 1).
Kind regards
Laurent Schneider
OCM-DBA
select decode(grouping(JOB), 1, 'T O T A L', job) JOB,
decode(grouping(DNAME), 1, 'T O T A L', dname) DNAME,
sum(nvl(sal,0))
from emp natural full outer join dept
group by cube(job,dname)
order by 1 nulls first, 2 nulls first;
JOB DNAME SUM(NVL(SAL,0))
OPERATIONS 0
T O T A L 0
ANALYST RESEARCH 6000
ANALYST T O T A L 6000
CLERK ACCOUNTING 1300
CLERK RESEARCH 1900
CLERK SALES 950
CLERK T O T A L 4150
MANAGER ACCOUNTING 2450
MANAGER RESEARCH 2975
MANAGER SALES 2850
MANAGER T O T A L 8275
PRESIDENT ACCOUNTING 5000
PRESIDENT T O T A L 5000
SALESMAN SALES 5600
SALESMAN T O T A L 5600
T O T A L ACCOUNTING 8750
T O T A L OPERATIONS 0
T O T A L RESEARCH 10875
T O T A L SALES 9400
T O T A L T O T A L 29025
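The row counts follow directly from the grouping sets each clause generates: ROLLUP(deptno, ename) produces the sets (deptno, ename), (deptno), and (), while CUBE additionally produces (ename). With the 15 employees and 3 departments above, that is 15 + 3 + 1 = 19 rows for ROLLUP and 15 + 3 + 15 + 1 = 34 rows for CUBE. A small sketch of the grouping-set expansion:

```python
from itertools import combinations

def rollup_sets(cols):
    """ROLLUP(a, b, ...) groups by each prefix: (a, b, ...), ..., (a,), ()."""
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def cube_sets(cols):
    """CUBE(a, b, ...) groups by every subset of the columns."""
    return [s for r in range(len(cols), -1, -1)
            for s in combinations(cols, r)]

print(rollup_sets(["deptno", "ename"]))
# [('deptno', 'ename'), ('deptno',), ()]
print(cube_sets(["deptno", "ename"]))
# [('deptno', 'ename'), ('deptno',), ('ename',), ()]
```

The extra ('ename',) set is exactly the block of 15 per-employee subtotal rows (with a null deptno) that appears only in the CUBE output.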