Query on Aggregate
Hi Gurus,
I work in archiving, and my question is this: we have an InfoCube with some aggregates, and I want to know how queries are created for the aggregates and how they are accessed. As I understand it, the query is defined on the InfoCube for which the aggregates exist; is it this same query that also gets its results from the aggregates, according to the query definition?
My other question: if I change or do anything to the InfoCube, does it affect the aggregates? Also, when is it possible to query the aggregates separately? Can a query be created only for a particular aggregated cube? Can anyone please advise me on this?
Thanks,
Hem
Hello Hem,
it is completely transparent to the user whether, and which, aggregates are used when executing a query. Only in debug mode (transaction RSRT) can you force the system to use (or not use) an aggregate.
If you make changes to the InfoCube, you might have to rebuild the aggregates. Reasons include:
- If you add or change any key figure
- If you change a characteristic that's used in the aggregate
- If you change a hierarchy that is used in the aggregate
Regards,
Marc
SAP NetWeaver RIG
Similar Messages
-
Query with aggregates over collection of trans. instances throws an error
Hi, I'm executing a query with aggregates and it throws an exception with the following message: "Queries with aggregates or projections using variables currently cannot be executed in-memory. Either set the javax.jdo.option.IgnoreCache property to true, set IgnoreCache to true for this query, set the kodo.FlushBeforeQueries property to true, or execute the query before changing any instances in the transaction."
The offending query was on type "class Pago" with filter "productosServicios.contains(item)".
The class Pago has the field productosServicios which is a List of Pago$ItemMonto, the relevant code is :
KodoQuery query = (KodoQuery) pm.newQuery(Pago.class, pagos);
where pagos is a list of transient instances of type Pago.
query.declareVariables("Pago$ItemMonto item");
query.setFilter("productosServicios.contains(item)");
query.setGrouping("item.id");
query.setResult("item.id as idProductoServicio, sum(montoTotal) as montoTotal");
query.setResultClass(PagoAgrupado.class);
where the class PagoAgrupado has the corresponding fields idProductoServicio and montoTotal.
In other words, I want to aggregate the id field of class ItemMonto over the instances contained in the productosServicios field of class Pago.
I have set the IgnoreCache and kodo.FlushBeforeQueries flags to true in the kodo.properties file, as well as on the PersistenceManager and query instances, but it has not worked. What can be wrong?
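For reference, a minimal kodo.properties fragment with the two settings named in the error message (values as the poster describes; this is a sketch, not a verified fix for the exception) might look like:

```properties
# Allow queries with variables/aggregates to run against the datastore
# instead of in-memory
javax.jdo.option.IgnoreCache: true
# Flush dirty instances to the database before running queries
kodo.FlushBeforeQueries: true
```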
I'm using Kodo 3.2.4, MySQL 5.0
Thanks,
Jaime.
Message was edited by:
jdelajaraf -
How do you find whether a query touches aggregates or not?
Hi gurus,
How do you find whether a query touches aggregates or not?
Thanks in advance
Raj
Hi Rajaiah,
You can test this from TA RSRT -> Execute and debug -> Display aggregate found.
Hope it helps.
BR
Stefan -
Querying on aggregates created on Virtual Cube
Hello,
I have implemented a virtual InfoProvider with services. When I create queries directly on the virtual InfoProvider, the query runs fine and I see the report.
As per my requirement I created an aggregate on the virtual InfoProvider, then defined a query on the aggregate. But when I execute this query I get the following errors:
Error reading the data of InfoProvider AG4
An exception with the type CX_SY_REF_IS_INITIAL occurred, but was neither handled locally, nor declared in a RAISING clause
Dereferencing of the NULL reference.
Would appreciate any assistance on this topic.
Thanks
Priyadarshi
Yes, it is possible to create aggregates on virtual cubes.
I would be grateful if anybody who is aware of the method of aggregate creation, and who has faced similar issues, could come forward and shed some light on what the error might be.
Thanks -
How does a query read an aggregate?
Experts !
I have problem with my aggregate design.
I have created an aggregate on one of my cubes. When I check it using RSRT in debug mode, it looks like the query is not hitting the aggregate I have just created; it goes to another aggregate.
Now, does this mean that the query will always go to the same aggregate? Or when users pull different characteristics from the free characteristics, might my query jump to another aggregate?
Or does the query stick with whatever aggregate it hits at the beginning?
How does the process work?
Thanks
When you're not sure how to design a good aggregate, let the system propose one for you, but you have to use the cube in question for some time first. The reason is that the system needs to gather statistics before it can propose a good one.
Designing an aggregate (drag and drop) is easy, but designing a good one is not as easy as it looks. It requires some skill. But the good news is that skill can be learned.
When you execute a query, the OLAP Processor looks for data (based on the criteria) in the following order:
Local OLAP Cache
Global OLAP Cache
Aggregate
Cube
The goal is that the OLAP Processor should hit one of the first three; then, bingo, a good hit. But if all of them are missed, it has to go to the cube to fetch the data, which defeats the purpose of the aggregate.
Remember, the main purpose of an aggregate is speeding up data retrieval, but there is associated overhead. You should check the ratings and delete bad aggregates.
Cheers.
Jen -
Slow query on aggregate.
Hello Experts,
We have a very specific issue with query performance. We have a query which used to execute within 3 minutes. We created aggregates on the cube, and the query now partially uses the aggregate and the cube. After creating the new aggregate, the query takes more than 25 minutes to execute.
If we switch off the aggregate and execute the query, it takes only 3 minutes. The query uses the Count function in a few formulas. Can you please suggest whether there is any option for this particular query to ignore the aggregate and use only the InfoCube?
Regards
Christopher Francis
Hi Francis,
First of all, this is not a common issue seen across SAP systems.
As you describe it, execution time on the aggregate is more than on the cube.
My analysis is that the characteristics on which you built the aggregate may be different from the ones the query needs.
Example: the query needs cust_no, but the aggregate might have been created on a different characteristic, say prod_no.
So when the query hits the aggregate, it does not find any record for cust_no and searches again in the cube.
It therefore takes more time, searching the aggregate as well as the cube.
That is why I think it may be taking more time to execute with the aggregate.
Please check the characteristics on which you have created the aggregate.
Regards,
Sidhartha -
Pivot type query without aggregate function. Transposing
Hi experts,
Oracle 11g.
I have a table (see the code example to reproduce) that has a date, a grouping, and the count of that grouping (determined in another query). I need a pivot-type query, but without the aggregate functions. This is just for a report display. I cannot seem to figure this one out. Thanks for your help.
CREATE TABLE temp_task
AS
SELECT TO_DATE ('15-NOV-2012') validation_date,
'GROUP 1' AS group_number,
42 AS monthly_count
FROM DUAL
UNION ALL
SELECT TO_DATE ('14-DEC-2012') validation_date,
'GROUP 1' AS group_number,
33 AS monthly_count
FROM DUAL
UNION ALL
SELECT TO_DATE ('15-NOV-2012') validation_date,
'GROUP 2' AS group_number,
10 AS monthly_count
FROM DUAL
UNION ALL
SELECT TO_DATE ('14-DEC-2012') validation_date,
'GROUP 2' AS group_number,
32 AS monthly_count
FROM DUAL
UNION ALL
SELECT TO_DATE ('15-NOV-2012') validation_date,
'GROUP 3' AS group_number,
7 AS monthly_count
FROM DUAL
UNION ALL
SELECT TO_DATE ('14-DEC-2012') validation_date,
'GROUP 3' AS group_number,
9 AS monthly_count
FROM DUAL;
Using only SQL, I need to return the following:
VALIDATION_DATE | GROUP 1 | GROUP 2 | GROUP 3
11/15/2012 | 42 | 10 | 7
12/14/2012 | 33 | 32 | 9
Hi,
You always need to use an aggregate function while pivoting.
Even if you don't really need any aggregation, that is, when what you see in the table is what you'll get in the result set, you still have to use an aggregate function. If there will only be one value contributing to each cell, then you can use MIN or MAX. It won't matter which; since there's only 1 value, that value will be the highest of the 1, and it will also be the lowest. For NUMBER columns, you could also use SUM or AVG.
SELECT *
FROM temp_task
PIVOT ( MIN (monthly_count)
        FOR group_number IN ( 'GROUP 1'
                            , 'GROUP 2'
                            , 'GROUP 3'
                            )
      )
ORDER BY validation_date
;
Output:
VALIDATION_ 'GROUP 1' 'GROUP 2' 'GROUP 3'
15-Nov-2012 42 10 7
14-Dec-2012 33 32 9
It sounds like you're doing real aggregation someplace, to get monthly_count. Maybe it would be simpler and more efficient to do the pivoting at that point. What is the big picture here? Post some sample data as it is before you compute monthly_count, and the results you want from that data (if different from what you've already posted), and then let's see if we can't aggregate it and pivot it at the same time. -
Query with aggregate on custom mapping returning wrong type
I've got a JDOQL query that returns the sum of a single column, where that
column is custom-mapped, but the result I get back is losing precision.
I create the JDOQL query as normal and set the result to the aggregate
expression:
KodoQuery query = (KodoQuery) pm.newQuery(candidateClass, filter);
query.setResult("sum(amount)");
I can also setUnique for good measure as I am expecting just 1 row back:
query.setUnique(true);
The query returns an Integer, but my amount column is a decimal with 5
digits after the decimal point. If I ask for a Double or BigDecimal as the
resultClass, it does return an object of that type, but loses all
precision after the decimal point:
query.setResultClass(BigDecimal.class);
The amount field in my candidate class is of the class Money, a class that
encapsulates a currency and a BigDecimal amount. See
http://www.martinfowler.com/ap2/quantity.html
It is mapped as a custom money mapping to an amount and currency column,
based on the custom mapping in the Kodo examples. I have tried mapping the
amount as a BigDecimal value, and querying the sum of this works. So the
problem seems to be the aggregate query on my custom mapping. Do I need to
write some code for my custom mapping to be able to handle aggregates?
Thanks,
Alex
Can you post your custom mapping?
Also, does casting the value have any effect?
q.setResult ("sum((BigDecimal) amount)"); -
Hi, I need some help coming up with a query for department analysis. I am providing test cases below. My data is currently being taken from an Oracle 10g materialized view. What I need to do is give a date interval and find out how many people were in each dept (over all depts that exist in the view) at the beginning and end of the interval. If a person was in more than one dept during that period, only the last one is considered. I also need how many new people were added to the company during that period.
For example, using the data below and 13-AUG-04 through 30-AUG-04, I would get:
Unit #Beg #End #New
18 0 0 1
33 1 1 0
56 0 0 0
70 1 1 0
71 1 1 0
The last two columns in the test table (First/Last) refer to the person's employment in the company. There will be for sure either Y/Y (if the person has one period of employment only) or both Y/N, N/Y.
Where I'm having problems is keeping track of both current departments and people new to the company. Someone can be new to the company (evidenced by First='Y'), but if he's only moving to a different department then he's not new to the company.
Thanks,
Rob
create table test_dept
(
dept number,
head varchar2(20),
wrkid number,
name varchar2(50),
unitid number,
startdate date,
enddate date,
firstentry char(1),
lastentry char(1)
);
insert into test_dept values(20812,'JONES',27264,'SMITH, Mary',71,to_date('09-AUG-04','DD-MON-YY'),to_date('11-AUG-04','DD-MON-YY'),'Y','N');
insert into test_dept values(20812,'JONES',27264,'SMITH, Mary',71,to_date('11-AUG-04','DD-MON-YY'),to_date('04-OCT-04','DD-MON-YY'),'N','N');
insert into test_dept values(20812,'JONES',27264,'SMITH, Mary',33,to_date('04-OCT-04','DD-MON-YY'),to_date('05-OCT-04','DD-MON-YY'),'N','N');
insert into test_dept values(20812,'JONES',27264,'SMITH, Mary',33,to_date('05-OCT-04','DD-MON-YY'),to_date('19-APR-05','DD-MON-YY'),'N','Y');
insert into test_dept values(20812,'JONES',27265,'SMITH, Jack',71,to_date('09-AUG-04','DD-MON-YY'),to_date('11-AUG-04','DD-MON-YY'),'Y','N');
insert into test_dept values(20812,'JONES',27265,'SMITH, Jack',33,to_date('10-AUG-04','DD-MON-YY'),to_date('05-OCT-04','DD-MON-YY'),'N','N');
insert into test_dept values(20812,'JONES',27265,'SMITH, Jack',33,to_date('05-OCT-04','DD-MON-YY'),to_date('20-APR-05','DD-MON-YY'),'N','Y');
insert into test_dept values(28022,'BABS',39220,'RUPERT, A',70,to_date('11-AUG-04','DD-MON-YY'),to_date('29-OCT-04','DD-MON-YY'),'Y','N');
insert into test_dept values(28022,'BABS',39220,'RUPERT, A',33,to_date('19-OCT-04','DD-MON-YY'),to_date('25-OCT-04','DD-MON-YY'),'N','N');
insert into test_dept values(28022,'BABS',39220,'RUPERT, A',33,to_date('25-OCT-04','DD-MON-YY'),to_date('23-NOV-04','DD-MON-YY'),'N','N');
insert into test_dept values(28022,'BABS',39220,'RUPERT, A',70,to_date('23-NOV-04','DD-MON-YY'),to_date('27-JAN-05','DD-MON-YY'),'N','N');
insert into test_dept values(28022,'BABS',39220,'RUPERT, A',33,to_date('08-FEB-05','DD-MON-YY'),to_date('13-JUL-05','DD-MON-YY'),'N','N');
insert into test_dept values(28022,'BABS',39220,'RUPERT, A',56,to_date('13-JUL-05','DD-MON-YY'),to_date('31-OCT-05','DD-MON-YY'),'N','Y');
insert into test_dept values(20812,'JONES',10000,'B',18,to_date('15-AUG-04','DD-MON-YY'),to_date('29-AUG-04','DD-MON-YY'),'Y','Y');
this?
var d1 varchar2(100)
var d2 varchar2(100)
exec :d1 := '13-AUG-04'
exec :d2 := '30-AUG-04'
select unitid,
sum( case when startdate <= to_date(:d1) and enddate >= to_Date(:d1)
and nextstart >= to_date(:d2) then 1 else 0 end ) beg#,
sum( case when startdate <= to_date(:d2) and enddate >= to_Date(:d2) then 1 else 0 end ) end#,
sum( case when firstentry='Y' and startdate between to_Date(:d1) and to_date(:d2) then 1 else 0 end ) new#
from (
select unitid, wrkid, name,
startdate, enddate, firstentry,
last_value(startdate) over (partition by wrkid order by startdate desc
rows between unbounded preceding and 1 preceding) nextstart
from test_dept
)
group by unitid
/
You should have test data where a dept has different beg and end numbers.
Update of a table from a select query with aggregate functions.
Hello All,
I have problem here:
I have 2 tables, A(a1, a2, a3, a4, ...) and B(a1, a2, b1, b2, b3). I need to calculate avg(a4-a3), max(a4-a3) and min(a4-a3) and insert them into table B. If the foreign keys a1, a2 already exist in table B, I need to update the computed values into columns b1, b2 and b3 respectively for that a1, a2.
Q1. Is it possible to do this with a single query ? I would prefer not to join A with B because the table A is very large. Also columns b1, b2 and b3 are non-nullable.
Q2. Also if a4 and a3 are timestamps what is the best way to find the average? A difference of timestamps yields INTERVAL DAY TO SECOND over which the avg function doesn't seem to work. The averages, max and min in my case would be less than a day and hence all I need is to get the data in the hh:mm:ss format.
As of now I'm using :
TO_CHAR(TO_DATE(ABS(MOD(TRUNC(AVG(extract(hour FROM (last_modified_date - created_date))*3600 +
extract( minute FROM (last_modified_date - created_date))*60 +
extract( second FROM (last_modified_date - created_date)))
),86400)),'sssss'),'hh24":"mi":"ss') AS avg_time,
But this is very long drawn. Something more compact and efficient would be nice.
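One common, more compact workaround (a sketch only, using the column names from the post; it assumes the sub-second precision of the TIMESTAMPs can be dropped) is to CAST the TIMESTAMPs to DATE, so that subtraction yields a plain NUMBER of days that AVG, MAX and MIN handle directly, and format the result afterwards:

```sql
-- Sketch: DATE subtraction returns a NUMBER of days, which the
-- aggregate functions handle directly. Adding that fraction of a day
-- to a fixed midnight lets TO_CHAR render it as hh24:mi:ss (valid
-- only while the value is under one day, as stated in the post).
SELECT TO_CHAR(TRUNC(SYSDATE)
               + AVG(CAST(last_modified_date AS DATE)
                   - CAST(created_date AS DATE)),
               'hh24:mi:ss') AS avg_time
FROM   a;
```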
Thanks in advance for your inputs.
Edited by: 847764 on Mar 27, 2011 5:35 PM
847764 wrote:
Hi,
Thanks everyone for such fast replies. Malakshinov's example worked fine for me as far as updating the table goes. As for the timestamp computations, I'm posting additional info:
Sorry, I don't understand.
If Malakshinov's example worked for updating the table, but you still have problems, does that mean you have to do something else besides update the table? If so, what?
Oracle version : Oracle Database 11g Enterprise Edition Release 11.2.0.1.0
Here are the table details :
DESC Table A
Name Null Type
ID NOT NULL NUMBER
A1 NOT NULL VARCHAR2(4)
A2 NOT NULL VARCHAR2(40)
A3 NOT NULL VARCHAR2(40)
CREATED_DATE NOT NULL TIMESTAMP(6)
LAST_MODIFIED_DATE TIMESTAMP(6)
DESCribing the tables can help clarify some things, but it's no substitute for posting CREATE TABLE and INSERT statements. With only a description of the table, nobody can re-create the problem or test their ideas. Please post CREATE TABLE and INSERT statements for both tables as they exist before the MERGE. If table b doesn't contain any rows before the MERGE, then just say so, but you still need to post a CREATE TABLE statement for both tables, and INSERT statements for table a.
The objective is to compute the response times: avg(LAST_MODIFIED_DATE - CREATED_DATE), max(LAST_MODIFIED_DATE - CREATED_DATE) and min(LAST_MODIFIED_DATE - CREATED_DATE), grouped by A1 and A2, and store them in table B under AVG_T, MAX_T and MIN_T. Since AVG_T, MAX_T and MIN_T are only used for reporting purposes we have kept them as Varchar (though I think keeping them as timestamps would make more sense).
I think a NUMBER would make more sense (the number of minutes, for example), or perhaps an INTERVAL DAY TO SECOND. If you stored a NUMBER, it would be easy to compute averages.
In table B the times are stored in the format hh:mm:ss. We don't need millisecond precision.
If you don't need milliseconds, then you should use DATE instead of TIMESTAMP. The functions for manipulating DATEs are much better.
Hence I was calculating is as follows:
-- Avg Time
TO_CHAR(TO_DATE(ABS(MOD(TRUNC(AVG(extract(hour FROM (last_modified_date - created_date))*3600 +
extract( minute FROM (last_modified_date - created_date))*60 +
extract( second FROM (last_modified_date - created_date)))
),86400)),'sssss'),'hh24":"mi":"ss') AS avg_time,
--Max Time
extract (hour FROM MAX(last_modified_date - created_date))||':'||extract (minute FROM MAX(last_modified_date - created_date))||':'||TRUNC(extract (second FROM MAX(last_modified_date - created_date))) AS max_time,
--Min Time
extract (hour FROM MIN(last_modified_date - created_date))||':'||extract (minute FROM MIN(last_modified_date - created_date))||':'||TRUNC(extract (second FROM MIN(last_modified_date - created_date))) AS min_time
Is this something that has to be done before or after the MERGE?
Post the complete statement.
Is this part of a query? Where's the SELECT keyword?
Is this part of a DML operation? Where's the INSERT, or UPDATE, or MERGE keyword?
What are the exact results you want from this? Explain how you get those results.
Is the code above getting the right results? Are you just asking if there's a better way to get the same results?
You have to explain things very carefully. None of the people who want to help you are familiar with your application, or your needs.
I just noticed that my reply is horribly formatted - apologies! I'm just getting the hang of it.
Whenever you post formatted text (such as query results) on this site, type these 6 characters:
{code}
(small letters only, inside curly brackets) before and after each section of formatted text, to preserve spacing. -
Query for aggregates for each date in a date range
Hi,
I want to generate a trend report with a T-SQL proc, which needs the following logic.
Input:
A date range, say '10/10/12' to '20/10/12' (say, to check the trend of account size over 20 days in the trend report).
Account balance is captured randomly (I mean, not every day).
The table with data looks like this:
--Account Balance Table
CREATE TABLE AccBanalce (
    BranchId     SMALLINT     NOT NULL,
    AccId        CHAR(9)      NOT NULL,
    Amount       DECIMAL(9,3) NOT NULL,
    SnapShotDate DATETIME     NOT NULL,
    CONSTRAINT PK_AccBanalce PRIMARY KEY NONCLUSTERED (AccId, SnapShotDate)
)
GO
Create CLUSTERED INDEX CIx_AccBanalce ON AccBanalce (SnapShotDate)
GO
--Date Range table
CREATE TABLE DateRange ( StartDate DATETIME, EndDate DATETIME)
GO
--Data for the account balance table
INSERT INTO AccBanalce (BranchId, AccId, Amount, SnapShotDate)
VALUES (1, 'C1-100', 10.4, '10/11/2010' ),
(1, 'G1-110', 20.5, '10/11/2010' ),
(2, 'GC-120', 23.7, '10/11/2010' ),
(2, 'Gk-130', 78.9, '10/13/2010' ),
(3, 'GH-150', 23.5, '10/14/2010'),
(1, 'C1-100', 31.8, '10/16/2010' ),
(1, 'G1-110', 54.8, '10/16/2010' ),
(2, 'GC-120', 99.0, '10/16/2010' ),
(3, 'Gk-130', 110.0, '10/16/2010' ),
(3, 'G5-140', 102.8, '10/16/2010' ),
(2, 'GC-120', 105, '10/18/2010' ),
(2, 'Gk-130', 56.7, '10/18/2010' ),
(1, 'C1-100', 84.3, '10/18/2010' ),
(1, 'G1-110', 75.2, '10/19/2010' ),
(2, 'GC-120', 64.9, '10/20/2010' ),
(3, 'GH-150', 84.0, '10/20/2010' ),
(1, 'C1-100', 78.0, '10/20/2010' ),
(1, 'G1-110', 89.5, '10/20/2010' )
GO
--Data for the DateRange table
INSERT INTO DateRange (StartDate, EndDate) VALUES
('2010-10-11 00:00:00.000', '2010-10-11 23:59:59.997'),
('2010-10-12 00:00:00.000', '2010-10-12 23:59:59.997'),
('2010-10-13 00:00:00.000', '2010-10-13 23:59:59.997'),
('2010-10-14 00:00:00.000', '2010-10-14 23:59:59.997'),
('2010-10-15 00:00:00.000', '2010-10-15 23:59:59.997'),
('2010-10-16 00:00:00.000', '2010-10-16 23:59:59.997'),
('2010-10-17 00:00:00.000', '2010-10-17 23:59:59.997'),
('2010-10-18 00:00:00.000', '2010-10-18 23:59:59.997'),
('2010-10-19 00:00:00.000', '2010-10-19 23:59:59.997'),
('2010-10-20 00:00:00.000', '2010-10-20 23:59:59.997')
GO
Question:
I want the TOTAL balance of all accounts in a branch for each day between 10/11/2010 and 10/20/2010.
If there is a snapshot date on which no entry was made to the AccBanalce table for an account, the last available balance is to be considered for that account.
For example, for account [C1-100] on 10/15/2010 the balance should be [10.4].
--Group by branch
--Last valid account balance to be considered
I know this is a long solution, but anyone who is an expert in T-SQL can help me with it.
Thanks,
Krishna
Thanks Himanshu, you almost solved my issue... but can you provide the final output as follows?
Actually you are aggregating the Amount, which is not required, as it is the total available in that account.
But the missing point is that I need the SUM of all the accounts for each DAY in a BRANCH.
I modified the 3rd result query to get DAILY balances for each account as follows:
--*RESULT*
SELECT a.AccId, a.StartDate,
(SELECT TOP 1 b.Amount
FROM #InterimOutput b
WHERE b.AccId = a.AccId and b.Amount > 0
AND B.StartDate<=A.StartDate ORDER BY B.StartDate DESC) as ToDateBal
FROM #InterimOutput a
ORDER BY a.AccId
go
Now I need the SUM of all account balances at each BRANCH on a DAILY basis. Can you help with that?
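For what it's worth, a sketch of that follow-up step (an assumption-laden sketch, not a tested answer: it assumes #InterimOutput carries the columns used in the query above, and recovers BranchId by joining back to AccBanalce, since #InterimOutput may not carry it):

```sql
-- Sketch: wrap the carried-forward per-account balance from the
-- RESULT query in a derived table, then SUM per branch per day.
-- Assumes each AccId belongs to a single branch.
SELECT ab.BranchId, d.StartDate,
       SUM(d.ToDateBal) AS BranchDailyBalance
FROM (
    SELECT a.AccId, a.StartDate,
           (SELECT TOP 1 b.Amount
            FROM #InterimOutput b
            WHERE b.AccId = a.AccId AND b.Amount > 0
              AND b.StartDate <= a.StartDate
            ORDER BY b.StartDate DESC) AS ToDateBal
    FROM #InterimOutput a
) AS d
JOIN (SELECT DISTINCT AccId, BranchId FROM AccBanalce) AS ab
  ON ab.AccId = d.AccId
GROUP BY ab.BranchId, d.StartDate
ORDER BY ab.BranchId, d.StartDate;
```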
Thanks again
Krishna -
How do you do query caching / aggregates / ETL optimisation?
Hello
How do you do the following? A document or step-by-step approach would be really handy.
1. How do you do query caching? The pros and cons? How do you optimize it?
2. How do you create aggregates? Is there a step-by-step method?
3. How do you optimize ETL? What are the benefits of it? Again, a document would be handy.
Thanks
Search SDN and ASUG for many good presentations.
Here are a couple to get you started:
http://www.asug.com/client_files/Calendar/Upload/ACF3DBF.ppt
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/p-r/performance in sap bw.pdf
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/events/asug-biti-03/sap bw query performance tuning with aggregates -
Grouping result set by a column in a query without aggregate function
In the below result set, several columns appear for one table.
col data_Type format a12
col column_name format a10
set lines 100
set pages 50
SELECT table_name, column_name FROM user_tab_cols WHERE char_used = 'B'
AND TABLE_name NOT LIKE 'BIN%' ORDER BY TABLE_NAME;
TABLE_NAME COLUMN_NAM
BONUS JOB
BONUS ENAME
DEPT DNAME
DEPT LOC
EMP JOB
EMP ENAME
EMP_DTL ENAME
EMP_DTL LOC
EMP_DTL CONVN_LOC
MEMBER GENDER
MEMBER TEAM
MEMBER MEMBERTYPE
MEMBER FIRSTNAME
MEMBER LASTNAME
MEMBER PHONE
ORDERS STATUS
SYS_CONFIG CODE_ID
SYS_CONFIG FLAG_A
TOURNAMENT TOURTYPE
TOURNAMENT TOURNAME
I don't want the table_name to repeat for every column within a table_name group. If I use SQL*Plus's BREAK command, it would suppress duplicates:
break on table_name
and the result set would look like
TABLE_NAME COLUMN_NAM
BONUS JOB
ENAME
DEPT DNAME
LOC
EMP JOB
ENAME
EMP_DTL ENAME
LOC
CONVN_LOC
MEMBER GENDER
TEAM
MEMBERTYPE
FIRSTNAME
LASTNAME
PHONE
ORDERS STATUS
SYS_CONFIG CODE_ID
FLAG_A
TOURNAMENT TOURTYPE
TOURNAME
TYPE TYPE
X ENAME
Y ENAMEBut how can i do this using oracle SQL?Analytics?
SQL> with t as
2 (
3 select 'A' col1, 100 col2 from dual
4 union all
5 select 'A' col1, 200 col2 from dual
6 union all
7 select 'B' col1, 800 col2 from dual
8 union all
9 select 'B' col1, 400 col2 from dual
10 union all
11 select 'C' col1, 500 col2 from dual
12 )
13 select decode(row_number() over (partition by col1 order by col2), 1, col1, null) col1
14 ,col2
15 from t
16 /
C COL2
A 100.00
200.00
B 400.00
800.00
C 500.00
SQL>
Cheers,
Sarma. -
Aggregate function in the query
Hi gurus,
What makes the presentation layer send queries with the AGGREGATE(... BY ...) function even though this function is not used in the column formulas?
Now comes a long explanation - only for the patient ones :-)
I am working with OBIEE / Essbase.
A strange thing happened to one of my reports - exactly the same report (pivot) in production (old repository) works fine, but brings wrong results on the new repository.
I found out that SQL sent to OBIEE is different.
With old repository, it sends a query with 'Aggregate ( by)' function used for measures. However with the new one, it sends just column names without the Aggregate.
Where shall I look for the difference?
-- OLD (Good) query -----------
SELECT DIM_BATCH_HEADERS."Gen3,DIM_BATCH_HEADERS" saw_0,
PROD_QTY.PROD_ACTUAL_QTY_KG/1000 saw_1,
DIM_COMPONENT."Gen3,DIM_COMPONENT" saw_2, ' ' saw_3,
COST.COMP_ACTUAL_COST_ILS_PER_TON saw_4, 0 saw_5,
COST.ACTUAL_COST_K_ILS saw_6, 0 saw_7,
AGGREGATE(saw_1 BY ),
AGGREGATE(saw_4 BY ),
AGGREGATE(saw_6 BY ),
AGGREGATE(saw_1 BY saw_0, saw_5),
AGGREGATE(saw_4 BY saw_0, saw_5),
AGGREGATE(saw_6 BY saw_0, saw_5)
FROM "COP_MAAD#1" WHERE (COMP_QTY.COMP_RELEVANT > 0)
AND (DIM_TIME."YEAR" = '2009')
AND (DIM_BATCH_HEADERS."Gen2,DIM_BATCH_HEADERS" = 'ABC')
ORDER BY saw_1 DESC, saw_4 DESC, saw_0, saw_2, saw_3, saw_5, saw_7
-- NEW (Incorrect) Query ---------------
SELECT DIM_BATCH_HEADERS."Gen3,DIM_BATCH_HEADERS" saw_0,
PROD_QTY.PROD_ACTUAL_QTY_KG/1000 saw_1,
DIM_COMPONENT."Gen3,DIM_COMPONENT" saw_2,
' ' saw_3, COST.COMP_ACTUAL_COST_ILS_PER_TON saw_4,
0 saw_5,
COST.ACTUAL_COST_K_ILS saw_6,
0 saw_7 FROM "COP_MAAD#1"
WHERE (COMP_QTY.COMP_RELEVANT > 0) AND (DIM_TIME."YEAR" = '2009')
AND (DIM_BATCH_HEADERS."Gen2,DIM_BATCH_HEADERS" = 'ABC') ORDER BY saw_1 DESC, saw_4 DESC
Thanks,
Alex
Hi,
Can anybody reply to the above question please?
regards,
Siva -
Hello,
I have a problem with an aggregate I created for a query: the aggregate is not used.
I tried it in RSRT with "run and debug", and there the aggregate is found and the aggregate statistics in the InfoCube get a new "last used" date.
But when I run it in RSRT just clicking the "run" button, or within the BEx Analyzer, the aggregate statistics are not updated, so the aggregate is not used.
What did I do wrong?
Hi Dennis,
if the query is in the cache then it's read from there. If it's not in the cache then it will use the aggregates. You can see in the execution statistics of the query whether it was answered from the cache or by using aggregates. Activate statistics in RSRT and debug; then you should see how it is executed.
Regards,
Jürgen