Aggregation query

Dear colleagues,
I would like to create a query that:
Step 1: finds the max of NSAVERAGE per day
Step 2: takes the max of step 1 per NE & LAC
My tables:
A. Table "SUB"
Name                 Null?    Type
MSC_ID                        NUMBER
LAC_ID                        NUMBER
PERIOD_START_TIME             DATE
PERIOD_DURATION               NUMBER
NSCURRENT                     NUMBER
NSAVERAGE                     NUMBER
B. Table "NE_ID"
Name                 Null?    Type
NE_ID                         NUMBER
NE_NAME                       VARCHAR2(10)
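For anyone reproducing the setup, DDL matching the listings above might look like this (a sketch inferred from the column lists; nullability and constraints omitted):
CREATE TABLE sub (
    msc_id            NUMBER,
    lac_id            NUMBER,
    period_start_time DATE,
    period_duration   NUMBER,
    nscurrent         NUMBER,
    nsaverage         NUMBER
);
CREATE TABLE ne_id (
    ne_id   NUMBER,
    ne_name VARCHAR2(10)
);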
For the 1st step I created the following query:
SELECT n.ne_name AS "MSS",
s.LAC_ID,
TO_CHAR(s.PERIOD_START_TIME,'DD.MM.YYYY') AS "DATE",
MAX(s.NSAVERAGE) AS "MAX_SUB"
FROM ne_id n JOIN SUB s
ON (n.ne_id = s.msc_id)
WHERE MOD(TO_CHAR(s.period_start_time,'J'),7)+1 IN (1,2,3,4,5) -- weekend values are excluded
GROUP BY n.ne_name, s.LAC_ID, TO_CHAR(s.PERIOD_START_TIME,'DD.MM.YYYY')
ORDER BY s.LAC_ID ASC;
and now I'd like to create the 2nd step: that is, to get the max of the step-1 maxima, grouped by ne_name & LAC_ID.
Could you please help me modify my query?
Thanks
Lukasz
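One way to get the second step (a sketch, untested against the poster's data): treat the step-1 query as an inline view and aggregate its daily maxima again.
SELECT mss, lac_id, MAX(max_sub) AS max_of_daily_max
FROM (
      -- step 1: daily maxima per NE and LAC (the query above, minus its ORDER BY)
      SELECT n.ne_name AS mss,
             s.lac_id,
             TRUNC(s.period_start_time) AS day,
             MAX(s.nsaverage) AS max_sub
      FROM   ne_id n
      JOIN   sub s ON n.ne_id = s.msc_id
      WHERE  MOD(TO_CHAR(s.period_start_time,'J'),7)+1 IN (1,2,3,4,5) -- weekdays only
      GROUP BY n.ne_name, s.lac_id, TRUNC(s.period_start_time)
)
GROUP BY mss, lac_id
ORDER BY mss, lac_id;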

It might be easier for people to work on your problem if you post your source data as either INSERT statements or a WITH t kind of construct, like this:
WITH t
     AS (SELECT 'M1' mss,
                1 lac,
                TO_DATE ('2012/07/06 12:00:00', 'yyyy/mm/dd HH24:MI:SS')
                   period_start_time,
                100 nsaverage
           FROM DUAL
         UNION ALL SELECT 'M1', 1, TO_DATE ('2012/07/06 13:00:00', 'yyyy/mm/dd HH24:MI:SS'), 150 FROM DUAL
         UNION ALL SELECT 'M1', 1, TO_DATE ('2012/07/06 14:00:00', 'yyyy/mm/dd HH24:MI:SS'), 200 FROM DUAL
         UNION ALL SELECT 'M1', 1, TO_DATE ('2012/07/07 12:00:00', 'yyyy/mm/dd HH24:MI:SS'), 110 FROM DUAL
         UNION ALL SELECT 'M1', 1, TO_DATE ('2012/07/07 13:00:00', 'yyyy/mm/dd HH24:MI:SS'), 100 FROM DUAL
         UNION ALL SELECT 'M1', 1, TO_DATE ('2012/07/07 14:00:00', 'yyyy/mm/dd HH24:MI:SS'), 120 FROM DUAL
         UNION ALL SELECT 'M1', 2, TO_DATE ('2012/07/06 12:00:00', 'yyyy/mm/dd HH24:MI:SS'), 100 FROM DUAL
         UNION ALL SELECT 'M1', 2, TO_DATE ('2012/07/06 13:00:00', 'yyyy/mm/dd HH24:MI:SS'), 120 FROM DUAL
         UNION ALL SELECT 'M1', 2, TO_DATE ('2012/07/06 14:00:00', 'yyyy/mm/dd HH24:MI:SS'), 200 FROM DUAL
         UNION ALL SELECT 'M1', 2, TO_DATE ('2012/07/07 12:00:00', 'yyyy/mm/dd HH24:MI:SS'), 180 FROM DUAL
         UNION ALL SELECT 'M1', 2, TO_DATE ('2012/07/07 13:00:00', 'yyyy/mm/dd HH24:MI:SS'), 160 FROM DUAL
         UNION ALL SELECT 'M1', 2, TO_DATE ('2012/07/07 14:00:00', 'yyyy/mm/dd HH24:MI:SS'), 150 FROM DUAL
         UNION ALL SELECT 'M2', 1, TO_DATE ('2012/07/06 12:00:00', 'yyyy/mm/dd HH24:MI:SS'), 100 FROM DUAL
         UNION ALL SELECT 'M2', 1, TO_DATE ('2012/07/06 13:00:00', 'yyyy/mm/dd HH24:MI:SS'), 150 FROM DUAL
         UNION ALL SELECT 'M2', 1, TO_DATE ('2012/07/06 14:00:00', 'yyyy/mm/dd HH24:MI:SS'), 200 FROM DUAL
         UNION ALL SELECT 'M2', 1, TO_DATE ('2012/07/07 12:00:00', 'yyyy/mm/dd HH24:MI:SS'), 110 FROM DUAL
         UNION ALL SELECT 'M2', 1, TO_DATE ('2012/07/07 13:00:00', 'yyyy/mm/dd HH24:MI:SS'), 100 FROM DUAL
         UNION ALL SELECT 'M2', 1, TO_DATE ('2012/07/07 14:00:00', 'yyyy/mm/dd HH24:MI:SS'), 120 FROM DUAL
         UNION ALL SELECT 'M2', 2, TO_DATE ('2012/07/06 12:00:00', 'yyyy/mm/dd HH24:MI:SS'), 100 FROM DUAL
         UNION ALL SELECT 'M2', 2, TO_DATE ('2012/07/06 13:00:00', 'yyyy/mm/dd HH24:MI:SS'), 120 FROM DUAL
         UNION ALL SELECT 'M2', 2, TO_DATE ('2012/07/06 14:00:00', 'yyyy/mm/dd HH24:MI:SS'), 200 FROM DUAL
         UNION ALL SELECT 'M2', 2, TO_DATE ('2012/07/07 12:00:00', 'yyyy/mm/dd HH24:MI:SS'), 180 FROM DUAL
         UNION ALL SELECT 'M2', 2, TO_DATE ('2012/07/07 13:00:00', 'yyyy/mm/dd HH24:MI:SS'), 160 FROM DUAL
         UNION ALL SELECT 'M2', 2, TO_DATE ('2012/07/07 14:00:00', 'yyyy/mm/dd HH24:MI:SS'), 150 FROM DUAL)
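The WITH clause above was posted without a main query attached. A sketch of the requested two-step aggregation over this sample data (daily max first, then the max of those maxima per mss and lac):
SELECT mss, lac, MAX(day_max) AS max_of_daily_max
FROM (
      -- step 1: highest nsaverage per mss, lac and calendar day
      SELECT mss, lac, TRUNC(period_start_time) AS day, MAX(nsaverage) AS day_max
      FROM   t
      GROUP BY mss, lac, TRUNC(period_start_time)
)
GROUP BY mss, lac
ORDER BY mss, lac;
With the data above, each (mss, lac) pair should return 200, since every combination peaks at 200 on 2012/07/06.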

Similar Messages

  • Link aggregation query

    Hi,
    I've recently taken over a network which has got 4 Linksys switches - 3 * SRW2048 and 1 * SRW2024
    One of the SRW2048 devices has 2 LAGs setup (of 4 ports each) connecting to the other 2 SRW2048s (both with a single LAG). So far, so good.
However, between the 'main' SRW2048 and the SRW2024 there are 4 ethernet cables, but no Link Aggregation is set up. Everything seems to be working OK, but I'm wondering if this is an 'OK setup'? If so, where does it rate in performance terms between having just one connecting cable and having all 4 with Link Aggregation?
    Thanks for any help
    Michael

Hi Michael,
It could be that spanning tree is blocking three of those active links.
Might I suggest you save the configurations to your PC, so they can be restored if needed.
I think it's a great idea to add the four switch ports to a new Link Aggregation (LAG) group on each switch.
Make sure, on both switches, that you click 'save settings'.
LAG provides link redundancy and load sharing between the switches, so I personally love the idea of using Link Aggregation (LAG).
regards Dave

  • Double aggregation in a single query block doesn't make any sense.

    How can I argue with something that apparently has been cast in stone by ANSI SQL committee? Well the answer is famous: "Search any park in any city: you'll find no statue of committee".
    OK, why
select count(1) from (
select deptno from emp
group by deptno
)
is an easy-to-understand query, and why
select count(count(*)) from emp
group by deptno
is not? I already mentioned one reason why count shouldn't accept any arguments, therefore count(count(*)) is nonsense.
    The other reason is that aggregation without grouping is essentially aggregation within a single group. Once you realize that
    select sum(1) from emp
    is the same as
    select sum(1) from emp
    group by -1
    (where -1 or any other constant for that matter is a dummy pseudocolumn), then it becomes obvious that what we are doing in the infamous
    select count(count(*)) from emp
    group by deptno
    is a query with two blocks
    select count(1) from (
    select deptno from emp
    group by deptno
    ) group by -1
We are not allowed to combine two "group by" into a single query, are we?

An aggregate function always goes together with grouping. Grouping can partition the set of rows into many classes or a single class. Therefore, if we have 2 nested aggregation functions, we'd better be able to identify the corresponding groupings easily:
select state, avg(min(tax_return)) from household
group by city, state then state
which is shorthand for
select state, avg(m) from (
   select city, state, min(tax_return) m
   from household
   group by city, state
) group by state
Speaking of double aggregation, it is frequent in graph queries. The part explosion query is posted virtually every month on this fine forum :-) Part explosion is double aggregation: multiply the quantities along each path in the assembly hierarchy, then add the quantities along alternative paths. Likewise, finding a shortest path between two nodes in a graph is a double aggregation query: first we calculate the length by adding the distances along each path, and then we choose a path with minimal length. Wouldn't it be nice to have this double aggregation wired into the connect by syntax? Note that connect_by_path is a surrogate aggregate which concatenates strings. People invent all kinds of functions which parse this path and build other aggregates out of its value (such as sum and product).
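To make the shortest-path case concrete, here is a minimal sketch using recursive subquery factoring (11gR2 and later), assuming a hypothetical edges(src, dst, dist) table and an acyclic graph. The recursion sums distances along each path (the first aggregation) and the outer MIN picks the shortest path (the second aggregation):
WITH paths (node, path_len) AS (
     -- anchor: edges leaving the start node 'A'
     SELECT dst, dist FROM edges WHERE src = 'A'
     UNION ALL
     -- recursion: extend each partial path by one edge, accumulating distance
     SELECT e.dst, p.path_len + e.dist
     FROM   paths p JOIN edges e ON e.src = p.node
)
SELECT MIN(path_len) AS shortest_path
FROM   paths
WHERE  node = 'Z';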

  • The rows of a query become columns

    Hi , everybody
I have an aggregated query. The format of the result is:
    COL1 COL2 COL3
    ==== ==== ====
    VAL1 1800 CVAL1
    VAL1 780 CVAL2
    VAL1 800 CVAL3
    VAL2 3450 CVAL2
    VAL2 890 CVAL3
    e.t.c.
I want, in the quickest way, the format of the result to be as follows (given the above sample data):
COL1 CVAL1 CVAL2 CVAL3
==== ===== ===== =====
VAL1  1800   780   800
VAL2         3450   890
In other words, I want each value in COL1 to appear just once, and the data in the other columns to be spread out... as columns.
Thanks, Simon

    Hi,
    This will help.
    ABANSVI@CTSQA>SELECT * FROM TEST;
    COL1             COL2 COL3
    VAL1             1800 CVAL1
    VAL1              780 CVAL2
    VAL1              800 CVAL3
    VAL2             3450 CVAL2
    VAL2              890 CVAL3
    Elapsed: 00:00:00.47
    ABANSVI@CTSQA>SELECT COL1,
      2  MAX(DECODE(COL3, 'CVAL1', SM, NULL)) CVAL1,
      3  MAX(DECODE(COL3, 'CVAL2', SM, NULL)) CVAL2,
      4  MAX(DECODE(COL3, 'CVAL3', SM, NULL)) CVAL3
      5  FROM (
      6  SELECT COL1, COL3, SUM(COL2) SM FROM TEST
      7  GROUP BY COL1, COL3)
      8  GROUP BY COL1;
    COL1            CVAL1      CVAL2      CVAL3
    VAL1             1800        780        800
    VAL2                        3450        890
Elapsed: 00:00:00.34
Vineet
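As an aside, on Oracle 11g and later the built-in PIVOT clause can express the same transposition; a sketch against the same TEST table (the IN list must still enumerate the COL3 values):
SELECT *
FROM   (SELECT col1, col3, col2 FROM test)
PIVOT  (SUM(col2) FOR col3 IN ('CVAL1' AS cval1, 'CVAL2' AS cval2, 'CVAL3' AS cval3))
ORDER BY col1;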

  • BI statistic query

    Hi All,
Could you please let me know which BI statistics query will give the data on which users ran queries against the cube over the last 3 months? We are using BI7.
    (Query name, user name, Day and time)
    Regards,
    Ravi

    Hi Ravi,
    Please check these queries..
    0TCT_MC01_Q200
    0TCT_MC01_Q201
    0TCT_MCA1_Q202
    0TCT_MCA1_Q200
    0TCT_MC02_Q200
    0TCT_MC02_Q202
I am not sure which one of the above queries is the right one. You can check the queries in the Business Content and use them directly. One query will show you aggregated query runtime stats, and another the detailed query runtime stats, i.e. by user, by query, by execution; you can see it all.
    Thanks,
    Ashok

  • Warning before aggregation is obsolete

    Hi guru's
I'm migrating our BW system, and in some queries we get the message 'Warning: calculation before aggregation is obsolete'.
We still use the BW 3.x analyzer.
When I execute the queries, the message is only a warning and I can see results in my reports.
I need to check the data, but do you know if the message appears because this functionality no longer exists in the BI 7 analyzer tool? And if I use BEx 3.x, will the data be correct?
    Thanks
    Cyril

    Good day Cyril,
When you migrate a query which has a CKF with 'Before Aggregation', you will be requested to change it to 'After Aggregation'. After turning on that option, the query behaves the same as it used to in BEx 3.5 with the 'Before Aggregation' option. The results are the same.
    The use of BEFORE aggregation was removed as this can cause a drain on the system resources and therefore cause performance problems. For this reason it was made obsolete with the 7.x system. The OLAP processor handles data differently in the 7.x systems so removing the before aggregation option should not cause too many problems for customers migrating over.
    With this in mind, however, there are differences to BEFORE and AFTER aggregation. It may not affect the calculations you are using in your queries but you must be aware that some calculations are carried out at different times and this can cause hiccups. Most note-worthy would be when formulae are used and when replacement path variables are needed. In some cases, the necessary calculations may not be carried out and the figures may not be available which would not have been an issue in the 3.x system using before aggregation. 
    Check out SAP Note 1151957 which is excellent for understanding how the OLAP processor carries out calculations. It also details the differences between BEFORE and AFTER aggregation and should be very helpful for you going forward - particularly if you use formulae or replacement path functions.
Be aware that there are differences, and there may be queries where you get a red X instead of a value. In most cases this means rewriting a formula so that the processor can accommodate the aggregation you need to achieve.
    Hope this helps, Cyril.
    Regards,
    Karen

  • Exception Aggregation takes a lot of time

    Hello, guys!
    I'm having problems with exception aggregation for some of my key figures in BEX Query Designer.
Well, they are working properly (Summation and Counting), but it takes up to ten minutes for them to calculate, which is not acceptable for me or my client.
Without those exception aggregations, the query runs in seconds.
    Do you know any way to avoid those problems? Maybe it is possible to do something on BW side?
    I'd appreciate any advise on this.
    Thank you in advance for your help,
    Nik.

Nikita, exception aggregation definitely takes a toll on performance. There might be a situation where it is aggregating a huge number of records for each unique combination of the reference characteristic.
You can try to filter out as much data as possible after analyzing the data in the cube.
I can also suggest a close health check of your cube with the help of the program
    SAP_INFOCUBE_DESIGNS
    Regards,
    AL

  • Run time for a query

    Hello all,
    Can you please tell me how can I see the total running time for a query?
    What other transactions besides ST03 and RSRT with statistics on?
    Points will be assigned
    Thanks
    Ramona

Hi........
You can use ST03N -> BW System Load.
Depending on the time frame you select, you get historical data or current data.
To get to a specific query you need to drill down using the InfoCube name.
Use Aggregation Query to get more runtime information about a single query. Use the tab "All data" to get to the details (DB, OLAP, and Frontend time, plus Select/Transferred records, plus number of cells and formats).
You can also get it in RSRT, RSRTQ...
    WE07  IDoc statistics 
    DB20  Update DB Statistics 
    Regards,
    Debjani........
    Edited by: Debjani  Mukherjee on Sep 25, 2008 2:42 PM

  • Aggregate query on global cache group table

    Hi,
I set up two global cache nodes. As we know, a global cache group is dynamic.
The cache group can be dynamically loaded by primary key or foreign key, as I understand it.
    There are three records in oracle cache table, and one record is loaded in node A, and the other two records in node B.
    Oracle:
    1 Java
    2 C
    3 Python
    Node A:
    1 Java
    Node B:
    2 C
    3 Python
    If I select count(*) in Node A or Node B, the result respectively is 1 and 2.
The questions are:
How can I get the real count of 3?
Is it reasonable to do this query on a global cache group table?
I have an idea to create another read-only node for aggregation queries, but that seems weird.
    Thanks very much.
    Regards,
    Nesta
    Edited by: user12240056 on Dec 2, 2009 12:54 AM

    Do you mean something like
    UPDATE sometable SET somecol = somevalue;
    where you are updating all rows (or where you may use a WHERE clause that matches many rows and is not an equality)?
This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys of every row that must be updated, then you could simply execute multiple individual updates. If the number of rows is large, or you do not know all the keys in advance, then maybe you would adopt the approach of ensuring that all relevant rows are already in the local cache grid node via LOAD CACHE GROUP ... WHERE ... Alternatively, if you do not need Grid functionality, you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
I would not try to use JTA to update rows in multiple grid nodes in one transaction; it will be slow and you would have to know which rows are located in which nodes...
    Chris

  • How to improve Query Performance

    Hi Friends...
I want to improve query performance. I need the following things:
1. What is the process to find out the current performance? Any transaction codes, and how to use them?
2. How can I know whether a query is running well or badly, i.e. from a performance perspective?
3. I want to see the values, i.e. how much time it takes to run, and where the defect is.
4. How do I improve query performance? After I have done what is needed to improve performance, I want to see the query execution time, i.e. whether it runs faster or not.
Eg..
Eg 1. Need to create aggregates.
Solution: where can I create aggregates? I'm now in the production system, so where do I need to create them, i.e. in Development, Quality, or Production?
Do I need to make any changes in Development? Because I'm in the production system.
    So please tell me solution for my questions.
    Thanks
    Ganga
    Message was edited by: Ganga N

hi ganga
please refer to OSS note 557870: Frequently asked questions on query performance
also refer to
    Prakash's weblog
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    performance docs on query
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    This is the oss notes of FAQ on query performance
    1. What kind of tools are available to monitor the overall Query Performance?
    1.     BW Statistics
    2.     BW Workload Analysis in ST03N (Use Export Mode!)
    3.     Content of Table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
      RSA1, choose Tools -> BW statistics for InfoCubes
      (Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools are available to analyze a specific query in detail?
    1.     Transaction RSRT
    2.     Transaction RSRTRACE
    4.  Do I have an overall query performance problem?
    i. Use ST03N -> BW System load values to recognize the problem. Use the  number given in table 'Reporting - InfoCubes:Share of total time (s)'  to check if one of the columns %OLAP, %DB, %Frontend shows a high   number in all Info Cubes.
    ii. You need to run ST03N in expert mode to get these values
    5. What can I do if the database proportion is high for all queries?
    Check:
    1.     If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables)
    2.     If database parameter set up accords with SAP Notes and SAP Services   (EarlyWatch)
    3.     If Buffers, I/O, CPU, memory on the database server are exhausted?
    4.     If Cube compression is used regularly
    5.     If Database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1.     If the CPUs on the application server are exhausted
    2.     If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
    3.     If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT,  Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN  connection and the amount of data which is transferred   is rather high.
    8. Where can I get specific runtime information for one query?
    1.     Again you can use ST03N -> BW System Load
    2.     Depending on the time frame you select, you get historical data or current data.
    3.     To get to a specific query you need to drill down using the InfoCube  name
    4.      Use Aggregation Query to get more runtime information about a   single query. Use tab All data to get to the details.   (DB, OLAP, and Frontend time, plus Select/ Transferred records,  plus number of cells and formats)
    9. What kind of query performance problems can I recognize using ST03N
       values for a specific query?
    (Use Details to get the runtime segments)
    1.     High Database Runtime
    2.     High OLAP Runtime
    3.     High Frontend Runtime
    10. What can I do if a query has a high database runtime?
1.     Check if an aggregate is suitable (use All data to get the values "selected records to transferred records"; a high number here would be an indicator that query performance can be improved using an aggregate)
2.     Check if the database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use the database check for statistics and indexes)
3.     Check if the read mode of the query is unfavourable - Recommended (H)
    11. What can I do if a query has a high OLAP runtime?
    1.     Check if a high number of Cells transferred to the OLAP (use  "All data" to get value "No. of Cells")
    2.     Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before   Aggregation, Virtual Char. Key Figures, Attributes in Calculated   Key Figs, Time-dependent Currency Translation)  together with a high number of records transferred.
    3.     Check if a user exit Usage is involved in the OLAP runtime?
    4.     Check if large hierarchies are used and the entry hierarchy level is  as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
    5.     Check if a proper index on the inclusion  table exist
    12. What can I do if a query has a high frontend runtime?
    1.     Check if a very high number of cells and formatting are transferred   to the Frontend (use "All data" to get value "No. of Cells") which   cause high network and frontend (processing) runtime.
    2.     Check if frontend PC are within the recommendation (RAM, CPU MHz)
    3.     Check if the bandwidth for WAN connection is sufficient
    REWARDING POINTS IS THE WAY OF SAYING THANKS IN SDN
    CHEERS
    RAVI

  • How to improve query & loading performance.

    Hi All,
    How to improve query & loading performance.
    Thanks in advance.
    Rgrds
    shoba

    Hi Shoba
There are a lot of things you can do to improve query and loading performance.
Please refer to OSS note 557870: Frequently asked questions on query performance
    also refer to
    weblogs:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    performance docs on query
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
(The OSS note 557870 FAQ on query performance is reproduced here verbatim; see the full text quoted in the previous thread above.)
and some threads:
    how can i increse query performance other than creating aggregates
    How to improve query performance ?
    Query performance - bench marking
    may be helpful
    Regards
    C.S.Ramesh
    [email protected]

  • Order of values passed to ODCIAggregateIterate

I have implemented my own aggregate function that concatenates VARCHAR2 values together. It works perfectly; however, the order of the aggregated data is inconsistent and does not match the order in which it occurs in the table the aggregation is over (I have sorted the data). As an example:
original data:
column_a     column_b
a            1-abc
a            2-abc
a            3-abc
b            1-abc
b            2-abc
b            3-abc
after the aggregation query:
column_a     column_b_concat
a            1-abc,3-abc,2-abc
b            1-abc,2-abc,3-abc
Most of the concatenated data is correct (2nd row) but some of it is not (1st row). My question is: is there a way to force the order of the aggregated rows to be maintained? I understand that for aggregations such as count, sum or avg the order is unimportant, so the DB optimizes them to run as fast as possible. I can do this using PL/SQL, but the execution time is much longer, and I would rather avoid it if possible.
Thanks.
    Edited by: user8092994 on Feb 23, 2009 5:11 PM

    Hi,
    Let's phrase the question in terms of the scott.emp table and the wm_concat function, which a lot of people using Oracle 10 (and up) have available.
Right now you're doing something like:
SELECT    deptno
,       WM_CONCAT (ename)     AS ename_concat
FROM       scott.emp
GROUP BY  deptno;
and getting results like:
.   DEPTNO ENAME_CONCAT
        10 CLARK,KING,MILLER
        20 SMITH,FORD,ADAMS,SCOTT,JONES
        30 ALLEN,BLAKE,MARTIN,TURNER,JAMES,WARD
except that the names aren't in order (which is exactly the problem: you need to be sure they are in order, as shown above).
    I can think of three ways to do that:
    (1) modify the function (as you have already considered)
    (2) use CONNECT_BY_PATH instead of the function
    (3) use the analytic form of the function
    My guess is that the options above are listed in order of execution speed ((1) is fastest), but that's just my hunch.
    Option (2) works like this:
WITH       got_r_num   AS
(
     SELECT     deptno
     ,     ename
     ,     ROW_NUMBER () OVER
                  (     PARTITION BY     deptno
                       ORDER BY       ename
                  ) AS r_num
     FROM     scott.emp
)
SELECT     deptno
,     SUBSTR ( SYS_CONNECT_BY_PATH (ename, ',')
            , 2
            )     AS ename_concat
FROM     got_r_num
WHERE     CONNECT_BY_ISLEAF     = 1
START WITH     r_num     = 1
CONNECT BY     r_num     = PRIOR r_num + 1
     AND     deptno     = PRIOR deptno;
Option (3) works like this:
WITH  got_concat     AS
(
     SELECT     deptno
     ,     WM_CONCAT (ename) OVER
                 (       PARTITION BY     deptno
                        ORDER BY       ename
                 ) AS e_concat
     FROM     scott.emp
)
SELECT       deptno
,       MAX (e_concat)     AS ename_concat
FROM       got_concat
GROUP BY  deptno;
Any user-defined aggregate function can also serve as an analytic function: just add "OVER (...)" when you run it.
Within each partition, the various values of e_concat will all be longer versions of one another, all starting the same way.
For example, in deptno = 10, the various values are:
CLARK
CLARK,KING
CLARK,KING,MILLER
The one you want to display is the longest one, which, since they all start the same way, will also be the MAX.
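As a side note, on Oracle 11g Release 2 and later the built-in LISTAGG function does ordered string aggregation natively, which may remove the need for a user-defined aggregate altogether; a sketch:
SELECT    deptno
,         LISTAGG (ename, ',') WITHIN GROUP (ORDER BY ename)  AS ename_concat
FROM      scott.emp
GROUP BY  deptno;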

  • Error in Running a dimension ( BIB-9509 Oracle OLAP did not create cursor.)

    oracle.dss.dataSource.common.QueryRuntimeException: BIB-9509 Oracle OLAP did not create cursor.
    oracle.express.ExpressServerExceptionError class: OLAPI
    Server error descriptions:
    DPR: Unable to create server cursor, Generic at TxsOqDefinitionManagerSince9202::crtCurMgrs4
    OES: ORA-00938: not enough arguments for function
    , Generic at TxsRdbRandomAccessQuery::TxsRdbRandomAccessQuery
    help is appreciated
    Thanks, Prasad

    This is the patch: 2529822
    "Best Practices for Tabular Cube Aggregation & Query Operations". Thanks very much for your advice.
    I followed the instructions in the document, but it did not seem to make a difference when
    trying to follow the tutorial instructions and using the SALES measure.
    However, I selected the stock price measures and it worked ok.
    Is there a size limit here? I am looking at Oracle OLAP and BIBeans to replace our Cognos installation,
    which is expensive and unreliable on unix. Of course our managers and customers want every dimension
    across a huge fact table to be available for end users...

  • Concept of use.

    Hello,
Let's say I have two queues, one for Queries and one for Updates, and I want to simulate a workload-balancing algorithm with the following steps:
{Query} is loaded from CSV, analyzed, and converted to {Query,ClassifiedSpeed} with an SVM classifier.
{Update} is loaded from CSV, analyzed, and also classified to {Update,SomeParam}, but without the SVM.
I'm aggregating {Query,ClassifiedSpeed} by the ClassifiedSpeed value over a range of 30 rows, maybe RANGE UNBOUNDED...
I'm updating the order of {Update,SomeParam} by SomeParam, just to optimize updates for a single table in SQL.
Now I want to pick a row with an update or query by some rules, execute it on the database (inside a bean), and when it's a {Query} I want to compare {ClassifiedSpeed} with the real execution time; that's why I need to communicate with the Classifier bean once again.
Anyway, I'm a bit lost with the concept of that application.
I know how to create almost every step on that list, but I'm not sure how to control channels and processors.
    For example: I have Queries A, B, C, A, C, C, D ( timeline -> )
    That gives me input channel ( grouped and counted set, ordered by Count )
    {Query,Count,ClassifiedSpeed}
    C, 3, Fast
    A, 2, Slow
    B, 1, Fast
    D, 1, Slow
Now I get updates, which are only ordered by some extra param:
{Update, SomeParam}
    U1, TABLE_FOO
    U3, TABLE_FOO
    U2, TABLE_XXX
The thing is, when I connect both streams into one bean via processors, I collect the length of the queries for statistics and retrain the SVM if the percentage of classified events is lower than expected. So I need to be able to say to the processor of aggregated Queries: give me the best aggregated row; and the same to Update: give me a set of updates (a list of events as input here, I know how to handle that). But a CEP application is event-driven, starting from the flow order. I need working buffers which can give me some data only when I want it.
What is also important for me: I need to collect "Quality of Data", the relation between the timestamp and the correlation of single rows in both queues. It's like: give me the number of rows, starting from the timestamp of a single Query, which are older and affect the same table.
I'm not sure how I can count that when I aggregate queries for processing.
An extra question: is there a possibility to aggregate ids of queries in CQL? I know that I can do that type of thing in SQL (e.g. MySQL GROUP_CONCAT).
From this data:
    101, AAA
    102, BBB
    103, CCC
    104, AAA
    105, AAA
    106, CCC
is it possible to get two columns, where "|" is just the separator of ids in a single string parameter?
    AAA,101|104|105
    BBB,102
    CCC,103|106
    Bye.
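For reference, the MySQL construct the poster mentions would look roughly like this (a sketch; the id and name column names are assumed):
SELECT name
,      GROUP_CONCAT(id ORDER BY id SEPARATOR '|') AS ids
FROM   t
GROUP BY name;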

Ok. I'm one step closer to understanding how I can use CEP. I spent 15 minutes reading the 12th chapter (it's 4.30 AM ;) ) and I think I know how to store results, but it still seems a bit of a mad solution to me.
Let's go back to the example, which is closer to my task:
    SELECT Query, QueryHashCode, QueryOrderNumberInStream, COUNT( QueryHashCode ) AS CountOfDuplictesInQueue
    FROM Stream[RANGE 20.../ still not sure/ ]
    GROUP BY QueryHashCode,Query, QueryOrderNumberInStream
    ORDER BY CountOfDuplictesInQueue DESC, QueryOrderNumberInStream ASC;
    .. bean processing.
Classification happens here, before the first select.
    Query,
    QueryHashCode,
    QueryPredictedSpeed ( 1 - 3 ),
    QueryLength ( 0 - 5000 ),
    CountOfDuplictesInQueue ( 0 - 1000 )
    Data :
    Query = "SELECT * FROM foo ";
    QueryHashCode = 312321; //. Query.hashCode();
    QueryPredictedSpeed = 3;
    QueryLength =18;
    CountOfDuplictesInQueue=2
Now I'll use a processor to compute something similar to:
SELECT Query, ( QueryPredictedSpeed / CountOfDuplictesInQueue + 0.001 * QueryLength ) AS some_order
FROM foo [ .. ROW 1 ]
and here is a new issue:
When I put it into the cache I'll have to define a timeout etc. and keep just one row in the cache, because new rows can modify the order ( COUNT() ), so it feels a bit sick to build a multi-component structure for one row?
1. When I group a few rows in CQL, are they going to be removed from the stream? I guess yes.
2. I think the processor will feed data into the stream in a loop, so it also doesn't make sense when we are overwriting a row which has not been processed yet.
3. Cleaning the cache each time is madness for me ;)
    R.

  • Looking for some advice on CEP HA and Coherence cache

    We are looking for some advice or recommendation on CEP architecture.
    We need to build a CEP application that conforms to the following:
    • HA with no loss of events or duplicate events when failing over to the backup server.
    • We have some aggregative rules that needs to see all events.
    • Events are XMLs with size of 3KB-50KB. Not all elements are needed for the rules but they are there for other systems that come after the CEP (the customer services).
    • The XML elements that the CEP needs are in varying depth in the XML.
    Running the EPN on a single thread is not fast enough for the required throughput mainly because network latency to the JMS and the heavy task of parsing of the XML. Because of that we are looking for a solution that will read the messages from the JMS in parallel (multi thread) but will keep the same order of events between the Primary and Secondary CEPs.
    One idea that came to our minds is to use Coherence cache in the following way:
    • On the CEP inbound use a distributed queue and not topic (at the CEP outbound it is still topic).
    • On the CEPs side use a Coherence cache that runs on the CEPs JVMs (since we already have a Coherence cluster for HA).
    • Both CEPs read from the queue using multi threading (10 reading threads – total of 20 threads) and putting it to the Coherence cache.
    • The Coherence cache is publishing the events to both CEPs on a single thread.
    The EPN looks something like this:
    JMS adapter (multi threaded) -> replicated cache on both CEPs -> event bean -> HA adapter -> channel -> processor -> ….
Does this design sound reasonable to you?
Are we overshooting here? Is there a simpler solution for our needs?
    Is there a best practice for such requirements?
    Thanks

    Hi,
    Just to make it clear:
    We do not parse the XML on the event bean after the Coherence. We do it on the JMS adapter on multiple threads in order to utilize all the server resources (CPUs) and then we put it in the replicated cache.
    The requirements from our application are:
- There is an aggregative query that needs to "see" all events (this means we need to pass all events through a single processor and cannot partition them across several processors).
- Because this is an HA solution, the events on both CEPs (primary and secondary) need to be in the same order when they reach the HA inbound adapter and the processor.
- A single-threaded JMS adapter does not read the messages from the JMS fast enough, mainly because it takes time to parse each XML into an event.
- Using a multi-threaded adapter, or many single-threaded adapters with a message selector, will create a situation where the order of events on the two CEPs is not the same at the processor inbound.
This is why we needed a mediator: so we can read in multiple threads that parse the XMLs in parallel without worrying about message order, and on the other hand publish all the messages on a single thread to the processors on both CEPs from this shared mediator (we use a replicated cache that runs on both JVMs).
We use a queue instead of a topic because if we read the messages from a topic on both CEPs, they would be stored twice in the Coherence replicated cache. With a queue, when server 1 reads a message and puts it in the Coherence replicated cache, server 2 will not read it, because it was removed from the queue.
If I understand correctly, you are suggesting replacing the JMS adapter with an event bean that reads the messages from the JMS directly?
Are you also suggesting that we not use a replicated cache, but instead a standalone cache on each server? In that case, how do we keep the same order of events on both CEPs (in both caches)?
