Calculation of peaks in queries

Hello,
I'm facing a problem with a query, and I need help finding the best approach.
The aim is to display the maximum stock issues per day, using an InfoCube containing movements from the standard extractor 2LIS_03_BF. More precisely, the expected key figures are:
- 1st peak: the maximum of stock issues in one day,
- 2nd peak: the 2nd-highest total of stock issues in one day,
- 3rd peak: the 3rd-highest total of stock issues in one day.
The first key figure is easy to implement, using aggregation on calendar day and "maximum" in the local calculations.
The challenge is the other two KFs. So far, I have found no way to calculate them properly with the Query Designer options.
I'm currently trying to solve this with virtual key figures but, as far as I know, the related ABAP code is executed for each line selected from the InfoProvider, so it's difficult to calculate the peaks there.
Are there any experts who can propose a solution to this problem?
Thanks in advance for your contributions.
Regards,
F. BIDAUD
Edited by: Fabrice BIDAUD on Mar 11, 2008 10:27 AM
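For illustration, the logic these three key figures need (aggregate issues per calendar day, then rank the daily totals) can be sketched in a few lines of Python. The movement records below are hypothetical sample data; this only models the calculation, it is not a BEx implementation:

```python
from collections import defaultdict

def top_daily_peaks(movements, n=3):
    """Aggregate stock issues per calendar day, then return the n largest daily totals."""
    per_day = defaultdict(float)
    for day, quantity in movements:
        per_day[day] += quantity
    return sorted(per_day.values(), reverse=True)[:n]

# Hypothetical movement records: (calendar day, issued quantity)
movements = [
    ("2008-03-01", 40), ("2008-03-01", 60),  # day total 100
    ("2008-03-02", 30),                      # day total 30
    ("2008-03-03", 80), ("2008-03-03", 5),   # day total 85
    ("2008-03-04", 70),                      # day total 70
]
print(top_daily_peaks(movements))  # [100, 85, 70]
```

The 1st peak is simply the maximum; the 2nd and 3rd peaks are the next entries in the sorted list of daily totals, which is exactly what is hard to express per-row in a virtual key figure.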

Similar Messages

  • Inserting calculated fields in database queries

    I have just changed from using StarOffice version 7 to version 8.
    I am having difficulties in creating a calculated field in a query.
    Would anyone have some ideas for me to follow?
    Thanks - Dan

    1. Order as a column name could be problematic, as it might conflict with the SQL ORDER BY keyword. Wrapping it in square brackets caters for this.
    2. You are missing some spaces.
    3. The arithmetic + operator can be problematic with dates as string expressions, which might be interpreted as arithmetic expressions. The + operator is sometimes used rather than the ampersand concatenation operator to suppress Nulls, but that's not the case here.
    4. As you are only returning columns from one table, you do not need to qualify the column names with the table name.
    5. Defining the range as "on or later than the start date and before the day following the end date" is more reliable, as it allows for rows with date/time values on the last day of the range that have a non-zero time-of-day element, something you cannot rule out with complete confidence unless you have taken specific steps in the table definition to disallow such values.
    So taking these points into account:
    strSQL6 = _
            "SELECT [Order], [OperWorkCenter], [Created on], " & _
            "[Actfinish date], [Actfinish date]-[Created on] AS Delta " & _
            "FROM [WOMP Work Orders] " & _
            "WHERE [Created on] >= #" & FStartDate$ & "# " & _
            "AND [Created on] < #" & FEndDate$ & "# + 1"
    The FStartDate$ and FEndDate$ values must of course be in US short date format or an otherwise internationally unambiguous format such as the
    ISO standard date format of YYYY-MM-DD.
    Ken Sheridan, Stafford, England
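The half-open range in point 5 can be illustrated in Python (the dates are hypothetical): a row stamped late on the last day passes the half-open test, while a naive "on or before the end date" comparison would wrongly exclude it.

```python
from datetime import datetime, timedelta

def in_range(ts, start_date, end_date):
    """Half-open range test: start <= ts < end_date + 1 day.
    Catches rows on the last day that carry a non-zero time of day."""
    return start_date <= ts < end_date + timedelta(days=1)

start = datetime(2008, 3, 1)
end = datetime(2008, 3, 31)
late_on_last_day = datetime(2008, 3, 31, 23, 15)

print(in_range(late_on_last_day, start, end))  # True
print(late_on_last_day <= end)                 # False: naive test drops the row
```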

  • Evaluating Calculated members in Sub Cube space

    Hello all,
    I have a question about evaluating calculated members against sub queries. For an example take a look into the following MDX query;
    with
    member [Product].[Category].[All Categories]
    as sum({[Product].[Category].&[1], [Product].[Category].&[3]})
    select {[Measures].[Sales Amount]} on columns,
    {[Date].[Calendar].[Month].members * {{[Product].[Category].AllMembers} - [Product].[Category].[All Products]}} on rows
    from
    (
        select {[Product].[Category].&[1], [Product].[Category].&[3]} on columns,
        {[Date].[Calendar].[Month].&[2008]&[6] :
         parallelperiod([Date].[Calendar].[Month], 5, [Date].[Calendar].[Month].&[2008]&[6])} on rows
        from [Adventure Works]
    )
    This query returns data from January 2008 to June 2008 for the Bikes and Clothing categories. I noticed that the Product dimension's Category hierarchy's [All] member is named "[All Products]" ([All Categories] would be a nicer name) and I wanted to name it
    a bit differently, so I created the calculated member
    "member [Product].[Category].[All Categories]
    as sum({[Product].[Category].&[1], [Product].[Category].&[3]})". So far I get the expected results.
    Further, I wanted to get rid of the hard-coded members in the calculated member and instead force it to use the members in the current context (the members specified in the sub cube space). So I modified the query as follows:
    with
    member [Product].[Category].[All Categories]
    as [Product].[Category].[All Products]
    select {[Measures].[Sales Amount]} on columns,
    {[Date].[Calendar].[Month].members * {[Product].[Category].AllMembers}} on rows
    from
    (
        select {[Product].[Category].&[1], [Product].[Category].&[3]} on columns,
        {[Date].[Calendar].[Month].&[2008]&[6] :
         parallelperiod([Date].[Calendar].[Month], 5, [Date].[Calendar].[Month].&[2008]&[6])} on rows
        from [Adventure Works]
    )
    I expected that "[Product].[Category].[All Products]" would accept the current context and give me results only for the sub cube context. Unfortunately, [All Products] returns all
    the members in the Category hierarchy. My question is: how would I force the calculated member to take only the members specified in the sub cube space?

    Hi Lakmal,
    Thank you for your question. I am currently looking into this issue and will give you an update as soon as possible.
    Thank you for your understanding and support.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Excel calculation in DIAdem for Peak Sun Hours

    I am trying to perform a calculation for peak sun hours in DIAdem that we typically perform in Excel.  Any advice on how to approach this calculation in the DIAdem environment much appreciated.
    Attachments:
    Irradiance Data.xlsx 2050 KB

    Hi moses montoya,
    I was able to recreate the Peak Sun Hours data with the following script.
    Dim PeakSun, intLoop
    ' Allocate the result channel and seed the running total with zero
    PeakSun = ChnAlloc("Peak Sun Hours")
    Data.Root.ChannelGroups(1).Channels("Peak Sun Hours").Values(1) = 0
    ' Accumulate (elapsed time in hours) * irradiance / 1000 at each sample
    For intLoop = 2 To (Data.Root.ChannelGroups(1).Channels("Channel 2").Size - 1)
      Data.Root.ChannelGroups(1).Channels("Peak Sun Hours").Values(intLoop) = Data.Root.ChannelGroups(1).Channels("Peak Sun Hours").Values(intLoop - 1) _
      + (Data.Root.ChannelGroups(1).Channels("Channel 1").Values(intLoop) - Data.Root.ChannelGroups(1).Channels("Channel 1").Values(intLoop - 1)) * 24 / 1000 _
      * Data.Root.ChannelGroups(1).Channels("Channel 2").Values(intLoop)
    Next
    Channel 1 was the time, and Channel 2 is the irradiance.
    Regards,
    Brandon V.
    Applications Engineering
    National Instruments
    www.ni.com/support
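For reference, the accumulation the DIAdem script performs can be modelled in Python. This assumes, as in the script above, that the time channel is in fractional days and the irradiance in W/m2, so one peak sun hour equals 1 kWh/m2; the sample values are invented:

```python
def peak_sun_hours(time_days, irradiance_w_m2):
    """Running peak-sun-hours total: sum of (delta-t in hours) * irradiance / 1000,
    mirroring the DIAdem script. Assumes time in fractional days, irradiance in W/m^2."""
    psh = [0.0]
    for i in range(1, len(time_days)):
        dt_hours = (time_days[i] - time_days[i - 1]) * 24
        psh.append(psh[-1] + dt_hours * irradiance_w_m2[i] / 1000)
    return psh

# Half-day steps (12 hours each) at a constant 1000 W/m^2
t = [0.0, 0.5, 1.0, 1.5]
g = [1000, 1000, 1000, 1000]
print(peak_sun_hours(t, g))  # [0.0, 12.0, 24.0, 36.0]
```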

  • Peak picker routine

    I am trying to write a routine for peak/valley detection. What I want to do is be able to scan data for a period of time (say 5 or 10 seconds) and have my routine spit out the max and min values detected during that time, regardless of level. I don't want to put in a hard threshold value as I can't say for certain what my values will be, so if there must be a threshold it has to somehow float with the signal. 
    I want to manipulate the data as it updates (at the 5 or 10 second interval), display it and log it to a file.
    I also want to be able to have a running display of the raw data that will be updated at some fractional second interval.
    Can anybody recommend which peak detector functions and/or VIs would be best for this, or better yet which example program might be the best for a starting point?

    Look at the peak detector in the waveform monitoring palette. It is very nice and uses first and second derivatives to get a good calculation of the peaks in a signal.
    Paul Falkenstein
    Coleman Technologies Inc.
    CLA, CPI, AIA-Vision
    Labview 4.0- 2013, RT, Vision, FPGA
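As a rough sketch of the derivative idea (this is only a model, not the LabVIEW VI itself): a peak is a sample where the first difference changes sign from positive to negative, and a valley is the reverse, so no fixed threshold is needed and detection floats with the signal level.

```python
def find_peaks(samples):
    """Derivative-based peak/valley picking: a peak is where the first
    difference changes sign from + to -, a valley where it goes - to +."""
    peaks, valleys = [], []
    for i in range(1, len(samples) - 1):
        d_prev = samples[i] - samples[i - 1]
        d_next = samples[i + 1] - samples[i]
        if d_prev > 0 and d_next < 0:
            peaks.append(i)
        elif d_prev < 0 and d_next > 0:
            valleys.append(i)
    return peaks, valleys

signal = [0, 2, 5, 3, 1, -2, -4, -1, 3, 6, 4]
peaks, valleys = find_peaks(signal)
print(peaks, valleys)  # [2, 9] [6]
```

A real implementation would additionally smooth the signal (as the VI's quadratic fit does) so that noise does not produce spurious sign changes.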

  • What does the location of the peak detector mean

    I would like to know what the locations output by the peak detector refer to. What does the value of a location tell me? Does it give the position of the peak, and if so, in what unit? Also, if I set a threshold value, does that mean only one peak will be detected, or will any value higher than the threshold also be detected? If I set the sampling rate for my peak detection to 400 per millisecond, should I expect to see an array of values or one absolute peak value after one cycle?

    Greetings!
    The peak value VI gives you four outputs, actually: positive and negative peak values and the positive and negative peak "horizontal" locations. The locations are in data points, not actual time. To get the time, you have to calculate it from your sample rate. Also, the vi does indeed give you the absolute peak of a waveform, which, as you can guess, requires that the entire waveform is sampled BEFORE the calculation is made. This can slow down real-time sampling, since there is considerable processing involved in calculating the peak value...you need to keep this in mind when designing your application. To answer your last question, you will obtain ONE reading per trigger.
    Hope this helps!
    Eric
    Eric P. Nichols
    P.O. Box 56235
    North Pole, AK 99705
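Converting a peak location from data points to time is just a division by the sample rate, as Eric describes. Using the 400-samples-per-millisecond rate from the question (the peak index below is made up):

```python
def peak_index_to_time(index, sample_rate_hz):
    """The peak detector reports locations in data points; divide by the
    sample rate to get seconds from the start of the acquisition."""
    return index / sample_rate_hz

# 400 samples per millisecond = 400,000 samples per second
rate = 400_000
print(peak_index_to_time(2000, rate))  # 0.005 -> a peak at point 2000 occurred 5 ms in
```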

  • Using bind variables (in & out) with dynamic sql

    I got a table that holds pl/sql code snippets to do validations on a set of data. what the code basically does is receiving a ID and returning a number of errors found.
    To execute the code I use dynamic sql with two bind variables.
    When the codes consists of a simpel query, it works like a charm, for example with this code:
    BEGIN
       SELECT COUNT (1)
       INTO :1
       FROM articles atl
       WHERE atl.cse_id = :2 AND cgp_id IS NULL;
    END;
    However, when I get to some more complex validations that need to do calculations or execute multiple queries, I'm running into trouble.
    I've boiled the problem down to this:
    DECLARE
       counter   NUMBER;
       my_id     NUMBER := 61;
    BEGIN
       EXECUTE IMMEDIATE '
          declare
             some_var number;
          begin
             select 1 into some_var from dual
             where :2 = 61;
             :1 := :2;
          end;'
          USING OUT counter, IN my_id;
       DBMS_OUTPUT.put_line (counter || '-' || my_id);
    END;
    This code doesn't really make any sense, but it's just to show you what the problem is. When I execute it, I get the error
    ORA-6537: OUT bind variable bound to an IN position
    The error doesn't seem to make sense: :2 is the only IN bind variable, and it's only used in a WHERE clause.
    As soon as I remove that WHERE clause, the code works again (giving me 61-61, in case you wanted to know).
    Any idea what's going wrong? Am I just using the bind variables in a way you're not supposed to use them?
    I'm using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit

    Correction: with EXECUTE IMMEDIATE, binding is by position, but binds do not need to be repeated, so my statement above is incorrect.
    You need to bind each variable once only, by position, and the bind mode must match how the bind variable is used:
    If the bind variable is never assigned a value in the code, bind it as IN.
    If the bind variable is assigned a value in the code, bind it as OUT.
    If the bind variable is assigned a value and is also used as a variable in any other statement in the code, bind it as IN OUT.
    E.g.
    SQL> create or replace procedure FooProc is
      2          cnt     number;
      3          id      number := 61;
      4  begin
      5          execute immediate
      6  'declare
      7          n       number;
      8  begin
      9          select
    10                  1 into n
    11          from dual
    12          where :var1 = 61;       --// var1 is used as IN
    13 
    14          :var2 := n * :var1;     --// var2 is used as OUT and var1 as IN
    15          :var2 := -1 * :var2;    --// var2 is used as OUT and IN
    16  end;
    17  '
    18          using
    19                  in out id, in out cnt;  --// must reflect usage above
    20 
    21          DBMS_OUTPUT.put_line ( 'cnt='||cnt || ' id=' || id);
    22  end;
    23  /
    Procedure created.
    SQL>
    SQL> exec FooProc
    cnt=-61 id=61
    PL/SQL procedure successfully completed.
    SQL>

  • Reporting

    Hi every1,
    Please explain about these points...
    •     Created Update rules and Transfer routines/User Exits to harmonize the data for Global reporting.
    •     Extensively used Restricted Key Figures, Variables, Conditions, Exceptions and Calculated Key Figures in Queries and created custom interactive Reports with drill-down features as per user requirements.
    •     Developed queries by defining navigational attributes, free characteristics for drill downs, variables, filters and other features using BEx Analyzer and presented results in workbooks.
    •     Worked with transport connector for transporting the BW objects and reports between Development, Test and Production systems.
    •     After Go-live, as a Production Support team member resolved issues connected with running the query.
    •     Assisted in Performance tuning measures for loading data and improving the query response time using aggregates, infocube partitioning and indexing.
    •     Created documentation as necessary in each phase of the project implementation. Performed knowledge transfer, user training on queries etc.
    •     Worked with customer exits, enhancements and routines.

    I think this is an extract from somebody's resume. I don't think anybody would be willing to explain these points, as they relate to each person's own experience; moreover, the bullets you pasted are pretty much self-explanatory.
    Please let's not make this forum a platform to discuss somebody else's resume.

  • How to capture transaction response time in SQL

    I need to capture the transaction response time (i.e. a ping test), calculated for peak hours and averaged
    on a daily basis,
    and
    the page refresh time, calculated no less than every 2 hours during peak hours and averaged on a daily basis.
    Please assist.
    K

    My best guess as to what you are looking for is something like the following (C#):
    private int? Ping()
    {
        System.Data.SqlClient.SqlConnection objConnection;
        System.Data.SqlClient.SqlCommand objCommand;
        System.Data.SqlClient.SqlParameter objParameter;
        System.Diagnostics.Stopwatch objStopWatch = new System.Diagnostics.Stopwatch();
        DateTime objStartTime, objEndTime, objServerTime;
        int intToServer, intFromServer;
        int? intResult = null;
        objConnection = new System.Data.SqlClient.SqlConnection("Data Source=myserver;Initial Catalog=master;Integrated Security=True;Connect Timeout=3;Network Library=dbmssocn;");
        using (objConnection)
        {
            objConnection.Open();
            using (objCommand = new System.Data.SqlClient.SqlCommand())
            {
                objCommand.Connection = objConnection;
                objCommand.CommandType = CommandType.Text;
                objCommand.CommandText = @"select @ServerTime = sysdatetime()";
                objParameter = new System.Data.SqlClient.SqlParameter("@ServerTime", SqlDbType.DateTime2, 7);
                objParameter.Direction = ParameterDirection.Output;
                objCommand.Parameters.Add(objParameter);
                objStopWatch.Start();
                objStartTime = DateTime.Now;
                objCommand.ExecuteNonQuery();
                objEndTime = DateTime.Now;
                objStopWatch.Stop();
                objServerTime = DateTime.Parse(objCommand.Parameters["@ServerTime"].Value.ToString());
                intToServer = objServerTime.Subtract(objStartTime).Milliseconds;
                intFromServer = objEndTime.Subtract(objServerTime).Milliseconds;
                intResult = (int?)objStopWatch.ElapsedMilliseconds;
                System.Diagnostics.Debug.Print(string.Format("Milliseconds from client to server {0}, milliseconds from server back to client {1}, and milliseconds round trip {2}.", intToServer, intFromServer, intResult));
            }
        }
        return intResult;
    }
    Now, while the round trip measurement is fairly accurate, give or take, any measurement of latency to and from SQL Server is subject to the accuracy of the time synchronization between client and server. If the server's and client's
    clocks aren't synchronized precisely, you will get odd results in the variables intToServer and intFromServer.
    Since the round trip result is measured entirely on the client, that value isn't subject to the whims of client/server time synchronization.
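The same client-side measurement idea can be sketched in Python. The `ping_fn` below is a hypothetical stand-in for a real server query; the point is that because both timestamps are taken on the client, the result is immune to client/server clock skew:

```python
import time

def measure_round_trip(ping_fn):
    """Time a call entirely on the client, so the result does not depend on
    client/server clock synchronization."""
    start = time.perf_counter()
    ping_fn()
    return (time.perf_counter() - start) * 1000  # milliseconds

# Stand-in for a real server call: a fake query that takes about 10 ms
elapsed_ms = measure_round_trip(lambda: time.sleep(0.01))
print(elapsed_ms >= 9)  # True: at least roughly the time the fake call took
```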

  • Creating balance sheet report without using hierarchies

    Hi all,
    Is it possible to create balance sheet report with a fixed layout, without using hierarchies? If yes, how?
    Thanks
    Nitika

    I see... I guess in that case you should also use Cell Definition. Let me explain step by step.
    You've already created a structure at ROW level.
    1) Create another structure at COLUMN level and name it, for instance, "Balance KF".
    2) Within the new structure, create one selection field filtering the appropriate key figure. Give it an auxiliary name such as "auxiliary", because it's going to be used just for calculations (not visible in queries).
    3) Set the properties of this selection to Hidden.
    4) Create two new selection fields within the same structure. Name them but don't make any selection of characteristics, because we're going to assign their values with the cell option (these are col 1 and col 2 in my previous example).
    5) In the toolbar you will find a "Cells" button.
    6) You have to write formulas in col 1 and col 2 (as if you were working in a workbook), referencing the values in the hidden column. In column 1, reference the account-level values; in column 2, reference the total-level values.
    If you have problems defining reference cells, let me know, but this will solve your issue.
    Regards, Leticia

  • Business Content "problem" in IS-Utilities

    Hello to everyone!
    I'm currently working with the IS-Utilities Business Content and I have a little problem (or misunderstanding). The InfoCube 0UCSS_C04 has two key figures defined (in the cube definition in RSA1): 0UCCOUNTDIF and 0UCCOUNT.
    But when I checked the dictionary, I discovered that the fact table (/BI0/F0UCSS_C04) has only one key figure defined (UCCOUNTDIF).
    This "problem" exists in several cubes, not only this one.
    Could you explain why there are such differences?
    Thanks in advance

    Hi,
    Could the other one be a virtual key figure that does not go into the InfoCube, but is instead calculated at runtime (of queries)?
    cheers,

  • BEx Precalculation..

    Hi everybody, I almost succeeded in implementing BEx Precalculation for my workbooks, but:
    1. It does not work when I try to schedule a workbook precalculation from the HTML Broadcaster. I get the following message: 'Error occurred during processing of framework class CL_RSRD_PRODUCER_EXCEL, type PROD', and I have not found any solution for this problem. The error occurs only for the calculation of workbooks (queries and web templates work).
    2. Workbook precalculation works, but only using the RSPRECADMIN transaction. Can I get my workbook by e-mail without JavaScript commands?
    Ideally, I would like only the XLS file in the e-mail.
    Thank you for your reply!
    Best regards, Vincent

    Hello Deepu,
    As the Broadcaster still does not work from the HTML interface (SAP Note 855220 was applied...), I do my tests using the RSPRECADMIN transaction. When I receive a mail from RSPRECADMIN, there are JavaScript commands inside, and I had to disable all JavaScript in the Lotus Notes mail browser to open it without errors. From RSPRECADMIN you can apply a web template to your mail (if you do not, a default web template is loaded...), and even a blank web template produced by the Web Application Designer contains JavaScript commands that break my Lotus Notes...
    Regards, Vincent

  • Before Aggregation in NW2004s(BI7)

    Hi Bex Gurus,
      We have upgraded our BI system to NW2004s, and our BEx version is still 3.5.
      Now we are also planning to migrate BEx to the new version (NW2004s).
      We have used "Before Aggregation" in some of the calculated KFs in our queries. "Before Aggregation" is obsolete in NW2004s, so I want to know whether these calculated KFs will give the same results when we migrate our front end to NW2004s.
    Please provide your experience.
    Regards
    MB

    Peter,
    Thank you for your reply.
    Is there an automatic way to do what you mentioned in the sentence below?
    "Using the new front end tools you can apply exception aggregation to all calculated key figures including those with formulas in them"
    Our scenario is:
    We have many CKFs with the "Before Aggregation" setting in BW 3.5.
    Example: CKF1 with the "Before Aggregation" setting works irrespective of the drill-down (i.e. plant, country or region) in BW 3.5. But when I migrate the query to NW2004s, CKF1 does not give the expected result when we drill down on plant, country or region.
    So, do I need to create three new CKFs (on Plant, Country and Region) using exception aggregation, and use the corresponding CKF based on the drill-down, to get the correct result in NW2004s?
    Appreciate your help.
    Regards,
    Madhukar

  • Stored Procedure Keys

    Hi
    I have a function in the database which returns a numeric value after being passed an ID (numeric). How do I create a join in the Physical Layer, given that the function returns a numeric value and, being a function, can return only one column?
    Is there any way I can keep the ID (parameter) I am passing to the function in the Physical Layer, so that I can make the necessary join in the Physical Layer from the fact/dimension to the stored procedure?
    Is it possible to create multiple stored procedures in the Physical Layer if my function returns only one column?
    Regards
    Riyaz

    If I understood your question correctly, you have a function that returns a numeric value based on an ID; let's call it Customer ID as a sample. What you need to do is create a view like this:
    CREATE VIEW OUTSTANDING AS
    SELECT CUSTOMER_ID, get_outstanding(CUSTOMER_ID) AS OUTSTANDING_AMT FROM CUSTOMER
    You can then query this view like this:
    SELECT * FROM OUTSTANDING WHERE CUSTOMER_ID = 999
    You could also use this view in your physical layer and join it by Customer ID. If you already have a fact table, you could put a view on top to add the extra column with your function.
    This is all good and should work fine, with one caveat: executing PL/SQL functions in SQL is not efficient. Oracle has to change context between PL/SQL and SQL for each row, and context switching is an expensive operation. In a proper DWH system you would pre-populate your fact table with the results of get_outstanding for all customers, so that there is no delay in calculating it when OBIEE queries the data.

  • Same infoprovider, different results

    Hello everyone,
    Here is my problem: I have one InfoProvider and two queries based on it. Both queries contain an identical formula, but the results are different...
    Could you please help me? Where could the problem be?
    Thank you,
    Iuliana

    Please check all the InfoObjects in the rows and columns of both queries, and the formulas/CKFs/RKFs and variables too... there will definitely be a difference somewhere, which is why you are getting different results. Do all the calculated KFs in both queries use the same formulas?
    There is no rule that two queries must display the same result just because they are on the same InfoProvider, unless they are identical (one query is a copy of the other)...
    I don't think that is the case here.
    Please don't forget to assign points if this is helpful; it's the way of saying thanks here in SDN.
