Help with aggregate function max query

I have a large database that stores daily oil life and odometer readings from thousands of vehicles, each tested only once a month.
I am trying to take the STIDs from one month where EOL_READ = 0 and pull those same vehicles' rows from the previous month, where EOL_READ can be anything.
Here is the original query, which grabs everything:
(select distinct vdh.STID, vdh.CASE_SAK,
max(to_number(decode(COMMAND_ID,'EOL_READ',COMMAND_RESULT,-100000))) EOL_READ,
max(to_number(decode(COMMAND_ID,'ODO_READ',COMMAND_RESULT,-100000))) ODO_READ,
max(to_number(decode(COMMAND_ID,'OIL_LIFE_PREDICTION',COMMAND_RESULT,-100000))) OIL_LIFE_PREDICTION
from veh_diag_history vdh, c2pt_data_history cdh
where vdh.veh_diag_history_sak = cdh.veh_diag_history_sak
and cdh.COMMAND_ID in ('EOL_READ','ODO_READ','OIL_LIFE_PREDICTION')
and vdh.LOGICAL_TRIGGER_SAK = 3
and cdh.CREATED_TIMESTAMP = vdh.CREATED_TIMESTAMP
and cdh.CREATED_TIMESTAMP >= to_date('03/01/2007 12:00:00 AM','mm/dd/yyyy HH:MI:SS AM')
and cdh.CREATED_TIMESTAMP < to_date('04/01/2007 12:00:00 AM','mm/dd/yyyy HH:MI:SS AM')
group by vdh.STID, vdh.case_sak
having count(distinct command_id) = 3
order by OIL_LIFE_PREDICTION)
which gives 5 columns:
STID, CASE_SAK, EOL_READ, ODO_READ, and OIL_LIFE_PREDICTION
and returns about 80,000 rows for the above query.
I only want one reading per month, but sometimes I get two.
STID is the unique identifier for a vehicle.
I tried the following query, which nests one request within the other, and SQL times out every time:
select distinct vdh.STID, vdh.CASE_SAK,
max(to_number(decode(COMMAND_ID,'EOL_READ',COMMAND_RESULT,-100000))) EOL_READ,
max(to_number(decode(COMMAND_ID,'ODO_READ',COMMAND_RESULT,-100000))) ODO_READ,
max(to_number(decode(COMMAND_ID,'OIL_LIFE_PREDICTION',COMMAND_RESULT,-100000))) OIL_LIFE_PREDICTION
from veh_diag_history vdh, c2pt_data_history cdh
where vdh.veh_diag_history_sak = cdh.veh_diag_history_sak
and cdh.COMMAND_ID in ('EOL_READ','ODO_READ','OIL_LIFE_PREDICTION')
and vdh.LOGICAL_TRIGGER_SAK = 3
and cdh.CREATED_TIMESTAMP = vdh.CREATED_TIMESTAMP
and cdh.CREATED_TIMESTAMP >= to_date('02/01/2007 12:00:00 AM','mm/dd/yyyy HH:MI:SS AM')
and cdh.CREATED_TIMESTAMP < to_date('03/01/2007 12:00:00 AM','mm/dd/yyyy HH:MI:SS AM')
group by vdh.STID, vdh.case_sak
having count(distinct command_id) = 3
and vdh.stid in (select distinct vdh.STID, vdh.CASE_SAK,
max(to_number(decode(COMMAND_ID,'EOL_READ',COMMAND_RESULT,-100000))) EOL_READ,
max(to_number(decode(COMMAND_ID,'ODO_READ',COMMAND_RESULT,-100000))) ODO_READ,
max(to_number(decode(COMMAND_ID,'OIL_LIFE_PREDICTION',COMMAND_RESULT,-100000))) OIL_LIFE_PREDICTION
from veh_diag_history vdh, c2pt_data_history cdh
where vdh.veh_diag_history_sak = cdh.veh_diag_history_sak
and cdh.COMMAND_ID in ('EOL_READ','ODO_READ','OIL_LIFE_PREDICTION')
and vdh.LOGICAL_TRIGGER_SAK = 3
and cdh.CREATED_TIMESTAMP = vdh.CREATED_TIMESTAMP
and cdh.CREATED_TIMESTAMP >= to_date('03/01/2007 12:00:00 AM','mm/dd/yyyy HH:MI:SS AM')
and cdh.CREATED_TIMESTAMP < to_date('04/01/2007 12:00:00 AM','mm/dd/yyyy HH:MI:SS AM')
group by vdh.STID, vdh.case_sak
having count(distinct command_id) = 3
and (max(to_number(decode(COMMAND_ID,'EOL_READ',COMMAND_RESULT,-100000)))) = 0)
order by OIL_LIFE_PREDICTION
So, in summary: I am trying to select values from the previous month, but only for those STIDs where this month's value for EOL_READ = 0.
Any ideas? Please help.

You can use row_number() within each stid and each month to determine the first read of each month. Then you can use lag() to get the previous month's reading if the current month's reading is zero.
with all_data as (
  select stid,
         case_sak,
         eol_read,
         created_timestamp,
         row_number() over (partition by stid, trunc(created_timestamp, 'mm')
                            order by created_timestamp) as rn
    from veh_diag_history
)
select stid,
       case_sak,
       case
         when eol_read = 0
           then lag(eol_read) over (partition by stid order by created_timestamp)
         else eol_read
       end as eol_read
  from all_data
 where rn = 1;
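Note that the sketch above assumes a single table that already carries eol_read and created_timestamp columns; in the poster's schema that value is pivoted out of c2pt_data_history. A hedged, untested adaptation to the two-table layout from the question - the same pivot grouped per month (so duplicate reads within a month collapse), with February rows kept only for STIDs whose March EOL_READ is zero:
with monthly as (
  select vdh.stid,
         vdh.case_sak,
         trunc(cdh.created_timestamp, 'mm') as read_month,
         max(to_number(decode(cdh.command_id, 'EOL_READ', cdh.command_result, -100000))) as eol_read,
         max(to_number(decode(cdh.command_id, 'ODO_READ', cdh.command_result, -100000))) as odo_read,
         max(to_number(decode(cdh.command_id, 'OIL_LIFE_PREDICTION', cdh.command_result, -100000))) as oil_life_prediction
    from veh_diag_history vdh, c2pt_data_history cdh
   where vdh.veh_diag_history_sak = cdh.veh_diag_history_sak
     and cdh.created_timestamp = vdh.created_timestamp
     and cdh.command_id in ('EOL_READ', 'ODO_READ', 'OIL_LIFE_PREDICTION')
     and vdh.logical_trigger_sak = 3
     and cdh.created_timestamp >= date '2007-02-01'
     and cdh.created_timestamp <  date '2007-04-01'
   group by vdh.stid, vdh.case_sak, trunc(cdh.created_timestamp, 'mm')
  having count(distinct cdh.command_id) = 3
)
select feb.*
  from monthly feb, monthly mar
 where mar.stid = feb.stid
   and mar.read_month = date '2007-03-01'
   and mar.eol_read = 0
   and feb.read_month = date '2007-02-01'
 order by feb.oil_life_prediction;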

Similar Messages

  • Select with aggregate function MAX

    Hi,
    Can I write a query like:
    select field1 field2 MAX( field3 )
    from z_table
    into table itab
    for all entries in itab2
    where field1 = itab2-field1
         and field2 = itab2-field2.
    In the above case all the 3 fields are key fields.
    Thanks in advance.

    Hi,
I did try it, but got an error like: "The field "z_table~field2" from the select list is missing in the GROUP BY clause" - but my aggregate function is used only for field3... So, I wanted to know if anybody has come across such a query....
    Thanks,
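For reference, that error is the general aggregate rule: every non-aggregated field in the select list must also appear in a GROUP BY clause. A minimal sketch of that shape in plain SQL (whether Open SQL permits GROUP BY in combination with FOR ALL ENTRIES depends on the release, so check the documentation before relying on this):
select field1, field2, max( field3 )
  from z_table
 group by field1, field2;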

  • Help with stored function

Hi... I was wondering if I could get help with this function. How do I write a function to return the hours between a begin date and an end date for an employee? Thanks so much.

    EdStevens wrote:
    AlexeyDev wrote:
    sb92075 wrote:
select (date2-date1)*24 from dual;
not as above but as below
select (date2-date1)/24 from dual;
date2-date1 is an amount of days. Divide it by 24 and what? If you multiply it by 24 you will have a chance to know how many hours are between these two dates. :-)
Don't forget that a DATE type also includes a time component.
I suppose it doesn't matter if you did a difference between two dates. The result is always a number of days.
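A minimal sketch of such a function, assuming DATE parameters; the name hours_between and the parameter names are made up for illustration:
create or replace function hours_between (
    p_begin in date,
    p_end   in date
) return number
is
begin
    -- DATE subtraction yields days (time component included),
    -- so multiplying by 24 converts the difference to hours
    return (p_end - p_begin) * 24;
end hours_between;
/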

  • Join tables with aggregate function

    Hello
    I have 4 views that I need to perform aggregate function, count, on and then join them for query output.
    Basically each view has a column with a score and a subcontractor name. One subcontractor may have more than one score in each view.
    Here is the sql that I'm working with thus far:
    select e.sub_name, (count(e.score) - 1), (count(v.score) - 1), (count(s.score) - 1), (count(u.score) - 1)
    from speed.v_ratings_ex e, speed.v_ratings_vg v, speed.v_ratings_sat s, speed.v_ratings_uns u
    where v.sub_name = e.sub_name
    and s.sub_name = e.sub_name
    and u.sub_name = e.sub_name
    group by e.sub_name
    order by e.sub_name asc;
    This results in each column returning the same value b/c the join is performed before the aggregate function.
    Can anyone offer some help so that I may get the desired results?

    You need to use in-line views to perform the aggregates, then join the in-line views.
    Something like:
    SELECT e.sub_name, e.score - 1 escore, v.score - 1 vscore,
           s.score - 1 sscore, u.score - 1 uscore
    FROM (SELECT sub_name,count(*) score
          FROM v_ratings_ex
          GROUP BY sub_name) e,
         (SELECT sub_name,count(*) score
          FROM v_ratings_vg
          GROUP BY sub_name) v,
         (SELECT sub_name,count(*) score
          FROM v_ratings_sat
          GROUP BY sub_name) s,
         (SELECT sub_name,count(*) score
          FROM v_ratings_uns
          GROUP BY sub_name) u
    WHERE v.sub_name = e.sub_name and
          s.sub_name = e.sub_name and
          u.sub_name = e.sub_name
ORDER BY e.sub_name asc;
TTFn
    John

  • Problem with aggregate function

    Hello,
    the prerequisite is to have a table with the following columns (#id, a_date).
    Now the task is to select the maximum date from a range of rows of this table
    select max(t.a_date)
    from the_table t
    where <condition to get only some rows of the_table>
    and to get the id of the record which holds the maximum date. That's where the problem is located. It should look like the following code (yes, I know this cannot work because grouping by the id makes the aggregate senseless) just to make clear what I want to do.
    select max(t.a_date), t.id
    from the_table t
    where <condition to get only some rows of the_table>
    group by t.id
    Does anybody have any idea how to solve the problem?
    With regards,
    Cosima
    Message was edited by:
    Cosima Laube

    Hi all,
    as the condition in the where-clause is very complex, I decided to use the solution with the FIRST function. After googling and reading a bit, I have just two more questions for my deeper understanding:
1) Am I right that both snippets below do exactly the same?
SELECT MAX (t.a_date)
     , MAX (t.ID) KEEP (DENSE_RANK FIRST ORDER BY t.a_date DESC)
  FROM the_table t
 WHERE <condition to get only some rows of the_table>
SELECT MAX (t.a_date)
     , MAX (t.ID) KEEP (DENSE_RANK LAST ORDER BY t.a_date ASC)
  FROM the_table t
 WHERE <condition to get only some rows of the_table>
2) Can I also use the MIN function for t.ID, as there the aggregate function does not have any effect (but is necessary in order to use KEEP and FIRST or KEEP and LAST)?
    With regards,
    Cosima
    Message was edited by:
    Cosima Laube
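For what it's worth, the MIN variant asked about in 2) would take the following shape - the same KEEP clause, with the outer aggregate only mattering when several rows tie on the maximum a_date (a sketch along the lines of the snippets above, not tested):
SELECT MAX (t.a_date)
     , MIN (t.ID) KEEP (DENSE_RANK FIRST ORDER BY t.a_date DESC)
  FROM the_table t
 WHERE <condition to get only some rows of the_table>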

  • Help with count and sum query

    Hi I am using oracle 10g. Trying to aggregate duplicate count records. I have so far:
    SELECT 'ST' LEDGER,
    CASE
    WHEN c.Category = 'E' THEN 'Headcount Exempt'
    ELSE 'Headcount Non-Exempt'
    END
    ACCOUNTS,
    CASE WHEN a.COMPANY = 'ZEE' THEN 'OH' ELSE 'NA' END MARKET,
    'MARCH_12' AS PERIOD,
    COUNT (a.empl_id) head_count
    FROM essbase.employee_pubinfo a
    LEFT OUTER JOIN MMS_DIST_COPY b
    ON a.cost_ctr = TRIM (b.bu)
    INNER JOIN MMS_GL_PAY_GROUPS c
    ON a.pay_group = c.group_code
    WHERE a.employee_status IN ('A', 'L', 'P', 'S')
    AND FISCAL_YEAR = '2012'
    AND FISCAL_MONTH = 'MARCH'
    GROUP BY a.company,
    b.district,
    a.cost_ctr,
    c.category,
    a.fiscal_month,
    a.fiscal_year;
    which gives me same rows with different head_counts. I am trying to combine the same rows as a total (one record). Do I use a subquery?

    Hi,
    Whenever you have a problem, please post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) from all tables involved.
    Also post the results you want from that data, and an explanation of how you get those results from that data, with specific examples.
    user610131 wrote:
... which gives me same rows with different head_counts.
If they have different head_counts, then the rows are not the same.
I am trying to combine the same rows as a total (one record). Do I use a subquery?
Maybe. It's more likely that you need a different GROUP BY clause, since the GROUP BY clause determines how many rows of output there will be. I'll be able to say more after you post the sample data, results, and explanation.
    You may want both a sub-query and a different GROUP BY clause. For example:
WITH got_group_by_columns AS
(
     SELECT  a.empl_id
     ,       CASE
                 WHEN c.category = 'E'
                 THEN 'Headcount Exempt'
                 ELSE 'Headcount Non-Exempt'
             END     AS accounts
     ,       CASE
                 WHEN a.company = 'ZEE'
                 THEN 'OH'
                 ELSE 'NA'
             END     AS market
     FROM             essbase.employee_pubinfo a
     LEFT OUTER JOIN  mms_dist_copy            b  ON  a.cost_ctr  = TRIM (b.bu)
     INNER JOIN       mms_gl_pay_groups        c  ON  a.pay_group = c.group_code
     WHERE   a.employee_status  IN ('A', 'L', 'P', 'S')
     AND     fiscal_year        = '2012'
     AND     fiscal_month       = 'MARCH'
)
SELECT    'ST'            AS ledger
,         accounts
,         market
,         'MARCH_12'      AS period
,         COUNT (empl_id) AS head_count
FROM      got_group_by_columns
GROUP BY  accounts
,         market
;
But that's just a wild guess.
    You said you wanted "Help with count and sum". I see the COUNT, but what do you want with SUM? No doubt this will be clearer after you post the sample data and results.
    Edited by: Frank Kulash on Apr 4, 2012 5:31 PM

  • Help with finding name for query type

    Hi,
A number of years ago I remember, at a SQL course, coming across a query whose output looked something like this:
header_1     header_2
value 1      value 2
col_1        col_2       col_3
value_a      value b     value c
value_b      value c     value d
I'm pretty sure it's based on a group by query. Would anybody be able to point me in the right direction to find more information about a query that would produce, in SQL*Plus, a very basic form of a report with header information? Or give me the name of this sort of query so that I can google it?
    Benton
Found it: Hierarchy Query
    Found this as well http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:556385200346423686
    Edited by: Benton on Aug 12, 2011 1:21 PM
    Edited by: Benton on Aug 12, 2011 1:36 PM

    Trooper wrote:
is there any function like nearby, can't use between
The MIN aggregate function perhaps?
select min(g.grade) from comqdhb.gradevalues@glink g ...
Regards,
    Rob.

  • Help with ASO function

    Hi all,
I need some help with an ASO MDX function.
Avg({Leaves([Employees].Currentmember)}, [Calculated_Field]) gives me the average of [Calculated_Field] for all levels of Employees, but I want to add more dimensions, like Region and Year.
Please advise how I can achieve this.
    Thanks
    Andy

You have to use CrossJoin in order to add more dimension members to the formula. This will give you some idea:
Re: Writing formula in Outline??????
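A hedged sketch of what that nested CrossJoin might look like - CrossJoin takes exactly two sets, so it is nested, and the [Region] and [Year] dimension names are assumptions:
Avg(CrossJoin({Leaves([Employees].CurrentMember)},
              CrossJoin({[Region].CurrentMember}, {[Year].CurrentMember})),
    [Calculated_Field])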
    Regards,
    RSG

  • Aggregate function max

Hi everybody,
Can we use the MAX aggregate function to pick a field of type CHAR from a database table? If yes, then how?
Please answer soon.
Thank you.

No. MAX is only for numbers (it will not even work for the NUMC type).
Open SQL statements will not work on ASCII-format character fields.
    Cheers
    Kothand

  • Help with Converting an Access query

    Hello:
First time poster; I would appreciate some help in converting this Access query into T-SQL so I can use it in a stored procedure. I don't need help resolving the syntax for the variables as parameters in the second query, but I am stuck on writing the first query in T-SQL. I assume I need some type of nested query with a left join to another query.
How do I convert this from Access into MS-SQL?
    SELECT TBL_MONTH.L_Month, TBL_MONTH.L_MonthName
    FROM TBL_MONTH LEFT JOIN qry_MonthCheck ON TBL_MONTH.L_Month = qry_MonthCheck.L_Month
    WHERE (((qry_MonthCheck.L_Month) Is Null));
    qryMonthCheck in Access is defined as
    SELECT TBL_MONTH.L_Month, TBL_SIGNOFF.SO_YEAR, TBL_SIGNOFF.SO_USER, TBL_SIGNOFF.SO_LOCATION
    FROM TBL_MONTH RIGHT JOIN TBL_SIGNOFF ON TBL_MONTH.L_Month = TBL_SIGNOFF.SO_MONTH
    WHERE (((TBL_SIGNOFF.SO_YEAR)=[forms]![frm_signoff]![cboyear]) AND ((TBL_SIGNOFF.SO_LOCATION)=[Tempvars]![tmpVarUserLOC]));
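A hedged T-SQL sketch of the conversion, folding qry_MonthCheck in as a derived table and assuming parameters @Year and @Location stand in for the Access form and TempVars references:
SELECT m.L_Month, m.L_MonthName
FROM TBL_MONTH AS m
LEFT JOIN (
    SELECT s.SO_MONTH
    FROM TBL_SIGNOFF AS s
    WHERE s.SO_YEAR = @Year
      AND s.SO_LOCATION = @Location
) AS mc ON m.L_Month = mc.SO_MONTH
WHERE mc.SO_MONTH IS NULL;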

    It is very much up to personal taste. For tables I use no prefix at all. Or suffix. I prefer plural: categories, products, etc. For junction tables, I drop the plural in the first entity, for instance productcategories. As for the case, many people prefer
    initial uppercase, while I go for lowercase only. But initial uppercase has its points, particularly in documentation. I am a staunch advocate of using case-sensitive collation for metadata. I don't see any point in mixing productcategories, Productcategories,
    ProductCategories etc. That can only cause confusion.
As for other entities, I don't use views much, and I don't have any prefix for these either. In fact, several views I've been involved with used to be tables once upon a time.
If you want to add a marker to the object name, I recommend using a suffix. It is much easier to find objects in the version control system and other browsers when not everything is cluttered on the same letter. For instance, I typically add _sp at the end
    of stored procedure names.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • I need help with Analytic Function

    Hi,
    I have this little problem that I need help with.
    My datafile has thousands of records that look like...
    Client_Id Region Countries
    [1] [1] [USA, Canada]
    [1] [2] [Australia, France, Germany]
    [1] [3] [China, India, Korea]
    [1] [4] [Brazil, Mexico]
    [8] [1] [USA, Canada]
    [9] [1] [USA, Canada]
    [9] [4] [Argentina, Brazil]
    [13] [1] [USA, Canada]
    [15] [1] [USA]
    [15] [4] [Argentina, Brazil]
    etc
    My task is is to create a report with 2 columns - Client_Id and Countries, to look something like...
    Client_Id Countries
    [1] [USA, Canada, Australia, France, Germany, China, India, Korea, Brazil, Mexico]
    [8] [USA, Canada]
    [9] [USA, Canada, Argentina, Brazil]
    [13] [USA, Canada]
    [15] [USA, Argentina, Brazil]
    etc.
    How can I achieve this using Analytic Function(s)?
    Thanks.
    BDF

Hi,
That's called String Aggregation, and the following site shows many ways to do it:
http://www.oracle-base.com/articles/10g/StringAggregationTechniques.php
Which one should you use? That depends on which version of Oracle you're using, and your exact requirements.
For example, is order important? You said the results should include:
CLIENT_ID  COUNTRIES
1          USA, Canada, Australia, France, Germany, China, India, Korea, Brazil, Mexico
but would you be equally happy with
CLIENT_ID  COUNTRIES
1          Australia, France, Germany, China, India, Korea, Brazil, Mexico, USA, Canada
or
CLIENT_ID  COUNTRIES
1          Australia, France, Germany, USA, Canada, Brazil, Mexico, China, India, Korea
?
Mwalimu wrote:
... How can I achieve this using Analytic Function(s)?
The best solution may not involve analytic functions at all. Is that okay?
If you'd like help, post your best attempt, a little sample data (CREATE TABLE and INSERT statements), the results you want from that data, and an explanation of how you get those results from that data.
Always say which version of Oracle you're using.
Edited by: Frank Kulash on Aug 29, 2011 3:05 PM
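On 11.2 or later, LISTAGG is the most direct of those techniques; a minimal sketch, assuming a table shaped like the sample data (the name client_regions is hypothetical):
select client_id,
       listagg(countries, ', ') within group (order by region) as countries
  from client_regions
 group by client_id;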

  • Help with Sort function in Terminal

Hello all... this is my first post on here, as I'm having some trouble with some Terminal commands. I'm trying to learn Terminal at the moment, but I would appreciate some help with this one....
    I'm trying to sort a rather large txt file into alphabetical order and also delete any duplicates. I've been using the following command in Terminal:
    sort -u words.txt > words1.txt
    but after a while I get the following error
    sort: string comparison failed: Illegal byte sequence
    sort: Set LC_ALL='C' to work around the problem.
    sort: The strings compared were `ariadnetr\345dens\r' and `ariadnetr\345ds\r'.
    What should my initial command be? What is Set LC_ALL='C'?
    Hope you guys can help?

Various languages have distinct sorting - collation - sequences.
Characters can and do sort differently, depending on which language is involved.
Languages here can include the written languages of humans, and a few settings associated with programming languages. This is all part of what is known as internationalization and localization, and there are various documents around on that topic.
    The LC_ALL environment variable sets all of the locale-related settings en-mass, including the collation sequence that is established via LC_COLLATE et al, and the sort tool is suggesting selecting the C language collation.
    Here, the tool is suggesting the following syntax:
    LC_ALL=C sort -u words.txt > words1.txt
    This can also be done by exporting the LC_ALL, but it's probably better to just do this locally before invoking the tool.
    Also look at the lines of text in question within the files, and confirm the character encoding of the file.
    Files can have different character encodings, and there's no reliable means to guess the encoding.  For some related information, see the file command:
    file words.txt
    ...and start reading some of the materials on internationalization and localization that are posted around the 'net. Here's Apple's top-level overview.
In this case, it looks like there's an "odd" character - probably an å - on that line, apparently from the Svenska word ariadnetrådens.
Switching collation can help here, or - if the character is not necessary - removing it via tr or replacing it via sed can be equally effective solutions.
Given it appears to be Svenska, it might work better to switch to Svenska collation than to the suggested C collation.
    I think that's going to be sv_SE, which would make the command:
    LC_ALL=sv_SE sort -u words.txt > words1.txt
This is all generic bash shell scripting stuff, and not specific to OS X. If you haven't already seen them, the folks over at tldp have various guides, including a bash guide for beginners and an advanced bash scripting guide - both can be worth skimming. They're not exactly the same as bash on OS X - specific commands, switches, and bash versions can differ - but bash is quite similar across all the platforms.
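If the stray \r carriage returns themselves are unwanted, a hedged one-liner combining the tr approach mentioned earlier with the collation workaround:
tr -d '\r' < words.txt | LC_ALL=C sort -u > words1.txt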

  • Help with bash function(set background=dark/light in vimrc)

I couldn't find any gvimrc files, so I guess it uses the regular one. And since I work pretty much in X too, I thought it would be nice to have a function that sets background=light if I'm in X and background=dark if not. Is that possible?
    /Richard

    vimrc configuration is not the same as bash.
    You probably want something like this in your ~/.vimrc:
if has('gui_running')
    set background=light
else
    set background=dark
endif

  • Update of a table from a select query with aggregate functions.

    Hello All,
    I have problem here:
I have 2 tables, A(a1, a2, a3, a4, ...) and B(a1, a2, b1, b2, b3). I need to calculate avg(a4-a3), max(a4-a3) and min(a4-a3) and insert them into table B. If the foreign keys a1, a2 already exist in table B, I need to update the computed values into columns b1, b2 and b3 respectively, for that a1, a2.
    Q1. Is it possible to do this with a single query ? I would prefer not to join A with B because the table A is very large. Also columns b1, b2 and b3 are non-nullable.
    Q2. Also if a4 and a3 are timestamps what is the best way to find the average? A difference of timestamps yields INTERVAL DAY TO SECOND over which the avg function doesn't seem to work. The averages, max and min in my case would be less than a day and hence all I need is to get the data in the hh:mm:ss format.
    As of now I'm using :
    TO_CHAR(TO_DATE(ABS(MOD(TRUNC(AVG(extract(hour FROM (last_modified_date - created_date))*3600 +
    extract( minute FROM (last_modified_date - created_date))*60 +
    extract( second FROM (last_modified_date - created_date)))
    ),86400)),'sssss'),'hh24":"mi":"ss') AS avg_time,
But this is very long-winded. Something more compact and efficient would be nice.
    Thanks in advance for your inputs.
    Edited by: 847764 on Mar 27, 2011 5:35 PM
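A hedged sketch covering both questions: MERGE performs the insert-or-update in a single statement (Q1), and casting the TIMESTAMPs to DATE turns the difference into a plain NUMBER of days that AVG, MAX and MIN handle directly (Q2). The reading columns are taken as a3/a4 and the target columns as b1/b2/b3 per the post, and the hh24:mi:ss formatting assumes differences under one day, as stated:
merge into b
using (
        select a1, a2,
               avg(cast(a4 as date) - cast(a3 as date)) as avg_days,
               max(cast(a4 as date) - cast(a3 as date)) as max_days,
               min(cast(a4 as date) - cast(a3 as date)) as min_days
          from a
         group by a1, a2
      ) s
   on (b.a1 = s.a1 and b.a2 = s.a2)
 when matched then update set
        b.b1 = to_char(date '2000-01-01' + s.avg_days, 'hh24:mi:ss'),
        b.b2 = to_char(date '2000-01-01' + s.max_days, 'hh24:mi:ss'),
        b.b3 = to_char(date '2000-01-01' + s.min_days, 'hh24:mi:ss')
 when not matched then insert (a1, a2, b1, b2, b3)
      values (s.a1, s.a2,
              to_char(date '2000-01-01' + s.avg_days, 'hh24:mi:ss'),
              to_char(date '2000-01-01' + s.max_days, 'hh24:mi:ss'),
              to_char(date '2000-01-01' + s.min_days, 'hh24:mi:ss'));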

847764 wrote:
Hi,
Thanks everyone for such fast replies. Malakshinov's example worked fine for me as far as updating the table goes. As for the timestamp computations, I'm posting additional info:
Sorry, I don't understand. If Malakshinov's example worked for updating the table, but you still have problems, does that mean you have to do something else besides update the table? If so, what?
Oracle version: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0
Here are the table details:
DESC Table A
Name                Null     Type
ID                  NOT NULL NUMBER
A1                  NOT NULL VARCHAR2(4)
A2                  NOT NULL VARCHAR2(40)
A3                  NOT NULL VARCHAR2(40)
CREATED_DATE        NOT NULL TIMESTAMP(6)
LAST_MODIFIED_DATE           TIMESTAMP(6)
DESCribing the tables can help clarify some things, but it's no substitute for posting CREATE TABLE and INSERT statements. With only a description of the table, nobody can re-create the problem or test their ideas. Please post CREATE TABLE and INSERT statements for both tables as they exist before the MERGE. If table b doesn't contain any rows before the MERGE, then just say so, but you still need to post a CREATE TABLE statement for both tables, and INSERT statements for table a.
The objective is to compute the response times: avg (LAST_MODIFIED_DATE - CREATED_DATE), max (LAST_MODIFIED_DATE - CREATED_DATE) and min (LAST_MODIFIED_DATE - CREATED_DATE), grouped by A1 and A2, and store them in table B under AVG_T, MAX_T and MIN_T. Since AVG_T, MAX_T and MIN_T are only used for reporting purposes we have kept them as Varchar (though I think keeping them as timestamps would make more sense).
I think a NUMBER would make more sense (the number of minutes, for example), or perhaps an INTERVAL DAY TO SECOND. If you stored a NUMBER, it would be easy to compute averages.
In table B the times are stored in the format hh:mm:ss. We don't need milliseconds precision.
If you don't need milliseconds, then you should use DATE instead of TIMESTAMP. The functions for manipulating DATEs are much better.
Hence I was calculating it as follows:
-- Avg Time
TO_CHAR(TO_DATE(ABS(MOD(TRUNC(AVG(extract(hour FROM (last_modified_date - created_date))*3600 +
extract( minute FROM (last_modified_date - created_date))*60 +
extract( second FROM (last_modified_date - created_date)))
),86400)),'sssss'),'hh24":"mi":"ss') AS avg_time,
-- Max Time
extract (hour FROM MAX(last_modified_date - created_date))||':'||extract (minute FROM MAX(last_modified_date - created_date))||':'||TRUNC(extract (second FROM MAX(last_modified_date - created_date))) AS max_time,
-- Min Time
extract (hour FROM MIN(last_modified_date - created_date))||':'||extract (minute FROM MIN(last_modified_date - created_date))||':'||TRUNC(extract (second FROM MIN(last_modified_date - created_date))) AS min_time
Is this something that has to be done before or after the MERGE?
Post the complete statement.
Is this part of a query? Where's the SELECT keyword?
Is this part of a DML operation? Where's the INSERT, or UPDATE, or MERGE keyword?
What are the exact results you want from this? Explain how you get those results.
Is the code above getting the right results? Are you just asking if there's a better way to get the same results?
You have to explain things very carefully. None of the people who want to help you are familiar with your application, or your needs.
I just noticed that my reply is horribly formatted - apologies! I'm just getting the hang of it.
Whenever you post formatted text (such as query results) on this site, type these 6 characters: {code} (small letters only, inside curly brackets) before and after each section of formatted text, to preserve spacing.

  • Select with aggregate functions? i.e. select max(code)...

The Code field on my user table is too small for the key field I want to use, but since it is still necessary, I assume I must generate my own unique value.
    The DBDataSource.Query() method doesn't seem appropriate so should I create a SAPbobsCOM.Recordset and use the DoQuery method to select the maximum value currently in the user table?
    I found some mentions of requiring such a select statement on this board but no examples of the actual code. I'll go ahead and try the DoQuery method unless I hear of something more appropriate.
    Also, I saw a comment that a future release of SAP would have a larger code field for user tables but obviously that didn't happen in 6.7.
    Thanks for any comments.
    Bill Faulk

    Here's what I came up with...
    If you are happy with 99,999,999 records instead of 4,294,967,295 then I guess you can forego the hexadecimal bit. I'd welcome any alternatives if someone has one.
    With the hex:
private string GetNextKey(string strTable)
{
    SAPbobsCOM.Recordset rs =
        (SAPbobsCOM.Recordset)oCompany.GetBusinessObject(BoObjectTypes.BoRecordset);
    rs.DoQuery("select isnull(max(code),'00000000') from [" + strTable + "]");
    string strCode = (string)rs.Fields.Item(0).Value;
    int intKey = Convert.ToInt32(strCode, 16) + 1;   // parse the current max as hex
    return intKey.ToString("X8");                    // format back as 8 hex digits
}
No Hex:
private string GetNextKey(string strTable)
{
    SAPbobsCOM.Recordset rs =
        (SAPbobsCOM.Recordset)oCompany.GetBusinessObject(BoObjectTypes.BoRecordset);
    rs.DoQuery("select isnull(max(code),'00000000') from [" + strTable + "]");
    string strCode = (string)rs.Fields.Item(0).Value;
    int intKey = Convert.ToInt32(strCode) + 1;       // parse as decimal
    return intKey.ToString("00000000");              // zero-pad to 8 digits
}
