Best way to calculate maximum of many columns/values per row

Hello,
for a new report based on several joined tables I have to calculate the last change date. Since every table has a creation date and a (last) modification date, I need to determine the maximum of 8 columns to display it.
Of course I could compare all values pairwise or write a PL/SQL function, but maybe there is already a fast and easy way to get this date?
My query so far. Each of the tables has a cr_date and mod_date column:
SELECT
    egvg.baugruppe_nummer || egvg.anlagennummer || egvg.geraetenummer AS systemnummer
    , '=' || egvg.baugruppe_nummer || egvg.anlagennummer || egvg.geraetenummer AS systemnr2
    , egvg.gebrauchsname AS gebrauchsname
    , NVL2(egvr.raumkode, '+' || egvr.raumkode, NULL) AS raumkode
    , egvg.lieferant AS lieferant
    , CASE egvgs.kabelanschluss WHEN 1 THEN 'ja' WHEN 0 THEN 'nein' ELSE NULL END AS kabelanschluss
    , egvges.nennleistung_kva AS nennleistung
    , NULL AS letzte_aenderung
    , CASE WHEN egvg.status_id = -1 THEN 'GERÄT GELÖSCHT' WHEN egvges.status_id = -1 THEN 'EDATEN GELÖSCHT' ELSE NULL END AS status
  FROM
    egv_geraete egvg
    LEFT JOIN egv_geraetestamm egvgs ON egvgs.id = egvg.geraetestamm_id
    LEFT JOIN egv_geraetestamm_edatenstamm egvges ON egvges.geraetestamm_id = egvgs.id
    LEFT JOIN egv_raeume egvr ON egvr.id = egvg.raum_id
    LEFT JOIN egv_edaten_sfp egvsfp ON egvsfp.id = egvges.sfp_id

Thank you,
Stefan
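
For the row-wise maximum, Oracle's GREATEST function does this directly. A sketch against the query above, assuming each table's date columns are named cr_date and mod_date as stated, and that some may be NULL (GREATEST returns NULL if ANY argument is NULL, hence the NVL guards with an arbitrarily old date):

```sql
-- Replace the "NULL AS letzte_aenderung" column with:
GREATEST(
    NVL(egvg.cr_date,    DATE '1900-01-01'),
    NVL(egvg.mod_date,   DATE '1900-01-01'),
    NVL(egvgs.cr_date,   DATE '1900-01-01'),
    NVL(egvgs.mod_date,  DATE '1900-01-01'),
    NVL(egvges.cr_date,  DATE '1900-01-01'),
    NVL(egvges.mod_date, DATE '1900-01-01'),
    NVL(egvr.cr_date,    DATE '1900-01-01'),
    NVL(egvr.mod_date,   DATE '1900-01-01')
) AS letzte_aenderung
```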


Similar Messages

  • Best way to calculate a duration

    Hi Everybody,
    when loading data I have to calculate a key figure as a duration, i.e. the difference between a start and an end timestamp.
    What do you think would be the best way to calculate this key figure?
    I would prefer to do it in the update rule.
    I would appreciate any ideas.
    Best regards,
    FedeX

    Hi
    It all depends on the requirement. If you have a separate key figure in the cube and calculate the value in the update rule, it will increase the size of the cube a bit and slow the loading process, but it will help the query run fast. If you instead create a calculated key figure on the cube, the value is computed while the query is running, which increases query execution time.
    So it's a trade-off: consider how often you run the query versus how often you load the cube.
    Thanks

  • Column values to row

    Please help me print column values as row values.
    For example, given the column values
    1
    2
    the output should be 1,2

    Check this out.. might be useful
    recursive select?
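
On Oracle 11g or later, LISTAGG is the usual answer. A sketch, assuming a hypothetical table t with a single column val holding the values 1 and 2:

```sql
-- Aggregate all values of val into one comma-separated row.
SELECT LISTAGG(val, ',') WITHIN GROUP (ORDER BY val) AS vals
FROM t;
-- result: 1,2
```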

  • What is the best way to install snapdrive to many hosts?

    Hi all, next weekend I plan to install SnapDrive on over 160 hosts. What is the best way to install SnapDrive on over 160 UCS hosts? I found "remote install" in SnapManager for Hyper-V. Is this useful for me? Please let me know an easy way to install SnapDrive on so many hosts. Thank you.

    The current generation of SnapDrive does not support remote installation to multiple hosts. What you saw about Hyper-V is limited to the Hyper-V host, so it is not a remote install to multiple hosts across the network. Since SnapDrive has a command-line installer, you CAN however script the remote installation. I suggest you find the exact installation command that works for your environment and use a program like psexec to remotely execute it against multiple targeted servers. If you need help with the script, NetApp Professional Services can be engaged and can even come on site and do it for you, if needed.
    Hope that helps,
    Domenico

  • ADF: Best way to find out how many rows are fetched?

    Hello,
    I have overridden the executeQueryForCollection method of ViewObject, in which I execute super.executeQueryForCollection and afterwards want to find out how many rows were fetched during the execution.
    If I try to use getFetchedRowCount I always get 0; if I use getEstimatedRowCount, the query is re-executed.
    What method is better to use for that?
    Thank you,
    Veniamin Goldin
    Forbis, Ltd.

    I have a 'home-made' view called RBS, whose definition is this:
    create view RBS as
    select /*+ RULE  */ substr(s.username,1,10) oracle,
           substr(case when s.osuser like 'oramt%'
                       then nvl(upper(s.client_info),'client_info not set')
                       else substr(s.machine,instr(s.machine, '\')+1,length(s.machine))||':'||s.osuser
                  end
                  ,1,20) CLIENT
    ,      substr(''''||s.sid||','||s.serial#||''''||decode(s.status,'KILLED','*',''),1,12) kill_id
    ,      lpad(to_char(t.xidusn),4) rbs#
    ,      lpad(to_char(t.used_ublk),4) ublk
    ,      lpad(to_char(t.used_urec),8) urecords
    ,      i.block_gets
    ,      lpad(to_number(round((sysdate - to_date(t.start_time,'MM/DD/YY HH24:MI:SS')) * 60 * 60 * 24)),9) time
    ,      upper(substr(s.program,1,20)) PROGRAM
    ,      to_char(s.LOGON_TIME,'HH24:MI:SS DD-MON') LOGIN_TIME
    from   sys.v_$transaction t
    ,      sys.v_$session s
    ,      sys.v_$sess_io i
    ,      sys.v_$process p
    where  s.saddr = t.ses_addr
    and    i.sid = s.sid
    and    p.addr = s.paddr
    /
    By monitoring the URECORDS column value of the row that corresponds to my session doing a transaction, I can see how it progresses.
    Toon

  • Best way to pre-populate material variable with values for users

    Hi, I have a requirement to pre-populate a material variable with about 5 materials, and those are the materials that will default when the query is called. The users would also need the ability to change those values.
    My thought is to create a user-exit variable that derives the values from a user-maintained table (infoobject).
    Does anyone else have any suggestions or ideas on the best way to handle this?

    I don't know if there is a best solution...
    Infoobject
    With this option you have to create a new infoobject (ZMATERIAL) without attributes (you need only a list of material codes) and then set the authorization profile for the user in order to manage the content.
    The creation of an infoobject corresponds to a table creation, but you don't need any of the other specific options that belong to the infoobject (as a technical object)...
    Table
    With this option you have to create a Z table with only one field and then allow maintenance of the table via SM30...
    In the end, if you want to be a purist use the table; otherwise use an infoobject (but there are no significant differences!).
    Bye,
    Roberto

  • What is the best way to update /101 Total Gross YTD values within CRT

    Hello - Can someone please suggest the best way that I can update the current /101 YTD values within the CRT?  We had a conversion effort take place earlier in the year that did not accumulate the amounts of one particular wage type to the /101 bucket.  The wage type itself is currently in the CRT with the appropriate accumulators set up correctly but the /101 YTD values are too low due to the missing amounts.  Any suggestions would be greatly appreciated.
    Thanks!

    Hello Kristy,
    Did you try RPUCRT00? This program recreates the payroll cumulation tables and might help in this case.
    Hopefully this information helps.
    Kind regards,
    Graziela Dondoni

  • What's the best way to calculate table usage in Mb?

    Given a table structure, and knowing it will receive 30,000 new entries each month, how do I calculate the table growth in KB?
    Also, how can I calculate the initial disk occupation of the table?
    Is there an equation or formula that I can apply using the column definitions as parameters?
    This is one of the tables in my application I have to calculate:
    CREATE TABLE "LOG_PESQUISA"
    (     "COD_LOG_PESQUISA" NUMBER(10,0) NOT NULL ENABLE,
         "TXT_PCHAVE" VARCHAR2(100),
         "IND_DIGITAL" NUMBER(1,0) NOT NULL ENABLE,
         "VLR_TOTALFISICO" NUMBER(9,0) NOT NULL ENABLE,
         "VLR_TOTALDIGITAL" NUMBER(9,0) NOT NULL ENABLE,
         "IND_FISICO" NUMBER(1,0) NOT NULL ENABLE,
         "TXT_CODIGO" VARCHAR2(50),
         "TXT_BOOLEAN_TYPE" VARCHAR2(3),
         "IND_DATA" NUMBER(1,0) NOT NULL ENABLE,
         "DAT_INI" DATE,
         "DAT_FIM" DATE,
         "COD_COLECAO" NUMBER(10,0),
         "COD_CREDITO" NUMBER(10,0),
         "VLR_EDICAO" NUMBER(5,0),
         "IND_TAMANHO" VARCHAR2(2),
         "IND_PUBLICADAS" NUMBER(1,0) NOT NULL ENABLE,
         "IND_SOBRAS" NUMBER(1,0) NOT NULL ENABLE,
         "IND_LUI" NUMBER(1,0) NOT NULL ENABLE,
         "IND_AUI" NUMBER(1,0) NOT NULL ENABLE,
         "COD_USU_INC" VARCHAR2(30) NOT NULL ENABLE,
         "DAT_INC" DATE NOT NULL ENABLE,
         "ID_CREDITO" NUMBER(10,0),
         "IND_PCHAVE" NUMBER(1,0) NOT NULL ENABLE,
         "IND_HEADLINE" NUMBER(1,0) NOT NULL ENABLE,
         "IND_LEGENDA" NUMBER(1,0) NOT NULL ENABLE,
         CONSTRAINT "LOG_PESQUISA_PESQUISA_PK" PRIMARY KEY ("COD_LOG_PESQUISA") ENABLE
    )
    /

    The most accurate approach would be to
    1) Create the table
    2) Load it with some sample data. Make sure that the sample data is as close to reality as possible (i.e. if txt_pchave will be 90 characters 20% of the time, 50 characters 50% of the time, 10 characters 20% of the time and NULL 10% of the time, make sure your sample data reflects that).
    3) Query DBA_SEGMENTS to see how big the table is
    SELECT segment_name table_name, sum(bytes)/1024/1024 MB
      FROM dba_segments
    WHERE segment_name = 'LOG_PESQUISA';
    4) Multiply by the number of rows that you really expect. You might load 3,000 records in step 2, for example, so you'd multiply by 10 to get the size if you loaded 30,000 records.
    You can extend this to look at the size of the index(es) relatively easily. If you are doing more than straight inserts, you would also want to simulate that to account for things like row migration, empty blocks, etc.
    Justin
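
The scaling in step 4 can also be done directly in the query. A sketch, assuming 3,000 sample rows were loaded and 30,000 are expected:

```sql
-- Scale the measured segment size by expected_rows / sample_rows.
SELECT ROUND(SUM(bytes)/1024/1024 * (30000/3000), 2) AS projected_mb
  FROM dba_segments
 WHERE segment_name = 'LOG_PESQUISA';
```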

  • Best Way to calculate totals from query

    Could someone point me in the right direction to add up my
    data and distinctly show it in my query?
    I have a table with the following fields:
    id, team_id, compname, teamname, totallost
    I want to add up the "totallost" column where the "team_id" and
    "compname" fields are the same, then show the compname with the
    sum of totallost once in my table and determine who is
    winning.

    Thank you for the great help. This code works well, but is
    there a way to display the highest totallost and differentiate
    between competition names? My example is for one compname, but the
    table will have multiple compnames, and I want to build a table
    showing only the highest totallost for each compname.
    You guys have been a great help. I learned something new
    today already.
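
For the follow-up (the highest summed totallost per compname), one approach is an analytic RANK over the grouped sums. A sketch; the table name results is a placeholder for whatever the table is actually called:

```sql
-- Sum totallost per (compname, teamname), then keep only the top
-- team within each compname. RANK returns both rows on a tie.
SELECT compname, teamname, total
FROM (
  SELECT compname,
         teamname,
         SUM(totallost) AS total,
         RANK() OVER (PARTITION BY compname
                      ORDER BY SUM(totallost) DESC) AS rnk
  FROM results
  GROUP BY compname, teamname
)
WHERE rnk = 1;
```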

  • Best way to calculate these steps?

    I have a canvas with something at a point, and when the user enters a new point, I want to move it there. The issue is that I want to move it evenly and smoothly.
    If abs(x - newX) == abs(y - newY), there's no issue: just move +/-1 in each direction. The problem arises when the difference in the x coordinates is not the same as the difference in the y coordinates. What algorithm would I use to make it move in a straight line to the point, in consistent steps?

    duffymo wrote:
    DrLaszloJamf wrote:
    Suppose you want to move from (x0,y0) to (x1,y1) as t goes from 0 to 1.
    Then you want x(t=0) to be x0 and x(t=1) to be x1, so
    x(t) = x0*(1-t) + x1*(t)
    Note that there is nothing magical about this being along the x-axis. The same reasoning yields:
    y(t) = y0*(1-t) + y1*(t)
    It's just a linear parameterization using simple shape functions where 0 <= t <= 1.
    %translation:
    Bullwinkle: Nuttin' up mah sleeve!
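
A worked instance of the parameterization above (example numbers, not from the thread):

```latex
\begin{aligned}
x(t) &= x_0(1-t) + x_1 t, \qquad y(t) = y_0(1-t) + y_1 t\\
\text{e.g.\ from } (0,0) \text{ to } (10,4) \text{ with } t &= 0,\ \tfrac14,\ \tfrac12,\ \tfrac34,\ 1:\\
(x,y) &= (0,0),\ (2.5,1),\ (5,2),\ (7.5,3),\ (10,4)
\end{aligned}
```

Each step moves the same fraction of the total distance, so the motion stays on the straight line even when the x and y differences are unequal.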

  • Best way to update a table with disinct values

    Hi, i would really appreciate some advise:
    I need to regularly perform a task where I update one table with all the new data that has been entered in another table. I can't perform a complete insert, as this would create duplicate data every time it runs, so the only way I can think of is using cursors, as per the script below:
    CREATE OR REPLACE PROCEDURE update_new_mem IS
      tmpVar NUMBER;
      CURSOR c_mem IS
        SELECT member_name, member_id
        FROM gym.members;
      crec c_mem%ROWTYPE;
    BEGIN
      OPEN c_mem;
      LOOP
        FETCH c_mem INTO crec;
        EXIT WHEN c_mem%NOTFOUND;
        BEGIN
          UPDATE gym.lifts
          SET name = crec.member_name
          WHERE member_id = crec.member_id;
        EXCEPTION
          WHEN NO_DATA_FOUND THEN NULL;
        END;
        IF SQL%NOTFOUND THEN
          INSERT INTO gym.lifts (name, member_id)
          VALUES (crec.member_name, crec.member_id);
        END IF;
      END LOOP;
      CLOSE c_mem;
    END update_new_mem;
    This method works but is there an easier (faster) way to update another table with new data only?
    Many thanks

    >
    This method works but is there an easier (faster) way to update another table with new data only?
    >
    Almost anything would be better than that slow-by-slow loop processing.
    You don't need a procedure; you should just use MERGE for that. See the examples in the MERGE section of the SQL Language doc:
    http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9016.htm
    MERGE INTO bonuses D
       USING (SELECT employee_id, salary, department_id FROM employees
       WHERE department_id = 80) S
       ON (D.employee_id = S.employee_id)
       WHEN MATCHED THEN UPDATE SET D.bonus = D.bonus + S.salary*.01
         DELETE WHERE (S.salary > 8000)
       WHEN NOT MATCHED THEN INSERT (D.employee_id, D.bonus)
         VALUES (S.employee_id, S.salary*.01)
         WHERE (S.salary <= 8000);
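
    Adapted to the gym.members/gym.lifts case from the question above (a sketch; assumes member_id identifies rows in both tables):

    ```sql
    MERGE INTO gym.lifts d
    USING (SELECT member_id, member_name FROM gym.members) s
    ON (d.member_id = s.member_id)
    WHEN MATCHED THEN
      UPDATE SET d.name = s.member_name
    WHEN NOT MATCHED THEN
      INSERT (d.name, d.member_id)
      VALUES (s.member_name, s.member_id);
    ```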

  • Best way to create an array of bound values (to the resourceManager for example)

    What's the best practice for creating an array (array collection) where each element is bound to another value?
    An example would be an array of error strings that are used by an application.  Each string needs to change as the resource manager changes.
    Passing a list of strings to a list control is another example.  These should get localized as well.

    hobby1 wrote:
    Crossrulz, I was planning on handling it like this: 
    I have a total of 17 element I need to build an array from.  Each case would include another element to be written to the array.  Is this the correct way to handle this?
    That's one way to do it. Not nearly as efficient as using the Build Array, but it will work.

  • Best way to select match on collection of values

    Suppose two tables that are related by a third:
    TABLE_ONE
         one_id NUMBER
    TABLE_TWO
         two_id NUMBER
    TABLE_THREE
         one_id
         two_id
    one_id and two_id are primary keys in their respective tables
    all combinations of one_id and two_id in TABLE_THREE are guaranteed to be unique, so there will be either zero or one set of records correlating some exact collection of one_id values with the same two_id.
    suppose TABLE_ONE has records with the following values for one_id:
    1,3,5,7
    suppose TABLE_TWO has records with the following values for two_id:
    2,4,6
    suppose TABLE_THREE has records with the following pairs of values for one_id,two_id:
    1,2
    3,2
    3,4
    5,4
    1,6
    3,6
    5,6
    7,6
    finally, suppose I need to find the value of two_id (if there is one) that EXACTLY correlates with exactly two values of one_id: 3 and 5.
    select two_id
    from table_three
    where one_id in (3,5)
    group by two_id
    won't work:
    * it matches 2, because a match with only the 3,2 record is enough to satisfy the condition
    * it matches 4 because it should, but...
    * it also matches 6, even though 6 has more records.
    in other words, I need a stronger condition than "in" that requires the collection of records in each two_id group match the specified collection EXACTLY (but ideally, in an order-agnostic way, so a match for 3 and 5 would be the same as a match for 5 and 3).
    to give a fictional, but illustrative example...
    select two_id
    from table_three
    where one_id IS (3,5)
    group by two_id

    First a little set up
    SQL> SELECT two_id FROM table_three where one_id = 3;
        TWO_ID
             3
         13545
    SQL> INSERT INTO table_three VALUES(5,13545);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> SELECT COUNT(*) FROM table_three;
      COUNT(*)
        108376
    SQL> SELECT COUNT(*) FROM (SELECT DISTINCT one_id, two_id FROM table_three);
      COUNT(*)
        108376
    Now, I have 108,376 distinct rows, and only two_id 13545 has both 3 and 5 in one_id.
    Andrew's version, which can be simplified (Andrew: You're not usually so verbose :-) )
    SQL> SELECT two_id
      2  FROM table_three
      3  WHERE one_id IN (3,5)
      4  GROUP BY two_id
      5  HAVING COUNT(*) = 2;
    Statistics
              0  recursive calls
              0  db block gets
            113  consistent gets
              0  physical reads
              0  redo size
            491  bytes sent via SQL*Net to client
            651  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
          1  rows processed
    Now, 270432:
    SQL> SELECT tt1.two_id
      2  FROM   table_three tt1, table_three tt2
      3  WHERE  tt1.two_id = tt2.two_id AND
      4         tt1.one_id = 3 AND
      5         tt2.one_id = 5 AND
      6         NOT EXISTS (SELECT NULL
      7                     FROM   table_three   tt3
      8                     WHERE  tt3.two_id  = tt1.two_id AND
      9                            tt3.one_id <> tt1.one_id AND
    10                            tt3.one_id <> tt2.one_id);
    Statistics
              0  recursive calls
              0  db block gets
            242  consistent gets
              0  physical reads
              0  redo size
            274  bytes sent via SQL*Net to client
            456  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
          0  rows processed
    Note: 0 rows processed.
    Now David's first query:
    SQL> SELECT two_id
      2  FROM table_three
      3  GROUP BY two_id
      4  HAVING COUNT(*)    = 2 And
      5         MAX(one_id) = 5 And
      6         MIN(one_id) = 3;
    Statistics
              0  recursive calls
              0  db block gets
            113  consistent gets
              0  physical reads
              0  redo size
            274  bytes sent via SQL*Net to client
            456  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
          0  rows processed
    As efficient as Andrew's, but again, no rows processed.
    David's Non-set based method:
    SQL> SELECT two_id
      2  FROM table_three t3
      3  WHERE one_id = 3 And
      4        EXISTS (SELECT 1 FROM table_three t3_x
      5                WHERE  t3_x.two_id = t3.two_id And
      6                       t3_x.one_id = 5) And
      7        NOT EXISTS (SELECT 1 FROM table_three t3_x
      8                    WHERE  t3_x.two_id = t3.two_id And
      9                           t3_x.one_id NOT IN (3,5));
    Statistics
              0  recursive calls
              0  db block gets
            265  consistent gets
              0  physical reads
              0  redo size
            274  bytes sent via SQL*Net to client
            456  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          0  rows processed
    Now, my alternative:
    SQL> SELECT two_id FROM table_three
      2  WHERE one_id = 3
      3  INTERSECT
      4  SELECT two_id FROM table_three
      5  WHERE one_id = 5;
    Statistics
              0  recursive calls
              0  db block gets
            226  consistent gets
              0  physical reads
              0  redo size
            491  bytes sent via SQL*Net to client
            651  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
          1  rows processed
    So, not as efficient as Andrew's, but at least correct. Does an index help Andrew or me?
    SQL> CREATE INDEX t3_one_id ON table_three(one_id);
    Index created.
    Now, Andrew, then me.
    SQL> SELECT two_id
      2  FROM table_three
      3  WHERE one_id IN (3,5)
      4  GROUP BY two_id
      5  HAVING COUNT(*) = 2;
    Statistics
              0  recursive calls
              0  db block gets
              9  consistent gets
              0  physical reads
              0  redo size
            491  bytes sent via SQL*Net to client
            651  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> SELECT two_id FROM table_three
      2  WHERE one_id = 3
      3  INTERSECT
      4  SELECT two_id FROM table_three
      5  WHERE one_id = 5;
    Statistics
              0  recursive calls
              0  db block gets
              9  consistent gets
              0  physical reads
              0  redo size
            491  bytes sent via SQL*Net to client
            651  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
          1  rows processed
    With an index, equivalent gets, but I still do two sorts.
    John

  • Best way to add additional parent child attribute values.

    I have a parent-child attribute in my dimension. I am currently displaying the correct ID value, as the business wants, so now they can see the rollup of the ID (intOrgNodeID) values. They would also like to see the same rollup of the name (vcharOrgNodeName) for this ID. However, they do not want it concatenated; they want to be able to see them separately.
    You cannot create two parent-child attributes in one dimension, so I am not sure if there is some simple trick to make this work?
    It seems like there should be a simple trick for this.
    My dimension table looks something like this
    intdimOrgNodeID int Key (surrogate key)
    intOrgNodeID int (Actual ID)
    intDimParentOrgNodeID
    vcharOrgNodeName
    In the properties I have set this below.
    KeyColumns  = tbldimOrgNode.intDimParentOrgNodeID
    NameColumn = tbldimOrgNode.intOrgNodeID
    Ken Craig

    Hi Ken,
    Thank you for your question. 
    I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay is expected during the transfer. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Best way to reuse a layout with different values?

    I want to reuse the code of the <div id="placeInfo"> on the next page (place.xhtml). I simplified its code for this question.
    Of course I could use <ui:include> for <div id="placeInfo">, but on the next page I would need e.g. #{myBean.selectedPlace.name} instead of #{place.name}.
    Any smart idea how to do this efficiently?
    <ui:repeat value="#{myBean.places}" var="place">
        <p:commandLink action="place?faces-redirect=true" ajax="false">
            <f:setPropertyActionListener value="#{place}" target="#{myBean.selectedPlace}" />
            <div id="placeInfo">
                #{place.name}
                #{place.openHours}
            </div>
        </p:commandLink>
    </ui:repeat>

    Research composite components. It's a little cumbersome to do these things in JSF, to be honest. I just copy-paste the XHTML content most of the time; it's not as if turning this into a reusable component will save you time.
