Analytic functions to eliminate rows...

I wish to write a query using analytic functions; please help.
I have the following data set, from which I wish to eliminate the rows having Rnk greater than or equal to the Rnk of the row where TypCd = 'I'.
If a CollId group does not have a row with TypCd = 'I', then all rows should be selected.
If the TypCd = 'I' row has the least Rnk in its CollId group, then all rows of that group have to be eliminated.
Below is the sample dataset -
CollId TypCd Rnk
A0001 D 1
A0001 R 1
A0001 I 2
A0001 F 3
A0002 I 1
A0002 D 1
A0002 R 2
A0003 C 1
A0003 F 4
The query should return the following output -
A0001 D 1
A0001 R 1
A0003 C 1
A0003 F 4
I need to get this done using analytic functions only. Any ideas, please?

Hi,
SQL> select * from coll_gr
  2  /
COLLID     TY        RNK
A0001      D           1
A0001      R           1
A0001      I           2
A0001      F           3
A0002      I           1
A0002      D           1
A0002      R           2
A0003      C           1
A0003      F           4
9 rows selected.
SQL> select collid, typecd, rnk
  2  from (
  3  select collid, typecd,
  4  first_value((case when typecd = 'I' then rnk else 100000 end)) over(partition by collid
  5  order by (case when typecd = 'I' then '0' else typeCd end)) mrnk, rnk
  6  from coll_gr
  7  )
  8  where rnk < mrnk
  9  /
COLLID     TY        RNK
A0001      D           1
A0001      R           1
A0003      C           1
A0003      F           4

Rgds.
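An alternative formulation (a sketch, not from the thread) avoids the ORDER BY trick by taking the minimum 'I' rank per CollId group with MIN(CASE ...) OVER; the NVL sentinel plays the same role as the 100000 above:

select collid, typecd, rnk
from (
      select collid, typecd, rnk,
             -- rank of the 'I' row in this group; NULL when the group has none
             min(case when typecd = 'I' then rnk end)
                 over (partition by collid) irnk
      from coll_gr
     )
where rnk < nvl(irnk, 100000)
/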

Similar Messages

  • Analytic function to count rows based on Special criteria

    Hi
    I have the following query with an analytic function, but it gives wrong results in the last column, COUNT.
    Please help me achieve the required result; I need to change the way the last column is selected.
    1) I am getting the output ordered by the b.sequence_no column. This is a must.
    2) COUNT column:
    I don't want the total count based on the THOR column, hence there is no point in grouping by that column.
    The actual requirement for COUNT is:
    2a - If the THOR and LOC combination changes to a new value in the next row, then COUNT = 1
    (in other words, if it is different from the following row).
    2b - If the values of THOR and LOC repeat in the following row, then the count should be the total of all those same-value rows until the rows become different.
    (In case 2b - where the rows are the same - I also only want to show these same rows once. This is shown in the required output below.)
    My present query:
    select    r.name REGION ,
              p.name PT,
              do.name DELOFF,
              ro.name ROUTE,
    decode(th.thorfare_name,'OSIUNKNOWN',NULL,th.thorfare_name)
               THOR,
             l.name LOC ,
              b.sequence_no SEQ,
               CASE WHEN th.thorfare_name = LAG (th.thorfare_name)
                OVER (order by b.sequence_no)
                or th.thorfare_name = LEAD (th.thorfare_name)
                OVER (order by b.sequence_no)
                THEN  COUNT(b.sequence_no) OVER (partition by r.name,th.thorfare_name,l.name order BY b.sequence_no)
              ELSE 1
              END COUNT
    from   t_regions r,t_post_towns p,t_delivery_offices do, t_routes ro, t_counties c,t_head_offices ho,
    t_buildings b,t_thoroughfares th,t_localities l
    where   th.thorfare_id = b.thorfare_id
    and    nvl(b.invalid,'N')='N'
    and    b.route_id=ro.route_id(+)
    and    b.locality_id =l.locality_id(+)
    and    ro.delivery_office_id=do.delivery_office_id(+)
    and    do.post_town_id = p.post_town_id(+)
    and    p.ho_id=ho.ho_id(+)
    and     ho.county_id = c.county_id(+)
    and     c.region_id = r.region_id(+)
    and    r.name='NAAS'
    and    do.DELIVERY_OFFICE_id= &&DELIVERY_OFFICE_id
    and    ro.route_id=3405
    group by r.name,p.name,do.name,ro.name,th.thorfare_name,l.name,b.sequence_no
    ORDER BY ro.name,b.sequence_no;
    My incorrect output [part of data]:
    REGION     PT DELOFF ROUTE     THOR LOC SEQ COUNT
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 1 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 2 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 PRIMHILL CEL 4 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 5 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 THEGROVE CEL 2 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 7 3
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 8 4
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 9 5
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 10 6
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 11 7
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 12 8
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 15 2
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 19 3
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 24 4
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 29 5
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 34 6
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 39 7
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 42 2
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 PRIMHILL CEL 43 2
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 PRIMHILL CEL 44 3
    My required output [part of data] - please compare with the above:
    REGION     PT DELOFF ROUTE     THOR LOC COUNT
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 PRIMHILL CEL 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 THEGROVE CEL 1
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 NEWTOWNRD CEL 6
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 DUBLINRD CEL 7
    NAAS     NAAS MAYNOOTH     MAYNOOTHR010 PRIMHILL CEL 2
    NOTE: Count = 1 is coming correctly.
    But where there are identical rows and I want to take the total count of them, I am not getting it.
    Please help.
    Thanks

    Nicosa wrote:
    Hi,
    Can you give us some sample data (create table + insert orders) to play with?
    Considering your output, I'm not even sure you need an analytic count.
    Yes, sure.
    I am describing the query again here with 3 tables now, to make this easier to understand.
    Given below are the create table statements and insert statements for these 3 tables.
    These tables are BUILDINGSV, THORV and LOCV.
    CREATE TABLE BUILDINGSV (
      BUILDING_ID                  NUMBER(10)       NOT NULL,
      INVALID                      VARCHAR2(1 BYTE),
      ROUTE_ID                     NUMBER(10),
      LOCALITY_ID                  NUMBER(10),
      SEQUENCE_NO                  NUMBER(4),
      THORFARE_ID                  NUMBER(10) NOT NULL);
    CREATE TABLE THORV (
      THORFARE_ID            NUMBER(10)             NOT NULL,
      THORFARE_NAME          VARCHAR2(40 BYTE)      NOT NULL);
    CREATE TABLE LOCV (
      LOCALITY_ID            NUMBER(10)             NOT NULL,
      NAME                   VARCHAR2(40 BYTE)      NOT NULL);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002372, 'N', 3405, 37382613, 5, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002363, 'N', 3405, 37382613, 57, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002362, 'N', 3405, 37382613, 56, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002360, 'N', 3405, 37382613, 52, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002358, 'N', 3405, 37382613, 1, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002240, 'N', 3405, 37382613, 6, 9002284);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002229, 'N', 3405, 37382613, 66, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002228, 'N', 3405, 37382613, 65, 35291872);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002226, 'N', 3405, 37382613, 62, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002222, 'N', 3405, 37382613, 43, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002217, 'N', 3405, 37382613, 125, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002221, 'N', 3405, 37382613, 58, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002214, 'N', 3405, 37382613, 128, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33363182, 'N', 3405, 37382613, 114, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33363185, 'N', 3405, 37382613, 115, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002371, 'N', 3405, 37382613, 2, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003329, 'N', 3405, 37382613, 415, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002359, 'N', 3405, 37382613, 15, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002224, 'N', 3405, 37382613, 61, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003318, 'N', 3405, 37382613, 411, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003326, 'N', 3405, 37382613, 412, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003327, 'N', 3405, 37382613, 413, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003328, 'N', 3405, 37382613, 414, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003330, 'N', 3405, 37382613, 416, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003331, 'N', 3405, 37382613, 417, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27003332, 'N', 3405, 37382613, 410, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27004795, 'N', 3405, 37382613, 514, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (27004807, 'N', 3405, 37382613, 515, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (59002227, 'N', 3405, 37382613, 64, 35291872);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33230805, 'N', 3405, 37382613, 44, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33231027, 'N', 3405, 37382613, 7, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33231058, 'N', 3405, 37382613, 9, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33231078, 'N', 3405, 37382613, 10, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33231087, 'N', 3405, 37382613, 11, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33231093, 'N', 3405, 37382613, 12, 9002375);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (33229890, 'N', 3405, 37382613, 55, 9002364);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561996, 'N', 3405, 34224751, 544, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561997, 'N', 3405, 34224751, 543, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561998, 'N', 3405, 34224751, 555, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562000, 'N', 3405, 34224751, 541, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562001, 'N', 3405, 34224751, 538, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562028, 'N', 3405, 35417256, 525, 0);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562031, 'N', 3405, 35417256, 518, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562032, 'N', 3405, 35417256, 519, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562033, 'N', 3405, 35417256, 523, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561939, 'N', 3405, 34224751, 551, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561940, 'N', 3405, 34224751, 552, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561941, 'N', 3405, 34224751, 553, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561942, 'N', 3405, 35417256, 536, 0);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561943, 'N', 3405, 35417256, 537, 0);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561970, 'N', 3405, 35417256, 522, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561972, 'N', 3405, 35417256, 527, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561974, 'N', 3405, 35417256, 530, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561975, 'N', 3405, 35417256, 531, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561980, 'N', 3405, 34224751, 575, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561981, 'N', 3405, 34224751, 574, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561983, 'N', 3405, 34224751, 571, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561984, 'N', 3405, 34224751, 570, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561985, 'N', 3405, 34224751, 568, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561986, 'N', 3405, 34224751, 567, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561987, 'N', 3405, 34224751, 566, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561989, 'N', 3405, 34224751, 563, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561990, 'N', 3405, 34224751, 562, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561991, 'N', 3405, 34224751, 560, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561992, 'N', 3405, 34224751, 559, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561993, 'N', 3405, 34224751, 558, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561994, 'N', 3405, 34224751, 548, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80561995, 'N', 3405, 34224751, 546, 35417360);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562160, 'N', 3405, 37382613, 139, 35291878);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562161, 'N', 3405, 37382613, 140, 35291878);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562162, 'N', 3405, 37382613, 141, 35291878);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562163, 'N', 3405, 37382613, 142, 35291878);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562164, 'N', 3405, 37382613, 143, 35291878);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562165, 'N', 3405, 37382613, 145, 35291878);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562166, 'N', 3405, 37382613, 100, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562167, 'N', 3405, 37382613, 102, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562171, 'N', 3405, 37382613, 107, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562172, 'N', 3405, 37382613, 108, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562174, 'N', 3405, 37382613, 110, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562175, 'N', 3405, 37382613, 111, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562176, 'N', 3405, 37382613, 112, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562177, 'N', 3405, 37382613, 113, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562182, 'N', 3405, 37382613, 123, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562183, 'N', 3405, 37382613, 121, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562184, 'N', 3405, 37382613, 120, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562185, 'N', 3405, 37382613, 118, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562186, 'N', 3405, 37382613, 117, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562187, 'N', 3405, 37382613, 116, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562189, 'N', 3405, 37382613, 95, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562190, 'N', 3405, 37382613, 94, 35291883);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562213, 'N', 3405, 37382613, 89, 35291872);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (80562240, 'N', 3405, 35417256, 516, 35417271);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329559, 'N', 3405, 35329152, 443, 35329551);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329560, 'N', 3405, 35329152, 444, 35329551);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329562, 'N', 3405, 35329152, 446, 35329551);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329109, 'N', 3405, 35329152, 433, 35329181);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329169, 'N', 3405, 35329152, 434, 35329181);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329557, 'N', 3405, 35329152, 441, 35329551);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329558, 'N', 3405, 35329152, 442, 35329551);
    Insert into BUILDINGSV
       (BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
    Values
       (35329191, 'N', 3405, 35329152, 436, 35329181);
    COMMIT;
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (0, 'OSIUNKNOWN');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (9002284, 'THE GROVE');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (9002364, 'DUBLIN ROAD');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (9002375, 'NEWTOWN ROAD');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35291872, 'HAZELHATCH ROAD');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35291878, 'SIMMONSTOWN PARK');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35291883, 'PRIMROSE HILL');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35329181, 'THE COPSE');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35329213, 'THE COURT');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35329529, 'THE CRESCENT');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35329551, 'THE LAWNS');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35329580, 'THE DRIVE');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35417271, 'TEMPLEMILLS COTTAGES');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (35417360, 'CHELMSFORD');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (36500023, 'THE CLOSE');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (36500101, 'THE GREEN');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (37375569, 'THE DOWNS');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (37375595, 'THE PARK');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (37375754, 'THE AVENUE');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (37375781, 'THE VIEW');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (37376046, 'THE CRESCENT');
    Insert into THORV
       (THORFARE_ID, THORFARE_NAME)
    Values
       (37376048, 'THE GLADE');
    COMMIT;
    Insert into LOCV
       (LOCALITY_ID, NAME)
    Values
       (34224751, 'SIMMONSTOWN');
    Insert into LOCV
       (LOCALITY_ID, NAME)
    Values
       (35417256, 'TEMPLEMILLS');
    Insert into LOCV
       (LOCALITY_ID, NAME)
    Values
       (35329152, 'TEMPLE MANOR');
    Insert into LOCV
       (LOCALITY_ID, NAME)
    Values
       (37382613, 'CELBRIDGE');
    Insert into LOCV
       (LOCALITY_ID, NAME)
    Values
       (37375570, 'SAINT WOLSTAN''S ABBEY');
    COMMIT;
    ------------------------------------------------------------------------------
    Now the query with the wrong result:
    select decode(th.thorfare_name,'OSIUNKNOWN',NULL,th.thorfare_name)
               THOR,
                l.name LOC,
                b.sequence_no SEQ,
               CASE WHEN th.thorfare_name = LAG (th.thorfare_name)
                OVER (order by b.sequence_no)
                or th.thorfare_name = LEAD (th.thorfare_name)
                OVER (order by b.sequence_no)
                THEN  COUNT(b.sequence_no) OVER (partition by th.thorfare_name,l.name order BY b.sequence_no)
              ELSE 1
              END COUNT
    from BUILDINGSV b, THORV th, LOCV l
    where   th.thorfare_id = b.thorfare_id
    and    nvl(b.invalid,'N')='N'
    and    b.route_id=3405
    and    b.locality_id =l.locality_id(+)
    order by b.sequence_no;
    The query result - WRONG (only first few lines):
    THOR                        LOC        SEQ    COUNT
    DUBLIN ROAD     CELBRIDGE    1     1
    NEWTOWN ROAD     CELBRIDGE        2     1
    NEWTOWN ROAD     CELBRIDGE        5     2
    THE GROVE     CELBRIDGE        6     1
    NEWTOWN ROAD     CELBRIDGE        7     3
    NEWTOWN ROAD     CELBRIDGE        9     4
    NEWTOWN ROAD     CELBRIDGE       10     5
    NEWTOWN ROAD     CELBRIDGE       11     6
    NEWTOWN ROAD     CELBRIDGE       12     7
    DUBLIN ROAD     CELBRIDGE       15     1
    PRIMROSE HILL     CELBRIDGE       43     1
    PRIMROSE HILL     CELBRIDGE       44     2
    DUBLIN ROAD     CELBRIDGE       52     3
    DUBLIN ROAD     CELBRIDGE       55     4
    DUBLIN ROAD     CELBRIDGE       56     5
    DUBLIN ROAD     CELBRIDGE       57     6
    DUBLIN ROAD     CELBRIDGE       58     7
    PRIMROSE HILL     CELBRIDGE       61     3
    PRIMROSE HILL     CELBRIDGE       62     4
    HAZELHATCH ROAD     CELBRIDGE       64     1
    HAZELHATCH ROAD     CELBRIDGE       65     2
    The query result - EXPECTED (only first few lines):
    THOR                     LOC     COUNT
    DUBLIN ROAD     CELBRIDGE      1
    NEWTOWN ROAD     CELBRIDGE      2
    THE GROVE     CELBRIDGE      1
    NEWTOWN ROAD     CELBRIDGE      5
    DUBLIN ROAD     CELBRIDGE      1
    PRIMROSE HILL     CELBRIDGE      2
    DUBLIN ROAD     CELBRIDGE      5
    PRIMROSE HILL     CELBRIDGE      2
    HAZELHATCH ROAD     CELBRIDGE      2
    Please note: in the expected result, I only need 1 row, but I need to show the total count of rows until the names change.
    So the issues are:
    1) The COUNT column values are wrong in my query.
    2) I don't want to repeat the same rows (please see the EXPECTED output and compare it against the original).
    3) I want the output exactly as in the EXPECTED output, as I don't want to group by THOR name (e.g. I don't want one count for all DUBLIN ROAD rows; I want to examine the next row - if the THOR/LOC combination is different in the next row then COUNT = 1, else COUNT = the number of rows for that THOR/LOC combination until the combination changes; so multiple rows with the same value need to be shown as 1 row with the total count).
    To explain this in more detail:
    I only need 1 row per run of identical THOR/LOC names, with the count shown against that 1 row (i.e. COUNT = how many rows have the same THOR/LOC combination until the combination changes value).
    Then repeat the process until all rows are finished.
    If the following row is a different THOR/LOC combination - i.e. there is no run of same-value rows - then the count for that row is 1.
    Hope this is clear.
    Is this doable?
    Thanks in advance.
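    An aside not from the thread: this consecutive-run counting is the classic gaps-and-islands problem, commonly solved with the "Tabibitosan" method - the difference between two ROW_NUMBERs stays constant within each unbroken run of identical THOR/LOC values. A sketch against the posted tables (the OSIUNKNOWN decode is omitted for brevity):
    select thor, loc, count(*) cnt
    from (
          select th.thorfare_name thor,
                 l.name loc,
                 b.sequence_no,
                 -- constant within each unbroken run of the same THOR/LOC
                 row_number() over (order by b.sequence_no)
                 - row_number() over (partition by th.thorfare_name, l.name
                                      order by b.sequence_no) grp
          from buildingsv b, thorv th, locv l
          where th.thorfare_id = b.thorfare_id
          and   nvl(b.invalid, 'N') = 'N'
          and   b.route_id = 3405
          and   b.locality_id = l.locality_id(+)
         )
    group by thor, loc, grp
    order by min(sequence_no);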

  • Eliminate rows in a window of x seconds

    Hi,
    Please consider the following case:
    I have a table with millions of rows.
    I need to eliminate from the statement results each set of rows (two or more) which exists within a window of 60 seconds.
    For example:
    create table test
    (id number,
    b date);
    insert into test values(1,to_date('01/01/2012 16:00:10','dd/mm/yyyy hh24:mi:ss'));
    insert into test values(2,to_date('01/01/2012 16:00:20','dd/mm/yyyy hh24:mi:ss'));
    insert into test values(3,to_date('01/01/2012 16:03:10','dd/mm/yyyy hh24:mi:ss'));
    insert into test values(4,to_date('01/01/2012 16:45:50','dd/mm/yyyy hh24:mi:ss'));
    insert into test values(5,to_date('01/01/2012 16:46:20','dd/mm/yyyy hh24:mi:ss'));
    insert into test values(6,to_date('01/01/2012 18:59:50','dd/mm/yyyy hh24:mi:ss'));
    commit;
    SQL> select id , to_char(b, '/dd/mm/yyyy hh24:mi:ss') from test;
            ID TO_CHAR(B,'/DD/MM/YY
             1 /01/01/2012 16:00:10
             2 /01/01/2012 16:00:20
             3 /01/01/2012 16:03:10
             4 /01/01/2012 16:45:50
             5 /01/01/2012 16:46:20
             6 /01/01/2012 18:59:50
    Based on the logic above, only ids 3 and 6 should be returned by the statement.
    I succeeded in eliminating the rows which are in the very same minute (ids 1 + 2), but not those within a window of 60 seconds,
    such as ids 4 + 5.
    Please advise.

    Like this?
    SQL> ed
    Wrote file afiedt.buf
      1  select id, b
      2  from (
      3        select id, b
      4              ,case when abs(b-lag(b) over (order by b)) < (1/(24*60))
      5                      or abs(b-lead(b) over (order by b)) < (1/(24*60))
      6               then 0
      7               else 1
      8               end as chk
      9        from test
    10       )
    11* where chk = 1
    SQL> /
            ID B
             3 01-JAN-2012 16:03:10
             6 01-JAN-2012 18:59:50
    P.S. You may want to have the condition as "<=" rather than "<", depending on what you mean by "in" the window of 60 seconds.
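    Written out in full, the inclusive variant of the same query looks like this (a sketch; 60/86400 expresses 60 seconds as a fraction of a day, the same value as 1/(24*60)):
    select id, b
    from (
          select id, b
                ,case when abs(b - lag(b)  over (order by b)) <= 60/86400
                        or abs(b - lead(b) over (order by b)) <= 60/86400
                 then 0
                 else 1
                 end as chk
          from test
         )
    where chk = 1;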

  • Analytical functions - ROLLUP and CUBE

    Hi,
    Can anyone give me a clear explanation of the two analytic functions ROLLUP and CUBE? I'm finding them a bit confusing.
    Thanks in advance,
    Sudarshan.S

    http://asktom.oracle.com/pls/ask/f?p=4950:8:3940765887821547038::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:1512805503041
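    In brief (a summary, not taken from the linked page): ROLLUP and CUBE are strictly GROUP BY extensions rather than analytic functions. ROLLUP(a, b) adds subtotals along the hierarchy - (a, b), (a), and a grand total - while CUBE(a, b) adds every combination, including (b) alone. A sketch against a hypothetical EMPLOYEES table:
    -- ROLLUP: groups (dept, job), (dept), () - hierarchical subtotals
    select department_id, job_id, sum(salary) total_sal
    from employees
    group by rollup (department_id, job_id);
    -- CUBE: additionally produces the (job) subtotal - all combinations
    select department_id, job_id, sum(salary) total_sal
    from employees
    group by cube (department_id, job_id);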

  • Analytic function problem

    Hi,
    I have a problem using analytic function: when I execute this query
    SELECT TSIUPSITE, TSIUPCEAN , TSIUPDATE, sum(TSIUPCA) TSIUPCA, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
    sum(TSIUPQTE) TSIUPQTE,sum(TSIUPQTEP) TSIUPQTEP, TSIUPMDIU,TSIUPMDar,
    sum(TSIUPCRIU) TSIUPCRIU,sum(TSIUPCRAR) TSIUPCRAR, trunc(TSIUPDCRE) TSIUPDCRE ,trunc(TSIUPDMAJ) TSIUPDMAJ ,
    TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
    TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, sum(TSIUPMHT) TSIUPMHT, 0 vtanfisc,
    TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV ,count(*) over (partition by TSIUPSITE,TSIUPCEAN,TSIUP_TRT ) CONTA_ARTICOLO
    FROM TST_FLIISR_VTEREMART
    WHERE 1=1 --TSIUP_TRT = 1
    AND TSIUPDATE=to_date('27082012','ddmmyyyy')
    and TSIUP_NTRX =172
    AND TSIUPSITE = 10025
    AND TSIUPCEAN = '8012452018825'
    GROUP BY TSIUPSITE, TSIUPCEAN , TSIUPDATE, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
    TSIUPMDIU,TSIUPMDar, trunc(TSIUPDCRE),trunc(TSIUPDMAJ),TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
    TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, 0,
    TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV
    ORDER BY TSIUPSITE,TSIUPDATE ;
    I have the error ORA-00979: not a GROUP BY expression, related to the TSIUP_TRT field; in fact, if I execute this one
    SELECT TSIUPSITE, TSIUPCEAN , TSIUPDATE, sum(TSIUPCA) TSIUPCA, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
    sum(TSIUPQTE) TSIUPQTE,sum(TSIUPQTEP) TSIUPQTEP, TSIUPMDIU,TSIUPMDar,
    sum(TSIUPCRIU) TSIUPCRIU,sum(TSIUPCRAR) TSIUPCRAR, trunc(TSIUPDCRE) TSIUPDCRE ,trunc(TSIUPDMAJ) TSIUPDMAJ ,
    TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
    TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, sum(TSIUPMHT) TSIUPMHT, 0 vtanfisc,
    TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV ,count(*) over (partition by TSIUPSITE,TSIUPCEAN ) CONTA_ARTICOLO
    FROM TST_FLIISR_VTEREMART
    WHERE 1=1 --TSIUP_TRT = 1
    AND TSIUPDATE=to_date('27082012','ddmmyyyy')
    and TSIUP_NTRX =172
    AND TSIUPSITE = 10025
    AND TSIUPCEAN = '8012452018825'
    GROUP BY TSIUPSITE, TSIUPCEAN , TSIUPDATE, TSIUPCTVA,TSIUPP4N,TSIUPPIEC,
    TSIUPMDIU,TSIUPMDar, trunc(TSIUPDCRE),trunc(TSIUPDMAJ),TSIUPUTIL,TSIUPTRT,TSIUPNERR,TSIUPMESS,
    TSIUPTMVT,TSIUPSMAN, TSIUPMOTIF, 0,
    TSIUPDATEVERIF,TSIUPNSEQ,TSIUPCINV
    ORDER BY TSIUPSITE,TSIUPDATE ;
    I have no problem. Now, the difference between TSIUPCEAN (or TSIUPSITE) and TSIUP_TRT is that TSIUP_TRT is not in the GROUP BY clause, but, to be honest, I don't know why I have this problem when using an analytic function.
    Thanks for help

    Hi,
    I think you are not using the analytic function properly.
    Analytic functions execute for each row, whereas GROUP BY works on groups of data.
    See the examples below for reference.
    Example 1:
    -- The query below displays the number of employees for each department. Since we used an analytic function, which is evaluated for each row, every row gets the employee count for its department id.
    SQL> SELECT e.department_id,count(*) OVER (PARTITION BY e.department_id) cnt_analytic
      2  FROM employees e
      3  WHERE e.department_id IN (10,20,30);
    DEPARTMENT_ID CNT_ANALYTIC
               10            1
               20            2
               20            2
               30            6
               30            6
               30            6
               30            6
               30            6
               30            6
    9 rows selected.
    Example 2:
    -- Since I have used a GROUP BY clause, I get only a single row for each department.
    SQL> SELECT e.department_id, count(*) cnt_group
      2  FROM employees e
      3  WHERE e.department_id IN (10,20,30)
      4  GROUP BY e.department_id;
    DEPARTMENT_ID  CNT_GROUP
               10          1
               20          2
               30          6
    Finally, what I'm trying to explain is: if you use an analytic function together with a GROUP BY clause, the query will not give a meaningful result set.
    See below
    SQL> SELECT e.department_id,count(*) OVER (PARTITION BY e.department_id) cnt_analytic, count(*) cnt_grp
      2  FROM employees e
      3  WHERE e.department_id IN (10,20,30)
      4  GROUP BY e.department_id;
    DEPARTMENT_ID CNT_ANALYTIC    CNT_GRP
               10            1          1
               20            1          2
               30            1          6
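    A common way around this ORA-00979 (a sketch, not confirmed as the poster's eventual fix) is to do the GROUP BY in an inline view and apply the analytic function in an outer query, where every inner column is freely available - if TSIUP_TRT must drive the partition, it simply has to be selected and grouped inside as well. Simplified to a few columns:
    select t.*,
           count(*) over (partition by tsiupsite, tsiupcean) conta_articolo
    from (
          select tsiupsite, tsiupcean, tsiupdate,
                 sum(tsiupca) tsiupca, sum(tsiupqte) tsiupqte
          from tst_fliisr_vteremart
          where tsiupdate = to_date('27082012', 'ddmmyyyy')
          group by tsiupsite, tsiupcean, tsiupdate
         ) t
    order by tsiupsite, tsiupdate;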

  • What to do if row chaining is found?

    Hello all,
    If I find row chaining in my table, what do I have to do?
    Also, in my database there is one table which contains 2,00,00,000 (20 million) records; is it advisable to partition this table for faster searching?
    And how do I check the performance of an Oracle 10g database? Since installing it, I have not been checking anything in the database.
    How do I check which patches are applied to the database?
    Can anybody give me basic guidance so that I can check whether my database works fine or not? I want to check its response time and everything performance-related; currently I am getting a very slow response from my database.

    If I find row chaining in my table, what do I have to do?
    In most cases chaining is unavoidable, especially when this involves tables
    with large columns such as LONGS, LOBs, etc. When you have a lot of chained
    rows in different tables and the average row length of these tables is not
    that large, then you might consider rebuilding the database with a larger
    blocksize.
    e.g.: You have a database with a 2K block size. Different tables have multiple
    large varchar columns with an average row length of more than 2K. Then this
    means that you will have a lot of chained rows because your block size is
    too small. Rebuilding the database with a larger block size can give you
    a significant performance benefit.
    Migration is caused by PCTFREE being set too low, there is not enough room in
    the block for updates. To avoid migration, all tables that are updated should
    have their PCTFREE set so that there is enough space within the block for updates.
    You need to increase PCTFREE to avoid migrated rows. If you leave more free
    space available in the block for updates, then the row will have more room to
    grow.
    SQL Script to eliminate row migration/chaining :
    Get the name of the table with migrated rows:
    ACCEPT table_name PROMPT 'Enter the name of the table with migrated rows: '
    -- Clean up from last execution
    set echo off
    DROP TABLE migrated_rows;
    DROP TABLE chained_rows;
    -- Create the CHAINED_ROWS table
    @.../rdbms/admin/utlchain.sql
    set echo on
    spool fix_mig
    -- List the chained and migrated rows
    ANALYZE TABLE &table_name LIST CHAINED ROWS;
    -- Copy the chained/migrated rows to another table
    create table migrated_rows as
    SELECT orig.*
    FROM &table_name orig, chained_rows cr
    WHERE orig.rowid = cr.head_rowid
    AND cr.table_name = upper('&table_name');
    -- Delete the chained/migrated rows from the original table
    DELETE FROM &table_name WHERE rowid IN (SELECT head_rowid FROM chained_rows);
    -- Copy the chained/migrated rows back into the original table
    INSERT INTO &table_name SELECT * FROM migrated_rows;
    spool off
    Also, in my database there is one table which contains 2,00,00,000 (20 million) records; is it advisable to partition this table for faster searching?
    download-uk.oracle.com/docs/cd/B19306_01/server.102/b14220/partconc.htm
    And how do I check the performance of an Oracle 10g database? Since installing it, I have not been checking anything in the database.
    Can anybody give me basic guidance so that I can check whether my database works fine or not? I want to check its response time and everything performance-related; currently I am getting a very slow response from my database.
    download-uk.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm
    Jafar
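    As an aside (not from the thread): once a table has been analyzed, the chained/migrated row count can also be read from the data dictionary; MY_TABLE below is a hypothetical name:
    -- CHAIN_CNT is populated by ANALYZE ... COMPUTE STATISTICS
    analyze table my_table compute statistics;
    select table_name, num_rows, chain_cnt
    from   user_tables
    where  chain_cnt > 0;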

  • Suppress invalid rows after drilldown

    Hi Experts
    I have a BEx query with row and column structures, where within the row structure I am using a hierarchy of (SEM-BCS) items; the columns are key figures. Within Analyzer, when I select 'add drilldown according to items' from the navigation menu, I would expect the query to expand to include only the relevant items within each row. Instead, every value of an item, including zero values, is added to each row, thus exploding the report unnecessarily.
    This occurs from both the filter and the row-specific right-click menu.
    Zero suppression is activated for the query for both rows and columns (Query properties -> Rows/Columns tab): Suppress zero = Active for both rows and columns.
    Zero suppression is also activated for the row and column key figure structure via the Display tab - 'Structure as Group: Only apply suppression when all elements are 0'. I turned this off too and tried without success.
    Any ideas to correct this?
    Thank you in advance.
    Eyal Feiler

    Eyal,
    Let me just say that I wish it worked like you are expecting it to work .....
    Now an explanation of my previous post:
    Suppose you have a query where the rows have InfoObject A and InfoObject B. Zero suppression eliminates rows where both are blank.
    Now let us say you have Structure S in the rows, with two elements E1 and E2, and you drill down by InfoObject B. In this case the system treats Structure S as equivalent to an InfoObject with two values, E1 and E2. This will never be blank and will never be eliminated by zero suppression. You might have defined element E1 as InfoObject B = values V1 and V2, but the drilldown by InfoObject B will show all the values of InfoObject B, because the definition of a structure element is only used to compute the value of that element and doesn't behave the same way as using the InfoObject. I guess this is because structure element definitions can be very complex, and it may not be possible to evaluate the relevancy of drilldown rows in all conditions.
    Hope it helps
    Thanks
    Vineet

  • Rows Elimination in Distributed Workbook

    Hello Forum.
    We are facing a problem with distributed workbooks: the users want to eliminate a few rows from the report.
    Is it possible to remove / eliminate rows from the distributed query?
    Regards
    Ashish

    Hello,
    thanks for the prompt reply,
    but let me clarify: we need to eliminate rows such as the rows that contain the query definition in the result set, the filtered-values info, and the rows displayed between the query heading and the output.
    Regards
    Ashish

  • OLAP differences between 10g and 9i

    Hi, is there any documentation on new / changed features in OLAP 10g vs. OLAP 9i? I'm currently installing all of the 10g products (now that the new BI Beans for 10g is out), but couldn't find a "what's new" for the OLAP side.
    Thanks!
    Scott

    There used to be an OLAP 10g data sheet up on OTN that provided the new features. I'll ping them and ask them to re-post it and the other missing OLAP 10g documentation and white papers.
    I pasted it below:
    Partitioned Variables
    The multidimensional engine provides direct support for partitioned variables. This support for partitioning presents many opportunities for both enhancing manageability and supporting large multidimensional data sets.
    Three partitioning methods are supported:
    • Range partitioning allows data to be partitioned based on a range of dimension members. For example, one partition might contain time dimension members that are less than '13', another that are less than '25', and so on.
    • List partitioning allows data to be partitioned based on a list of specific dimension members. For example, one partition might contain dimension members <'JAN02','FEB02','MAR02'> and another partition might contain members <'JAN03','FEB03','MAR03'>.
    • CONCAT partitioning partitions data according to the dimension members that belong to a CONCAT dimension.
    With each partitioning method, the multidimensional engine creates separate variables to store data. To the application, it appears that all data is stored in a single variable.
    Scalability is enhanced in a number of different ways:
    • Data can be partitioned across time, thus providing the ability to store more historical data in the analytic workspace without affecting performance or manageability.
    • Calculations can be easily limited to a subset of dimension members, or they can be parallelized. For example, aggregations, allocations and other calculations can be performed on time periods within a particular partition.
    • Data loading can be parallelized.
    • When partitioned by the logical model, for example, by level of summarization, the definition of the variable can be adjusted to account for changes in sparsity between detail data and summary data.
    • Disaster recovery tasks can be performed on subsets of data and can be parallelized.
    • Partitioned variables can be partitioned across different data files and disks to minimize I/O bottlenecks.
    Enhanced Storage Model
    The storage model is enhanced to support the placement of objects in the analytic workspaces into specific rows of the AW$ table. Objects can be further partitioned by segment size to allow for large objects. The AW$ table can then be partitioned across multiple data files.
    The obvious benefit of the enhanced storage model is that database administrators have complete control over how data is distributed across data files and can therefore optimize I/O for data and data access patterns.
    Multi-Writer Mode
    The multidimensional engine supports a multi-writer attachment mode, which allows an analytic workspace to be modified simultaneously by several sessions. In multi-writer mode, users can simultaneously modify the same analytic workspace in a controlled manner by specifying the attachment mode (read-only or read-write) for individual variables, relations, valuesets and dimensions.
    The MULTI attach mode provides the opportunity to parallelize any number of activities in the analytic workspace. Some examples follow:
    • Using separate simultaneous sessions to load data into different variables can parallelize data loading tasks. For example, different sessions could be used to load data into SALES and COST variables. When combined with partitioned variables, different sessions could load into each partition in parallel.
    • Separate sessions can be used to aggregate separate variables or partitions of a variable.
    • Separate sessions can be used to solve models, allocations and virtually any other calculation within the analytic workspace as long as the calculation is directed to different variables or partitions of a variable.
    Parallel Update
    The OLAP DML UPDATE command runs automatically in parallel on partitioned variables, thus optimizing performance of this command on servers with multiple processors. Significant improvements will be seen in cases where large volumes of data are updated (such as a data load or aggregation) and partitioned variables are used.
    Aggregation from Formulas
    Oracle OLAP 10g allows formulas to be used as a source of data to the AGGREGATE command. This eliminates the need to calculate and store data at the detail level, yet still retains the ability to aggregate to summary levels. The benefit is that the multidimensional engine presents large volumes of derived information from relatively little stored data.
    Optimizations to Composite Dimension Indexing
    New 64-bit B-Tree+ indexes and optimizations to the process of synchronizing composite dimensions to base dimensions support excellent query response times with very large composite dimensions (for example, composite dimensions in excess of 1 billion members).
    Certified with Real Application Clusters and Grid Computing
    Real Application Clusters and Oracle Grid Computing provide a database platform of virtually limitless computing capacity and scalability. The multidimensional engine and data types of the OLAP option, being part of the Oracle Database, have been tested with Real Application Clusters and Oracle Grid Computing. This provides Oracle OLAP the capability to support very large user communities and data sets.
    Wider Relational Filters to Multidimensional Data Types
    Oracle OLAP 10g optimizes a wider range of SQL predicates when selecting from multidimensional data types. This is accomplished by applying SQL filters before the data is converted to a row set using OLAP_TABLE. As a result, the risk of pushing large volumes of data through OLAP_TABLE is minimized and applications need not be as concerned with optimizing SQL for selecting from OLAP_TABLE. The net result is that a wider variety of SQL applications can be used with the OLAP option without special considerations.
    Support of SQL Model Clause
    Oracle Database 10g introduces OLAP-like calculations that are expressed with a SQL MODEL clause, which is similar to what the OLAP community commonly refers to as custom dimension members. A custom dimension member is a virtual member whose value is calculated at runtime.
    The SQL MODEL clause provides an additional method for defining certain types of calculations against multidimensional data types, and the SQL interface to multidimensional data types has been optimized for SQL models. Optimization occurs by having the multidimensional engine completely bypass OLAP_TABLE as data is being returned.
    As a result, the processing of SQL with the MODEL clause is highly efficient against multidimensional data types. In many cases, performance of MODEL with multidimensional data types exceeds that of the same SQL against relational tables. This provides SQL based applications with both new analytic features and performance advantages.
    Query Rewrite to Views over Multidimensional Data Types
    In Oracle Database 10g a new feature, query equivalence, allows query rewrite to be used with views. With query equivalence, the DBA indicates to the database what SQL could have been used to create the view even if the view was created in some other way. For example, if the application likes to emit SQL with SUM … GROUP BY but the view was created with entirely different SQL, the DBA could indicate that the view is equivalent to SUM … GROUP BY.
    This feature of the database is extremely useful with the OLAP option since SQL access is always through views. This provides the DBA and application with benefits similar to those of materialized views – simplified maintenance and improved query performance.
    Automatic Runtime Generation of Abstract Data Types
    Abstract data types are used by object technology of the Oracle Database to define the relational columns for data that is returned from a non-relational data source. In the case of the OLAP option, abstract data types describe data being selected from analytic workspaces in terms of relational columns.
    Previously it was a requirement that abstract data types be created as part of the administrative process of enabling analytic workspaces for query by SQL. To provide applications and database administrators with additional flexibility in the administration of SQL access to analytic workspaces, Oracle OLAP 10g supports automatic runtime generation of abstract data types as part of the query process.
    With the addition of this new feature, it is now possible to query analytic workspaces without requiring the DBA to predefine either abstract data types or views.

  • Data Selection for report based upon a 'Prompt Value'

    I want to report information in my report based upon a 'user input prompt value'
    for example:
    'Enter Shareholder Selection - A-Active, I-Inactive, B-Both Active and Inactive'
    if the user enters 'A', the report selects only active shareholders
    if the user enters 'I', the report selects only inactive shareholders
    if the user enters 'B' the report selects all shareholders, active and inactive
    the field in the database that this is based upon is their total share value:
    if this field is greater than zero (>0) they are considered 'active';
    if this field is equal to zero (=0) they are considered 'inactive'.
    I have tried creating some type of filter, but am not having any luck.
    I saw a few examples within the forums that I have tried without any luck; unfortunately most of the examples I've seen are based on only two choices.
    I'm sure I need to create some type of 'independent variable' but am not sure how to do that either.
    Any suggestions would be appreciated.
    Thanks.

    Hi Daryl,
    I tried this unsuccessfully in DESKI. We can't eliminate rows having empty measure values, or measures with 0 as values, using a table-level filter, as a filter can't filter rows based on the prompt value selection dynamically. Filters don't work on the 3 conditions Active, Inactive and Both, thus filters are of no use here.
    I tried this in WEBI, and it is working perfectly; you don't have to create any object in the universe, you can do it using the function UserResponse() at report level.
    Hence, if you are comfortable using WEBI for generating this report, then follow these steps.
    1. Create the report with the Name and Shares objects. It will display all shareholder names and the no. of shares they hold.
    2. Use the Status object in the query filter, use the condition "Equal To" and select the prompt. It contains Active, Inactive and Both as values.
    3. The report will display all shareholder names and no. of shares, like 45, 789, 0, 4562, where 0 is an inactive shareholder and all others are active shareholders.
    4. Create a variable using the formula:
    =If(UserResponse("Enter Status:")="Active" And [Shares]>0;[Shares];If(UserResponse("Enter Status:")="Inactive" And [Shares]<=0;[Shares];If(UserResponse("Enter Status:")="Both";[Shares])))
    5. Remove the Shares object from the report and put in the created variable with the names of the shareholders.
    6. Select Table -> Properties -> Display -> uncheck the option "Show Rows with Empty Measure Values".
    7. The report will display values correctly as per your prompt value selection.
    I hope this helps.
    Thanks,
    Pratik

  • Reduce chaining / increase density in partitioned table

    Hi,
    I have a large table, partitioned by date, with each partition containing ~10 M rows. Partitions older than a month are changed only very rarely. Because of the way the rows were created - an INSERT followed by several UPDATEs - there is a high degree of row chaining within this table. Older partitions of the table are mostly accessed via indexes.
    I want to eliminate (or nearly eliminate) row chaining, and increase the number of rows per block in the older partitions. After reading http://www.akadia.com/services/ora_chained_rows.html I tried this (halving the PCTFREE value):
    ALTER table MY_BIG_TABLE MOVE partition OLD_PARTITION
    PCTFREE 5
    PCTUSED 40
    STORAGE (INITIAL 20K
    NEXT 40K
    MINEXTENTS 2
    MAXEXTENTS 20
    PCTINCREASE 0);
    However, I found this had no effect whatsoever on the degree of chaining or the number of blocks used - I assume that no actual block rewriting took place.
    Is there a way to force the rewrite of the blocks, on a partition-by-partition basis, without doing an export/import or a truncate and re-INSERT? Are there any potentially useful options for row compression?
    any pointers appreciated!
    thanks,
    James

    V 10.1.0.4.0, while I remember.

  • Left Outer Join & IS NULL Not Working

    In a Data Flow Query element I am using a Left Outer Join to correlate two tables. In order to eliminate rows in which the Left Outer Join found a match with the Right table I am using IS NULL in the Where clause. Unfortunately IS NULL doesn't seem to return true when the NULL is caused by a lack of a match on a Left Outer Join. Has anybody been able to work around this?

    Put the IS NULL filter in a second query, after the join:
    A (source 1) --\
                    C (join query) --- D (filter condition IS NULL)
    B (source 2) --/
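
    For comparison, the equivalent pattern in plain SQL is the classic anti-join; a sketch with hypothetical tables l_src and r_src. The IS NULL test must be on a right-hand column that can never be NULL in a matched row - usually the join key itself - otherwise matched rows with a NULL in that column look like misses:

    -- keep only the left rows that found no match in r_src
    SELECT l.*
    FROM   l_src l
    LEFT OUTER JOIN r_src r ON r.id = l.id
    WHERE  r.id IS NULL;  -- NULL here only when the outer join found no match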

  • Best data structure for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and it then has lots of methods to make manipulating and working with the CSV file simpler: operations like copying a column, eliminating rows, performing some equation on all values in a certain column, etc. Also a method for printing back to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading one into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amount of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file which would have to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.

    How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive of loading into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
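
    If loading the whole file really is prohibitive, one middle ground - a sketch only, with hypothetical file names big.csv and out.csv - is to stream the file a line at a time and keep nothing but the current row in memory; column operations like "multiply column 2 by a constant" fit this model naturally:

    import java.io.*;

    // Minimal sketch: stream a large CSV one line at a time, transform one
    // column, and write the result out, so memory use stays constant
    // regardless of file size.
    public class CsvColumnScale {
        public static void main(String[] args) throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader("big.csv"));
                 PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("out.csv")))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // Naive split: fine for simple CSV, but breaks on quoted
                    // fields containing commas -- use a real CSV parser for those.
                    String[] cols = line.split(",", -1);
                    cols[2] = String.valueOf(Double.parseDouble(cols[2]) * 2);
                    out.println(String.join(",", cols));
                }
            }
        }
    }

    A RandomAccessFile then becomes unnecessary for sequential transformations; you only pay that cost when you genuinely need in-place random access.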

  • Issue with sys_connect_by_path

    Hi all,
    Let me preface this post with: I'm new to Oracle, using 10g, and mostly a self-taught SQL person, so it's not my strong suit. The overall goal is that I'm trying to find folders within an area of our document management system that haven't been touched, so we can perform some cleanup. What I have to do is first find all active documents and their parent folders in the hierarchy, then compare the active folders to all folders to get my inactive folders. I have the first part of this working with the following:
    SELECT      Distinct(ActiveFolderID)
    FROM        (SELECT DataID, substr(sys_connect_by_path(DataID, '\'), instr(sys_connect_by_path(DataID, '\'),
                        '\', 1,2) + 1, instr(sys_connect_by_path(DataID, '\'), '\',1,3)
                        - instr(sys_connect_by_path(DataID, '\'), '\',1,2)-1) ActiveFolderID
                 FROM DTree
                 START WITH DataID = 9081729
                 CONNECT BY PRIOR DataID = ParentID) dt, DAuditNew da
    WHERE dt.DataID=da.DataID AND DA.PERFORMERID != 11681125 AND da.AuditDate > SysDate - 90
    Where I run into an issue is when I add the next part, to select folders that aren't in the above result:
    SELECT      DataID, Name
    FROM        DTree
    WHERE       SubType=0 AND ParentID=9081729 AND DataID NOT IN (SELECT Distinct(ActiveFolderID)
                        FROM (SELECT DataID, substr(sys_connect_by_path(DataID, '\'), instr(sys_connect_by_path(DataID, '\'),
                                     '\', 1,2) + 1, instr(sys_connect_by_path(DataID, '\'), '\',1,3)
                                     - instr(sys_connect_by_path(DataID, '\'), '\',1,2)-1) ActiveFolderID
                              FROM DTree
                              START WITH DataID = 9081729
                              CONNECT BY PRIOR DataID = ParentID) dt, DAuditNew da
                        WHERE dt.DataID=da.DataID AND DA.PERFORMERID != 11681125 AND da.AuditDate > SysDate - 90)
    I get the following error:
    ORA-30004: when using SYS_CONNECT_BY_PATH function, cannot have seperator as part of column value
    30004. 00000 - "when using SYS_CONNECT_BY_PATH function, cannot have seperator as part of column value"
    *Cause:   
    *Action:   Use another seperator which does not occur in any column value,
    then retry.
    I know there are no \ in DataID, as it is a numeric field, but I have tried other separators with no luck. Any ideas? Hopefully it's not something simple that I screwed up.
    Thanks,
    Bryan

    Hi, Bryan,
    One way to get the results you want would be a MINUS operation. If I understand the requirements, you want
    (a) the set of all folders that are one level down from any node called 'root'
    MINUS
    (b) the subset of folders in (a) that have descendants whose names start with 'Activefile' and were audited no more than 90 days ago.
    We can get (a) with a self-join.
    We can get (b) with CONNECT BY, using (a) in the START WITH clause. In Oracle 9, we would have had to use SYS_CONNECT_BY_PATH for this, but starting in Oracle 10 we have CONNECT_BY_ROOT.
    WITH    top_level_folders    AS
    (
        SELECT  c.dataid
        ,       c.name
        FROM    dtree   p
        JOIN    dtree   c  ON  c.parentid = p.dataid
        WHERE   p.name  = 'root'
    )
    ,       active_files    AS
    (
        SELECT  CONNECT_BY_ROOT dataid  AS top_dataid
        ,       CONNECT_BY_ROOT name    AS top_name
        ,       dataid
        FROM    dtree
        WHERE   name    LIKE 'Activefile%'
        START WITH  dataid IN (
                        SELECT  dataid
                        FROM    top_level_folders
                    )
        CONNECT BY  parentid = PRIOR dataid
    )
    SELECT    dataid
    ,         name
    FROM      top_level_folders
        MINUS
    SELECT    af.top_dataid
    ,         af.top_name
    FROM      active_files  af
    JOIN      dauditnew     dan  ON  af.dataid = dan.dataid
    WHERE     dan.auditdate > SYSDATE - 90
    ORDER BY  dataid
    ;
    Output (when run on Feb. 2, 2012):
        DATAID NAME
             2 folder1
             3 folder2
             5 folder4
    As you can see, I did the filtering for audit dates near the end of the query. Depending on your data and your requirements, there might be a more efficient way to do it near the start of the query. The CONNECT BY is probably going to be the slow part of this job, so if there is some way to eliminate rows before that, it would probably be more efficient than running a longer CONNECT BY, only to discard some of the results at the end. You don't want to do a join in the same sub-query as the CONNECT BY; that's extremely inefficient.
    I still have no idea why your original query was getting that ORA-30004 error. I'd be interested in finding out. I can't run the original query, apparently because you simplified the tables for posting. If you can post a test case that gets the error using SYS_CONNECT_BY_PATH, I'll see if I can find out what caused it.

  • Query parse and execution order

    Hi,
    In the SQL below, my understanding is that it is parsed from right to left: first the filter-condition field names, then the syntax, and then the join column names are verified.
    From the execution plan below, I believe it also executes from right to left, first filtering the data and then joining the two tables - is this correct?
    Is this order fixed, or does it change based on statistics or some other parameter?
    I would like to know how this SELECT statement is executed at run time: is the data joined first and the filter condition applied afterwards, or the other way round? Please give me more details, thank you.
    SELECT * FROM EMP E, DEPT D
    WHERE E.DEPTID = D.DEPTID AND
    D.DEPTNAME = 'DEPT1';
    Below is the execution plan,
    SQL > SELECT * FROM EMP E, DEPT D
    2 WHERE E.DEPTID = D.DEPTID AND
    3 D.DEPTNAME = 'DEPT1';
    Execution Plan
    Plan hash value: 1123238657
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 143 | 5 (20)| 00:00:01 |
    |* 1 | HASH JOIN | | 1 | 143 | 5 (20)| 00:00:01 |
    | 2 | TABLE ACCESS FULL| EMP | 1 | 78 | 2 (0)| 00:00:01 |
    |* 3 | TABLE ACCESS FULL| DEPT | 1 | 65 | 2 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - access("E"."DEPTID"="D"."DEPTID")
    3 - filter("D"."DEPTNAME"='DEPT1')
    Note
    - dynamic sampling used for this statement

    >
    - Oracle does the full table scan of EMP table and makes an in-memory hash table with DEPTID as the hash key
    - DEPT table is being read; as Oracle reads DEPT table, applies the filter predicate ("D"."DEPTNAME"='DEPT1'), applies the hashing function to the join key (DEPTID)
    and uses it to locate the matching row from EMP
    - rows are returned to the client
    >
    I believe that is correct for this particular query and plan only because there is only one row in each table. If the tables had many more records then the smaller of the two tables would be chosen to create the hash table and the following should apply.
    The DEPT table is the smaller of the two tables so Oracle would do a full table scan of DEPT to make the in-memory hash with DEPTID as the hash key.
    Then the EMP table (the larger table) is scanned and the DEPTID value used to probe the hash table to find the matching record and then the other filter predicate ("D"."DEPTNAME"='DEPT1') used to eliminate rows.
    See section 11.6.4 Hash Joins in the Performance Tuning Guide
    >
    Hash joins are used for joining large data sets. The optimizer uses the smaller of two tables or data sources to build a hash table on the join key in memory. It then scans the larger table, probing the hash table to find the joined rows.
    This method is best used when the smaller table fits in available memory. The cost is then limited to a single read pass over the data for the two tables.
    >
    This example uses a copy of emp and dept with no primary key or constraints
    SQL> SELECT * FROM EMP1 E, DEPT1 D
      2  WHERE E.DEPTNO = D.DEPTNO AND
      3  D.DNAME = 'RESEARCH';
    Execution Plan
    Plan hash value: 619452140
    | Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |       |     5 |   380 |     7  (15)| 00:00:01 |
    |*  1 |  HASH JOIN         |       |     5 |   380 |     7  (15)| 00:00:01 |
    |*  2 |   TABLE ACCESS FULL| DEPT1 |     1 |    30 |     3   (0)| 00:00:01 |
    |   3 |   TABLE ACCESS FULL| EMP1  |    14 |   644 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("E"."DEPTNO"="D"."DEPTNO")
       2 - filter("D"."DNAME"='RESEARCH')
    Note
       - dynamic sampling used for this statement (level=2)
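
    As an experiment (not a recommendation for production code), you can force which table becomes the build input with hints and compare the plans; a sketch against the same emp1/dept1 copies. LEADING(e) puts emp1 first in the join order, so it becomes the in-memory hash table, and USE_HASH(d) requests a hash join to dept1:

    SELECT /*+ LEADING(e) USE_HASH(d) */ *
    FROM   emp1 e, dept1 d
    WHERE  e.deptno = d.deptno
    AND    d.dname  = 'RESEARCH';

    Comparing the resulting plan with the one above should show the two TABLE ACCESS FULL lines swap places, confirming which table was used to build the hash.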
