Query Running Difference from STATS$EVENT_HISTOGRAM

I am trying to do some analysis on STATS$EVENT_HISTOGRAM (created as part of PERFSTAT).
I would like to end up with a result set like this:
SNAP_ID   SNAP_TIME                 DB_RESTART    WAIT_COUNT_LE_7MS   WAIT_COUNT_GT_7MS  TOTAL_WAIT_COUNT   WAIT_COUNT_PERCENT_LE_7MS  WAIT_COUNT_PERCENT_GT_7MS
397          2/9/2012 2:02:39 PM    NO             3,311              16,261             19,572             16.92                      83.08
398          2/9/2012 2:35:03 PM    NO             10,040             11,499             21,539             46.61                      53.39
399          2/9/2012 5:02:22 PM    YES            111,137            113,916            225,053            49.38                      50.62
400          2/9/2012 5:32:21 PM    NO             5,880              5,047              10,927             53.81                      46.19
401          2/9/2012 6:02:21 PM    NO             1,342              3,004              4,346              30.88                      69.12
The rules are (that I know of so far):
1. All values are the difference from the previous SNAP_ID.
2. The first SNAP_ID has no previous values, so it will not be included in the result set. In our case this was SNAP_ID 396.
3. When STATS$SNAPSHOT.STARTUP_TIME changes from the previous row, this indicates DB_RESTART=YES.
4. When DB_RESTART=YES, do not subtract values from the previous SNAP_ID.
I am on Oracle 11.1.0.7
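As a rough sketch of how rules 3 and 4 can be expressed with analytic functions (not part of the original post; a single DBID and instance are assumed, as in the sample data below): comparing STARTUP_TIME with its LAG over SNAP_ID flags a restart, and partitioning the running difference by STARTUP_TIME means the first snapshot after a restart keeps its raw counts instead of being reduced by the pre-restart totals.
SELECT snap_id
,      CASE WHEN startup_time != LAG (startup_time) OVER (ORDER BY snap_id)
            THEN 'YES' ELSE 'NO'
       END                                                              AS db_restart
,      total_waits - LAG (total_waits, 1, 0)
                     OVER (PARTITION BY startup_time ORDER BY snap_id)  AS wait_count_diff
FROM  (SELECT   ss.snap_id, ss.startup_time
       ,        SUM (eh.wait_count) AS total_waits
       FROM     stats$snapshot ss
       JOIN     stats$event_histogram eh ON eh.snap_id = ss.snap_id
       WHERE    eh.event_id = 2652584166
       GROUP BY ss.snap_id, ss.startup_time
      );
Rule 2 (dropping the first snapshot, 396) still needs a separate filter, for example a ROW_NUMBER() OVER (ORDER BY snap_id) computed in the inline view and tested in an outer query.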
Create STATS$EVENT_HISTOGRAM table:
CREATE TABLE STATS$EVENT_HISTOGRAM (
  SNAP_ID          NUMBER,
  DBID             NUMBER,
  INSTANCE_NUMBER  NUMBER,
  EVENT_ID         NUMBER,
  WAIT_TIME_MILLI  NUMBER,
  WAIT_COUNT       NUMBER
);
Load my data into STATS$EVENT_HISTOGRAM table:
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI,WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 1,47088592);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 2, 7397910);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 4, 1049509);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 8, 2384662);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 16, 12446589);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 32, 6698196);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 64, 934431);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 128, 655758);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 256, 213053);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 512, 73814);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 1024, 6088);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 2048, 1825);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 4096, 2169);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 8192, 3122);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 16384, 4144);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 32768, 330);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 65536, 662);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 131072, 9);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 262144, 28);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (396, 2359907137, 1, 2652584166, 524288, 22);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 1, 47091161);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 2, 7398497);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 4, 1049664);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 8, 2386574);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 16, 12454531);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 32, 6701651);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 64, 934831);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 128, 656657);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 256, 213223);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 512, 74218);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 1024, 6167);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 2048, 1869);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 4096, 2237);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 8192, 3317);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 16384, 4779);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 32768, 358);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 65536, 663);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 131072, 12);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 262144, 47);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (397, 2359907137, 1, 2652584166, 524288, 29);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 1, 47100463);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 2, 7399116);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 4, 1049783);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 8, 2387726);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 16, 12459548);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 32, 6704135);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 64, 935351);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 128, 657496);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 256, 213525);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 512, 74515);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 1024, 6224);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 2048, 1898);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 4096, 2323);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 8192, 3503);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 16384, 5229);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 32768, 381);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 65536, 671);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 131072, 20);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 262144, 68);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (398, 2359907137, 1, 2652584166, 524288, 49);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 1, 86466);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 2, 20937);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 4, 3734);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 8, 11128);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 16, 58220);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 32, 33902);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 64, 5707);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 128, 3308);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 256, 1149);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 512, 413);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 1024, 40);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 2048, 25);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 4096, 12);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 8192, 7);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 16384, 4);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 32768, 0);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (399, 2359907137, 1, 2652584166, 65536, 1);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 1, 88335);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 2, 24658);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 4, 4024);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 8, 11678);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 16, 61227);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 32, 35252);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 64, 5821);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 128, 3316);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 256, 1158);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 512, 421);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 1024, 41);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 2048, 25);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 4096, 12);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 8192, 7);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 16384, 4);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 32768, 0);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (400, 2359907137, 1, 2652584166, 65536, 1);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 1, 89498);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 2, 24811);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 4, 4050);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 8, 11919);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 16, 62776);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 32, 36157);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 64, 5993);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 128, 3424);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 256, 1181);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 512, 427);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 1024, 41);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 2048, 25);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 4096, 12);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 8192, 7);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 16384, 4);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 32768, 0);
Insert into STATS$EVENT_HISTOGRAM (SNAP_ID, DBID, INSTANCE_NUMBER, EVENT_ID, WAIT_TIME_MILLI, WAIT_COUNT)
Values (401, 2359907137, 1, 2652584166, 65536, 1);
COMMIT;
Create STATS$SNAPSHOT table:
CREATE TABLE STATS$SNAPSHOT (
  SNAP_ID               NUMBER,
  DBID                  NUMBER,
  INSTANCE_NUMBER       NUMBER,
  SNAP_TIME             DATE,
  STARTUP_TIME          DATE,
  SESSION_ID            NUMBER,
  SERIAL#               NUMBER,
  SNAP_LEVEL            NUMBER,
  UCOMMENT              VARCHAR2(160 BYTE),
  EXECUTIONS_TH         NUMBER,
  PARSE_CALLS_TH        NUMBER,
  DISK_READS_TH         NUMBER,
  BUFFER_GETS_TH        NUMBER,
  SHARABLE_MEM_TH       NUMBER,
  VERSION_COUNT_TH      NUMBER,
  SEG_PHY_READS_TH      NUMBER,
  SEG_LOG_READS_TH      NUMBER,
  SEG_BUFF_BUSY_TH      NUMBER,
  SEG_ROWLOCK_W_TH      NUMBER,
  SEG_ITL_WAITS_TH      NUMBER,
  SEG_CR_BKS_RC_TH      NUMBER,
  SEG_CU_BKS_RC_TH      NUMBER,
  SEG_CR_BKS_SD_TH      NUMBER,
  SEG_CU_BKS_SD_TH      NUMBER,
  SNAPSHOT_EXEC_TIME_S  NUMBER,
  ALL_INIT              VARCHAR2(5 BYTE),
  BASELINE              VARCHAR2(1 BYTE)
);
Load my data into STATS$SNAPSHOT table:
Insert into STATS$SNAPSHOT
   (SNAP_ID, DBID, INSTANCE_NUMBER, SNAP_TIME, STARTUP_TIME,
    SESSION_ID, SERIAL#, SNAP_LEVEL, EXECUTIONS_TH, PARSE_CALLS_TH,
    DISK_READS_TH, BUFFER_GETS_TH, SHARABLE_MEM_TH, VERSION_COUNT_TH, SEG_PHY_READS_TH,
    SEG_LOG_READS_TH, SEG_BUFF_BUSY_TH, SEG_ROWLOCK_W_TH, SEG_ITL_WAITS_TH, SEG_CR_BKS_RC_TH,
    SEG_CU_BKS_RC_TH, SNAPSHOT_EXEC_TIME_S, ALL_INIT)
Values
   (396, 2359907137, 1, TO_DATE('02/09/2012 13:32:26', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('01/05/2012 20:33:22', 'MM/DD/YYYY HH24:MI:SS'),
    0, 0, 7, 100, 1000,
    1000, 10000, 1048576, 20, 1000,
    10000, 100, 100, 100, 1000,
    1000, 21.92, 'FALSE');
Insert into STATS$SNAPSHOT
   (SNAP_ID, DBID, INSTANCE_NUMBER, SNAP_TIME, STARTUP_TIME,
    SESSION_ID, SERIAL#, SNAP_LEVEL, EXECUTIONS_TH, PARSE_CALLS_TH,
    DISK_READS_TH, BUFFER_GETS_TH, SHARABLE_MEM_TH, VERSION_COUNT_TH, SEG_PHY_READS_TH,
    SEG_LOG_READS_TH, SEG_BUFF_BUSY_TH, SEG_ROWLOCK_W_TH, SEG_ITL_WAITS_TH, SEG_CR_BKS_RC_TH,
    SEG_CU_BKS_RC_TH, SNAPSHOT_EXEC_TIME_S, ALL_INIT)
Values
   (397, 2359907137, 1, TO_DATE('02/09/2012 14:02:39', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('01/05/2012 20:33:22', 'MM/DD/YYYY HH24:MI:SS'),
    0, 0, 7, 100, 1000,
    1000, 10000, 1048576, 20, 1000,
    10000, 100, 100, 100, 1000,
    1000, 70.49, 'FALSE');
Insert into STATS$SNAPSHOT
   (SNAP_ID, DBID, INSTANCE_NUMBER, SNAP_TIME, STARTUP_TIME,
    SESSION_ID, SERIAL#, SNAP_LEVEL, EXECUTIONS_TH, PARSE_CALLS_TH,
    DISK_READS_TH, BUFFER_GETS_TH, SHARABLE_MEM_TH, VERSION_COUNT_TH, SEG_PHY_READS_TH,
    SEG_LOG_READS_TH, SEG_BUFF_BUSY_TH, SEG_ROWLOCK_W_TH, SEG_ITL_WAITS_TH, SEG_CR_BKS_RC_TH,
    SEG_CU_BKS_RC_TH, SNAPSHOT_EXEC_TIME_S, ALL_INIT)
Values
   (398, 2359907137, 1, TO_DATE('02/09/2012 14:35:03', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('01/05/2012 20:33:22', 'MM/DD/YYYY HH24:MI:SS'),
    0, 0, 7, 100, 1000,
    1000, 10000, 1048576, 20, 1000,
    10000, 100, 100, 100, 1000,
    1000, 14.28, 'FALSE');
Insert into STATS$SNAPSHOT
   (SNAP_ID, DBID, INSTANCE_NUMBER, SNAP_TIME, STARTUP_TIME,
    SESSION_ID, SERIAL#, SNAP_LEVEL, EXECUTIONS_TH, PARSE_CALLS_TH,
    DISK_READS_TH, BUFFER_GETS_TH, SHARABLE_MEM_TH, VERSION_COUNT_TH, SEG_PHY_READS_TH,
    SEG_LOG_READS_TH, SEG_BUFF_BUSY_TH, SEG_ROWLOCK_W_TH, SEG_ITL_WAITS_TH, SEG_CR_BKS_RC_TH,
    SEG_CU_BKS_RC_TH, SNAPSHOT_EXEC_TIME_S, ALL_INIT)
Values
   (399, 2359907137, 1, TO_DATE('02/09/2012 17:02:22', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('02/09/2012 15:24:06', 'MM/DD/YYYY HH24:MI:SS'),
    0, 0, 7, 100, 1000,
    1000, 10000, 1048576, 20, 1000,
    10000, 100, 100, 100, 1000,
    1000, 15.92, 'FALSE');
Insert into STATS$SNAPSHOT
   (SNAP_ID, DBID, INSTANCE_NUMBER, SNAP_TIME, STARTUP_TIME,
    SESSION_ID, SERIAL#, SNAP_LEVEL, EXECUTIONS_TH, PARSE_CALLS_TH,
    DISK_READS_TH, BUFFER_GETS_TH, SHARABLE_MEM_TH, VERSION_COUNT_TH, SEG_PHY_READS_TH,
    SEG_LOG_READS_TH, SEG_BUFF_BUSY_TH, SEG_ROWLOCK_W_TH, SEG_ITL_WAITS_TH, SEG_CR_BKS_RC_TH,
    SEG_CU_BKS_RC_TH, SNAPSHOT_EXEC_TIME_S, ALL_INIT)
Values
   (400, 2359907137, 1, TO_DATE('02/09/2012 17:32:21', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('02/09/2012 15:24:06', 'MM/DD/YYYY HH24:MI:SS'),
    0, 0, 7, 100, 1000,
    1000, 10000, 1048576, 20, 1000,
    10000, 100, 100, 100, 1000,
    1000, 2.77, 'FALSE');
Insert into STATS$SNAPSHOT
   (SNAP_ID, DBID, INSTANCE_NUMBER, SNAP_TIME, STARTUP_TIME,
    SESSION_ID, SERIAL#, SNAP_LEVEL, EXECUTIONS_TH, PARSE_CALLS_TH,
    DISK_READS_TH, BUFFER_GETS_TH, SHARABLE_MEM_TH, VERSION_COUNT_TH, SEG_PHY_READS_TH,
    SEG_LOG_READS_TH, SEG_BUFF_BUSY_TH, SEG_ROWLOCK_W_TH, SEG_ITL_WAITS_TH, SEG_CR_BKS_RC_TH,
    SEG_CU_BKS_RC_TH, SNAPSHOT_EXEC_TIME_S, ALL_INIT)
Values
   (401, 2359907137, 1, TO_DATE('02/09/2012 18:02:21', 'MM/DD/YYYY HH24:MI:SS'), TO_DATE('02/09/2012 15:24:06', 'MM/DD/YYYY HH24:MI:SS'),
    0, 0, 7, 100, 1000,
    1000, 10000, 1048576, 20, 1000,
    10000, 100, 100, 100, 1000,
    1000, 1.72, 'FALSE');
COMMIT;

Hi,
Sky13 wrote:
"OK I have it working"
I'm glad you got it working. When I tried to run your latest query, I got an error. On about the 7th line, I think you meant:
when lag(SS.STARTUP_TIME) over (order by SS.SNAP_ID) != SS.STARTUP_TIME then 'YES'
(This site doesn't like to display the <> inequality operator. When posting here, always use the other, equivalent inequality operator, !=.) When I fix that, I get "no rows selected".
"but it is ugly! Any ideas on simplifying it?"
This is a little shorter, but still rather ugly:
WITH  got_wait_time_grp  AS
(
    SELECT  snap_id
    ,       wait_count
    ,       CASE
                WHEN  wait_time_milli <= 7
                THEN  '<= 7'
                ELSE  '> 7'
            END  AS wait_time_grp
    FROM    stats$event_histogram
    WHERE   event_id = 2652584166
)
,     grouped_data  AS
(
    SELECT    ss.snap_id
    ,         ss.snap_time
    ,         eh.wait_time_grp
    ,         CASE
                  WHEN  ss.startup_time != LAG (ss.startup_time)
                                           OVER ( PARTITION BY  eh.wait_time_grp
                                                  ORDER BY      ss.snap_id
                                                )
                  THEN  'YES'
                  ELSE  'NO'
              END                                              AS db_restart
    ,         SUM (eh.wait_count) - LAG ( SUM (eh.wait_count)
                                        , 1
                                        , 0
                                        ) OVER ( PARTITION BY  ss.startup_time
                                               ,               eh.wait_time_grp
                                                 ORDER BY      ss.snap_id
                                               )               AS wait_dif
    ,         DENSE_RANK () OVER (ORDER BY ss.snap_id)         AS dr
    FROM      stats$snapshot      ss
    ,         got_wait_time_grp   eh
    WHERE     ss.snap_id  = eh.snap_id
    AND       ss.snap_id  BETWEEN 396 AND 401
    GROUP BY  GROUPING SETS ( (ss.snap_id, ss.snap_time, ss.startup_time, eh.wait_time_grp)
                            , (ss.snap_id, ss.snap_time, ss.startup_time)
                            )
)
SELECT    snap_id, snap_time, db_restart
,         MIN (CASE WHEN wait_time_grp = '<= 7' THEN wait_dif END)  AS wait_count_le_7ms
,         MIN (CASE WHEN wait_time_grp = '> 7'  THEN wait_dif END)  AS wait_count_gt_7ms
,         MIN (CASE WHEN wait_time_grp IS NULL  THEN wait_dif END)  AS total_wait_count
,         MIN (CASE WHEN wait_time_grp = '<= 7' THEN wait_dif END) * 100
        / MIN (CASE WHEN wait_time_grp IS NULL  THEN wait_dif END)  AS wait_count_percent_le_7ms
,         MIN (CASE WHEN wait_time_grp = '> 7'  THEN wait_dif END) * 100
        / MIN (CASE WHEN wait_time_grp IS NULL  THEN wait_dif END)  AS wait_count_percent_gt_7ms
FROM      grouped_data
WHERE     dr > 1
GROUP BY  snap_id, snap_time, db_restart
ORDER BY  snap_id
;
Output:
                                                         WAIT_   WAIT_
                                 WAIT_   WAIT_          COUNT_  COUNT_
                         DB_    COUNT_  COUNT_  TOTAL_ PERCENT PERCENT
SNAP                     RE        LE_     GT_   WAIT_    _LE_    _GT_
_ID SNAP_TIME           START     7MS     7MS   COUNT     7MS     7MS
397 2/9/2012 2:02:39 PM NO       3311   16261   19572   16.92   83.08
398 2/9/2012 2:35:03 PM NO      10040   11499   21539   46.61   53.39
399 2/9/2012 5:02:22 PM YES    111137  113916  225053   49.38   50.62
400 2/9/2012 5:32:21 PM NO       5880    5047   10927   53.81   46.19
401 2/9/2012 6:02:21 PM NO       1342    3004    4346   30.88   69.12
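For comparison (again not from the original replies, and untested beyond a read-through of the sample data), here is a sketch of a variant that uses conditional aggregation instead of GROUPING SETS; a single DBID and instance are assumed:
WITH sums AS
(
    SELECT   ss.snap_id, ss.snap_time, ss.startup_time
    ,        SUM (CASE WHEN eh.wait_time_milli <= 7 THEN eh.wait_count ELSE 0 END) AS le_7
    ,        SUM (CASE WHEN eh.wait_time_milli  > 7 THEN eh.wait_count ELSE 0 END) AS gt_7
    FROM     stats$snapshot        ss
    JOIN     stats$event_histogram eh  ON eh.snap_id = ss.snap_id
    WHERE    eh.event_id = 2652584166
    AND      ss.snap_id BETWEEN 396 AND 401
    GROUP BY ss.snap_id, ss.snap_time, ss.startup_time
)
, difs AS
(
    SELECT  snap_id, snap_time
    ,       CASE WHEN startup_time != LAG (startup_time) OVER (ORDER BY snap_id)
                 THEN 'YES' ELSE 'NO'
            END                                                                        AS db_restart
    ,       le_7 - LAG (le_7, 1, 0) OVER (PARTITION BY startup_time ORDER BY snap_id)  AS dif_le_7
    ,       gt_7 - LAG (gt_7, 1, 0) OVER (PARTITION BY startup_time ORDER BY snap_id)  AS dif_gt_7
    ,       ROW_NUMBER () OVER (ORDER BY snap_id)                                      AS rn
    FROM    sums
)
SELECT  snap_id, snap_time, db_restart
,       dif_le_7                                                     AS wait_count_le_7ms
,       dif_gt_7                                                     AS wait_count_gt_7ms
,       dif_le_7 + dif_gt_7                                          AS total_wait_count
,       ROUND (100 * dif_le_7 / NULLIF (dif_le_7 + dif_gt_7, 0), 2)  AS wait_count_percent_le_7ms
,       ROUND (100 * dif_gt_7 / NULLIF (dif_le_7 + dif_gt_7, 0), 2)  AS wait_count_percent_gt_7ms
FROM    difs
WHERE   rn > 1
ORDER BY snap_id;
The PARTITION BY startup_time inside the LAG is what implements rule 4: the first snapshot after a restart is compared with the default 0 rather than with the pre-restart totals.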

Similar Messages

  • Query on a table runs more than 45mins(after stats) and same query runs 19secs(before stats - rebuild)

    Query on a table runs more than 45 mins (after stats) and the same query runs 19 secs (before stats rebuild) - not sure what the cause is.
    - Analysed the explain plan.
    - A different explain plan is used after the stats gather.
    Any idea what could cause this kind of difference?
    Thank you!

    What is the difference you see in the explain plan? Where does it spend most of its time? All of this needs to be analysed.
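    As an editorial aside (not from the original reply): on 10g and later, one way to capture the plan each run actually used, so the two can be compared, is DBMS_XPLAN.DISPLAY_CURSOR; the &sql_id value below is a placeholder you would look up in V$SQL.
    -- 'ALLSTATS LAST' shows actual rows/time only if the query was run with the
    -- GATHER_PLAN_STATISTICS hint or STATISTICS_LEVEL = ALL; otherwise drop the format argument.
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));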

  • Sql query runs slower from the application

    Hi,
    We are using Oracle 9iAS on an AIX box. The JDK version used is 1.3.1. From the J2EE application, when we perform a search, the SQL query takes forever to return the results. I know that we are waiting on the database because I can see the query working when I look at TOAD. But if I run the same query on the database server itself, it returns the results in less than a second. Could you throw some light on how we could troubleshoot this problem? Thanks.

    When the results have to travel over the network, it is slow, and when they don't, it is fast.
    That is what you are saying, correct?
    So your approach should be to not bring so much data over the network. Don't select columns you don't need, and don't select rows you don't need.
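    A hedged sketch (not from the original reply) of one way to confirm that from the database side: check how much of the application session's time goes to network-related waits. The :app_sid bind is a placeholder for the session's SID from V$SESSION.
    SELECT event, total_waits, time_waited       -- TIME_WAITED is in centiseconds
    FROM   v$session_event
    WHERE  sid = :app_sid
    AND    event LIKE 'SQL*Net%'
    ORDER BY time_waited DESC;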

  • Dynamically change sql query (from statement)

    Hi all,
    Is it possible to change the 'from statement' dynamically in
    Reports 6i? I have 3 identical tables with different names (each
    collecting data in a different area), and I want to be able to
    dynamically change the SQL query at run time so I can use only
    one (1) report to print data from the 3 different tables.
    Is it possible? Thanks for the tip!

    Yes, you can. Create a user parameter, let's say "frm", and give
    the parameter an initial value such as FROM EMP. Go to the data
    model of the report and change the query like this:
    Original query => SELECT * FROM emp
    Modified query => SELECT * &frm
    Because frm has the default value FROM EMP, the report works as
    before unless you override it. When you call the report from a
    different product you can pass the parameter value as table a,
    table b, table x.
    Hope you got your answer.
    Thanks,
    Feroz

  • Can I transfer files from a Mini running 10.4.11 to a new Mini running Lion? States I need to upgrade Migration Assistant, but I see no viable path I can take. Options to get photos over to new Mini?

    Can I transfer files from a Mini running 10.4.11 to a new Mini running Lion? States I need to upgrade Migration Assistant, but I see no viable path I can take. Options to get photos over to new Mini?

    Thanks Mende1; I know that PPC apps won't function on Intel machines but I can migrate the Universal apps, right? There is such a large gap between Tiger and Mountain Lion - can I still migrate files directly from one to the other taking the Universal apps as well? Sorry, I'm not very computer savvy!
    Len57

  • Running an update statement on two dependent attributes

    Dear All,
    I have a repair_job table that contains values for work_cost, parts_cost and total_cost which is the sum of the work and parts cost values. I want to run an update statement that doubles the work cost and, naturally, updates the value of total cost as well. I tried to run it as:
    update repair_job
    set work_cost = 2 * work_cost, total_cost = work_cost + parts_cost
    where licence in (
    select licence from car
    where year = to_char(sysdate,'YYYY'))
    thinking that, because the update of work_cost is first on the list, the updated value of total_cost would be correct. It seems, however, that the update of total_cost happens first and then work_cost is updated; I am not sure what the reason is for that, and it happens no matter what the order is in the update statement.
    I know that I can do it in two separate statements, or use a trigger or PL/SQL to do it, but I am curious as to why it behaves this way. Also, is there a way to do it in a single SQL statement - i.e. forcing the update of the work_cost attribute first and then that of total_cost?
    I look forward to hearing from you soon.
    Regards,
    George

    Welcome to the forum!
    >
    thinking that, because the update of work_cost is first on the list, the updated value of total_cost would be correct. It seems, however, that the update of total_cost happens first and then work_cost is updated; I am not sure what the reason is for that, and it happens no matter what the order is in the update statement.
    I know that I can do it in two separate statements, or use a trigger or PL/SQL to do it, but I am curious as to why it behaves this way. Also, is there a way to do it in a single SQL statement - i.e. forcing the update of the work_cost attribute first and then that of total_cost?
    >
    The updates to all columns of the row happen at the same time - there is no order involved.
    You don't need two statements but you do need to do the updates based on the current value of the columns.
    set work_cost = 2 * work_cost, total_cost = 2 * work_cost + parts_cost
    In addition to sb92075's comments, in 11g you could also just define a virtual column for total_cost and then query it like you do now:
    total_cost NUMBER GENERATED ALWAYS AS (work_cost + parts_cost) VIRTUAL
    See this oracle-base article for an example:
    http://www.oracle-base.com/articles/11g/virtual-columns-11gr1.php
    Edited to supplement sb92075's reply by mentioning virtual columns
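    An editorial sketch pulling the two suggestions above together (table and column names are the ones used in this thread; total_cost_virt is a hypothetical new column name, since a virtual column cannot replace the existing total_cost in place):
    -- Single UPDATE: every column expression sees the pre-update values
    UPDATE repair_job
    SET    work_cost  = 2 * work_cost,
           total_cost = 2 * work_cost + parts_cost
    WHERE  licence IN (SELECT licence
                       FROM   car
                       WHERE  year = TO_CHAR(SYSDATE, 'YYYY'));

    -- 11g alternative: let Oracle derive the total instead of storing it
    ALTER TABLE repair_job
      ADD (total_cost_virt NUMBER GENERATED ALWAYS AS (work_cost + parts_cost) VIRTUAL);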

  • ORA-12801 ORA-08103 while running gather schema stats in R12

    Hi All,
    We have recently upgraded from 11.5.9 to R12.1.1 on RHEL 4.8
    Database version is 10.2.0.5
    We are running Gather Schema Stats on R12.1.1, but it errored out with the messages below.
    ORA-12801: error signaled in parallel query server P006
    GL.GL_JE_LINES
    ORA-08103 : object no longer exists
    Please advise if anybody has faced this issue.
    Thanks
    RB

    Please see (GATHER SCHEMA STATISTICS COMPLETED WITH ERROR WHEN RUNNING FOR GL SCHEMA. [ID 1068541.1]).
    Thanks,
    Hussein

  • High CPU usage running select from dba_ts_quotas

    We recently installed the grid agent on a DEV box and have seen our CPU spike like crazy at times. I have 10 instances running on this box (cringe), and they are at different versions from 10g to 11g. The agent is at 10.2.0.4.
    I looked up the query that's eating my CPU and got the following:
    /* OracleOEM */
    SELECT 'table_space_quotas',
    USERNAME,
    TABLESPACE_NAME
    FROM dba_ts_quotas
    WHERE (max_bytes = -1
    OR max_blocks = -1)
    AND username NOT IN ('SYS','SYSTEM','SYSMAN','CTXSYS',
    'MDSYS','ORDSYS','ORDPLUGINS','OLAPSYS',
    'DBSNMP','MGMT_VIEW','OUTLN','ANONYMOUS',
    'DMSYS','EXFSYS','LBACSYS','SI_INFORMTN_SCHEMA',
    'SYSMAN','WKPROXY','WKSYS','WK_TEST',
    'WMSYS','XDB','TRACESVR','SCOTT',
    'ADAMS','BLAKE','CLARK','JONES',
    'HR')
    AND ROWNUM <= DECODE(:1,'-1',2147483647,
    :1)
    ORDER BY USERNAME
    I've done some research and followed the suggestions:
    - There was a suggestion to set the following parameter: optimizer_secure_view_merging=false
    - Disable the security policy that monitors tablespace quotas
    Nothing seems to help.
    Has anyone else experienced this?

    I know it's been a while, but I thought it worthwhile posting this for others viewing this thread.
    Try the following from Metalink note #395064.1
    Symptoms
    The following query that is fired from Grid Control once in a day takes a lot of time and it affects the entire performance of Grid Control:
    SELECT 'table_space_quotas', username, tablespace_name
    FROM dba_ts_quotas
    WHERE (max_bytes = -1
    OR max_blocks = -1)
    AND NOT username IN ('SYS', 'SYSTEM', 'SYSMAN', 'CTXSYS', 'MDSYS',
    'ORDSYS', 'ORDPLUGINS', 'OLAPSYS', 'DBSNMP', 'MGMT_VIEW', 'OUTLN',
    'ANONYMOUS', 'DMSYS', 'EXFSYS', 'LBACSYS', 'SI_INFORMTN_SCHEMA',
    'SYSMAN', 'WKPROXY', 'WKSYS', 'WK_TEST', 'WMSYS', 'XDB', 'TRACESVR',
    'SCOTT', 'ADAMS', 'BLAKE', 'CLARK', 'JONES', 'HR')
    AND rownum <= decode(:1, '-1', 2147483647, :1)
    ORDER BY username
    Cause
    The security policy that runs against the 10.2.0.2 database, which ensures database users are allocated a limited tablespace quota, is creating the problem.
    Solution
    - From the Grid Control home page click on Targets > Databases > select 10.2.0.2 database.
    - Click on 'Metric and Policy Settings' and select 'Policies' tab.
    - Now search for the policy rule 'Unlimited Tablespace Quota' and click on the Schedule link.
    - The default collection is every 24 hours. You need to disable the Collection Schedule and click the Continue button, which will take you back to the previous page.
    - Also select 'Disabled' from the drop-down box next to Policy Evaluation and click Continue. After this, the security policy which ensures database users are allocated a limited tablespace quota will not run, and the statement won't be executed.

  • Update query running fine when subquery in where clause is wrong.

    Hi,
    I am running an update statement:
    Update table a set column1=2000
    where a.column2 in(select col 3 from Table b where b.col4=111)
    Now when I run the subquery on its own - select col 3 from Table b where b.col4=111 - it gives me the error "col 3 invalid identifier".
    But when I run the full query, it updates 700 rows.
    Can somebody please explain this?
    My subquery throws an error on its own, but when I use it inside the full query it runs fine.

    Col_3 must be in your outer table (table_a, I guess).
    If you always prefixed column names with a table alias, you'd know in an instant.
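    An editorial demo of that scoping rule, using hypothetical stand-in tables (table_a/table_b mirror the "table a"/"Table b" of the question):
    CREATE TABLE table_a (column1 NUMBER, column2 NUMBER, col3 NUMBER);
    CREATE TABLE table_b (col4 NUMBER);                  -- note: no col3 here

    -- On its own this fails with ORA-00904, because table_b has no col3:
    --   SELECT col3 FROM table_b WHERE col4 = 111;

    -- Inside the UPDATE it parses, because col3 resolves to the outer table:
    UPDATE table_a a
    SET    a.column1 = 2000
    WHERE  a.column2 IN (SELECT col3 FROM table_b b WHERE b.col4 = 111);
    -- effectively: ... IN (SELECT a.col3 FROM table_b b WHERE b.col4 = 111)

    -- Qualifying every column with its alias surfaces the mistake immediately:
    --   SELECT b.col3 FROM table_b b WHERE b.col4 = 111;   -- ORA-00904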

  • Initial execution of a workbook/query runs without ending

    Hi, Experts,
    I encountered a very strange behavior of the BEx tool. Sometimes the initial execution of a workbook/query runs without ending, stuck in the 'Waiting for reply from BW Server' state; I then cancel the report, run it again, and it returns results within minutes. It's like it forgot to return results.
    The basis team looked at the queries in the database and said that there were actually a number of different queries being executed by my BEx query, but each individual query runs fairly quickly. The basis person pulled the statements out and ran them directly against the database, and they returned results in sub-seconds.
    Any thoughts? Possible solution?
    Thanks
    Chimei

    Hi,
    Could you let us know which GUI front-end patch version you are on? Also, are there any add-ons installed?
    What version of Excel are you using? Also, just as a check, run sapbexc.xla on your local machine and check the output.
    Regards,
    Sree.

  • How to run SQL from OMB+

    For any of you who wanted to be able to query the database from OMB+ in order to gather object metadata or whatever as part of your deployment scripts, you have no doubt noticed (to your frustration) that this functionality is not provided.
    Sure, you can trigger execution of a SQL*Plus script as an external process, but that's hardly interactive, now is it?
    The good news is that OMB+ DOES provide access to the standard Java package, which of course means that you can utilize the standard java.sql classes - which means JDBC.
    Here is a very simple starting point for you that provides simple connect, disconnect, and run-query abilities. There is no exception handling, and you'd need to add other functionality to execute prepared statements like procedure calls, or to do DML like insert/update statements.
    Still, I hope that you find it of use as a starting point - if you need it. For documentation on the java.sql interfaces, go to the Sun Java Docs page at: http://java.sun.com/j2se/1.3/docs/api/ and scroll down to the java.sql package in the top left pane.
    Cheers,
    Mike
    package require java
    proc oracleConnect { serverName databaseName portNumber username password } {
       # import required classes
       java::import java.sql.Connection
       java::import java.sql.DriverManager
       java::import java.sql.ResultSet
       java::import java.sql.SQLWarning
       java::import java.sql.Statement
       java::import java.sql.ResultSetMetaData
       java::import java.sql.DatabaseMetaData
       java::import oracle.jdbc.OracleDatabaseMetaData
       # load database driver .
       java::call Class forName oracle.jdbc.OracleDriver
       # set the connection url.
       append url jdbc:oracle:thin
       append url :
       append url $username
       append url /
       append url $password
       append url "@"
       append url $serverName
       append url :
       append url $portNumber
       append url :
       append url $databaseName
       set oraConnection [ java::call DriverManager getConnection $url ]
       set oraDatabaseMetaData [ $oraConnection getMetaData ]
       set oraDatabaseVersion [ $oraDatabaseMetaData getDatabaseProductVersion ]
       puts "Connected to: $url"
       puts "$oraDatabaseVersion"
       return $oraConnection
    }
    proc oracleDisconnect { oraConnect } {
      $oraConnect close
    }
    proc oraJDBCType { oraType } {
      #translation of JDBC types as defined in XOPEN interface
      set rv "NUMBER"
      switch $oraType {
         "0" {set rv "NULL"}
         "1" {set rv "CHAR"}
         "2" {set rv "NUMBER"}
         "3" {set rv "DECIMAL"}
         "4" {set rv "INTEGER"}
         "5" {set rv "SMALLINT"}
         "6" {set rv "FLOAT"}
         "7" {set rv "REAL"}
         "8" {set rv "DOUBLE"}
         "12" {set rv "VARCHAR"}
         "16" {set rv "BOOLEAN"}
         "91" {set rv "DATE"}
         "92" {set rv "TIME"}
         "93" {set rv "TIMESTAMP"}
         default {set rv "OBJECT"}
      }
      return $rv
    }
    proc oracleQuery { oraConnect oraQuery } {
       set oraStatement [ $oraConnect createStatement ]
       set oraResults [ $oraStatement executeQuery $oraQuery ]
       # The following metadata dump is not required, but will be a helpful sort of thing
       # if you ever want to really build an abstraction layer
       set oraResultsMetaData [ $oraResults getMetaData ]
       set columnCount        [ $oraResultsMetaData getColumnCount ]
       set i 1
       puts "ResultSet Metadata:"
       while { $i <= $columnCount} {
          set fname [ $oraResultsMetaData getColumnName $i]
          set ftype [oraJDBCType [ $oraResultsMetaData getColumnType $i]]
          puts "Output Field $i Name: $fname Type: $ftype"
          incr i
       }
       # end of metadata dump
       return $oraResults
    }
    #now to run a quick query and dump the results.
    set oraConn [oracleConnect myserver orcl 1555 scott tiger ]
    set oraRs [oracleQuery $oraConn "select name, count(*) numlines from user_source group by name" ]
    #for each row in the result set
    while {[$oraRs next]} {
      #grab the field values
      set procName [$oraRs getString name]
      set procCount [$oraRs getInt numlines]
      puts "Program unit $procName comprises $procCount lines"
    }
    $oraRs close
    $oraConn close

    Oracle may indeed have used OraTCL as the starting point for OMB+, but if so then they have clearly wrapped their own interface around it and hidden the original commands. Now, I suppose that it should be possible to add OraTcl as an external library, however the Oratcl distribution makes use of the TCL "load" command to load their binaries.
    You will quickly find that Oracle has also disabled the standard TCL "load" command in OMB+, thus making it very difficult to add third-party plug-in packages.
    If you can find a pure-TCL db interface similar to OraTcl to manage SQL*Plus connections that doesn't use any of the TCL commands that Oracle has disabled - well, then you could probably get it to load as a package.
    Or, like me, you could just use the supplied java interface and code your own as needed.
    Cheers,
    Mike

  • Query BEX - slow - sql statement

    Hi to all.
    When I execute a query on my master data, I see (with ST05) that the system executes a lot of queries like this:
    SELECT
    FROM
      "/BIC/SC_RAG_SOC"
    WHERE
      "SID" IN (:A0 ,:A1 ,:A2 ,:A3 ,:A4 )
    The SID values are supplied 5 at a time!
    Given that I need to select almost 500K records, can I modify one of the system (database or Java) parameters in order to obtain the following code?:
    SELECT
    FROM
      "/BIC/SC_RAG_SOC"
    Thank you in advance

    Maybe first you want to try the following:
    1. Run the Configurator Purge concurrent program, which will physically delete all the logically deleted records accumulated over time. You must do this if you haven't done it in a long time.
    2. Run Gather Schema Stats on the CZ schema.
    If that does not work, then you should log an SR and upload the explain plan.
    HTH

  • Query runs for ever - suggestions - ideas - tips

    Hello all,
    I have the following situation:
    1) Database 1: XE 10g Release 10.2.0.1.0 - Production
    2) Database 2: 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    3) Dblink from Database 1 pointing to Database 2
    4) Synonyms created withing Database 1, pointing to Database 2 packages and tables:
    - "mytable1" exists in Database 2 therefore a synonym "mytable1" is created in Database 1
    - "mytable2" exists in Database 2 therefore a synonym "mytable2" is created in Database 1
    - "mypackage" exists in Database 2 therefore a synonym "mypackage" is created in Database 1
    5) Of course select and execute rights are granted
    6) Query:
    SELECT field1
    FROM myshcema.mytable1
    WHERE field2 = myshcema.mypackage.myfunction ('PARAMVALUE')
    AND EXISTS (SELECT 1 FROM myshcema.mytable2 WHERE field_in_tab2 = field_in_tab1)
    7) Problem: this query runs forever.
    8) Explain plan
    OPERATION - OBJECT_NAME - OPTIONS - COST
    SELECT STATEMENT - - - 17
    -> FILTER - - -
    -> -> FILTER - - - 3
    -> -> -> REMOTE - mytable1 - -
    -> -> REMOTE - mytable2 - - 1
    9) Now I get the value returned by the function and re-execute the previous query:
    - select myshcema.mypackage.myfunction ('PARAMVALUE') from dual;
    - the return value is 89787621
    - re-executing the query:
    SELECT field1
    FROM myshcema.mytable1
    WHERE field2 = 89787621
    AND EXISTS (SELECT 1 FROM myshcema.mytable2 WHERE field_in_tab2 = field_in_tab1)
    10) Explain plan
    OPERATION - OBJECT_NAME - OPTIONS - COST
    SELECT STATEMENT - - REMOTE - 3
    -> NESTED LOOPS - - SEMI - 3
    -> -> INDEX - MY_UK_NAME - RANGE SCAN - 3
    -> -> INDEX - MY_2ND_UK - UNIQUE SCAN - 0
    And the results are instantaneous
    Now my question: Can somebody direct me on whether a hint or something else will correct this situation? Due to restrictions that I cannot currently explain here, I am not able to change the code to first get the value and then execute the query.
    Thanks for helping
    G
    Edited by: G on May 17, 2011 9:07 AM
    Edited by: G on May 17, 2011 9:10 AM

    I would think the DRIVING_SITE hint would be the first thing I tried. If for some reason Oracle does not seem to be able to use the hint, or if hints are heavily frowned upon at your site, then here is another approach to the problem.
    Code the query into a view, define the view on the remote database, and then issue a local query against the remote view.
    HTH -- Mark D Powell --
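    An editorial sketch of the DRIVING_SITE approach (table, column and function names are the placeholders used earlier in this thread). The hint asks Oracle to execute the whole statement at the site of the named table, which here is Database 2, so the EXISTS probe and the function call are intended to run there instead of pulling rows across the database link:
    SELECT /*+ DRIVING_SITE(t1) */ t1.field1
    FROM   myshcema.mytable1 t1
    WHERE  t1.field2 = myshcema.mypackage.myfunction('PARAMVALUE')
    AND    EXISTS (SELECT 1
                   FROM   myshcema.mytable2 t2
                   WHERE  t2.field_in_tab2 = t1.field_in_tab1);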

  • How to get query response time from ST03 via a script ?

    Hello People,
    I am trying to get the average query response time for BW queries with a script (for monitoring/historization).
    I know that this data can be found manually in ST03n, in the 'BI workload' view.
    However, I don't know how to get this stat from a script.
    My idea is to run an SQL query to get this information; here is the current state of my query:
    select count(*) from sapbw.rsddstat_olap
    where calday = 20140401
    and (eventid = 3100 or eventid = 3010)
    and steptp = 'BEX3'
    The problem is that this query does not return the same number of navigations as the number shown in ST03n.
    Can you help me set the correct filters to get the same number of navigations as in ST03n?
    Regards.

    Hi Experts,
    Do you have ideas for this SQL query ?
    Regards.

  • Can not see running jobs from package

    I have a problem with processing jobs that run in parallel:
    Inside a main job I am creating other jobs that run immediately. Those two parallel jobs (2 loads from different databases) have to finish before I run the next operation (processing the loaded data). The problem is that, after starting those 2 parallel jobs, I cannot see them with a select from ALL_SCHEDULER_RUNNING_JOBS executed immediately after I create the jobs in the package. If I look at ALL_SCHEDULER_RUNNING_JOBS from an anonymous block, I can see all my running jobs. Let's sum it up:
    1/ Start of main job
    2/ Running 2 immediately created jobs (load data)
    3/ Checking in loop if jobs created in step 2 are still running
    3.1/ Jobs are running (ALL_SCHEDULER_RUNNING_JOBS check) - sleep for a while. This never happens: I can't see any running jobs from the select executed in the package, but I can see them from an anonymous block.
    3.2/ Jobs finished - start processing loaded data
    Can somebody help me with this task?
    Thanks a lot!
    Jakub

    Hi,
    There is no reason a job should be visible from an anonymous block but not from inside a job. There are two things that may be happening here.
    - jobs scheduled to run immediately may not start running as soon as they are created/enabled; you may need to wait a bit before they start running (they will appear in all_scheduler_jobs immediately, but maybe not in all_scheduler_running_jobs immediately)
    - you may be running into privilege issues. Is the user that executes the anonymous block the same as the user the job runs as (the job's schema)? If not, maybe the job user does not have privileges to see the job (you can grant ALTER on the job to the user to ensure this).
    Can you see the jobs in the all_scheduler_jobs view from within the job with status RUNNING ? If you can see jobs in all_scheduler_jobs as RUNNING but not in all_scheduler_running_jobs then this is a bug of some sort.
    Thanks,
    Ravi.
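    An editorial sketch of a polling loop along the lines discussed above. The job names are hypothetical, DBMS_LOCK.SLEEP requires an EXECUTE grant on DBMS_LOCK, and the loads are assumed to be one-off jobs created with auto_drop => TRUE so they disappear from the view once they finish:
    DECLARE
      l_active  NUMBER;
    BEGIN
      LOOP
        SELECT COUNT(*)
        INTO   l_active
        FROM   user_scheduler_jobs        -- or all_scheduler_jobs with an owner filter
        WHERE  job_name IN ('LOAD_JOB_1', 'LOAD_JOB_2')      -- hypothetical names
        AND    state IN ('SCHEDULED', 'RUNNING');
        EXIT WHEN l_active = 0;
        DBMS_LOCK.SLEEP(10);              -- wait 10 seconds between checks
      END LOOP;
      -- both load jobs are gone from the view; process the loaded data here
    END;
    /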
