Daily ASH Statistics Chart
Hi, I have two questions about this chart. I am using SQL Developer 3.2.0.9.
One axis is obviously time, but what is the vertical axis? Is there a way to save and/or publish this graph?
Thank you
I have logged a bug.
Sue
Similar Messages
-
Error in "Daily ASH Statistics Chart"
Hello,
If you look at the query underneath that chart, you'll see the chart is ordered by "HH24:MI". When you run the query across 2 days, what you get is an unordered resulting chart. Changing that to trunc(sample_time,'HH') fixes the issue. It works if I do it from a copied report, but I cannot change the original.
Can you change that?
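Gregory's trunc fix can be sketched as follows -- a sketch only, since the exact report query varies by SQL Developer version, and v$active_session_history requires the Diagnostics Pack:

```sql
-- Group and order the ASH samples by the truncated hour so the chart stays
-- ordered across a day boundary (the CAST is needed because sample_time is
-- a TIMESTAMP; column aliases are illustrative).
SELECT TRUNC(CAST(sample_time AS DATE), 'HH') AS sample_hour,
       COUNT(*)                               AS active_sessions
FROM   v$active_session_history
GROUP BY TRUNC(CAST(sample_time AS DATE), 'HH')
ORDER BY sample_hour;
```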
Gregory

I have logged a bug.
Sue -
Daily UNDO statistics report?
I want to find the undo stats for the last seven days on a daily basis. I want the output in the following format:
date        sum(undoblks)  sum(txncount)  max(maxquerylen)
30/06/2008 100 200 300
01/06/2008 200 300 400
Thank You All...

Nick, too bad I had not seen that thread.
-- my version
select to_char(begin_time,'YYYYMMDD') as time,
       sum(undoblks), sum(txncount), max(maxquerylen)
from v$undostat
group by to_char(begin_time,'YYYYMMDD')
/
TIME SUM(UNDOBLKS) SUM(TXNCOUNT) MAX(MAXQUERYLEN)
20080629 85 7772 435
20080630 262410 6240097 638
20080701 304863 17472266 1681
20080702 87685 10133385 8287
-- Jonathan's
select to_char(trunc(begin_time,'dd'),'mmdd hh24:mi:ss'),
       sum(txncount),
       sum(undoblks),
       max(maxquerylen)
from v$undostat
group by to_char(trunc(begin_time,'dd'),'mmdd hh24:mi:ss')
order by 1
/
TO_CHAR(TRUNC SUM(TXNCOUNT) SUM(UNDOBLKS) MAX(MAXQUERYLEN)
0629 00:00:00 7772 85 435
0630 00:00:00 6240097 262410 638
0701 00:00:00 17472266 304863 1681
0702 00:00:00 10133385 87685 8287
-- Mark D Powell -
Daily Performance Reports...
Hi,
We need to get some daily Perf reports on the basic health of the Database...
Is there any way to get daily performance statistics of Database through OEM or so...
Thanks
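One common approach to a daily health report -- a sketch only, assuming the Diagnostics Pack is licensed and AWR snapshots are being taken -- is to pull an AWR report between two snapshots with the packaged function; the bind variables below are placeholders you would fill from dba_hist_snapshot:

```sql
-- Generate a text AWR report between two snapshots. Snapshot IDs, DBID and
-- instance number are placeholders; look them up in dba_hist_snapshot first.
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(
                 l_dbid     => :dbid,
                 l_inst_num => 1,
                 l_bid      => :begin_snap,
                 l_eid      => :end_snap));
```

OEM schedules essentially the same thing through its reporting framework; running the SQL*Plus scripts awrrpt.sql / ashrpt.sql from $ORACLE_HOME/rdbms/admin is the interactive equivalent.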
Joseph

asifkabirdba,
He or she does not have to mark those threads as answered. Marking a thread as answered and assigning points to the people who helped is ethical towards the community. Maybe he or she could not find the answers that he or she was looking for?
Please read;
http://wiki.oracle.com/page/Oracle+Discussion+Forums+FAQ#questions
What are "rewards points"?
It is possible for you to reward an answerer 5 points for a "helpful" answer or 10 points for a "correct" one (or none at all). Users who accumulate certain amounts of points reach different levels of recognition (see legend). In this manner, users who offer consistently helpful or correct answers raise their standing in the community. (Note: in order to award points, you must identify your post as a "question" first - which is set as default.)
What is proper discussion forum etiquette?
When asking a question, provide all the details that someone would need to answer it including your database version, e.g. Oracle 10.2.0.4. Format your code using [code] tags (see "How do I format code in my post?" below). Consulting documentation first is highly recommended. Furthermore, always be courteous; there are different levels of experience represented. A poorly worded question is better ignored than flamed - or better yet, help the poster ask a better question.
Finally, it is good form to reward answerers with points (see "What are 'reward points'?" above).
Regards.
Ogan -
Can you import charts created in excel or charts from financial websites (i.e. daily stock market charts) into iweb? If yes, can you please explain? Thanks
If you are, in fact, using iWeb '08, you may find that it doesn't work too well under Mountain Lion. Several users have reported problems with saving changes, and there is no fix for this.
You need to upgrade to V 3.0.4 or run iWeb on an external drive under Snow Leopard.
Some more info about using iWeb with Lion/Mountain Lion ...
http://www.iwebformusicians.com/iWeb/mountain-lion.html -
Hi,
I need some help tuning this SQL. We run a third-party application, and I have to ask the third party for any changes. I have pasted the session statistics from the run of this SQL.
SELECT DECODE(RPAD(NVL(NWKPCDOUTWDPOSTCODE,' '),4,' ') || RPAD(NVL(NWKPCDINWDPOSTCODE,' '),3,' '),
              RPAD(NVL(:zipout1,' '),4,' ') || RPAD(NVL(:zipin1,' '),3,' '), '0001',
              RPAD(NVL(:zipout2,' '),4,' ') || RPAD(SUBSTR(NVL(:zipin2,' '),0,1),3,' '), '0002',
              RPAD(NVL(:zipout3,' '),7,' '), '0003',
              RPAD('ZZ999',7,' '), '0004') AS CHECKER
FROM   NWKPCDREC
WHERE  NWKPCDNETWORKID = :netid
AND    NWKPCDSORTPOINT1TYPE != 'XXXXXXXX'
AND (  RPAD(NVL(NWKPCDOUTWDPOSTCODE,' '),4,' ') || RPAD(NVL(NWKPCDINWDPOSTCODE,' '),3,' ')
         = RPAD(NVL(:zipout4,' '),4,' ') || RPAD(NVL(:zipin3,' '),3,' ')
    OR RPAD(NVL(NWKPCDOUTWDPOSTCODE,' '),4,' ') || RPAD(NVL(NWKPCDINWDPOSTCODE,' '),3,' ')
         = RPAD(NVL(:zipout5,' '),4,' ') || RPAD(SUBSTR(NVL(:zipin4,' '),0,1),3,' ')
    OR RPAD(NVL(NWKPCDOUTWDPOSTCODE,' '),4,' ') || RPAD(NVL(NWKPCDINWDPOSTCODE,' '),3,' ')
         = RPAD(NVL(:zipout6,' '),7,' ')
    OR RPAD(NVL(NWKPCDOUTWDPOSTCODE,' '),4,' ') || RPAD(NVL(NWKPCDINWDPOSTCODE,' '),3,' ')
         = RPAD('ZZ999',7,' ')
    )
ORDER BY CHECKER
Session Statistics 09 October 2007 22:44:56 GMT+00:00
Report Target : PRD1 (Database)
Session Statistics
(Chart form was tabular, see data table below)
SID Name Value Class
37 write clones created in foreground 0 Cache
37 write clones created in background 0 Cache
37 user rollbacks 16 User
37 user commits 8674 User
37 user calls 302838 User
37 transaction tables consistent reads - undo records applied 0 Debug
37 transaction tables consistent read rollbacks 0 Debug
37 transaction rollbacks 9 Debug
37 transaction lock foreground wait time 0 Debug
37 transaction lock foreground requests 0 Debug
37 transaction lock background gets 0 Debug
37 transaction lock background get time 0 Debug
37 total file opens 12 Cache
37 table scans (short tables) 8062 SQL
37 table scans (rowid ranges) 0 SQL
37 table scans (long tables) 89 SQL
37 table scans (direct read) 0 SQL
37 table scans (cache partitions) 2 SQL
37 table scan rows gotten 487042810 SQL
37 table scan blocks gotten 7327924 SQL
37 table fetch continued row 17 SQL
37 table fetch by rowid 26130550 SQL
37 switch current to new buffer 6400 Cache
37 summed dirty queue length 0 Cache
37 sorts (rows) 138607 SQL
37 sorts (memory) 13418 SQL
37 sorts (disk) 0 SQL
37 session uga memory max 5176776 User
37 session uga memory 81136 User
37 session stored procedure space 0 User
37 session pga memory max 5559884 User
37 session pga memory 5559884 User
37 session logical reads 115050107 User
37 session cursor cache hits 0 SQL
37 session cursor cache count 0 SQL
37 session connect time 1191953042 User
37 serializable aborts 0 User
37 rows fetched via callback 1295545 SQL
37 rollbacks only - consistent read gets 0 Debug
37 rollback changes - undo records applied 114 Debug
37 remote instance undo header writes 0 Global Cache
37 remote instance undo block writes 0 Global Cache
37 redo writes 0 Redo
37 redo writer latching time 0 Redo
37 redo write time 0 Redo
37 redo wastage 0 Redo
37 redo synch writes 8683 Cache
37 redo synch time 722 Cache
37 redo size 25463692 Redo
37 redo ordering marks 0 Redo
37 redo log switch interrupts 0 Redo
37 redo log space wait time 0 Redo
37 redo log space requests 1 Redo
37 redo entries 81930 Redo
37 redo buffer allocation retries 1 Redo
37 redo blocks written 0 Redo
37 recursive cpu usage 101 User
37 recursive calls 84413 User
37 recovery blocks read 0 Cache
37 recovery array reads 0 Cache
37 recovery array read time 0 Cache
37 queries parallelized 0 Parallel Server
37 process last non-idle time 1191953042 Debug
37 prefetched blocks aged out before use 0 Cache
37 prefetched blocks 1436767 Cache
37 pinned buffers inspected 89 Cache
37 physical writes non checkpoint 3507 Cache
37 physical writes direct (lob) 0 Cache
37 physical writes direct 3507 Cache
37 physical writes 3507 Cache
37 physical reads direct (lob) 0 Cache
37 physical reads direct 2499 Cache
37 physical reads 1591668 Cache
37 parse time elapsed 336 SQL
37 parse time cpu 315 SQL
37 parse count (total) 28651 SQL
37 parse count (hard) 1178 SQL
37 opens requiring cache replacement 0 Cache
37 opens of replaced files 0 Cache
37 opened cursors current 51 User
37 opened cursors cumulative 28651 User
37 no work - consistent read gets 59086317 Debug
37 no buffer to keep pinned count 0 Other
37 next scns gotten without going to DLM 0 Parallel Server
37 native hash arithmetic fail 0 SQL
37 native hash arithmetic execute 0 SQL
37 messages sent 9730 Debug
37 messages received 0 Debug
37 logons current 1 User
37 logons cumulative 1 User
37 leaf node splits 111 Debug
37 kcmgss waited for batching 0 Parallel Server
37 kcmgss read scn without going to DLM 0 Parallel Server
37 kcmccs called get current scn 0 Parallel Server
37 instance recovery database freeze count 0 Parallel Server
37 index fast full scans (rowid ranges) 0 SQL
37 index fast full scans (full) 210 SQL
37 index fast full scans (direct read) 0 SQL
37 immediate (CURRENT) block cleanout applications 4064 Debug
37 immediate (CR) block cleanout applications 83 Debug
37 hot buffers moved to head of LRU 20004 Cache
37 global lock sync gets 0 Parallel Server
37 global lock sync converts 0 Parallel Server
37 global lock releases 0 Parallel Server
37 global lock get time 0 Parallel Server
37 global lock convert time 0 Parallel Server
37 global lock async gets 0 Parallel Server
37 global lock async converts 0 Parallel Server
37 global cache read buffer lock timeouts 0 Global Cache
37 global cache read buffer blocks served 0 Global Cache
37 global cache read buffer blocks received 0 Global Cache
37 global cache read buffer block timeouts 0 Global Cache
37 global cache read buffer block send time 0 Global Cache
37 global cache read buffer block receive time 0 Global Cache
37 global cache read buffer block build time 0 Global Cache
37 global cache prepare failures 0 Global Cache
37 global cache gets 0 Global Cache
37 global cache get time 0 Global Cache
37 global cache freelist waits 0 Global Cache
37 global cache defers 0 Global Cache
37 global cache cr timeouts 0 Global Cache
37 global cache cr requests blocked 0 Global Cache
37 global cache cr blocks served 0 Global Cache
37 global cache cr blocks received 0 Global Cache
37 global cache cr block send time 0 Global Cache
37 global cache cr block receive time 0 Global Cache
37 global cache cr block flush time 0 Global Cache
37 global cache cr block build time 0 Global Cache
37 global cache converts 0 Global Cache
37 global cache convert timeouts 0 Global Cache
37 global cache convert time 0 Global Cache
37 global cache blocks corrupt 0 Global Cache
37 free buffer requested 1597281 Cache
37 free buffer inspected 659 Cache
37 execute count 128826 SQL
37 exchange deadlocks 1 Cache
37 enqueue waits 0 Enqueue
37 enqueue timeouts 0 Enqueue
37 enqueue requests 23715 Enqueue
37 enqueue releases 23715 Enqueue
37 enqueue deadlocks 0 Enqueue
37 enqueue conversions 0 Enqueue
37 dirty buffers inspected 437 Cache
37 deferred (CURRENT) block cleanout applications 21937 Debug
37 db block gets 230801 Cache
37 db block changes 160407 Cache
37 data blocks consistent reads - undo records applied 460 Debug
37 cursor authentications 488 Debug
37 current blocks converted for CR 0 Cache
37 consistent gets 114819307 Cache
37 consistent changes 460 Cache
37 commit cleanouts successfully completed 37201 Cache
37 commit cleanouts 37210 Cache
37 commit cleanout failures: write disabled 0 Cache
37 commit cleanout failures: hot backup in progress 0 Cache
37 commit cleanout failures: cannot pin 0 Cache
37 commit cleanout failures: callback failure 3 Cache
37 commit cleanout failures: buffer being written 0 Cache
37 commit cleanout failures: block lost 6 Cache
37 cold recycle reads 0 Cache
37 cluster key scans 17 SQL
37 cluster key scan block gets 36 SQL
37 cleanouts only - consistent read gets 83 Debug
37 cleanouts and rollbacks - consistent read gets 0 Debug
37 change write time 108 Cache
37 calls to kcmgrs 0 Debug
37 calls to kcmgcs 391 Debug
37 calls to kcmgas 8816 Debug
37 calls to get snapshot scn: kcmgss 171453 Parallel Server
37 bytes sent via SQL*Net to dblink 0 User
37 bytes sent via SQL*Net to client 25363874 User
37 bytes received via SQL*Net from dblink 0 User
37 bytes received via SQL*Net from client 29829542 User
37 buffer is pinned count 540816 Other
37 buffer is not pinned count 86108905 Other
37 branch node splits 6 Debug
37 background timeouts 0 Debug
37 background checkpoints started 0 Cache
37 background checkpoints completed 0 Cache
37 Unnecesary process cleanup for SCN batching 0 Parallel Server
37 SQL*Net roundtrips to/from dblink 0 User
37 SQL*Net roundtrips to/from client 302837 User
37 Parallel operations not downgraded 0 Parallel Server
37 Parallel operations downgraded to serial 0 Parallel Server
37 Parallel operations downgraded 75 to 99 pct 0 Parallel Server
37 Parallel operations downgraded 50 to 75 pct 0 Parallel Server
37 Parallel operations downgraded 25 to 50 pct 0 Parallel Server
37 Parallel operations downgraded 1 to 25 pct 0 Parallel Server
37 PX remote messages sent 0 Parallel Server
37 PX remote messages recv'd 0 Parallel Server
37 PX local messages sent 0 Parallel Server
37 PX local messages recv'd 0 Parallel Server
37 OS Voluntary context switches 0 OS
37 OS User time used 0 OS
37 OS System time used 0 OS
37 OS Swaps 0 OS
37 OS Socket messages sent 0 OS
37 OS Socket messages received 0 OS
37 OS Signals received 0 OS
37 OS Page reclaims 0 OS
37 OS Page faults 0 OS
37 OS Maximum resident set size 0 OS
37 OS Involuntary context switches 0 OS
37 OS Integral unshared stack size 0 OS
37 OS Integral unshared data size 0 OS
37 OS Integral shared text size 0 OS
37 OS Block output operations 0 OS
37 OS Block input operations 0 OS
37 DML statements parallelized 0 Parallel Server
37 DFO trees parallelized 0 Parallel Server
37 DDL statements parallelized 0 Parallel Server
37 DBWR undo block writes 0 Cache
37 DBWR transaction table writes 0 Cache
37 DBWR summed scan depth 0 Cache
37 DBWR revisited being-written buffer 0 Cache
37 DBWR make free requests 0 Cache
37 DBWR lru scans 0 Cache
37 DBWR free buffers found 0 Cache
37 DBWR cross instance writes 0 Global Cache
37 DBWR checkpoints 0 Cache
37 DBWR checkpoint buffers written 0 Cache
37 DBWR buffers scanned 0 Cache
37 Commit SCN cached 0 Debug
37 Cached Commit SCN referenced 1 Debug
37 CR blocks created 203 Cache
37 CPU used when call started 280528 Debug
37 CPU used by this session 280528 User
Regards
Raj
--------------------------------------------------------------------------------
Thank you everybody for helping me out while tuning the query. I have managed to bring down the run time from 60 minutes to 12 minutes.
I am posting the existing query, the existing database object DDL, and the new query and new DDL to share my learning. This is my first use of the forum; senior members, please let me know if I shouldn't have put all this here.
-- original query
SELECT decode(rpad(nvl(a.nwkpcdoutwdpostcode, ' '), 4, ' ') || rpad(nvl(
a.nwkpcdinwdpostcode, ' '), 3, ' '), rpad(nvl(:zipout1, ' '), 4, ' ')
|| rpad(nvl(:zipin1, ' '), 3, ' '), '0001', rpad(nvl(:zipout2, ' '), 4,
' ') || rpad(substr(nvl(:zipin2, ' '), 0, 1), 3, ' '), '0002',
rpad(nvl(:zipout3, ' '), 7, ' '), '0003', rpad('ZZ999', 7, ' '), '0004')
AS checker, a.nwkpcdbarcode1to7 nwkpcdbarcode1to7,
a.nwkpcdbarcode15 nwkpcdbarcode15,
a.nwkpcdbarcodeseqkey nwkpcdbarcodeseqkey,
a.nwkpcdsortpoint1code nwkpcdsortpoint1code,
a.nwkpcdsortpoint1type nwkpcdsortpoint1type,
a.nwkpcdsortpoint1name nwkpcdsortpoint1name,
a.nwkpcdsortpoint1extra nwkpcdsortpoint1extra,
a.nwkpcdsortpoint2type nwkpcdsortpoint2type,
a.nwkpcdsortpoint2name nwkpcdsortpoint2name,
a.nwkpcdsortpoint3type nwkpcdsortpoint3type,
a.nwkpcdsortpoint3name nwkpcdsortpoint3name,
a.nwkpcdsortpoint4type nwkpcdsortpoint4type,
a.nwkpcdsortpoint4name nwkpcdsortpoint4name,
b.nwkprfnetworksequence nwkprfnetworksequence,
b.nwkprfnetworkid nwkprfnetworkid, b.nwkprfnetworkname nwkprfnetworkname,
b.nwkprfminweight / 100 AS nwkprfminweight, b.nwkprfmaxweight / 100 AS
nwkprfmaxweight, b.nwkprfminlengthgirth nwkprfminlengthgirth,
b.nwkprfmaxlengthgirth nwkprfmaxlengthgirth,
b.nwkprfminlength nwkprfminlength, b.nwkprfmaxlength nwkprfmaxlength,
b.nwkprfparceltypecode nwkprfparceltypecode,
b.nwkprfparceltypename nwkprfparceltypename
FROM wh1.nwkpcdrec a, wh1.nwkprefrec b
WHERE a.nwkpcdnetworkid = b.nwkprfnetworkid
AND a.nwkpcdsortpoint1type != 'XXXXXXXX'
AND (rpad(nvl(a.nwkpcdoutwdpostcode, ' '), 4, ' ') || rpad(nvl(
a.nwkpcdinwdpostcode, ' '), 3, ' ') = rpad(nvl(:zipout4, ' '), 4, ' '
) || rpad(nvl(:zipin3, ' '), 3, ' ')
OR rpad(nvl(a.nwkpcdoutwdpostcode, ' '), 4, ' ') || rpad(nvl(
a.nwkpcdinwdpostcode, ' '), 3, ' ') = rpad(nvl(:zipout5, ' '), 4, ' '
) || rpad(substr(nvl(:zipin4, ' '), 0, 1), 3, ' ')
OR rpad(nvl(a.nwkpcdoutwdpostcode, ' '), 4, ' ') || rpad(nvl(
a.nwkpcdinwdpostcode, ' '), 3, ' ') = rpad(nvl(:zipout6, ' '), 7, ' ')
OR rpad(nvl(a.nwkpcdoutwdpostcode, ' '), 4, ' ') || rpad(nvl(
a.nwkpcdinwdpostcode, ' '), 3, ' ') = rpad('ZZ999', 7, ' '))
AND :weight1 >= b.nwkprfminweight
AND :weight2 <= b.nwkprfmaxweight
AND b.nwkprfminlengthgirth <= 60
AND b.nwkprfmaxlengthgirth >= 60
AND b.nwkprfminlength <= 15
AND b.nwkprfmaxlength >= 15
ORDER BY b.nwkprfnetworkid, checker
CREATE TABLE "WH1"."NWKPCDREC" ("NWKPCDFILECODE" VARCHAR2(2),
"NWKPCDRECORDTYPE" VARCHAR2(4), "NWKPCDNETWORKID" VARCHAR2(2),
"NWKPCDOUTWDPOSTCODE" VARCHAR2(4), "NWKPCDINWDPOSTCODE"
VARCHAR2(3), "NWKPCDSORTPOINT1CODE" VARCHAR2(2),
"NWKPCDSORTPOINT1TYPE" VARCHAR2(8), "NWKPCDSORTPOINT1NAME"
VARCHAR2(16), "NWKPCDSORTPOINT1EXTRA" VARCHAR2(16),
"NWKPCDSORTPOINT2TYPE" VARCHAR2(8), "NWKPCDSORTPOINT2NAME"
VARCHAR2(8), "NWKPCDSORTPOINT3TYPE" VARCHAR2(8),
"NWKPCDSORTPOINT3NAME" VARCHAR2(8), "NWKPCDSORTPOINT4TYPE"
VARCHAR2(8), "NWKPCDSORTPOINT4NAME" VARCHAR2(8), "NWKPCDPPI"
VARCHAR2(8), "NWKPCDBARCODE1TO7" VARCHAR2(7),
"NWKPCDBARCODE15" VARCHAR2(1), "NWKPCDBARCODESEQKEY"
VARCHAR2(7), "NWKPCDFILLER1" VARCHAR2(7), "NWKPCDFILLER2"
VARCHAR2(30),
CONSTRAINT "UK_NWKPCDREC" UNIQUE("NWKPCDNETWORKID",
"NWKPCDOUTWDPOSTCODE", "NWKPCDINWDPOSTCODE")
USING INDEX
TABLESPACE "WH1_INDEX"
STORAGE ( INITIAL 64K NEXT 0K MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
PCTFREE 10 INITRANS 2 MAXTRANS 255)
TABLESPACE "WH1_DATA_LARGE" PCTFREE 10 PCTUSED 40 INITRANS 1
MAXTRANS 255
STORAGE ( INITIAL 4096K NEXT 4096K MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
NOLOGGING
-- end original script

-- modified script
CREATE TABLE "WH1"."NWKPCDREC_OLD" ("NWKPCDFILECODE" VARCHAR2(2),
"NWKPCDRECORDTYPE" VARCHAR2(4), "NWKPCDNETWORKID" VARCHAR2(2),
"NWKPCDOUTWDPOSTCODE" VARCHAR2(4), "NWKPCDINWDPOSTCODE"
VARCHAR2(3), "NWKPCDSORTPOINT1CODE" VARCHAR2(2),
"NWKPCDSORTPOINT1TYPE" VARCHAR2(8), "NWKPCDSORTPOINT1NAME"
VARCHAR2(16), "NWKPCDSORTPOINT1EXTRA" VARCHAR2(16),
"NWKPCDSORTPOINT2TYPE" VARCHAR2(8), "NWKPCDSORTPOINT2NAME"
VARCHAR2(8), "NWKPCDSORTPOINT3TYPE" VARCHAR2(8),
"NWKPCDSORTPOINT3NAME" VARCHAR2(8), "NWKPCDSORTPOINT4TYPE"
VARCHAR2(8), "NWKPCDSORTPOINT4NAME" VARCHAR2(8), "NWKPCDPPI"
VARCHAR2(8), "NWKPCDBARCODE1TO7" VARCHAR2(7),
"NWKPCDBARCODE15" VARCHAR2(1), "NWKPCDBARCODESEQKEY"
VARCHAR2(7), "NWKPCDFILLER1" VARCHAR2(7), "NWKPCDFILLER2"
VARCHAR2(30))
TABLESPACE "WH1_DATA_LARGE" PCTFREE 10 PCTUSED 40 INITRANS 1
MAXTRANS 255
STORAGE ( INITIAL 4096K NEXT 4096K MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
NOLOGGING
insert into wh1.nwkpcdrec_old select * from wh1.nwkpcdrec;
drop table wh1.nwkpcdrec;
CREATE TABLE "WH1"."NWKPCDREC" ("NWKPCDFILECODE" VARCHAR2(2),
"NWKPCDRECORDTYPE" VARCHAR2(4), "NWKPCDNETWORKID" VARCHAR2(2),
"NWKPCDOUTINWDPOSTCODE" VARCHAR2(7) NOT NULL,
"NWKPCDOUTWDPOSTCODE" VARCHAR2(4), "NWKPCDINWDPOSTCODE"
VARCHAR2(3), "NWKPCDSORTPOINT1CODE" VARCHAR2(2),
"NWKPCDSORTPOINT1TYPE" VARCHAR2(8), "NWKPCDSORTPOINT1NAME"
VARCHAR2(16), "NWKPCDSORTPOINT1EXTRA" VARCHAR2(16),
"NWKPCDSORTPOINT2TYPE" VARCHAR2(8), "NWKPCDSORTPOINT2NAME"
VARCHAR2(8), "NWKPCDSORTPOINT3TYPE" VARCHAR2(8),
"NWKPCDSORTPOINT3NAME" VARCHAR2(8), "NWKPCDSORTPOINT4TYPE"
VARCHAR2(8), "NWKPCDSORTPOINT4NAME" VARCHAR2(8), "NWKPCDPPI"
VARCHAR2(8), "NWKPCDBARCODE1TO7" VARCHAR2(7),
"NWKPCDBARCODE15" VARCHAR2(1), "NWKPCDBARCODESEQKEY"
VARCHAR2(7), "NWKPCDFILLER1" VARCHAR2(7), "NWKPCDFILLER2"
VARCHAR2(30))
TABLESPACE "WH1_DATA_LARGE" PCTFREE 10 PCTUSED 40 INITRANS 1
MAXTRANS 255
STORAGE ( INITIAL 4096K NEXT 4096K MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
NOLOGGING
INSERT INTO WH1.NWKPCDREC SELECT
NWKPCDFILECODE,
NWKPCDRECORDTYPE,
NWKPCDNETWORKID,
rpad(nvl(nwkpcdoutwdpostcode, ' '), 4, ' ') || rpad(nvl(nwkpcdinwdpostcode, ' '), 3, ' '),
nwkpcdoutwdpostcode,
nwkpcdinwdpostcode,
NWKPCDSORTPOINT1CODE,
NWKPCDSORTPOINT1TYPE,
NWKPCDSORTPOINT1NAME,
NWKPCDSORTPOINT1EXTRA,
NWKPCDSORTPOINT2TYPE,
NWKPCDSORTPOINT2NAME,
NWKPCDSORTPOINT3TYPE,
NWKPCDSORTPOINT3NAME,
NWKPCDSORTPOINT4TYPE,
NWKPCDSORTPOINT4NAME,
NWKPCDPPI,
NWKPCDBARCODE1TO7,
NWKPCDBARCODE15,
NWKPCDBARCODESEQKEY,
NWKPCDFILLER1,
NWKPCDFILLER2
FROM WH1.NWKPCDREC_OLD;
CREATE UNIQUE INDEX "WH1"."UK_NWKPCDREC"
ON "WH1"."NWKPCDREC" ("NWKPCDNETWORKID",
"NWKPCDOUTINWDPOSTCODE")
TABLESPACE "WH1_INDEX" PCTFREE 10 INITRANS 2 MAXTRANS
255
STORAGE ( INITIAL 8192K NEXT 8192K MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
LOGGING
begin
dbms_stats.gather_table_stats(ownname=> 'WH1', tabname=> 'NWKPCDREC', partname=> NULL);
end;
begin
dbms_stats.gather_index_stats(ownname=> 'WH1', indname=> 'UK_NWKPCDREC', partname=> NULL);
end;
SELECT decode(a.nwkpcdoutinwdpostcode, rpad(nvl(:zipout1, ' '), 4, ' ') ||
rpad(nvl(:zipin1, ' '), 3, ' '), '0001', rpad(nvl(:zipout2, ' '), 4, ' '
) || rpad(substr(nvl(:zipin2, ' '), 0, 1), 3, ' '), '0002', rpad(
nvl(:zipout3, ' '), 7, ' '), '0003', rpad('ZZ999', 7, ' '), '0004') AS
checker, a.nwkpcdbarcode1to7 nwkpcdbarcode1to7,
a.nwkpcdbarcode15 nwkpcdbarcode15,
a.nwkpcdbarcodeseqkey nwkpcdbarcodeseqkey,
a.nwkpcdsortpoint1code nwkpcdsortpoint1code,
a.nwkpcdsortpoint1type nwkpcdsortpoint1type,
a.nwkpcdsortpoint1name nwkpcdsortpoint1name,
a.nwkpcdsortpoint1extra nwkpcdsortpoint1extra,
a.nwkpcdsortpoint2type nwkpcdsortpoint2type,
a.nwkpcdsortpoint2name nwkpcdsortpoint2name,
a.nwkpcdsortpoint3type nwkpcdsortpoint3type,
a.nwkpcdsortpoint3name nwkpcdsortpoint3name,
a.nwkpcdsortpoint4type nwkpcdsortpoint4type,
a.nwkpcdsortpoint4name nwkpcdsortpoint4name,
b.nwkprfnetworksequence nwkprfnetworksequence,
b.nwkprfnetworkid nwkprfnetworkid, b.nwkprfnetworkname nwkprfnetworkname,
b.nwkprfminweight / 100 AS nwkprfminweight, b.nwkprfmaxweight / 100 AS
nwkprfmaxweight, b.nwkprfminlengthgirth nwkprfminlengthgirth,
b.nwkprfmaxlengthgirth nwkprfmaxlengthgirth,
b.nwkprfminlength nwkprfminlength, b.nwkprfmaxlength nwkprfmaxlength,
b.nwkprfparceltypecode nwkprfparceltypecode,
b.nwkprfparceltypename nwkprfparceltypename
FROM wh1.nwkpcdrec a, wh1.nwkprefrec b
WHERE a.nwkpcdnetworkid = b.nwkprfnetworkid
AND a.nwkpcdoutinwdpostcode IN (rpad(nvl(:zipout4, ' '), 4, ' ') ||
rpad(nvl(:zipin3, ' '), 3, ' '), rpad(nvl(:zipout5, ' '), 4, ' ')
|| rpad(substr(nvl(:zipin4, ' '), 0, 1), 3, ' '), rpad(nvl(:zipout6,
' '), 7, ' '), rpad('ZZ999', 7, ' '))
AND a.nwkpcdsortpoint1type != 'XXXXXXXX'
AND :weight1 >= b.nwkprfminweight
AND :weight2 <= b.nwkprfmaxweight
AND b.nwkprfminlengthgirth <= 60
AND b.nwkprfmaxlengthgirth >= 60
AND b.nwkprfminlength <= 15
AND b.nwkprfmaxlength >= 15
ORDER BY b.nwkprfnetworkid, checker
-- end modified script -
What are the side effects of setting "_object_statistics=false"?
Hi,
On a 10.2.0.3.0 database, due to frequent ORA-04031 errors, we have been recommended to set _object_statistics=false in init.ora and bounce the database.
Metalink Bug ID: 3519807
Going through the available documentation on the net, I found that this parameter would have the same effect as setting STATISTICS_LEVEL=BASIC, or even worse.
Documentation (for statistics_level) says following options are affected.
* Automatic Workload Repository (AWR) Snapshots
* Automatic Database Diagnostic Monitor (ADDM)
* All server-generated alerts
* Automatic SGA Memory Management
* Automatic optimizer statistics collection
* Object level statistics
* End to End Application Tracing (V$CLIENT_STATS)
* Database time distribution statistics (V$SESS_TIME_MODEL and V$SYS_TIME_MODEL)
* Service level statistics
* Buffer cache advisory
* MTTR advisory
* Shared pool sizing advisory
* Segment level statistics
* PGA Target advisory
* Timed statistics
* Monitoring of statistics
We have a daily schema statistics gathering schedule in place, so I feel query performance should not be affected (of course, this is my assumption).
I just wanted to know what else would be affected due to this.
Thanks in advance.
Regards,
Please let me know your ideas/inputs.
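Before changing the parameter, one way to see what the current STATISTICS_LEVEL actually enables (and so what setting it to BASIC, or disabling object statistics, would take away) is to query the documented dictionary view:

```sql
-- Lists every statistic/advisory controlled by STATISTICS_LEVEL, the level
-- at which each is activated (BASIC/TYPICAL/ALL), and the view that exposes it.
SELECT statistics_name, activation_level, statistics_view_name
FROM   v$statistics_level
ORDER BY statistics_name;
```

This lets you compare the affected-features list above against what your instance is really using.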
-
I am developing an AWR analysis tool...
Hi,
I am looking for help with beta testing a tool that I am developing to analyze Oracle AWR reports (HTML format) and produce graphs to assist in finding trends and correlations in the data.
What I can offer is this: if you can send 2 or more AWR HTML reports (from the same system, up to 24 of them), I will return an HTML report which graphs the key metrics (Instance Summary, Instance Efficiency, Wait Events, Top SQL by CPU, Wait Time and Elapsed Time, and Instance Activity Stats with correlations against User Calls and DB Calls). There is a caveat to this – the code is in beta. If the tool cannot process the exact format, I will work to fix it so that I can generate the report, but I cannot promise any particular timeline (it may be days or weeks) as I am using my spare time to develop this. My main focus at the moment is on the parsing of the reports, rather than the prettiness of the graphs.
This service is currently provided with no warranty at all, you MUST review all data yourself against your own data, in sending me the reports you release me from any obligations or legal commitments (if you rely solely on this and your nuclear reactor melts down – do not try to blame me!).
{If you can send me one AWR HTML report, I will use it just to improve my parsing, and you will receive my thanks.}
“This all seems a bit suspicious - who is this guy, why is he not even telling me his real name?” Well, I guess I would probably be wondering a little about this also. So let me tell you a little about myself. I used to do Oracle performance tuning professionally, and it was all I did for more than ten years. During that time I worked on some of Oracle's largest OLTP and data warehouse systems. Then I joined a startup company working in a completely different field, and while I now have a nice management job I really miss the performance work. We do have some customers using our software on large Oracle clusters, and occasionally I would get a call from Support to help work out what was going wrong at a customer's site. This usually meant getting 10+ AWR reports emailed to me and having to spend some time wading through them looking for trends, which led me to become convinced that 1. there must be a better way of reading the reports, and 2. I am surely not the only one who has to figure out what's going on on a system to which the only connection is an HTML AWR report. So I went away and did some dev and reached a minimally viable product that is ready for someone other than its “daddy” (who thinks the world of it) to either love it or tell me it's ugly.
Some of you will be concerned about leaking information (instance names, host names, IP addresses, database structure from SQL, and even real data if you're not using bind variables). So if I get enough people saying “I like the idea of the tool, but I am scared you're looking to hack my system”, then I will produce (and open source, so you can inspect it) a tool to strip out anything that is likely to be dangerous prior to sending it to me.
Regards,
AWR Reader.

> I'm afraid what you are trying to do has already been done.

ASH is not AWR - it is a subset of AWR.
Performance Tuning Guide
http://docs.oracle.com/cd/B28359_01/server.111/b28274/autostat.htm#i27008
>
The statistics collected and processed by AWR include:
•Object statistics that determine both access and usage statistics of database segments
•Time model statistics based on time usage for activities, displayed in the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views
•Some of the system and session statistics collected in the V$SYSSTAT and V$SESSTAT views
•SQL statements that are producing the highest load on the system, based on criteria such as elapsed time and CPU time
•Active Session History (ASH) statistics, representing the history of recent sessions activity -
When to expect July's sales reports?
It's August 1st and I'm wondering when the sales reports will be available. Now that we have daily download statistics, I hope to see how well I did for the month of July!
Yesterday there was a PDF guide in the Sales/Trends section that said that, as far as sales go, months always end on a Saturday and begin on a Sunday. Granted, this guide was obviously for music and looked a little outdated, but I still wouldn't be surprised if we had our first financial reports by next week.
-
Can anyone help me on how to develop a GUI using only AWT (the source code)?
FIRST SCREEN --> Opening splash screen "Application to view log details" with the sub-topics "Daily Transmission Statistics" and "Hourly Transmission Statistics" and an OK button at the bottom. My own logo should be displayed in a canvas.
2nd screen --> The menu bar has 2 items, File and Help. If a button is pressed before using the File menu to open a file, then a dialog box should appear telling the user to choose a file before selecting an option.

Post your completely runnable demo program and get a complete solution, a runnable one!

Alternatively, I will write it for you for $100.

I'll lower the bid to $90 :-)

I'm going to have to get nasty if you keep undercutting me like this. -
Help with documentation - APEX 4.0
We are developing an application to monitor server statistics and chart them into graphs over a period of time. We have a lot of applications and each application has several servers.
We are looking for some documentation or examples where we could use option buttons or some other methods to select multiple items from an option list or drop down box, use them to create dynamic charts.
For example, we should be able to choose a few applications from a list or query, choose some servers for those selected applications, and create the server statistics charts. The code samples given in the APEX 4.0 documentation do not show how to use option buttons to choose multiple items in a list or how to dynamically create graphs without hard-coding the parameters.
Any help will be appreciated.

APEX is a capability of the database, not the application server. I'd encourage you to ask the question in the Database -> APEX forum.
-
Adding multiple rows to a cell?
Is it possible to add multiple rows of text to a single cell? I am trying to create a workout log. Typically I would record several exercises with weights and repetitions for a single day's workout. Is this possible in Numbers?
Suggestion?
Thanks.

Hi Dcneuro,
> I am trying to create a workout log. Typically I would record several exercises with weights and repetitions for a single day's workout.
You can enter multiple lines of text in a cell... But I'm wondering why you would want to do that. A log would be more useful if you structure it like this, one exercise to a row and repeat the date where appropriate:
If you structure it that way or similarly (rather than entering multiple lines in a cell) then you can then use Numbers to do the things spreadsheet software is designed to do: derive summary statistics, charts, etc. The formulas in this summary table are:
B2, copied down: =COUNTIF(Log::B,A2)
C2, copied down: =SUMIF(Log::B,A2,Log::D)
D2, copied down: =AVERAGEIF(Log::B,A2,Log::D)
E2, copied down: =AVERAGEIF(Log::B,A2,Log::C)
Obviously these workout numbers make no sense. This is just a simple example to give you an idea of the kinds of things you can do if you structure your log properly.
Also, have a look at the Running Log template (File>New>Personal>Running Log on the Mac, or on the iPad: + then Create Spreadsheet then scroll down to the Personal section).
SG -
Daily and weekly report about Oracle DB status/statistics to management
Hi,
My management asked me to send them a daily and weekly report about Oracle database status/statistics etc. I'd like to know which kind of report most other DBAs use.
Thanks in advance
- A - wrote:
Thanks sb92075
They want something like...
1-) CPU and RAM utilization (easy OS statistics)
2-) Database statistics (like what exactly?)
3-) Database performance (which metrics show "database performance"?)
4-) Database storage (easy OS statistics)
Edited by: sb92075 on May 27, 2012 10:03 PM -
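The thread never settles on concrete queries, so as a hedged sketch only, here is the kind of SQL such a daily report could collect for items 1 and 4. The DBA_HIST view requires a Diagnostics Pack license, and the metric name shown is one of several host-level metrics; adapt to what your management actually wants.

```sql
-- 1) CPU: average host CPU utilization over the last day
--    (DBA_HIST_SYSMETRIC_SUMMARY needs the Diagnostics Pack)
SELECT metric_name, ROUND(AVG(average), 1) AS avg_value
FROM   dba_hist_sysmetric_summary
WHERE  metric_name = 'Host CPU Utilization (%)'
AND    begin_time  > SYSDATE - 1
GROUP  BY metric_name;

-- 4) Storage: allocated space per tablespace, in MB
SELECT tablespace_name, ROUND(SUM(bytes) / 1024 / 1024) AS mb_allocated
FROM   dba_data_files
GROUP  BY tablespace_name
ORDER  BY tablespace_name;
```

Spooling the output of a script like this from a cron/scheduler job is a simple, license-aware starting point before reaching for full AWR reports.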
Nakisa Org Chart Version 3.0 - Org Unit Position Statistics
Hi All,
Just a quick question, I am currently at a client site where the org unit position statistics always come back with 0 for occupied and vacant positions.
There are clearly employees assigned to positions in the org structure, but I am thinking it may be related to the work percentage on the A008 relationship between the position and the person.
Can anyone confirm this or give some insight on why we could be getting zeros.
We are currently researching the authorizations to rule that out.
thanks.
JBHi Jamie,
I hope you are well. I assume you mean the OrgUnit Hierarchy > FTE View? For the OrgUnit FTE view, the data is calculated by counting all the Positions:
Occupied Positions: where Position has active Person assigned (defined by S A 008 P relationship and PA0000-STAT2 = 3 for related Person)
Vacant Positions: where Position has no active Person assigned (defined by S A 008 P relationship and PA0000-STAT2 = 3 for related Person)
However, this data element (Position_View_Element_Live.xml) uses the BAPI_SAP_NakisaRfcProcessor_PositionVacancyCount_Live class instead of a NakisaRFC Chart class, but it does reference the SAPOrgUnitChart_Position NakisaRFC function. It might be that this class alters the output of the NakisaRFC Chart function.
Does that help?
Best regards,
Luke -
Gathering daily statistics of a table using DBMS_STATS..!!
Hi all,
Can somebody please help me with how to collect daily statistics of a table?
Executing DBMS_STATS is fine, but when I assign it as a job using OEM the job never starts. It just shows the status as job submitted but never executes.
Is there any other way to execute DBMS_STATS daily?
In 10g, it is executed daily at 22:00 by default.
Otherwise, you can execute something like
begin
  dbms_job.isubmit(job => 1,
    what => 'BEGIN
               DBMS_STATS.GATHER_DATABASE_STATS (
                 estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                 block_sample     => TRUE,
                 method_opt       => ''FOR ALL COLUMNS SIZE AUTO'',
                 degree           => 6,
                 granularity      => ''ALL'',
                 cascade          => TRUE,
                 options          => ''GATHER STALE'');
             END;',
    next_date => sysdate + 2,
    interval  => 'TRUNC(SysDate + 1) + 22/24',
    no_parse  => TRUE);
end;
/
commit;
Make sure you commit: a DBMS_JOB submission is not picked up by the job queue until the transaction is committed.
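As a hedged aside not raised in the thread: on 10g and later the same nightly job can also be created with DBMS_SCHEDULER, which replaced DBMS_JOB and offers a readable calendar syntax. The job name below is a made-up example.

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_GATHER_STALE',   -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_DATABASE_STATS(options => ''GATHER STALE''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=22',   -- every day at 22:00
    enabled         => TRUE);
END;
/
```

Unlike DBMS_JOB, DBMS_SCHEDULER.CREATE_JOB commits implicitly, so no separate COMMIT is needed.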