Swing performance question: CPU-bound
Hi,
I've posted a Swing performance question to the java.net performance forum. Since it is a Swing performance question, I thought readers of this forum might also be interested.
Swing CPU-bound in sun.awt.windows.WToolkit.eventLoop
http://forums.java.net/jive/thread.jspa?threadID=1636&tstart=0
Thanks,
Curt
You obviously don't understand the results, and the first reply to your posting on java.net clearly explains what you missed.
The event dispatch thread uses Object.wait() to sleep until it gets more events to dispatch. You have incorrectly diagnosed that idle waiting as your performance bottleneck.
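To see why a blocked event loop can dominate a profile, consider a minimal hand-rolled dispatch queue (a sketch for illustration only, not Swing's actual implementation): the dispatch thread parks in Object.wait() whenever the queue is empty, and many profilers attribute that parked time to the waiting frame just as if it were CPU work.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch (not Swing's actual implementation) of why an idle
// dispatch loop shows up at the top of a profile: the thread spends
// almost all its time blocked in Object.wait(), not burning CPU.
public class EventLoopSketch {
    private final Queue<Runnable> queue = new ArrayDeque<>();

    public synchronized void post(Runnable event) {
        queue.add(event);
        notifyAll();                    // wake the dispatch loop
    }

    public synchronized Runnable take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();                     // parks here while idle; a sampling
        }                               // profiler still samples this frame
        return queue.poll();
    }
}
```

Tools that separate thread states (RUNNABLE vs. WAITING/TIMED_WAITING) make it obvious that such a thread is idle rather than CPU-bound.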
Similar Messages
-
PL/SQL performance questions
Hi,
I am responsible for a large, computation-intensive PL/SQL program that performs some batch processing on a large number of records.
I am trying to improve the performance of this program and have a couple of questions that I am hoping this forum can answer.
I am running Oracle 11.1.0.7 on Windows.
1. How does compiling with DEBUG information affect performance?
I found that my program units (packages, procedures, object types, etc.) run significantly slower if they are compiled with debug information.
I am trying to understand why this is so. Does debug information instrument the code, resulting in more code that needs to be executed?
Does adding debug information prevent compiler optimizations? Or both?
The reason I ask this question is to understand if it is valid to compare the performance of two different implementations if they are both compiled with debug information. For example, if one approach is 20% faster when compiled with debug information, is it safe to assume that it will also be 20% faster in production (without debug information)? Or, as I expect, does the presence of debug information change the performance profile of the code?
2. What is the best way to measure how long a PL/SQL program takes?
I want to compare two approaches, such as using a VARRAY vs. a TABLE variable. I have been doing this by creating two test procedures that perform the same task using the two approaches I want to evaluate.
How should I measure the time an approach takes so that the measurement is not affected by other activity on my system? I have tried using CPU time (dbms_utility.get_cpu_time) and elapsed time. CPU time seems to be much more consistent between runs; however, I am concerned that CPU time might not reflect all the time the process takes.
(I am aware of the profiler and have used that as well, however, I am at the point where profiling is providing diminishing returns).
3. I tried recompiling my entire system to be natively compiled but, to my great surprise, did not notice any measurable difference in performance!
I compiled all specifications and bodies in all schemas for native compilation. Can anyone explain why native compilation would not result in a significant performance improvement in a process that appears to be CPU-bound while running? Are there any other settings or additional steps required for native compilation to be effective?
Thank you,
Eric
Yes, debug must add instrumentation; I think that is the point of it. Whether it lowers the compiler optimisation level I don't know (I haven't read anywhere that it does), but surely if you're stepping through code manually to debug it then you don't care.
I don't know of a way to measure pure CPU time independently of other system activity. One common approach is to write a test program that repeats your sample code a large enough number of times for a pattern to emerge. To find how much time individual components contribute, dbms_profiler can be quite helpful (most conveniently via a button press in IDEs such as PL/SQL Developer, but it can also be invoked from the command line.)
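The reason CPU time is more repeatable than elapsed time is that it excludes time spent waiting and time consumed by other processes. A minimal illustration of the same distinction (written in Java rather than PL/SQL, purely for demonstration; in PL/SQL the analogous pair is dbms_utility.get_cpu_time vs. dbms_utility.get_time):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Illustrates the CPU-time vs. elapsed-time distinction: CPU time counts
// only cycles this thread actually executed, so it is far less sensitive
// to other activity on the machine than wall-clock time.
public class CpuVsElapsed {
    // Returns {cpuNs, wallNs, sum}. getCurrentThreadCpuTime may return -1
    // on platforms without per-thread CPU time support.
    static long[] measure() {
        ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
        long cpu0 = tmx.getCurrentThreadCpuTime();      // CPU ns used so far
        long wall0 = System.nanoTime();                 // wall-clock ns
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) sum += i;  // the work under test
        long cpuNs = tmx.getCurrentThreadCpuTime() - cpu0;
        long wallNs = System.nanoTime() - wall0;
        return new long[] { cpuNs, wallNs, sum };
    }

    public static void main(String[] args) {
        long[] r = measure();
        System.out.printf("cpu=%d ns, wall=%d ns%n", r[0], r[1]);
    }
}
```

As the measured work shrinks, timer granularity dominates either metric, which is another reason to repeat the sample code many times per measurement.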
It is strange that native compilation appears to make no difference. Are you sure everything is actually using it? E.g. is it shown as natively compiled in ALL_PLSQL_OBJECT_SETTINGS?
I would not expect a PL/SQL VARRAY variable to perform any differently to a nested table one - I expect they have an identical internal implementation. The difference is that VARRAYs have much reduced functionality and a normally unhelpful limit setting.
Edited by: William Robertson on Nov 6, 2008 11:49 PM -
Spatial Queries are CPU bound and show very heavy use of query buffers
Hi,
Spatial Queries:
When using tkprof to analyse spatial queries, it is clear that Oracle Spatial issues implicit queries which use vast amounts of buffers and seem unable to cache basic information from query to query, resulting in our machine becoming CPU bound when stress testing Oracle Spatial. For example, the trace below shows how information that is fixed for a table, and not likely to change very often, is retrieved inefficiently (note the 26729 query buffers used to perform 6 executions of what should be immediately available!):
TKPROF: Release 8.1.7.0.0 - Production on Tue Oct 16 09:43:38
2001
(c) Copyright 2000 Oracle Corporation. All rights reserved.
SELECT ATTR_NO, ATTR_NAME, ATTR_TYPE_NAME, ATTR_TYPE_OWNER
FROM
ALL_TYPE_ATTRS WHERE OWNER = :1 AND TYPE_NAME = :2 ORDER BY
ATTR_NO
call count cpu elapsed disk query rows
Parse 6 0.00 0.01 0 0 0
Execute 6 0.00 0.01 0 0 0
Fetch 6 0.23 0.41 0 26729 5
total 18 0.23 0.43 0 26729 5
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (NAGYE)
Rows Row Source Operation
0 SORT ORDER BY
0 FILTER
1 NESTED LOOPS
1 NESTED LOOPS
290 NESTED LOOPS
290 NESTED LOOPS
290 NESTED LOOPS
290 NESTED LOOPS
290 TABLE ACCESS FULL ATTRIBUTE$
578 TABLE ACCESS CLUSTER TYPE$
578 TABLE ACCESS CLUSTER TYPE$
578 INDEX UNIQUE SCAN (object id 255)
578 TABLE ACCESS BY INDEX ROWID OBJ$
578 INDEX RANGE SCAN (object id 35)
578 TABLE ACCESS CLUSTER USER$
578 INDEX UNIQUE SCAN (object id 11)
289 TABLE ACCESS BY INDEX ROWID OBJ$
578 INDEX RANGE SCAN (object id 35)
0 TABLE ACCESS CLUSTER USER$
0 INDEX UNIQUE SCAN (object id 11)
0 FIXED TABLE FULL X$KZSPR
0 NESTED LOOPS
0 FIXED TABLE FULL X$KZSRO
0 INDEX RANGE SCAN (object id 101)
error during parse of EXPLAIN PLAN statement
ORA-01039: insufficient privileges on underlying objects of the
view
and again:
SELECT diminfo, nvl(srid,0)
FROM
ALL_SDO_GEOM_METADATA WHERE OWNER = 'NAGYE' AND TABLE_NAME =
NLS_UPPER('TILE_MED_LINES_MBR') AND '"'||COLUMN_NAME||'"'
= '"GEOM"'
call count cpu elapsed disk query
current rows
Parse 20 0.00 0.04 0
0 0 0
Execute 20 0.00 0.00 0
0 0 0
Fetch 20 0.50 0.50 0 5960
100 20
total 60 0.50 0.54 0 5960
100 20
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (NAGYE) (recursive depth: 1)
Rows Row Source Operation
1 FILTER
2 TABLE ACCESS BY INDEX ROWID SDO_GEOM_METADATA_TABLE
2 INDEX RANGE SCAN (object id 24672)
1 UNION-ALL
1 FILTER
1 NESTED LOOPS
1 NESTED LOOPS
1 NESTED LOOPS OUTER
1 NESTED LOOPS OUTER
1 NESTED LOOPS OUTER
1 NESTED LOOPS OUTER
1 NESTED LOOPS
1 TABLE ACCESS FULL OBJ$
1 TABLE ACCESS CLUSTER TAB$
1 INDEX UNIQUE SCAN (object id 3)
0 TABLE ACCESS BY INDEX ROWID OBJ$
1 INDEX UNIQUE SCAN (object id 33)
0 INDEX UNIQUE SCAN (object id 33)
0 TABLE ACCESS CLUSTER USER$
1 INDEX UNIQUE SCAN (object id 11)
1 TABLE ACCESS CLUSTER SEG$
1 INDEX UNIQUE SCAN (object id 9)
1 TABLE ACCESS CLUSTER TS$
1 INDEX UNIQUE SCAN (object id 7)
1 TABLE ACCESS CLUSTER USER$
1 INDEX UNIQUE SCAN (object id 11)
0 FILTER
0 NESTED LOOPS
0 NESTED LOOPS OUTER
0 NESTED LOOPS
0 TABLE ACCESS FULL USER$
0 TABLE ACCESS BY INDEX ROWID OBJ$
0 INDEX RANGE SCAN (object id 34)
0 INDEX UNIQUE SCAN (object id 97)
0 INDEX UNIQUE SCAN (object id 96)
0 FIXED TABLE FULL X$KZSPR
0 NESTED LOOPS
0 FIXED TABLE FULL X$KZSRO
0 INDEX RANGE SCAN (object id 101)
0 FIXED TABLE FULL X$KZSPR
0 NESTED LOOPS
0 FIXED TABLE FULL X$KZSRO
0 INDEX RANGE SCAN (object id 101)
error during parse of EXPLAIN PLAN statement
ORA-01039: insufficient privileges on underlying objects of the
view
Note: The actual query being performed is:
select a.id, a.geom
from
tile_med_lines_mbr a where sdo_relate(a.geom,mdsys.sdo_geometry
(2003,NULL,
NULL,mdsys.sdo_elem_info_array
(1,1003,3),mdsys.sdo_ordinate_array(151.21121,
-33.86325,151.21132,-33.863136)), 'mask=anyinteract
querytype=WINDOW') =
'TRUE'
call count cpu elapsed disk query
current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.08 0.08 0 4 0 0
Fetch 5 1.62 21.70 0 56 0 827
total 7 1.70 21.78 0 60 0 827
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (NAGYE)
Rows Row Source Operation
827 TABLE ACCESS BY INDEX ROWID TILE_MED_LINES_MBR
828 DOMAIN INDEX
Rows Execution Plan
0 SELECT STATEMENT GOAL: CHOOSE
827 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'TILE_MED_LINES_MBR'
828 DOMAIN INDEX OF 'TILE_MLINES_SPIND'
CPU: none, I/O: none
call count cpu elapsed disk query
current rows
Parse 1 0.00 0.00 0 92
Execute 1 0.00 0.00 0 22
Fetch 1 0.00 0.00 38 236
total 3 0.00 0.00 38 350
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 37 (NAGYE)
Rows Row Source Operation
12 TABLE ACCESS BY INDEX ROWID ROADELEMENT_MBR
178 DOMAIN INDEX
Rows Execution Plan
0 SELECT STATEMENT GOAL: CHOOSE
12 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'ROADELEMENT_MBR'
178 DOMAIN INDEX OF 'RE_MBR_SPIND'
CPU: none, I/O: none
Can Oracle improve the performance of Oracle Spatial by changing the implementation to perform alternative implicit queries that do not use such vast amounts of memory?
Cheers
Alex Eadie
Hi Ravi,
Thank you for your reply.
Here are some more details for you:
Yes, the queries are cached in the sense that the data comes from RAM and not from disk; however, the number of buffers used internally by Oracle RDBMS/Spatial is rather large (>5000 per query, or >40 MB) and results in significant CPU usage, as I'm sure you'd agree. Those numerous internal queries take >10 ms of CPU time each, which is cumulative.
A single real query of ours takes between 180 ms and 580 ms, depending on the number of results returned.
An example query is:
select a.id, a.geom
from tile_med_lines_mbr a where sdo_relate
(a.geom,mdsys.sdo_geometry
(2003,NULL, NULL,mdsys.sdo_elem_info_array
(1,1003,3),mdsys.sdo_ordinate_array(151.21121,
-33.86325,151.21132,-33.863136)), 'mask=anyinteract
querytype=WINDOW') = 'TRUE'
It takes only 3 processes running these queries simultaneously to drive our 500 MHz PC server database to 100% CPU load.
The disk is hardly utilized.
The data is the main roads in Sydney, Australia.
The tables, data and indexes were created as shown below:
1. Create the Oracle tables:
create table tile_med_nodes_mbr (
id number not null,
geom mdsys.sdo_geometry not null,
xl number not null,
yl number not null,
xh number not null,
yh number not null);
create table tile_med_lines_mbr (
id number not null,
fromid number not null,
toid number not null,
geom mdsys.sdo_geometry not null,
xl number not null,
yl number not null,
xh number not null,
yh number not null);
2. Use the sqlldr Oracle loader utility to load the data
into Oracle.
% sqlldr userid=csiro_scats/demo control=nodes.ctl
% sqlldr userid=csiro_scats/demo control=lines.ctl
3. Determine the covering spatial extent for the tile
mosaic and use this to create the geometry metadata.
% sqlplus
SQLPLUS> set numw 12
SQLPLUS> select min(xl), min(yl), max(xh), max(yh)
from (select xl, yl, xh, yh
from tile_med_nodes_mbr union
select xl, yl, xh, yh
from tile_med_lines_mbr);
insert into USER_SDO_GEOM_METADATA
(TABLE_NAME, COLUMN_NAME, DIMINFO)
VALUES ('TILE_MED_NODES_MBR', 'GEOM',
MDSYS.SDO_DIM_ARRAY
(MDSYS.SDO_DIM_ELEMENT('X', 151.21093421,
151.21205421, 0.000000050),
MDSYS.SDO_DIM_ELEMENT('Y', -33.86347146,
-33.86234146, 0.000000050)));
insert into USER_SDO_GEOM_METADATA
(TABLE_NAME, COLUMN_NAME, DIMINFO)
VALUES ('TILE_MED_LINES_MBR', 'GEOM',
MDSYS.SDO_DIM_ARRAY
(MDSYS.SDO_DIM_ELEMENT('X', 151.21093421,
151.21205421, 0.000000050),
MDSYS.SDO_DIM_ELEMENT('Y', -33.86347146,
-33.86234146, 0.000000050)));
4. Validate the data loaded:
create table result
(UNIQ_ID number, result varchar2(10));
execute sdo_geom.validate_layer
('TILE_MED_NODES_MBR','GEOM','ID','RESULT');
select result, count(result)
from RESULT
group by result;
truncate table result;
execute sdo_geom.validate_layer
('TILE_MED_LINES_MBR','GEOM','ID','RESULT');
select result, count(result)
from RESULT
group by result;
drop table result;
5. Fix any problems reported in the result table.
6. Create a spatial index, use the spatial index advisor to
determine the sdo_level.
create index tile_mlines_spind on
tile_med_lines_mbr (geom) indextype is
mdsys.spatial_index parameters
( 'sdo_level=7,initial=1M,next=1M,pctincrease=0');
7. Analyse table:
analyze table TILE_MED_LINES_MBR compute statistics;
8. Find the spatial index table name:
select sdo_index_table, sdo_column_name
from user_sdo_index_metadata
where sdo_index_name in
(select index_name
from user_indexes
where ityp_name = 'SPATIAL_INDEX'
and table_name = 'TILE_MED_LINES_MBR');
9. Analyse spatial index table:
analyze table TILE_MLINES_SPIND_FL7$
compute statistics;
I hope this helps.
Cheers
Alex Eadie -
Cache / performance question...
Hello,
I've been running tests against an all-write load to try and understand limitations and BDB internals, and am wondering what causes BDB to begin 'forcing' pages from cache even though clean pages still exist, checkpoints are being regularly run, and trickling is being used? As an all-write load (in a testing environment) continues to operate, I notice performance begins to suffer and cannot recover at roughly the time db_stat begins reporting forced writes of both clean and dirty pages. For example, a write load of 5000 txn/sec degrades to 1000/sec and below after ~500K transactions, and I start to see things like the below in db_stat:
3219 Clean pages forced from the cache
7271 Dirty pages forced from the cache
183004 Dirty pages written by trickle-sync thread
Even when the cache has clean pages, as the database grows over time, I imagine that new keys may be written to "older" pages. This particular example uses DB_HASH. In this case, although the cache has a clean page, a given key "475000" may be written to an old page already on disk. This would, as far as I understand, require that page to be re-read into the cache to complete the write. However, monitoring vmstat does not show me any inbound IO activity whatsoever during the time that the performance begins to drop. I am therefore not certain that old pages are being read from disk, and confused as to why write performance begins to suffer so greatly when the cache fills up.
I'm trying this without DB_TXN_NOSYNC etc. - my environment favors steady performance over bursty speed, so I want to avoid deferring work that could trigger a large burst of writes later.
I have sample source and testing results that I can attach, but I'm first trying to understand if I'm not understanding something basic before I post too much information. Thanks! :)
Edited by: user10542315 on Jan 1, 2009 7:22 PM
Hello,
Although I'm not an engineer on the product, and I'm sure one will chime in to help you tune your tests, I thought I'd ask if you've seen the following white paper: http://www.oracle.com/technology/products/berkeley-db/pdf/berkeley-db-perf.pdf ? It's a bit dated, but it tries to do some of the same things you are doing. We're focusing a lot of resources on performance given the radical shifts in the ratios of cores, hyper-threads, memory, disk speed, and flash storage devices. All of these have changed the underlying assumptions one makes when building a database engine ("Will we be memory or CPU bound? How big a bottleneck will I/O be? How can we avoid that? Can we max out all the cores/threads when running purely in memory?"). There are a lot of interesting areas to research and improve. We're doing some of that work in 4.8, more later. I can't be specific, but we're trying our best to keep DB at the bleeding edge of hardware capabilities.
-greg
Gregory Burd - Product Manager - Oracle Berkeley DB - [email protected] -
Xcontrol: performance question (again)
Hello,
I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans per second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I only have 0 to 1% CPU load, too.
Is there a way to reduce the CPU load when using XControls?
If there isn't, and this is not a problem with my installation but a known issue, I think this would be a potential point for NI to fix in a future update of LV.
Regards,
soranito
Message Edited by soranito on 04-04-2010 08:16 PM
Message Edited by soranito on 04-04-2010 08:18 PM
Attachments:
XControl_performance_test.zip 60 KB
soranito wrote:
Hello,
I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans per second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I only have 0 to 1% CPU load, too.
Okay, I think I understand the question now. You want to know why an equivalent XControl boolean consumes 10x more CPU than the base LabVIEW boolean?
Okay, try opening the project I posted yesterday. I don't have access to LV at my desk, so let's try this. Open up your XControl's facade.vi. Notice how I separated your data event into two events? Go to the data change event: when looping back the action, set isDataChanged (part of the data change cluster) to FALSE, while for the data input (the one displayed on your facade.vi front panel), set isDataChanged to TRUE. This will limit the number of times the facade loops. It will not drop your CPU from 10% to 0%, but it should drop a little, enough to give you a short-term solution. If that doesn't work, just play around with the loopback statement; I can't remember the exact method.
Yeah, I agree an XControl shouldn't be overconsuming system resources. I think XControl is still in a primitive form, and I'm not sure if NI is planning to invest more time in bug fixes or enhancements. IMO, XControl isn't quite ready for primetime yet; there are just too many issues that need improvement.
Message Edited by lavalava on 04-06-2010 03:34 PM -
Simple performance question
In the simplest way possible: assume I have an int[][][][] matrix and a boolean add. The array is several dimensions deep.
When add is true, I must add a constant value to each element in the array.
When add is false, I must subtract a constant value from each element in the array.
Assume this is very hot code, i.e. it is called very often. How expensive is the condition checking? I present the two scenarios.
private void process() {
    for (int i = 0; i < dimension1; i++)
        for (int ii = 0; ii < dimension1; ii++)
            for (int iii = 0; iii < dimension1; iii++)
                for (int iiii = 0; iiii < dimension1; iiii++)
                    if (add)
                        matrix[i][ii][iii][iiii] += constant;
                    else
                        matrix[i][ii][iii][iiii] -= constant;
}
private void process() {
    if (add) {
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        matrix[i][ii][iii][iiii] += constant;
    } else {
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        matrix[i][ii][iii][iiii] -= constant;
    }
}
Is the second scenario worth a significant performance boost? Without understanding how the compiler generates executable code, it seems that in the first case n^d conditions are checked, whereas in the second only one is. It is, however, less elegant, but I am willing to do it for a significant improvement.
erjoalgo wrote:
I guess my real question is: will the compiler optimize the condition check out when it realizes the boolean value will not change through these iterations, and if it does not, is it worth doing that micro-optimization?
Almost certainly not; the main reason being that
matrix[i][ii][iii][iiii] +/-= constant
is liable to take many times longer than the condition check, and you can't avoid it. That said, Mel's suggestion is probably the best.
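Mel's suggestion is not quoted above, but one common way to remove the branch entirely is to fold the sign into a delta computed once before the loops. A hypothetical sketch along those lines (the names matrix, add, and constant follow the thread's example; the class name and dimensions are illustrative):

```java
// Hypothetical sketch: hoist the branch by folding the sign into a single
// delta, so the inner loop body is branch-free and the loop nest is not
// duplicated. Names (matrix, add, constant) follow the thread's example.
public class DeltaDemo {
    static final int DIM = 4;                       // illustrative size
    static int[][][][] matrix = new int[DIM][DIM][DIM][DIM];
    static int constant = 5;

    static void process(boolean add) {
        int delta = add ? constant : -constant;     // branch evaluated once
        for (int i = 0; i < DIM; i++)
            for (int ii = 0; ii < DIM; ii++)
                for (int iii = 0; iii < DIM; iii++)
                    for (int iiii = 0; iiii < DIM; iiii++)
                        matrix[i][ii][iii][iiii] += delta;
    }

    public static void main(String[] args) {
        process(true);   // every element becomes 5
        process(false);  // every element back to 0
        System.out.println(matrix[1][2][3][0]);  // prints 0
    }
}
```

This keeps the inner loop body branch-free without duplicating the four nested loops, which is arguably more elegant than the second scenario.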
but I will follow amickr's advice and not worry about it.
Good idea. Saves you getting flamed with all the quotes about premature optimization.
Winston -
OnCommand Performance Manager CPU Graphs?
All, do the OnCommand Performance Manager CPU graphs report average CPU utilization or any load? Thanks in advance for the help!
Hi, as per the documentation: Cluster CPU Time displays a graph of the CPU usage time, in ms, for all nodes in the cluster used by the selected workload. The graph displays the combined CPU usage time for network processing and data processing. The CPU time for system-defined workloads that are associated with the selected workload, and are using the same nodes for data processing, is also included. You can use the chart to determine whether the workload is a high consumer of CPU resources on the cluster. You can also use the chart, in combination with the Reads/writes latency chart under the Response Time chart, or the Reads/writes/other chart under the Operations chart, to determine how changes in workload activity over time impact cluster CPU utilization. Thanks
-
Guys,
I do understand that ccBPM is very resource hungry, but what I was wondering is this:
once you use BPM, does an extra step decrease performance significantly, or does it just need slightly more resources?
More specifically, we have quite complex mappings in 2 BPM steps. Combining them would make the mapping less clear, but would it be worth doing from a performance point of view?
Your opinion is appreciated.
Thanks a lot,
Viktor Varga
Hi,
In SXMB_ADM you can set the timeout higher for sync processing.
Go to Integration Processing in SXMB_ADM and set parameter SA_COMM CHECK_FOR_ASYNC_RESPONSE_TIMEOUT to 120 (seconds). You can also increase the number of parallel processes if you have more waiting now: SA_COMM CHECK_FOR_MAX_SYNC_CALLS from 20 to XX. It all depends on your hardware, but this helped me go from the standard 60 seconds to maybe 70 in some cases.
Make sure that your calling system does not have a timeout lower than the one you set in XI; otherwise XI will carry on and finish while your partner times out and may end up sending the message twice.
When you go for BPM, the whole workflow engine comes into action. For example, when your mapping takes < 1 second without BPM, inside a BPM the transformation step can take 2 seconds plus one second for the mapping (that's just an example). So the workflow gives you many design possibilities (bridge, error handling), but it can slow down the process, and if you have thousands of messages the performance can be much worse than the same scenario without BPM.
see below links
http://help.sap.com/bp_bpmv130/Documentation/Operation/TuningGuide.pdf
http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/3.0/sap%20exchange%20infrastructure%20tuning%20guide%20xi%203.0.pdf
BPM Performance tuning
BPM Performance issue
BPM performance question
BPM performance- data aggregation persistance
Regards
Chilla.. -
CPU bound and distinct select tuning (10g)
Hi,
I think that my database is CPU bound:
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
CPU time 662 71.8
db file sequential read 11,082 66 6 7.2
control file sequential read 51,677 41 1 4.4
db file scattered read 8,133 39 5 4.3
SQL*Net more data to client 113,379 32 0 3.5
================
==================
And statements such as the following probably cause this issue:
SQL ordered by CPU
CPU CPU per Elapsd Old
Time (s) Executions Exec (s) %Total Time (s) Buffer Gets Hash Value
101.49 160 0.63 15.9 102.55 252,160 2426247326
Module: dirdld.exe
SELECT DISTINCT(APPNM) FROM APPLICATION ORDER BY APPNM
===============================================
(SQL ordered by Gets)
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
713,149 33,280 21.4 13.3 39.67 42.70 2402103054
Module: ORACLE.EXE
INSERT INTO "PARAMETER" ("FAMNM","APPNM","PARTID","PARNAMELOC",
"DLDTYPE","VALUE","FLAG","SEQINFO") VALUES (:B8,:B7,:B6,TRIM(:B5
),:B4,TRIM(:B3),:B2,:B1)
489,906 114 4,297.4 9.1 20.16 29.46 2145980626
Module: ORACLE.EXE
DELETE FROM "PARAMETER" "A1" WHERE "A1"."PARTID"=:B1
============================
============================
So is there any possibility to tune this query? (It is already in the keep buffer pool.)
SELECT DISTINCT(APPNM) FROM APPLICATION ORDER BY APPNM
Plan
SELECT STATEMENT ALL_ROWSCost: 1,008 Bytes: 859,080 Cardinality: 85,908
3 SORT ORDER BY Cost: 1,008 Bytes: 859,080 Cardinality: 85,908
2 HASH UNIQUE Cost: 659 Bytes: 859,080 Cardinality: 85,908
1 TABLE ACCESS FULL TABLE VCESERVICE.APPLICATION Cost: 307 Bytes: 869,150 Cardinality: 86,915
Best Regards, Arkadiusz Masny
Hi,
Thanks, Joe.
But this query is hardcoded into the application logic (a standalone application).
Is it possible to force Oracle to use a materialized view instead of the table, without changing the application?
Best Regards, Arek Masny
Best Regards Arek Masny -
Does anyone have any suggestions for the following set-up?
We have a 4-processor NT box with 512 MB memory and a 15 GB HD.
We're running one instance of WebLogic (256 MB allocated) under Sun's 1.3 HotSpot Server edition JVM. The components are all Java, i.e. we're not calling any external interface such as a database or filesystem at the moment (it's just for testing).
When we hit the server using test clients, whatever we do it is impossible to get CPU utilization above 50 percent, and the clients themselves are not CPU bound either (using between 50 and 400 clients). Currently we're getting about 50 HTTP pages/sec, but I feel we could achieve more than this, as it appears that WebLogic is not using all of the available CPU. I've tried changing the number of threads etc., but this does not seem to make any difference.
The only thing that springs to mind is the possibility that we're network bound, but I find that very hard to believe, as we've never seen that before on the network here. Does anyone have any suggestions?
Thanks
It's impossible for us to tell your limiting factor without some more information.
However, I have run similar benchmarks on NT machines with WLS serving up web pages and Servlets/JSP. I was able to saturate 4 CPUs, but I used 4 100BaseT cards, and I needed to increase my execute thread count to ~35.
-- Rob
-
CPU Usage: Performance questions
Hi,
i've a two node rac 11.2 on aix.
There are some nightly job that consume high cpu.
Looking awr i see:
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 7.1 0.0 0.00 0.07
DB CPU(s): 1.0 0.0 0.00 0.01
Redo size: 1,633,163.0 2,639.6
Logical reads: 112,042.4 181.1
Block changes: 10,777.0 17.4
Physical reads: 286.1 0.5
Physical writes: 174.2 0.3
User calls: 105.3 0.2
Parses: 1,172.9 1.9
Hard parses: 30.2 0.1
W/A MB processed: 11.0 0.0
Logons: 0.8 0.0
Executes: 8,386.1 13.6
Rollbacks: 0.1 0.0
Transactions: 618.7
Top events:
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
DB CPU 7,215 14.1
db file sequential read 882,357 2,914 3 5.7 User I/O
resmgr:cpu quantum 406,418 1,915 5 3.7 Scheduler
buffer busy waits 192,071 466 2 .9 Concurrenc
gc cr grant 2-way 253,278 298 1 .6 Cluster
Host CPU:
Host CPU (CPUs: 6 Cores: 3 Sockets: )
~~~~~~~~ Load Average
Begin End %User %System %WIO %Idle
14.23 5.11 58.5 18.0 1.9 23.5
It shows 23.5% idle CPU.
Looking vmstat for the period:
DATA RUN BCK AVM FRE PRE PPI PPO PFR PSR PCY FIN FSY FCS Cpu User Cpu Sys Cpu Idle Cpu Wait
01/14/2013 22:33:54 16 0 6639173 8973 0 0 0 0 0 0 3716 124593 17276 87 13 0 0
01/14/2013 22:33:55 15 0 6639055 9073 0 0 0 0 0 0 3799 122613 17578 89 11 0 0
01/14/2013 22:33:55 14 0 6636873 11234 0 0 0 0 0 0 3144 108492 16358 90 10 0 0
There is no idle.
Looking Active sessions graphs:
http://imageshack.us/photo/my-images/855/graph01.jpg/
So, why does AWR show only 14.1% of DB CPU and 23.5% idle, while vmstat and active session history show 0% idle?
Thank you.
Edited by: 842366 on 15-Jan-2013 7.53
842366 wrote:
i've a two node rac 11.2 on aix.
There are some nightly job that consume high cpu.
Looking awr i see:
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 7.1 0.0 0.00 0.07
DB CPU(s): 1.0 0.0 0.00 0.01
Redo size: 1,633,163.0 2,639.6
Logical reads: 112,042.4 181.1
Block changes: 10,777.0 17.4
Physical reads: 286.1 0.5
Physical writes: 174.2 0.3
User calls: 105.3 0.2
Parses: 1,172.9 1.9
Hard parses: 30.2 0.1
W/A MB processed: 11.0 0.0
Logons: 0.8 0.0
Executes: 8,386.1 13.6
Rollbacks: 0.1 0.0
Transactions: 618.7
Top events:
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
DB CPU 7,215 14.1
db file sequential read 882,357 2,914 3 5.7 User I/O
resmgr:cpu quantum 406,418 1,915 5 3.7 Scheduler
buffer busy waits 192,071 466 2 .9 Concurrenc
gc cr grant 2-way 253,278 298 1 .6 Cluster
Host CPU (CPUs: 6 Cores: 3 Sockets: )
~~~~~~~~ Load Average
Begin End %User %System %WIO %Idle
14.23 5.11 58.5 18.0 1.9 23.5
Is this a standard one-hour report, and does it cross the peak hour of the graph you published, and is it for the same node ?
There are several anomalies here, not the least of which is that your event percentages don't get close to 100%.
To me the thing that really stands out as a possible source of your problems is the 8,386 executions per second - if there has been a recent dramatic change in performance, it may be related to this.
I'd be taking a quick look at the SQL ordered by CPU and SQL ordered by Executions sections of the report for any clues.
I'd also be cross checking the Time model and OS statistics, and instance activity CPU details to see if they were consistent with the Event figures.
Regards
Jonathan Lewis -
GPU/CPU Temps and Graphics Performance Questions
I am also experiencing some similar oddities with my 15in UMBP. As background on my setup: I am running 10.5.7 (the problem began before the update), I work in an air-conditioned office with a cooling pad for my frequently "toasty" MBP, and I use iStat to monitor my system. I have noticed that my temps are significantly higher while running on the NVIDIA 9600M GT vs. the 9400M.
Here's the problem. With only Mail, iChat, Safari 4, and iTunes running (87-93% idle) I frequently see temps hit 150-155 F for the GPU and 145-150 F for the CPU. While running on the 9400 with the same applications (87-92% idle) I see temps from 90-95 F for both GPU and CPU.
When I first got my MBP I used it mostly with Aperture 2, and I thought it was strange that I couldn't tell the difference in performance between the 9600 and the 9400. Later I tried to use it from time to time in Win XP (Boot Camp) for gaming, but it would get so hot that I could hardly touch the keyboard, and the graphics performance wasn't nearly what I thought it should be. In fact I was only able to run games slightly better than on my company-provided Dell Inspiron D620 (old POS).
On the Mac side I tried to test the performance of my GPU by setting the graphics in X-Plane just to the point where, with the 9600M GT enabled, the program wouldn't tell me that I had too many graphics options turned on. After getting that dialed in I switched over to try it with the 9400 and the performance was nearly identical, maybe even a little better. I'm not a genius when it comes to computers by any means, and I don't know much about what kind of performance I should be seeing from these two cards, but I would think that the 9600 would greatly outperform the 9400.
Any feedback or advice would be greatly appreciated.
I used iStat Pro to get the temperatures and fan speeds. You can download it free here:
http://www.apple.com/downloads/dashboard/status/istatpro.html
I turned it on a few hours ago and the temperature of one of the cores reached 70 degrees C and the fan speeds all went up to around 3000 rpm. I'm just going to leave it off for now and take it in to get serviced. Only problem is I have to make an appointment and who knows how long it will take to get it in. Hope it works out for you.
Kyle -
Swing worker question -- two lines performing a similar task concurrently
Hi,
I've been playing around with swingworker.
The code below has a window with two buttons and two text areas.
When you click a button, the text area will count out 10000 random numbers, and print "done." when it's finished.
Each of the buttons works when pressed separately (one running while the other is idle), but I can't push one, then the other, and have the tasks run independently.
One stops and abruptly finishes while the other pauses.
Sorry to post a lot of code. I'm going through the tutorials and I've looked at the example from the tutorial pretty extensively, but it only runs one task.
thanks.
bp
import java.awt.event.ActionListener;
import java.awt.event.ActionEvent;
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import javax.swing.border.Border;
import javax.swing.BorderFactory;
import java.awt.Insets;
import javax.swing.*;
import javax.swing.SwingWorker;

public class TestGui2 extends JFrame {
    JButton button1;
    JButton button2;
    JTextArea tArea1;
    JTextArea tArea2;
    Border border;

    public void buildGui() {
        JPanel mainPanel = new JPanel();
        mainPanel.setLayout(new GridBagLayout());
        GridBagConstraints mpc = new GridBagConstraints();
        mpc.insets = new Insets(3, 10, 3, 10);
        border = BorderFactory.createLoweredBevelBorder();

        JLabel titleLabel = new JLabel("<html><h1><font face=\"Courier\">GridBagLayout Test Gui</font></h1></html>");
        mpc.weightx = 0.5;
        mpc.fill = GridBagConstraints.CENTER;
        mpc.gridx = 1;
        mpc.gridy = 0;
        mainPanel.add(titleLabel, mpc);

        button1 = new JButton("run job1");
        button1.addActionListener(new Button1Listener());
        mpc.weightx = 0.5;
        mpc.fill = GridBagConstraints.EAST;
        mpc.gridx = 0;
        mpc.gridy = 1;
        mainPanel.add(button1, mpc);

        tArea1 = new JTextArea("sample text");
        tArea1.setBorder(border);
        mpc.weightx = 0.5;
        mpc.fill = GridBagConstraints.HORIZONTAL;
        mpc.gridx = 1;
        mpc.gridy = 1;
        mainPanel.add(tArea1, mpc);

        button2 = new JButton("run job2");
        button2.addActionListener(new Button2Listener());
        mpc.weightx = 0.5;
        mpc.fill = GridBagConstraints.EAST;
        mpc.gridx = 0;
        mpc.gridy = 2;
        mainPanel.add(button2, mpc);

        tArea2 = new JTextArea("sample text");
        tArea2.setBorder(border);
        mpc.weightx = 0.5;
        mpc.fill = GridBagConstraints.HORIZONTAL;
        mpc.gridx = 1;
        mpc.gridy = 2;
        mainPanel.add(tArea2, mpc);

        // frame settings
        add(mainPanel);
        setSize(700, 450);
        setTitle("TestGui2");
        setDefaultCloseOperation(EXIT_ON_CLOSE);
        setVisible(true);
    }

    // inner classes
    class Button1Listener implements ActionListener {
        public void actionPerformed(ActionEvent e) {
            BigJobWorker worker = new BigJobWorker();
            worker.execute();
        }
    }

    class Button2Listener implements ActionListener {
        public void actionPerformed(ActionEvent e) {
            BigJobWorker2 worker2 = new BigJobWorker2();
            worker2.execute();
        }
    }

    // SwingWorker classes
    class BigJobWorker2 extends SwingWorker<Void, Void> {
        protected Void doInBackground() {
            try {
                for (int i = 0; i < 10000; i++) {
                    int num2 = (int) (Math.random() * 100);
                    Integer integerNum2 = num2;
                    // note: touching tArea2 here runs off the EDT
                    tArea2.setText("iteration: " + i + " " + integerNum2.toString());
                    //ReallyBigJob rbj = new ReallyBigJob();
                    //tArea1.setText(rbj.doBigJob().toString());
                }
                tArea2.setText("done.");
            } catch (Exception e3) {
                e3.printStackTrace();
            }
            return null;
        }
    }

    private class BigJobWorker extends SwingWorker<Void, Void> {
        protected Void doInBackground() {
            try {
                for (int i = 0; i < 10000; i++) {
                    int num = (int) (Math.random() * 100);
                    Integer integerNum = num;
                    // note: touching tArea1 here runs off the EDT
                    tArea1.setText("iteration: " + i + " " + integerNum.toString());
                    //ReallyBigJob rbj = new ReallyBigJob();
                    //tArea1.setText(rbj.doBigJob().toString());
                }
                tArea1.setText("done.");
            } catch (Exception e2) {
                e2.printStackTrace();
            }
            return null;
        }
    }

    public static void main(String[] args) {
        javax.swing.SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                TestGui2 test = new TestGui2();
                test.buildGui();
            }
        });
    }
}
BTW, I realize this code isn't as OO as it could be (two SwingWorker classes that do the same thing, two listeners that do the same thing), but I wanted to isolate the problem first.
thanks.
Also, it's not impossible the program is working properly, it just finishes before the textArea can be updated.
Edited by: badperson on Feb 28, 2008 8:24 AM
Edited by: badperson on Feb 28, 2008 8:57 AM
Hi,
I reworked the code, and it now sort of works. What I wanted was to watch each window increment at the same rate, independent of the other. This shows a number in the window intermittently, but not in a smooth continuous flow.
// requires: import java.util.List; import java.util.Iterator;
class BigJobWorker2 extends SwingWorker<Void, String> {
    @Override
    protected Void doInBackground() {
        try {
            for (int i = 0; i < 100000; i++) {
                int num2 = (int) (Math.random() * 100);
                Integer integerNum2 = num2;
                //tArea2.setText("iteration: " + i + " " + integerNum2.toString());
                publish("iteration: " + i + " " + integerNum2.toString());
                //ReallyBigJob rbj = new ReallyBigJob();
                //tArea1.setText(rbj.doBigJob().toString());
            }
            publish("done.");
        } catch (Exception e3) {
            e3.printStackTrace();
            tArea2.setText("EXCEPTION:"); // note: this runs off the EDT; safer to publish() the error
        }
        return null;
    }

    // override process method
    @Override
    protected void process(List<String> messages) {
        Iterator i = messages.iterator();
        while (i.hasNext()) {
            String message = (String) i.next();
            tArea2.setText(message);
        }
    }
} // end BigJobWorker2

thanks,
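The intermittent display is actually expected: publish() coalesces rapid calls, so one process() invocation can deliver a whole batch of messages, and since setText() replaces the text, only the last message of each batch is ever visible. Below is a minimal headless sketch demonstrating the coalescing; the class name, method, and counts are mine for illustration, not from the thread:

```java
import javax.swing.SwingWorker;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class PublishDemo {

    // Runs a worker that publishes `count` values and reports how many
    // process() batches the event dispatch thread saw. Every value arrives,
    // but usually in far fewer batches than there were publish() calls.
    static int[] run(final int count) throws Exception {
        final List<Integer> received =
                Collections.synchronizedList(new ArrayList<Integer>());
        final int[] batches = {0};
        final CountDownLatch done = new CountDownLatch(1);

        SwingWorker<Void, Integer> worker = new SwingWorker<Void, Integer>() {
            @Override
            protected Void doInBackground() {
                for (int i = 0; i < count; i++) {
                    publish(i);            // rapid publishes get coalesced
                }
                return null;
            }

            @Override
            protected void process(List<Integer> chunks) {
                batches[0]++;              // one call may carry many values
                received.addAll(chunks);
                if (received.size() == count) {
                    done.countDown();
                }
            }
        };

        worker.execute();
        done.await(30, TimeUnit.SECONDS);
        return new int[] { received.size(), batches[0] };
    }

    public static void main(String[] args) throws Exception {
        int[] r = run(10000);
        System.out.println("received=" + r[0] + " in " + r[1] + " batches");
    }
}
```

In other words, nothing is being lost; if you want a smooth per-iteration display, either slow the loop down or only show the latest chunk in process().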
badperson -
Javax.swing, ImageIcon question
I'm working on a game in Java, for which I've done most of the programming. Now I'm trying to work on the GUI, something I've never done in Java, and when I asked, I was told I should use Swing, which looks about right: the game is turn-based and won't be particularly fancy. I want a few things listed on the window, along with a textbox and a visual map, and it's the map that is giving me the problem. Basically, I have an 800x600 window, and starting in the top left corner I want a 15x15 array of 'tiles' to make up the map. These 'tiles' are 32x32 .png files, so the map comes out to 480x480, which leaves room for other things. I've been looking at the tutorials on Swing ( http://java.sun.com/docs/books/tutorial/uiswing/TOC.html to be specific) and this is what I came up with.
public static void addComponentsToPane(Container pane) {
    pane.setLayout(null);
    ImageIcon icon = createImageIcon("images/Tile1.png", "grass");
    JLabel mapLabels[][] = new JLabel[15][15];
    for (int x = 0; x < 15; x++) {
        for (int y = 0; y < 15; y++) {
            mapLabels[x][y] = new JLabel(icon);
            mapLabels[x][y].setBounds(x*32, y*32, (x+1)*32, (y+1)*32);
        }
    }
    for (int x = 0; x < 15; x++) {
        for (int y = 0; y < 15; y++) {
            pane.add(mapLabels[x][y]);
        }
    }
}
This is the segment of code that's putting an ImageIcon array (later to be more varied than it currently is) on the screen. I can give the whole class if you need it, but it's really not much of a departure from some of the ones in the tutorial.
here is a screenshot of the problem i have:
http://img.photobucket.com/albums/v682/sqpat17/tiles.png
Basically it's that empty space I'm wondering how to get rid of. I tried changing the bounds, and this worked when I got down to defining the label boundaries as multiples of 21, but not only did this not make sense (why 21 instead of 32?), it came out a little weird:
http://img.photobucket.com/albums/v682/sqpat17/tiles2.png
I'm thinking I need to either figure out a way to put more than one ImageIcon on a single label, use something other than labels, or just figure out how to get rid of that empty space some other way. I'm using labels because they seem to be the most basic items I can put on a frame that hold ImageIcons. Anyway, help would be greatly appreciated :)
You might want to consider using a LayoutManager.
mapLabels[x][y].setBounds(x*32,y*32,(x+1)*32,(y+1)*32);
This would put them back to back if the pictures were 32x32 pixels. Check how many pixels wide they are and see if that's right. Also, in the future, Swing questions should be posted in the Swing forum.
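A likely cause of the gaps, worth checking: setBounds(x, y, width, height) takes a size, not a bottom-right corner, so passing (x+1)*32 and (y+1)*32 as the last two arguments makes each label grow as x and y increase, and JLabel centers its icon inside that oversized area. A small sketch of the intended fixed-size bounds (class and method names are mine, for illustration):

```java
import java.awt.Rectangle;

public class TileBounds {
    // Bounds of tile (col, row) in a grid of fixed-size square tiles.
    // The last two setBounds() arguments are width and height, so for
    // 32x32 tiles they are simply 32, 32 - not (col+1)*32, (row+1)*32.
    static Rectangle tileBounds(int col, int row, int tileSize) {
        return new Rectangle(col * tileSize, row * tileSize, tileSize, tileSize);
    }

    public static void main(String[] args) {
        Rectangle r = tileBounds(2, 3, 32);
        // prints 64,96,32,32
        System.out.println(r.x + "," + r.y + "," + r.width + "," + r.height);
    }
}
```

In the original loop that would be mapLabels[x][y].setBounds(x*32, y*32, 32, 32).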
SQL performance question (between clause)
Hello,
I'm new to SQL tuning and bumped into the following performance problem:
Situation:
--Table 1
CREATE TABLE GGS (
  CHROM_ID  NUMBER(2),
  START_POS NUMBER(10),
  TAG       VARCHAR2(3 CHAR)
);
CREATE INDEX GGS_IDX ON GGS (CHROM_ID, START_POS);
--Table 2
CREATE TABLE LEL (
  CHROM_ID  NUMBER(2),
  START_POS NUMBER(10),
  TAG       VARCHAR2(3 CHAR)
);
CREATE INDEX LEL_IDX ON LEL (CHROM_ID, START_POS);
--Table 3
CREATE TABLE PGD (
  CHROM_ID  NUMBER(2),
  START_POS NUMBER(10),
  TAG       VARCHAR2(3 CHAR)
);
CREATE INDEX PGD_IDX ON PGD (CHROM_ID, START_POS);
For these 3 tables & 3 indexes the statistics are gathered.
I'm issuing the following SQL statements:
select t1.tag, t1.chrom_id, t1.start_pos
from LEL t1
where exists
  (select 'x' from GGS t2
   where t2.chrom_id = t1.chrom_id
   and t2.start_pos = t1.start_pos + 9
   and exists
     (select 'x' from PGD t3
      where t3.chrom_id = t1.chrom_id
      and t3.start_pos = t1.start_pos + 18));
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 27 | 3677 (5)| 00:00:45 |
| 1 | NESTED LOOPS SEMI | | 1 | 27 | 3677 (5)| 00:00:45 |
|* 2 | HASH JOIN | | 118 | 2242 | 3323 (6)| 00:00:40 |
| 3 | SORT UNIQUE | | 428K| 3348K| 257 (5)| 00:00:04 |
| 4 | TABLE ACCESS FULL| PGD | 428K| 3348K| 257 (5)| 00:00:04 |
| 5 | TABLE ACCESS FULL | LEL | 2399K| 25M| 1435 (5)| 00:00:18 |
|* 6 | INDEX RANGE SCAN | GGS_IDX | 1 | 8 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("T3"."CHROM_ID"="T1"."CHROM_ID" AND
"T3"."START_POS"="T1"."START_POS"+18)
6 - access("T2"."CHROM_ID"="T1"."CHROM_ID" AND
"T2"."START_POS"="T1"."START_POS"+9)
select t1.tag, t1.chrom_id, t1.start_pos
from LEL t1
where exists
  (select 'x' from GGS t2
   where t2.chrom_id = t1.chrom_id
   and t2.start_pos between t1.start_pos - 25 and t1.start_pos + 25
   and exists
     (select 'x' from PGD t3
      where t3.chrom_id = t1.chrom_id
      and t3.start_pos between t1.start_pos - 25 and t1.start_pos + 25));
Execution Plan
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 15 | 405 | | 5723 (4)| 00:01:09 |
|* 1 | HASH JOIN SEMI | | 15 | 405 | | 5723 (4)| 00:01:09 |
|* 2 | HASH JOIN RIGHT SEMI| | 5998 | 111K| 8376K| 4788 (4)| 00:00:58 |
| 3 | TABLE ACCESS FULL | PGD | 428K| 3348K| | 257 (5)| 00:00:04 |
| 4 | TABLE ACCESS FULL | LEL | 2399K| 25M| | 1435 (5)| 00:00:18 |
| 5 | TABLE ACCESS FULL | GGS | 1531K| 11M| | 913 (5)| 00:00:11 |
Predicate Information (identified by operation id):
1 - access("T2"."CHROM_ID"="T1"."CHROM_ID")
filter("T2"."START_POS">="T1"."START_POS"-25 AND
"T2"."START_POS"<="T1"."START_POS"+25)
2 - access("T3"."CHROM_ID"="T1"."CHROM_ID")
filter("T3"."START_POS">="T1"."START_POS"-25 AND
"T3"."START_POS"<="T1"."START_POS"+25)
The first query runs fast, a few seconds. The latter runs for ages. Any idea how I could get better performance on the second query? How come the predicted run times for the two queries are not that different (00:00:45 for query 1 versus 00:01:09 for query 2), yet in reality the difference is enormous? Or am I misinterpreting the execution plan output?
Kind Regards,
Gerben
The table data looks like:
CHROM_ID;START_POS;TAG
1;3001429;LEL
1;3001837;LEL
1;3003352;LEL
1;3007849;LEL
1;3008347;LEL
1;3009100;LEL
1;3010504;LEL
1;3016300;LEL
1;3018445;LEL
Hi Rob,
Since PGA_AGGREGATE_TARGET is set, the work area sizes are managed automatically. PGA_AGGREGATE_TARGET is set to approximately 1.26 GB.
I reran the query to monitor the PGA memory usage. Most of the work areas were able to run in optimal mode, only a few in one-pass and none in multi-pass mode.
I know that our problematic query is not responsible for the one-pass work areas.
Do you notice anything abnormal in the PGA statistics? Maybe something else is causing the bad query performance? Are there other init parameters I should look into?
There's also the discrepancy between the explain plan and reality, which is puzzling me...
Groet,
Gerben
SQL> SELECT * from v$pgastat;
NAME VALUE UNIT
aggregate PGA target parameter 1321439232 bytes
aggregate PGA auto target 1152248832 bytes
global memory bound 132136960 bytes
total PGA inuse 41159680 bytes
total PGA allocated 95112192 bytes
maximum PGA allocated 253027328 bytes
total freeable PGA memory 12713984 bytes
process count 20
max processes count 22
PGA memory freed back to OS 1026097152 bytes
total PGA used for auto workareas 0 bytes
maximum PGA used for auto workareas 148202496 bytes
total PGA used for manual workareas 0 bytes
maximum PGA used for manual workareas 536576 bytes
over allocation count 0
bytes processed 1795721216 bytes
extra bytes read/written 657868800 bytes
cache hit percentage 73.18 percent
recompute count (total) 7982
SQL> SELECT optimal_count, round(optimal_count*100/total, 2) optimal_perc,
onepass_count, round(onepass_count*100/total, 2) onepass_perc,
multipass_count, round(multipass_count*100/total, 2) multipass_perc
FROM
(SELECT decode(sum(total_executions), 0, 1, sum(total_executions)) total,
sum(OPTIMAL_EXECUTIONS) optimal_count,
sum(ONEPASS_EXECUTIONS) onepass_count,
sum(MULTIPASSES_EXECUTIONS) multipass_count
FROM v$sql_workarea_histogram
WHERE low_optimal_size > 64*1024);
OPTIMAL_COUNT OPTIMAL_PERC ONEPASS_COUNT ONEPASS_PERC MULTIPASS_COUNT MULTIPASS_PERC
238 96.75 8 3.25 0 0
SQL> SELECT LOW_OPTIMAL_SIZE/1024 low_kb,
(HIGH_OPTIMAL_SIZE+1)/1024 high_kb,
OPTIMAL_EXECUTIONS, ONEPASS_EXECUTIONS, MULTIPASSES_EXECUTIONS
FROM V$SQL_WORKAREA_HISTOGRAM
WHERE TOTAL_EXECUTIONS != 0;
LOW_KB HIGH_KB OPTIMAL_EXECUTIONS ONEPASS_EXECUTIONS MULTIPASSES_EXECUTIONS
2 4 28661 0 0
64 128 27 0 0
128 256 2 0 0
256 512 5 0 0
512 1024 208 0 0
1024 2048 1 0 0
2048 4096 0 2 0
4096 8192 10 0 0
8192 16384 5 0 0
65536 131072 6 6 0
131072 262144 1 0 0
SQL> SELECT name profile, cnt, decode(total, 0, 0, round(cnt*100/total)) percentage
FROM (SELECT name, value cnt, (sum(value) over ()) total
FROM V$SYSSTAT
WHERE name like 'workarea exec%');
PROFILE CNT PERCENTAGE
workarea executions - optimal 28930 100
workarea executions - onepass 8 0
workarea executions - multipass 0 0