Performance Degradation - High Fetches and Parses
Hello,
My analysis of a particular job's trace file drew my attention to:
1) A high rate of parses instead of bind variable usage.
2) High fetches and a low number of rows being processed.
Please let me know how this performance degradation can be minimised. Perhaps the high number of SQL*Net client wait events is due to multiple fetches and round trips with the client.
EXPLAIN PLAN FOR SELECT /*+ FIRST_ROWS (1) */ * FROM SAPNXP.INOB
WHERE MANDT = :A0
AND KLART = :A1
AND OBTAB = :A2
AND OBJEK LIKE :A3 AND ROWNUM <= :A4;
call count cpu elapsed disk query current rows
Parse 119 0.00 0.00 0 0 0 0
Execute 239 0.16 0.13 0 0 0 0
Fetch 239 2069.31 2127.88 0 13738804 0 0
total 597 2069.47 2128.01 0 13738804 0 0
PLAN_TABLE_OUTPUT
Plan hash value: 1235313998
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 268 | 1 (0)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
|* 2 | TABLE ACCESS BY INDEX ROWID| INOB | 2 | 268 | 1 (0)| 00:00:01 |
|* 3 | INDEX SKIP SCAN | INOB~2 | 7514 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=TO_NUMBER(:A4))
2 - filter("OBJEK" LIKE :A3 AND "KLART"=:A1)
3 - access("MANDT"=:A0 AND "OBTAB"=:A2)
filter("OBTAB"=:A2)
18 rows selected.
SQL> SELECT INDEX_NAME,TABLE_NAME,COLUMN_NAME FROM DBA_IND_COLUMNS WHERE INDEX_OWNER='SAPNXP' AND INDEX_NAME='INOB~2';
INDEX_NAME TABLE_NAME COLUMN_NAME
INOB~2 INOB MANDT
INOB~2 INOB CLINT
INOB~2 INOB OBTAB
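For context on why the skip scan can be expensive (a conceptual sketch only, not the actual Oracle implementation): with MANDT bound but CLINT absent from the predicates, an index skip scan on INOB~2 effectively probes one logical subindex per distinct CLINT value, so its cost grows with CLINT's cardinality.

```python
def skip_scan_probes(index_rows, skipped_col):
    # One probe per distinct value of the skipped column: the skip
    # scan treats the index as that many logical subindexes.
    return len({row[skipped_col] for row in index_rows})

# Toy index entries (hypothetical data, for illustration only).
rows = [{"MANDT": "100", "CLINT": c, "OBTAB": "MARA"}
        for c in ("A", "B", "C", "A")]
print(skip_scan_probes(rows, "CLINT"))  # 3 distinct CLINT values -> 3 probes
```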
Is it possible to maximise the rows per fetch?
call count cpu elapsed disk query current rows
Parse 163 0.03 0.00 0 0 0 0
Execute 163 0.01 0.03 0 0 0 0
Fetch 174899 55.26 59.14 0 1387649 0 4718932
total 175225 55.30 59.19 0 1387649 0 4718932
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 27
Rows Row Source Operation
28952 TABLE ACCESS BY INDEX ROWID EDIDC (cr=8505 pr=0 pw=0 time=202797 us)
28952 INDEX RANGE SCAN EDIDC~1 (cr=1457 pr=0 pw=0 time=29112 us)(object id 202995)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 174899 0.00 0.16
SQL*Net more data to client 155767 0.01 5.69
SQL*Net message from client 174899 0.11 208.21
latch: cache buffers chains 2 0.00 0.00
latch free 4 0.00 0.00
********************************************************************************
user4566776 wrote:
My analysis on a particular job trace file drew my attention towards:
1) High rate of Parses instead of Bind variables usage.
But if you look at the text you are using bind variables.
The first query is executed 239 times - which matches the 239 fetches. You cut off some of the useful information from the tkprof output, but the figures show that you're executing more than once per parse call. The time is CPU time spent using a bad execution plan to find no data -- this looks like a bad choice of index, possibly a side effect of the first_rows(1) hint.
2) High fetches and poor number/ low number of rows being processed
The second query is doing a lot of fetches because in 163 executions it is fetching 4.7 million rows at roughly 25 rows per fetch. You might improve performance a little by increasing the array fetch size - but probably not by more than a factor of 2.
You'll notice that even though you record 163 parse calls for the second statement the number of " Misses in library cache during parse" is zero - so the parse calls are pretty irrelevant, the cursor is being re-used.
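To put numbers on the array-fetch suggestion, here is a back-of-envelope sketch (purely illustrative; the 27 rows-per-fetch figure is inferred from the tkprof output above, and the one extra end-of-data fetch per execution is ignored):

```python
import math

def round_trips(total_rows, executions, arraysize):
    # Each execution fetches its share of the rows in batches of
    # `arraysize`; every batch is one client/server round trip.
    rows_per_exec = total_rows / executions
    return executions * math.ceil(rows_per_exec / arraysize)

# Figures from the tkprof output above: 163 executions, 4,718,932 rows.
print(round_trips(4_718_932, 163, 27))   # reproduces the 174,899 fetch calls
print(round_trips(4_718_932, 163, 500))  # a larger array size cuts this to 9,454
```

Fewer round trips mainly shrink the "SQL*Net message from client" time; the CPU cost of visiting 1.4 million buffers is largely unchanged, which matches the "not more than a factor of 2" estimate.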
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan
Similar Messages
-
Performance Issue - higher fetch count
Hi,
The database version is 10.2.0.4.
Below is the tkprof report of an application session having performance issue.
We shared the screens with the application team and were able to see the lag in report generation.
It shows an elapsed time of 157 seconds; however, the same query, when executed directly in the database, takes fractions of a second.
Kindly help and suggest if more detail is needed.
call count cpu elapsed disk query current rows
Parse 149 0.00 0.00 0 0 0 0
Execute 298 0.02 0.02 0 0 0 0
Fetch 298 157.22 156.39 0 38336806 0 298
total 745 157.25 156.42 0 38336806 0 298
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 80
Rows Row Source Operation
2 SORT AGGREGATE (cr=257294 pr=0 pw=0 time=1023217 us)
32 FILTER (cr=257294 pr=0 pw=0 time=6944757 us)
22770 NESTED LOOPS (cr=166134 pr=0 pw=0 time=4691233 us)
22770 NESTED LOOPS (cr=166130 pr=0 pw=0 time=4600141 us)
82910 INDEX FULL SCAN S_LIT_BU_U1 (cr=326 pr=0 pw=0 time=248782 us)(object id 69340)
22770 TABLE ACCESS BY INDEX ROWID S_LIT (cr=165804 pr=0 pw=0 time=559291 us)
82890 INDEX UNIQUE SCAN S_LIT_P1 (cr=82914 pr=0 pw=0 time=247901 us)(object id 69332)
22770 INDEX UNIQUE SCAN S_BU_U2 (cr=4 pr=0 pw=0 time=48958 us)(object id 63064)
20 NESTED LOOPS (cr=91032 pr=0 pw=0 time=268508 us)
22758 INDEX UNIQUE SCAN S_ORDER_P1 (cr=45516 pr=0 pw=0 time=104182 us)(object id 70915)
20 INDEX RANGE SCAN CX_ORDER_LIT_U1 (cr=45516 pr=0 pw=0 time=114669 us)(object id 158158)
20 NESTED LOOPS (cr=128 pr=0 pw=0 time=364 us)
32 INDEX UNIQUE SCAN S_ORDER_P1 (cr=64 pr=0 pw=0 time=144 us)(object id 70915)
20 INDEX RANGE SCAN CX_ORDER_LIT_U1 (cr=64 pr=0 pw=0 time=158 us)(object id 158158)
Rgds,
Sanjay
Edited by: 911847 on Feb 2, 2012 5:53 AM
Edited by: 911847 on Feb 5, 2012 11:50 PM
Hi,
I changed the optimizer to first_rows and took the details below.
09:21:31 SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_mode string FIRST_ROWS_100
09:21:51 SQL> ALTER SESSION SET STATISTICS_LEVEL=ALL;
Session altered.
PLAN_TABLE_OUTPUT
SQL_ID fkcs93gkrt2zz, child number 0
SELECT COUNT (*) FROM SIEBEL.S_LIT_BU T1, SIEBEL.S_BU T2, SIEBEL.S_LIT T3
WHERE T3.BU_ID = T2.PAR_ROW_ID AND T1.BU_ID = '0-R9NH' AND T3.ROW_ID = T1.LIT_ID
AND (T3.X_VISIBILITY_BUSCOMP_ORDER = 'Y') AND (T3.ROW_ID = '1-28B0AH' OR T3.ROW_ID =
'1-28B0AF' OR T3.ROW_ID = '1-2V4GCV' OR T3.ROW_ID = '1-2F5USL' OR T3.ROW_ID =
'1-27PFED' OR T3.ROW_ID = '1-1KO7WJ' OR T3.ROW_ID IN ( SELECT SQ1_T1.LIT_ID FROM
SIEBEL.CX_ORDER_LIT SQ1_T1, SIEBEL.S_ORDER SQ1_T2 WHERE ( SQ1_T1.ORDER_ID =
SQ1_T2.ROW_ID) AND (SQ1_T2.ROW_ID = '1-2VVI61')) AND (T3.ROW_ID = '1-28B0AH' OR
T3.ROW_ID = '1-28B0AF' OR T3.ROW_ID = '1-2V4GCV' OR T3.ROW_ID = '1-2F5USL' OR
T3.ROW_ID = '1-27PFED' OR T3.ROW_ID = '1-1KO7WJ' OR T3.ROW_ID IN ( SELECT
SQ1_T1.LIT_ID FROM SIEBEL.CX_ORDER_LIT SQ1_T1, SIEBEL.S_ORDER SQ1_T2 WHERE (
SQ1_T1.ORDER_ID = SQ1_T2.ROW_ID) AND (SQ1_T2.ROW_ID = '1-2VVI61'))))
Plan hash value: 307628812
| Id | Operation | Name | E-Rows | OMem | 1Mem | Used-Mem |
| 1 | SORT AGGREGATE | | 1 | | | |
|* 2 | FILTER | | | | | |
| 3 | NESTED LOOPS | | 7102 | | | |
| 4 | MERGE JOIN | | 7102 | | | |
|* 5 | TABLE ACCESS BY INDEX ROWID| S_LIT | 7102 | | | |
| 6 | INDEX FULL SCAN | S_LIT_P1 | 41408 | | | |
|* 7 | SORT JOIN | | 41360 | 1186K| 567K| 1054K (0)|
|* 8 | INDEX FULL SCAN | S_LIT_BU_U1 | 41360 | | | |
|* 9 | INDEX UNIQUE SCAN | S_BU_U2 | 1 | | | |
| 10 | NESTED LOOPS | | 1 | | | |
|* 11 | INDEX UNIQUE SCAN | S_ORDER_P1 | 1 | | | |
|* 12 | INDEX RANGE SCAN | CX_ORDER_LIT_U1 | 1 | | | |
| 13 | NESTED LOOPS | | 1 | | | |
|* 14 | INDEX UNIQUE SCAN | S_ORDER_P1 | 1 | | | |
|* 15 | INDEX RANGE SCAN | CX_ORDER_LIT_U1 | 1 | | | |
Predicate Information (identified by operation id):
2 - filter((((INTERNAL_FUNCTION("T3"."ROW_ID") OR IS NOT NULL) AND IS NOT NULL)
OR INTERNAL_FUNCTION("T3"."ROW_ID")))
5 - filter("T3"."X_VISIBILITY_BUSCOMP_ORDER"='Y')
7 - access("T3"."ROW_ID"="T1"."LIT_ID")
filter("T3"."ROW_ID"="T1"."LIT_ID")
8 - access("T1"."BU_ID"='0-R9NH')
filter("T1"."BU_ID"='0-R9NH')
9 - access("T3"."BU_ID"="T2"."PAR_ROW_ID")
11 - access("SQ1_T2"."ROW_ID"='1-2VVI61')
12 - access("SQ1_T1"."ORDER_ID"='1-2VVI61' AND "SQ1_T1"."LIT_ID"=:B1)
14 - access("SQ1_T2"."ROW_ID"='1-2VVI61')
15 - access("SQ1_T1"."ORDER_ID"='1-2VVI61' AND "SQ1_T1"."LIT_ID"=:B1)
Note
- Warning: basic plan statistics not available. These are only collected when:
* hint 'gather_plan_statistics' is used for the statement or
* parameter 'statistics_level' is set to 'ALL', at session or system level
-
Opera performance degraded after systemd and fixing catalyst
Two days ago, I ran pacman -Syu. I noticed after rebooting that the catalyst drivers weren't being loaded. Curiously, a previous pacman -R $(pacman -Qtdq) removed linux-headers, despite them being needed by the installed catalyst-utils. I reinstalled linux-headers, added fglrx to /etc/modules-load.d, and rebooted. The messages at boot showed that the drivers built just fine, and when the system started, all seemed well.
3D acceleration is indeed working; my games run just as they did before. Opera, however, does not. When it works, the performance is awful. CSS animations play back at between one and two frames per second. Also, the application occasionally crashes. When running it from the terminal, there is no output at the point of the crash. The only trace that I have been able to find is in /var/log/everything.log, and is as follows:
Oct 15 22:02:45 localhost kernel: [ 243.616368] opera[16250]: segfault at 28 ip 00007fc94a0764ce sp 00007fffff8a1630 error 4 in fglrx-libGL.so.1.2[fc94a03f000+bf000]
Is there any way I can get opera back to running as well as it did before? I have tried removing opera, moving the ~/.opera directory, and reinstalling opera, the results were the same (minus my settings being gone). Any advice would be greatly appreciated.
There hasn't been an opera update in some time. Here's the log of the update that seems to have caused the problem:
[2012-10-14 17:41] Running 'pacman -Syu'
[2012-10-14 17:41] synchronizing package lists
[2012-10-14 17:41] starting full system upgrade
[2012-10-14 17:43] upgraded libtiff (4.0.2-1 -> 4.0.3-1)
[2012-10-14 17:43] upgraded openexr (1.7.0-2 -> 1.7.1-1)
[2012-10-14 17:43] upgraded xorg-server-common (1.12.4-1 -> 1.13.0-2)
[2012-10-14 17:43] warning: /etc/group installed as /etc/group.pacnew
[2012-10-14 17:43] warning: /etc/passwd installed as /etc/passwd.pacnew
[2012-10-14 17:43] warning: /etc/gshadow installed as /etc/gshadow.pacnew
[2012-10-14 17:43] warning: directory permissions differ on srv/http/
filesystem: 775 package: 755
[2012-10-14 17:43] upgraded filesystem (2012.8-1 -> 2012.10-1)
[2012-10-14 17:43] upgraded dbus-core (1.6.4-1 -> 1.6.8-1)
[2012-10-14 17:43] upgraded util-linux (2.22-6 -> 2.22-7)
[2012-10-14 17:43] upgraded systemd (193-1 -> 194-3)
[2012-10-14 17:43] upgraded mtdev (1.1.2-1 -> 1.1.3-1)
[2012-10-14 17:43] upgraded xf86-input-evdev (2.7.3-1 -> 2.7.3-2)
[2012-10-14 17:43] upgraded xorg-server (1.12.4-1 -> 1.13.0-2)
[2012-10-14 17:43] upgraded lib32-gcc-libs (4.7.1-6 -> 4.7.2-1)
[2012-10-14 17:43] upgraded gcc-libs-multilib (4.7.1-6 -> 4.7.2-1)
[2012-10-14 17:43] upgraded catalyst-utils (12.8-1 -> 12.9-0.1)
[2012-10-14 17:43] installed glu (9.0.0-1)
[2012-10-14 17:43] upgraded glew (1.8.0-1 -> 1.8.0-2)
[2012-10-14 17:43] upgraded freeglut (2.8.0-1 -> 2.8.0-2)
[2012-10-14 17:43] upgraded jasper (1.900.1-7 -> 1.900.1-8)
[2012-10-14 17:43] upgraded openimageio (1.0.8-1 -> 1.0.9-3)
[2012-10-14 17:43] upgraded jack (0.121.3-6 -> 0.121.3-7)
[2012-10-14 17:43] installed opencolorio (1.0.7-1)
[2012-10-14 17:43] upgraded blender (4:2.64-3 -> 5:2.64a-1)
[2012-10-14 17:43] upgraded cairo (1.12.2-2 -> 1.12.2-3)
[2012-10-14 17:43] upgraded xcb-proto (1.7.1-1 -> 1.8-1)
[2012-10-14 17:43] upgraded libxcb (1.8.1-1 -> 1.9-1)
[2012-10-14 17:43] upgraded mesa (8.0.4-3 -> 9.0-1)
[2012-10-14 17:43] upgraded cinelerra-cv (1:2.2-7 -> 1:2.2-9)
[2012-10-14 17:43] upgraded curl (7.27.0-1 -> 7.28.0-1)
[2012-10-14 17:43] upgraded dbus (1.6.4-1 -> 1.6.8-1)
[2012-10-14 17:43] upgraded flashplugin (11.2.202.238-1 -> 11.2.202.243-1)
[2012-10-14 17:43] upgraded gcc-multilib (4.7.1-6 -> 4.7.2-1)
[2012-10-14 17:43] upgraded gegl (0.2.0-3 -> 0.2.0-4)
[2012-10-14 17:43] upgraded git (1.7.12.2-1 -> 1.7.12.3-1)
[2012-10-14 17:43] upgraded gnutls (3.1.2-1 -> 3.1.3-1)
[2012-10-14 17:43] upgraded gstreamer0.10-bad (0.10.23-2 -> 0.10.23-3)
[2012-10-14 17:43] installed opus (1.0.1-2)
[2012-10-14 17:43] upgraded gstreamer0.10-bad-plugins (0.10.23-2 -> 0.10.23-3)
[2012-10-14 17:43] upgraded hdparm (9.39-1 -> 9.42-1)
[2012-10-14 17:43] upgraded libltdl (2.4.2-6 -> 2.4.2-7)
[2012-10-14 17:43] upgraded imagemagick (6.7.9.8-1 -> 6.7.9.8-2)
[2012-10-14 17:43] upgraded sysvinit-tools (2.88-8 -> 2.88-9)
[2012-10-14 17:43] warning: /etc/rc.conf installed as /etc/rc.conf.pacnew
[2012-10-14 17:43] ----
[2012-10-14 17:43] > systemd no longer reads MODULES from rc.conf.
[2012-10-14 17:43] ----
[2012-10-14 17:43] upgraded initscripts (2012.09.1-1 -> 2012.10.1-1)
[2012-10-14 17:43] upgraded iputils (20101006-4 -> 20101006-7)
[2012-10-14 17:43] upgraded khrplatform-devel (8.0.4-3 -> 9.0-1)
[2012-10-14 17:43] upgraded lib32-catalyst-utils (12.8-2 -> 12.9-0.1)
[2012-10-14 17:43] upgraded lib32-dbus-core (1.6.4-1 -> 1.6.8-1)
[2012-10-14 17:43] upgraded lib32-libltdl (2.4.2-6 -> 2.4.2-7)
[2012-10-14 17:43] upgraded lib32-libtiff (4.0.2-1 -> 4.0.3-1)
[2012-10-14 17:43] upgraded lib32-libxcb (1.8.1-2 -> 1.9-1)
[2012-10-14 17:43] installed lib32-libxxf86vm (1.1.2-1)
[2012-10-14 17:43] upgraded lib32-mesa (8.0.4-4 -> 9.0-1)
[2012-10-14 17:43] upgraded libbluray (0.2.2-1 -> 0.2.3-1)
[2012-10-14 17:43] upgraded libdmapsharing (2.9.12-2 -> 2.9.15-1)
[2012-10-14 17:43] upgraded libglapi (8.0.4-3 -> 9.0-1)
[2012-10-14 17:43] upgraded libgbm (8.0.4-3 -> 9.0-1)
[2012-10-14 17:43] upgraded libegl (8.0.4-3 -> 9.0-1)
[2012-10-14 17:43] upgraded libgles (8.0.4-3 -> 9.0-1)
[2012-10-14 17:43] upgraded libldap (2.4.32-1 -> 2.4.33-1)
[2012-10-14 17:43] upgraded libreoffice-en-US (3.6.1-4 -> 3.6.2-2)
[2012-10-14 17:44] upgraded libreoffice-common (3.6.1-4 -> 3.6.2-2)
[2012-10-14 17:44] upgraded libreoffice-calc (3.6.1-4 -> 3.6.2-2)
[2012-10-14 17:44] upgraded libreoffice-impress (3.6.1-4 -> 3.6.2-2)
[2012-10-14 17:44] upgraded libreoffice-writer (3.6.1-4 -> 3.6.2-2)
[2012-10-14 17:44] upgraded libshout (1:2.3.0-1 -> 1:2.3.1-1)
[2012-10-14 17:44] upgraded libtool-multilib (2.4.2-6 -> 2.4.2-7)
[2012-10-14 17:44] upgraded libusbx (1.0.12-2 -> 1.0.14-1)
[2012-10-14 17:44] upgraded libva (1.1.0-1 -> 1.1.0-2)
[2012-10-14 17:44] >>> Updating module dependencies. Please wait ...
[2012-10-14 17:44] >>> Generating initial ramdisk, using mkinitcpio. Please wait...
[2012-10-14 17:44] ==> Building image from preset: 'default'
[2012-10-14 17:44] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
[2012-10-14 17:44] ==> Starting build: 3.5.6-1-ARCH
[2012-10-14 17:44] -> Running build hook: [base]
[2012-10-14 17:44] -> Running build hook: [udev]
[2012-10-14 17:44] -> Running build hook: [autodetect]
[2012-10-14 17:44] -> Running build hook: [pata]
[2012-10-14 17:44] -> Running build hook: [scsi]
[2012-10-14 17:44] -> Running build hook: [sata]
[2012-10-14 17:44] -> Running build hook: [filesystems]
[2012-10-14 17:44] -> Running build hook: [usbinput]
[2012-10-14 17:44] -> Running build hook: [fsck]
[2012-10-14 17:44] ==> Generating module dependencies
[2012-10-14 17:44] ==> Creating gzip initcpio image: /boot/initramfs-linux.img
[2012-10-14 17:44] ==> Image generation successful
[2012-10-14 17:44] ==> Building image from preset: 'fallback'
[2012-10-14 17:44] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-fallback.img -S autodetect
[2012-10-14 17:44] ==> Starting build: 3.5.6-1-ARCH
[2012-10-14 17:44] -> Running build hook: [base]
[2012-10-14 17:44] -> Running build hook: [udev]
[2012-10-14 17:44] -> Running build hook: [pata]
[2012-10-14 17:44] -> Running build hook: [scsi]
[2012-10-14 17:44] -> Running build hook: [sata]
[2012-10-14 17:44] -> Running build hook: [filesystems]
[2012-10-14 17:44] -> Running build hook: [usbinput]
[2012-10-14 17:44] -> Running build hook: [fsck]
[2012-10-14 17:44] ==> Generating module dependencies
[2012-10-14 17:44] ==> Creating gzip initcpio image: /boot/initramfs-linux-fallback.img
[2012-10-14 17:44] ==> Image generation successful
[2012-10-14 17:44] upgraded linux (3.5.4-1 -> 3.5.6-1)
[2012-10-14 17:44] upgraded linux-api-headers (3.5.1-1 -> 3.5.5-1)
[2012-10-14 17:44] upgraded lirc-utils (1:0.9.0-28 -> 1:0.9.0-30)
[2012-10-14 17:44] upgraded net-snmp (5.7.1-3 -> 5.7.1-4)
[2012-10-14 17:44] upgraded nodejs (0.8.11-1 -> 0.8.12-1)
[2012-10-14 17:44] upgraded xine-lib (1.2.2-1 -> 1.2.2-2)
[2012-10-14 17:44] upgraded opencv (2.4.2-2 -> 2.4.2-4)
[2012-10-14 17:44] upgraded rsync (3.0.9-4 -> 3.0.9-5)
[2012-10-14 17:44] upgraded run-parts (4.3.2-1 -> 4.3.4-1)
[2012-10-14 17:44] upgraded smpeg (0.4.4-6 -> 0.4.4-7)
[2012-10-14 17:44] upgraded sqlite (3.7.14-1 -> 3.7.14.1-1)
[2012-10-14 17:44] upgraded sysvinit (2.88-8 -> 2.88-9)
[2012-10-14 17:44] In order to use the new version, reload all virtualbox modules manually.
[2012-10-14 17:44] upgraded virtualbox-host-modules (4.2.0-2 -> 4.2.0-5)
[2012-10-14 17:44] installed lib32-glu (9.0.0-1)
[2012-10-14 17:44] upgraded wine (1.5.14-1 -> 1.5.15-1)
[2012-10-14 17:44] upgraded xbmc (11.0-6 -> 11.0-8)
[2012-10-14 17:44] upgraded xf86-input-wacom (0.17.0-1 -> 0.17.0-2)
[2012-10-14 17:44] upgraded xorg-server-xephyr (1.12.4-1 -> 1.13.0-2)
[2012-10-14 17:44] upgraded xscreensaver (5.19-1 -> 5.19-2)
[2012-10-14 17:44] upgraded xterm (282-1 -> 283-1)
Finally, here is a list of the packages that were removed in the pacman invocation that removed linux-headers.
[2012-10-07 11:28] Running 'pacman -R torus-trooper'
[2012-10-07 11:28] removed torus-trooper (0.22-4)
[2012-10-07 11:28] Running 'pacman -R freealut lib32-alsa-lib lib32-curl lib32-libglapi lib32-libidn lib32-libxxf86vm lib32-nvidia-cg-toolkit lib32-sdl libbulletml linux-headers mono xclip'
[2012-10-07 11:28] removed xclip (0.12-3)
[2012-10-07 11:28] removed mono (2.10.8-1)
[2012-10-07 11:28] removed linux-headers (3.5.4-1)
[2012-10-07 11:28] removed libbulletml (0.0.6-4)
[2012-10-07 11:28] removed lib32-sdl (1.2.15-3)
[2012-10-07 11:28] removed lib32-nvidia-cg-toolkit (3.1-2)
[2012-10-07 11:28] removed lib32-libxxf86vm (1.1.2-1)
[2012-10-07 11:28] removed lib32-libidn (1.25-1)
[2012-10-07 11:28] removed lib32-libglapi (8.0.4-4)
[2012-10-07 11:28] removed lib32-curl (7.27.0-1)
[2012-10-07 11:28] removed lib32-alsa-lib (1.0.26-1)
[2012-10-07 11:28] removed freealut (1.1.0-4)
[2012-10-07 11:28] Running 'pacman -R lib32-libssh2 libgdiplus'
[2012-10-07 11:28] removed libgdiplus (2.10-2)
[2012-10-07 11:28] removed lib32-libssh2 (1.4.2-1)
The system in question is a Dell XPS, with an i7 processor and an ATI Radeon HD5700 series graphics card. I'm using slim and awesome wm, and the arch install is now seven months old.
I know that we're supposed to post entire logs, but I'm not sure which logs are actually relevant, and my logs are quite large. I can definitely provide more information, if it's helpful. I've done some looking around on the wiki, and on google in general. I haven't found anything useful, but it could always be that I'm just phrasing my searches poorly.
Thanks!
I reinstalled the whole system. As said, I couldn't even get to a tty. I know that using the installation disk I could have recovered it, but I decided to reinstall.
-
Performance degradation with COGNOS and BW
Hello,
Do you know how to increase performance when using Cognos to query BW? Cognos seems to need a lot of RAM.
Thanks for your help
Catherine Bellec
In your original compile you don't use any optimisation flags, which tells the compiler to do minimal optimisation - you're basically telling the compiler that you are not interested in performance. Adding -g to this requests maximal debug, so the compiler does even less optimisation, in order that the generated code more closely resembles the original source.
If you are interested in debug, then -g with no optimisation flags gives you the most debuggable code.
If you are interested in optimised code with debug, then try -O -g (or some other level of optimisation). The code will still be debuggable - you'll be able to map disassembly to lines of source, but some things may not be accessible.
If you are using C++, then -g will in SS12 switch off front-end inlining, so again you'll get some performance hit. So use -g0 to get inlining and debug.
HTH,
Darryl. -
DFSR performance question - high latency and high bandwidth - windows 2012r2
Hello,
first question here...
We have a pair of backup servers, connected with a 1G WAN link (290 ms latency).
I am replicating backup files (mainly SQL files) between them (one-way), for DR/BCP purposes.
A Get-DfsrState regularly shows hundreds of files in the waiting state, with only 16 or so being downloaded at the same time. The system is using very little bandwidth, much less than what I have set in the replication properties (256 Mbps).
Is there any parameter I can tweak to increase the number of files being transferred in parallel copies?
Before you ask, the WAN circuits are fine and currently ~80% free. I can easily saturate them using other technologies (rsync, gluster, HTTP). The servers are doing very little as well (2x6C, 32G ram, lots of disks, CPU usually under 10%).
Any idea on how to speed this up? I saw some registry settings in old articles for windows 2008r2, but nothing for 2012r2.
Thanks
Hi,
It seems that the number of downloading files is not related to bandwidth.
By default, a maximum of 16 (four in Windows Server 2003 R2) concurrent downloads are shared among all connections and replication groups. Because connections and replication group updates are not serialized, there is no specific order in which updates are
received. If two schedules are opened, updates are generally received and installed from both connections at the same time.
For more detailed information, please refer to the thread below:
DFSR to Multiple sites over slow bandwidth links
http://social.technet.microsoft.com/Forums/en-US/aedfc06d-9ffa-408c-8852-08fb14c115f0/dfsr-to-multiple-sites-over-slow-bandwidth-links
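A rough way to see why a high-latency link can starve even with spare bandwidth is the bandwidth-delay product: a single TCP stream carries at most one receive window per round trip. The sketch below is illustrative only (a classic 64 KB window is assumed; Windows Server 2012 R2 normally autotunes the window, so real figures will differ):

```python
def per_stream_throughput(window_bytes, rtt_seconds):
    # Bandwidth-delay product limit: at most one full receive window
    # can be in flight per round trip on a single stream.
    return window_bytes / rtt_seconds

# Assumed 64 KB window and the 290 ms latency quoted above.
one_stream = per_stream_throughput(64 * 1024, 0.290)  # ~226 KB/s per stream
total = 16 * one_stream                               # ~3.6 MB/s for 16 downloads
```

At roughly 3.6 MB/s (about 29 Mbit/s), 16 parallel downloads would sit far below the configured 256 Mbps cap, which is consistent with the behaviour described.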
Regards,
Mandy
-
How to investigate DB performance degradation.
We use Oracle11gr2 on win2008R2.
I heard that DB performance degradation is happening and I would like to know how to improve DB performance.
How could I investigate the reason for the DB performance degradation?
Hi,
the first thing to establish is the scope of the problem -- whether it's the entire database, a single query, or a group of queries which have something in common. You cannot rely on users for that.
Then depending on the scope of the problem, you pick a performance tool that matches the scope of the problem, and use it to obtain the diagnostic information.
If you can confirm that the issue is global (almost everything is slow, not just one query), then AWR and ASH may be helpful. For local (i.e. one or several queries) issues, you can use SQL trace, dbms_xplan and ASH. Keep in mind that ASH and AWR require a Diagnostic and Tuning Pack license.
Best regards,
Nikolay -
Performance issue with high CPU and IO
Hi guys,
I am encountering huge user response time on a production system and I don’t know how to solve it.
Doing some extra tests and using the instrumentation that we have in the code we concluded that the DB is the bottleneck.
We generated some AWR reports and noticed that CPU was among the top wait events. We also noticed that, in a random manner, some simple SQL statements take a long time to execute. We activated SQL trace on the system and noticed that very simple SQLs (a unique index access on one table) have huge execution times: 9 s.
In the trace file the huge time was in the fetch phase: 9.1 s CPU and 9.2 s elapsed.
And no, or very small, waits for this specific SQL.
It seems like the bottleneck is the CPU, but at that point there were very few processes running on the DB. Why can we have such a big CPU wait on a simple select? This is a machine with 128 cores. We have quicker responses on machines smaller/busier than this.
We noticed that we had a huge db_cache_size (12G), and after we scaled it down we noticed some improvement, but not enough. How can I prove that there is a link between the high CPU and the big cache size? (There was no wait involved in the SQL execution.) What can we do if we need a big DB cache size?
The second issue is that I tried to execute an sql on a big table (FTS on a big table. no join). Again on that smaller machine it runs in 30 seconds and on this machine it runs in 1038 seconds.
Also generated a trace for this SQL on the problematic machine:
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 402.08 1038.31 1842916 6174343 0 1
total 3 402.08 1038.32 1842916 6174343 0 1
db file sequential read 12419 0.21 40.02
i/o slave wait 135475 0.51 613.03
db file scattered read 135475 0.52 675.15
log file switch completion 5 0.06 0.18
latch: In memory undo latch 6 0.00 0.00
latch: object queue header operation 1 0.00 0.00
********************************************************************************
The high CPU is present here also, but here I have a huge wait on db file scattered read.
Looking at the session running the select, the average wait for db file scattered read was 0.5 s; on the other machine it is about 0.07 s.
I thought this was an IO issue. I did some IO tests at OS level and it seems like the read and write operations are very fast…much faster than on the machine that has the smaller average wait. Why the difference in waits?
One difference between these two DBs is that the problematic one has db_block_size = 16k and the other one has 8k.
I received some reports done at OS level on CPU and IO usage on the problematic machine (in normal operations). It seems like the CPU is very used and the IO stays very low.
On the other machine, the smaller and the faster one, it is other way around.
What is the problem here? How can I test further? Can I link the high CPU to low/slow IO?
We have 10g on Sun OS with ASM.
Thanks in advance.
Yes, there are many things you can and should do to isolate this. But first check that MOS note "Poor Performance With Oracle9i and 10g Releases When Using Dynamic Intimate Shared Memory (DISM)" [ID 1018855.1] isn't messing you up to start.
Also, be sure and post exact patch levels for both Oracle and OS.
Be sure and check all your I/O settings and see what MOS has to say about those.
Are you using ASSM? See Long running update
Since it got a little better with shrinking the SGA size, that might indicate (wild speculation here, something like) one of the problems is simply too much thrashing within the SGA, as oracle decides "small" objects being full scanned in memory is faster than range scans (or whatever) from disk, overloading the cpu, not allowing the cpu to ask for other full scans from I/O. Possibly made worse by row level locking, or some other app issue that just does too much cpu.
You probably have more than one thing wrong. High fetch count might mean you need to adjust the array size on the clients.
Now that that is all out of the way, if you still haven't found the problem, go through http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Edit: Oh, see Solaris 10 memory management conflicts with Automatic PGA Memory Management [ID 460424.1] too.
Edited by: jgarry on Nov 15, 2011 1:45 PM -
Performance degrade and TIMEDB
Hello,
Function RSDDCVER_RFC_BW_STATISTICS has returned the result table with statistics. The "TimeDB" for one of the cubes, "Z_CUBE1", lasts much longer:
INFOCUBE   TIMEDB      DBSEL     DBSEL / TIMEDB
ZCUBE1     36,960937       181         4,897062
ZCUBE2     14,816407   141.457      9547,321425
ZCUBE3      0,644531     3.732      5790,256791
What's the reason for such performance degradation?
Thanks a lot!
AndyML
Hi Andy,
From the scenario mentioned above, I see that the infocube has a huge TIMEDB while very few records were selected.
I think the infocube is storing line-item-level data.
The performance of queries on these cubes will be very poor. You should think about creating aggregates.
As a general rule, an aggregate is reasonable and may be created if:
- Aggregation ratio > 10, i.e. 10 times more records are read than are displayed, and
- Percentage of DB time > 30%, i.e. the time spent on the database is a substantial part of the whole query runtime.
Above details can be known from Transaction ST03.
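The rule of thumb above can be expressed as a small check (an illustrative sketch; the function and parameter names are invented, and the figures would come from ST03):

```python
def aggregate_reasonable(records_read, records_displayed, db_time_pct):
    # Rule of thumb quoted above: more than 10x as many records read
    # as displayed, AND database time above 30% of query runtime.
    ratio = records_read / records_displayed
    return ratio > 10 and db_time_pct > 30

print(aggregate_reasonable(1000, 50, 45))  # ratio 20, DB time 45% -> True
```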
Hope this helps
-Doodle -
Database performance degrade - delete operation
Hi,
I have a big database. One of the tables contains 120 million records, and many tables (more than 50) have referential integrity to this table.
Table structure
Customer (Cust_ID, and other columns). Cust_ID is the primary key. Other tables have referential integrity to Customer.Cust_ID.
There are around 100 thousand records that have an entry only in this (Customer) table. These records have been identified and kept in a table Temp_cust(Cust_ID).
I am running a PL/SQL block which fetches a Cust_ID from Temp_cust and deletes that record from Customer.
It is observed that a delete command takes a long time and the whole system's performance degrades. Even an on-line service that inserts rows into this table appears almost hung.
The system is 24/7 and I have no option to disable any constraint.
Can someone explain why such a simple operation degrades system performance? Please also suggest how to complete the operation without affecting the performance of other operations.
Regards
Karim
Hi antti.koskinen,
There is no on delete rule. All are simple
referential integrity.
Like REFERS CUSTOMER (Cust_ID).
Regards,
Karim
Can you run the following snippet just to make sure (the parameters are the name and owner of the Customer table)?
select table_name, constraint_name, constraint_type, delete_rule
from dba_constraints
where r_constraint_name in
      (select constraint_name
       from dba_constraints
       where owner = upper('&owner')
       and table_name = upper('&table_name')
       and constraint_type = 'P')
/
Also check the last time the table was rebuilt - deletes without rebuilds tend to raise the high water mark.
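One more thing worth checking (a common cause of exactly this symptom, though not confirmed from the post): whether every child table has an index whose leading columns match the foreign key. Without one, each parent-row delete full-scans the child table. The sketch below is a hypothetical helper, not Oracle syntax; in practice you would compare DBA_CONS_COLUMNS against DBA_IND_COLUMNS:

```python
def unindexed_fks(fk_columns, indexes):
    # fk_columns: {child_table: [FK column names in order]}
    # indexes:    {child_table: [[index column names in order], ...]}
    # An FK is covered when some index's leading columns equal the FK
    # columns, letting the parent-row delete avoid a child full scan.
    missing = []
    for table, fk_cols in fk_columns.items():
        covered = any(ix[:len(fk_cols)] == fk_cols
                      for ix in indexes.get(table, []))
        if not covered:
            missing.append(table)
    return missing

# Hypothetical example: INVOICES lacks an index starting with CUST_ID.
fks = {"ORDERS": ["CUST_ID"], "INVOICES": ["CUST_ID"]}
idx = {"ORDERS": [["CUST_ID", "ORDER_DATE"]], "INVOICES": [["INV_DATE"]]}
print(unindexed_fks(fks, idx))  # ['INVOICES']
```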
Performance degradation encountered while running BOE in clustered set up
Problem Statement:
We have a clustered BOE set up in Production with 2 CMS servers (named boe01 and boe02). The Mantenix application (a standard J2EE application in a clustered set up) points to these BOE services, hosted on virtual machines, to generate reports. As soon as the BOE services on both boe01 and boe02 are up and running, performance degradation is observed (response times vary from 7 sec to 30 sec).
The same set up works fine when the BOE services on boe02 are turned off, i.e. only boe01 is up and running. No drastic variation is noticed.
BOE Details : SAP BusinessObjects environment XIR2 SP3 running on Windows 2003 Servers.(Virtual machines)
Possible Problem Areas as per our analysis
1) Node 2 Virtual Machine Issue:
As this is currently part of the Production infrastructure, problem assessment testing is not possible.
2) BOE Configuration Issue
A comparison report to check the build between BOE 01 and BOE 02 - the support team has confirmed no major installation differences apart from a minor operating system setting difference. The question is: is there some configuration/setting that we are missing?
3) Possible BOE Cluster Issue:
Tests in staging environment ( with a similar clustered BOE setup ) have proved inconclusive.
We require your help in
- Root cause Analysis for this problem.
- Any troubleshooting action henceforth.
Another observation from our Weblogic support engineers for the above set up, which may or may not be related to the problem, is mentioned below.
When the services on BOE_2 are shut down and we try to fetch a particular report from BOE_1 (which is running), the following WARNING/ERROR comes up:
07/09/2011 10:22:26 AM EST> <WARN> <com.crystaldecisions.celib.trace.d.if(Unknown Source)> - getUnmanagedService(): svc=BlockingReportSourceRepository,spec=aps<BOE_1> ,cluster:@BOE_OLTP, kind:cacheserver, name:<BOE_2>.cacheserver.cacheserver, queryString:null, m_replaceable:true,uri=osca:iiop://<BOE_1>;SI_SESSIONID=299466JqxiPSPUTef8huXO
com.crystaldecisions.thirdparty.org.omg.CORBA.TRANSIENT: attempt to establish connection failed: java.net.ConnectException: Connection timed out: connect minor code: 0x4f4f0001 completed: No
at com.crystaldecisions.thirdparty.com.ooc.OCI.IIOP.Connector_impl.connect(Connector_impl.java:150)
at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClient.createTransport(GIOPClient.java:233)
at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClientWorkersPool.next(GIOPClientWorkersPool.java:122)
at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClient.getWorker(GIOPClient.java:105)
at com.crystaldecisions.thirdparty.com.ooc.OB.GIOPClient.startDowncall(GIOPClient.java:409)
at com.crystaldecisions.thirdparty.com.ooc.OB.Downcall.preMarshalBase(Downcall.java:181)
at com.crystaldecisions.thirdparty.com.ooc.OB.Downcall.preMarshal(Downcall.java:298)
at com.crystaldecisions.thirdparty.com.ooc.OB.DowncallStub.preMarshal(DowncallStub.java:250)
at com.crystaldecisions.thirdparty.com.ooc.OB.DowncallStub.setupRequest(DowncallStub.java:530)
at com.crystaldecisions.thirdparty.com.ooc.CORBA.Delegate.request(Delegate.java:556)
at com.crystaldecisions.thirdparty.org.omg.CORBA.portable.ObjectImpl._request(ObjectImpl.java:118)
at com.crystaldecisions.enterprise.ocaframework.idl.ImplServ._OSCAFactoryStub.getServices(_OSCAFactoryStub.java:806)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.do(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.getUnmanagedService(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.AbstractStubHelper.getService(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.e.do(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.try(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.p.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.ServiceMgr.getManagedService(Unknown Source)
at com.crystaldecisions.sdk.occa.managedreports.ps.internal.a$a.getService(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.e.do(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.try(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.o.a(Unknown Source)
at com.crystaldecisions.enterprise.ocaframework.p.a(Unknown Source)
We see the above warning 2 or 3 times before the request is processed, and then we see the report. We have checked our configs for the cluster but didn't find anything concrete.
Is this a normal behavior of the software or can we optimize it?
Any assistance that you can provide would be great.
Rahul,
I have exactly the same problem running BO 3.1 SP3 in a 2 machine cluster on AIX. Exact same full install on both machines. When I take down one of the machines the performance is much better.
An example of the problem: when I run the command ./ccm.sh -display -username administrator -password xxx on either box while both are up and running, I sometimes receive a timeout error (over 15 mins).
If I run SQL*Plus directly on the boxes against the CMS DB, the response is instant. tnsping of course shows no problems.
When I bring down one of the machines and run ./ccm.sh -display again, it brings back results in less than a minute...
I am baffled as to the problem so was wondering if you found anything from your end
Cheers
Chris -
Performance degradation. Need advice on starting all over again.
I have never formatted my drive or reinstalled OS X the way I used to with XP in my Windows days. Now there is some performance degradation, and opening applications like Safari, iPhoto and others is really slow. I maintain Time Machine backups of my full Snow Leopard partition. What should I do? Format the HD and reinstall SL, simply restore from TM, or reinstall and then restore...?
I don't really want to carry my Windows attitude of reformatting, reinstalling and then setting up all apps from scratch over to the Mac. I want to leverage my TM backup. Please advise.
Neerav
MacBook 2.4GHz Unibody (Late 2008), 2GB RAM, SL
The hatter wrote:
Those steps, repair permissions? only checks the installed application receipts -- worthless.
Disk Utility doesn't check for bad blocks, and Apple First Aid misses and doesn't fix directory problems that are picked up by 3rd party tools like Disk Warrior.
The hatter's comments do not represent a consensus of opinion about this & are at least partially misleading.
Permissions repairs are indeed limited to comparing receipt info to actual permissions settings, but that is hardly worthless. It is well documented that mis-set permissions will cause a number of problems & resetting them to receipts values is an effective cure for those specific problems. Obviously, that won't cure problems with other causes, but since there is no magic cure-all it would be foolish to expect it to behave like one.
Regarding Disk Utility, it is true that it can't repair certain problems that some 3rd party utilities can; however, it is very effective at identifying file system problems, including those for some file systems the 3rd party apps do not support. It is also the most conservative disk utility available, designed not to attempt any repair that could result in loss of data. This is one reason it isn't as powerful as the 3rd party ones -- it is best to use it first if you suspect you have file system problems & use the more powerful ones only when necessary.
To be fair, Disk Warrior includes a directory optimization function that Disk Utility doesn't. However, an "unoptimized" directory isn't a problem in & of itself, & it is debatable how much real world benefit there is to optimizing the directory, at least with the current OS & modern high performance drives. I used to see noticeable improvements by periodically using Disk Warrior with OS 9 & the drives of that era, but these days my Macs & Snow Leopard seem to do just fine without it.
Basically, it is simple: use the tool that best does what you need to do. There is no benefit from using a sledge hammer when a tack hammer will do; in fact, the sledge hammer may do more harm than good, or just wear you out for no good reason. Also consider the wisdom of the old saying that to a hammer everything looks like a nail. Sometimes, you don't need a tool at all, just the wisdom to know that you don't.
Regarding bad sectors, every drive has them. That is not a concern by itself but the drive suddenly developing new ones is a sure sign of serious problems. Drives keep track of this themselves. Utilities provide a way to query the drives about this & may provide early warning of impending failure, but since the drive is providing the info this is not 100% reliable. For this reason, whether you use one or not, it is extremely important to backup your important data to other devices regularly & often. -
JDBC, SQL*Net wait interface, performance degradation on 10g vs. 9i
Hi All,
I came across a performance issue that I think results from a misconfiguration somewhere between Oracle and JDBC. My system runs 12 threads in Java. Each thread performs a simple 'select a,b,c...f from table_xyz' on a different table (so I have 12 different tables, with cardinalities from 3 to 48 million rows, and one working thread per table).
In each thread I create a result set that is explicitly marked forward-only, the transaction is set read-only, and the fetch size is set to 100,000 records. The Java logic processes records in a standard while(rs.next()) {...} loop.
I'm experiencing performance degradation between Oracle 9i and Oracle 10g running the same Java code, on the same machine, on the same data. The difference is enormous: the 9i run takes 26 hours while the 10g run takes 39 hours.
I have collected statspack for 9i and awr report for 10g. Below I've enclosed top wait events for 9i and 10g
===== 9i ===================
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
db file sequential read 22,939,988 0 6,240 0 0.7
control file parallel write 6,152 0 296 48 0.0
SQL*Net more data to client 2,877,154 0 280 0 0.1
db file scattered read 26,842 0 91 3 0.0
log file parallel write 3,528 0 83 23 0.0
latch free 94,845 0 50 1 0.0
process startup 93 0 5 50 0.0
log file sync 34 0 2 46 0.0
log file switch completion 2 0 0 215 0.0
db file single write 9 0 0 33 0.0
control file sequential read 4,912 0 0 0 0.0
wait list latch free 15 0 0 12 0.0
LGWR wait for redo copy 84 0 0 1 0.0
log file single write 2 0 0 18 0.0
async disk IO 263 0 0 0 0.0
direct path read 2,058 0 0 0 0.0
slave TJ process wait 1 1 0 12 0.0
===== 10g ==================
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
db file scattered read 268,314 .0 2,776 10 0.0
SQL*Net message to client 278,082,276 .0 813 0 7.1
io done 20,715 .0 457 22 0.0
control file parallel write 10,971 .0 336 31 0.0
db file parallel write 15,904 .0 294 18 0.0
db file sequential read 66,266 .0 257 4 0.0
log file parallel write 3,510 .0 145 41 0.0
SQL*Net more data to client 2,221,521 .0 102 0 0.1
SGA: allocation forcing comp 2,489 99.9 27 11 0.0
log file sync 564 .0 23 41 0.0
os thread startup 176 4.0 19 106 0.0
latch: shared pool 372 .0 11 29 0.0
latch: library cache 537 .0 5 10 0.0
rdbms ipc reply 57 .0 3 49 0.0
log file switch completion 5 40.0 3 552 0.0
latch free 4,141 .0 2 0 0.0
I put full blame for the slowdown on the SQL*Net message to client wait event. All I could find about this event is that it indicates a network-related problem. I could accept that if the database and client were on different machines; however, in my case they are on the very same machine.
I'd be very grateful if someone could point me in the right direction: what statistics should I analyze further? What might cause this event to appear? And why does this supposedly external cause affect only the 10g instance?
Thanks in advance,
Rafi.
Hi Steven,
Thanks for the input. It's a fact that I did not gather statistics on my tables. My understanding is that statistics are useful for queries more complex than a simple select * from table_xxx. In my case the tables have no indexes, and there is no filtering condition either. A full table scan is exactly what I want, as all the logic lives in the Java code.
Explain plans are as follows:
======= 10g ================================
PLAN_TABLE_OUTPUT
Plan hash value: 1141003974
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 259 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| xxx | 1 | 259 | 2 (0)| 00:00:01 |
In SQL*Plus I get:
SQL> set autotrace traceonly explain statistics;
SQL> select * from xxx;
36184384 rows selected.
Elapsed: 00:38:44.35
Execution Plan
Plan hash value: 1141003974
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 259 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| xxx | 1 | 259 | 2 (0)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
3339240 consistent gets
981517 physical reads
116 redo size
26535700 bytes received via SQL*Net from client
2412294 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
36184384 rows processed
======= 9i =================================
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | | | |
| 1 | TABLE ACCESS FULL | xxx | | | |
Note: rule based optimization
In SQL*Plus I get:
SQL> set autotrace traceonly explain statistics;
SQL> select * from xxx;
36184384 rows selected.
Elapsed: 00:17:43.06
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'xxx'
Statistics
0 recursive calls
1 db block gets
3306118 consistent gets
957515 physical reads
100 redo size
23659424 bytes sent via SQL*Net to client
26535867 bytes received via SQL*Net from client
2412294 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
36184384 rows processed
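A quick cross-check of the autotrace numbers above: 36,184,384 rows over 2,412,294 roundtrips works out to roughly 15 rows per roundtrip, which matches SQL*Plus's default arraysize of 15. Raising the array/fetch size shrinks the number of SQL*Net roundtrips almost linearly. This is only back-of-the-envelope arithmetic (the FetchMath class and its method are made up for illustration, and protocol overhead is ignored):

```java
// Roundtrip arithmetic for array fetching: each SQL*Net data roundtrip
// returns at most fetchSize rows, so a full scan of `rows` rows needs
// about ceil(rows / fetchSize) roundtrips.
public class FetchMath {
    static long roundtrips(long rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize; // ceiling division
    }

    public static void main(String[] args) {
        long rows = 36_184_384L;                       // row count from the autotrace run
        System.out.println(roundtrips(rows, 15));      // SQL*Plus default arraysize -> ~2.4M roundtrips
        System.out.println(roundtrips(rows, 100_000)); // the JDBC fetch size used in this thread -> 362
    }
}
```

The estimate for arraysize 15 comes out within one roundtrip of the 2,412,294 autotrace reported, which suggests the roundtrip count itself is driven entirely by rows-per-fetch, not by anything network-specific.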
Thanks for pointing out the difference in table scans. I infer that 9i is doing a single-block full table scan (db file sequential read) while 10g is using multi-block reads for the full table scan (db file scattered read).
My current theory is that 9i is faster because sequential reads use contiguous buffer space while scattered reads use discontiguous buffer space. Since I'm accessing data row by row over JDBC, 10g might incur an overhead serving data from discontiguous buffers, and this overhead shows up as the SQL*Net message to client wait. Does that make any sense?
Is there any way (e.g. a hint) to force 10g to use sequential reads instead of scattered reads for the full table scan?
I'll experiment with FTS tuning in 10g by enabling automatic multi-block reads tuning (i.e. db_file_multiblock_read_count=0 instead of 32 as it is now). I'll also check if response time improves after statistics are gathered.
Please advise if you have any other ideas.
Thanks & regards,
Rafi. -
Performance degradation with addition of unicasting option
We have been using the multicast protocol for setting up the data grid between the application nodes, with the VM arguments:
*-Dtangosol.coherence.clusteraddress=${Broadcast Address} -Dtangosol.coherence.clusterport=${Broadcast port}*
As a certain node of the application was expected to sit in a different subnet where multicasting was not feasible, we opted for well-known addressing, with the following additional VM arguments on the server nodes (all in the same subnet):
*-Dtangosol.coherence.machine=${server_name} -Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.localport=${server_port}*
and the following on the remote client node, pointing to one of the server nodes:
*-Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.wka.port=${server_port}*
But this drastically deteriorated performance, both in pushing data into the cache and in getting events via the map listener.
From the Coherence logging statements it doesn't seem that multicasting is being used, at least within the server nodes (which are in the same subnet).
Is it feasible for unicast and multicast to coexist? How can we verify whether that is already set up?
Is the performance degradation with well-known addressing a known limitation, and to be expected?
Hi Mahesh,
From your description it sounds as if you've configured each node with a WKA list including just itself. This would result in N clusters rather than 1. Your client would then be serviced by the resources of just a single cache server rather than an entire cluster. If this is the case you will see that all nodes are identified as member 1. To set up WKA I would suggest using the override file rather than system properties, and placing perhaps 10% of your nodes on that list. Then use this exact same file for all nodes. If I've misinterpreted your configuration, please provide additional details.
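For illustration, the override-file approach might look something like this (a sketch only: the addresses and ports below are placeholders, and the same file would be deployed unchanged to every node):

```xml
<?xml version="1.0"?>
<!-- tangosol-coherence-override.xml: deploy the identical copy to all nodes -->
<coherence>
  <cluster-config>
    <unicast-listener>
      <well-known-addresses>
        <!-- roughly 10% of the cluster nodes; placeholder addresses -->
        <socket-address id="1">
          <address>192.168.1.101</address>
          <port>8088</port>
        </socket-address>
        <socket-address id="2">
          <address>192.168.1.102</address>
          <port>8088</port>
        </socket-address>
      </well-known-addresses>
    </unicast-listener>
  </cluster-config>
</coherence>
```

Because every node reads the same WKA list, they all discover the same senior member and form one cluster instead of N singleton clusters.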
Thanks,
Mark
Oracle Coherence -
Performance degradation with -g compiler option
Hello
Our measurement of a simple program compiled with and without the -g option shows a big performance difference.
Machine:
SunOS xxxxx 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire-V250
Compiler:
CC: Sun C++ 5.9 SunOS_sparc Patch 124863-08 2008/10/16
#include <ctime>
#include <iostream>

int main(int argc, char **argv)
{
    for (int i = 0; i < 60000; i++) {
        int *mass = new int[60000];
        for (int j = 0; j < 10000; j++) {
            mass[j] = j;
        }
        delete [] mass;
    }
    return 0;
}
Compilation and execution with -g:
CC -g -o test_malloc_deb.x test_malloc.c
ptime test_malloc_deb.x
real 10.682
user 10.388
sys 0.023
Without -g:
CC -o test_malloc.x test_malloc.c
ptime test_malloc.x
real 2.446
user 2.378
sys 0.018
As you can see, the performance degradation with "-g" is about 4x.
Our product is compiled with the -g option and, before shipment, is stripped using the 'strip' utility.
This gives us the ability to analyze customer core files against the non-stripped executable.
But our tests show that stripping does not recover the performance of an executable compiled without '-g'.
So we are losing performance with this build method.
Is this expected compiler behavior?
Is there any way to have the -g option on without losing performance?
In your original compile you don't use any optimisation flags, which tells the compiler to do minimal optimisation - you're basically telling the compiler that you are not interested in performance. Adding -g to this requests maximal debug, so the compiler does even less optimisation, in order that the generated code more closely resembles the original source.
If you are interested in debug, then -g with no optimisation flags gives you the most debuggable code.
If you are interested in optimised code with debug, then try -O -g (or some other level of optimisation). The code will still be debuggable - you'll be able to map disassembly to lines of source, but some things may not be accessible.
If you are using C++, then in Sun Studio 12 -g also switches off front-end inlining, so you'll take a further performance hit. Use -g0 to get both inlining and debug.
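Putting those suggestions next to the original compile line, the variants to benchmark would be along these lines (a sketch; the output file names are arbitrary):

```
CC -O -g -o test_malloc_opt.x test_malloc.c   # optimised build that keeps debug info
CC -g0 -o test_malloc_g0.x test_malloc.c      # debug info, but front-end inlining kept (C++)
```

Timing these with ptime, as above, should show how much of the 4x gap comes from the missing optimisation versus the disabled inlining.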
HTH,
Darryl. -
Performance degradation factor 1000 on failover???
Hi,
we are gaining first experience with WLS 5.1 EBF 8 clustering on
NT4 SP 6 workstation.
We have two servers in the cluster, both on the same machine but with different IP addresses (as it has to be)!
In general it seems to work: we have a test client connecting to one of the servers and using a stateless test EJB which does nothing but write into weblogic.log.
When this server fails, the other server resumes to work the client
requests, BUT VERY VERY VERY SLOW!!!
- I should repeat VERY a thousand times, because a normal client
request takes about 10-30 ms
and after failure/failover it takes 10-15 SECONDS!!!
As naive as I am I want to know: IS THIS NORMAL?
After the server is back, the performance is also back to normal,
but we were expecting a much smaller
performance degradation.
So I think we are doing something totally wrong!
Do we need some Network solution to make failover performance better?
Or is there a chance to look closer at deployment descriptors or
weblogic.system.executeThreadCount
or weblogic.system.percentSocketReaders settings?
Thanks in advance for any help!
Fleming
See http://www.weblogic.com/docs51/cluster/setup.html#680201
Basically, the rule of thumb is to set the number of execute threads ON
THE CLIENT to 2 times the number of servers in the cluster and the
percent socket readers to 50%. In your case with 8 WLS instances in the
cluster, add the following to the java command line used to start your
client:
-Dweblogic.system.executeThreadCount=16
-Dweblogic.system.percentSocketReaders=50
Hope this helps,
Robert
Fleming Frese wrote:
> Hi Mike,
>
> thanks for your reply.
>
> We do not have HTTP clients or Servlets, just EJBs and clients
> in the same LAN,
> and the failover should be handled by the replica-aware stubs.
> So we thought we need no Proxy solution for failover. Maybe we
> need a DNS to serve failover if this
> increases our performance?
>
> The timeout clue sounds reasonable, but I would expect that the
> stub times out once and than switches
> to the other server for subsequent requests. There should be a
> refresh (after 3 Minutes?) when the stub
> gets new information about the servers in the cluster, so he could
> check then if the server is back.
> This works perfectly with load balancing: If a new server joins
> the cluster, I automatically receives
> requests after a while.
>
> Fleming
>
> "Mike Reiche" <[email protected]> wrote:
> >
> >It sounds like every request is first timing out it's
> >connection
> >attempt (10 seconds, perhaps?) on the 'down' instance
> >before
> >trying the second instance. How do requests 'failover'?
> >Do you
> >have Netscape, Apache, or IIS with a wlproxy module? Or
> >do
> >you simply have a DNS that takes care of that?
> >
> >Mike
> >
> >
> >
> >"Fleming Frese" <[email protected]> wrote:
> >>
> >>Hi,
> >>
> >>we are gaining first experience with WLS 5.1 EBF 8 clustering
> >>on
> >>NT4 SP 6 workstation.
> >>We have two servers in the cluster, both on same machine
> >>but with
> >>different IP adresses (as it has to be)!
> >>
> >>In general it seems to work: we have a test client connecting
> >>to
> >>one of the servers and
> >>uses a stateless test EJB which does nothing but writing
> >>into weblogic.log.
> >>
> >>When this server fails, the other server resumes to work
> >>the client
> >>requests, BUT VERY VERY VERY SLOW!!!
> >> - I should repeat VERY a thousand times, because a normal
> >>client
> >>request takes about 10-30 ms
> >>and after failure/failover it takes 10-15 SECONDS!!!
> >>
> >>As naive as I am I want to know: IS THIS NORMAL?
> >>
> >>After the server is back, the performance is also back
> >>to normal,
> >>but we were expecting a much smaller
> >>performance degradation.
> >>
> >>So I think we are doing something totally wrong!
> >>Do we need some Network solution to make failover performance
> >>better?
> >>Or is there a chance to look closer at deployment descriptors
> >>or
> >>weblogic.system.executeThreadCount
> >>or weblogic.system.percentSocketReaders settings?
> >>
> >>Thanks in advance for any help!
> >>
> >>Fleming
> >>
> >