Performance degradation in PL/SQL XML parsing
We are trying to use the XML PL/SQL parser and noticed performance degradation as we run it multiple times. We narrowed it down to the following call:
doc := xmlparser.getDocument(p);
The first time the procedure is run, the elapsed time reported by SQL*Plus is about 0.45 seconds, but as we run it repeatedly in the same session the elapsed time keeps increasing by about 0.02 seconds per run. If we log out and start a fresh session, we start again from 0.45 seconds.
We noticed similar degradation with
p := xmlparser.newParser;
but we worked around it by making 'p' a package variable, initializing it once and reusing it for all invocations.
Any suggestions?
Thank you.
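A pattern worth trying (a sketch only; the xml_util package name, its spec, and the parseBuffer usage are assumptions, not your actual code): keep the parser as a package variable, as you already do, and additionally free every document with xmldom.freeDocument after use. DOM trees that are never freed keep accumulating in session memory, which would explain the steady per-run growth.

```sql
CREATE OR REPLACE PACKAGE BODY xml_util AS
  g_parser      xmlparser.parser;      -- created once per session
  g_initialized BOOLEAN := FALSE;

  PROCEDURE process_xml(p_xml IN VARCHAR2) IS
    doc xmldom.DOMDocument;
  BEGIN
    IF NOT g_initialized THEN
      g_parser      := xmlparser.newParser;
      g_initialized := TRUE;
    END IF;
    xmlparser.parseBuffer(g_parser, p_xml);
    doc := xmlparser.getDocument(g_parser);
    -- ... process doc here ...
    xmldom.freeDocument(doc);           -- release the DOM tree each call
  END process_xml;
END xml_util;
/
```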
Can I enhance the PL/SQL code for better performance? Probably you can.
Or is it normal for it to take this long to process this many rows? It should take a few minutes, not several hours.
But please provide some more details, like your database version.
I suggest TRACING the session that executes the PL/SQL code, with WAIT events, so you'll see where and on what the time is spent; you'll identify your problem statements very quickly (after you or your DBA have run TKPROF on the trace file).
SQL> alter session set events '10046 trace name context forever, level 12';
SQL> execute your PL/SQL code here
SQL> exit
This will give you a .trc file in your udump directory on the server.
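Once the session exits, the raw trace is formatted with tkprof; a sketch (the SID, path and trace file name below are hypothetical - pick the newest .trc file in your udump directory):

```shell
cd $ORACLE_HOME/admin/ORCL/udump    # or the path from 'show parameter user_dump_dest'
tkprof orcl_ora_12345.trc report.txt sys=no waits=yes sort=exeela,fchela
```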
http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
Also this informative thread can give you more ideas:
HOW TO: Post a SQL statement tuning request - template posting
as well as doing a search on 10046 at AskTom, http://asktom.oracle.com will give you more examples.
and reading Oracle's Performance Tuning Guide: http://www.oracle.com/pls/db102/to_toc?pathname=server.102%2Fb14211%2Ftoc.htm&remark=portal+%28Getting+Started%29
Similar Messages
-
SQL Performance Degrades Severely in WAN
The Oracle server is located in the central LAN and the client is located in a remote LAN. The two LANs are connected via a 10 Mbps wide-area link; inside each LAN the speed is 100 Mbps. If the SQL commands are issued in the same LAN as the Oracle server, they run fast. However, if the same commands are issued from the remote LAN, they are severely degraded, almost 10 times slower, even though only a few rows are returned by these SQL commands. My questions are: what is the reason for this performance degradation, and how can I improve performance for the remote client?
The server is Oracle817 and OPS, and the SQL commands are issued in PB programs in the remote client.
Thanks very much.
Thank you very much.
I found another point which might lead to the performance problem. The server's listener.ora is configured as follows:
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
And the client's TNSNAMES.ORA is configured as follows:
EMIS02.HZJYJ.COM.CN =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.26.17.18)(PORT = 1521))
(CONNECT_DATA =
(SERVICE_NAME = emis)
It shows that the listener protocol is set to IPC, while the client is set to the TCP protocol. Could there be network latency from protocol conversion between IPC and TCP?
Thanks a lot. -
JDBC, SQL*Net wait interface, performance degradation on 10g vs. 9i
Hi All,
I came across a performance issue that I think results from misconfiguration of something between Oracle and JDBC. The logic of my system runs 12 threads in Java. Each thread performs a simple 'select a,b,c...f from table_xyz' on a different table (so I have 12 different tables, with cardinalities from 3 to 48 million rows, and one worker thread per table).
In each thread I create a result set that is explicitly marked as forward-only, the transaction is set read-only, and the fetch size is set to 100,000 records. The Java logic processes records in a standard while(rs.next()) {...} loop.
I'm experiencing performance degradation between execution on Oracle 9i and Oracle 10g of the same java code, on the same machine, on the same data. The difference is enormous, 9i execution takes 26 hours while 10g execution takes 39 hours.
I have collected statspack for 9i and awr report for 10g. Below I've enclosed top wait events for 9i and 10g
===== 9i ===================
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
db file sequential read 22,939,988 0 6,240 0 0.7
control file parallel write 6,152 0 296 48 0.0
SQL*Net more data to client 2,877,154 0 280 0 0.1
db file scattered read 26,842 0 91 3 0.0
log file parallel write 3,528 0 83 23 0.0
latch free 94,845 0 50 1 0.0
process startup 93 0 5 50 0.0
log file sync 34 0 2 46 0.0
log file switch completion 2 0 0 215 0.0
db file single write 9 0 0 33 0.0
control file sequential read 4,912 0 0 0 0.0
wait list latch free 15 0 0 12 0.0
LGWR wait for redo copy 84 0 0 1 0.0
log file single write 2 0 0 18 0.0
async disk IO 263 0 0 0 0.0
direct path read 2,058 0 0 0 0.0
slave TJ process wait 1 1 0 12 0.0
===== 10g ==================
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
db file scattered read 268,314 .0 2,776 10 0.0
SQL*Net message to client 278,082,276 .0 813 0 7.1
io done 20,715 .0 457 22 0.0
control file parallel write 10,971 .0 336 31 0.0
db file parallel write 15,904 .0 294 18 0.0
db file sequential read 66,266 .0 257 4 0.0
log file parallel write 3,510 .0 145 41 0.0
SQL*Net more data to client 2,221,521 .0 102 0 0.1
SGA: allocation forcing comp 2,489 99.9 27 11 0.0
log file sync 564 .0 23 41 0.0
os thread startup 176 4.0 19 106 0.0
latch: shared pool 372 .0 11 29 0.0
latch: library cache 537 .0 5 10 0.0
rdbms ipc reply 57 .0 3 49 0.0
log file switch completion 5 40.0 3 552 0.0
latch free 4,141 .0 2 0 0.0
I put full blame for the slowdown on the 'SQL*Net message to client' wait event. All I could find about this event is that it indicates a network-related problem. I would accept that if the database and client were on different machines; however, in my case they are on the very same machine.
I'd be very grateful if someone could point me in the right direction: which statistics should I analyze further? What might cause this event to appear? Why would a probable cause that is said to be outside the database affect only the 10g instance?
Thanks in advance,
Rafi.
Hi Steven,
Thanks for the input. It's a fact that I did not gather statistics on my tables. My understanding is that statistics are useful for queries more complex than a simple select * from table_xxx. In my case the tables don't have indexes, and there is no filtering condition either. A full table scan is what I actually want, since all the logic is inside the Java code.
Explain plans are as follows:
======= 10g ================================
PLAN_TABLE_OUTPUT
Plan hash value: 1141003974
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 259 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| xxx | 1 | 259 | 2 (0)| 00:00:01 |
In sqlplus I get:
SQL> set autotrace traceonly explain statistics;
SQL> select * from xxx;
36184384 rows selected.
Elapsed: 00:38:44.35
Execution Plan
Plan hash value: 1141003974
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 259 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| xxx | 1 | 259 | 2 (0)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
3339240 consistent gets
981517 physical reads
116 redo size
26535700 bytes received via SQL*Net from client
2412294 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
36184384 rows processed
======= 9i =================================
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | | | |
| 1 | TABLE ACCESS FULL | xxx | | | |
Note: rule based optimization
In sqlplus I get:
SQL> set autotrace traceonly explain statistics;
SQL> select * from xxx;
36184384 rows selected.
Elapsed: 00:17:43.06
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'xxx'
Statistics
0 recursive calls
1 db block gets
3306118 consistent gets
957515 physical reads
100 redo size
23659424 bytes sent via SQL*Net to client
26535867 bytes received via SQL*Net from client
2412294 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
36184384 rows processed
Thanks for pointing out the difference in table scans. I infer that 9i is doing a single-block full table scan (db file sequential read) while 10g is using a multi-block full table scan (db file scattered read).
I now have a theory that 9i is faster because sequential reads use contiguous buffer space while scattered reads use discontiguous buffer space. Since I'm accessing data row by row through JDBC, 10g might have overhead in serving data from discontiguous buffer space, and this overhead shows up as the 'SQL*Net message to client' wait. Does that make any sense?
Is there any way (e.g. with a hint) to force 10g to use sequential reads instead of scattered reads for a full table scan?
I'll experiment with FTS tuning in 10g by enabling automatic multi-block read tuning (i.e. db_file_multiblock_read_count=0 instead of the current 32). I'll also check whether response time improves after statistics are gathered.
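For what it's worth, the roundtrip arithmetic in the autotrace output above is consistent with the sqlplus default arraysize of 15 rather than with a buffer-space effect: roundtrips ≈ rows / array size, so the fetch/array size is the main lever on the 'SQL*Net message to client' wait count. A quick sanity check (Python; the figures are copied from the statistics above):

```python
import math

rows = 36_184_384        # rows selected (autotrace output above)
roundtrips = 2_412_294   # SQL*Net roundtrips to/from client

# Observed rows shipped per network roundtrip: about 15,
# which matches the sqlplus default arraysize of 15.
print(round(rows / roundtrips))

# Each roundtrip pays one 'SQL*Net message to client' wait, so a
# larger array/fetch size shrinks the wait count proportionally:
def trips(array_size: int) -> int:
    return math.ceil(rows / array_size)

print(trips(15))     # roughly 2.4 million roundtrips
print(trips(1000))   # roughly 36 thousand
```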
Please advise if you have any other ideas.
Thanks & regards,
Rafi. -
Performance Degradation - High Fetches and Parses
Hello,
My analysis on a particular job trace file drew my attention towards:
1) High rate of Parses instead of Bind variables usage.
2) High fetches and poor number/ low number of rows being processed
Please let me know how the performance degradation can be minimised. Perhaps the high number of SQL*Net client wait events is due to multiple fetches and round trips to the client.
EXPLAIN PLAN FOR SELECT /*+ FIRST_ROWS (1) */ * FROM SAPNXP.INOB
WHERE MANDT = :A0
AND KLART = :A1
AND OBTAB = :A2
AND OBJEK LIKE :A3 AND ROWNUM <= :A4;
call count cpu elapsed disk query current rows
Parse 119 0.00 0.00 0 0 0 0
Execute 239 0.16 0.13 0 0 0 0
Fetch 239 2069.31 2127.88 0 13738804 0 0
total 597 2069.47 2128.01 0 13738804 0 0
PLAN_TABLE_OUTPUT
Plan hash value: 1235313998
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 268 | 1 (0)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
|* 2 | TABLE ACCESS BY INDEX ROWID| INOB | 2 | 268 | 1 (0)| 00:00:01 |
|* 3 | INDEX SKIP SCAN | INOB~2 | 7514 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=TO_NUMBER(:A4))
2 - filter("OBJEK" LIKE :A3 AND "KLART"=:A1)
3 - access("MANDT"=:A0 AND "OBTAB"=:A2)
filter("OBTAB"=:A2)
18 rows selected.
SQL> SELECT INDEX_NAME,TABLE_NAME,COLUMN_NAME FROM DBA_IND_COLUMNS WHERE INDEX_OWNER='SAPNXP' AND INDEX_NAME='INOB~2';
INDEX_NAME TABLE_NAME COLUMN_NAME
INOB~2 INOB MANDT
INOB~2 INOB CLINT
INOB~2 INOB OBTAB
Is it possible to maximise the rows per fetch?
call count cpu elapsed disk query current rows
Parse 163 0.03 0.00 0 0 0 0
Execute 163 0.01 0.03 0 0 0 0
Fetch 174899 55.26 59.14 0 1387649 0 4718932
total 175225 55.30 59.19 0 1387649 0 4718932
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 27
Rows Row Source Operation
28952 TABLE ACCESS BY INDEX ROWID EDIDC (cr=8505 pr=0 pw=0 time=202797 us)
28952 INDEX RANGE SCAN EDIDC~1 (cr=1457 pr=0 pw=0 time=29112 us)(object id 202995)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 174899 0.00 0.16
SQL*Net more data to client 155767 0.01 5.69
SQL*Net message from client 174899 0.11 208.21
latch: cache buffers chains 2 0.00 0.00
latch free 4 0.00 0.00
********************************************************************************
user4566776 wrote:
My analysis on a particular job trace file drew my attention towards:
1) High rate of Parses instead of Bind variables usage.
But if you look at the text you are using bind variables.
The first query is executed 239 times - which matches the 239 fetches. You cut off some of the useful information from the tkprof output, but the figures show that you're executing more than once per parse call. The time is CPU time spent using a bad execution plan to find no data -- this looks like a bad choice of index, possibly a side effect of the first_rows(1) hint.
2) High fetches and poor number/ low number of rows being processed
The second query is doing a lot of fetches because in 163 executions it is fetching 4.7 million rows at roughly 25 rows per fetch. You might improve performance a little by increasing the array fetch size - but probably not by more than a factor of 2.
You'll notice that even though you record 163 parse calls for the second statement the number of " Misses in library cache during parse" is zero - so the parse calls are pretty irrelevant, the cursor is being re-used.
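The rows-per-fetch arithmetic above can be checked directly from the tkprof figures (Python; the numbers are copied from the trace output):

```python
import math

fetches = 174_899    # fetch calls (tkprof output above)
rows = 4_718_932     # rows returned in total

# Observed array (fetch) size: roughly 25-27 rows per fetch call.
print(round(rows / fetches))

# Doubling the array size roughly halves the fetch-call count, but the
# server-side row processing is unchanged - hence "probably not by more
# than a factor of 2".
print(math.ceil(rows / (2 * round(rows / fetches))))
```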
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
Performance Degradation of new Servers
Hi All,
We are experiencing massive performance degradation on production when the system is under heavy use. It seems to be at its worst around month end. The worst effected transactions are the Cost / Profit centre 'line item' reports using RCOPCA02 (and other similar programs).
We have the message server running on the database instance as well as 2x App servers. The Msg Srv and One of the App servers are very similar builds
4x AMD Opteron 875
16gb RAM (10gb Pagefile)
Both running Win Server 2003
We're using MS SQL 2000
The other message server is a bit weaker but has been around for some time (few years) and hasn't caused any issues.
We have recently moved the Msg Server from an old (much weaker) server to the new build and since then seem to have performance issues. Initially after the move we had issues with the number of Page Table Entries (they were down at about 6,000-8,000). Using the /3GB /USERVA=2900 switches we have this up to about 49,000.
If anyone has had a similar experience or could offer some assistance it would be much appreciated!!!
Cheers,
Kye
While investigating a different issue we found that we have 3 servers sharing the same set of disks on the SAN. Two of these were message servers (R/3 and BW); when both were running, the 'Average Disk Queue Length' at times reached 400 (it should be around 1-3).
We've moved one of these instances back to DR which has relieved some of the pressure from the disks.
We have also added another index to the GLPCA table (which was causing most of the problems). -
Hello,
Below I provide complete code to reproduce the behavior I am observing. You can run it in tempdb or any other database; it is not important which. The test query provided at the top of the script is pretty silly, but I have observed the same performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (based on what I observed on my machine). Here are the steps, with numbers corresponding to the numbers in the script:
1. Run script from #1 to #7. This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
2. Run test query (at the top of the script). Here are the execution statistics:
Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 5514 ms,
elapsed time = 1389 ms.
3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
4. Run test query (at the top of the script). Here are the execution statistics:
Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 828 ms,
elapsed time = 392 ms.
As you can see the query is clearly faster. Yay for columnstore indexes!.. But let's continue.
5. Run script from #10 to #12 (note that this might take some time to execute). This will move about 80% of the data in both tables to a different partition. You should be able to see that the data has been moved when running Step #11.
6. Run test query (at the top of the script). Here are the execution statistics:
Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 8172 ms,
elapsed time = 3119 ms.
And now look, the I/O stats look the same as before, but the performance is the slowest of all our tries!
I am not going to paste here execution plans or the detailed properties for each of the operators. They show up as expected -- column store index scan, parallel/partitioned = true, both estimated and actual number of rows is less than during the second
run (when all of the data resided on the same partition).
So the question is: why is it slower?
Thank you for any help!
Here is the code to re-produce this:
--==> Test Query - begin --<===
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT COUNT(1)
FROM Txns AS z WITH(NOLOCK)
LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
WHERE z.RecordStatus = 1
--==> Test Query - end --<===
--===========================================================
--1. Clean-up
IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
--2. Create partition function
CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
--3. Partition scheme
CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
--4. Create Main table
CREATE TABLE dbo.Main(
SetID int NOT NULL,
SubSetID int NOT NULL,
TxnID int NOT NULL,
ColBatchID int NOT NULL,
ColMadeId int NOT NULL,
RecordStatus tinyint NOT NULL DEFAULT ((1))
) ON PS_Scheme(RecordStatus)
--5. Create Txns table
CREATE TABLE dbo.Txns(
TxnID int IDENTITY(1,1) NOT NULL,
GroupID int NULL,
SiteID int NULL,
Period datetime NULL,
Amount money NULL,
CreateDate datetime NULL,
Descr varchar(50) NULL,
RecordStatus tinyint NOT NULL DEFAULT ((1))
) ON PS_Scheme(RecordStatus)
--6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
-- 40 mln. rows - approx. 4 min
--6.1 Populate Main table
DECLARE @NumberOfRows INT = 40000000
INSERT INTO Main (
SetID,
SubSetID,
TxnID,
ColBatchID,
ColMadeID,
RecordStatus)
SELECT TOP (@NumberOfRows)
SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
RecordStatus = 1
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
--6.2 Populate Txns table
-- 10 mln. rows - approx. 1 min
SET @NumberOfRows = 10000000
INSERT INTO Txns (
GroupID,
SiteID,
Period,
Amount,
CreateDate,
Descr,
RecordStatus)
SELECT TOP (@NumberOfRows)
GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
RecordStatus = 1
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
--7. Add PK's
-- 1 min
ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
--==> Run test Query --<===
--===========================================================
-- Replace regular indexes with clustered columnstore indexes
--===========================================================
--8. Drop existing indexes
ALTER TABLE Txns DROP CONSTRAINT PK_Txns
DROP INDEX Main.CDX_Main
--9. Create clustered columnstore indexes (on partition scheme!)
-- 1 min
CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
--==> Run test Query --<===
--===========================================================
-- Move about 80% the data into a different partition
--===========================================================
--10. Update "RecordStatus", so that data is moved to a different partition
-- 14 min (32002557 row(s) affected)
UPDATE Main
SET RecordStatus = 2
WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
-- 4.5 min (7999999 row(s) affected)
UPDATE Txns
SET RecordStatus = 2
WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
--11. Check data distribution
SELECT
OBJECT_NAME(SI.object_id) AS PartitionedTable
, DS.name AS PartitionScheme
, SI.name AS IdxName
, SI.index_id
, SP.partition_number
, SP.rows
FROM sys.indexes AS SI WITH (NOLOCK)
JOIN sys.data_spaces AS DS WITH (NOLOCK)
ON DS.data_space_id = SI.data_space_id
JOIN sys.partitions AS SP WITH (NOLOCK)
ON SP.object_id = SI.object_id
AND SP.index_id = SI.index_id
WHERE DS.type = 'PS'
AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
ORDER BY 1, 2, 3, 4, 5;
PartitionedTable PartitionScheme IdxName index_id partition_number rows
Main PS_Scheme CDX_Main 1 1 7997443
Main PS_Scheme CDX_Main 1 2 32002557
Main PS_Scheme CDX_Main 1 3 0
Main PS_Scheme CDX_Main 1 4 0
Txns PS_Scheme PK_Txns 1 1 2000001
Txns PS_Scheme PK_Txns 1 2 7999999
Txns PS_Scheme PK_Txns 1 3 0
Txns PS_Scheme PK_Txns 1 4 0
--12. Update statistics
EXEC sys.sp_updatestats
--==> Run test Query --<===
Hello Michael,
I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 251 ms, elapsed time = 128 ms.
As an explanation of the behavior: because an UPDATE statement on a clustered columnstore index is executed as a DELETE plus an INSERT, you ended up with the original row groups having almost all of their data flagged as deleted, plus almost the same amount of new row groups holding the updated data. I suppose scanning the deleted bitmap caused the additional slowness at your end, or something related to that "fragmentation".
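If the deleted bitmap is indeed the cause, rebuilding the columnstore indexes (as in Ivan's test) compacts away the delete-flagged rows; a sketch using the index and table names from the script above:

```sql
-- Rebuild removes rows flagged as deleted from the row groups.
ALTER INDEX CDX_Main ON dbo.Main REBUILD PARTITION = ALL;
ALTER INDEX PK_Txns  ON dbo.Txns REBUILD PARTITION = ALL;
-- Or target only the partition that received the moved data:
-- ALTER INDEX CDX_Main ON dbo.Main REBUILD PARTITION = 2;
```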
Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer -
Database performance degrade - delete operation
Hi,
I have a big database. One of the tables contains 120 million records, and many tables (more than 50) have referential integrity to it.
Table structure
Customer (Cust_ID, and other columns). Cust_ID is Primary Key. Other tables has referential integrity to Customer.Cust_ID.
There are around 100 thousand records that have an entry only in this (Customer) table. These records are identified and kept in a table Temp_cust(Cust_ID).
I am running a PL/SQL block which fetches a Cust_ID from Temp_cust and deletes that record from Customer.
It is observed that a delete command takes a long time and whole-system performance degrades. Even an online service that inserts rows into this table appears to be almost hung.
The system is 24/7 and I have no option to disable any constraint.
Can someone explain why such a simple operation degrades system performance? Please also suggest how to complete the operation without affecting the performance of other operations.
Regards
Karim
Hi antti.koskinen,
There is no ON DELETE rule; all are simple referential integrity constraints, like REFERENCES CUSTOMER (Cust_ID).
Regards,
Karim
Can you run the following snippet just to make sure (params are the name and owner of the Customer table):
select table_name, constraint_name, constraint_type, delete_rule
from dba_constraints
where r_constraint_name in
(select constraint_name
from dba_constraints
where owner = upper('&owner')
and table_name = upper('&table_name')
and constraint_type = 'P')
/
Also check when the table was last rebuilt - deletes do not lower the high water mark.
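A classic cause of slow parent-table deletes with many referencing tables is unindexed foreign-key columns: every row deleted from Customer then forces a scan of each unindexed child table. A sketch of a dictionary check (single-column FKs only; 'APP_OWNER' is a placeholder for the actual schema owner):

```sql
-- Child FK columns referencing CUSTOMER's primary key that have no
-- index starting on that column.
select c.table_name, c.constraint_name, cc.column_name
from   dba_constraints  c
join   dba_cons_columns cc
       on cc.owner = c.owner
      and cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'
and    c.r_constraint_name in
       (select constraint_name
        from   dba_constraints
        where  owner = 'APP_OWNER'
        and    table_name = 'CUSTOMER'
        and    constraint_type = 'P')
and not exists
       (select 1
        from   dba_ind_columns ic
        where  ic.table_owner = c.owner
        and    ic.table_name  = c.table_name
        and    ic.column_name = cc.column_name
        and    ic.column_position = 1);
```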
Ability to perform ALTER SESSION SET SQL TRACE but not all alter clauses
I see that in order to run the ALTER SESSION SET SQL_TRACE command, the user must be explicitly granted the ALTER SESSION privilege; the CREATE SESSION privilege alone is not enough. Is there a way to grant the ability to perform ALTER SESSION SET SQL_TRACE but not the other clauses, such as GUARD, PARALLEL and RESUMABLE?
Thanks
Sathya
If you are using Oracle 10g or above, you can use the DBMS_SESSION.session_trace_enable procedure;
it doesn't require the ALTER SESSION system privilege.
Simple example:
SQL> connect test/test@//192.168.1.2:1521/xe
Connected.
SQL> alter session set tracefile_identifier='my_id';
Session altered.
SQL> alter session set sql_trace = true
2 ;
alter session set sql_trace = true
ERROR at line 1:
ORA-01031: insufficient privileges
SQL> execute dbms_session.session_trace_enable;
PL/SQL procedure successfully completed.
SQL> select * from user_sys_privs;
USERNAME PRIVILEGE ADM
TEST CREATE PROCEDURE NO
TEST CREATE TABLE NO
TEST CREATE SEQUENCE NO
TEST CREATE TRIGGER NO
TEST SELECT ANY DICTIONARY NO
TEST CREATE SYNONYM NO
TEST UNLIMITED TABLESPACE NO
7 rows selected.
SQL> execute dbms_session.session_trace_disable;
PL/SQL procedure successfully completed.
SQL> disconnect
Disconnected from Oracle Database 10g Release 10.2.0.1.0 - Production
And here is the result from tkprof:
TKPROF: Release 10.2.0.1.0 - Production on Sat Oct 23 00:53:07 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: xe_ora_176_my_id.trc
( ---- cut ---- )
select *
from
user_sys_privs
call count cpu elapsed disk query current rows
Parse 1 0.08 0.08 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.01 0.01 0 15 0 7
total 4 0.09 0.09 0 15 0 7
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61
Rows Row Source Operation
7 HASH GROUP BY (cr=15 pr=0 pw=0 time=11494 us)
7 CONCATENATION (cr=15 pr=0 pw=0 time=4913 us)
0 MERGE JOIN CARTESIAN (cr=4 pr=0 pw=0 time=1169 us)
0 NESTED LOOPS (cr=4 pr=0 pw=0 time=793 us)
0 TABLE ACCESS FULL SYSAUTH$ (cr=4 pr=0 pw=0 time=592 us)
0 INDEX RANGE SCAN I_SYSTEM_PRIVILEGE_MAP (cr=0 pr=0 pw=0 time=0 us)(object id 312)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL USER$ (cr=0 pr=0 pw=0 time=0 us)
7 NESTED LOOPS (cr=11 pr=0 pw=0 time=3429 us)
9 HASH JOIN (cr=9 pr=0 pw=0 time=2705 us)
9 TABLE ACCESS FULL SYSAUTH$ (cr=4 pr=0 pw=0 time=512 us)
63 TABLE ACCESS FULL USER$ (cr=5 pr=0 pw=0 time=914 us)
7 INDEX RANGE SCAN I_SYSTEM_PRIVILEGE_MAP (cr=2 pr=0 pw=0 time=510 us)(object id 312)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 20.64 20.65
BEGIN dbms_session.session_trace_disable; END;
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.00 0 0 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 -
KNOWN ISSUES 3513544: Performance degradation...
I see that version 3.1 of the Microsoft Drivers for PHP for SQL Server was published on 12/12/2014 and is available on
http://www.microsoft.com/en-us/download/details.aspx?id=20098. Thank you Microsoft...
One thing that bothered me though is the "Known issue" described at the end of the release.txt file included in the SQLSRV31.EXE package:
KNOWN ISSUES
"3513544: Performance degradation when using Microsoft Drivers 3.1 for PHP for SQL Server with Windows 7/Windows Server 2008 R2 and previous versions. Clients connecting to supported versions of Microsoft SQL Server may notice decreased performance when
opening and closing connections in a Windows 7/Windows Server 2008 R2 environment. The recommended course of action is to upgrade to Windows 8/Windows Server 2012 or later."
Has anybody experienced that "decreased performance when opening and closing connections" problem on Windows Server 2008 R2? If you have, how bad is it?
And are there any solutions - other than the "recommended course of action" ("upgrade to Windows 8/Windows Server 2012 or later")? In a corporate environment, upgrading an OS isn't always a simple thing that you do in a few minutes. It can take weeks or months of planning and testing...
From what I found by googling, there seem to be few articles mentioning this issue. At worst, MS may just be driving you to upgrade :P
I can't find a specific document on this; this may be a better question for a dedicated PHP forum.
How to investigate DB performance degradation.
We use Oracle11gr2 on win2008R2.
I heard that DB performance degradation is happening, and I would like to know how to improve DB performance.
How can I investigate the reason for the DB performance degradation?
Hi,
the first thing to establish is the scope of the problem -- whether it's the entire database, a single query, or a group of queries which have something in common. You cannot rely on users for that.
Then depending on the scope of the problem, you pick a performance tool that matches the scope of the problem, and use it to obtain the diagnostic information.
If you can confirm that the issue is global (almost everything is slow, not just one query), then AWR and ASH may be helpful. For local (i.e. one or several queries) issues, you can use SQL trace, dbms_xplan and ASH. Keep in mind that ASH and AWR require a Diagnostic and Tuning Pack license.
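For the single-query case, one concrete way to use dbms_xplan is to pull the cursor's plan with actual runtime statistics. This is a sketch, not from the thread: the sql_id is a placeholder you would look up in v$sql first, and ALLSTATS only shows actual row counts if the query ran with the GATHER_PLAN_STATISTICS hint or statistics_level=ALL.

```sql
-- Locate the statement, then display its plan with actual vs. estimated rows.
SELECT sql_id, sql_text FROM v$sql WHERE sql_text LIKE '%your slow query%';
SELECT * FROM TABLE(dbms_xplan.display_cursor('&sql_id', NULL, 'ALLSTATS LAST'));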
Best regards,
Nikolay -
Performance degradation with addition of unicasting option
We have been using the multi-casting protocol for setting up the data grid between the application nodes with the vm arguments as
-Dtangosol.coherence.clusteraddress=${Broadcast Address} -Dtangosol.coherence.clusterport=${Broadcast port}
As a certain node in the application was expected to be in a different subnet and multi-casting was not feasible, we opted for well-known addressing with the following additional VM arguments set up on the server nodes (all in the same subnet)
-Dtangosol.coherence.machine=${server_name} -Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.localport=${server_port}
and the following on the remote client node, pointing to one of the server nodes:
-Dtangosol.coherence.wka=${server_ip} -Dtangosol.coherence.wka.port=${server_port}
But this deteriorated performance drastically, both in pushing data into the cache and in getting events via a map listener.
From the Coherence logging statements it doesn't seem that multi-casting is being used, at least within the server nodes (which are in the same subnet).
Is it feasible to have uni-casting and multi-casting coexist? How can I verify whether that is already set up?
Is the performance degradation with well-known addressing a known limitation and expected?
Hi Mahesh,
From your description it sounds as if you've configured each node with a WKA list including just itself. This would result in N clusters rather than 1. Your client would then be serviced by the resources of just a single cache server rather than an entire cluster. If this is the case you will see that all nodes are identified as member 1. To set up WKA I would suggest using the override file rather than system properties, and placing perhaps 10% of your nodes on that list. Then use this exact same file for all nodes. If I've misinterpreted your configuration please provide additional details.
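As an illustration of the override-file approach Mark describes, a minimal tangosol-coherence-override.xml (for the Coherence 3.x schema) might look like the sketch below. The host names and ports are placeholders, not values from this thread; every node in the cluster would use the same file.

```xml
<coherence>
  <cluster-config>
    <unicast-listener>
      <!-- A shared well-known-addresses list: every node reads the same file,
           so all members agree on which senior nodes to contact. -->
      <well-known-addresses>
        <socket-address id="1">
          <address>server1.example.com</address>
          <port>8088</port>
        </socket-address>
        <socket-address id="2">
          <address>server2.example.com</address>
          <port>8088</port>
        </socket-address>
      </well-known-addresses>
    </unicast-listener>
  </cluster-config>
</coherence>
```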
Thanks,
Mark
Oracle Coherence -
Performance degradation with -g compiler option
Hello
Our measurement of a simple program compiled with and without the -g option shows a big performance difference.
Machine:
SunOS xxxxx 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire-V250
Compiler:
CC: Sun C++ 5.9 SunOS_sparc Patch 124863-08 2008/10/16
#include <ctime>
#include <iostream>

int main(int argc, char **argv)
{
    for (int i = 0; i < 60000; i++) {
        int *mass = new int[60000];
        for (int j = 0; j < 10000; j++) {
            mass[j] = j;
        }
        delete [] mass;
    }
    return 0;
}
Compilation and execution with -g:
CC -g -o test_malloc_deb.x test_malloc.c
ptime test_malloc_deb.x
real 10.682
user 10.388
sys 0.023
Without -g:
CC -o test_malloc.x test_malloc.c
ptime test_malloc.x
real 2.446
user 2.378
sys 0.018
As you can see, the performance degradation with "-g" is about 4x.
Our product is compiled with the -g option, and before shipment it is stripped using the 'strip' utility.
This gives us the ability to open customer core files using the non-stripped executable.
But our tests show that stripping does not give the performance of an executable compiled without '-g'.
So we are losing performance by using this compilation method.
Is this expected compiler behavior?
Is there any way to have the -g option "on" and not lose performance?
In your original compile you don't use any optimisation flags, which tells the compiler to do minimal optimisation - you're basically telling the compiler that you are not interested in performance. Adding -g to this requests maximal debug, so the compiler does even less optimisation, in order that the generated code more closely resembles the original source.
If you are interested in debug, then -g with no optimisation flags gives you the most debuggable code.
If you are interested in optimised code with debug, then try -O -g (or some other level of optimisation). The code will still be debuggable - you'll be able to map disassembly to lines of source, but some things may not be accessible.
If you are using C++, then in SS12 -g will switch off front-end inlining, so again you'll get some performance hit. Use -g0 to get both inlining and debug.
HTH,
Darryl. -
Performance degradation factor 1000 on failover???
Hi,
we are gaining first experience with WLS 5.1 EBF 8 clustering on
NT4 SP 6 workstation.
We have two servers in the cluster, both on same machine but with
different IP addresses (as it has to be)!
In general it seems to work: we have a test client connecting to one of the servers and using a stateless test EJB which does nothing but write into weblogic.log.
When this server fails, the other server resumes to work the client
requests, BUT VERY VERY VERY SLOW!!!
- I should repeat VERY a thousand times, because a normal client
request takes about 10-30 ms
and after failure/failover it takes 10-15 SECONDS!!!
As naive as I am I want to know: IS THIS NORMAL?
After the server is back, the performance is also back to normal,
but we were expecting a much smaller
performance degradation.
So I think we are doing something totally wrong!
Do we need some Network solution to make failover performance better?
Or is there a chance to look closer at deployment descriptors or
weblogic.system.executeThreadCount
or weblogic.system.percentSocketReaders settings?
Thanks in advance for any help!
Fleming
See http://www.weblogic.com/docs51/cluster/setup.html#680201
Basically, the rule of thumb is to set the number of execute threads ON
THE CLIENT to 2 times the number of servers in the cluster and the
percent socket readers to 50%. In your case with 8 WLS instances in the
cluster, add the following to the java command line used to start your
client:
-Dweblogic.system.executeThreadCount=16
-Dweblogic.system.percentSocketReaders=50
Hope this helps,
Robert
Fleming Frese wrote:
> Hi Mike,
>
> thanks for your reply.
>
> We do not have HTTP clients or Servlets, just EJBs and clients
> in the same LAN,
> and the failover should be handled by the replica-aware stubs.
> So we thought we need no Proxy solution for failover. Maybe we
> need a DNS to serve failover if this
> increases our performance?
>
> The timeout clue sounds reasonable, but I would expect that the
> stub times out once and then switches
> to the other server for subsequent requests. There should be a
> refresh (after 3 Minutes?) when the stub
> gets new information about the servers in the cluster, so it could
> check then if the server is back.
> This works perfectly with load balancing: If a new server joins
> the cluster, I automatically receive
> requests after a while.
>
> Fleming
>
> "Mike Reiche" <[email protected]> wrote:
> >
> >It sounds like every request is first timing out it's
> >connection
> >attempt (10 seconds, perhaps?) on the 'down' instance
> >before
> >trying the second instance. How do requests 'failover'?
> >Do you
> >have Netscape, Apache, or IIS with a wlproxy module? Or
> >do
> >you simply have a DNS that takes care of that?
> >
> >Mike
-
Hi All,
I am facing serious application pool crashes on one of my customer's production SharePoint servers. The Application Error log in the Event Viewer says -
Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7afa2
Faulting module name: ntdll.dll, version: 6.1.7601.17514, time stamp: 0x4ce7c8f9
Exception code: 0xc0000374
Fault offset: 0x00000000000c40f2
Faulting process id: 0x1414
Faulting application start time: 0x01ce5edada76109d
Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
Report Id: 5a69ec1e-cace-11e2-9be2-441ea13bf8be
At the same time the SharePoint ULS logs say -
1)
06/13/2013 03:44:29.53 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
General 8e2s
Medium Unknown SPRequest error occurred. More information: 0x80070005 8b343224-4aa6-490c-8a2a-ce06ac160773
06/13/2013 03:44:35.03 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
General
8e25 Medium Failed to look up string with key "FSAdmin_SiteSettings_UserContextManagement_ToolTip", keyfile Microsoft.Office.Server.Search.
8b343224-4aa6-490c-8a2a-ce06ac160773
06/13/2013 03:44:35.03 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
General 8l3c
Medium Localized resource for token 'FSAdmin_SiteSettings_UserContextManagement_ToolTip' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web
Server Extensions\14\Template\Features\SearchExtensions\ExtendedSearchAdminLinks.xml". 8b343224-4aa6-490c-8a2a-ce06ac160773
2)
06/13/2013 03:44:29.01 w3wp.exe (0x0808) 0x2DF0 SharePoint Foundation
Web Parts
emt4 High Error initializing Safe control - Assembly:Microsoft.Office.SharePoint.ClientExtensions, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c TypeName: Microsoft.Office.SharePoint.ClientExtensions.Publishing.TakeListOfflineRibbonControl
Error: Could not load type 'Microsoft.Office.SharePoint.ClientExtensions.Publishing.TakeListOfflineRibbonControl' from assembly 'Microsoft.Office.SharePoint.ClientExtensions, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c'.
8b343224-4aa6-490c-8a2a-ce06ac160773
06/13/2013 03:44:29.50 w3wp.exe (0x0808)
0x2DF0 SharePoint Foundation Logging Correlation Data
xmnv Medium Site=/ 8b343224-4aa6-490c-8a2a-ce06ac160773
3)
06/13/2013 03:43:59.67 w3wp.exe (0x263C) 0x24D8 SharePoint Foundation
Performance 9fx9
Medium Performance degradation: unfetched field [PublishingPageContent] caused extra roundtrip. at Microsoft.SharePoint.SPListItem.GetValue(SPField fld,
Int32 columnNumber, Boolean bRaw, Boolean bThrowException) at Microsoft.SharePoint.SPListItem.GetValue(String strName, Boolean bThrowException) at Microsoft.SharePoint.SPListItem.get_Item(String fieldName)
at Microsoft.SharePoint.WebControls.BaseFieldControl.get_ItemFieldValue() at Microsoft.SharePoint.Publishing.WebControls.RichHtmlField.RenderFieldForDisplay(HtmlTextWriter output) at Microsoft.SharePoint.WebControls.BaseFieldControl.Render(HtmlTextWriter
output) at Microsoft.SharePoint.Publishing.WebControls.BaseRichField.Render(HtmlTextWriter output) at Microsoft.SharePoint.Publishing.WebControls.RichHtmlField.R...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...ender(HtmlTextWriter output) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection
children) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
at System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) at System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter
writer) at System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output) at System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWrit...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...er writer, ICollection children) at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer,
ICollection children) at System.Web.UI.Page.Render(HtmlTextWriter writer) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context)
at Microsoft.SharePoint.Publishing.TemplateRedirectionPage.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionSte...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...p step, Boolean& completedSynchronously) at System.Web.HttpApplication.PipelineStepManager.ResumeSteps(Exception
error) at System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb) at System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context)
at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext,
IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr managedHttpContext, IntPtr nativeRequestContext, IntPtr module...
b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67* w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance
9fx9 Medium ...Data, Int32 flags) at System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr managedHttpContext,
IntPtr nativeRequestContext, IntPtr moduleData, Int32 flags) b8d0b8ca-8386-441f-8fce-d79fe72556e1
06/13/2013 03:43:59.67 w3wp.exe (0x263C)
0x24D8 SharePoint Foundation Performance g4zd
High Performance degradation: note field [PublishingPageContent] was not in demoted fields. b8d0b8ca-8386-441f-8fce-d79fe72556e1
Does anybody have any idea what's going on? I need to fix this ASAP as we are supposed to go live in the next few days.
Soumalya
Hello Soumalya,
Do you have an update on your issue? We are actually experiencing a similar issue at a new customer.
- Dennis | Netherlands | Blog |
Twitter -
Performance Degradation with EJBs
I have a small J2EE application that consists of a Session EJB calling 3 Entity EJBs that access the database. It is a simple Order capture application. The 3 Entity beans are called Orders, OrderItems and Inventory.
A transaction consists of inserting a record into the order table, inserting 5 records into the orderitems table and updating the quantity field in the inventory table for each order item in an order. With this transaction I observe performance degradation: the transactions per second decrease dramatically within 5 minutes of running.
When I modify the transaction to insert a single record into the orderitems table I do not observe performance degradation. The only difference in this transaction is we go through the for loop 1 time as opposed to 5 times. The code is exactly the same as in the previous case with 5 items per order.
Therefore I believe the problem is a performance degradation on Entity EJBs that
get invoked in a loop.
I am using OC4J 10.1.3.3.
I am using CMP (Container Managed Persistence) and CMT (Container Managed Transactions). The Entity EJBs were all generated by Oracle JDeveloper.
EJB version being used is 2.1.
One thing to consider is downloading and using the Oracle AD4J utility to see if it can help you identify any possible bottlenecks, on the application server or the database.
AD4J can be used to monitor/profile/trace applications in real time with no instrumentation required on the application. Just install it into the container and go. It can even trace a request from the app server down into the database and show you what the situation is down there (it needs a DB agent installed to do that).
Overview:
http://www.oracle.com/technology/products/oem/pdf/wp_productionappdiagnostics.pdf
Download:
http://www.oracle.com/technology/software/products/oem/htdocs/jade.html
Install/Config Guide:
http://download.oracle.com/docs/cd/B16240_01/doc/install.102/e11085/toc.htm
Usage Scenarios:
http://www.oracle.com/technology/products/oem/pdf/oraclead4j_usagescenarios.pdf