Oracle 11g performance issue (BITMAP CONVERSION TO ROWIDS)
I have two instances of Oracle 11g.
I ran the same query on both instances.
One instance returns the result in 1 second, but the other instance returns the result in 10 seconds.
The explain plans for both instances follow.
instance 1
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 143 | 59 (2)| 00:00:01 |
| 1 | HASH GROUP BY | | 1 | 143 | 59 (2)| 00:00:01 |
| 2 | VIEW | VM_NWVW_2 | 1 | 143 | 59 (2)| 00:00:01 |
| 3 | HASH UNIQUE | | 1 | 239 | 59 (2)| 00:00:01 |
| 4 | NESTED LOOPS | | | | | |
| 5 | NESTED LOOPS | | 1 | 239 | 58 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
| 6 | NESTED LOOPS | | 1 | 221 | 57 (0)| 00:00:01 |
| 7 | NESTED LOOPS | | 1 | 210 | 55 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 184 | 54 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 1 | 158 | 53 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | 1 | 139 | 52 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 105 | 50 (0)| 00:00:01 |
|* 12 | INDEX RANGE SCAN | year_field | 1 | 29 | 2 (0)| 00:00:01 |
| 13 | SORT AGGREGATE | | 1 | 8 | | |
| 14 | INDEX FULL SCAN (MIN/MAX)| idx_bf_creation_date | 1 | 8 | 2 (0)| 00:00:01 |
|* 15 | TABLE ACCESS BY INDEX ROWID| OHRT_bugs_fact | 1 | 76 | 48 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | idx_bf_creation_date | 76 | | 1 (0)| 00:00:01 |
|* 17 | TABLE ACCESS BY INDEX ROWID | OHRT_all_time_dimension | 1 | 34 | 2 (0)| 00:00:01 |
|* 18 | INDEX UNIQUE SCAN | unique_alltime_bug_instance_id | 1 | | 1 (0)| 00:00:01 |
| 19 | TABLE ACCESS BY INDEX ROWID | OHRT_all_time_dimension | 1 | 19 | 1 (0)| 00:00:01 |
|* 20 | INDEX UNIQUE SCAN | unique_alltime_bug_instance_id | 1 | | 1 (0)| 00:00:01 |
|* 21 | INDEX RANGE SCAN | bugseverity_instance_id_ref_id | 1 | 26 | 1 (0)| 00:00:01 |
|* 22 | INDEX UNIQUE SCAN | unique_alltime_bug_instance_id | 1 | 26 | 1 (0)| 00:00:01 |
| 23 | INLIST ITERATOR | | | | | |
|* 24 | TABLE ACCESS BY INDEX ROWID | OHMT_ANL_BUCKET | 1 | 11 | 2 (0)| 00:00:01 |
|* 25 | INDEX UNIQUE SCAN | SYS_C0053213 | 5 | | 1 (0)| 00:00:01 |
|* 26 | INDEX RANGE SCAN | FK_BUCKET_TYPE | 6 | | 0 (0)| 00:00:01 |
|* 27 | TABLE ACCESS BY INDEX ROWID | OHMT_ANL_BUCKET | 1 | 18 | 1 (0)| 00:00:01 |
instance 2
Plan
SELECT STATEMENT ALL_ROWS Cost: 22 Bytes: 142 Cardinality: 1
32 HASH GROUP BY Cost: 22 Bytes: 142 Cardinality: 1
31 VIEW VIEW SYS.VM_NWVW_2 Cost: 22 Bytes: 142 Cardinality: 1
30 HASH UNIQUE Cost: 22 Bytes: 237 Cardinality: 1
29 NESTED LOOPS
27 NESTED LOOPS Cost: 21 Bytes: 237 Cardinality: 1
25 NESTED LOOPS Cost: 20 Bytes: 219 Cardinality: 1
21 NESTED LOOPS Cost: 18 Bytes: 208 Cardinality: 1
19 NESTED LOOPS Cost: 17 Bytes: 183 Cardinality: 1
17 NESTED LOOPS Cost: 16 Bytes: 157 Cardinality: 1
14 NESTED LOOPS Cost: 15 Bytes: 138 Cardinality: 1
11 NESTED LOOPS Cost: 13 Bytes: 104 Cardinality: 1
3 INDEX RANGE SCAN INDEX REPORTSDB.year_field Cost: 2 Bytes: 29 Cardinality: 1
2 SORT AGGREGATE Bytes: 8 Cardinality: 1
1 INDEX FULL SCAN (MIN/MAX) INDEX REPORTSDB.idx_bf_creation_date Cost: 3 Bytes: 8 Cardinality: 1
10 TABLE ACCESS BY INDEX ROWID TABLE REPORTSDB.OHRT_bugs_fact Cost: 13 Bytes: 75 Cardinality: 1
9 BITMAP CONVERSION TO ROWIDS
8 BITMAP AND
5 BITMAP CONVERSION FROM ROWIDS
4 INDEX RANGE SCAN INDEX REPORTSDB.idx_OHRT_bugs_fact_2product Cost: 2 Cardinality: 85
7 BITMAP CONVERSION FROM ROWIDS
6 INDEX RANGE SCAN INDEX REPORTSDB.idx_bf_creation_date Cost: 2 Cardinality: 85
13 TABLE ACCESS BY INDEX ROWID TABLE REPORTSDB.OHRT_all_time_dimension Cost: 2 Bytes: 34 Cardinality: 1
12 INDEX UNIQUE SCAN INDEX (UNIQUE) REPORTSDB.unique_alltime_bug_instance_id Cost: 1 Cardinality: 1
16 TABLE ACCESS BY INDEX ROWID TABLE REPORTSDB.OHRT_all_time_dimension Cost: 1 Bytes: 19 Cardinality: 1
15 INDEX UNIQUE SCAN INDEX (UNIQUE) REPORTSDB.unique_alltime_bug_instance_id Cost: 1 Cardinality: 1
18 INDEX UNIQUE SCAN INDEX (UNIQUE) REPORTSDB.unique_alltime_bug_instance_id Cost: 1 Bytes: 26 Cardinality: 1
20 INDEX RANGE SCAN INDEX REPORTSDB.bugseverity_instance_id_ref_id Cost: 1 Bytes: 25 Cardinality: 1
24 INLIST ITERATOR
23 TABLE ACCESS BY INDEX ROWID TABLE OPSHUB.OHMT_ANL_BUCKET Cost: 2 Bytes: 11 Cardinality: 1
22 INDEX UNIQUE SCAN INDEX (UNIQUE) OPSHUB.SYS_C0040939 Cost: 1 Cardinality: 5
26 INDEX RANGE SCAN INDEX OPSHUB.FK_BUCKET_TYPE Cost: 0 Cardinality: 6
28 TABLE ACCESS BY INDEX ROWID TABLE OPSHUB.OHMT_ANL_BUCKET Cost: 1 Bytes: 18 Cardinality: 1
The only difference between the two explain plans is:
9 BITMAP CONVERSION TO ROWIDS
8 BITMAP AND
5 BITMAP CONVERSION FROM ROWIDS
Is the bitmap conversion really degrading performance that much?
Or can you suggest what other parameters I should check so that the second instance gives better performance?
I see more differences.
In plan 1:
* 16 INDEX RANGE SCAN idx_bf_creation_date 76 1 (0) 00:00:01
in Plan 2:
1 INDEX FULL SCAN (MIN/MAX) INDEX REPORTSDB.idx_bf_creation_date Cost: 3 Bytes: 8 Cardinality: 1
So this is not about "bitmap" being good or bad; it is about the access strategy, which changed due to differences in data statistics etc. To analyze further, it would help a LOT if both plans were formatted in the same, readable way; wrap them in code tags when posting.
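For example, running the query through DBMS_XPLAN on each instance produces the plan in one consistent format (a sketch; substitute your own statement or SQL_ID):

```sql
-- Option 1: explain the statement and display the estimated plan
EXPLAIN PLAN FOR
SELECT /* your query here */ 1 FROM dual;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Option 2: display the actual plan of a cursor already in the shared pool
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));
```

Doing this on both instances makes the two plans directly comparable line by line.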
Similar Messages
-
We have Oracle 11g (11.1.0.6) on an HP-UX environment with the CC&B application. It was working fine, but since we upgraded the database to 11.1.0.7 last week, database performance has been really slow. After the upgrade we also noticed that the XDB component became invalid.
We are not sure how to investigate this issue; any help would be appreciated.
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Startup Time Release RAC
CCBPROD 3218377102 ccbprod 1 10-Jan-12 20:26 11.1.0.7.0 NO
Host Name Platform CPUs Cores Sockets Memory(GB)
huccbhp5 HP-UX IA (64-bit) 4 4 2 23.97
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 43912 11-Jan-12 10:00:31 157 96.1
End Snap: 43913 11-Jan-12 11:00:35 186 101.3
Elapsed: 60.08 (mins)
DB Time: 653.40 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 12,160M 12,096M Std Block Size: 8K
Shared Pool Size: 704M 704M Log Buffer: 58,764K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 10.9 0.5 0.02 0.02
DB CPU(s): 3.6 0.2 0.01 0.01
Redo size: 529,539.6 24,544.1
Logical reads: 155,545.5 7,209.5
Block changes: 2,047.0 94.9
Physical reads: 204.6 9.5
Physical writes: 96.0 4.5
User calls: 698.7 32.4
Parses: 77.8 3.6
Hard parses: 0.1 0.0
W/A MB processed: 406,873.3 18,858.5
Logons: 0.4 0.0
Executes: 456.1 21.1
Rollbacks: 19.6 0.9
Transactions: 21.6
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.97
Buffer Hit %: 99.87 In-memory Sort %: 100.00
Library Hit %: 99.93 Soft Parse %: 99.89
Execute to Parse %: 82.95 Latch Hit %: 99.99
Parse CPU to Parse Elapsd %: 0.01 % Non-Parse CPU: 99.93
Shared Pool Statistics Begin End
Memory Usage %: 89.78 90.02
% SQL with executions>1: 86.04 84.87
% Memory for SQL w/exec>1: 86.22 86.03
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
virtual circuit wait 2,170,696 15,761 7 40.2 Network
DB CPU 13,115 33.5
db file sequential read 362,802 6,304 17 16.1 User I/O
enq: TX - row lock contention 115 1,118 9721 2.9 Application
log file sync 8,183 818 100 2.1 Commit
Host CPU (CPUs: 4 Cores: 4 Sockets: 2)
~~~~~~~~ Load Average
Begin End %User %System %WIO %Idle
3.28 3.93 93.1 2.3 2.8 4.6
Instance CPU
~~~~~~~~~~~~
% of total CPU for Instance: 91.1
% of busy CPU for Instance: 95.4
%DB time waiting for CPU - Resource Mgr: 0.0
Memory Statistics
~~~~~~~~~~~~~~~~~ Begin End
Host Mem (MB): 24,545.7 24,545.7
SGA use (MB): 13,312.0 13,312.0
PGA use (MB): 935.9 949.3
% Host Mem used for SGA+PGA: 58.05 58.05 -
Newbie question about Oracle 11G performance
I have a situation where a newer, faster server running Oracle 11gR1 is running considerably slower than an older, slower server running Oracle 10gR2.
Both of these servers have the same web applications and the same dump files loaded into them.
I personally installed Oracle 11g on the new server; however, I didn't install 10gR2 on the older server, someone else did.
I have checked what I think are the basics: the tables are not missing constraints, and automatic memory management is enabled.
I know this is a really broad question.
Newer server:
Quad-Core AMD Opteron Processor 2352 2.10 GHz (2 processors)
16 GB memory
Windows Server 2008, 64 bit
Raid 5
Older Server:
Intel Xeon 1.8 GHz (2 processors)
4 GB memory
Windows Server 2003, 32 bit
Raid 5
Hi Derik,
"I know this is a really broad question." No, it happens all the time! Here is a similar issue:
http://blog.tuningknife.com/2008/09/26/oracle-11g-performance-issue/
+"In the end, nothing I tried could dissuade 11g from emitting the “PARSING IN CURSOR #d+” message for each insert statement. I filed a service request with Oracle Support on this issue and they ultimately referred the issue to development as a bug. Note that Support couldn’t reproduce the slowness I was seeing, but their trace files did reflect the parsing messages I observed."+
I would:
1 - Start by examining historical SQL execution plans (stats$sql_plan or dba_hist_sql_plan). Try to isolate the exact nature of the decreased performance.
Are different indexes being used? Are you getting more full-table scans?
2 - Migrate-in your old 10g CBO statistics
3 - Confirm that all init.ora parms are identical
4 - Drill into each SQL with a different execution plan . . .
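Step 2 above (migrating the old 10g CBO statistics) could be sketched like this; APP is a placeholder schema name, and the staging table has to be moved between servers yourself (e.g. with Data Pump):

```sql
-- On the old 10g server: stage the schema statistics in a regular table
EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP', stattab => 'STATS_STAGE');
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'STATS_STAGE');

-- After copying STATS_STAGE to the new 11g server: load it into the dictionary
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'STATS_STAGE');
```

With the same statistics in place, any remaining plan differences point at parameters or environment rather than data statistics.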
"Raid 5"
Don't believe that crap that all RAID-5 is evil. . . .
http://www.dba-oracle.com/t_raid5_acceptable_oracle.htm
But at the same time, consider using the Oracle standard, RAID-10 . . .
Hope this helps . . .
Donald K. Burleson
Oracle Press author
Author of "Oracle Tuning: The Definitive Reference"
http://www.rampant-books.com/t_oracle_tuning_book.htm
"Time flies like an arrow; Fruit flies like a banana". -
[11g R2] Update-Select with BITMAP CONVERSION TO ROWIDS = very slow
Hi all,
I have to deal with some performance issues in our database.
The query below takes between 30 minutes and 60 minutes to complete (30 minutes during the batch process and 1 hour when I executed the query with SQL*Plus):
SQL_ID 4ky65wauhg1ub, child number 0
UPDATE fiscpt x SET (x.cimld) = (SELECT COUNT (*)
FROM fiscpt f WHERE f.comar = x.comar AND
f.coint = x.coint AND f.nucpt = x.nucpt AND
f.codev != x.codev AND f.cimvt != 0) WHERE x.comar IN
('CBOT', 'CME', 'EUREX', 'FOREX', 'LIFFE', 'METAL', 'OCC')
Plan hash value: 697684605
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 0 | UPDATE STATEMENT | | 1 | | | 773K(100)| | 0 |00:22:22.30 | 36M| 7629 | | | |
| 1 | UPDATE | FISCPT | 1 | | | | | 0 |00:22:22.30 | 36M| 7629 | | | |
| 2 | INLIST ITERATOR | | 1 | | | | | 179K|00:00:00.37 | 1221 | 3 | | | |
|* 3 | INDEX RANGE SCAN | FISCPT1 | 7 | 154K| 4984K| 5 (0)| 00:00:01 | 179K|00:00:00.23 | 1221 | 3 | | | |
| 4 | SORT AGGREGATE | | 179K| 1 | 33 | | | 179K|01:02:58.45 | 35M| 3020 | | | |
|* 5 | TABLE ACCESS BY INDEX ROWID | FISCPT | 179K| 1 | 33 | 4 (25)| 00:00:01 | 63681 |01:02:57.80 | 35M| 3020 | | | |
| 6 | BITMAP CONVERSION TO ROWIDS | | 179K| | | | | 121K|01:02:52.71 | 35M| 885 | | | |
| 7 | BITMAP AND | | 179K| | | | | 87091 |01:02:52.25 | 35M| 885 | | | |
| 8 | BITMAP CONVERSION FROM ROWIDS| | 179K| | | | | 179K|00:00:03.31 | 241K| 0 | | | |
|* 9 | INDEX RANGE SCAN | FISCPT2 | 179K| 1547 | | 1 (0)| 00:00:01 | 1645K|00:00:02.23 | 241K| 0 | | | |
| 10 | BITMAP CONVERSION FROM ROWIDS| | 179K| | | | | 148K|01:02:44.98 | 35M| 885 | | | |
| 11 | SORT ORDER BY | | 179K| | | | | 2412M|00:52:19.70 | 35M| 885 | 1328K| 587K| 1180K (0)|
|* 12 | INDEX RANGE SCAN | FISCPT1 | 179K| 1547 | | 2 (0)| 00:00:01 | 2412M|00:22:11.22 | 35M| 885 | | | |
Query Block Name / Object Alias (identified by operation id):
1 - UPD$1
3 - UPD$1 / X@UPD$1
4 - SEL$1
5 - SEL$1 / F@SEL$1
Predicate Information (identified by operation id):
3 - access(("X"."COMAR"='CBOT' OR "X"."COMAR"='CME' OR "X"."COMAR"='EUREX' OR "X"."COMAR"='FOREX' OR "X"."COMAR"='LIFFE' OR "X"."COMAR"='METAL' OR
"X"."COMAR"='OCC'))
5 - filter("F"."CIMVT"<>0)
9 - access("F"."COINT"=:B1 AND "F"."NUCPT"=:B2)
12 - access("F"."COMAR"=:B1)
filter(("F"."CODEV"<>:B1 AND "F"."COMAR"=:B2))
Column Projection Information (identified by operation id):
2 - (upd=6; cmp=2,3,4,5) "SYS_ALIAS_4".ROWID[ROWID,10], "X"."COMAR"[VARCHAR2,5], "X"."COINT"[VARCHAR2,11], "X"."NUCPT"[VARCHAR2,8], "X"."CODEV"[VARCHAR2,3],
"X"."CIMLD"[NUMBER,22]
3 - "SYS_ALIAS_4".ROWID[ROWID,10], "X"."COMAR"[VARCHAR2,5], "X"."COINT"[VARCHAR2,11], "X"."NUCPT"[VARCHAR2,8], "X"."CODEV"[VARCHAR2,3], "X"."CIMLD"[NUMBER,22]
4 - (#keys=0) COUNT(*)[22]
5 - "F".ROWID[ROWID,10], "F"."COMAR"[VARCHAR2,5], "F"."COINT"[VARCHAR2,11], "F"."NUCPT"[VARCHAR2,8], "F"."CODEV"[VARCHAR2,3], "F"."CIMVT"[NUMBER,22]
6 - "F".ROWID[ROWID,10]
7 - STRDEF[BM VAR, 10], STRDEF[BM VAR, 10], STRDEF[BM VAR, 32496]
8 - STRDEF[BM VAR, 10], STRDEF[BM VAR, 10], STRDEF[BM VAR, 32496]
9 - "F".ROWID[ROWID,10]
10 - STRDEF[BM VAR, 10], STRDEF[BM VAR, 10], STRDEF[BM VAR, 32496]
11 - (#keys=1) "F".ROWID[ROWID,10]
12 - "F".ROWID[ROWID,10]
Note
- dynamic sampling used for this statement (level=2)
We intentionally don't gather statistics on the FISCPT table.
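One common way to make that choice explicit (a sketch; APP_OWNER is a placeholder for the real schema) is to delete the table's statistics and then lock them, so maintenance jobs cannot re-gather and the optimizer always falls back to dynamic sampling:

```sql
-- Remove any existing statistics, then lock them; dynamic sampling
-- then kicks in at parse time for every statement touching the table.
EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'FISCPT');
EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'FISCPT');
```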
There are no indexes on the column updated so the slowness is not due to updating of indexes:
SQL> select index_name, column_name from user_ind_columns where table_name='FISCPT';
INDEX_NAME COLUMN_NAM
FISCPT1 NUCPT
FISCPT1 CODEV
FISCPT1 RGCID
FISCPT1 DATRA
FISCPT2 COINT
FISCPT2 NUCPT
FISCPT3 NUFIS
FISCPT1 COINT
FISCPT1 COMAR
9 rows selected.
SQL> select count(1) from FISCPT;
COUNT(1)
179369
If I replace the UPDATE-SELECT statement with a SELECT, the query runs in a few seconds:
SQL> SELECT COUNT (*)
2 FROM fiscpt f, fiscpt x
3 WHERE f.comar = x.comar
4 AND f.coint = x.coint
5 AND f.nucpt = x.nucpt
6 AND f.codev != x.codev
7 AND f.cimvt != 0
8 and x.comar IN ('CBOT', 'CME', 'EUREX', 'FOREX', 'LIFFE', 'METAL', 'OCC');
COUNT(*)
63681
Elapsed: 00:00:00.75
SQL> select * from table(dbms_xplan.display_cursor());
PLAN_TABLE_OUTPUT
SQL_ID 5drbpdmdv0gv1, child number 0
SELECT COUNT (*) FROM fiscpt f, fiscpt x
WHERE f.comar = x.comar AND f.coint = x.coint
AND f.nucpt = x.nucpt AND f.codev != x.codev
AND f.cimvt != 0 and x.comar IN ('CBOT', 'CME', 'EUREX', 'FOREX',
'LIFFE', 'METAL', 'OCC')
Plan hash value: 1326101771
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | | 2477 (100)| |
| 1 | SORT AGGREGATE | | 1 | 53 | | | |
|* 2 | HASH JOIN | | 107K| 5557K| 4720K| 2477 (1)| 00:00:30 |
| 3 | INLIST ITERATOR | | | | | | |
|* 4 | TABLE ACCESS BY INDEX ROWID| FISCPT | 107K| 3460K| | 1674 (1)| 00:00:21 |
|* 5 | INDEX RANGE SCAN | FISCPT1 | 154K| | | 873 (0)| 00:00:11 |
|* 6 | INDEX FAST FULL SCAN | FISCPT1 | 154K| 3021K| | 337 (0)| 00:00:05 |
Predicate Information (identified by operation id):
2 - access("F"."COMAR"="X"."COMAR" AND "F"."COINT"="X"."COINT" AND
"F"."NUCPT"="X"."NUCPT")
filter("F"."CODEV"<>"X"."CODEV")
4 - filter("F"."CIMVT"<>0)
5 - access(("F"."COMAR"='CBOT' OR "F"."COMAR"='CME' OR "F"."COMAR"='EUREX' OR
"F"."COMAR"='FOREX' OR "F"."COMAR"='LIFFE' OR "F"."COMAR"='METAL' OR "F"."COMAR"='OCC'))
6 - filter(("X"."COMAR"='CBOT' OR "X"."COMAR"='CME' OR "X"."COMAR"='EUREX' OR
"X"."COMAR"='FOREX' OR "X"."COMAR"='LIFFE' OR "X"."COMAR"='METAL' OR "X"."COMAR"='OCC'))
Note
- dynamic sampling used for this statement (level=2)
The optimizer parameters are at their default values.
The database is an 11.2.0.1 and the OS is a Linux Red hat.
Can someone help me tune this query, please?
Thanks Tubby for your reply,
We don't gather statistics at all on this table because it's a process table: on the production database we may have several processes that insert/delete/update rows in this table, so we prefer to rely on dynamic sampling instead of gathering statistics each time a process needs to access this table.
I don't think dynamic sampling is the problem here, because even when I use level 10 the execution plan is the same:
SQL> alter session set optimizer_dynamic_sampling=10;
Session altered.
Elapsed: 00:00:00.00
SQL> explain plan for
2 UPDATE fiscpt x
3 SET (x.cimld) =
4 (SELECT COUNT (*)
5 FROM fiscpt f
6 WHERE f.comar = x.comar
7 AND f.coint = x.coint
8 AND f.nucpt = x.nucpt
9 AND f.codev != x.codev
10 AND f.cimvt != 0)
11 WHERE x.comar IN ('CBOT', 'CME', 'EUREX', 'FOREX', 'LIFFE', 'METAL', 'OCC');
Explained.
Elapsed: 00:00:01.04
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 697684605
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 179K| 5780K| 896K (20)| 02:59:23 |
| 1 | UPDATE | FISCPT | | | | |
| 2 | INLIST ITERATOR | | | | | |
|* 3 | INDEX RANGE SCAN | FISCPT1 | 179K| 5780K| 5 (0)| 00:00:01 |
| 4 | SORT AGGREGATE | | 1 | 33 | | |
|* 5 | TABLE ACCESS BY INDEX ROWID | FISCPT | 1 | 33 | 4 (25)| 00:00:01 |
| 6 | BITMAP CONVERSION TO ROWIDS | | | | | |
| 7 | BITMAP AND | | | | | |
| 8 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 9 | INDEX RANGE SCAN | FISCPT2 | 1794 | | 1 (0)| 00:00:01 |
| 10 | BITMAP CONVERSION FROM ROWIDS| | | | | |
| 11 | SORT ORDER BY | | | | | |
|* 12 | INDEX RANGE SCAN | FISCPT1 | 1794 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("X"."COMAR"='CBOT' OR "X"."COMAR"='CME' OR "X"."COMAR"='EUREX' OR
"X"."COMAR"='FOREX' OR "X"."COMAR"='LIFFE' OR "X"."COMAR"='METAL' OR
"X"."COMAR"='OCC')
5 - filter("F"."CIMVT"<>0)
9 - access("F"."COINT"=:B1 AND "F"."NUCPT"=:B2)
12 - access("F"."COMAR"=:B1)
filter("F"."CODEV"<>:B1 AND "F"."COMAR"=:B2)
Note
- dynamic sampling used for this statement (level=10)
I have tested the query you provided and the execution plan is almost the same (a FILTER clause is added but the COST is the same):
SQL> explain plan for
2 UPDATE fiscpt x
3 SET (x.cimld) =
4 (SELECT COUNT (*)
5 FROM fiscpt f
6 WHERE f.comar = x.comar
7 AND f.coint = x.coint
8 AND f.nucpt = x.nucpt
9 AND f.codev != x.codev
10 AND f.cimvt != 0
11 and f.comar IN ('CBOT', 'CME', 'EUREX', 'FOREX', 'LIFFE', 'METAL', 'OCC')
12 )
13 WHERE x.comar IN ('CBOT', 'CME', 'EUREX', 'FOREX', 'LIFFE', 'METAL', 'OCC');
Explained.
Elapsed: 00:00:00.01
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 1565986742
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 154K| 4984K| 773K (20)| 02:34:41 |
| 1 | UPDATE | FISCPT | | | | |
| 2 | INLIST ITERATOR | | | | | |
|* 3 | INDEX RANGE SCAN | FISCPT1 | 154K| 4984K| 5 (0)| 00:00:01 |
| 4 | SORT AGGREGATE | | 1 | 33 | | |
|* 5 | FILTER | | | | | |
|* 6 | TABLE ACCESS BY INDEX ROWID | FISCPT | 1 | 33 | 4 (25)| 00:00:01 |
| 7 | BITMAP CONVERSION TO ROWIDS | | | | | |
| 8 | BITMAP AND | | | | | |
| 9 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 10 | INDEX RANGE SCAN | FISCPT2 | 1547 | | 1 (0)| 00:00:01 |
| 11 | BITMAP CONVERSION FROM ROWIDS| | | | | |
| 12 | SORT ORDER BY | | | | | |
|* 13 | INDEX RANGE SCAN | FISCPT1 | 1547 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("X"."COMAR"='CBOT' OR "X"."COMAR"='CME' OR "X"."COMAR"='EUREX' OR
"X"."COMAR"='FOREX' OR "X"."COMAR"='LIFFE' OR "X"."COMAR"='METAL' OR "X"."COMAR"='OCC')
5 - filter(:B1='CBOT' OR :B2='CME' OR :B3='EUREX' OR :B4='FOREX' OR :B5='LIFFE' OR
:B6='METAL' OR :B7='OCC')
6 - filter(("F"."COMAR"='CBOT' OR "F"."COMAR"='CME' OR "F"."COMAR"='EUREX' OR
"F"."COMAR"='FOREX' OR "F"."COMAR"='LIFFE' OR "F"."COMAR"='METAL' OR
"F"."COMAR"='OCC') AND "F"."CIMVT"<>0)
10 - access("F"."COINT"=:B1 AND "F"."NUCPT"=:B2)
13 - access("F"."COMAR"=:B1)
filter("F"."CODEV"<>:B1 AND ("F"."COMAR"='CBOT' OR "F"."COMAR"='CME' OR
"F"."COMAR"='EUREX' OR "F"."COMAR"='FOREX' OR "F"."COMAR"='LIFFE' OR
"F"."COMAR"='METAL' OR "F"."COMAR"='OCC') AND "F"."COMAR"=:B2)
Note
- dynamic sampling used for this statement (level=2)
I have executed this statement for 50 minutes and it is still running now.
Furthermore, I have used the tuning advisor but it has not found any recommendation.
Is it normal that Oracle takes 1 hour to update 175K rows? -
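Since the equivalent join SELECT finishes in under a second, one workaround worth testing (a sketch only: it reuses the column names from the post, and the results must be verified against the real data before it replaces the batch statement) is to compute all the counts in a single pass and apply them with a MERGE, instead of running the correlated subquery once per target row:

```sql
MERGE INTO fiscpt x
USING (
  -- One pass over FISCPT: for each (comar, coint, nucpt, codev) combination,
  -- count the qualifying rows (cimvt <> 0) of the group that have a
  -- *different* codev: group total minus the combination's own count.
  SELECT comar, coint, nucpt, codev,
         SUM(COUNT(CASE WHEN cimvt <> 0 THEN 1 END))
             OVER (PARTITION BY comar, coint, nucpt)
           - COUNT(CASE WHEN cimvt <> 0 THEN 1 END) AS new_cimld
    FROM fiscpt
   WHERE comar IN ('CBOT','CME','EUREX','FOREX','LIFFE','METAL','OCC')
   GROUP BY comar, coint, nucpt, codev
) s
ON (    x.comar = s.comar
    AND x.coint = s.coint
    AND x.nucpt = s.nucpt
    AND x.codev = s.codev)
WHEN MATCHED THEN
  UPDATE SET x.cimld = s.new_cimld;
```

Every target row in those markets matches its own (comar, coint, nucpt, codev) group, so rows with no qualifying siblings still get cimld = 0, which matches the COUNT(*) semantics of the original statement.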
Please explain plan with 'BITMAP CONVERSION TO ROWIDS'
Hi,
in my 9.2.0.8 I've got plan like :
Plan
SELECT STATEMENT CHOOSECost: 26,104
7 TABLE ACCESS BY INDEX ROWID UMOWY Cost: 26,105 Bytes: 41 Cardinality: 1
6 BITMAP CONVERSION TO ROWIDS
5 BITMAP AND
2 BITMAP CONVERSION FROM ROWIDS
1 INDEX RANGE SCAN UMW_PRD_KPD_KOD Cost: 406 Cardinality: 111,930
4 BITMAP CONVERSION FROM ROWIDS
3 INDEX RANGE SCAN UMW_PRD_KPR_KOD Cost: 13,191 Cardinality: 111,930
As far as I know, Oracle is trying to combine two indexes, so if I create a multicolumn index the plan should be better, right?
Generally, all bitmap conversions related to b-tree indexes are trying to combine multiple indexes to deal with OR/index-combine operations, right?
And finally, what about the AND_EQUAL hint: is that a kind of alternative to those bitmap conversion steps?
Regards
Greg
"as far as I know Oracle is trying to combine two indexes, so if I create a multicolumn index the plan should be better right?"
Only you can really tell - but if this is supposed to be a "precision" query, the optimizer thinks you don't have a good index into the target data. Don't forget to consider the benefits of compressed indexes if you do follow this route.
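A compressed composite index on UMOWY might look like the sketch below; the column names are only guesses inferred from the existing index names (UMW_PRD_KPD_KOD, UMW_PRD_KPR_KOD), so substitute the real predicate columns:

```sql
-- Hypothetical composite index covering both predicates in a single range scan.
-- COMPRESS 1 de-duplicates the repetitive leading column in the leaf blocks.
CREATE INDEX umw_prd_kpd_kpr_ix
    ON umowy (kpd_kod, kpr_kod)
    COMPRESS 1;
```

With both predicate columns in one index, the optimizer no longer needs the two bitmap conversions to combine separate range scans.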
"Generally all bitmap conversions related to b-tree indexes are trying to combine multiple indexes to deal with or/index combine operations right?"
Bitmap conversions when there are no real bitmap indexes involved are always about combining multiple b-tree index range scans to minimise the number of reads from the table.
"And finally what about AND_EQUAL hint is that kind of alternative for that bitmap conversion steps?"
AND_EQUAL was an older mechanism for combining index range scans to minimise visits to the table - it was restricted to a maximum of 5 indexes per table - the indexes had to be single column, non-unique, and the predicates had to be equality. The access method is deprecated in 10g. (See the following note, and the comments in particular, for more details: http://jonathanlewis.wordpress.com/2009/05/08/ioug-day-4/ )
Regards
Jonathan Lewis -
This is the plan used for my query in my prod database.
10.2.0.3
In dev, on the same Oracle version, the plan is not doing bitmap conversion.
Prod query time: 4 minutes
Dev query time: 20 seconds
How can I tell Oracle to stop doing bitmap conversion on me?
BTW, I have no bitmap indexes, which is weird.
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1934 | 267K| 3025 (1)| 00:00:37 |
| 1 | SORT ORDER BY | | 1934 | 267K| 3024 (25)| 00:00:37 |
| 2 | UNION-ALL | | | | | |
| 3 | MAT_VIEW ACCESS BY INDEX ROWID | MV_SONG | 92 | 14076 | 2304 (1)| 00:00:28 |
| 4 | BITMAP CONVERSION TO ROWIDS | | | | | |
| 5 | BITMAP AND | | | | | |
| 6 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 7 | INDEX RANGE SCAN | IDX_MV_SONG_USCFALB | | | 417 (2)| 00:00:06 |
| 8 | BITMAP CONVERSION FROM ROWIDS| | | | | |
| 9 | SORT ORDER BY | | | | | |
|* 10 | DOMAIN INDEX | MV_SON_TITLE_FULLT_IDX | | | 1827 (0)| 00:00:22 |
| 11 | MAT_VIEW ACCESS BY INDEX ROWID | MV_SONG | 1842 | 253K| 720 (1)| 00:00:09 |
|* 12 | INDEX RANGE SCAN | IDXF_MV_SONG_SONTUSFALB | 737 | | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------
"In dev same oracle version, the plan is not doing bitmap conversion.
prod query time: 4 minutes
dev query time: 20 seconds"
Is the amount of data in both databases the same?
If you want to switch off this behaviour you can change the value of the _B_TREE_BITMAP_PLANS parameter. (Actually it might not be an underscore parameter in 10.2 - I wouldn't know, I'm still on 9i.) Of course, the usual caveats about tweaking underscore parameters apply.
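A hedged sketch of both scopes follows; `_b_tree_bitmap_plans` is the underscore parameter in question, and hidden parameters should only be changed after testing (ideally with Oracle Support's agreement). The SELECT body is illustrative only:

```sql
-- Session scope: disable b-tree-to-bitmap conversion plans for this session
ALTER SESSION SET "_b_tree_bitmap_plans" = FALSE;

-- Statement scope (10g+): restrict the change to a single query with a hint
SELECT /*+ OPT_PARAM('_b_tree_bitmap_plans' 'false') */ *
  FROM mv_song
 WHERE ROWNUM <= 1;
```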
Niall Litchfield wrote an interesting article on this parameter a couple of years back. You should read it.
Cheers, APC -
Explain plan change after partitioning - Bitmap conversion to rowid
hi gurus,
before partitioning the table using range paritioning,
for the query,
SELECT MEDIUMID
FROM MEDIUM_TB
WHERE CMREFERENCEID =8
AND CONTENTTYPEID = 8
AND CMSTATUSID = 5
AND SUBTYPEID = 99
A. before partitioning
SELECT STATEMENT, GOAL = ALL_ROWS 2452 882 16758
SORT ORDER BY 2452 882 16758
TABLE ACCESS BY INDEX ROWID DBA1 MEDIUM_TB 2451 882 16758
BITMAP CONVERSION TO ROWIDS
BITMAP AND
BITMAP CONVERSION FROM ROWIDS
INDEX RANGE SCAN DBA1 MEDIUM_TB_IX07 242 94423
BITMAP CONVERSION FROM ROWIDS
INDEX RANGE SCAN DBA1 MEDIUM_TB_IX02 1973 94423
B. after partitioning
the explain plan changed to
SELECT STATEMENT, GOAL = ALL_ROWS 33601 796 15124
TABLE ACCESS BY GLOBAL INDEX ROWID DBA1 MEDIUM_TB 33601 796 15124
INDEX RANGE SCAN DBA1 MEDIUM_TB_IX07 300 116570
As you can see, the plan cost is very high after partitioning and the query is taking more time.
Index MEDIUM_TB_IX02 is not used in the second plan, and neither is the BITMAP CONVERSION plan method.
FYI, all the indexes are b-tree based and are global indexes.
What could be the reason for the plan change?
Please help.
Thanks,
Charles
user570138 wrote:
SELECT STATEMENT, GOAL = ALL_ROWS 2452 882 16758
SORT ORDER BY 2452 882 16758
TABLE ACCESS BY INDEX ROWID DBA1 MEDIUM_TB 2451 882 16758
BITMAP CONVERSION TO ROWIDS
BITMAP AND
BITMAP CONVERSION FROM ROWIDS
INDEX RANGE SCAN DBA1 MEDIUM_TB_IX07 242 94423
BITMAP CONVERSION FROM ROWIDS
INDEX RANGE SCAN DBA1 MEDIUM_TB_IX02 1973 94423
If you supplied proper execution plans we might be able to offer some advice.
Use dbms_xplan.
A list of the index definitions might also help
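For instance (a sketch; run both in the schema that owns MEDIUM_TB):

```sql
-- A proper execution plan via DBMS_XPLAN
EXPLAIN PLAN FOR
SELECT mediumid
  FROM medium_tb
 WHERE cmreferenceid = 8
   AND contenttypeid = 8
   AND cmstatusid = 5
   AND subtypeid = 99;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- The index definitions on the table
SELECT index_name, column_position, column_name
  FROM user_ind_columns
 WHERE table_name = 'MEDIUM_TB'
 ORDER BY index_name, column_position;
```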
Regards
Jonathan Lewis -
Oracle 11g - Date Issue?
I hope this is the right forum for this issue; if not, let me know where to go.
We are using Oracle 11g (11.2.0.1.0) on (Platform : solaris[tm] oe (64-bit)), Sql Developer 3.0.04.
Our NLS_DATE_FORMAT = 'DD-MON-RR'
Using 'RR' in the format forces two-digit years less than or equal to 49 to be interpreted as years in the 21st century (2000–2049), and years 50 and over, as years in the 20th century (1950–1999). Setting the RR format as the default for all two-digit year entries allows you to become year-2000 compliant. For example:
We have a date '01-JUN-31' in our source system. Oracle treats this date as '01-JUN-2031' instead of '01-JUN-1931', one century forward.
Can we resolve this with an NLS_DATE_FORMAT change?
How do we resolve this issue?
Thanks for the help.
qwe16235 wrote:
"Our source is an Oracle database, where S_date is defined as DATE. Why did you say STRING, when it is defined as the DATE data type?"
I doubt your source is an Oracle database. You may have it stored in an Oracle database, but it came from somewhere else, and was very likely inserted into the table as a string, wherever it came from. Given a string that resembles a date, Oracle will try to convert it to a date using the nls_date_format parameter for the session (which can either be set in the session or inherited from the database). Perhaps this will help explain:
SQL> create table t (conv_type varchar2(10), dt date);
Table created.
SQL> alter session set nls_date_format = 'dd-mon-rr';
Session altered.
SQL> insert into t values ('Implicit', '01-jun-31');
1 row created.
SQL> insert into t values ('Explicit', to_date('01-jun-1931', 'dd-mon-yyyy'));
1 row created.
SQL> commit;
Commit complete.
SQL> select conv_type, to_char(dt, 'dd-mon-yyyy') dt
2 from t;
CONV_TYPE DT
Implicit 01-jun-2031
Explicit 01-jun-1931
So, unless you are really inserting dates, not strings that look like dates, you are going to have problems.
John -
Oracle BPEL 11G performance issue
Hi
We are facing performance issues executing our composite process in Oracle SOA 11g.
We have installed an admin server and 2 managed servers in a cluster on the same box. Machine utilization reached almost 95% when I started the admin server and the 2 managed servers (min and max heap size set to 1 GB each at startup). So I shut down one managed server and increased the JVM size of the other to 2 GB, and found that the heap size reaches 1.5 GB on startup (observed using JConsole).
The machine is a Windows server with 4 GB RAM.
Our process requires multiple records to be processed, which are retrieved using a database query.
We have created 2 composites.
The first composite has 2 BPEL processes. The first, BPEL 1, executes the DB query and retrieves the result; based on the result retrieved, we invoke the second, BPEL 2,
which makes around 4 DB calls and passes the result to the next composite. The final BPEL process, BPEL 3, has multiple select and update queries involving a DB-intensive process.
When we retrieve 500 records from BPEL 1 and process them, halfway through we get an out-of-memory exception. So we are using throttling, but even then, while executing the BPEL 3 process, we are facing an out-of-memory exception.
Can you let me know how to find the heap memory taken by each BPEL process during its execution? Where in the console can I get memory usage details, so that I can find which BPEL is consuming more memory and work on optimising it?
Actually we are expecting around 100,000 (1 lakh) or more messages per day, and we need to check how this process can handle that, and also how to increase or determine the capacity of the Windows box.
any immediate help is highly appreciated
thanks
Always raise a case with Oracle Support for such issues.
Regards,
Anuj -
"oracle database 11g performance issues"
Hi everybody,
On Oracle 11g (11.2.0.1.0) we are developing a business application using Java. Our developers say database performance is very poor: it takes a long time to retrieve values from the database. They have checked the front end and middleware, and neither has a problem; even when a query returns only a few rows, it still takes noticeably long. How can I solve this problem? Please help me.
Regards, Benk
Hai, sorry for the delay; my @$ORACLE_HOME/rdbms/admin/ashrpt.sql output is shown below:
ASH Report For ORCL/orcl
DB Name DB Id Instance Inst Num Release RAC Host
ORCL 1295420332 orcl 1 11.2.0.1.0 NO node6.node6-
CPUs SGA Size Buffer Cache Shared Pool ASH Buffer Size
2 1,561M (100%) 480M (30.7%) 432M (27.7%) 4.0M (0.3%)
Analysis Begin Time: 04-Oct-12 16:44:16
Analysis End Time: 04-Oct-12 16:59:33
Elapsed Time: 15.3 (mins)
Begin Data Source: V$ACTIVE_SESSION_HISTORY
End Data Source: V$ACTIVE_SESSION_HISTORY
Sample Count: 3
Average Active Sessions: 0.00
Avg. Active Session per CPU: 0.00
Report Target: None specified
Top User Events DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
Avg Active
Event Event Class % Event Sessions
null event Other 33.33 0.00
Top Background Events DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
Avg Active
Event Event Class % Activity Sessions
CPU + Wait for CPU CPU 33.33 0.00
os thread startup Concurrency 33.33 0.00
Top Event P1/P2/P3 Values DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top Service/Module DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
Service Module % Activity Action % Action
SYS$BACKGROUND UNNAMED 66.67 UNNAMED 66.67
SYS$USERS UNNAMED 33.33 UNNAMED 33.33
Top Client IDs DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top SQL Command Types DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top Phases of Execution DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
Avg Active
Phase of Execution % Activity Sessions
SQL Execution 33.33 0.00
Top SQL with Top Events DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top SQL with Top Row Sources DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top SQL using literals DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top Parsing Module/Action DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top PL/SQL Procedures DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top Java Workload DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top Call Types DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top Sessions DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
-> '# Samples Active' shows the number of ASH samples in which the session
was found waiting for that particular event. The percentage shown
in this column is calculated with respect to wall clock time
and not total database activity.
-> 'XIDs' shows the number of distinct transaction IDs sampled in ASH
when the session was waiting for that particular event
-> For sessions running Parallel Queries, this section will NOT aggregate
the PQ slave activity into the session issuing the PQ. Refer to
the 'Top Sessions running PQs' section for such statistics.
Sid, Serial# % Activity Event % Event
User Program # Samples Active XIDs
10, 1 33.33 CPU + Wait for CPU 33.33
SYS [email protected] (ARC2) 1/917 [ 0%] 0
19, 148 33.33 null event 33.33
SYS [email protected] (J000) 1/917 [ 0%] 0
139, 3 33.33 os thread startup 33.33
SYS [email protected] (CJQ0) 1/917 [ 0%] 0
Top Blocking Sessions DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top Sessions running PQs DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top DB Objects DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top DB Files DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Top Latches DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
No data exists for this section of the report.
Activity Over Time DB/Inst: ORCL/orcl (Oct 04 16:44 to 16:59)
-> Analysis period is divided into smaller time slots
-> Top 3 events are reported in each of those slots
-> 'Slot Count' shows the number of ASH samples in that slot
-> 'Event Count' shows the number of ASH samples waiting for
that event in that slot
-> '% Event' is 'Event Count' over all ASH samples in the analysis period
Slot Event
Slot Time (Duration) Count Event Count % Event
16:48:00 (2.0 min) 1 CPU + Wait for CPU 1 33.33
16:50:00 (2.0 min) 2 null event 1 33.33
os thread startup 1 33.33
End of Report
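One observation on the report above: only 3 ASH samples were captured in a 15-minute window, so the instance was essentially idle during the analysis period and the slow query was almost certainly not caught. As a sketch (timestamps taken from the report header; adjust to your own window), you can query ASH directly to see which SQL was actually sampled:

```sql
-- Hedged sketch: count ASH samples per SQL_ID over the same analysis window.
SELECT sql_id, session_state, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time BETWEEN TO_TIMESTAMP('04-Oct-12 16:44:16', 'DD-Mon-RR HH24:MI:SS')
                       AND TO_TIMESTAMP('04-Oct-12 16:59:33', 'DD-Mon-RR HH24:MI:SS')
GROUP  BY sql_id, session_state
ORDER  BY samples DESC;
```

If the problem query does not show up here, re-run it during the analysis window so ASH can sample it.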
Regards, Benk -
Oracle SOA 11g Performance Issue
Hi,
We have set up Oracle SOA Suite in an AIX environment. The JVM we are using is IBM JDK 1.6. Recently we have been hit with a performance issue: we frequently get out-of-memory exceptions and need to restart the server, and sometimes physically reboot the machine. Out of 16 GB of RAM we have given 4 GB as heap space to the Admin Server and 7 GB to the SOA Server, but it is taking more than 7 GB of heap. Even after stopping or killing both services, the memory is not released.
SOA Suite Version : 11.1.1.3
Instance Node: Single Node
I collected the logs and tried to analyze them in Thread Dump Analyzer, and I could see objects (Reserved) taking 100% of the CPU utilization.
We are getting the following error highlighted in the analyzer. About 200+ threads are stuck.
"HTTPThreadGroup-42" prio=10 tid=0x6382ba28 nid=0x20bf4 waiting on condition [0x6904f000..0x6904fb94]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:146)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:772)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1087)
at java.util.concurrent.SynchronousQueue$Node.waitForPut(SynchronousQueue.java:291)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:443)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:475)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:674)
at java.lang.Thread.run(Thread.java:595)
"HTTPThreadGroup-41" prio=10 tid=0x6ae3cce0 nid=0x20bf0 waiting on condition [0x68d8f000..0x68d8fc14]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:146)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:772)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1087)
at java.util.concurrent.SynchronousQueue$Node.waitForPut(SynchronousQueue.java:291)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:443)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:475)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:674)
at java.lang.Thread.run(Thread.java:595)
Has anyone faced the same issue? We are badly hit by this performance issue in UAT.
Please consider this high priority; someone please help us.
Regards,
Sundar

Always raise a case with Oracle Support for such issues.
Regards,
Anuj -
Crystal Reports 11g Performance Issue
We just upgraded our database to Oracle 11g from Oracle 10g and we are having significant performance issues with Crystal Reports.
Our DEV and TEST environments are on 11g and are very slow to connect to the database and attach to specific tables. It is unusable. Our PROD environment is still 10g and works fine.
We have tested with both the latest version - Crystal Reports 2008 V1 - and Crystal Reports XI R2 SP6. We have also tested on several different machines.
We are using Oracle 10g ODBC drivers.
Does anyone have any recommendations?

You could also try our DataDirect drivers, available on our web site via this [link|https://smpdl.sap-ag.de/~sapidp/012002523100008666562008E/cr_datadirect53_win32.zip].
Those drivers are the most recent and do support 11g. There is also a wire protocol driver that doesn't require the Oracle client to be installed.
Also, it is highly recommended that when you update the Oracle client you uninstall 10 first. There have been issues with CR mixing the Oracle dependencies and causing problems.
Thank you
Don -
I have Oracle 11g (11.1.0.6) on one server, and 10g (10.2.0.3) databases on all the other servers. I am trying to do an EXP (normal export, not expdp) of my 10g database using the 11g client. I am getting the following error.
Export: Release 11.1.0.6.0 - Production on Wed Jul 22 15:17:03 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8MSWIN1252 character set (possible charset conversion)
EXP-00008: ORACLE error 1003 encountered
ORA-01003: no statement parsed
. . exporting table SNP_DT
EXP-00008: ORACLE error 904 encountered
ORA-00904: "MAXSIZE": invalid identifier
EXP-00008: ORACLE error 942 encountered
ORA-00942: table or view does not exist
EXP-00024: Export views not installed, please notify your DBA
EXP-00000: Export terminated unsuccessfully
Is there any way to solve this? Could someone please help me.
Thanks
Mano
Edited by: Mano Rangasamy on Jul 22, 2009 3:24 PM
Edited by: Mano Rangasamy on Jul 22, 2009 3:26 PM

I have the same issue, but my client (export) is 11.1.0.7.0 and my database is 11.1.0.6.0. I believed that the same versions (11) would work together, but they do not.
Does it mean that I have to reinstall my client to have an export that works? -
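On the "possible charset conversion" warning in the export log above: a minimal sketch, assuming you want the client's NLS_LANG to match the database character set before running exp, is to first check what the database actually uses:

```sql
-- Hedged sketch: check the database character sets so the client NLS_LANG
-- can be set to match (e.g. AMERICAN_AMERICA.WE8MSWIN1252) before exp runs.
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```

As for the ORA-00904 on "MAXSIZE" itself: the usual guidance for classic exp is to run the exp binary whose version matches the source database (or is lower), so exporting the 10.2 database with the 10.2 exp utility rather than the 11.1 client is the typical fix.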
Hello :
We recently migrated our database from 9.2.0.7 (RBO) to 11.2.0.1 (CBO) (Solaris 10 SPARC 64-bit). Ever since the migration we have been experiencing the following issues:
1. Execution plans are very costly. (Please note that we moved from RBO to CBO.)
2. High gets/reads
3. High CPU utilization
4. High waits on 'db file sequential read'
5. This situation is virtually killing the performance of all our applications.
I know that Oracle 11g explicitly does not support the rule-based optimizer, but are there any parameters that can force it to go the RBO way?
I would really appreciate your help.
Regards,
BMP

Officially it is no longer documented: http://download.oracle.com/docs/cd/E11882_01/server.112/e10820/initparams165.htm#i1131532
but it is still accepted:
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> alter system set optimizer_mode=rule;
System altered.
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string RULE
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE

Edited by: P. Forstmann on Sep 1, 2010 20:21 -
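If changing optimizer_mode instance-wide as shown above is too blunt, a more contained sketch (same caveat: RULE is desupported in 11g and this is only a stop-gap) is to scope it per session or per statement. The table and column in the hint example are purely illustrative:

```sql
-- Per-session: only this session reverts to rule-based optimization.
ALTER SESSION SET optimizer_mode = RULE;

-- Per-statement: the (desupported) RULE hint on an individual query.
-- emp/deptno are hypothetical names, not from the original post.
SELECT /*+ RULE */ *
FROM   emp
WHERE  deptno = 10;
```

The longer-term fix is to gather representative statistics with DBMS_STATS and tune under the CBO, since RULE support can disappear in any later release.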
Oracle 11g Performance tuning approach ?
Hello Experts,
Is this the right forum to follow Oracle performance tuning discussions? If not, let me know which forum to pick up some threads on this subject.
I am looking for a performance tuning approach for Oracle 11g. I have learned there are some new items in 11g in this regard. For those who did tuning in earlier versions of Oracle,
what is the best way to adapt to 11g?
I reviewed the 11g performance tuning guide, but I am looking for some white papers/blogs with case studies and practical approaches. I hope that you have used them.
What are the other sources to pick up some discussions?
Would you mind sharing your thoughts?
Thanks in advance.
RI

The best sources of information on performance tuning are:
1. Jonathan Lewis: http://jonathanlewis.wordpress.com/all-postings/
2. Christian Antognini: http://www.antognini.ch/
3. Tanel Poder: http://blog.tanelpoder.com/
4. Richard Foote: http://richardfoote.wordpress.com/
5. Cary Millsap: http://carymillsap.blogspot.com/
and a few dozen others whose blogs you will find cross-referenced in those above.