OLTP behavior in Oracle 11g R2
Hi everyone.
Could someone please explain this behavior?
I have three tables: one with OLTP compression, one with basic compression, and one without compression.
When I insert rows into the OLTP-compressed table, they are not compressed.
Only after a "move" does the segment shrink.
My test case (Oracle 11.2.0.3.8):
create table a_normal
as
select rownum id, a.*
from all_objects a;
create table a_compress
as
select rownum id, a.*
from all_objects a;
create table a_comp_oltp
as
select rownum id, a.*
from all_objects a;
SQL> alter table a_compress move compress;
Table altered.
SQL> alter table a_comp_oltp compress for oltp;
Table altered.
SQL> select table_name, compression, compress_for, pct_free from dba_tables where table_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
TABLE_NAME COMPRESS COMPRESS_FOR PCT_FREE
A_NORMAL DISABLED 10
A_COMP_OLTP ENABLED OLTP 10
A_COMPRESS ENABLED BASIC 0
SQL> select segment_name, bytes/1024 from dba_segments where segment_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
SEGMENT_NAME BYTES/1024
A_COMP_OLTP 10240
A_COMPRESS 3072
A_NORMAL 10240
As you can see, even after enabling COMPRESS FOR OLTP, the space used is still the same.
Now, after the "move", our space shrinks.
SQL> alter table a_comp_oltp move;
Table altered.
SQL> select segment_name, bytes/1024 from dba_segments where segment_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
SEGMENT_NAME BYTES/1024
A_COMP_OLTP 4096
A_COMPRESS 3072
A_NORMAL 10240
After that, I inserted 4 million rows without APPEND and 4 million more with APPEND; neither batch was compressed until I did a "move".
SQL> @ins_a_comp_oltp
Enter value for 1: 4000000
old 3: l_rows number := &1;
new 3: l_rows number := 4000000;
Enter value for 1: 4000000
old 9: where rownum <= &1;
new 9: where rownum <= 4000000;
PL/SQL procedure successfully completed.
SQL> select segment_name, bytes/1024 from dba_segments where segment_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
SEGMENT_NAME BYTES/1024
A_COMP_OLTP 493568
A_COMPRESS 3072
A_NORMAL 10240
SQL> alter table A_COMP_OLTP move;
Table altered.
SQL> select segment_name, bytes/1024 from dba_segments where segment_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
SEGMENT_NAME BYTES/1024
A_COMP_OLTP 188416
A_COMPRESS 3072
A_NORMAL 499712
SQL> @ins_a_comp_oltp_append
Enter value for 1: 4000000
old 3: l_rows number := &1;
new 3: l_rows number := 4000000;
Enter value for 1: 4000000
old 9: where rownum <= &1;
new 9: where rownum <= 4000000;
PL/SQL procedure successfully completed.
SQL> select segment_name, bytes/1024 from dba_segments where segment_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
SEGMENT_NAME BYTES/1024
A_COMP_OLTP 665600
A_COMPRESS 3072
A_NORMAL 499712
SQL> alter table A_COMP_OLTP move;
Table altered.
SQL> select segment_name, bytes/1024 from dba_segments where segment_name in ('A_COMPRESS','A_COMP_OLTP','A_NORMAL');
SEGMENT_NAME BYTES/1024
A_COMP_OLTP 360448
A_COMPRESS 3072
A_NORMAL 499712
When I read the documentation, it says: "When a block reaches the PCTFREE value, it is compressed; if the compression brings the block back below the PCTFREE threshold, the block can accept more rows, and compression runs again."
But here, even after inserting 4 million rows, the table is not compressed, not even with APPEND (direct-path). Why?
Thanks and regards,
Felipe.
Edited by: Felipe Romeu on 09/10/2012 14:43
It works for me:
orcl> create table nocomp (c1 varchar2(10));
Table created.
orcl> create table oltpcomp(c1 varchar2(10)) compress for oltp;
Table created.
orcl> insert into nocomp select 'aaaaaaaaaa' from dual connect by level < 1000000;
999999 rows created.
orcl> insert into oltpcomp select 'aaaaaaaaaa' from dual connect by level < 1000000;
999999 rows created.
orcl> select segment_name,bytes from user_segments where segment_name like '%COMP';
SEGMENT_NAME BYTES
NOCOMP 18874368
OLTPCOMP 13631488
orcl> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for 32-bit Windows: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
orcl>
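If you want to verify compression at the row level rather than inferring it from segment sizes, 11.2 provides DBMS_COMPRESSION.GET_COMPRESSION_TYPE. A sketch against the A_COMP_OLTP table from the test case above (the constant values in the comment are those documented for 11.2; the 10,000-row sample is arbitrary):

```sql
-- Count sampled rows by the compression type of their block (Oracle 11.2+).
-- In 11.2, DBMS_COMPRESSION defines COMP_NOCOMPRESS = 1 and COMP_FOR_OLTP = 2.
SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'A_COMP_OLTP', ROWID) AS comp_type,
       COUNT(*) AS row_count
FROM   a_comp_oltp
WHERE  ROWNUM <= 10000   -- sample only, to keep the check cheap
GROUP  BY DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'A_COMP_OLTP', ROWID);
```

Rows reported with type 1 sit in uncompressed blocks even though the table has COMPRESS FOR OLTP enabled, which is exactly the situation the question describes.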
Similar Messages
-
Optimal NTFS block size for Oracle 11G on Windows 2008 R2 (OLTP)
Hi All,
We are currently setting up an Oracle 11G instance on a Windows 2008 R2 server and were looking to see if there was an optimal NTFS block size. I've read the following: http://docs.oracle.com/cd/E11882_01/win.112/e10845/specs.htm
But it only mentioned the block sizes that can be used (2K - 16K). Basically, what I got out of it was that the different block sizes affect the maximum number of database files possible for each database.
Is there an optimal NTFS block size for Oracle 11G OLTP system on Windows?
Thanks in advance
Is there an optimal NTFS block size for an Oracle 11g OLTP system on Windows?
Ideally, the file-system block size should be equal to the Oracle tablespace block size, or at least divide evenly into it.
For example, if the Oracle block size is 8K, then the NTFS block size is best set to 8K, but 4K or 2K will also work.
Also, both must be a whole multiple of the disk sector size. Older disks had 512-byte sectors; contemporary HDDs usually have an internal sector size of 4K. -
The danger of memory target in Oracle 11g - request for discussion.
Hello, everyone.
This is not a question, but rather a request for discussion.
I believe that many of you have heard something about automatic memory management in Oracle 11g.
The concept is that Oracle manages the target sizes of the SGA and PGA. Believe it or not, all we have to do is tell Oracle how much memory it can use.
But I have a big concern about this. The optimizer takes the PGA size into consideration when calculating the cost of sort-related operations.
So what would happen when Oracle dynamically changes the target size of PGA? Following is a simple demonstration of my concern.
UKJA@ukja116> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
-- Configuration
*.memory_target=350m
*.memory_max_target=350m
create table t1(c1 int, c2 char(100));
create table t2(c1 int, c2 char(100));
insert into t1 select level, level from dual connect by level <= 10000;
insert into t2 select level, level from dual connect by level <= 10000;
-- First 10053 trace
alter session set events '10053 trace name context forever, level 1';
select /*+ use_hash(t1 t2) */ count(*)
from t1, t2
where t1.c1 = t2.c1 and t1.c2 = t2.c2
alter session set events '10053 trace name context off';
-- Do aggressive hard parse to make Oracle dynamically change the size of memory segments.
declare
pat1 varchar2(1000);
pat2 varchar2(1000);
va number;
vc sys_refcursor;
vs varchar2(1000);
begin
select ksppstvl into pat1
from sys.xm$ksppi i, sys.xm$ksppcv v -- views for x$ table
where i.indx = v.indx
and i.ksppinm = '__pga_aggregate_target';
for idx in 1 .. 10000000 loop
execute immediate 'select count(*) from t1 where rownum = ' || (idx+1)
into va;
if mod(idx, 1000) = 0 then
sys.dbms_system.ksdwrt(2, idx || 'th execution');
select ksppstvl into pat2
from sys.xm$ksppi i, sys.xm$ksppcv v -- views for x$ table
where i.indx = v.indx
and i.ksppinm = '__pga_aggregate_target';
if pat1 <> pat2 then
sys.dbms_system.ksdwrt(2, 'yep, I got it!');
exit;
end if;
end if;
end loop;
end;
-- As to alert log file,
25000th execution
26000th execution
27000th execution
28000th execution
29000th execution
30000th execution
yep, I got it! <-- the pga target changed with 30000th hard parse
-- Second 10053 trace for same query
alter session set events '10053 trace name context forever, level 1';
select /*+ use_hash(t1 t2) */ count(*)
from t1, t2
where t1.c1 = t2.c1 and t1.c2 = t2.c2
alter session set events '10053 trace name context off';
With the above test case, I found that:
1. Oracle invalidates the query when the internal PGA aggregate size changes, which is quite natural.
2. With the changed PGA aggregate size, Oracle recalculates the cost. These are excerpts from both of the 10053 trace files.
-- First 10053 trace file
PARAMETERS USED BY THE OPTIMIZER
PARAMETERS WITH ALTERED VALUES
Compilation Environment Dump
_smm_max_size = 11468 KB
_smm_px_max_size = 28672 KB
optimizer_use_sql_plan_baselines = false
optimizer_use_invisible_indexes = true
-- Second 10053 trace file
PARAMETERS USED BY THE OPTIMIZER
PARAMETERS WITH ALTERED VALUES
Compilation Environment Dump
_smm_max_size = 13107 KB
_smm_px_max_size = 32768 KB
optimizer_use_sql_plan_baselines = false
optimizer_use_invisible_indexes = true
Bug Fix Control Environment
The 10053 trace file clearly shows that Oracle recalculates the cost of the query when the internal PGA aggregate target size changes. So there is a real danger of an unexpected plan change while Oracle dynamically controls the memory segments.
I believe that this is a designed behavior, but the negative side effect is not negligible.
I just like to hear your opinions on this behavior.
Do you think that this is acceptable? Or is this another great feature that nobody wants to use like automatic tuning advisor?
================================
Dion Cho - Oracle Performance Storyteller
http://dioncho.wordpress.com (english)
http://ukja.tistory.com (korean)
================================
I made a slight modification to my test case to get a mixed workload of hard parses and logical reads.
*.memory_target=200m
*.memory_max_target=200m
create table t3(c1 int, c2 char(1000));
insert into t3 select level, level from dual connect by level <= 50000;
declare
pat1 varchar2(1000);
pat2 varchar2(1000);
va number;
begin
select ksppstvl into pat1
from sys.xm$ksppi i, sys.xm$ksppcv v
where i.indx = v.indx
and i.ksppinm = '__pga_aggregate_target';
for idx in 1 .. 1000000 loop
-- try many patterns here!
execute immediate 'select count(*) from t3 where 10 = mod('||idx||',10)+1' into va;
if mod(idx, 100) = 0 then
sys.dbms_system.ksdwrt(2, idx || 'th execution');
for p in (select ksppinm, ksppstvl
from sys.xm$ksppi i, sys.xm$ksppcv v
where i.indx = v.indx
and i.ksppinm in ('__shared_pool_size', '__db_cache_size', '__pga_aggregate_target')) loop
sys.dbms_system.ksdwrt(2, p.ksppinm || ' = ' || p.ksppstvl);
end loop;
select ksppstvl into pat2
from sys.xm$ksppi i, sys.xm$ksppcv v
where i.indx = v.indx
and i.ksppinm = '__pga_aggregate_target';
if pat1 <> pat2 then
sys.dbms_system.ksdwrt(2, 'yep, I got it! pat1=' || pat1 ||', pat2='||pat2);
exit;
end if;
end if;
end loop;
end;
/
This test case showed an expected and reasonable result, like the following:
100th execution
__shared_pool_size = 92274688
__db_cache_size = 16777216
__pga_aggregate_target = 83886080
200th execution
__shared_pool_size = 92274688
__db_cache_size = 16777216
__pga_aggregate_target = 83886080
300th execution
__shared_pool_size = 88080384
__db_cache_size = 20971520
__pga_aggregate_target = 83886080
400th execution
__shared_pool_size = 92274688
__db_cache_size = 16777216
__pga_aggregate_target = 83886080
500th execution
__shared_pool_size = 88080384
__db_cache_size = 20971520
__pga_aggregate_target = 83886080
1100th execution
__shared_pool_size = 92274688
__db_cache_size = 20971520
__pga_aggregate_target = 83886080
1200th execution
__shared_pool_size = 92274688
__db_cache_size = 37748736
__pga_aggregate_target = 58720256
yep, I got it! pat1=83886080, pat2=58720256
Oracle kept bouncing memory between the shared pool and the buffer cache, and at about the 1200th execution it suddenly stole some memory from the PGA target area to increase the db cache size.
(I'm still in the dark on this automatic memory management in 11g. More research needed!)
I think this is very clear and natural behavior. I just want to point out that it could lead to unwanted results in special cases, especially in combination with logic holes and bugs.
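Instead of polling the hidden __pga_aggregate_target parameter through x$ views, the same resize activity can be observed through documented dynamic views in 11g. A sketch (the component names are as they appear on my 11.2 instance and may vary slightly):

```sql
-- Each grow/shrink of an auto-tuned component under MEMORY_TARGET:
SELECT component, oper_type, initial_size, final_size, start_time, status
FROM   v$memory_resize_ops
ORDER  BY start_time DESC;

-- Current sizes of the components being traded against each other:
SELECT component, current_size, min_size, max_size
FROM   v$memory_dynamic_components
WHERE  component IN ('shared pool', 'DEFAULT buffer cache', 'PGA Target');
```

Watching v$memory_resize_ops while the hard-parse loop runs shows the same shared pool / buffer cache / PGA trade-offs as the x$ polling above, without needing SYS access to the x$ tables.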
================================
Dion Cho - Oracle Performance Storyteller
http://dioncho.wordpress.com (english)
http://ukja.tistory.com (korean)
================================ -
Oracle 11g Linguistic NLS SORT Throws Exception
Hi All,
I am using Oracle 11g, and when I try NLS_SORT with the GENERIC_M option, it throws an ORA-00910 exception. See below for more details.
Select * From v$version
1 Oracle Database 11g Release 11.1.0.6.0 - Production
2 PL/SQL Release 11.1.0.6.0 - Production
3 CORE 11.1.0.6.0 Production
4 TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
5 NLSRTL Version 11.1.0.6.0 - Production
Select * From v$NLS_PARAMETERS;
1 NLS_LANGUAGE AMERICAN
2 NLS_TERRITORY AMERICA
3 NLS_CURRENCY $
4 NLS_ISO_CURRENCY AMERICA
5 NLS_NUMERIC_CHARACTERS .,
6 NLS_CALENDAR GREGORIAN
7 NLS_DATE_FORMAT DD-MON-RR
8 NLS_DATE_LANGUAGE AMERICAN
9 NLS_CHARACTERSET WE8MSWIN1252
10 NLS_SORT BINARY
11 NLS_TIME_FORMAT HH.MI.SSXFF AM
12 NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
13 NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
14 NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
15 NLS_DUAL_CURRENCY $
16 NLS_NCHAR_CHARACTERSET AL16UTF16
17 NLS_COMP BINARY
18 NLS_LENGTH_SEMANTICS BYTE
19 NLS_NCHAR_CONV_EXCP FALSE
I have one table, TEST, with a column TESTDESC VARCHAR2(4000). If I retrieve the records with a plain SELECT statement, it executes successfully. But if I run the same SELECT after executing statement 1 below, it throws ORA-00910: specified length too long for its datatype.
1. ALTER SESSION SET NLS_SORT=GENERIC_m;
2. Select * From Test order by TestDesc; (It is throwing ORA-00910 exception)
But the same query/setup executes successfully in Oracle 10g.
My questions:
Why does it throw this exception in 11g?
Do I need to execute any ALTER statement, such as changing the character set?
Kindly help me solve this issue.
Does that mean that in 11.2.0.1 (without the PSE), NLSSORT does not work as intended (documented)?
Correct. At least, if the related information in bug reports is correct.
In 11.2.0.2, is there still silent truncation instead of "... calculates the collation key for a maximum prefix", or is there a difference?
There is no difference. The term "silent truncation" is not really precise, so the documentation describes the process in a more elaborate way.
Shouldn't the docs mention that a "calculated result" could mean that sorts may sometimes be out of sequence (inexact collation)?
Yes, it should. It is sometimes too easy to assume that things obvious to oneself are also obvious to anybody else ;-) I have asked for the following explanation to be added after the relevant doc paragraph:
"The above behavior implies that two character values whose collation keys (NLSSORT results) are compared to find the linguistic ordering are considered equal if they do not differ in the prefix described above even though they may differ at some further character position. As the NLSSORT function is used implicitly to find linguistic ordering for comparison conditions, BETWEEN condition, IN condition, ORDER BY, GROUP BY, and COUNT(DISTINCT), those operations may return results that are only approximate for long character values. This is a restriction of the current comparison architecture. Currently, the only way to guarantee precise linguistic comparison results is to not compare character values that are longer than 499 characters for monolingual collations and 124 characters for multilingual collations."
-- Sergiusz
Edited by: S. Wolicki, Oracle on May 5, 2011 1:17 PM
Lowered the pessimistic length for multilingual sorts from 249=floor((2000-3)/8) to 124=floor((2000-3)/16) -
Has anyone else experienced extremely long parse times on Oracle 11g versus 10g? We are seeing at least a 10x increase in the parse time of our SQL statements. This causes our customers to complain when running reports containing several SQL statements that aren't in the SGA due to infrequent use. I opened a Service Request, and development stated that this is to be expected in Oracle 11g due to the new optimizer features. I tried to disable the features by setting the optimizer version to anything but 11g, but no setting has helped. To make things even worse, this increased parse time is on a new server that should be 2.5 times faster than the server running the 10g database. I do get at least a 2.5x improvement (larger if I/O intensive) in almost every other aspect of the database except parse times.
user5999814 wrote:
I wondered what the resolution was to this issue. We are currently experiencing this with our Oracle 11.1.0.7 database with a small but important set of queries, and it seems to be getting worse. We first noticed it with a query that went from taking 3.5 seconds to 7 seconds. A SQL Trace / TKPROF revealed that 99% of the time was being spent during parsing. A second, similar query that brings in more data takes around 1.5 minutes to parse, then only a few seconds to actually execute. These queries are part of an online web transaction, so this is not acceptable. We are able to replicate the behavior in multiple database instances.
As a starting point, it would be interesting to see the tkprof output from:
alter session set events '10046 trace name context forever, level 8';
+your select statement+
exit
When posting the output, use the "code" tags (see below) to make the output readable.
Regards
Jonathan Lewis
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
+"I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I'll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be."+
Isaac Asimov -
Porting from Oracle 10g AS to Oracle 11g Weblogic server
Hi,
I am trying to port a J2EE application from Oracle 10g AS to Oracle 11g WebLogic Server. I have JSP files which contain SQL code embedded to fetch data from the DB. The components used in this application are Java, JSP, servlets, and stateless session beans.
I have a common jar that is used to create the DB connection manager and other common stuff, located inside WEB-INF/lib. My application is packaged as an EAR, which contains the WAR and the different ejb-project.jar files. When the stateless bean tries to call the ConnectionManager class inside the common jar in WEB-INF/lib, I get an InvocationTargetException. When I debugged in Eclipse, I saw that inside the EJB the reference to ConnectionManager cannot be resolved. So I guess at runtime the EJB is not able to locate the ConnectionManager class.
I tried copying this jar to APP-INF/lib while the same jar was also present inside WEB-INF/lib. In that case the JSP fails with the exception that ConnectionManager cannot be resolved.
How can I make the EJB work while keeping the common jar inside WEB-INF/lib alone?
Any help or suggestions?
Thanks
Raj
The classloader hierarchy in WLS looks like this (arrows denote inheritance direction):
Java bootstrap classloader <- Java CLASSPATH classloader <- WLS $DOMAIN_HOME/lib classloader <- Application (e.g., EAR) classloader <- web app classloader <- single JSP classloader
The Connection Manager is loaded by the web app classloader while the EJBs are loaded by the application classloader so the EJBs have no visibility to the classes loaded by the child (web app) classloader.
The simplest fix might be to move the Connection Manager jar file to the EAR file's APP-INF/lib directory.
One word of caution, though: you need to be using WebLogic Server's data sources and JDBC connection pooling to make sure that you get correct transactional behavior if the application does any sort of JTA transaction (regardless of whether the JTA transactions use XA or not...).
Hope this helps,
Robert -
Font Loading in Oracle 11g javavm
Hello,
I want to write a java source in Oracle 11g that creates an image containing texts.
I tried to create a java class from sqlplus using "create or replace and resolve java source" .....
that contains ..
"Font fnt = new Font("Courier",Font.PLAIN,14);"
and,
"g.setFont(fnt);".
The source is compiled and resolved succesfully, however, when I call the corresponding PL/SQL function, I find the following error messages...
SQL Error: ORA-29532: Java call terminated by uncaught Java exception: java.lang.Error: Probable fatal error:No fonts found.
ORA-06512: at "TESTUSER.PKG_TXT_IMG", line 17
ORA-06512: at "TESTUSER.PKG_TXT_IMG", line 62
29532. 00000 - "Java call terminated by uncaught Java exception: %s"
*Cause: A Java exception or error was signaled and could not be
resolved by the Java code.
*Action: Modify Java code, if this behavior is not intended.
The code works fine in Oracle 10g R2. The font.properties is found at $ORACLE_HOME/javavm/lib/ojvmfonts directory.
The Oracle 11g does not have the ojvmfonts directory in its javavm.
How can I load fonts in Oracle 11g Java VM?
Can anybody help me?
Rahman
Hello,
It would be nice if your script spooled a log.
That way, you could more easily catch the error and the details of the rejected rows.
Best regards,
Jean-Valentin -
Unpredictable problem using XMLTYPE in Oracle 11g?
We recently upgraded from Oracle 10g to Oracle 11g, which caused some of our stored procedures to start acting funny.
Our database stores BLOBs containing XML data in a table. We then asynchronously convert these BLOBs into XMLTYPE objects, and use them to perform operations in our database. This logic started failing when we moved to 11g.
Our original code looked like this:
PROCEDURE submitTpfdd(shipmentDataId IN VARCHAR2) AS
shipmentData XMLTYPE;
csid INTEGER;
shipmentName VARCHAR(128);
gk_namespaces VARCHAR(1024) := 'xmlns:a="http://my.app/1.0.0.0"';
BEGIN
SELECT NLS_CHARSET_ID('UTF8') INTO csid FROM dual;
SELECT XMLTYPE(tf.shipmentData, csid)
INTO shipmentData
FROM SHIPQ.SHIPMENT_FILE tf
WHERE tf.shipment_id = shipmentDataId;
shipmentName := shipmentData.extract('/a:Shipment/shipmentName/text()', gk_namespaces).getStringVal();
... (more logic)
END submitTpfdd;
When we switched to 11g, this code started frequently failing with an "unsupported character set" error. It happens about half the time, and it's unpredictable: it will sometimes pass and sometimes fail, even when the same BLOB is read both times. I haven't been able to reproduce the error with any of XMLTYPE's other constructors.
Has anybody encountered similar behavior with XMLTYPE in 11g? Should I submit a tracker?
I have created a SQL program which can be run independently to reproduce the problem.
DECLARE namespaces constant VARCHAR2(1024) := 'xmlns:a="http://morton.com/"';
CURSOR cursor0(reeves XMLTYPE) IS
SELECT EXTRACT(VALUE(t), 'text()').getstringval() AS
bullock
FROM TABLE(xmlsequence(reeves.EXTRACT('/a:hopper', namespaces))) t;
CURSOR cursor1(reeves XMLTYPE) IS
SELECT EXTRACT(VALUE(t), 'text()').getstringval() AS
bullock
FROM TABLE(xmlsequence(reeves.EXTRACT('/a:hopper', namespaces))) t;
CURSOR cursor2(reeves XMLTYPE) IS
SELECT EXTRACT(VALUE(t), 'text()').getstringval() AS
bullock
FROM TABLE(xmlsequence(reeves.EXTRACT('/a:hopper', namespaces))) t;
CURSOR cursor3(reeves XMLTYPE) IS
SELECT EXTRACT(VALUE(t), 'text()').getstringval() AS
bullock
FROM TABLE(xmlsequence(reeves.EXTRACT('/a:hopper', namespaces))) t;
CURSOR cursor4(reeves XMLTYPE) IS
SELECT EXTRACT(VALUE(t), 'text()').getstringval() AS
bullock
FROM TABLE(xmlsequence(reeves.EXTRACT('/a:hopper', namespaces))) t;
CURSOR cursor5(reeves XMLTYPE) IS
SELECT EXTRACT(VALUE(t), 'text()').getstringval() AS
bullock
FROM TABLE(xmlsequence(reeves.EXTRACT('/a:hopper', namespaces))) t;
CURSOR cursor6(reeves XMLTYPE) IS
SELECT EXTRACT(VALUE(t), 'text()').getstringval() AS
bullock
FROM TABLE(xmlsequence(reeves.EXTRACT('/a:hopper', namespaces))) t;
CURSOR cursor7(reeves XMLTYPE) IS
SELECT EXTRACT(VALUE(t), 'text()').getstringval() AS
bullock
FROM TABLE(xmlsequence(reeves.EXTRACT('/a:hopper', namespaces))) t;
CURSOR cursor8(reeves XMLTYPE) IS
SELECT EXTRACT(VALUE(t), 'text()').getstringval() AS
bullock
FROM TABLE(xmlsequence(reeves.EXTRACT('/a:hopper', namespaces))) t;
CURSOR cursor9(reeves XMLTYPE) IS
SELECT EXTRACT(VALUE(t), 'text()').getstringval() AS
bullock
FROM TABLE(xmlsequence(reeves.EXTRACT('/a:hopper', namespaces))) t;
xml_clob CLOB := empty_clob;
xml_blob BLOB := empty_blob;
xml_varchar VARCHAR2(4000);
xml_xmltype XMLTYPE;
warning INTEGER;
dest_offset INTEGER := 1;
src_offset INTEGER := 1;
lang_context INTEGER := 0;
char_set INTEGER := nls_charset_id('UTF8');
BEGIN
dbms_lob.createtemporary(xml_clob, TRUE);
dbms_lob.createtemporary(xml_blob, TRUE);
dbms_lob.OPEN(xml_clob, dbms_lob.lob_readwrite);
dbms_lob.OPEN(xml_blob, dbms_lob.lob_readwrite);
xml_varchar := '<a:hopper xmlns:a="http://morton.com"/>';
dbms_lob.writeappend(xml_clob, length(xml_varchar), xml_varchar);
dbms_lob.converttoblob(xml_blob, xml_clob, dbms_lob.lobmaxsize, dest_offset, src_offset, char_set, lang_context, warning);
xml_xmltype := XMLTYPE(xml_blob, char_set);
FOR daniels IN cursor0(xml_xmltype)
LOOP
CONTINUE;
END LOOP;
FOR daniels IN cursor1(xml_xmltype)
LOOP
CONTINUE;
END LOOP;
FOR daniels IN cursor2(xml_xmltype)
LOOP
CONTINUE;
END LOOP;
FOR daniels IN cursor3(xml_xmltype)
LOOP
CONTINUE;
END LOOP;
FOR daniels IN cursor4(xml_xmltype)
LOOP
CONTINUE;
END LOOP;
FOR daniels IN cursor5(xml_xmltype)
LOOP
CONTINUE;
END LOOP;
FOR daniels IN cursor6(xml_xmltype)
LOOP
CONTINUE;
END LOOP;
FOR daniels IN cursor7(xml_xmltype)
LOOP
CONTINUE;
END LOOP;
FOR daniels IN cursor8(xml_xmltype)
LOOP
CONTINUE;
END LOOP;
FOR daniels IN cursor9(xml_xmltype)
LOOP
CONTINUE;
END LOOP;
END;
If someone else could run this program and verify that it acts unpredictably, that would be helpful. I have submitted a Metalink tracker, but the person working the tracker is not able to duplicate the behavior. -
Object Invalidation in Oracle 11g R2
Hi All,
I found a distinct behavior in Oracle 11gR2 that does not occur in previous releases. Let me explain with an example.
--creating a small table
create table TEMPSAMPLE (COL1 VARCHAR2(10),COL2 VARCHAR2(10),COL3 VARCHAR2(15),COL4 VARCHAR2(15));
-- Now creating a primary key constraint (and its index) for table TEMPSAMPLE
ALTER TABLE TEMPSAMPLE ADD CONSTRAINT PKTEMPSAMPLE PRIMARY KEY (COL1,COL2);
---CREATING A VIEW ON THE ABOVE TABLE
CREATE OR REPLACE VIEW VWTEMPSAMPLE AS
SELECT * FROM TEMPSAMPLE;
-- CREATING A PACKAGE WHICH USES TEMPSAMPLE AND VWTEMPSAMPLE OBJECTS.
CREATE OR REPLACE PACKAGE PKGSAMP AS
VAL1 TEMPSAMPLE.COL1%TYPE;
VAL2 CONSTANT TEMPSAMPLE.COL1%TYPE:='11';
PROCEDURE VERIFYSAMP(INVAL IN NUMBER);
END PKGSAMP;
CREATE OR REPLACE PACKAGE BODY PKGSAMP IS
PROCEDURE VERIFYSAMP(INVAL IN NUMBER)
AS
VAL1 TEMPSAMPLE.COL1%TYPE;
VAL2 CONSTANT TEMPSAMPLE.COL1%TYPE:='11';
BEGIN
VAL1:='RAVI';
FOR RC IN (SELECT * FROM VWTEMPSAMPLE)
LOOP
DBMS_OUTPUT.PUT_LINE('COL1 '||RC.COL1);
DBMS_OUTPUT.PUT_LINE('COL2 '||RC.COL2);
DBMS_OUTPUT.PUT_LINE('COL3 '||RC.COL3);
END LOOP;
INSERT INTO TEMPSAMPLE VALUES('REC05','RAVI','EEE','CK');
DELETE FROM TEMPSAMPLE WHERE COL1='RECO1';
UPDATE TEMPSAMPLE SET COL4='CKR' WHERE COL1='RECO2';
DBMS_OUTPUT.PUT_LINE('VALUE IS '||INVAL);
DBMS_OUTPUT.PUT_LINE('VALUE IS '||VAL1);
END VERIFYSAMP;
END PKGSAMP;
--CREATING A PACKAGE PKGSAMP2 WHICH USES TEMPSAMPLE TABLE ITSELF
CREATE OR REPLACE PACKAGE PKGSAMP2 AS
VAL1 TEMPSAMPLE.COL1%TYPE;
VAL2 CONSTANT TEMPSAMPLE.COL1%TYPE:='11';
PROCEDURE VERIFYSAMP(INVAL IN NUMBER);
END PKGSAMP2;
CREATE OR REPLACE PACKAGE BODY PKGSAMP2 IS
PROCEDURE VERIFYSAMP(INVAL IN NUMBER)
AS
VAL1 TEMPSAMPLE.COL1%TYPE;
VAL2 CONSTANT TEMPSAMPLE.COL1%TYPE:='11';
BEGIN
VAL1:='RAVI';
FOR RC IN (SELECT * FROM TEMPSAMPLE)
LOOP
DBMS_OUTPUT.PUT_LINE('COL1 '||RC.COL1);
DBMS_OUTPUT.PUT_LINE('COL2 '||RC.COL2);
DBMS_OUTPUT.PUT_LINE('COL3 '||RC.COL3);
END LOOP;
INSERT INTO TEMPSAMPLE VALUES('REC05','RAVI','EEE','CK');
DELETE FROM TEMPSAMPLE WHERE COL1='RECO1';
UPDATE TEMPSAMPLE SET COL4='CKR' WHERE COL1='RECO2';
DBMS_OUTPUT.PUT_LINE('VALUE IS '||INVAL);
DBMS_OUTPUT.PUT_LINE('VALUE IS '||VAL1);
END VERIFYSAMP;
END PKGSAMP2;
-- OBJECT STATUS OF PACKAGES
SELECT OBJECT_NAME,OBJECT_TYPE,STATUS FROM USER_OBJECTS WHERE OBJECT_NAME IN ('PKGSAMP','PKGSAMP2','VWTEMPSAMPLE');
OBJECT_NAME OBJECT_TYPE STATUS
VWTEMPSAMPLE VIEW VALID
PKGSAMP2 PACKAGE BODY VALID
PKGSAMP2 PACKAGE VALID
PKGSAMP PACKAGE BODY VALID
PKGSAMP PACKAGE VALID
Alter table TEMPSAMPLE DISABLE constraint PKTEMPSAMPLE KEEP INDEX;
DROP INDEX PKTEMPSAMPLE;
--OBJECT STATUS OF PACKAGES AFTER DROPPING INDEX
SELECT OBJECT_NAME,OBJECT_TYPE,STATUS FROM USER_OBJECTS WHERE OBJECT_NAME IN ('PKGSAMP','PKGSAMP2','VWTEMPSAMPLE');
OBJECT_NAME OBJECT_TYPE STATUS
VWTEMPSAMPLE VIEW INVALID
PKGSAMP2 PACKAGE BODY VALID
PKGSAMP2 PACKAGE VALID
PKGSAMP PACKAGE BODY INVALID
PKGSAMP PACKAGE VALID
Alter table TEMPSAMPLE ENABLE constraint PKTEMPSAMPLE;
As the above process shows, dropping an index on a table leads to invalidation of the view that depends on that table, and all objects that use this view also get invalidated.
This invalidation occurs only in Oracle 11g R2, and because of it we are facing an issue in our application.
We have a procedure where we disable a constraint, drop an index, and process the inserts/updates into the tables. After a successful insert/update, we finally re-enable the constraint.
This worked fine in releases prior to Oracle 11g R2; since we recently migrated to 11g R2, it leads to invalidation of all packages that use the view, and application sessions opened earlier are unable to access the invalidated package and raise an exception.
Please provide a solution if possible.
I tested the behavior in 10.2.0.4 and 11.2.0.3.
In 10.2.0.4, the view remained valid when we disabled the constraint using the KEEP INDEX option, but it became invalid after dropping the index. So I guess you could mark the indexes unusable and rebuild them after the data load instead.
In 11.2.0.3, the view became invalid as soon as we disabled the constraint, even with the KEEP INDEX option. -
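Whichever release-specific rule applies, one mitigation is to recompile the invalidated objects explicitly at the end of the maintenance procedure, before application sessions touch them again. A sketch using the object names from the test case:

```sql
-- Re-enable the constraint, then recompile what the DDL invalidated.
ALTER TABLE tempsample ENABLE CONSTRAINT pktempsample;
ALTER VIEW vwtempsample COMPILE;
ALTER PACKAGE pkgsamp COMPILE BODY;

-- Or recompile every invalid object in a schema in one call
-- (UTL_RECOMP is normally run as SYS / with SYSDBA privileges):
-- EXEC UTL_RECOMP.RECOMP_SERIAL(USER);
```

Note that already-open sessions holding instantiations of the old package state may still raise ORA-04068 once, so the recompile is best done before the application reconnects.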
Xpath difference between Oracle 10g and Oracle 11g
All,
I'm working on moving our existing stored functions from Oracle 10g (10.2.0.4.0) to Oracle 11g (11.2.0.1.0), and so far everything has worked just fine on Oracle 11g... except for one XPath statement.
The statement below works fine in Oracle 10g (10.2.0.4.0):
extractValue(inv_dtl_img, '/service//ground/sortKeyCode') AS "srt_key_cd",
Please note: I need to use the double slash "//" in order to ignore the two different elements in the schema.
However, in Oracle 11g (11.2.0.1.0), when this statement is executed in the stored function, I get this:
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected - got -
The extractValue command is pulling data out of an XMLType column, and the corresponding XML schema looks like:
<service>
<trans>
<ground>
<sortKeyCode>
</sortKeyCode>
</ground>
</trans>
<nontrans>
<ground minOccurs=0>
<sortKeyCode>
</sortKeyCode>
</ground>
</nontrans>
</service>
Please note: In the XML message, the "trans" and "nontrans" elements are exclusive, so both will never be populated at the same time. A typical XML message would look like this:
<service><trans><ground><sortKeyCode>3</sortKeyCode></ground></trans></service>
or this:
<service><nontrans><ground><sortKeyCode>5</sortKeyCode></ground></nontrans></service>
In the schema, the sortKeyCode has been defined in both places as "string maxlen=3", so the datatype of that element is exactly the same in both the "trans" and "nontrans" sections of the schema. The only difference in the schema (outside of the trans and nontrans tags) is the fact that the second "ground" tag is defined with a "minOccurs=0". Could Oracle 11g be treating the schema differently than Oracle 10g, resulting in the error?
Any thoughts would be appreciated.
The only way to get a quick answer to that one is to file a service request with Oracle Support. It could be a bug, or a correct change that follows W3C behavior. That said, since you are moving to 11.2: the proprietary Oracle functions extract/extractValue etc. are deprecated from 11.2 onwards. The more sensible way forward, although I know it is more work intensive, is to switch to the XQuery alternatives such as the XMLExists, XMLQuery, or XMLTable functions.
Moving to EXTRACT is a bad idea, because the result will always be treated as an XML fragment. If you are unlucky, Oracle will handle it in memory via DOM (the fallback for XML parsing whenever none of the smarter mechanisms inside Oracle can be applied), and this will result in a performance downgrade due to high CPU and memory consumption/overhead...
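To illustrate, a rewrite of the original expression using XMLTABLE might look like this sketch (the table name invoices is an assumption; inv_dtl_img is the XMLType column from the question):

```sql
-- invoices is a hypothetical table name; adjust to the real one
SELECT x.srt_key_cd
FROM   invoices t,
       XMLTABLE('/service//ground'
                PASSING t.inv_dtl_img
                COLUMNS srt_key_cd VARCHAR2(3) PATH 'sortKeyCode') x;
```

XMLTABLE projects the matched nodes as relational rows, so the "//" descendant step still covers both the trans and nontrans branches.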
Your pick... -
USER_DEPENDENCIES view in oracle 11g and oracle 10g are different?
I have tested Oracle 10g and Oracle 11g, and I found different behavior in the user_dependencies view between the two versions.
In Oracle 10g, every referenced object is reported in the user_dependencies view. In Oracle 11g, however, only valid referenced objects are reported.
For example, a package, pac_1, references a table, ref_tab_1, and a package, ref_pac_2. After ref_tab_1 is dropped and ref_pac_2 becomes invalid, I query user_dependencies.
sql> select referenced_name, referenced_type from user_dependencies where name = 'pac_1';
In oracle 10g, the result is like the following,
< referenced_type > , < referenced_name >
non-existent , ref_tab_1
package , ref_pac_2
In Oracle 11g, the result shows neither the non-existent object nor the invalid one:
no rows selected
Is that really the case, or did I miss something when I installed or created the 11g database?
Is there a way to make user_dependencies behave as it does in 10g, showing all the referenced objects?
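For reference, a minimal script to reproduce the scenario described above (object names are illustrative; note that the dictionary stores unquoted identifiers in upper case, so the NAME predicate should normally be 'PAC_1'):

```sql
CREATE TABLE ref_tab_1 (id NUMBER);

CREATE OR REPLACE PACKAGE pac_1 AS
  PROCEDURE p;
END;
/
CREATE OR REPLACE PACKAGE BODY pac_1 AS
  PROCEDURE p IS
    n NUMBER;
  BEGIN
    SELECT COUNT(*) INTO n FROM ref_tab_1;
  END;
END;
/
DROP TABLE ref_tab_1;  -- invalidates the package body

SELECT referenced_name, referenced_type
FROM   user_dependencies
WHERE  name = 'PAC_1';
```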
Edited by: user7436913 on Mar 10, 2010 5:48 AM
Edited by: user7436913 on Mar 10, 2010 5:53 AM
Can you post your query along with its explain plan from both versions?
Thanks,
karthick. -
Create tablespace in Oracle 11g
Windows 2008 R2 (64-bit OS)
Oracle 11g R2 Standard Edition ONE(64-bit database)
I use the following Oracle command to create tablespace in 11g.
CREATE TABLESPACE TEST_TBS
DATAFILE
'C:\ORACLE\ORADATA\TEST_TBS01.DBF' SIZE 5M
AUTOEXTEND ON NEXT 5M
MAXSIZE 50000M;
- The above command works fine.
Are there any better options I should be using when creating a tablespace, with respect to performance and management?
TSharma wrote:
You can try different block sizes in tablespaces...
It is recommended to use small block sizes for OLTP and large ones for DSS. The reasons:
On OLTP, a smaller block size reduces contention:
1. Reduces the possibility of concurrent transactions contending for the same block.
2. Provides a better locking mechanism.
3. Fewer transaction lock slots contended at the block header.
DSS benefit from big blocks mainly because of IO:
1. Retrieves more data in fewer IO operations, which is critical for full table scans and index scans.
2. A higher row density means less IO.
3. Indexes are read on a block-per-block basis; with big blocks they are retrieved with less IO.
4. Reduces the chances of chained rows.
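If you do experiment with a non-default block size, note that a buffer cache for that size must be configured before the tablespace can be created. A minimal sketch, assuming a 16K block size and an illustrative file path and cache size:

```sql
-- A db_16k_cache_size buffer pool must exist before any 16K tablespace
ALTER SYSTEM SET db_16k_cache_size = 64M;

-- BLOCKSIZE overrides the database default block size for this tablespace only
CREATE TABLESPACE test_16k
  DATAFILE 'C:\ORACLE\ORADATA\TEST_16K01.DBF' SIZE 100M
  AUTOEXTEND ON NEXT 100M MAXSIZE 50000M
  BLOCKSIZE 16K;
```

With 16K blocks a smallfile datafile can grow to roughly 64 GB, which is why the 50000M MAXSIZE above is only reachable at this block size (or with a BIGFILE tablespace).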
Check the following link for examples and detailed information.
If johnpau needs a 50G file he has no choice: 16K blocks is the only option.
But that advice about different block sizes is very twentieth century. Nowadays, it doesn't matter. -
"Rows" statistics from STAT in trace event 10046, Oracle 11g.
Hi, all!
Why are the "Rows" statistics in STAT different between 10g and 11g?
I have two databases, one version 10g and one version 11g, with the same data.
I executed some pl/sql code with tracing 10046 level 8.
And I got different results when reading the raw trace.
In 10g the row statistics cover all executions - Rows= 7.
In 11g the row statistics cover only the first execution - Rows= 1. Why?
See my example:
declare
type t_name_tbl is table of varchar2(30) index by binary_integer;
v_name_tbl t_name_tbl;
v_len number := 10;
begin
execute immediate 'alter session set timed_statistics = true ';
execute immediate 'alter session set statistics_level=all ';
execute immediate 'alter session set max_dump_file_size = unlimited ';
execute immediate 'alter session set events ''10046 trace name context forever,level 8'' ';
loop
select cour_name bulk collect
into v_name_tbl
from country t
where length(t.cour_name) = v_len;
exit when v_len = 0;
v_len := v_len - 1;
for i in 1 .. v_name_tbl.count loop
dbms_output.put_line(v_name_tbl(i));
end loop;
end loop;
end;
Result Tkprof for Oracle 10g:
SELECT COUR_NAME
FROM
COUNTRY T WHERE LENGTH(T.COUR_NAME) = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 11 0.00 0.00 0 0 0 0
Fetch 11 0.01 0.00 0 44 0 7
total 23 0.01 0.00 0 44 0 7
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 649 (recursive depth: 1)
Rows Row Source Operation
7 TABLE ACCESS FULL COUNTRY (cr=44 pr=0 pw=0 time=1576 us)
Result Tkprof for Oracle 11g:
SQL ID: 3kqmkg8jp5nwk
Plan Hash: 1371235632
SELECT COUR_NAME
FROM
COUNTRY T WHERE LENGTH(T.COUR_NAME) = :B1
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 0 0 0
Execute 11 0.02 0.01 0 0 0 0
Fetch 11 0.00 0.01 3 44 0 7
total 23 0.03 0.02 3 44 0 7
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 82 (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS FULL COUNTRY (cr=4 pr=3 pw=0 time=0 us cost=2 size=44 card=2)
Where can I read about it?
Oracle 11g by default writes the execution plan (the STAT lines in the raw trace file) after the first execution of the SQL statement, while prior to 11g the execution plan was written only when the cursor was closed.
The behavior in 11g can be controlled by changing the PLAN_STAT parameter of the call that enables the trace from the default value of FIRST_EXECUTION to ALL_EXECUTIONS:
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_sessio.htm#i1010518
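For example, enabling the trace for the current session with STAT lines written on every execution might look like this (a sketch using DBMS_MONITOR, with level-8-style waits on and binds off):

```sql
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => NULL,              -- NULL means the current session
    serial_num => NULL,
    waits      => TRUE,              -- equivalent of event 10046 level 8
    binds      => FALSE,
    plan_stat  => 'ALL_EXECUTIONS'); -- emit STAT lines after every execution
END;
/
```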
Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.
Edited by: Charles Hooper on Aug 9, 2010 3:33 PM
The default value in 11g is FIRST_EXECUTION, while the behavior prior to 11g corresponds to ALL_EXECUTIONS - corrected the incomplete sentence. -
Oracle 10g Vs Oracle 11g Scripts
Hi
We have a production environment that has Oracle 11g and development environment with Oracle 10g.
Scripts for:
1. Creating tables with constraints
2. Insert/Update/Delete records
3. Create Triggers
4. Create Stored Procedure
will be developed in Oracle 10g. These scripts will then be executed in the Oracle 11g production environment.
Will this create any issue?
Are there any syntax/procedure changes in Oracle 11g?
Help required...
Thanks
Shoba Anandhan
What version of Oracle will you be running in test?
Why would you be running different Oracle versions in different environments? That doesn't generally make much sense other than when you are in the middle of an upgrade in which case you would have upgraded the development environment first, not last.
For the most part, ignoring bugs and the occasional deprecated package, code that compiles in 10g should compile in 11g. But there is no guarantee that the behavior (particularly performance) will be similar. You'd need to verify that each script works correctly in the new environment.
Look at it this way-- would you blindly upgrade your production database from 10g to 11g without retesting the application? If you wouldn't do that, you shouldn't assume that the same script will behave exactly the same way in 11g as 10g. Most things will almost certainly work the same-- you just won't necessarily know where the deviations are until you try it.
Justin -
Importing Statistics - Oracle 11g upgrade
Hi,
We are in the middle of planning the migration of an Oracle 9.2.0.8 database hosted on HP-UX to Oracle 11gR2 on Linux. The database size is 1 TB (400 GB of table data and 600 GB of index data).
Please let us know whether we can use the option of importing/exporting the statistics from the Oracle 9i database into the Oracle 11g database. The database is highly OLTP/batch. Will there be any query performance problems due to importing the statistics from Oracle 9i into Oracle 11g?
Any suggestions are welcome; let me know if you need any more information.
thanks,
Mahesh
Hello,
Please let us know whether we can use the option of import/export the statistics from the Oracle 9i to Oracle 11g database
If I were you, once the data are imported into the 11g database I would refresh the statistics by using the DBMS_STATS package.
Then, you can test the most common queries on the new database. If some performance troubles appear, you can use classical tools (Statspack, Explain Plan, TKPROF, ...) or some useful tools you have in 11g such as the SQL Tuning Advisor or SQL Access Advisor:
http://download.oracle.com/docs/cd/E11882_01/server.112/e16638/sql_tune.htm#PFGRF028
http://download.oracle.com/docs/cd/E11882_01/server.112/e16638/advisor.htm#PFGRF008
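A minimal sketch of refreshing the statistics after the import (the schema name APP_OWNER and the parallel degree are assumptions to adjust for your environment):

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'APP_OWNER',                  -- hypothetical schema name
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle pick the sample
    cascade          => TRUE,                         -- gather index statistics too
    degree           => 4);                           -- parallel degree
END;
/
```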
Hope this helps.
Best regards,
Jean-Valentin