Buffer_pool and db_keep_cache_size
hi all,
i want to change the buffer_pool of one table to KEEP and my parameter settings are
sga_target big integer 7516192768
buffer_pool_recycle string
db_recycle_cache_size big integer 0
db_keep_cache_size big integer 0
if i change the buffer_pool to KEEP for a table, will it be cached or not with the above settings?
do i have to first set db_keep_cache_size to non-zero to make it effective?
regards
Mir
user11972299 wrote:
hi all,
i want to change the buffer_pool of one table to KEEP and my parameter settings are
sga_target big integer 7516192768
buffer_pool_recycle string
db_recycle_cache_size big integer 0
db_keep_cache_size big integer 0
if i change the buffer_pool to KEEP for a table, will it be cached or not with the above settings?
do i have to first set db_keep_cache_size to non-zero to make it effective?
buffer_pool_recycle is an old parameter. Why do you want to combine it with the new ASMM parameters? If you want to use the KEEP pool for your table, change the table's storage option BUFFER_POOL to KEEP with an ALTER TABLE statement.
Don't touch db_cache_size. Let ASMM handle it.
HTH
Aman....
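For reference, the change Aman describes might look like the following sketch; the table name is hypothetical and the 100M value is purely illustrative. Note that the KEEP pool is one of the manually sized components, so ASMM will not grow it from zero on its own:

```sql
-- Give the KEEP pool some memory first; it is not auto-tuned by ASMM.
-- 100M is illustrative; size it to the segments you plan to keep.
ALTER SYSTEM SET db_keep_cache_size = 100M SCOPE = BOTH;

-- Assign the table (hypothetical name) to the KEEP pool.
ALTER TABLE scott.lookup_tab STORAGE (BUFFER_POOL KEEP);

-- Confirm the assignment.
SELECT owner, segment_name, buffer_pool
FROM   dba_segments
WHERE  owner = 'SCOTT' AND segment_name = 'LOOKUP_TAB';
```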
Similar Messages
-
Three questions regarding DB_KEEP_CACHE_SIZE and caching tables.
Folks,
In my Oracle 10g db, which I inherited, the init.ora parameter DB_KEEP_CACHE_SIZE is configured to 4GB in size.
Also there are bunch of tables that were created with CACHE turned on for them.
By querying the dba_tables view with CACHE='Y', I can see the names of these tables.
With time, some of these tables have grown in size (no. of rows) and also some of these tables are not required to be cached any longer.
So here is my first question
1) Is there a query I can run to find out what tables are currently in the DB_KEEP_CACHE_SIZE?
2) Also, how can I find out if my DB_KEEP_CACHE_SIZE is adequately sized or needs to be increased in size, as some of these
tables have grown in size?
Third question
I know for fact, that there are 2 tables that do not need to be cached any longer.
So how do I make sure they do not occupy space in the DB_KEEP_CACHE_POOL.
I tried the alter table <table_name> nocache; statement.
Now the cache column value for these in dba_tables is 'N', but if I query the dba_segments tables, the BUFFER_POOL column for them still has value of 'KEEP'.
After altering these tables to nocache, I did bounce my database.
Again, how do I make sure these tables, which are not required to be cached any longer, do not occupy space in the DB_KEEP_CACHE_SIZE?
Would very much appreciate your help.
Regards
Ashish
Hello,
1) Is there a query I can run to find out what tables are currently in the DB_KEEP_CACHE_SIZE? You may try this query:
select owner, segment_name, segment_type, buffer_pool
from dba_segments
where buffer_pool = 'KEEP'
order by owner, segment_name;
2) Also how can I find out if my DB_KEEP_CACHE_SIZE is adequately sized or needs to be increased in size, as some of these tables have grown in size?
You may try to get the total size of the Segments using the KEEP BUFFER:
select sum(bytes)/(1024*1024) "MB"
from dba_segments
where buffer_pool = 'KEEP';
To be sure that all the blocks of these segments (Table / Index) won't often be aged out of the KEEP BUFFER, the total size given by the above query should be less than the size of your KEEP BUFFER.
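Jean-Valentin's sizing check can be rolled into a single query; a minimal sketch, not tied to any particular version:

```sql
-- Total size of KEEP-assigned segments vs. the configured KEEP pool, both in MB.
SELECT (SELECT NVL(SUM(bytes), 0) / 1048576
        FROM   dba_segments
        WHERE  buffer_pool = 'KEEP')        AS keep_segments_mb,
       (SELECT value / 1048576
        FROM   v$parameter
        WHERE  name = 'db_keep_cache_size') AS keep_pool_mb
FROM   dual;
```

If keep_segments_mb exceeds keep_pool_mb, blocks of the kept segments will start aging each other out.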
I know for fact, that there are 2 tables that do not need to be cached any longer.
So how do I make sure they do not occupy space in the DB_KEEP_CACHE_POOL?
You just have to execute the following statement:
ALTER TABLE <owner>.<table> STORAGE(BUFFER_POOL DEFAULT);
Hope this helps.
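One caveat: moving a segment back to BUFFER_POOL DEFAULT does not instantly flush its blocks out of the KEEP pool; they simply age out as other blocks need the space. A sketch for watching the cached block count drop (owner and table name are hypothetical):

```sql
-- How many blocks of the table are still sitting in the buffer cache?
SELECT COUNT(*) AS cached_blocks
FROM   v$bh b
WHERE  b.objd = (SELECT data_object_id
                 FROM   dba_objects
                 WHERE  owner       = 'SCOTT'     -- hypothetical owner
                 AND    object_name = 'MY_TABLE'  -- hypothetical table
                 AND    object_type = 'TABLE');
```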
Best regards,
Jean-Valentin -
Buffer_pool & db_keep_cache_size
Our database is 10.2.0.3 with 2 nodes RAC. The database servers are window 2003.
I ran the query “select * from dba_tables where buffer_pool = 'KEEP'” and get 6 records come back. Some tables are pretty big.
I ran “select sum(blocks) * 8192 from dba_tables where buffer_pool = 'KEEP'” and get:
5489451008
The parameter for db_keep_cache_size is 1073741824 for both instances.
My question is: How Oracle handles this if Oracle allocated memory is smaller than the requested space? What are the impacts on the performance?
Thanks,
Shirley
Yes, a buffer pool is a buffer pool and is managed as such. Each type of pool may have some unique logic, especially for how data is selected to go into the pool (keep, recycle, 16K block size, 2K block size), but when all is said and done the end product is a buffer cache that has to be managed.
Personally, I am not a fan of using multiple buffer pools. For most situations Oracle can probably do a better job of deciding what blocks to keep and purge from one large buffer cache than most DBA's can do by using the normal buffer cache, a keep, and a recycle pool. Over time the application data and usage changes. The pool configuration probably is not checked regularly enough to keep it properly aligned.
Besides, Oracle really uses a touch-count algorithm to manage the buffer cache instead of the documented LRU. Call it a modified LRU algorithm; the need to use a keep and/or recycle pool really isn't there for most shops.
IMHO -- Mark D Powell -- -
Questions about db_keep_cache_size and Automatic Shared Memory Management
Hello all,
I'm coming upon a server that I'm needing to pin a table and some objects in, per the recommendations of an application support call.
Looking at the database, which is a 5 node RAC cluster (11gr2), I'm looking to see how things are laid out:
SQL> select name, value, value/1024/1024 value_MB from v$parameter
2 where name in ('db_cache_size','db_keep_cache_size','db_recycle_cache_size','shared_pool_size','sga_max_size');
NAME VALUE VALUE_MB
sga_max_size 1694498816 1616
shared_pool_size 0 0
db_cache_size 0 0
db_keep_cache_size 0 0
db_recycle_cache_size 0 0
Looking at granularity level:
SQL> select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%';
GRANULE_SIZE/VALUE
2048
Then....I looked, and I thought this instance was set up with Auto Shared Mem Mgmt....but I see that sga_target size is not set:
SQL> show parameter sga
NAME TYPE VALUE
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1616M
sga_target big integer 0
So, I'm wondering first of all...would it be a good idea to switch to Automatic Shared Memory Management? If so, is this as simple as altering system set sga_target =...? Again, this is on a RAC system, is there a different way to do this than on a single instance?
If that isn't the way to go...let me continue with the table size, etc....
The table I need to pin is:
SQL> select sum (blocks) from all_tables where table_name = 'MYTABLE' and owner = 'MYOWNER';
SUM(BLOCKS)
4858
And block size is:
SQL> show parameter block_size
NAME TYPE VALUE
db_block_size integer 8192
So, the space I'll need in memory for pinning this is:
4858 * 8192 /1024/1024 = 37.95.......which is well below my granularity mark of 2048
So, would this be as easy as setting db_keep_cache_size = 2048 with an alter system call? Do I need to set db_cache_size first? What do I set that to?
Thanks in advance for any suggestions and links to info on this.
cayenne
Edited by: cayenne on Mar 27, 2013 10:14 AM
Edited by: cayenne on Mar 27, 2013 10:15 AM
JohnWatson wrote:
This is what you need: alter system set db_keep_cache_size=40M;
I do not understand the arithmetic you do here: select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%'; it shows you the number of buffers per granule, which I would not think has any meaning.
I'd been looking at some different sites studying this, and what I got from that was that this granularity gave you the minimum you could set db_keep_cache_size to, that if you tried setting it below this value it would be bumped up to it, and also that each bump you gave the keep cache would be in increments of the granule size...?
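For what it's worth, the granule size itself (as opposed to buffers per granule) can be read directly, and sizes given to db_keep_cache_size are rounded up to whole granules, which would explain the "bumped up" behaviour described above. A sketch:

```sql
-- Granule size in bytes for the KEEP buffer cache component.
SELECT component, granule_size
FROM   v$sga_dynamic_components
WHERE  component = 'KEEP buffer cache';
```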
Thanks,
cayenne -
db version = 10.2
db block size = 8k
I just added followiong two parameters to my init.ora:
DB_KEEP_CACHE_SIZE=8k
DB_RECYCLE_CACHE_SIZE=8k
Question: I want to keep few lookup tables in 'KEEP' and 'RECYCLE'. Am I configuring the above two parameters correctly? Any inputs will be appreciated.
regards,
Lily.
Sorry, I meant not how many rows are in the tables but rather the actual size, i.e. how many blocks.
Check this from Performance Tuning Guide
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14211/memory.htm#sthref540
You can compute an approximate size for the KEEP buffer pool by adding together the blocks used by all objects assigned to this pool. If you gather statistics on the segments, you can query DBA_TABLES.BLOCKS and DBA_TABLES.EMPTY_BLOCKS to determine the number of blocks used. -
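The computation the guide describes could be sketched as follows, assuming segment statistics are reasonably fresh and an 8K block size (adjust to your own db_block_size):

```sql
-- Approximate KEEP pool requirement from optimizer statistics, in MB.
-- Assumes an 8K block size; change the 8192 to match db_block_size.
SELECT SUM(blocks) * 8192 / 1048576 AS approx_keep_mb
FROM   dba_tables
WHERE  buffer_pool = 'KEEP';
```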
Db_keep_cache_size and cache in dba_tables
Hello All:
I have recently altered a table to cache it in the db_keep_cache pool; however, I do not see the change reflected in the CACHE column of dba_tables. Is this expected behaviour?
Thanks
S~
The CACHE clause is specified when you create the table. It indicates how the blocks of this table are handled in the regular buffer cache. It's a separate setting from the keep cache.
CACHE
For data that is accessed frequently, this clause indicates that the blocks retrieved for this table are placed at the most recently used end of the least recently used (LRU) list in the buffer cache when a full table scan is performed. This attribute is useful for small lookup tables.
Once you put your table in the keep pool, BUFFER_POOL should indicate which pool this object belongs to.
select buffer_pool from dba_tables
2 where table_name='TEST1'
3 /
BUFFER_
KEEP -
Table size exceeds Keep Pool Size (db_keep_cache_size)
Hello,
We have a situation where one of our applications started performing bad since last week.
After some analysis, it was found this was due to data increase in a table that was stored in KEEP POOL.
After the data increase, the table size exceeded db_keep_cache_size.
I was of the opinion that in such cases KEEP POOL will still be used but the remaining data will be brought in as needed from the table.
But, I ran some tests and found it is not the case. If the table size exceeds db_keep_cache_size, then KEEP POOL is not used at all.
Is my inference correct here ?
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
Setup
SQL> show parameter keep
NAME TYPE VALUE
buffer_pool_keep string
control_file_record_keep_time integer 7
db_keep_cache_size big integer 4M
SQL>
SQL>
SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
Table created.
SQL> set autotrace on
SQL>
SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
PL/SQL procedure successfully completed.
SQL> set serveroutput on
SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
SEGMENT_NAME : T1
PARTITION_NAME :
SEGMENT_TYPE : TABLE
SEGMENT_SUBTYPE : ASSM
TABLESPACE_NAME : HR_TBS
BYTES : 16777216
BLOCKS : 2048
EXTENTS : 31
INITIAL_EXTENT : 65536
NEXT_EXTENT : 1048576
MIN_EXTENTS : 1
MAX_EXTENTS : 2147483645
MAX_SIZE : 2147483645
RETENTION :
MINRETENTION :
PCT_INCREASE :
FREELISTS :
FREELIST_GROUPS :
BUFFER_POOL : KEEP
FLASH_CACHE : DEFAULT
CELL_FLASH_CACHE : DEFAULT
PL/SQL procedure successfully completed.
DB_KEEP_CACHE_SIZE=4M
SQL> select count(*) from t1;
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
9 recursive calls
0 db block gets
2006 consistent gets
2218 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> /
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1940 consistent gets
1937 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
DB_KEEP_CACHE_SIZE=10M
SQL> connect / as sysdba
Connected.
SQL>
SQL> alter system set db_keep_cache_size=10M scope=both;
System altered.
SQL>
SQL> connect hr/hr@orcl
Connected.
SQL>
SQL> show parameter keep
NAME TYPE VALUE
buffer_pool_keep string
control_file_record_keep_time integer 7
db_keep_cache_size big integer 12M
SQL>
SQL> set autotrace on
SQL>
SQL> select count(*) from t1;
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1940 consistent gets
1937 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> /
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1940 consistent gets
1937 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
DB_KEEP_CACHE_SIZE=20M
SQL> connect / as sysdba
Connected.
SQL>
SQL> alter system set db_keep_cache_size=20M scope=both;
System altered.
SQL>
SQL> connect hr/hr@orcl
Connected.
SQL>
SQL> show parameter keep
NAME TYPE VALUE
buffer_pool_keep string
control_file_record_keep_time integer 7
db_keep_cache_size big integer 20M
SQL> set autotrace on
SQL> select count(*) from t1;
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1943 consistent gets
1656 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> /
COUNT(*)
135496
Execution Plan
Plan hash value: 3724264953
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 538 (1)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T1 | 126K| 538 (1)| 00:00:07 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
1943 consistent gets
0 physical reads
0 redo size
424 bytes sent via SQL*Net to client
419 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Only with a 20M db_keep_cache_size do I see no physical reads.
Does it mean that if the db_keep_cache_size < table size, there is no caching for that table ?
Or am I missing something ?
Rgds,
Gokul
Hello Jonathan,
Many thanks for your response.
Here is the test I ran;
SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
BUFFER_ BLOCKS
KEEP 1977
SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
COUNT(*)
1939
SQL> show parameter db_keep_cache_size
NAME TYPE VALUE
db_keep_cache_size big integer 20M
SQL>
SQL> alter system set db_keep_cache_size = 5M scope=both;
System altered.
SQL> select count(*) from hr.t1;
COUNT(*)
135496
SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
COUNT(*)
992
I think my inference is wrong and, as you said, I am indeed seeing the effect of the tail end flushing the start of the table.
Rgds,
Gokul -
Db_keep_cache_size aging
Database Version: 10.2.0.4
OS: RHEL 4
SGA_MAX_SIZE=25G
SGA_TARGET=10G
Database block size=8K
I am trying to cache the most recent 20 or so partitions of an index.
No objects in my database were using the buffer keep pool before now.
I did the following to accomplish this:
alter system set sga_target=20G scope=both;
alter system set db_keep_cache_size = 10G scope=both;
ALTER INDEX [INDEX_NAME] modify PARTITION PART1 storage (buffer_pool keep);
ALTER INDEX [INDEX_NAME] modify PARTITION PART2 storage (buffer_pool keep);
ALTER INDEX [INDEX_NAME] modify PARTITION PART3 storage (buffer_pool keep);
SELECT /*+ INDEX ([INDEX_NAME]) */ ... FROM [TABLE_NAME] partition (PAR1);
SELECT /*+ INDEX ([INDEX_NAME]) */ ... FROM [TABLE_NAME] partition (PAR2);
SELECT /*+ INDEX ([INDEX_NAME]) */ ... FROM [TABLE_NAME] partition (PAR3);
As the "SELECT" statements are running, I can see the buffer cache being populated with the blocks from this index's partitions using the following query:
--- QUERY1
select
  b.object_name, b.subobject_name, count(*)
from
  v$bh a, dba_objects b
where
  b.object_name = '[INDEX_NAME]'
  and
  b.owner = '[OWNER]'
  and
  b.object_id = a.objd and a.status != 'free'
group by
  b.object_name, b.subobject_name
order by 3 desc;
I'm using the following query to determine when my db_keep_cache_size is approaching capacity:
--- QUERY2
select
  (sum(count(*))*8192)/1024/1024 BUFFER_KEEP_USED_IN_MB
from
  v$bh a,
  dba_objects b
where
  b.object_name = '[INDEX_NAME]'
  and
  b.owner = '[OWNER]'
  and
  b.object_id = a.objd and a.status != 'free'
group by
  b.object_name, b.subobject_name;
My Problems/Questions:
What I've noticed is that as the blocks are being cached, the index partitions that were cached first are being aged out.
I know this is normal behavior if you reach the capacity of your buffer keep pool size but according to QUERY2 above I am not even close to reaching 10G.
Why aren't the blocks remaining in the buffer cache? Is QUERY2 accurately depicting the usage of my buffer keep pool?
I have already verified that this index's partitions are the only objects set up for my keep cache. (SELECT * FROM DBA_SEGMENTS WHERE BUFFER_POOL <> 'DEFAULT')
SELECT /*+ INDEX ([INDEX_NAME]) */ ... FROM [TABLE_NAME] partition (PAR1);
SELECT /*+ INDEX ([INDEX_NAME]) */ ... FROM [TABLE_NAME] partition (PAR2);
SELECT /*+ INDEX ([INDEX_NAME]) */ ... FROM [TABLE_NAME] partition (PAR3);
Just to clarify.
Do above queries follow index scan?
================================
Dion Cho - Oracle Performance Storyteller
http://dioncho.wordpress.com (english)
http://ukja.tistory.com (korean)
http://dioncho.blogspot.com (japanese)
http://ask.ex-em.com (q&a)
================================ -
Db_keep_cache_size shows 0 when i keep object in KEEP buffer pool !
Dear Friends,
I use Oracle 10g. From the Oracle 10g documentation, I get the following information regarding ASMM (Automatic Shared Memory Management):
The following pools are manually sized components and are not affected by Automatic Shared Memory Management:
Log buffer
Other buffer caches (such as KEEP, RECYCLE, and other non-default block size)
Fixed SGA and other internal allocations
Now please see the following example:
1) SQL> select sum(bytes)/1024/1024 " SGA size used in MB" from v$sgastat where name!='free memory';
SGA size used in MB
247.09124
2) SQL> show parameter keep_
NAME TYPE VALUE
db_keep_cache_size big integer 0 (Here db_keep_cache_size is 0 )
3) Now I assign scott's dept table to the KEEP cache:
SQL> select owner,segment_type,segment_name,buffer_pool from dba_segments where buffer_pool != 'DEFAULT';
no rows selected
SQL> alter table scott.dept storage(BUFFER_POOL KEEP);
Table altered.
SQL> select owner,segment_type,segment_name,buffer_pool from dba_segments where buffer_pool != 'DEFAULT';
OWNER SEGMENT_TYPE SEGMENT_NAME
SCOTT TABLE DEPT
4)
After doing this, I look at the following parameter:
SQL> show parameter keep
NAME TYPE VALUE
db_keep_cache_size big integer 0
SQL> select sum(bytes)/1024/1024 " SGA size used in MB" from v$sgastat where name!='free memory';
SGA size used in MB
246.76825
Here I see that my SGA is used, but "db_keep_cache_size" still shows '0'.
Can you please explain why this parameter value shows '0' now?
Thanks in advance ...
Hi,
I am not sure I have understood the question fully but if you are trying to monitor usage of the buffer pools you should use some of the dynamic views like in the example query below. If this is not what you are interested in let me know.
SELECT NAME, BLOCK_SIZE, SUM(BUFFERS)
FROM V$BUFFER_POOL
GROUP BY NAME, BLOCK_SIZE
HAVING SUM(BUFFERS) > 0; -
KEEP POOL and count(*)
Hello,
I resized db_keep_cache_size and altered tables and indexes -> storage (buffer_pool keep).
Now, I think, I have to select * from tables.
Is the command select count(*) from table an equivalent, please?
If I run select count(*), disk activity is at 100% and it takes 2 minutes. But when I run a script containing
set termout off
select * from table;
set termout on
it takes a very, very long time and disk activity is maybe at 5%. Could you help me with this please?
Thank you very much! :)
Ondrej T. wrote:
I'm creating application, only for one user. Data from tablespace are static - writing is not possible. Only reading.
There are 4 tables ( 7+7+3+18 ) GB.
I want to put them into keep pool. ( allocated 40GB)
I altered tables and indexes. But the data will be in the pool only after executing
select * from tables
When I run this command, execution is very slow. Disk usage - 5%.
1) Why? Termout is off...
When I run app, there will be checkout if the tables are in pool, if not(server restart), it will execute select * from tables.
So, why is it too slow?
( When I run select count(*) from table, disk usage is 100% )
Reading 40G of data from disk will take a while. Btw, do you have enough RAM to keep the indexes of these tables?
Have you waited until your first select complete? What about second run?
Why don't you use an in-memory database solution such as TimesTen?
Regards
Gokhan -
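As an aside, the quickest way to warm the pool without shipping 40GB of rows across SQL*Net is a full scan that returns a single row; count(*) only qualifies when it actually full-scans the table (with a primary key present it may scan the index instead), so a FULL hint makes the intent explicit. A sketch with a hypothetical table name; note also that on 11g a serial full scan of a large table may use direct path reads that bypass the buffer cache entirely:

```sql
-- Touch every table block but return just one row to the client.
SELECT /*+ FULL(t) */ COUNT(*) FROM big_table1 t;
```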
I want to know what exactly happens when we try to put a segment in the buffer using the KEEP clause.
I have default DB_BUFFER_CACHE and have not created KEEP POOLs
NAME TYPE VALUE
db_cache_size big integer 1504M
NAME TYPE VALUE
db_keep_cache_size big integer 0
and ran the following command
ALTER INDEX TEST_SCHEMA.PK_INDEX STORAGE (BUFFER_POOL KEEP);
After running the following SQL, I found that this Index is in KEEP
set linesize 132
SELECT ds.buffer_pool, do.owner, SUBSTR(do.object_name,1,9) OBJECT_NAME,
ds.blocks OBJECT_BLOCKS, COUNT(*) CACHED_BLOCKS
FROM dba_objects do, dba_segments ds, v$bh v
WHERE do.data_object_id=V.OBJD
AND do.owner=ds.owner(+)
AND do.object_name=ds.segment_name(+)
AND DO.OBJECT_TYPE=DS.SEGMENT_TYPE(+)
AND ds.buffer_pool ='KEEP'
GROUP BY ds.buffer_pool, do.owner, do.object_name, ds.blocks
ORDER BY do.owner, do.object_name, ds.buffer_pool;
BUFFER_ OWNER OBJECT_NAME OBJECT_BLOCKS CACHED_BLOCKS
KEEP TEST_SCHEMA PK_INDEX 24064 10854
Question -
1. Is this index really in KEEP status under default BUFFER POOL?
2. If this index is in KEEP, does it mean that it will "always" be in buffer?
3. If not, then what should we do so that a segment remains in the buffer cache all the time.
Thanks!
Edited by: user608897 on Mar 2, 2011 9:45 AM
Edited by: user608897 on Mar 2, 2011 9:46 AM
Edited by: user608897 on Mar 2, 2011 9:49 AM
Hello,
Here are the steps I followed for pinning this index in memory.
alter system set db_keep_cache_size=2000M scope=both;
SQL> sho parameter db_keep_cache_size
NAME TYPE VALUE
db_keep_cache_size big integer 2016M
ALTER INDEX [SCHEMA].[INDEX] STORAGE (BUFFER_POOL KEEP);
SELECT /*+ INDEX ([SCHEMA].[INDEX]) */
FROM [SCHEMA].[TABLE];
set linesize 132
COL OBJECT_NAME FORMAT A30
SELECT ds.buffer_pool, do.owner, do.object_name OBJECT_NAME,
ds.blocks OBJECT_BLOCKS, COUNT(*) CACHED_BLOCKS
FROM dba_objects do, dba_segments ds, v$bh v
WHERE do.data_object_id=V.OBJD
AND do.owner=ds.owner(+)
AND do.object_name=ds.segment_name(+)
AND DO.OBJECT_TYPE=DS.SEGMENT_TYPE(+)
AND ds.buffer_pool ='KEEP'
GROUP BY ds.buffer_pool, do.owner, do.object_name, ds.blocks
ORDER BY do.owner, do.object_name, ds.buffer_pool;
BUFFER_ OWNER OBJECT_NAME OBJECT_BLOCKS CACHED_BLOCKS
KEEP [SCHEMA] [INDEX] 234496 7313
As the "SELECT" statements are running, I can see the buffer cache being populated with the blocks from this index using the following query:
--- QUERY1
select
b.object_name,b.subobject_name,count(*)
from
v$bh a,dba_objects b
where
b.object_name = '[INDEX]'
and
b.owner = '[SCHEMA]'
and
b.object_id = a.objd and a.status != 'free'
group by
b.object_name,b.subobject_name
order by 3
desc;
I'm using the following query to determine when my db_keep_cache_size is approaching capacity:
--- QUERY2
select
(sum(count(*))*8192)/1024/1024 BUFFER_KEEP_USED_IN_MB
from
v$bh a,
dba_objects b
where
b.object_name = '[INDEX]'
and
b.owner = '[SCHEMA]'
and
b.object_id = a.objd and a.status != 'free'
group by
b.object_name,b.subobject_name;
The following issue has been seen by another forum member also, but there was no explanation for it. Since my problem is the same, I am putting the same questions here -
What I've noticed is that as the blocks are being cached, the index blocks that were cached first are being aged out.
I know this is normal behavior if you reach the capacity of your buffer keep pool size but according to QUERY2 above I am not even close to reaching 2G.
Why aren't the blocks remaining in the buffer cache? Is QUERY2 accurately depicting the usage of my buffer keep pool?
Secondly, if this index is 1.5GB and the KEEP pool size is 2GB, will the following sql make sure that the whole index will be available in the buffer "all the time", as there is no other segment in the KEEP BUFFER POOL?
ALTER INDEX [SCHEMA].[INDEX] STORAGE (BUFFER_POOL KEEP);
Thanks! -
Needing to add keep pool to SGA, sizing and checking for room?
Hi all,
I'm needing to experiment with pinning a table and index (recommended by COTS product vendor) to see if it helps performance.
I'm trying to set up a keep pool...and put the objects in it
I've gone into the database, and found that I will need to set up a keep pool:
SQL> show parameter keep
NAME TYPE VALUE
buffer_pool_keep string
control_file_record_keep_time integer 7
db_keep_cache_size big integer 0
That being said, and I'm having a HUGE senior moment right now...how
do I go about making sure I have enough room to make a little keep
pool?
I've looked at my objects I want to put in there, and one is about
.675 MB, and the other is about .370 MB. So, roughly a little more
than 1MB
Looking at my SGA parameters:
SQL> show parameter sga
NAME TYPE VALUE
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 572M
sga_target big integer 572M
Now...how do I find out what is being used in SGA, to make sure I have room?
I've been searching around, and trying to come up with some queries. I
came up with this one:
SQL> select name, value / (1024*1024) size_mb from v$sga;
NAME SIZE_MB
Fixed Size 1.97846222
Variable Size 232.002007
Database Buffers 332
Redo Buffers 6.01953125
From this, it appears everything is being used....so, not sure what to
do from here.
Suggestions and links greatly appreciated!
cayenne
SELECT SIZE_FOR_ESTIMATE, BUFFERS_FOR_ESTIMATE, ESTD_PHYSICAL_READ_FACTOR, ESTD_PHYSICAL_READS
FROM V$DB_CACHE_ADVICE
WHERE NAME = 'KEEP'
AND BLOCK_SIZE = (SELECT VALUE FROM V$PARAMETER WHERE NAME = 'db_block_size')
AND ADVICE_STATUS = 'ON';
SELECT ds.BUFFER_POOL,
Substr(do.object_name,1,9) object_name,
ds.blocks object_blocks,
Count(*) cached_blocks
FROM dba_objects do,
dba_segments ds,
v$bh v
WHERE do.data_object_id = v.objd
AND do.owner = ds.owner (+)
AND do.object_name = ds.segment_name (+)
AND do.object_type = ds.segment_type (+)
AND ds.BUFFER_POOL IN ('KEEP','RECYCLE')
GROUP BY ds.BUFFER_POOL,
do.object_name,
ds.blocks
ORDER BY do.object_name,
ds.BUFFER_POOL;
Edited by: sb92075 on Jul 9, 2009 2:48 PM
question from oracler:
SYS@orcl>select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
parameter set as :
SYS@orcl>sho parameter cache_size
NAME TYPE VALUE
client_result_cache_size big integer 0
db_16k_cache_size big integer 80M
db_2k_cache_size big integer 0
db_32k_cache_size big integer 0
db_4k_cache_size big integer 0
db_8k_cache_size big integer 0
db_cache_size big integer 160M
db_flash_cache_size big integer 0
db_keep_cache_size big integer 128M
db_recycle_cache_size big integer 0
SYS@orcl>create table dna.t2 storage(buffer_pool keep) as select level id ,rpad('*',4000,'*') data from dual connect by
level<=15000;
Table created.
SYS@orcl>select count(*) from dna.t2;
COUNT(*)
15000
SYS@orcl>set autotrace traceonly
SYS@orcl>select count(*) from dna.t2;
Execution Plan
Plan hash value: 3321871023
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 4116 (1)| 00:00:50 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T2 | 16126 | 4116 (1)| 00:00:50 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
15004 consistent gets
15000 physical reads
0 redo size
528 bytes sent via SQL*Net to client
519 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SYS@orcl>select count(*) from dna.t2;
Execution Plan
Plan hash value: 3321871023
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 4116 (1)| 00:00:50 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| T2 | 16126 | 4116 (1)| 00:00:50 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
15004 consistent gets
15000 physical reads
0 redo size
528 bytes sent via SQL*Net to client
519 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Why do I see so many physical reads? Surely a table kept in the KEEP buffer pool should not show this behaviour?
answered by maclean liu:
SQL>
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL> show parameter db_keep
NAME TYPE VALUE
db_keep_cache_size big integer 128M
SQL> create table maclean_tan2 storage(buffer_pool keep) as select level id ,rpad('*',4000,'*') data from dual connect by
2 level<=15000;
Table created.
SQL> select count(*) from maclean_tan2;
COUNT(*)
15000
Execution Plan
Plan hash value: 1229461046
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 4069 (1)| 00:00:49 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| MACLEAN_TAN2 | 15476 | 4069 (1)| 00:00:49 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
4 recursive calls
0 db block gets
15081 consistent gets
15000 physical reads
0 redo size
527 bytes sent via SQL*Net to client
523 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> select count(*) from maclean_tan2;
COUNT(*)
15000
Execution Plan
Plan hash value: 1229461046
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 4069 (1)| 00:00:49 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| MACLEAN_TAN2 | 15476 | 4069 (1)| 00:00:49 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
15004 consistent gets
15000 physical reads
0 redo size
527 bytes sent via SQL*Net to client
523 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> alter session set events '10046 trace name context forever,level 8';
Session altered.
SQL> select count(*) from maclean_tan2;
COUNT(*)
15000
SQL> oradebug setmypid;
Statement processed.
SQL> oradebug tracefile_name
/s01/orabase/diag/rdbms/vprod/VPROD1/trace/VPROD1_ora_29876.trc
PARSING IN CURSOR #140118795641360 len=33 dep=0 uid=0 oct=3 lid=0 tim=1340511245212199 hv=486583032 ad='76883110' sqlid='drryzcwfh1ars'
select count(*) from maclean_tan2
END OF STMT
PARSE #140118795641360:c=6000,e=35195,p=0,cr=77,cu=0,mis=1,r=0,dep=0,og=1,plh=1229461046,tim=1340511245212192
EXEC #140118795641360:c=0,e=54,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1229461046,tim=1340511245212328
WAIT #140118795641360: nam='SQL*Net message to client' ela= 13 driver id=1650815232 #bytes=1 p3=0 obj#=79780 tim=1340511245212395
WAIT #140118795641360: nam='asynch descriptor resize' ela= 11 outstanding #aio=0 current aio limit=235 new aio limit=265 obj#=79780 tim=1340511245214369
WAIT #140118795641360: nam='direct path read' ela= 140 file number=1 first dba=200555 block cnt=1 obj#=79780 tim=1340511245276928
WAIT #140118795641360: nam='direct path read' ela= 124 file number=1 first dba=200683 block cnt=1 obj#=79780 tim=1340511245294008
WAIT #140118795641360: nam='direct path read' ela= 126 file number=1 first dba=201707 block cnt=1 obj#=79780 tim=1340511245425743
WAIT #140118795641360: nam='direct path read' ela= 170 file number=1 first dba=201835 block cnt=1 obj#=79780 tim=1340511245454308
WAIT #140118795641360: nam='direct path read' ela= 126 file number=1 first dba=201963 block cnt=1 obj#=79780 tim=1340511245472445
WAIT #140118795641360: nam='direct path read' ela= 113 file number=1 first dba=202091 block cnt=1 obj#=79780 tim=1340511245488926
WAIT #140118795641360: nam='direct path read' ela= 116 file number=1 first dba=202219 block cnt=1 obj#=79780 tim=1340511245505475
WAIT #140118795641360: nam='direct path read' ela= 116 file number=1 first dba=202475 block cnt=1 obj#=79780 tim=1340511245539057
WAIT #140118795641360: nam='direct path read' ela= 157 file number=1 first dba=202603 block cnt=1 obj#=79780 tim=1340511245556950
WAIT #140118795641360: nam='direct path read' ela= 31 file number=1 first dba=202987 block cnt=1 obj#=79780 tim=1340511245608673
WAIT #140118795641360: nam='direct path read' ela= 131 file number=1 first dba=203115 block cnt=1 obj#=79780 tim=1340511245624922
WAIT #140118795641360: nam='direct path read' ela= 113 file number=1 first dba=203755 block cnt=1 obj#=79780 tim=1340511245706298
WAIT #140118795641360: nam='direct path read' ela= 28 file number=1 first dba=203883 block cnt=1 obj#=79780 tim=1340511245722656
WAIT #140118795641360: nam='direct path read' ela= 13 file number=1 first dba=204011 block cnt=1 obj#=79780 tim=1340511245738218
WAIT #140118795641360: nam='direct path read' ela= 31 file number=1 first dba=204523 block cnt=1 obj#=79780 tim=1340511245801733
direct path read, rather than db file scattered read.
This is an 11g new feature: a serial full table scan of a "large" table can use direct path read to load blocks straight into the PGA, bypassing the buffer cache.
ALTER SESSION SET EVENTS '10949 TRACE NAME CONTEXT FOREVER';
Event 10949 disables this 11g behavior;
[oracle@vrh1 ~]$ oerr ora 10949
10949, 00000, "Disable autotune direct path read for full table scan"
// *Cause:
// *Action: Disable autotune direct path read for serial full table scan.
Alternatively, set _small_table_threshold to a large value so the optimizer does not treat this table as a large table and its buffers are not flushed:
SQL>
SQL> alter session set "_small_table_threshold"=999999;
Session altered.
SQL> ALTER SESSION SET EVENTS '10949 TRACE NAME CONTEXT FOREVER';
Session altered.
SQL> select count(*) from maclean_tan2;
COUNT(*)
15000
SQL> set autotrace on;
SQL> select count(*) from maclean_tan2;
COUNT(*)
15000
Execution Plan
Plan hash value: 1229461046
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 4069 (1)| 00:00:49 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| MACLEAN_TAN2 | 15476 | 4069 (1)| 00:00:49 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
15011 consistent gets
0 physical reads
0 redo size
527 bytes sent via SQL*Net to client
523 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> select count(*) from maclean_tan2;
COUNT(*)
15000
Execution Plan
Plan hash value: 1229461046
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 4069 (1)| 00:00:49 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| MACLEAN_TAN2 | 15476 | 4069 (1)| 00:00:49 |
Note
- dynamic sampling used for this statement (level=2)
Statistics
0 recursive calls
0 db block gets
15011 consistent gets
0 physical reads
0 redo size
527 bytes sent via SQL*Net to client
523 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
http://www.oracledatabase12g.com/archives/script-list-buffer-cache-details.html
Using the script from the URL above:
set pages 999
set lines 92
column c0 heading "Owner" format a12
column c1 heading "Object|Name" format a30
column c2 heading "Object|Type" format a8
column c3 heading "Number of|Blocks in|Buffer|Cache" format 99,999,999
column c4 heading "Percentage|of object|blocks in|Buffer" format 999
column c5 heading "Buffer|Pool" format a7
column c6 heading "Block|Size" format 99,999
select
buffer_map.owner c0,
object_name c1,
case when object_type = 'TABLE PARTITION' then 'TAB PART'
when object_type = 'INDEX PARTITION' then 'IDX PART'
else object_type end c2,
sum(num_blocks) c3,
(sum(num_blocks)/greatest(sum(blocks), .001))*100 c4,
buffer_pool c5,
sum(bytes)/sum(blocks) c6
from
buffer_map,
dba_segments s
where
s.segment_name = buffer_map.object_name
and
s.owner = buffer_map.owner
and
s.segment_type = buffer_map.object_type
and
nvl(s.partition_name,'-') = nvl(buffer_map.subobject_name,'-')
group by
buffer_map.owner,
object_name,
object_type,
buffer_pool
having
sum(num_blocks) > 10
order by
sum(num_blocks) desc
;
Number of Percentage
Blocks in of object
Object Object Buffer blocks in Buffer Block
Owner Name Type Cache Buffer Pool Size
SYS MACLEAN_TAN2 TABLE 15,001 98 KEEP 8,192
SYS C_TOID_VERSION# CLUSTER 1,765 57 DEFAULT 8,192
SYS C_OBJ# CLUSTER 1,428 93 DEFAULT 8,192
SYS OBJ$ TABLE 931 91 DEFAULT 8,192
SYS I_OBJ2 INDEX 760 99 DEFAULT 8,192
SYS C_FILE#_BLOCK# CLUSTER 198 77 DEFAULT 8,192
SYS I_FILE#_BLOCK# INDEX 40 100 DEFAULT 8,192
SYS I_OBJ1 INDEX 37 14 DEFAULT 8,192
SYS INDPART$ TABLE 16 100 DEFAULT 8,192
SYS I_HH_OBJ#_INTCOL# INDEX 15 12 DEFAULT 8,192
SYS HIST_HEAD$ TABLE 15 4 DEFAULT 8,192
SYS AQ$_SYS$SERVICE_METRICS_TAB_S TABLE 14 88 DEFAULT 8,192
SYS C_TS# CLUSTER 13 81 DEFAULT 8,192
SYS I_DEPENDENCY1 INDEX 13 2 DEFAULT 8,192
SYS I_ACCESS1 INDEX 12 2 DEFAULT 8,192
15 rows selected.
Here you can see the details for table MACLEAN_TAN2 in the KEEP buffer pool: 15,001 blocks ≈ 117 MB.
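As a lighter-weight cross-check (a sketch, assuming access to V$BH and that the object name is unambiguous in the dictionary), you can count one object's cached blocks directly:

```sql
-- Count how many blocks of a single object currently sit in the buffer cache.
-- V$BH exposes one row per cached block; DATA_OBJECT_ID maps it to the segment.
SELECT o.owner,
       o.object_name,
       COUNT(*) AS cached_blocks
FROM   v$bh b
       JOIN dba_objects o
         ON o.data_object_id = b.objd
WHERE  o.object_name = 'MACLEAN_TAN2'
GROUP  BY o.owner, o.object_name;
```

Unlike the full script above, this does not attribute blocks to a specific pool, so treat it as a quick sanity check rather than a replacement.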
-
Hi, I am trying to properly display a list of tasks for a project; however, without a join to the project number (which I was aware of) and the employee table, I get over 500 results.
A task can be created without an employee assigned to it, so the page does not require that field to be filled in.
Here is the SQL code; does anyone have any ideas?
select
pd.pk_proj_detail_id "Task Number",
pd.task_title "Task Title",
pd.DETAIL_STATUS "Task Status",
pm.name "Associated Project",
pps.last_name||', '||pps.first_name||', '||pps.middle_initial||'.' "Assigned Employee",
pd.TRACKIT_NUMBER "TrackIt! Number",
pd.CREATEBY_DATE "Date Entered",
pd.DATE_BEGIN "Date Began",
pd.ESTIMATED_DATE "Estimated Completion Date",
pd.DATE_END "Date Completed"
from
PROTRAC_DETAIL pd,
protrac_master pm,
cobr.vw_pps_payroll pps,
resources r
where
pd.fk_proj_master_id = pm.PK_PROJ_MASTER_ID
and r.fk_master_id = pm.PK_PROJ_MASTER_ID
and (r.emp_id = pps.emple_no
or r.emp_id is null)

It's 10g R2 with Application Express 3.1.0.00.32.
This is the tasks (detail) table
ALTER TABLE PROTRAC_DETAIL
DROP PRIMARY KEY CASCADE;
DROP TABLE PROTRAC_DETAIL CASCADE CONSTRAINTS;
CREATE TABLE PROTRAC_DETAIL
(
PK_PROJ_DETAIL_ID NUMBER NOT NULL,
FK_PROJ_MASTER_ID NUMBER,
TRACKIT_NUMBER NUMBER,
DETAIL_DESCRIPTION VARCHAR2(4000 CHAR),
DETAIL_STATUS VARCHAR2(19 CHAR),
DETAIL_STATUS_COMMENT VARCHAR2(4000 CHAR),
DATE_BEGIN DATE,
DATE_END DATE,
ESTIMATED_DATE DATE,
CREATEBY_DATE DATE,
CREATEBY_USER VARCHAR2(50 CHAR),
LASTMOD_DATE DATE,
LASTMOD_USER VARCHAR2(50 CHAR),
TASK_TITLE VARCHAR2(100 CHAR)
)
TABLESPACE DEVPROTRAC_DATA
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
);
CREATE UNIQUE INDEX PROTRAC_DETAIL_PK ON PROTRAC_DETAIL
(PK_PROJ_DETAIL_ID)
TABLESPACE DEVPROTRAC_DATA
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
);
CREATE OR REPLACE TRIGGER BUI_PROTRAC_DETAIL
before insert or update
on PROTRAC_DETAIL
referencing new as New old as Old
for each row
begin
if inserting then
select users_seq.nextval, sysdate, apex_application.g_user
into :new.pk_proj_detail_id, :new.createby_date, :new.createby_user
from dual;
elsif updating then
select sysdate, apex_application.g_user
into :new.lastmod_date, :new.lastmod_user
from dual;
end if;
end;
SHOW ERRORS;
ALTER TABLE PROTRAC_DETAIL ADD (
CONSTRAINT PROTRAC_DETAIL_PK
PRIMARY KEY
(PK_PROJ_DETAIL_ID)
USING INDEX
TABLESPACE DEVPROTRAC_DATA
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
));
ALTER TABLE PROTRAC_DETAIL ADD (
CONSTRAINT PROTRAC_DETAIL_NUM
FOREIGN KEY (FK_PROJ_MASTER_ID)
REFERENCES PROTRAC_MASTER (PK_PROJ_MASTER_ID));
ALTER TABLE DEVPROTRAC.RESOURCES ADD (
FOREIGN KEY (FK_DETAIL_ID)
REFERENCES DEVPROTRAC.PROTRAC_DETAIL (PK_PROJ_DETAIL_ID));
SET DEFINE OFF;
Insert into PROTRAC_DETAIL
(PK_PROJ_DETAIL_ID, FK_PROJ_MASTER_ID, TRACKIT_NUMBER, DETAIL_DESCRIPTION, DETAIL_STATUS,
DETAIL_STATUS_COMMENT, DATE_BEGIN, DATE_END, ESTIMATED_DATE, CREATEBY_DATE,
CREATEBY_USER, LASTMOD_DATE, LASTMOD_USER, TASK_TITLE)
Values
(34, 24, NULL, 'test', 'Queued',
NULL, NULL, NULL, NULL, TO_DATE('10/30/2008 13:37:01', 'MM/DD/YYYY HH24:MI:SS'),
'LREDMOND', TO_DATE('11/03/2008 15:19:35', 'MM/DD/YYYY HH24:MI:SS'), NULL, 'bananana');
Insert into PROTRAC_DETAIL
(PK_PROJ_DETAIL_ID, FK_PROJ_MASTER_ID, TRACKIT_NUMBER, DETAIL_DESCRIPTION, DETAIL_STATUS,
DETAIL_STATUS_COMMENT, DATE_BEGIN, DATE_END, ESTIMATED_DATE, CREATEBY_DATE,
CREATEBY_USER, LASTMOD_DATE, LASTMOD_USER, TASK_TITLE)
Values
(41, 40, NULL, '2354234', 'Queued',
NULL, NULL, NULL, NULL, TO_DATE('10/31/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
'LREDMOND', TO_DATE('11/03/2008 13:52:02', 'MM/DD/YYYY HH24:MI:SS'), 'LREDMOND', 'I can type on the keyboarddf');
Insert into PROTRAC_DETAIL
(PK_PROJ_DETAIL_ID, FK_PROJ_MASTER_ID, TRACKIT_NUMBER, DETAIL_DESCRIPTION, DETAIL_STATUS,
DETAIL_STATUS_COMMENT, DATE_BEGIN, DATE_END, ESTIMATED_DATE, CREATEBY_DATE,
CREATEBY_USER, LASTMOD_DATE, LASTMOD_USER, TASK_TITLE)
Values
(49, 32, 78888, 'one day fishsticks will walk on the moon.', 'Queued',
'waiting for fishsticks.', NULL, NULL, NULL, TO_DATE('11/03/2008 11:28:11', 'MM/DD/YYYY HH24:MI:SS'),
'LREDMOND', NULL, NULL, 'Fix the keyboard');
Insert into PROTRAC_DETAIL
(PK_PROJ_DETAIL_ID, FK_PROJ_MASTER_ID, TRACKIT_NUMBER, DETAIL_DESCRIPTION, DETAIL_STATUS,
DETAIL_STATUS_COMMENT, DATE_BEGIN, DATE_END, ESTIMATED_DATE, CREATEBY_DATE,
CREATEBY_USER, LASTMOD_DATE, LASTMOD_USER, TASK_TITLE)
Values
(50, 38, NULL, 'dfdfdfdfdfdfdfdfdf', 'Queued',
NULL, NULL, NULL, NULL, TO_DATE('11/03/2008 12:03:06', 'MM/DD/YYYY HH24:MI:SS'),
'LREDMOND', TO_DATE('11/03/2008 15:19:44', 'MM/DD/YYYY HH24:MI:SS'), NULL, 'resreeeeeeeeee');
Insert into PROTRAC_DETAIL
(PK_PROJ_DETAIL_ID, FK_PROJ_MASTER_ID, TRACKIT_NUMBER, DETAIL_DESCRIPTION, DETAIL_STATUS,
DETAIL_STATUS_COMMENT, DATE_BEGIN, DATE_END, ESTIMATED_DATE, CREATEBY_DATE,
CREATEBY_USER, LASTMOD_DATE, LASTMOD_USER, TASK_TITLE)
Values
(33, 31, NULL, 'Make sure the bananas are fresh', 'Queued',
NULL, NULL, NULL, NULL, TO_DATE('10/29/2008 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
'LREDMOND', TO_DATE('11/03/2008 15:19:52', 'MM/DD/YYYY HH24:MI:SS'), NULL, 'e543563465');
Insert into PROTRAC_DETAIL
(PK_PROJ_DETAIL_ID, FK_PROJ_MASTER_ID, TRACKIT_NUMBER, DETAIL_DESCRIPTION, DETAIL_STATUS,
DETAIL_STATUS_COMMENT, DATE_BEGIN, DATE_END, ESTIMATED_DATE, CREATEBY_DATE,
CREATEBY_USER, LASTMOD_DATE, LASTMOD_USER, TASK_TITLE)
Values
(48, 37, NULL, 'guitar heros! yay', 'Queued',
NULL, NULL, NULL, NULL, TO_DATE('11/03/2008 11:26:06', 'MM/DD/YYYY HH24:MI:SS'),
'LREDMOND', TO_DATE('11/03/2008 15:19:57', 'MM/DD/YYYY HH24:MI:SS'), NULL, '34444444444444543etfg');
COMMIT;

This is for the resources table:
ALTER TABLE RESOURCES
DROP PRIMARY KEY CASCADE;
DROP TABLE RESOURCES CASCADE CONSTRAINTS;
CREATE TABLE RESOURCES
(
PK_RESOURCES_ID NUMBER,
FK_DETAIL_ID NUMBER,
EMP_ID NUMBER,
RESOURCE_STATUS VARCHAR2(8 CHAR),
RESOURCE_COMMENT VARCHAR2(4000 CHAR),
CREATEBY_DATE DATE,
CREATEBY_USER VARCHAR2(50 CHAR),
LASTMOD_DATE DATE,
LASTMOD_USER VARCHAR2(50 CHAR),
FK_MASTER_ID NUMBER
)
TABLESPACE DEVPROTRAC_DATA
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
);
CREATE UNIQUE INDEX RESOURCES_PK ON RESOURCES
(PK_RESOURCES_ID)
TABLESPACE DEVPROTRAC_DATA
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
);
CREATE OR REPLACE TRIGGER BUI_RESOURCES
before insert or update
on RESOURCES
referencing new as New old as Old
for each row
begin
if inserting then
select users_seq.nextval, sysdate, apex_application.g_user
into :new.pk_resources_id, :new.createby_date, :new.createby_user
from dual;
elsif updating then
select sysdate, apex_application.g_user
into :new.lastmod_date, :new.lastmod_user
from dual;
end if;
end;
SHOW ERRORS;
ALTER TABLE RESOURCES ADD (
CONSTRAINT RESOURCES_PK
PRIMARY KEY
(PK_RESOURCES_ID)
USING INDEX
TABLESPACE DEVPROTRAC_DATA
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
));
ALTER TABLE RESOURCES ADD (
FOREIGN KEY (FK_DETAIL_ID)
REFERENCES PROTRAC_DETAIL (PK_PROJ_DETAIL_ID),
FOREIGN KEY (FK_MASTER_ID)
REFERENCES PROTRAC_MASTER (PK_PROJ_MASTER_ID));
SET DEFINE OFF;
Insert into RESOURCES
(PK_RESOURCES_ID, FK_DETAIL_ID, EMP_ID, RESOURCE_STATUS, RESOURCE_COMMENT,
CREATEBY_DATE, CREATEBY_USER, LASTMOD_DATE, LASTMOD_USER, FK_MASTER_ID)
Values
(53, 50, 356654, 'Active', NULL,
TO_DATE('11/04/2008 09:32:06', 'MM/DD/YYYY HH24:MI:SS'), 'LREDMOND', NULL, NULL, NULL);
Insert into RESOURCES
(PK_RESOURCES_ID, FK_DETAIL_ID, EMP_ID, RESOURCE_STATUS, RESOURCE_COMMENT,
CREATEBY_DATE, CREATEBY_USER, LASTMOD_DATE, LASTMOD_USER, FK_MASTER_ID)
Values
(51, 41, 447250, 'Active', 'No Sure.',
TO_DATE('11/03/2008 14:23:11', 'MM/DD/YYYY HH24:MI:SS'), NULL, TO_DATE('11/04/2008 09:00:04', 'MM/DD/YYYY HH24:MI:SS'), NULL, 40);
Insert into RESOURCES
(PK_RESOURCES_ID, FK_DETAIL_ID, EMP_ID, RESOURCE_STATUS, RESOURCE_COMMENT,
CREATEBY_DATE, CREATEBY_USER, LASTMOD_DATE, LASTMOD_USER, FK_MASTER_ID)
Values
(54, 50, 323829, 'Active', NULL,
TO_DATE('11/04/2008 10:26:08', 'MM/DD/YYYY HH24:MI:SS'), 'LREDMOND', NULL, NULL, 38);
Insert into RESOURCES
(PK_RESOURCES_ID, FK_DETAIL_ID, EMP_ID, RESOURCE_STATUS, RESOURCE_COMMENT,
CREATEBY_DATE, CREATEBY_USER, LASTMOD_DATE, LASTMOD_USER, FK_MASTER_ID)
Values
(52, 33, 8915, 'Active', 'get to work',
TO_DATE('11/03/2008 15:20:18', 'MM/DD/YYYY HH24:MI:SS'), 'LREDMOND', TO_DATE('11/03/2008 15:35:10', 'MM/DD/YYYY HH24:MI:SS'), NULL, NULL);
COMMIT;

The results I want are everything above, regardless of the emp_id assigned (if any). Without the r.emp_id = pps.emple_no join, the query will generate 234234239482304234 results.
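For what it's worth, the usual fix when the assignee is optional is an outer join rather than an OR on the join predicate. A sketch in ANSI join syntax (assuming tasks link to resources through fk_detail_id, as the RESOURCES DDL above suggests; adjust the join keys if resources really hang off the project instead):

```sql
-- Tasks with their (optional) assigned employees: the LEFT JOINs preserve
-- tasks that have no resource row and resources that have no employee.
SELECT pd.pk_proj_detail_id                     "Task Number",
       pd.task_title                            "Task Title",
       pd.detail_status                         "Task Status",
       pm.name                                  "Associated Project",
       pps.last_name || ', ' || pps.first_name  "Assigned Employee"
FROM   protrac_detail pd
       JOIN protrac_master pm
         ON pd.fk_proj_master_id = pm.pk_proj_master_id
       LEFT JOIN resources r
         ON r.fk_detail_id = pd.pk_proj_detail_id
       LEFT JOIN cobr.vw_pps_payroll pps
         ON r.emp_id = pps.emple_no;
```

Joining resources on the task key (rather than the project key) also avoids the row multiplication that produced the huge result count.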
Hope this helps.
Edited by: leland on Nov 4, 2008 12:56 PM
-
Why Doesn't XMLIndex Create and Populate Upon Scale-Up For Eval Table?
Presently working with Oracle release 11.2.0.1 using xmltype securefile binary xml tables.
In a quandary here and hoping not to have to open an Oracle SR...
Able to create a working xmlindex in our development environment against an 'Acme Eval' table (estimated ~5 GB) containing 325,550 rows. Creation takes about 10 minutes. No partitioning is being used.
When trying the exact same xmlindex creation against our much more powerful pvs platform environment containing 13,985,124 rows, the xmlindex object shows up as existing in the data dictionary, but the session never stops running after at least 24 hours of runtime.
The pvs hardware environment uses: (1.) 24 processors, (2.) Solaris-64 OS, (3.) 128 GB memory.
Two 1-hour AWR reports for the pvs environment show a huge amount of logical reads/writes. The foreground wait event 'db file sequential read' dominates DB Time @ 92%. There are about 4.6 GB of physical reads and 3.5 GB of physical writes - not too large, relatively speaking. The I/O subsystem has no problem handling the throughput. The top Time Model statistic, by far, is 'sql execute elapsed time' @ 99%. User I/O is the main foreground wait class @ 92%. These values are similar for both AWR reports - except one report shows the 'CREATE XMLINDEX...' statement as the top SQL, and the other shows 'INSERT INTO CROUTREACH.EVAL_IDX_TAB_I...' as the top SQL.
Been several days since this post. Hoping someone might be able to provide some insight or share their experiences on xmlindexes scaling up to millions of records in the 5 - 10 gb xmltype table range...
Regards,
Rick Blanchard
The frustration here is that there is no obvious database configuration, physical CPU, memory, or I/O issue - other than the logical gets centered around the 'db file sequential read' wait event.
Can't do much as far as adjusting the create index statement and the underlying Oracle XML operations - the main frustration factor here...
The xmlindex is still undergoing record insertions.
Additionally, in the pvs environment, no DML is allowed on the xmlindex, and the select statement that picks up the xmlindex via the optimizer just fine in the development environment does not use the xmlindex in the pvs environment - as would be expected if the xmlindex isn't completely populated.
It appears the xmlindex record population is stalled...
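To see whether the build is actually progressing rather than stalled, a sketch against standard dynamic performance views (the :build_sid bind is a hypothetical placeholder for the building session's SID):

```sql
-- Is any long-running operation still moving forward?
SELECT opname,
       sofar,
       totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done,
       time_remaining
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork;

-- What is the building session waiting on right now?
SELECT sid, event, state, seconds_in_wait
FROM   v$session_wait
WHERE  sid = :build_sid;
```

If pct_done advances between samples the build is slow but alive; if seconds_in_wait keeps climbing on a single wait, the session is likely blocked.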
In the pvs environment, when issuing the DDL 'alter index croutreach.eval_xmlindex_ix noparallel',
I get this error - typical when an xmlindex is still being populated with records:
ALTER INDEX croutreach.eval_xmlindex_ix NOPARALLEL
Error report:
SQL Error: ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
00054. 00000 - "resource busy and acquire with NOWAIT specified"
*Cause: Resource interested is busy.
*Action: Retry if necessary.

The xmlindex create statement used in both cases is
(The underlying eval table is also set to a dop of 20):
CREATE
INDEX "EVAL_XMLINDEX_IX" ON "EVAL"
OBJECT_VALUE
INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
'XMLTable eval_idx_tab_I XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",
DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/eval''
COLUMNS
eval_catt VARCHAR2(50) path ''@category'',
acne_mbr_idd VARCHAR2(50) path ''@acmeMemberId'',
eval_idd VARCHAR2(50) path ''@evalId'',
eval_dtt TIMESTAMP WITH TIME ZONE path ''@eval_dt'',
derivedFact XMLTYPE path ''derivedFacts/ns7:derivedFact'' virtual
XMLTable eval_idx_tab_II XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",
DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/ns7:derivedFact'' passing derivedFact
COLUMNS
defId VARCHAR2(50) path ''ns7:defId'',
factSource VARCHAR2(50) path ''ns7:factSource'',
origInferred_dt TIMESTAMP WITH TIME ZONE path ''ns7:origInferred_dt'',
typee VARCHAR2(20) path ''ns7:factValue/ns7:type'',
valuee VARCHAR2(1000) path ''ns7:factValue/ns7:value'',
defUrn VARCHAR2(100) path ''ns7:defUrn'''
)parallel 20;

The development environment eval table is:
CREATE
TABLE "N98991"."EVAL" OF XMLTYPE
CONSTRAINT "EVAL_ID_PK" PRIMARY KEY ("EVAL_ID") USING INDEX PCTFREE 10
INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT
1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
DEFAULT) TABLESPACE "ACME_DATA" ENABLE
XMLTYPE STORE AS SECUREFILE BINARY XML
TABLESPACE "ACME_DATA" ENABLE STORAGE IN ROW CHUNK 8192 CACHE NOCOMPRESS
KEEP_DUPLICATES STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT)
ALLOW NONSCHEMA ALLOW ANYSCHEMA VIRTUAL COLUMNS
"EVAL_DT" AS (SYS_EXTRACT_UTC(CAST(TO_TIMESTAMP_TZ(SYS_XQ_UPKXML2SQL(
SYS_XQEXVAL(XMLQUERY(
'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03"; (::)
/eval/@eval_dt'
PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
16777216,0),50,1,2),'SYYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM') AS TIMESTAMP
WITH
TIME ZONE))),
"EVAL_CAT" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@category'
PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
16777216,0),50,1,2) AS VARCHAR2(50))),
"ACME_MBR_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@acmeMemberId'
PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
16777216,0),50,1,2) AS VARCHAR2(50))),
"EVAL_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@evalId'
PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
16777216,0),50,1,2) AS VARCHAR2(50)))
PCTFREE 0 PCTUSED 80 INITRANS 4 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
TABLESPACE "ACME_DATA" PARALLEL 20 ;
CREATE
INDEX "N98991"."EVAL_XMLINDEX_IX" ON "N98991"."EVAL"
OBJECT_VALUE
INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
'XMLTable eval_idx_tab_I XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",
DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/eval''
COLUMNS
eval_catt VARCHAR2(50) path ''@category'',
acne_mbr_idd VARCHAR2(50) path ''@acmeMemberId'',
eval_idd VARCHAR2(50) path ''@evalId'',
eval_dtt TIMESTAMP WITH TIME ZONE path ''@eval_dt'',
derivedFact XMLTYPE path ''derivedFacts/ns7:derivedFact'' virtual
XMLTable eval_idx_tab_II XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",
DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/ns7:derivedFact'' passing derivedFact
COLUMNS
defId VARCHAR2(50) path ''ns7:defId'',
factSource VARCHAR2(50) path ''ns7:factSource'',
origInferred_dt TIMESTAMP WITH TIME ZONE path ''ns7:origInferred_dt'',
typee VARCHAR2(20) path ''ns7:factValue/ns7:type'',
valuee VARCHAR2(1000) path ''ns7:factValue/ns7:value'',
defUrn VARCHAR2(100) path ''ns7:defUrn'''
CREATE UNIQUE INDEX "N98991"."SYS_C00415365" ON "N98991"."EVAL"
"SYS_NC_OID$"
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
TABLESPACE "ACME_DATA" ;
CREATE UNIQUE INDEX "N98991"."SYS_IL0000688125C00003$$" ON "N98991"."EVAL"
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576
MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST
GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "ACME_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;
CREATE UNIQUE INDEX "N98991"."EVAL_ID_PK" ON "N98991"."EVAL" ("EVAL_ID")
PCTFREE 10 INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536
NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
DEFAULT) TABLESPACE "ACME_DATA" ;

The pvs environment's eval table and xmlindex definition is:
CREATE
TABLE "CROUTREACH"."EVAL" OF XMLTYPE
CONSTRAINT "EVAL_ID_PK" PRIMARY KEY ("EVAL_ID") USING INDEX PCTFREE 10
INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT
1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
DEFAULT) TABLESPACE "ACME_DATA" ENABLE
XMLTYPE STORE AS SECUREFILE BINARY XML
TABLESPACE "ACME_DATA" ENABLE STORAGE IN ROW CHUNK 8192 CACHE NOCOMPRESS
KEEP_DUPLICATES STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT)
ALLOW NONSCHEMA ALLOW ANYSCHEMA VIRTUAL COLUMNS
"EVAL_DT" AS (SYS_EXTRACT_UTC(CAST(TO_TIMESTAMP_TZ(SYS_XQ_UPKXML2SQL(
SYS_XQEXVAL(XMLQUERY(
'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03"; (::)
/eval/@eval_dt'
PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
16777216,0),50,1,2),'SYYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM') AS TIMESTAMP
WITH
TIME ZONE))),
"EVAL_CAT" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@category'
PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
16777216,0),50,1,2) AS VARCHAR2(50))),
"ACME_MBR_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@acmeMemberId'
PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
16777216,0),50,1,2) AS VARCHAR2(50))),
"EVAL_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@evalId'
PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
16777216,0),50,1,2) AS VARCHAR2(50)))
PCTFREE 0 PCTUSED 80 INITRANS 4 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
TABLESPACE "ACME_DATA" PARALLEL 20 ;
CREATE
INDEX "CROUTREACH"."EVAL_IDX_MBR_ID_EVAL_CAT" ON "CROUTREACH"."EVAL"
"ACME_MBR_ID",
"EVAL_CAT"
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
TABLESPACE "ACME_DATA" PARALLEL 16 ;
CREATE UNIQUE INDEX "CROUTREACH"."SYS_C0018448" ON "CROUTREACH"."EVAL"
"SYS_NC_OID$"
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
TABLESPACE "ACME_DATA" ;
CREATE UNIQUE INDEX "CROUTREACH"."SYS_IL0000094844C00003$$" ON "CROUTREACH".
"EVAL"
PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING STORAGE(INITIAL 65536 NEXT
1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
DEFAULT) TABLESPACE "ACME_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;
CREATE UNIQUE INDEX "CROUTREACH"."EVAL_ID_PK" ON "CROUTREACH"."EVAL" ("EVAL_ID"
) PCTFREE 10 INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536
NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
DEFAULT) TABLESPACE "ACME_DATA" PARALLEL 16 ;
CREATE
INDEX "CROUTREACH"."EVAL_XMLINDEX_IX" ON "CROUTREACH"."EVAL"
OBJECT_VALUE
INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
'XMLTable eval_idx_tab_I XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",
DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/eval''
COLUMNS
eval_catt VARCHAR2(50) path ''@category'',
acne_mbr_idd VARCHAR2(50) path ''@acmeMemberId'',
eval_idd VARCHAR2(50) path ''@evalId'',
eval_dtt TIMESTAMP WITH TIME ZONE path ''@eval_dt'',
derivedFact XMLTYPE path ''derivedFacts/ns7:derivedFact'' virtual
XMLTable eval_idx_tab_II XMLNamespaces(''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7",
DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03''),''/ns7:derivedFact'' passing derivedFact
COLUMNS
defId VARCHAR2(50) path ''ns7:defId'',
factSource VARCHAR2(50) path ''ns7:factSource'',
origInferred_dt TIMESTAMP WITH TIME ZONE path ''ns7:origInferred_dt'',
typee VARCHAR2(20) path ''ns7:factValue/ns7:type'',
valuee VARCHAR2(1000) path ''ns7:factValue/ns7:value'',
defUrn VARCHAR2(100) path ''ns7:defUrn'''
PARALLEL 20 ;

Wondering if anyone has run into xmlindex creation and population problems similar to this when scaling up from thousands of records to millions of records.
At this point, for my work to be useful, I must be able to get the xmlindex to at least successfully create and populate at the 13.9-million-record scale.
Any suggestions, much appreciated.
Regards,
Rick Blanchard
Edited by: RickBlanchardSRS on May 29, 2012 1:03 PM

We didn't use "XMLDB XMLType partitioning" actually, but something simple like:
CREATE TABLE P_DATA
( "ID" NUMBER(15,0),
"DOC" "SYS"."XMLTYPE"
) SEGMENT CREATION IMMEDIATE
NOCOMPRESS NOLOGGING
TABLESPACE "XML_DATA"
XMLTYPE COLUMN "DOC" STORE AS SECUREFILE BINARY XML
(TABLESPACE "XML_DATA"
NOCOMPRESS KEEP_DUPLICATES)
XMLSCHEMA "http://www.xxxxx.com/schema_v3.0.xsd"
ELEMENT "RECORD"
DISALLOW NONSCHEMA
PARTITION BY RANGE(ID)
(PARTITION Q_DATA_PART_01 VALUES LESS THAN (100000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_02 VALUES LESS THAN (200000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_03 VALUES LESS THAN (300000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_04 VALUES LESS THAN (400000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_05 VALUES LESS THAN (500000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_06 VALUES LESS THAN (600000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_07 VALUES LESS THAN (700000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_08 VALUES LESS THAN (800000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_09 VALUES LESS THAN (900000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_10 VALUES LESS THAN (1000000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_11 VALUES LESS THAN (1100000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_12 VALUES LESS THAN (1200000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_13 VALUES LESS THAN (1300000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_14 VALUES LESS THAN (1400000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_15 VALUES LESS THAN (1500000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_16 VALUES LESS THAN (1600000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_17 VALUES LESS THAN (1700000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_18 VALUES LESS THAN (1800000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_19 VALUES LESS THAN (1900000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_20 VALUES LESS THAN (2000000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_21 VALUES LESS THAN (2100000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_22 VALUES LESS THAN (2200000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_23 VALUES LESS THAN (2300000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_24 VALUES LESS THAN (2400000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_25 VALUES LESS THAN (2500000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_26 VALUES LESS THAN (2600000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_27 VALUES LESS THAN (2700000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_28 VALUES LESS THAN (2800000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_29 VALUES LESS THAN (2900000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_30 VALUES LESS THAN (3000000000) TABLESPACE "XML_DATA" NOCOMPRESS
,PARTITION Q_DATA_PART_MAX VALUES LESS THAN (MAXVALUE) TABLESPACE "XML_DATA" NOCOMPRESS
);

Could be mistaken, but if I remember correctly we ended up with 100-million record ID ranges, as in the DDL above. We needed to use partitioning anyway; otherwise we would have reached the physical limit on the number of records in the column (for our db_block_size).
Edited by: Marco Gralike on May 29, 2012 10:02 PM
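If you want to check how the ranges ended up after creating the table, a quick look at the data dictionary works. A minimal sketch, assuming the same table name (P_DATA) as in the DDL above and that you are connected as the owning schema:

```sql
-- List the range partitions in order, with their upper bounds
-- and tablespaces, to confirm the layout matches the DDL.
SELECT partition_name,
       high_value,          -- upper bound of the ID range (LONG column)
       tablespace_name
FROM   user_tab_partitions
WHERE  table_name = 'P_DATA'
ORDER  BY partition_position;
```

USER_TAB_PARTITIONS is a standard Oracle dictionary view; note that HIGH_VALUE is a LONG, so some tools need extra handling to display or filter on it.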