Result_cache on tables
Hi all,
I'm just experimenting with result_cache at the table level:
SQL> select *
2 from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
5 rows selected.

Here's my sample setup:
create table foo (id number, txt varchar2(100)) result_cache (mode force);
create table foobar(id number, foo_id number, txt varchar2(100)) result_cache (mode force);
insert into foo values (1,'test');
insert into foo values (2,'testing');
insert into foo values (3,'tester');
insert into foo values (4,'tested');
insert into foobar values (1,1,'blah');
insert into foobar values (2,1,'blah');
insert into foobar values (3,2,'blah');
insert into foobar values (4,3,'blah');
insert into foobar values (5,4,'blah');
commit;

Now when I select from the individual tables, they appear to be result-cached fine:
SQL> set autotrace trace stat
SQL> select * from foo;
4 rows selected.
Statistics
10 recursive calls
0 db block gets
20 consistent gets
0 physical reads
0 redo size
462 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed
SQL> /
4 rows selected.
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
462 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed
SQL> select * from foobar;
5 rows selected.
Statistics
8 recursive calls
0 db block gets
22 consistent gets
0 physical reads
0 redo size
524 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
5 rows processed
SQL> /
5 rows selected.
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
524 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
5 rows processed

But when I run a SQL statement that combines the two, it does not cache:
SQL> select *
2 from foo f
3 join foobar fb on (f.id = fb.foo_id)
4 where f.id in (1,2);
3 rows selected.
Statistics
9 recursive calls
0 db block gets
31 consistent gets
0 physical reads
0 redo size
634 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
SQL> /
3 rows selected.
Statistics
0 recursive calls
0 db block gets
15 consistent gets
0 physical reads
0 redo size
634 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed

What am I missing here?
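One way to confirm whether a given statement actually produced a cached result (a diagnostic sketch, assuming SELECT access to the V$ views; table names are from the test case above) is to query the result cache metadata directly:

```sql
-- Inspect the server-side result cache metadata.
-- Cached statements appear as TYPE = 'Result' with STATUS = 'Published';
-- the tables they depend on appear as TYPE = 'Dependency'.
select type, status, name
from   v$result_cache_objects
where  name like '%FOO%'
order  by type, name;
```

If the join query never shows up as a 'Result' row, the statement was not cached at all, rather than cached and invalidated.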
Hi WhiteHat. Nice test case, and easily reproducible on my 11.2.0.3 Linux VM. My first guess when I looked at your query was the ANSI join. My luck with them has always been bad, and the luck gets worse with new-ish features.
... yours:
select *
from foo f
join foobar fb on (f.id = fb.foo_id)
where f.id in (1,2);
Statistics
9 recursive calls
0 db block gets
31 consistent gets
0 physical reads
0 redo size
634 bytes sent via SQL*Net to client
363 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
SQL> /
Statistics
0 recursive calls
0 db block gets
15 consistent gets
0 physical reads
0 redo size
634 bytes sent via SQL*Net to client
363 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
SQL> /
Statistics
0 recursive calls
0 db block gets
15 consistent gets
0 physical reads
0 redo size
634 bytes sent via SQL*Net to client
363 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
...mine:
select *
from foo f, foobar fb
where f.id = fb.foo_id;
Statistics
9 recursive calls
0 db block gets
31 consistent gets
0 physical reads
0 redo size
676 bytes sent via SQL*Net to client
363 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
5 rows processed
SQL> /
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
676 bytes sent via SQL*Net to client
363 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
5 rows processed
select *
from foo f, foobar fb
where f.id = fb.foo_id
and f.id in (1,2);
Statistics
7 recursive calls
0 db block gets
31 consistent gets
0 physical reads
0 redo size
634 bytes sent via SQL*Net to client
363 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
SQL> /
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
634 bytes sent via SQL*Net to client
363 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
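In short, a workaround sketch (based purely on the behaviour demonstrated above on 11.2.0.3) is to rewrite the ANSI join in traditional Oracle syntax:

```sql
-- Original (ANSI syntax) -- did not use the result cache in the tests above:
select *
from   foo f
join   foobar fb on (f.id = fb.foo_id)
where  f.id in (1, 2);

-- Rewritten in traditional join syntax -- second execution showed
-- 0 consistent gets, i.e. it was served from the result cache:
select *
from   foo f, foobar fb
where  f.id = fb.foo_id
and    f.id in (1, 2);
```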
Similar Messages
-
Using the client result cache without the query result cache
I have constructed a client in C# using ODP.NET to connect to an Oracle database and want to perform client result caching for some of my queries.
This is done using a result_cache hint in the query.
select /*+ result_cache */ * from table
As far as I can tell query result caching on the server is done using the same hint, so I was wondering if there was any way to differentiate between the two? I want the query results to be cached on the client, but not on the server.
The only way I have found to do this is to disable all caching on the server, but I don't want to do this as I want to use the server cache for PL/SQL function results.
Thanks.

e3a934c9-c4c2-4c80-b032-d61d415efd4f wrote:
I have constructed a client in C# using ODP.NET to connect to an Oracle database and want to perform client result caching for some of my queries.
This is done using a result_cache hint in the query.
select /*+ result_cache */ * from table
As far as I can tell query result caching on the server is done using the same hint, so I was wondering if there was any way to differentiate between the two? I want the query results to be cached on the client, but not on the server.
The only way I have found to do this is to disable all caching on the server, but I don't want to do this as I want to use the server cache for PL/SQL function results.
Thanks.
You haven't provided ANY information about how you configured the result cache. Different parameters are used for configuring the client versus the server result cache, so you need to post what, if anything, you configured.
Post the code you executed when you set the 'client_result_cache_lag' and 'client_result_cache_size' parameters so we can see what values you used. Also post the results of querying those parameters after you set them, to show that they really are set.
You also need to post your app code showing that you are using the OCI statements that are needed for client-side result caching.
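For reference, the client result cache is enabled by these server initialization parameters (a sketch with illustrative values only; your sizes will differ):

```sql
-- Client result cache configuration (values here are examples, not advice).
-- CLIENT_RESULT_CACHE_SIZE is per client process in bytes; 0 disables it,
-- and the minimum non-zero value is 32768.
alter system set client_result_cache_size = 32768 scope = spfile;
-- Maximum staleness of a client-cached result, in milliseconds.
alter system set client_result_cache_lag = 3000 scope = spfile;
-- Both parameters are static: an instance restart is required.

-- Verify after restart:
select name, value
from   v$parameter
where  name like 'client_result_cache%';
```

With these set and RESULT_CACHE_MODE left at MANUAL, the /*+ result_cache */ hint lets an OCI-based client (ODP.NET included) cache result sets locally.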
See the OCI dev guide
http://docs.oracle.com/cd/B28359_01/appdev.111/b28395/oci10new.htm#sthref1491
Statement Caching in OCI
Statement caching refers to the feature that provides and manages a cache of statements for each session. In the server, it means that cursors are ready to be used without the need to parse the statement again. Statement caching can be used with connection pooling and with session pooling, and will improve performance and scalability. It can be used without session pooling as well. The OCI calls that implement statement caching are:
OCIStmtPrepare2()
OCIStmtRelease() -
I would like to know which ANSI standard Oracle follows.
quoting wikipedia:
SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standards (ISO) in 1987. Since then, the standard has been enhanced several times with added features. However, issues of SQL code portability between major RDBMS products still exist due to lack of full compliance with, or different interpretations of, the standard. Among the reasons mentioned are the large size and incomplete specification of the standard, as well as vendor lock-in.
I think you may be better off trying to write whatever it is you want to write and seeing if it works that way across databases.
the thing is (and this goes for all databases): some things are more efficient written in the proprietary extensions to SQL than in ANSI SQL. Take recursive with clauses vs the connect by clause etc.
edit:
The other thing to be wary of is the potential for differing functionality depending on how you write your queries. for example even though the database supports ANSI joins, I encountered a bug whereby the result_cache functionality doesn't work when you use ANSI join syntax:
result_cache on tables
Edited by: WhiteHat on Jun 21, 2012 11:58 AM -
Best way to generate report on huge data table
Hi,
I am using Oracle 11g.
I want to generate reports on transaction tables containing huge amounts of data, on which DML is performed very frequently in real time.
I want to keep my report/result in the RESULT_CACHE, active for 15 minutes. Whenever any insert/update runs on the main tables, the RESULT_CACHE gets invalidated.
My question is: can I control/stop the RESULT_CACHE RELIES_ON(table_name) invalidation, since Oracle 11g invalidates the RESULT_CACHE automatically?
My requirement is to not hit the main table again and again.
Please help.
Thanks in advance.
Vinod910575 wrote:
Hi,
I am using Oracle11g.
I want to generate reports on transaction tables containing huge amount of data, on which very frequently DMLs are performing in real time.
i want to keep my report/result in RESULT_CACHE for 15 mins. active. whenever any insert/update runs on main tables the RESULT_CACHE is getting invalidated
my question is can i control/stop RESULT_CACHE relies on(table_name) invalidating, as Oracle11g invalidating RESULT_CACHE automatically.
my requirement is to not hit the main table again&again.
pls help..
It sounds as if you're trying to avoid contention on a very busy large table while users are experimenting with relatively small fractions of the total data set. The type of thing you're doing is probably about the best approach - though it sounds as if you are not using global temporary tables which could save you a bit of time and contention when refreshing each private data set.
Ideally, though, you probably want a front end tool that does client-side caching - i.e. pulls the data into the front-end tool and lets the user rearrange it cosmetically there until the user explicitly requests a new trip to the database. I think Oracle Discoverer has (had) some capability in this area. What's the scale of the work the users are doing - can you give us a few scenarios about how much raw data they will extract and what they want to do with it before they refresh it ?
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: <b><em>Oracle Core</em></b>
P.S. Oracle does have client-side caching technology - but your ability to use it is dependent on the tools you use. You might want to go over to one of the developer or BI forums to see what they say about this problem; they may give you a different perspective on it.
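A global temporary table along the lines Jonathan suggests might look like this (a sketch; the table and column names are made up for illustration):

```sql
-- A private per-session snapshot of the busy table: each session sees only
-- its own rows, and refreshing it causes far less redo than a normal table.
create global temporary table report_snapshot (
  txn_id   number,
  txn_date date,
  amount   number
) on commit preserve rows;   -- keep the rows for the whole session

-- Refresh the private data set only when the user explicitly asks for it:
insert into report_snapshot
select txn_id, txn_date, amount
from   big_transaction_table
where  txn_date >= sysdate - 1;
```

The reports then run against report_snapshot, so the main table is hit once per refresh rather than once per report query.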
Edited by: Jonathan Lewis on Jan 31, 2012 6:55 PM -
What's wrong with the following result_cache hint?
Dear all,
I try to use result_cache hint in the following sql aggregate function statement, but the explain plan doesn't show any difference between non-result_cached and with result_cached:
SQL> set autot on explain stat
SQL> select account_mgr_id,count(*) from customers group by account_mgr_id;
ACCOUNT_MGR_ID COUNT(*)
147 76
149 74
148 58
145 111
Execution Plan
Plan hash value: 1577413243
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 4 | 16 | 6 (17)| 00:00:01 |
| 1 | HASH GROUP BY | | 4 | 16 | 6 (17)| 00:00:01 |
| 2 | TABLE ACCESS FULL| CUSTOMERS | 319 | 1276 | 5 (0)| 00:00:01 |
Statistics
0 recursive calls
0 db block gets
16 consistent gets
0 physical reads
0 redo size
689 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed
SQL> select /*+ result_cache */ account_mgr_id,count(*) from customers group by account_mgr_id;
ACCOUNT_MGR_ID COUNT(*)
147 76
149 74
148 58
145 111
Execution Plan
Plan hash value: 1577413243
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 4 | 16 | 6 (17)| 00:00:01 |
| 1 | RESULT CACHE | 3s3bugtq0p5bm71mhmqvvw0x7y | | | | |
| 2 | HASH GROUP BY | | 4 | 16 | 6 (17)| 00:00:01 |
| 3 | TABLE ACCESS FULL| CUSTOMERS | 319 | 1276 | 5 (0)| 00:00:01 |
Result Cache Information (identified by operation id):
1 - column-count=2; dependencies=(OE.CUSTOMERS); name="select /*+ result_cache */ account_mgr_id,
count(*) from customers group by account_mgr_id"
Statistics
1 recursive calls
0 db block gets
16 consistent gets
0 physical reads
0 redo size
689 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed

Anything wrong with the hint?
Best regards,
Val

Two executions are required to benefit from the result cache (two executions of the statement with the result_cache hint).
First to populate, second to benefit.
Can offer good benefits particularly with poor code - e.g. functions in SQL, row-by-row function calls, etc.
Optimal solution may be to refactor to more efficient approach.
But result cache can deliver significant short term tactical gain.
Not a no-brainer though.
There were scalability issues with a single latch protecting the result cache - I believe this has changed in 11gR2.
There are also issues with concurrent executions where the result needs to be recalculated and takes x time to regenerate that result.
See http://uhesse.wordpress.com/2009/11/27/result-cache-another-brilliant-11g-new-feature/#comment-216
SQL> drop table t1;
Table dropped.
SQL>
SQL> create table t1
2 as
3 select rownum col1
4 from dual
5 connect by rownum <= 100000;
Table created.
SQL> set autotrace traceonly explain statistics
SQL> select col1
2 from t1
3 where col1 >= 99997;
Execution Plan
Plan hash value: 3617692013
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 130 | 58 (9)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T1 | 10 | 130 | 58 (9)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("COL1">=99997)
Note
- dynamic sampling used for this statement (level=4)
Statistics
10 recursive calls
0 db block gets
220 consistent gets
153 physical reads
0 redo size
379 bytes sent via SQL*Net to client
334 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed
SQL> select col1
2 from t1
3 where col1 >= 99997;
Execution Plan
Plan hash value: 3617692013
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 130 | 58 (9)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T1 | 10 | 130 | 58 (9)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("COL1">=99997)
Note
- dynamic sampling used for this statement (level=4)
Statistics
0 recursive calls
0 db block gets
158 consistent gets
0 physical reads
0 redo size
379 bytes sent via SQL*Net to client
334 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed
SQL>
SQL>
SQL> select /*+ result_cache */ col1
2 from t1
3 where col1 >= 99997;
Execution Plan
Plan hash value: 3617692013
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 130 | 58 (9)| 00:00:01 |
| 1 | RESULT CACHE | 4p777jcbdgjm25xy3502ypdb5r | | | | |
|* 2 | TABLE ACCESS FULL| T1 | 10 | 130 | 58 (9)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("COL1">=99997)
Result Cache Information (identified by operation id):
1 - column-count=1; dependencies=(RIMS.T1); name="select /*+ result_cache */ col1
from t1
where col1 >= 99997"
Note
- dynamic sampling used for this statement (level=4)
Statistics
4 recursive calls
0 db block gets
216 consistent gets
0 physical reads
0 redo size
379 bytes sent via SQL*Net to client
334 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed
SQL>
SQL>
SQL> select /*+ result_cache */ col1
2 from t1
3 where col1 >= 99997;
Execution Plan
Plan hash value: 3617692013
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 130 | 58 (9)| 00:00:01 |
| 1 | RESULT CACHE | 4p777jcbdgjm25xy3502ypdb5r | | | | |
|* 2 | TABLE ACCESS FULL| T1 | 10 | 130 | 58 (9)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("COL1">=99997)
Result Cache Information (identified by operation id):
1 - column-count=1; dependencies=(RIMS.T1); name="select /*+ result_cache */ col1
from t1
where col1 >= 99997"
Note
- dynamic sampling used for this statement (level=4)
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
379 bytes sent via SQL*Net to client
334 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed
SQL> -
Is result_cache working in Oracle 11.1.0.6.0?
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for Solaris: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
SQL> SELECT dbms_result_cache.status() FROM dual;
DBMS_RESULT_CACHE.STATUS()
--------------------------------------------------------------------------------
ENABLED
12:31:08 SQL> set autotrace on
12:31:27 SQL> select count(*) from objs;
COUNT(*)
69918
Elapsed: 00:00:01.72
Execution Plan
Plan hash value: 386529197
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 289 (1)| 00:00:04 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| OBJS | 80773 | 289 (1)| 00:00:04 |
Note
- dynamic sampling used for this statement
Statistics
282 recursive calls
0 db block gets
1140 consistent gets
1038 physical reads
0 redo size
524 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
1 rows processed
12:31:49 SQL> select /*+ result_cache */ count(*) from objs;
COUNT(*)
69918
Elapsed: 00:00:00.03
Execution Plan
Plan hash value: 386529197
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 289 (1)| 00:00:04 |
| 1 | RESULT CACHE | cnsc9rw3p17364cbg4975pad6y | | | |
| 2 | SORT AGGREGATE | | 1 | | |
| 3 | TABLE ACCESS FULL| OBJS | 80773 | 289 (1)| 00:00:04 |
Result Cache Information (identified by operation id):
1 - column-count=1; dependencies=(CTSGKOD.OBJS); attributes=(single-row); name="select /*+ result_cache */ count(*) from objs"
Note
- dynamic sampling used for this statement
Statistics
4 recursive calls
0 db block gets
1110 consistent gets
0 physical reads
0 redo size
524 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
12:32:06 SQL>
I see RESULT CACHE in the execution plan, but why do I see 1110 consistent gets? I expected 0 consistent gets and physical reads, with the query not being executed at all ...
Thank you

Aman,
let's say I run these statements:
1) exec dbms_result_cache.flush
2) alter system flush shared_pool;
3)alter system flush buffer_cache;
Then i run the query without hint of result_cache
4) Select count(*) from objs; --> This query is not using result cache, it is running for first time, so physical reads & consistent gets
5) select count(*) from objs; --> again same query, this time only consistent gets
6) select /*+ result_cache */ count(*) from objs; -- Lets introduce result_cache now
Will the above query get its data from the buffer cache, or will it use the result cache and get the result from memory instead of running the query? Or will this 6th statement run the same as (5) and just prepare the ground for result_cache usage, so that any other queries later can benefit from the result_cache?
Thanks again Aman -
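One way to answer the question about statement (6) empirically (a diagnostic sketch) is to watch the result cache statistics around that execution:

```sql
-- Run this before and after the hinted query:
-- "Create Count Success" grows when a result is stored in the cache,
-- "Find Count" grows when a stored result is reused.
select name, value
from   v$result_cache_statistics
where  name in ('Create Count Success', 'Find Count');
```

On its first run the hinted statement still reads blocks (from the buffer cache) to build the result; only subsequent identical executions are answered from the result cache, which is why the first hinted run shows non-zero consistent gets.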
PL/Sql RESULT_CACHE Function under VPD environment
Hi,
Imagine that I have a VPD environment that depending on the user the data returned from the table in a schema is different than returned to other user. for instance:
The VPD filters the data by company_id in this table
table persons (
  company_id number(4),
  person_id  number(4),
  person     varchar2(100)
)
Now imagine that I connect as scott, and scott belongs to company_id 1000. If scott runs select * from schema.persons he will see this:
1000 123 ANNA
1000 124 MARY
1000 125 SCOTT
If I connect as JOHN, and JOHN belongs to company_id 1111, and JOHN runs select * from schema.persons he will see this:
1111 123 ALBERT
1111 124 KEVIN
1111 125 JOHN
This is the VPD environment I have...
So, do RESULT_CACHE functions work well in this type of environment? The RESULT_CACHE is shared between sessions, but in this case the sessions of scott and john always see different results. Is there any option for implementing RESULT_CACHE by username?
Regards
Ricardo

It appears that the result cache functionality can work with Virtual Private Database. Check out the following links:
Adventures with VPD I: Result Cache
Concepts: Result Cache -
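A common pattern for keeping cached results session-safe under VPD (a sketch, using the persons table from the post; the function name and context call are hypothetical) is to make the VPD attribute an explicit parameter, so it becomes part of the cache key:

```sql
-- The company id is a formal parameter, so scott's and john's results
-- are cached under different keys instead of being shared.
create or replace function get_person_name (
  p_company_id number,   -- pass e.g. sys_context('my_ctx', 'company_id')
  p_person_id  number
) return varchar2 result_cache
is
  v_name persons.person%type;
begin
  select person
  into   v_name
  from   persons
  where  company_id = p_company_id
  and    person_id  = p_person_id;
  return v_name;
end;
/
```

If the predicate stays implicit (applied only by the VPD policy), the cache cannot distinguish the two sessions, which is exactly the risk the question raises.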
BLOB column in own tablespace, in partition, in table, tablespace to be moved
Hi All,
First off I am using Oracle Database 11.2.0.2 on AIX 5.3.
We have a table that is partitioned monthly.
In this table there is a partition (LOWER), this lower partition is 1.5TB in size due to a BLOB column called (ATTACHMENT).
The rest of the table is not that big, about 30GB, its the BLOB column that is using up all the space.
The lower partition is in its own default tablespace (DefaultTablespace), the BLOB column in the lower partition is also in its own tablespace(TABLESPACE_LOB) - 1.5TB
I've been asked to free up some space by moving the TABLESPACE_LOB (from the lower partition) to an archive database, confirming the data is there, and then removing the lower partition from production.
I don't have enough free space (or time) to do an expdp; I don't think it's doable with so much data.
CREATE TABLE tablename
xx VARCHAR2(14 BYTE),
xx NUMBER(8),
xx NUMBER,
ATTACHMENT BLOB,
xx DATE,
xx VARCHAR2(100 BYTE),
xx INTEGER,
LOB (ATTACHMENT) STORE AS (
TABLESPACE DefaultTablespace
ENABLE STORAGE IN ROW
NOCOMPRESS
TABLESPACE DefaultTablespace
RESULT_CACHE (MODE DEFAULT)
PARTITION BY RANGE (xx)
PARTITION LOWER VALUES LESS THAN ('xx')
LOGGING
COMPRESS BASIC
TABLESPACE DefaultTablespace
LOB (ATTACHMENT) STORE AS (
TABLESPACE TABLESPACE_LOB
ENABLE STORAGE IN ROW
...>>
My idea was to take an datapump export of the table excluding the column ATTACHMENT, using external tables.
Then to create the table on the archive database "with" the column ATTACHMENT.
Import the data only; from what I understand, if you use a dump file that has too many columns Oracle will handle it, and I'm hoping it will work the other way round as well.
Then on production make the TABLESPACE_LOB read only and move it to the new file system.
This is a bit more complicated than a normal tablespace move due to how the table is split up.
Any advice would be very much appreciated.

JohnWatson wrote:
If disc space is the problem, would a network mode export/import work for you? I have never tried it with that much data, but theoretically it should work. You could do just a few G at a time.
I see what you are saying: if we use a network link then no redo would be generated on the export, but it would for the import, right? But like you said, we could do 100GB per day for the next ten days, and that would be very doable I think; it would just take a long time. On the archive database we back up archivelogs every morning, so anything generated on the import would be backed up to tape the following morning.
mtefft wrote:
Does it contain only that partition? Or are there other partitions in there as well? If there are other partitions, what % of the space is used by the partition you are trying to move?
Yep, tablespace_lob only contains the LOWER partition, no other partitions. Just the LOWER partition is taking up 1.5TB. -
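Since TABLESPACE_LOB holds only the LOWER partition, a transportable-tablespace move may also be worth considering (a sketch with placeholder file paths; note that a partition cannot normally be transported while it is still part of the partitioned table, so it usually has to be exchanged into a standalone table first):

```sql
-- 0. Self-containment check: this will report violations while the LOB
--    segment still belongs to a partition of the main table.
exec dbms_tts.transport_set_check('TABLESPACE_LOB', true);
select * from transport_set_violations;

-- 1. Once self-contained, make the tablespace read only on production.
alter tablespace tablespace_lob read only;

-- 2. Export the tablespace metadata (expdp, run from the shell):
--      expdp system transport_tablespaces=TABLESPACE_LOB \
--            dumpfile=tts_lob.dmp directory=DATA_PUMP_DIR
-- 3. Copy the datafiles and the dump file to the archive server.
-- 4. Plug the tablespace into the archive database:
--      impdp system transport_datafiles='/archive/path/lob01.dbf' \
--            dumpfile=tts_lob.dmp directory=DATA_PUMP_DIR
```

This moves the 1.5TB as a file copy rather than row-by-row export/import, avoiding most of the redo and time concerns discussed above.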
I am trying to figure out why the explain plan (and performance) for the same query differs between our staging environment and our production environment when using the RESULT_CACHE hint. It's significantly worse in production. The platform and database version are the same. There are differences in the init settings between the two environments, specifically the following:
In Stage:
optimizer_mode = first_rows_100
result_cache_mode=manual
result_cache_max_result=5
result_cache_max_size=7872K
cursor_sharing=similar
In Prod:
optimizer_mode =
result_cache_mode=
result_cache_max_result=
result_cache_max_size=
cursor_sharing=exact
When I run the query in Stage, the explain plan looks like this:
Execution Plan
Plan hash value: 3058471186
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 263 | 8 (13)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | HASH GROUP BY | | 1 | 263 | 8 (13)| 00:00:01 |
| 3 | NESTED LOOPS | | 1 | 263 | 7 (0)| 00:00:01 |
|* 4 | TABLE ACCESS BY INDEX ROWID| C11_HOLDINGS | 1 | 195 | 4 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | HOLDING_CALC2_IDX | 1 | | 3 (0)| 00:00:01 |
|* 6 | TABLE ACCESS BY INDEX ROWID| C11_HOLDINGS | 1 | 68 | 3 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | HOLDING_CALC_IDX | 1 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("STATEMENT_DATE"=MAX("STATEMENT_DATE"))
4 - filter(UPPER("SYMBOL")=UPPER(NVL(NULL,"SYMBOL")) AND "SECURITY_TYPE"<>'NO REVIEW
REQUIRED' AND ("STATEMENT_DATE">=INTERNAL_FUNCTION("STATEMENT_DATE")-.0000115740740740740740
7407407407407407407407 OR "STATEMENT_DATE"=NULL) AND "ACTIVE_FLAG"='Y' AND
"STATEMENT_DATE"<=SYSDATE@!-3)
5 - access("BROKERAGE_ACCOUNT_ID"=14873 AND "USER_ID"=39356 AND "CLIENT_ID"=609)
6 - filter("B"."ACTIVE_FLAG"='Y' AND "B"."STATEMENT_DATE"<=SYSDATE@!-3 AND
"A"."SYMBOL"="B"."SYMBOL")
7 - access("B"."BROKERAGE_ACCOUNT_ID"=14873 AND "B"."USER_ID"=39356 AND
"B"."CLIENT_ID"=609 AND "A"."SECURITY_TYPE"="B"."SECURITY_TYPE")
filter("B"."SECURITY_TYPE"<>'NO REVIEW REQUIRED')
Statistics
0 recursive calls
0 db block gets
1356 consistent gets
0 physical reads
0 redo size
1904 bytes sent via SQL*Net to client
360 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
When I run it in Prod:
Execution Plan
Plan hash value: 1021161140
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 252 | 239 (1)| 00:00:03 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 1 | 252 | 239 (1)| 00:00:03 |
| 3 | VIEW | VW_SQ_1 | 1 | 88 | 236 (1)| 00:00:03 |
|* 4 | FILTER | | | | | |
| 5 | HASH GROUP BY | | 1 | 83 | 236 (1)| 00:00:03 |
|* 6 | TABLE ACCESS BY INDEX ROWID| C11_HOLDINGS | 256 | 21248 | 235 (0)| 00:00:03 |
|* 7 | INDEX RANGE SCAN | HOLDING_CALC2_IDX | 512 | | 5 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | HOLDINGS_SYMB_IDX | 1 | | 2 (0)| 00:00:01 |
|* 9 | TABLE ACCESS BY INDEX ROWID | C11_HOLDINGS | 1 | 164 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - filter("B"."BROKERAGE_ACCOUNT_ID"=14873 AND "B"."USER_ID"=39356 AND
"B"."CLIENT_ID"=609 AND MAX("STATEMENT_DATE")<=SYSDATE@!-3)
6 - filter("B"."ACTIVE_FLAG"='Y' AND "B"."STATEMENT_DATE"<=SYSDATE@!-3 AND
"B"."SECURITY_TYPE"<>'NO REVIEW REQUIRED')
7 - access("B"."BROKERAGE_ACCOUNT_ID"=14873 AND "B"."USER_ID"=39356 AND
"B"."CLIENT_ID"=609)
8 - access("A"."SYMBOL"="ITEM_5")
filter(UPPER("SYMBOL")=UPPER(NVL(NULL,"SYMBOL")))
9 - filter("BROKERAGE_ACCOUNT_ID"=14873 AND "USER_ID"=39356 AND "CLIENT_ID"=609 AND
"ACTIVE_FLAG"='Y' AND "SECURITY_TYPE"<>'NO REVIEW REQUIRED' AND
("STATEMENT_DATE">=INTERNAL_FUNCTION("STATEMENT_DATE")-.00001157407407407407407407407407407407
407407 OR "STATEMENT_DATE"=NULL) AND "STATEMENT_DATE"<=SYSDATE@!-3 AND
"STATEMENT_DATE"="MAX(STATEMENT_DATE)" AND "A"."SECURITY_TYPE"="ITEM_4")
Statistics
0 recursive calls
0 db block gets
18051 consistent gets
0 physical reads
0 redo size
1872 bytes sent via SQL*Net to client
360 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
The queries and the data in both environments are identical. Any ideas, suggestions would be appreciated.
Thanks

Interestingly, the explain plans are identical to what they were with the result_cache hint.
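A first step when two environments plan the same query differently (a diagnostic sketch) is to diff the optimizer-related parameters on both instances:

```sql
-- Run on both stage and prod and compare the output; ISDEFAULT shows
-- which values were explicitly set versus left at the default.
select name, value, isdefault
from   v$parameter
where  name in ('optimizer_mode', 'cursor_sharing', 'result_cache_mode',
                'result_cache_max_size', 'result_cache_max_result')
order  by name;
```

Differences in optimizer_mode (first_rows_100 versus the default all_rows) and cursor_sharing are each capable of producing different plans on identical data, independently of the result cache.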
-
Apex using result_cache has an invalid status for wwv_flow_language query
In the v_$result_cache_objects view of SYS I noticed that Apex is using the result_cache feature of 11g.
Some of the Apex queries will have the status 'PUBLISHED', but others have the status 'INVALID'
Why does the "SELECT /*+ result_cache */ NLS_LANGUAGE, NLS_TERRITORY, NLS_SORT, NLS_WINDOWS_CHARSET FROM WWV_FLOW_LANGUAGES WHERE LANG_ID_UPPE..." query have an INVALID status?
It already happens when I start APEX (http://localhost:7778/pls/apex).
As I understand the result_cache mechanism, it invalidates the cache after an update is done on a table it depends on.
Does it make sense for those Apex queries to have a result_cache hint when the result will be invalidated soon after?

Hi,
just had a look at our own development box and for me this query doesn't show the status INVALID. It would also not make a lot of sense because this table is only populated during installation.
When you shut down your database, start it up again, and then access APEX with http://localhost:7778/pls/apex, do you see the query in INVALID status?
Regards
Patrick
My Blog: http://www.inside-oracle-apex.com
APEX 4.0 Plug-Ins: http://apex.oracle.com/plugins
Twitter: http://www.twitter.com/patrickwolf -
Oracle RESULT_CACHE new features 11g
Hello folks !
One question about this new feature of 11g: RESULT_CACHE.
I understand it permits keeping up to RESULT_CACHE_MAX_SIZE bytes of result sets retrieved from SQL queries in the SGA.
But what is the difference between this and CURSOR_SHARING set to EXACT or FORCE?
TIA.
nico

share the result set
Nope. Not correct.
As Alex says - completely different features.
Cursor sharing is all about the treatment of literals in a SQL statement and the subsequent sharing of execution plans for SQL statements where only the literals were different in the original SQL.
See summary table near top of this post:
http://optimizermagic.blogspot.com/2009/05/why-do-i-have-hundreds-of-child-cursors.html -
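A quick way to see that the two features operate at different levels (a sketch, assuming the standard EMP demo table): cursor sharing is about reusing the parsed statement and plan, while the result cache reuses the rows themselves:

```sql
-- With CURSOR_SHARING=FORCE these two become one shared cursor
-- (literals replaced by binds), but every execution still visits
-- the table blocks to compute its answer:
select count(*) from emp where deptno = 10;
select count(*) from emp where deptno = 20;

-- The result cache stores the answer itself: the second identical
-- execution is served from memory with zero consistent gets.
select /*+ result_cache */ count(*) from emp where deptno = 10;
select /*+ result_cache */ count(*) from emp where deptno = 10;
```

So cursor sharing saves parse work; the result cache saves the execution itself (as long as nothing invalidates the dependency).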
When oracle invalidates result_cache results without any changes in objects
Hi all!
On our production servers we have simple function with result_cache, like this:
create or replace function f_rc(p_id number) return number result_cache
is
ret number;
begin
select t.val into ret from rc_table t where t.id=p_id;
return ret;
exception
when no_data_found then
return null;
end;
/

Its results are frequently invalidated without any changes to the table or the function. I found only two cases where Oracle invalidates result_cache results without any changes to the table:
1. "select for update" from this table with commit;
2. deletion of unrelated rows from the parent table if there is an unindexed foreign key with "on delete cascade".
I tested it on 11.2.0.1 and 11.2.0.3, on Solaris x64 and Windows. Test cases: http://www.xt-r.com/2012/07/when-oracle-invalidates-resultcache.html
But none of them can be the cause of our situation: we have no unindexed FKs, and even if I lock all rows with "select for update", it still does not stop invalidating.
In what other cases does this happen? Am I right that Oracle does not track the actual changes, but rather the taking of locks and the commits?
Best regards,
Sayan Malakshinov
http://xt-r.com

Hmm.. Do you mean our situation, or the test cases with "select for update" and "fk" too?
I'm not sure that it is a bug; maybe it's an architectural approach to simplify things and reduce CPU load?
Best regards,
Sayan Malakshinov
http://xt-r.com -
Result_cache and data dictionary views
Hi,
Are there any special considerations when caching the results of a function which uses data dictionary views to determine its results?
This question has popped up because I have such a result-cached function for which the result_cache objects are not getting invalidated even when the underlying data dictionary views have changed, so the function gives 'stale' values in its output. Adding the RELIES_ON clause has not helped either.
Here is what I am trying to do:
The function accepts table name as its input and tries to determine all the child tables using the sys.dba_constraints view. The results are returned in a pl/sql table and are cached so that the subsequent calls to this function use the result_cache.
Everything works fine for the parent/child tables which have been created before the creation of this function. All the results are correct.
The problem starts when a new child table is added to an existing parent table.
The v$result_cache_objects view shows the result of this function as 'Published' and the output of the function does not show the newly created child table.
The same is the case when an existing child table is dropped; the function continues to return it in the output, as it is pulled from the result_cache.
Oracle version:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
>
Restrictions on Result-Cached Functions
To be result-cached, a function must meet all of these criteria:
* It is not defined in a module that has invoker's rights or in an anonymous block.
* It is not a pipelined table function.
* It does not reference dictionary tables, temporary tables, sequences, or nondeterministic SQL functions.
For more information, see Oracle Database Performance Tuning Guide.
* It has no OUT or IN OUT parameters.
* No IN parameter has one of these types:
o BLOB
o CLOB
o NCLOB
o REF CURSOR
o Collection
o Object
o Record
* The return type is none of these:
o BLOB
o CLOB
o NCLOB
o REF CURSOR
o Object
o Record or PL/SQL collection that contains an unsupported return type
It is recommended that a result-cached function also meet these criteria:
* It has no side effects.
For information about side effects, see "Subprogram Side Effects".
* It does not depend on session-specific settings.
For more information, see "Making Result-Cached Functions Handle Session-Specific Settings".
* It does not depend on session-specific application contexts.
>
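Since your function references dictionary views, it falls under the "does not reference dictionary tables" restriction, and Oracle will not invalidate its cached results automatically when the dictionary changes. As a hedged workaround sketch (the function name F_CHILD_TABLES and owner SCOTT are placeholders for your own), the cache can be invalidated manually after DDL:

```sql
-- Invalidate all cached results for one result-cached function.
begin
  dbms_result_cache.invalidate('SCOTT', 'F_CHILD_TABLES');
end;
/

-- Or, more bluntly, flush the entire result cache:
begin
  dbms_result_cache.flush;
end;
/
```

Of course, that only works if you know when the DDL happened; the cleaner fix is to not result-cache a function that depends on the dictionary at all.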
http://download.oracle.com/docs/cd/E11882_01/appdev.112/e17126/subprograms.htm#LNPLS698 -
Logical Standby table not supported 11.2.0.1
I have a table with CLOB in primary and runs the SQL to determine which tables are unsupported.
They, nor I, can see why this table is unsupported. Please let me know what I am missing.
Are there additional steps to see why?
Please also see the DDL for the table at the end.
SQL> SELECT COLUMN_NAME,DATA_TYPE FROM DBA_LOGSTDBY_UNSUPPORTED WHERE OWNER='P_RPMX_AUDIT_DB' AND TABLE_NAME = 'AUDIT_TAB' ;
COLUMN_NAME DATA_TYPE
TABLE_SCRIPT CLOB
SQL> SELECT OWNER FROM DBA_LOGSTDBY_SKIP WHERE STATEMENT_OPT = 'INTERNAL SCHEMA';
OWNER
DBSNMP
SYS
SYSTEM
WMSYS
ORDDATA
OUTLN
DIP
EXFSYS
XDB
ORDPLUGINS
ANONYMOUS
APPQOSSYS
ORDSYS
SI_INFORMTN_SCHEMA
XS$NULL
15 rows selected.
SQL> SELECT DISTINCT OWNER,TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED ORDER BY OWNER,TABLE_NAME;
OWNER TABLE_NAME
APG APG_REJECT_TAB
APG ERR$_ACCOUNT
APG ERR$_ACCOUNT_ADDRESS
APG ERR$_ACCOUNT_ATTRIBUTES
APG ERR$_ACCOUNT_CYCLE_HISTORY
APG ERR$_ACCOUNT_USERS
APG ERR$_ACCOUNT_XREFERENCE
APG ERR$_ACTIONABLE_ITEMS
APG ERR$_IDENTIFICATION
APG ERR$_LOAD_RECON_METRICS
APG ERR$_PHONE
APG ERR$_POOL
APG ERR$_POOL_ACCOUNTS
APG ERR$_POOL_ACCOUNT_ENROLL_HISTO
APG ERR$_USERS
P_IM_EXTRACT_MASK ERR$_REWARDS_ACTIVITY
P_IM_EXTRACT_MASK ERR$_REWARDS_TRANSACTION
P_IM_EXTRACT_MASK ERR$_REWARDS_TRANSACTION_ITEM
P_IM_EXTRACT_MASK ERR$_TRANSACTION
P_RPMX_AUDIT_DB AUDIT_TAB
P_RPMX_JOBS ERR$_ACCOUNT
P_RPMX_JOBS ERR$_ACCOUNT_XREFERENCE
22 rows selected.
CREATE TABLE P_RPMX_AUDIT_DB.AUDIT_TAB
TABLE_SCRIPT CLOB
LOB (TABLE_SCRIPT) STORE AS (
TABLESPACE P_RPMX_AUD_DB_DATA
ENABLE STORAGE IN ROW
CHUNK 8192
RETENTION
NOCACHE
LOGGING
STORAGE (
INITIAL 1M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
TABLESPACE P_RPMX_AUD_DB_DATA
RESULT_CACHE (MODE DEFAULT)
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
FLASH_CACHE DEFAULT
CELL_FLASH_CACHE DEFAULT
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
Thanks, Chris

Hi, I was able to find the following answer. Thanks.
"Logical standby has never supported tables that only contain LOBs. We require some scalar column that can be used for row identification during update and delete processing.
There is some discussion of row identification issues in section 4.1.2 of Oracle Data Guard Concepts and Administration. "
http://docs.oracle.com/cd/E11882_01/server.112/e25608/create_ls.htm#i77026
"If there is no primary key and no nonnull unique constraint/index, then all columns of bounded size are logged as part of the UPDATE statement to identify the modified row. All columns are logged except the following: LONG, LOB, LONG RAW, object type, and collections."
So you see, with only a LOB column it cannot identify the row. You need a primary key, or at least some columns that can uniquely identify the row. The lone LOB column is not logged by supplemental logging.
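A hedged sketch of the fix (the column and constraint names are made up): add a scalar key column so the logical standby can identify rows without the CLOB:

```sql
-- Add a scalar identifier column (name is hypothetical).
alter table p_rpmx_audit_db.audit_tab
  add (audit_id number);

-- Backfill existing rows, then enforce uniqueness via a primary key.
update p_rpmx_audit_db.audit_tab
   set audit_id = rownum;

alter table p_rpmx_audit_db.audit_tab
  add constraint audit_tab_pk primary key (audit_id);

-- Afterwards the table should no longer appear in
-- DBA_LOGSTDBY_UNSUPPORTED.
```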
So it is not supported. -
Gather table stats taking longer for Large tables
Version : 11.2
I've noticed that gathering stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
Since the row count needs to be calculated, a big table's stats collection would understandably take slightly longer (it runs SELECT COUNT(*) internally).
But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats? Apart from the row count and index info, what other information is gathered by gather table stats?
Does table size actually matter for stats collection?

Max wrote:
Version : 11.2
I've noticed that gathers stats (using dbms_stats.gather_table_stats) is taking longer for large tables.
Since row count needs to be calculated, a big table's stats collection would be understandably slightly longer (Running SELECT COUNT(*) internally).
But for a non-partitioned table with 3 million rows, it took 12 minutes to collect the stats ? Apart from row count and index info what other information is gathered for gather table stats ?
09:40:05 SQL> desc user_tables
Name Null? Type
TABLE_NAME NOT NULL VARCHAR2(30)
TABLESPACE_NAME VARCHAR2(30)
CLUSTER_NAME VARCHAR2(30)
IOT_NAME VARCHAR2(30)
STATUS VARCHAR2(8)
PCT_FREE NUMBER
PCT_USED NUMBER
INI_TRANS NUMBER
MAX_TRANS NUMBER
INITIAL_EXTENT NUMBER
NEXT_EXTENT NUMBER
MIN_EXTENTS NUMBER
MAX_EXTENTS NUMBER
PCT_INCREASE NUMBER
FREELISTS NUMBER
FREELIST_GROUPS NUMBER
LOGGING VARCHAR2(3)
BACKED_UP VARCHAR2(1)
NUM_ROWS NUMBER
BLOCKS NUMBER
EMPTY_BLOCKS NUMBER
AVG_SPACE NUMBER
CHAIN_CNT NUMBER
AVG_ROW_LEN NUMBER
AVG_SPACE_FREELIST_BLOCKS NUMBER
NUM_FREELIST_BLOCKS NUMBER
DEGREE VARCHAR2(10)
INSTANCES VARCHAR2(10)
CACHE VARCHAR2(5)
TABLE_LOCK VARCHAR2(8)
SAMPLE_SIZE NUMBER
LAST_ANALYZED DATE
PARTITIONED VARCHAR2(3)
IOT_TYPE VARCHAR2(12)
TEMPORARY VARCHAR2(1)
SECONDARY VARCHAR2(1)
NESTED VARCHAR2(3)
BUFFER_POOL VARCHAR2(7)
FLASH_CACHE VARCHAR2(7)
CELL_FLASH_CACHE VARCHAR2(7)
ROW_MOVEMENT VARCHAR2(8)
GLOBAL_STATS VARCHAR2(3)
USER_STATS VARCHAR2(3)
DURATION VARCHAR2(15)
SKIP_CORRUPT VARCHAR2(8)
MONITORING VARCHAR2(3)
CLUSTER_OWNER VARCHAR2(30)
DEPENDENCIES VARCHAR2(8)
COMPRESSION VARCHAR2(8)
COMPRESS_FOR VARCHAR2(12)
DROPPED VARCHAR2(3)
READ_ONLY VARCHAR2(3)
SEGMENT_CREATED VARCHAR2(3)
RESULT_CACHE VARCHAR2(7)
09:40:10 SQL>
> Does Table size actually matter for stats collection?
Yes.
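Besides NUM_ROWS and index stats, the gather also computes column statistics for every column: number of distinct values, min/max, nulls, average length, and possibly histograms, which is usually where the time goes. A hedged sketch (the schema and table names are placeholders) of a gather that leans on 11g's fast approximate-NDV sampling and parallelism:

```sql
-- AUTO_SAMPLE_SIZE in 11g uses the approximate-NDV algorithm, which is
-- typically far quicker than a large fixed estimate_percent.
begin
  dbms_stats.gather_table_stats(
    ownname          => 'MAX',         -- placeholder schema
    tabname          => 'BIG_TABLE',   -- placeholder table
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'for all columns size auto',
    degree           => 4,             -- parallel slaves; tune to the host
    cascade          => true);         -- gather the index stats too
end;
/
```

If most of the 12 minutes is histogram work on wide columns, narrowing method_opt to only the columns used in predicates can also help.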
Handle: Max
Status Level: Newbie
Registered: Nov 10, 2008
Total Posts: 155
Total Questions: 80 (49 unresolved)
why so many unanswered questions?