Oracle not reclaiming space occupied by flashback logs
Hi,
Oracle 10g R2 10.2.0.1 on RHEL 4 64-bit.
My db_flashback_retention_target is set to 1440 and my db_recovery_file_dest_size is set to 40G.
Flashback logs are occupying about 36G, so whenever RMAN tries to back up the database it fails, saying it cannot reclaim enough space from the 40G limit.
When checking the flashback logs, I saw that the DB is keeping 3-day-old logs. Should it not reclaim the space occupied by these logs and delete them?
Others have reported similar issues: http://www.wayneadamsconsulting.com/
>
One thing that I have not seen the database do, however, is delete reclaimable flashback logs in order to make room for a backup. If there is not enough free space in your FRA for a backup but there is plenty of space used by reclaimable flashback logs, RMAN and the database will not delete flashback logs to free up room in the FRA for the backup. This seems like a fairly large gap in the flashback log functionality, but hopefully Oracle will handle it in a future release.
>
See MOS note: Space issue in Flash Recovery Area (FRA) [ID 829755.1].
Note also that 10.2.0.1 is an old 10.2 release; you could consider upgrading to 10.2.0.4.
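To see how much of the 40G is actually reclaimable, and which file types are consuming it, you can query the FRA views (a sketch using the 10gR2 view names; the FLASHBACK OFF/ON workaround is the one usually suggested in this situation and discards all flashback logs, so treat it as a last resort):

```sql
-- How full is the FRA, and how much of it is reclaimable?
SELECT space_limit, space_used, space_reclaimable
FROM   v$recovery_file_dest;

-- Breakdown per file type (flashback logs appear as FLASHBACK LOG)
SELECT file_type, percent_space_used, percent_space_reclaimable, number_of_files
FROM   v$flash_recovery_area_usage;

-- Workaround sketch: disabling and re-enabling Flashback Database
-- discards all flashback logs (requires the MOUNT state in 10g).
-- ALTER DATABASE FLASHBACK OFF;
-- ALTER DATABASE FLASHBACK ON;
```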
Similar Messages
-
Oracle not using the stored outline
SQL> create table emp as select * from sys.emp;
Table created.
SQL> alter session set create_stored_outlines = TRUE;
Session altered.
SQL> create outline emp_dept for category scott_outlines on select empno from emp where ename = 'SCOTT';
Outline created.
SQL> set autot on exp
SQL> select empno from emp where ename = 'SCOTT';
EMPNO
7788
Execution Plan
Plan hash value: 3956160932
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 20 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| EMP | 1 | 20 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("ENAME"='SCOTT')
Note
- dynamic sampling used for this statement (level=2)
SQL> create unique index i on emp(ename);
Index created.
SQL> select empno from emp where ename = 'SCOTT';
EMPNO
7788
Execution Plan
Plan hash value: 3262377121
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 20 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMP | 1 | 20 | 1 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | I | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("ENAME"='SCOTT')
SQL> alter session set use_stored_outlines = SCOTT_OUTLIN
2 ;
Session altered.
SQL> alter session set use_stored_outlines = SCOTT_OUTLINS
2 ;
Session altered.
SQL> select empno from emp where ename = 'SCOTT';
EMPNO
7788
Execution Plan
Plan hash value: 3262377121
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 20 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMP | 1 | 20 | 1 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | I | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("ENAME"='SCOTT')
Note
- outline "SYS_OUTLINE_11050409142489113" used for this statement
SQL> SELECT name, category, used FROM user_outlines;
NAME CATEGORY USED
EMP_DEPT SCOTT_OUTLINES UNUSED
SYS_OUTLINE_11050408594412502 DEFAULT USED
SYS_OUTLINE_11050408591781301 DEFAULT UNUSED
SYS_OUTLINE_11050408594415603 DEFAULT UNUSED
SYS_OUTLINE_11050408595648404 DEFAULT UNUSED
SYS_OUTLINE_11050409003554705 DEFAULT UNUSED
SYS_OUTLINE_11050409030340606 DEFAULT UNUSED
7 rows selected.
Execution Plan
Plan hash value: 1195863419
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 81 | 2 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 1 | 81 | 2 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | OL$ | 1 | 64 | 2 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | I_USER1 | 1 | | 0 (0)| 00:00:01 |
|* 5 | TABLE ACCESS BY INDEX ROWID| USER$ | 1 | 17 | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("CREATOR"="U"."NAME")
5 - filter("U"."USER#"=USERENV('SCHEMAID'))
Note
- outline "SYS_OUTLINE_11050409030340606" used for this statement
SQL>
Note: I have dropped all default outlines in dba_outlines but they are being created automatically. (Why?)
Please give me a good article to understand more about stored outlines.
Please post your 4-digit Oracle version.
It looks like Oracle has only one stored outline for the SQL statement, and the second execution plan (with the index) has replaced the first outline (the execution plan with the full table scan).
Here is a good article on stored outlines: http://www.oracle-base.com/articles/misc/Outlines.php (though the Oracle version used is missing there too).
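Regarding the DEFAULT outlines reappearing: they are created automatically for every statement as long as create_stored_outlines is TRUE (or set to a category name) at session or system level. A hedged sketch for stopping and cleaning them up:

```sql
-- Stop automatic outline creation first (also check the system-level
-- setting), otherwise new DEFAULT outlines keep appearing.
ALTER SESSION SET create_stored_outlines = FALSE;

-- Then drop the auto-generated DEFAULT-category outlines.
BEGIN
  FOR o IN (SELECT name FROM user_outlines WHERE category = 'DEFAULT') LOOP
    EXECUTE IMMEDIATE 'DROP OUTLINE ' || o.name;
  END LOOP;
END;
/
```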
Here is a short demo based on yours, which I have modified (note that I disabled stored outline creation just after creating the first outline):
SQL> select * from v$version;
BANNER
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> drop table emp purge;
Table dropped.
SQL> drop outline emp_dept;
Outline dropped.
SQL> whenever sqlerror exit failure;
SQL> --
SQL> create table emp as
2 select object_name ename, object_id empno
3 from all_objects
4 where object_id < 5000;
Table created.
SQL> --
SQL> alter session set create_stored_outlines = TRUE;
Session altered.
SQL> create outline emp_dept for category scott_outlines on
select empno from emp where ename = 'SCOTT';
Outline created.
SQL> set autot on exp
SQL> select empno from emp where ename = 'SCOTT';
no rows selected
Execution Plan
Plan hash value: 3956160932
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 30 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| EMP | 1 | 30 | 5 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("ENAME"='SCOTT')
Note
- dynamic sampling used for this statement
SQL> -- disable stored outline creation
SQL> alter session set create_stored_outlines = FALSE;
Session altered.
SQL> create index i on emp(ename);
Index created.
SQL> select empno from emp where ename = 'SCOTT';
no rows selected
Execution Plan
Plan hash value: 4079916893
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 30 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMP | 1 | 30 | 1 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | I | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("ENAME"='SCOTT')
Note
- dynamic sampling used for this statement
SQL> -- use stored outlines
SQL> alter session set use_stored_outlines = SCOTT_OUTLINES;
Session altered.
SQL> select empno from emp where ename = 'SCOTT';
no rows selected
Execution Plan
Plan hash value: 3956160932
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 10 | 300 | 5 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| EMP | 10 | 300 | 5 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("ENAME"='SCOTT')
Note
- outline "EMP_DEPT" used for this statement
SQL> -- do not use stored outlines
SQL> alter session set use_stored_outlines=false;
Session altered.
SQL> select empno from emp where ename = 'SCOTT';
no rows selected
Execution Plan
Plan hash value: 4079916893
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 30 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMP | 1 | 30 | 1 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | I | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("ENAME"='SCOTT')
Note
- dynamic sampling used for this statement
-
Could not use kanaka plug-in to log into Lion
I could not use either AFP or Samba home directories to log into Lion. The Kanaka plug-in is used. I can see the return path from the Kanaka console, but the user cannot log into Lion.
Also, can you log in via the "desktop client" without using the plugin (which is just for logging in to the machine initially)? In my hour or two of testing I was able to get the desktop client piece going, though I had trouble with the plugin (no time to research... didn't really care that much since I'm not good at either Mac or Kanaka), so it may be interesting to see what comes back.
Good luck.
-
Why is Oracle not using the index??
Hi,
I have a table called 'arc_errors' which has an index on 'member_number', created as follows:
create index DWO.DW_ARC_CERRORS_MNO on DWO.DW_ARC_CERRORS (MEMBER_NUMBER);
But surprisingly, when I execute the following query, it does not use the index.
SELECT member_number,
COUNT(*) error_count
FROM arc_errors a
WHERE member_number = 68534152 AND
( tx_type = 'SDIC' AND
error_number IN (4, 7, 12, 13, 15, 17, 18, 705) )
OR
( tx_type = 'AUTH' AND
error_number IN (100, 104, 107, 111, 116) )
OR
( tx_type = 'BHO' AND
error_number IN (708,710) )
OR
( tx_type = 'XLGN' AND
( error_number BETWEEN 102 AND 105 OR
error_number BETWEEN 107 AND 120 OR
error_number BETWEEN 300 AND 304 ) )
OR
( tx_type = 'None' AND
( error_number IN (20, 112) OR
error_number BETWEEN 402 AND 421 ) )
OR
( tx_type = 'HYBR' AND
error_number IN (303, 304) )
GROUP BY member_number;
This is what 'explain plan' tell me
SELECT STATEMENT, GOAL = RULE 237907 502923 15087690
SORT GROUP BY 237907 502923 15087690
PARTITION RANGE ALL
TABLE ACCESS FULL DWO DW_ARC_CERRORS 237209 502923 15087690
Can someone tell me why a 'table access full' is required here?
Thanks in advance,
Rajesh
Sorry, I just found the solution myself. I needed to put an extra pair of parentheses around the set of conditions separated by OR.
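For reference, the fix described is to wrap all the OR branches in one pair of parentheses, so that the AND on member_number applies to every branch rather than only the first (a sketch of the corrected predicate):

```sql
SELECT member_number, COUNT(*) error_count
FROM   arc_errors
WHERE  member_number = 68534152
AND    (   (tx_type = 'SDIC' AND error_number IN (4, 7, 12, 13, 15, 17, 18, 705))
        OR (tx_type = 'AUTH' AND error_number IN (100, 104, 107, 111, 116))
        OR (tx_type = 'BHO'  AND error_number IN (708, 710))
        OR (tx_type = 'XLGN' AND (   error_number BETWEEN 102 AND 105
                                  OR error_number BETWEEN 107 AND 120
                                  OR error_number BETWEEN 300 AND 304))
        OR (tx_type = 'None' AND (   error_number IN (20, 112)
                                  OR error_number BETWEEN 402 AND 421))
        OR (tx_type = 'HYBR' AND error_number IN (303, 304))
       )
GROUP BY member_number;
```

With the parentheses in place, member_number = 68534152 is a driving predicate for every branch, so the index on MEMBER_NUMBER becomes usable.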
-
Oracle not using its own explain plan
When I run a simple select query on an indexed column of a large (30 million row) table, Oracle creates a plan using the indexed column at a cost of 4. However, what it actually does is a full table scan (I can see this in the 'Long Operations' tab in OEM).
The funny thing is that I have the same query in an ADO application, and when the query is run from there, the same plan is created but no table scan is done - the query returns in less than a second. With the table scan it takes over a minute.
When run through SQL*Plus, Oracle creates a plan including the table scan at a cost of 19030.
In another (.NET) application I used "Alter session set optimizer_index_caching=100" and "Alter session set optimizer_index_cost_adj=10" to try to force the optimizer to use the index. It creates the expected plan, but still does the table scan.
The query is in the form of:
"Select * from tab where indexedcol = something"
I'm using Oracle 9i 9.2.0.1.0.
Any ideas? I'm completely at a loss.
Hello
It sounds to me like this has something to do with bind variable peeking, which was introduced in 9i. If the predicate is
indexedcolumn = :bind_variable
the first time the query is parsed by Oracle, it will "peek" at the value in the bind variable, see what it is, and generate an execution plan based on this. That same plan will be used for matching SQL.
If you use a literal, it will generate the plan based on that, and will generate a separate plan for each literal you use (depending on the value of the cursor_sharing initialisation parameter).
This can cause there to be a difference between the execution plan seen when issuing EXPLAIN PLAN FOR, and the actual execution plan used when the query is run.
Have a look at the following example:
tylerd@DEV2> CREATE TABLE dt_test_bvpeek(id number, col1 number)
2 /
Table created.
Elapsed: 00:00:00.14
tylerd@DEV2> INSERT
2 INTO
3 dt_test_bvpeek
4 SELECT
5 rownum,
6 CASE
7 WHEN MOD(rownum, 5) IN (0,1,2,3) THEN
8 1
9 ELSE
10 MOD(rownum, 5)
11 END
12 FROM
13 dual
14 CONNECT BY
15 LEVEL <= 100000
16 /
100000 rows created.
Elapsed: 00:00:00.81
tylerd@DEV2> select count(*), col1 from dt_test_bvpeek group by col1
2 /
COUNT(*) COL1
80000 1
20000 4
2 rows selected.
Elapsed: 00:00:00.09
tylerd@DEV2> CREATE INDEX dt_test_bvpeek_i1 ON dt_test_bvpeek(col1)
2 /
Index created.
Elapsed: 00:00:00.40
tylerd@DEV2> EXEC dbms_stats.gather_table_stats( ownname=>USER,-
tabname=>'DT_TEST_BVPEEK',-
method_opt=>'FOR ALL INDEXED COLUMNS SIZE 254',-
cascade=>TRUE -
);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.73
tylerd@DEV2> EXPLAIN PLAN FOR
2 SELECT
3 *
4 FROM
5 dt_test_bvpeek
6 WHERE
7 col1 = 1
8 /
Explained.
Elapsed: 00:00:00.01
tylerd@DEV2> SELECT * FROM TABLE(DBMS_XPLAN.display)
2 /
PLAN_TABLE_OUTPUT
Plan hash value: 2611346395
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 78728 | 538K| 82 (52)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DT_TEST_BVPEEK | 78728 | 538K| 82 (52)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("COL1"=1)
13 rows selected.
Elapsed: 00:00:00.06
The execution plan for col1=1 was chosen because Oracle was able to see that, based on the statistics, col1=1 would return most of the rows in the table.
tylerd@DEV2> EXPLAIN PLAN FOR
2 SELECT
3 *
4 FROM
5 dt_test_bvpeek
6 WHERE
7 col1 = 4
8 /
Explained.
Elapsed: 00:00:00.00
tylerd@DEV2> SELECT * FROM TABLE(DBMS_XPLAN.display)
2 /
PLAN_TABLE_OUTPUT
Plan hash value: 3223879139
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 21027 | 143K| 74 (21)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| DT_TEST_BVPEEK | 21027 | 143K| 74 (21)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | DT_TEST_BVPEEK_I1 | 21077 | | 29 (28)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("COL1"=4)
14 rows selected.
Elapsed: 00:00:00.04
This time, the optimiser was able to see that col1=4 would return far fewer rows, so it chose to use an index. Look what happens, however, when we use a bind variable with EXPLAIN PLAN FOR - especially the number of rows the optimiser estimates will be returned from the table:
tylerd@DEV2> var an_col1 NUMBER
tylerd@DEV2> exec :an_col1:=1;
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
tylerd@DEV2>
tylerd@DEV2> EXPLAIN PLAN FOR
2 SELECT
3 *
4 FROM
5 dt_test_bvpeek
6 WHERE
7 col1 = :an_col1
8 /
Explained.
Elapsed: 00:00:00.01
tylerd@DEV2> SELECT * FROM TABLE(DBMS_XPLAN.display)
2 /
PLAN_TABLE_OUTPUT
Plan hash value: 2611346395
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 49882 | 340K| 100 (60)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DT_TEST_BVPEEK | 49882 | 340K| 100 (60)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("COL1"=TO_NUMBER(:AN_COL1))
13 rows selected.
Elapsed: 00:00:00.04
tylerd@DEV2>
tylerd@DEV2> exec :an_col1:=4;
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.01
tylerd@DEV2>
tylerd@DEV2> EXPLAIN PLAN FOR
2 SELECT
3 *
4 FROM
5 dt_test_bvpeek
6 WHERE
7 col1 = :an_col1
8 /
Explained.
Elapsed: 00:00:00.01
tylerd@DEV2> SELECT * FROM TABLE(DBMS_XPLAN.display)
2 /
PLAN_TABLE_OUTPUT
Plan hash value: 2611346395
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 49882 | 340K| 100 (60)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| DT_TEST_BVPEEK | 49882 | 340K| 100 (60)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("COL1"=TO_NUMBER(:AN_COL1))
13 rows selected.
Elapsed: 00:00:00.07
For both values of the bind variable, the optimiser has no idea what the value will be, so it has to make a calculation based on a formula. This results in it estimating that the query will return roughly half of the rows in the table, and so it chooses a full scan.
Now when we actually run the query, the optimiser can take advantage of bind variable peeking and have a look at the value the first time round and base the execution plan on that:
tylerd@DEV2> exec :an_col1:=1;
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
tylerd@DEV2> SELECT
2 *
3 FROM
4 dt_test_bvpeek
5 WHERE
6 col1 = :an_col1
7 /
80000 rows selected.
Elapsed: 00:00:10.98
tylerd@DEV2> SELECT prev_sql_id FROM v$session WHERE audsid=SYS_CONTEXT('USERENV','SESSIONID')
2 /
PREV_SQL_ID
9t52uyyq67211
1 row selected.
Elapsed: 00:00:00.00
tylerd@DEV2> SELECT
2 operation,
3 options,
4 object_name
5 FROM
6 v$sql_plan
7 WHERE
8 sql_id = '9t52uyyq67211'
9 /
OPERATION OPTIONS OBJECT_NAME
SELECT STATEMENT
TABLE ACCESS FULL DT_TEST_BVPEEK
2 rows selected.
Elapsed: 00:00:00.03
It saw that the bind variable value was 1 and that this would return most of the rows in the table, so it chose a full scan.
tylerd@DEV2> exec :an_col1:=4
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
tylerd@DEV2> SELECT
2 *
3 FROM
4 dt_test_bvpeek
5 WHERE
6 col1 = :an_col1
7 /
20000 rows selected.
Elapsed: 00:00:03.50
tylerd@DEV2> SELECT prev_sql_id FROM v$session WHERE audsid=SYS_CONTEXT('USERENV','SESSIONID')
2 /
PREV_SQL_ID
9t52uyyq67211
1 row selected.
Elapsed: 00:00:00.00
tylerd@DEV2> SELECT
2 operation,
3 options,
4 object_name
5 FROM
6 v$sql_plan
7 WHERE
8 sql_id = '9t52uyyq67211'
9 /
OPERATION OPTIONS OBJECT_NAME
SELECT STATEMENT
TABLE ACCESS FULL DT_TEST_BVPEEK
2 rows selected.
Elapsed: 00:00:00.01
Even though the value of the bind variable changed, the optimiser saw that it already had a cached version of the SQL statement along with an execution plan, so it used that rather than regenerating the plan. We can check the reverse of this by causing the statement to be invalidated and re-parsed - there are lots of ways, but I'm just going to rename the table:
Elapsed: 00:00:00.03
tylerd@DEV2> alter table dt_test_bvpeek rename to dt_test_bvpeek1
2 /
Table altered.
Elapsed: 00:00:00.01
tylerd@DEV2>
20000 rows selected.
Elapsed: 00:00:04.81
tylerd@DEV2> SELECT prev_sql_id FROM v$session WHERE audsid=SYS_CONTEXT('USERENV','SESSIONID')
2 /
PREV_SQL_ID
6ztnn4fyt6y5h
1 row selected.
Elapsed: 00:00:00.00
tylerd@DEV2> SELECT
2 operation,
3 options,
4 object_name
5 FROM
6 v$sql_plan
7 WHERE
8 sql_id = '6ztnn4fyt6y5h'
9 /
OPERATION OPTIONS OBJECT_NAME
SELECT STATEMENT
TABLE ACCESS BY INDEX ROWID DT_TEST_BVPEEK1
INDEX RANGE SCAN DT_TEST_BVPEEK_I1
3 rows selected.
80000 rows selected.
Elapsed: 00:00:10.61
tylerd@DEV2> SELECT prev_sql_id FROM v$session WHERE audsid=SYS_CONTEXT('USERENV','SESSIONID')
2 /
PREV_SQL_ID
6ztnn4fyt6y5h
1 row selected.
Elapsed: 00:00:00.01
tylerd@DEV2> SELECT
2 operation,
3 options,
4 object_name
5 FROM
6 v$sql_plan
7 WHERE
8 sql_id = '6ztnn4fyt6y5h'
9 /
OPERATION OPTIONS OBJECT_NAME
SELECT STATEMENT
TABLE ACCESS BY INDEX ROWID DT_TEST_BVPEEK1
INDEX RANGE SCAN DT_TEST_BVPEEK_I1
3 rows selected.
This time round, the optimiser peeked at the bind variable the first time the statement was executed and found it to be 4, so it based the execution plan on that and chose an index range scan. When the statement was executed again, it reused the plan it had already generated.
HTH
David -
Oracle not using correct index
Hi,
I have a fact table (a big table) and a dimension table representing dates.
My query is
select fdat.*, dd.dim_date from fdat_bitmap fdat, dim_dates dd where
fdat.dim_date_id = dd.dim_date_id
and dd.dim_date > TO_DATE('2011-12-20 00:00:00' , 'YYYY-MM-DD HH24:MI:SS');
and the corresponding plan is:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 76M| 9173M| 709K (1)| 02:21:51 |
|* 1 | HASH JOIN | | 76M| 9173M| 709K (1)| 02:21:51 |
|* 2 | INDEX FAST FULL SCAN| UI_DD_DATES_ID | 6951 | 97314 | 8 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | FDAT_BITMAP | 198M| 20G| 708K (1)| 02:21:39 |
Predicate Information (identified by operation id):
1 - access("FDAT"."DIM_DATE_ID"="DD"."DIM_DATE_ID")
2 - filter("DD"."DIM_DATE">TO_DATE(' 2011-12-20 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
17 rows selected
When I change the query to:
select fdat.*, dd.dim_date from fdat_bitmap fdat, dim_dates dd where
fdat.dim_date_id = dd.dim_date_id
and fdat.dim_date_id > 20111220;
Explain plan changes to:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 390K| 46M| 43958 (1)| 00:08:48 |
| 1 | MERGE JOIN | | 390K| 46M| 43958 (1)| 00:08:48 |
| 2 | TABLE ACCESS BY INDEX ROWID | FDAT_BITMAP | 390K| 41M| 43948 (1)| 00:08:48 |
| 3 | BITMAP CONVERSION TO ROWIDS| | | | | |
|* 4 | BITMAP INDEX RANGE SCAN | I_FDATB_DIM_DID | | | | |
|* 5 | SORT JOIN | | 6991 | 97874 | 9 (12)| 00:00:01 |
|* 6 | INDEX FAST FULL SCAN | UI_DD_DATES_ID | 6991 | 97874 | 8 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("FDAT"."DIM_DATE_ID">20111220)
filter("FDAT"."DIM_DATE_ID">20111220)
5 - access("FDAT"."DIM_DATE_ID"="DD"."DIM_DATE_ID")
filter("FDAT"."DIM_DATE_ID"="DD"."DIM_DATE_ID")
6 - filter("DD"."DIM_DATE_ID">20111220)
22 rows selected
My question is: why does the first query not result in a plan similar to the second one? How can I make it come up with a plan similar to the second one without changing the query?
Thanks,
-Rakesh
user12257218 wrote:
My query is
select fdat.*, dd.dim_date from fdat_bitmap fdat, dim_dates dd where
fdat.dim_date_id = dd.dim_date_id
and dd.dim_date > TO_DATE('2011-12-20 00:00:00' , 'YYYY-MM-DD HH24:MI:SS');
When I change the query to:
select fdat.*, dd.dim_date from fdat_bitmap fdat, dim_dates dd where
fdat.dim_date_id = dd.dim_date_id
and fdat.dim_date_id > 20111220;
My question is: why does the first query not result in a plan similar to the second one? How can I make it come up with a plan similar to the second one without changing the query?
To a very large extent this is because the two queries are not logically equivalent - unless you have a constraint in place that enforces the rule that:
dd.dim_date_id > 20111220 if, and only if, dd.dim_date > TO_DATE('2011-12-20 00:00:00' , 'YYYY-MM-DD HH24:MI:SS')
A constraint like: (dim_date_id = to_number(to_char(dim_date,'yyyymmdd'))) might help - provided both columns also have NOT NULL declarations (or "is not null" constraints), and provided that that's appropriate for the way your application works.
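A sketch of what such a constraint could look like (table and column names taken from the posts above; whether it is appropriate depends on the application, as noted):

```sql
-- Both columns must be NOT NULL for the optimizer to exploit the rule.
ALTER TABLE dim_dates MODIFY (dim_date NOT NULL);
ALTER TABLE dim_dates MODIFY (dim_date_id NOT NULL);

-- Tie the surrogate key to the date it encodes (yyyymmdd).
ALTER TABLE dim_dates ADD CONSTRAINT dim_dates_id_date_chk
  CHECK (dim_date_id = TO_NUMBER(TO_CHAR(dim_date, 'yyyymmdd')));
```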
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: <b><em>Oracle Core</em></b> -
Why Oracle not using the correct indexes after running table stats
I created an index on the table and ran a SQL statement. I found via the explain plan that the index was being used and the plan was cheaper, which is what I wanted.
Later I gathered stats on all tables and found, again via the explain plan, that the same SQL now uses a different index and a more costly plan. I don't know what is going on. Why is this happening? Any suggestions?
Thx
I just wanted to know the cost of using the index.
To gather histograms, use the following (method_opt is the argument that causes the package to collect histograms):
DBMS_STATS.GATHER_SCHEMA_STATS (
ownname => 'SCHEMA',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
block_sample => TRUE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO',
degree => 4,
granularity => 'ALL',
cascade => TRUE,
options => 'GATHER'
); -
Rconfig ora-01034 Oracle not Available
Hi all,
I have my database running on node rac1. I'm converting a non-RAC DB to a RAC DB.
While running rconfig, I'm getting the below error. I have pasted the rconfig log file contents here.
Kindly suggest a way to resolve it.
Thanks in advance.
ERROR=ORA-01034: ORACLE not available
[main] [ 2014-07-04 15:02:32.191 IST ] [SQLEngine.done:2189] Done called
[main] [ 2014-07-04 15:02:32.191 IST ] [SQLEngine.spoolOff:2035] Setting spool off = /ebiz/app/oracle/cfgtoollogs/rconfig/PROD/sqlLog
[main] [ 2014-07-04 15:02:32.191 IST ] [SQLEngine.done:2189] Done called
oracle.sysman.assistants.util.sqlEngine.SQLFatalErrorException: ORA-01034: ORACLE not available
at oracle.sysman.assistants.rconfig.RConfig.<init>(RConfig.java:62)
at oracle.sysman.assistants.rconfig.RConfig.main(RConfig.java:145)
[main] [ 2014-07-04 15:02:32.196 IST ] [RACConvertStep.execute:223] Returning result:Got Exception
[main] [ 2014-07-04 15:02:32.196 IST ] [RConfigEngine.execute:68] bAsyncJob=false
[main] [ 2014-07-04 15:02:32.215 IST ] [RConfigEngine.execute:77] Result=<?xml version="1.0" ?>
<RConfig version="1.1" >
<ConvertToRAC>
<Convert>
<Response>
<Result code="1" >
Got Exception
</Result>
<ErrorDetails>
ORA-01034: ORACLE not available
Operation Failed. Refer logs at /ebiz/app/oracle/cfgtoollogs/rconfig/rconfig_07_04_14_15_02_28.log for more details.
</ErrorDetails>
</Response>
</Convert>
</ConvertToRAC></RConfig>
Thanks & Regards,
Vel
Kindly reply.
Dear Hussein,
EBS Version - 12.1.1.13
DB Version - 11.2.0.3
OS - RHEL 5U8
I'm following one of my friend's documents; my database is now running on ASM storage.
While converting the non-RAC database to a RAC database using the ConvertToRAC script via rconfig, I'm getting the above error.
Kindly suggest a solution.
Thanks in advance.
Regards,
Vel -
Why is the flashback log size smaller than the archived log?
Hi all, why is the flashback log size smaller than the archived log?
Lonion wrote:
hi, all . why the flashback log'size smaller than the archived log ?
The two are different things.
Flashback log size depends on the parameter DB_FLASHBACK_RETENTION_TARGET - how much you want to keep.
An archived log file is a dumped copy of an online redo log file. It can be the size of the online redo log file or less, depending on how much of the online redo had been written when the switch occurred.
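To compare the two kinds of files on your own system (assuming Flashback Database is enabled), you can query:

```sql
-- Individual flashback log files and their sizes
SELECT name, bytes FROM v$flashback_database_logfile;

-- Overall flashback window and estimated space requirement
SELECT retention_target, flashback_size, estimated_flashback_size
FROM   v$flashback_database_log;

-- Archived log sizes, for comparison
SELECT name, blocks * block_size AS bytes FROM v$archived_log;
```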
Some more information:-
Flashback log files can be created only under the Flash Recovery Area (which must be configured before enabling the Flashback Database functionality). RVWR creates flashback log files in a directory named "FLASHBACK" under the FRA. The size of every generated flashback log file is again under Oracle's control. In the current Oracle environment, during normal database activity, flashback log files have a size of 8200192 bytes, which is very close to the current redo log buffer size. The size of a generated flashback log file can differ during shutdown and startup activities. Flashback log file sizes can also differ during write-intensive activity.
Source:- http://dba-blog.blogspot.in/2006/05/flashback-database-feature.html
-
InfoCube is not a Basis-InfoCube;Administration is not useful
Hi All,
On adding a new field to a DSO, we are unable to activate it. It gives the message shown below:
InfoCube is not a Basis-InfoCube;Administration is not useful
When we check the logs, they show that database tables were deleted.
Thanks,
Mayuri
Yes, but if I try to activate the DSO, it gives the message that there was an error activating the DDIC tables.
Thanks,
Mayuri -
VSS-00011: Connection to database instance <instance_name> failed.
Cause : The user account in which the Oracle VSS Writer Service is running does not have the DBA privileges to log in to the Oracle instance.
Action : Run the Oracle VSS Writer Service in a user account that can connect to the Oracle instance with DBA privileges.
I have assigned the ora_dba group to the user that runs the Oracle VSS Writer Service, which is the only Oracle-side solution, but I am still getting the above error. I was advised to raise the issue here as it is an OS issue. Please help.
The user account cannot access the Oracle Database instance. Also, how do you temporarily disable security software on the server?
Have you checked what I already asked for? "Try using the user account and access the Database Instance.
That will let you see if the problem is with the user account permissions or not."
If this does not help then you can contact Oracle as suggested by Dave.
This posting is provided AS IS with no warranties or guarantees , and confers no rights.
Ahmed MALEK
My Website Link
My Linkedin Profile
My MVP Profile -
Oracle 10g not releasing space after deleting records
Hello
We have a tool which uses Oracle 10g as its database. We have been deleting records from the tool, and the tablespace does not show any improvement in free space. When we took a dump and recreated the schema, all the unused space became visible: before recreating the schema, the database showed 17.5 GB; after recreating, hardly 1 GB, which is the real figure. Clearly, recreating the schema is not recommended in a production environment. Please suggest.
By performing DELETE operations you will not get back the used space. Extents are still allocated to the segments. To free unused extents you may have to do one of the following:
1) move table within the same tablespace
2) shrink the table
3) exp/imp
....... -
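Sketches of the first two options (the table and index names are placeholders; online shrink requires 10g and a tablespace using Automatic Segment Space Management):

```sql
-- Option 1: move the table within the same tablespace.
-- This rebuilds the segment but leaves dependent indexes UNUSABLE,
-- so rebuild them afterwards.
ALTER TABLE my_big_table MOVE;
ALTER INDEX my_big_table_idx REBUILD;

-- Option 2: online segment shrink (needs row movement enabled).
ALTER TABLE my_big_table ENABLE ROW MOVEMENT;
ALTER TABLE my_big_table SHRINK SPACE CASCADE;
```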
Asynch Hot Log mode does not use hot (online) redo logs
Version 10.2
We have just set up a test of the Asynch Hot Log replication according to Chap 16 of the Data Warehousing guide.
We can see data put into the change table. However, it seems that data gets written to the change table ONLY after a log switch. This would suggest that the capture process is not reading the online logs, but is only reading the archived logs.
I don't think this can be correct behavior because the docs indicate that Oracle "seamlessly switches" between the online and the archived redo logs.
Is there a flag or something to set to cause the online logs to be available to the capture process? Or is this a bug? Has anyone else observed this behavior?
Thanks for any insight.
-- Chris Curzon
According to the 10g Data Guard docs, section 2.5.1:
"Physical standby databases do not use an online redo log, because physical standby databases are not opened for read/write I/O."
Yes, those are used when the database is open.
You should not perform any changes on the standby. Even if those online redo log files exist, what difficulty have you seen?
They will be used whenever you perform a switchover or failover, so there is nothing to worry about here.
Is this a case of the STANDBY needing at least a notion of where the REDO logs will need to be should a failover occur, and if the files are already there, the standby database CONTROLFILE will hold onto them, as they are not doing any harm anyway?
If you think of it that way, you would be calling Oracle's own functionality harmful. When they are not used while the database is open, what harm do they do?
standby_file_management - for example, if you add a datafile on the primary, that information will be in the archive/redo logs; once they are applied on the standby, the datafile is added automatically when the parameter is set to AUTO. If it is MANUAL, Oracle creates an unnamed file in the $ORACLE_HOME/dbs location, and later you have to rename that file and perform recovery.
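For completeness, a sketch of setting the parameter just described on the standby:

```sql
-- Have datafiles added on the primary be created automatically on the standby
ALTER SYSTEM SET standby_file_management = AUTO SCOPE = BOTH;
```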
check this http://docs.oracle.com/cd/B14117_01/server.101/b10755/initparams206.htm
HTH. -
Why not use Redo log for consistent read
Oracle 11.1.0.7:
This might be a stupid question.
As I understand it, if a select was issued at 7:00 AM and the data that the select is going to read changed at 7:10 AM, Oracle will still return the data as it existed at 7:00 AM. For this, Oracle needs the data in the undo segments.
My question is: since redo also has past and current information, why can't the redo logs be used to retrieve that information? Why is undo required when redo already has all that information?
user628400 wrote:
Thanks. I get that piece, but isn't it the same problem with UNDO? It is overwritten as it expires, and there is no guarantee unless we specifically ask Oracle to guarantee the undo retention. I guess I am trying to understand whether UNDO was created for efficiency purposes, so that there is less performance overhead compared with reading and writing from redo.
And you also said:
>
If data was changed from 100 to 200, wouldn't both values be there in the redo logs? As I understand it:
1. Insert a row with value 100 at 7:00 AM and commit; 100 is written to the redo log.
2. Update the row to 200 at 8:00 AM and commit; 200 is written to the redo log.
So in essence both 100 and 200 are there in the redo logs, and if a select was issued at 7:00, the data could be read from the redo log too. Please correct me if I am understanding it incorrectly.

I guess you didn't understand the explanation I gave. It is not the old data itself that is kept in redo; it is the change vector of the undo that is kept, which is useful to "recover" the undo when it is gone, but not useful as such for a select statement. In an undo block, by contrast, the actual value is kept. Remember that an undo block is still just a block, which can contain data exactly like a normal block holding a table such as EMP. So redo does not hold 100 and 200 as such, but the change vectors of those operations, which are useful to recover the transaction based on their SCNs and would be read in that order as well. Reading the data from undo is quite simple for Oracle: the transaction table in the undo segment, which holds the entry for the transaction, knows where the old data is kept in the undo segment. You may have seen XIDUSN, XIDSLOT and XIDSEQ in the transaction id; these are nothing but the information about where the undo data is kept. So for reading old data, unlike redo, undo plays a good role.
About the expiry of undo: only INACTIVE undo extents are marked as expired. Active extents, which hold the records of an ongoing transaction, are never marked for it. You can come back after a lifetime and, if the undo is still there, your old data will have been kept safe by Oracle, since it is useful for multiversioning. Undo retention is about keeping the old data after the commit, something you need not worry about if you are on 11g and using the Total Recall feature!
HTH
Aman.... -
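The multiversioning described above can be observed directly with a flashback query, which rebuilds the old row image from undo rather than redo (an illustrative sketch only: the EMP table, SAL column, and timestamps are made-up names and values, not from this thread):

```sql
-- Illustrative example: at 7:00 AM the row holds 100; at 8:00 AM it is
-- updated to 200 and committed.
UPDATE emp SET sal = 200 WHERE empno = 7788;
COMMIT;

-- A query "as of" 7:00 AM reconstructs the old image from undo:
SELECT sal
FROM   emp AS OF TIMESTAMP
       TO_TIMESTAMP('2010-01-01 07:00:00', 'YYYY-MM-DD HH24:MI:SS')
WHERE  empno = 7788;

-- If the undo for that period has already been overwritten, this raises
-- ORA-01555 (snapshot too old), which is exactly what undo retention,
-- or RETENTION GUARANTEE on the undo tablespace, protects against.
```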
BR0372E File /oracle/&lt;SID&gt;/sapreorg/space&lt;SID&gt;.log has been already reported
Hello,
The backup offline failed, i get the error as bellow:
#SAVED.... 20111009.0135.10
BR0372E File /oracle/<SID>/sapreorg/space<SID>.log has been already reported by backup utility
BR0280I BRBACKUP time stamp: 2011-10-10 00.19.19
#PFLOG.... /oracle/<SID>/sapbackup/begypbev.aff
#SAVED.... 20111009.0135.12
BR0280I BRBACKUP time stamp: 2011-10-10 00.19.22
#PFLOG.... /oracle/<SID>/sapbackup/back<SID>.log
#SAVED.... 20111009.0135.13
BR0280I BRBACKUP time stamp: 2011-10-10 00.21.07
#PFLOG.... /oracle/<SID>/102_64/dbs/init<SID>.ora
#SAVED.... 20111009.0135.08
[Normal] From: OB2BAR_BACKINT@e108pae0 "<SID>" Time: 10/10/11 00:21:07
#Backint exits with the code (0).
BR0232E 5 of 6 files saved by backup utility
BR0280I BRBACKUP time stamp: 2011-10-10 00.21.07
BR0231E Backup utility call failed
BR0056I End of database backup: begypbev.aff 2011-10-10 00.21.07
BR0280I BRBACKUP time stamp: 2011-10-10 00.21.08
BR0054I BRBACKUP terminated with errors
could you please help me
thx in advance

Hi,
Please note that the error "BR0372E File ... has been already reported as saved by backup utility" is in fact not issued by the brbackup tool but by backint, which is a third-party tool.
As backint is a third-party tool, you will need to contact your NetBackup partner for assistance with a solution.
Regards,
Abhishek