Fast Refresh MVs and HASH_SJ Hint
I am building fast refresh MVs on a 3rd-party database to enable faster reporting. This is an interim solution whilst we build a new ETL process using CDC.
The source DB has no PKs, so I'm creating the MV logs with ROWID. When I refresh the MV (exec DBMS_MVIEW.REFRESH('<mview_name>')) and trace the session, I notice:
1. The refresh query joins back to the base table. I think this is necessary because there are two base tables and an MV change could be instigated from either table independently, so the changes might not be in the log.
2. However, in this case shouldn't it be possible to just join mv_log1 to base_table2 and ignore base_table1?
3. There is a HASH_SJ hint in this join, forcing a full table scan on the 7M-row base_table1.
4. I am doing one update and then refreshing the MV.
5. In production this table would see many tens of single-row inserts and updates per minute.
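For context, here is a minimal sketch of the kind of setup being described; table and column names are placeholders, not the real third-party schema:

```sql
-- Placeholder names; the real base tables have no primary keys,
-- so the logs must record ROWIDs.
CREATE MATERIALIZED VIEW LOG ON base_table1 WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON base_table2 WITH ROWID;

-- A fast-refreshable join MV must select the ROWID of every
-- detail table.
CREATE MATERIALIZED VIEW mv_report
  REFRESH FAST ON DEMAND
AS
SELECT t1.ROWID AS t1_rid, t2.ROWID AS t2_rid,
       t1.col1, t1.col2, t2.col3
FROM   base_table1 t1, base_table2 t2
WHERE  t1.col1 = t2.col1
AND    t1.col2 = t2.col2;

-- The refresh call that was traced:
EXEC DBMS_MVIEW.REFRESH('MV_REPORT', method => 'F')
```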
This is an excerpt from the tkprof'd trace file (I've hidden some table/column names):
FROM (SELECT MAS$.ROWID RID$
,MAS$.*
FROM <base_table1> MAS$
WHERE ROWID IN (SELECT /*+ HASH_SJ */
CHARTOROWID(MAS$.M_ROW$$) RID$
FROM <mview_log1> MAS$
WHERE MAS$.SNAPTIME$$ > sysdate-1/24 --:1
) AS OF SNAPSHOT (:2) JV$
,<base_table2> AS OF SNAPSHOT (:2) MAS$0
WHERE JV$.<col1>=MAS$0.<col1>
AND JV$.<col2>=MAS$0.<col2>
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 13.78 153.32 490874 551013 3 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 13.78 153.32 490874 551013 3 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 277 (<user>) (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS BY INDEX ROWID <base_table2>(cr=551010 pr=490874 pw=0 time=153321352 us)
3 NESTED LOOPS (cr=551009 pr=490874 pw=0 time=647 us)
1 VIEW (cr=551006 pr=490874 pw=0 time=153321282 us)
1 HASH JOIN RIGHT SEMI (cr=551006 pr=490874 pw=0 time=153321234 us)
2 TABLE ACCESS FULL <base_table1_mv_log> (cr=21 pr=0 pw=0 time=36 us)
7194644 TABLE ACCESS FULL <base_table1>(cr=550985 pr=490874 pw=0 time=158282171 us)
1 INDEX RANGE SCAN <base_table2_index> (cr=3 pr=0 pw=0 time=22 us)(object id 3495055)

As you can see, there are two rows in the MV log (one update: the old and new values), yet the full table scan on the base table ensures the MV refresh is far from fast.
I have tried this with refreshing on demand and on commit, with similar results. Implementing this would make the application impossibly slow.
I will search the knowledge base once I am given access
SQL>select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

Thank you for taking the time to read/respond.
Ben
Thanks for looking.
From the Knowledge Base it appears that Bug 6456841 might be the cause. I'll play around with the settings it suggests and see what happens.
the MV query is basically:
SELECT ...
FROM base_table1
,base_table2
WHERE base_table1.col1 = base_table2.col1
AND base_table1.col2 = base_table2.col2

When one row in base_table1 is updated there is a FTS on that table, rather than:
1. getting the data from the MV log or
2. a Nested loop join to base_table1 from its mv_log on rowid
This is due to Oracle's internal refresh code adding a HASH_SJ hint when joining the MV log to its base table.
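The workarounds usually suggested for this behaviour are version- and bug-specific, so verify them against the bug note and test on a sandbox before relying on them. One is the hidden parameter `_mv_refresh_use_stats`, which alters the hints the internal refresh code generates; another is gathering (and locking) statistics on the MV log while it is empty so the optimizer treats it as small:

```sql
-- Hidden parameter: change only under guidance from Oracle Support.
ALTER SESSION SET "_mv_refresh_use_stats" = TRUE;

-- Gather stats while the log is empty, then lock them so the
-- optimizer keeps treating the log as tiny. (This influences
-- costed refresh paths; it cannot override a hard-coded hint.)
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MLOG$_BASE_TABLE1')
EXEC DBMS_STATS.LOCK_TABLE_STATS(USER, 'MLOG$_BASE_TABLE1')

EXEC DBMS_MVIEW.REFRESH('<mview_name>', method => 'F')
```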
Ben
Similar Messages
-
Commit performance on table with Fast Refresh MV
Hi Everyone,
Trying to wrap my head around fast refresh performance and why I'm seeing (what I would consider) high disk/query numbers associated with updating the MV_LOG in a TKPROF.
The setup.
(Oracle 10.2.0.4.0)
Base table:
SQL> desc action;
Name Null? Type
PK_ACTION_ID NOT NULL NUMBER(10)
CATEGORY VARCHAR2(20)
INT_DESCRIPTION VARCHAR2(4000)
EXT_DESCRIPTION VARCHAR2(4000)
ACTION_TITLE NOT NULL VARCHAR2(400)
CALL_DURATION VARCHAR2(6)
DATE_OPENED NOT NULL DATE
CONTRACT VARCHAR2(100)
SOFTWARE_SUMMARY VARCHAR2(2000)
MACHINE_NAME VARCHAR2(25)
BILLING_STATUS VARCHAR2(15)
ACTION_NUMBER NUMBER(3)
THIRD_PARTY_NAME VARCHAR2(25)
MAILED_TO VARCHAR2(400)
FK_CONTACT_ID NUMBER(10)
FK_EMPLOYEE_ID NOT NULL NUMBER(10)
FK_ISSUE_ID NOT NULL NUMBER(10)
STATUS VARCHAR2(80)
PRIORITY NUMBER(1)
EMAILED_CUSTOMER TIMESTAMP(6) WITH LOCAL TIME
ZONE
SQL> select count(*) from action;
COUNT(*)
1388780

The MV log and MV were created:
create materialized view log on action with sequence, rowid
(pk_action_id, fk_issue_id, date_opened)
including new values;
-- Create materialized view
create materialized view issue_open_mv
build immediate
refresh fast on commit
enable query rewrite as
select fk_issue_id issue_id,
count(*) cnt,
min(date_opened) issue_open,
max(date_opened) last_action_date,
min(pk_action_id) first_action_id,
max(pk_action_id) last_action_id,
count(pk_action_id) num_actions
from action
group by fk_issue_id;
exec dbms_stats.gather_table_stats('tg','issue_open_mv')
SQL> select table_name, last_analyzed from dba_tables where table_name = 'ISSUE_OPEN_MV';
TABLE_NAME LAST_ANAL
ISSUE_OPEN_MV 15-NOV-10
(Note: the table was created a couple of days ago.)
SQL> exec dbms_mview.explain_mview('TG.ISSUE_OPEN_MV');
CAPABILITY_NAME P REL_TEXT MSGTXT
PCT N
REFRESH_COMPLETE Y
REFRESH_FAST Y
REWRITE Y
PCT_TABLE N ACTION relation is not a partitioned table
REFRESH_FAST_AFTER_INSERT Y
REFRESH_FAST_AFTER_ANY_DML Y
REFRESH_FAST_PCT N PCT is not possible on any of the detail tables in the mater
REWRITE_FULL_TEXT_MATCH Y
REWRITE_PARTIAL_TEXT_MATCH Y
REWRITE_GENERAL Y
REWRITE_PCT N general rewrite is not possible or PCT is not possible on an
PCT_TABLE_REWRITE N ACTION relation is not a partitioned table
13 rows selected.

Fast refresh works fine, and the log is kept quite small.
SQL> select count(*) from mlog$_action;
COUNT(*)
0

When I update one row in the base table:
var in_action_id number;
exec :in_action_id := 398385;
UPDATE action
SET emailed_customer = SYSTIMESTAMP
WHERE pk_action_id = :in_action_id
AND DECODE(emailed_customer, NULL, 0, 1) = 0
commit;

I see the following happen via tkprof...
INSERT /*+ IDX(0) */ INTO "TG"."MLOG$_ACTION" (dmltype$$,old_new$$,snaptime$$,
change_vector$$,sequence$$,m_row$$,"PK_ACTION_ID","DATE_OPENED",
"FK_ISSUE_ID")
VALUES
(:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c,
sys.cdc_rsid_seq$.nextval,:m,:1,:2,:3)
call count cpu elapsed disk query current rows
Parse 1 0.00 0.01 0 0 0 0
Execute 2 0.00 0.03 4 4 4 2
Fetch 0 0.00 0.00 0 0 0 0
total 3 0.00 0.04 4 4 4 2
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
2 SEQUENCE CDC_RSID_SEQ$ (cr=0 pr=0 pw=0 time=28 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 4 0.01 0.01
update "TG"."MLOG$_ACTION" set snaptime$$ = :1
where
snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
call count cpu elapsed disk query current rows
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.94 5.36 55996 56012 1 2
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.94 5.38 55996 56012 1 2
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
0 UPDATE MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=5364554 us)
2 TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=46756 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 3529 0.02 4.91
select dmltype$$, max(snaptime$$)
from
"TG"."MLOG$_ACTION" where snaptime$$ <= :1 group by dmltype$$
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.70 0.68 55996 56012 0 1
total 4 0.70 0.68 55996 56012 0 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 SORT GROUP BY (cr=56012 pr=55996 pw=0 time=685671 us)
2 TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=1851 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 3529 0.00 0.38
delete from "TG"."MLOG$_ACTION"
where
snaptime$$ <= :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.71 0.70 55946 56012 3 2
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.71 0.70 55946 56012 3 2
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
0 DELETE MLOG$_ACTION (cr=56012 pr=55946 pw=0 time=702813 us)
2 TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55946 pw=0 time=1814 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 3530 0.00 0.39
db file sequential read 33 0.00 0.00
********************************************************************************

Could someone explain why the SELECT/UPDATE/DELETE statements against MLOG$_ACTION are so "expensive" when there should only be two rows (the old and new values) in that log after an update? Is there anything I could do to improve the performance of the update?
Let me know if you require more info... I would be glad to provide it.

Brilliant. Thanks.
I owe you a beverage.
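For anyone else hitting this: cr=56012 to touch two rows is the classic high-water-mark symptom. At some point the log segment grew to roughly 56,000 blocks, and conventional deletes never lower the HWM, so every subsequent full scan of MLOG$_ACTION still reads all of those blocks. A sketch of the usual reset (only truncate an MV log when it holds no unpropagated changes, otherwise dependent MVs will need a complete refresh):

```sql
-- Reset the high-water mark of the (empty) MV log:
TRUNCATE TABLE mlog$_action;

-- Alternative on 10g+ with ASSM tablespaces: shrink in place.
ALTER TABLE mlog$_action ENABLE ROW MOVEMENT;
ALTER TABLE mlog$_action SHRINK SPACE;
```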
SQL> set autotrace on
SQL> select count(*) from MLOG$_ACTION;
COUNT(*)
0
Execution Plan
Plan hash value: 2727134882
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 12309 (1)| 00:02:28 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| MLOG$_ACTION | 1 | 12309 (1)| 00:02:28 |
Note
- dynamic sampling used for this statement
Statistics
4 recursive calls
0 db block gets
56092 consistent gets
56022 physical reads
0 redo size
410 bytes sent via SQL*Net to client
400 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> truncate table MLOG$_ACTION;
Table truncated.
SQL> select count(*) from MLOG$_ACTION;
COUNT(*)
0
Execution Plan
Plan hash value: 2727134882
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| MLOG$_ACTION | 1 | 2 (0)| 00:00:01 |
Note
- dynamic sampling used for this statement
Statistics
1 recursive calls
1 db block gets
6 consistent gets
0 physical reads
96 redo size
410 bytes sent via SQL*Net to client
400 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Just for fun... a comparison of the TKPROF output.
Before:
update "TG"."MLOG$_ACTION" set snaptime$$ = :1
where
snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
call count cpu elapsed disk query current rows
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.94 5.36 55996 56012 1 2
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.94 5.38 55996 56012 1 2
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
0 UPDATE MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=5364554 us)
2 TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=46756 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 3529 0.02 4.91
********************************************************************************

After:
update "TG"."MLOG$_ACTION" set snaptime$$ = :1
where
snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 7 1 2
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 7 1 2
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
0 UPDATE MLOG$_ACTION (cr=7 pr=0 pw=0 time=79 us)
2 TABLE ACCESS FULL MLOG$_ACTION (cr=7 pr=0 pw=0 time=28 us)
******************************************************************************** -
Materialized view - fast refresh not working on joins
Hello,
Is it true that a fast refresh materialized view is not possible when I do a UNION of two tables, even though both tables have materialized view logs?

There are a number of restrictions on fast refresh; read "Materialized View Fast Refresh Restrictions and ORA-12052" [ID 222843.1].
edit: his royal kyteness has posted on this before
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6506967884606
Edited by: deebee_eh on Apr 25, 2012 3:13 PM -
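For reference, a plain UNION (which deduplicates) can never be fast refreshed, but a UNION ALL MV can be, subject to restrictions: each branch must select the table's ROWID (or primary key) and a distinct constant "UNION ALL marker" column. An illustrative sketch with made-up table names:

```sql
-- Hypothetical tables t1/t2, each with a ROWID MV log already created.
CREATE MATERIALIZED VIEW mv_union
  REFRESH FAST ON DEMAND
AS
SELECT t1.ROWID AS rid, 1 AS umarker, t1.col1 FROM t1
UNION ALL
SELECT t2.ROWID AS rid, 2 AS umarker, t2.col1 FROM t2;
```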
Issue with materialized view and fast refresh between Oracle 10g and 11g
Hi all,
I've hit a problem when trying to create a fast-refreshable materialized view.
I've got two databases, one 10.2.0.10, another 11.2.0.1.0, running on 32-bit Windows. Both are enterprise edition, and I'm trying to pull data from the 10g one into the 11g one. I can happily query across the database link from 11g to 10g, and can use complete refresh with no problem except the time it takes.
On the 10g side, I've got tables with primary keys and m.v. logs created, the logs being of this form ...
CREATE MATERIALIZED VIEW LOG ON table WITH PRIMARY KEY INCLUDING NEW VALUES
On the 11g side, when I try to create an m.v. against that ...
CREATE MATERIALIZED VIEW mv_table REFRESH FAST WITH PRIMARY KEY AS SELECT col1, col2 FROM table@dblink;
... I get an ORA-12028 error (materialized view type is not supported by master site).
After running the EXPLAIN_MVIEW procedure it shows this;
REFRESH_FAST_AFTER_INSERT not supported for this type mv by Oracle version at master site
A colleague has managed to build a fast-refresh m.v. from the same source database, but pulling to a different one than I'm using; his target is also 10g, like the (common) source, so I've no idea why I'm getting the 'not supported' message whilst he isn't.
I've been able, on previous projects, to do exactly what I'm trying to achieve but on those someone with more knowledge than me has configured the database!
I'm now stumped. I'm also no DBA but despite that it's been left to me to install the new 11g database on the 32-bit Windows server from scratch, so there are probably a couple of things I'm missing. It's probably something really obvious but I don't really know where to look now.
If anyone can give me any pointers at all, I'd be really grateful. This question is also duplicated in the Replication forum but hasn't had any replies as yet, so I'm reproducing it here in hope!
Thanks in advance,
Steve

Hi Steve,
You should have a look at Metalink, Doc ID 1059547.1.
If that does not help, there may be something else in Master Note ID 1353040.1.
Best regards
Peter -
Snapshot Refresh (How to stop COMPLETE refresh and run FAST refresh)?
Hi,
I have a snapshot refresh executed as COMPLETE which is taking very long. When I try to kill this and try to run a FAST I get:
ERROR at line 1:
ORA-12057: materialized view "PORTALSNP1"."V21_BILLING_ACCOUNT" is INVALID and must complete refresh
How can I resolve this to stop the COMPLETE refresh altogether and be able to run the FAST refresh.
Also is there a way to get the time it will take to complete the running snapshot refresh?
Please and thankYou!
Regards,
A

You don't resolve it ... you drop the materialized view. Then you create a materialized view log. Then a properly coded MV.
http://www.morganslibrary.org/library.html
bookmark this link
then look up "Materialized Views" and "Materialized View Logs"
The log must be created first. -
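In other words, the order of operations matters. A hedged sketch with placeholder object and column names:

```sql
-- 1. Drop the invalid MV.
DROP MATERIALIZED VIEW v21_billing_account;

-- 2. Create the log on the master table FIRST.
CREATE MATERIALIZED VIEW LOG ON billing_account WITH PRIMARY KEY;

-- 3. Recreate the MV as fast refreshable.
CREATE MATERIALIZED VIEW v21_billing_account
  REFRESH FAST ON DEMAND
AS
SELECT account_id, account_name   -- placeholder column list
FROM   billing_account;
```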
How can I fast refresh the materialized view !!
I created an MV based on some tables in order to improve query speed,
but the MV I created fails to refresh fast
because the same table appears twice in the FROM clause:
jcdm jc1, jcdm jc2
create materialized view temp_mv
nologging
pctfree 0
storage (initial 2048k next 2048k pctincrease 0)
parallel
build immediate
refresh force
on demand
as
select
TAB_GSHX.rowid hx_rid,
TAB_GSHD.rowid hd_rid ,
JC1.rowid jc1_rid ,
JC2.rowid jc2_rid ,
YSHD_ID HXID,
JC1.JCDM QFD,
JC2.JCDM JLD
FROM
TAB_GSHX,
TAB_GSHD,
jCDM JC1,
JCDM JC2
WHERE
YSHD_ID=YSHX_ID
AND YSHD_QFD=JC1.JBJC_ID
AND YSHD_JLD=JC2.JBJC_ID
AND TO_CHAR(YSHX_time,'YYYYMMDD')='20030101'
The MSGTXT column of the table MV_CAPABILITIES_TABLE shows:
"the multiple instances of the same table or view" and "one or more joins present in mv".
How can I get the above temp_mv to fast refresh?
Thanks.

Lianjun,
If you are using Oracle9i, there is a procedure that can help you set up the materialized view. If some option isn't working, it gives you a hint as to why.
The procedure is dbms_mview.explain_mview.
Take a look at the documentation on how to use it. (The package is explained in the Oracle9i DWH guide.)
Hope this helps
With kind regards,
Bas Roelands -
Error While Creating Fast Refresh Materialized view.
Table Scripts:
CREATE TABLE CONTRACT_MASTER
CONTRACT_SEQ NUMBER(10) NOT NULL,
PDN CHAR(5) NOT NULL,
APPID NUMBER(10) NOT NULL,
CONTRACT_LOB_DESC VARCHAR2(20) NOT NULL,
CUSTOMER_NAME VARCHAR2(57) NOT NULL,
CONTRACT_DT DATE NOT NULL,
CONTRACT_RECD_DT DATE NOT NULL,
HELD_OFFERING_DT DATE NOT NULL,
DRAFT_AMT NUMBER(15,2) NOT NULL,
STATUS_DESC VARCHAR2(20) NOT NULL,
GIF_UPLOAD_TM TIMESTAMP NOT NULL
CREATE table CONTRACT_COMMENTS
CONTRACT_COMMENTS_SEQ NUMBER(10) NOT NULL,
APPID NUMBER(10) NOT NULL,
COMMENTS VARCHAR2(1000) NOT NULL,
GIF_UPLOAD_TM TIMESTAMP NOT NULL
Constraints on tables
ALTER TABLE CONTRACT_MASTER ADD
CONSTRAINT XPKCONTRACT_MASTER PRIMARY KEY (CONTRACT_SEQ) USING INDEX ;
ALTER TABLE CONTRACT_COMMENTS ADD
CONSTRAINT XPKCONTRACT_COMMENTS PRIMARY KEY (CONTRACT_COMMENTS_SEQ) USING INDEX ;
alter table CONTRACT_MASTER add CONSTRAINT XUIAPPCONTRACT_MASTER UNIQUE (APPID) USING INDEX;
CREATE INDEX XUIAPPCONTRACT_COMMENTS ON
CONTRACT_COMMENTS(APPID) ;
Materialized View Creation:
CREATE MATERIALIZED VIEW LOG ON CONTRACT_MASTER WITH PRIMARY KEY,ROWID;
CREATE MATERIALIZED VIEW LOG ON CONTRACT_COMMENTS WITH PRIMARY KEY, ROWID;
CREATE MATERIALIZED VIEW MV_CONTRACT_COMMENTS_HELDOFFERING
REFRESH FAST
ENABLE QUERY REWRITE AS
SELECT APPID,COMMENTS FROM CONTRACT_COMMENTS WHERE APPID IN (
SELECT APPID FROM CONTRACT_MASTER WHERE STATUS_DESC = 'Held Offering' )
Errors generated:
ERROR at line 4:
ORA-12015: cannot create a fast refresh materialized view from a complex query
After that I changed the query, but still it was not created:
CREATE MATERIALIZED VIEW MV_CONT_COMMNTS_HELDOFFERNG
REFRESH FAST
ENABLE QUERY REWRITE AS
SELECT CONTRACT_COMMENTS_SEQ,c.APPID,COMMENTS
FROM CONTRACT_COMMENTS c,CONTRACT_MASTER m
WHERE m.APPID = c.APPID and m.STATUS_DESC = 'Held Offering'
Even then an error was displayed:
SQL> CREATE MATERIALIZED VIEW MV_CONT_COMMNTS_HELDOFFERNG
2 REFRESH FAST
3 ENABLE QUERY REWRITE AS
4 SELECT CONTRACT_COMMENTS_SEQ,c.APPID,COMMENTS
5 FROM CONTRACT_COMMENTS c,CONTRACT_MASTER m
6 WHERE m.APPID = c.APPID and m.STATUS_DESC = 'Held Offering';
FROM CONTRACT_COMMENTS c,CONTRACT_MASTER m
ERROR at line 5:
ORA-12052: cannot fast refresh materialized view GSSIO.MV_CONT_COMMNTS_HELDOFFERNG
Again I have followed "Analyzing Materialized Views for Fast Refresh" as follows:
1: exec dbms_mview.explain_mview('MV_CONT_COMMNTS_HELDOFFERNG');
2: SELECT capability_name, possible, SUBSTR(msgtxt,1,60) AS msgtxt
FROM mv_capabilities_table
WHERE capability_name like '%FAST%';
Output is :
CAPABILITY_NAME P MSGTXT
REFRESH_FAST N
REFRESH_FAST_AFTER_INSERT N the SELECT list does not have the rowids of all the detail t
REFRESH_FAST_AFTER_ONETAB_DML N see the reason why REFRESH_FAST_AFTER_INSERT is disabled
REFRESH_FAST_AFTER_ANY_DML N see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT N PCT is not possible on any of the detail tables in the mater
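For what it's worth, the capability report itself points at the likely fix: a fast-refreshable join MV must select the ROWID of every detail table (the logs above were already created WITH ROWID). An untested sketch of the corrected view:

```sql
CREATE MATERIALIZED VIEW mv_cont_commnts_heldofferng
  REFRESH FAST
  ENABLE QUERY REWRITE
AS
SELECT c.ROWID AS c_rid, m.ROWID AS m_rid,   -- detail-table rowids
       c.contract_comments_seq, c.appid, c.comments
FROM   contract_comments c, contract_master m
WHERE  m.appid = c.appid
AND    m.status_desc = 'Held Offering';
```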
Please suggest what to do to implement a fast refresh materialized view for this.

Edited by: dba on Sep 20, 2010 12:00 AM

If the two MVs have to be consistent with each other as of a specific time, put them into a Refresh Group and refresh them with DBMS_MVIEW.REFRESH.
If an MV is dependent on another, use DBMS_MVIEW.REFRESH_DEPENDENT
See the "Oracle® Database PL/SQL Packages and Types Reference" documentation pages for DBMS_MVIEW.
Hemant K Chitale
http://hemantoracledba.blogspot.com --- this is NOT a documentation site.
Edited by: Hemant K Chitale on Sep 20, 2010 5:19 PM -
Enabling materialized view for fast refresh method
In AWM when i select refresh method as FAST and enable the materialized view, am getting the below error:
"Refresh method fast requires materialized view logs and a previously run complete refresh of the cube mv". i need this Fast refresh of a cube materialized view, so that it performs an incremental refresh and re-aggregation of only changed rows in the source table.
Can anyone help me with this?

If you want the cube to hold data even after it has been deleted from the relational table, then you should disable the MV on the cube.
Synchronization with the source table is determined by the default "cube script".
<li>CLEAR, LOAD, SOLVE : This will synchronize your cube with the source table. It is a requirement for MVs.
<li>LOAD, SOLVE: This will allow your cube to contain data even after it has been removed from the source table. It sounds like you want this.
Cube builds can be "incremental" in one of two ways.
(1) You can have an "incremental LOAD" if the source table contains only the changed rows or if you use MV "FAST" or "PCT" refresh. Since you can't use MVs, you would need a source table with only the changed rows.
(2) You will have an "incremental SOLVE" (a.k.a. "incremental aggregation") if there is no "CLEAR VALUES" or "CLEAR AGGREGATES" step and various other conditions hold.
To force a "complete LOAD" with an "incremental SOLVE", keep all rows in your source table and run the following build script:
LOAD, SOLVE
You could also run "CLEAR LEAVES, LOAD, SOLVE" to synchronize the cube with the table.
To force an "incremental LOAD" with a "complete SOLVE", make the source table contain only the changed rows and then run:
CLEAR AGGREGATES, LOAD, SOLVE
or
LOAD, CLEAR AGGREGATES, SOLVE
Finally, if you want both LOAD and SOLVE to be incremental, make the source table contain only the changed rows and run:
LOAD, SOLVE
Materialized view not fast refreshable
I need to calculate a moving total in a materialized view - that is, calculate subtotal for rows starting at current row up to the first row.
I am using a window function, as shown below.
The problem is, MVs using window functions are not fast refreshable, and I need this MV to be fast refreshable.
Is there any other way of getting the moving total without using a window function?
CREATE MATERIALIZED VIEW BIL_BI_OPDTL_MV_11_TST
--REFRESH FAST ON DEMAND
AS
SELECT
fact.txn_time_id txn_time_id
, fact.effective_time_id close_time_id
, prntmv.parent_group_id
, jgd.parent_group_id sales_group_id
, fact.lead_id
, fact.opp_open_status_flag
, fact.win_loss_indicator
, fact.forecast_rollup_flag
,fact.rev_flag1
, SUM(fact.rev_flag1) OVER (PARTITION BY
fact.effective_time_id , prntmv.parent_group_id
, fact.lead_id, fact.opp_open_status_flag, fact.win_loss_indicator, fact.forecast_rollup_flag
, to_number(to_char(opty_creation_date,'J'))
, to_number(to_char(opty_conversion_date,'J'))
ORDER BY fact.lead_id
ROWS UNBOUNDED PRECEDING) moving_total
, to_number(to_char(opty_creation_date,'J')) opty_creation_time_id
, to_number(to_char(opty_conversion_date,'J')) opty_conversion_time_id
,count(*) cnt
FROM bil.bil_bi_opty_ld_f_tst fact, bil_bi_rs_grp_mv rgmv, bil_bi_rs_grp_mv prntmv,
jtf.jtf_rs_groups_denorm jgd
WHERE rgmv.sales_group_id = fact.sales_group_id
and rgmv.salesrep_id = fact.salesrep_id
AND fact.sales_group_id = jgd.group_id
AND prntmv.sales_group_id = jgd.parent_group_id
AND prntmv.umarker in ('top_groups', 'groups')
AND jgd.active_flag='Y'
group by
fact.txn_time_id,
fact.effective_time_id
, prntmv.parent_group_id
, jgd.parent_group_id
, fact.lead_id
, fact.opp_open_status_flag
, fact.win_loss_indicator
, fact.forecast_rollup_flag
, fact.rev_flag1
, to_number(to_char(opty_creation_date,'J'))
, to_number(to_char(opty_conversion_date,'J'))

Please post your question at
General Database Discussions
for faster response -
Materialized View Fast Refresh. Quick Question
Hi all
I have an assumption that I would love to have validated quickly if possible.
I am assuming that once a refresh operation is kicked off, any changes to the MLOG$ log table subsequent to the start of the refresh will not be picked up on that refresh cycle.
I'm basing this on my understanding of cursor consistency logic, but if someone could validate or refute the above then it would be much appreciated.
Many thanks
Chris

MV log entries are deleted when all registered materialized views that depend on them have refreshed.
For example: Suppose you have a table T and materialized views A and B that are fast-refresh and depend on the MV log for T.
On Monday you refresh MV A. The log entries are not purged because B has not yet incorporated them.
On Tuesday you refresh MV B. At this point the log entries are purged up to the time on Monday that A was refreshed, because all MVs have processed them.
On Wednesday you refresh MV A again. At this point the log entries are purged up to the time on Tuesday that B was refreshed, and so on. -
Materialized view - fast refresh logs
I understand that there will be a Change Data Capture(CDC) log maintained internally for materialized view fast refresh to happen.
My question is, will this log get purged once the changes are applied to corresponding materialized view or will it persist with further changes getting appended to it ?
What would be the size of the change log for 150 million records covering 40 base tables?

MV log entries are deleted when all registered materialized views that depend on them have refreshed.
For example: Suppose you have a table T and materialized views A and B that are fast-refresh and depend on the MV log for T.
On Monday you refresh MV A. The log entries are not purged because B has not yet incorporated them.
On Tuesday you refresh MV B. At this point the log entries are purged up to the time on Monday that A was refreshed, because all MVs have processed them.
On Wednesday you refresh MV A again. At this point the log entries are purged up to the time on Tuesday that B was refreshed, and so on. -
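A practical corollary: if one registered MV stops refreshing (or is dropped remotely without being deregistered), the log grows without bound. A hedged sketch of how to inspect registrations and force a purge; 'T' stands in for the master table name:

```sql
-- Which MVs are registered against this site's masters:
SELECT owner, name, snapshot_site
FROM   dba_registered_snapshots;

-- Purge entries needed only by the single least recently
-- refreshed MV (that MV will then need a complete refresh):
EXEC DBMS_MVIEW.PURGE_LOG(master => 'T', num => 1, flag => 'DELETE')
```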
Regd FAST refresh option in a Materialized view
Hi All,
I am using a pipelined function in which I create a table of records and a few cursors to fetch data from various tables.
This PL/SQL table is then used to construct a materialized view.
Creation of the materialized view works fine, but not with the FAST refresh option. It gives the error "cannot create a FAST refresh materialized view from a complex query."
The query which I have used for the view creation is
CREATE MATERIALIZED VIEW CUSTOM.ABC
PCTFREE 0
BUILD IMMEDIATE
REFRESH FAST ON DEMAND
AS
SELECT A.Number,
A.Guarantors_Number,
A.Guarantors_Name,
A.Personal_Garantee_PCNT,
A.Company, LG.Source_System,
A.Type_of_Info,
A.File_Gen_Date,
A.Periodicity
FROM
TABLE(CUSTOM.CDM_LG_PACK_PF.CDM_LG_FUNC) A;
where CDM_LG_PACK_PF is the package and CDM_LG_FUNC is the pipeline function I have written to fetch all the records.
Please help me on how can I do a FAST refresh on this materialized view.
Thanks in advance,
Gaurav

Welcome to the forum!
FAST refresh doesn't mean that the operation is fast (time wise), it means it's an incremental refresh.
If you have a complex query, you can't use a FAST refresh - that's what the exception tells you. -
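Since the pipelined-function query is irreducibly complex, one common fallback (an assumption on my part, not the only option) is to keep the MV as REFRESH COMPLETE and schedule the refresh at an interval the application can tolerate:

```sql
-- Schedule an hourly complete refresh of the MV; job name is
-- hypothetical.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_CUSTOM_ABC',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[BEGIN DBMS_MVIEW.REFRESH('CUSTOM.ABC', method => 'C'); END;]',
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE);
END;
/
```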
Fast Refresh mview performance
Hi,
I'm actually facing with Fast Refresh Mview performace problems and i would like to know what is the possible improvments for the fast refresh procedure:
- base table is 1 500 000 000 rows partitions by day , subparttion by has (4)
- mlog partition by hash (4) with indexes on pk and snaptime
- mview partition by day subpartition by hash (4)
10 000 000 insertions / day in base table/mlog
What improvment or indexes can i add to improve fast refresh ?
Thanks for help.

Hi,
Which DB version are you using?
Did you have a look at the MV refresh via Partition Change Tracking (PCT)?
http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/advmv.htm#sthref575
If it's possible to use PCT, it would probably improve a lot the performance of the refresh of your MV.
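Since the base table here is already partitioned by day, PCT is worth testing. Roughly, the MV must carry the detail table's partition key (or a partition marker) so that only changed partitions are reconsidered on refresh. An illustrative sketch with assumed names:

```sql
-- Assumes base_tab is range-partitioned on trade_day and has an MV log.
CREATE MATERIALIZED VIEW mv_daily_totals
  REFRESH FAST ON DEMAND
AS
SELECT trade_day,                 -- partition key enables PCT
       COUNT(*)      AS cnt,
       COUNT(amount) AS cnt_amount,
       SUM(amount)   AS total_amount
FROM   base_tab
GROUP BY trade_day;
```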
Regards
Maurice -
Hello,
We run a fast refresh for a particular mview. I have the mview log created on the primary db (base table) and the mview created on another reporting database.
The fast refresh seems to be fine throughout the day but at the end of a day it fails to do the fast refresh and i'm forced to do a complete refresh everyday which fixes it but this cycle continues until the end of day when it breaks again.
If you notice in the next date column the refresh date turns to 01-JAN-00, not sure why this happens and this is causing the problem.
JOB REFRESH_JOB LAST_DATE NEXT_DATE NEXT_SEC BROKEN
70 "CON_SYS"."MV_BRICKNW" 04-AUG-10 01-JAN-00 00:00:00 Y
75 "CON_SYS"."MV_BRICKNWCAN" 04-AUG-10 01-JAN-00 00:00:00 Y
Could someone please tell me what the problem is.

The Refresh Date gets reset when the job is Broken (which you see as BROKEN='Y'). The job is marked Broken when 16 consecutive attempts have failed (which you would see in FAILURES).
You have to find out why it fails -- fails 16 times. There seems to be an issue with the date/time logic OR one of the source objects probably gets dropped or becomes unavailable close to the end of the day.
Hemant K Chitale -
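Once the root cause is fixed, the broken flag has to be cleared by hand or the job will never run again. A sketch using the job numbers from the listing above:

```sql
-- Inspect the failure count first:
SELECT job, failures, broken, what
FROM   dba_jobs
WHERE  job IN (70, 75);

-- Clear the broken flag and reschedule:
EXEC DBMS_JOB.BROKEN(70, FALSE, SYSDATE)
EXEC DBMS_JOB.BROKEN(75, FALSE, SYSDATE)
COMMIT;   -- DBMS_JOB changes take effect on commit
```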
Mview fast refresh taking longer than complete refresh
Hi there,
We have an mview which takes about an hour to refresh in complete mode. We created all the mview logs and refresh via the fast method, but this takes longer.
When all the mview logs are empty it works fine as a fast refresh and is far, far quicker, as you would expect. I tried updating one of the records on a base table called todays_date, which simply stores today's date, to get around the fast refresh restriction with SYSDATE.
Anybody any idea why so slow?
Also wondering if better approach. Thought about just having an aggregate table and processing the inserts in MLOG$_TRANS_TABLE via a merge statement. Also heard about change data capture.
Any thoughts/suggestions welcome.
Many Thanks

Well, search with your post subject on this forum. There are a lot of threads discussing the same.