Snapshot fast refresh performance

I have a master table with a snapshot log
on the primary database (Oracle 8.1.6 on
AIX). There are 20,000 rows in the snapshot
log, and a fast refresh to a snapshot database
on the same setup takes about 6 hours.
I am looking for ways to tune the snapshot to decrease the refresh time.

The refresh time depends primarily on the speed of the remote connection. It is also not so much the number of rows as the size of the rows. As for tuning the refresh, check whether the high-water mark (HWM) of the master snapshot log has grown very large: the refresh performs a full table scan of the log when it determines which rows it needs to refresh, so a bloated log segment makes even a small refresh slow.
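A quick way to check this (a minimal sketch; MLOG$_MY_MASTER and the owner SCOTT are placeholder names, substitute your own snapshot log):

    -- rows currently waiting to be propagated
    select count(*) from scott.mlog$_my_master;

    -- space actually allocated to the log segment: many blocks but few rows
    -- indicates a high-water mark far beyond the live data
    select blocks, bytes/1024/1024 mb
    from   dba_segments
    where  owner = 'SCOTT'
    and    segment_name = 'MLOG$_MY_MASTER';

If the log is empty because every dependent snapshot has refreshed, truncating it resets the high-water mark (see the first thread below for a before/after comparison).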

Similar Messages

  • Commit performance on table with Fast Refresh MV

    Hi Everyone,
    Trying to wrap my head around fast refresh performance and why I'm seeing (what I would consider) high disk/query numbers associated with updating the MV_LOG in a TKPROF.
    The setup.
    (Oracle 10.2.0.4.0)
    Base table:
    SQL> desc action;
    Name                                      Null?    Type
    PK_ACTION_ID                              NOT NULL NUMBER(10)
    CATEGORY                                           VARCHAR2(20)
    INT_DESCRIPTION                                    VARCHAR2(4000)
    EXT_DESCRIPTION                                    VARCHAR2(4000)
    ACTION_TITLE                              NOT NULL VARCHAR2(400)
    CALL_DURATION                                      VARCHAR2(6)
    DATE_OPENED                               NOT NULL DATE
    CONTRACT                                           VARCHAR2(100)
    SOFTWARE_SUMMARY                                   VARCHAR2(2000)
    MACHINE_NAME                                       VARCHAR2(25)
    BILLING_STATUS                                     VARCHAR2(15)
    ACTION_NUMBER                                      NUMBER(3)
    THIRD_PARTY_NAME                                   VARCHAR2(25)
    MAILED_TO                                          VARCHAR2(400)
    FK_CONTACT_ID                                      NUMBER(10)
    FK_EMPLOYEE_ID                            NOT NULL NUMBER(10)
    FK_ISSUE_ID                               NOT NULL NUMBER(10)
    STATUS                                             VARCHAR2(80)
    PRIORITY                                           NUMBER(1)
    EMAILED_CUSTOMER                                   TIMESTAMP(6) WITH LOCAL TIME ZONE
    SQL> select count(*) from action;
      COUNT(*)
       1388780
    The MV was created as follows:
    create materialized view log on action with sequence, rowid
    (pk_action_id, fk_issue_id, date_opened)
    including new values;
    -- Create materialized view
    create materialized view issue_open_mv
    build immediate
    refresh fast on commit
    enable query rewrite as
    select  fk_issue_id issue_id,
         count(*) cnt,
         min(date_opened) issue_open,
         max(date_opened) last_action_date,
         min(pk_action_id) first_action_id,
         max(pk_action_id) last_action_id,
         count(pk_action_id) num_actions
    from    action
    group by fk_issue_id;
    exec dbms_stats.gather_table_stats('tg','issue_open_mv')
    SQL> select table_name, last_analyzed from dba_tables where table_name = 'ISSUE_OPEN_MV';
    TABLE_NAME                     LAST_ANAL
    ISSUE_OPEN_MV                  15-NOV-10
    (note: the table was created a couple of days ago)
    SQL> exec dbms_mview.explain_mview('TG.ISSUE_OPEN_MV');
    CAPABILITY_NAME                P REL_TEXT MSGTXT
    PCT                            N
    REFRESH_COMPLETE               Y
    REFRESH_FAST                   Y
    REWRITE                        Y
    PCT_TABLE                      N ACTION   relation is not a partitioned table
    REFRESH_FAST_AFTER_INSERT      Y
    REFRESH_FAST_AFTER_ANY_DML     Y
    REFRESH_FAST_PCT               N          PCT is not possible on any of the detail tables in the mater
    REWRITE_FULL_TEXT_MATCH        Y
    REWRITE_PARTIAL_TEXT_MATCH     Y
    REWRITE_GENERAL                Y
    REWRITE_PCT                    N          general rewrite is not possible or PCT is not possible on an
    PCT_TABLE_REWRITE              N ACTION   relation is not a partitioned table
    13 rows selected.
    Fast refresh works fine, and the log is kept quite small.
    SQL> select count(*) from mlog$_action;
      COUNT(*)
             0
    When I update one row in the base table:
    var in_action_id number;
    exec :in_action_id := 398385;
    UPDATE action
    SET emailed_customer = SYSTIMESTAMP
    WHERE pk_action_id = :in_action_id
    AND DECODE(emailed_customer, NULL, 0, 1) = 0
    commit;
    I see the following happen via TKPROF:
    INSERT /*+ IDX(0) */ INTO "TG"."MLOG$_ACTION" (dmltype$$,old_new$$,snaptime$$,
      change_vector$$,sequence$$,m_row$$,"PK_ACTION_ID","DATE_OPENED",
      "FK_ISSUE_ID")
    VALUES
    (:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c,
      sys.cdc_rsid_seq$.nextval,:m,:1,:2,:3)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          0          0           0
    Execute      2      0.00       0.03          4          4          4           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      0.00       0.04          4          4          4           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          2  SEQUENCE  CDC_RSID_SEQ$ (cr=0 pr=0 pw=0 time=28 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         4        0.01          0.01
    update "TG"."MLOG$_ACTION" set snaptime$$ = :1
    where
    snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          0          0           0
    Execute      1      0.94       5.36      55996      56012          1           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.94       5.38      55996      56012          1           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          0  UPDATE  MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=5364554 us)
          2   TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=46756 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       3529        0.02          4.91
    select dmltype$$, max(snaptime$$)
    from
    "TG"."MLOG$_ACTION"  where snaptime$$ <= :1  group by dmltype$$
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.70       0.68      55996      56012          0           1
    total        4      0.70       0.68      55996      56012          0           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT GROUP BY (cr=56012 pr=55996 pw=0 time=685671 us)
          2   TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=1851 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       3529        0.00          0.38
    delete from "TG"."MLOG$_ACTION"
    where
    snaptime$$ <= :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.71       0.70      55946      56012          3           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.71       0.70      55946      56012          3           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          0  DELETE  MLOG$_ACTION (cr=56012 pr=55946 pw=0 time=702813 us)
          2   TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55946 pw=0 time=1814 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       3530        0.00          0.39
      db file sequential read                        33        0.00          0.00
    ********************************************************************************
    Could someone explain why the SELECT/UPDATE/DELETE statements on MLOG$_ACTION are so "expensive" when there should only be 2 rows (old value and new value) in that log after an update? Is there anything I could do to improve the performance of the update?
    Let me know if you require more info...would be glad to provide it.

    Brilliant. Thanks.
    I owe you a beverage.
    SQL> set autotrace on
    SQL> select count(*) from MLOG$_ACTION;
      COUNT(*)
             0
    Execution Plan
    Plan hash value: 2727134882
    | Id  | Operation          | Name         | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |              |     1 | 12309   (1)| 00:02:28 |
    |   1 |  SORT AGGREGATE    |              |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| MLOG$_ACTION |     1 | 12309   (1)| 00:02:28 |
    Note
       - dynamic sampling used for this statement
    Statistics
              4  recursive calls
              0  db block gets
          56092  consistent gets
          56022  physical reads
              0  redo size
            410  bytes sent via SQL*Net to client
            400  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> truncate table MLOG$_ACTION;
    Table truncated.
    SQL> select count(*) from MLOG$_ACTION;
      COUNT(*)
             0
    Execution Plan
    Plan hash value: 2727134882
    | Id  | Operation          | Name         | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |              |     1 |     2   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE    |              |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| MLOG$_ACTION |     1 |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement
    Statistics
              1  recursive calls
              1  db block gets
              6  consistent gets
              0  physical reads
             96  redo size
            410  bytes sent via SQL*Net to client
            400  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    Just for fun, a comparison of the TKPROF output.
    Before:
    update "TG"."MLOG$_ACTION" set snaptime$$ = :1
    where
    snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          0          0           0
    Execute      1      0.94       5.36      55996      56012          1           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.94       5.38      55996      56012          1           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          0  UPDATE  MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=5364554 us)
          2   TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=46756 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       3529        0.02          4.91
    ********************************************************************************
    After:
    update "TG"."MLOG$_ACTION" set snaptime$$ = :1
    where
    snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          7          1           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0          7          1           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          0  UPDATE  MLOG$_ACTION (cr=7 pr=0 pw=0 time=79 us)
          2   TABLE ACCESS FULL MLOG$_ACTION (cr=7 pr=0 pw=0 time=28 us)
    ********************************************************************************
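    Once the log has been confirmed empty, the high-water mark can be reset with a truncate, as shown above. A minimal maintenance sketch of that check-then-truncate step (schema and log names are taken from this thread; run it in a quiet period, since changes logged between the check and the truncate would be lost):

    declare
      v_rows number;
    begin
      select count(*) into v_rows from tg.mlog$_action;
      if v_rows = 0 then
        execute immediate 'truncate table tg.mlog$_action';
      end if;
    end;
    /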

  • Fast Refresh mview performance

    Hi,
    I'm currently facing fast refresh mview performance problems and I would like to know what improvements are possible for the fast refresh procedure:
    - base table of 1,500,000,000 rows, partitioned by day, subpartitioned by hash (4)
    - mlog partitioned by hash (4), with indexes on the PK and snaptime
    - mview partitioned by day, subpartitioned by hash (4)
    10,000,000 insertions/day into the base table/mlog
    What improvements or indexes can I add to speed up the fast refresh?
    Thanks for help

    Hi,
    Which DB version are you using?
    Did you have a look at the MV refresh via Partition Change Tracking (PCT)?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/advmv.htm#sthref575
    If it's possible to use PCT, it would probably improve the refresh performance of your MV considerably.
    Regards
    Maurice
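    For PCT to be usable, the detail table's partition key (or a partition marker) must appear in the materialized view's SELECT list and GROUP BY. A minimal sketch under assumed names (base table FACT_SALES partitioned by SALE_DAY):

    create materialized view log on fact_sales
      with rowid, sequence (sale_day, amount)
      including new values;

    create materialized view mv_sales_by_day
    build immediate
    refresh fast on demand
    as
    select sale_day,                  -- partition key in SELECT/GROUP BY enables PCT
           count(*)      cnt,
           sum(amount)   total_amount,
           count(amount) amount_cnt   -- keeps SUM fast refreshable after deletes
    from   fact_sales
    group by sale_day;

    -- check what EXPLAIN_MVIEW now reports for the PCT capabilities
    exec dbms_mview.explain_mview('MV_SALES_BY_DAY');
    select capability_name, possible, msgtxt
    from   mv_capabilities_table
    where  capability_name like 'PCT%';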

  • Snapshot Refresh (How to stop COMPLETE refresh and run FAST refresh)?

    Hi,
    I have a snapshot refresh executed as COMPLETE which is taking very long. When I try to kill it and run a FAST refresh instead, I get:
    ERROR at line 1:
    ORA-12057: materialized view "PORTALSNP1"."V21_BILLING_ACCOUNT" is INVALID and must complete refresh
    How can I resolve this so that I can stop the COMPLETE refresh altogether and run the FAST refresh?
    Also, is there a way to estimate how long the running snapshot refresh will take to complete?
    Please and thank you!
    Regards,
    A

    You don't resolve it ... you drop the materialized view. Then you create a materialized view log. Then a properly coded MV.
    http://www.morganslibrary.org/library.html
    bookmark this link
    then look up "Materialized Views" and "Materialized View Logs"
    The log must be created first.
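    A minimal sketch of that order, using the names from the post (the master table name and database link are assumptions):

    -- 1. drop the invalid materialized view on the snapshot site
    drop materialized view portalsnp1.v21_billing_account;

    -- 2. create the log on the master table first, at the master site
    create materialized view log on billing_account with primary key;

    -- 3. re-create the MV so it can fast refresh against that log
    create materialized view portalsnp1.v21_billing_account
    refresh fast on demand
    as
    select * from billing_account@master_db;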

  • Fast refresh snapshots with primary key deferrable

    Oracle version 8.1.7 standard
    I've created this sample schema:
    Tables:
    create table a_customer
    (c_id integer primary key deferrable,
    zip integer
    );
    create table a_orders
    (o_id integer primary key deferrable,
    c_id integer,
    constraint o_c foreign key (c_id)
    references a_customer(c_id)
    );
    create index c_ind on a_orders(c_id);
    create table a_order_line
    (ol_id integer primary key deferrable,
    o_id integer,
    constraint ol_o foreign key(o_id)
    references a_orders(o_id)
    );
    Snapshot logs:
    create snapshot log on a_customer with primary key (zip);
    create snapshot log on a_orders with primary key (c_id);
    create snapshot log on a_order_line with primary key (o_id);
    When I create the snapshot from another instance:
    create snapshot orders_snap refresh fast as
    select * from a_orders@dblink o where exists
    (select c_id from a_customer@dblink c where
    o.c_id = c.c_id and zip = 19555);
    It returns the error ORA-12015.
    If I recreate the primary keys without the "deferrable" option, the snapshot is created successfully.
    Is this a bug in the Oracle version used?

    Why would someone update the PK of a table? That in itself violates the basic design of any application. A PK such as a customer ID or a person's social security number should never be updated in the lifetime of a user. If a new number needs to be issued, the correct sequence is to delete the existing record and create a new one, rather than updating the existing record.
    So you just need to check the logic of deleting by PK and then refreshing the mviews. Please do not update the PK; that would itself be a flaw in the application design, as far as my experience with PKs goes.
    Amar
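    Building on the original poster's observation that the snapshot creates successfully once the keys are no longer deferrable, a minimal sketch of recreating one of the constraints (the constraint name is an assumption, and the snapshot logs may need to be re-created afterwards):

    -- drop the deferrable PK; CASCADE also drops the foreign key that references it
    alter table a_customer drop primary key cascade;
    alter table a_customer add constraint a_customer_pk
      primary key (c_id) not deferrable;

    -- re-create the foreign key dropped by the cascade
    alter table a_orders add constraint o_c
      foreign key (c_id) references a_customer(c_id);

    -- repeat the same pattern for a_orders and a_order_line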

  • Fast Refresh MVs and HASH_SJ Hint

    I am building fast refresh MVs on a 3rd party database to enable faster reporting. This is an interim solution whilst we build a new ETL process using CDC.
    The source DB has no PKs, so I'm creating the MV logs with ROWID. When I refresh the MV (exec DBMS_MVIEW.REFRESH('<mview_name>')) and trace the session I notice:
    1. The query joins back to the base table - I think this is necessary as there are two base tables and the MV change could be instigated from either table independently. Therefore the changes might not be in the log.
    2. However, in this case shouldn't it be possible to just join mv_log1 to base_table2 and ignore base_table1?
    3. There is a HASH_SJ hint in this join, forcing a full table scan on the 7M row base_table1.
    4. I am doing 1 update then refreshing the MV
    5. In production this table would have many 10s of single row inserts and updates per minute
    This is an excerpt from the tkprof'd trace file (I've hidden some table/column names)
    FROM   (SELECT MAS$.ROWID RID$ 
                  ,MAS$.* 
            FROM   <base_table1> MAS$
            WHERE  ROWID IN (SELECT  /*+ HASH_SJ */ 
                                    CHARTOROWID(MAS$.M_ROW$$) RID$    
                             FROM   <mview_log1> MAS$  
                             WHERE  MAS$.SNAPTIME$$ > sysdate-1/24 --:1
           ) AS OF SNAPSHOT (:2) JV$
           ,<base_table2> AS OF SNAPSHOT (:2)  MAS$0
    WHERE   JV$.<col1>=MAS$0.<col1>
    AND     JV$.<col2>=MAS$0.<col2>
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1     13.78     153.32     490874     551013          3           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2     13.78     153.32     490874     551013          3           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 277  (<user>)   (recursive depth: 1)
    Rows     Row Source Operation
          1  TABLE ACCESS BY INDEX ROWID <base_table2>(cr=551010 pr=490874 pw=0 time=153321352 us)
          3   NESTED LOOPS  (cr=551009 pr=490874 pw=0 time=647 us)
          1    VIEW  (cr=551006 pr=490874 pw=0 time=153321282 us)
          1     HASH JOIN RIGHT SEMI (cr=551006 pr=490874 pw=0 time=153321234 us)
          2      TABLE ACCESS FULL <base_table1_mv_log> (cr=21 pr=0 pw=0 time=36 us)
    7194644      TABLE ACCESS FULL <base_table1>(cr=550985 pr=490874 pw=0 time=158282171 us)
          1    INDEX RANGE SCAN <base_table2_index> (cr=3 pr=0 pw=0 time=22 us)(object id 3495055)
    As you can see, there are two rows in the MV log (one update: old and new values); the FTS on the base table ensures that the MV refresh is far from fast.
    I have tried this with refresh on demand and on commit, with similar results. Implementing this would make the application impossibly slow.
    I will search the knowledge base once I am given access
    SQL>select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Thank you for taking the time to read/respond.
    Ben

    Thanks for looking.
    From the Knowledge Base it appears that Bug 6456841 might be the cause. I'll play around with the settings it suggests and see what happens.
    the MV query is basically:
    SELECT ...
    FROM   base_table1
          ,base_table2
    WHERE  base_table1.col1 = base_table2.col1
    AND    base_table1.col2 = base_table2.col2
    When 1 row in base_table1 is updated, there is an FTS on that table, rather than:
    1. getting the data from the MV log or
    2. a Nested loop join to base_table1 from its mv_log on rowid
    This is due to the Oracle internal refresh code putting a HASH_SJ hint in when joining the MV log to its base table.
    Ben
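    A minimal way to reproduce and time the behaviour after any change (the MV name MY_JOIN_MV is an assumption; method => 'F' asks DBMS_MVIEW.REFRESH for a fast refresh explicitly):

    alter session set events '10046 trace name context forever, level 8';
    exec dbms_mview.refresh('MY_JOIN_MV', method => 'F');
    alter session set events '10046 trace name context off';
    -- run tkprof on the resulting trace to see whether the internal refresh SQL
    -- still drives a HASH JOIN SEMI full scan of base_table1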

  • How to speed up fast refresh of materialized view without primary key

    Thought I'd share this info, as I couldn't find anything on here to help me diagnose the issue:
    I had a materialized view that was taking longer to perform a fast refresh than it took to perform a complete refresh. My mview had no primary key, as the base table had no primary key.
    I created a trace file for the session and saw references to a column M_ROW$$ in my mview. Nowhere in the data dictionary could I find a reference to the m_row$$ column in my mview, but apparently it exists and is created automatically. After creating the index below, the fast refresh took 6 minutes to add 500k rows to the materialized view (versus 4+ hours without the index).
    When I looked in the trace file, I noticed that for each row in the mview log, it first tries to update the mview with an UPDATE statement, then it performs an insert of the new data. Seems like it should be able to determine whether to perform an update or insert based on the DMLTYPE$$ column of the mview log.
    What was killing my performance was the UPDATE phase. Since I had no primary key on the mview, and no index on the m_row$$ column, the UPDATE phase was performing a full table scan of my mview for every row in the mview log. I was expecting it to be smart enough to only perform inserts, as the only transactions against the base table were inserts.
    In summary: If you have a materialized view without a primary key, create an index on the m_row$$ column of the mview, even though no such column displays in the data dictionary.
    ex:
    CREATE MATERIALIZED VIEW mv_minidrr ...
    CREATE INDEX pk_mv_minidrr ON mv_minidrr(m_row$$) ...
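    A fuller sketch of the same workaround, using the emp example from the reply below (the index name is an assumption):

    create materialized view log on emp with rowid;

    create materialized view emp_mview
    refresh fast on demand with rowid
    as select * from emp;

    -- the container of a ROWID materialized view carries an M_ROW$$ column
    -- holding the master rowid (as described above); indexing it avoids the
    -- full scan of the mview during the UPDATE phase of the fast refresh
    create index emp_mview_mrow_idx on emp_mview (m_row$$);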

    Well, there indeed is a column called M_ROW$$
    Your MLOG$_EMP is nothing but the materialized view log on the base table.
    SQL> create  materialized view log on emp with rowid ;
    Materialized view log created.
    SQL> create materialized view emp_mview refresh fast on demand with rowid as select * from emp ;
    Materialized view created.
    SQL> desc mlog$_emp
    Name                                                  Null?    Type
    M_ROW$$                                                        VARCHAR2(255)
    SNAPTIME$$                                                     DATE
    DMLTYPE$$                                                      VARCHAR2(1)
    OLD_NEW$$                                                      VARCHAR2(1)
    CHANGE_VECTOR$$                                                RAW(255)
    SQL> select table_name, column_name from user_tab_columns where column_name = 'M_ROW$$' ;
    TABLE_NAME                     COLUMN_NAME
    MLOG$_EMP                      M_ROW$$
    1 row selected.
    SQL>

  • Materialized View with Join for FAST Refresh

    Hi Gurus,
    Facing issues in MV with a simple join for FAST Refresh.
    2 sample Tables:
    1. employee
    empid integer PK
    empname varchar(50)
    deptid integer FK
    2. delta_employee
    empid integer PK
    empname varchar(50)
    deptid integer FK
    dmlflag varchar(2)
    watermark integer
    Code is as given below -
    CREATE MATERIALIZED VIEW LOG ON work.employee
    WITH SEQUENCE,rowid(empid)
    INCLUDING NEW VALUES;
    CREATE MATERIALIZED VIEW LOG ON work.delta_employee
    WITH SEQUENCE,rowid(empid)
    INCLUDING NEW VALUES;
    CREATE MATERIALIZED VIEW work.MVEmployee REFRESH force on
    demand with rowid AS
    select e.empid,e.empname,e.deptid,d.empid t1
    from work.employee e, work.delta_employee d
    where e.empid = d.empid;
    Able to perform Complete Refresh. Not able to use Fast Refresh for
    incremental refresh.
    Please help.
    Thanks,
    J Kumar

    Found a solution.
    As per the Oracle documentation, rowid fields should be included in the SELECT statement. Even though I included what I thought were the rowid fields (empid in this case), it still didn't work.
    The rowid has to be referenced as tablename.rowid rather than tablename.columnname!
    Modified script is as given below.
    CREATE MATERIALIZED VIEW WORK.MVEMPLOYEE
    REFRESH FORCE ON DEMAND
    WITH PRIMARY KEY
    AS
    select e.rowid "empid",d.rowid "t1" ,e.empname,e.deptid
    from work.employee e, work.delta_employee d
    where e.empid = d.empid;
    And It really WORKS
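    A variant of the same idea that requests FAST explicitly, so Oracle validates the join-MV requirements at create time (this assumes the WITH ROWID logs created in the question are already in place; the rowid column aliases are assumptions):

    create materialized view work.mvemployee
    refresh fast on demand
    as
    select e.rowid  e_rowid,     -- the rowid of every detail table must be selected
           d.rowid  d_rowid,
           e.empid, e.empname, e.deptid
    from   work.employee e, work.delta_employee d
    where  e.empid = d.empid;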

  • MATERIALIZED view on two tables with Fast Refresh

    I wanted to create an MV on two tables with fast refresh on commit.
    I followed below steps
    create materialized view log on t1 WITH PRIMARY KEY, rowid;
    create materialized view log on t2 WITH PRIMARY KEY, rowid;
    CREATE MATERIALIZED VIEW ETL_ENTITY_DIVISION_ASSO_MV
    REFRESH fast ON commit
    ENABLE QUERY REWRITE
    AS
    select a.rowid a_rowid, b.rowid b_rowid, a.c1, DECODE(a.c1,'aaa','xxx','aaa') c2
    from t1 a
    join t2 b
    on a.c1 = b.c2;
    I am getting the error below.
    Error report:
    SQL Error: ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
    12054. 00000 - "cannot set the ON COMMIT refresh attribute for the materialized view"
    *Cause:    The materialized view did not satisfy conditions for refresh at
    commit time.
    *Action:   Specify only valid options.
    Basically I want the MV to hold the records produced by joining the two tables, and whenever either of the base tables is updated the change should be reflected in the materialized view.
    Please do the needful.

    Does the table support PCT? The other restrictions on joins look to be OK in your statement.
    Maybe try creating it first with ON DEMAND instead of ON COMMIT to see whether it creates.
    http://docs.oracle.com/cd/B19306_01/server.102/b14223/basicmv.htm
    >
    Materialized Views Containing Only Joins
    Some materialized views contain only joins and no aggregates, such as in Example 8-4, where a materialized view is created that joins the sales table to the times and customers tables. The advantage of creating this type of materialized view is that expensive joins will be precalculated.
    Fast refresh for a materialized view containing only joins is possible after any type of DML to the base tables (direct-path or conventional INSERT, UPDATE, or DELETE).
    A materialized view containing only joins can be defined to be refreshed ON COMMIT or ON DEMAND. If it is ON COMMIT, the refresh is performed at commit time of the transaction that does DML on the materialized view's detail table.
    If you specify REFRESH FAST, Oracle performs further verification of the query definition to ensure that fast refresh can be performed if any of the detail tables change. These additional checks are:
    A materialized view log must be present for each detail table unless the table supports PCT. Also, when a materialized view log is required, the ROWID column must be present in each materialized view log.
    The rowids of all the detail tables must appear in the SELECT list of the materialized view query definition.
    If some of these restrictions are not met, you can create the materialized view as REFRESH FORCE to take advantage of fast refresh when it is possible. If one of the tables did not meet all of the criteria, but the other tables did, the materialized view would still be fast refreshable with respect to the other tables for which all the criteria are met.
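    Putting those requirements together, a minimal sketch that normally accepts ON COMMIT for a join-only MV (it assumes the WITH ROWID logs from the question are in place; the rowid aliases are assumptions, and the traditional comma join is used because some releases are picky about ANSI join syntax in fast-refresh MVs):

    create materialized view etl_entity_division_asso_mv
    refresh fast on commit
    enable query rewrite
    as
    select a.rowid a_rowid,
           b.rowid b_rowid,
           a.c1,
           decode(a.c1, 'aaa', 'xxx', 'aaa') c2
    from   t1 a, t2 b
    where  a.c1 = b.c2;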

  • Fast refreshable mviews vs. cdc

    I'm currently working on a new data warehouse environment. I need to create an ODS schema for each of the operational systems in my warehouse.
    I've looked at two of 10gR2's preferred technologies for this task:
    fast refreshable materialized views and Change Data Capture (CDC).
    I would like to know what the difference is between these two approaches.
    In my understanding, they both cause a performance overhead and require some amount of additional work on each DML operation. In addition, they both work by capturing changes from a table and applying them to a target database.
    The only difference I could think of is that CDC captures changes from the redo log, so commit time on the operational system won't be affected (as much as it would be when logging a DML operation in a mview log).

    dba_snapshot_refresh_times or dba_mview_refresh_times would help.
    select job, last_date last_refresh, next_date next_refresh, total_time, what
    from dba_jobs
    where what like '%dbms_refresh%';
    This will work only if the refresh is scheduled to run through job queue.

  • Error performing Fast Refresh of Materialized View

    Hi Experts,
    We are facing a serious problem while refreshing materialized views using the fast refresh option in Oracle.
    For the very first time, we perform a complete refresh of data from DB1 to DB2 for a few tables. After that, we perform fast refreshes.
    Sometimes the fast refresh works fine without any error and sometimes it fails with the below error.
    ERROR at line 1:
    ORA-32320: REFRESH FAST of "CIR"."C_BO_COMM" unsupported after container table
    PMOPs
    ORA-06512: at "SYS.DBMS_SNAPSHOT", line 803
    ORA-06512: at "SYS.DBMS_SNAPSHOT", line 860
    ORA-06512: at "SYS.DBMS_SNAPSHOT", line 841
    ORA-06512: at line 1
    We came to know the following:
    // *Cause:
      A Partition Maintenance Operation (PMOP) has been performed on the materialized view,
      and no materialized view supports fast refresh after container table PMOPs.
    // *Action:
      Use REFRESH COMPLETE.
      Note:
      You can determine why your materialized view does not support fast refresh after PMOPs using
      the DBMS_MVIEW.EXPLAIN_MVIEW() API.
    Please let us know what action should be taken to avoid this.
    Please note that this error does not occur every time: sometimes the REFRESH FAST succeeds and sometimes it throws this error.
    Appreciate your earliest reply.
    Thank you.

    9i doesn't support fast refresh after truncate. Read the metalink note 275325.1
    You should be sure no truncate has been executed, or use complete refresh.
    Nicolas.
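    As the error text itself suggests, DBMS_MVIEW.EXPLAIN_MVIEW can show which fast refresh capability is being lost, in the same way as in the first thread above (a minimal sketch; MV_CAPABILITIES_TABLE is created by ?/rdbms/admin/utlxmv.sql if it does not already exist):

    @?/rdbms/admin/utlxmv.sql
    exec dbms_mview.explain_mview('CIR.C_BO_COMM');

    select capability_name, possible, msgtxt
    from   mv_capabilities_table
    where  capability_name like 'REFRESH_FAST%';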

  • Create a fast refresh materialized view with partitioned primary index

    Hi,
    I have a materialized view that is based on a table with a primary key.
    I want to create a materialized view with a partitioned primary index. Do you have any way of doing this?
    I tried to create a materialized view with rowid and then created a partitioned primary index on it.
    It did not work as I expected: I could not perform a fast refresh on it; the materialized view could only be complete refreshed.
    thank you

    Hi,
    Here is some info from the Oracle Documentation.
    http://download.oracle.com/docs/cd/E11882_01/server.112/e10706/repmview.htm
    http://download.oracle.com/docs/cd/E11882_01/server.112/e10810/basicmv.htm
    Determining the Fast Refresh Capabilities of a Materialized View
    http://download.oracle.com/docs/cd/E11882_01/server.112/e10706/repmview.htm#BABEDIAH
    Regards,
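    A minimal sketch of one approach that generally keeps fast refresh available: partition the materialized view's container table while keeping the defining query simple and primary-key based (all table, column, and partition names here are assumptions):

    create materialized view log on orders with primary key;

    create materialized view orders_mv
      partition by range (order_date)
      ( partition p_2023 values less than (date '2024-01-01'),
        partition p_max  values less than (maxvalue) )
      refresh fast on demand
      with primary key
      as
      select order_id, customer_id, order_date, amount
      from   orders;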

  • Fast Refresh using two non-primary key tables

    Hi,
    I have a materialized view based on two tables with an outer join clause. Neither table has a primary key, so I created a materialized view log with rowid on each of them, but I am still not able to get the fast refresh option for the materialized view. My question is: can I have the fast refresh option for a materialized view built from two tables without primary keys and with an outer join clause? If possible, please send me some sample scripts for quicker understanding.
    Thanks and Regards,
    Sudhakar

    I was able to create a fast-refreshable MV on tables without any PK. Unfortunately, I can't complete all the steps since my setup is multi-master advanced replication (which ABSOLUTELY requires the tables to have PKs). Here are the steps I took anyway. Note that ORA102 is my (definition) master site and MVDB is my MV site. The tables were created under user HR, and my master group is called "hr_repg". Here are my steps:
    HR on ora102 >create table countries_no_pk as select * from countries;
    Table created.
    HR on ora102 >create table regions_no_pk as select * from regions;
    Table created.
    HR on ora102 >create materialized view log on countries_no_pk with rowid;
    Materialized view log created.
    HR on ora102 >create materialized view log on regions_no_pk with rowid;
    Materialized view log created.
    REPADMIN on ora102 >exec dbms_repcat.suspend_master_activity('hr_repg')
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:02.68
    REPADMIN on ora102 >BEGIN
    2 DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
    3 gname => 'hr_repg',
    4 type => 'TABLE',
    5 oname => 'countries_no_pk',
    6 sname => 'hr',
    7 use_existing_object => TRUE,
    8 copy_rows => FALSE);
    9 END;
    10 /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:05.19
    REPADMIN on ora102 >set timing off
    REPADMIN on ora102 >BEGIN
    2 DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
    3 gname => 'hr_repg',
    4 type => 'TABLE',
    5 oname => 'regions_no_pk',
    6 sname => 'hr',
    7 use_existing_object => TRUE,
    8 copy_rows => FALSE);
    9 END;
    10 /
    PL/SQL procedure successfully completed.
    (note that you ABSOLUTELY need the rowid's in your select statement for an MV with joins):
    MVIEWADMIN on mvdb >CREATE MATERIALIZED VIEW hr.complex_mv refresh fast as
    2 select c.rowid "C_ROW_ID", r.rowid "R_ROW_ID", c.COUNTRY_ID, c.COUNTRY_NAME,
    3 c.REGION_ID, r.REGION_NAME from hr.regions_no_pk@ora102 r, hr.countries_no_pk@ora102 c
    4 where c.region_id = r.region_id (+);
    Materialized view created.
    MVIEWADMIN on mvdb >BEGIN
    2 DBMS_REPCAT.CREATE_MVIEW_REPOBJECT (
    3 gname => 'hr_repg',
    4 sname => 'hr',
    5 oname => 'complex_mv',
    6 type => 'SNAPSHOT',
    7 min_communication => TRUE);
    8 END;
    9 /
    PL/SQL procedure successfully completed.
    REPADMIN on ora102 >BEGIN
    2 DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
    3 sname => 'hr',
    4 oname => 'countries_no_pk',
    5 type => 'TABLE',
    6 min_communication => TRUE);
    7 END;
    8 /
    PL/SQL procedure successfully completed.
    (wait until there are no more entries in DBA_REPCATLOG)
    REPADMIN on ora102 >exec dbms_repcat.resume_master_activity('hr_repg')
    Hope that can help you. If that doesn't work, tell us where it bombs.
    Daniel

  • Materialized view fast refresh with date field

    I have a situation where I need to create a materialized view covering 6 months of data, with the fast refresh option, from the master table. Somehow, whenever I add the WHERE clause on the date field, it fails with "ORA-12015: cannot create a fast refresh materialized view from a complex query".
    Here is what I am trying to do. Please let me know if there is any other way to accomplish this.
    create table test (id number, date_time DATE);
    CREATE MATERIALIZED VIEW LOG ON test WITH ROWID;
    CREATE MATERIALIZED VIEW cms.scoreboard_statistics_mv
    BUILD IMMEDIATE
    REFRESH FAST
    WITH ROWID
    AS
    SELECT * from test
    WHERE date_time >= sysdate - 180;
    ORA-12015: cannot create a fast refresh materialized view from a complex query
    Thanks,
    Raj

    It's crazy, but once again Metalink helps us, in Note 179466.1:
    The restrictions that prevent snapshots from being fast refreshed depend on
    the version of Oracle being used, a full list of these by version is included
    in section 3. In all cases the snapshot defining query should:
    - refer to fully qualified table names rather than to partial table names.
    - refer to remote tables only, not to remote master views or synonyms.
    - not generate context sensitive data. For example, do not create a simple
    snapshot with a query that uses the SQL functions :SYSDATE, UID or USER.
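    One common workaround, consistent with the restriction quoted above (a sketch; the view name is an assumption, and it assumes the rowid log from the question is in place): keep the materialized view itself simple so it remains fast refreshable, and push the SYSDATE filter into an ordinary view on top of it.

    CREATE MATERIALIZED VIEW cms.scoreboard_statistics_mv
    BUILD IMMEDIATE
    REFRESH FAST
    WITH ROWID
    AS
    SELECT * FROM test;

    -- the time-window filter lives in a plain view, evaluated at query time
    CREATE OR REPLACE VIEW cms.scoreboard_statistics_v AS
    SELECT id, date_time
    FROM   cms.scoreboard_statistics_mv
    WHERE  date_time >= SYSDATE - 180;

    The materialized view then holds all rows rather than just the last 180 days; if that is too large, a scheduled complete refresh of the original filtered query is the usual alternative.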

  • Refresh materialized view on fast refresh

    Hi,
    I want to create a fast refresh materialized view, but I keep getting "ORA-12015: cannot create a fast refresh materialized view from a complex query". When I did a complete refresh on the materialized view, it completed. I have created a materialized view log for the table. In my materialized view script, I have included a user-defined function. Does DB version 10g have the capability to do a fast refresh?
    Thanks

    What is the query you are using for the MV?
    The error message says it all... "cannot create a fast refresh materialized view from a complex query"
    If your query is complex then you will have to perform complete refreshes.
    One way around can be to fast refresh all tables in the query then create a view on them based on the 'complex' query. Admittedly this is only a workaround in certain scenarios.
    Check out the documentation...
    http://68.142.116.70/docs/cd/B19306_01/server.102/b14226/repmview.htm#sthref422
