Materialized view as aggregate table in OBIEE 10g?

Hi All,
I am very new to OBIEE and I am learning from OBE tutorials.
I really don't understand how these aggregate tables get refreshed. I read in some blogs that they are dropped and re-created using scheduler scripts; is that correct? I am also wondering whether we can use materialized views as aggregate tables.
Please help me understand how aggregate tables work.
Thanks,
Enric

Hi
You could also use Materialized Views as an alternative to the Aggregation functionality in Oracle BI. Treat an MV the same way you would treat a 'normal' table: add the MV as an additional Logical Table Source to your Logical Table. Scheduling the refresh would normally be managed outside Oracle BI; a minimal sketch follows below.
Good Luck,
Daan Bakboord
http://obibb.wordpress.com
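
A minimal sketch of this approach, with illustrative names (SALES_FACT as the detailed fact table, AGG_SALES_MONTH_MV as the aggregate that would be mapped as the extra Logical Table Source); the refresh is scheduled in the database, outside OBIEE, instead of drop/create scripts:
CREATE MATERIALIZED VIEW agg_sales_month_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT month_id, product_id, SUM(sales_amount) AS sales_amount
FROM   sales_fact
GROUP  BY month_id, product_id;

-- Refresh once a day via DBMS_SCHEDULER.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_AGG_SALES_MONTH',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''AGG_SALES_MONTH_MV'', ''C''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/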

Similar Messages

  • MATERIALIZED view on two tables with Fast Refresh

    I wanted to create an MV on two tables with fast refresh on commit.
    I followed the steps below:
    create materialized view log on t1 WITH PRIMARY KEY, rowid;
    create materialized view log on t2 WITH PRIMARY KEY, rowid;
    CREATE MATERIALIZED VIEW ETL_ENTITY_DIVISION_ASSO_MV
    REFRESH fast ON commit
    ENABLE QUERY REWRITE
    AS
    select a.rowid a_rowid, b.rowid b_rowid, a.c1, DECODE(a.c1,'aaa','xxx','aaa') c2
    from t1 a
    join t2 b
    on a.c1 = b.c2;
    I am getting the error below.
    Error report:
    SQL Error: ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
    12054. 00000 - "cannot set the ON COMMIT refresh attribute for the materialized view"
    *Cause:    The materialized view did not satisfy conditions for refresh at
    commit time.
    *Action:   Specify only valid options.
    Basically I want records in the MV produced by joining the two tables, and whenever the base tables are updated the change should be reflected in the materialized view.
    Please advise.

    Does the table support PCT? The other restrictions on joins look OK in your statement.
    Maybe try creating it first with ON DEMAND instead of ON COMMIT to see whether it creates at all; a corrected sketch follows the documentation excerpt below.
    http://docs.oracle.com/cd/B19306_01/server.102/b14223/basicmv.htm
    Quoting the section "Materialized Views Containing Only Joins" from the documentation:
    Some materialized views contain only joins and no aggregates, such as in Example 8-4, where a materialized view is created that joins the sales table to the times and customers tables. The advantage of creating this type of materialized view is that expensive joins will be precalculated.
    Fast refresh for a materialized view containing only joins is possible after any type of DML to the base tables (direct-path or conventional INSERT, UPDATE, or DELETE).
    A materialized view containing only joins can be defined to be refreshed ON COMMIT or ON DEMAND. If it is ON COMMIT, the refresh is performed at commit time of the transaction that does DML on the materialized view's detail table.
    If you specify REFRESH FAST, Oracle performs further verification of the query definition to ensure that fast refresh can be performed if any of the detail tables change. These additional checks are:
    A materialized view log must be present for each detail table unless the table supports PCT. Also, when a materialized view log is required, the ROWID column must be present in each materialized view log.
    The rowids of all the detail tables must appear in the SELECT list of the materialized view query definition.
    If some of these restrictions are not met, you can create the materialized view as REFRESH FORCE to take advantage of fast refresh when it is possible. If one of the tables did not meet all of the criteria, but the other tables did, the materialized view would still be fast refreshable with respect to the other tables for which all the criteria are met.
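    A sketch of the statement reworked to satisfy those checks (rowids of both detail tables aliased in the SELECT list; the rowid materialized view logs are already created above). This illustrates the documented conditions rather than guaranteeing the ORA-12054 goes away:
    CREATE MATERIALIZED VIEW etl_entity_division_asso_mv
      REFRESH FAST ON COMMIT
      ENABLE QUERY REWRITE
    AS
    SELECT a.ROWID AS a_rowid,   -- rowid of every detail table must be in the SELECT list
           b.ROWID AS b_rowid,
           a.c1,
           DECODE(a.c1, 'aaa', 'xxx', 'aaa') AS c2
    FROM   t1 a, t2 b            -- old-style join; ANSI JOIN syntax has been reported to block fast refresh in some versions
    WHERE  a.c1 = b.c2;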

  • Advantage of Materialized view on prebuilt table

    Could someone tell me the advantage of a materialized view on a prebuilt table? I'm unable to understand the concept from the Oracle documentation. I need to know the answers to the following questions.
    1) Is the data stored in the table and the MV the same? Does a query retrieve data from the table or from the MV? The query internally uses either the table or the view, as both have the same name.
    2) Our DSS application generates complicated queries that run for a long time. Is there any way I can optimize those queries using MVs without rewriting the code in the application?

    It's roughly analogous to figuring out what set of indexes to create to improve the performance of an application: you need to understand the various queries and how the application accesses data, and you need to balance a variety of competing needs in order to come up with a reasonable set of indexes.
    Fundamentally, Oracle can only use a materialized view to satisfy a query if a human could answer the question using just the data in the materialized view. That generally means the materialized view has to be aggregated at the same level as the query or at a lower level. A materialized view that aggregates sales by day can be used for queries by year, but a materialized view that aggregates sales by year cannot be used for queries that get sales by day. A materialized view that aggregates sales by vendor and product can be used for queries that aggregate by vendor or by product, but a materialized view that aggregates by product alone cannot be used in a query that aggregates by product for a particular vendor.
    You'll have to balance which materialized views are ideal for a particular query, which are sufficient for a particular set of queries, and how to trade off space, manageability, and refresh performance: lots of somewhat redundant materialized views give optimal query performance at the cost of a lot of disk and a large refresh window, while fewer, more general materialized views consume less disk and refresh faster but give a smaller query performance boost. A small rewrite illustration follows below.
    Justin
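    To illustrate the aggregation-level point with a minimal, hypothetical SALES table (SALE_DATE, VENDOR_ID, PRODUCT_ID, AMOUNT): a finer-grained aggregate can be rolled up to answer coarser queries via query rewrite, but not the other way around.
    -- Aggregate by day, vendor and product; ENABLE QUERY REWRITE lets the
    -- optimizer use it transparently, without changing the application SQL.
    CREATE MATERIALIZED VIEW sales_by_day_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
      ENABLE QUERY REWRITE
    AS
    SELECT TRUNC(sale_date) AS sale_day, vendor_id, product_id, SUM(amount) AS amount
    FROM   sales
    GROUP  BY TRUNC(sale_date), vendor_id, product_id;
    -- A query by vendor alone is a candidate for rewrite against the MV
    -- (rolling up over sale_day and product_id); a query needing detail the MV
    -- has already aggregated away could not be rewritten.
    SELECT vendor_id, SUM(amount) AS amount
    FROM   sales
    GROUP  BY vendor_id;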

  • Materialized view from prebuilt table doesn't work with spatial types?

    Hello, I'm trying to build a materialized view of a table using the prebuilt option and a pre-built table.
    Oracle gives me an ORA-32304 error, saying it can't do this with user-defined types. The original table has no user-defined types, but does use an sdo_geometry column. Is this what it's complaining about?
    Now I can sort of understand this, but here's my real problem: the materialized view I'm creating contains a subset of the columns from the original table, but not the geometry column. Is Oracle right to refuse my prebuilt request? Does anyone know of a way around this (besides creating the MV from scratch, without the prebuilt option)?
    I've successfully created an MV on another table, which doesn't have a spatial column, using the prebuilt option and a subset of columns.
    (I'm using Oracle 11.2.0.2.0 on both master and slave databases)

    Good news, everyone! =)
    SAPwebIDE team fixed this issue with MMD template in SAPwebIDE v1.10.2. available on http://hanatrial.ondemand.com.
    This "Bug" or "Feature" was presented in 1.8.x and 1.9.x SAPwebIDE (i've used local installation) and now it's gone in v1.10.2. Thank you, SAPwebIDE Team! =)
    The difference between versions of MMD template is only in one file (fixed one is on the right):
    Master2.controller.js
    And here it is:
    Now, only one question remains: HOWTO:SAPUI5 Fiori-like report. (mix control's value as key into binding context)
    Best regards, ilia.

  • Materialized view with aggregates doing a fast refresh

    Why is it that I need to have count(*) and count(<expressions used>) in my materialized view query with aggregates?
    Say the MV query is:
    select dname, sum(sal) from emp, dept where emp.deptno = dept.deptno group by dname
    -- can't do a fast refresh.
    BUT
    select dname, sum(sal), count(*), count(sal) from emp, dept where emp.deptno = dept.deptno group by dname
    -- does a fast refresh. Why?
    Also, the manuals mention that count(*) and count(expr) are needed, but they don't explain why.
    Thanks

    Thanks for the correction. I just wanted to simulate the query and it was a typing mistake; sorry for that.
    My query works fine with count(). If I understand it correctly, the counts are there to determine whether an update or a delete should be applied to the MV when, say, a delete happens on the master table: the count is decremented on delete, and if it reaches 0 the aggregated row has to be deleted from the MV; otherwise the row just has to be updated, even in the case of a delete.
    But this only explains why count() is needed for the columns in the group by clause.
    I don't really see a need for count() when I am only updating the measures, as the materialized view logs should take care of it.
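    A sketch of the fast-refreshable variant on the standard EMP/DEPT demo tables, under the usual documented assumptions (materialized view logs created WITH ROWID, SEQUENCE and INCLUDING NEW VALUES on both tables):
    CREATE MATERIALIZED VIEW LOG ON emp
      WITH ROWID, SEQUENCE (deptno, sal) INCLUDING NEW VALUES;
    CREATE MATERIALIZED VIEW LOG ON dept
      WITH ROWID, SEQUENCE (deptno, dname) INCLUDING NEW VALUES;
    CREATE MATERIALIZED VIEW mv_dept_sal
      REFRESH FAST ON DEMAND
    AS
    SELECT d.dname,
           SUM(e.sal)   AS sum_sal,
           COUNT(e.sal) AS cnt_sal,  -- distinguishes "all sal values NULL" from "no rows"
           COUNT(*)     AS cnt_all   -- tells the refresh when a group becomes empty and must be deleted
    FROM   emp e, dept d
    WHERE  e.deptno = d.deptno
    GROUP  BY d.dname;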

  • Materialized views on prebuilt tables - query rewrite

    Hi Everyone,
    I am currently planning to implement the query rewrite functionality via materialized views to leverage existing aggregated tables.
    Goal: use aggregate-awareness for our queries
    How: by creating materialized views on existing aggregates loaded via ETL (CREATE MATERIALIZED VIEW xxx ON PREBUILT TABLE ENABLE QUERY REWRITE)
    Advantage: leverage Oracle functionality + render the logical model simpler (no aggregates)
    Disadvantage: existing ETLs need to be rewritten as SQL in the view creation statement --> the aggregation rule exists twice (once in the db, once in the ETL)
    Issue: certain ETLs are quite complex (lookups, functions, ...) --> this might create overly complex SQL in the view creation statements
    My question: is there a way around the issue described? (I'm assuming the SQL in the view creation is necessary for Oracle to know when an aggregate can be used.)
    Best practices and shared experiences are welcome as well, of course.
    Kind regards,
    Peter

    streefpo wrote:
    I'm still in the process of testing, but the drops should not be necessary.
    Remember: The materialized view is nothing but a definition - the table itself continues to exist as before.
    So as long as the definition doesn't change (added column, changed calculation, ...), the materialized view doesn't need to be re-created (as the data is not maintained by Oracle).
    Thanks for reminding me, but if you find a documented approach I will be waiting, because this was the basis of my argument from the beginning.
    SQL> select * from v$version ;
    BANNER                                                                                                                                                                    
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production                                                                                                    
    PL/SQL Release 11.2.0.1.0 - Production                                                                                                                                    
    CORE     11.2.0.1.0     Production                                                                                                                                                
    TNS for Linux: Version 11.2.0.1.0 - Production                                                                                                                            
    NLSRTL Version 11.2.0.1.0 - Production                                                                                                                                    
    SQL> desc employees
    Name                                                                                            Null?    Type
    EMPLOYEE_ID                                                                                     NOT NULL NUMBER(6)
    FIRST_NAME                                                                                               VARCHAR2(20)
    LAST_NAME                                                                                       NOT NULL VARCHAR2(25)
    EMAIL                                                                                           NOT NULL VARCHAR2(25)
    PHONE_NUMBER                                                                                             VARCHAR2(20)
    HIRE_DATE                                                                                       NOT NULL DATE
    JOB_ID                                                                                          NOT NULL VARCHAR2(10)
    SALARY                                                                                                   NUMBER(8,2)
    COMMISSION_PCT                                                                                           NUMBER(2,2)
    MANAGER_ID                                                                                               NUMBER(6)
    DEPARTMENT_ID                                                                                            NUMBER(4)
    SQL> select count(*) from employees ;
      COUNT(*)                                                                                                                                                                
           107                                                                                                                                                                
    SQL> create table mv_table nologging as select department_id, sum(salary) as totalsal from employees group by department_id ;
    Table created.
    SQL> desc mv_table
    Name                                                                                            Null?    Type
    DEPARTMENT_ID                                                                                            NUMBER(4)
    TOTALSAL                                                                                                 NUMBER
    SQL> select count(*) from mv_table ;
      COUNT(*)                                                                                                                                                                
            12                                                                                                                                                                
    SQL> create materialized view mv_table on prebuilt table with reduced precision enable query rewrite as select department_id, sum(salary) as totalsal from employees group by department_id ;
    Materialized view created.
    SQL> select count(*) from mv_table ;
      COUNT(*)                                                                                                                                                                
            12                                                                                                                                                                
    SQL> select object_name, object_type from user_objects where object_name = 'MV_TABLE' ;
    OBJECT_NAME                                                                                                                      OBJECT_TYPE                              
    MV_TABLE                                                                                                                         TABLE                                    
    MV_TABLE                                                                                                                         MATERIALIZED VIEW                        
    SQL> insert into mv_table values (999, 100) ;
    insert into mv_table values (999, 100)
    ERROR at line 1:
    ORA-01732: data manipulation operation not legal on this view
    SQL> update mv_table set totalsal = totalsal * 1.1 where department_id = 10 ;
    update mv_table set totalsal = totalsal * 1.1 where department_id = 10
    ERROR at line 1:
    ORA-01732: data manipulation operation not legal on this view
    SQL> delete from mv_table where totalsal <= 10000 ;
    delete from mv_table where totalsal <= 10000
    ERROR at line 1:
    ORA-01732: data manipulation operation not legal on this view
    While investigating for this thread I actually made my own question redundant, as the answer gradually became clear:
    When using complex ETLs, I just need to make sure the complexity sits in the ETL that loads the detailed table, not the aggregate.
    I'll try to clarify through an example:
    - A detailed Table DET_SALES exists with Sales per Day, Store & Product
    - An aggregated table AGG_SALES_MM exists with Sales, SalesStore per Month, Store & Product
    - An ETL exists to load AGG_SALES_MM where Sales = SUM(Sales) & SalesStore = (SUM(Sales) Across Store)
    --> i.e. the SalesStore measure will be derived out of a lookup
    - A (Prebuilt) Materialized View will exist with the same column definitions as the ETL
    --> to allow query-rewrite to know when to access the table
    My concern was how to include the SalesStore in the materialized view definition (--> complex SQL!)
    --> I should actually include SalesStore in the DET_SALES table, thus:
    - including the 'Across Store' function in the detailed ETL
    - rendering my Aggregation ETL into a simple GROUP BY
    - rendering my materialized view definition into a simple GROUP BY as well
    Not sure how close your example is to your actual problem. I also don't know whether you are doing an incremental or complete data load, or what the data volume is.
    But the "SalesStore = (SUM(Sales) Across Store)" can be derived from the aggregated MV using analytical function. One can just create a normal view on top of MV for querying. It is hard to believe that aggregating in detail table during ETL load is the best approach but what do I know?

  • Leave a distinct value in a materialized view on two tables

    Hi and thank you for reading,
    I have the following problem. I am creating a materialized view out of two tables, with "where a.id = b.id".
    The resulting materialized view lists several values twice. For example, one customer name has several contact details, and thus the customer name is listed several times. Now I would like to join each customer name with just ONE contact detail; how can I do that? (Even if I lose some information by doing this.)
    Thanks
    Evgeny

    Hi,
    You can do this
    SELECT   deptno, empno, ename, job, mgr, hiredate, sal, comm
        FROM emp_test
    ORDER BY deptno;
        DEPTNO      EMPNO ENAME      JOB              MGR HIREDATE          SAL       COMM
            10       7782 CLARK      MANAGER         7839 1981-06-09       2450          
            10       7839 KING       PRESIDENT            1981-11-17       5000          0
            10       7934 MILLER     CLERK           7782 1982-01-23       1300          
            20       7566 JONES      MANAGER         7839 1981-04-02       2975          
            20       7902 FORD       ANALYST         7566 1981-12-03       3000          
            20       7876 ADAMS      CLERK           7788 1987-05-23       1100          
            20       7369 SMITH      CLERK           7902 1980-12-17        800          
            20       7788 SCOTT      ANALYST         7566 1987-04-19       3000          
            30       7521 WARD       SALESMAN        7698 1981-02-22       1250        500
            30       7844 TURNER     SALESMAN        7698 1981-09-08       1500          
            30       7499 ALLEN      SALESMAN        7698 1981-02-20       1600        300
            30       7900 JAMES      CLERK           7698 1981-12-03        950          
            30       7698 BLAKE      MANAGER         7839 1981-05-01       2850          
            30       7654 MARTIN     SALESMAN        7698 1981-09-28       1250       1400
    14 rows selected.
    SELECT CASE
              WHEN ROW_NUMBER () OVER (PARTITION BY deptno ORDER BY empno) =
                                                                         1
                 THEN deptno
           END deptno,
           empno, ename, job, mgr, hiredate, sal, comm
      FROM emp_test;
        DEPTNO      EMPNO ENAME      JOB              MGR HIREDATE          SAL       COMM
            10       7782 CLARK      MANAGER         7839 1981-06-09       2450          
                     7839 KING       PRESIDENT            1981-11-17       5000          0
                     7934 MILLER     CLERK           7782 1982-01-23       1300          
            20       7369 SMITH      CLERK           7902 1980-12-17        800          
                     7566 JONES      MANAGER         7839 1981-04-02       2975          
                     7788 SCOTT      ANALYST         7566 1987-04-19       3000          
                     7876 ADAMS      CLERK           7788 1987-05-23       1100          
                     7902 FORD       ANALYST         7566 1981-12-03       3000          
            30       7499 ALLEN      SALESMAN        7698 1981-02-20       1600        300
                     7521 WARD       SALESMAN        7698 1981-02-22       1250        500
                     7654 MARTIN     SALESMAN        7698 1981-09-28       1250       1400
                     7698 BLAKE      MANAGER         7839 1981-05-01       2850          
                     7844 TURNER     SALESMAN        7698 1981-09-08       1500          
                     7900 JAMES      CLERK           7698 1981-12-03        950          
    14 rows selected.
    Edited by: Salim Chelabi on 2009-09-14 08:13
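    Note that the example above only blanks out the repeated DEPTNO for display. If the goal is to keep exactly one contact row per customer in the MV itself, a ROW_NUMBER filter in the defining query is one option; a sketch with hypothetical CUSTOMERS and CONTACTS tables:
    CREATE MATERIALIZED VIEW customer_one_contact_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS
    SELECT customer_id, customer_name, contact_detail
    FROM  (SELECT c.customer_id,
                  c.customer_name,
                  ct.contact_detail,
                  ROW_NUMBER() OVER (PARTITION BY c.customer_id
                                     ORDER BY ct.contact_id) AS rn  -- arbitrary but deterministic pick
           FROM   customers c, contacts ct
           WHERE  c.customer_id = ct.customer_id)
    WHERE  rn = 1;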

  • Query performance on materialized view vs master tables

    Hi,
    I am puzzled by some strange behavior in the db. On my master tables UDBMOVEMENT_ORIG (26 million rows) and UDBIDENTDATA_ORIG (18 million rows) I created a materialized view TMP_MS_UDB_MV (UDBMOVEMENT is a synonym for this object) that applies some default conditions and the join condition on these master tables. The MV has about 12 million rows. I created the MV so queries would hit a smaller object: the MV is 3 GB, the master tables together are 12 GB. But I don't understand why, even though physical reads and consistent gets are lower on the MV, the final execution time is shorter on the master tables. See my log below.
    Why?
    Thanks for answers.
    SQL> set echo on
    SQL> @flush
    SQL> alter system flush buffer_cache;
    System altered.
    Elapsed: 00:00:00.20
    SQL> alter system flush shared_pool;
    System altered.
    Elapsed: 00:00:00.65
    SQL> SELECT
    2 UDBMovement.zIdDevice, UDBMovement.sDevice, UDBMovement.zIdLocal, UDBMovement.sComputer, UDBMovement.tActionTime, UDBIdentData.sCardSubType, UDBIdentData.sCardType, UDBMovement.cEpan, UDBMovement.cText, UDBMovement.lArtRef, UDBMovement.sArtClassRef, UDBMovement.lSequenz, UDBMovement.sTransMark, UDBMovement.lBlock, UDBMovement.sTransType, UDBMovement.lGlobalID, UDBMovement.sFacility, UDBIdentData.sCardClass, UDBMovement.lSingleAmount, UDBMovement.sVAT, UDBMovement.lVATTot, UDBIdentData.tTarifTimeStart, UDBIdentData.tTarifTimeEnd, UDBIdentData.cLicensePlate, UDBIdentData.lMoneyValue, UDBIdentData.lPointValue, UDBIdentData.lTimeValue, UDBIdentData.tProdTime, UDBIdentData.tExpireDate
    3 FROM UDBMOVEMENT_orig UDBMovement, Udbidentdata_orig UDBIdentData
    4 WHERE
    5 UDBMovement.lGlobalId = UDBIdentData.lGlobalRef(+) AND UDBMovement.sComputer = UDBIdentData.sComputer(+)
    6 AND UDBMovement.sTransType > 0 AND UDBMovement.sDevice < 1000 AND UDBMovement.sDevice>= 0 AND UDBIdentData.sCardType IN (2) AND (bitand(UDBMovement.sSaleFlag,1) = 0 AND bitand(UDBMovement.sSaleFlag,4) = 0) AND UDBMovement.sArtClassRef < 100
    7 AND UDBMovement.tActionTime >= TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.25 AND UDBMovement.tActionTime < TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.5
    8 ORDER BY tActionTime, lBlock, lSequenz;
    4947 rows selected.
    Elapsed: 00:00:15.84
    Execution Plan
    Plan hash value: 1768406139
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 7166 | 1238K| | 20670 (1)| 00:04:09 |
    | 1 | SORT ORDER BY | | 7166 | 1238K| 1480K| 20670 (1)| 00:04:09 |
    | 2 | NESTED LOOPS | | | | | | |
    | 3 | NESTED LOOPS | | 7166 | 1238K| | 20388 (1)| 00:04:05 |
    |* 4 | TABLE ACCESS BY INDEX ROWID| UDBMOVEMENT_ORIG | 7142 | 809K| | 7056 (1)| 00:01:25 |
    |* 5 | INDEX RANGE SCAN | IDX_UDBMOVARTICLE | 10709 | | | 61 (0)| 00:00:01 |
    |* 6 | INDEX UNIQUE SCAN | UDBIDENTDATA_PRIM | 1 | | | 1 (0)| 00:00:01 |
    |* 7 | TABLE ACCESS BY INDEX ROWID | UDBIDENTDATA_ORIG | 1 | 61 | | 2 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - filter("UDBMOVEMENT"."STRANSTYPE">0 AND "UDBMOVEMENT"."SDEVICE"<1000 AND
    BITAND("SSALEFLAG",1)=0 AND "UDBMOVEMENT"."SDEVICE">=0 AND BITAND("UDBMOVEMENT"."SSALEFLAG",4)=0)
    5 - access("UDBMOVEMENT"."TACTIONTIME">=TO_DATE(' 2011-05-05 06:00:00', 'syyyy-mm-dd
    hh24:mi:ss') AND "UDBMOVEMENT"."TACTIONTIME"<TO_DATE(' 2011-05-05 12:00:00', 'syyyy-mm-dd
    hh24:mi:ss') AND "UDBMOVEMENT"."SARTCLASSREF"<100)
    filter("UDBMOVEMENT"."SARTCLASSREF"<100)
    6 - access("UDBMOVEMENT"."LGLOBALID"="UDBIDENTDATA"."LGLOBALREF" AND
    "UDBMOVEMENT"."SCOMPUTER"="UDBIDENTDATA"."SCOMPUTER")
    7 - filter("UDBIDENTDATA"."SCARDTYPE"=2)
    Statistics
    543 recursive calls
    0 db block gets
    84383 consistent gets
    4485 physical reads
    0 redo size
    533990 bytes sent via SQL*Net to client
    3953 bytes received via SQL*Net from client
    331 SQL*Net roundtrips to/from client
    86 sorts (memory)
    0 sorts (disk)
    4947 rows processed
    SQL> @flush
    SQL> alter system flush buffer_cache;
    System altered.
    Elapsed: 00:00:00.12
    SQL> alter system flush shared_pool;
    System altered.
    Elapsed: 00:00:00.74
    SQL> SELECT UDBMovement.zIdDevice, UDBMovement.sDevice, UDBMovement.zIdLocal, UDBMovement.sComputer, UDBMovement.tActionTime, UDBMovement.sCardSubType, UDBMovement.sCardType, UDBMovement.cEpan, UDBMovement.cText, UDBMovement.lArtRef, UDBMovement.sArtClassRef, UDBMovement.lSequenz, UDBMovement.sTransMark, UDBMovement.lBlock, UDBMovement.sTransType, UDBMovement.lGlobalID, UDBMovement.sFacility, UDBMovement.sCardClass, UDBMovement.lSingleAmount, UDBMovement.sVAT, UDBMovement.lVATTot, UDBMovement.tTarifTimeStart, UDBMovement.tTarifTimeEnd, UDBMovement.cLicensePlate, UDBMovement.lMoneyValue, UDBMovement.lPointValue, UDBMovement.lTimeValue, UDBMovement.tProdTime
    2 FROM UDBMOVEMENT WHERE
    3 UDBMovement.sTransType > 0 AND UDBMovement.sDevice < 1000 AND UDBMovement.sDevice>= 0 AND UDBMovement.sCardType IN (2) AND (bitand(UDBMovement.sSaleFlag,1) = 0 AND bitand(UDBMovement.sSaleFlag,4) = 0) AND UDBMovement.sArtClassRef < 100
    4 AND UDBMovement.tActionTime >= TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.25
    5 AND UDBMovement.tActionTime < TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.5 ORDER BY tActionTime, lBlock, lSequenz;
    4947 rows selected.
    Elapsed: 00:00:26.46
    Execution Plan
    Plan hash value: 3648898312
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 2720 | 443K| 2812 (1)| 00:00:34 |
    | 1 | SORT ORDER BY | | 2720 | 443K| 2812 (1)| 00:00:34 |
    |* 2 | MAT_VIEW ACCESS BY INDEX ROWID| TMP_MS_UDB_MV | 2720 | 443K| 2811 (1)| 00:00:34 |
    |* 3 | INDEX RANGE SCAN | EEETMP_MS_ACTTIMEDEVICE | 2732 | | 89 (0)| 00:00:02 |
    Predicate Information (identified by operation id):
    2 - filter("UDBMOVEMENT"."STRANSTYPE">0 AND BITAND("UDBMOVEMENT"."SSALEFLAG",4)=0 AND
    BITAND("SSALEFLAG",1)=0 AND "UDBMOVEMENT"."SARTCLASSREF"<100)
    3 - access("UDBMOVEMENT"."TACTIONTIME">=TO_DATE(' 2011-05-05 06:00:00', 'syyyy-mm-dd
    hh24:mi:ss') AND "UDBMOVEMENT"."SDEVICE">=0 AND "UDBMOVEMENT"."SCARDTYPE"=2 AND
    "UDBMOVEMENT"."TACTIONTIME"<TO_DATE(' 2011-05-05 12:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
    "UDBMOVEMENT"."SDEVICE"<1000)
    filter("UDBMOVEMENT"."SCARDTYPE"=2 AND "UDBMOVEMENT"."SDEVICE"<1000 AND
    "UDBMOVEMENT"."SDEVICE">=0)
    Statistics
    449 recursive calls
    0 db block gets
    6090 consistent gets
    2837 physical reads
    0 redo size
    531987 bytes sent via SQL*Net to client
    3953 bytes received via SQL*Net from client
    331 SQL*Net roundtrips to/from client
    168 sorts (memory)
    0 sorts (disk)
    4947 rows processed
    SQL> spool off
    Edited by: MattSk on Feb 4, 2013 2:20 PM

    I have added some tkprof outputs on MV and master tables:
    SELECT tmp_ms_udb_mv.zIdDevice, tmp_ms_udb_mv.sDevice, tmp_ms_udb_mv.zIdLocal, tmp_ms_udb_mv.sComputer, tmp_ms_udb_mv.tActionTime, tmp_ms_udb_mv.sCardSubType, tmp_ms_udb_mv.sCardType, tmp_ms_udb_mv.cEpan, tmp_ms_udb_mv.cText, tmp_ms_udb_mv.lArtRef, tmp_ms_udb_mv.sArtClassRef, tmp_ms_udb_mv.lSequenz, tmp_ms_udb_mv.sTransMark, tmp_ms_udb_mv.lBlock, tmp_ms_udb_mv.sTransType, tmp_ms_udb_mv.lGlobalID, tmp_ms_udb_mv.sFacility, tmp_ms_udb_mv.sCardClass, tmp_ms_udb_mv.lSingleAmount, tmp_ms_udb_mv.sVAT, tmp_ms_udb_mv.lVATTot, tmp_ms_udb_mv.tTarifTimeStart, tmp_ms_udb_mv.tTarifTimeEnd, tmp_ms_udb_mv.cLicensePlate, tmp_ms_udb_mv.lMoneyValue, tmp_ms_udb_mv.lPointValue, tmp_ms_udb_mv.lTimeValue, tmp_ms_udb_mv.tProdTime
    FROM tmp_ms_udb_mv WHERE
    tmp_ms_udb_mv.sTransType > 0 AND tmp_ms_udb_mv.sDevice < 1000 AND tmp_ms_udb_mv.sDevice>= 0 AND tmp_ms_udb_mv.sCardType IN (1) AND (bitand(tmp_ms_udb_mv.sSaleFlag,1) = 0 AND bitand(tmp_ms_udb_mv.sSaleFlag,4) = 0) AND tmp_ms_udb_mv.sArtClassRef < 100
    AND tmp_ms_udb_mv.tActionTime >= TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.25
    AND tmp_ms_udb_mv.tActionTime < TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.5
    ORDER BY tActionTime, lBlock, lSequenz
    call count cpu elapsed disk query current rows
    Parse 1 0.04 0.10 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 596 0.17 27.07 2874 8894 0 8925
    total 598 0.21 27.18 2874 8894 0 8925
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 60
    Rows Row Source Operation
    8925 SORT ORDER BY (cr=8894 pr=2874 pw=0 time=27071773 us)
    8925 MAT_VIEW ACCESS BY INDEX ROWID TMP_MS_UDB_MV (cr=8894 pr=2874 pw=0 time=31458291 us)
    8925 INDEX RANGE SCAN EEETMP_MS_ACTTIMEDEVICE (cr=68 pr=68 pw=0 time=161347 us)(object id 149251)
    SELECT
    UDBMovement.zIdDevice, UDBMovement.sDevice, UDBMovement.zIdLocal, UDBMovement.sComputer, UDBMovement.tActionTime, UDBIdentData.sCardSubType, UDBIdentData.sCardType, UDBMovement.cEpan, UDBMovement.cText, UDBMovement.lArtRef, UDBMovement.sArtClassRef, UDBMovement.lSequenz, UDBMovement.sTransMark, UDBMovement.lBlock, UDBMovement.sTransType, UDBMovement.lGlobalID, UDBMovement.sFacility, UDBIdentData.sCardClass, UDBMovement.lSingleAmount, UDBMovement.sVAT, UDBMovement.lVATTot, UDBIdentData.tTarifTimeStart, UDBIdentData.tTarifTimeEnd, UDBIdentData.cLicensePlate, UDBIdentData.lMoneyValue, UDBIdentData.lPointValue, UDBIdentData.lTimeValue, UDBIdentData.tProdTime, UDBIdentData.tExpireDate
    FROM UDBMOVEMENT_orig UDBMovement, Udbidentdata_orig UDBIdentData
    WHERE
    UDBMovement.lGlobalId = UDBIdentData.lGlobalRef(+) AND UDBMovement.sComputer = UDBIdentData.sComputer(+)
    AND UDBMovement.sTransType > 0 AND UDBMovement.sDevice < 1000 AND UDBMovement.sDevice>= 0 AND UDBIdentData.sCardType IN (1) AND (bitand(UDBMovement.sSaleFlag,1) = 0 AND bitand(UDBMovement.sSaleFlag,4) = 0) AND UDBMovement.sArtClassRef < 100
    AND UDBMovement.tActionTime >= TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.25
    AND UDBMovement.tActionTime < TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.5
    ORDER BY tActionTime, lBlock, lSequenz
    call count cpu elapsed disk query current rows
    Parse 1 0.03 0.06 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 596 0.76 16.94 3278 85529 0 8925
    total 598 0.79 17.01 3278 85529 0 8925
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 60
    Rows Row Source Operation
    8925 SORT ORDER BY (cr=85529 pr=3278 pw=0 time=16942799 us)
    8925 NESTED LOOPS (cr=85529 pr=3278 pw=0 time=15017857 us)
    22567 TABLE ACCESS BY INDEX ROWID UDBMOVEMENT_ORIG (cr=17826 pr=1659 pw=0 time=7273473 us)
    22570 INDEX RANGE SCAN IDX_UDBMOVARTICLE (cr=111 pr=111 pw=0 time=112351 us)(object id 143693)
    8925 TABLE ACCESS BY INDEX ROWID UDBIDENTDATA_ORIG (cr=67703 pr=1619 pw=0 time=8154915 us)
    22567 INDEX UNIQUE SCAN UDBIDENTDATA_PRIM (cr=45136 pr=841 pw=0 time=3731470 us)(object id 108324)

  • Materialized View UNION different tables 10g.

    I am trying to create a materialized view from 2 different tables. According to the documentation for 10g it should be possible.
    Here is my script:
    DROP MATERIALIZED VIEW PERSON_MV_T16;
    CREATE MATERIALIZED VIEW PERSON_MV_T16 refresh complete on demand
    AS
    SELECT
    CAST(P.MARKER AS VARCHAR2(4)) AS MARKER,
    P.ROWID P_ROW_ID,
    CAST(P.ACTIVE_IND_DT AS DATE) AS ACTIVE_IND_DT
    FROM PERSON_ORGS_APEX_MV P
    UNION
    SELECT
    CAST(P.MARKER AS VARCHAR2(4)) AS MARKER,
    P.ROWID P_ROW_ID,
    CAST(P.ACTIVE_IND_DT AS DATE) AS ACTIVE_IND_DT
    FROM PERSON_ORGS_APVX_MV P;
    delete from mv_capabilities_table;
    begin
    dbms_mview.explain_mview('PEOPLE.PERSON_MV_T16');
    end;
    select *
    from mv_capabilities_table where capability_name not like '%PCT%' and capability_name = 'REFRESH_FAST_AFTER_INSERT';
    I get the following error.
    CAPABILITY_NAME = REFRESH_FAST_AFTER_INSERT
    POSSIBLE = N
    MSGTEXT = tables must be identical across the UNION operator
    I wrapped them in CAST operations just to be sure they are the same type and size.

    As far as I'm aware, you can create MVs in Standard Edition, and there is no limitation there that I'm aware of.
    Standard and Enterprise Edition
    A. Basic replication (MV replication)
    - transaction based
    - row-level
    - asynchronous from master table to MV (Materialized View)
    - DML replication only
    - database 7 / 8.0 / 8i / 9i / 10g
    Variants:
    1. Read-only MV replication
    2. Updateable MV replication:
    2.1 asynchronous from MV to master
    2.2 synchronous from MV to master
    3. Writeable MV replication
    Enterprise Edition only
    B. Multimaster replication
    - transaction based
    - row-level or procedural
    - asynchronous or synchronous
    - DML and DDL replication
    - database 7 / 8.0 / 8i / 9i / 10g
    - Enterprise Edition only
    Variants:
    1. row-level asynchronous replication
    2. row-level synchronous replication
    3. procedural asynchronous replication
    4. procedural synchronous replication
    C. Streams replication
    (Standard Edition 10g can execute Apply process)
    - (redo) log based
    - row-level
    - asynchronous
    - DML and DDL replication
    - database 9i / 10g (10g has Downstream Capture)
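    Regarding the original EXPLAIN_MVIEW message: for a set-operator MV to be fast refreshable, the documentation requires UNION ALL (not UNION) plus a distinct marker literal in each branch, with each branch individually meeting the fast-refresh rules (including ROWID materialized view logs on the underlying tables). A hedged sketch with hypothetical base tables PERSON_APEX and PERSON_APVX (note that the original query selects from other MVs, which adds nested-MV restrictions on top of this):
    CREATE MATERIALIZED VIEW person_mv_union
      REFRESH FAST ON DEMAND
    AS
    SELECT 1 AS umarker,            -- UNION ALL marker: a different constant per branch
           p.ROWID AS p_row_id,
           p.marker,
           p.active_ind_dt
    FROM   person_apex p
    UNION ALL
    SELECT 2 AS umarker,
           p.ROWID AS p_row_id,
           p.marker,
           p.active_ind_dt
    FROM   person_apvx p;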

  • Materialized View with OLAP table function

    Hi,
    I am trying to materialize OLAP cubes into relational materialized views, which works quite well. After a few loads running in the background in parallel (database jobs plus the DBMS_MVIEW package), performance degrades badly. The steps I am performing:
    1. Generate the materialized view as DEFERRED with COMPLETE refresh
    2. Generate database jobs that refresh the views with DBMS_MVIEW.REFRESH
    3. Run the jobs in the background
    The first load takes about 10 minutes; after that, loads take over 4 hours. I also tried ATOMIC_REFRESH=FALSE, with the same result. The database is running in ARCHIVELOG mode. Can this degrade performance?
    Any ideas?
    Thanks,
    Christian

    Hi,
    Yes, that's correct. I am creating the MVs in 10.2.0.3.
    Here is an example:
    CREATE MATERIALIZED VIEW "FCRSGX"."MV_F_ICCC_C11"
    ORGANIZATION HEAP PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 COMPRESS NOLOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "FCRSGX_CONSO_RELATIONAL"
    BUILD DEFERRED
    USING INDEX
    REFRESH COMPLETE ON DEMAND
    USING DEFAULT LOCAL ROLLBACK SEGMENT
    DISABLE QUERY REWRITE
    AS SELECT ENTITY, REVE_ICC, RP_ICC, MV_ICC, PERIOD, MEASURE, AMOUNT, R2C
    FROM TABLE(OLAP_TABLE('FCRSGX.CONSODATA DURATION SESSION',
    DIMENSION ENTITY as varchar2(8) FROM ENTITY
    DIMENSION REVE_ICC as varchar2(8) FROM REVE_ICC
    DIMENSION RP_ICC as varchar2(8) FROM RP_ICC
    DIMENSION MV_ICC as varchar2(8) FROM MV_ICC
    DIMENSION PERIOD as varchar2(8) FROM GMONTH
    DIMENSION MEASURE as varchar2(30) FROM EXPR
    MEASURE AMOUNT as number FROM ICCC.C11
    LOOP CMPE.ICCC.C11
    ROW2CELL R2C '))
    WHERE OLAP_CONDITION(r2c, 'lmt entity to CMPE.ICCC.C11')=1
    AND OLAP_CONDITION(r2c, 'lmt reve_icc to CMPE.ICCC.C11')=1
    AND OLAP_CONDITION(r2c, 'lmt mmonth to sapload.per eq y')=1
    AND OLAP_CONDITION(r2c, 'lmt gmonth to charl(mmonth) ')=1
    AND OLAP_CONDITION(r2c, 'lmt rp_icc to CMP.ICCC.C11 ')=1
    AND OLAP_CONDITION(r2c, 'lmt mv_icc to CMP.ICCC.C11 ')=1
    AND OLAP_CONDITION(r2c, 'lmt expr to ''F.ICCC.C11'' ')=1
    MODEL
    DIMENSION BY(ENTITY,REVE_ICC,RP_ICC,MV_ICC,PERIOD,MEASURE)
    MEASURES(AMOUNT,R2C)
    RULES UPDATE SEQUENTIAL ORDER()
    ;
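    For reference, a sketch of the non-atomic complete refresh mentioned in the question; with ATOMIC_REFRESH => FALSE the MV is truncated and reloaded with a direct-path insert instead of being deleted and re-inserted in a single transaction:
    BEGIN
      DBMS_MVIEW.REFRESH(
        list           => 'FCRSGX.MV_F_ICCC_C11',
        method         => 'C',       -- complete refresh
        atomic_refresh => FALSE);    -- truncate + direct-path reload
    END;
    /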

  • Updatable Materialized View and Master Table on same database

    Hi all,
    My first question: is it possible to have an updatable materialized view and the associated master table located in the same database?
    This is the requirement scenario:
    One unique database D exists.
    A is a batch table. Only inserts are allowed on Table A.
    M is an updatable materialized view on Table A (Master). Only updates are allowed on M (no insert or delete).
    Requirement is to push updates/changes from M to A periodically and then get the new inserted records from A into M via a refresh.
    Is this possible? What other approaches are applicable here?

    John,
    My question is related to the implementation and setup of the environment as explained in the above example. How can I achieve this considering that I have created an updatable m-view?
    If possible, how do I push changes made to an updatable m-view back to its master table when/before I execute DBMS_MVIEW.REFRESH on the m-view? What is the procedure for doing this if both the table and the m-view exist in the same database? Do I need to create master groups, materialized view refresh groups, etc.?
    One more thing: is there a way to retain changes to the m-view during a refresh? In this case, only newly inserted/updated records in the associated table would get inserted into the m-view, whereas changes made to m-view records would stay as-is.
    Hope my question is directed well. Thanks for your help.
    - Ankit

  • Create materialized View fails with "table or view does not exist"

    DB: 10.2.0.4
    OS: Win 2003
    Hi,
    Here in my tests I have 2 databases (A (source) and B (backup)), and I am trying to create an mview in database B to replicate data from one test table in database A, only for testing purposes. I'm getting the error "table or view does not exist" when I try to create an mview with REFRESH FAST. Here is my code:
    CREATE MATERIALIZED VIEW TESTES.TAB_TESTES_REPLIC_MVIEW_02
    REFRESH FAST
    START WITH TO_DATE('21/02/2012 18:50:00', 'DD/MM/YYYY HH24:MI:SS')
    NEXT SYSDATE + 1/24/60
    WITH PRIMARY KEY
    AS SELECT REGISTRO1,
    REGISTRO2
    FROM TESTES.TAB_TESTES_REPLIC_MVIEW_02@DB_LINK_ORA10;
    The dblink is working fine (the dblink user has the SELECT privilege on TESTES.TAB_TESTES_REPLIC_MVIEW_02), and I have created the mview log on database A.
    Where is my mistake?
    Thanks a lot.
    Edited by: Fabricio_Jorge on 21/02/2012 19:06

    I found the solution.
    I had to grant SELECT on the mview log. The name is available in DBA_MVIEW_LOGS; a sketch is shown below.
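    A sketch of that fix, run on the source database A; REPL_USER stands in for whatever account the database link connects as (hypothetical name):
    -- Find the log segment behind the master table...
    SELECT log_owner, master, log_table
    FROM   dba_mview_logs
    WHERE  master = 'TAB_TESTES_REPLIC_MVIEW_02';
    -- ...and grant SELECT on it to the link user without hard-coding the MLOG$ name.
    BEGIN
      FOR r IN (SELECT log_owner, log_table
                FROM   dba_mview_logs
                WHERE  master = 'TAB_TESTES_REPLIC_MVIEW_02') LOOP
        EXECUTE IMMEDIATE 'GRANT SELECT ON "' || r.log_owner || '"."' || r.log_table
                          || '" TO repl_user';
      END LOOP;
    END;
    /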

  • Performance consequences to adding materialized view logs to tables?

    I am writing a very complex query for a client of our transactional database system, and this will require the creation of a materialized view because all attempts at tuning to make performance acceptable have failed.
    I want to enable fast refresh of the MVIEW, but I am confused about the consequences of adding materialized view logs to the base tables.
    Some of the tables are large and involved in a lot of transactions, and I am wondering whether the performance of INSERTs/UPDATEs will be seriously affected by the presence of an mview log.
    This may be a simple question to answer but I was unable to find a clear cut answer in the literature.
    Thanks for any answers!!
    Chris Mills
    Biotechnology Data Management Consultant

    Last time I looked into this there were three cases to consider.
    If you're doing conventional row-by-row DML, the impact is just one insert into a heap table per row modified.
    If you are modifying a high number of rows using bulk binds, the overhead is very severe, because modifying 1,000 rows on the base table causes 1,000 non-bulk-bound inserts into the log table.
    Direct-path inserts have extremely low overhead because the MV log is not touched. Instead, the range of new rowids added is logged in ALL_SUMDELTA. A short illustration follows the link below.
    http://oraclesponge.wordpress.com/2005/09/15/optimizing-materialized-views-part-ii-the-direct-path-insert-enhancement/
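    A quick illustration of the first and third cases under stated assumptions (hypothetical BIG_TAB and STAGING_TAB, with a materialized view log already created on BIG_TAB):
    -- Conventional DML: each modified row also writes one row into the MV log.
    INSERT INTO big_tab (id, val) VALUES (1, 100);
    -- Direct-path insert: the MV log is bypassed; the loaded rowid range is
    -- recorded in the dictionary instead (visible via ALL_SUMDELTA).
    INSERT /*+ APPEND */ INTO big_tab (id, val)
    SELECT id, val FROM staging_tab;
    COMMIT;
    SELECT * FROM all_sumdelta;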

  • ORA-12034: materialized view log on table younger than last refresh

    I am getting this error while creating the materialized view.
    It appears to be a refresh error,
    but when I already have a materialized view log and I am only running the script to create the materialized view, how can that happen?

    34MCA2K2, Google lover?
    I confess perhaps I gave too little info.
    The view log was created before the MV.
    It was the first initial load.
    No refresh failed.
    No DDL.
    No purge log.
    Not a warehouse.
    There is no such behavior for MVs on other sites.
    P.S. I am asking for help from someone who knows what's wrong, has faced this before, or can point me to a useful link.
    P.P.S. It's a pity that there is no button "Useless answer"

  • Materialized view with tables in different schemas

    Hello,
    I want to create a materialized view with a table from a different schema in the SELECT statement. For the materialized view I would like to use the "REFRESH COMPLETE ON COMMIT" option.
    Here the code:
    CREATE MATERIALIZED VIEW S1.MV_EXAMPLE
    TABLESPACE TS1
    PCTFREE 0
    BUILD IMMEDIATE
    REFRESH COMPLETE ON COMMIT
    AS
    SELECT T.COLUMN1 AS COLUMN1
    FROM S2.TABLE1 T;
    I can't execute this SQL because I get an "insufficient privileges" error for this table:
    FROM S2.TABLE1 T
    ERROR at line 9:
    ORA-01031: Insufficient privileges
    User S1 has the following privileges:
    CREATE SESSION
    CREATE SNAPSHOT
    CREATE TABLE
    CREATE QUERY REWRITE
    SELECT ANY TABLE
    User S2 has the following privileges:
    CREATE SESSION
    CREATE SNAPSHOT
    CREATE TABLE
    CREATE QUERY REWRITE
    ALTER ANY SNAPSHOT
    Which privileges are missing?
    Thanks, Mathias

    Thanks Kamal for your answer!
    S1 has the SELECT grant directly. But I solved the problem: the system privilege "ON COMMIT REFRESH" was missing for S1. This has to be granted if any of the tables are outside the schema of the materialized view's owner (Oracle documentation - Data Warehouse Guide). A sketch of the grant is below.
    One thing is still not clear to me, and the Oracle documentation doesn't give me an answer: I can set the ON COMMIT refresh attribute on a materialized view containing only joins when a GROUP BY clause is present. If the GROUP BY clause is missing, I can't! Why?
    Regards, Mathias
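    For reference, a hedged sketch of the two forms that grant can take (run as SYS or another suitably privileged user); the object-level grant is the narrower option:
    -- System privilege: lets S1 create ON COMMIT refresh MVs on any table it can select from.
    GRANT ON COMMIT REFRESH TO s1;
    -- Or the object privilege, limited to the one table outside S1's schema.
    GRANT ON COMMIT REFRESH ON s2.table1 TO s1;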
