Materialized view performance gain estimation

Hi,
I have to estimate the performance gain of a materialized view for a particular query without actually creating the MV.
That means I have:
- a query with its initial execution plan
- a SELECT statement which is considered a probable MV
and I need to show what the performance gain of creating the MV would be for that query.
I looked at DBMS_MVIEW.EXPLAIN_REWRITE, but it only shows the performance gain for an MV that has already been created.
Any ideas?
Thanks

Hi Bidi,
For the first kind (the aggregation ones): the gain is the difference in elapsed time between running the original aggregation and the time required to fetch the pre-computed summary, usually a single logical I/O (if you index the MV).
As for determining the impact of creating materialized views on the performance of a whole workload of queries: perfect! Real-world workload tests are always better than contrived test cases.
If you have the SQL Access Advisor, you can define a SQL Tuning Set and run a representative benchmark with dbms_sqltune:
http://www.dba-oracle.com/t_dbms_sqltune.htm
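For a single statement, the SQL Access Advisor can also be driven through DBMS_ADVISOR.QUICK_TUNE, which stores recommended materialized views (with an estimated benefit) without creating anything. A minimal sketch; the task name and query are illustrative:

```sql
-- Ask the SQL Access Advisor to recommend access structures (including
-- MVs) for one statement; only recommendations are stored, nothing is built.
BEGIN
  DBMS_ADVISOR.QUICK_TUNE(
    advisor_name => DBMS_ADVISOR.SQLACCESS_ADVISOR,
    task_name    => 'mv_gain_task',
    attr1        => 'SELECT deptno, SUM(sal) FROM emp GROUP BY deptno');
END;
/

-- Review the recommendations and their estimated benefit:
SELECT rec_id, type, benefit
FROM   user_advisor_recommendations
WHERE  task_name = 'mv_gain_task';
```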
Hope this helps. . .
Donald K. Burleson
Oracle Press author
Author of "Oracle Tuning: The Definitive Reference":
http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm

Similar Messages

  • Dblink + materialized views performance

    Hi guys,
    Here's my problem. I have a table with a LOB column and I need to use it remotely
    via dblink.
    So I created a view changing this column to VARCHAR2(3999) and I was able to access it
    remotely without problems. First problem solved :-)
    But performance when accessing via dblink is really bad in this environment, because there are a lot of queries with joins using this table. So I tried to create a materialized view locally with these data: no success.
    After that I returned to the first view (the one with VARCHAR2(3999)), changed it to a materialized view, and created an MV log on the table. Then I tried to create a materialized view in my other database accessing the data of that first materialized
    view, but it failed again.
    Any suggestions on what I can do? Maybe exporting a dump of this table every day would solve my problem, but it's not the best option.
    Thanks,
    Felipe
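    One shape this usually takes (a sketch with hypothetical object names): because LOB columns cannot be selected across a database link, the VARCHAR2 conversion has to happen on the remote side, and the local MV then materializes the remote view.

```sql
-- On the REMOTE database: expose the LOB as VARCHAR2.
CREATE OR REPLACE VIEW my_table_v AS
SELECT id,
       DBMS_LOB.SUBSTR(lob_col, 3999, 1) AS lob_text,
       other_col
FROM   my_table;

-- On the LOCAL database: materialize that view and refresh it nightly,
-- so the joins run against local data instead of the dblink.
CREATE MATERIALIZED VIEW my_table_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE NEXT TRUNC(SYSDATE) + 1
AS
SELECT * FROM my_table_v@remote_db;
```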

    Hi Felipe,
    Is this the continuation of your post from some days ago, "dblink + query with join = poor performance"?
    Anyway, what did you mean by "no success" and "it failed again"?
    Please tell us more about what you tried, including the exact error messages.
    Nicolas.

  • Materialized View performance

    Hi all
    I'm creating some MVs for performance reasons. First I created the following view:
    CREATE OR REPLACE VIEW XXHR_PERSON_INFO_V AS
    SELECT
    per.person_id person_id
    ,per.effective_start_date person_effective_start
    ,per.effective_end_date person_effective_end
    ,pptu.effective_start_date person_usage_eff_start
    ,pptu.effective_end_date person_usage_eff_end
    ,hr_general.decode_lookup('TITLE', per.title) title
    ,per.last_name surname
    ,per.first_name forename
    ,per.middle_names middle_names
    ,per.full_name full_name
    ,hr_general.decode_lookup('SEX', per.sex) gender
    ,ppt.user_person_type person_type
    ,per.employee_number employee_number
    ,per.applicant_number applicant_number
    ,per.national_identifier national_insurance_number
    ,per.date_of_birth date_of_birth
    ,per.town_of_birth town_of_birth
    ,per.region_of_birth county_of_birth
    ,(select territory_short_name country_of_birth
    from fnd_territories_tl
    where territory_code = per.country_of_birth) country_of_birth
    ,hr_general.decode_lookup('MAR_STATUS', per.marital_status) marital_status
    ,hr_general.decode_lookup('NATIONALITY', per.nationality) nationality
    ,hr_general.decode_lookup('REGISTERED_DISABLED', per.registered_disabled_flag) registered_disabled
    ,hr_general.decode_lookup('ETH_TYPE', per.per_information1) ethnic_origin
    ,per.email_address email
    ,per.resume_last_updated hr_record_last_checked
    ,per.honors honors
    ,per.known_as preferred_name
    ,per.previous_last_name previous_surname
    ,per.correspondence_language correspondence_language
    ,per.date_of_death date_of_death
    ,per.student_status student_status
    ,per.on_military_service on_military_service
    ,per.second_passport_exists second_passport_exists
    ,per.original_date_of_hire date_of_joining_met
    ,per.current_employee_flag current_employee_flag
    ,per.current_applicant_flag current_applicant_flag
    ,per.current_emp_or_apl_flag current_emp_or_apl_flag
    ,per.creation_date person_creation_date
    ,per.created_by person_created_by
    ,per.last_update_date person_last_update_date
    ,per.last_updated_by person_last_updated_by
    ,ppt.system_person_type
    ,per.per_information_category
    ,per.per_information1
    ,per.per_information2
    ,per.per_information3
    ,per.per_information4
    ,per.per_information5
    ,per.per_information6
    ,per.per_information7
    ,per.per_information8
    ,per.per_information9
    ,per.per_information10
    ,per.per_information11
    ,per.per_information12
    ,per.per_information13
    ,per.per_information14
    ,per.per_information15
    ,per.per_information16
    ,per.per_information17
    ,per.per_information18
    ,per.per_information19
    ,per.per_information20
    ,per.per_information21
    ,per.per_information22
    ,per.per_information23
    ,per.per_information24
    ,per.per_information25
    ,per.per_information26
    ,per.per_information27
    ,per.per_information28
    ,per.per_information29
    ,per.per_information30
    ,per.attribute_category per_attribute_category
    ,per.attribute1 per_attribute1
    ,per.attribute2 per_attribute2
    ,per.attribute3 per_attribute3
    ,per.attribute4 per_attribute4
    ,per.attribute5 per_attribute5
    ,per.attribute6 per_attribute6
    ,per.attribute7 per_attribute7
    ,per.attribute8 per_attribute8
    ,per.attribute9 per_attribute9
    ,per.attribute10 per_attribute10
    ,per.attribute11 per_attribute11
    ,per.attribute12 per_attribute12
    ,per.attribute13 per_attribute13
    ,per.attribute14 per_attribute14
    ,per.attribute15 per_attribute15
    ,per.attribute16 per_attribute16
    ,per.attribute17 per_attribute17
    ,per.attribute18 per_attribute18
    ,per.attribute19 per_attribute19
    ,per.attribute20 per_attribute20
    ,per.attribute21 per_attribute21
    ,per.attribute22 per_attribute22
    ,per.attribute23 per_attribute23
    ,per.attribute24 per_attribute24
    ,per.attribute25 per_attribute25
    ,per.attribute26 per_attribute26
    ,per.attribute27 per_attribute27
    ,per.attribute28 per_attribute28
    ,per.attribute29 per_attribute29
    ,per.attribute30 per_attribute30
    FROM
    per_person_types ppt
    ,per_person_type_usages_f pptu
    ,per_people_f per
    WHERE 1 = 1
    AND ppt.person_type_id = pptu.person_type_id
    AND pptu.person_id = per.person_id;
    I then used this script to create an MV with the indexes shown:
    CREATE MATERIALIZED VIEW APPS.XXHR_PERSON_INFO_MV
    NOCACHE
    LOGGING
    NOPARALLEL
    BUILD IMMEDIATE
    REFRESH FORCE
    START WITH TO_DATE('09-Aug-2008 06:00:00','dd-mon-yyyy hh24:mi:ss')
    NEXT TRUNC(SYSDATE) + 1
    AS
    SELECT *
    FROM XXHR_PERSON_INFO_V;
    CREATE INDEX APPS.XXHR_PERSON_INFO_MV_N1 ON APPS.XXHR_PERSON_INFO_MV
    (PERSON_ID)
    LOGGING
    NOPARALLEL;
    CREATE INDEX APPS.XXHR_PERSON_INFO_MV_N2 ON APPS.XXHR_PERSON_INFO_MV
    (PERSON_EFFECTIVE_START, PERSON_EFFECTIVE_END)
    LOGGING
    NOPARALLEL;
    CREATE INDEX APPS.XXHR_PERSON_INFO_MV_N3 ON APPS.XXHR_PERSON_INFO_MV
    (EMPLOYEE_NUMBER)
    LOGGING
    NOPARALLEL;
    CREATE INDEX APPS.XXHR_PERSON_INFO_MV_N4 ON APPS.XXHR_PERSON_INFO_MV
    (SYSTEM_PERSON_TYPE)
    LOGGING
    NOPARALLEL;
    CREATE INDEX APPS.XXHR_PERSON_INFO_MV_N5 ON APPS.XXHR_PERSON_INFO_MV
    (PERSON_TYPE)
    LOGGING
    NOPARALLEL;
    CREATE INDEX APPS.XXHR_PERSON_INFO_MV_N6 ON APPS.XXHR_PERSON_INFO_MV
    (PERSON_USAGE_EFF_START, PERSON_USAGE_EFF_END)
    LOGGING
    NOPARALLEL;
    This creates an MV with about 900,000 records in it.
    If I then run the following SQL, the report takes about 2 minutes to run:
    SELECT employee_number,
           person_effective_start,
           person_effective_end
    FROM   xxhr_person_info_mv
    WHERE  TRUNC(SYSDATE) BETWEEN person_effective_start AND person_effective_end;
    Is there any way to improve the performance of this query?
    I'm then joining this MV to two other MVs, one with 1 million rows and another with 3 million rows. Performance is not great!
    Any help appreciated.
    Thanks
    Alex

    If you are on 9i or later, could you please post a properly formatted (using pre tags), complete output of DBMS_XPLAN.DISPLAY for your statement?
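    For example, using the query from the post:

```sql
EXPLAIN PLAN FOR
SELECT employee_number,
       person_effective_start,
       person_effective_end
FROM   xxhr_person_info_mv
WHERE  TRUNC(SYSDATE) BETWEEN person_effective_start AND person_effective_end;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```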
    In addition run an SQL*Plus autotrace and post the output, too:
    SET AUTOTRACE TRACEONLY TIMING ON
    <Run your statement>
    This should also include the information how many rows your statement actually returns.
    How long does it take if you use a full table scan instead of an index range scan?
    SELECT /*+ full(xxhr_person_info_mv) */
           employee_number,
           person_effective_start,
           person_effective_end
    FROM   xxhr_person_info_mv
    WHERE  TRUNC(SYSDATE) BETWEEN person_effective_start AND person_effective_end;
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Will Materialized view log reduces the performance of DML statements on the master table

    Hi all,
    I need to refresh an on-demand fast-refresh materialized view in Oracle 11gR2. For this purpose I created a materialized view log on the (non-partitioned) table, into which records are inserted at a rate of about 5,000/day, as follows:
    CREATE MATERIALIZED VIEW LOG ON NOTES NOLOGGING WITH PRIMARY KEY INCLUDING NEW VALUES;
    This table already has 20 lakh (2,000,000) records. Will adding this MV log reduce DML performance on the table?
    Please guide me on this.

    Having the base table maintain a materialised view log will have an impact on the speed of DML statements - they are doing extra work, which will take extra time. A more sensible question would be to ask whether it will have a significant impact, to which the answer is almost certainly "no".
    5000 records inserted a day is nothing. Adding a view log to the heap really shouldn't cause any trouble at all - but ultimately only your own testing can establish that.
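    A rough, illustrative way to run that test yourself (the column list is hypothetical): time the same batch of inserts with the log present and absent.

```sql
SET TIMING ON
-- With the MV log in place:
INSERT INTO notes (note_id, note_text)
SELECT level, 'test row ' || level FROM dual CONNECT BY level <= 5000;
ROLLBACK;

-- Without it:
DROP MATERIALIZED VIEW LOG ON notes;
INSERT INTO notes (note_id, note_text)
SELECT level, 'test row ' || level FROM dual CONNECT BY level <= 5000;
ROLLBACK;

-- Recreate the log afterwards:
CREATE MATERIALIZED VIEW LOG ON notes
  WITH PRIMARY KEY INCLUDING NEW VALUES;
```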

  • Materialized View Logs on OLTP DB- Performance issues

    Hi All,
    We have a request to check what the performance impact would be of having materialized views (with FAST refresh every 5 and every 30 minutes).
    We have been using some APIs (I don't have full details of this job) to refresh tables in the Reporting DB, and we want to switch to MVIEWS in the next release.
    The base tables for these MVs are in DB1, which has high DML activity.
    We are planning to create 7 MVs on a reporting DB pointing to the corresponding tables in DB1.
    I am setting up the environment with the required tables to test this, and since I am new to MVIEWS I would also like to hear your experiences implementing MVs with fast refresh against a typical OLTP DB.
    How it affects the performance of DML statements on base tables?
    How often you had to do complete refresh because of invalid/outdated Mview Logs?
    other Maintenance overheads?
    any possible workarounds?
    Oracle Version: 9.2.0.8
    OS : HP-UX
    Thank you for sharing your experiences.

    Doing incremental refreshes every 5 minutes will add some amount of load to the OLTP system. Again, depending on your environment, that may or may not be significant.
    What factors can affect this? Among other things: the size of the materialized view logs that need to be read (which in turn depends on the number of transactions and the setup of the materialized views), the current load on the OLTP system, etc. If you're struggling with an I/O or CPU bottleneck at peak processing now, for example, having a dozen materialized view refresh processes running as well would probably generate noticeable delays in the system. If you have plenty of spare CPU and I/O bandwidth, the refreshes will be less noticeable.
    Is it the same in 10gR2 too? We are upgrading to the same this coming October.
    Streams is far easier to deal with in 10.2.
    Justin
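    For reference, the kind of setup being discussed might look like this (a sketch; table and dblink names are hypothetical):

```sql
-- On DB1 (the OLTP source): a log so fast refresh reads only the deltas.
CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;

-- On the reporting DB: an MV that fast-refreshes every 5 minutes.
CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 5/1440
AS
SELECT * FROM orders@db1;
```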

  • Creating collection vs. materialized view - better performance?

    Hi, I am trying to improve the performance of our application and am looking at everything possible. I am wondering if the use of multiple, complex collections is slowing down our application. Would the use of materialized views, as opposed to collections, improve things? Thanks, Karen

    To provide more info...
    here is the process which creates the list of species based on the favorite species identified (followed by the query that selects from this collection):
    declare
    yes_are NUMBER;
    pCount NUMBER;
    l_seq_id NUMBER;
    yes_hms NUMBER;
    found_area NUMBER;
    found_unit NUMBER;
    unitmeasure VARCHAR2(2);
    pbCount NUMBER;
    pbPrice NUMBER;
    begin
    --create license collection so that if error on a submit the information retains
    if apex_collection.collection_exists('LICENSE_COLLECTION') then
    apex_collection.delete_collection('LICENSE_COLLECTION');
    end if;
    --create vessel collection so that if error on a submit the information retains
    if apex_collection.collection_exists('SUPVES_COLLECTION') then
    apex_collection.delete_collection('SUPVES_COLLECTION');
    end if;
    apex_collection.create_or_truncate_collection('FP_COLLECTION');
    --create collection to save landings
    apex_collection.create_or_truncate_collection('SPECIES_COLLECTION');
    --loop through the favorite species and populate with pre-existing data
    for rec IN (select *
    from frequent_species
    where permit_id = :G_PERMIT_ID
    order by fav_order)
    LOOP
    -- check to see if there is a priceboard entry for the favorite species
    select count(*) into pbCount
    from price_board
    where permit_id = :G_PERMIT_ID and
    species_itis = rec.species_itis and
    grade_code = rec.grade_code and
    market_code = rec.market_code and
    unit_of_measure = rec.unit_measure and
    price is not null;
    -- if there is a price board entry
    if pbCount = 1 then
    --get the default price for that species combination
    select price into pbPrice
    from price_board
    where permit_id = :G_PERMIT_ID and
    species_itis = rec.species_itis and
    grade_code = rec.grade_code and
    market_code = rec.market_code and
    unit_of_measure = rec.unit_measure and
    price is not null;
    --add landings row with price board data
    l_seq_id := apex_collection.add_member('SPECIES_COLLECTION',
    null,
    null,
    null,
    rec.species_itis,
    rec.grade_code,
    rec.market_code,
    rec.unit_measure,
    nvl(rec.disposition_code,:G_FIRST_DISPOSITION),
    0, null,pbPrice,null);
    -- no price board entry
    else
    -- add landings row without any priceboard data
    l_seq_id := apex_collection.add_member('SPECIES_COLLECTION',
    null,
    null,
    null,
    rec.species_itis,
    rec.grade_code,
    rec.market_code,
    rec.unit_measure,
    nvl(rec.disposition_code,:G_FIRST_DISPOSITION),
    0, null,null,null);
    end if;
    apex_collection.update_member_attribute(p_collection_name=>'SPECIES_COLLECTION',
    p_seq => l_seq_id,
    p_attr_number =>14,
    p_attr_value => 'ERROR');
    --set first disposition
    :G_FIRST_DISPOSITION := nvl(rec.disposition_code,:G_FIRST_DISPOSITION);
    found_area:=0;
    -- All rows need to be checked to determine if additional info is needed based on partner_options table
    -- check if AREA will be needed
    select count(*) into found_area
    from partner_options
    where partner_id = :G_ISSUING_AGENCY and
    substr(species_itis,1,6) = rec.species_itis and
    option_type = 'ARE' and
    nvl(inactivate_option_date, sysdate) >= sysdate;
    -- landing row requires AREA data
    if found_area > 0 then
    apex_collection.update_member_attribute(p_collection_name=>'SPECIES_COLLECTION',
    p_seq => l_seq_id,
    p_attr_number =>13,
    p_attr_value => 'Y');
    -- landing row does NOT require AREA data
    else
    apex_collection.update_member_attribute(p_collection_name=>'SPECIES_COLLECTION',
    p_seq => l_seq_id,
    p_attr_number =>13,
    p_attr_value => 'N');
    end if;
    found_unit := 0;
    -- check if COUNT will be needed
    select count(*) into found_unit
    from partner_options
    where partner_id = :G_ISSUING_AGENCY and
    substr(species_itis,1,6) = rec.species_itis and
    option_type = 'LBC' and
    nvl(inactivate_option_date, sysdate) >= sysdate;
    -- landing row requires UNIT data
    if found_unit > 0 then
    select unit_measure into unitmeasure
    from partner_options
    where partner_id = :G_ISSUING_AGENCY and
    substr(species_itis,1,6) = rec.species_itis and
    option_type = 'LBC' and nvl(inactivate_option_date, sysdate) >= sysdate;
    apex_collection.update_member_attribute(p_collection_name=>'SPECIES_COLLECTION',
    p_seq => l_seq_id,
    p_attr_number =>17,
    p_attr_value => 'Y');
    apex_collection.update_member_attribute(p_collection_name=>'SPECIES_COLLECTION',
    p_seq => l_seq_id,
    p_attr_number =>19,
    p_attr_value => unitmeasure);
    --landing row does NOT require UNIT data
    else
    apex_collection.update_member_attribute(p_collection_name=>'SPECIES_COLLECTION',
    p_seq => l_seq_id,
    p_attr_number =>17,
    p_attr_value => 'N');
    end if;
    -- check if HMS
    SELECT count(*) into yes_hms
    FROM HMSSpecies a
    where hmsspeciesitis = rec.species_itis;
    -- landing row requires HMS data
    if yes_hms > 0 and rec.grade_code = '10' then
    apex_collection.update_member_attribute(p_collection_name=>'SPECIES_COLLECTION',
    p_seq=> l_seq_id,
    p_attr_number=>20,
    p_attr_value=>'Y');
    else
    -- landing row does NOT require HMS data
    apex_collection.update_member_attribute(p_collection_name=>'SPECIES_COLLECTION',
    p_seq=> l_seq_id,
    p_attr_number=>20,
    p_attr_value=>'N');
    end if;
    end loop;
    end;
    and the query for the region:
    SELECT
    apex_item.text(1,seq_id,'','','id="f01_#ROWNUM#"','','') "DeleteRow",
    apex_item.text_from_LOV(c004,'SPECIES')||'-'||apex_item.text_from_LOV(c005,'GRADE')||'-'||apex_item.text_from_LOV(c006,'MARKETCODE')||'-'||apex_item.text_from_LOV_query(c007,'select unit_of_measure d, unit_of_measure r from species_qc') unit,
    apex_item.select_list_from_LOV(6,c008,'DISPOSITIONS','onchange="getAllDisposition(#ROWNUM#)"','YES','0',' -- Select Favorite -- ','f06_#ROWNUM#','') Disposition,
    apex_item.select_list_from_LOV(7,c009,'GEARS','style="background-color:#FBEC5D; "onFocus="checkGearPreviousFocus(#ROWNUM#);"onchange="getAllGears(#ROWNUM#)"','YES','0','-- Select Favorite --','f07_#ROWNUM#','') Gear,
    apex_item.text(8,TO_NUMBER(c010),5,null,'onchange="setTotal(#ROWNUM#)"','f08_#ROWNUM#','') Quantity,
    apex_item.text(9,TO_NUMBER(c011),5,null,'onchange="getPriceBoundaries(#ROWNUM#)"','f09_#ROWNUM#','') Price,
    apex_item.text(10, TO_NUMBER(c012),5,null, 'onchange="changePrice(#ROWNUM#)" onKeyPress="selectDollarsFocus(#ROWNUM#);"','f10_#ROWNUM#','') Dollars,
    apex_item.select_list_from_LOV_XL(11, c014,'AREAFISHED','style="background-color:#FBEC5D; "onchange="getAllAreaFished(#ROWNUM#)"','YES','ERROR','-- Select Area Fished --','f11_#ROWNUM#','') Area_Fished,
    apex_item.text(12, c018,4,null,'style="background-color:#FBEC5D; "onBlur="setUnitQuantity(#ROWNUM#)"','f12_#ROWNUM#','') UNIT_QUANTITY,
    apex_item.text(13, 'CN',3,null,'readOnly=readOnly','f13_#ROWNUM#','') UNIT_COUNT,
    apex_item.checkbox(14,'Y','id="f14_#ROWNUM#" style="background-color:#FBEC5D; " onClick="alterYes(#ROWNUM#);" onKeyPress="alterYes(#ROWNUM#);"',c021) FinsAttached,
    apex_item.checkbox(15,'N','id="f15_#ROWNUM#" style="background-color:#FBEC5D; " onClick="alterNo(#ROWNUM#);" onKeyPress="alterNo(#ROWNUM#);"',c022) FinsNotAttached,
    apex_item.checkbox(16,'U','id="f16_#ROWNUM#" style="background-color:#FBEC5D; " onClick="alterUnk(#ROWNUM#);" onKeyPress="alterUnk(#ROWNUM#);"',c023) FinsUnknown
    from apex_collections
    where collection_name = 'SPECIES_COLLECTION' order by seq_id
    /
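    For what it's worth, much of the per-row lookup work inside the loop above could be folded into one set-based query (or materialized into a view) before being loaded into the collection. A sketch using the tables from the post, with join conditions inferred from the loop's WHERE clauses:

```sql
-- One pass over frequent_species with the price_board lookup as an
-- outer join, instead of two SELECT COUNT(*)/SELECT INTO calls per row.
SELECT fs.species_itis,
       fs.grade_code,
       fs.market_code,
       fs.unit_measure,
       fs.disposition_code,
       pb.price                       -- NULL when no price board entry
FROM   frequent_species fs
       LEFT JOIN price_board pb
              ON  pb.permit_id       = fs.permit_id
              AND pb.species_itis    = fs.species_itis
              AND pb.grade_code      = fs.grade_code
              AND pb.market_code     = fs.market_code
              AND pb.unit_of_measure = fs.unit_measure
              AND pb.price IS NOT NULL
WHERE  fs.permit_id = :G_PERMIT_ID
ORDER  BY fs.fav_order;
```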

  • To increase performance of refresh of Materialized View

    Dear Members,
    I have a set of 231 materialized views that copy data directly from source views on a remote database into our internal databases over a database link (basically select column1, column2, ... from viewname@dblinkname). There are no WHERE clauses in any of them. The refreshes take around 4 hours to run; they are either fast refreshed or complete refreshed, on demand.
    Could anyone give me suggestions of improving the performance of these refreshes? I would appreciate some guidance in this case.
    Kind Regards,
    Suhas

    Hi Suhas,
    If you are asking about Coherence database integration, you could start here ...
    http://coherence.oracle.com/display/COH34UG/Read-Through,+Write-Through,+Write-Behind+and+Refresh-Ahead+Caching
    Thanks
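    If the question is about plain Oracle materialized view refreshes rather than Coherence, two levers often help (a sketch; MV names are placeholders):

```sql
-- Refresh several MVs in one call.  method => 'F' attempts a fast
-- (incremental) refresh where possible.  atomic_refresh => FALSE lets a
-- complete refresh use TRUNCATE + direct-path INSERT instead of
-- DELETE + conventional INSERT, which is usually much faster for full copies.
BEGIN
  DBMS_MVIEW.REFRESH(
    list           => 'MV_ONE,MV_TWO,MV_THREE',
    method         => 'F',
    atomic_refresh => FALSE);
END;
/
```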

  • Performance consequences to adding materialized view logs to tables?

    I am writing a very complex query for a client of our transactional database system, and this will require the creation of a materialized view because all attempts at tuning to make performance acceptable have failed.
    I want to enable fast refresh of the MVIEW, but I am confused about the consequences of adding materialized view logs to the base tables.
    Some of the tables are large and involved in a lot of transactions, and I am wondering if the performance of INSERTs/UPDATEs will be seriously affected by the presence of an mview log.
    This may be a simple question, but I was unable to find a clear-cut answer in the literature.
    Thanks for any answers!!
    Chris Mills
    Biotechnology Data Management Consultant

    Last time I looked into this, there were three cases to consider:
    If you're doing conventional row-by-row DML, the impact is just one insert into a heap table per row modified.
    If you are modifying a high number of rows using bulk binds, the overhead is very severe, because modifying 1,000 rows on the base table causes 1,000 non-bulk-bound inserts into the log table.
    Direct-path inserts have extremely low overhead because the MV log is not touched. Instead, the range of new rowids added is logged in ALL_SUMDELTA:
    http://oraclesponge.wordpress.com/2005/09/15/optimizing-materialized-views-part-ii-the-direct-path-insert-enhancement/
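    The third case is just the APPEND hint (table names illustrative):

```sql
-- Direct-path insert: rows go above the high-water mark and the MV log
-- is bypassed; the new rowid range is recorded in ALL_SUMDELTA instead.
INSERT /*+ APPEND */ INTO base_table
SELECT * FROM staging_table;
COMMIT;
```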

  • ORA-01555 when performing refresh of materialized views via DBMS_JOB

    All,
    With this project needing to be finished soon and an issue occurring on the local database, I am hopeful one of you will have the answer or resolution so that I may complete this project soon...
    Here is the setup:
    10g database (remote)
    9i database (local)
    DB link from local to remote database
    103 materialized views in the local database that are refreshed by pulling data over the dblink from the remote database.
    A PL/SQL procedure has been created which sets the v_failures variable to 0 and then checks whether the current job has a failure; if so, it puts that value into v_failures. When that reaches 1, the procedure does nothing and exits. If failures equal 0, it performs a DBMS_MVIEW.REFRESH for each materialized view.
    This worked the first time, but it is continually failing now with ORA-01555 (snapshot too old). From what I can tell, the dbms_job duration is 4 seconds and the Last_Exec is 2m 7s after it starts (8:30 PM). That said, our DBAs working on the project have increased the UNDO_RETENTION setting and assure us that shouldn't be the problem. The odd thing is, this never happened in the dev environment when we were developing/testing, only in the production environment once it was migrated.
    I am looking for possible causes and possible solutions to the ORA-01555 error. A sample of the code in my procedure is below:
    CREATE OR REPLACE PROCEDURE ar_mviews IS
      v_failures NUMBER := 0;
    BEGIN
      BEGIN
        SELECT failures
        INTO   v_failures
        FROM   user_jobs
        WHERE  schema_user = 'CATEBS'
        AND    what LIKE '%DISCO_MVIEWS%';
        IF v_failures = 1 THEN
          NULL;
        ELSE
          DBMS_MVIEW.REFRESH('AR_BATCH_RECEIPTS_V', 'C');
          DBMS_OUTPUT.PUT_LINE(v_failures);
        END IF;
      END;
      BEGIN
        SELECT failures
        INTO   v_failures
        FROM   user_jobs
        WHERE  schema_user = 'CATEBS'
        AND    what LIKE '%DISCO_MVIEWS%';
        IF v_failures = 1 THEN
          NULL;
        ELSE
          DBMS_MVIEW.REFRESH('AR_BATCHES_ALL_V', 'C');
          DBMS_OUTPUT.PUT_LINE(v_failures);
        END IF;
      END;
    END ar_mviews;
    ---------------------------------------------------------------------------------------------------------------

    We are doing complete refreshes, and doing it that way for consistency in the data presented. Because some materialized views depend on data in other materialized views, we have them ordered in a procedure so that when one finishes the next starts; they are also in a specific order to ensure accurate data.
    The condition on v_failures is there so that the job doesn't get, let's say, 90% finished, hit an error, and start over again. We use the IF statement that results in NULL (do nothing) so that the job doesn't repeat itself over and over. If one MV fails, we have to consider the whole job a failure and do nothing else, because the MV that failed may be a dependency of another MV down the line (e.g. MV7 reads MV3; if MV3 fails, the whole job fails because MV7 can't be accurate without the most current data from MV3).
    Also, this is being performed in off-business hours, after backup to tape etc. and prior to start of business, so that no one is using the system when we run this job. That won't always be the case once we move to high availability with end users in varying time zones.
    I hope I have answered your question and look forward to continued feedback.
    Thanks!
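    As an aside (a sketch, and subject to what the 9i release supports): a single DBMS_MVIEW.REFRESH call accepts a comma-separated list, refreshes the MVs in order, and with the default atomic refresh rolls the whole batch back on the first failure, which is close to the all-or-nothing behavior described above.

```sql
BEGIN
  DBMS_MVIEW.REFRESH(
    list   => 'AR_BATCH_RECEIPTS_V,AR_BATCHES_ALL_V',
    method => 'CC');   -- one 'C' (complete) per MV in the list
END;
/
```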

  • Performance difference between tables and materialized views

    Hi,
    I created a materialized view on a query that involves a partitioned table.
    When I used the same query and created a table out of it (create table xyz as select * from (the query)), the table got created quickly.
    So does that mean that, performance-wise, creating a table is faster than creating/refreshing the materialized view? Or is that due to the refresh method I use? Currently I use a complete refresh.

    Well, for starters, if you created the materialized view first and then the standard table, the data for the second one had already been fetched recently, so caching reduced the I/O and made it quicker. There are also other factors, such as the materialized view creating the internal structures required to allow refreshes to be done quickly (the primary key, etc.), which you didn't create on your second creation.
    What you have shown is that two completely different statements, running at different times, appear to operate at different speeds. It is not a comparison of whether the materialized view is slower or quicker than the create table statement.

  • Query performance on materialized view vs master tables

    Hi,
    I am seeing some strange behavior in the db. On my master tables UDBMOVEMENT_ORIG (26 million rows) and UDBIDENTDATA_ORIG (18 million rows) I created a materialized view TMP_MS_UDB_MV (UDBMOVEMENT is a synonym for this object) which applies some default conditions and the join condition on these master tables. The MV has about 12 million rows. I created the MV so that queries would hit a smaller object: the MV is 3 GB, the master tables together 12 GB. But I don't understand why, even though physical reads and consistent gets are lower on the MV than on the master tables, the final execution time is shorter on the master tables. See my log below.
    Why?
    Thanks for answers.
    SQL> set echo on
    SQL> @flush
    SQL> alter system flush buffer_cache;
    System altered.
    Elapsed: 00:00:00.20
    SQL> alter system flush shared_pool;
    System altered.
    Elapsed: 00:00:00.65
    SQL> SELECT
    2 UDBMovement.zIdDevice, UDBMovement.sDevice, UDBMovement.zIdLocal, UDBMovement.sComputer, UDBMovement.tActionTime, UDBIdentData.sCardSubType, UDBIdentData.sCardType, UDBMovement.cEpan, UDBMovement.cText, UDBMovement.lArtRef, UDBMovement.sArtClassRef, UDBMovement.lSequenz, UDBMovement.sTransMark, UDBMovement.lBlock, UDBMovement.sTransType, UDBMovement.lGlobalID, UDBMovement.sFacility, UDBIdentData.sCardClass, UDBMovement.lSingleAmount, UDBMovement.sVAT, UDBMovement.lVATTot, UDBIdentData.tTarifTimeStart, UDBIdentData.tTarifTimeEnd, UDBIdentData.cLicensePlate, UDBIdentData.lMoneyValue, UDBIdentData.lPointValue, UDBIdentData.lTimeValue, UDBIdentData.tProdTime, UDBIdentData.tExpireDate
    3 FROM UDBMOVEMENT_orig UDBMovement, Udbidentdata_orig UDBIdentData
    4 WHERE
    5 UDBMovement.lGlobalId = UDBIdentData.lGlobalRef(+) AND UDBMovement.sComputer = UDBIdentData.sComputer(+)
    6 AND UDBMovement.sTransType > 0 AND UDBMovement.sDevice < 1000 AND UDBMovement.sDevice>= 0 AND UDBIdentData.sCardType IN (2) AND (bitand(UDBMovement.sSaleFlag,1) = 0 AND bitand(UDBMovement.sSaleFlag,4) = 0) AND UDBMovement.sArtClassRef < 100
    7 AND UDBMovement.tActionTime >= TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.25 AND UDBMovement.tActionTime < TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.5
    8 ORDER BY tActionTime, lBlock, lSequenz;
    4947 rows selected.
    Elapsed: 00:00:15.84
    Execution Plan
    Plan hash value: 1768406139
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 7166 | 1238K| | 20670 (1)| 00:04:09 |
    | 1 | SORT ORDER BY | | 7166 | 1238K| 1480K| 20670 (1)| 00:04:09 |
    | 2 | NESTED LOOPS | | | | | | |
    | 3 | NESTED LOOPS | | 7166 | 1238K| | 20388 (1)| 00:04:05 |
    |* 4 | TABLE ACCESS BY INDEX ROWID| UDBMOVEMENT_ORIG | 7142 | 809K| | 7056 (1)| 00:01:25 |
    |* 5 | INDEX RANGE SCAN | IDX_UDBMOVARTICLE | 10709 | | | 61 (0)| 00:00:01 |
    |* 6 | INDEX UNIQUE SCAN | UDBIDENTDATA_PRIM | 1 | | | 1 (0)| 00:00:01 |
    |* 7 | TABLE ACCESS BY INDEX ROWID | UDBIDENTDATA_ORIG | 1 | 61 | | 2 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - filter("UDBMOVEMENT"."STRANSTYPE">0 AND "UDBMOVEMENT"."SDEVICE"<1000 AND
    BITAND("SSALEFLAG",1)=0 AND "UDBMOVEMENT"."SDEVICE">=0 AND BITAND("UDBMOVEMENT"."SSALEFLAG",4)=0)
    5 - access("UDBMOVEMENT"."TACTIONTIME">=TO_DATE(' 2011-05-05 06:00:00', 'syyyy-mm-dd
    hh24:mi:ss') AND "UDBMOVEMENT"."TACTIONTIME"<TO_DATE(' 2011-05-05 12:00:00', 'syyyy-mm-dd
    hh24:mi:ss') AND "UDBMOVEMENT"."SARTCLASSREF"<100)
    filter("UDBMOVEMENT"."SARTCLASSREF"<100)
    6 - access("UDBMOVEMENT"."LGLOBALID"="UDBIDENTDATA"."LGLOBALREF" AND
    "UDBMOVEMENT"."SCOMPUTER"="UDBIDENTDATA"."SCOMPUTER")
    7 - filter("UDBIDENTDATA"."SCARDTYPE"=2)
    Statistics
    543 recursive calls
    0 db block gets
    84383 consistent gets
    4485 physical reads
    0 redo size
    533990 bytes sent via SQL*Net to client
    3953 bytes received via SQL*Net from client
    331 SQL*Net roundtrips to/from client
    86 sorts (memory)
    0 sorts (disk)
    4947 rows processed
    SQL> @flush
    SQL> alter system flush buffer_cache;
    System altered.
    Elapsed: 00:00:00.12
    SQL> alter system flush shared_pool;
    System altered.
    Elapsed: 00:00:00.74
    SQL> SELECT UDBMovement.zIdDevice, UDBMovement.sDevice, UDBMovement.zIdLocal, UDBMovement.sComputer, UDBMovement.tActionTime, UDBMovement.sCardSubType, UDBMovement.sCardType, UDBMovement.cEpan, UDBMovement.cText, UDBMovement.lArtRef, UDBMovement.sArtClassRef, UDBMovement.lSequenz, UDBMovement.sTransMark, UDBMovement.lBlock, UDBMovement.sTransType, UDBMovement.lGlobalID, UDBMovement.sFacility, UDBMovement.sCardClass, UDBMovement.lSingleAmount, UDBMovement.sVAT, UDBMovement.lVATTot, UDBMovement.tTarifTimeStart, UDBMovement.tTarifTimeEnd, UDBMovement.cLicensePlate, UDBMovement.lMoneyValue, UDBMovement.lPointValue, UDBMovement.lTimeValue, UDBMovement.tProdTime
    2 FROM UDBMOVEMENT WHERE
    3 UDBMovement.sTransType > 0 AND UDBMovement.sDevice < 1000 AND UDBMovement.sDevice>= 0 AND UDBMovement.sCardType IN (2) AND (bitand(UDBMovement.sSaleFlag,1) = 0 AND bitand(UDBMovement.sSaleFlag,4) = 0) AND UDBMovement.sArtClassRef < 100
    4 AND UDBMovement.tActionTime >= TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.25
    5 AND UDBMovement.tActionTime < TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.5 ORDER BY tActionTime, lBlock, lSequenz;
    4947 rows selected.
    Elapsed: 00:00:26.46
    Execution Plan
    Plan hash value: 3648898312
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 2720 | 443K| 2812 (1)| 00:00:34 |
    | 1 | SORT ORDER BY | | 2720 | 443K| 2812 (1)| 00:00:34 |
    |* 2 | MAT_VIEW ACCESS BY INDEX ROWID| TMP_MS_UDB_MV | 2720 | 443K| 2811 (1)| 00:00:34 |
    |* 3 | INDEX RANGE SCAN | EEETMP_MS_ACTTIMEDEVICE | 2732 | | 89 (0)| 00:00:02 |
    Predicate Information (identified by operation id):
    2 - filter("UDBMOVEMENT"."STRANSTYPE">0 AND BITAND("UDBMOVEMENT"."SSALEFLAG",4)=0 AND
    BITAND("SSALEFLAG",1)=0 AND "UDBMOVEMENT"."SARTCLASSREF"<100)
    3 - access("UDBMOVEMENT"."TACTIONTIME">=TO_DATE(' 2011-05-05 06:00:00', 'syyyy-mm-dd
    hh24:mi:ss') AND "UDBMOVEMENT"."SDEVICE">=0 AND "UDBMOVEMENT"."SCARDTYPE"=2 AND
    "UDBMOVEMENT"."TACTIONTIME"<TO_DATE(' 2011-05-05 12:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
    "UDBMOVEMENT"."SDEVICE"<1000)
    filter("UDBMOVEMENT"."SCARDTYPE"=2 AND "UDBMOVEMENT"."SDEVICE"<1000 AND
    "UDBMOVEMENT"."SDEVICE">=0)
    Statistics
    449 recursive calls
    0 db block gets
    6090 consistent gets
    2837 physical reads
    0 redo size
    531987 bytes sent via SQL*Net to client
    3953 bytes received via SQL*Net from client
    331 SQL*Net roundtrips to/from client
    168 sorts (memory)
    0 sorts (disk)
    4947 rows processed
    SQL> spool off
    Edited by: MattSk on Feb 4, 2013 2:20 PM

    I have added some tkprof outputs on MV and master tables:
    SELECT tmp_ms_udb_mv.zIdDevice, tmp_ms_udb_mv.sDevice, tmp_ms_udb_mv.zIdLocal, tmp_ms_udb_mv.sComputer, tmp_ms_udb_mv.tActionTime, tmp_ms_udb_mv.sCardSubType, tmp_ms_udb_mv.sCardType, tmp_ms_udb_mv.cEpan, tmp_ms_udb_mv.cText, tmp_ms_udb_mv.lArtRef, tmp_ms_udb_mv.sArtClassRef, tmp_ms_udb_mv.lSequenz, tmp_ms_udb_mv.sTransMark, tmp_ms_udb_mv.lBlock, tmp_ms_udb_mv.sTransType, tmp_ms_udb_mv.lGlobalID, tmp_ms_udb_mv.sFacility, tmp_ms_udb_mv.sCardClass, tmp_ms_udb_mv.lSingleAmount, tmp_ms_udb_mv.sVAT, tmp_ms_udb_mv.lVATTot, tmp_ms_udb_mv.tTarifTimeStart, tmp_ms_udb_mv.tTarifTimeEnd, tmp_ms_udb_mv.cLicensePlate, tmp_ms_udb_mv.lMoneyValue, tmp_ms_udb_mv.lPointValue, tmp_ms_udb_mv.lTimeValue, tmp_ms_udb_mv.tProdTime
    FROM tmp_ms_udb_mv WHERE
    tmp_ms_udb_mv.sTransType > 0 AND tmp_ms_udb_mv.sDevice < 1000 AND tmp_ms_udb_mv.sDevice>= 0 AND tmp_ms_udb_mv.sCardType IN (1) AND (bitand(tmp_ms_udb_mv.sSaleFlag,1) = 0 AND bitand(tmp_ms_udb_mv.sSaleFlag,4) = 0) AND tmp_ms_udb_mv.sArtClassRef < 100
    AND tmp_ms_udb_mv.tActionTime >= TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.25
    AND tmp_ms_udb_mv.tActionTime < TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.5
    ORDER BY tActionTime, lBlock, lSequenz
    call count cpu elapsed disk query current rows
    Parse 1 0.04 0.10 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 596 0.17 27.07 2874 8894 0 8925
    total 598 0.21 27.18 2874 8894 0 8925
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 60
    Rows Row Source Operation
    8925 SORT ORDER BY (cr=8894 pr=2874 pw=0 time=27071773 us)
    8925 MAT_VIEW ACCESS BY INDEX ROWID TMP_MS_UDB_MV (cr=8894 pr=2874 pw=0 time=31458291 us)
    8925 INDEX RANGE SCAN EEETMP_MS_ACTTIMEDEVICE (cr=68 pr=68 pw=0 time=161347 us)(object id 149251)
    SELECT
    UDBMovement.zIdDevice, UDBMovement.sDevice, UDBMovement.zIdLocal, UDBMovement.sComputer, UDBMovement.tActionTime, UDBIdentData.sCardSubType, UDBIdentData.sCardType, UDBMovement.cEpan, UDBMovement.cText, UDBMovement.lArtRef, UDBMovement.sArtClassRef, UDBMovement.lSequenz, UDBMovement.sTransMark, UDBMovement.lBlock, UDBMovement.sTransType, UDBMovement.lGlobalID, UDBMovement.sFacility, UDBIdentData.sCardClass, UDBMovement.lSingleAmount, UDBMovement.sVAT, UDBMovement.lVATTot, UDBIdentData.tTarifTimeStart, UDBIdentData.tTarifTimeEnd, UDBIdentData.cLicensePlate, UDBIdentData.lMoneyValue, UDBIdentData.lPointValue, UDBIdentData.lTimeValue, UDBIdentData.tProdTime, UDBIdentData.tExpireDate
    FROM UDBMOVEMENT_orig UDBMovement, Udbidentdata_orig UDBIdentData
    WHERE
    UDBMovement.lGlobalId = UDBIdentData.lGlobalRef(+) AND UDBMovement.sComputer = UDBIdentData.sComputer(+)
    AND UDBMovement.sTransType > 0 AND UDBMovement.sDevice < 1000 AND UDBMovement.sDevice>= 0 AND UDBIdentData.sCardType IN (1) AND (bitand(UDBMovement.sSaleFlag,1) = 0 AND bitand(UDBMovement.sSaleFlag,4) = 0) AND UDBMovement.sArtClassRef < 100
    AND UDBMovement.tActionTime >= TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.25
    AND UDBMovement.tActionTime < TO_DATE('05/05/2011 00:00:00', 'dd/mm/yyyy hh24:mi:ss') + 0.5
    ORDER BY tActionTime, lBlock, lSequenz
    call count cpu elapsed disk query current rows
    Parse 1 0.03 0.06 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 596 0.76 16.94 3278 85529 0 8925
    total 598 0.79 17.01 3278 85529 0 8925
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 60
    Rows Row Source Operation
    8925 SORT ORDER BY (cr=85529 pr=3278 pw=0 time=16942799 us)
    8925 NESTED LOOPS (cr=85529 pr=3278 pw=0 time=15017857 us)
    22567 TABLE ACCESS BY INDEX ROWID UDBMOVEMENT_ORIG (cr=17826 pr=1659 pw=0 time=7273473 us)
    22570 INDEX RANGE SCAN IDX_UDBMOVARTICLE (cr=111 pr=111 pw=0 time=112351 us)(object id 143693)
    8925 TABLE ACCESS BY INDEX ROWID UDBIDENTDATA_ORIG (cr=67703 pr=1619 pw=0 time=8154915 us)
    22567 INDEX UNIQUE SCAN UDBIDENTDATA_PRIM (cr=45136 pr=841 pw=0 time=3731470 us)(object id 108324)

  • Performance issue accessing Materialized Views

    Hi All
I am extracting source data from a materialized view over a database link; the view has about 1,500,000 rows.
But the load is taking a long time (approx. 30-45 minutes).
Is there any way to load the data faster?
    regards
    Gourisankar

    Hi,
I take it a simple SELECT from the materialized view doesn't take this long. This could be down to a known issue when issuing an INSERT statement over a database link: even the DRIVING_SITE optimizer hint does not help, because the filtering of records does not happen on the far side of the link. All the data is moved across the link to the target database and only then filtered.
One way to get round this is to use a pipelined function to run your SELECT, and call the pipelined function via ODI.
You can still use the DRIVING_SITE optimizer hint inside the pipelined function when selecting your data.
    Cheers
    Bos
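The pipelined-function workaround Bos describes might be sketched as follows. This is only an illustration: the object names (src_t, src_tab, get_remote_rows, remote_mv@remote_db, local_stage) and the filter predicate are hypothetical, not from the original thread.

```sql
-- Hypothetical sketch: pipelined function wrapping a remote query so
-- filtering happens on the far side of the link before rows cross it.
CREATE OR REPLACE TYPE src_t AS OBJECT (id NUMBER, txt VARCHAR2(3999));
/
CREATE OR REPLACE TYPE src_tab AS TABLE OF src_t;
/
CREATE OR REPLACE FUNCTION get_remote_rows RETURN src_tab PIPELINED IS
BEGIN
  -- DRIVING_SITE pushes the query to the remote database, so only the
  -- filtered rows travel over the link.
  FOR r IN (SELECT /*+ DRIVING_SITE(m) */ m.id, m.txt
              FROM remote_mv@remote_db m
             WHERE m.txt IS NOT NULL) LOOP
    PIPE ROW (src_t(r.id, r.txt));
  END LOOP;
  RETURN;
END;
/
-- The local load then selects from the function instead of the link:
INSERT /*+ APPEND */ INTO local_stage
SELECT id, txt FROM TABLE(get_remote_rows);
```

The key point is that the INSERT on the local side never references the database link directly, so the optimizer cannot pull the unfiltered table across it.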

  • Need advise : Risks of using  materialized views

    Hi -
I need some advice on whether a materialized view can help in the following scenario.
Scenario: you have a large table with, say, 60 million rows. This is a demand-management application that accesses the data at various aggregate levels (it is not using any OLAP functionality). The worksheets used to display the data take hours to run.
To address this, I partitioned the base table by time and created some materialized views on top. This gave me a tremendous performance gain for the worksheets.
My question: will materialized views help when one user changes the data in a worksheet and another user queries those changes? Will the materialized view show the changes, or do I need to enforce a fast refresh? All the standard options (enable query rewrite, fast refresh) are enabled, and query_rewrite_integrity is set to TRUSTED.
Please advise on the pitfalls of managing such a large volume of data where some of it changes infrequently.

    Hi Arch,
>> Will materialized views help in situations where one user changes the data in the worksheet and another user queries the changes? Will the materialized view show the changes, or do I need to enforce a fast refresh?
That depends on your "stale tolerance":
    http://www.dba-oracle.com/t_materialized_view_fast_refresh_performance.htm
>> Pitfalls in managing such huge data where some of it changes infrequently?
If it does not change frequently, then it's just a matter of extra disk space for the materializations. I have more notes here:
    http://www.dba-oracle.com/t_finding_materialized_view_contents.htm
    Hope this helps. . .
    Don Burleson
    Oracle Press author
    Author of “Oracle Tuning: The Definitive Reference”
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm
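How much staleness query rewrite will tolerate is controlled by the query_rewrite_integrity parameter; a minimal sketch of the options around the TRUSTED setting mentioned above (the MV name and query here are hypothetical examples, not from the thread):

```sql
-- ENFORCED: rewrite only to provably fresh MVs (never returns stale data).
ALTER SESSION SET query_rewrite_integrity = ENFORCED;

-- STALE_TOLERATED: rewrite even to stale MVs, trading freshness for speed.
ALTER SESSION SET query_rewrite_integrity = STALE_TOLERATED;

-- If readers must see committed changes immediately, build the MV to
-- refresh on commit (requires MV logs and fast-refresh-compatible SQL):
CREATE MATERIALIZED VIEW sales_sum_mv
  REFRESH FAST ON COMMIT
  ENABLE QUERY REWRITE
AS
SELECT prod_id, SUM(amount) AS total_amt
FROM   sales
GROUP  BY prod_id;
```

With TRUSTED (as in the question), rewrite also honors declared constraints and dimensions that Oracle cannot verify, so correctness depends on those declarations being true.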

  • Performace issue on MATERIALIZED view

    Hi Gurus,
Need help understanding a performance issue with a materialized view. I have created a mat view on a big table of 76 GB; the mat view itself is 25 GB.
I refreshed the mat view before running the report. In OEM it is showing a full table scan with an estimated time of 2 hours, whereas a full table scan on the base table the mat view is built on takes 20 minutes. I am using fast refresh on demand.
We are using Oracle 10.2.0.4 on the Sun SPARC 64-bit Solaris 10 platform.
Could you please let me know why the mat view is performing so poorly?
    Thanks & Regards

    You have MLOG created on your master table, right?
OK, then check DBA_MVIEWS and look at LAST_REFRESH_TYPE; if everything is OK, it should be FAST.
If that checks out, the problem may be the nature of the master table. If there is a great amount of change in the master table, set up a periodic job that refreshes often (since 'often' is a fuzzy word: every 5, 10, 30 minutes...). Incremental refresh performs better when there are only a small number of changes in the MLOG table!
Also check your MLOG table: if it is huge (size in MB) but contains only a few records, try reclaiming the space by issuing "ALTER TABLE MLOG$_table_name MOVE;"
Hope something here is helpful...
    Best regards
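The checks and the periodic refresh suggested above can be sketched like this (the MV name BIG_MV and the 10-minute interval are hypothetical placeholders):

```sql
-- Confirm how the last refresh actually ran:
SELECT mview_name, last_refresh_type, last_refresh_date
FROM   dba_mviews
WHERE  mview_name = 'BIG_MV';

-- Schedule an incremental ('F') refresh every 10 minutes so the
-- MLOG$ table never accumulates a large backlog:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_BIG_MV',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''BIG_MV'', ''F''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=10',
    enabled         => TRUE);
END;
/
```

The right interval depends on the change rate of the master table; the shorter the interval, the smaller each incremental refresh.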

  • Fast Refresh in Materialized Views Partitioned

Hi all.. I have a little problem... let's go into it:
I have two tables:
A dimension table, named D1
A partitioned fact table, named F1
I created a materialized view log on each of the two tables.
OK.. Now, I created a partitioned materialized view, named MV_F1D1, with incremental refresh...
I can create the partitioned materialized view with fast refresh, no problem here...
The first time, Oracle performs a complete refresh, and everything is OK.. Then I run:
     BEGIN
          DBMS_SNAPSHOT.REFRESH('MV_F1D1','F');
     END;
And everything goes well...
But in my fact table I load the data via a temporary table, named F1_NP (a non-partitioned fact table),
holding the data for the current month. Every week I do this:
1) Load the data into F1_NP
2) If LAST_PARTITION_MM_YYYY does not exist in F1, add the partition to F1
3) ALTER TABLE F1 EXCHANGE PARTITION LAST_PARTITION_MM_YYYY WITH TABLE F1_NP
And this is the problem...
After that process, the MV log table of F1 is empty.
When I try to fast refresh MV_F1D1, I get an error:
     ORA-12097: changes in the master tables during refresh, try refresh
This also occurs when I merely truncate a partition in the fact table.
OK.. My only solution here is a complete refresh.. But I have only one new partition, and I expected that with PCT, Oracle would let me do a fast refresh.
I was looking for a solution on the web, and I ran utlxmv.sql (which creates mv_capabilities_table):
    1) Before add a partition in F1
    truncate table mv_capabilities_table;
    exec DBMS_MVIEW.EXPLAIN_MVIEW ( 'MV_F1D1' );
    select * from mv_capabilities_table;
    PCT Y
    REFRESH_COMPLETE Y
    REFRESH_FAST Y
    REWRITE Y
    PCT_TABLE Y F1
    PCT_TABLE N D1 2068 Relation is not partitioned.
    REFRESH_FAST_AFTER_INSERT Y
    REFRESH_FAST_AFTER_ONETAB_DML Y
    REFRESH_FAST_AFTER_ANY_DML Y
    REFRESH_FAST_PCT Y
    REWRITE_FULL_TEXT_MATCH Y
    REWRITE_PARTIAL_TEXT_MATCH Y
    REWRITE_GENERAL Y
    REWRITE_PCT Y
    PCT_TABLE_REWRITE Y F1
    PCT_TABLE_REWRITE N D1 2068 Relation is not partitioned.
    2) After truncate a partition in F1
    truncate table mv_capabilities_table;
    exec DBMS_MVIEW.EXPLAIN_MVIEW ( 'MV_F1D1');
    select * from mv_capabilities_table;
    PCT Y
    REFRESH_COMPLETE Y
    REFRESH_FAST Y
    REWRITE Y
    PCT_TABLE Y F1
    PCT_TABLE N D1 2068 Relation is not partitioned.
    REFRESH_FAST_AFTER_INSERT N F1 2077 Mv log is newer than last full refresh
    REFRESH_FAST_AFTER_INSERT N F1 2077 Mv log is newer than last full refresh
    REFRESH_FAST_AFTER_ONETAB_DML N 2146
    REFRESH_FAST_AFTER_ANY_DML N F1 2076
    REFRESH_FAST_AFTER_ANY_DML N 2161
    REFRESH_FAST_PCT Y
    REWRITE_FULL_TEXT_MATCH Y
    REWRITE_PARTIAL_TEXT_MATCH Y
    REWRITE_GENERAL Y
    REWRITE_PCT Y
    PCT_TABLE_REWRITE Y F1
    PCT_TABLE_REWRITE N D1 2068 Relation is not partitioned.
    BEGIN
              DBMS_SNAPSHOT.REFRESH('MV_F1D1','F');
    END;
    ORA-32313: REFRESH FAST of "MV_F1D1" unsupported after PMOPs
Any ideas? Can I fast refresh a partitioned MV (joining a dimension table and a partitioned fact table)
with PCT when I add data to the dimension table or add a partition to the partitioned fact table?

Look at the ATOMIC_REFRESH option: if you set it to FALSE you may see a performance gain, as the refresh will use direct-path inserts and TRUNCATE. The data will be unavailable while it is being refreshed, though.
    Cheers
    Si
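A non-atomic complete refresh, as Si suggests, might look like this sketch (the CONSIDER FRESH alternative is an extra suggestion not made in the thread, and is a trust assertion, so use it with care):

```sql
-- Complete refresh outside a single transaction: Oracle truncates the MV
-- and reloads it with direct-path inserts. Fast, but the MV is briefly empty.
BEGIN
  DBMS_MVIEW.REFRESH(
    list           => 'MV_F1D1',
    method         => 'C',        -- complete refresh
    atomic_refresh => FALSE);     -- TRUNCATE + direct-path INSERT
END;
/

-- Alternatively, after a partition exchange that did not logically change
-- the data, you can declare the MV still valid without reloading it:
ALTER MATERIALIZED VIEW MV_F1D1 CONSIDER FRESH;
```

CONSIDER FRESH does not verify anything; it simply tells Oracle to treat the MV as current, which is only safe when the PMOP genuinely left the MV's contents consistent with the master tables.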
