Smartview HSGET refresh performance

Hi
I received this email from one of our clients :
Can you advise what we need to do to improve the speed of Smart View HsGetValue refreshes? I am guessing the simple answer would be to reduce the size of the file, or to have multiple files with the data spread across them. However, I am looking for more of a long-term, holistic fix for this problem.
We see Smart View HsGetValue formulas as a very powerful modelling tool for our analysis and budget submissions; however, there are some limitations, namely the speed of refreshes, which are holding us back from using the tool to the best of its ability.
Interested in any ideas/solutions.
Cheers

HsGetValue can be slow depending on the number of cells using the function, as it has to make a call and return a value for each cell.
The other option is to look at using the VBA functions available, as you should get better performance.
Cheers
John
http://john-goodwin.blogspot.com/

Similar Messages

  • Does smartview button "Refresh All" perform HSSETVALUE-Functions ?

    Hello together,
I need to know whether the Smart View button "Refresh All" executes HsSetValue formulas, or whether these formulas are run exclusively by Submit Data.
    Many thanks in advance
    Martin

I think it depends on the version. In older versions, Refresh submitted data as well; in current versions it does not. To be sure, I would test it.

  • MV Refresh Performance Improvements in 11g

    Hi there,
the 11g New Features Guide says, in section "1.4.1.8 Refresh Performance Improvements":
    "Refresh operations on materialized views are now faster with the following improvements:
    1. Refresh statement combinations (merge and delete)
    2. Removal of unnecessary refresh hint
    3. Index creation for UNION ALL MV
4. PCT refresh possible for UNION ALL MV"
    While I understand (3.) and (4.) I don't quite understand (1.) and (2.). Has there been a change in the internal implementation of the refresh (from a single MERGE statement)? If yes, then which? Is there a Note or something in the knowledge base, about these enhancements in 11g? I couldn't find any.
These considerations are necessary for our decision on whether or not to migrate to 11g...
    Thanks in advance.

I am not quite sure what you mean. Do you perhaps mean that the MV logs now work correctly when you perform MERGE statements with a DELETE clause on the detail tables of the MV?
And where is the performance improvement? What is the refresh hint?
Though I am using MVs and MV logs at the moment, our app performs deletes and inserts in the background (no merges). The MV-log-based fast refresh scales very badly, meaning that performance drops very quickly as the changed data set grows.

  • Snapshot fast refresh performance

I have a master table with a snapshot log on the primary database (Oracle 8.1.6 on AIX). There are 20,000 rows in the snapshot log, which take about 6 hours to fast refresh to a snapshot database on the same setup.
I am looking for some ways to tune the snapshot to decrease the refresh time.

The refresh time depends primarily on the speed of the remote connection. Also, it is not so much the number of rows as the size of the rows. As far as tuning the refresh goes, check whether the high-water mark (HWM) of the master's snapshot log has grown very large: the refresh performs a full table scan of the log when it determines which rows it needs to refresh.
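That HWM check can be scripted. A minimal sketch, assuming a hypothetical snapshot log named MLOG$_MASTER (substitute the log of your own master table): compare the log segment's size against its row count; a near-empty log occupying thousands of blocks means the HWM has grown and every fast refresh scans all of them.

```sql
-- Hypothetical name: MLOG$_MASTER is the snapshot log of a table MASTER.
-- A near-empty log spread over many blocks means the HWM has grown and
-- each fast refresh full-scans all of those blocks.
SELECT segment_name, blocks, bytes/1024/1024 AS mb
FROM   dba_segments
WHERE  segment_name = 'MLOG$_MASTER';

SELECT COUNT(*) FROM mlog$_master;

-- Once every snapshot has refreshed (so no pending changes remain),
-- truncating the log resets the HWM:
TRUNCATE TABLE mlog$_master;
```

Note that truncating a log that still holds changes needed by a snapshot will force that snapshot's next refresh to be a complete one.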

  • Problem in refreshing Performance Manager Metric

    Hi,
    We are Using SAP BusinessObjects XI 3.1 SP3.
On this environment, we have configured the SAP BO Performance Manager / Dashboard Manager application (not EPM).
While refreshing the metric, the following error occurs:
Error: The probe engine cannot query the repository. (EPM 03008) <Metric Name>: ORA-00936: missing expression
    Any clue on this?
BTW, I have tried to debug the problem and verified that the query generated for the metric is syntactically correct and runs without any problem (when @prompt is replaced by an actual date).
    Some more details are as below;
    Created a Calendar (Monthly) and also Created a Dimension.
    The Connection is created which has privileges to access both PM Repository Schema and the Report Data Schema.
    Thanks,
    With Regards,
    Sachin Dalal

A very simple strategy is to call the removeAllItems() method on the second combo box and then insert the contents. This is because the validate() method is not called repeatedly, so the contents are not updated immediately.

  • Cube Refresh Performance Issue

We are facing a strange performance issue related to cube refresh. A cube that used to take 1 hour to refresh is now taking around 3.5 to 4 hours without any change in the environment. Also, the data it processes is almost the same as before. Out of all the cubes in the workspace, only this cube has suffered the performance degradation over time.
    Details of the cube:
This cube has 7 dimensions and 11 measures (a mix of SUM and AVG as the aggregation operators). No compression. The cube is partitioned (48 partitions). The main source of the data is a materialized view that is partitioned in the same way as the cube.
    Data Volume: 2480261 records in the source to be processed daily (almost evenly distributed across the partition)
    Cube is refreshed with the below script
    DBMS_CUBE.BUILD(<<cube_name>>,'SS',true,5,false,true,false);
Has anyone faced a similar issue? Can anyone advise on what might be causing the performance degradation?
    Environment - Oracle Database 11g Enterprise Edition Release 11.2.0.3.0
    AWM - awm11.2.0.2.0A

    Take a look at DBMS_CUBE.BUILD documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_cube.htm#ARPLS218 and DBMS_CUBE_LOG documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_cube_log.htm#ARPLS72789
    You can also search this forum for more questions/examples about DBMS_CUBE.BUILD
    David Greenfield has covered many Cube loading topics in the past on this forum.
    Mapping to Relational tables
    Re: Enabling materialized view for fast refresh method
    DBMS CUBE BUILD
    CUBE_DFLT_PARTITION_LEVEL in 11g?
    Reclaiming space in OLAP 11.1.0.7
    Re: During a cube build how do I use an IN list for dimension hierarchy?
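To narrow down where the extra hours go, it may also help to log the build and compare per-step timings between a fast and a slow run. A sketch based on the DBMS_CUBE_LOG documentation linked above (the procedure and column names are quoted from memory; verify them against your release):

```sql
-- Create the build log table (CUBE_BUILD_LOG) once, then run the usual
-- DBMS_CUBE.BUILD call; each build step is recorded with a timestamp.
EXEC DBMS_CUBE_LOG.TABLE_CREATE(DBMS_CUBE_LOG.TYPE_BUILD);

-- ... DBMS_CUBE.BUILD('<<cube_name>>','SS',true,5,false,true,false); ...

-- Compare step durations across builds to find the phase that regressed.
SELECT build_id, build_object, command, status, time
FROM   cube_build_log
ORDER  BY build_id, time;
```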

  • SmartView auto refresh OBIEE report

I cannot find a way to automate the process between Smart View and OBIEE:
1. Smart View connects to OBIEE.
2. After connecting, auto-refresh the report.
    Please help.

    Hi,
    About this:
    >
    Suppose from emp table i am querying a report as
    select emp_num,emp_name from emp;
    i want to insert this emp_name and emp_num to the new table.
    >
What exactly do you want to do?
1) After every refresh (i.e. every 5 minutes), you want to create a new table and insert the values of emp_name and emp_num.
2) You want to store the values of emp_name and emp_num after every refresh in a new table (that is already created).
I didn't get your purpose behind this.
    Regards,
    Kiran

  • Cliente refresh - performance

    Hi All,
When I have to refresh a client (and by refresh I mean keep the same client number but with new data), I usually delete it first (SCC5) and then copy it (SCCL).
Is there any performance difference between separating the steps like this and just doing a client copy without deleting the target first?
    Regards

I do not think this relates to any performance problem.
In fact, this is the first time I have seen someone delete the client before a client copy and then recreate it with SCCL.
We always do SCCL not by deleting the target and recreating it, but by just overwriting. Why would you need the deletion? Why do you do it?

  • Commit performance on table with Fast Refresh MV

    Hi Everyone,
    Trying to wrap my head around fast refresh performance and why I'm seeing (what I would consider) high disk/query numbers associated with updating the MV_LOG in a TKPROF.
    The setup.
    (Oracle 10.2.0.4.0)
    Base table:
    SQL> desc action;
    Name                                      Null?    Type
    PK_ACTION_ID                              NOT NULL NUMBER(10)
    CATEGORY                                           VARCHAR2(20)
    INT_DESCRIPTION                                    VARCHAR2(4000)
    EXT_DESCRIPTION                                    VARCHAR2(4000)
    ACTION_TITLE                              NOT NULL VARCHAR2(400)
    CALL_DURATION                                      VARCHAR2(6)
    DATE_OPENED                               NOT NULL DATE
    CONTRACT                                           VARCHAR2(100)
    SOFTWARE_SUMMARY                                   VARCHAR2(2000)
    MACHINE_NAME                                       VARCHAR2(25)
    BILLING_STATUS                                     VARCHAR2(15)
    ACTION_NUMBER                                      NUMBER(3)
    THIRD_PARTY_NAME                                   VARCHAR2(25)
    MAILED_TO                                          VARCHAR2(400)
    FK_CONTACT_ID                                      NUMBER(10)
    FK_EMPLOYEE_ID                            NOT NULL NUMBER(10)
    FK_ISSUE_ID                               NOT NULL NUMBER(10)
    STATUS                                             VARCHAR2(80)
    PRIORITY                                           NUMBER(1)
    EMAILED_CUSTOMER                                   TIMESTAMP(6) WITH LOCAL TIME
                                                         ZONE
    SQL> select count(*) from action;
      COUNT(*)
   1388780
The MV was created as follows:
    create materialized view log on action with sequence, rowid
    (pk_action_id, fk_issue_id, date_opened)
    including new values;
    -- Create materialized view
    create materialized view issue_open_mv
    build immediate
    refresh fast on commit
    enable query rewrite as
    select  fk_issue_id issue_id,
         count(*) cnt,
         min(date_opened) issue_open,
         max(date_opened) last_action_date,
         min(pk_action_id) first_action_id,
         max(pk_action_id) last_action_id,
         count(pk_action_id) num_actions
    from    action
    group by fk_issue_id;
    exec dbms_stats.gather_table_stats('tg','issue_open_mv')
    SQL> select table_name, last_analyzed from dba_tables where table_name = 'ISSUE_OPEN_MV';
    TABLE_NAME                     LAST_ANAL
    ISSUE_OPEN_MV                  15-NOV-10
    *note: table was created a couple of days ago *
    SQL> exec dbms_mview.explain_mview('TG.ISSUE_OPEN_MV');
    CAPABILITY_NAME                P REL_TEXT MSGTXT
    PCT                            N
    REFRESH_COMPLETE               Y
    REFRESH_FAST                   Y
    REWRITE                        Y
    PCT_TABLE                      N ACTION   relation is not a partitioned table
    REFRESH_FAST_AFTER_INSERT      Y
    REFRESH_FAST_AFTER_ANY_DML     Y
    REFRESH_FAST_PCT               N          PCT is not possible on any of the detail tables in the mater
    REWRITE_FULL_TEXT_MATCH        Y
    REWRITE_PARTIAL_TEXT_MATCH     Y
    REWRITE_GENERAL                Y
    REWRITE_PCT                    N          general rewrite is not possible or PCT is not possible on an
    PCT_TABLE_REWRITE              N ACTION   relation is not a partitioned table
13 rows selected.
Fast refresh works fine, and the log is kept quite small.
    SQL> select count(*) from mlog$_action;
      COUNT(*)
         0
When I update one row in the base table:
    var in_action_id number;
    exec :in_action_id := 398385;
    UPDATE action
    SET emailed_customer = SYSTIMESTAMP
    WHERE pk_action_id = :in_action_id
    AND DECODE(emailed_customer, NULL, 0, 1) = 0
commit;
I see the following happen via TKPROF...
    INSERT /*+ IDX(0) */ INTO "TG"."MLOG$_ACTION" (dmltype$$,old_new$$,snaptime$$,
      change_vector$$,sequence$$,m_row$$,"PK_ACTION_ID","DATE_OPENED",
      "FK_ISSUE_ID")
    VALUES
    (:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c,
      sys.cdc_rsid_seq$.nextval,:m,:1,:2,:3)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          0          0           0
    Execute      2      0.00       0.03          4          4          4           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      0.00       0.04          4          4          4           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          2  SEQUENCE  CDC_RSID_SEQ$ (cr=0 pr=0 pw=0 time=28 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         4        0.01          0.01
    update "TG"."MLOG$_ACTION" set snaptime$$ = :1
    where
    snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          0          0           0
    Execute      1      0.94       5.36      55996      56012          1           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.94       5.38      55996      56012          1           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          0  UPDATE  MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=5364554 us)
          2   TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=46756 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       3529        0.02          4.91
    select dmltype$$, max(snaptime$$)
    from
    "TG"."MLOG$_ACTION"  where snaptime$$ <= :1  group by dmltype$$
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.70       0.68      55996      56012          0           1
    total        4      0.70       0.68      55996      56012          0           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT GROUP BY (cr=56012 pr=55996 pw=0 time=685671 us)
          2   TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=1851 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       3529        0.00          0.38
    delete from "TG"."MLOG$_ACTION"
    where
    snaptime$$ <= :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.71       0.70      55946      56012          3           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.71       0.70      55946      56012          3           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          0  DELETE  MLOG$_ACTION (cr=56012 pr=55946 pw=0 time=702813 us)
          2   TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55946 pw=0 time=1814 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       3530        0.00          0.39
      db file sequential read                        33        0.00          0.00
********************************************************************************
Could someone explain why the SELECT/UPDATE/DELETE statements on MLOG$_ACTION are so "expensive" when there should only be two rows (the old and new values) in that log after an update? Is there anything I could do to improve the performance of the update?
    Let me know if you require more info...would be glad to provide it.

    Brilliant. Thanks.
    I owe you a beverage.
    SQL> set autotrace on
    SQL> select count(*) from MLOG$_ACTION;
      COUNT(*)
             0
    Execution Plan
    Plan hash value: 2727134882
    | Id  | Operation          | Name         | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |              |     1 | 12309   (1)| 00:02:28 |
    |   1 |  SORT AGGREGATE    |              |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| MLOG$_ACTION |     1 | 12309   (1)| 00:02:28 |
    Note
       - dynamic sampling used for this statement
    Statistics
              4  recursive calls
              0  db block gets
          56092  consistent gets
          56022  physical reads
              0  redo size
            410  bytes sent via SQL*Net to client
            400  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> truncate table MLOG$_ACTION;
    Table truncated.
    SQL> select count(*) from MLOG$_ACTION;
      COUNT(*)
             0
    Execution Plan
    Plan hash value: 2727134882
    | Id  | Operation          | Name         | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |              |     1 |     2   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE    |              |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| MLOG$_ACTION |     1 |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement
    Statistics
              1  recursive calls
              1  db block gets
              6  consistent gets
              0  physical reads
             96  redo size
            410  bytes sent via SQL*Net to client
            400  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          1  rows processed
Just for fun, here is a comparison of the TKPROF output.
    Before:
    update "TG"."MLOG$_ACTION" set snaptime$$ = :1
    where
    snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          0          0           0
    Execute      1      0.94       5.36      55996      56012          1           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.94       5.38      55996      56012          1           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          0  UPDATE  MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=5364554 us)
          2   TABLE ACCESS FULL MLOG$_ACTION (cr=56012 pr=55996 pw=0 time=46756 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       3529        0.02          4.91
********************************************************************************
After:
    update "TG"."MLOG$_ACTION" set snaptime$$ = :1
    where
    snaptime$$ > to_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          7          1           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0          7          1           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          0  UPDATE  MLOG$_ACTION (cr=7 pr=0 pw=0 time=79 us)
          2   TABLE ACCESS FULL MLOG$_ACTION (cr=7 pr=0 pw=0 time=28 us)
    ********************************************************************************

  • Smartview Refresh Changes Dimensions To Old Version

I am having an issue refreshing a Smart View file that our department shares on a public network drive. When our users save the file to their own computers, open it in Smart View, and refresh, the member options/dimensions revert to an older saved version. We had all become used to this when only one or two dimensions would change, but now that we are changing from Forecast to Plan, 2013 to 2014, etc., it is creating a bigger issue.
Are there any settings or known bugs that would explain this? We want the members selected in the POV to stay when the file opens, and not revert to an older version.

    Log into HSS (Hyperion Shared Services) console (http://server:port/interop/index.jsp)
    Expand the Application group->Foundation->Deployment Metadata->Shared Services Registry->Essbase-> APS LWA
    Check the host/server name and port in the properties files present beneath them if they are pointing to the right server and port.
    HTH -
    Jasmine

  • Refresh a materialized view in parallel

    Hi,
To improve the refresh performance of a materialized view, I set it up to be refreshed in parallel. The view refreshes successfully; however, I did not see it being refreshed in parallel in the session browser. Can someone let me know if I missed any steps?
    1) In DB A (running 8 CPUs), set up the base table to be parallelized, the base table is called table1
    ALTER TABLE A.table1 PARALLEL ( DEGREE Default INSTANCES Default );
    2) In Database A, set up the materialized view log
    CREATE MATERIALIZED VIEW LOG
    ON table1 WITH primary key
    INCLUDING NEW VALUES;
3) In Database B (on the same server as Database A), there is an existing table called table1, prebuilt with millions of records. Due to the size of table1, I have to use the PREBUILT option:
    Drop MATERIALIZED VIEW table1;
    CREATE MATERIALIZED VIEW table1 ON PREBUILT TABLE
    REFRESH FAST with primary key
    AS
    select /*+ PARALLEL(table1, 8) */ *
    from table1@A;
    4) in Database B, I executed this stored procedure -
    EXECUTE DBMS_MVIEW.REFRESH(LIST=>'table1',PARALLELISM=>8, METHOD=>'F');
    Thanks in advance!
    Liz

I checked the execution plan when I executed the refresh stored procedure again. Does the plan indicate whether parallel execution is working or not?
    SELECT STATEMENT REMOTE ALL_ROWSCost: 32,174 Bytes: 775,392 Cardinality: 1,182                                                   
    13 PX COORDINATOR                                              
    12 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10001 :Q1001                                        
         11 NESTED LOOPS PARALLEL_COMBINED_WITH_PARENT :Q1001                                   
         9 NESTED LOOPS PARALLEL_COMBINED_WITH_PARENT :Q1001Cost: 32,174 Bytes: 775,392 Cardinality: 1,182                               
              6 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q1001                         
                   5 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1001                    
                        4 PX SEND ROUND-ROBIN PARALLEL_FROM_SERIAL SYS.:TQ10000 A               
                             3 VIEW APP_USR. Cost: 12,986 Bytes: 6,070,196 Cardinality: 275,918           
                                  2 HASH UNIQUE Cost: 12,986 Bytes: 9,105,294 Cardinality: 275,918      
                                       1 TABLE ACCESS FULL TABLE A.MLOG$_TABLE1 A Cost: 10,518 Bytes: 9,105,294 Cardinality: 275,918
         8 PARTITION RANGE ITERATOR PARALLEL_COMBINED_WITH_PARENT :Q1001Cost: 0 Cardinality: 1                          
         7 INDEX RANGE SCAN INDEX (UNIQUE) PARALLEL_COMBINED_WITH_PARENT A.PK_TABLE1 :Q1001Cost: 0 Cardinality: 1                     
    10 TABLE ACCESS BY LOCAL INDEX ROWID TABLE PARALLEL_COMBINED_WITH_PARENT A.TABLE1 :Q1001Cost: 0 Bytes: 634 Cardinality: 1
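The PX COORDINATOR / PX SEND steps in the plan suggest the query side does run in parallel. Another way to confirm, independent of the plan, is to watch the standard parallel-execution views from a second session while the refresh runs; a sketch (no rows for your refresh session would mean it fell back to serial):

```sql
-- Run from another session while DBMS_MVIEW.REFRESH is executing.
-- Each row is a PX slave; QCSID identifies the coordinator (the refresh
-- session). DEGREE vs REQ_DEGREE shows whether the requested DOP was
-- downgraded at run time.
SELECT qcsid, sid, server#, degree, req_degree
FROM   v$px_session
ORDER  BY qcsid, server#;
```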

  • Refresh in alv report

    hi masters,
I'm working on an ALV report where I have to change the data. After changing the data I have to refresh the grid; for that I'm using the method below:
wa_stbl-row = 'X'.
wa_stbl-col = 'X'.
CALL METHOD grid_obj->refresh_table_display
  EXPORTING
    is_stable      = wa_stbl
    i_soft_refresh = 'X'
  EXCEPTIONS
    finished = 1
    OTHERS   = 2.
But after executing this method I get a runtime error: "Access via 'NULL' object reference not possible. You attempted to use a 'NULL' object reference (points to 'nothing') to access a component (variable: GRID_OBJ)." Can you please help me find where I'm going wrong, and explain in detail how to implement refresh in an ALV report?
    thanks in advance.
    regards,
    vicky

Hi,
When I click the refresh button it goes to the PF-STATUS, and then, as shown in the code below, it goes to refresh_report. But this 'form refresh_report' won't display the data; it goes into an infinite loop within that FORM/ENDFORM. Can you please give me a solution to this problem?
    set pf-status 'STANDARD_FULLSCREEN'.
      case sy-ucomm.
        when '&REFRESH'.
    perform refresh_report.
    endcase.
    form refresh_report .
      perform build_fieldcatlog.
      perform event_call.
      perform populate_event.
      perform data_retrieval.
      perform build_listheader using it_listheader.
      perform display_alv_report_fm.
    endform.                    " REFRESH_REPORT

  • Issue in Complete Refresh of a Materialized View

    Hello,
    We have an MV in the Datawarehouse that does a FAST REFRESH daily. Every Saturday, a COMPLETE REFRESH is done as part of the normal Database Activities. The Database is Oracle 9i. The MV contains a Join between a Dimension and Fact Table of a Datawarehouse and does a FAST REFRESH using the MV logs.
    However, last Sat, the COMPLETE REFRESH has failed. The DBA tried to run the script twice again but it failed on both occasions with an UNDO TABLESPACE error (ORA-30036). The DBA tried to extend the Tablespace; it did not help either.
    - Could this be linked to Tablespace allocations for the MV? Are there any specific steps that can be followed to resolve this?
    - Can the MV be dropped and re-created?
    Would appreciate a response on the same.
    Many Thanks,
    Ketan

    Hi Ketan,
I guess this is probably because an MV complete refresh performs a DELETE of the MV rows and then an INSERT. If the MV is large, it may fail on undo allocation.
Please take a look at Re: materialized view refresh time!  Plz Help me! There you can see the explanation, the Metalink note, and the solution (how to perform a truncate instead of a delete).
    HTH
    Liron Amitzi
    Senior DBA consultant
    [www.dbsnaps.com]
    [www.orbiumsoftware.com]
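For reference, the truncate-based approach the linked thread describes boils down to a non-atomic complete refresh. A sketch, with a hypothetical MV name (substitute your own); the trade-off is that the MV is briefly empty and the refresh is no longer done in a single transaction:

```sql
-- With atomic_refresh => FALSE, the complete refresh can TRUNCATE the MV
-- and reload it with direct-path inserts instead of DELETE + INSERT,
-- avoiding the per-row undo that triggered the ORA-30036.
BEGIN
  DBMS_MVIEW.REFRESH(
    list           => 'MY_MV',   -- hypothetical MV name
    method         => 'C',
    atomic_refresh => FALSE);
END;
/
```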

  • Sun Ray @ home  - VPN bad performance

    Hi All
    we've setup a test configuration with a Sun Ray Server (dedicated network) a CISCO PIX 501 and one DTU SR2.
    The screen refresh performance on the DTU is very bad. Other DTUs, connected directly to the SRSS are fine.
We've played around with the bandwidth limit on the SR2 (GUI firmware):
no limit: DTU unusable
3000: DTU slow
1500: DTU even slower
700: unusable
The throughput of the PIX 501 should be around 3 Mbps, and the PIX does nothing other than serve the one DTU. All components (SR server, PIX, DTU) are in the same room and connected directly to each other.
    Is the small PIX the reason for the bad performance ?
    Should it work smoothly with only one DTU?
    Which Hardware (vpn box) is recommended to serve at least 10 SR2 DTUs ?
    Thanks a lot.
    CU
    Carsten

Your problem could be fragmentation. Have you lowered the default MTU (1500)? An MTU of 1356 works with my ASA 5505. Here is a nice article on the subject: http://blogs.sun.com/ThinkThin/entry/the_importance_of_mtu .

  • Low performance issue with EPM Add-in SP16 using Office Excel 2010

    We are having low performance issues using the EPM Excel Add-in SP16 with Excel 2010 and Windows 7
    The same reports using Excel 2007 (Windows 7 or XP) run OK
    Anyone with the same issue?
    Thanks

    Solved. It's a known issue. The solution, as provided by SAP Support is:
    Dear Customer,
This problem may be related to a known issue that occurs in the EPM Add-in with Excel 2010 when the zoom is NOT 100%.
To verify this, could you please try setting the zoom to 100% for the EPM report in question, and observe whether refresh performance recovers?
    Quite annoying, isn't it?
