Slow cube request

Hi there again,
well I eventually managed to get my OLAP cube built, and I can now play with it in Answers, so things are moving forward. However, I now have a performance problem when building views in Answers: a view typically takes about one minute to compute, which seems like a lot to me given that the fact table the cube is built on has only 200k rows.
Here, for instance, is the text of a SQL query submitted by Answers to the Oracle 11g server; it returns ten rows in about one minute:
=================================================
SELECT t1936.sh_long_description AS c5, t1936.cd_long_description AS c6,
t1936.bl_long_description AS c7
FROM exitcs_view t1992,
dttm_view t1968,
bnumber_view t1936,
anumber_view t1904,
traffic_view t2001
WHERE ( t1936.dim_key = t2001.bnumber
AND t1904.dim_key = t2001.anumber
AND t1968.dim_key = t2001.dttm
AND t1904.level_name = N'SH'
AND t1936.cd_long_description = N'212'
AND t1936.level_name = N'BL'
AND t1936.sh_long_description = N'SHORT'
AND t1968.level_name = N'ALLDT'
AND t1992.dim_key = t2001.exitcs
AND t1992.level_name = N'ALLCS' )
=================================================
and the resulting autotrace info:
=================================================
| Id  | Operation                      | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT               |         |     1 |   100 |    29   (0)| 00:00:01 |
|   1 |  JOINED CUBE SCAN PARTIAL OUTER|         |       |       |            |          |
|   2 |   CUBE ACCESS                  | TRAFFIC |       |       |            |          |
|   3 |   CUBE ACCESS                  | ANUMBER |       |       |            |          |
|   4 |   CUBE ACCESS                  | BNUMBER |       |       |            |          |
|   5 |   CUBE ACCESS                  | DTTM    |       |       |            |          |
|*  6 |   CUBE ACCESS                  | EXITCS  |     1 |   100 |    29   (0)| 00:00:01 |
Predicate Information (identified by operation id):
6 - filter(SYS_OP_C2C(SYS_OP_ATG(VALUE(KOKBF$),31,32,2))=U'SH' AND
SYS_OP_C2C(SYS_OP_ATG(VALUE(KOKBF$),24,25,2))=U'212' AND
SYS_OP_C2C(SYS_OP_ATG(VALUE(KOKBF$),15,16,2))=U'BL' AND
SYS_OP_C2C(SYS_OP_ATG(VALUE(KOKBF$),23,24,2))=U'SHORT' AND
SYS_OP_C2C(SYS_OP_ATG(VALUE(KOKBF$),55,56,2))=U'ALLDT' AND
SYS_OP_C2C(SYS_OP_ATG(VALUE(KOKBF$),5,6,2))=U'ALLCS')
Statistics
8176 recursive calls
1580 db block gets
5195 consistent gets
856 physical reads
258396 redo size
542 bytes sent via SQL*Net to client
360 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
79 sorts (memory)
0 sorts (disk)
10 rows processed
=================================================
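For reference, the statistics above come straight from SQL*Plus autotrace, which I enabled before running the query:
SQL> set autotrace on statistics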
and trace:
=================================================
call     count       cpu    elapsed       disk      query    current       rows
Parse        1      0.08       0.11          3        766        298          0
Execute      1      0.00       0.00          2         15          0          0
Fetch        2     61.84      61.96          1        795          0         10
total        4     61.93      62.08          6       1576        298         10
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 90
Rows     Row Source Operation
  10     JOINED CUBE SCAN (cr=965 pr=3 pw=0 time=0 us cost=29 size=100 card=1)
=================================================
Can somebody spot a problem here? My understanding is that this kind of request against the dimensions of a cube should be almost instantaneous, yet it feels like the server is doing a "full table scan" of some sort.
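In case it is useful, the tkprof figures above come from an ordinary extended SQL trace; I captured it roughly like this in the session running the statement (the tracefile identifier is just an arbitrary name I picked):
ALTER SESSION SET tracefile_identifier = 'cube_test';
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
-- run the slow statement here
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
then formatted the resulting trace file on the server with tkprof (tkprof <tracefile>.trc cube_test.txt sys=no).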
Thanks again,
Christian

Hi there,
here is a trace of a long query (more than one minute).
By the way, the Time column in the execution plan looks wrong; is that normal?
SQL> exec dbms_aw.execute('dotf ''MY_DIR/limits.log'' ');
PL/SQL procedure successfully completed.
SQL> select T1992.CS_LONG_DESCRIPTION as c1,
  2       T1936.NUMTYP as c2,
  3       sum(T2001.CALLS) as c3
  4  from
  5       EXITCS_VIEW T1992,
  6       DTTM_VIEW T1968,
  7       BNUMBER_VIEW T1936,
  8       ANUMBER_VIEW T1904,
  9       TRAFFIC_VIEW T2001
10  where  ( T1936.DIM_KEY = T2001.BNUMBER and T1904.DIM_KEY = T2001.ANUMBER
and T1968.DIM_KEY = T2001.DTTM and T1904.LEVEL_NAME = N'ALLNUM' and
T1936.LEVEL_NAME = N'BL' and T1968.LEVEL_NAME = N'ALLDT' and
T1992.ALLCS_LONG_DESCRIPTION = N'ALL' and T1992.DIM_KEY = T2001.EXITCS and
T1992.LEVEL_NAME = N'CS' )
11  group by T1936.NUMTYP, T1992.CS_LONG_DESCRIPTION
12  order by c1, c2;
Cause01                                                      EMERGENCY                     1                                       
Cause01                                                      FIXED                       801                                       
..........  cut some more result lines  ..........
CauseTC3                                                     MOBILE                     1503                                       
CauseTC6                                                     MOBILE                     2634                                       
63 rows selected.
Execution Plan
Plan hash value: 335363548                                                                                                         
| Id  | Operation                       | Name    | Rows  | Bytes | Cost (%CPU)| Time     |                                        
|   0 | SELECT STATEMENT                |         |     1 |    29 | 37908 (100)| 00:07:35 |                                        
|   1 |  SORT GROUP BY                  |         |     1 |    29 | 37908 (100)| 00:07:35 |                                        
|   2 |   JOINED CUBE SCAN PARTIAL OUTER|         |       |       |            |          |                                        
|   3 |    CUBE ACCESS                  | TRAFFIC |       |       |            |          |                                        
|   4 |    CUBE ACCESS                  | ANUMBER |       |       |            |          |                                        
|   5 |    CUBE ACCESS                  | BNUMBER |       |       |            |          |                                        
|   6 |    CUBE ACCESS                  | DTTM    |       |       |            |          |                                        
|*  7 |    CUBE ACCESS                  | EXITCS  |     1 |    29 | 37907 (100)| 00:07:35 |                                        
Predicate Information (identified by operation id):                                                                                
   7 - filter(SYS_OP_C2C(SYS_OP_ATG(VALUE(KOKBF$),16,17,2))=U'ALLNUM' AND                                                          
              SYS_OP_C2C(SYS_OP_ATG(VALUE(KOKBF$),33,34,2))=U'BL' AND                                                              
              SYS_OP_C2C(SYS_OP_ATG(VALUE(KOKBF$),50,51,2))=U'ALLDT' AND                                                           
              SYS_OP_C2C(SYS_OP_ATG(VALUE(KOKBF$),8,9,2))=U'ALL' AND                                                               
              SYS_OP_C2C(SYS_OP_ATG(VALUE(KOKBF$),5,6,2))=U'CS')                                                                   
Statistics
       2119  recursive calls                                                                                                       
       1289  db block gets                                                                                                         
       6289  consistent gets                                                                                                       
         12  physical reads                                                                                                        
     286928  redo size                                                                                                             
       1966  bytes sent via SQL*Net to client                                                                                      
        404  bytes received via SQL*Net from client                                                                                
          6  SQL*Net roundtrips to/from client                                                                                     
         20  sorts (memory)                                                                                                        
          0  sorts (disk)                                                                                                          
         63  rows processed                                                                                                        
SQL> exec dbms_aw.execute('dotf eof');
PL/SQL procedure successfully completed.
SQL> quit

The limits.log file doesn't seem to provide a lot of information:
11/28/08 12:29:44.027 ->AW DETACH MOC
11/28/08 12:29:44.041 ->push OLAP.MOC!EXITCS, OLAP.MOC!ANUMBER, OLAP.MOC!BNUMBER, OLAP.MOC!DTTM
11/28/08 12:29:44.045 ->pop OLAP.MOC!EXITCS, OLAP.MOC!ANUMBER, OLAP.MOC!BNUMBER, OLAP.MOC!DTTM
11/28/08 12:31:07.520 ->AW DETACH MOC
11/28/08 12:31:07.743 ->AW DETACH MOC
11/28/08 12:31:07.751 ->AW DETACH MOC
11/28/08 12:31:21.214 ->dotf eof

The trace file is large, but the timeline might help a trained eye pinpoint the culprit. It starts with the following lines, intermixed with a lot of other classical tkprof material:
11/28/2008 12:29:33 OLAP - Constructing 19 modules
11/28/2008 12:29:33 OLAP -   Constructing xsmemmgr ord=2000
11/28/2008 12:29:33 OLAP -   Constructing xsexcept ord=3000
11/28/2008 12:29:33 OLAP -   Constructing xsstack ord=4000
11/28/2008 12:29:33 OLAP -   Constructing xsics ord=5000
11/28/2008 12:29:33 OLAP -   Constructing xsiomgr ord=6000
11/28/2008 12:29:33 OLAP -   Constructing xspgmgr ord=7000
11/28/2008 12:29:33 OLAP -   Constructing xssqlout ord=7498
11/28/2008 12:29:33 OLAP -   Constructing xseng ord=8000
11/28/2008 12:29:33 OLAP -   Constructing xsaggr ord=9000
11/28/2008 12:29:33 OLAP -   Constructing xstf ord=11000
11/28/2008 12:29:33 OLAP - Done Constructing
11/28/2008 12:29:33 OLAP - Constructing 19 workspaces
11/28/2008 12:29:33 OLAP -   Constructing xscbm ord=1500 ws idx=1 sz=40
11/28/2008 12:29:33 OLAP -   Constructing xsmemmgr ord=2000 ws idx=7 sz=24
11/28/2008 12:29:33 OLAP -   Constructing xsexcept ord=3000 ws idx=3 sz=96
11/28/2008 12:29:33 OLAP -   Constructing xsstack ord=4000 ws idx=11 sz=24
11/28/2008 12:29:33 OLAP -   Constructing xsics ord=5000 ws idx=4 sz=480
11/28/2008 12:29:33 OLAP -   Constructing xsiomgr ord=6000 ws idx=5 sz=248
11/28/2008 12:29:33 OLAP -   Constructing xspgmgr ord=7000 ws idx=8 sz=984
11/28/2008 12:29:33 OLAP -   Constructing xssqlout ord=7498 ws idx=10 sz=72
11/28/2008 12:29:33 OLAP -   Constructing xseng ord=8000 ws idx=2 sz=1464
11/28/2008 12:29:33 Engine - OLAP LMS memory check enabled.
11/28/2008 12:29:33 OLAP -   Constructing xsaggr ord=9000 ws idx=0 sz=24
11/28/2008 12:29:33 OLAP -   Constructing xstf ord=11000 ws idx=12 sz=48
11/28/2008 12:29:33 OLAP -   Constructing xsfin ord=32768 ws idx=17 sz=344
11/28/2008 12:29:33 OLAP -   Constructing xsdfn ord=32768 ws idx=16 sz=1760
11/28/08 12:29:33.890 [ DynPagePool] grew by 56KB to 56KB
11/28/2008 12:29:33 OLAP -   Constructing xslmt ord=32768 ws idx=15 sz=120
11/28/2008 12:29:33 OLAP -   Constructing xsilp ord=32768 ws idx=14 sz=448
11/28/2008 12:29:33 OLAP -   Constructing xsix ord=32768 ws idx=13 sz=432
11/28/2008 12:29:33 OLAP -   Constructing xsdmlcmd ord=32768 ws idx=18 sz=40
11/28/2008 12:29:33 OLAP -   Constructing xsldolap ord=2147483647 ws idx=6 sz=24
11/28/2008 12:29:33 OLAP -   Constructing xssnsr ord=2147483647 ws idx=9 sz=48
11/28/2008 12:29:33 OLAP - Done Constructing workspaces wsArray=0x0x2b506dff3c58
11/28/2008 12:29:33 OLAP - Initializing OLAP session
=====================
11/28/08 12:29:33.930 [ DynPagePool] grew by 56KB to 56KB
11/28/08 12:29:33.931 [    MD Cache] Beginning cache initialization
11/28/08 12:29:33.931 [ DynPagePool]   grew by 112KB to 224KB
11/28/08 12:29:33.932 [    MD Cache]   Cache initialized
11/28/2008 12:29:33 OLAP - Done initializing
=====================
11/28/08 12:29:43.942 [ DynPagePool]   grew by 42KB to 266KB
11/28/08 12:29:43.944 [ DynPagePool]   grew by 49KB to 315KB
11/28/08 12:29:43.945 [ DynPagePool]   grew by 63KB to 378KB
11/28/08 12:29:43.947 [ DynPagePool]   grew by 70KB to 448KB
11/28/08 12:29:43.949 [ DynPagePool]   grew by 84KB to 532KB
11/28/08 12:29:44.005 [ DynPagePool]   grew by 42KB to 266KB
11/28/08 12:29:44.006 [ DynPagePool]   grew by 49KB to 315KB
11/28/08 12:29:44.006 [ DynPagePool]   grew by 63KB to 378KB
11/28/08 12:29:44.007 [ DynPagePool]   grew by 70KB to 448KB
11/28/08 12:29:44.008 [          AW] Done
11/28/08 12:29:44.011 [ DynPagePool] grew by 105KB to 637KB
11/28/08 12:29:44.013 [ DynPagePool] grew by 126KB to 763KB
11/28/08 12:29:44.016 [ DynPagePool] grew by 147KB to 910KB
11/28/08 12:29:44.019 [ DynPagePool] grew by 182KB to 1092KB
11/28/08 12:29:44.022 [ DynPagePool] grew by 126KB to 763KB
11/28/08 12:29:44.024 [ DynPagePool] grew by 217KB to 1309KB
11/28/08 12:29:44.027 [    xsdimich] create hashtable hash:000000006E6D4DA8, size:0KB
11/28/2008 12:29:44 XSQERST - xsqerAWPopStats - Statistics for OLAP_TABLE(OLAP.TRAFFIC) call :
           Compiled in 0.099 seconds
           Number of rows : 825037248
           Ave. row length : 29
11/28/08 12:29:44.027 [          AW] Detaching MOC
11/28/08 12:29:44.029 [     AgClean]   start aggmap=MOC!TRAFFIC_SOLVE_AGGMAP clean=memory
11/28/08 12:29:44.029 [     AgClean]   finish clean=memory
11/28/08 12:29:44.029 [     AgClean]   start aggmap=MOC!___AW_SOLVE_AGGMAP clean=memory
11/28/08 12:29:44.029 [     AgClean]   finish clean=memory
11/28/08 12:29:44.030 [          AW] Done
=====================
11/28/08 12:29:44.036 [          AW] Done
11/28/08 12:29:44.036 [  GDILoopOpt] creating LOOP OPTIMIZED loop descriptor
11/28/08 12:29:44.036 [    AgDangle]   start aggmap=MOC!TRAFFIC_SOLVE_AGGMAP
11/28/08 12:29:44.036 [     AgClean]     start aggmap=MOC!TRAFFIC_SOLVE_AGGMAP clean=memory
11/28/08 12:29:44.036 [     AgClean]     finish clean=memory
11/28/08 12:29:44.036 [ DynPagePool]     grew by 259KB to 1568KB
=====================
11/28/08 12:29:44.040 [    AgDangle]   finish
11/28/08 12:29:44.040 [     AgClean]   start aggmap=MOC!TRAFFIC_SOLVE_AGGMAP clean=session
11/28/08 12:29:44.040 [     AgClean]   finish clean=session
11/28/08 12:29:44.040 [  GDILoopOpt]   LOOP OPTIMIZED loop descriptor  constructed
11/28/2008 12:29:44 XSAWQ - 47429368 xsawqPreprocess - 0 Inhier limits: 0.00000
11/28/2008 12:29:44 XSAWQ - 47429368 xsawqPreprocess - 0 filter limits: 0.00000   0 f 0 g 0 s
11/28/2008 12:29:44 XSQER - 47429368 xsqerInitLoopCtx - Loop Engine Init: 0.00362
11/28/2008 12:29:44 XSAWQ - 47429368 xsawqOpenScan
11/28/08 12:29:44.046 [    xsdimich] create hashtable hash:000000006E715230, size:0KB
11/28/08 12:29:44.047 [    xsdimich] create hashtable hash:000000006E7150F8, size:0KB
11/28/08 12:29:44.047 [    xsdimich] create hashtable hash:000000006E730838, size:0KB
11/28/08 12:29:44.048 [    xsdimich] create hashtable hash:000000006E72D690, size:0KB
11/28/08 12:29:44.049 [    xsdimich] create hashtable hash:000000006E72B590, size:0KB
=====================

Then I have the following lines all by themselves; the server seems to spend quite some time allocating memory, and most of the time (more than one minute) is spent here:
*** 2008-11-28 12:29:44.181
11/28/08 12:29:44.181 [ DynPagePool] grew by 308KB to 1876KB
*** 2008-11-28 12:29:49.365
11/28/08 12:29:49.365 [ DynPagePool] grew by 371KB to 2247KB
*** 2008-11-28 12:29:57.197
11/28/08 12:29:57.197 [ DynPagePool] grew by 448KB to 2695KB
*** 2008-11-28 12:30:07.081
11/28/08 12:30:07.081 [ DynPagePool] grew by 539KB to 3234KB
*** 2008-11-28 12:30:18.994
11/28/08 12:30:18.994 [ DynPagePool] grew by 644KB to 3878KB
*** 2008-11-28 12:30:33.931
11/28/08 12:30:33.931 [ DynPagePool] grew by 770KB to 4648KB
*** 2008-11-28 12:30:56.482
11/28/08 12:30:56.482 [   Aggregate] start func=1 vars=1
=====================

Then I have a lot of lines like these, which account for about 10 seconds:
11/28/08 12:30:56.488 [   Aggregate] finish childsize=0
11/28/08 12:30:56.488 [   Aggregate] start func=1 vars=1
11/28/08 12:30:56.488 [   Aggregate] finish childsize=0
11/28/08 12:30:56.488 [   Aggregate] start func=1 vars=1
11/28/08 12:30:56.488 [   Aggregate] finish childsize=0
11/28/08 12:30:56.488 [   Aggregate] start func=1 vars=1
11/28/08 12:30:56.488 [   Aggregate] finish childsize=0
  ........ cut lots of similar lines  .........
11/28/08 12:31:07.241 [   Aggregate] finish childsize=0
11/28/08 12:31:07.241 [   Aggregate] start func=1 vars=1
11/28/08 12:31:07.241 [   Aggregate] finish childsize=0
11/28/08 12:31:07.241 [   Aggregate] start func=1 vars=1
11/28/08 12:31:07.241 [   Aggregate] finish childsize=0
11/28/2008 12:31:07 XSAWQ - 47429368 xsawqCloseScan: 83.46450  9.93485 loop  47.16872 fetch  16058020 read  1070 retd
                                             Paging: 1147604 hits  60137 misses  33922992 pgpoolsz
                                             Cache.: 0 success  0 failure  0 precompute  0 calcs
...  cut some FETCH/WAIT lines .....
11/28/2008 12:31:07 XSQER - 47429368 xsqerAWFreeResources - AWHT END
                                 Total Partition time: 83.48887  16058020 rows
                                 Lookup time.........: 0.00000  0 rows
                                 Filter time.........: 9.98096
11/28/08 12:31:07.520 [   CCCovStat] CCITerm vcount=56782
11/28/08 12:31:07.520 [          AW] Detaching MOC
11/28/08 12:31:07.522 [     AgClean]   start aggmap=MOC!TRAFFIC_SOLVE_AGGMAP clean=memory
11/28/08 12:31:07.522 [     AgClean]   finish clean=memory
11/28/08 12:31:07.522 [     AgClean]   start aggmap=MOC!___AW_SOLVE_AGGMAP clean=memory
11/28/08 12:31:07.522 [     AgClean]   finish clean=memory
11/28/08 12:31:07.523 [          AW] Done

After this, I guess the request is finished.
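One thing I still plan to try, in case that minute is spent aggregating the 16 million rows the scan reports on the fly rather than reading stored aggregates: forcing a full build (precompute) of the cube. If I read the 11g documentation correctly, something along these lines should do it; this is only a sketch with my own object name, not something I have verified yet:
BEGIN
  -- rebuild the TRAFFIC cube so aggregate values are stored
  -- rather than computed at query time (the default build loads and solves)
  DBMS_CUBE.BUILD('TRAFFIC');
END;
/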
Well, I hope this blob helps someone figure out what's going wrong here. I'm willing to provide any additional information as needed. Thanks again to all for your support.
Christian

Similar Messages

  • How to load data from a ODS to CUBE Request ID - by - Request ID?

    How to load data from an ODS to a cube, request ID by request ID?
    The problem is that some requests had been deleted from the cube and the delta control between the ODS and the cube was lost. The "data mart status of request" flag of all the ODS requests had been blanked.
    Now it is necessary to load some requests from the ODS into the cube.
    Notes:
    - it is not possible to make a complete load selecting the data to be loaded;
    - the PSA is not being used;
    - considering the data volume it is impracticable to reload the cube completely.
    Thanks in advance,
    Wesley.

    Dear R B,
    Considering the following:
    -> the delta control was lost;
    -> the data already are active in the ODS;
    -> part of the data of the ODS already is in the cube.
    The indicated procedure only guarantees loading the data that is in the ODS and not yet in the cube.
    Tks,
    Wesley.

  • R/3 table for the corresponding BW Cube Request details

    Hi BW Gurus,
    What is the R/3 table that stores all the details of BW cube requests, such as the InfoPackage being loaded, the start time, date, number of records loaded, selection conditions, type of data update, InfoSource, DataSource, etc.?
    Thanks in advance,
    Dilse...
    Harish

    Hi Andreu,
    I have checked the tables; we can get the information from several tables, not from a single one. Thanks for the update.
    There is a table called RSREQDONE, which stores information such as date, time, InfoPackage ID, number of records updated, full load or delta, etc.
    Once again thanks.
    Dilse...
    Harish

  • Deletion of Info cube request

    Hi Experts,
    Currently I am working on BI 7.0.
    Here is the scenario: I want a setting that deletes the previous InfoCube request when scheduling the InfoPackage for a new request.
    In 3.0 we had this functionality in the InfoPackage ("Automatic loading of similar/identical requests from the InfoCube"), where we could select a radio button to delete the previous request from the InfoCube.
    Do we have this type of functionality in BI 7.0?
    Regards
    Sujay

    Hi Sujay,
    NetWeaver 7.0 does not support the feature you are talking about; it is possible only in 3.x.
    You could, however, incorporate this step ("Delete Overlapping Requests") in a process chain. That's what I do; I had the same issue earlier.
    Regards,
    Manu

  • Delete cube request - before or after index creation?

    Hi Folks,
    a quick question: I plan to delete, for one cube, requests that are older than 3 days (the cube only hosts short-term reporting). Now I wonder whether this should be done after the new load but before the index is created, or after the index is created.
    I guess it is after the index is created, otherwise it would take longer to find the requests that should be deleted. The index will be slightly degenerated by the deletion, but that should be marginal.
    Am I right or wrong?
    Thanks for all replies in advance,
    Axel

    hi,
    The delete should be done before index creation: once the indexes are created, the index entries remain even after the corresponding data is deleted, which unnecessarily increases the index size.
    regards,
    Arvind.

  • T-cube request id blank

    Hello Friends,
    Could you please share with me under what circumstances a plan cube request ID can be blank?
    thanks!!
    regards
    YHogan

    I recalled the answer.

  • Cube Request Deletion - Other loads possible or LOCK problems

    Hi Folks,
    can somebody tell me if there could be a lock situation when, in one process chain, the cube request deletion step is running (cleaning out some requests) while at the same time another load is running into the cube? We would not drop/re-create the indexes during the load, so no lock from the index operation.
    Any experience?
    From what I have seen there shouldn't be a problem, as the request deletion is "scheduled" and simply executes once the lock (if any) is released, and the deletion itself only places a quick lock to determine the request ID, so it doesn't much impact other loads.
    Is this all correct? Or any other views?
    Thanks,
    Axel

    Hi,
    While a load is running, it is not possible to:
    ·        Delete data
    ·        Archive data
    ·        Delete indexes and reconstruct them
    ·        Reconstruct statistics
    For more information:
    http://help.sap.com/saphelp_nw70ehp2/helpdata/en/bb/bdd69f856a67418962d74bfd7bd8af/frameset.htm
    Regards,
    Anil Kumar Sharma .P

  • Simultaneous data activation in cube - request for reporting available

    Hi,
    I'm on BW 3.5.
    I am loading several million records to a cube, processing is to PSA first and subsequently to data target.
    I have broken the load up into 4 separate loads to prevent caches from filling up and causing huge performance issues.
    When I load all the data in a single load, it takes 10 hours to load.  When I break it up into 4 loads it takes 3 hours.
    My problem is that during the loading from PSA to data target, the first data load becomes green and ready for reporting before the last one has finished loading, and so the users get inaccurate report results if they happen to run a report before the last request activates.
    Is it possible to get all 4 requests to activate simultaneously?
    I have tried adding an aggregate to the cube, no good.
    I have tried loading the 4 loads to the PSA in sequential order in the process chain, and then loading from PSA to data target simultaneously (side by side), no good.
    Does anyone have a solution?
    Many thanks,
    Paul White

    Hi ....
    Have you done the roll-up?
    Since there are aggregates on that cube, until you do the roll-up the request will not be available for reporting.
    Regards,
    Debjani....

  • Error processing cube - requested operation cannot be performed on a file with a user-mapped section open

    Hi,
    We have recently moved our production servers and since the move have been experiencing an intermittent (but frequent) error when processing our OLAP cube.
    The error messages presented are:
    "Error: The following error occurred during a file operation: The requested operation cannot be performed on a file with a user-mapped section open"
    "Error: Errors in the OLAP storage engine: An error occurred while processing index for the <Partition Name> partition of the <Measure Group Name> measure group of the <Cube Name> cube from the <Database Name> database"
    I assume the second message is a consequence of the first error.  The partitions and measure groups seem to vary each time the process is run.
    It appears from similar threads that this is usually caused by backup or anti-virus applications locking the files that Analysis Services is using to process the cube. I have ensured that there are no backups running at the time of processing, and I have disabled anti-virus programs, without success.
    I have also created a new version of the cube (using the deployment wizard), which deployed without error but then hit the same error when processed a second time. There was nothing (client application wise) using this cube when it failed to process.
    As I mentioned earlier, this problem is intermittent: sometimes the cube processes successfully, but usually it fails.
    We have not encountered this error in our previous production environment or in any of our development environments.
    Has anyone encountered this problem before? Any suggestions on possible solutions?
    Thanks
    Rich

    Hi jonesri,
    I think you can try using the SSAS Dynamic Management Views (DMVs) to monitor the SSAS instance, including existing connections and sessions. For example, run the following query:
    SELECT [SESSION_COMMAND_COUNT],
    [SESSION_CONNECTION_ID],
    [SESSION_CPU_TIME_MS],
    [SESSION_CURRENT_DATABASE],
    [SESSION_ELAPSED_TIME_MS],
    [SESSION_ID],
    [SESSION_IDLE_TIME_MS],
    [SESSION_LAST_COMMAND],
    [SESSION_LAST_COMMAND_CPU_TIME_MS],
    [SESSION_LAST_COMMAND_ELAPSED_TIME_MS],
    [SESSION_LAST_COMMAND_END_TIME],
    [SESSION_LAST_COMMAND_START_TIME],
    [SESSION_PROPERTIES],[SESSION_READ_KB],
    [SESSION_READS],[SESSION_SPID],
    [SESSION_START_TIME],[SESSION_STATUS],
    [SESSION_USED_MEMORY],
    [SESSION_USER_NAME],
    [SESSION_WRITE_KB],
    [SESSION_WRITES]
    FROM $SYSTEM.DISCOVER_SESSIONS
    Use Dynamic Management Views (DMVs) to Monitor Analysis Services:
    http://msdn.microsoft.com/en-us/library/hh230820.aspx
    In addition, you can aslo use SQL Profiler to capture some events for further investigation.
    Use SQL Server Profiler to Monitor Analysis Services:
    http://technet.microsoft.com/en-us/library/ms174946.aspx
    Regards,
    Elvis Long
    TechNet Community Support

  • Zend AMF extremely slow first request

    Hi all,
    I'm having a weird problem with Zend AMF and was wondering if anyone else has seen this before.
    I've got a very simple PHP/MySQL backend that returns multidimensional arrays to Flex via Zend AMF.
    Now this all worked fine up to the point that I started testing my app with a remote server instead of my local test server.
    With the remote server I noticed that sometimes, but always the first time a PHP function is called, it takes forever for the callback function to be called with a result. I'm talking about 1 to 2 minutes!
    Now, when I call that same PHP function via a normal URL, it returns the right results in a couple of milliseconds every time.
    Once the function has been called once it seems to be fine, and the next time it's called it returns results within milliseconds.
    I've had a look with a network sniffer to see if the transfer of data takes long, but that's all fine...
    So it looks to me as if it just takes forever before the RemoteObject calls its callback function.
    I'll be testing with some stripped down code later tonight and will also set it up on a different server, but I was hoping someone else has seen this and knows a workaround...
    Thanks
    Skip

    Hmm, I just did some more tests, and the results do update, so it doesn't look like a cached result.
    I'm not entirely sure, but it looks like when multiple AMF methods are called too close to each other, they are combined into one HTTP POST request to the AMF gateway. When this happens the response is extremely slow, whereas when I make the second call after the first one has finished completely, the response is fine (around 200 milliseconds).
    You wouldn't happen to know how RemoteObject handles multiple calls to an AMF backend, would you?

  • Slow remote request on 7.6.03.15 on Windows server 2008

    Hello all,
    I have installed MaxDB on Windows Server 2008.
    When I make requests on the database from a remote computer with SQLStudio, they are very slow.
    For example, I query an empty table:
    SELECT * FROM <TABLENAME> seems fast,
    but this one is very slow, where I only replaced * with all the columns:
    SELECT  ArcState, Comp_date_time, ContFlag, Deferred, SndAccount, SndAddress, SndCompany, SndName, SndType, RcpAddress, Msn, Notif, OwnerID, OwnerPB, PreviewMessage, PreviewVisible, Priority, Purged, DIST_ABORTED, DIST_ABORTEDCOUNT, DIST_OWNER, DIST_NUM, DIST_VERSION, Status_str, State, Subject, Sub_date_time, SATID, RcpCompany, RcpName, Viewed  FROM <TABLENAME>
    This request takes 0.3s, and the table is empty. Everything works fine if SQLStudio runs on the local computer, even if I use XSERVER.
    How would you explain this behavior?
    To avoid DNS issues I put the IP address in etc/hosts; I also disabled the Windows 2008 firewall.
    If you have any idea that could help me, let me know.
    Thanks for help.
    Yann.

    Hi Markus,
    Here is the result of explain.
    ESKDBADM     DBM350_AC     AC_SUB_DATE_TIMED     INDEX SCAN          1
                 RESULT IS NOT COPIED, COSTVALUE IS                      2
              QUERYREWRITE - APPLIED RULES:
                 DistinctPullUp                                          1
    Keep in mind that the table is empty,
    If I execute this request from the remote computer:
    SELECT ArcState, Comp_date_time, ContFlag, Deferred, SndAccount, SndAddress, SndCompany, SndName, SndType, RcpAddress, Msn, Notif, OwnerID, OwnerPB, PreviewMessage, PreviewVisible, Priority, Purged, DIST_ABORTED, DIST_ABORTEDCOUNT, DIST_OWNER, DIST_NUM, DIST_VERSION, Status_str, State, Subject, Sub_date_time, SATID, RcpCompany, RcpName, Viewed FROM DBM350_AC WHERE (((ArcState IS NULL  OR ArcState != 4) AND (PreviewMessage IS NULL  OR PreviewMessage = 0 OR PreviewVisible = 1))) ORDER BY Sub_date_time DESC WITH LOCK ISOLATION LEVEL 0
    on an empty table, it always takes 0.608s; I can press F8 several times, it always takes 0.608s.
    You said
    "To really compare those two statements - stop and start the database between them - and you will see that both of them take the same amount of time."
    I agree with you that if the second call were faster than the first, it would be because of the data cache, but in my case it is always slow.
    How can a select on an empty table take 0.608s?
    I think there is something wrong in Windows 2008, but I don't know what.
    Yann.

  • ABAP Program to close BPC CUBE Request

    Hi
    I urgently need an ABAP program to close a BPC open request.
    I tried the program RSAPO_SWITCH_TRANS_TO_BATCH
    and the function module RSAPO_CLOSE_TRANS_REQUEST.
    Neither ever worked on a BPC InfoCube at all.
    Please help me close the yellow request to green on BPC cubes.
    Regards

    I tried that; unfortunately it does not close the BPC request to green.
    The reason I want this is that I am using FM RSDRI_INFOPROV_READ_RFC to read the BPC cube in a program, and a yellow request returns 0 requests, so first I need to close the request and then read the BPC cube.

  • Which authorization object we have to use for direct cube request ?

    Hello,
    In the Analyzer, when we try to query the cubes directly, we get a 'No authorization' message.
    Do you know which authorization object we have to use for that?
    Thanks in advance for you help.
    Best regards
    Nicolas Trinquand

    Hi Nicolas,
    Authorizations exist at two levels: object level, which controls access to cubes, InfoAreas, etc.,
    and data level, for which you need to create data-level authorization objects in RSSM.
    If you want to grant authorization to queries relating to a cube,
    use S_RS_COMP and specify which cube you want.
    If you want to grant authorizations to the cube itself, use
    S_RS_ICUBE.
    If you have any customized data-level authorization object, select the InfoProvider on which you want to activate this object in RSSM, then give the permissions in your role.
    Check link
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6859e07211d2acb80000e829fbfe/content.htm
    Hope it is clear now.
    Assign points if you felt it is useful.
    Regards,
    Vijay.

  • How to get Cube Request Reportability From Tables?

    Hello,
    What table contains the reportability status of a request in a cube? I want to know this so I can programmatically figure out which cubes are reportable and which are not. Thanks!

    Hope RSODSACTREQ will give you the loaded request and reporting availability information.
    Thanks,
    Arun

  • Checking Info Cube request data

    Hi gurus,
    Let's say I needed to look at what data a DTP request brought up to the cube.
    Through what navigation steps would I see the data of one particular request for analysis?
    Thanks for your comments.
    Eddy.

    Try this:
    right-click on your cube and select Manage;
    under the Requests tab, copy the required request number;
    then select Display Data from the context menu of the cube, enter the request number (under the data package dimension), and execute.
