InfoSpoke on an uncompressed cube

Hi Experts,
I want to know why it's not possible to extract using an InfoSpoke from an uncompressed, non-aggregated InfoCube (one that still contains a number of request IDs).
Thanks in advance
D Bret

Hi
An InfoSpoke can extract data in two ways:
1.      Full mode (F): the data corresponding to the selection criteria of the InfoSpoke is transferred from the data source into the respective destination.
2.      Delta mode (D): only records added since the last extraction are transferred in this mode. This is only possible for the source objects InfoCube and DataStore object.
Delta Administration displays the requests available in the open hub data source as well as information about whether they have already been read or whether the open hub destination has not yet received them. If that is the case, the requests can be found under the category Requests Not Yet Read. Delta administration offers you the following functions:
You can deactivate delta administration. An additional delta request is subsequently not possible. At the same time, the status of all source requests that have already been read is reset to Not Yet Read.
Delta administration is also deactivated if one of the following events occurs:
A request that has already been extracted is deleted in the open hub data source.
A request that has not yet been extracted is compressed in the open hub data source (InfoCube).
You can reactivate delta administration. A delta request is then possible again. In the context menu, you can choose Delete Requests: the request is deleted from delta administration and can be requested again (repeat). You may want to do this if a request does not arrive correctly in the target system.
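As background, each uncompressed request stays individually identifiable in the F fact table through the package dimension; compression moves the data into the E fact table under request SID 0, which is why an unread request that gets compressed deactivates the delta. A minimal SQL sketch of that request-level view, assuming a hypothetical cube ZSALES (table and column names follow the usual BW naming pattern, so verify them in your system):
-- Requests still sitting uncompressed in the F fact table of cube ZSALES.
-- "/BIC/FZSALES" = F fact table, "/BIC/DZSALESP" = package dimension (assumed names).
SELECT p.sid_0requid AS request_sid,
       COUNT(*)      AS fact_rows
FROM   "/BIC/FZSALES"  f
JOIN   "/BIC/DZSALESP" p ON p.dimid = f.key_zsalesp
GROUP  BY p.sid_0requid
ORDER  BY p.sid_0requid;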

Similar Messages

  • Optimization steps for Uncompressed cube

    I am new to Oracle OLAP. I have designed a cube with 4 dimensions and three measures. Since I have measures like count of staff and hours encoded, I have to group (sum) the number of staff and hours encoded for each team and roll up along its tree. At the same time, I have to show the average staff members for a given period of time together with the sum of hours encoded. In order to override the cube aggregation, I had to uncompress the cube, and this is causing serious issues. Can you please suggest what steps to follow to optimize performance with the same uncompressed cube?
    Appreciate your support.

    There are a few tricks to calculate averages in a compressed cube. The subject was discussed recently on the forum. 
    https://forums.oracle.com/message/10920684#10920684
    The best way, if you can handle it, is to use the MAINTAIN COUNT syntax.  But this may be a stretch if you are new to OLAP.
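    If MAINTAIN COUNT is out of reach, a common workaround is to store the SUM and the COUNT as two ordinary measures and derive the average at query time. A minimal SQL sketch, assuming a hypothetical relational cube view STAFF_CUBE_VIEW with stored measures HOURS_SUM and STAFF_COUNT (both plain SUM aggregations):
    -- Derive the average from two SUM-aggregated measures at query time.
    CREATE OR REPLACE VIEW avg_staff_hours_v AS
    SELECT team,
           period,
           hours_sum,
           staff_count,
           hours_sum / NULLIF(staff_count, 0) AS avg_hours_per_staff
    FROM   staff_cube_view;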

  • Partitioning an active but uncompressed cube

    Hello Community,
    There are many posts on partitioning InfoCubes.  I promise I have read all of them before asking this question -->
    Partitioning for a fact table must be defined before you activate the InfoCube. It cannot be done afterwards.
    But what if the cube has not been compressed yet?
    My hope is that newer BW releases have added the ability to partition any time up until the first fact-table compression?
    The reason I have this hope is that partitioning affects only the E fact tables. And, of course, the E fact table remains empty until the first compression. (F fact tables are automatically partitioned by the request ID.)
    So why should it be impossible to define the partitioning while the E fact table is still empty?
    Another question: when working with the BW GUI, if it is indeed true that I must unload, partition, and reload the unpartitioned cubes before compressing them, what is lost or gained by skipping that process and simply compressing the unpartitioned cube, followed by partitioning with database tools?
    Thanks!
    Keith

    Hi Keith,
    Sorry, but you have to implement partitioning before the first data load - when the InfoCube is initially activated.
    With regard to your second question: you could indeed do this, but you will run into major issues if you need to activate the InfoCube again or do any maintenance on it. The activation from BI could result in the E and F fact tables being activated with definitions that differ from the modified DB tables (effectively losing the partitions underneath). I have not tested this, but it is a real possibility, so I would advise care.
    I hope this helps,
    Mike.
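    For illustration: the partitioning Keith asks about is baked into the E fact table's DDL at activation time, which is why it cannot be switched on later from the workbench. A rough sketch of the kind of table BW generates on Oracle, with a hypothetical cube ZSALES and only a couple of columns shown:
    -- E fact table range-partitioned on 0CALMONTH, created at activation.
    CREATE TABLE "/BIC/EZSALES" (
      sid_0calmonth NUMBER,   -- partitioning SID column
      key_zsales1   NUMBER,   -- one of the dimension keys
      amount        NUMBER    -- a key figure
    )
    PARTITION BY RANGE (sid_0calmonth) (
      PARTITION p200001 VALUES LESS THAN (200002),
      PARTITION p200002 VALUES LESS THAN (200003),
      PARTITION pmax    VALUES LESS THAN (MAXVALUE)
    );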

  • Got hung in the cube index deletion

    Hi Gurus-
    I've got an issue where my cube's index deletion got hung.
    In daily process chain monitoring, the PC failed at the index deletion step for a cube. I repeated the process and it failed again.
    So I finally went into the cube's Manage screen and analysed it: there was a red request in the cube from four days before. It was a file upload that went red, so I deleted the red request, which sat between green requests.
    Then I went back to the PC and tried to repeat the cube's index deletion step, monitoring it in SM37; the drop-index job ran for a long time. I killed the job and tried to delete the indexes manually from the Performance tab of the cube's Manage screen.
    Even that ran for quite a long time in dialog mode, so I stopped the transaction and scheduled the delta; now the request has been yellow for days, for 3,000 records.
    --> Checked with Basis: no locks or deadlocks were found at the SAP or DB level.
    --> No action on this cube is being traced by the system; for any action, the job log in SM37 shows just two lines, so there is nothing to analyse.
    Please help me in this regard.
    I need the delta in the cube; users are pressing me.
    Regards,
    Vishwa.

    Hi,
    Try running RSRV and check whether there is any error in the cube.
    In BI 7.0, RSRV offers many new options; run the combined test for that cube, which will tell you what the error is.
    Otherwise, ask the Basis folk to try to create the index on the database side if required.
    Also check whether any open hub destination or InfoSpoke related to that cube has a dependency on it.
    Hope this helps.
    Thanks,
    Arun.
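    If Basis does end up working at the database level, the check and repair Arun mentions might look like the following; the cube name ZSALES and the SAP index-name pattern are assumptions, so verify the actual names first:
    -- Inspect the secondary indexes on the F fact table.
    SELECT index_name, index_type, status
    FROM   dba_indexes
    WHERE  table_name = '/BIC/FZSALES';
    -- Rebuild a hanging index directly (index name is an assumption).
    ALTER INDEX "/BIC/FZSALES~010" REBUILD;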

  • Where can I find examples with OLAP DML to update the cube cells?

    Hi,
    Where can I find examples of using OLAP DML to update/calculate cube measures/cells?
    I would like to insert data into the cube via OLAP DML.
    Regards,
    TomB

    Not sure about examples, but this is how you should proceed.
    1. Limit all your dimensions to leaf-level values.
    lmt financialperiod to '200901'
    lmt geography to 'XYZ'
    lmt product to 'LAPTOP'
    2. Limit your measure dimension to one measure (applicable if you have more than one stored measure in the cube).
    for 10g
    lmt <cube name>prtmeasdim to '<MEASURE NAME>'
    for 11g
    lmt <cube name>measuredim to '<MEASURE NAME>'
    3. Write into the variable.
    for 10g
    <cube name>prttopvar = 100 -- this variable is created for a compressed & partitioned cube. For an uncompressed cube, the variable name is <cube name>_stored.
    Thanks
    Brijesh
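    The same steps can also be driven from SQL*Plus through the documented DBMS_AW.EXECUTE wrapper. A minimal sketch, where the AW name MYAW, the cube name MYCUBE and all dimension values are assumptions carried over from the steps above:
    BEGIN
      dbms_aw.execute('aw attach myaw rw');                 -- attach the AW read/write
      dbms_aw.execute('lmt financialperiod to ''200901''');
      dbms_aw.execute('lmt geography to ''XYZ''');
      dbms_aw.execute('lmt product to ''LAPTOP''');
      dbms_aw.execute('lmt mycubeprtmeasdim to ''SALES'''); -- 10g measure dimension
      dbms_aw.execute('mycubeprttopvar = 100');             -- write into the cell
      dbms_aw.execute('upd');                               -- OLAP DML UPDATE writes the AW
      dbms_aw.execute('commit');
      dbms_aw.execute('aw detach myaw');
    END;
    /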

  • InfoSpoke as Data Source in the same system

    Hello Expert,
    I am trying to use the result of an InfoSpoke, written to a transparent table (the source of the InfoSpoke is a basic cube), as a DataSource for loading another cube in the same system.
    The transparent table is generated successfully, but when I try to generate the generic DataSource (SAVE), the system raises the message:
    Invalid extract structure template /BIC/OHZRD_MMC_S of DataSource [DataSource name].
    This is very urgent and important because it is part of a possible solution, so if anybody could help me out of this hole, or knows another solution to my problem, please tell me.
    Thanks in advance.

    Hello,
    I have not tried this method; I would suggest using an export DataSource rather than extracting to a table and then loading the other data target.
    Right now you are taking a long route: Cube -> open hub -> table -> DataSource -> data target.
    If you use an export DataSource: Cube -> export DataSource -> data target.
    If the scenario cannot avoid modeling the InfoSpoke this way, then the option is to create a view based on the generated table (sketched below) and create the DataSource on top of the view.
    Happy Tony
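    For the view option mentioned above, the shape would be roughly the following; in practice you would define it as a database view in SE11, and the generated open hub table name /BIC/OHZRD_MMC is an assumption (the /BIC/OHZRD_MMC_S in the error message is its extract structure):
    -- The generic DataSource is then created on top of this view, not the table.
    CREATE OR REPLACE VIEW zv_oh_zrd_mmc AS
    SELECT * FROM "/BIC/OHZRD_MMC";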

  • Where are dimension attributes in AWM - cube viewer?

    Hi -
    Built my first cube and I am looking at the data with AWM's cube viewer.
    I can drill down a hierarchy, but I cannot get at the attributes that were created for the product dimension. The attributes were defined at all levels of the hierarchy.
    Does the AWM cube viewer allow you to access the attributes of a dimension?
    Thanks,
    Frank.

    Attributes are a property of the dimension, not of the fact. Now, you said the data is not visible in the cube and you are getting 0.00 even in a simpler case (a one-dimensional cube). There are many causes for values not showing up in a cube; some are mentioned below.
    1. All records were rejected in the cube maintenance process. Check the olapsys.xml_load_log table and see if you can find any rejected records.
    2. There is some breakage in the hierarchy of the dimension. That also prevents data from summing up.
    Did you try the Global sample available on OTN? That would be a good starting point for you.
    You can check whether the cube is loaded as follows. Consider your one-dimensional cube, and find a member that exists in the dimension and has some fact value associated with it.
    1. Now limit your dimension to that value like this
    lmt <dim name> to '<value>'
    For a compressed composite cube:
    2. Now check the data in the cube like this
    rpr cubename_prt_topvar
    For an uncompressed cube you should do
    rpr cubename_stored
    Step 2 should show the same value that is available in the fact.
    Thanks,
    Brijesh
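    On 11g there is also a pure SQL way to peek at the stored cells without the OLAP Worksheet, via the documented CUBE_TABLE function; the owner GLOBAL and cube name UNITS_CUBE here are assumptions:
    -- First few rows of the cube as SQL sees it; NULL measures mean no stored data.
    SELECT *
    FROM   TABLE(CUBE_TABLE('GLOBAL.UNITS_CUBE'))
    WHERE  ROWNUM <= 10;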

  • Excessive time when maintaining cube

    Hi there,
    I have a star schema with:
    a) 2 dimensions:
    year with hierarchy : CALENDAR_YEAR ------------>all_years
    location with hierarchy : COUNTRY -------------> CONTINENT -----------> ALL_COUNTRIES
    b) 6 partitioned cubes (uncompressed)
    Each cube contains measures with different data types. In particular, each measure has one of the following three data types:
    varchar2 ------------> with aggregation maximum
    int or dec ------------> with aggregation SUM (cube's aggregation)
    date ------------> with aggregation Non additive
    When I execute Maintain Cube (for one of my cubes), I leave my PC for 2 hours to load the data, and it never finishes; it just keeps loading. So the data load never completes. I have been at my PC for a week trying to solve the problem, but nothing has changed. What could the problem be?
    Notes:
    (A)
    I checked the NLS parameters and the data's format, and they are compatible. See for yourself:
    SQL> select value from V$NLS_Parameters;
    VALUE
    AMERICAN
    AMERICA
    $
    AMERICA
    GREGORIAN
    DD-MON-RR
    AMERICAN
    WE8MSWIN1252
    BINARY
    HH.MI.SSXFF AM
    VALUE
    DD-MON-RR HH.MI.SSXFF AM
    HH.MI.SSXFF AM TZR
    DD-MON-RR HH.MI.SSXFF AM TZR
    $
    AL16UTF16
    BINARY
    BYTE
    FALSE
    19 rows selected.
    (B)
    Mappings are also OK; I checked them. For each hierarchy, I gave each attribute values that prevent data conflicts. I think the `all_years` and `all_countries` levels are also OK, as they include everything.
    (C)
    My computer is an Intel Pentium 4 with 2 x 512 MB RAM. I am running Oracle 11g on Windows XP Professional Service Pack 2.
    Thanks in Advance

    I need uncompressed cubes because, as I said, I have non-numeric data types in my data: dates, numbers and varchar2.
    Anyway.
    I don't understand what you mean by dimension members, but I suppose you are referring to the levels and the hierarchy of each dimension. I have already included that in my previous post. Check it! If you mean something else, let me know!
    As for the amount of data:
    YEAR:2 RECORDS (1990 and 1991)
    CREATE TABLE YEARS
    (CALENDAR_YEAR_KEY NUMBER NOT NULL,
    CALENDAR_YEAR_NAME varchar2(40),
    CALENDAR_YEAR_TIME_SPAN NUMBER,
    CALENDAR_YEAR_END_DATE DATE,
    PRIMARY KEY(CALENDAR_YEAR_KEY));
    LOCATION : 256 RECORDS (It also contains a CONTINENT_ID whose values range from 350 to 362, representing all oceans, continents and the world. COUNTRY_ID ranges from 1 to 253.)
    CREATE TABLE LOCATIONS
    (COUNTRY_KEY varchar2(44) NOT NULL,
    COUNTRY_NAME varchar2(54),
    CONTINENT_KEY varchar2(20) NOT NULL,
    CONTINENT_NAME varchar2(30),
    COUNTRY_ID NUMBER,
    CONTINENT_ID NUMBER NOT NULL,
    PRIMARY KEY(COUNTRY_ID));
    MEASURES : 498 RECORDS (249 records for 1990 and 249 records for 1991)
    CREATE TABLE MEASURES
    (GEOGRAPHY_total_area DEC(11,1),
    GEOGRAPHY_local_area DEC(11,1),
    GEOGRAPHY_arable_land DEC(5,4),
    GEOGRAPHY_permanent_crops DEC(5,4),
    . (various other measures)
    MEASURES_YEAR NUMBER,
    MEASURES_COUNTRY NUMBER,
    PRIMARY KEY(MEASURES_YEAR,MEASURES_COUNTRY),
    FOREIGN KEY (MEASURES_YEAR) REFERENCES YEARS(CALENDAR_YEAR_KEY),
    FOREIGN KEY (MEASURES_COUNTRY) REFERENCES LOCATIONS(COUNTRY_ID));
    TOTAL: 268 measures
    But to make data loading easier, I created 6 cubes in Analytic Workspace Manager, each containing:
    GEOGRAPHY : 51 attributes
    PEOPLE : 24 attributes
    ECONOMY : 40 attributes
    GOVERNMENT : 113 attributes
    COMMUNICATION : 28 attributes
    DEFENSE FORCES : 11 attributes
    (If I made any counting error, forgive me. I only wanted to show you that there are many measures.)
    So, is there anything I can do to solve the problem?
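    Since you are on 11g, one way to tell whether the build is progressing or stuck is to watch the build log from a second session. A sketch; CUBE_BUILD_LOG lives in the cube owner's schema and the exact column list can vary by release, so DESCRIBE it first:
    -- Most recent build steps first; a long-running row shows where the time goes.
    SELECT build_id, status, command, build_object, time
    FROM   cube_build_log
    ORDER  BY time DESC;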

  • Help needed for running Aggregation map for a compressed cube

    Hello everybody,
    I am having a problem running an aggregation map that uses a model on a compressed cube. Please find the model and aggregation map definitions below.
    ---Model
    DEFINE MYMODEL MODEL
    MODEL
    DIMENSION this_aw!ACCOUNT
    ACCOUNT('NIBT') = nafill(OPINC 0) + nafill(OTHINC 0)
    END
    -----Aggregation Map
    DEFINE MY_AGGMAP AGGMAP
    AGGMAP
    MODEL MYMODEL PRECOMPUTE(ALL)
    AGGINDEX OFF
    END
    When running the aggregation on an uncompressed cube the model works fine, but when I try to aggregate a compressed cube it throws an error (String index out of range: -1). I would appreciate it if anyone could give this problem some thought.
    The cube has five dimensions apart from ACCOUNT and is partitioned by PERIOD. I am using Oracle 10g 10.2.0.4.0 with AWM.
    Thanks,
    Vishal

    Vishal,
    I am not sure about using composites to run the model, but you can limit your dimensions to some values (ones that have data), run the model on cube_prt_topvar, and then aggregate the cube using the default aggmap. You have to limit all dimensions back to ALL before you run the aggmap.
    I just saw the account model you posted initially. In your scenario you can limit your account dimension to just the three values 'NIBT', 'OPINC' and 'OTHINC', and the other dimensions to ALL. When you run the model you will not get aggregated values for account, but for the others you will see the aggregated values. If you would like aggregated values for account as well, I would suggest limiting all the dimensions to leaf level, running the model, and then aggregating the cube using the default aggmap.
    Hope this helps.
    Thanks
    Brijesh

  • Maintain Cube Not Working

    Brijesh,
    I built my dimensions, levels and hierarchies successfully and also created a cube. Now that I've built my measures and run the maintenance, I'm not seeing any values in them, even though I know I should.
    Based on my mapping, the keys from the fact go to the right dimensions (and I even made a simpler, just one dimension --> measure cube as well), but no success. There are cases where I know I shouldn't get any data (based on selected values), but when I make a valid selection I see only 0.00 displayed.
    Can you tell where I may have gone wrong here? Are the values made available for selection (the attributes) ONLY supposed to be the same one-to-one values available in the fact table?
    **I'm using the simple SUM aggregate function for my measures, and pretty much all the default configurations given.
    Brijesh,
    Thanks for your suggestions, and here are my results based on that.
    1. No records rejected after running cube maintenance
    2. I didn't limit my dimension to a specific value as you recommended, but made my member the same as my Long and Short description attributes using AWM. (It's a flat dimension, i.e. no levels or hierarchy, since the dimension only has one value/meaningful field.)
    Based on those steps, I still didn't get the results I was looking for. The fact table has five values for that one dimension and I'm seeing 0.00 for four of them, and an inaccurate value for the last one (this after comparing with a simple aggregate query against the fact).
    Do you have any other possible reasons/solutions?
    **Loading the Global Schema into our dev environment is out of my hands unfortunately, so that's the reason for the prolonged learning curve.

    Brijesh,
    Here's the results of what you suggested:
    1. Creating the test dim and fact tables with the simple case you provided was successful, and AWM was able to map the same values to a cube created on top of that model.
    2. I took it a step further and changed the dim values to be the same as the existing dim table.
    2.b. I also replaced the test fact table values to mimic the existing values so they would match what's available in the dim table, and here's where the fun / mystery begins.
    Scenario 1:
    I created the fact like this...........select dim_value, sum(msr) from <existing fact table> group by dim_value
    As you can easily tell, my values were already aggregated in the table, and they also match perfectly in the cube created by AWM - no issue.
    Scenario 2:
    Created the fact like this............select dim_value, msr from <existing fact table>
    Quite clearly my values are no longer aggregated, and are broken down across multiple occurrences of the dim values; I did this so I could verify that the "sum" would actually work when used in AWM.
    The results from scenario 2 led me back to the same issue I faced before - i.e. the values weren't rolled up when the cube was created. No records were rejected, there was only ONE measure value showing up (and it was still incorrect), and everything else was 0.00.
    I retrieved this error from the command program that runs in the background. This was generated right after running the maintain cube:
    <the system time> TRACE: In oracle.dss.metadataManager.............MDMMetadataProviderImpl92::..........MetadataProvider is created
    <the system time> PROBLEM: In oracle.dss.metadataManager.........MDMMetadataProviderImpl92::fillOlapObjectModel: Unable to retrieve AW metadata. Reason ORA-942
    BI Beans Graph version [3.2.3.0.28]
    <the system time> PROBLEM: In oracle.dss.graph.GraphControllerAdapter::public void perspectiveEvent( TDGEvent event ): inappropriate data: partial data is null
    <the system time> PROBLEM: In oracle.dss.graph.BILableLayout::logTruncatedError: legend text truncated
    Please tell me this helps shed some light on the main reason no values are coming back; we really need to move forward with using Oracle cubes here.
    Thanks
    Mike

  • Find the cubes that are not being compressed

    Hi,
    I need to find the cubes that are not being compressed.
    Please let me know how to find them. Are there any transaction codes to figure this out?
    Thanks,

    Hi
    E fact tables are named /BIC/E*.
    F fact tables are named /BIC/F*.
    By comparing the two lists (an E table that stays empty while its F table holds data), can we find the uncompressed cubes?
    If this is right, it gives one more solution to your problem; see the SQL sketch below.
    Thanks
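    A sketch of that comparison in SQL for custom /BIC/ cubes; it relies on optimizer statistics (num_rows), so gather statistics first or replace them with live COUNT(*) checks:
    -- Cubes whose F fact table holds data while the E fact table is empty,
    -- i.e. cubes that have never been compressed.
    SELECT SUBSTR(f.table_name, 7) AS infocube
    FROM   dba_tables f
    JOIN   dba_tables e
      ON   e.table_name = '/BIC/E' || SUBSTR(f.table_name, 7)
    WHERE  f.table_name LIKE '/BIC/F%'
      AND  NVL(f.num_rows, 0) > 0
      AND  NVL(e.num_rows, 0) = 0;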

  • Data at the ALL level does not match after applying security

    Hi ,
    We are implementing the security and observed following.
    1. Data is loaded into the cube correctly, and the report shows the data correctly at all levels.
    2. Now we apply the security, which restricts users to seeing only the members allowed by the role they are mapped to.
    3. When we create a report, the values at the second level and all levels below it are correct, but the value at the ALL level still shows the same as in step 1. This means that the value is not dynamically aggregated when the report is created.
    We also checked that the values are not precomputed at the ALL level for any dimension.
    Any pointers to resolve this?
    Thanks in advance.
    Thanks
    Brijesh

    John, a sure-shot way to simulate the relational aggregation for various users (who have VPD applied on the fact information) is to create an AW for each user. That way, the scoping of dimension hierarchies and/or facts occurs on the relational source views, and each user only sees the summation of the values visible in their AW. You can use (local) views in each user's schema on the appropriate AW and make the application access seamless across all the user schemas. Such a solution may be a bit redundant (leaf data present in multiple AWs increases load time) but should work in all environments, since it does not involve tweaking internal objects via custom OLAP DML.
    +++++++++++++
    Regarding implementing the approach of a single AW servicing multiple users while allowing individual users to see only their own data (for both base and summary data): this can be done in 10gR2. We have used this approach with a 10gR2 AW, based on suggestions from people who were in the know :)
    Please note the disclaimers/caveats..
    * Works for 10gR2, with no foreseeable way to port this onto 11g, where OLAP DML object manipulation is prevented by the lock definition (lockdfn) keyword.
    * Custom code needs to be run at startup, preferably in the PERMIT_READ program alone, since that way any changes made by the restricted user(s) do not get saved/committed. This is the manner in which a SQL query on the AW using VPD (say) would work.
    * The OLAP DML code is very dependent on the nature of the cube structure and the stored-level settings.
    * This approach provides a neat and nifty solution in the context of a PoC/demo, but it performs a (possibly exhaustive) cleanup of stored information during the startup program for each user session. And since this happens in the context of a read-only session, it happens every time, for all user sessions. So be sure to scope out the extent of cleanup required at startup if you want to make this a comprehensive solution.
    *********************** Program pseudo code begin ***********************
    " Find out current user details (username, group etc.) using sysinfo
    limit all dimensions to members at stored levels
    limit dim1 to members that user does *not* have access to.
    NOTE: This can lead to perf issues if the PROD dimension has thousands or millions of products and the current user has access to only a few of them (2-3, say). We will have to reset the stored information for the majority of products. This is undoing the effects of the data load (and stored summary information) dynamically at runtime while the users request a report/query.
    limit dim1 add descendants using dim1_parentrel
    limit dim1 add ancestors using dim1_parentrel
    limit dim1 keep members at stored levels alone... use dim1_levelrel appropriately.
    same for dim2 if reqd.
    "If we want to see runtime summation for stores in North America (only visible info) but see the true or actual data for Europe (say).. then we need to clean up the stored information for stores in North America that the current user does not have access to.
    Scenario I: If Cube is uncompressed with a global composite.. only 1 composite for cube
    set cube1_meas1_stored = na across cube1_composite
    set cube1_meas2_stored = na across cube1_composite
    Scenario II: If Cube is uncompressed with multiple composites per measure.. 1 composite per cube measure
    set cube1_meas1_stored = na across cube1_meas1_composite
    set cube1_meas2_stored = na across cube1_meas2_composite
    Scenario III: If Cube is compressed but unpartitioned..
    set cube1_meas1_stored = na ... Note: This can set more cells as null than required. Each cell in status (including cells which were combinations without data and did not physically exist in any composite) get created as na. No harm done but more work than required. The composite object may get bloated as a result.
    Scenario IV: If Cube is compressed and partitioned..
    Find all partitions of the cube
    For each partition
    Find the <composite object corr. to the partition> for the cube
    set cube1_meas1_stored = na across <composite object corr. to the partition>
    "Regular Permit Read code
    cns product
    permit read when product.boolean
    *********************** Program pseudo code end ***********************
    The cube in our AW was uncompressed/partitioned (Scenario I).
    It is more complicated if you have multiple stored levels along a dimension (possible for uncompressed cubes) where you apply security at an intermediate level. Ideally, you'll need to reset the values at load/leaf level, overwrite or recalculate the values for members at all higher stored levels based on this change, and then exit the program.
    HTH
    Shankar

  • Cost-based aggregation

    Hi,
    Is it possible to find out how the cube is aggregated when you use cost-based aggregation?
    Cost-based aggregation is giving me reasonable load times, disk usage and query times. But I can't use it, because one of my hierarchies changes rather often, causing the complete cube to be re-aggregated. If I use level-based aggregation I can overcome this problem, but I am having trouble finding the best configuration of levels to aggregate on.
    Regards /Magnus

    Magnus,
    I think you are asking about dynamically aggregating over a hierarchy (or some parts of a hierarchy, like a level or a member).
    AWM does not expose that kind of functionality, but it's there in OLAP DML.
    You can set the levels, or even the parent members, for which the cube data is pre-aggregated. All other levels or parent members will be dynamically aggregated. It's done through PRECOMPUTE.
    Here is some explanation. The example is about doing completely dynamic aggregation over a hierarchy; after that, I mention the other PRECOMPUTE conditions you can use.
    Let's say you want the cube to be dynamically aggregated over a hierarchy at query time (instead of pre-aggregating over that hierarchy). You can set the PrecomputeCondition of the cube by selecting the dimension and setting PrecomputeCondition to NONE. If you describe the AGGMAP for this cube (in the OLAP Worksheet), you will then see PRECOMPUTE(NA) for that dimension. In the case of uncompressed cubes, the AGGMAP may still show PRECOMPUTE(<valueset>), but that valueset will be empty.
    You can also query the ALL_CUBES view to see the PRECOMPUTE settings. For more PRECOMPUTE options, look at the RELATION statement documentation for AGGMAP at http://docs.oracle.com/cd/E11882_01/olap.112/e17122/dml_commands_1006.htm#i1017474
    EXAMPLE:
    begin
      dbms_cube.import_xml(q'!
        <Metadata Version="1.3">
          <Cube Name="BNSGL_ACTV" Owner="BAWOLAP">
            <Organization>
              <AWCubeOrganization PrecomputeCondition="BAWOLAP.PRODUCT NONE"/>
            </Organization>
          </Cube>
        </Metadata> !');
    end;
    In addition to NONE, the other options for PRECOMPUTE are
    (1). ALL
    (2). AUTO
    (3). n%
    (4). levels of dimensions to be precomputed
    (5). a list of one or more parent members to be precomputed. For the rest of the parent members, dynamic aggregation is done at query time.
    (6). According to the documentation, some conditional statements can also be used (although I have not tried them). For example:
    PRECOMPUTE (geography.levelrel 'L3')
    PRECOMPUTE (LIMIT(product complement 'TotalProd'))
    PRECOMPUTE (time NE '2001')
    Note that there may be a bug because of which the dimensions (over which the dynamic aggregation is desired) should be the last dimensions in the aggregation order.
    For your situation, you should look at (4), (5) or (6).

  • Dimension hierarchies in AWM

    Hi, I am using 10g with AWM.
    I have a view that populates a dimension.
    This has:
    YEAR ACCOUNTING_TIME_CODE
    FY2008 FY2008
    FY2008 Q1.2008
    FY2008 Q2.2008
    FY2008 Q3.2008
    FY2008 Q4.2008
    These dimension members form a hierarchy of year and then accounting time code. When I load this into the cube, the cube has problems with it: it seems to get the instances where the year and the accounting time code are the same muddled up. I have tried naming the year column something different from the accounting_time_code, but that's not really acceptable for the end user.
    Does anyone know of any way to get around this issue?


  • BW Partitioning

    Hello everyone. Our BW query response time has been gradually slowing down now that we are getting up to 5 years of data. Our BW administrator is looking at implementing partitioning on a few of the large cubes to improve performance. As you already know, SAP BW uses database partitioning on many standard tables; we have not enabled partitioning on our large custom cubes. SAP documentation suggests this is a straightforward change, and it's configured inside the SAP BW workbench, not in Oracle.
    When the BW admin implements partitioning on the standard tables, will there be any Oracle performance issues I should be aware of?
    Do I need to do any specific tuning to accommodate these changes?
    We are currently running Oracle 9.2.0.8 but will be upgrading to 10.2.0.4 in the New Year.
    Any information would be highly appreciated
    Thanks for your time and have a great weekend.

    Hi Pellegrino,
    There is really only one recommendation for a big data warehouse running on Oracle:
    GO FOR IT!
    It's the divide and conquer approach that makes your warehouse fly...
    (besides improving read performance on queries due to partition pruning, writes can speed up drastically
    if you think of DELETEs going into undo space vs. DROPs or TRUNCATEs of partitions)
    SAP BW handles a lot behind the scenes via RSA1. What's important is to choose the appropriate partitioning criterion
    for the compressed data (for uncompressed data it is fixed on the request ID).
    You have to choose either 0FISCPER or 0CALMONTH, which is determined by the usage of your most-used big queries (i.e. weekly, monthly, quarterly), and calculate the number of necessary partitions by giving a time period from / to.
    Make sure you get a good rows-per-partition ratio by keeping the record count big in each partition and the number of partitions small, by giving a maximal partition number,
    i.e.
    CALMONTH:
    max. partition number = 96 (add 1 for the MAXVALUE partition)
    from 01.2000 to 12.2007 = 8 * 12 + 1 = 97 partitions (one partition per month)
    max. partition number = 32 (add 1 for the MAXVALUE partition)
    from 01.2000 to 12.2007 = 8 * 4 + 1 = 33 partitions (one partition per quarter)
    bye
    yk
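    Once the BW admin has activated partitioning, the result can be verified from the Oracle side; the E fact table name /BIC/EZSALES is an assumption:
    -- One row per partition; high_value shows the 0CALMONTH/0FISCPER boundary.
    SELECT partition_name, high_value, num_rows
    FROM   dba_tab_partitions
    WHERE  table_name = '/BIC/EZSALES'
    ORDER  BY partition_position;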
