Using Aggregate storage cubes to aggregate and populate DWHSE

Has anyone ever used an ASO cube to aggregate data and put it into a data warehouse? We are exploring the option of using Essbase ASO cubes to aggregate data from a fact table into summary form, then loading the required data via a data export, in whichever version (9.3.1 or 9.5) supports both ASO and data export.
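For what it's worth, the DATAEXPORT calculation command is a block storage (BSO) feature; for ASO the usual scripted route is the MaxL export statement, which supports level-0 exports from 9.3.1 onward. A minimal sketch (the application, database, and file path below are hypothetical, and the exact options vary by release):

```sql
/* Hypothetical app/db names and output path.
   Exports level-0 data from an ASO cube in column format,
   so the file can be bulk-loaded into the warehouse. */
export database AsoApp.AsoDb level0 data in columns
    to data_file 'C:\exports\aso_level0.txt';
```

The resulting flat file can then be picked up by the warehouse ETL like any other delimited feed.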

Hi Whiterook72,
Heterogeneous data sources -> ETL -> warehouse -> Essbase/OLAP -> MIS and analyses
Conventionally, in an enterprise, Essbase (or another OLAP engine) sits after the warehouse. A level of aggregation happens in the warehouse, and for the multidimensional view we push the data into OLAP.
Contrariwise, in your case:
Heterogeneous data sources -> ETL -> Essbase/OLAP -> warehouse -> MIS and analyses
you want to bring Essbase in before you load the data into the warehouse. This would make Essbase feed from the operational data sources, and that is where we have a little problem.
For example, for a bank, operational data holds information at the customer level, i.e. you have each individual customer's name and their respective information such as address, transaction details, and so on.
So, to feed this information into an Essbase cube (with the objective of aggregation), you would have to carry millions of members (i.e. all customers) in your outline,
which, as I see it, is not what Essbase is for.
Just my thoughts; I hope they help you.
Sandeep Reddy Enti
HCC

Similar Messages

  • Loading data using send function in Excel to aggregate storage cube

    Hi there
Just got version 9.3.1 installed. I can finally load to an aggregate storage database using the Excel Essbase send; however, it is very slow, especially when loading many lines of data. Block storage is much, much faster. Is there any way to speed up loading to an aggregate storage database? Or is this an architectural issue, and therefore not much can be done?

As far as I know, it is an architectural issue. Further, I would expect it to slow down even more if you have numerous people writing back simultaneously because, as I understand it, they throttle the update process on the server side so that only a single user is actually 'writing' at a time. At least this is better than earlier versions, where other users couldn't even do a read while the database was being loaded; I believe that restriction has been lifted as part of the 'trickle-feed' support (although I haven't tested it).
    Tim Tow
    Applied OLAP, Inc

  • How do I use one set of XML data and populate multiple layouts with it?

I have an XML document with root element BOOK.  Inside BOOK are PAGEs.  (I also have each PAGE as its own XML file, so that I can update each PAGE individually.)
    I have a print layout of 6"x9" and I want to create an iPad layout (H & V).
    When I create the V layout, I have the 6x9 master set to "scale" in liquid layouts and it works great.  But in my XML structure, everything gets duplicated.  Now there are two sets of data.  When I create the H layout, now I have three sets of data.
    So then I tried deleting everything in my iPad masters and setting them to be based on my print masters, but the content isn't scaling.
    Let me know if you have questions, because I'm not sure where I might need to clarify.  Thanks in advance!


  • To use same storage area.(deletion and insertion)

Suppose a table has 5 records and I used the TRUNCATE command. Then I want to insert a record that should reuse the same storage area. What steps should be followed to reuse the same storage area?

If you use a tool like TOAD, when truncating a table it will ask you whether to reuse or drop the storage.
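For context, the same choice can be made directly in Oracle SQL without a tool; the table name here is hypothetical:

```sql
-- Keeps the table's allocated extents so subsequent inserts
-- reuse the same storage area
TRUNCATE TABLE my_table REUSE STORAGE;

-- Default behaviour: deallocates all space above MINEXTENTS
TRUNCATE TABLE my_table DROP STORAGE;
```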

Can I use an AirPort Time Capsule to store movies and play them with Apple TV?

    Hi
Can I use AirPort Time Capsule storage to store movies and play them using Apple TV?

Hi Alley_Cat,
Thanks. If I understand it correctly, Apple TV is not able to browse content/storage directly from the Time Capsule?
Thank you.

  • SSPROCROWLIMIT and Aggregate Storage

I have been experimenting with detail-level data in an Aggregate Storage style cube. I will have 2 million members in one of my dimensions; for testing I have 514,683. If I try to use the spreadsheet add-in to retrieve from my cube, I get the error "Maximum number of rows processed [250000] exceeded [514683]". This indicates that my SSPROCROWLIMIT is too low. Unfortunately, the upper limit for SSPROCROWLIMIT is below my needs. What good is this new storage model if I can't retrieve data from the cube? Any plans to remove the limit?
Craig Wahlmeier

We are using ASO for a very large (20 dimensions) database. The data compression and performance have been very impressive. The ASO cubes are much easier to build, but have far fewer options: no calc scripts, and formulas are limited in that they can only be used on small dimensions, and only on one dimension. The other big difference is that you need to reload and calculate your data every time you change metadata. The great thing for me about 7.1 is that it gives you options, particularly when dealing with very large, sparse, non-finance cubes. If your client is talking about making calcs faster, ASO is only going to work if it is an aggregation calc.

  • Load and Unload Alias Table - Aggregate Storage

Hi everyone,
Here I am again with another question about aggregate storage...
There is no "load" or "unload" alias table listed as a parameter for "alter database" in the syntax guidelines for aggregate storage (see http://dev.hyperion.com/techdocs/eas/eas_712/easdocs/techref/maxl/ddl/aso/altdb_as.htm).
Is this not a valid parameter for aggregate storage? If not, how do you load and unload alias tables if you're running a batch script in MaxL and you need the alias table update to be automated?
Thanks in advance for your help.

Hi anaguiu2,
I have the same problem now. Did you find a solution for loading and unloading an alias table with aggregate storage? Could you share the solution you used, if you have one?
Thanks, Manon

  • Create and Populate a Hyperion Planning Cube using Hyperion Data Integratio

    Friends,
I am new to Essbase and have worked extensively with Informatica. Hyperion DIM (the OEM version of Informatica) has been chosen to create and populate a Hyperion Planning system (with an Essbase cube in the backend).
    I am using Hyperion DIM 9.3.
Can someone let me know (or share a document) how I can do the following:
1) Create a Planning application with an Essbase cube in the backend using Hyperion Data Integration Management.
2) Populate the Essbase outline, and the actuals Essbase cube with data, using DIM.
    Thanks a lot for all help.

    Hi,
    You cannot create planning applications using DIM.
    To load metadata have a look at :- http://www.oracle.com/technology/obe/hyp_fp/DIM_Planning/OBE_Dim_Planning.html
You can refresh the Planning database in DIM as follows. To enable the Refresh Database property for a session:
    In Workflow Manager, right-click the session and select Edit.
    Click the Mapping tab.
    Select a Planning target.
    Check the Refresh Database box.
    Ok?
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Aggregate storage data export failed - Ver 9.3.1

    Hi everyone,
We have two production servers: Server1 (App/DB/Shared Services server) and Server2 (Analytics). I am trying to automate a couple of our cubes using Windows batch scripting and MaxL. I can export the data within EAS successfully, but when I use the export command in a MaxL editor, it gives the following error.
Here's the MaxL I used, which I am pretty sure is correct.
    Failed to open file [S:\Hyperion\AdminServices\deployments\Tomcat\5.0.28\temp\eas62248.tmp]: a system file error occurred. Please see application log for details
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270083)
    A system error occurred with error number [3]: [The system cannot find the path specified.]
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270042)
    Aggregate storage data export failed
Does anyone have any clue as to why I am getting this error?
    Thnx in advance!
    Regards
    FG

    This error was due to incorrect SSL settings for our shared services.

  • Questions regarding aggregates on cubes

    Can someone please answer the following questions.
1. How do I check whether someone is rebuilding aggregates on a cube?
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
3. What does it mean when someone switches off an aggregate? Basically, what is the difference (conceptually and in time consumption) between:
                            A. activating an aggregate?
                            B. switching off/on an aggregate?
                            C. rebuilding an aggregate?
4. When a user complains that a query is running slow, do we build an aggregate based on the characteristics in rows and free characteristics in that query, or is there anything else we need to include?
5. Do the database statistics in the 'MANAGE' tab of a cube only show statistics, or do they do anything to improve the load/query performance on the cube?
    Regards,
    Srinivas.

1. How do I check whether someone is rebuilding aggregates on a cube?
If your aggregate status is red and you are filling up the aggregate, it is an initial fill of the aggregate; filling up means loading the data from the cube into the aggregate in full.
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    Rebuilding of an aggregate is to reload the data into the aggregate from the cube once again.
3. What does it mean when someone switches off an aggregate? Basically, what is the difference (conceptually and in time consumption) between:
    A. activating an aggregate?
This means recreating the data structures for the aggregate, i.e. dropping the data and reloading it.
    B. switching off/on an aggregate?
Switching off an aggregate means that it will not be used by the OLAP processor, but the aggregate still gets rolled up. Rollup refers to loading changed data from the cube into the aggregate; this is done based on the requests that have not yet been rolled up into the aggregate.
    C. rebuilding an aggregate?
    Reloading data into the aggregate
4. When a user complains that a query is running slow, do we build an aggregate based on the characteristics in rows and free characteristics in that query, or is there anything else we need to include?
Run the query in RSRT, do an SQL view of the query, check the characteristics that are used in the query, and then include the same in your aggregate.
5. Do the database statistics in the 'MANAGE' tab of a cube only show statistics, or do they do anything to improve the load/query performance on the cube?
Updated stats improve the execution plans on the database. Making sure that stats are up to date leads to better execution plans and hence possibly better performance, but it cannot be taken for granted that refreshing stats will improve query performance.

  • Aggregate Storage Backup level 0 data

When exporting level 0 data from aggregate storage through a batch job you can use a MaxL script with "export database [dbs-name] using server report_file [file_name] to data_file [file_name]". But how do I build a report script that exports all level 0 data so that I can read it back with a load rule?
Can anyone give me an example of such a report script? That would be very helpful.
If there is a better way to approach this matter, please let me know.
Thanks
/Fredrik

An example from the Sample:Basic database:

// This Report Script was generated by the Essbase Query Designer
<SETUP { TabDelimit } { decimal 13 } { IndentGen -5 } <ACCON <SYM <QUOTE <END
<COLUMN("Year")
<ROW("Measures","Product","Market","Scenario")
// Page Members
// Column Members
// Selection rules and output options for dimension: Year
{OUTMBRNAMES} <Link ((<LEV("Year","Lev0,Year")) AND (<IDESC("Year")))
// Row Members
// Selection rules and output options for dimension: Measures
{OUTMBRNAMES} <Link ((<LEV("Measures","Lev0,Measures")) AND (<IDESC("Measures")))
// Selection rules and output options for dimension: Product
{OUTMBRNAMES} <Link ((<LEV("Product","SKU")) AND (<IDESC("Product")))
// Selection rules and output options for dimension: Market
{OUTMBRNAMES} <Link ((<LEV("Market","Lev0,Market")) AND (<IDESC("Market")))
// Selection rules and output options for dimension: Scenario
{OUTMBRNAMES} <Link ((<LEV("Scenario","Lev0,Scenario")) AND (<IDESC("Scenario")))
!
// End of Report

Note that no attempt was made here to eliminate shared member values.

  • Clear Partial Data in an Essbase Aggregate storage database

Can anyone let me know how to clear partial data from an aggregate storage database in Essbase v11.1.1.3? We are trying to clear some data in our database and don't want to clear out all the data. I am aware that Essbase version 11 allows a partial clear if we write it using MDX commands.
Can you please help by giving us some examples of the same?
    Thanks!

John, I clearly get the difference between the two. What I am asking is: in the EAS tool itself for v11.1.1.3, right-clicking on the DB gives the option "Clear" and in turn sub-options like "All Data", "All aggregations" and "Partial Data".
I want to know more about this option. How will it know which partial data to remove, or will it ask us to write some MaxL query for the same?
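The "Partial Data" option corresponds to the MaxL partial clear, where an MDX set expression names the slice to remove. A minimal sketch (the application, database, and member names are hypothetical):

```sql
/* Hypothetical app/db; clears only the region named by the MDX set.
   Without 'physical' the cells are logically removed (offsetting values);
   with 'physical' the input cells themselves are deleted. */
alter database MyApp.MyDb clear data in region '{[Budget]}' physical;
```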

  • YTD Performance in Aggregate Storage

    Has anyone had any problems with performance of YTD calculations in Aggregate storage? Any solutions?

Did you ever get this resolved? We are running into the same problem. We have an ASO db which requires YTD calcs and TB Last. We've tried two separate approaches (CASE and IF statements) on the YTD, Year and Qtr members (e.g. MarYTD). Both worked, but now we are concerned about performance. Any suggestions?
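As a point of comparison to the CASE/IF approaches mentioned above, an ASO YTD member can also be expressed as a straight MDX sum over a member range, which avoids per-cell conditional evaluation. A sketch, assuming a hypothetical outline with a MarYTD member and Jan..Mar level-0 time members:

```sql
/* Hypothetical member formula attached to MarYTD in the outline:
   sums the Jan-through-Mar range instead of branching with CASE/IF. */
Sum (MemberRange ([Jan], [Mar]))
```

Whether this outperforms the CASE form depends on the outline and release, so it is worth benchmarking both.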

  • Derived Cells in Aggregate storage

The aggregate storage loads obviously ignore the derived cells. Is there a way to get these ignored records diverted to a log or error file, so we can view and correct the data in the source system? Has anybody tried any methods for this? Any help would be much appreciated.
-Jnt


  • Aggregates in CUBES

    Hi,
I'm trying to create aggregates on cubes. I right-click and go under "Maintain aggregates". I choose the option for the system to generate them for me; when I do that, a window pops up, "SPECIFY STATISTICS DATA EVALUATION". What dates am I supposed to put in there? For "FROM" I'm putting today's date, and for "TO" do I put 12/31/9999? What is this for?
Also, what can I do to improve a DSO's performance?
    Thanks

Hi,
Do you have secondary indexes on your ODS?
As mentioned above, that will be the best way.
I think you want to improve DSO performance in order to improve query performance. If so, a good way to proceed would be to base your reporting on InfoCubes, or on MultiProviders built on those InfoCubes.
But this will be a bit of development work, and you will have to move the queries from the ODS to cubes, deal with workbooks, etc.
Then you can also create aggregates, because you cannot create aggregates on an ODS.
Check OSS Note 444287.
Using an ODS is a performance hit in terms of activation time and report execution time (tabular reporting). It is better to load the data from these ODSs into individual cubes, then create a MultiProvider over them and report from the MultiProvider, since reporting from a multidimensional structure is faster than flat-table reporting.
Also check these links:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
Regards,
Debjani
    Edited by: Debjani  Mukherjee on Sep 26, 2008 8:07 AM
