Aggregate Storage Backup level 0 data

When exporting level 0 data from aggregate storage through a batch job you can use a MaxL script with "export database [dbs-name] using server report_file [file_name] to data_file [file_name]". But how do I build a report script that exports all level 0 data so that I can read it back with a load rule?

Can anyone give me an example of such a report script? That would be very helpful.

If there is a better way to approach this, please let me know.

Thanks
/Fredrik

An example from the Sample:Basic database:

// This Report Script was generated by the Essbase Query Designer
<SETUP { TabDelimit } { decimal 13 } { IndentGen -5 } <ACCON <SYM <QUOTE <END
<COLUMN("Year")
<ROW("Measures","Product","Market","Scenario")
// Page Members
// Column Members
// Selection rules and output options for dimension: Year
{OUTMBRNAMES} <Link ((<LEV("Year","Lev0,Year")) AND (<IDESC("Year")))
// Row Members
// Selection rules and output options for dimension: Measures
{OUTMBRNAMES} <Link ((<LEV("Measures","Lev0,Measures")) AND (<IDESC("Measures")))
// Selection rules and output options for dimension: Product
{OUTMBRNAMES} <Link ((<LEV("Product","SKU")) AND (<IDESC("Product")))
// Selection rules and output options for dimension: Market
{OUTMBRNAMES} <Link ((<LEV("Market","Lev0,Market")) AND (<IDESC("Market")))
// Selection rules and output options for dimension: Scenario
{OUTMBRNAMES} <Link ((<LEV("Scenario","Lev0,Scenario")) AND (<IDESC("Scenario")))
!
// End of Report

Note that no attempt was made here to eliminate shared member values.
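
For reference, a minimal MaxL sketch of how such a report script could drive a batch export and a later reload. The login, application/database (Sample.Basic), and file names (lev0, lev0.txt, lev0rul, lev0.err) are placeholders for illustration, not values from this thread, and exact local/server file locations depend on your setup:

    /* run the level 0 report script stored on the server and write the result to a data file */
    login admin password on localhost;
    export database Sample.Basic using server report_file 'lev0' to data_file 'lev0.txt';
    /* later, read the exported file back in with a load rule */
    import database Sample.Basic data from data_file 'lev0.txt'
        using server rules_file 'lev0rul' on error write to 'lev0.err';
    logout;

Because the report is tab delimited and writes member names rather than aliases, a straightforward load rule can map the output back into the database.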

Similar Messages

  • Opening and closing stock report at storage location level - date wise

    Dear all
    Is there any standard report to view opening and closing stock at storage location level? (We have MB5B, which resets the entry screen if we enter SLOC details.)
    Thanks
    Sam

    Do I have the option of selecting storage-location-wise opening and closing stock on a particular date or for a period? That is what I asked for. Even after selecting storage location/batch stock and entering all the SLOCs in the selection screen, the report output does not contain any SLOC, and Ctrl+F8 or the display variant does not offer SLOC as a field either.
    Please guide.
    sam

  • Reg: client level data, plant level data, storage location level data

    Hi,
    When we enter data in MM01 we select views. These views are based on client level, plant level, and storage location level. I want to know the difference between the three levels. How do we differentiate them?

    As per our business process there is one company, in which many plants and storage locations are available, and many materials exist in different plants and different storage locations.
    As per the SAP standard structure:
    Company >> Company code 1 & Company code 2 ... >> Plant 1 & Plant 2 & Plant 3 ... >> Storage location 1 & 2 & 3 ...
    Each plant and storage location holds different data.
    Suppose in one plant we do quality testing for a material, while in another plant we do not perform quality testing for that material. In this case the quality data for the same material is different for different plants.
    Client level: data that is the same across all plants, like basic data.
    Plant level: data that is maintained per plant, like purchasing, plant data, quality data, and the accounting view.
    Storage location level: data that is maintained per storage location, like storage location stock data, and can differ between storage locations.
    This is why you can maintain different data at plant level and storage location level.
    Regards,
    Mahesh Wagh

  • Aggregate Storage Backup Restore

    Hi,
    I am trying to restore an ASO cube from the backup tape. I stopped the existing application and replaced the files with the backed-up ones. When I tried to restart the application from EAS,
    it gave me this error:
    "Cannot add file location: file location directory [ess/mrd/essbase/app//<appname>/metadata/] already exists. If this directory is not in use by another application, please remove it and try again"
    Removing the metadata folder does allow me to start the application, but without any data.
    I am very new to ASO cubes.
    Can somebody please help me with the restore process?
    Thanks,
    SR.

    Just wanted to add that the backup was taken from the prod server and the restore server is pre-prod... two different servers.

  • Client level data & Plant level data

    Dear all,
    What exactly are client level data and plant level data?
    In the material master, why does Purchasing appear under both client level data and plant level data?
    Client level data:
    Classification
    Purchasing
    Basic data
    Storage
    Plant level data:
    Accounting
    MRP
    Purchasing
    Costing
    Storage
    Forecasting
    Quality management
    Sales
    Regards,
    Suresh.P

    Hi,
    Client level data comprises the fields that are common to all plants.
    Plant level data comprises the fields that are plant specific.
    In the purchasing view, many fields are plant specific.
    To check this, go to MM03, enter the material, and select only the purchasing view.
    First enter a plant and check the fields; then repeat the same without a plant, and this time you will see only the client-specific fields.
    The UOM is a client level field. It also appears in the MRP view, but because MRP is run per plant it sits in the plant-specific data.
    Hope this clears it up.
    Regards,
    Kunal

  • Loading data using send function in Excel to aggregate storage cube

    Hi there
    Just got version 9.3.1 installed. I can finally load to an aggregate storage database using the Excel Essbase send; however, it is very slow, especially when loading many lines of data. Block storage is much, much faster. Is there any way to speed up loading to an aggregate storage database, or is this an architectural issue and therefore not much can be done?

    As far as I know, it is an architectural issue. Further, I would expect it to slow down even more if you have numerous people writing back simultaneously because, as I understand it, the update process is throttled on the server side so only a single user is actually 'writing' at a time. At least this is better than earlier versions, where other users couldn't even do a read while the database was being loaded; I believe that restriction has been lifted as part of the 'trickle-feed' support (although I haven't tested it).
    Tim Tow
    Applied OLAP, Inc

  • Tablespace level backup using data pump

    Hi,
    I'm using 10.2.0.4 on RHEL 4.
    I have one doubt: can we take a tablespace-level backup using Data Pump?
    I don't want to use it for transportable tablespaces.
    Thanks.

    Yes, you can, but only for the tables in that tablespace.
    Use the TABLESPACES option to export a list of tablespaces; all the tables in those tablespaces will be exported.
    You must have the EXP_FULL_DATABASE role to use tablespace mode.
    Have a look at this,
    http://stanford.edu/dept/itss/docs/oracle/10g/server.101/b10825/dp_export.htm#i1007519
    Thanks
    Edited by: Cj on Dec 12, 2010 11:48 PM
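
    As a rough sketch of tablespace mode, the export might look like this; the user, directory object, tablespace names, and file names are placeholders, not values from this thread:

        expdp system/password DIRECTORY=dpump_dir1 DUMPFILE=ts_users.dmp LOGFILE=ts_users.log TABLESPACES=users,example

    Tablespace mode exports the tables (and their dependent objects) stored in the listed tablespaces, not the datafiles themselves, which is why it is separate from the transportable tablespace feature the poster wants to avoid.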

  • Clear Partial Data in an Essbase Aggregate storage database

    Can anyone let me know how to clear partial data from an aggregate storage database in Essbase v11.1.1.3? We are trying to clear some data in our database and don't want to clear out all the data. I am aware that version 11 Essbase allows a partial clear if we write it using MDX.
    Can you please help by giving us some examples of the same?
    Thanks!

    John, I clearly get the difference between the two. What I am asking is: in the EAS tool itself, for v11.1.1.3, we have the option, by right-clicking on the database, of choosing "Clear" and in turn sub-options like "All Data", "All aggregations" and "Partial Data".
    I want to know more about this option. How will it know which partial data is to be removed, or will it ask us to write some MaxL query for the same?
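
    For what it's worth, the partial clear is exposed in MaxL as a region clear driven by an MDX set expression, which is how you tell it exactly which slice to remove. A minimal sketch with placeholder application, database, and member names rather than anything from this thread:

        /* logically clear all cells for the member Jan (the cleared cells are offset rather than physically removed) */
        alter database ASOsamp.Sample clear data in region '{[Jan]}';
        /* adding the keyword physical removes the cells from storage instead */
        alter database ASOsamp.Sample clear data in region 'Crossjoin({[Jan]}, {[Actual]})' physical;

    The EAS "Partial Data" option prompts for the same kind of MDX region, so either route ends up specifying the slice in MDX.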

  • Aggregate storage data export failed - Ver 9.3.1

    Hi everyone,
    We have two production servers: Server1 (App/DB/Shared Services server) and Server2 (Analytics). I am trying to automate a couple of our cubes using Windows batch scripting and MaxL. I can export the data within EAS successfully, but when I use the following command in a MaxL editor, it gives the following error.
    Here's the MaxL I used, which I am pretty sure is correct.
    Failed to open file [S:\Hyperion\AdminServices\deployments\Tomcat\5.0.28\temp\eas62248.tmp]: a system file error occurred. Please see application log for details
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270083)
    A system error occurred with error number [3]: [The system cannot find the path specified.]
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270042)
    Aggregate storage data export failed
    Does anyone have any clue why I am getting this error?
    Thanks in advance!
    Regards
    FG

    This error was due to incorrect SSL settings for our shared services.
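
    Though the MaxL statement itself wasn't included in the post, an aggregate storage data export is typically along these lines; the application and database names are taken from the error log above, while the path is an illustrative placeholder:

        /* ASO exports contain level 0 data only; the path below is hypothetical */
        export database MyAPP.Finance data to data_file 'S:\exports\finance_lev0.txt';

    In this particular case the root cause turned out to be the Shared Services SSL settings mentioned above.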

  • Process of Clearing WM Data at Storage Unit Level

    Dear All,
    I have an issue where our client has decided to temporarily halt the WM function, which was implemented over the past three weeks. Now I have stock in the new warehouse storage locations in both IM and WM. Presently the users are executing the IM functions but not the WM (storage unit) part. They do not want to execute the storage-unit-level process due to practical concerns. How do I go about clearing the WM part of the stock, since the storage-unit-level information is wrong at present?
    I might have asked this question before; I would appreciate it if you could provide a detailed procedure on how to clear the WM data and enable the users to proceed with the IM process as before.
    Thanks & Regards
    Shabeen Buhary

    Dear Frenchy,
    Thanks for your response. That is exactly what I am doing at the moment: transferring the stock at roll level to storage type 999.
    I am also looking into the possibility of using LI21. But my concern is that at the present moment I do not want to maintain any stock at storage unit level (since the users have stopped using the WM module for technical and practical reasons). Storage-unit-level info is not important for them; they only want to execute the IM part. Is there a way I could just remove the storage-unit-level stock without affecting the IM portion (since the IM portion is correct)? Is there any standard transaction or movement type to execute this?
    An alternative option, I presume, is to transfer the stock from the warehouse storage location to a non-warehouse storage location.
    Regards
    Shabeen Buhary

  • Long Term Storage ideas for Data in an Oracle Database

    We have data that we no longer need access to and want to archive it on tapes for long term storage.
    We want to save the data using Data Pump or RMAN from time to time and store the exports/backups on tapes. The issue we don't want to face in the future is: if Oracle deprecates the methodology we used to save the data in 2012, we no longer have a way to retrieve the data into a database 10 years from now.
    This may sound crazy, but hear me out. Before Data Pump existed, everyone was using vanilla exp to export the data. Now we have Data Pump as exp's upgrade, and we have RMAN too. What if in 2012 we took an export using Data Pump, or a backup using RMAN, and saved it off on a tape? Ten years go by, and in 2022 management wants to look at the 2012 data. Let's say by this time Oracle's methodology has changed and they have a much more efficient/robust export/backup method. One way for us to keep the old backups current is to import the data back before the method we used in 2012 is deprecated, and then export/back up the data using the new method, keeping the exports/tapes in sync with the then-current technology.
    My question is: what methods of saving large amounts of data do you recommend/use to save data to tape so that you don't have to worry about changing times?
    An idea I heard is to save the data as INSERT statements. That would work for most of the data, except for BLOBs.
    Any help would be appreciated.
    Thanks in advance.

    >
    Won't an intelligent compression algorithm take care of a lot of the overhead for that?
    >
    That's certainly a valid point to make and one that the OP can test. And because the duplicated metadata is identical and in the same physical row location there may be compression algorithms that can reduce it to just one set.
    There were two points I was trying to make.
    1. Don't just save the data - save the metadata also.
    2. The metadata for each row will be the same so if you use an INSERT format that duplicates it for every row you are potentially wasting a lot of space.
    Your other point was probably more important than these or anything previously mentioned.
    >
    perform restores of the entire database every couple of years or so to make sure that your
    database isn't degrading
    >
    Regardless of the format and media used to backup the data it is important to verify the integrity of the backup periodically.
    There are a lot of ways to do this. One way is a simple MD5 (or other) checksum of data at some level of granularity and then periodically read the data and confirm you still get the same checksum.
    In my opinion the more generic the format that the data is saved in the better the odds that you can recover the data later no matter how technology may have changed.
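
    To make the periodic verification concrete, a small sketch of recording a checksum alongside a dump before it goes to tape and re-checking it after a restore; the file name is a placeholder:

        # record a checksum next to the export before it goes to tape
        md5sum archive_2012.dmp > archive_2012.dmp.md5
        # after restoring from tape, confirm the bits are unchanged
        md5sum -c archive_2012.dmp.md5

    A checksum only proves the bits are intact; the test restores into a current database, as suggested above, are what prove the format is still readable.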

  • SSPROCROWLIMIT and Aggregate Storage

    I have been experimenting with detail-level data in an Aggregate Storage style cube. I will have 2 million members in one of my dimensions; for testing I have 514,683. If I try to use the Spreadsheet Add-in to retrieve from my cube, I get the error "Maximum number of rows processed [250000] exceeded [514683]". This indicates that my SSPROCROWLIMIT is too low. Unfortunately, the upper limit for SSPROCROWLIMIT is below my needs. What good is this new storage model if I can't retrieve data from the cube? Any plans to remove the limit?
    Craig Wahlmeier

    We are using ASO for a very large (20 dimensions) database. The data compression and performance have been very impressive. The ASO cubes are much easier to build, but have far fewer options: no calc scripts, and formulas are limited in that they can be used only on small dimensions and only on one dimension. The other big difference is that you need to reload and calc your data every time you change metadata. The great thing for me about 7.1 is that it gives you options, particularly when dealing with very large, sparse, non-finance cubes. If your client is talking about making calcs faster, ASO is only going to help if it is an aggregation calc.
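
    For reference, SSPROCROWLIMIT is an essbase.cfg setting, so the usual workaround is to raise it toward its documented ceiling and restart the server; a one-line sketch (the 500000 ceiling is from memory, so check the configuration reference for your release):

        SSPROCROWLIMIT 500000

    As the original post notes, even that ceiling sits below the 514,683 rows in this test, so the limit itself remains the real constraint.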

  • Aggregate storage cache warning during buffer commit

    Summary
    Having followed the documentation to set the ASO storage cache size I still get a warning during buffer load commit that says it should be increased.
    Storage Cache Setting
    The documentation says:
    A 32 MB cache setting supports a database with approximately 2 GB of input-level data. If the input-level data size is greater than 2 GB by some factor, the aggregate storage cache can be increased by the square root of the factor. For example, if the input-level data size is 3 GB (2 GB * 1.5), multiply the aggregate storage cache size of 32 MB by the square root of 1.5, and set the aggregate cache size to the result: 39.04 MB.
    My database has 127,643,648 KB of base data, which is 60.8x bigger than 2 GB. The square root of this factor is 7.8, so my optimal cache size should be (7.8 * 32 MB) = 250 MB. My cache size is in fact 256 MB, because I have to set it before the data load based on estimates.
    Data Load
    The initial data load is done in 3 MaxL sessions into 3 buffers. The final import output then looks like this:
    MAXL> import database "4572_a"."agg" data from load_buffer with buffer_id 1, 2, 3;
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1003058 - Data load buffer commit elapsed time : [5131.49] seconds.
    OK/INFO - 1241113 - Database import completed ['4572_a'.'agg'].
    MAXL>
    The Question
    Can anybody tell me why the final import is recommending increasing the storage cache when it is already slightly larger than the value specified in the documentation?
    Versions
    Essbase Release 11.1.2 (ESB11.1.2.1.102B147)
    Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 64 bit

    My understanding is that storage cache setting calculation you quoted is based on the cache requirements for retrieval. This recommendation has remained unchanged since ASO was first introduced in v7 (?) and was certainly done before the advent of parallel loading.
    I think that the ASO cache is used during the combination of the buffers. As a result, depending on how ASO works internally, you would get this warning unless your cache was:
    1. equal to the final load size of the database,
    2. OR, if the cache is only used when data exists for the same "Sparse" combination of dimensions in more than one buffer, the required size would be a function of the number of cross-buffer combinations required,
    3. OR the cache is needed only when compression dimension member groups cross buffers.
    By "Sparse" dimension I mean the non-compressed dimensions.
    Therefore you might try some experiments. To test each case above:
    1. Forget it; you will get this message unless you have a cache large enough for the final data set size on disk.
    2. Sort your data so that no dimensional combination exists in more than one buffer, i.e. sort by all non-compression dimensions and then by the compression dimension.
    3. Often your compression dimension is time based (EVEN THOUGH THIS IS VERY SUB-OPTIMAL). If so you could sort the data by the compression dimension only and break the files so that the first 16 compression members (as seen in the outline) are in buffer 1, the next 16 in buffer 2 and the next in buffer 3
    Also if your machine is IO bound (as most are during a load of this size) and your cpu is not - try using os level compression on your input files - it could speed things up greatly.
    Finally regarding my comments on time based compression dimension - you should consider building a stored dimension for this along the lines of what I have proposed in some posts on network54 (search for DanP on network54.com/forum/58296 - I would give you a link but it is down now).
    OR better yet in the forthcoming book (of which Robb is a co-author) Developing Essbase Applications: Advanced Techniques for Finance and IT Professionals http://www.amazon.com/Developing-Essbase-Applications-Techniques-Professionals/dp/1466553308/ref=sr_1_1?ie=UTF8&qid=1335973291&sr=8-1
    I really hope you will try the suggestions above and post your results.
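
    For completeness, the aggregate storage cache is sized per application in MaxL before the load; a minimal sketch using the application name and 256 MB figure from this thread (as far as I recall the new value is pending until the application restarts, so verify that detail against the MaxL reference for your release):

        alter application "4572_a" set cache_size 256MB;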

  • Stock valuation down to Storage location level

    Dear all,
    System setup for stock valuation is at plant level, and I understand that is how the data is stored in tables MBEW and MBEWH, down to plant level only.
    Even though that is the system standard, is it possible to get the stock valuation down to storage location level?
    For example, in MB5B there are opening stock, closing stock and the material movement postings. I need to get opening and closing stock for each storage location.
    Could you please comment on whether there are tables that store these data? Kindly comment.

    Hi Afshad Irani,
    Thanks for your reply. I understand from the previous post that stock valuation for a storage location can be done via table MARDH or report MC.9, but these tools work on a period/month basis.
    In my case, I need to report storage location stock for both (1) last month, 06.2010, and (2) yesterday's stock, 06.07.2010. I can use the MARDH table to report (1) last month's stock for each storage location, while for (2) yesterday's stock, table MARDH doesn't help.
    Could you please advise further?

  • Stock report in BW (valuated stock and Storage Location level)

    Dear All,
                Regards. We have a situation here:
    Case 1: R3 (MB5B) (both "Storage loc" and "Valuated stock" show the same stock quantity)
    1). The BW stock report shows the correct values for "Receipt Qty" and "Issue Qty", with the correct "Quantity Total Stock" as well.
    Case 2: R3 (MB5B) (stock is different between "Storage Loc" and "Valuated Stock")
    1). The BW stock report shows "Quantity Total Stock" according to the VALUATED STOCK level (MB5B), but "Receipt Qty" and "Issue Qty" according to the STORAGE LOCATION level (MB5B).
    We followed the "How to Handle Inventory" doc with the proper sequence of loading of InfoSources and compression. The data is sitting fine at the cube level.
    It would be great if someone could throw some light on this issue, on how the query shows a combo of both valuated stock and storage location stock. We have been held up with this issue for quite some time and have had a look at OSS Note 589024.
    Many thanks
    Arun

    Hi,
    In Inventory we have two quantity fields provided to satisfy this requirement:
    Valuated stock qty - 0VALSTCKQTY
    Storage location qty - 0TOTALSTCK
    You can make use of them based on your requirement. Hope this helps.
    Thanks,
    Arun
