Essbase exports

Is it better to export in column format or without columns for data backups?

It depends on your needs. You can export all data in native (non-column) format and load it back without a rules file, or you can export a smaller slice (for example via a report script or calculation script, or in column format) and then load it back through a rules file.
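
As an illustration, both export styles can be scripted and run unattended. The sketch below is a minimal example only; the application name Sample.Basic, the admin login, the file names, and the assumption that the MaxL shell (essmsh) is on the PATH are placeholders rather than details from the question.

import subprocess
import tempfile

# MaxL statements for the two export styles; names and credentials are placeholders.
MAXL = """
login admin password on localhost;

/* native (non-column) export of all data: can be reloaded without a rules file */
export database Sample.Basic all data to data_file 'backup_native.txt';

/* column-format level-0 export: load it back through a rules file */
export database Sample.Basic level0 data in columns to data_file 'backup_columns.txt';

logout;
exit;
"""

# Write the statements to a temporary script and run it with the MaxL shell.
with tempfile.NamedTemporaryFile("w", suffix=".mxl", delete=False) as script:
    script.write(MAXL)

subprocess.run(["essmsh", script.name], check=True)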

Similar Messages

  • ES11X-G4-H essbase export adapter issue

    Hello,
    I have this problem with the Essbase export adapter for FDM:
    When I perform the export step everything seems to be fine, but /APPS/myFdmApp/myFile.dat doesn't contain records that I can find both on the web interface and with a query on Oracle FDM's system views "VDATAFACT" or "VDATA" (on the repository database).
    No errors or rejected-record info are shown in either the FDM or the Essbase (system and app) logs.
    This happens with a very large input file (300 MB). Does anyone have any ideas about what's happening?
    Thank you in advance,
    Daniele
    FDM: v11.1.2.1
    EsbAdapter: ES11X-G4-H
    S.O.: Win2008 SE R2 (x64)
    RDBMS: Oracle 11.2.0.1.0 (on linux)

    I've opened a service request. Here is the last message I got from them. Please note that this SR has lasted over 2 months.
    What they basically say is: it doesn't work, but FDM is not designed for large amounts of data, so we are not going to fix it.
    What the maximum file size handled by FDM is remains unknown....
    Daniele
    Hi Daniele,
    Good Morning!
    I tried to reach you on the number mentioned in the SR. We discussed this issue with the product specialist and they have suggested not to use a 200 MB file, because it is too big for FDM to handle. Please try to split the file and load it; FDM is not an ETL tool meant to handle such a big file. Please let me know if you want anything else on this.
    With Regards,
    xxxxx.
    Hello xxxxx,
    Could you please tell me the maximum file size handled by FDM? I'd like to have some documentation on that as well.
    regards,
    Daniele
    Hi Daniele,
    There is no limitation as such; FDM is simply not designed to load millions of records. It's an end-user financial tool, not designed to be used as an ETL tool, so there is no restriction per se, but it's not designed to be used in that manner.
    With Regards,
    xxxxx

  • Essbase Export with CRLF

    I need to convert an Essbase export file to contain CRLF so a VB program can read it. The VB program runs in an automated batch process, so I need this to happen automatically. Does anyone know of a tool that can do this?

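    One way to do this automatically in the batch is a small external conversion step before the VB program runs. The sketch below is a minimal example only; the file names are placeholders and the export is assumed to be plain text.

    # Rewrite the export with CRLF line endings; input/output paths are placeholders.
    in_path = "essbase_export.txt"
    out_path = "essbase_export_crlf.txt"

    # Reading in text mode normalizes any line ending to "\n"; writing with
    # newline="\r\n" turns each "\n" back into CRLF.
    with open(in_path, "r") as src, open(out_path, "w", newline="\r\n") as dst:
        for line in src:
            dst.write(line)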

  • Essbase export with a "string" appended with all account dimension members

    Hi everyone,
    Can anyone please suggest how I can do my Essbase export with a string appended to all Account dimension members?
    I have tried several options but none of them seems to work. I tried it in report scripts using the RENAME function, but that works for only one member at a time.
    Please suggest!
    Thanks

    Hi,
    If you're looking for the ability to append different strings, then you should probably be looking at an external process.
    If you're looking to append the same string, I can think of one way: make your report script fixed-width (that way you always know which character in the line you're changing) and then use the MASK command (you can find it in the Tech Ref under Report Commands).
    Robert
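
    If the external-process route is the one you need, a small post-processing script can do it. The sketch below is a minimal example and assumes a tab-delimited column-format export with the Account member in the first field; the delimiter, column position, suffix, and file names are assumptions, not details from the thread.

    SUFFIX = "_ACT"        # the string to append; placeholder
    ACCOUNT_COL = 0        # position of the Account member in each row; adjust to your layout

    with open("export_columns.txt") as src, open("export_tagged.txt", "w") as dst:
        for line in src:
            fields = line.rstrip("\n").split("\t")
            # Append the suffix to the Account member, leaving the rest of the row untouched.
            if len(fields) > ACCOUNT_COL and fields[ACCOUNT_COL]:
                fields[ACCOUNT_COL] += SUFFIX
            dst.write("\t".join(fields) + "\n")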

  • Essbase Export outline issue in 11.1.2

    Hi Essbase folks,
    I'm working in Essbase 11.1.2.1 and am trying to export an outline using a MaxL command.
    When I execute the MaxL, I see "statement executed successfully" for all the statements, but I do not see any output either on the server or on my local machine in the specified path.
    Here is my code:
    login user pwd on server;
    alter system load application 'Demo';
    alter application 'Demo' load database 'Demo_DB';
    export outline Demo.Demo_DB all dimensions to xml_file "D:\Demoexport.xml";
    logout;
    I've also tried to export one dimension, but that also did not give any output (ran successfully though)
    export outline Demo.Demo_DB list dimensions {"Scenario"} tree to xml_file "D:\Demoexport.xml";
    Can you please let me know if there is something that I'm missing in the code?

    These work for me
    export outline sample.basic all dimensions to xml_file "D:\sample.xml";
    export outline sample.basic list dimensions {"Measures"} tree to xml_file "D:\SampleMeasures.xml";
    Ran from a client machine and it creates both the files on the local client machine.
    Have you tried running it on the server? Are you using a different version of MaxL than the server?
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Essbase export question

    I have the strangest error.
    I am trying to export Level 0 data and import it into our new environment.
    Versions for target and source: 11.1.2.2
    My method of export:
    Right click [database name] --> Export
    Export to file [ name.txt]
    Export option: Level0 data blocks
    Export in column format
    Expected output:
    Header: BegBalance, Jan through Dec, Period
    Actual output:
    Header: "HSP_InputValue" "HSP_InputCurrency" "HSP_Rate_USD" "HSP_Rate_RMB" "HSP_Rate_CNY" "HSP_Rate_PLN" "HSP_Rate_EUR" "HSP_Rate_GBP" "HSP_Rate_MXP"
    Now, I have exported and imported other applications successfully, and all of them include the HSP_Rates dimension.
    Other info if you need it:
    These are EPMA history applications from 11.1.1.3.
    I am creating new BSO applications in our OOD 11.1.2.2 environment and importing the data (yes, I am not migrating the Planning app itself, just migrating the data into this BSO).
    Please let me know what you think.
    I have even tried restructuring the outline, since HSP was the first dimension; I don't know why it was first, but either way I relocated it in Essbase (I did not do it in Planning because the Planning app was deleted long before; only the Essbase app is left).

    997328 wrote:
    Thanks for pointing out the "error" in the subject; I changed it.
    Yes, I have tried it.
    The thing is, in the output file there is no way to determine which member of the HSP_Rates dimension belongs where in the data. That being said, there is no way the load rule will validate either. See below.
    HSP_InputValue     HSP_InputCurrency     HSP_Rate_USD     HSP_Rate_RMB     HSP_Rate_CNY     HSP_Rate_PLN     HSP_Rate_EUR     HSP_Rate_GBP     HSP_Rate_MXP     HSP_Rate_INR     HSP_Rate_THB     HSP_Rates
    Jul     FY13     BA     Working     Local     Stat_Center     230     xxx - CC10     4210C     100           
    Jul     FY13     BA     Working     Local     Stat_Center     230     xxx- CC10     4210M     1000          
    Jul     FY13     BA     Working     Local     Stat_Center     230     xxx- CC10     4250M     -100     
    Jul     FY13     BA     Working     Local     Stat_Center     230     xxx- CC10     4299M     -132          
    Jul     FY13     BA     Working     Local     Stat_Center     230     xxx- CC10     4501M     0          
    That's just the first few lines of the data. But how can it tell? You see what I mean?
    Actually, there is a way to know which value belongs to which member. The line
    HSP_InputValue     HSP_InputCurrency     HSP_Rate_USD     HSP_Rate_RMB     HSP_Rate_CNY     HSP_Rate_PLN     HSP_Rate_EUR     HSP_Rate_GBP     HSP_Rate_MXP     HSP_Rate_INR     HSP_Rate_THB     HSP_Rates
    is a listing of the data values in order. It starts with the first numeric column after the members. From the look of the sample data, these are all HSP_InputValue. Scroll through the file or import it into Excel and look at the columns to see whether any of the other columns have numeric values.
    To be on the safe side, create a dummy file that has the dimension names, like:
    Period Years ???? Version Currency Entity ????? ????? ??????
    followed by all of the HSP values, and use that to build your load rule. That way, if there is a row 20,000 rows down that has more than HSP_InputValue, you won't get an error when trying to load the file.
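
    For illustration, such a dummy header could be generated like this; the dimension names here are placeholders and must be replaced with the real outline order of your application.

    # Build a one-line dummy header: dimension names in outline order, then the HSP columns.
    dims = ["Period", "Years", "Scenario", "Version", "Currency", "Entity", "Account"]  # placeholders
    hsp_cols = ["HSP_InputValue", "HSP_InputCurrency", "HSP_Rate_USD", "HSP_Rate_RMB",
                "HSP_Rate_CNY", "HSP_Rate_PLN", "HSP_Rate_EUR", "HSP_Rate_GBP",
                "HSP_Rate_MXP", "HSP_Rate_INR", "HSP_Rate_THB", "HSP_Rates"]

    with open("dummy_header.txt", "w") as f:
        f.write("\t".join(dims + hsp_cols) + "\n")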

  • Add BOM to Essbase exported file?

    We've got a MaxL script that exports a Unicode-enabled database to a text file. By default that file is UTF-8 encoded, however it's "Unix" UTF-8 encoded and therefore has no byte order mark (BOM) at the beginning of the file.
    While this practice may not be wrong or bad, we cannot load the files into Cognos Decision Stream. Because DS does not see a BOM, it does not recognize the file as UTF-8 and chokes on the load.
    Is there a way to automate adding the UTF-8 BOM signature to the exported text files? It appears from the documentation that 'essutf8 -s filename.txt' should work, but it does not - the file is still sans BOM. Suggestions?
    Thank you.
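
    One way to automate this (a sketch only, not from the original thread) is a small step in the batch that prepends the BOM when it is missing; the file name is a placeholder.

    import codecs

    path = "essbase_export_utf8.txt"   # placeholder

    with open(path, "rb") as f:
        data = f.read()

    # Prepend the UTF-8 byte order mark only if the file does not already start with one.
    if not data.startswith(codecs.BOM_UTF8):
        with open(path, "wb") as f:
            f.write(codecs.BOM_UTF8 + data)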

  • Essbase LCM export on non-foundation machine fails with error message EPMLC

    Running an LCM utility command-line export on a non-Foundation machine fails with an error message:
    EPMLCM-30043: Unable to load the plugin class com.hyperion.essbase.lcm.ESBLCMPlugin
    Bear in mind that:
    - other exports, like Planning and Foundation metadata exports, run successfully on this non-Foundation machine
    - running an Essbase export from the Foundation user interface succeeds too.
    What files could be missing, or what other issue could cause this?
    Detlev

  • Essbase Studio: Failed to deploy Essbase cube

    Hi
    I started working with Essbase Studio some time back and am able to deploy a BSO cube successfully using the TBCSample database that comes along with Essbase. Now I wanted to deploy an ASO cube; as no sample database is available, I decided to create one. I extracted ASOSamp to CSV files using ODI, then bulk-inserted the CSV extracts into MSSQL 2003 server, which created 11 tables (Age, Geography, IncomeLevel, Measures, PaymentType, Product, Stores, Time, TransactionType, Year). These tables do not have any keys (primary or foreign), as the source is an Essbase export.
    I then successfully created the ASO cube schema using the newly created sample database in MSSQL and validated the cube schema without any errors.
    Essbase property settings:
    The Measures hierarchy is tagged as Dynamic Compression at the dimension level.
    The Time, Product and Year hierarchies are tagged as Multiple Hierarchies Enabled. Year does not have multiple hierarchies, but it has formulas for the Variance and Variance % members. Is there a way to tag Year as a Dynamic hierarchy?
    But when I try to deploy the cube to Essbase I receive the following errors:
    Failed to deploy Essbase cube
    Caused By: Cannot end incremental build. Essbase Error(1060053): Outline has errors
    \\Record #1 - Member name (Time) already used
    + S Time + S
    \\Record #6 - Member name (1st Half) already used
    MTD + S 1st Half + S
    \\Record #7 - Member name (2nd Half) already used
    MTD + S 2nd Half + S
    \\Record #21 - Member name (Qtr1) already used
    Qtr1 + S Feb + S
    \\Record #22 - Member name (Qtr1) already used
    Qtr1 + S Jan + S
    \\Record #23 - Member name (Qtr1) already used
    Qtr1 + S Mar + S
    \\Record #24 - Member name (Qtr2) already used
    Qtr2 + S Apr + S
    \\Record #25 - Member name (Qtr2) already used
    Qtr2 + S Jun + S
    \\Record #26 - Member name (Qtr2) already used
    Qtr2 + S May + S
    \\Record #27 - Member name (Qtr3) already used
    Qtr3 + S Aug + S
    \\Record #28 - Member name (Qtr3) already used
    Qtr3 + S Jul + S
    \\Record #29 - Member name (Qtr3) already used
    Qtr3 + S Sep + S
    \\Record #30 - Member name (Qtr4) already used
    Qtr4 + S Dec + S
    \\Record #31 - Member name (Qtr4) already used
    Qtr4 + S Nov + S
    \\Record #32 - Member name (Qtr4) already used
    Qtr4 + S Oct + S
    \\Record #33 - Member name (Time) already used
    Time + S MTD + S
    \\Record #34 - Member name (Time) already used
    Time ~ S QTD ~ S
    \\Record #35 - Member name (Time) already used
    Time ~ S YTD ~ S
    \\Record #9 - Error adding Attribute to member QTD(Jan) (3320)
    \\Record #9 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD + S QTD(Jan) + S [Jan]
    \\Record #10 - Error adding Attribute to member QTD(Apr) (3320)
    \\Record #10 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Apr) ~ S [Apr]
    \\Record #11 - Error adding Attribute to member QTD(Aug) (3320)
    \\Record #11 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Aug) ~ S [Jul]+[Aug]
    \\Record #12 - Error adding Attribute to member QTD(Dec) (3320)
    \\Record #12 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Dec) ~ S [Oct]+[Nov]+[Dec]
    \\Record #13 - Error adding Attribute to member QTD(Feb) (3320)
    \\Record #13 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Feb) ~ S [Jan]+[Feb]
    \\Record #14 - Error adding Attribute to member QTD(Jul) (3320)
    \\Record #14 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Jul) ~ S [Jul]
    \\Record #15 - Error adding Attribute to member QTD(Jun) (3320)
    \\Record #15 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Jun) ~ S [Apr]+[May]+[Jun]
    \\Record #16 - Error adding Attribute to member QTD(Mar) (3320)
    \\Record #16 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Mar) ~ S [Jan]+[Feb]+[Mar]
    \\Record #17 - Error adding Attribute to member QTD(May) (3320)
    \\Record #17 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(May) ~ S [Apr]+[May]
    \\Record #18 - Error adding Attribute to member QTD(Nov) (3320)
    \\Record #18 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Nov) ~ S [Oct]+[Nov]
    \\Record #19 - Error adding Attribute to member QTD(Oct) (3320)
    \\Record #19 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Oct) ~ S [Oct]
    \\Record #20 - Error adding Attribute to member QTD(Sep) (3320)
    \\Record #20 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    QTD ~ S QTD(Sep) ~ S [Jul]+[Aug]+[Sep]
    \\Record #36 - Error adding Attribute to member YTD(Jan) (3320)
    \\Record #36 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD + S YTD(Jan) + S [Jan]
    \\Record #37 - Error adding Attribute to member YTD(Apr) (3320)
    \\Record #37 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Apr) ~ S [Qtr1]+[Apr]
    \\Record #38 - Error adding Attribute to member YTD(Aug) (3320)
    \\Record #38 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Aug) ~ S [1st Half]+[Jul]+[Aug]
    \\Record #39 - Error adding Attribute to member YTD(Dec) (3320)
    \\Record #39 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Dec) ~ S [1st Half]+[Qtr3]+[Qtr4]
    \\Record #40 - Error adding Attribute to member YTD(Feb) (3320)
    \\Record #40 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Feb) ~ S [Jan]+[Feb]
    \\Record #41 - Error adding Attribute to member YTD(Jul) (3320)
    \\Record #41 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Jul) ~ S [1st Half]+[Jul]
    \\Record #42 - Error adding Attribute to member YTD(Jun) (3320)
    \\Record #42 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Jun) ~ S [1st Half]
    \\Record #43 - Error adding Attribute to member YTD(Mar) (3320)
    \\Record #43 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Mar) ~ S [Qtr1]
    \\Record #44 - Error adding Attribute to member YTD(May) (3320)
    \\Record #44 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(May) ~ S [Qtr1]+[Apr]+[May]
    \\Record #45 - Error adding Attribute to member YTD(Nov) (3320)
    \\Record #45 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Nov) ~ S [1st Half]+[Qtr3]+[Oct]+[Nov]
    \\Record #46 - Error adding Attribute to member YTD(Oct) (3320)
    \\Record #46 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Oct) ~ S [1st Half]+[Qtr3]+[Oct]
    \\Record #47 - Error adding Attribute to member YTD(Sep) (3320)
    \\Record #47 - Aggregate storage outlines only allow formulas in compression dimension or dynamic hierarchies.
    YTD ~ S YTD(Sep) ~ S [1st Half]+[Qtr3]
    \\Record #2 - Incorrect Dimension [Year] For Member [ParentName] (3308)
    ParentName Consolidation DataStorage MemberName Consolidation DataStorage Formula
    \\Record #1 - Member name (Promotions) already used
    S Promotions S
    \\Record #2 - Incorrect Dimension [Promotions] For Member [ParentName] (3308)
    ParentName DataStorage MemberName DataStorage
    \\Record #3 - Member name (Promotions) already used
    Promotions S Coupon S
    \\Record #4 - Member name (Promotions) already used
    Promotions S Newspaper Ad S
    \\Record #5 - Member name (Promotions) already used
    Promotions S No Promotion S
    \\Record #6 - Member name (Promotions) already used
    Promotions S Temporary Price Reduction S
    \\Record #7 - Member name (Promotions) already used
    Promotions S Year End Sale S
    \\Record #2 - Incorrect Dimension [Payment Type] For Member [ParentName] (3308)
    ParentName DataStorage MemberName DataStorage
    \\Record #2 - Incorrect Dimension [Transation Type] For Member [ParentName] (3308)
    ParentName DataStorage MemberName DataStorage
    \\Record #22 - Member name (Home Entertainment) already used
    Home Entertainment + S Home Audio/Video + S
    \\Record #23 - Member name (Home Entertainment) already used
    Home Entertainment + S Televisions + S
    \\Record #24 - Member name (Other) already used
    Other + S Computers and Peripherals + S
    \\Record #25 - Incorrect Dimension [Product] For Member [ParentName] (3308)
    ParentName Consolidation DataStorage MemberName Consolidation DataStorage
    \\Record #26 - Member name (Personal Electronics) already used
    Personal Electronics + S Digital Cameras/Camcorders + S
    \\Record #27 - Member name (Personal Electronics) already used
    Personal Electronics + S Handhelds/PDAs + S
    \\Record #28 - Member name (Personal Electronics) already used
    Personal Electronics + S Portable Audio + S
    \\Record #31 - Member name (All Merchandise) already used
    Products + S All Merchandise + S
    \\Record #32 - Member name (High End Merchandise) already used
    Products ~ S High End Merchandise ~ S
    \\Record #33 - Member name (Systems) already used
    Systems + S Desktops + S
    \\Record #34 - Member name (Systems) already used
    Systems + S Notebooks + S
    \\Record #18 - Error adding Attribute to member Digital Recorders (3320)
    Home Audio/Video + S Digital Recorders + S
    \\Record #36 - Error adding Attribute to member Flat Panel (3320)
    Televisions + S Flat Panel + S
    \\Record #37 - Error adding Attribute to member HDTV (3320)
    Televisions + S HDTV + S
    \\Record #8 - Incorrect Dimension [Income Level] For Member [ParentName] (3308)
    ParentName DataStorage MemberName DataStorage
    \\Record #1 - Member name (Geography) already used
    S Geography S
    \\Record #2 - Error adding member 27425 (3317)
    \\Record #2 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    A M F GREENSBORO - NC S 27425 S 336
    \\Record #3 - Error adding member 36310 (3317)
    \\Record #3 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABBEVILLE - AL S 36310 S 334
    \\Record #4 - Error adding member 29620 (3317)
    \\Record #4 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABBEVILLE - SC S 29620 S 864
    \\Record #5 - Error adding member 67510 (3317)
    \\Record #5 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABBYVILLE - KS S 67510 S 316
    \\Record #6 - Error adding member 58001 (3317)
    \\Record #6 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABERCROMBIE - ND S 58001 S 701
    \\Record #7 - Error adding member 42201 (3317)
    \\Record #7 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABERDEEN - KY S 42201 S 502
    \\Record #8 - Error adding member 21001 (3317)
    \\Record #8 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABERDEEN - MD S 21001 S 410
    \\Record #9 - Error adding member 39730 (3317)
    \\Record #9 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABERDEEN - MS S 39730 S 601
    \\Record #10 - Error adding member 28315 (3317)
    \\Record #10 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABERDEEN - NC S 28315 S 910
    \\Record #11 - Error adding member 79311 (3317)
    \\Record #11 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABERNATHY - TX S 79311 S 806
    \\Record #12 - Error adding member 79601 (3317)
    \\Record #12 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABILENE - TX S 79601 S 915
    \\Record #13 - Error adding member 79608 (3317)
    \\Record #13 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    ABILENE - TX S 79608 S 915
    \\Record #14 - Error adding member 79698 (3317)
    \\Record #14 - Aggregate storage outlines only allow any shared member once in a stored hierarchy, including prototype.
    Are these errors due to the data source? If yes, what could be a possible workaround?
    Is there any problem with the Essbase properties I have set? If so, why don't I get any errors when I validate the cube schema?
    Please help me; I am stuck here and not able to deploy the ASO cube.
    Thanks in advance

    Hi,
    I have the same problem you have.
    Did you manage to solve it?
    Thanks in advance

  • Essbase data load process never terminating

    Hi
    We are using Essbase 11.1.2.1.
    We are trying to load a data file into a BSO Essbase application (Essbase export format, so no load rule). We are using the "execute in background" option.
    If the file contains no unknown member, it is loaded correctly.
    If the file contains an unknown member, the data load never terminates: we killed it after more than 15 hours, the request was still "terminating", and we had to kill the Essbase process on the server. No error message or error file.
    Thanks in advance for your help
    Fanny

    Hi
    I am trying to restart this discussion, because we still have the same problem and no idea why (no idea from Oracle Support either), and my client is waiting for an explanation :-/
    So, some more details:
    Context:
    I have an export (Essbase format) from a database.
    If I import this file into a database with all the members used in the file => no problem. The data load takes around 10 seconds.
    If I import this file into a database with all the members used in the file except one => problem. The data load never ends and I have to kill the Essbase application on the Essbase server.
    I have done another test:
    I imported the file into a database with all the members used in the file (so no problem).
    I exported level-0 data in columns.
    I imported this new file into the database with all the members used in the file, using a DLR => no problem. The data load takes around 10 seconds, with a dataload.err generated.
    So, John, you are right, the client should really use a column-format export. But I need to explain to him why he cannot (not should not) use the export format.
    If using the export format is possible: I have to identify my problem and make the right modification to solve it.
    If it is not possible: this is a bug, it is not my responsibility anymore, and I can close the project!
    Thanks in advance for your great ideas!
    Fanny

  • Unable to start Essbase 6.5.4 in Redhat V9

    I got the following error message when I typed ESSBASE at the shell prompt: "ESSBASE: error while loading shared libraries: libessnet.so: cannot open shared object file: no such file or directory". I did modify the profile to include the following lines:
    ARBORPATH=/home/hyperion/essbase; export ARBORPATH
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib:$ARBORPATH/bin; export LD_LIBRARY_PATH
    PATH=$PATH:$HOME/bin:$ARBORPATH/bin; export PATH
    I also modified the csh.cshrc file to include the following line:
    setenv LD_LIBRARY_PATH "$LD_LIBRARY_PATH:/home/hyperion/essbase/bin"
    Also, when I type env at the prompt, it shows that the Essbase environment variables are properly set. The only thing I didn't do was run root.sh, since I can't even find this file... please help.
    Thanks,
    Ronny

    Essbase 6.5.4 is not supported on RedHat v9. It is however supported on RedHat 7.3

  • Import XML into Essbase?

    Is it possible to import XML into Essbase or are we limited to flat files or Excel files? If XML is an option, can someone point me to some documentation?
    Thx!

    Essbase supports only the following types of data sources:
    Text files (flat files) from text backups or external sources
    SQL data sources
    Essbase export files (export files do not need a rules file to load)
    Microsoft Excel files with the .XLS extension, Version 4.0 and higher
    Microsoft Excel files, Version 5.0 and higher (loaded only as client artifacts or files in the file system)
    Spreadsheet audit log files

  • Essbase Query

    Hi,
    I have written a batch script which exports the data from an Essbase application to a file. But because the data is very large, the export produces 5 files. The problem is that when importing back I have to do it manually: when I pass the file name as a parameter, it just takes the first file (file.txt) and ignores the other files, file_1.txt, file_2.txt, file_3.txt and file_4.txt.
    Any suggestions on how to proceed?

    2. In a bat/shell script, you could concatenate all the files together into a single file using wildcards and use that resulting file to do the load.
    The export file is split into multiple files automatically when it exceeds 2 GB. Merging multiple 2 GB files into a single file is not a good idea; some file systems cannot support text files larger than 2 GB.
    To handle this kind of issue, my approach would be:
    1) Estimate the size of the exported data, say 5 GB.
    2) Export the data using parallel export, specifying 6 distinct file names or file names generated by my batch script. In a parallel export, Essbase exports data concurrently to a list of file names, each with less than 1 GB.
    3) Use the known file names to load back.
    4) Automate this activity with 3 components (1 batch script, 2 MaxL scripts - one for export and one for import), as in the sketch below.
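
    A minimal sketch of steps 3 and 4: it assumes the file.txt, file_1.txt, ... naming from the question, a placeholder application Sample.Basic, placeholder credentials, and the MaxL shell (essmsh) on the PATH.

    import glob
    import subprocess

    # Collect the split export files in a predictable order.
    files = sorted(glob.glob("file*.txt"))   # file.txt, file_1.txt, file_2.txt, ...

    # Generate one import statement per file; names and credentials are placeholders.
    statements = ["login admin password on localhost;"]
    for name in files:
        statements.append(
            "import database Sample.Basic data from data_file '%s' on error abort;" % name
        )
    statements += ["logout;", "exit;"]

    with open("load_all.mxl", "w") as f:
        f.write("\n".join(statements) + "\n")

    subprocess.run(["essmsh", "load_all.mxl"], check=True)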

  • Export file @ 10 GB

    An "All Data" essbase export file (.txt) went to 10 gb's while the rest are 2 gb each.<BR>Anyone ever seen this behavior before, and will there be an impact while re-importing?<BR><BR>Thanks.<BR><BR>Kizi.

    Hi!
    We have had the same issue since 2005, and everything goes fine with such big files; we can export data and re-import it from files of more than 2 GB. We run Essbase on Windows.
    There was case #589327 with Support, and the answer was:
    ----------------
    Essbase does not export its data in "chunks", so the export file sizes may differ. Basically, it will export until it gets a message back from the OS like "I am full now", then it generates the next file.
    So on Windows systems it is quite common to have export files > 2 GB (and, on the other hand, on Unix systems this is kind of uncommon).
    As a workaround you may try to use PAREXPORT into so many files that each one should not exceed the 2 GB limit.
    -----------------
    So, don't worry!
    Regards,
    Georgy

  • Cube Load Times

    Hi there,
    We have one BSO cube implemented on Essbase 7.1.2 and we load data from a SQL Server source every night. My question is to do with the time taken to load/calculate the cube.
    It started off at around 20 minutes, however in the space of a few months this time has risen a lot; it is now taking up to 2.5 hours to complete the load/calculation process.
    If we restructure the cube, the next day's load takes around 1 hour.
    My question was basically: can anyone recommend any tips/tricks to optimise the load process?
    One thing we have noticed is that the data compression on the cube is set to RLE. Is this the right one, or should we be using one of the others?
    Any help appreciated!
    Brian

    With the assumptions that (a) you perform a full calc of the database, and (b) the full calc itself isn't an exceptionally long process, you can typically get a much better overall performance gain by doing the following:
    1) Export the input-level data (level 0 if you have agg missing turned on).
    2) Clear the database.
    3) Load data from your Essbase export.
    4) Load data from your SQL source.
    5) Run your full calc/calc all (a sketch of this sequence appears after this reply).
    The reason you could get an overall performance gain is two-fold:
    1) The RLE compression tends to cause a lot of fragmentation within the page files as blocks are written and re-written.
    2) The calc engine tends to run faster when it doesn't have to read the upper-level blocks in before calculating them.
    Both of the above are simplified explanations, and results vary greatly based on the outline/hierarchies and the calc required. One thing to note, however, is that if you run any type of input scrubbing/modification script (e.g. allocation, elimination, etc.), the optimum sequence can be quite different from the above (I won't go into the hows and whys here).
    Now on to the other ways to optimize your load. In the Database Administrator's Guide (DBAG), there is a good section on ways to optimize a file for loading. Basically, the closer the order of the members appearing in your load file matches the order they appear in the outline, the better. For instance, if the columns in the file are dense, and the rows are sorted in the same order as the sparse dimensions, you load MUCH faster than if you do the opposite. This can have a big impact on the fragmentation that occurs during a load, enough so that if you can be pretty sure that the sort for your SQL export is optimum for loading, the above approach (Export/Clear/Load/Calc) won't get you any real benefit -- at least from the fragmentation aspect.
    Of course, the outline itself can be less than optimum, so any fixes you make should start with the outline, then move to the SQL file layout, then the load process. The caution here is to be sure that changes you make to align the outline with your data load don't adversely affect your calc and retrieval performance too much.
    Most of this is covered in the DBAG, in greater detail, just not laid out in an order that can be easily followed for your needs. So if you need further details on any of the above, start there.
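
    For illustration, the Export/Clear/Load/Calc cycle described above could be scripted like this. The application name, file names, SQL credentials, and rules file are placeholders, and the sketch assumes the MaxL shell (essmsh) is available; it is not the original poster's setup.

    import subprocess
    import tempfile

    MAXL = """
    login admin password on localhost;

    /* 1) export the input-level (level 0) data */
    export database Sample.Basic level0 data to data_file 'lev0_backup.txt';

    /* 2) clear the database */
    alter database Sample.Basic reset data;

    /* 3) reload the Essbase export (no rules file needed for a native export) */
    import database Sample.Basic data from data_file 'lev0_backup.txt' on error abort;

    /* 4) load the nightly SQL source through its load rule (placeholder names) */
    import database Sample.Basic data connect as 'sqluser' identified by 'sqlpass'
        using server rules_file 'dSQL' on error write to 'dSQL.err';

    /* 5) run the full calculation */
    execute calculation default on Sample.Basic;

    logout;
    exit;
    """

    with tempfile.NamedTemporaryFile("w", suffix=".mxl", delete=False) as script:
        script.write(MAXL)

    subprocess.run(["essmsh", script.name], check=True)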
