ASO data loading...

Hi,
I have a big problem, please help me.
I had a rule file in Essbase 6.5 for BSO, but now I have migrated the same outline to 7.2 as ASO, and the rule file is giving me problems while loading data.
I am loading data for the member Days with values like 31/28/31/30 for some upper-level member combinations. But now in ASO I can only load data to level 0 members, so the rule file is giving me the wrong result after aggregation for the upper-level members. Is there any way I can load the data for the upper-level members too, or at any other level?
Please help.
Thanks.

Modify the load rule to apply a prefix to your incoming member name in that field.
Then add an alias to the first stored member of the incoming upper-level member that matches this new name.
For instance:
--- Ancestor2
...... --- Ancestor1
.............. --- Child1 (Alias: Input_Ancestor2)
Obviously, as shown above (hopefully the format is retained enough to make sense), your inputs can't have multiple input aliases to the child without devoting additional alias tables to it. However, this generally isn't an issue because the inputs should trace a natural 'path' that allows at least one level 0 member for every upper-level input.
It looks and sounds confusing, but the issue is simple from a technical perspective: alias the upper-level member down to an input member, then deal with it there as you would otherwise have dealt with the upper-level member as part of your calc.
HTH.

Similar Messages

  • Urgent help required - ASO data loading

    I am working on 9.3.0 and want to do incremental data loading so that it won't affect my past data. I am still not sure whether that is doable or not. Now the question is:
    do I need to design a new aggregation after loading data?
    Thanks in advance

    Hi,
    The ASO cube will clear off all the data if you make certain structural changes to it (i.e., if you change your outline; you can find out exactly what clears off the data: for example, if you add a new member, the data clears off, while in other cases, such as just adding a simple formula to an existing member, it might not clear off the data).
    If you don't want to affect the past data and yet want to load incrementally, then ensure that all the members are already in the outline (take care of the time dimension; let all the time members be in place), and then you can load using the 'add to existing values' option while you load; see the sketch after this reply.
    But remember, you can only do this without any structural changes. Otherwise, you need to load everything all over again.
    It's good if you design an aggregation, as it helps retrieval performance, but it's not mandatory.
    Sandeep Reddy Enti
    HCC
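    For reference, a minimal MaxL sketch of that incremental 'add to existing values' load via an ASO load buffer (the AsoSamp.Sample database, file, and rule names are illustrative assumptions, not from this thread; on versions without explicit load buffers, set 'add to existing values' in the rule file's data load settings instead):
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 1;
    import database AsoSamp.Sample data
      from data_file 'incr_data.txt'
      using server rules_file 'incr'
      to load_buffer with buffer_id 1
      on error write to 'incr_load.err';
    /* commit the buffer, adding incoming cells to existing values */
    import database AsoSamp.Sample data
      from load_buffer with buffer_id 1
      add values;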

  • ASO data load happening slowly on Essbase 7.1.6

    Hi All,
    We are loading data files into an ASO cube. The same file used to load in seconds until last week, but this week it is taking hours.
    We have checked the server machine's performance, but to no avail.
    Please help

    There is no export feature in v7 for ASO cubes; you are going to have to upgrade to get the functionality you are looking for.
    In the meantime, you can try an MDX query instead of a report script, but I don't think it will yield much better results. You'll probably want to break up the report/query into smaller chunks, maybe one for each year of history or something like that.
    If you plan to stay on 7 you should look at an alternate method for storing the history, for example staging your level 0 data in a relational database and reloading the ASO cube from the relational source. You do not want to count on having all your data locked up in an ASO cube with no way to get it out.
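    As a rough sketch of that MDX route, run from MaxL with spooled output (AsoSamp.Sample, the Years member [2010], and the dimension names are illustrative assumptions; extend the rows with further CrossJoins over your remaining dimensions, one query per year to keep the chunks small):
    spool on to 'lev0_2010.txt';
    select
      {[Measures].Levels(0).Members} on columns,
      NON EMPTY CrossJoin({[2010]}, [Time].Levels(0).Members) on rows
    from AsoSamp.Sample;
    spool off;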

  • Incremental Data loading in ASO 7.1

    Hi,
    As per the 7.1 essbase dbag
    "Data values are cleared each time the outline is changed structurally. Therefore, incremental data loads are supported
    only for outlines that do not change (for example, logistics analysis applications)."
    That means we can have incremental loading for ASO in 7.1 for an outline which doesn't change structurally. Now what does it mean by an outline which changes structurally? If we add a level 0 member in any dimension, does it mean a structural change to that outline?
    It also says that adding an Accounts/Time member doesn't clear out the data. Only adding/deleting/moving a standard dimension member will clear out the data. I'm totally confused here. Can anyone please explain?
    The following actions cause Analytic Services to restructure the outline and clear all data:
    ● Add, delete, or move a standard dimension member
    ● Add, delete, or move a standard dimension
    ● Add, delete, or move an attribute dimension
    ● Add a formula to a level 0 member
    ● Delete a formula from a level 0 member
    Edited by: user3934567 on Jan 14, 2009 10:47 PM

    Adding a Level 0 member is generally, if not always, considered to be a structural change to the outline. I'm not sure if I've tried to add a member to Accounts and see if the data is retained. This may be true because by definition, the Accounts dimension in an ASO cube is a dynamic (versus Stored) hierarchy. And perhaps since the Time dimension in ASO databases in 7.x is the "compression" dimension, there is some sort of special rule about being able to add to it -- although I can't say that I ever need to edit the Time dimension (I have a separate Years dimension). I have been able to modify formulas on ASO outlines without losing the data -- which seems consistent with your bullet points below. I have also been able to move around and change Attribute dimension members (which I would guess is generally considered a non-structural change), and change aliases without losing all my data.
    In general I just assume that I'm going to lose my ASO data. However, all of my ASO outlines are generated through EIS and I load to a test server first. If you're in doubt about losing the data -- try it in test/dev. And if you don't have test/dev, maybe that should be a priority. :) Hope this helps -- Jason.

  • Performance Tuning Data Load for ASO cube

    Hi,
    Can anyone help with how to fine-tune data loads on an ASO cube?
    We have an ASO cube which loads around 110 million records from a total of 20 data files.
    18 of the data files have 4 million records each and the last two have around 18 million records.
    On average, loading 4 million records takes 130 seconds.
    The data file has 157 data columns representing the period dimension.
    With a BSO cube, sorting the data file normally helps. But with ASO, it does not seem to have
    any impact. Any suggestions on how to improve data load performance for the ASO cube?
    Thanks,
    Lian

    Yes TimG, it sure looks identical - except for the last BSO reference.
    Well, never mind, as long as those that count remember where the words come from.
    To the Original Poster and to 960127 (come on, create a profile already, will you?):
    The sort order WILL matter IF you are using a compression dimension. In this case the compression dimension acts just like a BSO dense dimension: if you load part of it in one record, then when the next record comes along it has to be added to the already existing part. The ASO "load buffer" is really a file, named <dbname>.dat, that is built in your temp tablespace.
    The most recent records that fit in the ASO cache are retained in the cache, so if a record is still there it will not have to be reread from the disk drive. So you could (instead of sorting) create an ASO cache as large as your final .dat file; then the record would still be in the cache.
    BUT WAIT BEFORE YOU GO RAISING YOUR ASO CACHE. All operating systems use memory-mapped IO, so even if a record is not in the cache it will likely still be on the disk in "Standby" memory (the dark blue memory as seen in Resource Monitor); this will continue until the system runs out of "Free" memory (light blue in Resource Monitor).
    So in conclusion, if your system still has Free memory, there is no need (in a data load) to increase your ASO cache. And if you are out of Free memory, then all you will do by increasing the ASO cache during a data load is slow down the other applications running on your system, so don't do it.
    Finally, if you have enough memory that the entire data file fits in Standby + Free memory, then don't bother to sort it first. But if you do not have enough, then sort it.
    Of course you have 20 data files, so I hope that you do not have compression members spread out amongst these files!!!
    Finally, you did not say whether you were using parallel load threads. If you need to have 20 files, read up on using parallel load buffers and parallel load scripts; see the sketch below. That will make it faster.
    But if you do not really need 20 files and just broke them up to load in parallel, then create one single file and raise your DLTHREADSPREPARE and DLTHREADSWRITE settings. Heck, these will help even if you do go parallel, and really help if you don't but still keep 20 separate files.
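    To make the parallel-buffer suggestion concrete, a minimal MaxL sketch (database, file, and rule names are illustrative assumptions; each buffer is filled from its own MaxL session running in parallel, then all buffers are committed in one statement):
    /* session 1 */
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 1 resource_usage 0.5;
    import database AsoSamp.Sample data from data_file 'file01.txt'
      using server rules_file 'ldaso' to load_buffer with buffer_id 1
      on error write to 'file01.err';
    /* session 2, running at the same time */
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 2 resource_usage 0.5;
    import database AsoSamp.Sample data from data_file 'file02.txt'
      using server rules_file 'ldaso' to load_buffer with buffer_id 2
      on error write to 'file02.err';
    /* once every buffer is loaded, commit them together */
    import database AsoSamp.Sample data from load_buffer with buffer_id 1, 2;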

  • Data load in Essbase ASO cube

    Hi,
    I have not used ASO cubes before and had worked only on BSO cubes. Now I have a requirement to create a rule file to load data into an ASO Essbase cube. I have created a data load rule file as I would for a BSO cube, and it validates correctly. However, when I do the data load I get the following warning:
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped"
    I investigated further and found that an ASO cube does not allow data loading at upper levels or on members calculated through formulas. After this I ensured that I am loading the data into level 0 members that are not calculated through a formula. But I am still not able to do the data load and am getting the same warning.
    Could you please help me and let me know if there is anything else which I am missing here?
    Thanks in advance...
    AKW

    Hi AKW,
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped"This is only a warning message that means only those many cells were skipped might be for some reasons like any member pointing to those cells will be missing.
    If you want to copy the Data of your BSO cube to an ASO Application why dont you use an PARTIONING it will copy your whole data from BSO to ASO (If Outline is common in both then copy any member of Sparse dimension like "Scenario 1" from Source i.e. BSO, to same member like "Scenario 1" in Target i.e ASO ),
    This is only an alternate wayThanks
    Avneet Singh Bhatia
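    A minimal MaxL sketch of that partitioning route (the app/db names, host, credentials, and area spec are all illustrative assumptions, and this presumes your Essbase version supports an ASO replication target):
    create or replace replicated partition BsoApp.BsoDb
      area '@IDESCENDANTS("Scenario 1")'
      to AsoApp.AsoDb at localhost
      as admin identified by 'password'
      area '@IDESCENDANTS("Scenario 1")';
    refresh replicated partition BsoApp.BsoDb to AsoApp.AsoDb all data;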

  • Data Load MAXLs in ASO

    Hi All,
    Greetings of the day !!!!
    I want to understand the difference between "add values create slice" and "override values create slice" as used in data loading MaxL statements.
    Suppose we initialized buffer and loaded data in buffer then we can use following two MAXLs
    1)
    import database AsoSamp.Sample data
    from load_buffer with buffer_id 1
    add values create slice;
    2)
    import database AsoSamp.Sample data
    from load_buffer with buffer_id 1
    override values create slice;
    Q1
    What I am thinking logically is: if I am again loading data at the same intersections from which the slice was created, ADD VALUES will add to it and OVERRIDE VALUES will overwrite it, e.g. if 100 was present earlier and we load 200 again, then ADD will make 300 and OVERRIDE will result in 200. Let me know if my understanding is correct.
    Q2
    Why do we use "create slice" ? What is the use? Is it for better performance for data loading? Is it compulsary to merge the slices after dataloading??
    Cant we just use add value or override values if we dont want to create slice...
    Q3
    I saw two MaxL statements for merging also: one was MERGE ALL DATA and the other was MERGE INCREMENTAL DATA. What's the difference? In which case do we use which?
    Please help me in resolving my doubts. Thanks a lot !!!!

    Q1 - Your understanding is correct. The buffer commit specification determines how what is in the buffer is applied to what is already in the cube. Note that there are also buffer initialization specifications for 'sum' and 'use last' that apply only to data loaded to the buffer.
    Q2 - Load performance. Loading data to an ASO cube without 'create slice' takes time (per the DBAG) proportional to the amount of data already in the cube. So loading one value to a 100GB cube may take a very long time. Loading data to an ASO cube with 'create slice' takes time proportional to the amount of data being loaded - much faster in my example. There is no requirement to immediately merge slices, but it will have to be done to design / process aggregations or restructure the cube (in the case of restructure, it happens automatically IIRC). The extra slices are like extra cubes, so when you query Essbase now has to look at both the main cube and the slice. There is a statistic that tells you how much time Essbase spends querying slices vs querying the main cube, but no real guidance on what a 'good' or 'bad' number is! See http://docs.oracle.com/cd/E17236_01/epm.1112/esb_tech_ref/aggstor_runtime_stats.html.
    The other reason you might want to create a slice is that it's possible to overwrite (or even remove, by committing an empty buffer with the 'override incremental data' clause in the buffer commit specification) only the slice data without having to do physical or logical clears. So if you are continually updating current period data, for example, it might make sense to load that data to an incremental slice.
    Q3 - You can merge the incremental slices into the rest of the cube, or you can merge multiple incremental slices into one single incremental slice, but not into the rest of the cube. Honestly, I've only ever wanted to use the first option. I'm not really sure when or why you would want to do the second, although I'm sure it's in there for a reason.
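    For reference, the two merge statements under discussion, in minimal MaxL form (AsoSamp.Sample is an illustrative name):
    /* merge all incremental slices into the main database slice */
    alter database AsoSamp.Sample merge all data;
    /* merge all incremental slices into a single incremental slice,
       leaving the main database slice unchanged */
    alter database AsoSamp.Sample merge incremental data;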

  • DATA LOAD WARNINGS IN ASO CUBES

    Hi Everyone,
    While loading data into ASO cubes in Essbase we are getting warnings like "Data load stream contains 1.25797e+08 cells and [0] #Missing cells". My data file has #Missing values and 0's and special characters like E. I want to load the complete data without warnings. Kindly let me know if anyone knows the solution, whether I need to change any settings in the rule file, or how to ignore those cells.
    Thanks,
    Vikram

    The warnings are really informational messages to let you know what it did and did not load, which is fine, as the zeros tend to bloat a cube. #Missing is not going to load anyway, and the E is exponential number format, which should not be a problem. Excel will display it this way, but you can format it without the E. You don't mention whether you are doing this from EAS or MaxL and what version you are on. In Version 11, in EAS, there are options across the top of the load dialog to turn the loading of zeros and missing values on or off. In MaxL, I don't see the syntax in the Tech Reference, but I thought it was there in 9; see the buffer sketch below.
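    For ASO buffer loads at least, the buffer initialization properties cover this; a minimal MaxL sketch (AsoSamp.Sample and the buffer id are illustrative assumptions):
    /* cells that are #Missing or zero in the source are dropped at load time */
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 1
      property ignore_missing_values, ignore_zero_values;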

  • Essbase 7.1 - Incremental data load in ASO

    Hi,
    Is there an incremental data loading feature in ASO version 7.1? Let's say I have the following data in the ASO cube:
    P1 G1 A1 100
    Now, I get the following 2 rows as per the incremental data from relational source:
    P1 G1 A1 200
    P2 G1 A1 300
    So, once I load these rows using a rule file with the 'override existing values' option, will I have the following dataset in ASO:
    P1 G1 A1 200
    P2 G1 A1 300
    I know there is a data load buffer concept in ASO 7.1, and that this is the only way to improve data load performance. But I just wanted to check whether we can implement incremental loading in ASO or not.
    And one more thing: can 2 load rules run in parallel to load data into ASO cubes? As per my understanding, when we start loading data, the cube is locked for any other insert/update. Please correct me if I'm wrong!
    Thanks!

    Hi,
    I think the features such as incremental data loads were available from version 9.3.1
    The What's New for Essbase 9.3.1 contains:
    Incrementally Loading Data into Aggregate Storage Databases
    The aggregate storage database model has been enhanced with the following features:
    - An aggregate storage database can contain multiple slices of data.
    - Incremental data loads complete in a length of time that is proportional to the size of the incremental data.
    - You can merge all incremental data slices into the main database slice or merge all incremental data slices into a single data slice while leaving the main database slice unchanged.
    - Multiple data load buffers can exist on a single aggregate storage database. To save time, you can load data into multiple data load buffers at the same time.
    - You can atomically replace the contents of a database or the contents of all incremental data slices.
    - You can control the share of resources that a load buffer is allowed to use and set properties that determine how missing and zero values, and duplicate values, in the data sources are processed.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • ASO Blank Field Fill with Data Load Rule

    In a block storage database I could fill a blank field with a text value by creating a field with text and then replacing a whole word match with the preferred text. I am unable to get this to work in an ASO database. The Field Properties tab has the option, but it does not work when I try to use it. Has anyone else encountered this situation?

    Hi,
    Thank you both for your answers. But what confuses me is this: I created a rules file using a file with 12 columns. I selected the appropriate member for each column in Field Properties, and added the View member in the data load header. Then I get the error message: "This field is also defined in the header definition" for all fields. However, if I don't set the members in Field Properties and just set them in the data load header, I get another error message: "There is an unknown member (or no member) in the field name."
    Can you please help?
    Thank you!

  • How to tune data loading time in BSO using 14 rules files ?

    Hello there,
    I'm using Hyperion-Essbase-Admin-Services v11.1.1.2 and the BSO Option.
    In a nightly process using MaxL I load new data into one Essbase cube.
    In this nightly update process 14 account members are updated by running 14 rules files one after another.
    These rules files connect 14 times by SQL connection to the same Oracle database and the same table.
    I use this procedure because I cannot load 2 or more data fields using one rules file.
    It takes a long time to load 14 accounts one after the other.
    Now my question: how can I minimise this data loading time?
    This is what I found on Oracle Homepage:
    What's New
    Oracle Essbase V.11.1.1 Release Highlights
    Parallel SQL Data Loads- Supports up to 8 rules files via temporary load buffers.
    In an Older Thread John said:
    As it is version 11 why not use parallel sql loading, you can specify up to 8 load rules to load data in parallel.
    Example:
    import database AsoSamp.Sample data
    connect as TBC identified by 'password'
    using multiple rules_file 'rule1','rule2'
    to load_buffer_block starting with buffer_id 100
    on error write to "error.txt";
    But this is for the ASO option only.
    Can I use it in my MaxL for BSO as well? Is there a sample?
    What else can be done to tune the nightly update time?
    Thanks in advance for every tip,
    Zeljko

    Thanks a lot for your support. I’m just a little confused.
    I will use an example to illustrate my problem a bit more clearly.
    This is the basic table, in my case a view, which is queried by all 14 rules files:
    column1 --- column2 --- column3 --- column4 --- ... --- column n
    dim 1 --- dim 2 --- dim 3 --- data1 --- data2 --- data3 --- ... --- data 14
    Region -- ID --- Product --- sales --- cogs ---- discounts --- ... --- amount
    West --- D1 --- Coffee --- 11001 --- 1,322 --- 10789 --- ... --- 548
    West --- D2 --- Tea10 --- 12011 --- 1,325 --- 10548 --- ... --- 589
    West --- S1 --- Tea10 --- 14115 --- 1,699 --- 10145 --- ... --- 852
    West --- C3 --- Tea10 --- 21053 --- 1,588 --- 10998 --- ... --- 981
    East ---- S2 --- Coffee --- 15563 --- 1,458 --- 10991 --- ... --- 876
    East ---- D1 --- Tea10 --- 15894 --- 1,664 --- 11615 --- ... --- 156
    East ---- D3 --- Coffee --- 19689 --- 1,989 --- 15615 --- ... --- 986
    East ---- C1 --- Coffee --- 18897 --- 1,988 --- 11898 --- ... --- 256
    East ---- C3 --- Tea10 --- 11699 --- 1,328 --- 12156 --- ... --- 9896
    Following 3 out of 14 (load-) rules files to load the data columns into the cube:
    Rules File1:
    dim 1 --- dim 2 --- dim 3 --- sales --- ignore --- ignore --- ... --- ignore
    Rules File2:
    dim 1 --- dim 2 --- dim 3 --- ignore --- cogs --- ignore --- ... --- ignore
    Rules File14:
    dim 1 --- dim 2 --- dim 3 --- ignore --- ignore --- ignore --- ... --- amount
    Is the table design above what GlennS described as the "Data" column concept, which only allows a single numeric data value?
    In that case I can't tag two or more columns as "Data fields"; I can only tag one column as a "Data field" and have to tag the other data fields as "ignore fields during data load". Otherwise, when I validate the rules file, an error occurs: "only one field can contain the Data Field attribute".
    Or may I ignore this error message and just try to tag all 14 fields as "Data fields" and load the data?
    Please advise.
    Am I right that the other way is to reconstruct the table/view (and the rules files) as follows, to load all of the data in one pass:
    dim 0 --- dim 1 --- dim 2 --- dim 3 --- data
    Account --- Region -- ID --- Product --- data
    sales --- West --- D1 --- Coffee --- 11001
    sales --- West --- D2 --- Tea10 --- 12011
    sales --- West --- S1 --- Tea10 --- 14115
    sales --- West --- C3 --- Tea10 --- 21053
    sales --- East ---- S2 --- Coffee --- 15563
    sales --- East ---- D1 --- Tea10 --- 15894
    sales --- East ---- D3 --- Coffee --- 19689
    sales --- East ---- C1 --- Coffee --- 18897
    sales --- East ---- C3 --- Tea10 --- 11699
    cogs --- West --- D1 --- Coffee --- 1,322
    cogs --- West --- D2 --- Tea10 --- 1,325
    cogs --- West --- S1 --- Tea10 --- 1,699
    cogs --- West --- C3 --- Tea10 --- 1,588
    cogs --- East ---- S2 --- Coffee --- 1,458
    cogs --- East ---- D1 --- Tea10 --- 1,664
    cogs --- East ---- D3 --- Coffee --- 1,989
    cogs --- East ---- C1 --- Coffee --- 1,988
    cogs --- East ---- C3 --- Tea10 --- 1,328
    discounts --- West --- D1 --- Coffee --- 10789
    discounts --- West --- D2 --- Tea10 --- 10548
    discounts --- West --- S1 --- Tea10 --- 10145
    discounts --- West --- C3 --- Tea10 --- 10998
    discounts --- East ---- S2 --- Coffee --- 10991
    discounts --- East ---- D1 --- Tea10 --- 11615
    discounts --- East ---- D3 --- Coffee --- 15615
    discounts --- East ---- C1 --- Coffee --- 11898
    discounts --- East ---- C3 --- Tea10 --- 12156
    amount --- West --- D1 --- Coffee --- 548
    amount --- West --- D2 --- Tea10 --- 589
    amount --- West --- S1 --- Tea10 --- 852
    amount --- West --- C3 --- Tea10 --- 981
    amount --- East ---- S2 --- Coffee --- 876
    amount --- East ---- D1 --- Tea10 --- 156
    amount --- East ---- D3 --- Coffee --- 986
    amount --- East ---- C1 --- Coffee --- 256
    amount --- East ---- C3 --- Tea10 --- 9896
    And the third way is to adjust the essbase.cfg parameters DLTHREADSPREPARE and DLTHREADSWRITE (and DLSINGLETHREADPERSTAGE); see the sketch after this post.
    I just want to be sure that I understand your suggestions.
    Many thanks for awesome help,
    Zeljko
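    For the essbase.cfg route mentioned above, a minimal sketch (the app/db names and thread counts are illustrative assumptions; DLSINGLETHREADPERSTAGE must be FALSE for the thread settings to take effect, and the server must be restarted to pick up the changes):
    DLSINGLETHREADPERSTAGE Sample Basic FALSE
    DLTHREADSPREPARE Sample Basic 4
    DLTHREADSWRITE Sample Basic 4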

  • Data load error in essbase studio

    I get the following error when trying to load an ASO cube using Essbase Studio (EPM 11.1.2). This error doesn't seem to be documented in any of the Essbase manuals. Question: does this error indicate an Essbase server issue or a data source issue? I'm thinking it's datasource related, but my data source is an Oracle database, which I've used previously to load cubes without a problem. I've refreshed the source and can connect to it fine otherwise.
    Error:
    Data load started at: Fri Dec 03 08:52:21 EST 2010.      Data load elapsed time:  10 Minutes 23 Seconds.
    Failed to deploy Essbase cube.
    Caused by: Failed to load data into database: 8020.
    Caused by: Cannot execute a SQL query
    Caused by: Io exception: Socket read timed out
    Caused by: Socket read timed out
    Appreciate any help with this issue.

    When I have issues with Studio I try to break it down slowly. I build my dimensions one at a time. If it breaks on a single dimension build I trace the issues backwards and usually find my issue in the schema.
    Studio's role in life is to create SQL load rules, and as such it depends on a good schema definition. Unfortunately, the dimension build rules can't be opened in EAS with the Dataprep Editor (like regular load rules can) because they're binary and can do things that a normal load rule cannot (text measures, date measures, time-varying attributes, etc.). But that doesn't mean the .rul files are unreadable. If you're having trouble with a particular dimension build process, open the load rule it creates with something like Notepad, grab the SQL that Studio is generating, and drop it into Toad (or equivalent) to see if it generates usable code. If not, there's something wrong with your modeling and you need to go back to the mini-schema.
    When you're able to build all dimensions at the same time, you're almost there. If your issue comes when you want to build and load data, the final debugging steps go quickly. Towards that end, the data load rules (the ones that load data, as opposed to building dimensions) generated by Studio can be edited in EAS using the Dataprep Editor. If you know SQL load rules, you should be able to figure it out. If not, contact John Goodwin, OCS, or a partner and set up a consulting visit.

  • "UNICODE_IN_DATA" error in ODI 11.1.1.5 data load interface

    Hello!
    I am sorry, I have to ask for help again, with a new issue in ODI 11.1.1.5. This is a multiple-column data load interface. I am loading data from a tab-delimited text file into Essbase ASO 11.1.2. The ODI repository database is MS SQL Server. In the target datastore some fields are not mapped to the source but hardcoded with a fixed value; for example, since only budget data is loaded by default, the mapping for the "Scenario" field in the target has the input string 'Budget'. This data load interface has no rules file.
    At "Prepare for loading" step the following error is produced:
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (most recent call last):
    File "<string>", line 86, in <module>
    AttributeError: type object 'com.hyperion.odi.common.ODIConstants' has no attribute 'UNICODE_IN_DATA'
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.execInBSFEngine(SnpScriptingInterpretor.java:346)
         at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.exec(SnpScriptingInterpretor.java:170)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java:2458)
         at oracle.odi.runtime.agent.execution.cmd.ScriptingExecutor.execute(ScriptingExecutor.java:48)
         at oracle.odi.runtime.agent.execution.cmd.ScriptingExecutor.execute(ScriptingExecutor.java:1)
         at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2906)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2609)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:540)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:453)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1740)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:338)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:214)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:272)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:263)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:822)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:123)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:83)
         at java.lang.Thread.run(Thread.java:662)
    I will be very grateful for any hints.

    Have you changed any of the Hyperion Java files?
    I have not seen this exact error before, but errors like this occur when the KM is not in sync with the Java files.
    Also, I always suggest using a rules file.
    If you have changed the files, revert back to the original odihapp_common.jar and see if it works; if you changed the files to get around the issues I described in the blog, you should be all right having changed only odihapp_essbase.jar.
    This is the problem now with Oracle and all their different versions and patches of ODI; it seems to me they put effort into the 10.1.3.x Hyperion modules and then in 11.1.1.5 just gave up and totally messed a lot of things up.
    I hope somebody from Oracle reads this, because they need to get their act together.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • ODI - How to clear a slice before executing the data load interface

    Hi everyone,
    I am using ODI 10.1.3.6 to load data daily into an ASO cube (version: 11.1.2.1). Before loading data for a particular date, I want the region defined by that date to be cleared in the ASO cube.
    I suppose I need to run a PRE_LOAD_MAXL_SCRIPT that clears the area defined by an MDX function. But I don't know how I can automatically define the region by looking at several columns in the data source.
    Thanks a lot.

    Hi, thank you for the response.
    I know how to clear a region in an ASO database. I wrote a MaxL statement like the following:
    alter database App.Db clear data in region '{([DAY].[Day_01],[MONTH].[Month_01],[YEAR].[2011])}'
    physical;
    I have 3 separate dimensions: DAY, MONTH and YEAR. My question was that I don't know how to automate the clearing process before each data load for a particular date.
    Can I somehow automatically set the Day, Month and Year information in the MDX function by looking at the day, month and year columns in the relational data source? For example, if I am loading data for 03.01.2011, I want my MDX function to become {([DAY].[Day_01],[MONTH].[Month_03],[YEAR].[2011])}. In the data source table I also have separate columns for Day, Month and Year, which should make it easier, I guess.
    I also thought of using substitution variables to define the region, but then again the variables need to be set according to the day, month and year columns in the data source table; see the sketch after this post. I would also like to mention that the data source table is truncated and loaded daily, so there can't be more than one day or one month, etc., in the table.
    I don't know if I have stated my problem clearly; please let me know if there are any confusing bits.
    Thanks a lot.
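    A minimal sketch of that substitution-variable route, assuming the batch process that reloads the source table also sets the variables from its day/month/year columns, and assuming substitution variables are expanded inside the region spec (otherwise have the script inline the literal member names; CurDay/CurMonth/CurYear are illustrative, created once beforehand with 'alter database App.Db add variable ...'):
    alter database App.Db set variable CurDay 'Day_01';
    alter database App.Db set variable CurMonth 'Month_03';
    alter database App.Db set variable CurYear '2011';
    /* clear that date's region before the load */
    alter database App.Db clear data in region
      '{([DAY].[&CurDay],[MONTH].[&CurMonth],[YEAR].[&CurYear])}' physical;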

  • Maxl data load error

    Hi All,
    I'm trying to load data from a csv file to an Aggregate Storage cube.
    I keep getting a weird error: "Syntax error near ['$']".
    There is no $ sign in either the script or the flat file source from which I'm loading data.
    This same script worked earlier using the same rule file.
    I'm running the MaxL from the EAS console, and not invoking it through a batch.
    I only had to make a change to the cube name and the spool and error file names in the script.
    In the data file I had to make changes to a few member names, and I remapped them through the rule file, where I ignored one particular column.
    I've validated the rule file more than a couple of times, but do not get any errors.
    I'm not sure why I get such an error.
    Can anyone tell me what I'm doing wrong? Or if anyone has seen such an error before, can you help me out?
    Thanks,
    Anindyo
    Edited by: Anindyo Dutta on Feb 4, 2011 7:39 PM
    Edited by: Anindyo Dutta on Feb 4, 2011 7:40 PM

    Hey,
    I'm running the MaxL script through EAS; it doesn't seem like any of the records are going through.
    The script is below:
    login 'usr' 'pwd!' on 'ec2-184-72-157-215.compute-1.amazonaws.com';
    spool on to 'E:/dump/MaxL Scripts/ASO Scripts/Spool files/full_dt_ld_visa_aso_spool.log';
    import database 'VISA_ASO'.'VISA' data from data_file 'E:/dump/Data Load Files/ASO Load files Old format/master_data_load.csv' using server rules_file 'fulldtld' on error write to 'E:/dump/MaxL Scripts/ASO Scripts/Spool files/full_dt_ld_visa_aso_spool.err';
    spool off;
    logout;
    I rechecked a couple of times; it doesn't seem like I'm missing any quotes.
    Robb and Jeanette, thanks for your responses!
    I'm going to try with a smaller file and update this thread.
    Cheers!
    Anindyo
    Edited by: Anindyo Dutta on Feb 4, 2011 9:14 PM
    Edited by: Anindyo Dutta on Feb 4, 2011 9:16 PM
