Error regarding data load into Essbase cube for Measures using ODI

Hi Experts,
I am able to load metadata for dimensions into an Essbase cube using ODI, but when we try the same for loading data for Measures, we encounter the following errors:
Time,Item,Location,Quantity,Price,Error_Reason
'07_JAN_97','0011500100','0000001001~1000~12~00','20','12200','Cannot end dataload. Essbase Error(1003014): Unknown Member [0011500100] in Data Load, [1] Records Completed'
'28_JAN_97','0011500100','0000001300~1000~12~00','30','667700','Cannot end dataload. Essbase Error(1003014): Unknown Member [0011500100] in Data Load, [1] Records Completed'
'28_JAN_97','0011500100','0000001300~1000~12~00','500','667700','Cannot end dataload. Essbase Error(1003014): Unknown Member [0011500100] in Data Load, [1] Records Completed'
Can anyone look into this and reply quickly? It's an urgent requirement.
Regards,
Rohan

We are having a similar problem. We're using the IKM SQL to Hyperion Essbase (DATA) knowledge module. We are mapping the actual data to the field called 'Data' in the model. But it kicks everything out saying 'Unknown Member [Data] in Data Load', as if it's trying to read that field as a dimension member. We can't see what we missed in building the interface. I would think the knowledge module would just know that the Data field is, um, data; not a dimension member. Has anyone else encountered this?
Sabrina

Similar Messages

  • Essbase Studio Performance Issue : Data load into BSO cube

    Hello,
    Having successfully built my outline by loading members through Essbase Studio, I tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load was terminated with the following error: Socket read timed out.
    In the Studio properties file I typed in oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. Even if it did work, I am also not sure the streaming mode would provide a much faster alternative to the non-streaming mode. What I'd like to know is which Essbase settings I can change (either on the Essbase or the Studio server) in order to speed up my data load. I am loading into a block storage database with 3 dense, 8 sparse and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took 2.15 hours to load data into my application, where only 153 blocks were created with a block size of 24 B. Assuming that in my real application the number of blocks created is going to be at least 1,000 times more than this, I need to make some changes to the settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (view) from the same data source. All the cache settings in Essbase are at their defaults. Would changing the cache settings or buffer size, or using multiple threads, help to increase the performance? What would you suggest I do?
    Thank you very much.

    Hello user13695196,
    (sorry, I no longer remember my system number here)
    Before any optimisation attempts in the Essbase (or Studio) environment, you should definitely make sure that your source data query performs well on the Oracle DB.
    I would recommend:
    1. creating, in your DB source schema, a view from your SQL statement (the one behind your data load rule);
    2. querying against this view with any GUI (SQL Developer, TOAD, etc.) to fetch all rows and measure the time it takes to complete. Also count the returned number of rows, for your information and for future comparison of results. (See the sketch below for steps 1 and 2.)
    If your query runs longer than you think is acceptable, then
    a) check the DB statistics,
    b) check and/or consider creating indexes,
    c) if you are unsure, kindly ask your DBA for help. Usually they can help you very fast.
    (Don't be shy - a DBA is a human being like you and me :-) )
    Only when your SQL runs fast at the database (fast enough for you, or your DBA says it is the best you can achieve) should you move your effort over to Essbase.
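    A minimal sketch of steps 1 and 2 - the view, table and column names here are hypothetical, and the real SELECT is whatever statement sits behind the data load rule:

    -- step 1: wrap the load-rule SQL in a view (all names are hypothetical)
    CREATE OR REPLACE VIEW v_essbase_load AS
    SELECT f.period, f.entity, p.product_name, f.amount
    FROM   fact_table  f
    JOIN   dim_product p ON p.product_id = f.product_id;

    -- step 2: fetch all rows and note the elapsed time and the row count
    -- (in SQL*Plus or SQL Developer, SET TIMING ON before running)
    SELECT * FROM v_essbase_load;
    SELECT COUNT(*) FROM v_essbase_load;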
    One hint in addition:
    We often had problems when using views for data load (not only performance, but also other strange behavior). That's the reason I prefer to set up directly on (persistent) tables.
    Just keep in mind: if nothing else helps, create a table from your view and then query the data from this table for your Essbase data load. Normally, however, this should be your last option.
    Best Regards
    (also to you Torben :-) )
    Andre
    Edited by: andreml on Mar 17, 2012 4:31 AM

  • Data Load into the Cube based on Fiscal Year

    Hi All,
    I was told to load data into the cube from 3 different DataSources, but based on fiscal year - that is, to load the data for 2010, then 2009, and so on.
    Any suggestions please...

    Hi Dear,
    Write the following code in the start routine of the update rules.
    In BI 3.x:
                   DELETE DATA_PACKAGE WHERE calday LT '20090101' OR calday GT '20091231'.
    to load data only for 2009; do the same for 2010.
    In BI 7.0, in the start routine of the transformation:
                  DELETE SOURCE_PACKAGE WHERE calday LT '20090101' OR calday GT '20091231'.
    Note that the two conditions must be combined with OR, not AND: no record can fall both before 2009-01-01 and after 2009-12-31, so an AND filter would delete nothing.
    Regards
    Obaid

  • Data load into Essbase (append instead of overwrite)

    Hello,
    We are loading data from an Oracle table to a target Essbase cube. How do we make the ODI data load append (add to) existing values instead of overwriting the last value?
    For example, we have a data source with an M:1 mapping, so we incorporated a CASE statement [CASE WHEN Group A, B, C THEN D]. Is there a setting in ODI that allows data to be appended (added) instead of overwritten?
    Currently, only the data value of C ends up in D, instead of A+B+C.
    Thanks.

    You can put the CASE WHEN in the target mapping and still use a load rule; a load rule has nothing to do with what you do in the target mappings.
    Cheers
    John
    http://john-goodwin.blogspot.com/
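    One way to get additive behavior is to aggregate the M:1 mapping in the source SQL so that only the pre-summed value reaches Essbase (alternatively, an Essbase load rule can be configured to add to existing values). A minimal sketch, with hypothetical table and column names:

    -- collapse source members A, B, C into target member D and sum their
    -- values before the load, so A+B+C lands in D (names are hypothetical)
    SELECT CASE WHEN product IN ('A', 'B', 'C') THEN 'D' ELSE product END AS product,
           time_id,
           SUM(amount) AS amount
    FROM   src_fact
    GROUP BY CASE WHEN product IN ('A', 'B', 'C') THEN 'D' ELSE product END,
             time_id;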

  • Error Regarding Data loading in Planning and budgeting cloud service.

    Hi,
    I was able to load data into Planning and Budgeting Cloud Service a month ago.
    I loaded it via Administration -> Import and Export -> Import data from File.
    Today I loaded the same file to the cloud instance, for the same entity as before, after clearing the existing data.
    I am getting the following error during validation itself:
    Unrecognized column header value(s) were specified for the "Account" dimension: "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec", "Jan", "Feb", "Mar", "Point-of-View", "Data Load Cube Name". (Check delimiter settings, also, column header values are case sensitive.)
    I checked the source file and everything is correct. I actually loaded the same file before and was able to load it.
    Does anyone know the problem behind this? Can anyone give me a suggestion, please?
    Thanks in Advance
    Pragadeesh.J

    Thanks for your response John.
    I had the Period and Year dimensions in the columns.
    I moved the Year dimension from the columns to the POV and loaded the data file; I was able to load it without any errors.
    I then changed the layout back to its original form and loaded again. I didn't get any errors this time either.
    It worked somehow.
    Thank you
    Cheers John

  • Reg data loading into essbase using text files

    Can we load data in parallel from 2 files into the same cube using 2 different rules files? Or do we have to load one file at a time?
    Could someone clarify this?

    I do not believe that by selecting two data files and two load rules in AAS you are getting parallel data loading. If you look at the log, you will find them to be sequential. For ASO cubes, AAS loads the data into a buffer and then applies it. The only real parallel data loading is using multiple threads for one file; other than that, it is sequential.

  • Flat File: no data load into Info Cube

    Hi there,
    I am trying to load a flat file. When I simulate the upload, it works well, but no data is loaded into my InfoCube, and when I try to define a query, none is available.
    Can someone provide me with a solution for this problem?
    With rgds
    Oktay Demir

    Hi Oktay,
    in addition to A.H.P.'s remarks, check whether
    - data is posted not only into the PSA but also into the data target,
    - the update rules are active,
    - the monitor status in cube administration is green,
    - the request is available for reporting within cube administration.
    Cheers
    Sven

  • URGENT: regarding data load in essbase through ODI after upgradation

    Hi,
    I have got this error. Please give me the solution...
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (most recent call last):
    File "<string>", line 29, in <module>
         at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
         at org.hsqldb.jdbc.JDBCStatement.fetchResult(Unknown Source)
         at org.hsqldb.jdbc.JDBCStatement.executeQuery(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
    java.sql.SQLException: java.sql.SQLException: unexpected token: -
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.execInBSFEngine(SnpScriptingInterpretor.java:346)
         at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.exec(SnpScriptingInterpretor.java:170)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java:2458)
         at oracle.odi.runtime.agent.execution.cmd.ScriptingExecutor.execute(ScriptingExecutor.java:48)
         at oracle.odi.runtime.agent.execution.cmd.ScriptingExecutor.execute(ScriptingExecutor.java:1)
         at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2906)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2609)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:540)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:453)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1740)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:338)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:214)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:272)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:263)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:822)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:123)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:83)
         at java.lang.Thread.run(Thread.java:662)
    Thanks,
    Rubi

    I have used a condition on the data column.
    If I delete that condition and map the data column directly, the interface works fine.
    But whenever I use a normal CONVERT function, such as CONVERT("Source column name", Numeric), it gives me that error.
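    The "unexpected token: -" suggests the staging engine is parsing a hyphen inside an identifier as a minus sign. A minimal sketch of the idea, with hypothetical column and table names (whether it applies here depends on the actual source column name):

    -- without quotes, the hyphen is parsed as subtraction and the parse fails:
    --   SELECT CONVERT(SOURCE-COLUMN, NUMERIC) FROM src_table;
    -- double-quoting the identifier keeps the hyphen inside the name:
    SELECT CONVERT("SOURCE-COLUMN", NUMERIC) AS data_value
    FROM   src_table;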

  • I need format for data in excel file load into info cube to planning area.

    Hi gurus,
    I need the format for the Excel data file that is loaded into an InfoCube and on to a planning area.
    Can you tell me what I should maintain in the header?
    What I have so far is like:
    plant,location,customer,product,history qty,calendar
    100,delhi,suresh,nokia,250,2011211
    Please explain whether this is right or wrong, and send me details about the Excel file format.
    babu

    Hi Babu,
    The file format should match the structure you want to upload; the column sequence of the file should follow the communication structure.
    Like:
    Initial columns with the characteristics (ex: plant, location, customer, product)
    Then the date column (check the date format) (ex: calendar)
    Last columns with the key figures (ex: history qty)
    Hope this helps.
    Regards,
    Nawanit

  • How to make data loaded into cube NOT ready for reporting

    Hi Gurus: Is there a way by which data loaded into a cube can be made NOT available for reporting?
    Please suggest.
    Thanks

    See, by default a request that has been loaded to a cube is available for reporting. Now, if you have an aggregate, the system needs this new request to be rolled up into the aggregate as well before it becomes available for reporting. The reason: queries are written against the cube, not against the aggregate, so you only know at runtime whether a query will hit a particular aggregate. This means that whether a query gets its data from the aggregate or from the cube, it should ultimately get the same data in both cases. Now, if a request is added to the cube but not to the aggregate, these two objects will contain different data. The system takes the safer route of not making the un-rolled-up data visible at all, rather than serving inconsistent data.
    Hope this helps...

  • ODI : how to raise cross reference error before loading into Essbase?

    Hi John, if you read my post: I want to say that you impress me! Really, thanks for your blog.
    Today, my problem is:
    - I received a bad-quality data file from an ERP extract
    - I have a cross-reference table (Source ==> Target)
    - >> How do I raise the error before loading into Essbase?
    My idea is the following (first of all, I'm not sure it is a good one, and I am also having trouble doing it in ODI!):
    - Step 1: make a JOIN between data.txt and the cross-reference table ==> create a table DATA_STEP1 in the ODISTAGING schema (the columns of DATA_STEP1 are the columns of data.txt plus those of the cross-reference tables; there are more than 20 columns in my case)
    - Step 2: check that there is no NULL value in the target columns (a NULL means that data.txt contains a value that is not defined in my cross-reference table) by using a filter (Filter = Target_Account IS NULL OR Target_Entity IS NULL OR ...). The result of this interface is sent to a reject.txt file; if reject.txt is not empty, a mail is sent to the administrator. (See the sketch after this list.)
    - Step 3: do the opposite: Filter = NOT (Target_Account IS NULL OR Target_Entity IS NULL OR ...) ==> the result is sent to the DATA_STEP3 table
    - Step 4: run the mapping proper: source DATA_STEP3 (the clean and verified data!) joined with the cross-reference tables, sending the data into Essbase - normally, no record is rejected!
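    A minimal sketch of step 2, with hypothetical table and column names matching the steps above (REJECT_STEP2 is a hypothetical staging table behind reject.txt):

    -- keep the rows whose cross-reference lookup failed,
    -- i.e. at least one target column is NULL
    INSERT INTO reject_step2
    SELECT *
    FROM   data_step1
    WHERE  target_account IS NULL
       OR  target_entity  IS NULL;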
    My main problem is: what is the right IKM to send data into the DATA_STEP1 or DATA_STEP3 tables, which are Oracle tables in my ODISTAGING schema? I tried the IKM Oracle Incremental Update but I get an error, and actually I don't need an update (which is time-consuming), I just need an INSERT!
    I'm just looking for an 'IKM SQL to Oracle'...
    regards
    xavier

    Thanks John: very fast!
    I understand better now which IKM is useful.
    I found more information about error follow-up with ODI: http://blogs.oracle.com/dataintegration/2009/10/did_you_know_that_odi_generate.html
    and I decided to activate integrity control in ODI.
    I load:
    - data.txt into ODITEMP.T_DATA
    - transco_account.csv into ODITEMP.T_TRANSCO_ACCOUNT
    - transco_entity.csv into ODITEMP.T_TRANSCO_ENTITY
    - and so on ...
    Moreover, I created integrity constraints between T_DATA and T_TRANSCO_ACCOUNT, T_TRANSCO_ENTITY, etc., so I expected ODI to raise the bad records for me in E$_DATA (the error table).
    However, I have one issue when loading data.txt into T_DATA, because I have no ID or primary key. I read in a training book that I could use a SEQUENCE; I tried, but was unsuccessful :-(
    Is there another simple way to create a primary key automatically (T_DATA is in an Oracle schema, of course)? Thanks in advance.
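    A common Oracle approach is a sequence plus a before-insert trigger; on Oracle 12c or later an identity column does the same job. A minimal sketch against the T_DATA table described above (the sequence, trigger and constraint names are hypothetical):

    -- populate a surrogate key on every insert
    CREATE SEQUENCE t_data_seq;

    ALTER TABLE oditemp.t_data ADD (id NUMBER);

    CREATE OR REPLACE TRIGGER t_data_bi
    BEFORE INSERT ON oditemp.t_data
    FOR EACH ROW
    BEGIN
      :NEW.id := t_data_seq.NEXTVAL;
    END;
    /

    ALTER TABLE oditemp.t_data
      ADD CONSTRAINT t_data_pk PRIMARY KEY (id);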

  • How is data loaded into the BCS cubes?

    We are on SEM-BW 4.0, package level 13. I'm totally new to BCS from the BW viewpoint; I'm not the SEM person, but I support the BW side.
    Can anyone explain to me, or point me to documentation that explains, how the data gets loaded into cube 0BCS_C11 Consolidation (Company/Cons Profit Center)? I installed the delivered content and I can see various export DataSources that were generated; however, I do not see the traditional update rules, InfoSources, etc.
    The SEM person has test-loaded some data into this cube, and I can see the request under 'Manage' and even display the content. However, the status light remains yellow, and the data is not available for reporting unless I manually set the status to green.
    Also, on the Manage tab under InfoPackage I see this note: Request loaded using the APO interface without monitor log.
    Any and all assistance is greatly appreciated.
    Thanks
    Denny

    Hi Dennis,
    For reporting, the virtual cube 0BCS_VC11, which is fed by 0BCS_C11, is used.
    You don't need to be concerned about the yellow status; the request is closed automatically after reaching 50,000 records.
    About the data stream: you are right, the BW cube is used.
    And if your BW has cubes with information for BCS on a monthly basis, you may arrange a load from a data stream.
    I make this BW cube as similar in structure to 0BCS_C11 as possible, for a smooth data load. The cube might be fed by another cube that contains the information in another format; in the update rules of the first cube, you may transform the data so that the cube structures are compatible.
    Best regards,
    Eugene

  • AWM Newbie Question: How to filter data loaded into cubes/dimensions?

    Hi,
    I am trying to filter the amount of data loaded into my dimensions in AWM (e.g., I only want to load 1-2 years' worth of data for development purposes). I can't seem to find a place in AWM where you can specify a WHERE clause. Is there something else I must do to filter the data?
    Thanks

    Hi there,
    Which release of Oracle OLAP are you using? 10g? 11g?
    You can use database views to filter your dimension and cube data, and then map these views in AWM.
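    For instance, a minimal sketch of such a filtering view, assuming a hypothetical fact table SALES_FACT with a DATE-typed key; AWM would then be mapped to the view instead of the base table:

    -- restrict the mapped cube data to two years for development
    -- (table and column names are hypothetical)
    CREATE OR REPLACE VIEW sales_fact_dev AS
    SELECT *
    FROM   sales_fact
    WHERE  sale_date >= DATE '2009-01-01'
      AND  sale_date <  DATE '2011-01-01';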
    Thanks,
    Stuart Bunby
    OLAP Blog: http://oracleOLAP.blogspot.com
    OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
    OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
    DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html

  • Essbase Error(1003050): Data Load Transaction Aborted With Error (1220000)

    Hi
    We are using 10.1.3.6, and got the following error for just one of the thousands of transactions last night.
    cannot end dataload. Essbase Error(1003050): Data Load Transaction Aborted With Error (1220000)
    The data seems to have loaded, regardless of the error. Should we be concerned, and does this suggest something is not right somewhere?
    Your assistance is appreciated.
    Cheers

    Hi John
    Not using a load rule.
    There were two other records that were rejected because of missing members; the error message for those was different and easily fixed.
    But this error doesn't tell us much. I will monitor the next run to see whether the problem persists.
    Thanks

  • Cannot Lock and Send data to an Essbase cube

    Hi all,
    One of our customers is executing a macro script to lock and send data to the Essbase cube from an Excel sheet.
    They reported several cases where users submit their data and later discover that their changes are not in Essbase.
    The calls to EssVRetrieve (to lock the blocks) and EssVSendData are both returning successfully and there is no error message received while executing the above macros.
    I reviewed the application log file and found the following message:
    [Mon Nov 24 18:59:43 2008]Local/Applicn///Warning(1080014)
    Transaction [ 0xd801e0( 0x492b4bb0.0x45560 ) ] aborted due to status [1014031].
    I analysed the above message: the user is trying to lock the database while a lock is already applied to it and some operation is being performed on it, and because of that the transaction was aborted. But the customer says no concurrent operation was being performed at that time.
    Can anyone help me in this regard?
    Thanks,
    Raja

    The error message for error 1014031 is 'Essbase could not get a lock in the specified time.' The first thought I have is that perhaps some users have the 'Update Mode' option set in their Essbase options and thus, when they retrieve data, they inadvertently lock the data blocks. If that is the case, you will probably see this issue sporadically, as the locks are automatically released when the user disconnects from Essbase.
    To make it stop, you will have to go to every user's desktop and make sure they have that Essbase option turned off. Further, you will have to look at any worksheets they use that may have an Essbase options range name stored on them; the range name is stored as a string and includes a setting for update mode. Here is a sample that I created for this post, where I first turned update mode 'on' and then turned it 'off':
    A1100000001121000000001100120_01-0000
    A1100000000121000000001100120_01-0000
    Note that the 11th character in the first string is '1', which indicates that Update Mode is 'on'; in the second string it is '0', i.e. 'off'.
    This behavior, particularly with update mode, is one of the Excel behaviors I disliked, and it pushed me to design our Dodeca product. In Dodeca, the administrator controls all Essbase options and can either set individual options to the values they want or allow the user to choose their own. Most of our customers do not allow users to set update mode.
    Tim Tow
    Applied OLAP, Inc
