Data load into Essbase (append instead of overwrite)
Hello,
We are loading data from an Oracle table into a target Essbase cube. How do we configure the ODI data load to append (add) values instead of overwriting with the last value?
Example: our data source has an M:1 mapping, so we incorporated a CASE statement [Case when Group A, B, C then D]. Is there a setting in ODI that allows data to append (add) instead of overwriting?
Currently only the value from C is loaded into D, instead of A+B+C going into D.
Thanks.
You can put the CASE WHEN in the target mapping and still use a load rule; the load rule has nothing to do with what you do in the target mappings.
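For the append part, a rough sketch of the idea (table and column names below are hypothetical, not from your model): let the source query collapse the M:1 rows with a GROUP BY so that only one summed row per target intersection reaches Essbase, for example:
-- Collapse the M:1 mapping so a single summed row per intersection is sent
SELECT period,
       entity,
       CASE WHEN account IN ('A', 'B', 'C') THEN 'D' ELSE account END AS account,
       SUM(amount) AS data
FROM   src_fact
GROUP BY period,
         entity,
         CASE WHEN account IN ('A', 'B', 'C') THEN 'D' ELSE account END;
An Essbase load rule can also be set to add to existing values instead of overwriting them, but pre-aggregating in the source keeps the logic in one place.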
Cheers
John
http://john-goodwin.blogspot.com/
Similar Messages
-
Error regarding data load into Essbase cube for Measures using ODI
Hi Experts,
I am able to load metadata for dimensions into an Essbase cube using ODI, but when we try the same for loading data for Measures we encounter the following errors:
Time,Item,Location,Quantity,Price,Error_Reason
'07_JAN_97','0011500100','0000001001~1000~12~00','20','12200','Cannot end dataload. Essbase Error(1003014): Unknown Member [0011500100] in Data Load, [1] Records Completed'
'28_JAN_97','0011500100','0000001300~1000~12~00','30','667700','Cannot end dataload. Essbase Error(1003014): Unknown Member [0011500100] in Data Load, [1] Records Completed'
'28_JAN_97','0011500100','0000001300~1000~12~00','500','667700','Cannot end dataload. Essbase Error(1003014): Unknown Member [0011500100] in Data Load, [1] Records Completed'
Can anyone look into this and reply quickly, as it is an urgent requirement?
Regards,
Rohan
We are having a similar problem. We're using the IKM SQL to Hyperion Essbase (DATA) knowledge module. We are mapping the actual data to the field called 'Data' in the model, but it kicks everything out saying 'Unknown Member [Data] in Data Load', as if it's trying to read that field as a dimension member. We can't see what we missed in building the interface. I would think the knowledge module would just know that the Data field is, um, data, not a dimension member. Has anyone else encountered this?
Sabrina -
Reg data loading into essbase using text files
Can we load data in parallel from 2 files into the same cube using 2 different rules files? Or do we have to load one file at a time?
Could someone clarify this?
I do not believe that by selecting two data files and two load rules in AAS you are getting parallel data loading. If you look at the log, you will find them to be sequential. For ASO cubes, AAS loads the data into a buffer and then applies it. The only real parallel data loading is using multiple threads for one file. Other than that, it is sequential.
-
Aggregating data loaded into different hierarchy levels
I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE dimensioned by a time dimension (parent-child type).
I read the help in the DML Reference of the OLAP Worksheet and it says the following:
When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
DEFINE all_but_q4 VALUESET time
LIMIT all_but_q4 TO ALL
LIMIT all_but_q4 REMOVE 'Q4'
Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement.
RELATION time.r PRECOMPUTE (all_but_q4)
How do I do this for more than one dimension?
Below is my case study:
DEFINE T_TIME DIMENSION TEXT
T_TIME
200401
200402
200403
200404
200405
200406
200407
200408
200409
200410
200411
2004
200412
200501
200502
200503
200504
200505
200506
200507
200508
200509
200510
200511
2005
200512
DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
-----------T_TIME_HIERLIST-------------
T_TIME H_TIME
200401 2004
200402 2004
200403 2004
200404 2004
200405 2004
200406 2004
200407 2004
200408 2004
200409 2004
200410 2004
200411 2004
2004 NA
200412 2004
200501 2005
200502 2005
200503 2005
200504 2005
200505 2005
200506 2005
200507 2005
200508 2005
200509 2005
200510 2005
200511 2005
2005 NA
200512 2005
DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
EQ -
aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
T_TIME PRUEBA2_IMPORTE
200401 NA
200402 NA
200403 2,00
200404 2,00
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
2004 4,00 ---> here it's right!! but...
200412 NA
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
2005 10,00 ---> here it must be 30,00, not 10,00
200512 NA
DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
T_TIME PRUEBA2_IMPORTE_STORED
200401 NA
200402 NA
200403 NA
200404 NA
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
2004 NA
200412 NA
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
2005 10,00
200512 NA
DEFINE OBJ262568349 AGGMAP
AGGMAP
RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
AGGINDEX NO
CACHE NONE
END
DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
T_TIME_AGGRHIER_VSET1 = (H_TIME)
DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
T_TIME_AGGRDIM_VSET1 = (2005)
Regards,
Mel.
Mel,
There are several different types of "data loaded into different hierarchy levels" and the approach to solving the issue is different depending on the needs of the application.
1. Data is loaded symmetrically at uniform mixed levels. An example would be loading data at "quarter" in historical years, but at "month" in the current year; it does /not/ include data loaded at both quarter and month within the same calendar period.
= solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
2. Data is loaded at both a detail level and its ancestor, as in your example case.
= the aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing that it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, which is then added as one of the children of the aggregate node. This enables repeatable calculation as well as auditability of the resultant value.
Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value and returns it, for a result of 10; the aggregate command would recalculate based on January and February, for a result of 20.
To solve your usage case I would suggest a hierarchy that looks more like this:
DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
-----------T_TIME_HIERLIST-------------
T_TIME H_TIME
200401 2004
200402 2004
200403 2004
200404 2004
200405 2004
200406 2004
200407 2004
200408 2004
200409 2004
200410 2004
200411 2004
200412 2004
2004_SELF 2004
2004 NA
200501 2005
200502 2005
200503 2005
200504 2005
200505 2005
200506 2005
200507 2005
200508 2005
200509 2005
200510 2005
200511 2005
200512 2005
2005_SELF 2005
2005 NA
Resulting in the following cube:
T_TIME PRUEBA2_IMPORTE
200401 NA
200402 NA
200403 2,00
200404 2,00
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
200412 NA
2004_SELF NA
2004 4,00
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
200512 NA
2005_SELF 10,00
2005 30,00
3. Data is loaded at a level based upon another dimension; for example product being loaded at 'UPC' in EMEA, but at 'BRAND' in APAC.
= this can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
= often requires the use of ALLOCATE to push the data down to the leaves so that the aggregate values are calculated correctly during aggregation.
-
ODI : how to raise cross reference error before loading into Essbase?
Hi John, if you read my post: I want to say that you impress me! Really, thanks for your blog.
Today, my problem is :
- I received a bad quality data file from ERP extract
- I have cross reference table (Source ==> Target)
- >> How to raise the error before loading into Essbase !
My Idea is the following, (first of all, I'm not sure if it is a good one, and also I meet issue to do it in ODI !)
- Step 1 : make JOIN between data.txt and cross-reference Table ==> Create a table DATA_STEP1 in the ODISTAGING schema (the columns of DATA_STEP1 are the addition of columns of data.txt those of cross-references Tables (... there is more than 20 columns in my case)
- Step 2 : Control if there is no NULL value in the Target Column (NULL means that the data.txt file contains value that are not defined in my cross reference Table) by using Filter ( Filter = Target_Account IS NULL or Target_Entity IS NULL or ...)
The result of this interface is send to reject.txt file - if reject.txt file is not empty then a mail is sent to the administrator
- Step 3 : make the opposite : Filter NOT (Target_Account IS NULL or Target_Entity IS NULL ... ) ==> the result is sent in DATA_STEP3 Table
- Step 4 : run properly the mapping : source : DATA_STEP3 (the clean and verified data !) with cross reference Tables and send data into Essbase - NORMALY, there is not rejected record !
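In plain SQL (table and column names here are hypothetical, just to illustrate the idea), steps 1 to 3 amount to roughly this:
-- Step 1: join the staged file to the cross-reference tables
CREATE TABLE odistaging.data_step1 AS
SELECT d.*,
       xa.target_account,
       xe.target_entity
FROM   odistaging.t_data d
LEFT JOIN odistaging.t_transco_account xa ON xa.source_account = d.account
LEFT JOIN odistaging.t_transco_entity  xe ON xe.source_entity  = d.entity;
-- Step 2: rows with no match in the cross reference become the reject set
SELECT * FROM odistaging.data_step1
WHERE  target_account IS NULL OR target_entity IS NULL;
-- Step 3: the clean rows that continue on to Essbase
CREATE TABLE odistaging.data_step3 AS
SELECT * FROM odistaging.data_step1
WHERE  target_account IS NOT NULL AND target_entity IS NOT NULL;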
My main problem is: what is the right IKM to send data into the DATA_STEP1 or DATA_STEP3 table, which are Oracle tables in my ODISTAGING schema? I tried IKM Oracle Incremental Update but I get an error, and actually I don't need an update (which is time-consuming), I just need an INSERT!
I'm just looking for an 'IKM SQL to Oracle'...
regards
Xavier
Thanks John: very speedy!
I understood better now which IKM is useful.
I found other information about the error followup with ODI : http://blogs.oracle.com/dataintegration/2009/10/did_you_know_that_odi_generate.html
and I decided to activate Integrity Control in ODI:
I load :
- data.txt in ODITEMP.T_DATA
- transco_account.csv in ODITEMP.T_TRANSCO_ACCOUNT
- transco_entity.csv in ODITEMP.T_TRANSCO_ENTITY
- and so on ...
- Moreover, I created integrity constraints between T_DATA and T_TRANSCO_ACCOUNT and T_TRANSCO_ENTITY ... so I expected that ODI would flag the bad records for me in E$_DATA (the error table)!
However I have one issue when loading data.txt into T_DATA, because I have no ID or Primary Key ... I read in a training book that I could use a SEQUENCE ... I tried but was unsuccessful ... :-(
Is there another simple way to create a Primary Key automatically (T_DATA is in an Oracle schema, of course)?
Thanks in advance
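For reference, one common way to get an automatic key on an Oracle staging table is a sequence plus a before-insert trigger; a small sketch with hypothetical names (adjust the schema and table to yours):
-- Add a surrogate key column, a sequence, and a trigger that fills the column on insert
ALTER TABLE oditemp.t_data ADD (row_id NUMBER);
CREATE SEQUENCE oditemp.t_data_seq;
CREATE OR REPLACE TRIGGER oditemp.t_data_bi
BEFORE INSERT ON oditemp.t_data
FOR EACH ROW
BEGIN
  SELECT oditemp.t_data_seq.NEXTVAL INTO :NEW.row_id FROM dual;
END;
/
ALTER TABLE oditemp.t_data ADD CONSTRAINT t_data_pk PRIMARY KEY (row_id);
-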
How can I add dimensions and load data into Planning applications?
Now please let me know how I can add dimensions and load data into a Planning application without doing it manually.
You can use tools like ODI or DIM or HAL to load metadata and data into Planning applications.
The data load can be done at the Essbase end using a rules file, but metadata changes should flow from Planning to Essbase through any of the above-mentioned tools; there are also many other ways to achieve the same.
- Krish -
Decimals cut off loading into Essbase via DIM
Have you ever run into the issue of small numbers (9 decimals) getting cut off/rounded when loading into Essbase via DIM? When I try to load rates into my Essbase app via DIM they get cut off at 6 decimals, even though the entire rate has 9. For instance, the rate 0.752615338 coming out of the source gets stored in the target as 0.752615. Also, if I load a text file with the 9-digit rate into my target app via DIM it STILL gets cut off. Therefore, I've isolated the issue to the loading of data into the target. I cannot modify the precision, however, for my target. Any thoughts?
-
How to Integrate Oracle Ebusiness suite to FDQM to load into Essbase
Hi Experts,
I need a document on how to integrate Oracle EBS with FDQM. My target is Essbase. I know how to integrate with HFM and load data into HFM. So please share a document on how to get a source file from E-Business Suite and load it into Essbase. Please share an example.
Thanks in advance
Hi;
How to install Oracle E-Business Suite R12.1.3 on RAC environments: check the notes and advice mentioned in the threads below
RAC-EBS with ASM
Re: RAC for EBS R12
rac-r12-ebs-asm
Re: EBS R12 with RAC and ASM on AIX 5.3
RAC-R11-R12
Upgrade the 11.5.10.2 Instance with 10gRAC setup
How to implement a shared APPL_TOP: Shared appltop in R12
Shared appltop in R12
How to configure HTTPS for Oracle E-Business Suite R12.1.3: Enabling SSL in Release 12 [ID 376700.1]
Enabling SSL with Oracle Application Server 10g and the E-Business Suite [ID 340178.1]
Also see Steven Chan's blog entry:
http://blogs.oracle.com/stevenChan/2009/08/ssl_advanced_configuration_wizard_ebs12.html
Regards
Helios -
Adding leading zeros before data loaded into DSO
Hi
In the PROD_ID field below, some IDs are missing their leading zeros when data is loaded into BI from SRM. The data type is character. If leading zeros are missing, DSO data activation fails and we have to add them manually in the PSA table. I want to add the leading zeros, if they are missing, before the data is loaded into the DSO. The total character length is 40, so for example if the value is 1502 there should be 36 zeros in front of it, and if it is 265721 there should be 34. Only values of length 4 or 6 are coming in, so 34 or 36 zeros are always needed in front of them when zeros are missing.
Can we use the CONVERSION_EXIT_ALPHA_INPUT function module? As this is a character field I'm not sure how to use it in this case. Do I need to convert it to an integer first?
Can someone please give me sample code? We're using the BW 3.5 data flow to load data into the DSO. Please give sample code and indicate where the code needs to be written, either in the rule type or in the start routine.
Hi,
Can you check, at the InfoObject level, what kind of conversion routine it uses?
Use transaction RSD1, enter your InfoObject and display it.
At the DataSource level you can also see what external/internal format is maintained.
If your InfoObject uses the ALPHA conversion routine then it will get the leading 0s automatically.
Check how the data is coming from the source, using RSA3.
If you are getting this issue only for certain records then you need to check those records.
Thanks -
Data load into SAP ECC from Non SAP system
Hi Experts,
I am very new to BODS and I want to load historical data from a non-SAP source system into SAP R/3 tables like VBAK and VBAP using BODS. Can you please provide steps, documents, or guidelines on how to achieve this?
Regards,
Monil
Hi
In order to load into SAP you have the following options
1. Use IDocs. There are several standard IDocs in ECC for specific objects (MATMAS for materials, DEBMAS for customers, etc., ) You can generate and send IDocs as messages to the SAP Target using BODS.
2. Use LSMW programs to load into SAP Target. These programs will require input files generated in specific layouts generated using BODS.
3. Direct Input - The direct input method is to write ABAP programs targeting specific tables. This approach is very complex, and hence a lot of thought needs to be applied.
The OSS Notes supplied in previous messages are all excellent guidance to steer you in the right direction on the choice of load, etc.
However, the data load into SAP needs to be object specific. So targeting merely the sales tables will not help, as the sales document data held in the VBAK and VBAP tables you mentioned is related to Articles. These tables will hold sales document data for already created articles. So if you want to specifically target these tables, then you may need to prepare an LSMW program for the purpose.
To answer your question on whether it is possible to load objects like Materials, customers, vendors etc using BODS, it is yes you can.
Below is a standard list of IDocs that you can use for this purpose to load into SAP ECC system from a non SAP system.
Customer Master - DEBMAS
Article Master - ARTMAS
Material Master - MATMAS
Vendor Master - CREMAS
Purchase Info Records (PIR) - INFREC
The list is endless...
In order to achieve this, you will need to get the functional design consultants to provide ETL mapping from the legacy data to the IDoc target schema and fields (better to have the SAP technical table names and fields too). You should then prepare the data after putting it through the standard check-table validations for each object, along with any business-specific conversion rules and validations. Having prepared this data, you can either generate flat-file output for load into SAP using LSMW programs or generate IDoc messages to the target SAP system.
If you are going to post IDocs directly into the SAP target using BODS, you will need to create a partner profile for BODS to send IDocs and define the IDocs you need as inbound IDocs. There are a few more settings, like RFC connectivity and authorizations, needed for BODS to successfully send IDocs to the SAP target.
Do let me know if you need more info on any specific queries or issues you may encounter.
kind regards
Raghu -
How to delete the data loaded into MySQL target table using Scripts
Hi Experts
I created a job with a validation transform. Data that passes the validation is loaded into a Pass table and data that fails is loaded into a Failed table.
My requirement is: if any data was loaded into the Failed table, then I have to delete the data loaded into the Pass table using a script.
In the script I have written the code as
sql('database','delete from <tablename>');
but as it is an SQL query execution, it raises an exception for the query.
How can I delete the data loaded into the MySQL target table using scripts?
Please guide me on this error.
Thanks in Advance
PrasannaKumar
Hi Dirk Venken,
I got the solution; my mistake was that the query was not correct for MySQL.
This works:
sql('MySQL', 'truncate world.customer_salesfact_details')
This was the query raising the error:
sql('MySQL', 'delete table world.customer_salesfact_details')
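For reference, if a row-level delete is ever preferred over truncate, the standard MySQL form drops the word 'table':
sql('MySQL', 'delete from world.customer_salesfact_details');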
Thanks for your concern
PrasannaKumar -
How to make data loaded into cube NOT ready for reporting
Hi Gurus: Is there a way by which data loaded into a cube can be made NOT available for reporting?
Please suggest.
Thanks
See, by default a request that has been loaded to a cube will be available for reporting. Now if you have an aggregate, the system needs this new request to be rolled up to the aggregate as well before it is available for reporting. The reason? Because we write queries against the cube, not against the aggregate, so you only know whether a query will hit a particular aggregate at its runtime. This means that whether a query gets data from the aggregate or the cube, it should ultimately get the same data in both cases. Now if a request is added to the cube but not to the aggregate, then there will be different data in these two objects. The system takes the safer route of not making the un-rolled-up data visible at all, rather than having inconsistent data.
Hope this helps... -
AWM Newbie Question: How to filter data loaded into cubes/dimensions?
Hi,
I am trying to filter the amount of data loaded into my dimensions in AWM (e.g., I only want to load 1-2 years' worth of data for development purposes). I can't seem to find a place in AWM where you can specify a WHERE clause... is there something else I must do to filter data?
Thanks
Hi there,
Which release of Oracle OLAP are you using? 10g? 11g?
You can use database views to filter your dimension and cube data and then map these in AWM.
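For example (table, column and view names below are hypothetical), a filtering view for the cube's fact table could look like:
-- Keep only two years of data for the development build
CREATE OR REPLACE VIEW sales_fact_dev_v AS
SELECT *
FROM   sales_fact
WHERE  time_key >= DATE '2009-01-01';
You would then map the cube to SALES_FACT_DEV_V instead of SALES_FACT in AWM, and do the same for the time dimension table if needed.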
Thanks,
Stuart Bunby
OLAP Blog: http://oracleOLAP.blogspot.com
OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html -
Essbase Studio Performance Issue : Data load into BSO cube
Hello,
Having successfully built my outline by member loading through Essbase Studio, I tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load got terminated with the following error: Socket read timed out.
In the Studio properties file I typed in oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. Even if it did work, I am also not sure the streaming mode is going to provide a much faster alternative to working in non-streaming mode. What I'd like to know is which Essbase settings I can change (either on the Essbase or the Studio server) in order to speed up my data load. I am loading into a Block Storage database with 3 dense, 8 sparse and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took 2.15 hours to load data into my application, where only 153 blocks were created with a block size of 24B. Assuming that in my real application the number of blocks created is going to be at least 1000 times more than this, I need to make some changes to the settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (view) from the same data source. All the cache settings in Essbase are at their defaults. Would changing cache settings, buffer size or multiple threads help to increase the performance? Or what would you suggest that I do?
Thank you very much.
Hello user13695196,
(sorry, I no longer remember my system number here)
Before attempting any optimisation in the Essbase (also Studio) environment you should definitely make sure that your source data query performs well on the Oracle DB.
I would recommend:
1. Create a view in your source DB schema from your SQL statement (the one behind your data load rule).
2. Query this view with any GUI (SQL Developer, TOAD, etc.) to fetch all rows and measure the time it takes to complete. Also note the number of returned rows for your information and for future comparison of results. (A small sketch follows.)
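A minimal sketch of steps 1 and 2 (schema, view and column names are made up for illustration):
-- Step 1: wrap the load SQL in a view in the source schema
CREATE OR REPLACE VIEW stage.essbase_load_v AS
SELECT p.period_name, a.account_name, f.amount
FROM   stage.fact_tbl f
JOIN   stage.dim_period  p ON p.period_key  = f.period_key
JOIN   stage.dim_account a ON a.account_key = f.account_key;
-- add the remaining joined dimension tables in the same way
-- Step 2: run the view once and note the elapsed time and row count
SELECT COUNT(*) FROM stage.essbase_load_v;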
If your query runs longer than you think is acceptable, then
a) check DB statistics,
b) check and/or consider creating indexes,
c) if you are unsure then kindly ask your DBA for help. Usually they can help you very fast.
(Don't be shy - a DBA is a human being like you and me :-) )
Only when your SQL runs fast at the database (fast enough for you, or your DBA says it is the best you can achieve) should you move your effort over to Essbase.
One hint in addition:
We often had problems when using views for data loads (not only performance but also other strange behavior). That's the reason why I prefer to build directly on (persistent) tables.
Just to keep in mind: if nothing helps, create a table from your view and then query your data from that table for your Essbase data load. Normally, however, this should be your last option.
Best Regards
(also to you Torben :-) )
Andre
Edited by: andreml on Mar 17, 2012 4:31 AM -
How to extract data from Hyperion Essbase and load it into another Essbase application
Hi Guru,
I want to extract data from one Hyperion Essbase application and then load it into another Hyperion Essbase application.
Does anyone know how to do this?
Please suggest the LKM and IKM that should be used for this.
Thank you very much John,
I am using the report script method, not your least liked method (the calc script, as you mentioned in your blog)... :)
I don't know how to check the Essbase logs.
What is a load rule and how do I use it?
My ODI version is: 10.1.3.6.2
One more thing I want to ask: I don't have Hyperion Planning as a technology in Topology Manager in my ODI.
Though, in my oracledi\lib\scripts\xml directory, I have TECH_HyperionPlanning.xml.
So how do I import the technology to get this, or do I need something else for this?