Include Data Load completion time in OBIEE

Hi all,
We are using DAC (Data Warehouse Administration Console) for our data load activity. My client wants the data load completion time shown on all the dashboards.
May I know how to do this?

Hi,
You should add the DAC tables (from the DAC repository) to the Oracle BI repository. From there you will be able to report on the ETL data, including load completion times.
Good Luck,
Daan Bakboord
http://obibb.wordpress.com
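For example, once the DAC repository tables are imported into the OBIEE physical layer, the last completed load can be exposed with a query like the sketch below. This is an illustration only: the run table W_ETL_DEFN_RUN, its END_TS/STATUS columns and the status literal are assumptions based on typical DAC repositories, so verify the names in your own schema first.

-- Hedged sketch: table, column and status names are assumptions; verify them
-- against your DAC repository before adding the table to the RPD.
SELECT MAX(END_TS) AS LAST_LOAD_COMPLETED
FROM   W_ETL_DEFN_RUN
WHERE  STATUS = 'COMPLETED'

Model this as a small "ETL Run" subject area and add it to each dashboard as a narrative or text view.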

Similar Messages

  • Trigger sql agent upon a data load completion

    Hi all,
    I insert/update data from server A to server B.
    Once that step is successful, I want to start a SQL Agent job on server B.
    How can I set up the SQL Agent job on server B to check that the data load has completed for that day, and only then execute the job steps?
    Please guide.
    Nik

    Hello,
    Yes, there are different ways of making connections to the remote SQL Server and running T-SQL commands. Some examples include:
    OPENQUERY
    Linked servers
    PowerShell through a SQL Agent job
    The best place to trigger it would be at the end of your import/export script/process.
    Sean Gallardy
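    As a hedged T-SQL sketch of the "check a flag, then start the job" approach (the control table dbo.DataLoadStatus, the job name and the linked server name ServerB are all hypothetical):

    -- Run on server A at the very end of the import/export process.
    IF EXISTS (SELECT 1
               FROM dbo.DataLoadStatus                  -- hypothetical control table
               WHERE LoadDate = CAST(GETDATE() AS date)
                 AND Status = 'COMPLETE')
    BEGIN
        -- Start the Agent job on server B through a linked server (RPC Out must be enabled).
        EXEC [ServerB].msdb.dbo.sp_start_job @job_name = N'Nightly load on B';
    END;

    Alternatively, schedule the job on server B itself with a first step that checks the same control table and exits (or raises an error) if the load has not completed yet.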

  • Date of "Completion Time" after incremental merge

    I have a scenario which confuses me, but that might just be me.
    First I take a level 0 backup with the tag of WEEKLY. Then on the following weekdays I take level 1 backups "FOR RECOVER OF COPY WITH TAG WEEKLY" and then recover the weekly copy.
    However when I then do "LIST BACKUP SUMMARY", it shows the WEEKLY backupset with completion time of the original level 0 backup, along with the incrementals after it.
    My understanding is that after I recover (roll forward) the image copy, it can't be recovered to any earlier point in time any longer. So if I RECOVER on Tuesday, I can't do a PITR to Monday anymore. So what is the value in showing me the original time? Is this a bug?
    The other problem this causes is that someone else who looks at this won't know that the image copy has been rolled forward, and would be in for a bit of a surprise if they tried to do things with it.

    First of all, as I mentioned, you have two sets of image copies: one belongs to 27-Jun and the other to 12-Jul. The 27-Jun set is obsolete, so you can delete it.
    There are no backups on 13, 14, 15 and 17 Jul within the 10-18 Jul date range. You are recovering your datafiles up to SYSDATE-3, which means backup pieces at least three days old get applied to the image copies. The "Completion Time" of your image copies is 16-Jul and the "Ckp Time" is 12-Jul, meaning the image copies were last recovered on 16-Jul and all backupsets up to 12-Jul were applied to them because of the SYSDATE-3 condition. Up to 16-Jul everything is fine; no backup ran on 17-Jul, so there was no recovery either. On 18-Jul there was again no recovery of the image copies, because there is no 15-Jul backup; it only took another incremental level 1 backup. The next time you run the script it should apply the 16-Jul backup piece to the image copy.
    There is a little confusion in the day calculation because you did not take the backups at a consistent time of day, and there is no time portion in the backup TAG from 16-Jul onwards, so I don't know at what time you ran the script.
    Overall, everything looks fine to me.
    Daljit Singh
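    For reference, the incrementally updated backup pattern described in the question is usually scripted as below (the tag and the UNTIL TIME 'SYSDATE-3' window are taken from this thread; adjust them to your own schedule):

    RUN {
      RECOVER COPY OF DATABASE WITH TAG 'WEEKLY' UNTIL TIME 'SYSDATE-3';
      BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'WEEKLY' DATABASE;
    }

    Running it at a consistent time each day, as Daljit suggests, keeps the day arithmetic predictable.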

  • Error in master data load in time dependent info object

    hi,
    while loading master data (texts) to a time-dependent InfoObject I am getting an error like "INFODEP1 : Data record 8 ('00000512 ') : Invalid "to" date '1-2-2004 '".
    Can anyone help with where exactly the error is and how to remove it?
    Thanks
    Ashish

    Hi,
    The date format should be YYYYMMDD, so reformat the "to" date in your source file accordingly.
    Regards
    Please assign points if helpful.

  • Master data loading  with Time dependent  InfoObjects

    Hi
    For the product master, I have an InfoObject Standard Cost, a time-dependent key figure attribute.
    How do I load data for the product master:
    1. From a flat file?
    2. From R/3? Please provide the steps to do on the R/3 side.
    Thanks in advance,
    Regards,
    Vj

    Hi,
    Since your material grade is time-dependent and changes sequentially, you can create four different DTPs with a grade selection (only one transformation is needed).
    For example, one DTP filtered on grade A, another DTP on grade B, and so on.
    Execute all four DTPs sequentially and activate the master data after every DTP run.
    Hope it works out.
    Thanks,
    Balaram

  • EIS to Essbase data load process

    I have two user-defined queries in a metaoutline. When I perform a data load operation from EIS, the Essbase application log says that the data load updated [] cells and Data Load Elapsed Time [] seconds, but the EIS data load process is still running. It seems that Essbase is loading the data only for the first query and completing the data load while EIS is still running the second query. Has anyone else experienced this problem? Essbase and EIS are both 6.5.0.

    In EIS:
    1) You have to define the logical OLAP model connecting to the relational source. It defines the joins between the fact table and the dimension tables.
    2) Based on the OLAP model, you have to create a metaoutline, which defines the rules for loading members and data into Essbase.

  • Aggregating data loaded into different hierarchy levels

    I have a problem when I try to aggregate a variable called PRUEBA2_IMPORTE dimensioned by a time dimension (parent-child type).
    I read the help in the DML Reference for the OLAP Worksheet, and it says the following:
    When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
    DEFINE all_but_q4 VALUESET time
    LIMIT all_but_q4 TO ALL
    LIMIT all_but_q4 REMOVE 'Q4'
    Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement.
    RELATION time.r PRECOMPUTE (all_but_q4)
    How do I do this for more than one dimension?
    Below I describe my case:
    DEFINE T_TIME DIMENSION TEXT
    T_TIME
    200401
    200402
    200403
    200404
    200405
    200406
    200407
    200408
    200409
    200410
    200411
    2004
    200412
    200501
    200502
    200503
    200504
    200505
    200506
    200507
    200508
    200509
    200510
    200511
    2005
    200512
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    2004 NA
    200412 2004
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    2005     NA
    200512 2005
    DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
    EQ -
    aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
    COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 4,00 ---> here it's right, but...
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00 ---> here it must be 30,00, not 10,00
    200512 NA
    DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
    T_TIME PRUEBA2_IMPORTE_STORED
    200401 NA
    200402 NA
    200403 NA
    200404 NA
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 NA
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00
    200512 NA
    DEFINE OBJ262568349 AGGMAP
    AGGMAP
    RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
    args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
    AGGINDEX NO
    CACHE NONE
    END
    DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
    T_TIME_AGGRHIER_VSET1 = (H_TIME)
    DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
    T_TIME_AGGRDIM_VSET1 = (2005)
    Regards,
    Mel.

    Mel,
    There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue differs depending on the needs of the application.
    1. Data is loaded symmetrically at uniform mixed levels. An example would be loading data at "quarter" in historical years but at "month" in the current year; it does not include data loaded at both quarter and month within the same calendar period.
    = solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
    2. Data is loaded at both a detail level and its ancestor, as in your example case.
    = the AGGREGATE command overwrites aggregate values based on the values of the children; this is the only repeatable thing it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, which are then added as children of the aggregate node. This enables repeatable calculation as well as auditability of the resultant value.
    Also note the difference in behavior between the AGGREGATE command and the AGGREGATE function. In your example the AGGREGATE function looks at '2005', finds a value and returns it, for a result of 10; the AGGREGATE command would recalculate based on January and February, for a result of 20.
    To solve your usage case I would suggest a hierarchy that looks more like this:
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    200412 2004
    2004_SELF 2004
    2004 NA
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    200512 2005
    2005_SELF 2005
    2005 NA
    Resulting in the following cube:
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    200412 NA
    2004_SELF NA
    2004 4,00
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    200512 NA
    2005_SELF 10,00
    2005 30,00
    3. Data is loaded at a level based upon another dimension; for example, product being loaded at 'UPC' in EMEA but at 'BRAND' in APAC.
    = this can currently only be solved by issuing multiple AGGREGATE commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
    4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
    = this often requires the use of ALLOCATE in order to push the data down to the leaves so that the aggregate values are calculated correctly during aggregation.
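    As a minimal OLAP DML sketch of option 2 using this thread's object names (the statements are an illustration only and may need adjusting to your analytic workspace):

    " Add self members and hang them under their year parents
    MAINTAIN T_TIME ADD '2004_SELF' '2005_SELF'
    T_TIME_PARENTREL(T_TIME '2004_SELF', T_TIME_HIERLIST 'H_TIME') = '2004'
    T_TIME_PARENTREL(T_TIME '2005_SELF', T_TIME_HIERLIST 'H_TIME') = '2005'
    " Move the value loaded at the year level onto its self member, so the year
    " can be rebuilt repeatably as the sum of its children (the months plus the self node)
    PRUEBA2_IMPORTE_STORED(T_TIME '2005_SELF') = PRUEBA2_IMPORTE_STORED(T_TIME '2005')
    PRUEBA2_IMPORTE_STORED(T_TIME '2005') = NA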

  • Essbase 7.1 - Incremental data load in ASO

    Hi,
    Is there an incremental data loading feature in ASO version 7.1? Let's say I have the following data in the ASO cube:
    P1 G1 A1 100
    Now, I get the following 2 rows as per the incremental data from relational source:
    P1 G1 A1 200
    P2 G1 A1 300
    So, once I load these rows using a rule file with the "override existing values" option, will I have the following dataset in ASO:
    P1 G1 A1 200
    P2 G1 A1 300
    I know there is a data load buffer concept in ASO 7.1, and that this is the only way to improve data load performance. But I just wanted to check whether we can implement incremental loading in ASO or not.
    And one more thing: can two load rules run in parallel to load data into ASO cubes? As per my understanding, when we start loading data the cube is locked for any other insert/update. Please correct me if I'm wrong!
    Thanks!

    Hi,
    I think features such as incremental data loads became available in version 9.3.1.
    The What's New for Essbase 9.3.1 contains:
    Incrementally Loading Data into Aggregate Storage Databases
    The aggregate storage database model has been enhanced with the following features:
    - An aggregate storage database can contain multiple slices of data.
    - Incremental data loads complete in a length of time that is proportional to the size of the incremental data.
    - You can merge all incremental data slices into the main database slice, or merge all incremental data slices into a single data slice while leaving the main database slice unchanged.
    - Multiple data load buffers can exist on a single aggregate storage database. To save time, you can load data into multiple data load buffers at the same time.
    - You can atomically replace the contents of a database or the contents of all incremental data slices.
    - You can control the share of resources that a load buffer is allowed to use, and set properties that determine how missing, zero, and duplicate values in the data sources are processed.
    Cheers
    John
    http://john-goodwin.blogspot.com/
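    As a rough MaxL sketch of the 9.3.1+ load-buffer workflow John describes (not available in 7.1; the application, database and file names are placeholders, and the exact clause syntax should be checked against the MaxL reference for your release):

    /* initialize a load buffer, fill it from a data file, then commit it to the ASO cube */
    alter database ASOsamp.Sample initialize load_buffer with buffer_id 1;
    import database ASOsamp.Sample data from server data_file 'incr_load.txt'
        to load_buffer with buffer_id 1 on error abort;
    /* committing the buffer merges the new values without reloading the rest of the database */
    import database ASOsamp.Sample data from load_buffer with buffer_id 1;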

  • Error Message after data load (error is "1130610")

    We load data using a data load rule. The data seems to load fine, except that we get the error message "ERROR - 1241101 - Unexpected Essbase error 1130610" on the last line below. Any thoughts? Is there a guide somewhere that explains in detail all the various error codes and what they refer to? We're using Essbase 6.5.1. Thanks!
    =======================================
    OK/INFO - 1003037 - Data Load Updated [364875] cells.
    OK/INFO - 1003024 - Data Load Elapsed Time : [428.323] seconds.
    ERROR - 1241101 - Unexpected Essbase error 1130610.
    =======================================

    The explanation of error 1130610 is the following:
    Possible Problems - Essbase cannot open a file.
    Possible Solutions - If you are using an error file, make sure that the error file is being created in a directory that already exists. Make sure you are using the ESSCMD IMPORT command correctly. Put all files the ESSCMD script needs in the $ARBORPATH\APP\applicationName\databaseName directory. Run the ESSCMD script from the $ARBORPATH\APP\applicationName\databaseName directory. Check the ESSCMD script for invalid paths. Make sure every folder that the script is pointing to exists.

  • Error with Informatica when running the data load for BI Apps

    Hi All,
    We are facing an issue on prod which we did not get on QA. Below is the session log:
    Target tables:
    W_ORDER_PARTY
    READER_1_1_1> RR_4050 First row returned from database to reader : (Wed Sep 22 10:36:16 2010)
    READER_1_1_1> BLKR_16019 Read [885] rows, read [0] error rows for source table [S_ORDER_BU] instance name [S_ORDER_BU]
    READER_1_1_1> BLKR_16008 Reader run completed.
    LKPDP_1> DBG_21097 Lookup Transformation [LKP_ETL_PROC_WID]: Default sql to create lookup cache: SELECT ETL_PROC_WID,ROW_WID FROM W_PARAM_G ORDER BY ROW_WID,ETL_PROC_WID
    LKPDP_1> TE_7212 Increasing [Index Cache] size for transformation [LKP_ETL_PROC_WID] from [1048576] to [1050000].
    LKPDP_1> TM_6660 Total Buffer Pool size is 609824 bytes and Block size is 65536 bytes.
    LKPDP_1:READER_1_1> DBG_21438 Reader: Source is [SFAPROD1], user [siebel]
    LKPDP_1:READER_1_1> CMN_1021 Database driver event...
    CMN_1021 [DB2 Event Using Array Inserts. connect string = [SFAPROD1]. userid = [siebel]]
    LKPDP_1:READER_1_1> BLKR_16003 Initialization completed successfully.
    LKPDP_1:READER_1_1> BLKR_16007 Reader run started.
    LKPDP_1:READER_1_1> RR_4049 SQL Query issued to database : (Wed Sep 22 10:36:17 2010)
    LKPDP_1:READER_1_1> CMN_1761 Timestamp Event: [Wed Sep 22 10:36:17 2010]
    LKPDP_1:READER_1_1> RR_4035 SQL Error [
    [IBM][CLI Driver][DB2/AIX64] SQL0204N "SIEBEL.W_PARAM_G" is an undefined name. SQLSTATE=42704
    sqlstate = 42S02
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT ETL_PROC_WID,ROW_WID FROM W_PARAM_G ORDER BY ROW_WID,ETL_PROC_WID
    DB2 Fatal Error].
    LKPDP_1:READER_1_1> CMN_1761 Timestamp Event: [Wed Sep 22 10:36:17 2010]
    LKPDP_1:READER_1_1> BLKR_16004 ERROR: Prepare failed.
    WRITER_1_*_1> WRT_8333 Rolling back all the targets due to fatal session error.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Wed Sep 22 10:36:17 2010]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [LKP_ETL_PROC_WID], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Wed Sep 22 10:36:17 2010]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [EXPTRANS1], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Wed Sep 22 10:36:17 2010]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [EXPTRANS1], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Wed Sep 22 10:36:17 2010]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [SQTRANS], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Wed Sep 22 10:36:17 2010]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [SQTRANS], and the session is terminating.
    TRANSF_1_1_1> CMN_1761 Timestamp Event: [Wed Sep 22 10:36:17 2010]
    TRANSF_1_1_1> TM_6085 A fatal error occurred at transformation [SQTRANS], and the session is terminating.
    TRANSF_1_1_1> DBG_21511 TE: Fatal Transformation Error.
    WRITER_1_*_1> WRT_8325 Final rollback executed for the target [W_ORDER_PARTY] at end of load
    WRITER_1_*_1> WRT_8035 Load complete time: Wed Sep 22 10:36:16 2010
    In the above log, W_PARAM_G is a warehouse table. We have checked Informatica to find the cause of the issue, but there is no error in Informatica.
    Thanks in advance.
    Deepak.

    Hi Sandeep,
    Yes, but if you look at the log, it is connecting to the OLTP DB (DB2, the Siebel source DB), which it shouldn't:
    KPDP_1:READER_1_1> DBG_21438 Reader: Source is [SFAPROD1], user [siebel]
    LKPDP_1:READER_1_1> CMN_1021 Database driver event...
    CMN_1021 [DB2 Event Using Array Inserts. connect string = [SFAPROD1]. userid = [siebel]]
    LKPDP_1:READER_1_1> BLKR_16003 Initialization completed successfully.
    LKPDP_1:READER_1_1> BLKR_16007 Reader run started.
    LKPDP_1:READER_1_1> RR_4049 SQL Query issued to database : (Wed Sep 22 10:36:17 2010)
    LKPDP_1:READER_1_1> CMN_1761 Timestamp Event: [Wed Sep 22 10:36:17 2010]
    LKPDP_1:READER_1_1> RR_4035 SQL Error [
    [IBM][CLI Driver][DB2/AIX64] SQL0204N "SIEBEL.W_PARAM_G" is an undefined name. SQLSTATE=42704
    I have checked the mapplet as well; it is the same, and there are no issues in it either.
    Regards,
    Deepak.
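    One quick way to confirm the diagnosis is to check which schema actually owns W_PARAM_G. The queries below are a hedged sketch (run the first against a DB2 warehouse, the second against an Oracle warehouse):

    -- DB2: list every schema that contains W_PARAM_G
    SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABLES WHERE TABNAME = 'W_PARAM_G';
    -- Oracle: the equivalent check against the data dictionary
    SELECT OWNER, TABLE_NAME FROM ALL_TABLES WHERE TABLE_NAME = 'W_PARAM_G';

    If the table exists only in the warehouse schema, the lookup's relational connection in this session is resolving to the Siebel OLTP connection and needs to be repointed to the warehouse connection.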

  • Data load error in essbase studio

    I get the following error when trying to load an ASO cube using Essbase Studio (EPM 11.1.2). This error doesn't seem to be documented in any of the Essbase manuals. Question: does this error indicate an Essbase server issue or a data source issue? I'm thinking it's data source related, but my data source is an Oracle database, which I've used previously to load cubes without a problem. I've refreshed the source and can connect to it fine otherwise.
    Error:
    Data load started at: Fri Dec 03 08:52:21 EST 2010.      Data load elapsed time:  10 Minutes 23 Seconds.
    Failed to deploy Essbase cube.
    Caused by: Failed to load data into database: 8020.
    Caused by: Cannot execute a SQL query
    Caused by: Io exception: Socket read timed out
    Caused by: Socket read timed out
    Appreciate any help with this issue.

    When I have issues with Studio I try to break it down slowly. I build my dimensions one at a time. If it breaks on a single dimension build, I trace the issue backwards and usually find the problem in the schema.
    Studio's role in life is to create SQL load rules, and as such it depends on a good schema definition. Unfortunately, the dimension build rules can't be opened in EAS with the Dataprep Editor (like regular load rules) because they're binary and can do things that a normal load rule cannot (text measures, date measures, time-varying attributes, etc.). But that doesn't mean the .rul files are unreadable. If you're having trouble with a particular dimension build process, open the load rule it creates with something like Notepad, grab the SQL that Studio is generating, and drop it into Toad (or equivalent) to see if it is producing usable code. If not, there's something wrong with your modeling and you need to go back to the mini-schema.
    When you're able to build all dimensions at the same time, you're almost there. If your issue comes when you want to build and load data, the final debugging steps go quickly. Towards that end, the data load rules (the ones that load data, as opposed to building dimensions) generated by Studio can be edited in EAS using the Dataprep Editor. If you know SQL load rules, you should be able to figure it out. If not, contact John Goodwin, OCS, or a partner and set up a consulting visit.

  • Data Load log #'s from 1003000 - 1003999

    Hi,
    Does anyone know what each and every number in that range means? We are looking for a number that will tell us that a data load has started and that a data load has completed. We want to search within the log file based on these numbers.
    Thanks,
    Minash...

    I have written a parser for just this function. Do you do VB? Let me know if you want the code.
    Jeff McAhren
    Dallas, Texas
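    For reference, a minimal VBScript sketch of such a parser (not Jeff's actual code; the log path is just an example) that scans an Essbase application log for the data-load completion messages quoted elsewhere in this list, 1003037 (cells updated) and 1003024 (elapsed time):

    ' Print every data-load completion message found in an Essbase application log
    Option Explicit
    Dim fso, ts, strLine
    Set fso = CreateObject("Scripting.FileSystemObject")
    Set ts = fso.OpenTextFile("D:\Hyperion\Essbase\app\Sample\Sample.log", 1)  ' 1 = ForReading
    Do Until ts.AtEndOfStream
        strLine = ts.ReadLine
        If InStr(strLine, "1003037") > 0 Or InStr(strLine, "1003024") > 0 Then
            WScript.Echo strLine
        End If
    Loop
    ts.Close
    Set fso = Nothing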

  • FDM event scripts firing twice during data loads

    Here's an interesting one. I have added the following to three different event scripts (one at a time, ensuring only one of these exists at any one time), to clear data before loading to Essbase:
    Event Script content:
    ' Declare local variables
    Dim objShell
    Dim strCMD
    ' Call MaxL script to run data clear calculation.
    Set objShell = CreateObject("WScript.Shell")
    strCMD = "D:\Oracle\Middleware\EPMSystem11R1\products\Essbase\EssbaseClient\bin\startMAXL.cmd D:\Test.mxl"
    API.DataWindow.Utilities.mShellAndWait strCMD, 0
    MaxL Script:
    login ******* identified by ******* on *******;
    execute calculation 'FIX("Member1","Member2") CLEARDATA "Member3"; ENDFIX' on *******.*******;
    exit;
    However, it appears that the clear is carried out twice, both before and after the data has been loaded to Essbase. This has been verified at each step by checking the Essbase application log:
    No event script:
    - No Essbase data clear in application log
    Adding above to "BefExportToDat" event script:
    - Script is executed once after clicking on Export in FDM Web Client (before the "Target System Load" modal popup is displayed). Entries are visible in Essbase Application log.
    - Script is then executed a second time when clicking on the OK button in the "Target System Load" modal popup. Entries are visible in Essbase Application log.
    Adding above to "AftExportToDat" event script:
    - Script is executed once after clicking on Export in FDM Web Client (before the "Target System Load" modal popup is displayed). Entries are visible in Essbase Application log.
    - Script is then executed a second time when clicking on the OK button in the "Target System Load" modal popup. Entries are visible in Essbase Application log.
    Adding above to "BefLoad" event script:
    - Script is NOT executed after clicking on Export in FDM Web Client (before the "Target System Load" modal popup is displayed).
    - Script is executed AFTER the data load to Essbase when clicking on the OK button in the "Target System Load" modal popup. Entries are visible in Essbase Application log.
    Some notes on the above:
    1. "BefExportToDat" and "AftExportToDat" are both executed twice, before and after the "Target System Load" modal popup. :-(
    2. "BefLoad" is executed AFTER the data is loaded to Essbase! :-( :-(
    Does anyone please have any idea how we might execute an Essbase database clear before loading data, and not after we have loaded fresh data? And perhaps on why the above event scripts appear to be firing twice?! There does not appear to be any logic to this!
    BefExportToDat - Essbase Application Log entries:
    [Wed May 16 16:19:51 2012]Local/Monthly/Monthly/admin@Native Directory/140095859451648/Info(1013091)
    Received Command [Calculate] from user [admin@Native Directory]
    [Wed May 16 16:19:51 2012]Local/Monthly/Monthly/admin@Native Directory/140095859451648/Info(1013162)
    Received Command [Calculate] from user [admin@Native Directory]
    [Wed May 16 16:19:51 2012]Local/Monthly/Monthly/admin@Native Directory/140095859451648/Info(1012555)
    Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
    ...
    [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1003037)
    Data Load Updated [98] cells
    [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1003024)
    Data Load Elapsed Time : [0.52] seconds
    ...
    [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1013091)
    Received Command [Calculate] from user [admin@Native Directory]
    [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1013162)
    Received Command [Calculate] from user [admin@Native Directory]
    [Wed May 16 16:20:12 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1012555)
    Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
    AftExportToDat - Essbase Application Log entries:
    [Wed May 16 16:21:32 2012]Local/Monthly/Monthly/admin@Native Directory/140095933069056/Info(1013091)
    Received Command [Calculate] from user [admin@Native Directory]
    [Wed May 16 16:21:32 2012]Local/Monthly/Monthly/admin@Native Directory/140095933069056/Info(1013162)
    Received Command [Calculate] from user [admin@Native Directory]
    [Wed May 16 16:21:32 2012]Local/Monthly/Monthly/admin@Native Directory/140095933069056/Info(1012555)
    Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
    ...
    [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1003037)
    Data Load Updated [98] cells
    [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095930963712/Info(1003024)
    Data Load Elapsed Time : [0.52] seconds
    ...
    [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095928858368/Info(1013091)
    Received Command [Calculate] from user [admin@Native Directory]
    [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095928858368/Info(1013162)
    Received Command [Calculate] from user [admin@Native Directory]
    [Wed May 16 16:21:47 2012]Local/Monthly/Monthly/admin@Native Directory/140095928858368/Info(1012555)
    Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
    BefLoad - Essbase Application Log entries:
    [Wed May 16 16:23:43 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1013091)
    Received Command [Calculate] from user [admin@Native Directory]
    [Wed May 16 16:23:43 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1013162)
    Received Command [Calculate] from user [admin@Native Directory]
    [Wed May 16 16:23:43 2012]Local/Monthly/Monthly/admin@Native Directory/140095932016384/Info(1012555)
    Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]
    ...
    [Wed May 16 16:23:44 2012]Local/Monthly/Monthly/admin@Native Directory/140095929911040/Info(1003037)
    Data Load Updated [98] cells
    [Wed May 16 16:23:44 2012]Local/Monthly/Monthly/admin@Native Directory/140095929911040/Info(1003024)
    Data Load Elapsed Time : [0.52] seconds
    ...
    [Wed May 16 16:23:45 2012]Local/Monthly/Monthly/admin@Native Directory/140095860504320/Info(1013091)
    Received Command [Calculate] from user [admin@Native Directory]
    [Wed May 16 16:23:45 2012]Local/Monthly/Monthly/admin@Native Directory/140095860504320/Info(1013162)
    Received Command [Calculate] from user [admin@Native Directory]
    [Wed May 16 16:23:45 2012]Local/Monthly/Monthly/admin@Native Directory/140095860504320/Info(1012555)
    Clearing data from [Member3] partition with fixed members [Period(Member1); Scenario(Member2)]

    Hi Larry,
    As mentioned, our exports do not appear to be generating the "-B.Dat" and "-C.Dat" files at present. However, you are correct with the Export and Load event scripts firing twice (once for the main TB file and again for the journal file). Does this also mean it could continue to fire an additional two times for the "-B.Dat" and "-C.Dat" files?
    On the last run, the output was as follows with the modified scripts:
    After clicking on Export in Workflow, the Target System Load modal popup is displayed, and the first two files have been generated:
    14.24.15.0527_BefExportToDat.txt
    14.24.17.0617_AftExportToDat.txt
    After clicking on OK in the Target System Load modal popup, the actual load to Essbase takes place. A further six files are generated:
    14.24.21.0289_BefLoad.txt
    14.24.22.0117_AftLoad.txt
    14.24.22.0152_BefExportToDat-A.txt
    14.24.22.0414_AftExportToDat-A.txt
    14.24.22.0433_BefLoad-A.txt
    14.24.22.0652_AftLoad-A.txt
    This makes a lot more sense, since one can see that the event scripts are being run a second time against the journal files during the data load. Many thanks, this solves my problem, as I can now place my script where I want in the process chain. It's just a shame that there are not separate event scripts to distinguish between the various .Dat exports/loads, which are clearly occurring at separate times in the process chain.
    Many thanks! :-)
    P.S. Updated script below if anyone wishes to use it:
    Sub BefExportToDat(strLoc, strCat, strPer, strTCat, strTPer, strFile)
        ' Write a timestamped trace file so we can see exactly when this event fires
        ' and which export file (.Dat, -A.Dat, -B.Dat or -C.Dat) it was called for.
        Dim strF, fso, tf, t, temp, m, miliseconds, strSuf
        ' Build an HH.MM.SS.mmmm timestamp from Time and Timer
        t = Timer
        temp = Int(t)
        m = Int((t - temp) * 1000)
        miliseconds = String(4 - Len(m), "0") & m
        strF = "D:\TEST\" & Replace(Time, ":", ".") & "." & miliseconds & "_BefExportToDat"
        ' Append the -A/-B/-C suffix of the export file, if present, to the trace file name
        strSuf = UCase(Left(Right(strFile, 6), 2))
        If strSuf = "-A" Or strSuf = "-B" Or strSuf = "-C" Then
            strF = strF & UCase(strSuf) & ".txt"
        Else
            strF = strF & ".txt"
        End If
        ' Record the name of the export file this event was invoked for
        Set fso = CreateObject("Scripting.FileSystemObject")
        Set tf = fso.CreateTextFile(strF, True)
        tf.WriteLine(strFile)
        tf.Close
        Set fso = Nothing
    End Sub

  • ERPi Data load mapping Issue

    Hi,
    We are facing an issue with ERPi data load mappings. The mapping file (a txt file) has 36k records, and whenever we try to load the mappings it takes a very long time, nearly 1 hour 30 minutes. We want to reduce that time. Is there any way to reduce the data load mapping time?
    Hyperion version: 11.1.2.2.300
    Please help, thanks in advance!!
    Thanks.

    Has anyone faced the same kind of issue?

  • Data load times

    Hi,
    I have a question regarding data loads. We have a process chain which includes three ODS objects and a cube.
    Basically, ODS A gets loaded from R/3, and from ODS A the data then loads into two other ODS objects, ODS B and ODS C, and into CUBE A.
    When I went to the monitor screen of this load (ODS A -> ODS B, ODS C, CUBE A), the total time shows as 24 minutes.
    We have some other steps in the process chain where ODS B -> ODS C and ODS C -> CUBE 1.
    When I go to the monitor screen of these data loads, the total time for the data loads shows as 40 minutes.
    I am surprised, because the total run time for the chain itself is 40 minutes, and the chain also includes the data extraction from R/3, the ODS activations, indexes, and so on.
    Can anybody throw some light on this?
    Thank you all
    Edited by: amrutha pal on Sep 30, 2008 4:23 PM

    Hi All,
    I am not asking which steps need to be included in which chain.
    My question is: when I look at the process chain run time it says the total time is 40 minutes, and when I go to RSMO to check the time taken for the data load from the ODS to the three other data targets, it also shows 40 minutes.
    The process chain also includes ODS activation, building indexes, and extracting data from R/3.
    So what are the times we see when we click on a step in the process chain and display the messages, and what is the time we see in RSMO?
    Let's take an example:
    In process chain A there is a step LOAD DATA from ODS A -> ODS B, ODS C, Cube A.
    When I right-click and display the messages for the successful load, it shows all the messages like:
    Job started...
    Job ended...
    The total time here shows 15 minutes.
    When I go to RSMO for the same step it shows 30 minutes...
    I am confused....
    Please help me?
