Automated Metadata Load

Greetings,
I'm curious to know how users in this forum perform automated metadata loads against dimensions in the Shared Library (and subsequently against any EPMA Planning apps). We have several apps whose dimensions are shared amongst various Planning apps; these are updated manually whenever a data load fails because of missing members. Doing this by hand once a month is workable given the relatively small number of missing members.
However, we are building an app with a dimension that is quite dynamic (a lot of new members are added and a few are deleted). It would be insane to update the Shared Library for this manually. So I'm looking for suggestions on how to automate it via a batch file or any other means, including using "Create Member Properties".
Any suggestions or ideas would be greatly welcomed...
Many thanks.
cg

CG,
A .err file is generated only when something in the load is wrong - for example, an invalid member name or an invalid alias name during the data load, etc.
This can also be handled with a programming language such as Java, but you need to know all the possible errors, and your program must be able to work out where the load went wrong so that it can rebuild the hierarchy when a new member or child is missing from the data load.
On Unix you can use sed to find and replace all the possible outcomes.
This happened to me some time back: I used Java to replace some members that kept getting rejected during data loads. In my case the errors were specific and known; in your case you would have to know all the possible outcomes.
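To make that concrete, here is a minimal Java sketch of the kind of program described above. It assumes an Essbase-style .err file in which each rejected record is preceded by a message line such as \\ Member X100 Not Found In Database (the exact wording varies by release, so adjust the pattern), and it prints one parent,child row per missing member that a dimension build or import profile could pick up:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.LinkedHashSet;
    import java.util.Set;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ErrScanner {
        // Assumed message shape, e.g.: \\ Member X100 Not Found In Database
        private static final Pattern MISSING =
                Pattern.compile("Member\\s+(\\S+)\\s+Not Found", Pattern.CASE_INSENSITIVE);

        public static void main(String[] args) throws IOException {
            Set<String> missing = new LinkedHashSet<>();
            for (String line : Files.readAllLines(Paths.get(args[0]))) {
                Matcher m = MISSING.matcher(line);
                if (m.find()) {
                    missing.add(m.group(1)); // the rejected member name
                }
            }
            // One parent,child row per missing member under a holding parent,
            // ready for an automated dimension build to consume.
            for (String member : missing) {
                System.out.println("Unknown Members," + member);
            }
        }
    }

Run it as java ErrScanner dataload.err > new_members.csv and feed the output to whatever rebuild step you automate.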

Similar Messages

  • Automate DRM Metadata load into HFM

    Is there a way to automate a DRM metadata load into HFM? We are using the latest Fusion versions of both products, and I wanted to know whether there is a way to automate loading the flat file produced by DRM into HFM. How would this be done? Does an EPMA versus classic HFM setup play a role? If there is no automation, I imagine this would require an admin to manually export from DRM and manually import into HFM.
    Thanks in advance

    Any thoughts on this guys? I'd like to open the question up a bit.
    How is DRM metadata used to source Hyperion applications? I see there is an XML output of metadata. There is some talk about generating an .ads file. What are the other ways? SQL database views?
    I'm using a classic Planning application. Appreciate any thoughts.
    -Chris

  • Metadata Loads (.app) - What is best practice?

    Dear All,
    Our metadata scan and load takes approximately 20 minutes (full load using the replace option). The business HFM admin has suggested partial dimension loads in an effort to speed up the loading process.
    The HFM system admins prefer metadata loads with the replace option, as there seems to be less associated risk.
    With partial loads there appears to be a risk to cross-dimension integrity checking: changes are merged, potentially duplicating members when they are moved within a hierarchy.
    Are there any other risks with partial loads?
    Which approach is considered best practice?

    When we add new entities to our structure and load them with the merge option, they always appear at the bottom of the structure, but when we use the replace option they appear in the order that we want. For us, and for user-friendliness, we always use the replace option. Our metadata load usually takes at least 35 minutes; last time it took 1:15...

  • HFM Metadata Loading On-going Problem

    Hi There,
    We just migrated to HFM 11.1.1.3 from HFM 4.
    We used to have an issue in HFM 4 where almost every time we loaded metadata, the system would hang and become unresponsive. The load screen would just sit there and the load would never complete. You could still use the system, but the metadata would never actually load. The only thing that would resolve it was rebooting all of the HFM application servers.
    This happened to us again the other day, but now we're on the new HFM 11.1.1.3. Again, the resolution was to reboot the HFM application servers. We tried just restarting the services on the various servers, but that didn't work; a full reboot was required.
    Nothing was wrong with the metadata itself, as it quickly loaded without errors into our DEV and QA environments. We also kicked all of the users out of the system prior to the metadata load; most got out immediately, and certainly no heavy calculations or consolidations were being performed during the load.
    Has anyone else experienced this issue? Should a reboot always precede or accompany a metadata load? Is there any recommendation as to how often you should reboot the Hyperion servers (monthly, quarterly, etc.) as good practice?
    Many Thanks,
    Mike

    We are having a similar issue with version 11.1.2.0.0
    We try to run an application metadata load from the client and we get an error message
    Unexpected error: 80005005 occurred.
    Running the load from the workspace, we receive the following message.
    An error has occurred. Please contact your administrator.
    Show Details:
    Error Reference Number: {F187C156-ABDA-40DD-A687-B471F35535E3};User Name: mpus54965@gsd1w105c
    Num: 0x80004005;Type: 0;DTime: 4/27/2011 1:27:41 PM;Svr: GSD4W023C;File: CHsvMetadataLoadACV.cpp;Line: 347;Ver: 11.1.2.0.0.2762;
    This is the second time we have encountered this problem. Oracle support was not able to determine the cause. The fix is to reboot the database server, but we are looking for insight into the problem. Note: our current development environment is a single-server virtual environment connected to a database server. We are planning to move to a more robust test environment with four applications and one database server in a few weeks.
    Thanks for your help,

  • "MDL1223: Metadata Loader cannot be executed ..." error?

    Hi folks,
    Am getting this strange error from OMBPlus (11gR2):
    MDL1223: Metadata Loader cannot be executed while other user(s) are accessing the workspace.
    when trying to import an MDL file, when using this statement:
    catch {OMBIMPORT MDL_FILE '$projname.mdl'\
                                USE UPDATE_MODE\
                                CONTROL_FILE 'control.txt'\
                                OUTPUT LOG '$projname.log'} result
    where the CONTROL_FILE (control.txt) simply has:
    IGNOREUOID=Y
    CONFIGPARAM=N
    SINGLEUSER=Y
    Am thinking that the SINGLEUSER setting may have something to do with it? Have got an SR open with Oracle on it but no luck there so far - just wondering if anyone else has come across something like this...? Certainly seems like a new 11g-ish/workspace-y kind of thing. Am really curious to know how to determine which users are currently accessing a workspace?
    Thanks,
    Jim

    Hi Jim,
    It would be the SINGLEUSER tag. In 11.1 and 11.2 you should connect using the USE SINGLE_USER_MODE option and remove the SINGLEUSER setting from the param file.
    Regards,
    John
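    For reference, combining John's suggestion with the OMBCONNECT syntax shown further down this page, the connect call would look something like this (credentials and connect string are placeholders):

        OMBCONNECT my_user/my_password@host:port:SID USE SINGLE_USER_MODE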

  • Metadata load running for 24 hours and still not finished

    Hi all,
    I have a problem with the metadata load on HFM 9.3.1.3
    It seems it has been running for about 24 hours.
    It has happened before that no log was produced by the HFM client, but whenever I did find the log it showed that the metadata load had finished successfully.
    This time I have no indication that the load has completed.
    Some time ago I ran the delete invalid records task, which completed successfully.
    Can someone tell me which factors make the metadata load process take so long?
    Regards, Andrea Zannetti.

    Changes to the Entity or Currency dimensions can take far longer than changes to the Account, Custom, or Scenario dimensions. The reason for this lies primarily with the calc status tables, since changes to Entity or Currency require HFM to make changes including inserts to all of the CSE/CSN tables. Changes made to the Accounts or Customs do not require updates to the CSE/CSN tables.
    Another common place for very long metadata loads lies with journals. The "Check integrity" box for metadata loads tells HFM to read through each and every journal and journal template in the entire application, checking to see whether even a single journal or template could become invalid if it allowed the metadata to be loaded. NEVER uncheck this, since once metadata has invalidated even one journal, you can never load metadata with the check in place afterward.
    Another possibility is that metadata loads have to wait for all currently running data loads and consolidations to complete before the metadata load can start. Once it starts, subsequent data loads and consolidations have to wait for the metadata load to complete before they can start. This is to preserve data integrity.
    One last place to look for long metadata load times is an application whose database is performing poorly, possibly because of missing/broken/out-of-date indexes.
    --Chris

  • Metadata load using ERPi

    Hi All,
    I am new to ERPi. Our client wants to implement ERPi metadata loads. The source is PeopleSoft 9.1 and the target is Hyperion EPMA Planning 11.1.2.2.300.20.
    My source COA tree structure is different from my target Hyperion COA structure. Can we implement a metadata load using ERPi, and if so, how?
    Also, what considerations need to be taken care of in the metadata load?
    Any thoughts on this would be appreciated. Thanks in advance.
    Best regards,
    Vishy

    The response from support is correct.

  • Metadata load problem

    Hi,
    I am a newbie to Hyperion. I am trying to load metadata into a classic application for a particular dimension.
    I get the error "unrecognized column header value "".
    I removed all spaces in the csv file but the error is still the same.
    Please give me a clue on what I should do to proceed with my metadata load.
    Thanks
    Swami

    Hi John,
    The log file shows the following information:
    Unrecognized column header value "Cost Analysis".
    Unrecognized column header value "Alias_Default".
    Unrecognized column header value "Data_Storage"
    Unrecognized column header value "Aggregation_(OPEX)".
    My header in the csv is as below.
    "Cost_Analysis", "Alias_Default", "Data_Storage" and so on..
    So how do I correct it?
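    A hedged note, since the thread stops here: if this is the classic Planning outline load utility, the column headers typically need to match the Planning property names exactly, which use spaces and colons rather than underscores, and stray spaces after the commas can also trip the parser. Based on the error messages above, the header line would look something like this (property names assumed, not confirmed in the thread):

        Cost Analysis, Parent, Alias: Default, Data Storage, Aggregation (OPEX)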

  • Automate Metadata loads into EPMA

    Hi,
    My company has two systems, HCM and EPMA. We are trying to keep the data in sync between these two systems. I am new to EPMA and was wondering if there is a way to automate the metadata loads into EPMA? We recently installed ODI but are not sure if this can be done via ODI. Any help is appreciated.
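    One avenue worth checking, offered as a sketch rather than an answer from this thread: the EPMA Batch Client can script dimension imports from flat files or interface tables, so a scheduler or ODI can drive it. The script syntax below is from memory and version-dependent, so verify it against the EPM documentation for your release:

        set bpmaserverurl=http://epmaserver:19091/hyperion-bpma-server;
        set workspaceurl=http://workspaceserver:19000/workspace;
        login admin,password;
        Execute Import Parameters(importtype, profilename, filename, waitforcompletion) Values('FlatFile', 'MyImportProfile', 'mydims.ads', 'true');
        logout;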


  • How to choose member position during ERPi metadata load ?

    Dear All ,
    We are using ERPi to load data and metadata from EBS to our target Hyperion Planning application.
    During the metadata load, members are placed at the root of the dimension.
    My question: how do I choose where the extracted members are placed in the EPMA hierarchy?

    What you would do is set up your load rule as a dimension build too. On the first pass, load the data file as a dimension build, adding unknown members to a default parent such as "Unknown Members". On the second pass, load the file again as a data load; all the members will then exist and the load will complete successfully.

  • Duplicate alias during ERPi metadata load

    Hello All,
    I am trying to load metadata to Hyperion Planning using ERPi from EBS GL. However, many of our descriptions may be the same across accounts or other dimensions. In Planning, we generally combine the member name and the description to create a unique alias. How can I combine account name and account description to load as the alias in Hyperion Planning? I am on v11.1.2.2.
    TIA

    duplicate post :- ODI - Hyperion Planning Metadata load issue
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Metadata loading forever from some DNG files

    Hello guys,
    I have LR4 under Win7, and for some folders containing DNG files the metadata loading takes forever. I.e., for the first 10 DNG files the metadata will load in 5 seconds, for the next 10 files the metadata keeps loading forever, and for the last 10 files the metadata will load in 5 seconds.
    Do you have similar experience? Any resolution?
    Boris

    I tried to load the images in a program of my own for the analysis of raw images; it said that those files were not TIFFs at all. Then I looked at the file content with a "neutral" program, without interpreting it, and found that it starts with "8BPS", which is the internal identification of PSD files. I renamed one and loaded it in PS with success. A JPEG file, too, can easily be recognized when viewing the content in binary format.
    Gabor
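    As an aside, the check Gabor describes is easy to script. Here is a minimal Java sketch (the file name is a command-line argument) that reads the first few bytes and tests for the PSD "8BPS" signature and the JPEG FF D8 FF signature:

        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class MagicCheck {
            public static void main(String[] args) throws IOException {
                try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
                    byte[] head = new byte[4];
                    int n = in.read(head);
                    // PSD files begin with the ASCII signature "8BPS".
                    if (n == 4 && head[0] == '8' && head[1] == 'B'
                            && head[2] == 'P' && head[3] == 'S') {
                        System.out.println("PSD signature (8BPS) found.");
                    // JPEG files begin with the bytes FF D8 FF.
                    } else if (n == 4 && (head[0] & 0xFF) == 0xFF
                            && (head[1] & 0xFF) == 0xD8 && (head[2] & 0xFF) == 0xFF) {
                        System.out.println("JPEG signature (FF D8 FF) found.");
                    } else {
                        System.out.println("No known signature at the start of this file.");
                    }
                }
            }
        }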

  • Using the Metadata Loader Command Line Utility

    Hi ,
    Can anybody please let me know the steps involved in importing and exporting metadata using the Metadata Loader command-line utility, with small scripts as examples?
    Thanks in advance.
    Vinay

    I'll assume that command line utility = OMBPlus...
    Using OMBPlus, here it is:
    OMBCONNECT my_user/My_password@host:port:SID
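    # Export a selected list of components from one project to an MDL file: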
    OMBEXPORT TO MDL_FILE 'C:/temp/DELTA_RS52_LICC2.mdl' \
    FROM PROJECT 'NEW_ARCHITECTURE' \
    COMPONENTS ( \
    LOCATION 'TRG_NEW_ARCH_WORKAREA_LOC',\
    CONNECTOR 'TRG_WORKAREA_LIBOWNER_CONNECT', \
    ORACLE_MODULE 'TRG_WHOWNER', \
    TABLE 'CPF_VALID3', \
    TABLE 'CPF_VALID3_2', \
    TABLE 'CPF_VALID3_3', \
    TABLE 'CPF_VALID3_4', \
    MAPPING 'MAP_WA_CLAIM_DIM', \
    MAPPING 'MAP_WA_POLICY_DIM2_INS', \
    MAPPING 'MAP_WDC1_CLIENT_FOR_LIB', \
    FUNCTION 'UPD_WDC1_CLIENT_LIB', \
    FUNCTION 'VALIDATE_CHARGED_PREMIUM_1_F', \
    FUNCTION 'VALIDATE_CHARGED_PREMIUM_2_F', \
    FUNCTION 'VALIDATE_CHARGED_PREMIUM_3_F', \
    FUNCTION 'VALIDATE_WA_DRIVER_VEH_FACT_I', \
    PACKAGE 'INITIALIZATION', \
    ORACLE_MODULE 'TRG_WORKAREA', \
    TABLE 'AUPMGEN', \
    TABLE 'AUPMGEN_TR0', \
    TABLE 'WDC1_CLIENT_LICC', \
    TABLE 'WDC1_CLIENT_LICC_TEMP_UPD', \
    TABLE 'WG_CHARGED_PREMIUM_VALID', \
    FUNCTION 'GET_DT_TRX_TRANSACTION', \
    FUNCTION 'GET_OCC_OP_LKP', \
    PROCEDURE 'DISABLE_ENABLE_CONSTRAINTS', \
    PROCEDURE 'EXEC_WF_CPF_VALIDATIONS', \
    PROCEDURE 'EXEC_WF_DAUTO_DAILY', \
    PROCEDURE 'EXEC_WF_PER_GENDAT_DAILY', \
    PROCEDURE 'EXEC_WF_PER_GENTER_DAUTO', \
    PROCEDURE 'LOAD_PAST_FUTURE_CALENDAR', \
    PROCEDURE 'VALIDATE_CHARGED_PREMIUM_DS', \
    MAPPING 'MAP_AUPMCON_LIB', \
    MAPPING 'MAP_AUPMGEN_LIB', \
    MAPPING 'MAP_AUPMGEN_LIB_CPF', \
    MAPPING 'MAP_AUPMGEN_TR', \
    MAPPING 'MAP_AUPMGEN_TR0', \
    MAPPING 'MAP_AUPMGEN_TR0_CPF', \
    MAPPING 'MAP_AUPMGEN_TR0_CPF_PERF', \
    MAPPING 'MAP_AUPMGEN_TR_CPF_PERF', \
    MAPPING 'MAP_AUPMVEH_LIB', \
    MAPPING 'MAP_AUPMVEH_LIB_CPF', \
    MAPPING 'MAP_CHARGED_PREMIUM_FACT_TR1', \
    MAPPING 'MAP_IA_POLICY_TERM_LKP_2', \
    MAPPING 'MAP_SA_POLICY_SALES_CHAN_LIB', \
    MAPPING 'MAP_SIPGED_DAILY_2_LIB', \
    MAPPING 'MAP_SIPGED_LIB', \
    MAPPING 'MAP_SIPGED_TR', \
    MAPPING 'MAP_SIPRES_LIB', \
    MAPPING 'MAP_SIPVES_LIB', \
    MAPPING 'MAP_WA_CLAIM_FACT_TR1', \
    MAPPING 'MAP_WA_DRIV_VEH_FACT_TR1', \
    MAPPING 'MAP_WDC1_CLIENT_LIB', \
    MAPPING 'MAP_WDC1_CLIENT_LICC_LAST_VERS', \
    PROCESS_FLOW_MODULE 'NEW_ARCH_WF', \
    PROCESS_FLOW_PACKAGE 'DAUTO', \
    PROCESS_FLOW_PACKAGE 'WAUTO') \
    OUTPUT LOG TO 'C:/TEMP/DELTA_RS52_LICC2_exp.log'
    # Now to import, still with OMBPlus:
    OMBCONNECT my_user/My_password@host:port:SID
    OMBIMPORT MDL_FILE 'C:/temp/DELTA_RS52_LICC2.mdl' USE UPDATE_MODE OUTPUT LOG TO 'C:/temp/DELTA_RS52_LICC2_imp.log'
    Hope this is what you wanted
    Michel

  • ODI Planning metadata load takes very long time

    Hi,
    I am using ODI for Hyperion Planning metadata load.
    We are facing performance issues, as the process is taking a very long time.
    The process uses "LKM File to SQL" and "IKM SQL to Hyperion Planning"
    The number of rows to process in the file is around 70000. The file is generated from DRM. The ODI integration process takes around 2 hours to process the file and load it to Planning. Even if we add 1 new row to the file and everything else remains the same, the process still takes that long.
    So, the whole process takes around 3 hours to load all dimensions to Planning.
    I tried increasing the fetch rows to 200 on the source but there is no significant improvement in performance. The heap size is set to a maximum of 285 MB in odiparams.bat.
    How can the performance be improved?
    Can I use different LKM or change any other setting?
    Thanks,
    RS

    Hi John,
    In my current implementation, the dimension hierarchies are maintained in DRM.
    So, the business makes changes directly in DRM and exports the hierarchies to a text file.
    I agree that loading 70000 records on a regular basis is odd, but it makes it easier for the business to retain control of their hierarchies, and maintenance is easy in DRM.
    On bulk loading to a DB table and then loading to Planning, I have 2 questions:
    1. Do you think that "LKM SQL to SQL" [Oracle to Planning] will give a significant performance improvement over "LKM File to SQL" [File to Planning], given that we are still using the "Sunopsis Memory Engine" as the staging area?
    2. I checked your blog; there you have suggested using the "Sunopsis Memory Engine" for "LKM SQL to SQL".
    Is it mandatory to use the "Sunopsis Memory Engine" as the staging area, or can we use any other user-defined staging area [Oracle tables]?
    Cheers,
    RS
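    A hedged side note on the heap setting mentioned above: in odiparams.bat the heap values typically live in the ODI_INIT_HEAP and ODI_MAX_HEAP variables (names from memory, so check your install), and raising the ceiling looks like:

        set ODI_INIT_HEAP=256m
        set ODI_MAX_HEAP=1024m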

  • Automated data load from APO to SEM transactional cube

    Hi ,
    We have a BW-SEM system integrated with an APO system.
    I can see automated data loads from APO to the SEM transactional cube.
    The InfoPackage name is "Request loaded using the APO interface without monitor log".
    I don't see any InfoPackage by this name in either system (APO & SEM).
    I am not sure how it was configured.
    Appreciate any inputs on how this happens.
    Thanks in advance

    Hi,
    As I mentioned, the starting point will be transaction BPS0. There will be 2 planning areas created (if I am correct), one for the SEM cube and the other for the APO cube. The best way to find them is to go to transaction SE16, enter table UPC_BW_AREA, and key in the cube names in the cube field. This will give you the planning area names; now look for a multi-planning area which includes the 2 areas (this is available in table UPC_AREAM).
    Then go to BPS0, where you will have to find which function is being used to post the data.
    thanks

Maybe you are looking for

  • Runtime Error in Production

    Hi , We are getting the runtime error " RAISE_EXCEPTION" in production server. Error analysis A RAISE statement in the program "SAPLSOI1" raised the exception condition "USER_NOT_EXIST". Since the exception was not intercepted by a superior program,

  • Text field is missing/blank in inventroy adjustment document

    Hi all, while posting adjustement entries for physical inventory through T-code MI07 we are getting field long text i.e. BSEG-SGTXT empty. as these accounting documents are posted automatically this text field should be updated by system only. can he

  • Panther applescript question...

    Hi everyone! I know Panther is a bit old now...but hopefully someone can explain why this doesn't work: tell application "Finder" set theLocation to ((path to home folder as string) & "Desktop:testfile") set theFile to (open for access file theLocati

  • Audio in Captivate 5.5

    Hello, I just upgraded to Captivate 5.5 from Captivate 4.0.  The audio I record in 5.5 sounds like it was recorded in a tunnel but when I use my microphone with 4.0 it sounds fine.  Is there a certain brand/type of microphone that works well with 5.5

  • Graphics performance worse after most recent update

    I just ran the most recent MacBook Pro update and now my graphics performance in WoW is terrible. It has rendered the game unplayable. It's probably a good thing, mind you, because I spend too much time playing anyway. But still, it shouldn't have ma