Warning during metadata load to Hyperion Planning

Hi,
When I run the mapping, even though it completes successfully and the data is populated in the dimension (e.g., in this case the Business dimension), I still get this warning:
org.apache.bsf.BSFException: exception from Jython:
Traceback (innermost last):
File "<string>", line 2, in ?
Planning Writer Load Summary:
Number of rows successfully processed: 0
Number of rows rejected: 1018
at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
at com.sunopsis.dwg.cmd.e.i(e.java)
at com.sunopsis.dwg.cmd.g.y(g.java)
at com.sunopsis.dwg.cmd.e.run(e.java)
at java.lang.Thread.run(Unknown Source)
The number of rows successfully processed is always zero even though the dimension is getting populated. Please help.
Regards,
VINAY

Set up the logging like this - http://1.bp.blogspot.com/__2AaArK5lW8/SN-G7H6f6TI/AAAAAAAAAd4/Ovyj1GK-dKk/s1600-h/25.png
It will write the output to two text-format logs that you can open and review.
Cheers
John
http://john-goodwin.blogspot.com/
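A note for anyone who cannot view the linked image: the logging is configured through options on the "IKM SQL to Hyperion Planning" step in the interface flow. From memory the relevant options are roughly the following (treat the names as approximate and the paths as examples only):
LOG_ENABLED          true
LOG_FILE_NAME        /tmp/planning_load.log
LOG_ERRORS           true
ERROR_LOG_FILENAME   /tmp/planning_load_errors.log
The error log lists each rejected record together with the reason Planning gave for rejecting it, which should explain where the 1018 rejections come from.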

Similar Messages

  • ODI metadata load impacts Hyperion Planning performance

    I am performing a dimension load of approximately 14,000 members, with a few attributes, from an Oracle view.
    The data is staged instantaneously. But it takes a long time (30mins plus) to load to Planning. During this time, Planning can become unresponsive. We are also having occasional Planning crashes, which may or may not be related.
    Can anyone tell me if this is normal from a performance perspective, and if there is anything we can look at to improve it?
    Thanks in advance.

    What you can do is remove "Enabled for Process Management" for scenarios/versions that don't use workflow.
    If you are using workflow then you can experience problems if you are loading to the entity dimension while a workflow is active.
    You can also increase the maximum jvm memory being used by the planning web application.
    Cheers
    John
    http://john-goodwin.blogspot.com/
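    As a rough illustration of the last point (the exact mechanism varies by version and application server - on Windows releases the heap settings typically live in the registry entries for the Planning service, on WebLogic in the managed server's JVM arguments), the change amounts to raising the standard JVM heap flags, for example:
    -Xms512m -Xmx1536m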

  • Error during Export process to Hyperion Planning:

    Hello All,
    I am trying to load data to Hyperion Planning from EBS GL. I have completed the setup including setting up source, target, mappings, rules, etc. The import and validate process from EBS is working fine but export to Hyperion Planning fails right away. In my limited experience with FDMEE, I have not seen this error before and your help is greatly appreciated.
    Here is an excerpt of the error from FDMEE process details, which is the same error shown in ODI. Everything before this step seems to be working. It appears to be complaining about a host name, but I am not sure which host name I should be looking at.
    Environment: EBS12, FDMEE 11.1.2.3.520, Everything is on 64bit Linux
    Error:
    2015-01-14 08:53:25,578 DEBUG [AIF]: AIFUtil.callOdiServlet - START
    2015-01-14 08:53:25,690 FATAL [AIF]: Error in CommData.loadData
    Traceback (most recent call last):
      File "<string>", line 4853, in loadData
      File "<string>", line 1122, in callOdiServlet
      File "__pyclasspath__/urllib2.py", line 124, in urlopen
      File "__pyclasspath__/urllib2.py", line 387, in open
      File "__pyclasspath__/urllib2.py", line 497, in http_response
      File "__pyclasspath__/urllib2.py", line 425, in error
      File "__pyclasspath__/urllib2.py", line 360, in _call_chain
      File "__pyclasspath__/urllib2.py", line 506, in http_error_default
    HTTPError: HTTP Error 504: Unknown Host
    2015-01-14 08:53:25,691 DEBUG [AIF]: CommData.updateWorkflow - START
    2015-01-14 08:53:25,691 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 32
              ,PROCESSENTLOAD = 0
              ,PROCESSENTLOADNOTE = 'Load Error'
            WHERE PARTITIONKEY = 3 AND CATKEY = 1 AND PERIODKEY = '2011-04-30' AND RULE_ID = 24
    2015-01-14 08:53:25,692 DEBUG [AIF]: CommData.updateWorkflow - END
    2015-01-14 08:53:25,692 DEBUG [AIF]: Comm.executeScript - START
    2015-01-14 08:53:25,692 DEBUG [AIF]: The following script does not exist: /u01/fdmee_app/ALPLAN/data/scripts/event/AftLoad.py
    2015-01-14 08:53:25,692 DEBUG [AIF]: Comm.executeVBScript - START
    2015-01-14 08:53:25,693 DEBUG [AIF]: The following script does not exist: \u01\fdmee_app\ALPLAN\data\scripts\event\AftLoad.vbs
    2015-01-14 08:53:25,693 DEBUG [AIF]: Comm.executeVBScript - END
    2015-01-14 08:53:25,693 DEBUG [AIF]: Comm.executeScript - END
    2015-01-14 08:53:25,746 DEBUG [AIF]: Comm.finalizeProcess - START
    2015-01-14 08:53:25,747 DEBUG [AIF]: CommData.updateRuleStatus - START
    2015-01-14 08:53:25,747 DEBUG [AIF]:
        UPDATE AIF_BALANCE_RULES
        SET STATUS = CASE 'FAILED'
          WHEN 'SUCCESS' THEN
            CASE (
              SELECT COUNT(*)
              FROM AIF_PROCESS_DETAILS pd
              WHERE pd.PROCESS_ID = 173
              AND pd.STATUS IN ('FAILED','WARNING')
            )
            WHEN 0 THEN 'SUCCESS'
            ELSE (
              SELECT MIN(pd.STATUS)
              FROM AIF_PROCESS_DETAILS pd
              WHERE pd.PROCESS_ID = 173
              AND pd.STATUS IN ('FAILED','WARNING')
            )
            END
          ELSE 'FAILED'
        END
        WHERE RULE_ID = 24
    2015-01-14 08:53:25,748 DEBUG [AIF]: CommData.updateRuleStatus - END
    2015-01-14 08:53:25,749 FATAL [AIF]: Error in COMM Load Data
    2015-01-14 08:53:25,749 DEBUG [AIF]: Comm.updateProcess - START
    2015-01-14 08:53:25,751 DEBUG [AIF]: Comm.updateProcess - END
    2015-01-14 08:53:25,752 DEBUG [AIF]: The fdmAPI connection has been closed.
    2015-01-14 08:53:25,753 INFO  [AIF]: FDMEE Process End, Process ID: 173

    Yes it does. I was able to connect with Planning and can see all the Planning dimensions in FDMEE. In this instance everything is installed on the same box, so it had better be resolving the host name.

  • ERPI: Data Loading problem Hyperion Planning & Oracle EBS

    Hi
    I am trying to load data from Oracle EBS to Hyperion Planning.
    When I push data, zero rows are inserted in the target.
    When I look at the table (SELECT * FROM TDATASEG), it shows me data, but the data is not being committed to the target application.
    The reason is a data difference between the source (EBS) and the target: in my source the Year is '2013' but in the target it is 'FY14'; similarly, the source Entity is '21' but the target is '2143213'.
    Can you please let me know how to solve this issue?
    Can I place a lookup table for this in ERPi?
    I am using ERPi and ODI to push the data.
    Regards
    Sher

    Have you set up the data load mapping correctly to map the source value to the proper target value? Based on what you are describing, it seems the system-generated * to * map is being used; if you are mapping a source to a different target, that mapping needs to be added to the data load mapping.
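    To make that concrete with the values from the post (a sketch, assuming explicit one-to-one maps fit your case), the Data Load Mapping entry would look something like:
    Dimension   Type       Source Value   Target Value
    Entity      Explicit   21             2143213
    The Year difference (2013 vs FY14) is normally handled by the period mappings defined in the ERPi/FDMEE setup rather than by a data load mapping.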

  • Sorting a join loading into Hyperion planning

    Hi,
    I have Product master data (1000+ records). I have to load it into Planning, but I cannot work out how to sort (as in SQL "ORDER BY 1 ASC") a join of two tables,
    MASTER_PRODUCT and DESC_PROD.
    any suggestion?
    Thanks in advance
    DecaXD

    Hi,
    One method is to customize the KM.
    Right-click "IKM SQL to Hyperion Planning" and select "Insert Option".
    Name the option something like "ORDER_OPTION" and set its type to Text.
    Now edit "IKM SQL to Hyperion Planning" and edit the command "Load data into planning".
    In there you should find
    sql= """select <%=odiRef.getPop("DISTINCT_ROWS") %> <%=odiRef.getColList("", "[EXPRESSION] [ALIAS_SEP] \u0022[COL_NAME]\u0022", ",", "", "INS and !TRG") %> from <%=odiRef.getFrom() %> where      (1=1) <%=odiRef.getFilter() %> <%=odiRef.getJrnFilter() %> <%=odiRef.getJoin() %> <%=odiRef.getGrpBy() %> <%=odiRef.getHaving() %>"""
    Add to the end - <%=odiRef.getOption("ORDER_OPTION")%>
    So it becomes
    sql= """select <%=odiRef.getPop("DISTINCT_ROWS") %> <%=odiRef.getColList("", "[EXPRESSION] [ALIAS_SEP] \u0022[COL_NAME]\u0022", ",", "", "INS and !TRG") %> from <%=odiRef.getFrom() %> where      (1=1) <%=odiRef.getFilter() %> <%=odiRef.getJrnFilter() %> <%=odiRef.getJoin() %> <%=odiRef.getGrpBy() %> <%=odiRef.getHaving() %><%=odiRef.getOption("ORDER_OPTION")%>"""
    Now in your interface, in the flow section for the IKM, you should have an option "ORDER_OPTION";
    put in something like
    ORDER BY 1 ASC
    Give that a go.
    Cheers
    John
    http://john-goodwin.blogspot.com/
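    To illustrate the effect of the change above (the datastore and column names here are hypothetical): with ORDER_OPTION set to ORDER BY 1 ASC, the sql string the KM builds at run time simply gains the order clause at the end, e.g.
    sql= """select C1_PARENT "Parent", C2_PRODUCT "Product" from ODI_STAGE."C$_0Product" where (1=1) ORDER BY 1 ASC"""
    so the rows are handed to the Planning writer already sorted.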

  • Tracking load on Hyperion Planning

    We need to come up with a management-level report showing the usage of Hyperion Planning in the company.
    We need metrics like:
    Peak number of users
    Average number of users (polling every 30 minutes)
    Is there anything within Hyperion Planning for this? I was thinking that I could probably find something in the WebServer for this. We are using WebSphere.
    Any suggestions or pointers are welcome.
    Thanks in Advance,
    Nazim

    Have you ever set the Evaluation Order?
    I'll bet you haven't. Assuming that the dimension in question is Accounts, move that dimension to be first in the Evaluation Order.
    You can find Evaluation Order in the classic administrator dimension editor; it's the third tab on the right: Dimensions, Performance Settings, Evaluation Order.
    Regards,
    Cameron Lackpour

  • How to choose member position during ERPi metadata load ?

    Dear All ,
    We are using ERPi to load data and metadata from EBS to our target Hyperion Planning application.
    During the metadata load, members end up at the root of the dimension.
    My question: how do I choose where the extracted members are placed, i.e. a certain position in the EPMA hierarchy?

    What you would do is set up your load rule as a dim build too. On the first pass you would load the data file as a dim build, adding unknown members to a default parent such as "Unknown Members". Then on the second pass, load the file again as a data load; all members will exist, so the load completes successfully.

  • Loading Attribute Dimension - Hyperion Planning

    Hello Everyone,
    I have found a lot of useful information here. I read http://john-goodwin.blogspot.com/2008/11/odi-series-loading-essbase-metadata.html
    about loading attribute dimensions in Essbase, but when I try to execute the same steps using the Hyperion Planning adapter in ODI, I cannot see the option inside the model to also reverse all the attribute dimensions, hence I cannot load members into my attribute dimension. Do you know how I can develop the attribute dimension load for Hyperion Planning?
    Regards,
    Wallace Galvão
    Brazil

    Hi,
    You first need to create the attribute dimension in Planning; ODI does not create dimensions.
    Once you have created your attribute dimension you can reverse the Planning application again, and the attribute dimension should appear as a datastore.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Load Hyperion Planning to R12 Budget using ODI

    hi,
    We are using Hyperion Planning v11 and Oracle Financials R12.
    We have already set up the ODI repository (topology, context, etc.) and were able to do a reverse using the RKM Essbase.
    Can anyone give us the steps/procedure for loading the Hyperion Planning data into Oracle Budget?
    thanks in advance

    Can you please share the knowledge of loading Essbase data into EBS using ODI?
    I heard you have to run a concurrent program to post data into GL. I am new to GL concepts. Can you please share information about the scripts you mention and how to achieve the data load? Appreciate your help.
    Thanks in Advance.
    Thanks,
    SaiN

  • Outline Load Utility error using Hyperion Planning 11.1.2

    I am using the Outline Load utility to extract an outline from Hyperion Planning in .csv format.
    We have three planning cubes and I am trying to extract a dimension that is only a standard dimension in the third cube. However, I get this error message:
    Error: Unable to obtain dimension information and/or perform a data load: com.hyperion.planning.sql.HspMember cannot be cast to com.hyperion.planning.sql.HspTimePeriod
    When I check Shared Services, the dimension is not part of the standard dimensions under Global Artifacts for the app; I can see it under the specific Plan Type as a standard dimension.
    When I try to extract any of the dimensions under Global Artifacts, the command is successful. Here is the syntax I use for the OutlineLoad command:
    OutlineLoad /A:HFPlan2 /U:admin  /-M /E:c:/position_export.csv /D:Position /L:c:/outlineLoad.log /X:c:/outlineLoad.exc
    Please advise whether I need to add something else to the command or use another tool entirely.
    Thanks!

    Log an SR with Oracle, because that is a varchar field with (I think) a 2000-character limit.
    Is that happening for just that application? (Do you have other apps, or can you create a sample application?)
    Regards
    Celvin

  • Hyperion Planning and eBusiness Suite ODI integration

    Hi,
    I have to integrate eBusiness Suite and Hyperion Planning, mostly exporting data from eBusiness Suite to load into Hyperion Planning. I'm already using the eBusiness Suite connectors.
    I'd like to know if there is a starting point for understanding the eBS model besides http://etrm.oracle.com. I find that the technical reference is not much help in understanding where I can find the information I need, or what kind of information each column/table provides.
    Thanks!

    Hi Pratik, thank you for your comment. Both links are very helpful, although I think I wasn't precise enough in my initial post. I'm trying to extract info from several modules of eBS (GL, PA, PO, INV, AP and others) into a single table, and my approach for the extract is to use one source set for each eBS module in the ODI interface (with the corresponding tables from each module in each source set).
    The main thing I need to know with this approach is how to avoid exporting duplicate info, because I know some info is present in several modules.
    I hope I was clear in my explanation; do you have any advice on where I can get that info? Thank you.

  • Outline load utility Data load error in Planning

    Hi,
    We are loading data using the OutlineLoad utility in Hyperion Planning 11.1.2.1, but the data is not loading and the log file shows the error below.
    Unable to obtain dimension information and/or perform a data load: com.hyperion.planning.olap.NoAvailableOlapConnectionsException: Unable to obtain a connection to Hyperion Essbase. If this problem persists, please contact your administrator.
    Can you please explain why we are getting this error?
    Thanks.

    Hi John,
    I tried refreshing the database and running the utility again; it still shows the same error,
    and the log file shows the following.
    [Sat Oct 15 16:52:23 2011]Local/ESSBASE0///2560/Info(1051164)
    Received login request from [::ffff:IP]
    [Sat Oct 15 16:52:23 2011]Local/ESSBASE0///2560/Info(1051187)
    Logging in user [admin@Native Directory] from [::ffff:IP]
    [Sat Oct 15 16:52:23 2011]Local/ESSBASE0///2788/Info(1051001)
    Received client request: List Applications (from user [admin@Native Directory])
    [Sat Oct 15 16:52:23 2011]Local/ESSBASE0///11872/Info(1051001)
    Received client request: List Databases (from user [admin@Native Directory])
    [Sat Oct 15 16:52:23 2011]Local/ESSBASE0///11232/Info(1051164)
    Received login request from [::ffff:IP]
    [Sat Oct 15 16:52:23 2011]Local/ESSBASE0///11232/Info(1051187)
    Logging in user [admin@Native Directory] from [::ffff:IP]
    [Sat Oct 15 16:52:23 2011]Local/ESSBASE0///2824/Info(1051001)
    Received client request: Get Application Info (from user [admin@Native Directory])
    [Sat Oct 15 16:52:23 2011]Local/ESSBASE0///2560/Info(1051001)
    Received client request: Get Application State (from user [admin@Native Directory])
    [Sat Oct 15 16:52:23 2011]Local/ESSBASE0///2788/Info(1051001)
    Received client request: Select Application/Database (from user [admin@Native Directory])
    [Sat Oct 15 16:52:23 2011]Local/ESSBASE0///2788/Info(1051009)
    Setting application BUD_APP active for user [admin@Native Directory]
    [Sat Oct 15 16:52:23 2011]Local/ESSBASE0///11872/Info(1051001)
    Received client request: Logout (from user [admin@Native Directory])
    Thanks

  • Aggregate storage cache warning during buffer commit

    h5. Summary
    Having followed the documentation to set the ASO storage cache size, I still get a warning during the buffer load commit saying it should be increased.
    h5. Storage Cache Setting
    The documentation says:
    A 32 MB cache setting supports a database with approximately 2 GB of input-level data. If the input-level data size is greater than 2 GB by some factor, the aggregate storage cache can be increased by the square root of the factor. For example, if the input-level data size is 3 GB (2 GB * 1.5), multiply the aggregate storage cache size of 32 MB by the square root of 1.5, and set the aggregate cache size to the result: 39.04 MB.
    My database has 127,643,648k of base data, which is 60.8x bigger than 2 GB. The square root of this factor is 7.8, so my optimal cache size should be (7.8 * 32 MB) = 250 MB. My cache size is in fact 256 MB, because I have to set it before the data load based on estimates.
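    To make the arithmetic easy to repeat, here is the documented sizing rule as a small Python sketch (the constants come straight from the documentation quoted above; the input size is the figure from this post):
    import math

    BASE_CACHE_MB = 32.0   # documented baseline: 32 MB per ~2 GB of input-level data
    BASE_INPUT_GB = 2.0

    def aso_cache_mb(input_level_kb):
        # Scale the 32 MB baseline by the square root of the data-size factor.
        input_gb = input_level_kb / (1024.0 * 1024.0)   # KB -> GB, binary units
        return BASE_CACHE_MB * math.sqrt(input_gb / BASE_INPUT_GB)

    print(aso_cache_mb(127643648))   # ~249.7 MB, i.e. the ~250 MB figure above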
    h5. Data Load
    The initial data load is done in 3 maxl sessions into 3 buffers. The final import output then looks like this:
    MAXL> import database "4572_a"."agg" data from load_buffer with buffer_id 1, 2, 3;
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1003058 - Data load buffer commit elapsed time : [5131.49] seconds.
    OK/INFO - 1241113 - Database import completed ['4572_a'.'agg'].
    MAXL>
    h5. The Question
    Can anybody tell me why the final import is recommending increasing the storage cache when it is already slightly larger than the value specified in the documentation?
    h5. Versions
    Essbase Release 11.1.2 (ESB11.1.2.1.102B147)
    Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 64 bit

    My understanding is that the storage cache setting calculation you quoted is based on the cache requirements for retrieval. That recommendation has remained unchanged since ASO was first introduced (v7?) and certainly predates the advent of parallel loading.
    I think the ASO cache is used during the combination of the buffers. As a result, depending on how ASO works internally, you would get this warning unless your cache was:
    1. equal to the final load size of the database,
    2. OR, if the cache is only used when data exists for the same "Sparse" combination of dimensions in more than one buffer, a function of the number of cross-buffer combinations required,
    3. OR, if the cache is needed only when compression dimension member groups cross buffers.
    By "Sparse" dimension I mean the non-compressed dimensions.
    Therefore you might try some experiments. To test each case above:
    1. Forget it - you will get this message unless you have a cache large enough for the final data set size on disk.
    2. Sort your data so that no dimensional combination exists in more than one buffer - i.e. sort by all non-compression dimensions, then by the compression dimension (see the sketch after this list).
    3. Often your compression dimension is time-based (EVEN THOUGH THIS IS VERY SUB-OPTIMAL). If so, you could sort the data by the compression dimension only and break the files so that the first 16 compression members (as seen in the outline) are in buffer 1, the next 16 in buffer 2 and the next in buffer 3.
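    A minimal sketch of experiment 2 above, assuming a tab-delimited extract whose last column is the compression (time) dimension and all preceding columns are non-compressed dimensions:
    import csv

    # Sort so that every non-compression combination is contiguous; the file
    # can then be split between combinations so no combination crosses a buffer.
    with open("aso_load.txt") as f:
        rows = list(csv.reader(f, delimiter="\t"))
    rows.sort(key=lambda r: (r[:-1], r[-1]))
    with open("aso_load_sorted.txt", "w") as f:
        csv.writer(f, delimiter="\t").writerows(rows)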
    Also, if your machine is IO-bound (as most are during a load of this size) and your CPU is not, try using OS-level compression on your input files - it could speed things up greatly.
    Finally, regarding my comments on a time-based compression dimension - you should consider building a stored dimension for this, along the lines of what I have proposed in some posts on network54 (search for DanP on network54.com/forum/58296 - I would give you a link but it is down now).
    Or, better yet, see the forthcoming book (of which Robb is a co-author), Developing Essbase Applications: Advanced Techniques for Finance and IT Professionals: http://www.amazon.com/Developing-Essbase-Applications-Techniques-Professionals/dp/1466553308/ref=sr_1_1?ie=UTF8&qid=1335973291&sr=8-1
    I really hope you will try the suggestions above and post your results.

  • Unable to load metadata using outlineload utility in Hyperion planning classic app

    Hi All,
    We are trying to update metadata in a classic Planning application using the OutlineLoad utility, but we are getting the error below:
    at com.hyperion.planning.utils.HspOutlineLoad.main(Unknown Source)
    Error encountered with Database connection, recreating connections.
    Nested Excetpion: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist
    Query Failed: SQL_DELETE_EXPIRED_EXTERNAL_ACTIONS:[100]
    java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist 
    Please suggest a way to solve this issue.
    Regards,
    Mahaveer

    Looks like it's a sequence issue. Can you check if the HSP_ACTION_ID_SEQ sequence is available for this Planning application?
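    A quick way to check (standard Oracle data dictionary; run as the Planning application's repository schema owner):
    SELECT sequence_name FROM user_sequences WHERE sequence_name = 'HSP_ACTION_ID_SEQ';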
    Check this :
    Unable to update Dimension Security on a Load Balanced Planning Environment (Doc ID 1058156.1)
    Hope this helps!
    Sh!va

  • Loading Metadata from ODI to Hyperion Planning Custom Dimension

    A customer wants to load metadata from ODI into a Hyperion Planning custom dimension using flat files (.txt).
    Is it possible to load metadata into a custom dimension? If so, do we need any KM for Planning other than RKM Hyperion Planning?
    When I try to map the dimension from source to target, the connection is blank and I get "Used by target columns none".
    Please refer to the image:
    http://1.bp.blogspot.com/_Z0lKn46L41I/TJuZcsQxIjI/AAAAAAAAA90/TTv79fbQ9ks/s1600/ODIIssue.JPG
    Thanks
    Vikram

    Yes, you can load to custom dimensions just like the other dimensions; the custom dimensions should be reversed into the model.
    You need to use the IKM SQL to Hyperion Planning; make sure you set the staging area to be different from the target.
    If you want to see how to load metadata to planning, have a read of :- http://john-goodwin.blogspot.com/2008/10/odi-series-part-5-sql-to-planning.html
    Cheers
    John
    http://john-goodwin.blogspot.com/
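    For a concrete picture of the flat file: once the custom dimension has been reversed, its datastore exposes the columns you map the file to. A hypothetical .txt could look like this (column names are illustrative - match them to what ODI reversed for your dimension):
    Parent,Product,Default Alias,Data Storage,Operation
    Total Products,Prod100,Product 100,Store,Update
    Total Products,Prod200,Product 200,Never Share,Update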
