Informatica and Essbase Best Practice questions

We now have the Informatica adapter for Essbase installed and working, and we have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to Informatica Support, but thought I would also post them here to see whether many folks are using Informatica against Essbase.
We are using:
Informatica 8.6.1 (Linux)
Essbase 11.1.1.3 (Windows 2003)
1) I can see in Informatica that when we load data to Essbase (Target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (Target), what is the best practice? The workaround I have found is to add the same session twice and, for the first instance, select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is that the log shows it will still run the query against the Source tables. This will impact run times and double the querying against the Source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to manually type the calc name), if I want to run the 'Default' calc for Essbase, what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MaxL commands via a Command task. What is the best practice for doing this, and what is the syntax to run MaxL commands in a Command task in Informatica? I previously had shell scripts built on the Informatica server that would be kicked off within Informatica, but we are trying to move away from shell scripts and instead have the scripting code IN the workflows/sessions to make it easier to review the code and follow the logic, rather than having to find the scripts and open each of them.
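For example, here is a rough sketch of what I am picturing the Command task doing (this is only my own guess, assuming essmsh is on the PATH of the Integration Service machine; the script path, credentials, application/database and calc script names below are just placeholders):
Command task command:
essmsh /opt/informatica/scripts/pre_load_calc.msh myuser mypassword myserver
pre_load_calc.msh (MaxL, with positional parameters $1=user, $2=password, $3=server):
/* log in, run the pre-load calculation, then disconnect */
login $1 identified by $2 on $3;
execute calculation 'MyApp'.'MyDb'.'PreLoad';
logout;
exit;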
Any assistance you can offer with getting the two products working together would be GREATLY appreciated!
Robert

As I know, addUser(User user){ ... } is much more useful for several reasons:
1. It's object oriented
2. It's easy to write, because if an object has many parameters it is very painful to write a method with comma-separated parameters

Similar Messages

  • EPMA Essbase vs. Classic Essbase | Best practice question

    Dear all,
    With no formal training on the topic of EPMA but with the experience of using it together with Planning, I'm aware that deployment in EPMA is destructive for any content created manually in the Essbase application under it.
    e.g. when doing a full deployment all manually added calculation scripts will be erased.
    I have following questions about the Essbase EPMA applications:
    Is Essbase deployment similarly destructive as it is with EPMA Planning?
    Must all application content also reside in both EPMA and Calculation Manager? Is it possible to only use EPMA to control master data and do all other actions in Essbase?
    I see real added value in using EPMA for a client of mine, but I fear I might need to tell them to migrate all their scripts from the Essbase administration console to Calculation Manager. If so, it will be hard for them to see the added value of the switch.
    What about Essbase Studio? I'm not aware of any need for Essbase Studio in the context of a Planning EPMA implementation, but what about Essbase EPMA? Is this a must, advised, or just nice to have?
    Very much appreciate any advice in this context.
    Erik
    Belgium

    It depends what you mean by destructive; if you mean does it suffer from the same sort of problems, then yes. Personally I find it difficult to understand the benefits of moving away from the classic way of maintaining Essbase applications to an EPMA-managed world, but that's just my opinion.
    You should be able to use calc scripts in the same way as before, and the same goes for other functionality.
    Essbase Studio is required as EPMA uses some of the codeline to deploy the applications to Essbase.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Best Practices Question: How to send error message to SSHR web page.

    Best Practices Question: How to send an error message to an SSHR web page from a custom PL/SQL procedure called by an SSHR workflow.
    For the Manager Self-Service application we have copied various workflows, which were modified to meet business needs. Part of this exercise was creating custom PL/SQL package procedures that gather details on the workflow and use them in custom notifications sent by the workflow.
    What I'm looking for is: if/when the PL/SQL procedure errors, how does one send a failure message back and display it on the self-service page?
    Writing information into a log or table at the database level works for troubleshooting, but we're looking for something that will provide the end user with an intelligent message that the workflow has failed.
    Thanks ahead of time for your responses.
    Rich

    We have implemented the same kind of requirement long back.
    We have defined our PL/SQL procedures with two OUT parameters
    1) Result Type (S:Success, E:Error)
    2) Result Message
    In the PL/SQL procedure we always use the below construct when we want to raise a message:
    hr_utility.set_message(APPL_NO, 'FND_MESSAGE_NAME');
    hr_utility.raise_error;
    In the exception block we write the below (in the successful case we just set p_result_flag := 'S';):
    EXCEPTION
       -- error raised explicitly through hr_utility.raise_error above
       WHEN APP_EXCEPTION.APPLICATION_EXCEPTION THEN
          p_result_flag    := 'E';
          p_result_message := hr_utility.get_message;
       -- any other unexpected error
       WHEN OTHERS THEN
          p_result_flag    := 'E';
          p_result_message := hr_utility.get_message;
          fnd_message.set_name('PER', 'FFU10_GENERAL_ORACLE_ERROR');
          fnd_message.set_token('2', substr(sqlerrm, 1, 200));
          fnd_msg_pub.add;
          p_result_message := fnd_msg_pub.get_detail;
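    Putting those pieces together, a bare-bones skeleton of such a procedure would look roughly like this (the procedure name, validation check, application id and message name are only placeholders for illustration):
    PROCEDURE process_request (p_person_id      IN  NUMBER,
                               p_result_flag    OUT VARCHAR2,
                               p_result_message OUT VARCHAR2)
    IS
    BEGIN
       -- business logic; raise a named message when validation fails
       IF p_person_id IS NULL THEN
          hr_utility.set_message(800, 'XX_CUSTOM_VALIDATION_MSG');
          hr_utility.raise_error;
       END IF;
       p_result_flag := 'S';
    EXCEPTION
       WHEN app_exception.application_exception THEN
          p_result_flag    := 'E';
          p_result_message := hr_utility.get_message;
       WHEN OTHERS THEN
          p_result_flag    := 'E';
          p_result_message := substr(sqlerrm, 1, 200);
    END process_request;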
    After executing the PL/SQL from Java we have written something similar to:
    orclStmt.execute();
    OAExceptionUtils.checkErrors(txn);
    String resultFlag = orclStmt.getString(resultFlagBindNo);            // bind position of p_result_flag
    if ("E".equalsIgnoreCase(resultFlag)) {
        String resultMessage = orclStmt.getString(resultMessageBindNo);  // bind position of p_result_message
        orclStmt.close();
        throw new OAException(resultMessage, OAException.ERROR);
    }
    It safely shows the message to the user with all the data on the page.
    We have been using this construct for a long time for all our projects. They are all working as expected.
    Regards,
    Peddi.

  • Best practice question -- copy container, assemble it, build execution plan

    So, this is a design / best practice question:
    I usually copy containers as instructed by docs
    I then set the source system parameters
    I then generate needed parameters / assemble the copied container for ALL subject areas present in the container
    I then build an execution plan JUST FOR THE 4 SUBJECT AREAS I need and set whatever is needed before running it.
    QUESTION - When I copy the container, should I delete all the subject areas I don't need out of it, or is it best to do this when building the execution plan? I am basically trying to simplify the container for my own sake and have the container contain just a few subject areas, rather than waiting until I build the execution plan and then focusing on a few subject areas.
    Your thoughts / clarifications are appreciated.
    Regards,

    Hi,
    I would suggest that you leave the subject areas and then just don't include them in the execution plan. Otherwise you have the possibility of running into the situation where you need to include another subject area in the future and you will have to go through the hassle of recreating it in your SSC.
    Regards,
    Matt

  • SAP Adapter Best Practice Question for Deployment to Clustered Environment

    I have a best practices question on the iway Adapters around deployment into a clustered environment.
    According to the documentation, you are supposed to run the installer on both nodes in the cluster but configure on just the first node. See below:
    Install Oracle Application Adapters 11g Release 1 (11.1.1.3.0) on both machines.
    Configure a J2CA configuration as a database repository on the first machine.
    Perform the required changes to the ra.xml and weblogic-ra.xml files before deployment.
    This makes sense to me because once you deploy the adapter RAR in the next step, the appropriate RAR will get staged and deployed on both nodes in the cluster.
    What is the best practice for the 3rdParty adapter directory on the second node? The installer lays it down with the adapter rar and all. Since we only configure the adapter on node 1, the directory on node 2 will remain with the default installation files/values not the configured ones. Is it best practice to copy node 1's 3rdParty directory to node 2 once configured? If we leave node 2 with the default files/values, I suspect this will lead to confusion to someone later on who is troubleshooting because it will appear it was never configured correctly.
    What do folks typically do in this situation? Obviously everything works to leave it as is, but it seems strange to have the two nodes differ.

    What is the version of the operating system? If you are on any OS version lower than Windows 2012 then you need to add one more voter for quorum.
    Balmukund Lakhani

  • SAP Adapter Best Practice Question for Migration of Channels

    I have a best practice question on the SAP adapter when migrating an OSB project from one environment (DEV) to another (QA).
    If my project includes an adapter channel (e.g., an inbound SAP proxy listening on a channel), how do I migrate that project to another environment if the channel in the target environment is different?
    I tried using the search-and-replace mechanism in the sbconsole, but it doesn't find the channel name in the jca and wsdl files.
    What is the recommended way to migrate from one environment to the other when the channel name changes?


  • HTML and CSS Best Practices for Eloqua?

    Hello Topliners Community,
    My name is Ben and I am a Web Designer. I am currently looking for any guidance on HTML and CSS best practices when working with Eloqua. I am interested in the best practices for e-mail and landing pages.
    Thank you,
    Ben

    Personally I like to upload my custom-created HTML/CSS into Eloqua instead of using the WYSIWYG editor.
    But if you must use it, then right-clicking on text boxes and clicking 'edit source' is the way to go.
    There was a good discussion on editing your forms with CSS:
    Energize Your Eloqua10 Forms with CSS
    CSS can be used to heavily customize the layout of forms in Eloqua10. In this article we will cover some common formatting use cases on Eloqua10 landing pages. Further details about uses of CSS in Eloqua10 form templates can be found here: EE12 - Do It - Eloqua - Energize E10 Forms
    Eloqua10 Forms HTML Structure
    Below is an outline of the structure of the HTML generated by Eloqua when a form is added to a landing page.  By targeting the HTML classes highlighted below, we can control the layout of any form on your landing page.
      For the rest of the page, see: http://topliners.eloqua.com/docs/DOC-3015

  • Sessions and Controllers best-practice in JSF2

    Hi,
    I've not done web development work since last using Apache Struts for its MVC framework (about 6 years ago now), so bear with me if my questions do not make sense:
    SESSIONS
    1) Reading through the JSF2 spec PDF, it mentions state-saving via the StateManager. I presume this is also the same StateManager that is used to store managed beans that are @SessionScoped?
    2) In relation to session-scoped managed beans, when does a JSF implementation start a new session? That is, when does the implementation, such as Mojarra, call ExternalContext.getSession( true ) .. and when does it simply use an existing session ( calling ExternalContext.getSession( false ) )?
    3) In relation to session-scoped managed beans, when does a JSF implementation invalidate a session ? That is, when does the implementation call ExternalContext.invalidateSession() ?
    4) Does ExternalContext.getSession( true ) or ExternalContext.invalidateSession() even make sense if the state-saving mechanism is client ? ( javax.faces.STATE_SAVING_METHOD = client ) Will the JSF implementation ever call these methods if the state-saving mechanism is client ?
    CONTROLLERS
    Most of the JSF2 tutorials that I have been reading online use the same backing bean when performing an action on the form ( when doing a POST or a GET or a post-back to the same page ).
    Is this best practice ? It looks like mixing what should have been a simple POJO with additional logic that should really be in a separate class.
    What have others done ?

    gimbal2 wrote:
    jmsjr wrote:
    EJP wrote:
    It's better because it ensures the bean gets instantiated, stuck in the session, which gets instantiated itself, the bean gets initialised, resource-injected, etc etc etc. Your way goes behind the scenes and hopes for the best, and raises complicated questions that don't really need answers.
    Thanks.
    1) But if I only want to check that the bean is in the session ... and I do NOT want to create an instance of the bean itself if it does not exist, then I presume I should still use ExternalApplication.getSessionMap.get(<beanName>).
    I can't think of a single reason why you would ever need to do that. Checking if a property of a bean in the session is populated, however, is far more reasonable to me.
    In my case, there is an external application ( e.g. a workflow system from a vendor ) that will open a page in the JSF webapp.
    The user is already authenticated in the workflow system, and the external system from the vendor sends along the username and password and some parameters that define what the request is about ( e.g. whether to start a new case, or open an existing case ). There will be no login page in the JSF webapp as the authentication was already done externally by the workflow system.
    Basically, I was thinking of implementing a PhaseListener that would do the following (a rough sketch follows the list):
    1) Parse the request from the external system, and store the relevant username / password and other information into a bean which I store into the session.
    2) If the request parameter does not exist, then I go look for a bean in the session to see if the actual request came from within the JSF webapp itself ( e.g. if it was not triggered from the external workflow system ).
    3) If this bean does not exist at all ( e.g. It was triggered by something else other than the external workflow system that I was expecting ) then I would prefer that it would avoid all the JSF lifecycle for the current request and immediately do a redirect to a different page ( be it a static HTML, or another JSF page ).
    4) If the bean exist, then proceed with the normal JSF lifecycle.
    I could also, between [1] and [2], do a quick check to verify that the username and password are indeed valid on the external system ( they have a Java API to do that ), and if the credentials are not valid, I would also avoid all the JSF lifecycle for the current request and redirect to a different page.
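    Something along these lines is what I had in mind, as a rough, untested sketch (the listener class name, the "workflowContext" session key, the "caseAction" request parameter and the /error.html target are all placeholders I made up, not anything from the vendor's API):
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import javax.faces.FacesException;
    import javax.faces.context.ExternalContext;
    import javax.faces.context.FacesContext;
    import javax.faces.event.PhaseEvent;
    import javax.faces.event.PhaseId;
    import javax.faces.event.PhaseListener;

    public class WorkflowContextPhaseListener implements PhaseListener {

        public PhaseId getPhaseId() {
            // run right after the view is restored, before the rest of the lifecycle
            return PhaseId.RESTORE_VIEW;
        }

        public void afterPhase(PhaseEvent event) {
            FacesContext facesContext = event.getFacesContext();
            ExternalContext external = facesContext.getExternalContext();
            Map<String, String> params = external.getRequestParameterMap();
            Map<String, Object> session = external.getSessionMap();

            if (params.containsKey("caseAction")) {
                // 1) request came from the external workflow system:
                //    keep its credentials/parameters in the session
                session.put("workflowContext", new HashMap<String, String>(params));
            } else if (!session.containsKey("workflowContext")) {
                // 3) no request parameters and no bean in the session:
                //    skip the rest of the lifecycle and redirect elsewhere
                try {
                    external.redirect(external.getRequestContextPath() + "/error.html");
                } catch (IOException e) {
                    throw new FacesException(e);
                }
                facesContext.responseComplete();
            }
            // 2)/4) otherwise the bean already exists: normal JSF lifecycle continues
        }

        public void beforePhase(PhaseEvent event) {
            // nothing to do before RESTORE_VIEW
        }
    }
    The listener would then be registered in faces-config.xml under <lifecycle><phase-listener>.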

  • Best Practice Question

    I have 3 Areas for my DWH
    The first area is Staging then validation and core
    Staging is just to load data from the source systems.
    Validation is to validate the data (every city has to have a country ...).
    Core is my DWH schema.
    The first step in ETL is to load the data from core to validation; let's say my GEO_DIM dimension goes to Countries, Cities and Regions in core. Additionally, I build a CRC sum when I download from core to validation and store the CRC checksum in a staging table.
    The second step is to load the data from the source systems to staging, but only those records whose CRC checksum differs from the previously downloaded one, so only changed or new data go to staging.
    The third step is to load that new/changed data from staging to core and check some dependencies. It's just validation.
    My question is: what is the best practice to bring three tables (Countries, Cities and Regions) into one dimension?
    thanks and regards
    Andreas

    Andreas,
    I guess the correct answer is: it depends... Without kidding, are you planning to use a flat star table for this dimension? If that is the case, you would be joining the sources together and loading this into the table.
    Now this sounds way too simple, so I guess there is something more to the question...
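    Just to illustrate the flat star idea (all table and column names below are only my guesses at your model, not actual definitions), the load into such a GEO_DIM would be roughly a three-way join like:
    INSERT INTO geo_dim (geo_key, city_name, region_name, country_name)
    SELECT geo_dim_seq.NEXTVAL,
           ci.city_name,
           r.region_name,
           co.country_name
    FROM   cities    ci
    JOIN   regions   r  ON r.region_id   = ci.region_id
    JOIN   countries co ON co.country_id = r.country_id;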
    Jean-Pierre

  • Best practice question for Bounded task flows

    We are new to JDeveloper/ADF and I was wondering if we should always try to use a bounded task flow for our applications... is this considered the best way to develop an app, even the small single-page ones?
    Thanks in advance.

    Hi,
    let me turn the question around: how many tools do you have at home besides a hammer? In other words, it's the problem you need to solve (and use cases are problems) that should determine the use of bounded task flows and their granularity. Note that for each use case a bounded task flow makes a good candidate for delivering it. That doesn't mean that every single page needs to go into its own task flow. For general bounded task flow best practices have a look here:
    http://www.oracle.com/technetwork/developer-tools/jdev/adf-task-flow-design-132904.pdf
    Frank

  • Bean best practice question

    Simple questions (hopefully), I know how to code this but just want some advice on the best way to do the following:
    1. User enters data into HTML form and submits
    2. Some Java at the backend grabs these details and emails them off somewhere
    I am thinking of doing the following, but what’s the best way?
    1. HTML form submits and data is sent directly to a JavaBean (FormBean.java)
    2. FormBean.java contains standard getters/setters but also contains a method called sendMail(), is this bad practice? Do I need a second Bean sendMail.java? Or is this completely the wrong way to do things i.e. should I do this entirely in a servlet with only 1 bean to grab the data (FormBean) and then access from servlet?
    Just a bit confused on what’s best practice for this stuff?
    Thanks!

    2. FormBean.java contains standard getters/setters but also contains a method called sendMail(), is this bad practice? Do I need a second Bean sendMail.java?
    A better approach is this:
    a) Have all the form data in the form bean
    b) Write sendMail in a all together different class, as action.
    c) Send the form bean as a parameter to sendMail for processing and sending an email
    This way your sendMail() becomes a kind of service. Tomorrow you might have some other data which you have to send in an email; in that case, you just reuse the sendMail() method. Otherwise, if you have sendMail() in the form bean itself, then if there are many form beans you would have to write sendMail() in every form bean, which is bad practice. One principle of OOAD is to separate functionality that is redundant across your classes into its own module. If there are changes to the sendMail() functionality then, by having it in one module, you only have to change it in one place.
    Or is this completely the wrong way to do things i.e. should I do this entirely in a servlet with only 1 bean to grab the data (FormBean) and then access from servlet?
    You can have a servlet which acts as a controller: it receives the request parameters, constructs the form bean and invokes the appropriate action (in your case sendMail()). This is the same idea as an MVC framework. Instead of re-inventing the wheel by creating a servlet controller, form bean, action etc. yourself, you could use one of the several MVC frameworks available, such as Struts or Spring MVC.
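    A stripped-down sketch of that separation (class names and the servlet wiring are only illustrative; the actual mail sending via the JavaMail API is left out so the structure stays visible):
    // FormBean.java - plain data holder populated from the HTML form
    public class FormBean {
        private String name;
        private String email;
        private String message;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
        public String getMessage() { return message; }
        public void setMessage(String message) { this.message = message; }
    }
    // MailService.java - sendMail() lives in its own class so any bean can reuse it
    public class MailService {
        public void sendMail(FormBean form) {
            // build and send the mail here (e.g. with the JavaMail API)
            System.out.println("Mailing: " + form.getName() + " <" + form.getEmail() + ">");
        }
    }
    // Inside the controller servlet's doPost(): populate the bean, then delegate
    //   FormBean form = new FormBean();
    //   form.setName(request.getParameter("name"));
    //   form.setEmail(request.getParameter("email"));
    //   form.setMessage(request.getParameter("message"));
    //   new MailService().sendMail(form);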

  • Group by best practice question

    Consider this example:
    TABLE: SALES_DATA
    firm_id|sales_amt|d_date|d_data
    415|45|20090615|Lincoln Financial
    415|30|20090531|Lincoln AG
    416|10|20081005|AM General
    416|20|20080115|AM General Inc.
    I want the output to be grouped by firm_id with the sum of sales_amt and the d_data
    that corresponds to the latest d_date (i.e. max(d_date))
    Proposed query:
    select firm_id, sum(sales_amt) total_sales,
           substr(max(d_data), instr(max(d_data), '~') + 1) firm_name
    from (
      select firm_id, sales_amt, d_date || '~' || d_data d_data from sales_data
    )
    group by firm_id
    output is as expected:
    firm_id|total_sales|firm_name
    415|75|Lincoln Financial
    416|30|AM General
    I know this works, but my QUESTION is: is there a better way to do this, and is the above approach of concatenating columns when you want to aggregate multiple columns against any best practices?
    Thanks very much!

    Here's a way that uses analytics (I just like them):
    SQL> select * from sales_data;
                 FIRM_ID            SALES_AMT D_DATE               D_DATA
                     415                   45 15-JUN-2009 00:00:00 Lincoln Financial
                     415                   30 31-MAY-2009 00:00:00 Lincoln AG
                     416                   10 05-OCT-2008 00:00:00 AM General
                     416                   20 15-JAN-2008 00:00:00 AM General Inc.
    SQL> select firm_id, sum_amt, d_data
      2  from
      3  (
      4     select firm_id, d_data
      5           ,sum(sales_amt) over (partition by firm_id) sum_amt
      6           ,row_number() over (partition by firm_id order by d_date desc) rn
      7     from   sales_data
      8  )
      9  where rn = 1
    10  ;
                 FIRM_ID              SUM_AMT D_DATA
                     415                   75 Lincoln Financial
                     416                   30 AM General

  • HyperV 2012 best practice question (storage of guests)

    I was reading this post:
    http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
    and I saw two things: one where it says fibre channel is not supported and one where it says loopback is not supported.
    So my question is: for a best practice, would local storage (say a locally attached RAID 10) be considered best practice, or a shared SAN LUN? And if a SAN, would virtualized storage like IBM's SVC be supported?

    Using a SoFS (Scale-Out File Server) with SMB3 as the storage for Hyper-V and SQL Server/clusters is Microsoft's number-one recommendation, and you will see more and more push for that design in every coming version.
    The top feature for me in that combo is that you get 100% of all features on day one when a new version comes out. Microsoft SMB 3.x has some really nice things in it, and you can use all the standard Ethernet equipment you already have and have known for years. If you need super speed you can do that with some RDMA cards, still Ethernet :-) and if you do a live migration between any combination of clusters and single nodes you never need to move the data and config files :-) they always stay on
    \\SoFS\VMs :-)
    All the information you need is on http://smb3.info . Jose, my Mr. SoFS, collects and blogs about all the features, with step-by-step guides to test and prove it even on one single notebook, very very cool.
    As a backend for the SoFS you have many choices, from the new wave of JBODs together with Storage Spaces up to old-style SAN boxes with or without special features. Always check the Windows Server Catalog for supported configs, especially on the JBOD side; make sure you get one with R2 support and also with SES support (SCSI Enclosure Services). Jose also blogs about those topics very well.
    https://blogs.technet.com/b/storageserver/archive/2013/10/19/storage-spaces-jbods-and-failover-clustering-a-recipe-for-cost-effective-highly-available-storage.aspx
    http://www.windowsservercatalog.com/results.aspx?&chtext=&cstext=&csttext=&chbtext=&bCatID=1642&cpID=0&avc=10&ava=0&avq=0&OR=1&PGS=25&ready=0
    A Hyper-V cluster of up to 64 nodes and a SoFS of up to 8 nodes, depending on your scaling needs, and having storage split over separate data centers, should be a good start :-)
    Just let me know if you need some advice after doing a design session.
    Udo

  • BEST PRACTICE QUESTION: XI or direct R/3 RFC?

    We are creating an interface from R/3 to an externally facing (internally built) website.
    Question: What is the best practice for deciding whether or not we should pivot through XI? This is a single interface that will not be reused; there would be no mapping within XI; and the call must be real-time and synchronous.
    Thanks much for your knowledge/assistance.

    Hi,
    the best practice is:
    don't use XI for synchronous interfaces,
    but if the flow from RFC to this website will be very occasional
    and you don't want to play with receiving RFC data on the website,
    then you can use it just to help yourself with an adapter.
    Regards,
    Michal Krawczyk

  • Best Practices: Question about Passing DataSet to Crystal through C#

    I have used the tutorial provided by Business Objects and have successfully passed a dataset from C# code to Crystal Reports.
    I have a few questions about "best practices" though.
    It appears that when passing the dataset to Crystal you no longer have the ability to put SQL Expressions in the report; otherwise errors will occur.
    So, I'm trying to come up with a way to have a custom field in the SELECT statement and have it show up on the report as a field. So, in the below example I created a custom SQL field called TIMES7 in the query.
    "Select CLM_ID, CLM, PAID_111X, (PAID_111X * 7) As TIMES7 From WIKI.MULCRICKET WHERE CLM_ID < 5"
    If I create a formula field in Crystal and set its value to {MULCRICKET.TIMES7} then it works like a hybrid SQL Expression at run-time.
    This does work but has issues, because the database field doesn't really exist in the design environment, which causes an error when the formula is saved. But it does work at run-time.
    I was wondering if there was a best practice for this?

    So why are you using a formula that doesn't work? If it errors in the designer it's telling you it's not supported.
    Drop the formula into a field and hide it if you don't want it displayed; this way it gets into the record selection formula.
    "Best Practices" is don't use formulae that won't verify in the Designer. "If it doesn't work in the designer it won't work in code"
    If you need to add an "unknown" field at run time then use RAS to insert the field.
