Is it a best practice to input data directly to <Parent Currency>?

Dear Gurus,
I understand this question is very simple, but I am wondering whether it is a best practice to input data directly into <Parent Currency>.
We have a scenario where users perform currency translation in Oracle and wish to see the same numbers in HFM. Because the process they follow is unique in nature, by Entity->Account [OR] for all Entities->Account, creating Override accounts is not helping us and is rapidly growing the Override hierarchy month-on-month.
Thanks...
Satya

By overriding the accounts, I assume you are referring to accounts that are translated at historical rates, as opposed to the default translation performed by HFM?
If so, I have seen two general approaches to achieve this:
     a) Allowing the user to directly override the translation at IsTransCurr (i.e. <Parent Currency>, if the parent is in a currency other than the entity's)
     b) Entering a special currency rate in a rate account and then adjusting the translate routine to translate your account by that rate.
Personally, I am a bigger fan of the first option.

Similar Messages

  • Best Practice on querying Data from Database

Hello, I was wondering what the preferred best practice is for querying data from an SQL database inside a JSP page. Is it using the JSTL library, or another method? Thanks

    It depends on the size of the application really.
    The "correct and preferred" approach in a large MVC app would be to have a seperate class that does all the database access, retrieving the data into java objects.
    Check out [url http://java.sun.com/blueprints/corej2eepatterns/Patterns/DataAccessObject.html] DAO pattern
    You then "save" the data into request/session attributes, and forward to a jsp page to render the result.
    Most approaches recommend a separation between JSP (the view) and SQL code.
    The JSTL sql tags are provided more for "quick and dirty" code applicable in small applications, or for fast prototyping. That approach is not really robust for large scale applications.
    Cheers,
    evnafets
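    For what it's worth, a minimal sketch of the DAO approach described above. The users table, User bean and class names are all made up for illustration:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;
    // Plain value object the DAO hands back to the web tier.
    class User {
        private final int id;
        private final String name;
        User(int id, String name) { this.id = id; this.name = name; }
        public int getId() { return id; }
        public String getName() { return name; }
    }
    // All JDBC code lives here; the JSP never sees a Connection.
    public class UserDao {
        private final DataSource ds;
        public UserDao(DataSource ds) { this.ds = ds; }
        public List<User> findAll() throws SQLException {
            List<User> users = new ArrayList<User>();
            Connection con = ds.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement("SELECT id, name FROM users");
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    users.add(new User(rs.getInt("id"), rs.getString("name")));
                }
            } finally {
                con.close();
            }
            return users;
        }
    }
    // In the controller servlet, as evnafets says: save and forward.
    //   request.setAttribute("users", userDao.findAll());
    //   request.getRequestDispatcher("/users.jsp").forward(request, response);
    // The JSP then just renders ${users} with a c:forEach - no SQL in the view.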

  • Best practice for hierarchical data

    First off, I have to say that JMX in Java 6 is terrific stuff. Bundling jconsole in with Java has made JMX adoption so much easier for us.
    Now, to my question. We have read-only hierarchical data (think a DOM tree) that we would like to publish via JMX. What is the best practice? We see two possibilities:
    1. Publish each node of the tree with its own object name and type. This will allow jconsole to display the information in the tree control.
    2. Publish just the root of the tree with an object name and type and then use CompositeType to describe the nodes of the tree. This means you look at the tree in the "Attribute Value" panel of jconsole.
    Are there any best practices for such data? We have implemented #2 and it works, but we are wondering if long term this might lead to unforeseen consequences.
    Thanks in advance.
    --Marty

    Path,
    I did go with #1 and it worked out great. Every node in our tree is an ObjectName node. Works very well for us.
    --Marty
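    For anyone finding this later, a minimal sketch of approach #1. The NodeMXBean interface, the domain and the key properties below are invented for illustration, and exactly how jconsole nests the entries depends on the key properties you choose:
    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;
    // Hypothetical read-only MBean interface for a single tree node.
    interface NodeMXBean {
        String getValue();
    }
    public class Node implements NodeMXBean {
        private final String value;
        public Node(String value) { this.value = value; }
        public String getValue() { return value; }
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // One ObjectName per node; shared key properties make jconsole
            // group the MBeans into a tree in its MBeans tab.
            server.registerMBean(new Node("root value"),
                new ObjectName("com.example.tree:type=Node,name=root"));
            server.registerMBean(new Node("child value"),
                new ObjectName("com.example.tree:type=Node,parent=root,name=child"));
            Thread.sleep(Long.MAX_VALUE); // keep the JVM up for jconsole
        }
    }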

  • Best practice for sharing data with modal window

    Hi team,
    what would be the best practice for sharing data with a modal window? I use a modal window to display record details from a record list, but I am not quite sure how to access the data from the components in the main application in the modal window.
    Any hints would be welcome
    Best
    Frank

    Pass a reference to the parent into the modal popup. Then you
    can reference anything in the parent scope.
    I haven't done this in 2.0 yet so I can't give you code. I'll post if I do.
    Oh, also, you can reference the parent using parentDocument.
    So in the popup you could do:
    parentDocument.myPublicVariable = "whatever";
    Tracy

  • Best Practices on Routine Data Load.

    Can someone please tell me what the best practices are for routine data loads from one database to another?
    We have a PeopleSoft system where new employee records are created; however, these new employees are required to take new-employee tests that are tracked by an application outside PeopleSoft on an Oracle db. Therefore, we need to populate the Oracle db with the new employees' information, on a daily basis or as needed. The data we will need to track are new employees or rehires, changes to existing employees (position, title, etc.), and terminated employees (date of termination, etc.).
    What is the best practice to get the employee's information to the Oracle db?
    Any suggestions are appreciated.
    -andy

    It depends on your source and your database versions, which you didn't mention. What is the easiest way to get the data out of your source database?
    Perhaps a database link though that might be a security violation.
    Perhaps as a delimited ASCII file loaded using SQL*Loader or an external table.
    Can you provide more information and database version numbers?
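    In the meantime, not SQL*Loader itself, but a rough JDBC sketch of the same daily-load idea, in case it helps. The file layout, staging table and connection details are all hypothetical, and you would need the Oracle JDBC driver on the classpath:
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    // Loads a pipe-delimited extract (emp_id|name|action|effective_date)
    // into a staging table on the target Oracle db, one batch per run.
    public class EmployeeFeedLoader {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@host:1521:SID", "user", "password");
            con.setAutoCommit(false);
            PreparedStatement ps = con.prepareStatement(
                "INSERT INTO emp_feed_stage (emp_id, name, action, eff_date) " +
                "VALUES (?, ?, ?, TO_DATE(?, 'YYYY-MM-DD'))");
            BufferedReader in = new BufferedReader(new FileReader(args[0]));
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split("\\|", -1);
                ps.setString(1, f[0]);
                ps.setString(2, f[1]);
                ps.setString(3, f[2]); // e.g. NEW / CHANGE / TERM
                ps.setString(4, f[3]);
                ps.addBatch();
            }
            in.close();
            ps.executeBatch();
            con.commit();
            con.close();
        }
    }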

  • Best Practice - Form input

    I have a master-detail where I want to display up to 10 detail records. If I currently have 3 detail records, I want to display the three and have 7 blank lines for additional input on the form. Any suggestions on the best way to handle this? Normal functionality would display the 3, and the user would create a new row on the form as they added each row. It looks like JBO:ROW would allow me to create the rows prior to displaying the form, but I would have to delete any null rows before I commit the new rows entered. Or maybe use an array populated from the data source. I would also think this would be a common requirement in an application in need of a "Best Practice". Has anyone used a collection for a multivalued column in JDeveloper?
    Thanks Jim

    I haven't come across such a usage thus far in Java client cases (though web apps do perform this sort of empty-row handling).
    Anyway, here's how you may get this to work. First of all, you will need JDev 9.0.3.1 (the patchset from Oracle Support/Metalink), as that has the new method used in this solution.
    Create a custom listener for the iterator that the table is bound to. You may use the JUPanelBinding.addRowSetListener... methods to get RowSetListener events on the desired iterator.
    In the rangeRefreshed event, check the number of rows in the range, and if that's less than the number of rows in the table, create and insert new rows in the table and set the new rows to STATUS_INITIALIZED by calling setNewRowState(STATUS_INITIALIZED).
    That'd take care of filling in the white space (or gray space) in the table with empty rows. Now if the user does not enter any value in these empty rows, the transaction will treat these new rows as 'temporary' and will not attempt to post them. Only when a user enters values into them, or makes the rows STATUS_NEW again by some call to setAttribute on the Row, will the row get added to the transaction and posted during the commit/post cycle.
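    A rough sketch of that listener, in case it helps. The fixed row count, the re-entrancy guard and the iterator name in the registration comment are my own additions, so treat this as an outline rather than drop-in code:
    import oracle.jbo.DeleteEvent;
    import oracle.jbo.InsertEvent;
    import oracle.jbo.NavigationEvent;
    import oracle.jbo.RangeRefreshEvent;
    import oracle.jbo.Row;
    import oracle.jbo.RowSetIterator;
    import oracle.jbo.RowSetListener;
    import oracle.jbo.ScrollEvent;
    import oracle.jbo.UpdateEvent;
    // Pads the iterator with blank 'temporary' rows so the table
    // always shows a fixed number of lines.
    public class BlankRowPadder implements RowSetListener {
        private static final int TABLE_ROWS = 10; // lines the table shows
        private boolean padding = false;          // guard against re-entrant events
        public void rangeRefreshed(RangeRefreshEvent event) {
            if (padding) return;
            padding = true;
            try {
                RowSetIterator rsi = (RowSetIterator) event.getSource();
                for (int i = rsi.getRowCount(); i < TABLE_ROWS; i++) {
                    Row row = rsi.createRow();
                    rsi.insertRow(row);
                    // Temporary row: not posted unless the user fills it in.
                    row.setNewRowState(Row.STATUS_INITIALIZED);
                }
            } finally {
                padding = false;
            }
        }
        // The remaining RowSetListener methods are no-ops for this sketch.
        public void navigated(NavigationEvent event) {}
        public void rangeScrolled(ScrollEvent event) {}
        public void rowInserted(InsertEvent event) {}
        public void rowDeleted(DeleteEvent event) {}
        public void rowUpdated(UpdateEvent event) {}
    }
    // Registered roughly like this, per the post above ("MyDetailIterator"
    // is a hypothetical iterator name):
    //   panelBinding.addRowSetListener("MyDetailIterator", new BlankRowPadder());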

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be found in a data warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business user, IT and even legal. The deployment documents always include recovery steps so that if something goes wrong, or the deployment can't proceed, there is a documented procedure for how to restore the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree that for a simple scenario of 5 new tables and a small amount of data it may seem like overkill.
    But, despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.

  • OBIEE 11g: Best practice for filtering data allowed to a user

    Hi gurus,
    I have a table of the allowed areas for each user.
    I want to show only the fact data associated with these allowed areas.
    For instance, my user scott can see France and Italy data.
    I created a session variable and put it in a filter.
    It works, but only one value (the first one, I think) is taken into account (for instance, with my solution scott will see only France data).
    I need all the possible values.
    I tried the row-wise initialization option of the session variable, but it doesn't work (OBIEE error).
    I've read things on the internet about using STRAGG or VALUELISTOF, but neither worked.
    What would be the best practice to achieve this goal of filtering data by per-user conditions stored in the database?
    Thanks in advance, Emmanuel

    Check this link
    http://oraclebizint.wordpress.com/2008/06/30/oracle-bi-ee-1013332-row-level-security-and-row-wise-intialized-session-variables/

  • Best practice for extracting data to feed external DW

    We are having a healthy debate with our EDW team about extracting data from SAP. They want to go directly against ECC tables using Informatica, and my SAP team is saying this is not a best practice and could potentially be a performance drain. We are recommending going against BW at the ODS level. Does anyone have any recommendations or thoughts on this?

    Hi,
    As you asked for the best practice, here it is in an SAP landscape.
    1. Full or delta load of data from SAP ECC to SAP BI (BW): SAP BI understands the data element structure of SAP ECC, and the delta mechanism is the continuous process of loading data from SAP ECC (the transaction system) to BI (the analytic system).
    2. You can store transaction data in DSOs (at a granular level) and in InfoCubes (at a summarized level) within SAP BI. You can have master data from SAP ECC coming into SAP BI separately.
    3. Within SAP BI, you SHOULD use the OpenHub service to provide SAP BI data to other external systems. You must not connect an external extractor to fetch data from DSOs and InfoCubes to the target system. The OpenHub service is the tool that facilitates data feeds to external systems. You can have Informatica take data from the OpenHubs of SAP BI.
    I hope this explains it to your satisfaction.
    Thanks,
    S

  • Best practice for encrypting data in CRM 2013 (other than the fields it already encrypts)

    I know CRM 2013 can encrypt some values by default, but if I want to store data in custom fields and then encrypt that, what's the best practice? I'm working on a project to do this through a JavaScript action that, when triggered from a form, would reference a web service to decrypt values, and a plugin to encrypt on Update/Create, but I hoped there might be a simpler or more suggested way to do this.
    Thanks.

    At what level are you encrypting? CRM 2013 supports encrypted databases if you're worried about the data at rest.
    In transit, you should be using SSL to encrypt the entire exchange, not just individual data.
    You can use field-level security to hide certain fields from end users of a certain type, if that's the concern. It's even more secure than anything you could do with JS, as the data is never passed over the wire.
    Is there something those don't solve?

  • Best Practice for Master Data Reporting

    Dear SAP-Experts,
    We face a challenge at the moment and are still trying to find the right approach to it:
    The business requirement is to analyze SAP material-related master data with the BEx Analyzer (master data reporting).
    Questions they want to answer here are, for example:
    - How many active Materials/SKUs do we have?
    - Which country/Sales Org has adopted certain Materials?
    - How many Series do we have?
    - How many SKUs belong to a specific season?
    - How many SKUs are in a certain product lifecycle?
    - etc.
    The challenge is that the master data is stored in tables with different keys in R/3.
    The keys in these tables are on various levels (a selection below):
    - Material
    - Material / Sales Org / Distribution Channel
    - Material / Grid Value
    - Material / Grid Value / Sales Org / Distribution Channel
    - Material / Grid Value / Sales Org / Distribution Channel / Season
    - Material / Plant
    - Material / Plant / Category
    - Material / Sales Org / Category
    etc.
    So even though the information is available at different levels of detail, the business requirement is to have one query/report that combines all the information. We are currently struggling a bit to decide what would be the best approach for this requirement. Did anyone face such a requirement before, and what would be the best practice? We already tried to find information online, but it seems master data reporting is not very well documented. Thanks a lot for your valuable contribution to this discussion.
    Best regards
    Lukas


  • Best practice to extract data from Hyperion Enterprise 5.5

    We are looking into extracting high-level data from our Hyperion Enterprise 5.5 and are in the process of researching the best practices for doing that. I am reading the docs for the APIs that I can call from VB6. I am also interested in whether there are Java APIs available out there. Thanks in advance, and Happy Holidays to everyone! Angelito [email protected]

    The easiest is using HAL (Hyperion Application Link). I have used HAL to extract data, organizations, accounts, subs, entities, etc.

  • Best practice for saving data in SQL server

    Hi all
    Hoping for a little help on this question.
    Say I have a list of fields, e.g. (name, address, postal, phone etc.). I create a webform/task to gather some of these fields (name, postal), then I make another webform/task to gather some other fields (address, phone).
    What is the best practice in SQL Server for storing the returned values?
    Is it:
    1. To make a table with all the fields in the list + task id. These fields would be in the correct format (number, date etc.), and all answers to all tasks are inserted into this table.
    2. To make a value table for each field, with the correct type + task id, so that all name values are stored in the "name value table" with the task id. How would I select values from a certain task with this kind of setup?
    3. ??
    Best regards
    Bo

    Hi Atul
    Thanks for your reply. Can you elaborate a bit further on this, since I am still a little confused?
    Let me try to explain my scenario a bit more.
    Say instead that there are 50 fields in a table, each with its own unique ID; maybe an answer table would look like this:
    taskid | field_1 | field_2 | field_3 | field_4 | field_n
    So no matter which fields the user fills out, the answers can be stored in one table.
    The question is: is this a good way to do it, and how do I select from this table using a join?
    As far as I know you can't name columns in a table with just numbers, which would have been great, since the column names could then be the field IDs.
    OR
    Would you have 50 tables, each with a field_id and a value (of the correct type)?
    And could you give me an example of how to bind and select from this kind of structure?
    Also, inserting into 50 tables on a save... is that the right way to go? :)
    Best regards
    Bo

  • Best Practice in Manipulating Data

    JDeveloper 11g 11.1.1.2.0
    This question is more of a design issue and best practice.
    I have a simple set of data composed of this sample:
    Date DataA DataB
    2010/01/02 24 20
    2010/01/03 34 30
    2010/01/04 50 40
    etc...
    The challenging part is that I need to design the UI a little bit inverted, to something like this.
    (Data) 01/02 01/03 01/04
    DataA 24 34 50
    DataB 20 30 40
    So far my thought on manipulating this is to create a map matching that structure in a backing bean and bind the map to an af:iterator.
    I was wondering if there is a better approach to implementing this? I'm also curious whether transforming this into a map is best done in the bean.

    Thanks for the reply, John.
    I have tried using the pivot table, and it was also one of my first options.
    But with the simple example I have above, I was more curious whether a bean would be the best place to do the manipulation.
    I do agree about the pivot table though. I will dig in more on using the pivot table.
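    For what it's worth, a bare-bones sketch of the map idea from the original post. The bean and method names are invented, and the sample values are hard-coded where real code would read them from the bound iterator:
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    // Transposes date-keyed rows (Date, DataA, DataB) so that each
    // measure becomes one row and the dates become the column headers,
    // ready to bind to an af:iterator.
    public class PivotBean {
        private final List<String> dates = new ArrayList<String>();
        private final Map<String, List<Integer>> rows =
            new LinkedHashMap<String, List<Integer>>();
        public PivotBean() {
            // Sample data from the post; really this would come from the iterator.
            addRow("2010/01/02", 24, 20);
            addRow("2010/01/03", 34, 30);
            addRow("2010/01/04", 50, 40);
        }
        private void addRow(String date, int dataA, int dataB) {
            dates.add(date);
            measure("DataA").add(dataA);
            measure("DataB").add(dataB);
        }
        private List<Integer> measure(String name) {
            List<Integer> values = rows.get(name);
            if (values == null) {
                values = new ArrayList<Integer>();
                rows.put(name, values);
            }
            return values;
        }
        // Bind dates as the header row and iterate rows.entrySet()
        // (one entry per measure) in the af:iterator.
        public List<String> getDates() { return dates; }
        public Map<String, List<Integer>> getRows() { return rows; }
    }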

  • Best Practices for Remote Data Communication?

    Hello all
    I am developing a full-fledged website in Flex 3.4 and the Zend Framework (PHP). I am using the Zend_AMF class in the Zend Framework for communicating data to the remote server.
    I will be communicating with the database in the following ways:
    get data from the server
    send form data to the server
    send requests to the server to get data in response
    Right now I have created just a simple login form which sends two fields, username and password, to a method in the service class on the remote server.
    Here is a little peek into how I did that...
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
      <mx:RemoteObject id="loginService" fault="faultHandler(event)" source="LoginService" destination="dest">
        <mx:method name="doLogin" result="resultHandler(event)" />
      </mx:RemoteObject>
      <mx:Script>
        <![CDATA[
          import mx.rpc.events.ResultEvent;
          import mx.rpc.events.FaultEvent;
          import mx.controls.Alert;
          // Braces were missing around the handler body in the original post.
          private function resultHandler(event:ResultEvent):void {
            Alert.show("Welcome " + txtUsername.text + "!!!");
          }
          // The RemoteObject's fault attribute references this handler.
          private function faultHandler(event:FaultEvent):void {
            Alert.show(event.fault.faultString, "Login failed");
          }
        ]]>
      </mx:Script>
      <!-- Login Panel -->
      <mx:VBox>
        <mx:Box>
          <mx:Label text="LOGIN"/>
        </mx:Box>
        <mx:Form>
          <mx:FormItem>
            <mx:Label text="Username"/>
            <mx:TextInput id="txtUsername"/>
          </mx:FormItem>
          <mx:FormItem>
            <mx:Label text="Password"/>
            <mx:TextInput id="txtPassword" displayAsPassword="true" width="100%"/>
          </mx:FormItem>
          <mx:FormItem>
          <mx:Button label="Login" id="loginButton" click="loginService.doLogin(txtUsername.text, txtPassword.text)"/>
          </mx:FormItem>
        </mx:Form>
      </mx:VBox>
    </mx:Application>
    This works fine. But if I create a complicated form which has many fields, then it would be almost unbearable to send each field as an argument of a function.
    Another method that can be used is HTTPService, which supports XML-like requests and responses.
    I want to ask: what are the best practices in Flex for remote data communication on a large scale? Like maybe using some classes or objects to store data? Can somebody guide me on how to approach data storage?
    Thanks and Regards
    Vikram

    Oh yes, I have studied Cairngorm, though I haven't really applied it. I understand that it helps in separating the data models, presentation and business logic into various layers.
    What I am looking for, though, is something about data models, maybe?
    Thanks and Regards
    Vikram
