Best practice ODI: PL/SQL or Interface object?

Hello,
My ODI consultant has developed an interface to load a flat file into Hyperion Planning:
* First step: load the flat file into staging — done with an "Interface" object.
* Second step: transform the staging table (1, 2, 3 ==> JAN, FEB, MAR; transform "-" into ND_Customer ... very easy transformations!) — done through a PL/SQL procedure. The result is loaded into FACT_TABLE.
* Third step: load FACT_TABLE into Essbase — done with an "Interface" object.
During design we didn't discuss the technology, but after the build I'm very surprised by the second step. There is no justification for doing it in PL/SQL. My consultant told me: "I'd rather use PL/SQL." From my point of view, the ODI best practice is to use an "Interface" (more flexible, you can change the topology without impacting the interface, etc.).
What is your point of view? Should I raise an issue and expect my consultant to rewrite it with an "Interface" object?
Rgds

Thx SH. The complexity (use of two intermediate tables: STAGING and FACT) is due to our requirement to archive the original data for one year (in STAGING) and to provide an audit trail from Essbase back to the original data (before transformation). From Essbase we can go back to the FACT table (same member names), then back to STAGING using a unique ID that links the tables.
From my point of view an ODI Interface is the simpler way to maintain the mapping, compared to PL/SQL, but I would like more feedback from other developers to be sure of my feeling (I've done only two Hyperion Planning + ODI projects before the current one).
The complexity of the interfaces is low to medium: simple filters on one or two dimensions, DECODE mappings on Month, GROUP BY on similar records, and, for a few interfaces, more complex rules with IF statements.
Thx in advance

Similar Messages

  • What is the BEST practice - use BO or Java Object in process as webservice

    Hi All,
    I have my BP published as a web service. I have defined my process input & output as BOs. My BP talks to the DB through a DAO layer (written in Java) which has Java objects, so I have BOs as well as Java objects. Since I am collecting user input in a BO, I have to assign the individual values contained in the BO to the Java object's fields.
    I want to remove this extra step and use either the BO or the Java object. I want to know what the best practice is: use a BO or a Java object as process input? If it is a BO, how can I reuse BOs in Java?
    Thanks in advance.
    Thanks,
    Sujata P. Galinde

    Hi Mark,
    Thanks for your response. I also wanted to use a Java object only. When I use a Java object as the process input argument it is fine, but when I try to create the process web service I get a compilation error: "data type not supported".
    To get rid of this error I tried inheritance (a BO inheriting from the Java class), but while invoking the process as a web service it does not ask for the fields inherited from the Java class.
    Then I created a Business Object with a field of the Java class type. This also is not working: while sending the request it gives an error that the field types for the fields from the Java class were not found.
    Conclusion: I am not able to use a Java object as the input argument of a process exposed as a web service.
    What is the best and feasible way to accomplish the task — a process using a DAO in Java and exposed as a web service?
    Thanks & Regards,
    Sujata

  • Best Practice question - null or empty object?

    Given a collection of objects where each object in the collection is an aggregation, is it better to leave references in the object as null or to instantiate an empty object? Now I'll clarify this a bit more.....
    I have an object, MyCollection, that extends Collection and implements Serializable(work requirement). MyCollection is sent as a return from an EJB search method. The search method looks up data in a database and creates MyItem objects for each row in the database. If there are 10 rows, MyCollection would contain 10 MyItem objects (references, of course).
    MyItem has three attributes:
    public class MyItem implements Serializable {
        String name;
        String description;
        MyItemDetail detail;
    }
    When creating MyItem, let's say that this item didn't have any details, so there is no reason to create a MyItemDetail. Is it better to leave detail as a null reference, or should a MyItemDetail object be created? I know this sounds like a specific app requirement, but I'm looking for a best practice — what most people do in this case. There are reasons for both approaches. Obviously, a bunch of empty objects going over RMI is a strain on resources, whereas a bunch of null references is not. But on the receiving end you have to account for the MyItemDetail reference possibly being null — is this a hassle or not?
    I looked for this at [url http://www.javapractices.com]Java Practices but found nothing.

    > I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case.
    It depends, but in general I use null.
    > Stupid.
    Thanks for that insightful comment.
    > I do a lot of database work though. And for that null means something specific.
    Sure, return null if you have a context where null means something - for example, that you got no result at all. But as I said before, it's best to keep the nulls at the perimeter of your design. Don't let nulls slip through.
    As I said, I do a lot of database work, and there null does mean something specific. Thus (in conclusion), in "general" I use null most of the time.
    Exactly what part of that didn't you follow?
    And exactly what sort of value do you use for a Date when it is undefined? What non-null value do you use such that your users do not have to write exactly the same code that they would to check for null anyway?
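The trade-off being argued above can be sketched in plain Java. This is only an illustration — the notesOrDefault() helper and the EMPTY constant are hypothetical, not from the original poster's code:

```java
import java.io.Serializable;

class MyItemDetail implements Serializable {
    // Shared Null Object: one immutable empty instance instead of null.
    static final MyItemDetail EMPTY = new MyItemDetail("");
    private final String notes;
    MyItemDetail(String notes) { this.notes = notes; }
    String getNotes() { return notes; }
}

class MyItem implements Serializable {
    String name;
    String description;
    MyItemDetail detail;   // may be null when the row has no detail

    // Option 1: leave detail null and make every consumer check it.
    String notesOrDefault() {
        return (detail == null) ? "(no detail)" : detail.getNotes();
    }
}

public class NullVsEmptyDemo {
    public static void main(String[] args) {
        MyItem item = new MyItem();
        // Option 1: null reference - nothing extra travels over RMI,
        // but the receiving end must branch on null.
        System.out.println(item.notesOrDefault());   // prints "(no detail)"

        // Option 2: Null Object - callers never branch; because EMPTY is one
        // shared immutable instance, the per-item cost in memory stays small.
        item.detail = MyItemDetail.EMPTY;
        System.out.println(item.detail.getNotes().isEmpty());   // prints "true"
    }
}
```

Note that the shared-instance trick only addresses the local-memory concern; over RMI each serialized MyItem would still carry its own copy of EMPTY unless readResolve() is used to canonicalize it on deserialization.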

  • Best practice for number of result objects in webi

    Hello all,
    I am just wondering if SAP has any recommendation or best practice document regarding the number of fields in the Result Objects area of a webi report. We are currently running on XI 3.1 SP3. One of the end users is running a webi report with close to 20 objects/dimensions and 2 measures in the result objects. The report runs for 45-60 minutes and sometimes times out. The cube which stores the data has around 250K records, and the report returns pretty much all of them.
    Any recommendations / best practices?
    On a similar issue: our production system is around 250 GB; what would the memory on your server typically be? Currently we have 8 GB of memory on the SAP instance server.
    Thanks in advance.

    Hi,
    You mention cubes, so I suspect BW or MSAS. Yes, OLAP data access (ODA) to OLAP datasets is a struggle for Web Intelligence, which is best at consuming relational rowsets.
    Inefficient MDX queries can easily be generated by the webi tool, primarily due to substandard (or excessive) query and document design. Mandatory filters and focused navigation (i.e. targeted BI questions) are the best route to success.
    Here's an interesting article about "when is a webi doc too big": https://weblogs.sdn.sap.com/pub/wlg/18706
    Here's a best practice doc about webi report design and tuning on top of BW MDX: https://service.sap.com/~sapidb/011000358700000750762010E
    Optimization of the cube itself, including aggregates and cache warming, is important — but especially the use of Suppress Unassigned Nodes in the BW hierarchy, and "query stripping" in the webi document.
    Finally, the patch level of the BW (BW-BEX-OT-MDX) component is critical: anything lower than 7.01 SP09 is trouble (memory management, MDX optimization, functional correctness).
    Regards,
    H

  • Is there any best practice or standard for database object naming ?

    Hi
    Thank you for reading my post
    Is there any standard or best practice for database object naming?
    For example, how should we name the columns of a table? Should it be TOTAL_VOTE or TOTALVOTE? And many other items.
    Thanks

    > what does oracle suggest as a naming schema for tables, fields, views, indexes, tablespaces, ...
    If you look at the data dictionary you will see that not even Oracle keeps rigidly to any specific standard, although there are tendencies :)
    "The nice thing about standards is that there are so many of them to choose from."      
    -- Andrew Tanenbaum
    Cheers, APC

  • SAP Best Practice - working parallely on same object

    Experts,
    We are doing a roll-out on an SAP box for a particular geography; the box is already live for other markets. There are two teams: one supporting the existing system, and our team doing the roll-out.
    There are common includes that can be accessed by both teams (e.g. MV45AFZZ, RV50AFZZ, RV60AFZZ).
    Could you please suggest the best practice, in which both teams can work on the same include simultaneously (like defining Z-includes in the standard include and working on them) and avoid any conflicts/locks.
    Rgds,
    Birendra

    Birendra Chatterjee wrote:
    > Could you please suggest the best practice, in which both the teams can work on the same include simultaneously (like defining Z-includes in the standard include and working on them) and avoid any conflicts/ locks.
    Not possible within the same System (sy-sysid) and the same Client (sy-mandt) - not that it makes any sense to do it anyway!
    Cheers,
    Sougata.

  • Connection/Connection Pooling - Best Practices

    Hi everyone,
    I'm doing my first JDBC application, and I have some questions about the right way to do things. We've got a series of business objects with lots of database abstraction, and the situation comes up where we make a lot of calls to the database to populate our objects. The high number of calls occurs because of the level of abstraction we need: when we get a Person object, we don't do a join on the address table and the phone table, but instead make separate calls to those tables.
    Aside from the fact that this may not be the best way to do things, what is the best way to manage the connections? It's pretty costly time-wise to create a bunch of new connections, so I was just using one connection and passing it through our database call objects: I'd create a connection to the DB, get my Person information, pass that connection on to read from the Address table, then again to the Phone table. I know this can't be good, but it's a lot faster than creating a new connection every time. I also don't know whether reusing the connection for different things is messing up the cursor, or causing the application to hang until the connection is free again.
    I've read some stuff about connection pooling with JDBC 2.0, but the need for the JINI calls is confusing to me.
    Can someone take a few minutes to describe the right way to get this to work with Java? I'm using the MSSQL JDBC driver available on Microsoft's site, but I didn't notice which version of JDBC it supports. It's a Type 4 driver, but I don't know what that means either.
    Thanks in advance,
    Jim

    They're not JINI calls, they're JNDI calls - Java Naming and Directory Interface. They're just doing a lookup to get the data source from the connection pool.
    When you see it done that way, it's usually a container like Tomcat or WebLogic that's handling the connection pool for you. Are you using either of those, or were you going to try to write your own pooling mechanism?
    Type 4 driver means it's 100% pure Java, no native code. You can read all the different types at:
    http://java.sun.com/products/jdbc/driverdesc.html
    There's another driver at SourceForge jTDS for M$ SQL Server that's pretty good. I've used it with some success, switching away from the M$ implementation:
    http://jtds.sourceforge.net/
    Good luck. - MOD
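Since the reply asks whether you were going to write your own pooling mechanism, here is a toy sketch of the borrow/return cycle that a real connection pool automates. It is purely illustrative: a generic type stands in for Connection, and in practice you would let the container hand you a pooled javax.sql.DataSource via a JNDI lookup rather than rolling your own.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// A toy object pool showing the borrow/return cycle that JDBC connection
// pools automate. Real code should use the container's DataSource instead.
public class SimplePool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;
    private int created = 0;

    public SimplePool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Hand out an idle object if one exists; otherwise create a new one.
    public synchronized T borrow() {
        if (idle.isEmpty()) {
            created++;
            return factory.get();
        }
        return idle.pop();
    }

    // Return an object to the pool so the next borrow() can reuse it.
    public synchronized void giveBack(T obj) {
        idle.push(obj);
    }

    // How many objects the factory actually had to create.
    public synchronized int createdCount() {
        return created;
    }
}
```

The discipline is the same one a container-managed pool gives you for free: ds.getConnection() borrows from the pool, and con.close() (in a finally block) returns the connection instead of destroying it.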

  • Best practice for saving and recalling objects from disk?

    I've been using the OOP features of LabVIEW for various projects lately and one thing that I struggle with is a clean method to save and recall objects.
    Most of my design schemes have consisted of a commanding object which holds a collection of worker objects. It's a pretty simple model, but it seems to work for some design problems. The commander and my interface talk to each other, and the commander sends orders to his minions in order to get things done. For example, one parent class might be called "Data Device Collection", and it has a property that is an array of "Data Device" objects.
    The Data Device object is a parent class, and its children consist of various data devices such as "DAQmx Device", "O-Scope Device", "RS-232 Device", etc.
    When it comes to saving and loading data, the commanding class's "Save" or "Load" routine is called, and at that time all of the minions' settings are saved or recalled from disk.
    My save routine is more or less straightforward, although it still requires an overriding "Save" and "Load" VI. Here is an example:
    It isn't too bad, in that it is pretty straightforward and simple, and there would also be no changes to this if the data structure of the class changed. It can also save more generalized settings from its parent class, which is a good feature. What I don't like is that it looks essentially the same for each child class, but I'm at a loss for an effective way to move the handling of the save routine into the parent class.
    The load routine is more problematic for me. Here is an example:
    Again, it would be awesome to move this into the parent class. But my biggest complaint here is that I can't maintain my dynamic dispatch input/output requirements, because the object that I load is strictly typed. Instead I have to rely on reading the information from the loaded object and then writing that information to the object that exists on the dynamic dispatch wire. I also dislike that, unlike my save routine, I will need to modify this VI if the data structure of my object changes.
    Anyway, any input and insight would be great. I'm really tired of writing these same VIs over and over again, and am after a better way to handle this in the parent class: keep the code generalized but still maintain the ability to bring back the saved parameters of each of the child classes.
    Thanks for your time.

    I'm with Ben. Don't rely on the current ability to serialize an object. Create a save method and implement some form of data persistence there. If you modify your class you might be disappointed when you cannot load objects you previously saved. It mostly works but as soon as you reset the version information in the class, you can no longer load the old objects. This is fine if you know how to avoid resetting the history. One thing that will do this is if you move the class into or out of a library. It becomes a new class with version 1.0.0 and it no longer recognizes the old objects.
    [Edit:  I see that you are just writing to a binary file. I'm not sure you can load older objects anyway using that method but I have never tried it.]
    This will not help you right now but there are plans for a nice robust API for saving objects.
    =====================
    LabVIEW 2012

  • Best practice for core data managed objects

    Hello
    I'd like to know if there is a document available listing good practices for managing Core Data managed objects.
    For example, should I keep those objects in memory in a singleton class, or save them to the DB and load them when needed? I am trying to figure out how to manage annotation views representing managed objects when using MapKit.
    Thanks

    Seen this?
    Using Managed Objects

  • Best practice for method calling on objects within a collection.

    Hi guys
    As you may be aware from my other thread here, I'm designing a card game in Java. I was hoping for some advice on the best practice for how methods should be called on a custom object contained within a custom collection.
    I have an instance variable for the Deck class as follows:
    List<Card> deck
    When creating an instance of the class I use:
    deck = new ArrayList<Card>();
    So I have a Deck which only holds Card objects. My question is: for the Card methods, should I call them on the Card objects after 'getting' the Cards from the Deck, or should I write methods within the Deck class which handle this method calling? A code explanation follows. I want to retrieve the suit value of a card within the deck. Is it best to do it this way:
    standardDeck.getCardAt(50).getSuit();
    // getCardAt is a method within the Deck class; getSuit() is a method within the Card class
    or this way:
    standardDeck.getSuitForCardAt(50);
    // getSuitForCardAt() is a method within the Deck class; it calls getSuit() within its method body
    (where standardDeck was created with: Deck standardDeck = new Deck();)
    Cheers for any help guys.
    Edited by: Faz_86 on Jul 10, 2010 9:53 AM

    Hey Saish
    Thanks for the response.
    My Card class does indeed override hashCode(), equals() and toString().
    The reason I am asking a card from the deck for its suit is simply the rules of the game being played. The game I made is a 'card shredding' game where a player attempts to remove as many cards from their hand as possible during each turn. The first to remove all their cards is the winner.
    When the game starts, two decks are created: a standard 52-card deck and an empty deck. Then 8 cards are dealt to each player and one card is dealt into the empty deck. The suit and value of the card on the empty deck, called the 'shredding deck', dictate which moves are valid during each turn: the played card must match the suit or the value of the current card on the 'shredding deck'.
    For example:
    Card on the empty deck = 8 of Spades
    The only cards from a player's hand which can be removed are any Spade or an Eight of any suit.
    Going back to Deck.getSuitOfCardAtIndex(int index): this method is needed because both the AI player and the human player need the ability to look at the cards which have been added to the 'shredding deck'. Again this is because of the rules of the game, so I need a method to look at the suit and value of any card in the 'shredding deck'.
    Taking all this into account, so far I have the following in my Deck class. Please comment on my design so far. As you can see, I've tried to follow the Law of Demeter by creating many little wrapper methods. I understand why getters and setters are considered bad, but I cannot come up with a design that achieves what I need under the rules of the game without using getters — any tips on this would be great.
         public Card dealCard() {
              Card cardToDeal = deck.remove(0);
              return cardToDeal;
         }
         public void addCard(Card usedCard) { // This method is used to add 'used' cards to the deck.
              deck.add(usedCard);
         }
         public Card getFaceCard() { // Returns the current face-up playing card.
              Card faceCard = deck.get(deck.size() - 1);
              return faceCard;
         }
         public int getFaceCardValue() {
              int faceCardValue = deck.get(deck.size() - 1).getValue();
              return faceCardValue;
         }
         public int getFaceCardSuit() {
              int faceCardSuit = deck.get(deck.size() - 1).getSuit();
              return faceCardSuit;
         }
         public String getFaceCardName() {
              String faceCardName = deck.get(deck.size() - 1).toString();
              return faceCardName;
         }
         public Card getCardAt(int position) { // Returns the card at the given position.
              Card card = deck.get(position);
              return card;
         }
         public int getFaceCardValueAt(int position) {
              int cardValue = deck.get(position).getValue();
              return cardValue;
         }
         public int getFaceCardSuitAt(int position) {
              int cardSuit = deck.get(position).getSuit();
              return cardSuit;
         }
         public String getFaceCardNameAt(int position) {
              String cardName = deck.get(position).toString();
              return cardName;
         }
         public int getDeckSize() { // When recycling cards, the size of the deck determines the best time to add more cards.
              return deck.size();
         }
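For context, a minimal Card class consistent with the methods the Deck above calls (getSuit(), getValue(), toString(), plus the equals()/hashCode() overrides mentioned earlier in the thread) might look like the following. This is a sketch — the poster's actual Card class isn't shown, and the suit/value encodings are assumptions:

```java
import java.util.Objects;

public class Card {
    private final int suit;   // assumed encoding: e.g. 0-3 for Clubs..Spades
    private final int value;  // assumed encoding: e.g. 1-13 for Ace..King

    public Card(int suit, int value) {
        this.suit = suit;
        this.value = value;
    }

    public int getSuit() { return suit; }
    public int getValue() { return value; }

    // Two cards are equal when both suit and value match - which is what
    // the 'shredding deck' matching rules compare.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Card)) return false;
        Card other = (Card) o;
        return suit == other.suit && value == other.value;
    }

    @Override
    public int hashCode() { return Objects.hash(suit, value); }

    @Override
    public String toString() { return "Card(suit=" + suit + ", value=" + value + ")"; }
}
```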

  • ACL Best Practice - On the Internet interface

    I have a question relating to ACL's on a routers 'Internet' facing interface.
    Further to reading several whitepapers on the topic, a recommended ACL would typically contain the following statements.
    In addition, the Cisco SDM automatically generates a similar externally facing ACL:
    ip access-list extended INBOUND
    permit icmp any any echo
    permit icmp any any echo-reply
    permit icmp any any unreachable
    deny ip 10.0.0.0 0.255.255.255 any
    deny ip 172.16.0.0 0.15.255.255 any
    deny ip 192.168.0.0 0.0.255.255 any
    deny ip 127.0.0.0 0.255.255.255 any
    deny ip host 0.0.0.0 any
    deny ip any any
    My question is thus...
    What is the point of lines 4-8 when the last line blocks them anyway?
    I appreciate that when we view the ACL we can see the number of matches per explicit ACL entry, but in terms of blocking functionality, I can't see the added benefit.
    Instead, the following ACL would provide the same benefit and be simpler to maintain.
    ip access-list extended INBOUND
    permit icmp any any echo
    permit icmp any any echo-reply
    permit icmp any any unreachable
    deny ip any any
    Am I missing something obvious?
    Thanks in advance for assistance,
    Regards.

    thanks Jon for your response.
    With regard to your first suggestion relating to a possible typo, my intention was not "permit ip any any".
    My main point is that there are several example configurations posted on the Internet which at the top of the ACL explicitly deny specific types of traffic and then have a blanket 'deny all' at the end. Here's another example someone else has posted:
    http://www.velocityreviews.com/forums/t34618-cisco-837-wan-interface-accesslist.html
    With regard to your second suggestion, you're right, I should have included a command like:
    permit tcp any any established log
    I appreciate this ACL is not stateful and that I should use either the firewall feature set or a dedicated firewall appliance.
    My question primarily is related to my first point. i.e. what is the point of :
    deny ip 10.0.0.0 0.255.255.255 any
    deny ip 172.16.0.0 0.15.255.255 any
    deny ip 192.168.0.0 0.0.255.255 any
    deny ip 127.0.0.0 0.255.255.255 any
    deny ip host 0.0.0.0 any
    when we have the following statement at the end:
    deny ip any any
    There are many example Internet facing ACLs posted on the net that propose this same example configuration.
    thanks again for your response.
    - peter

  • Best practices for firewall external interface addressing

    Hi all,
    Can anyone explain what is more secure when addressing the outside interface of a firewall in a network diagram?
    1st option:  
                              ISP router:
                                   interface 1 (connected to the internet).
                                   interface 2 to the firewall with public ip address.
                               Firewall:
                                   interface 1 (connected to the router): public ip address
                                   interface 2 (connected to internal network): private ip address (RFC1918)
    2nd option:
                             ISP router:
                                  interface 2 (connected to the internet (ISP)).
                                  interface 1 to the firewall with private ip address (RFC1918).
                             Firewall:
                                 outside interface 2  (connected to the router): private ip address (RFC1918)
                                 inside interface 1 (connected to internal network): private ip address (RFC1918)
    Any response is welcome.

    It's not so much a question of which is more secure as of where you want to do the NAT and how many public IPs you have.
    So if you only have a small block of public IPs and you wanted to use them for NAT on the firewall, then you could use a private link between the ISP router and the firewall.
    Usually though an ISP gives you two blocks, a /30 for the point to point link and then a larger subnet for actual use on the firewall.
    For a single ISP setup doing the NAT on the firewall is usually the way it is done especially if you are using VPNs as if you NAT on the router it can interfere with the VPN.
    If you end up with multiple ISPs then you may need to move some or all of the NAT configuration to the routers although it is not always necessary and you may still do it on the firewall. It depends on a lot of other things such as IP addressing, ISP advertisement of public IPs etc.
    Jon

  • Best practice ?  send Object to request or desired pieces of data?

    Newbie to this style of programming...
    Is the best practice to put the customer object in the session or request object and let the JSPs retrieve the customer object from the session/request (then use whatever getters are needed to display the desired data)?
    Or would it be better to send the customer ID, name, address, etc. as strings in the session/request object and just have the JSP retrieve the strings from the session/request (thus keeping more complicated Java code out of the JSP)?
    Thanks for the help in advance!

    Doesn't this result in more code? If we send the object, we need code to declare and instantiate the object, then use the getters to get the data to display.
    If I just send the necessary data, I just need to declare a string = request.getParameter... or just display the request.getParameter.
    I actually like the concept of sending the object, it seems cleaner and less likely to result in servlet changes down the road, but i want to make sure there is not some other reason NOT to do this.
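A minimal sketch of the two options, with a plain Map standing in for the HttpServletRequest attribute map so it stays self-contained (the Customer class and the attribute names here are hypothetical, invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the poster's customer object.
class Customer {
    private final String id, name;
    Customer(String id, String name) { this.id = id; this.name = name; }
    public String getId() { return id; }
    public String getName() { return name; }
}

public class AttributeDemo {
    public static void main(String[] args) {
        // A Map standing in for HttpServletRequest attributes; in a servlet
        // you would call request.setAttribute(...) / request.getAttribute(...).
        Map<String, Object> request = new HashMap<>();

        // Option 1: one attribute holding the whole object. The JSP casts it
        // back and calls getters; adding a field to the page needs no servlet change.
        request.put("customer", new Customer("42", "Ada"));
        Customer c = (Customer) request.get("customer");
        System.out.println(c.getName());

        // Option 2: individual strings - simpler JSP code, but the servlet
        // must be updated every time the page needs another field.
        request.put("customerId", "42");
        request.put("customerName", "Ada");
    }
}
```

The usual argument for Option 1 is exactly the one raised above: it keeps the servlet stable as the page evolves, at the cost of a cast and getter calls in the JSP.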

  • Web service, best practice

    Hi,
    I would need some oppionions on best practices for a WS interface.
    Let's say I have a system with 5 different states on an entity — call them A, B, C, D and E. It is not possible to change from any state to any other state; there are certain rules.
    Should the knowledge of these transition rules live in the service consumer or in the service itself? What I'm looking for is what kind of operations I should expose:
    setState(State aState)
    or
    changeToStateA()
    changeToStateC()
    And so on... In the first case all knowledge of state transitions must be in the service consumer. In the second case this is not needed, as the operation will take care of it.
    Is there any guidelines on this?
    Thanks,
    Mattias

    Services should be idempotent and stateless.
    That means that transitions and workflow should be the responsibility of the client.
    %
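Either way the rules live somewhere, and they can be captured declaratively. Below is a hedged Java sketch using an enum plus a transition table — the specific A-to-E rules are invented for illustration, since the question doesn't state them:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

public class EntityState {
    public enum State { A, B, C, D, E }

    // Which target states each state may move to. These particular rules
    // are made up for illustration; only the poster knows the real ones.
    private static final Map<State, Set<State>> ALLOWED = new EnumMap<>(State.class);
    static {
        ALLOWED.put(State.A, EnumSet.of(State.B, State.C));
        ALLOWED.put(State.B, EnumSet.of(State.D));
        ALLOWED.put(State.C, EnumSet.of(State.D));
        ALLOWED.put(State.D, EnumSet.of(State.E));
        ALLOWED.put(State.E, EnumSet.noneOf(State.class)); // terminal state
    }

    private State current = State.A;

    public State getState() { return current; }

    // setState(...) style: one generic operation, with the service
    // enforcing the transition rules.
    public void setState(State target) {
        if (!ALLOWED.get(current).contains(target)) {
            throw new IllegalStateException(current + " -> " + target + " not allowed");
        }
        current = target;
    }
}
```

With this shape, setState(State) stays a single generic operation while still rejecting illegal transitions; changeToStateA()-style operations would just be thin wrappers over it.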

  • Best Practice to ref selectedItem in DataGrid

    Hi,
    I have a datagrid and with a double click event:
    doubleClick="viewItem(event);"
    In my viewItem() function I can reference the selectedItem
    using the
    dataGridID.selectedItem.property notation but I thought it
    was best practice to use the event object as follows:
    private function viewItem(event:Event):void {
        Alert.show( event.target.selectedItem.property.toString() );
    }
    However, when I try this I get the following error:
    ReferenceError: Error #1069: Property selectedItem not found on mx.controls.dataGridClasses.DataGridItemRenderer and there is no default value.
    Is it possible to reference a property this way in a dataGrid
    and is it best practice?

    The reason is that the event's target is the renderer, not the DataGrid. I would expect currentTarget to be the DataGrid, so you can use that if you like. Cast it to a DataGrid before you use it:
    private function viewItem(event:Event):void {
        Alert.show( (event.currentTarget as DataGrid).selectedItem.property.toString() );
    }
