Mappings performance

Hi,
Initially I had two mappings: the first inserts new records into the target table, and the second updates some fields of those new records using expressions, with the updated table acting as both source and target. In order to have only one mapping that inserts and updates the target table, I simply copied these expressions into the first mapping.
When I run the first mapping including the update, I find it is not faster than before (it takes 15 minutes longer).
Could anybody explain this issue?
Thank you.


Similar Messages

  • When will we go for ABAP mapping??

    Hi,
    As we know, there are graphical, XSLT and Java mappings apart from ABAP mapping. I have gone through the weblog below:
    /people/udo.martens/blog/2006/08/23/comparing-performance-of-mapping-programs
    and also the help:
    http://help.sap.com/saphelp_nw04/helpdata/en/12/05731a10264057badc32d3d3957015/frameset.htm
    None of them says ABAP mapping is either faster or more stable compared to the other mappings. Even so, when would we go for ABAP mapping?
    Does it totally depend on the available resources at hand?
    thanks
    kumar

    > The SAP XI/PI mapping is the most efficient as it
    > only loads the parts of the source message that are
    > used to create the target message(s) at runtime.
    >
    > Java and XSLT have to load the whole message into
    > memory to process the message. This can be
    > inefficient and if dealing with large messages can
    > cause issues.
    About your statement:
    If you consider only the field mapping (or UDF) runtime, then you are correct. But if you consider the whole mapping runtime, you obviously also have to "load" the full message in message mapping. What happens is that this is transparent to the developer, since loading and parsing are done by the standard framework. But message mapping also has to load and parse the whole message (and it is done with Java underneath). So I don't think message mapping will have significantly better or worse performance than Java mappings (using normal XML processing methods).
    As for XSLT, the performance problems happen because you have an XSLT processor running on the Java VM. So, if you have heavy load on it, the mapping runtime will consume the resources necessary to run the XSLT processor (which is, by itself, very resource-consuming) and also to handle that heavy input.
    Regards,
    Henrique.

  • New training materials available for OWB 11.2

    Oracle University now offers an instructor-led course on OWB 11.2 entitled Data Integration and ETL with Oracle Warehouse Builder.
    The entire course is 5 days long and is divided into two parts:
    * Part I is 3 days and covers designing and debugging ETL mappings, performing data cleansing, integrating with OBIEE and other basic ETL functionality included in the Oracle Database license.
    * Part II is 2 days and covers metadata management, accessing non-Oracle sources (code templates), right-time data warehousing, and the other features in the Enterprise ETL/ ODI EE license.
    For links to this and other training resources, including the free OBEs, see http://www.oracle.com/technology/products/warehouse/htdocs/OTN_Training.html

    The Linux version of OWB 11gR2 was released with DB 11gR2 for Linux on Sept 1, 2009.
    Typically, OWB conforms to the DB schedule for releasing other platforms. However, upon receiving numerous requests for OWB on Windows, we're evaluating the possibility of releasing a Windows stand-alone client sooner rather than later. At this time, though, there are no official announcements on when to expect that.

  • IPv6 LAN to IPv4 Internet

    Hi Guys,
    We have an 881G router which is connected to the internet through 3G (IPv4 DHCP).
    I have been given a requirement to translate the communication from the IPv6 clients to the IPv4 internet.
    I have configured the DHCP part, but I am stuck translating the IPv6 to the outgoing interface (Dialer - internet) and vice versa.
    Any help would be appreciated.
    Thanks
    Arjun

    Hello Arjun,
    The solution is in the document you linked:
    Port Address Translation or Overload
    Port Address Translation (PAT), also known as Overload, allows a single IPv4 address to be used among multiple sessions by multiplexing on the port number to associate several IPv6 users with a single IPv4 address.
    This is exactly what you need. It is the equivalent of NAT overload for IPv4, where the inside address is an IPv6 address rather than a (non-routable) IPv4 one.
    These are the steps (a rough worked example assembling them follows the summary):
    Configuring PAT for IPv6 to IPv4 Address Mappings
    Perform this task to configure PAT for IPv6-to-IPv4 address mappings. Multiple IPv6 addresses are mapped to a single IPv4 address or to a pool of IPv4 addresses, using an access list, prefix list, or route map to define which packets are to be translated.
    SUMMARY STEPS
    1. enable
    2. configure terminal
    3. ipv6 nat v6v4 source {list access-list-name | route-map map-name} pool name overload
    or
    ipv6 nat v6v4 source {list access-list-name | route-map map-name} interface interface-name overload
    4. ipv6 nat v6v4 pool name start-ipv4 end-ipv4 prefix-length prefix-length
    5. ipv6 nat translation [max-entries number] {timeout | udp-timeout | dns-timeout | tcp-timeout | finrst-timeout | icmp-timeout} {seconds | never}
    6. ipv6 access-list access-list-name
    7. permit protocol {source-ipv6-prefix/prefix-length | any | host source-ipv6-address} [operator [port-number]] {destination-ipv6-prefix/prefix-length | any | host destination-ipv6-address}
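    As a rough, illustrative sketch only (not taken from the thread): the summary steps above might be strung together as below, assuming a made-up access-list name V6-CLIENTS, the documentation prefix 2001:DB8::/64 for the IPv6 LAN clients, and Dialer1 as the outgoing internet interface mentioned in the question. Adjust names, prefixes and timers to your own setup.
    enable
    configure terminal
     ! Steps 6-7: match the IPv6 clients that should be translated
     ipv6 access-list V6-CLIENTS
      permit ipv6 2001:DB8::/64 any
     exit
     ! Step 3 (interface variant): overload all matched traffic onto the Dialer1 address
     ipv6 nat v6v4 source list V6-CLIENTS interface Dialer1 overload
     ! Step 5: age out idle translations (the value is just an example)
     ipv6 nat translation timeout 300
    end
    Note that most NAT-PT configurations also enable ipv6 nat on the inside and outside interfaces and define a NAT-PT prefix; those commands are not part of the summary steps above, so please check the NAT-PT documentation referenced in this thread for that part.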
    If any of these are mysterious, please read about configuring NAT in
    Configuring Network Address Translation: Getting Started
    http://www.cisco.com/en/US/tech/tk648/tk361/technologies_tech_note09186a0080094e77.shtml
    Cheers
    Fabio
    P.S. Please rate useful answers.

  • Mappings Creation Performance Issue:

    Mappings Creation Performance Issue
    I am having a performance problem when linking attributes from a Splitter Transformation (it has approx. 80 input ports and 9 different filter conditions for 9 target tables) to other transformations. It takes approx. 6 minutes to link one attribute from the Splitter Transformation to other transformations; generally it should take less than a second. This is a problem only with the Splitter Transformation; other transformations work very well.
    Any thoughts on this will be very helpful.
    Thanks
    Ragu Mandala

    Ragu,
    Unfortunately this issue was introduced in 9.0.3.36.3 (lower versions did not have this... 'feature'). The issue has been rolled into the 9.2.0.3 version, but there is no backport for 9.0.3.
    Any chance you can migrate to 9.2? or go back to 9.0.3.35?
    Mark.

  • We have many mappings; which one is better performance-wise?

    We have many mappings; which one is better performance-wise?

    Hi,
    Different mapping techniques are available in XI: message mapping, XSLT mapping, Java mapping and ABAP mapping.
    • The Integration Repository includes a graphical mapping editor. It includes built-in functions for value transformations and queue and context handling. There is an interface for writing user-defined functions (Java) as well.
    • XSLT mappings can be imported into the Integration Repository; Java methods can be called from within the XSLT stylesheet. Advantages of this mapping are: open standard, portable, extensible via Java user-defined functions.
    • If the transformation is very complex, it may be easiest to leverage the power of Java for the mapping.
    • ABAP mapping programs can also be written to transform the message structures.
    Message Mapping
    SAP XI provides a graphical mapping tool that generates a Java mapping program to be called at runtime.
    • Graphically define mapping rules between source and target message types.
    • Queue-based model allows for handling of extremely large documents.
    • Drag-and-drop.
    • Generates internal Java code.
    • Built-in and user-defined functions (in Java).
    • Integrated testing tool.
    • N:M mapping is possible.
    JAVA MAPPING:
    Usually Java mapping is preferred when the target structure is relatively complex and the transformation cannot be accomplished by simple graphical mapping.
    For example, consider a simple File-to-IDoc scenario where the source file is a simple XML file, whereas the target is an IDoc with more than one hierarchy level, e.g. FINSTA01. Content conversion in XI can only create a single-level hierarchy, so in this scenario a Java mapping would come in handy.
    See these:
    http://help.sap.com/saphelp_nw04/helpdata/en/e2/e13fcd80fe47768df001a558ed10b6/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10dd67dd-a42b-2a10-2785-91c40ee56c0b
    /people/prasad.ulagappan2/blog/2005/06/29/java-mapping-part-i
    /people/thorsten.nordholmsbirk/blog/2006/08/10/using-jaxp-to-both-parse-and-emit-xml-in-xi-java-mapping-programs
    When to use Java mapping
    1) Java mappings are used when graphical mapping cannot help you.
    Advantages of Java mapping
    1) You can use Java APIs and classes in it.
    2) A file lookup or a DB lookup is possible.
    3) DOM is easier to use, with lots of classes to help you create nodes and elements (a short, generic JAXP sketch follows below).
    Java mapping can be used when you have complex mapping structures.
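    To illustrate point 3 above, here is a minimal, generic JAXP DOM sketch (plain Java, not the XI mapping API; the file and element names are made up for illustration). It reads a source document, builds a new target document and copies one value across:
    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    public class DomMappingSketch {
        public static void main(String[] args) throws Exception {
            // Parse the whole source document into a DOM tree
            // (assumes source.xml exists and contains a <SourceField> element)
            Document source = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(new File("source.xml"));
            // Build the target document and copy one value across
            Document target = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
            Element root = target.createElement("TargetRoot");
            Element item = target.createElement("Item");
            item.setTextContent(source.getElementsByTagName("SourceField").item(0).getTextContent());
            root.appendChild(item);
            target.appendChild(root);
            // Serialize the target tree to a file
            TransformerFactory.newInstance().newTransformer().transform(new DOMSource(target), new StreamResult(new File("target.xml")));
        }
    }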
    ABAP MAPPING:
    ABAP mappings are mapping programs in ABAP Objects that customers can implement using the ABAP Workbench.
    An ABAP mapping comprises an ABAP class that implements the interface IF_MAPPING in the package SAI_MAPPING. The interface has a method EXECUTE.
    Applications can decide themselves in the method EXECUTE how to import and change the source XML document. If you want to use the XSLT processor of SAP Web AS, you can use the ABAP Workbench to develop a stylesheet directly rather than using ABAP mappings.
    In ABAP mappings you have read access to the message header fields. To do this, an object of type IF_MAPPING_PARAM is transferred to the EXECUTE method. The interface has constants for the names of the available parameters and a method GET, which returns the respective value for a parameter name. The constants are the same as in Java mappings, although the constant MAPPING_TRACE does not exist for ABAP mappings. Instead, the trace object is transferred directly using the parameter TRACE of the method IF_MAPPING~EXECUTE.
    For more details refer
    http://help.sap.com/saphelp_nw70/helpdata/EN/ba/e18b1a0fc14f1faf884ae50cece51b/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/5c46ab90-0201-0010-42bd-9d0302591383
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e3ead790-0201-0010-64bb-9e4d67a466b4
    /people/sameer.shadab/blog/2005/09/29/testing-abap-mapping
    ABAP Mapping
    /people/udo.martens/blog/2006/08/23/comparing-performance-of-mapping-programs
    https://websmp101.sap-ag.de/~sapdownload/011000358700003082332004E/HowToABAPMapping.pdf
    /people/ravikumar.allampallam/blog/2005/02/10/different-types-of-mapping-in-xi
    /people/r.eijpe/blog
    ABAP Mapping Vs Java Mapping.
    Re: Message Mapping of type ABAP Class not being shown
    Re: Performance of mappings (JAVA, XSLT, ABAP)
    XSLT Mapping
    XSLT stands for eXtensible Stylesheet Language Transformations. It is an XML-based language for transforming XML documents into other formats (suitable, for example, for a browser to display), on the basis of a set of well-defined rules.
    /people/sap.user72/blog/2005/03/15/using-xslt-mapping-in-a-ccbpm-scenario
    /people/anish.abraham2/blog/2005/12/22/file-to-multiple-idocs-xslt-mapping
    The above mentioned are the mappings present in XI.
    When the mapping is critical and complicated we go for ABAP, Java or XSLT mapping. For simple mappings we go for graphical mapping.
    The selection of the mapping also depends upon the requirement and on our scenario.
    cheers

  • Performance of mappings (JAVA, XSLT, ABAP)

    Hello everybody,
    I would like to know about the performance of mappings using different languages (JAVA, XSLT, ABAP).
    Does anybody know which one will have the best performance?
    Thanks a lot.
    Regards Mario

    Hi Mario,
    I thought I would start off from scratch. Mapping is basically done to convert one form of XML into another form. This can be done using any of the options mentioned below.
    - Graphical mapping
    - XSLT mapping
    - JAVA mapping
    - ABAP mapping
    There is no hard and fast rule for choosing between the mapping techniques, but I will try to put things in the right perspective for you.
    Graphical mapping is used for simple mapping cases, when the logic for your mapping is straightforward and does not involve anything complex.
    Java and XSLT mappings are used when graphical mapping cannot help you.
    When the choice is between Java and XSLT, XSLT is simpler and easier than Java mapping. But it has its drawbacks, one of them being that you cannot use Java APIs and classes in it. There might be cases in your mapping when you have to perform something like a properties-file lookup or a DB lookup; such scenarios are not possible in XSLT, so when you want to use some specific Java APIs you will have to go for Java mapping.
    Java mapping uses two types of parsers: DOM and SAX. DOM is easier to use, with lots of classes to help you create nodes and elements, but DOM is very processor intensive.
    A SAX parser parses your XML element by element, and so is not processor intensive. But it is not exactly easy to develop with either.
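    To make the DOM/SAX contrast concrete, here is a minimal, generic JAXP SAX sketch (plain Java, not the XI mapping API; the file name and the element being counted are made up for illustration). It streams through the document and never builds the whole tree in memory:
    import java.io.File;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;
    public class SaxCountSketch {
        public static void main(String[] args) throws Exception {
            final int[] count = {0};
            // The handler is called back element by element while the parser streams the file
            SAXParserFactory.newInstance().newSAXParser().parse(
                new File("large.xml"),
                new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String localName, String qName, Attributes attrs) {
                        if ("Item".equals(qName)) {
                            count[0]++;
                        }
                    }
                });
            System.out.println("Items: " + count[0]);
        }
    }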
    To know more about each of them, please go through the following links. And if you ask me which is better: it depends basically on the scenario you are implementing and the complexity involved. Anyway, please go through the following links:
    Graphical mapping
    http://help.sap.com/saphelp_nw04/helpdata/en/6d/aadd3e6ecb1f39e10000000a114084/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/43/c4cdfc334824478090739c04c4a249/content.htm
    /people/bhanu.thirumala/blog/2006/02/02/graphical-message-mapping-150-text-preview
    http://www.sapgenie.com/netweaver/xi/mapping1.htm
    /people/alessandro.guarneri/blog/2006/01/26/throwing-smart-exceptions-in-xi-graphical-mapping
    XSLT mapping
    http://help.sap.com/saphelp_nw04/helpdata/en/73/f61eea1741453eb8f794e150067930/content.htm
    http://www.w3.org/TR/xslt20/
    JAVA mapping
    http://help.sap.com/saphelp_nw04/helpdata/en/e2/e13fcd80fe47768df001a558ed10b6/content.htm
    DOM parser API
                     http://java.sun.com/j2se/1.4.2/docs/api/org/w3c/dom/package-frame.html
    ABAP mapping
    /people/r.eijpe/blog
    To know more about the value mapping tools for the SAP Exchange Infrastructure (XI), please go thru the following link:
    http://www.applicon.dk/fileadmin/filer/XI_Tools/ValueMappingTool.pdf
    To get an idea as to what value mapping is, please go thru the following links:
    http://help.sap.com/saphelp_nw04/helpdata/en/13/ba20dd7beb14438bc7b04b5b6ca300/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/f2/dfae3d47afd652e10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/2a/9d2891cc976549a9ad9f81e9b8db25/content.htm
    Most of the links that I have provided also give you the step-by-step procedure for doing the same, and also cover how to implement certain advanced features.
    And please go through this link which clearly explains the 3 types of mappings.
    /people/ravikumar.allampallam/blog/2005/02/10/different-types-of-mapping-in-xi
    Hope this clears your doubt fully.
    Regards,
    Abhy
    PS: AWARD POINTS FOR HELPFUL ANSWERS.

  • Having problems  performing field mappings

    Hi Experts...
    I am trying to map the werks field to another field of the same type.
    However, we are supposed to map only the last two characters of the source werks field.
    The incoming source data for werks can have 2, 3 or 4 characters.
    I have tried various combinations of if / if-without-else / length / text functions,
    but they are not working.
    Can somebody tell me how to go about it using the standard functions available?
    Regards,
    deepak

    Hi Deepak -
    Much easier with a very simple user-defined function (UDF). Just create a simple UDF with one input parameter (call it 'a', which is the default), put in the following and save:
       return a.substring(a.length() - 2, a.length());
    Regards,
    Jin
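    As an editorial aside: here is a plain-Java sketch of the same logic with a guard for unexpectedly short values (the method name and the null handling are illustrative, not part of the PI UDF signature; inside a UDF the method body alone is enough):
    // Returns the last two characters of the plant value; null or values shorter
    // than two characters are returned unchanged (illustrative guard only).
    public static String lastTwoChars(String a) {
        if (a == null || a.length() <= 2) {
            return a;
        }
        return a.substring(a.length() - 2);
    }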

  • HP Express Card Mini Remote Mappings, Issues, Woes, Questions.

    Ok, I am trying to actually get this Mini-Remote to be useful for my DV5-1235DX.
    First, is there an easy way to remap the buttons on the remote control?  I will never use HP Smart DVD.  I do not use HP Smart Menu.  I will not use Media Center.  I prefer my own software over those packages. 
    1). Any way to map those buttons to something else and stop the calls they make starting the existing software?
    2). Is there a full list of which media functions the remaining buttons are mapped to?  I have found some so far provided below.
    Remote Button      Virtual Key                Media Key
    Stop               VK_MEDIA_STOP              MEDIA_STOP
    Rewind                                        MEDIA_FASTFORWARD
    Play/Pause         VK_MEDIA_PLAY_PAUSE        MEDIA_PLAY_PAUSE
    Fast Forward                                  MEDIA_REWIND
    Previous           VK_MEDIA_PREVIOUS_TRACK    MEDIA_PREVIOUSTRACK
    Next               VK_MEDIA_NEXT_TRACK        MEDIA_NEXTTRACK
    Up                                            MEDIA_CHANNEL_UP
    Down                                          MEDIA_CHANNEL_DOWN
    Direction Up       VK_UP
    Direction Down     VK_DOWN
    Direction Left     VK_LEFT
    Direction Right    VK_RIGHT
    OK                 VK_RETURN
    Back               VK_BROWSER_BACK            BROWSER_BACKWARD
    Settings "i"
    Volume Down        VK_VOLUME_DOWN             VOLUME_DOWN
    Mute               VK_VOLUME_MUTE             VOLUME_MUTE
    Volume Up          VK_VOLUME_UP               VOLUME_UP
    As can be seen I am still missing some key buttons and their matched Media Key Function Name. 
    I would appreciate any and all help on finding the missing button functions and remapping the current app start calls.
    Thank you very much in advance for your time,
    Kelly
    P.S. - EDIT: A programmer friend pointed out the IR Software/Driver may be sending virtual keys and not just media key signals.  With that I was able to map a few more to their virtual key and can use those buttons now as well.
    With this new information I can now map all buttons except the little "i" and the 5 buttons across the top (Power, DVD, SmartMenu, Display Switch, MediaCenter).
    P.P.S. - Is there even an application that does anything when pressing the "i" button on the remote?  I've tried almost every application that was installed on the laptop and not one has responded when pressing that button.  The HP User Guides section 'HP Mobile Remote Control (Select Models Only) > Button quick reference' lists this button as Settings and says:
    "Press to display system information. The button may also be used to display settings menus for some multimedia software."  It does not display system information and I have yet to find any software it displays anything in.
    Message Edited by kelendral on 2009-04-25 07:48 PM

    Well, I called HP Support today.
    They verified that the only reaction they could get from that "Settings" button on a similar model (they did not have the exact DV5-1235dx on hand) was to interrupt the screen saver. This is the exact and only behavior I have found with that key.
    They were also unable to provide any technical information (such as remote code, key being emulated, virtual key command, or media key mapping).
    So basically on a similar model device they experienced the exact same thing, the key appears to have no function in any of the software provided by HP.
    It is very disappointing that their technical support was so lacking that they could not describe the functionality of the key, even after being sent up a level and passed over to a third tech.
    Until I figure out the mapping of this key, it appears to be a nice way to remotely interrupt the screensaver.  Then again any of the other keys will do that as well.
    I did find some use for knowing both the VK and MK mappings.  Media Player Classic HomeCinema allows the use of either VK or MK mapping.  It also allows the use of both.  By using both some keys can be made to perform double duty.
    Example:  MPC-HC has the Play function mapped to MK of MEDIA_PLAY_PAUSE.  By mapping the VK of VK_MEDIA_PLAY_PAUSE to one of the Jump Backward commands [eg Jump Backward (medium)] then hitting the Play/Pause key while something is playing will pause playback, and do a short rewind.  It will also do another short rewind and start playing again on the next press.  
    Example 2: MPC-HC has the Mute function mapped to MK VOLUME_MUTE.  By mapping the VK of VK_VOLUME_MUTE to Toggle Subtitles then pressing Mute will mute the sound and turn subtitles on.  Press again to turn subtitles off and turn the sound back on.
    This remote is similar to other HP laptops and so if you happen to know a piece of software which responds to the 'Settings i' button on any model of the HP laptop with any express card style remote running Windows Vista 64-bit please give a shout.
    Thanks again,
    Kelly

  • Using a hashmap for caching -- performance problems

    Hello!
    1) DESCRIPTION OF MY PROBLEM:
    I was planning to speed up computations in my algorithm by using a caching mechanism based on a Java HashMap. But it does not work; in fact the performance decreased instead.
    My task is to compute conditional probabilities P(result | event), given an event (e.g. "rainy day") and a result of a measurement (e.g. "degree of humidity").
    2) SITUATION AS AN EXCERPT OF MY CODE:
    Here is an abstraction of my code:
    ====================================================================================================================================
    int numOfEvents = 343;
    // initialize cache for precomputed probabilities (field initialization, shown as an excerpt)
    HashMap<String,Map<String,Double>> precomputedProbabilities = new HashMap<String,Map<String,Double>>(numOfEvents);
    // Given a combination of an event and a result, test if the conditional probability has already been computed
    if (this.precomputedProbabilities.containsKey(eventID)) {
        if (this.precomputedProbabilities.get(eventID).containsKey(result)) {
            return this.precomputedProbabilities.get(eventID).get(result);
        }
    } else {
        // initialize a new hashmap to maintain the mappings for this event
        Map<String,Double> resultProbs4event = new HashMap<String,Double>();
        this.precomputedProbabilities.put(eventID, resultProbs4event);
    }
    // unless we could use the above short-cut via the cache, we really have to compute the
    // conditional probability for the specific combination of the event and the result:
    // ... make the necessary computations to compute the variable "condProb" ...
    // store in map
    this.precomputedProbabilities.get(eventID).put(result, condProb);
    ====================================================================================================================================
    3) FINAL COMMENTS
    After introducing this cache mechanism I encountered a severe decrease in the performance of my algorithm. In total there are over 94 million combinations for which the conditional probabilities have to be computed. But there is a lot of redundancy in this set of feasible combinations. Basically it can be brought down to just about 260,000 different combinations that have to be captured in the caching structure. Therefore I expected a significant increase in performance.
    What do I do wrong? Or is the overhead of a nested HashMap so severe? The computation of the conditional probabilities only contains basic operations.
    Only for those who are interested in more details
    4) DEEPER CONSIDERATION OF THE PROCEDURE
    Each defined event stores a list of associated results. These result lists include 7 items on average. To actually compute the conditional probability for a combination of an event and a result, I have to run through the result list of this event and perform a Java "equals"-operation for each list item to compute the relative frequency of the result item at hand. So, without the caching, I estimate having to perform on average:
    7 "equals"-operations (to compute the number of occurrences of this result item in a list of 7 items on average)
    plus
    1 double division (to compute a relative frequency)
    for 94 million combinations.
    Considering the computation for one combination (event, result), this means comparing the overhead of the lookup operations in the nested HashMap with the computational cost of performing 7 "equals" operations plus one double division.
    I would have expected the lookups to be less expensive.
    Best regards!
    Edited by: Coding_But_Still_Alive on Sep 10, 2008 7:01 AM

    Hello!
    Thank you very much! I have performed several optimization steps. But caching is still slower than not caching. This may be due to the fact that the eventIDs and results all share long common prefixes. I am not sure how this affects the computation of the hash values.
    // Attention: result and eventID are given as input of the method
    Map<String,Map<String,Double>> precomputedProbs = new HashMap<String,Map<String,Double>>(1200);
    HashMap<String,Double> results2Probs = (HashMap<String,Double>) this.precomputedProbs.get(eventID);
    if (results2Probs != null) {
        Double prob = results2Probs.get(result);
        if (prob != null) {
            return prob;
        }
    } else {
        // so far there are no conditional probs for the annotated results of this event,
        // so initialize a new hashmap to maintain the mappings
        results2Probs = new HashMap<String,Double>(2000);
        this.precomputedProbs.put(eventID, results2Probs);
    }
    // Later, when the conditional probability has to be computed, the initialized map is used
    // to save one "get"-operation on "precomputedProbs": the variable results2Probs still holds
    // a reference to the inner HashMap<String,Double> entry of the HashMap "precomputedProbs"
    results2Probs.put(result, condProb);
    And... because it was asked for, here is the computation of the conditional probability in detail:
    // Attention: result and eventID are given as input of the method
    // the computed conditional probability
    double condProb = -1.0;
    int occurrence_count = 0;
    ArrayList<String> resultsList = (ArrayList<String>) this.eventID2resultsList.get(eventID);
    if (resultsList != null) {
        // listSize is expected to be about 7 on average
        int listSize = resultsList.size();
        // sanity check
        if (listSize > 0) {
            // check if the given result is defined in the list of defined results
            if (this.definedResults.containsKey(result)) {
                // analyze the list for matching results
                for (int i = 0; i < listSize; i++) {
                    if (result.equals(resultsList.get(i))) {
                        occurrence_count++;
                    }
                }
                // store conditional prob. for the specific event/result combination
                if (occurrence_count == 0) {
                    condProb = 0.0;
                } else {
                    condProb = ((double) occurrence_count) / ((double) listSize);
                }
                // the variable results2Probs still holds a reference to the inner
                // HashMap<String,Double> entry of the HashMap "precomputedProbs"
                results2Probs.put(result, condProb);
                return condProb;
            } else {
                // mark that the result is not part of the list of defined results
                return -1.0;
            }
        } else {
            throw new NullPointerException("Unexpected happening. Event " + eventID + " contains no result definitions.");
        }
    } else {
        throw new IllegalArgumentException("unknown event ID: " + eventID);
    }
    I have performed tests on a decreased data input set. I processed only 100,000 result instances instead of about 250K. This means there are 343 * 100K = 34,300K, i.e. roughly 34 million calls of my method per iteration of the algorithm. I performed 20 iterations. With caching it took 8 min 5 sec, without caching only 7 min 5 sec.
    I also tried to profile the lookup-operations for the HashMaps, but they took less than a ms. The same as with the operations for computing the conditional probability. So regarding this, there was no additional insight from comparing the times on ms level.
    Edited by: Coding_But_Still_Alive on Sep 11, 2008 9:22 AM
    Edited by: Coding_But_Still_Alive on Sep 11, 2008 9:24 AM
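    As an editorial aside on the lookup overhead discussed in this thread (a sketch, not code from the posts; class name, capacity and separator are made up): one variant worth benchmarking is to collapse the nested maps into a single HashMap keyed by a composite of eventID and result, so a cache hit costs one hash lookup instead of two and no intermediate map objects are created:
    import java.util.HashMap;
    import java.util.Map;
    public class CondProbCache {
        // One flat map instead of HashMap<String, Map<String, Double>>.
        // The separator character is assumed not to occur in either key string.
        private final Map<String, Double> cache = new HashMap<String, Double>(600000);
        private static String key(String eventID, String result) {
            return eventID + '\u0000' + result;
        }
        public double probability(String eventID, String result) {
            String k = key(eventID, result);
            Double cached = cache.get(k);                 // single lookup on a hit
            if (cached != null) {
                return cached.doubleValue();
            }
            double condProb = compute(eventID, result);   // the expensive relative-frequency computation
            cache.put(k, condProb);
            return condProb;
        }
        // Placeholder for the real computation described in the thread.
        private double compute(String eventID, String result) {
            return 0.0;
        }
    }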

  • Interface Mappings in BPM Collect Pattern

    Hello
    I am new to XI development and currently facing problems while implementing the collect pattern in BPM.
    I am trying to map an IDoc structure to a target legacy format. In BPM I have a block with an infinite loop where I transform the IDocs to the target legacy format. Then I try to transform the list element (a multiline abstract container variable) into a single message. For this I tried to have an interface mapping with occurrence 0..unbounded, but while activating the interface mapping it gives the error:
    Mapping program Message does not match the interface mapping. The number or frequencies of source or target messages for the message mapping are not identical to the number or frequencies of source or target interfaces.
    If I use a single line in the interface mapping, then BPM gives an error when the multiline element is passed to this interface mapping.
    I checked the sample program provided by SAP, but I found that every message interface was using the same message type with occurrence 1, while in the message mapping program it is 0..unbounded.
    My question is: can we have a data type (or message type) with occurrence 1 and a message mapping program using the same message type with occurrence 0..unbounded? I tried to find such an option but couldn't find it. Otherwise, how do I define the interface mapping for transforming the multiline parameter of BPM into a single message?
    Please help..Thanks in advance..
    Regards
    Rajeev Patkie

    Initially I tried to perform a test via the tree view and it worked fine, but there was one message in the source and the same in the target. Later, as suggested by you, I updated the XML source message. The source message looks like:
    ******************Source Message***********************
    <ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
      <ns0:Message1>
        <ns1:MATMAS_to_Stockware_MT xmlns:ns1="http://mccormick.com/ez_dev">
          <MAT_List>
            <ZPITNO>121212</ZPITNO>
            <ZPIDS>Test Message</ZPIDS>
            <ZPPOPN />
            <ZPCNQT />
            <ZPZLOC />
            <ZPZPCB />
            <ZPZCPA />
            <ZPGRWE />
            <ZPSAEL />
            <ZSPLDY />
            <ZPFRAG />
            <ZPZCRO />
            <ZPZOPT />
          </MAT_List>
        </ns1:MATMAS_to_Stockware_MT>
      </ns0:Message1>
    </ns0:Messages>
    <ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
      <ns0:Message2>
        <ns1:MATMAS_to_Stockware_MT xmlns:ns1="http://mccormick.com/ez_dev">
          <MAT_List>
            <ZPITNO>78912</ZPITNO>
            <ZPIDS>Test Message12</ZPIDS>
            <ZPPOPN />
            <ZPCNQT />
            <ZPZLOC />
            <ZPZPCB />
            <ZPZCPA />
            <ZPGRWE />
            <ZPSAEL />
            <ZSPLDY />
            <ZPFRAG />
            <ZPZCRO />
            <ZPZOPT />
          </MAT_List>
        </ns1:MATMAS_to_Stockware_MT>
      </ns0:Message2>
    </ns0:Messages>
    ************************End Source*********************
    If this is the message that the multiline element generates, it is bound to fail, as I guess every XML document has exactly one topmost element, and here there are two. So I changed my source XML to look like:
    ******************Start message******************
    <ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
      <ns0:Message1>
        <ns1:MATMAS_to_Stockware_MT xmlns:ns1="http://mccormick.com/ez_dev">
          <MAT_List>
            <ZPITNO>121212</ZPITNO>
            <ZPIDS>Test Message</ZPIDS>
            <ZPPOPN />
            <ZPCNQT />
            <ZPZLOC />
            <ZPZPCB />
            <ZPZCPA />
            <ZPGRWE />
            <ZPSAEL />
            <ZSPLDY />
            <ZPFRAG />
            <ZPZCRO />
            <ZPZOPT />
          </MAT_List>
        </ns1:MATMAS_to_Stockware_MT>
      </ns0:Message1>
      <ns0:Message2>
        <ns1:MATMAS_to_Stockware_MT xmlns:ns1="http://mccormick.com/ez_dev">
          <MAT_List>
            <ZPITNO>78912</ZPITNO>
            <ZPIDS>Test Message12</ZPIDS>
            <ZPPOPN />
            <ZPCNQT />
            <ZPZLOC />
            <ZPZPCB />
            <ZPZCPA />
            <ZPGRWE />
            <ZPSAEL />
            <ZSPLDY />
            <ZPFRAG />
            <ZPZCRO />
            <ZPZOPT />
          </MAT_List>
        </ns1:MATMAS_to_Stockware_MT>
      </ns0:Message2>
    </ns0:Messages>
    **********************End Message****************
    but this produces only one output message
    ******************Start Message***********************
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
      <ns0:Message1>
        <ns1:MATMAS_to_Stockware_List xmlns:ns1="http://mccormick.com/ez_dev">
          <MAT_List>
            <ZPITNO>121212</ZPITNO>
            <ZPIDS>Test Message</ZPIDS>
            <ZPPOPN />
            <ZPCNQT />
            <ZPZLOC />
            <ZPZPCB />
            <ZPZCPA />
            <ZPGRWE />
            <ZPSAEL />
            <ZSPLDY />
            <ZPFRAG />
            <ZPZCRO />
            <ZPZOPT />
          </MAT_List>
        </ns1:MATMAS_to_Stockware_List>
      </ns0:Message1>
    </ns0:Messages>
    **********************End Message****************
    I am also pasting the schemas used in the message mapping.
    *****************Source Schema************************
    <?xml version="1.0" encoding="UTF-8"?>
    <xsd:schema targetNamespace="http://sap.com/xi/XI/SplitAndMerge" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://sap.com/xi/XI/SplitAndMerge">
      <xsd:import namespace="http://mccormick.com/ez_dev" />
      <xsd:element name="Messages" xmlns:p0="http://mccormick.com/ez_dev">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="Message1" form="qualified">
              <xsd:complexType>
                <xsd:sequence>
                  <xsd:element ref="p0:MATMAS_to_Stockware_MT" minOccurs="0" maxOccurs="unbounded" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
                </xsd:sequence>
              </xsd:complexType>
            </xsd:element>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:schema>
    ********************End Source***********************
    ********************Target Schema******************
    <?xml version="1.0" encoding="UTF-8"?>
    <xsd:schema targetNamespace="http://sap.com/xi/XI/SplitAndMerge" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://sap.com/xi/XI/SplitAndMerge">
      <xsd:import namespace="http://mccormick.com/ez_dev" />
      <xsd:element name="Messages" xmlns:p0="http://mccormick.com/ez_dev">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="Message1" form="qualified">
              <xsd:complexType>
                <xsd:sequence>
                  <xsd:element ref="p0:MATMAS_to_Stockware_List" />
                </xsd:sequence>
              </xsd:complexType>
            </xsd:element>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:schema>
    ****************************End Target*************
    Thanks in advance.
    Regards
    Rajeev

  • Issue in Target Column mappings in ODI

    Hi Guru's,
    Unable to uncheck the Insert and Update checkboxes in the Update section of the target column mappings.
    How can we uncheck the Insert and Update checkboxes for the columns which should not be affected in the target datastore?
    Thanks
    --Madhavi

    Hi
    My requirement is to update only some columns, based on the key.
    How can this be implemented in the ODI interface mappings?
    --Madhavi

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance-tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc are being used ok. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mapping (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that it'll still compile and run.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because we know what will happen: after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post- mapping triggers)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - at an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
    - other ideas around testing?
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables.
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace" Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comment from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, does anyone have existing best practices for tuning or testing, have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a critical path, and then I can visually inspect it for any bottleneck processes. I usually find that there are not more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage. They did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is the performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none and operating mode=set based, and sometimes I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!) (OK, I'll accept MS Project.)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole (stuff like recovery/restart, late-arriving data, and so on).
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Performance question

    Which mapping performs better: a comparison between graphical, ABAP, Java and XSLT mapping.
    Suppose the mapping is simple - which is better? And when the mapping is complex, which is better?

    Hi Srinu,
    Hi Srinu,
    I thought I would start from scratch. Mapping is basically done to convert one form of XML into another form. This can be done using any of the techniques mentioned below.
    - Graphical mapping
    - XSLT mapping
    - JAVA mapping
    - ABAP mapping
    There is no hard and fast rule for using the mapping techniques. But, I will try to put things in the right perspective for you.
    Graphical mapping is used for simple mapping cases, when the logic for your mapping is simple and straightforward and does not involve any complex processing.
    Java and XSLT mapping are used when graphical mapping cannot help you.
    When the choice is between Java and XSLT, XSLT is the simpler and easier of the two. But it has its drawbacks, one of them being that you cannot use Java APIs and classes in it. There might be cases in your mapping where you have to perform something like a properties file lookup or a DB lookup; such scenarios are not possible in XSLT, so when you want to use specific Java APIs you will have to go for Java mapping.
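    To make that concrete, the snippet below is a minimal, hypothetical sketch of the kind of lookup that pushes you from XSLT to Java: resolving a value against a properties source at mapping time. The key and value names are invented for illustration; a real mapping would load the properties from a bundled resource or query a database rather than an inline string.
    ```java
    // Hypothetical sketch: a properties-based lookup inside a Java mapping.
    // Keys and values are invented; a real mapping would load them from a
    // bundled resource or a DB rather than an inline string.
    import java.io.StringReader;
    import java.util.Properties;

    public class LookupDemo {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.load(new StringReader("countryCode.DE=Germany\ncountryCode.FR=France"));

            String sourceValue = "DE"; // value that would come from the source XML
            String mapped = props.getProperty("countryCode." + sourceValue, "Unknown");
            System.out.println(mapped); // prints "Germany"
        }
    }
    ```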
    Java mapping typically uses one of two kinds of parsers: DOM and SAX. DOM is easier to use, with lots of classes to help you create nodes and elements, but it is very processor and memory intensive because it builds the whole document tree in memory.
    A SAX parser processes your XML as a stream of events, one element after the other, and so is much lighter on resources. But it is not exactly easy to develop with either.
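    The following is a minimal sketch of the difference, using plain JAXP rather than the XI mapping API; the element names ("Orders", "Order") are invented for illustration.
    ```java
    // Minimal sketch contrasting DOM and SAX parsing with plain JAXP.
    // Element names are invented; a real XI Java mapping would read from and
    // write to the streams handed to it by the mapping runtime.
    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.parsers.SAXParserFactory;
    import org.w3c.dom.Document;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class DomVsSaxDemo {

        private static final String SAMPLE =
            "<Orders><Order><Id>1</Id></Order><Order><Id>2</Id></Order></Orders>";

        public static void main(String[] args) throws Exception {
            // DOM: the whole document is materialised as a tree in memory -
            // convenient to navigate, but expensive for large payloads.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(input());
            System.out.println("DOM counted "
                    + doc.getElementsByTagName("Order").getLength() + " orders");

            // SAX: the parser pushes events as it reads, so memory use stays
            // flat no matter how large the input document is.
            final int[] count = {0};
            SAXParserFactory.newInstance().newSAXParser().parse(input(),
                new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String localName,
                                             String qName, Attributes attrs) {
                        if ("Order".equals(qName)) {
                            count[0]++;
                        }
                    }
                });
            System.out.println("SAX counted " + count[0] + " orders");
        }

        private static InputStream input() {
            return new ByteArrayInputStream(SAMPLE.getBytes());
        }
    }
    ```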
    To know more about each of them, please go through the following links. If you ask me which is better: it basically depends on the scenario you are implementing and the complexity involved.
    Graphical mapping
    http://help.sap.com/saphelp_nw04/helpdata/en/6d/aadd3e6ecb1f39e10000000a114084/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/43/c4cdfc334824478090739c04c4a249/content.htm
    /people/bhanu.thirumala/blog/2006/02/02/graphical-message-mapping-150-text-preview
    http://www.sapgenie.com/netweaver/xi/mapping1.htm
    /people/alessandro.guarneri/blog/2006/01/26/throwing-smart-exceptions-in-xi-graphical-mapping
    XSLT mapping
    http://help.sap.com/saphelp_nw04/helpdata/en/73/f61eea1741453eb8f794e150067930/content.htm
    http://www.w3.org/TR/xslt20/
    JAVA mapping
    http://help.sap.com/saphelp_nw04/helpdata/en/e2/e13fcd80fe47768df001a558ed10b6/content.htm
    http://java.sun.com/j2se/1.4.2/docs/api/org/w3c/dom/package-frame.html
    ABAP mapping
    /people/r.eijpe/blog
    To know more about the value mapping tools for the SAP Exchange Infrastructure (XI), please go thru the following link:
    http://www.applicon.dk/fileadmin/filer/XI_Tools/ValueMappingTool.pdf
    To get an idea as to what value mapping is, please go thru the following links:
    http://help.sap.com/saphelp_nw04/helpdata/en/13/ba20dd7beb14438bc7b04b5b6ca300/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/f2/dfae3d47afd652e10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/2a/9d2891cc976549a9ad9f81e9b8db25/content.htm
    Most of the links I have provided also walk you through the step-by-step procedure for doing the same, and they cover how to implement certain advanced features.
    And please go through this link which clearly explains the types of mappings.
    /people/ravikumar.allampallam/blog/2005/02/10/different-types-of-mapping-in-xi
    Hope this clears your doubt fully.
    Regards,
    Abhy

  • Can someone please tell me about abap, java and xslt mappings

    Hi,
    Can someone please tell me about ABAP, Java and XSLT mappings?
    Thanks,
    Bernard.

    Hi,
    JAVA mapping
    /people/prasad.ulagappan2/blog/2005/06/29/java-mapping-part-i
    /people/prasad.ulagappan2/blog/2005/06/29/java-mapping-part-ii
    /people/prasad.ulagappan2/blog/2005/06/29/java-mapping-part-iii
    /people/ravikumar.allampallam/blog/2005/06/24/convert-any-flat-file-to-any-idoc-java-mapping
    /people/amol.joshi2/blog/2006/03/10/think-objects-when-creating-java-mappings
    /people/sameer.shadab/blog/2005/09/29/testing-abap-mapping
    Sample code for Java mapping: /pub/wlg/4143 (tutorial on SAX and DOM)
    ABAP mapping
    ABAP mappings run on the ABAP stack and are developed in the ABAP Workbench of the Integration Server.
    You normally do not need to use ABAP mappings; they are mainly preferable for someone with an ABAP programming background. I should say Java functions would suffice for any complex scenario.
    Refer to the step-by-step guides for ABAP mapping:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/5c46ab90-0201-0010-42bd-9d0302591383
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e3ead790-0201-0010-64bb-9e4d67a466b4
    /people/sameer.shadab/blog/2005/09/29/testing-abap-mapping
    ABAP Mapping
    /people/udo.martens/blog/2006/08/23/comparing-performance-of-mapping-programs
    https://websmp101.sap-ag.de/~sapdownload/011000358700003082332004E/HowToABAPMapping.pdf
    /people/ravikumar.allampallam/blog/2005/02/10/different-types-of-mapping-in-xi
    /people/r.eijpe/blog
    ABAP Mapping Vs Java Mapping.
    Re: Message Mapping of type ABAP Class not being shown
    Re: Performance of mappings (JAVA, XSLT, ABAP)
    XSLT Mapping
    XSLT stands for eXtensible Stylesheet Language Transformations. It is an XML-based language for transforming XML documents into other formats (for example, formats suitable for a browser to display), on the basis of a set of well-defined rules.
    /people/sap.user72/blog/2005/03/15/using-xslt-mapping-in-a-ccbpm-scenario
    /people/anish.abraham2/blog/2005/12/22/file-to-multiple-idocs-xslt-mapping
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/01a57f0b-0501-0010-3ca9-d2ea3bb983c1
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/9692eb84-0601-0010-5ca0-923b4fb8674a
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/006aa890-0201-0010-1eb1-afc5cbae3f15
    /people/prasadbabu.nemalikanti3/blog/2006/03/30/xpath-functions-in-xslt-mapping
    https://www.sdn.sap.com/irj/sdn/advancedsearch?cat=sdn_all&query=xslt+mapping&adv=false&sortby=cm_rnd_rankvalue#
    Steps required for developing XSLT Mapping
    • Create a source data type and a target data type.
    • Create Message types for the source and target data types.
    • Create Message Interfaces, including an Inbound Message interface and an Outbound Message interface.
    • XSLT mapping does not require creation of a Message mapping, so don't create any Message mapping.
    • Create an .xsl file which converts the source data type into the target data type.
    • Zip that .xsl file and import it into the Integration Repository under Imported Archives.
    • In Interface Mapping choose the mapping program as XSL and specify this zip archive. (Through the search help you will get the XSL mapping programs that you imported under Imported Archives; select your corresponding XSL program.)
    • Test this mapping program by navigating to the Test tab.
    Looking at the above steps, you can easily see that this mapping is nowhere different from the other mapping programs; the challenge lies in creating the XSLT file itself. If you spend a couple of minutes studying an XPath tutorial, you will be in an ideal position to create an XSL transformation (.xsl extension).
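    To give a feel for what such a transformation boils down to, here is a minimal, hypothetical sketch: a tiny stylesheet held in a Java string and applied with the standard JAXP TransformerFactory. The element names (Person/Name mapped to Employee/FullName) are invented, and in a real XI scenario the stylesheet would of course live in the imported .xsl archive rather than inline in Java code.
    ```java
    // Hypothetical sketch: applying a minimal XSL transformation with plain JAXP.
    // Element names are invented; in XI the stylesheet is the .xsl file you zip
    // and import under Imported Archives, not an inline string.
    import java.io.StringReader;
    import java.io.StringWriter;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class XslDemo {

        private static final String XSL =
            "<xsl:stylesheet version=\"1.0\" "
          + "xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
          + "<xsl:template match=\"/Person\">"
          + "<Employee><FullName><xsl:value-of select=\"Name\"/></FullName></Employee>"
          + "</xsl:template>"
          + "</xsl:stylesheet>";

        private static final String INPUT = "<Person><Name>Jane Doe</Name></Person>";

        public static void main(String[] args) throws Exception {
            // Compile the stylesheet and run the source document through it.
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(XSL)));
            t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");

            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(INPUT)), new StreamResult(out));
            System.out.println(out); // <Employee><FullName>Jane Doe</FullName></Employee>
        }
    }
    ```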
    If you still find it difficult to generate the XSL transformation, you can make use of a tool called “Altova MapForce”, which will create the XSL file for you.
    Steps for creating the XSL file using this tool:
    1. Open Altova MapForce and import the source .xml and .xsd files into it.
    2. Similarly, import the target .xml and .xsd into MapForce.
    3. These two data files should match the source and target data types in the Integration Repository.
    4. Complete the graphical mapping using the extensive list of XSLT functions available there.
    5. Save the mapping file.
    6. Click the XSLT tab. You will see the entire XSLT logic there.
    7. Copy that content and save it as a .xsl file.
    8. Zip the above .xsl file and import it into the IR under Imported Archives.
    Hope this clears your doubts
    Thanks
    Saiyog
