Mapping Best Practice Doubt

Dear SDN,
I have a best practice doubt.
For a scenario where one value must be mapped to another, but the conversion is based on certain logic over R/3 data, what is the recommended implementation:
1.  Value Mapping Replication for Mass Data, or
2.  An ABAP XSLT mapping calling an RFC?
Best regards,
Gustavo P.

Hi,
I would suggest you use an ABAP XSLT mapping, or
use the RFC Lookup API, available from SP 14 onwards, to call the RFC from your message mapping itself.
Regards
Bhavesh
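Whichever lookup mechanism is chosen, it usually pays to call the backend only once per distinct source value within a mapping execution. A minimal sketch of that caching pattern, in illustrative Python (`fake_rfc` is a hypothetical stand-in for the real RFC call, which in XI would go through the RFC Lookup API):

```python
def make_cached_lookup(call_rfc):
    """Wrap a backend lookup so each distinct key is fetched only once
    per mapping execution (the RFC round trip is the expensive part)."""
    cache = {}

    def lookup(value):
        if value not in cache:
            cache[value] = call_rfc(value)  # one backend call per distinct key
        return cache[value]

    return lookup


# Hypothetical backend: records calls so the caching effect is visible.
calls = []

def fake_rfc(value):
    calls.append(value)
    return value.upper()  # stand-in for the conversion logic in R/3

lookup = make_cached_lookup(fake_rfc)
results = [lookup(v) for v in ["a", "b", "a", "a", "b"]]
```

With five source values but only two distinct keys, the backend is called twice rather than five times.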

Similar Messages

  • Java WebDynpro context mapping  best practices

    Hi Friends,
The data provided in the context of each view controller and component controller can be maintained in different ways.
1. Map view-controller fields to the component controller only when the data needs to be accessed in both places;
all other fields, which do not need to be accessed in both places, are left unmapped.
Or: what is the advantage of not mapping fields between view controllers and the component controller?
2.
Instead of individual value attributes, a Value Node may be used to group a related set of fields. Is it best practice to group fields into value nodes according to the screen layout?
For example, if a screen has three sub-parts, use three value nodes, each containing its own value attributes. Which approach should be considered best practice?
    Thanks!

<i>1) Advantage of not mapping is performance;</i>
A very weak argument. There is no significant performance loss when mapping is used (I bet you save less than a percent compared to "direct" access).
Put simply: your business data originates in the controller, and you must show it in the view, hence the need for mapping.
A view may also require certain context nodes just to set up and control UI elements. Declare these nodes directly in the view controller and you will need no mapping in that case.
    Valery Silaev
    EPAM Systems
    http://www.NetWeaverTeam.com

  • Mapping creation best practice

What is the best practice when designing OWB mappings?
Is it better to have a smaller number of complex mappings or a larger number of simple mappings, particularly when accessing a remote DB to
extract the data?
A simple mapping would have fewer source tables, while a complex mapping would have
more source tables and more expressions.

    If you're an experienced PL/SQL (or other language) developer then you should adopt similar practices when designing OWB mappings i.e. think reusability, modules, efficiency etc. Generally, a single SQL statement is often more efficient than a PL/SQL procedure therefore in a similar manner a single mapping (that results in a single INSERT or MERGE statement) will be more efficient than several mappings inserting to temp tables etc. However, it's often a balance between ease of understanding, performance and complexity.
    Pluggable mappings are a very useful tool to split complex mappings up, these can be 'wrapped' and tested individually, similar to a unit test before testing the parent mapping. These components can then also be used in multiple mappings. I'd only recommend these from 10.2.0.3 onwards though as previous to that I had a lot of issues with synchronisation etc.
    I tend to have one mapping per target and where possible avoid using a mapping to insert to multiple targets (easier to debug).
    From my experience with OWB 10, the code generated is good and reasonably optimised, the main exception that I've come across is when a dimension has multiple levels, OWB will generate a MERGE for each level which can kill performance.
    Cheers
    Si

  • What oracle best practices in mapping budgeting to be implement at item

Dear consultants,
I really need your valued consultancy here.
What are the Oracle best practices for mapping budgeting, implemented at the item-category level or the item level?
I want to check funds against the encumbrance account at the item level.
Case:
I have three item categories:
one is computer items,
two is printer items,
and the third is food items.
I want to implement my budget at the item-category level.
Example:
I want my purchase budget for items of type printer not to exceed 30,000 USD,
and for items of type food not to exceed 45,000 USD.
How do I map this in Oracle Applications?
The modules implemented at my site are
(GL, AP, AR, INV, PURCHASING, OM).
Please give me the Oracle best practice that handles this case.
Thanks to all of you

    Hi,
It is really difficult to have budgetary control on inventory items in an Average Costing environment, as you can have only one inventory account at the inventory-organization level.
You have to modify your PO / Requisition Account Generator to populate the encumbrance account in the PO / Requisition based upon the item category. Moreover, the "Reverse Encumbrance" flag in your inventory org needs to be unchecked so that the encumbrances are not reversed when the goods are received.
    Gajendra

  • SAP Best Practice: Can't open process maps

    Hi Community,
when I try to open a process map within the SAP Best Practices package I receive an error message (I tried various browsers and low security settings):
Any ideas how to avoid this error message?
    Thanks in advance,
    Frank

    Hey Frank,
    is my assumption correct, that you use either Firefox or Chrome? Please try with IE.
    In case this does not work, please go to:
    https://websmp108.sap-ag.de/~form/handler?_APP=00200682500000002672&_EVENT=DISPLAY&_SCENARIO=&_HIER_KEY=501100035870000006532&_HIER_KEY=601100035870000146943&
    Select the country you are interested in and download.
    Best,
    xiaoma

  • Best Practices for ASAP Inputs - As-Is Business Process Mapping

    I am new to the SAP world and my company is in the early phases of implementation.  I am trying to prepare the "as-is" business process maps for the Project Preparation and Business Blueprint phases and I am looking for some best practices.  I've been told that we don't want them to go too deep but are there best practices and/or examples that give more information on what we should be capturing and the format.
    I have searched the forums, WIKI, ASAP documentation, and other areas but have not found much at this level of detail.  I have reviewed the [SAP BPM Methodology|http://wiki.sdn.sap.com/wiki/display/SAPBPX/BPM+Methodology] but again I am looking for more detail if anyone can direct me to that.
    Thank you in advance for any assistance.
    Kevin

    Hello Kevin,
    You can try to prepare a word document for each of your As-Is processes first before going to As-Is Process Design in a flowchart.
    The word document can have 7 sections -
    The first section can include Name of the Process Owner, Designation, Process Responsibility, User Department(s) involved, Module Name and a Document number for reference.
    The second section can include Process Definition details - Name of the major process, Name of the minor process, Name of the sub process and a Process ID for future reference.
    The third section can be titled as Inputs - this contains details of - Input, Vendor for the input, Type of Input (Data / Activity / Process), Category of Input (Vital / Essential / Desirable) and Mode of Information (Hard / Soft Copy).
    The fourth section can be Process Details. Here you can write the process in detail.
    The fifth section to contain outputs of the process, customer to whom these outputs are sent, type of output (report / approval / plan / request / email / fax), Category of Output (Vital / Essential / Desirable) and Mode of Information (Hard / Soft Copy).
    The sixth section can be Issues / Pain Areas in this process - Issue Description, Remarks, Expectations, Priority (High / Medium / Low)
    The seventh section can be expected reports in future out of this process for internal and external reporting.
Hope this answers your question.

  • Best Practices for Reprocessing or Reexecuting a mapping

    Hi All
    Please,
Can someone tell me the best practice for re-executing a mapping when it fails?
How does Oracle Warehouse Builder manage the re-execution and re-processing of a map from a source system to the staging-area tables?
- Truncate the stage table and start again?
- Populate an identical error table and, after fixing each extraction mistake, reprocess only that table? This avoids a full reload when the number of errors is considerably lower than the number of successful records.
- Does OWB have a log table of error records? Can I process from this table?
- Any valuable links?
(The question applies to mappings between staging-area tables and warehouse tables too.)
I'm working with DB v9.2.0.5 and OWB v10g.
    Thanks in Advance
    LEONARDO QUINTERO RIPPE
    [email protected]
    Technical Consulting
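The second option above, an error table that is reprocessed after the fix, can be sketched as a language-neutral illustration in Python (the rows, the `qty` field, and the validation rule are all made up; in OWB this would be an error table plus a reprocessing mapping):

```python
def load(rows, validate):
    """Split a batch into loaded rows and an error table,
    so only the failures need to be reprocessed after a fix."""
    loaded, errors = [], []
    for row in rows:
        (loaded if validate(row) else errors).append(row)
    return loaded, errors


rows = [{"id": 1, "qty": 5}, {"id": 2, "qty": -1}, {"id": 3, "qty": 2}]
valid = lambda r: r["qty"] >= 0

loaded, errors = load(rows, valid)

# After fixing the extraction problem, reprocess only the error table,
# not the whole batch:
fixed = [{**r, "qty": abs(r["qty"])} for r in errors]
more, still_bad = load(fixed, valid)
```

The point of the pattern is that the second pass touches only the one failed row, not the entire extract.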


  • Mapping Content Best Practice

    Anyone got suggestions on best practice for mapping contents...
    a) create a lot of mappings - almost 1 for each table, dim, etc;
    b) put many items into logically grouped mappings (e.g. staging static data extract, staging fact extract, etc)
    c) somewhere in between - most staging stuff gets chucked together, major dimension/fact refresh has its own mapping, possibly put many related smaller dims (status, etc) in one mapping, etc..
    I assume the considerations are parallel execution, monitoring, maintenance/clarity, simplifying process flows, etc.
    I suppose I'd go for c), but if anyone can suggest things to avoid that'd be good!

    Hi Chewy,
    Mapping composition is not an exact science, as you state yourself.
    The problem also extends to workflows and workflow subprocess division.
    I've made some core rules for myself, that may work for others too:
    1) If I want to be able to reload a target in case of problems, it should be in a separate mapping.
    2) Generally, each target file has its own mapping. If several disconnected flows have the same target, I put them in the same mapping.
3) If I split a single source into multiple targets, I generally use a single mapping for that.
4) If flows are complex (have many operators), I never put two or more together. If your 3600x4800px monitor can't display all of your mapping, you're on the wrong track.
5) If I want to sequence simple flows and don't care about 1) or about which one runs first (a 10gR1 caveat), I put them into the same mapping, e.g. when the source cannot handle concurrent queries well.
    When it comes to workflows, things get more complicated.
    In my current project, most source data moves through three modules: staging, ods (relational), and data mart (star schema). I have two requirements for packaging mappings into workflow subprocesses:
    1) There should be a subprocess per source module that transfers all data from the source to the staging area
    2) Each target (dim or cube) in the data mart should have a silo-type subprocess that handles the data from staging through ods into the data mart. Rule 1) from mappings applies here, as well.
    To do this efficiently, you should have a PL/SQL function that checks eg. if the source loaded successfully into staging before commencing with target loading.
    Regards, Hans Henrik

  • Best Practices to update Cascading Picklist mapping for Account record type

1. Most of the existing picklist value names in the parent and related picklists have been modified in the external application's master list, so the same needs to be updated in CRM On Demand.
2. If we need to update a picklist value, do we need to DISABLE the existing value and CREATE a new one?
3. Is there any best practice to avoid doing the cascading-picklist mapping manually for the Account record type? We have around 500 picklist values to be mapped between the parent and related picklists.
Thanks!

Mahesh, I would recommend disabling the existing values and creating new ones. This means manually remapping the cascading picklists.
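Even when the remapping itself must be keyed in manually, the new mapping table can be derived programmatically from the external master list first. A sketch in illustrative Python (the cascade structure and rename list are hypothetical, not CRMOD data):

```python
def remap_cascade(cascade, renames):
    """Apply value renames from an external master list to a
    parent -> [related values] cascading-picklist mapping."""
    new = {}
    for parent, related in cascade.items():
        new_parent = renames.get(parent, parent)
        new[new_parent] = [renames.get(r, r) for r in related]
    return new


cascade = {"Hardware": ["Laptop", "Desktop"], "Svc": ["Repair"]}
renames = {"Svc": "Services", "Laptop": "Notebook"}  # old name -> new name
result = remap_cascade(cascade, renames)
```

The output is the target mapping to enter into the admin screens, which at least removes the error-prone step of working out the 500 correspondences by hand.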

  • Best practices TopLink Mapping Workbench multi-user + CVS?

    This might be a very important issue, in our decision whether or not to choose TopLink --
    How well is multi-user development and CVS supported when using the TopLink Mapping Workbench? Are there best practices regarding this use case?
    Thanks.

We have no problem with the workbench and CVS. Only a couple of our developers are responsible for the mappings, so we haven't really run into concurrent edits. It's pure XML, so a decent merge tool with XML support should let you resolve conflicts pretty easily.

  • Best practices Struts for tech. proj. leads

    baseBeans engineering won best training by readers of JDJ and published the first book on Struts called FastTrack to Struts.
    Upcoming class is live in NYC, on 5/2 from 7:30 AM to 1:00PM. We will cover db driven web site development, process, validation, tiles, multi row, J2EE security, DAO, development process, SQL tuning, etc.
    We will teach a project tech lead methods that will increase the productivity of his team and review best practices, so that they can benchmark their environment.
    Sign up now for $150, the price will be $450 soon as we get closer to the date (price goes up every few days). The web site to sign up on is baseBeans.net* .
    You will receive a lab/content CD when you sign up.
    Contact us for more details.
    ·     We preach and teach simple.
    ·     We use a very fast DAO DB Layer – with DAO side data cache
    ·     We use JSTL
    ·     We use a list backed Bean w/ DAO helper design for access to any native source and to switch out DAO.
    ·     We use J2EE security, container managed declarative authorization and authentication. (no code, works on any app. server).
    ·     Struts based Content Management System. A Struts menu entry like this:
    <Item name="About_Contacts"      title="About/Contacts"
    toolTip="About Us and Contact Info" page="/do/cmsPg?content=ABOUT" />
passes the parameter "about" to the action, which the DAO populates.
You can peek at the source code at sourceforge.net/projects/basicportal or go to our site baseBeans.net. (16,000 downloads since Oct. 2002)
    Note that the baseBeans.net is using the Content Management System (SQL based) that we train on. (our own dog food)
    Note: We always offer money back on our public classes.
    Vic Cekvenich
    Project Recovery Specialist
    [email protected]
    800-314-3295
<a href="baseBeans.net">Struts Training</a>
    ps:
    to keep on training, details, best practice, etc. sign up to this mail list:
    http://www.basebeans.net:8080/mailman/listinfo/mvc-programmers
    (1,000 + members)


  • BC4J Best Practices

    Hi,
I am looking for a best-practices document from Oracle, or anyone else, on implementing features in BC4J objects. An example would be "Where should a method/feature go?"
I am facing the choice of writing the insert, update and delete methods in the view objects, but it is equally possible and correct to have them in the application module, by obtaining a handle to the view object. Which one is the best practice?
Please add your comments and suggestions to this thread.
    Patrick.

    Hi,
    We use only Stateful release modes for application modules, defined in the action mappings in struts-config.xml exactly the same way as in your example. Stateful mode releases the module instance back to the pool and it can be reused by other sessions as well. However, all the code that uses the app modules and view objects, etc, must be written with the assumption that the module or the view object the code is operating on can be a different instance from the one in the previous request in the same session.
    The concept of BC4J is that this recycling of modules should be transparent for the users of the app modules, but this is not exactly the case. Some things are not passivated in the am's snapshots and are not activated in case of recycling, for example, custom view object properties or entries in the userData map (or at least were not in 9.0.5, I doubt this is changed in 10.1.2.) These are things that you have to manually passivate and activate if you use them to store some information that is relevant to a particular user session.
    All chances are that these strange things that you experience only occur in sessions that use recycled application modules, that is, there was passivation and subsequent activation of vo and am states. I have found it useful as a minimum to test the application with only 1 application module in the pool and at least 2 user sessions, constantly recycling this one am instance. Many of the problems that will surface in a real application usage only when there is a high load can be experienced in this artificial setup.

  • Add fields in transformations in BI 7 (best practice)?

    Hi Experts,
    I have a question regarding transformation of data in BI 7.0.
    Task:
    Add new fields in a second level DSO, based on some manipulation of first level DSO data. In 3.5 we would have used a start routine to manipulate and append the new fields to the structure.
    Possible solutions:
    1) Add the new fields to first level DSO as well (empty)
    - Pro: Simple, easy to understand
    - Con: Disc space consuming, performance degrading when writing to first level DSO
    2) Use routines in the field mapping
    - Pro: Simple
    - Con: Hard to performance optimize (we could of course fill an internal table in the start routine and then read from this to get some performance optimization, but the solution would be more complex).
    3) Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine).
Does anybody know what the best practice is? Or do you have any experience regarding what you see as the best solution?
    Thank you in advance,
    Mikael

    Hi Mikael.
    I like the 3rd option and have used this many many times.  In answer to your question:-
    Update the fields in the End routine
- Pro: Simple, easy to understand, can be performance optimized. - Yes, I have read and tested this and it does work faster. An OSS consulting note exists that discusses the speed of the end routine.
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine). - Yes but by using the result package, the manipulation can be done easily.
    Hope it helps.
    Thanks,
    Pom
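The end-routine pattern from option 3 can be sketched language-neutrally in Python (RESULT_PACKAGE and the first-level DSO are modeled as plain lists of dicts with made-up field names; in ABAP the buffer would typically be a sorted internal table read in the end routine):

```python
def end_routine(result_package, dso1_rows):
    """End-routine pattern: buffer the first-level DSO in a dict once,
    then enrich every row of the result package from that buffer,
    instead of doing one lookup per row."""
    buffer = {r["doc"]: r["extra"] for r in dso1_rows}
    for row in result_package:
        # Enrich the target row; keys missing from DSO 1 get an initial value.
        row["extra"] = buffer.get(row["doc"], "")
    return result_package


# Hypothetical data: "doc" is the key, "extra" is the new derived field.
dso1 = [{"doc": "D1", "extra": "X"}]
package = [{"doc": "D1"}, {"doc": "D2"}]
enriched = end_routine(package, dso1)
```

This also shows the con Mikael mentions: the `doc` key must exist in the result package for the lookup to work at all.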

  • Best Practice to implement row restriction level

    Hi guys,
    We need to implement a security row filter scenario in our reporting system. Following several recommendations already posted in the forum we have created a security table with the following columns
    userName  Object Id
    U1             A
    U2             B
    where our fact table is something like that
    Object Id    Fact A
    A                23
    B                4
Additionally we have created a row restriction on the universe based on the following WHERE clause:
UserName = @Variable('BOUSER')
If the report only contains objects based on the fact table, the restriction is never applied. This makes sense, as the docs specify that row restrictions are only applied if the table is actually invoked in the SQL statement (in the generated SELECT, presumably).
The question is the following: what is the recommended best practice in this situation? Create a dummy column in the security table, map it into the universe, and include that object in the query?
    Thanks
    Edited by: Alfons Gonzalez on Mar 8, 2012 5:33 PM

    Hi,
This solution also seemed the most suitable for us. The problem we have discovered: when the restriction set is not applied for a given user (the advantage of a restriction set being that it is not always applied), the query joins the fact table with the security table without applying any WHERE clause based on @Variable('BOUSER'). This is not a problem if the security table contains a 1:1 relationship between users and secured objects, but when (as in our case) the relationship is 1:n, the query returns additional, incorrect rows.
For the moment we have discarded the use of restriction sets. A dummy column based on the security table may have undesired effects when the condition is not applied.
    I don't know if anyone has found how to workaround this matter.
    Alfons
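The row multiplication Alfons describes can be reproduced with a toy example. This is a sketch with invented data: a plain join to a 1:n security table duplicates fact rows, while a semi-join (an EXISTS subquery in SQL terms) keeps each fact row at most once:

```python
def join_filter(fact, security, user):
    """Naive join: every matching security row duplicates the fact row."""
    return [f for f in fact for s in security
            if s["object_id"] == f["object_id"] and s["user"] == user]


def exists_filter(fact, security, user):
    """EXISTS-style semi-join: each fact row appears at most once."""
    allowed = {s["object_id"] for s in security if s["user"] == user}
    return [f for f in fact if f["object_id"] in allowed]


fact = [{"object_id": "A", "amount": 23}]
# 1:n relationship: user U1 is granted object A twice.
security = [{"user": "U1", "object_id": "A"},
            {"user": "U1", "object_id": "A"}]

dup = join_filter(fact, security, "U1")
ok = exists_filter(fact, security, "U1")
```

In universe terms this suggests expressing the restriction as an EXISTS subquery against the security table rather than joining it into the FROM clause.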

  • Best practices for ARM - please help!!!

    Hi all,
Can you please help with any pointers / links to documents describing best practices for who should create the GRC request in the ARM workflow below, in GRC 10.0?
Create GRC request -> role approver -> risk manager -> security team
The options are: end user / manager / functional super users / security team.
End user and manager are not possible, as we cannot train so many people, and the functional team is refusing since it is a lot of work. Please point me to any best-practices documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken
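The role-catalog filtering Ken describes can be sketched as follows (illustrative Python; the catalog fields and role names are invented, not GRC objects):

```python
def filter_catalog(catalog, business_area):
    """Narrow the role catalog to one business area, so a manager
    picks technical role names from a short, relevant list before
    filling in the GRC Access Request."""
    return sorted(r["technical_name"] for r in catalog
                  if r["area"] == business_area)


catalog = [
    {"area": "Accounts Payable", "technical_name": "Z_AP_CLERK"},
    {"area": "Accounts Payable", "technical_name": "Z_AP_MANAGER"},
    {"area": "Sales", "technical_name": "Z_SD_ORDERS"},
]
ap_roles = filter_catalog(catalog, "Accounts Payable")
```

The output is exactly the "smaller list of AP-related roles" the manager would then search for inside the Access Request.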
