MDS Best Practice Approach - Sample HR Scenario

Thanks for taking the time to read my MDS requirement... I'm just looking for the best way to go about it.
Here is the requirement: 
Every month the CEO releases an Excel list of approved employment positions that can be filled and sends it to HR.  The HR department wants to be able to add positions that the CEO approves and remove positions that the CEO feels are no longer necessary.  The recruiting group wants to track and modify this master list of positions at the CEO's discretion, and assign employees to each position as people are hired or terminated.
The HR data steward must be able to:
-assign employees to positions as they are filled, for org chart reporting
-assign/reassign parent-child relationships for any position, e.g. the Director position manages multiple Manager positions, which in turn manage multiple Register Clerk positions.
I am new to MDS and am not sure how to approach this problem... do I create one entity for 'Positions' and another for 'Employees'?  With that approach I'm thinking I can create Employee as a domain-based attribute of Position, then
create a derived hierarchy for the Position parent/child relationships... I'm just wondering whether this is a good approach.
Are there other things I should be taking into consideration?  Thanks!
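For reference, here is a rough sketch of how such Position members could be loaded through the standard MDS entity-based staging tables (stg.<EntityName>_Leaf). The entity name, the ParentPosition and Employee domain-based attributes, the batch tag and the member codes are illustrative assumptions, not details from the requirement:

-- Hypothetical example: stage two Position members, link them through a
-- self-referencing ParentPosition domain-based attribute, and start the batch.
INSERT INTO stg.Position_Leaf
    (ImportType, ImportStatus_ID, BatchTag, Code, Name, ParentPosition, Employee)
VALUES
    (0, 0, N'HR_APPROVED_LIST', N'POS-001', N'Director of Retail', NULL,       N'EMP-100'),
    (0, 0, N'HR_APPROVED_LIST', N'POS-002', N'Store Manager',      N'POS-001', N'EMP-245');

-- Process the staged batch for the Position entity in the target model version.
EXEC stg.udp_Position_Leaf
    @VersionName = N'VERSION_1',
    @LogFlag     = 1,
    @BatchTag    = N'HR_APPROVED_LIST';

A recursive derived hierarchy built on the ParentPosition attribute would then give the Director > Manager > Register Clerk rollup for org chart reporting.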


Similar Messages

  • Best practice approach for separating Database and SAP servers

    Hi,
    I am looking for a best-practice approach/strategy for setting up a distributed SAP landscape, i.e. separating the database and SAP servers. Please share any strategies you have.
    Thanks very much

    The easiest way I can imagine:
    Install a dialog instance on a new server and make sure it can connect cleanly to the database. Then shut down the CI on the database server, copy the profiles (and adapt them), and start your CI on the new server. If that doesn't work on the first attempt, you can always restart the CI on the database server again.
    Markus

  • Best Practice for a given Scenario

    Hi all,
    Suppose I've a following scenario:
    Show a list of materials on a page, and on the same page show a line with information based on aggregate functions (Count, Average and other calculations).
    Initially I thought of declaring an OutputMaterialsXML parameter that contains the material list and other OutputConsolidationXML parameters that contain the average, count, etc. based on the material list, so that using XPath I can populate all outputs at once.
    If I want to use an XAcuteQuery I need to decide which output it will show, so I would need to create two XAcuteQuery queries, but that would hurt performance because I would need to execute the same transaction twice.
    Does anybody have a good approach for this case?
    Best regards

    If your material list document is not excessively long, it probably wouldn't be too much overhead to add a few extra columns using the CalculatedColumn action block. These extra columns, even though the value would be the same for all rows, could hold one column for each of your aggregate functions.
    Then in your iGrid just set the column width for these additional fields to zero, and on the grid's UpdateEvent you can use JavaScript to copy them from row 1 into your desired HTML elements, etc.

  • With 2008 - What would be the 'best practice' approach for giving a principal access to system views

    I want to set up a job that runs a few select statements against several system management views such as those listed below. It's basically going to gather various metrics about the server, a few different databases and jobs.
    msdb.dbo.sysjobs
    msdb.dbo.sysjobhistory
    sys.dm_db_missing_index_groups
    sys.dm_db_missing_index_group_stats
    sys.dm_db_missing_index_details
    sys.databases
    sys.dm_exec_query_stats
    sys.dm_exec_sql_text
    sys.dm_exec_query_plan
    dbo.sysfiles
    sys.indexes
    sys.objects
    So, there are a number of instance-level permissions that are needed, mainly VIEW SERVER STATE:
    https://msdn.microsoft.com/en-us/library/ms186717.aspx
    Granting these permissions to a single login seems like it introduces a maintenance headache later. What about a server role?
    Correct me if I'm wrong, but the ability to create user-defined server roles is a new feature of 2012 and above.
    Prior to version 2012, I will just have to settle for granting these instance-level permissions to individual logins. There won't be many logins that need this kind of permission, but I'd rather assign the permissions at role level and then add logins to that role.
    Then again, is there much point in creating a separate role if only one, maybe two, logins might need it?
    New for 2012
    http://www.mssqltips.com/sqlservertip/2699/sql-server-user-defined-server-roles/

    As any Active Directory administrator will tell you, you should indeed stick to the rule "users in roles, permissions to roles" (in AD terms, "A-G/DL-P"). Since this has been possible since SQL Server 2012, why not just do that? You lose nothing if you never change that one single user, and in the end you would only expect roles to hold permissions, which saves time when hunting down permission problems.
    i.e.
    USE [master]
    GO
    -- User-defined server role that carries the monitoring permission
    CREATE SERVER ROLE [role_ServerMonitorUsers]
    GO
    GRANT VIEW SERVER STATE TO [role_ServerMonitorUsers]
    GO
    -- Add the login(s) that will run the monitoring job
    ALTER SERVER ROLE [role_ServerMonitorUsers]
    ADD MEMBER [Bob]
    GO
    In security, standardization is just as key as in administration generally. So even if it does not really matter now, it may matter in the long run. :)
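    To illustrate what membership in that role enables, a query of the kind the original job describes might look like the sketch below; it is only an example built on the missing-index DMVs listed in the question, not something from the thread itself:
    -- Illustrative monitoring query: top "missing index" suggestions by estimated impact.
    -- Requires VIEW SERVER STATE, which the role above grants.
    SELECT TOP (10)
           DB_NAME(mid.database_id)  AS database_name,
           mid.statement             AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.user_seeks * migs.avg_total_user_cost * migs.avg_user_impact AS estimated_improvement
    FROM sys.dm_db_missing_index_details     AS mid
    JOIN sys.dm_db_missing_index_groups      AS mig  ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs ON migs.group_handle = mig.index_group_handle
    ORDER BY estimated_improvement DESC;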
    Andreas Wolter (Blog | Twitter)
    MCSM: Microsoft Certified Solutions Master Data Platform, MCM, MVP
    www.SarpedonQualityLab.com | www.SQL-Server-Master-Class.com

  • Best practice: read information from server

    Hi All,
    currently I am wondering about the best-practice approach for reading/writing data between an iPhone app and a web server.
    What is the easiest way to achieve such a scenario? Is it simply to set up an SQL server online and connect to it from Xcode? Which frameworks/protocols would be best practice in Xcode? What would be the best setup for communication between an iPhone app and any server instance on the web?
    My goal is more or less to read data from a server when the application is started, and to write some data back to the server when the user has entered some text details.
    Regards,
    Patrick

    Please post your questions in the appropriate forums.
    This forum is for the specific product Virtual Server 2005.
    For Hyper-V related questions, use the Hyper-V forum:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/home?forum=winserverhyperv
    For server questions please use the server forums:
    http://social.technet.microsoft.com/Forums/windowsserver/en-us/home?category=windowsserver
    Microsoft has a lot of documentation, have you read it yet? Googled?
    Clustering:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/7173caf4-a5aa-4426-a16b-592a6e6714ec/windows-server-2012-hyperv-cluster-step-by-step?forum=winserverhyperv
    http://www.bing.com/search?q=hyper-v+cluster+2012+r2+step+by+step&src=IE-SearchBox&FORM=IE11SR
    Domain upgrades:
    http://technet.microsoft.com/en-us/library/hh994618.aspx

  • Best Practice for unimplemented OTC project transports?

    Hello,
    We implemented and went live with portions of ERP modules COPA, MM, GL(new), and Consolidations 2 years ago as a Phase 1 implementation.  Phase 2, which began directly afterwards, included CRM, OTC, and BW.  The problem that has come up is that due to numerous implementation issues, we have not gone live with Phase 2 and it is still undetermined when/if we will implement at least OTC in ERP.  We have a 3-system (Dev, QA, Prod) ERP landscape and are running into issues due to inconsistencies between QA and Prod.  All the transports related to the Phase 2 project were created in Dev and moved into QA, but due to the status of the project were never moved into Production.  We recently had an issue with moving changes to our operating concern through the landscape, due to the existing inconsistencies of the operating concern configurations between Dev, QA, and Prod which led to us not being able to re-activate the Operating Concern in Prod until we moved in additional transports that were tied to Phase 2.
    My question is this...What would be the best practice/approach to resolving the inconsistencies in our ERP landscape to assure that we have accurate QA testing of our Phase 1 implementation, but also trying not to lose the existing Phase 2 development if we decide to implement OTC in the future?  I'm considering the below options:
    A)  Move all Phase 2 requests
    - Refresh QA (via system copy of Prod)
    - Move all Phase 2 transports that were originally moved into QA into the refreshed system and test existing Phase 1 business processes to determine risk of moving into Prod
    - Move all Phase 2 transports into Prod in order to maintain 3-system consistency
    B)  2 system consistency
    - Refresh QA (via system copy of Prod)
    - Leave all Phase 2 transports in import queue for QA, and maintain Prod/QA consistency only
    - OTC implementation can be realized with moving Phase 2 transports through QA at some point in the future
    C)  "Reset button"
    - Refresh QA (via system copy of Prod)
    - Refresh Dev (via system copy of Prod) - I'm not sure what technical considerations would need to be made around the development system's role as the origin of repository and dictionary objects, and how this can be maintained in a system copy?
    - This would wipe out all Phase 2 development
    I would greatly appreciate anyone's guidance on our options given our current scenario.
    thanks,
    John

    I would suggest going with option A, even though it involves more work. It is the only option that will work effectively in the long run. With option B you will not completely eliminate the problem, and with option C you will lose all your Phase 2 work, which would be a big waste of effort.
    An additional advantage of option A is that whenever your organization decides to go live with the Phase 2 work, minimal regression testing will be required, since most of your work will already have been tested and verified. Regression testing and remediation is significant work whenever a solution is introduced into a working environment.

  • Best practice "changing several related objects via BDT" (Business Data Toolset) / Mehrere verbundene Objekte per BDT ändern

    Hello,
    I want to start a discussion to find a best-practice method for changing several related master data objects via BDT. At the moment we are faced with miscellaneous requirements where we have a master data object that uses the BDT framework for maintenance (in our case an insured object). While changing or creating the insured object, several related objects, e.g. a Business Partner, should also be changed or created. So I am searching for a best-practice approach for implementing such a solution.
    One idea was to call a report via SUBMIT AND RETURN in event DSAVC or DSAVE. Unfortunately this implementation method offers only poor options for error handling, and it is also hard to keep the LUW together.
    Another idea is to call an additional BDT instance in the DCHCK event via FM BDT_INSTANCE_SELECT with the parameters iv_xpush_classic = 'X' and iv_xpop_classic = 'X'. So far we have not got this solution working correctly, because there is always something missing (e.g. global memory is not transferred correctly between the two BDT instances).
    So hopefully you can report on your implementations so that we can find a best-practice approach for such requirements.
    Hello,
    I would like to start a discussion here to find a best-practice approach describing a BDT implementation/extension in which several dependent BDT objects are changed. At the moment we have several requirements in which changes to one BDT object should be passed on to another BDT object; in other words, further objects (for example a business partner) should be changed whenever an object (in our case an insurance contract) is created or changed.
    Our first idea was to call a report via SUBMIT AND RETURN at event DSAVC or DSAVE, which would then carry out the dependent changes. However, there are problems with error handling here, since it has to happen asynchronously, and it is also hard to guarantee the consistency of the LUW.
    Another approach we pursued was to create a new BDT instance at event DCHCK via the function module BDT_INSTANCE_SELECT with the parameters iv_xpush_classic = 'X' and iv_xpop_classic = 'X'. Unfortunately we could not get this solution working in the end, because there were always problems transferring the global memory between the individual BDT instances.
    I hope you can briefly describe your implementations here so that we can find a best-practice approach for this topic.
    Best regards
    Dominik

  • Best practice to Load FX rates to Rate Application in SAP BPC 7.5 NW

    Hi,
    What is the best practice/approach to load FX rates to Rate Application in SAP BPC 7.5 NW? Is it from ECC or BW?
    Thanks,
    Rushi

    I have seen both cases.
    1) Rates come as a flat file from an external system (the treasury department), and both ECC and BPC load them into their respective systems in batch.
    2) ECC pushes the rate info to BW, and the data in turn gets pushed to BPC along with the other scheduled process chains.
    How are rates entering your ECC?
    Shilpa

  • 'Best practice' for avoiding duplicate inserts?

    Just wondering if there's a 'best practice' approach for handling potential duplicate database inserts in CF. At the moment, I query the db first to work out if what I'm about to insert already exists. I figure I could also just send the SQL and catch the error, which would then tell me the data's already in there, but that seemed a bit dodgy to me. Which is the 'proper' way to handle this kind of thing?

    MrBonk wrote:
    > Just wondering if there's a 'best practice' approach for handling potential duplicate database inserts in CF. At the moment, I query the db first to work out if what I'm about to insert already exists. I figure I could also just send the SQL and catch the error, which would then tell me the data's already in there, but that seemed a bit dodgy to me. Which is the 'proper' way to handle this kind of thing?
    I wouldn't consider letting the db handle this as "dodgy". If the majority of your inserts go through OK, then you're saving at least one db interaction per insert, which can add up in high-transaction environments.
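    To make that concrete, here is a sketch of the "let the database handle it" pattern on SQL Server; the table, column and constraint names are invented for illustration, and other databases report duplicate-key errors with different codes:
    -- Assumed table with a unique constraint that defines what counts as a duplicate.
    CREATE TABLE dbo.Subscriber (
        SubscriberId INT IDENTITY PRIMARY KEY,
        Email        VARCHAR(255) NOT NULL,
        CONSTRAINT UQ_Subscriber_Email UNIQUE (Email)
    );
    -- Option 1: one round trip, insert only when the row is not already there.
    INSERT INTO dbo.Subscriber (Email)
    SELECT 'user@example.com'
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Subscriber WHERE Email = 'user@example.com');
    -- Option 2: just insert, and treat a duplicate-key error (2601/2627) as "already exists".
    BEGIN TRY
        INSERT INTO dbo.Subscriber (Email) VALUES ('user@example.com');
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() NOT IN (2601, 2627)
            THROW;  -- re-raise anything that is not a duplicate-key error (THROW needs SQL Server 2012+)
    END CATCH;
    Either way the duplicate check and the insert happen in a single statement or transaction, which also closes the race window that a separate SELECT-then-INSERT leaves open.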

  • Best Practice for serving static files (gif, css, js) from front web server

    I am working on optimizing portal performance by moving static files (gif, css, js) to my front web server (Apache) for a WLP 10 portal application. I ended up moving the whole "framework" folder of the portal WebContent to a file system served by the Apache web server (the one which hosts the WLS plugin pointing to my WLP cluster). I use the following Alias and <LocationMatch> directives for that:
    Alias /portalapp/framework "/somewhere/servedbyapache/docs/framework"
    <Directory "/somewhere/servedbyapache/docs/framework">
        <FilesMatch "\.(jsp|jspx|layout|shell|theme|xml)$">
            Order allow,deny
            Deny from all
        </FilesMatch>
    </Directory>
    <LocationMatch "/portalapp(?!/framework)">
        SetHandler weblogic-handler
        WLCookieName MYPORTAL
    </LocationMatch>
    So now the browser gets all static files from Apache instead of the app server. However, there are several files from the bighorn L&F which are located in the WLP shared lib: skins/bighorn/ window.css, wsrp.css, menu.css, general.css, colors.css; skins/bighorn/borderless/window.css; skeletons/bighorn/js/ util.js, buttons.js; skeleton/bighorn/css/layout.css
    I have to merge these files into the project and physically move them into the Apache-served file system to make the Apache configuration above work.
    However, this approach exposes a bunch of framework resources which I do not intend to change and which should not be changed (custom.css is the only place to make custom changes to the bighorn skin), which is obviously not a very elegant solution. Another approach would be to create a more elaborate expression for LocationMatch (I am not sure that's entirely possible given the location of these shared resources). A more radical move would be to stop using bighorn and create a totally custom L&F (skin, skeleton), which is quite a lot of work (plus, bighorn is working just fine for us).
    I am wondering what the "Best Practice Approach" recommended by Oracle/BEA is, given that I want to serve all static files from my front-end Apache server instead of the WLS app server.
    Thanks,
    Oleg.

    Oleg,
    you might want to have a look at the official WLP performance support pattern (Metalink DocID 761001.1 ) , which contains a section about "Configuring a Fronting Web Server Serving WebLogic Portal 8.1 Static Artifacts ".
    It was written for WLP 8.1, but most of the settings/recommendations should also apply to WLP 10.
    --Stefan

  • Best Practices for Connecting to WebHelp via an application?

    Greetings,
    My first post on these forums, so I apologize if this has already been covered (I've done some limited searching without success). I'm developing a .Net application which accesses my organization's RoboHelp-generated WebHelp. My organization's RoboHelp documentation team is still new to the software, so it's been up to me to chart the course for establishing the workflow for connecting to the help from the application. I've read up on Peter Grange's 'calling webhelp' section on his blog, but I'm still a bit unclear about what the best-practices approach for connecting to WebHelp might be.
    To date, my org. has been delayed in letting me know their TopicIDs or MapIDs for their various documented topics.  However, I have been able to acquire the relative paths to those topics (I achieved this by manually browsing their online help and extracting out the paths).  And I've been able to use the strategy of creating the link via constructing a URL (following the strategy of using the following syntax: "<root URL>?#<relative URI path>" alternating with "<root URL>??#<relative URI path>").  It strikes me, however, that this approach is somewhat of a hack - since RoboHelp provides other approaches to linking to their documentation via TopicID and MapID.
    What is the recommended/best-practices approach here? Are they all equally valid, or are there pitfalls I'm missing? I'm inclined to use the URL methodology I've established above since it works for my needs so far, but I'm worried that I'm not seeing the forest for the trees...
    Regards,
    Brett
    contractor to the USGS
    Lakewood, CO
    PS: we're using RoboHelp 9.0

    I've been giving this some thought over the weekend and this is the best answer I've come up with from a developer's perspective:
    (1) Connecting via URL is convenient if you have an established naming convention that works for everyone (as Peter mentioned in his reply above)
    (2) Connecting via URL has the disadvantage that changes to the file names and/or folder structure by the author will break connectivity
    (3) Connecting via TopicID/MapID has the advantage that if there is no naming convention or if it's fluid or under construction, the author can maintain that ID after making changes to his/her file or folder structure and still maintain the application connectivity.  Another approach to solving this problem if you're working with URLs would be to set up a web service that would match file addresses to some identifier utilized by the developer (basically a TopicID/MapID coming from the other direction).
    (4) Connecting via TopicID has an aesthetic appeal in the code, since it's easy to provide a more English-readable identifier. As a .Net developer, I find it easy and convenient to construct an enum that matches my TopicIDs and to use that enum to construct my identifier when it comes time to make the documentation call.
    (5) Connecting via URL is more convenient for the author, since he/she doesn't have to worry about maintaining IDs
    (6) Connecting via TopicIDs/MapIDs forces the author to maintain those IDs and allows the documentation to be more easily used in the future by other applications worked on by developers who might have their own preference as to how they make their connection.
    Hope that helps for posterity.  I'd be interested if anyone else had thoughts to add.
    -Brett

  • Best Practice: Usage of the ABAP Packages Concept?

    Hi SDN folks,
      I've just started on a new project - I have significant ABAP development experience (15 years+) - but one thing that I have never seen used correctly is the Package concept in ABAP - for any of the projects that I have worked on.
    I would like to define some best practices - about when we should create packages - and about how they should be structured.
    My understanding of the package concept is that packages allow you to bundle together all of the related objects of a piece of development work. In previous projects - and almost every project I have ever worked on - we just have packages ZBASIS, ZMM, ZSD, ZFI and so on. But this to me is a very crude usage of packages, and really it seems that we have not moved on past the 4.6 usage of the old development class concept - and it means that packages do not really add much value.
    I read in the SAP PRESS Next Generation ABAP book (Thomas Jung, Rich Heilman) (I only have the 1st edition) that we should use packages for defining separation of concerns for an application. So it seems they are recommending that for each and every application we write, we define at least 3 packages - one for model, one for controller and one for view-based objects. It occurs to me that following this approach will lead to a tremendous number of packages over the life cycle of an implementation, which could potentially lead to confusion - and so also add little value. Is this really the best-practice approach? Has anyone tried this approach across a full-blown implementation?
    As we are starting a new implementation - we will be running with 7 EHP2 and I would really like to get the most out of the functionality that is provided. I wonder what others have for experience in the definition of packages.
    One possible usage that occurs to me is to define the packages as a mirror image of the application business object class hierarchy (see below). But perhaps this is overcomplicating their usage - and would lead to issues later in terms of transport conflicts etc.:
                                          ZSD
                                            |
                    ZSOrder    ZDelivery   ZBillingDoc
    Does anyone have any good recommendations for the usage of the ABAP Package concept - from real life project experience?
    All contributions are most welcome - although please refrain from sending links on how to create packages in SE80
    Kind Regards,
    Julian

    Hi Julian,
    I have struggled with the same questions you are addressing. On a previous project we tried to model based on packages, but during the course of the project we encountered some problems that grew over time. The main problems were:
    1. It is hard to enforce rules on package assignments
    2. With multiple developers on the project and limited time, we didn't have time to review package assignments
    3. Developers would click away warnings that an object was already part of another project and just continue
    4. After go-live the maintenance partner didn't care.
    So, my experience is that it is a nice feature, but only from a high-level design point of view. In real life it gets messy and, above all, it doesn't add much value to the development. On my new assignment we are just working with packages based on functional area, and that works just fine.
    Roy

  • Code Set pattern or best practice?

    Hi all,
    I have what I would have thought to be a common problem: the best way to model and implement an organization's code sets. I've Googled, and I've forumed - without success.
    The problem domain is this: I'm redeveloping an existing application which currently represents its vast array of code sets using a separate table for each set. There are currently 180+ of these tables - not a very elegant approach. The majority of these code sets are what I would class as "simple": a numeric value associated with a textual description, e.g. 1 = male, 2 = female, or 1 = "drinks excessively", 2 = "drinks sometimes", etc. Most of these will just be used to associate a value with a combo-box selection.
    There are also what I would class as "complex" code sets, which may have 1..n attributes (ie not just a numeric and text value pair). An example of this (not overly complex) is zip code, which has a unique identifier, the zip code itself (which may change - hence the id), a locality description, and a state value.
    Is there a "best practice" approach or pattern which outlines the most efficient way of implementing such code sets? I need to consider performance vs the ability to update the code set values, as some of them may change from time to time without notice at the discretion of government departments.
    I had considered hard coding, creating classes to represent each one, holding them in xml files, storing in the database etc, but it would seem that making the structure generic enough to cater to varying numbers of attributes and their associated datatypes will be at the cost of performance.
    Any suggestions would be greatly appreciated.
    Thanks.
    Paul C.

    Hi Saish,
    Thanks for your response. Yes, this approach is what I had considered - I'll be using Hibernate, so these values will be cached etc.
    I guess my main concern is reducing the huge number of very small tables in use. Thinking about this some more, for the simple sets I was considering 2 tables: one (e.g. "CODE_SET") to describe the code set (or ref table etc.) in question, and a second to hold the values. This way 80-odd tables would be reduced to 2. Not sure what's best here - a simpler ER diagram or more performance!
    Tables...
    Enumeration
    - EnumerationId
    - EnumerationName
    - EnumerationAbbreviation
    EnumerationValues
    - EnumerationId
    - ValueIndex
    - ValueName
    - ValueAbbreviation
    The above allows the names to change.
    You can add a delete flag if values might be deleted but old records need to be maintained.
    Convention: In the above I specifically name the second table with a plural because it holds a collection of sets (plural) rather than a single set.
    In the first table the id is the key. In the second the id and the index are the key. The ids are unique (of course). The enumeration name should be unique in the first table. In the second table the EnumerationId and value name should be unique.
    Conversely you might choose to base uniqueness on the abbreviation rather than the name.
    The Name vs Abbreviation are used for reporting/display purposes (long name versus short name).
    It is likely that for display/report purposes you will have to deal with each of the sets individually rather than as a group. Ideally (strongly urged) you should create something that auto-generates a Java enumeration (type-safe with 1.5, or a general class with 1.4) that uses the id values, and perhaps the indexes, as the values, with the names generated from the abbreviations. This should also generate the database load table for the values. Obviously, going forward, care must be taken in how this is modified.
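    As a sketch, the two-table design above could look like this in SQL; the data types, the soft-delete flag and the sample rows are assumptions added for illustration:
    -- Generic code-set tables replacing the 180+ per-set tables.
    CREATE TABLE Enumeration (
        EnumerationId           INT          NOT NULL PRIMARY KEY,
        EnumerationName         VARCHAR(100) NOT NULL UNIQUE,
        EnumerationAbbreviation VARCHAR(20)  NOT NULL
    );
    CREATE TABLE EnumerationValues (
        EnumerationId     INT          NOT NULL REFERENCES Enumeration (EnumerationId),
        ValueIndex        INT          NOT NULL,
        ValueName         VARCHAR(100) NOT NULL,
        ValueAbbreviation VARCHAR(20)  NOT NULL,
        IsDeleted         BIT          NOT NULL DEFAULT 0,  -- optional soft-delete flag
        PRIMARY KEY (EnumerationId, ValueIndex),
        UNIQUE (EnumerationId, ValueName)
    );
    -- Example: the simple "gender" set from the question.
    INSERT INTO Enumeration VALUES (1, 'Gender', 'GEN');
    INSERT INTO EnumerationValues (EnumerationId, ValueIndex, ValueName, ValueAbbreviation)
    VALUES (1, 1, 'Male', 'M'), (1, 2, 'Female', 'F');
    The complex sets (such as zip code) would still warrant their own tables, since they carry more attributes than a simple name/abbreviation pair.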

  • CAS array internal DNS IP address best practice

    Hi, Just a question about a best practice approach for DNS and CAS arrays.
    I have an Exchange 2010 org. I have two CAS/HUB servers and two MBX servers. My external DNS (mail.mycompany.biz) host record points to a public IP address which is NAT'd to the internal IP address of my NLB CAS cluster. I maintain a split-brain DNS. Should the internal DNS entry for mail.mycompany.biz also point to the public IP address, or should it point to the internal IP address of the NLB cluster?

    A few comments:
    The reason you have split DNS is to do exactly this sort of thing: inside users hit the inside IP and outside users hit the outside IP. You'll have to look at your overall network design to see if it makes sense for users to take this shortest route to the services, or if there is value in having all users simply take the same path.
    You should not be using the same DNS name for your web services (e.g. OWA) as you are for your CAS array. This can cause very long connection delays on Outlook clients, not to mention overall confusion in your design. Many orgs will use something like "outlook.domain.com" for the Client Access Array and "mail.domain.com" for the web services. Only the latter of the two needs to be exposed to the internet.
    Keep in mind that Exchange 2013 dramatically changes this guidance. There is no more CAS array, and the recommended design is to use dedicated namespaces for each web service.
    Mike Crowley | MVP
    My Blog --
    Planet Technologies

  • Oracle R12 HCM to SCM Key Integration Points Best Practice Documentation

    My client is implementing Oracle R12 HCM and SCM modules on a Single Global Instance and would like to know if there are any key integration points or best practice documentation.
    Impacted scenarios include:
    1.     Multiple Business Groups
    2.     Retiree Business Groups
    3.     Ex-Pats
    4.     Return to Workers (Payroll vs Pension)
    Thank you,
    Steve

