Best practice for using system users to execute different RFC functions

Hello experts,
Could you share your thoughts on the use of system user IDs in RFC destinations (type 3)?
Is it recommended to use ONE RFC destination (type 3) for executing various functions?
Or is it recommended to use different RFC destinations (type 3), each with its own system user ID, depending on the functions being executed?
Thanks in advance for your thoughts.
Thanks
Himadama

Thanks, Julius, for your information.
Option 1: System user ID / password:
The password is sent over the network, but SNC would take care of this.
1 destination = 1 system user ID
If the user's password is compromised, the risk is limited to that destination's function group (RFC_NAME in the S_RFC authorization object).
System users only need RFC authorizations (S_RFC).
Monitoring only the system users would be enough.
Better password management if required.
Option 2: "Current User" option:
One user may need access to more than one function group (RFC_NAME).
A compromised user password would cause more damage than a compromised system user, since this user has broad access to execute multiple function groups.
Monitoring these users would be a heavy job because of the increased number of users.
The management of authorizations (roles) for these users may require a strict approval process.
Option 3: Trusted system option:
• RFC_SYSID :
• RFC_CLIENT:
• RFC_USER  : ' '
• RFC_EQUSER: Y (for Yes)
• RFC_TCODE : *
• RFC_INFO  :
• ACTVT     : 16
It seems the user requires both authorization objects, S_RFCACL and S_RFC, in this case.
A compromised user password would cause more damage than a compromised system user, since this user has broad access to execute multiple function groups.
Monitoring these users would be a heavy job because of the increased number of users.
The management of authorizations (roles) for these users may require a strict approval process.
Would you say the system user ID/password approach is a better option than the other methods, given that SNC is in place? Please share your thoughts.
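For illustration, here is a minimal sketch of how an external Java caller could hold a dedicated system user in a JCo 3 destination with SNC switched on, so that the logon and all RFC traffic are protected on the wire. This is only an assumption about the calling side (the thread itself is about ABAP SM59 type 3 destinations), and every host name, user and SNC value below is a placeholder:

    import java.io.FileOutputStream;
    import java.util.Properties;

    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.ext.DestinationDataProvider;

    public class SncSystemUserDestination {

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Connection data -- placeholders, replace with real values.
            props.setProperty(DestinationDataProvider.JCO_ASHOST, "appserver.example.com");
            props.setProperty(DestinationDataProvider.JCO_SYSNR, "00");
            props.setProperty(DestinationDataProvider.JCO_CLIENT, "100");
            // Dedicated system (communication) user for exactly one purpose.
            props.setProperty(DestinationDataProvider.JCO_USER, "ZRFC_BILLING");
            props.setProperty(DestinationDataProvider.JCO_PASSWD, "secret");
            // SNC: protects the logon and all RFC traffic on the wire.
            props.setProperty(DestinationDataProvider.JCO_SNC_MODE, "1");
            props.setProperty(DestinationDataProvider.JCO_SNC_QOP, "9");  // maximum protection
            props.setProperty(DestinationDataProvider.JCO_SNC_PARTNERNAME, "p:CN=SAP01, O=Example, C=US");
            props.setProperty(DestinationDataProvider.JCO_SNC_LIBRARY, "/usr/sap/snc/libsapcrypto.so");

            // The default file-based provider picks up <name>.jcoDestination files.
            try (FileOutputStream out = new FileOutputStream("BILLING_SYSUSER.jcoDestination")) {
                props.store(out, "RFC destination with a dedicated system user and SNC");
            }

            JCoDestination dest = JCoDestinationManager.getDestination("BILLING_SYSUSER");
            dest.ping();  // throws JCoException if the logon or the SNC handshake fails
        }
    }

The same idea carries over to an SM59 destination: one destination, one narrowly authorized system user, SNC active.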
Thanks
Himadama

Similar Messages

  • What are the best practices to migrate VPN users for an inter-forest migration?

    It depends on various factors. There is no "generic" solution or best practice recommendation. Which migration tool are you planning to use?
    Quest (QMM) has a VPN migration solution/tool.
    ADMT - you can develop your own service-based solution if required. I believe it was mentioned in my blog post.
    Santhosh Sivarajan | Houston, TX | www.sivarajan.com

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic etc.  I get that it's best practice to try and separate traffic where you can, especially for things like FT, however I just wondered if there was a preferred method to achieving this.  What I mean is ...
    -     Is it OK to have everything on one switch but set each respective portgroup to have a primary and a failover NIC, i.e. FT, iSCSI and all the others fail over (this would sort of give you a backup in situations where you have limited physical NICs)?
    -    Or should I always aim to separate things entirely with their own respective NICs and their own respective switches?
    During the VCAP exam for example (not knowing in advance how many physical NICs will be available to me), how would I know which stuff I should segregate onto its own separate switch?  Is there some sort of ranking order of priority/importance?  FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on how many physical NICs you have at your disposal, however I wondered if there are any golden 100% rules, for example that FT must absolutely be on its own switch with its own NICs even at the expense of reduced resiliency should the absolute worst happen?  Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch etc.

  • Call different RFC Functions depending on variable

    Hi,
    I have to implement the following requirement:
    My program generates a report for the user and the user can choose which data he wants to have.
    Depending on the user's decisions, I have to use different RFC functions to get the data.
    I have the name of the RFC function and I want to call the RFC function with this name dynamically.
    For example:
    rfcName = "Z_FUNC_1";
    rfc = // get the executable object for Z_FUNC_1 somehow -- this is my problem
    rfc.execute();
    Is there any way to solve this?  I know that I could solve it like this, but that is not as flexible as I want:
    if (whichRFC == 1) {
        Z_Func_1_input rfc = new Z_Func_1_input();
        rfc.execute();
    }
    Any ideas?
    Best regards,
    Peter

    Dear Peter,
    How many RFC models are you going to create? In my opinion it would be too tedious and redundant, as you would have to create a different model for each RFC.
    Instead, you could create a single RFC and within it call the required RFC depending on the user's selection.
    It will simplify your application as well as your task.
    Hope it helps!!
    Warm Regards
    Upendra Agrawal
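    If the environment allows direct use of the JCo 3 API rather than generated Web Dynpro RFC models, a minimal sketch of calling a function module by name at runtime could look like the following; the destination name and the parameter names (IV_SELECTION, EV_RESULT) are assumptions and would have to match the real function module signatures:

        import com.sap.conn.jco.JCoDestination;
        import com.sap.conn.jco.JCoDestinationManager;
        import com.sap.conn.jco.JCoException;
        import com.sap.conn.jco.JCoFunction;

        public class DynamicRfcCall {

            /** Looks up an RFC-enabled function module by name and executes it. */
            public static String callByName(String rfcName, String destinationName) throws JCoException {
                JCoDestination destination = JCoDestinationManager.getDestination(destinationName);

                // The repository resolves the function metadata at runtime, so no
                // generated model class per function module is required.
                JCoFunction function = destination.getRepository().getFunction(rfcName);
                if (function == null) {
                    throw new IllegalArgumentException(rfcName + " not found in " + destinationName);
                }

                // Hypothetical parameter names -- adapt to the real signature.
                function.getImportParameterList().setValue("IV_SELECTION", "ALL");
                function.execute(destination);
                return function.getExportParameterList().getString("EV_RESULT");
            }

            public static void main(String[] args) throws JCoException {
                // Function module name chosen at runtime, e.g. from the user's selection.
                System.out.println(callByName("Z_FUNC_1", "BACKEND_DEST"));
            }
        }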

  • Runtime analysis for an RFC function Module

    Hi,
    How do I get a runtime analysis for an RFC function module?
    I have an RFC function module that I am using for a web interface. For this function module I need to get a runtime analysis.
    Please do not duplicate or cross post
    Edited by: Rob Burbank on Feb 21, 2009 11:42 AM

    Total Questions:  40 (39 unresolved)
    Duplicate thread locked.
    Rob

  • Toy Store Best Practice: How to implement 'cancel' for register user page

    Let's say a user wants to register or (even edit) his/her account. But after being forwarded to the register (edit) account page, he/she decides not to do it. I would like to implement a 'cancel' button and return the user to wherever he/she was before coming to this page.
    What is the best practice?
    Even worse, if the user gets to this page and never saves the entry, the model is dirty. If the user then wants to commit something else to the DB, it may commit an incorrect (blank) entry for the created row. What I am after is the best practice for keeping track of whether the model gets dirty and for deleting invalid rows in general.

    You might want to read this thread:
    Cancel operation followed by refresh raises JBO-33035
    (very similar discussion I was having with Steve)
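    Since the referenced thread is about ADF Business Components, here is a small sketch (with hypothetical application module and view object names) of the usual cancel handling: roll back the transaction so that a pending blank row can never be committed later.

        import oracle.jbo.ApplicationModule;
        import oracle.jbo.Row;
        import oracle.jbo.ViewObject;
        import oracle.jbo.client.Configuration;

        public class CancelRegistrationSketch {

            public static void main(String[] args) {
                // Hypothetical application module / view object names.
                ApplicationModule am = Configuration.createRootApplicationModule(
                        "toystore.model.ToyStoreServiceAM", "ToyStoreServiceAMLocal");
                ViewObject accounts = am.findViewObject("AccountsView");

                // User navigates to the register page: a blank draft row enters the model.
                Row draft = accounts.createRow();
                accounts.insertRow(draft);

                // User presses Cancel: discard everything pending in this transaction
                // so a later commit cannot post the blank row.
                am.getTransaction().rollback();

                Configuration.releaseRootApplicationModule(am, true);
            }
        }

    In the web application the same rollback would be triggered from the Cancel action before any validation or commit runs, which addresses both the navigation and the dirty-model concern.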

  • Best practice to use Time Capsule for backup of 3 different products (MBP 15 OS X Lion, MBP 13 OS X Lion and MBA 13 OS X Mountain Lion)? Only the MBP15 is backed up regularly.

    When I want to save data from the MBA13 on Mountain Lion (wirelessly) with the Time Capsule, is there any best practice to follow?
    After that, assuming the data are backed up, can we easily differentiate the data in the Time Capsule belonging to the MBP15/13 and the MBA13?

    Unfortunately, Apple left off the Ethernet port (the most important port in networking) on the MBA, so your first backup of the entire Mac will need to be done using wireless.
    That may take a day or two, unless your MBA has a Thunderbolt port, in which case you could add a Thunderbolt to Ethernet adapter and connect the MBA to the Time Capsule for the first backup using an Ethernet cable.  It will probably only take 3-4 hours or less doing it this way.
    Once you have the first complete backup done, subsequent backups can be done using wireless since they will only take a few minutes, on average.
    Both Macs will backup to the Time Capsule using Time Machine automatically. Backups will be kept completely separate, so one Mac will normally only be able to "see" its own backups.

  • [XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

    Business Objects XI (3.1) - SP3.
    Running on Red Hat Enterprise Linux OS.
    7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
    All reports connecting to Oracle 10g databases.
    ==================
    In the past, all of this infrastructure was running on Windows Server OS and providing the database access via a named ODBC connection (e.g. "APP_DATA").
    This made it easy to manage as all the Report Developers had a standard System DSN called "APP_DATA", which was the same as the System DSN name on all of our DEV, TEST/UAT, and PROD servers for Business Objects.
    When we wanted to move/promote a *.rpt file from DEV to PROD we did not have to change any "Database Connection" info, as it was all taken care of by pointing the System DSN called "APP_DATA" at a different physical Oracle server at the ODBC level.
    Now, that hardware is moving from Windows OS to Red Hat Linux and we are trying to determine the Best Practices (and Pros/Cons) of using one of the three methods below to access the Oracle database for our *.rpts....
    1.) Oracle Native connection
    2.) ODBC connection
    3.) JDBC connection
    Here's what we have determined so far -
    1a.) Oracle Native connection should be the most efficient method of passing SQL-query to the DB with the fewest issues and best speed [PRO]
    1b.) Oracle Native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
    1c.) Using Oracle Native would require special-handling on the *.rpts at either the source-file or the CMC level to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    2a.) A 3rd-Party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would allow us to use a similar Developer / Admin overhead to what we are used to. [PRO]
    2b.) Adding a 3rd-Party Vendor into the mix may lead to support issues if we have problems with the results or speeds of our queries. [CON]
    3a.) JDBC appears to be the "de facto standard" when running Oracle SQL queries from Linux. [PRO]
    3b.) There may be issues with results or speeds of our queries when using JDBC. [CON]
    3c.) Using JDBC requires the explicit-IP of the Oracle server to be defined for each connection. This would require special-handling on the *.rpts at either the source-file (and NOT the CMC level) to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    ==================
    We would appreciate some advice from anyone who has been down this road before.
    What were your Best Practices?
    What can you add to the Pros and Cons listed above?
    How do we find the "sweet spot" between quality/performance/speed of reports and easy-overhead for the Admins and Developers?
    As always, thanks in advance for your comments.

    Hi,
    I just saw this article and I would like to add some infos.
    First, you can quite easily reproduce the same way of working as with the ODBC entries by playing with Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server for a specific name that stays the same across all environments.
    The database name will be resolved differently depending on the environment and will therefore access a different database.
    The second option is to change the connection in the .rpt files in an automated way with a tool like the Schedule Manager. This is an additional web application to deploy that can change the connection settings of thousands of RPT reports in a few clicks. You can find it here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
    The last option is to do it with a small SDK script; a few lines of code can change all the reports in one go.
    After several implementations on Linux against Oracle databases I would also prefer the native connection. ODBC and JDBC are deprecated ways to connect to the database. You can use DataDirect connectors, which are quite good, but at high volumes you will see the difference.
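    As a small illustration of the name-resolution idea (and of the concern in 3c above), the Oracle thin JDBC driver can also resolve a tnsnames.ora alias, so the same alias name can point to a different server per environment. A minimal sketch, assuming the ojdbc driver is on the classpath and the alias is called APP_DATA as in the old ODBC setup:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class TnsAliasJdbcExample {

            public static void main(String[] args) throws Exception {
                // Point the thin driver at the directory holding tnsnames.ora on this server.
                // Each environment (DEV/TEST/PROD) keeps its own tnsnames.ora, so the same
                // alias resolves to a different physical Oracle server per environment.
                System.setProperty("oracle.net.tns_admin", "/opt/oracle/network/admin");

                try (Connection conn = DriverManager.getConnection(
                            "jdbc:oracle:thin:@APP_DATA", "report_user", "report_password");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT sysdate FROM dual")) {
                    rs.next();
                    System.out.println("Connected, DB time is " + rs.getString(1));
                }
            }
        }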

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare side-by-side actual measures with their corresponding objectives/targets. Sounds simple. But, our objectives are static (not able to be aggregated) with multi-dimensionality and multi-levels. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions that are used in the criteria and regardless of the level.
    Here is some more details:
    Example of existing objective table.
    Dimension1   Dimension2   Dimension3   Obj1   Obj2   Quarter
    NULL         NULL         NULL         .99    1.8    1Q13
    DIM1VAL1     NULL         NULL         .99    2.4    1Q13
    DIM1VAL1     DIM2VAL1     NULL         .98    2.41   1Q13
    DIM1VAL1     DIM2VAL1     DIM3VAL1     .97    2.3    1Q13
    DIM1VAL1     NULL         DIM3VAL1     .96    1.9    1Q13
    NULL         DIM2VAL1     NULL         .97    2.2    1Q13
    NULL         DIM2VAL1     DIM3VAL1     .95    2.0    1Q13
    NULL         NULL         DIM3VAL1     .94    3.1    1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. Using our existing structure if we were to add a new dimension to the mix the possible combinations would grow dramatically. (Not flexible)
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.

    Hi, yes this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a DB. Or, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering this, for example by changing config files, so that the data does not go to that Reports schema and instead goes to a custom schema created by a user? I don't know if it can be done. My problem is that when I configure the JMS topic for sensor actions, I see blank data coming; for some reason the data is not getting posted. I have used an ESB and a routing service based on the schema which I am monitoring. Can anyone help?

  • Best Practice while configuring Traffic Manager for Azure Website

    Hi Team,
    I want to understand what the best practice is when configuring Traffic Manager for an Azure website.
    To give you the background, let me explain my requirement. I have one website whose target audience is roughly 40% East US, 40% UK and the remaining 20% Asia-Pacific.
    Now, what I want is a failover + performance based Traffic Manager configuration.
    My thinking:
    1) We need to create one website with 2 instances in each region (East US, East Asia, West US for example), so 3 deployments of the website in total (with a region-based URL for each).
    2) Create a Traffic Manager profile based on performance and add those 3 deployments; that becomes website-tmonperformance.
    3) Create a Traffic Manager profile based on failover and add those 3 deployments; that becomes website-tmonfailover.
    4) Create a final Traffic Manager profile (I don't know with which criteria), add both of the above profiles to it, and use its URL for the end users.
    I am not sure (1) whether this is the right approach and (2) if it is, which criterion we should select in step 4 when creating the final Traffic Manager profile: round-robin, performance or failover?
    After all this, if a user tries to access the site from the US, will Traffic Manager divert them to the US data centre, or will it wait for failover and until then serve them from East Asia if, in the configuration, East Asia is my first instance?
    Regards, Brijesh Shah

    Hi Jonathan,
    Thanks for your quick reply. Actually the question is a bit different; let me explain it another way.
    I was asking for a recommendation from the Azure Traffic Manager team on whether my understanding is correct or not. We want performance with failover.
    So, we have one Azure website, say todoapp, deployed in 3 different regions. Now I want performance-based routing as well as failover-based routing, but obviously I can't give two URLs to my end users, so on top of that I will
    require one more Traffic Manager profile. So:
    Step 1: I will create one Traffic Manager profile with the performance criterion, named TMForPerformance.trafficmanager.com, where I will add all those 3 instances (all are from different regions, so it won't create any issue).
    Step 2: I will create one more Traffic Manager profile with the failover criterion, named TMForFailover.trafficmanager.com, where I will add all those 3 instances (all are from different regions, so it won't create any issue).
    Step 3: I will create one final Traffic Manager profile with the performance criterion, named todoapp.trafficmanager.com, where I will add these two profiles instead of the 3 regional websites.
    Question 1) Is this the correct structure if we want to achieve performance with failover, or is there a better solution?
    Question 2) In step 3, which criterion should we select: performance, round robin or failover?
    Regards, Brijesh Shah

  • Best Way to Define Employee Numbering For Different Countries

    Hi all,
    I am working on a project that will cover multiple countries, but each country has a different instance. Now the question is: what could be the best way to define employee numbering for all the regions? As far as my knowledge goes, it would be a number prefixed by the country. Then how can we have alphanumeric values in the employee coding? We need a Fast Formula to achieve this task. If anybody has come across this scenario, please share your thoughts. Any thoughts are greatly appreciated.
    Cheers
    Kumar cs

    If you have each country on a different instance and you want employees to retain their employee number on transfer between countries, I believe you have two options:
    i) Use manual numbering. One recent change to the system is that you can update the numbering profile from Auto to Manual and back to Auto again. So you could potentially leverage this manual workaround when a cross BG transfer happens.
    ii) Bespoke module to export all emp numbers into a 'central repository', e.g., flat file, that can be accessed by all instances, imported regularly and then referenced by your Fast Formula. You would also need processing to determine when a person is a transfer from another BG or a new hire; probably have to be some control field on flex. Basically, this is a requirement made difficult by the decision to host each legislation on a different instance; I would never recommend clients to go down this route but I guess it's too late in your case.
    If cross BG transfers are not common, e.g., fewer than 5 a week, I would recommend option (i).

  • Best Practice: A J2EE Blue-Print for a Typical Web App

    Consider a typical synchronous Struts-based web application which does a simple DB search and post. What are some of the main patterns and components that should be used if following the "industry best practices"?
    Does the following flow seem accurate?
    The Struts Action creates a TransferObject and passes it to a Business Delegate. The Delegate finds the appropriate BusinessObject, the BusinessObject uses the Data Access Object, the CRUD operation happens, and the result is sent back to the Action in the same TransferObject.
    Which of these components need an interface?
    What's the best way for these components to interact with each other (factory, etc.)?

    There are 3 tiers in a Java EE application. (Presentation, Business, Integration).
    The BusinessDelegate in this scenario would be a Presentation-tier business delegate. This guy would interact with a Session Facade who lives on the Business-tier. The SessionFacade is the abstraction on the Business-tier and the Business Delegate is the abstraction on the Presentation-tier. It is these guys that have direct communication. This design enables low coupling between the actual implementations of each area. If done properly, you could go from EJB to Web Service to POJO business models without ever having to change anything in the Presentation-tier.
    These object-oriented design patterns are primarily for Enterprise applications with extensive Quality-of-Service requirements.
    In your scenario, the Presentation-tier would contain a MVC-based web application, i.e. Struts. The business model and business/domain requirements would be implemented in the Business-tier.
    Presentation Tier - Struts Web Application
    Business Tier - (EJB | POJO | WEB SERVICES) Application
    Integration Tier - (Relational Database | File System | XML Database | EIS)
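    As a rough sketch of the delegate/facade interaction described above (all type names here are made up for the example, and in a real project each type would live in its own file in the appropriate presentation- or business-tier package):

        // Transfer object carried across tiers.
        class ItemTO implements java.io.Serializable {
            private final String itemId;
            private final String description;

            ItemTO(String itemId, String description) {
                this.itemId = itemId;
                this.description = description;
            }
            String getItemId() { return itemId; }
            String getDescription() { return description; }
        }

        // Business-tier abstraction: could be backed by an EJB session bean,
        // a web service client or a plain POJO model.
        interface CatalogFacade {
            ItemTO findItem(String itemId);
        }

        // Presentation-tier abstraction: the Struts Action only knows this interface.
        interface CatalogDelegate {
            ItemTO findItem(String itemId);
        }

        // Business delegate: hides lookup/remoting details from the presentation tier.
        class CatalogDelegateImpl implements CatalogDelegate {
            private final CatalogFacade facade;

            CatalogDelegateImpl(CatalogFacade facade) {
                this.facade = facade;  // obtained via a service locator / JNDI / factory
            }

            @Override
            public ItemTO findItem(String itemId) {
                return facade.findItem(itemId);  // translate remote/checked exceptions here
            }
        }

    Because the Action depends only on CatalogDelegate, the facade implementation can move from EJB to web service to plain POJO without the presentation tier changing.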

  • Best Practice: Usage of the ABAP Packages Concept?

    Hi SDN folks,
      I've just started on a new project - I have significant ABAP development experience (15 years+) - but one thing that I have never seen used correctly is the Package concept in ABAP - for any of the projects that I have worked on.
    I would like to define some best practices - about when we should create packages - and about how they should be structured.
    My understanding of the package concept is that it allows you to bundle all of the related objects of a piece of development work together. In previous projects - and almost every project I have ever worked on - we just have packages ZBASIS, ZMM, ZSD, ZFI and so on. But this to me is a very crude usage of packages; really it seems that we have not moved on past the 4.6 usage of the old development class concept, and it means that packages do not really add much value.
    I read in the SAP PRESS Next Generation ABAP book (Thomas Jung, Rich Heilmann) (I only have the 1st edition) that we should use packages for defining separation of concerns for an application. So it seems they are recommending that for each and every application we write, we define at least 3 packages - one for model, one for controller and one for view based objects. It occurs to me that following this approach will lead to a tremendous number of packages over the life cycle of an implementation, which could potentially lead to confusion - and so also add little value. Is this really the best practice approach? Has anyone tried this approach across a full blown implementation?
    As we are starting a new implementation - we will be running with 7 EHP2 and I would really like to get the most out of the functionality that is provided. I wonder what others have for experience in the definition of packages.
    One possible usage occurs to me that you could define the packages as a mirror image of the application business object class hierarchy (see below). But perhaps this is overcomplicating their usage - and would lead to issues later in terms of transportation conflicts etc.:
                                          ZSD
                                            |
                    ZSOrder    ZDelivery   ZBillingDoc
    Does anyone have any good recommendations for the usage of the ABAP Package concept - from real life project experience?
    All contributions are most welcome - although please refrain from sending links on how to create packages in SE80
    Kind Regards,
    Julian

    Hi Julian,
    I have struggled with the same questions you are addressing. On a previous project we tried to model based on packages, but during the course of the project we encountered some problems that grew over time. The main problems were:
    1. It is hard to enforce rules on package assignments
    2. With multiple developers on the project and limited time we didn't have time to review package assignments
    3. Developers would click away warnings that an object was already part of another package and just continue
    4. After go-live the maintenance partner didn't care.
    So, my experience is that it is a nice feature, but only from a high-level design point of view. In real life it will get messy and, above all, it doesn't add much value to the development. On my new assignment we are just working with packages based on functional area and that works just fine.
    Roy

  • Best practice DNS in VPN environment for Lync2013 clients

    I have site-to-site VPNs to connect the small branch offices to the main office. Internal DNS makes sure that the branch offices can access all the servers/services in the main office with their domain.local namespace.
    In such a scenario, will the Lync 2013 clients connect through the VPN to the internal sites due to both lyncdiscover and lyncdiscoverinternal being available?
    Wouldn't it put far less burden on the VPN routers if clients simply went out to the internet and connected from the external side, so that all the Lync traffic does not have to be pushed through the VPN pipe? I don't see the point of encrypting the traffic once more.
    Thanks for your suggestions about best practices!
    HST

      Hi,
    When users connect to the corporate network using a VPN client, Lync media traffic is sent through the VPN tunnel. This configuration can create additional latency and jitter because media traffic must pass through an additional layer of encryption and
    decryption. The issue is compounded when the VPN concentrator is busy.
    If you want to connect to Lync Server from the public network, you need to deploy an Edge Server.
    There is a supported way to force media for VPN-connected external Lync clients through the Edge Servers instead of the VPN tunnel; you can refer to the "Solution Configuration" part of the link below:
     http://blogs.technet.com/b/nexthop/archive/2011/11/15/enabling-lync-media-to-bypass-a-vpn-tunnel.aspx
    Best Regards,
    Eason Huang
    TechNet Community Support

  • Best Practice - Securing Schema from User Access

    Scenario:
    User A requires access to schema called BLAH.
    User A is a developer who built an application using this schema in a separate development environment, but has the same privileges mirrored to production (same roles etc., required for operating the application built).
    This means that the user has roles that grant Select, Update etc. rights for the schema / tables in order to use (and maintain) the applications.
    How can we restrict access to the BLAH schema in PRODUCTION, enforcing it to be accessible only via the middle tier / application (proxy authentication?)?
    We've looked at using proxy authentication; however, it's not possible to grant roles and rights to the proxy account and NOT have them granted to the user (who can otherwise dive straight in using development tooling and hit prod etc.).
    We've tried granting it on a session basis using proxy authentication (i.e. User A connects via the proxy, and we ENABLE a disabled role on the user based on this connection); however, it causes performance issues.
    Are we tackling this the wrong way? What's the best practice for securing Oracle schemas (and objects in general) for user access, where the users actually get an Oracle user account (or even use SSO) for day-to-day business as usual?
    To me this feels like a common scenario, especially where SSO comes into play ...

    What about situations where we have Legacy Oracle Forms stuff? In these cases the user must be granted select etc rights to particular objects, as this can't connect via a middle tier.
    The problem we have is that our existing middle tier implementation is built expecting the user credentials to be passed to it during initial authentication and does not use a proxy or superuser-style account.  We have, historically, been 100% reliant on Oracle rights and controls to validate and restrict access to our underlying data.  From what you are saying, we should start to look at using proxy or superuser access and move this control process further up - i.e. into code or packages?  If so, does this mean that there is no specific way to restrict schema access to given proxy accounts and then grant normal user accounts to connect through these to get access (kind of a delegated access scenario), without using disabled roles?
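    For what it's worth, here is a minimal sketch of the proxy-authentication route using the Oracle JDBC driver; the account names and the GRANT CONNECT THROUGH setup are placeholders. The point is only that the middle tier authenticates with the proxy account and then opens a lightweight session as the real end user:

        import java.sql.ResultSet;
        import java.sql.Statement;
        import java.util.Properties;

        import oracle.jdbc.OracleConnection;

        public class ProxySessionSketch {

            public static void main(String[] args) throws Exception {
                // The middle tier authenticates once with the proxy (application) account...
                java.sql.Connection raw = java.sql.DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost.example.com:1521/PRODSVC",
                        "app_proxy", "proxy_password");
                OracleConnection conn = raw.unwrap(OracleConnection.class);

                // ...and then opens a proxy session as the real end user.
                // Requires on the database side: ALTER USER user_a GRANT CONNECT THROUGH app_proxy;
                Properties prop = new Properties();
                prop.setProperty(OracleConnection.PROXY_USER_NAME, "user_a");
                conn.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, prop);

                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT USER FROM dual")) {
                    rs.next();
                    System.out.println("Session user is now: " + rs.getString(1));  // USER_A
                }

                conn.close(OracleConnection.PROXY_SESSION);  // drop only the end-user proxy session
                raw.close();                                 // release the proxy account connection
            }
        }

    The usual companion to this is a secure application role (CREATE ROLE ... IDENTIFIED USING a package), so that the sensitive role can only be enabled from a session that arrived through the approved path.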
