HANA Security - Best Practices for Schemas?

Hi,
Currently we don't have a defined security model in HANA Studio, nor are there defined duties for the BASIS / Security / Developer teams.
I want to understand what best practices other customers follow when defining security for schemas.
1. Who should be creating the schemas for Developers / Modelers?
2. Should we use our own IDs to create/maintain these schemas, or a generic ID?
Right now, when developers log in to Studio, they are assigned their own schema (named after their user ID) by default, and they create objects under it.
We (the security team) run into issues when developers want to develop objects under another user's schema and ask for access to it.
Also, who should own the "SYSTEM" user ID, and what steps need to be taken whenever a new schema is created?
Thanks for the help in advance.

>So, if we follow this approach, who should be creating the schema at design time?
Not sure what you mean by that. We call this design time because you are creating an artifact in the repository; the catalog object doesn't get created until you activate that design-time object.
> Security Administrator or Developer/Modeler?
It doesn't really matter; it depends on your process. That said, I would say most of the time the developer creates the schema. The developer doesn't immediately get access to the new schema: he or she must create a role, and that role has to be granted to them before they can see the objects in the new schema.
>Also, for our current scenario, where developers are doing changes in their own schema, what should be done as a Security Administrator to assign access to a user schema to other developers?
They shouldn't be creating objects in their user schema. That schema is for internal usage, such as the creation of temporary objects; it shouldn't be used for any development.
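For illustration, here is a minimal SQL sketch of the pattern the reply describes: a shared development schema plus a role that controls access to it. The schema, role, and user names (REPORTING, REPORTING_DEV, DEV_USER_1) are placeholders, and in a repository-based landscape you would typically model the schema and role as design-time artifacts (.hdbschema / .hdbrole) and activate them rather than issue these statements directly:

    -- Shared schema for development work instead of per-user schemas
    -- (all names below are placeholders, not from the original thread)
    CREATE SCHEMA "REPORTING";

    -- Role that bundles the privileges on the shared schema
    CREATE ROLE "REPORTING_DEV";

    -- Let role members read, change, and create objects in the schema
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE ANY
        ON SCHEMA "REPORTING" TO "REPORTING_DEV";

    -- Grant the role to each developer who needs access
    GRANT "REPORTING_DEV" TO "DEV_USER_1";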

Similar Messages

  • SAP HANA Security - Best Practice for Access to Schemas??


    Hi,
    I created a project (JDeveloper) with local xsd-files and tried to delete and recreate them in the structure pane with references to a version on the application server. After reopening the project I deployed it successfully to the BPEL server. The process is working fine, but in the structure pane there is no longer any information about the xsds, and for the payload in the variables there is an exception (problem building schema).
    How does BPEL know where to look for the xsd-files, and how does the mapping still work?
    This cannot be the correct way to do it. Do I have a chance to rework the existing project, or do I have to rebuild it from scratch in order to get all the references right?
    Thanks for any clue.
    Bette

  • What are Printing Security Best Practices for Advanced Features

    In Networking > Advanced "Enabled Features", what are the best-practice settings for security? I'm trying to find out what all of these are but can't find them in the documentation, particularly eCCL & eFCL.
    Enabled Features
    IPv4, IPv6, DHCP, DHCPv6, BOOTP, AUTOIP, LPD Printing, 9100 Printing, LPD Banner Page Printing, Bonjour, AirPrint, LLMNR, IPP Printing, IPPS Printing, FTP Printing, WS-Discovery, WS-Print, SLP, Telnet configuration, TFTP Configuration File, ARP-Ping, eCCL, eFCL, Enable DHCPv4 FQDN compliance with RFC 4702
    Thanks,
    John

    I do work with the LAST archived project file, which contains ALL necessary resources to edit the video.  But then if I add video clips to the project, these newly added clips are NOT in the archived project, so I archive it again.
    The more I think about it, the more I like this workflow.  One disadvantage as you said is duplicate videos and resource files.  But a couple of advantages I like are:
    1. You can revert to a previous version if there are any issues with a newer version, e.g., project corruption.
    2. You can open the archived project ANYWHERE, and all video and resource files are available.
    In terms of a larger project containing dozens of individual clips, like my upcoming 2013 highlights video of my 4-year-old, I'll delete older archived projects as I go and save maybe a couple of previous archived projects, in case I want to revert to them.
    If you are familiar with the lack of project management in iMovie, then you will know why I am elated to be using Premiere Elements 12 and being able to manage projects at all!
    Thanks again for your help, I'm looking forward to starting my next video project.

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top-level service that is defined as the business process is itself secured through OWSM with username and password, but what is the best practice for securing each of the lower-level services?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it performs any business-critical operation, then it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which uses one of the existing lower-level services.
    If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
    The typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Best practice for external but secure access to internal data?

    We need external customers/vendors/partners to access some of our company data (view/add/edit). It's not as easy as segmenting those databases/tables/records out from the existing ones (and putting separate database(s) in the DMZ where our server is). Our current solution is to have a 1433 hole from the web server into our database server. The user credentials are not in any sort of web.config but rather compiled into our DLLs, and that SQL login has read/write access to a very limited number of databases.
    Our security group says this is still not secure, but how else are we to do it? Even with a web service, there still has to be a hole somewhere. Is there any standard best practice for this?
    Thanks.

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a 'less trodden' path, since that is exactly the area where more security problems are discovered. I don't know about your specific design issues, so there might be even more ways to mitigate the risk, but in general you're using a DMZ as a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com
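    For what it's worth, here is a minimal T-SQL sketch of the "limited SQL login" idea the question describes, with least-privilege access to a single database (all names, the database, and the password are placeholders):

        -- Login used only by the web tier (placeholder name/password)
        CREATE LOGIN web_app WITH PASSWORD = 'ChangeMe#12345';

        USE CustomerPortal;  -- hypothetical application database
        CREATE USER web_app FOR LOGIN web_app;

        -- Grant only read/write on this database, nothing server-wide
        EXEC sp_addrolemember 'db_datareader', 'web_app';
        EXEC sp_addrolemember 'db_datawriter', 'web_app';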

  • Best Practice for Security Point-Multipoint 802.11a Bridge Connection

    I am trying to get the best practice for securing a point-to-multipoint wireless bridge link: point A to B, C, & D, and B, C, & D back to A. Which authentication and configuration available in the Aironet 1410 IOS are best? Thanks for your assistance.
    Greg

    The following document on the types of authentication available on 1400 should help you
    http://www.cisco.com/univercd/cc/td/doc/product/wireless/aero1400/br1410/brscg/p11auth.htm

  • Any known security best practices to follow for FMS deployment

    Hi all,
    We have recently deployed Flash Media Streaming Server 3.5.2 and Flash Media Encoder on a Windows 2003 machine. Do you know of any security best practices to follow for the FMS server deployment on a Windows machine? Could you please point me to that resource?

    Hi
    I will add some concepts. I am not sure how all of them work technically, but there should be enough here for you to dig deeper, and a lot of this depends on your environment and how you want to deploy it.
    I have done a 28-server deployment: 4 origin and 24 edge servers.
    On all the edge servers we disabled file and printer sharing in the TCP/IP properties. This is basically a way in for hackers, and we disabled it only on the edge servers because these are the ones presented to the public.
    We also only allowed ports 1935, 80, and 443 on our NICs, with protocol numbers 6 (TCP) and 17 (UDP). Definitely test out your TCP/IP port filtering until you are comfortable that all your connection types are working and secure.
    Use RTMPE over RTMP; it is there to be used, and I am surprised more people don't use it. As with any other encryption protocol, though, it may cause higher overhead on the resources of the servers holding the connections.
    You may want to look at SWF verification. In my understanding, it works as follows: you publish a SWF file on a website, and this is source code that your player uses for authentication. If you configure your edge servers to only listen for authentication requests from that SWF file, then you really lessen the hijacking possibilities on your streams.
    If you are doing encoding via FME then I would suggest that you download the authentication plugin that is available on the Flash Media Encoder download site.
    There are other things you can look at to make it more secure, like Adaptor.xml, using a front-end load balancer, HTML domains, SWF domains, firewalls, and DRM.
    I hope this helps you out.
    Roberto

  • Looking for Security Best Practices documentation for Sybase ASE 15.x

    Hello, I'm looking for SAP/Sybase best-practice documentation on security configuration for Sybase ASE 15.x. Something similar to this:
    Sybase ASE 15 Best Practices: Query Processing & Optimization White Paper-Technical: Database Management - Syba…
    Thanks!

    Hi David,
    This is something I found on the Sybase site:
    Database Encryption Design Considerations and Best Practices for ASE 15
    http://www.sybase.com/files/White_Papers/ASE-Database-Encryption-3pSS-011209-wp.pdf
    ASE Encryption Best Practices:
    http://www.sybase.com/files/Product_Overviews/ASE-Encryption-Best-Practices-11042008.pdf
    If these do not help, you can search for others at:
    www.sybase.com > search box on the top right.
    I searched for "best practices security".
    You can also run an advanced search; I typed "ssl" into the exact-phrase field.
    Hope this helps,
    Ryan
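    Until you find a dedicated security paper, here is a hedged sketch of the kind of server-level settings such documents usually cover, using standard ASE system procedures (option names assume ASE 15.x defaults; the login name is a placeholder, and auditing additionally requires the sybsecurity database to be installed):

        -- Require clients to use password encryption on login
        exec sp_configure "net password encryption reqd", 1

        -- Enforce a basic password policy
        exec sp_configure "minimum password length", 8
        exec sp_modifylogin "some_login", "passwd expiration", "90"

        -- Turn on auditing (only works once sybsecurity is installed)
        exec sp_configure "auditing", 1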

  • What are the best practices for the RCU's schemas

    Hi,
    I was wondering if there are best practices for the RCU schemas created with BIEE.
    I already have Discoverer (and Application Server), so I have a metadata repository (MR) for the Application Server. I will upgrade Discoverer 10g to 11g, so I will create new schemas with RCU in the MR database of the Application Server. I'm wondering if I can put the BIEE RCU schemas in the same database.
    Basically:
    1. Is there a standard for the PREFIX?
    2. If I have multiple Fusion components in the same database, will I have multiple PREFIX_MDS schemas? Can they share the same PREFIX, or do they all need different prefixes?
    For example: DISCO_MDS and BIEE_MDS, or can I have DEV_MDS with that schema valid for both Discoverer and BIEE?
    Thank you !

    What are the best practices for exception handling in n-tier applications?
    The application is a fat client based on the MVVM pattern with the .NET Framework.
    That would be to catch all exceptions at a single point in the n-tier solution, log them, and create user-friendly messages to display to the user.
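    Back to the RCU prefix question: one way to see which prefixed schemas already exist is to query the registry table that RCU maintains in the repository database. A hedged sketch, assuming you have DBA access and that RCU has already created SYSTEM.SCHEMA_VERSION_REGISTRY:

        -- Components installed by RCU, with their prefixed schema owners
        -- (e.g. DEV_MDS, BIEE_BIPLATFORM)
        SELECT comp_id, owner, version, status
        FROM   system.schema_version_registry
        ORDER  BY owner;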

  • Best practice for standard security role

    Hi, I'd like to know the best practice for using standard roles. Some people tell me that a standard role should never be used directly and that a copy must be made, with users assigned to the copy. But then, why would SAP bother creating the standard roles at all?

    They are provided as a template for you; you can copy them into a different namespace and make changes there before generating the profiles and authorizations.
    The reason you should use a copy is that SAP will also update the standard roles from time to time. If transactions in the standard menus change with SPs and upgrades, you will find the changes in transaction SU25.
    If you do a search on "standard AND roles" in the SDN, you will also find more detailed information and opinions on their use.
    Cheers,
    Julius

  • Best Practice for SSL in Apache/WL6.0SP1 configuration?

    What is the best practice for enabling SSL in an Apache/WL6.0SP1 configuration?
    Is it:
    Browser to Apache: HTTPS
    Apache to WL: HTTP
    or
    Browser to Apache: HTTPS
    Apache to WL: HTTPS
    The first approach seems more efficient (assuming that Apache and WL are both in a secure datacenter), but in that case, how does WL know that the browser requested HTTPS to begin with?
    Thanks
    Alain

    getScheme() should return HTTPS if the client is using HTTPS, or HTTP if it is using HTTP.
    Whether the plug-in uses HTTP or HTTPS when connecting to WebLogic is up to you, but regardless, the scheme used by the client will be passed on to WebLogic.
    Eric
    "Alain" <[email protected]> wrote in message
    news:[email protected]..
    How should we have the plug-in tell wls the client is using https?
    Should we have the plugin talk to wls in HTTP or HTTPS?
    Thanks
    Alain
    "Jong Lee" <[email protected]> wrote in message
    news:3b673bab$[email protected]..
    The Apache plugin tells WLS that the client is using HTTPS, and also passes on the client cert, if any.
    "Alain" <[email protected]> wrote:
    What is the best practice for eanbling SSL in an Apache/WL6.0SP1
    configuration?
    Is it:
    Browser to Apache: HTTPS
    Apache to WL: HTTP
    or
    Browser to Apache: HTTPS
    Apache to WL: HTTPS
    The first approach seems more efficient (assuming that Apache and WL
    are
    both in a secure datacenter), but in that case, how does WL know that
    the
    browser requested HTTPS to begin with?
    Thanks
    Alain

  • Best Practices for user ORACLE

    Hello,
    I have a few Linux servers with an ORACLE user.
    All the DBAs in the team connect to and work on the servers as the ORACLE user; they don't have separate accounts.
    I created an account for each DBA and would like them to use it.
    The problem is that I don't want to lock the ORACLE account, since I need it for installations, upgrades, etc., but I also don't want the DBA team to connect and work with the ORACLE user.
    What is the best practice for such a case?
    Thanks

    To install databases you don't need access to the ORACLE account.
    Also, installing 'a few databases every month' is fundamentally wrong, as your server will run out of resources; Oracle can host multiple schemas in one database.
    "One reason for example is that we have many shell scripts that user ORACLE is the owner of, and only user ORACLE has the privilege to execute them."
    Database Control in 10g and higher makes 'scripts' obsolete. And as long as you don't give write access to the dba group, there is nothing wrong with granting execute access.
    You now have a hybrid situation: they are allowed to interactively screw up 'your' databases, yet they aren't allowed to run 'your' scripts.
    Your security 'model' is in urgent need of revision!
    Sybrand Bakker
    Senior Oracle DBA
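    To illustrate the "multiple schemas in one database" point above, a minimal hedged sketch (user, password, and tablespace names are placeholders): rather than installing a new database per application, create a schema owner inside an existing one:

        -- New application schema inside an existing database
        CREATE USER app_owner IDENTIFIED BY "ChangeMe#1"
            DEFAULT TABLESPACE app_data
            QUOTA UNLIMITED ON app_data;

        -- Grant only what the schema owner actually needs
        GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW,
              CREATE SEQUENCE, CREATE PROCEDURE TO app_owner;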

  • Best practices for ARM - please help!!!

    Hi all,
    Can you please help with any pointers / links to documents describing best practices for who should be creating the GRC request in the ARM workflow below, in GRC 10.0?
    Create GRC request -> role approver -> risk manager -> security team
    The options are: end user / manager / functional super users / security team.
    End users and managers are not possible; we cannot train so many people. The functional team is refusing since it's a lot of work. Please help me with pointers to any best-practices documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken

  • Networking "best practice" for setting up a farm

    Hi all.
    We would like to set up an OracleVM farm, and I have a question about "best practice" for configuring the network. Some background:
    - The hardware I have is comprised of machines with 4 gig-eth NICs each.
    - The storage will be coming primarily from a backend NAS appliance (Netapp, FWIW).
    - We have already allocated a separate VLAN for management.
    - We would like to have HA-capable VMs using OCFS2 (on top of NFS).
    I'm trying to decide between two possible configurations. The first would keep physical separation between the mgmt/storage networks and the DomU networks. The second would just trunk everything together across all 4 NICs, something like:
    Config 1:
    - eth0 - management/cluster-interconnect
    - eth1 - storage
    - eth2/eth3 => bond0 - 8021q trunked, bonded interfaces for DomUs
    Config 2:
    - eth0/1/2/3 => bond0
    Do people have experience or recommendations about the best configuration?
    I'm attracted to the first option (perhaps naively) because CI/storage would benefit from dedicated bandwidth, and this configuration might also be more secure.
    Regards,
    Robert.

    user1070509 wrote:
    > Option #4 (802.3ad) looks promising, but I don't know if this can be made to work across separate switches.
    It can, if your switches support cross-switch trunking. Essentially, 802.3ad (also known as LACP, or EtherChannel on Cisco devices) requires your switch to be properly configured to allow trunking across the interfaces used for the bond. I know that the high-end Cisco and Juniper switches do support LACP across multiple switches. In the Cisco world, this is called MEC (Multichassis EtherChannel).
    If you're using low-end commodity-grade gear, you'll probably need to use active/passive bonds if you want to span switches. Alternatively, you could use one of the balance algorithms for some bandwidth increase. You'd have to run your own testing to determine which algorithm is best suited to your workload.
    The Linux Foundation's Net:Bonding article has some great information on bonding in general, particularly on the various bonding methods for high availability:
    http://www.linuxfoundation.org/en/Net:Bonding

  • Best practice for integrating oracle atg with external web service

    Hi All
    What is the best practice for integrating Oracle ATG with an external web service? Is it to use the Integration Repository, or to call the web service directly from a Java class using a WS client?
    With Thanks & Regards
    Abhishek

    Using the Integration Repository might cause performance overhead depending on the operation you are doing; I have never used the Integration Repository for third-party integration, so I am not able to comment on it.
    Calling the service directly from a Java client is an easy approach, and you can use the ATG component framework to support that by making the endpoint, security credentials, etc. configurable properties.
    Cheers
    R
