Best practice for external but secure access to internal data?

We need external customers/vendors/partners to access some of our company data (view/add/edit). It's not as easy as segmenting those databases/tables/records out from the rest and putting separate database(s) in the DMZ where our web server is. Our current solution is to open a hole on port 1433 from the web server into our database server. The user credentials are not in any sort of web.config but rather compiled into our DLLs, and that SQL login has read/write access to a very limited number of databases.
Our security group says this is still not secure, but how else are we to do it? Even with a web service, there still has to be a hole somewhere. Is there any standard best practice for this?
Thanks.

Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could use HTTP to talk to SQL Server instead, perhaps even get SOAP transactions working, but personally I'd have more worries about such a 'less trodden' path, since that is exactly where more security problems are discovered. I don't know your specific design constraints, so there may be more ways to mitigate the risk, but in general a DMZ is a decent way to mitigate it. I would recommend asking your security team what they'd deem acceptable.
http://pauliom.wordpress.com
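
Whatever transport you keep, two mitigations usually go a long way with security teams: a SQL login with the minimum rights needed, and credentials held outside the compiled code so they can be rotated without a redeploy. Below is a minimal sketch of the data-access side in Java/JDBC (the same shape applies in .NET); the Microsoft JDBC driver URL format is assumed, and the table, column, and environment-variable names are made up for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PartnerOrderDao {
        // Credentials come from the environment (or a vault), not from compiled
        // binaries, so rotating them does not require a rebuild and redeploy.
        private static final String URL  = System.getenv("PARTNER_DB_URL");
        // e.g. jdbc:sqlserver://dbhost:1433;databaseName=PartnerData;encrypt=true
        private static final String USER = System.getenv("PARTNER_DB_USER"); // least-privilege login
        private static final String PASS = System.getenv("PARTNER_DB_PASS");

        // Parameterized query: partner-supplied input never becomes SQL text,
        // which removes the injection risk through the 1433 hole.
        public String findOrderStatus(int orderId) throws Exception {
            try (Connection con = DriverManager.getConnection(URL, USER, PASS);
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT status FROM PartnerOrders WHERE order_id = ?")) {
                ps.setInt(1, orderId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("status") : null;
                }
            }
        }
    }

Combined with the DMZ and a firewall rule that allows only the web server to reach port 1433, this is a defensible baseline rather than a guarantee.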

Similar Messages

  • Best Practice for External Libraries, Shared Libraries and Web Dynpro

    Two blogs have been written on sharing libraries with Web Dynpro DCs, but I would
    like to know the best practice for doing this.
    External libraries seem to work great at compile time, but when deploying there is often an error related to the external library not being a deployed component.
    Is there a workaround for this besides creating a shared J2EE library, which I have been able to get working? I am not just interested in something that works, but in what the best practice actually is. What is the best way to limit the number of jars that need to be kept in a shared library/external library? When is a sharing reference to a service, etc. a valid approach versus hunting down the jars in the portal libraries and storing them in an external library?


  • Best Practice for report output of CRM Notes field data

    My company has a requirement to produce a report with variable output, based upon a keyword search of our CRM Request Notes data. Example: the business wants a report returning all Service Requests where the Notes field contains the word "pay", "payee" or "payment". As part of the report output, the business wants to freely select the output fields that accompany the notes data. Can anyone please advise on SAP's best practice for meeting a report requirement such as this? Is a custom ABAP application built? Does data get moved to BW for reporting (and how are notes handled)? Is data moved to a separate system?

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil

  • Office Web Apps - Best Practice for App Pool Security Account?

    Guys,
    I am finalising my testing of Office Web Apps, and am ready to move on to deploying it to my live farm.
    Generally speaking, I put service applications in their own application pool.
    Doing so obviously has an overhead in memory and processing; however, generally speaking it is best practice from a security perspective to use separate accounts.
    I have to create 3 new service applications in order to deploy Office Web Apps; in my test environment these are using the default SharePoint app pool.
    Should I create one application pool for all my Office Web Apps with a fresh service account, or does it make no odds from a security perspective to run them in the default app pool?
    Cheers,
    Conrad
    Conrad Goodman MCITP SA / MCTS: WSS3.0 + MOSS2007

    I run my OWA under its own service account (spOWA) and use only one app pool. Just remember that if you go this route, "When you create a new application pool, you can specify a security account used by the application pool to be either a predefined Network Service account or a managed account. The account must have db_datareader, db_datawriter, and execute permissions for the content databases and the SharePoint configuration database, and be assigned to the db_owner role for the content databases." (http://technet.microsoft.com/en-us/library/ff431687.aspx)

  • Best practices for Reusable methods that access DBTransaction object

    Hi All,
    In our application there are reusable methods that take a DBTransaction object as a parameter, e.g.:
    public static String postingToGL(DBTransaction dbTran, String pProcName, Number pDocId)
    This method could be called both from an entity object and from an application module (AM).
    I have two options for where to implement it:
    (1) put it in a Utils/helper class, so that I can call it both from the entity class and the AM, but then I have to pass the DBTransaction object as a parameter (I am not comfortable with this);
    or...
    (2) put it in the entity object base class, but then what if an application module also needs to call that method?
    What is the best practice in this situation?
    Thank you,
    xtanto

    Hi,
    what about putting it into an AM base class? I don't know what your worries are about passing the DBTransaction around.
    Frank
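
    For what it's worth, option (1) is a common pattern precisely because both EntityImpl and ApplicationModuleImpl expose getDBTransaction(), so one static helper serves both callers. A minimal sketch, assuming the ADF BC DBTransaction API; the PL/SQL call shape and the driver's mapping of oracle.jbo.domain.Number via setObject are assumptions:

        import java.sql.CallableStatement;
        import java.sql.SQLException;
        import java.sql.Types;
        import oracle.jbo.JboException;
        import oracle.jbo.domain.Number;
        import oracle.jbo.server.DBTransaction;

        public final class GlPostingUtils {
            private GlPostingUtils() { }

            // Runs a PL/SQL function on whatever transaction the caller is in.
            public static String postingToGL(DBTransaction dbTran, String procName, Number docId) {
                CallableStatement cs = null;
                try {
                    cs = dbTran.createCallableStatement(
                            "begin ? := " + procName + "(?); end;", 0);
                    cs.registerOutParameter(1, Types.VARCHAR);
                    cs.setObject(2, docId); // assumption: driver maps oracle.jbo.domain.Number
                    cs.execute();
                    return cs.getString(1);
                } catch (SQLException e) {
                    throw new JboException(e);
                } finally {
                    if (cs != null) {
                        try { cs.close(); } catch (SQLException ignore) { }
                    }
                }
            }
        }

    From an entity object you would call GlPostingUtils.postingToGL(getDBTransaction(), ...), and from an application module exactly the same; passing the transaction in is what keeps the helper free of any dependency on where it is called from.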

  • Best practice for external drive?

    A really basic set of questions, for which I ought to know the answers:
    with four external firewire drives connected, three of them daisy-chained to one FW800 port, the fourth to a second FW800 port, do I:
    a) when shutting down the computer, do I eject them first, or can I just shut down the computer and then shut down the drives?
    b) when ejecting the drives, is there a particular order (e.g., of the three that are daisy-chained, should the drive that is actually connected to the computer be the last to eject)?
    c) when sleeping the computer, is it all right to leave the drives spinning and let them sleep when they realize the computer is sleeping?
    d) when starting up, start the drives before turning on the computer? I've actually forgotten this, and started the drives after the computer was up and running, and it doesn't SEEM to have any ill effect, but am I doing something dangerous?

    OS X is pretty tolerant of whatever you do. Just don't turn them off or unplug them unless you either turn off the computer or eject the volume by dragging it to the Trash.
    So
    a) shut down the computer then shut down the drives: yes
    b) doesn't matter with my FW drives, but I suppose there could be some that will not pass the FW signals if it is shut off. I believe if they are built correctly they should work like mine and the FW communication will pass through the devices that are powered down.
    c) OS X should be in control of drive sleep. Some HD manufacturers have firmware that conflicts with OS X and insist upon sleeping the drives when they ought not to, or vice versa. When sleeping the computer, just ignore the drives. Do not shut them off though or else OS X will scold you for having removed them unexpectedly. They should sleep on their own, but if their firmware insists otherwise, there is nothing you can do about it.
    d) This doesn't matter at all, unless (obviously) you need to boot the computer from one of the external volumes.
    Read about OS X file system journaling - it should ease your mind about potential corruption: http://support.apple.com/kb/HT2355

  • What is the best practice for External Repairing

    Hi Experts,
    When I went through the thread search I found there are so many ways to repair equipment externally. Could you please tell me which is the best one (subcontracting / through a refurbishment order)?
    AR
    Edited by: Amit  Rana on Mar 17, 2010 5:46 PM

    I had a question recently that sounds something like yours. I did some research and here is what I gathered.
    You have to look at what is needed. Do you want to monitor the status and location of the faulty equipment while it is out of your company? If all you need is to track the repair cost, you can do it directly from the original maintenance order: just create a PR, convert it to a PO, receive the service, and it's done. This is a pretty simple process to begin with.
    If you want to monitor the equipment while it is out of your company, then you can do it the refurbishment (subcontracting) way. In this method, you treat the refurbishment work like a subcontracting job where you buy the repair service while providing the faulty equipment as material. If you are doing this, there are some prerequisites:
    1. ideally your material should have split valuation to reflect the different valuation at different condition.
    2. Your material is serialized.
    3. I am no MM expert, but your material may need to be marked as a subcontracting material (if there is any MM expert here, please help to explain this).
    4. you use a normal work order and not a refurbishment order.
    The process would be:
    1. Create a normal work order with external activity (of type PM02).  PR will be created.
    2. Create PO from the PR.
    3. GI the faulty part to the vendor against the PO.
    4. GR the refurbished part.
    5. Close the order (the usual TECO, settlement + Biz close).
    With this subcontracting method, you can monitor the faulty equipment's status via the subcontracting monitor (transaction ADSUBCON).
    The reason you don't use a refurbishment order is that with a refurbishment order you GI the faulty parts to the refurbishment order itself, and then you can't monitor the status via ADSUBCON.
    There is also a simpler way to go around this. You can use a refurbishment order; fellow forumer Pete suggests updating the user status of the equipment master, picking an unused field to enter the vendor who is doing the repair.
    Hope this helps.

  • Best Practices for Service Entry Sheet Approval

    Hi All
    I'd just like to get some opinions on best practices for external service management, particularly the approval process for Service Entry Sheets.
    We have a 2 step approval process using workflow:
    1. Entry Sheet created (blocked)
    2. Workflow to the requisition creator to verify/unblock the Entry Sheet
    3. Workflow to the Cost Object owner to approve the Entry Sheet
    For high-volume users (e.g. capital projects) this is a cumbersome process; we are looking to streamline it while still maintaining control.
    What do other leaders do in this area? To me, mass release seems to lack control, but perhaps by using a good release strategy we could find a middle ground?
    Any ideas or experiences would be greatly appreciated.
    thanks
    AC.

    Hi,
    You can have the purchasing group (OME4) represent the department and link cost centers to departments (KS02). Use the user exit for service entry sheet release, with two characteristics for the release: one for value (CESSR-LWERT) and another for department (CESSR-USRC1). Create one release class for service entry sheet release and add the value characteristic (CESSR-LWERT) and the department characteristic (CESSR-USRC1). You can then design release strategies for service entry sheets based on department and value, so that an SES is created and then released by users whose release code is based on the department and value assigned to them.
    Regards,
    Biju K

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top-level service that is defined as the business process is itself secured through OWSM with usernames and passwords, but what is the best practice for establishing security for each of the low-level services?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it performs any business-critical operation then it should be secured.
    The idea of SOA / designing services is to have the services available so that it can be orchestrated as part of any other business process.
    Today you may have secured your parent services and tomorrow you could come up with a new service which may use one of the existing lower level services.
    If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
    The typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the App server only from Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Best practice for select access to users

    Not sure if this is the correct forum to post in; if not, let me know where I should post.
    From my understanding this is the best forum to ask these questions.
    Are you aware of any "best practice document" for granting select access to users on databases? These users are developers who select data out of the database for investigation and application bug fixes.
    From time to time users want more and more access to different tables so that they can investigate properly.
    Let me know if there is a best practice document around this space.
    Asked in this forum as this is related to PL/SQL access.

    Welcome to the forum!
    Whenever you post, provide your 4-digit Oracle version.
    >
    Are you aware of any "best practice document" for granting select access to users on databases? These users are developers who select data out of the database for investigation and application bug fixes.
    From time to time users want more and more access to different tables so that they can investigate properly.
    Let me know if there exists a best practice document around this space.
    >
    There are many best practices documents about various aspects of security for Oracle DBs, but none are specific to developers doing investigation.
    Here is the main page for Oracle's OPAC white papers about security.
    http://www.oracletechnetwork-ap.com/topics/201207-Security/resources_whitepaper.cfm
    Take a look at the ones on 'Oracle Identity Management' and on 'Developers and Identity Services'.
    http://www.dbspecialists.com/files/presentations/implementing_oracle_11g_enterprise_user_security.pdf
    This paper by Database Specialists shows how to use Oracle Identity Management to limit access to users such as developers through the use of roles. It shows some examples of users using their own account but having limited privileges based on the role they are given.
    And this Oracle White Paper, 'Oracle Database Security Checklist', is a more basic security doc that discusses the entire range of security issues that should be considered for an Oracle Database.
    http://www.oracle.com/technetwork/database/security/twp-security-checklist-database-1-132870.pdf
    You don't mention what environment (PROD/QA/TEST/DEV) you are even talking about or whether the access is to deal with emergency issues or general reproduction and fixing of bugs.
    Many sites create special READONLY roles, e.g. READ_ONLY_APP1, and then grant privileges to those roles for the tables/objects that the application uses. That role can then be granted to users who need privileges for that application and revoked when they no longer need it.
    Some sites prefer creating special READONLY users that have those read-only roles. If a user needs access, the DBA changes the password and provides the account info to the user. When the user has completed their duties, the DBA resets the password to something no one else knows.
    Those special users have auditing on them and the user using them is responsible for all activity recorded in the logs during the time the user has access to that account.
    In general you grant the minimum privileges needed and revoke them when they are no longer needed; generally through the use of roles.
    >
    Asked in this forum as this is related to PL/SQL access.
    >
    Please explain that. Your question was about 'access to different tables'. How does PL/SQL access fit into that?
    The important reason for the difference is that access is easily controlled through the use of roles, but in named PL/SQL blocks roles are disabled. So the special roles and accounts mentioned above are well suited to letting developers query data, but are not well suited if the user needs to execute PL/SQL code belonging to another schema (the app schema).
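
    To make the role approach concrete, here is a minimal sketch of the DBA side as a JDBC script; the role, schema, table, and user names are made up, and in practice you would run the same statements from SQL*Plus:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Hypothetical one-off admin script: a read-only role per application,
        // granted to a developer for an investigation and revoked afterwards.
        public class CreateReadOnlyRole {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                            "jdbc:oracle:thin:@//dbhost:1521/ORCL",
                            "system", System.getenv("SYSTEM_PWD"));
                     Statement st = con.createStatement()) {
                    st.execute("CREATE ROLE read_only_app1");
                    // One grant per table the application owns; repeat as needed.
                    st.execute("GRANT SELECT ON app1.orders TO read_only_app1");
                    st.execute("GRANT SELECT ON app1.customers TO read_only_app1");
                    // Hand the role to a developer for the investigation...
                    st.execute("GRANT read_only_app1 TO dev_jsmith");
                    // ...and revoke it when the work is done:
                    // st.execute("REVOKE read_only_app1 FROM dev_jsmith");
                }
            }
        }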

  • Best practice for integrating Oracle ATG with an external web service

    Hi All
    What is the best practice for integrating Oracle ATG with an external web service? Is it using the integration repository, or calling the web service directly from a Java class using a WS client?
    With Thanks & Regards
    Abhishek

    Using the Integration Repository might cause performance overhead depending on the operation you are doing; I have never used the Integration Repository for third-party integration, so I am not able to comment on this.
    Calling the service directly from a Java client is an easy approach, and you can use the ATG component framework to support it by making the endpoint, security credentials, etc. configurable properties.
    Cheers
    R
    Edited by: Rajeev_R on Apr 29, 2013 3:49 AM
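
    To illustrate the second approach, here is a minimal sketch of such a client component; the class, property, and endpoint names are made up, and the point is only that Nucleus injects the environment-specific values from a .properties file rather than having them compiled in:

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        // Hypothetical Nucleus component: endpoint and credentials are bean
        // properties, so each environment configures them in e.g.
        // /mycompany/integration/PartnerClient.properties instead of in code.
        public class PartnerClient {
            private String endpointUrl;
            private String username;
            private String password;

            public void setEndpointUrl(String url) { this.endpointUrl = url; }
            public void setUsername(String u) { this.username = u; }
            public void setPassword(String p) { this.password = p; }

            public String call(String requestXml) throws Exception {
                HttpURLConnection con =
                        (HttpURLConnection) new URL(endpointUrl).openConnection();
                con.setRequestMethod("POST");
                con.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
                con.setRequestProperty("Authorization", "Basic " + Base64.getEncoder()
                        .encodeToString((username + ":" + password)
                                .getBytes(StandardCharsets.UTF_8)));
                con.setDoOutput(true);
                try (OutputStream out = con.getOutputStream()) {
                    out.write(requestXml.getBytes(StandardCharsets.UTF_8));
                }
                try (InputStream in = con.getInputStream()) {
                    return new String(in.readAllBytes(), StandardCharsets.UTF_8);
                }
            }
        }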

  • Best Practices for Accessing Configuration Data Modelled as an XML File in OSB

    Hi,
    I have read a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is
    Re: OSB: What is best practice for reading configuration information
    Another could be
    uploading the XML data as a .xq file (creating a .xq file and copy-pasting all the configuration as XML).
    I need expert answers for the following.
    1] I have a .xsd file representing the configuration data. The structure of the XSD is:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment.
    For Dev:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration file specific to the dev/stage/prod instance:
    OSB Project Folder
    |--- Dev
    |    |-- Dev_Config_file.xml
    |--- Stage
    |    |-- Stage_Config_file.xml
    |--- Prod
    |    |-- Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need a way to look up/model a value specifying the current server type (dev/stage/prod) on which the OSB message flow is running; say, some construct that acts as global configuration and is accessible inside the message flow. If the value of that global variable is Dev, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] This Re: OSB: What is best practice for reading configuration information
    suggests designing a web application that serves the XML file over HTTP and reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating an extra project and adding the dependencies? I read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    I hope I am clear. I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    E.g. with http://osb_host/service/prod/service1 for prod and http://osb_host/service/stage/service1 for stage, $inbound/ctx:transport/ctx:uri should give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
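
    For illustration only, the same extraction written in plain Java (inside OSB it would be the XQuery/XPath above); the URI shape /service/<env>/<service> is the assumed naming convention:

        public class EnvFromUri {
            // "/service/stage/service1".split("/") -> ["", "service", "stage", "service1"]
            public static String environment(String uri) {
                String[] segments = uri.split("/");
                return segments.length > 2 ? segments[2] : "unknown";
            }

            public static void main(String[] args) {
                System.out.println(environment("/service/stage/service1")); // prints "stage"
            }
        }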

  • Best Practices for Data Access

    Good morning!
    I was wondering if someone might give me some advice on some best practices for retrieving data from a SQL server in the cloud via a desktop application?
    I'm curious, if I embed the server address (IP, or domain, or whatever) into my desktop application and let users provide their own usernames and passwords when using the application, is there anything "wrong" with that? Wherein my application collects the username and password from the user, connects to the server with that username and password, retrieves the data, and uses it in-app.
    I'm petrified of security issues and I would hate to start using a SQL database with this setup only to find out that anyone could download x, y or z and connect to the database and see everything.
    Assuming I secure all of the users with limited permissions, is there anything wrong with exposing a SQL server to the web for my application to use? If so, what is wrong, and what would be a reasonable alternative?
    I really appreciate any help and feedback!

    There are two options, none of them very palatable:
    1) One is to create a domain, and add the VM and your local box to it.
    2) Stick to a workgroup, but have the same user name and password on both machines.
    In practice, a better option is to create an SQL login that is a member of sysadmin, or that has rights to impersonate an account that is a member of sysadmin. For that matter, you could use the built-in sa account, but rename it to something else.
    The other day I was looking at the error log of a server that apparently had been exposed on the net. The log was full of failed login attempts for sa, with occasional attempts for names like usera and so on. The server is in Sweden; the IP addresses for the login attempts were in China.
    Just so you know what you can expect.
    Erland Sommarskog, SQL Server MVP, [email protected]
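
    If you do go ahead, two habits limit the damage of an exposed server: force an encrypted connection and keep each user's login down to the minimum rights. A minimal sketch in Java/JDBC, assuming the Microsoft JDBC driver; the host, database, table, and column names are made up:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class CloudDbSession {
            // Connects with the credentials the user typed in; encrypt=true forces
            // TLS so credentials and data are not readable on the wire.
            public static Connection open(String user, char[] password) throws Exception {
                String url = "jdbc:sqlserver://db.example.com:1433;"
                           + "databaseName=AppDb;encrypt=true;trustServerCertificate=false";
                return DriverManager.getConnection(url, user, new String(password));
            }

            // Parameterized query, so a hostile client cannot smuggle SQL in.
            public static String lookupCustomer(Connection con, int id) throws Exception {
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT name FROM dbo.Customers WHERE customer_id = ?")) {
                    ps.setInt(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() ? rs.getString(1) : null;
                    }
                }
            }
        }

    Even so, the server allows whatever those credentials allow, so the permissions on each SQL login, not the application, are the real security boundary here.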

  • Best Practice for Security Point-Multipoint 802.11a Bridge Connection

    I am trying to determine the best practice for securing a point-to-multipoint wireless bridge link: point A to B, C, and D, and B, C, and D back to A. What authentication and configuration available in the Aironet 1410 IOS are best? Thanks for your assistance.
    Greg

    The following document on the types of authentication available on the 1400 should help you:
    http://www.cisco.com/univercd/cc/td/doc/product/wireless/aero1400/br1410/brscg/p11auth.htm

  • Best practice for how to access a set of wsdl and xsd files

    I've recently been poking around with the Oracle ESB, which requires a bunch of wsdl and xsd files from HOME/bpel/system/xmllib. What is the best practice for including these files in a BPEL project? It seems like a bad idea to copy all these files into every project that uses the ESB, especially if there are quite a few consumers of the bus. Is there a way I can reference this directory from the project so that the files can just stay in a common place for all the projects that use them?
    Bret

    Hi,
    I created a project (JDeveloper) with local xsd files and tried to delete them and recreate them in the structure pane as references to a version on the application server. After reopening the project I deployed it successfully to the BPEL server. The process is working fine, but in the structure pane there is no information about any of the xsds anymore, and on the payload in the variables there is an exception (problem building schema).
    How does BPEL know where to look for the xsd files, and how does the mapping still work?
    This cannot be the way to do it correctly. Do I have a chance to rework an existing project, or do I have to rebuild it from scratch in order to have all the references right?
    Thanks for any clue.
    Bette
