Best practices for securely storing environment properties

Hi All,
We have a legacy security module that is included in many
different applications. Historically, the settings (such as
database/LDAP username and password) were stored directly in the
files that use them. I'm trying to move towards a more centralized
and secure method of storing this information, but need some help.
First of all, I'm struggling a little bit with proper scoping
of these variables. If another application does a cfinclude on one
of the assets in this module, these environment settings must be
visible to the asset, but preferably not visible to the 'calling'
application.
Second, I'm struggling with the proper way to initialize these
settings. If other applications run a cfinclude on these assets,
the Application.cfm in the local directory of the script that's
included does not get processed. I'm left with running an include
statement in every file, which I would prefer to avoid if at all
possible.
There are a ton (>50) of applications using this code, so I
can't really change the external interface. Should I create a
component that returns the private settings and then set the
'public' settings in the Server scope? Right now I'm using
the Application scope for everything because of a basic
misunderstanding of how Application.cfm files are processed, and
that's a mess.
We're on ColdFusion 7.
Thanks!


Similar Messages

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top-level service that is defined as the business process is itself secured through OWSM with usernames and passwords, but what is the best practice for establishing security for each low-level service?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it is performing any business-critical operation then it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which may use one of the existing lower-level services.
    If all the services are in one application server you can make the configuration/development environment a lot easier by securing them using the Gateway.
    The typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the Gateway.
    When you have the liberty to use OWSM or any other WS-Security products, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram
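    The network-layer restriction mentioned above could be sketched with iptables. This is only an illustration of the idea, not a tested ruleset: the gateway IP (10.0.0.5) and the app server port (7777) are assumptions; substitute your own values.

    ```
    # Allow only the Gateway host (assumed 10.0.0.5) to reach the app server port (assumed 7777)
    iptables -A INPUT -p tcp -s 10.0.0.5 --dport 7777 -j ACCEPT
    # Drop direct access to the same port from anywhere else
    iptables -A INPUT -p tcp --dport 7777 -j DROP
    ```

    With rules like these in place, a client bypassing the Gateway cannot reach the services directly, which closes the gap described above.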

  • Best Practice for Security Point-Multipoint 802.11a Bridge Connection

    I am trying to get the best practice for securing a point-to-multipoint wireless bridge link. Link point A to B, C, & D; and B, C, & D back to A. Which authentication and configuration included in the Aironet 1410 IOS is best? Thanks for your assistance.
    Greg

    The following document on the types of authentication available on 1400 should help you
    http://www.cisco.com/univercd/cc/td/doc/product/wireless/aero1400/br1410/brscg/p11auth.htm

  • Best practice for the test environment & DBA plan activities documents

    Dear all,
    In our company, we sized the hardware.
    We have three environments (Test/Development, Training, Production).
    However, the test environment has fewer servers than the production environment.
    My question is:
    What is the best practice for the test environment?
    (Are there any recommendations from Oracle related to this? Any PDF files would help me.)
    Also, can I please have a detailed document regarding the DBA plan activities?
    I appreciate your help and advice.
    Thanks
    Edited by: user4520487 on Mar 3, 2009 11:08 PM

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Best practice for calling stored procedures as target

    The scenario is this:
    1) Source is from a file or oracle table
    2) Target will always be oracle pl/sql stored procedures which do the insert or update (APIs).
    3) Each failure from the stored procedure must log an error so the user can re-submit the corrected file for those error records
    There is no option to create an E$ table, since there is no control option for the flow around procedures.
    Is there a best practice around moving data into Oracle via procedures? In Oracle EBS, many of the interfaces are pure stored procs and not batch interface tables. I am concerned that I must build dozens of custom error tables around these apis. Then it feels like it would be easier to just write pl/sql batch jobs and schedule with concurrent manager in EBS (skip ODI completely). In that case, one could write to the concurrent manager log and the user could view the errors and correct.
    I can get a simple procedure to work in ODI where the source is the SQL, and the target is the pl/sql call to the stored proc in the database. It loops through every row in the sql source and calls the pl/sql code.
    But I cannot see how to mark which rows have failed, or which table would log the errors to begin with.
    Thank you,
    Erik

    Hi Erik,
    Please take a look at these posts:
    http://odiexperts.com/?p=666
    http://odiexperts.com/?p=742
    They could help you solve your problem.
    I already used them to call Oracle EBS APIs and they worked pretty well.
    I believe that an IKM could be built to automate all the work, but I never stopped to try...
    Does it help you?
    Cezar Santos
    http://odiexperts.com

  • Best practice for securing confidential legal documents in DMS?

    We have a requirement to store confidential legal documents in DMS and are looking at options to secure access to those documents. We are curious to know: what is the best practice? And how are other companies doing it?
    TIA,
    Margie
    Perrigo Co.

    Hi,
    The standard practice for such scenarios is to use the 'authorization' concept. You can give every user an authorization to create, change, or display these confidential documents. In this way, you can control access authorization. The SAP DMS system monitors how you work, and prevents you from displaying or changing originals if you do not have the required authorization.
    The link below will provide you with an improved understanding of the authorization concept and its application in DMS:
    http://help.sap.com/erp2005_ehp_04/helpdata/en/c1/1c24ac43c711d1893e0000e8323c4f/frameset.htm
    Regards,
    Pradeepkumar Haragoldavar

  • "Best Practice" for a stored procedure that needs to access two schemas?

    Greetings all,
    When my company's application is deployed, two schema owners are typically created and all database objects divided between the two. I'll call them FIRST and SECOND.
    In a standard, vanilla implementation there is never any reason for the two to "talk to each other". No rights to objects in one schema are ever granted to the other.
    I am currently charged, however, with writing custom code to roll up data from one of the schemas and update tables in the other with the rollups. I have created a user whose job it is to run this process, and this user has the proper permissions to all necessary objects in both schemas. I'll call this user MRBATCH.
    Typically, any custom objects, whether they be additional staging tables, temp tables or stored procedures, are saved in the FIRST schema. I tried to save this new stored procedure in the FIRST schema and compile it, but got "Insufficient privileges" errors whenever the code in the stored procedure tried to access any tables in the SECOND schema. This surprised me a little bit because I had no plans to actually EXECUTE the stored procedure as FIRST, but I guess I can understand it from the point of view that you ought to be able to execute something you own.
    So which would be "better" (assuming there's any difference): grant FIRST all of the rights it needs in SECOND and save the stored procedure in FIRST, or could I just save the stored procedure in the MRBATCH schema? I'm not sure which would be "better practice".
    Is there a third option I'm overlooking perhaps?
    Thanks
    Joe

    In this case I would put it into a third schema, THIRD. This is a kind of API schema. It holds procedures that provide some customized functionality. And since you grant only the right to execute those procedures (they should be packages, of course), you won't get into any conflicts about allowing somebody too much.
    Note that this suggestion seems very similar to putting the procedure directly into the executing user MRBATCH. It depends how this schema user is used. I always prefer separating users from schemas.
    By definition, the Oracle object that represents a schema is identical to the Oracle object representing a user (exception: externally defined users).
    My definition is:
    Schema => has objects (tables, packages) and uses tablespace
    User => has privileges (including create session and connect) and uses temp tablespace only. Might have synonyms and views.
    You can mix both, but sometimes it makes much sense to separate one from the other.
    Edited by: Sven W. on Aug 13, 2009 9:51 AM

  • Best Practice For Secure File Sharing?

    I'm a newbie to both OS X Server and file sharing protocols, so please excuse my ignorance...
    My client would like to share folders in the most secure way possible; I was considering that what might be the best way would be for them to VPN into the server and then view the files through the VPN tunnel; my only issue with this is that I have no idea how to open up File Sharing to ONLY allow users who are connecting from the VPN (i.e. from inside of the internal network)... I don't see any options in Server Admin to restrict users in that way....
    I'm not afraid of the command line, FYI, I just don't know if this is:
    1. Possible!
    And 2. The best way to ensure secure AND encrypted file sharing via the server...
    Thanks for any suggestions!

    my only issue with this is that I have no idea how to open up File Sharing to ONLY allow users who are connecting from the VPN
    Simple - don't expose your server to the outside world.
    As long as you're running on a NAT network behind some firewall or router that's filtering traffic, no external traffic can get to your server unless you setup port forwarding - this is the method used to run, say, a public web server where you tell the router/firewall to allow incoming traffic on port 80 to get to your server.
    If you don't setup any port forwarding, no external traffic can get in.
    There are additional steps you can take - such as running the software firewall built into Mac OS X to tell it to only accept network connections from the local network, but that's not necessary in most cases.
    And 2. The best way to ensure secure AND encrypted file sharing via the server...
    VPN should take care of most of your concerns - at least as far as the file server is concerned. I'd be more worried about what happens to the files once they leave the network - for example have you ensured that the remote user's local system is sufficiently secured so that no one can get the documents off his machine once they're downloaded?

  • Best practice for secure zone various access

    I am setting up a new site with a secure zone.
    There will be a secure zone. Once logged in, users will have access to search and browse medical articles/resources
    This is how an example may go:
    The admin user signs up Doctor XYZ to the secure zone.
    The Doctor XYZ is a heart specialist, so he only gets access to web app items that are classified as "heart".
    However, he may also be given access to other items, eg: "lung" items.
    Or, even all items. It will vary from user to user.
    Is there any way to separate areas within the secure zone and give access to those separate areas (without having to give access to individual items - which will be a pain because there will be hundreds of records; and also without having the user log out and log into another secure area)


  • Best Practices for securing VTY lines?

    Hi all,
    The thread title makes this sound like a big post, but it's not.
    If my router has, say, 193 VTY lines as a maximum, but by default the running-config has only a portion of those mentioned, should I apply any configs I do to all lines, or just to the lines sh run shows? Example:
    sh run on a router I have with the default config shows:
    line vty 0 4
    access-class 23 in
    privilege level 15
    login local
    transport input telnet ssh
    line vty 5 15
    access-class 23 in
    privilege level 15
    login local
    transport input telnet ssh
    Yet, I have the option of configuring up to 193 VTY lines:
    Router(config)#line vty ?
      <0-193>  First Line number
    It seems lines 16-193 still exist in memory, so my concern is that they are potentially exposed somehow to exploits or what not. So my practice is to do any configs using "line vty 0 193" to ensure universal configuration. But, by "enabling" the extra lines, am I using more memory? And how secure is this against somebody trying to, say, connect 193 times to my router simultaneously? Does it increase the likelihood of success of a DoS attack, for example?

    Hi guys, thanks for the replies and excellent information. I'm excited to look at the IOS Hardening doc and the other stuff too.
    Just to clarify, I don't actually use the default config; I only pasted it from a new router to illustrate the default VTY line count.
    I never use telnet from inside or outside; anything snooping a line will pick up the cleartext, as you both know of course. SSH is always version 2, etc.
    I was considering making a console server on the inside the only access method - which I do have set up, but I have to remote to it. It's just that with power outages at times, the console PC won't come back up (no BIOS setting to return to previous state, no WOL solution in place), so now I have both that plus the SSH access. I have an ACL on both the VTY lines themselves as well as a ZBFW ACL governing SSH - perhaps a bit redundant in some ways, but oh well; if there's a zero-day out there for turning off the ZBFW I might still be protected.
    Regretfully I haven't learned about AAA yet - that I believe is in my CCNA Security book, but first I need to get other things learned.
    And with regard to logging in general, both enabling the right kind and monitoring it properly, that's a subject I need to work on big time. I still get port 25 outbound sometimes from a spam bot, but by the time I manually do my sh logging | i :25 I have missed it (due to cyclic logging with a buffer at 102400). Probably this would be part of that CCNA Security book as well.
    So back to the number of VTY lines. I will see what I can do to reduce the line count. I suppose something like "no line vty 16 193" might work; if not, it'll take some research.
    But if an attacker wants to jam up my VTY lines so I can't connect in, once they've fingerprinted the unit a bit to find out that I don't have an IPS running for example, wouldn't it be better that they have to jam up 193 lines simultaneously (with, I presume, 193 source IPs) instead of 16? Or am I just theorizing too much here? It's not that this matters much; anybody who cares enough to hack this router will get a surprise when they find out there's nothing worth the effort on the other side. But this is more so I can be better armed for future deployments. Anyway, I will bookmark the info from this thread and am looking forward to reading it.
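    A minimal VTY-hardening sketch along the lines discussed in this thread might look like the following. The access-list number 23 is taken from the pasted config; the exec-timeout value is an assumption for illustration, and exact syntax can vary by IOS version:

    ```
    line vty 0 193
     access-class 23 in
     transport input ssh
     exec-timeout 5 0
     login local
    ```

    Applying the config to the full "0 193" range, as the poster describes, ensures any line that can be instantiated carries the same restrictions.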

  • Best practice for setting an environment variable used during NW AS startup

    We have installed some code which is running in both the ABAP and JAVA environment, and some functionality of this code is determined by the setting of operating system environment variables. We have therefore changed the .sapenv_<host>.csh and .sapenv_<host>.sh scripts found in the <sid>adm user home directory. This works, but we are wondering what happens when SAP is upgraded, and if these custom shell script changes to the .sh and .csh scripts will be overwritten during such an upgrade. Is there a better way to set environment variables so they can be used by the SAP server software when it has been started from <sid>adm user ?

    Hi,
    Thank you. I was concerned that there might be a case where the .profile is not used, e.g. when a non-interactive process is started; I was not sure if .profile is used then.
    What do you mean by non-interactive?
    If you log in to your machine as sidadm, the profile is invoked using one of the files you mentioned. So when you start your engine, the environment is properly set. If another process is spawned or forked from a running process, it inherits / uses the same environment.
    Also, on one of my servers I have a .profile a .login and also a .cshrc file. Do I need to update all of these ?
    The .profile is used by bash and ksh.
    The .cshrc is used by csh, and it is included via source on every shell startup if not invoked with the -f flag.
    The .login is also used by csh, and it is included via source from the .cshrc.
    So if you want to support all shells, you should update the .profile (bash and ksh) and one of .cshrc or .login for csh or tcsh.
    In my /etc/passwd the <sid>adm user is configured with /bin/csh shell, so I think this means my .cshrc will be used and not the .profile ? Is this correct ?
    Yes correct, as described above!
    Hope this helps
    Cheers
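    To cover both shell families as described above, the same variable has to be set in Bourne-shell syntax and csh syntax. A small sketch, using the hypothetical variable name MYAPP_MODE (your actual variable name and value will differ):

    ```shell
    # Bourne-family syntax, e.g. in ~/.profile (bash/ksh):
    MYAPP_MODE=enabled
    export MYAPP_MODE

    # The csh/tcsh equivalent for ~/.cshrc would be:
    #   setenv MYAPP_MODE enabled

    # Any process spawned from this shell inherits the variable
    echo "$MYAPP_MODE"
    ```

    Keeping the value in one central file and sourcing it from both ~/.profile and ~/.cshrc (with per-shell syntax wrappers) avoids the two copies drifting apart.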

  • Best Practices for Securing Oracle e-Business Suite -Metalink Note 189367.1

    OK, we have reviewed our financials setup against the titled Metalink document. But we want to focus on security and configuration specific to the Accounts Payable module of Oracle Financials. Can you point me in the direction of any useful documents for this, or give me some pointers?


  • Best Practices for Patching RDS Environment Computers

    Our manager has tasked us with creating a process for patching our RDS environment computers with no disruption to users if possible. This is our environment:
    2 Brokers configured in HA Active/Active Broker mode
    2 Web Access servers load balanced with a virtual IP
    2 Gateway servers load balanced with a virtual IP
    3 session collections, each with 2 hosts each
    Patching handled through Configuration Manager
    Our biggest concern is the gateway/hosts. We do not want to terminate existing off campus connections when patching. Are there any ways to ensure users are not using a particular host or gateway when the patch is applied?
    Any real world ideas or experience to share would be appreciated.
    Thanks,
    Bryan

    Hi,
    Thank you for posting in Windows Server Forum.
    As per my research, you can create a script for patching the servers, and you have 2 servers for each role. If these are primary and backup servers respectively, then you can update each server separately and divert the traffic to the other server. After completing this for one server, you can perform the same steps for the other server, because as far as I know the server needs to be restarted once for the patching update to apply successfully.
    Hope it helps!
    Thanks.
    Dharmesh Solanki
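    For the session hosts specifically, one way to avoid cutting off existing connections (a sketch, not a tested procedure; the broker and host names below are placeholders) is to put a host into drain mode with the RemoteDesktop PowerShell module, wait for sessions to end, patch and reboot, then move to the next host:

    ```powershell
    # Stop new connections to the host; existing sessions keep running
    Set-RDSessionHost -SessionHost "rdsh01.contoso.com" -NewConnectionAllowed NotUntilReboot -ConnectionBroker "broker01.contoso.com"

    # Check for remaining user sessions on that host before patching
    Get-RDUserSession -ConnectionBroker "broker01.contoso.com" | Where-Object HostServer -eq "rdsh01.contoso.com"
    ```

    With NotUntilReboot, the host automatically accepts connections again after the post-patch restart, so the collection's other host absorbs users in the meantime.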

  • Best practices for securing communication to internet based SCCM clients ?

    What type of SSL certs does the community think should be used to secure traffic from internet-based SCCM clients? Should 3rd-party SSL certs be used? When doing an inventory, for example, of a client's configuration in order to run reports later, how can the data be protected during transit?

    From a technical perspective, it doesn't matter where the certs come from, as there is no difference whatsoever. A cert is a cert is a cert. The certs are *not* what provide the protection; they simply enable the use of SSL to protect the data in transit and also provide an authentication mechanism.
    From a logistics and cost perspective, though, there is a huge difference. You may not be aware, but *every* client in IBCM requires its own unique client authentication certificate. This will get very expensive very quickly, and it is a recurring cost because certs expire (most commercial cert vendors rarely offer certs valid for more than 3 years). Also, deploying certs from a 3rd party is not a trivial endeavor -- you more or less run into chicken-and-egg issues here. With an internal Microsoft PKI, if designed properly, there is zero recurring cost and deployment to internal systems is trivial. There is still certainly some cost and overhead involved, but it is dwarfed by what comes with using a third-party CA for IBCM certs.
    Jason | http://blog.configmgrftw.com | @jasonsandys

  • Best practices for VTP / VLAN environment

    Hi,
    We currently have 1 VTP domain where all our network devices are configured.
    We now want to join another VTP domain to this domain, and I wonder what the best approach to do this will be.
    1. I can configure all the VLAN IDs on my own VTP server (there are no overlapping IDs) and configure the devices from the other domain as clients (but what happens to the VLAN configurations made on the old VTP server?).
    2. Connect our 2 networks.
    Or is it better to change only the VTP domain on the other VTP server (to the same as ours now) and then connect the networks together? (What will happen to the VTP/VLAN configuration of both servers - will they be merged, or will the server with the highest revision number just copy its database to the other server and probably delete the current VTP/VLAN configuration?)
    Is it good to have 2 VTP servers?

    I would like to warn you!
    As soon as the VTP server you want to get rid of is moved into the other domain, its config revision number will be 0 (the change of domain name resets the config revision number). Its VLAN database will then be overwritten by the VTP server of the new domain, and all the VLANs of the old VTP domain will be lost.
    I would proceed in this way (VTP A is the VTP you want to keep, VTP B is the VTP you want to merge into VTP A):
    1) configure all the VLANs of VTP B in VTP A
    2) reconfigure all VTP B's switches as VTP A client.
    That's it.
    Regards,
    Christophe
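    Step 2 above could look roughly like this on a former VTP B switch being moved over (DOMAIN_A is a placeholder for your actual VTP A domain name; verify against your IOS version):

    ```
    ! Step 1 is done on the VTP A server: define every VLAN from domain B there first.
    ! Then, on each former VTP B switch:
    vtp domain DOMAIN_A
    vtp mode client
    ! Changing the domain name also resets the configuration revision to 0,
    ! so the switch safely learns the VLAN database from the VTP A server.
    ```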
