Best practice to secure the database during patching/upgrade while Vault is disabled

I have a highly confidential system, and to protect the data in the DB I intend to use Oracle Database Vault. In my company we have centralized services, so the Oracle Unix owner and the SYSDBA accounts are owned by that group, not by the application DBA who "knows" the system (me).
The problem I have is: what would be the best way to protect my data from the SYSDBA user during the window in which Vault is disabled?
I thought maybe I could unmount and file-protect the tablespace from the Unix oracle user during the upgrade process, but the tablespace could be outdated when I remount it after the patch, and I am not sure how I might patch it separately. I am also thinking I might export and drop the schema during the patching, then load it back in once Vault is enabled again. That will work for a year or so, but I envision the databases getting too large to do this efficiently.
Does anyone have any thoughts on how I might protect the data in this scenario?
Edited by: user4714217 on Oct 20, 2008 11:06 AM for clarity

Hi,
I did not fully understand your question, but what I gather is that you do not want any other user to access the database while your security vault is disabled.
During that time you can:
1) Start the database in restricted mode (only a SYSDBA can do this).
2) Change the password or lock the users for as long as your patching activity is running (any user with the DBA role can do this, so it might not be so useful).
A minimal sketch of both options follows below.
Regards
Anurag Tibrewal
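
For illustration, here is a minimal sketch of the two options above, run as SYSDBA during the patch window. The account name APP_USER is hypothetical.

    -- Option 1: open the database in restricted mode so only users with
    -- the RESTRICTED SESSION privilege can connect
    SHUTDOWN IMMEDIATE
    STARTUP RESTRICT
    -- ... apply the patch ...
    ALTER SYSTEM DISABLE RESTRICTED SESSION;

    -- Option 2: lock the application account for the duration of the patch
    ALTER USER app_user ACCOUNT LOCK;
    -- ... apply the patch ...
    ALTER USER app_user ACCOUNT UNLOCK;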

Similar Messages

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same Oracle Application Server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top-level service that is defined as the business process is itself secured through OWSM with usernames and passwords, but what is the best practice for establishing security for each lower-level service?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it performs any business-critical operation then it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which may use one of the existing lower-level services.
    If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
    A typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • What is the best practice in securing deployed source files

    Hi guys,
    Just yesterday I developed a simple image cropper using AJAX and Flash. After compiling the package, I noticed the package/installer delivers the exact same source files as developed to the install folder.
    This didn't concern me much at first, but thinking about it, this question keeps coming to mind:
    "What is the best practice in securing deployed source files?"
    How do we secure an application's installed source files from being tampered with, especially after the application has been installed? For example, files such as spraydata.js can easily be modified with an editor.

    Hi,
    You could compute a SHA or MD5 hash of your source files on first run and save these hashes to EncryptedLocalStore.
    On startup, recompute and verify. (This, of course, does not address the case where the main app's SWF / SWC / HTML itself is decompiled.)

  • Best Practice for Security Point-Multipoint 802.11a Bridge Connection

    I am trying to determine the best practice for securing a point-to-multipoint wireless bridge link: point A to B, C, and D; and B, C, and D back to A. Which authentication and configuration available in the Aironet 1410 IOS are best? Thanks for your assistance.
    Greg

    The following document on the types of authentication available on the 1400 should help you:
    http://www.cisco.com/univercd/cc/td/doc/product/wireless/aero1400/br1410/brscg/p11auth.htm

  • Best practices to secure out-of-band management access

    What are the best practices to secure out-of-band management (OOBM) access?
    I am planning to put in a DSL link for OOBM. I have a console switch which supports SSH and VPN based on IPsec with NAT traversal. My questions are:
    Is it secure enough?
    Do I need to have a router/firewall in front of the console switch?
    I'm planning to put in a Cisco 1841 router as an edge router. What do you think?
    Any suggestions would be greatly appreciated.

    Hi,
    You're going to have OOB access via VPN?
    That is pretty secure (if we're talking about IPsec).
    An 1841 should work fine.
    You can check the design recommendations here:
    www.cisco.com/go/srnd
    Choose the security section...
    Hope it helps.
    Federico.

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that makes tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2,500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice to design a database in this situation? (A rough sketch of one alternative layout follows the column list below.)
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Tables columns are:
    [MessageID]
    [MessageUnit]
    [MessageLong]
    [MessageLat]
    [MessageSpeed]
    [MessageTime]
    [MessageDate]
    [MessageHeading]
    [MessageSatNumber]
    [MessageInput]
    [MessageCreationDate]
    [MessageInput2]
    [MessageInput3]
    [MessageIO]
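
    For illustration only, a commonly suggested alternative to one table per device is a single consolidated table keyed by a device column, roughly as sketched below. Names, data types, and the abbreviated column set are assumptions, not the actual schema.

        -- one table for all devices instead of 2,500 identical tables
        CREATE TABLE dbo.DeviceMessage (
            MessageID     BIGINT IDENTITY(1,1) NOT NULL,
            DeviceID      INT          NOT NULL,   -- replaces the per-device table name
            MessageLong   DECIMAL(9,6) NULL,
            MessageLat    DECIMAL(9,6) NULL,
            MessageSpeed  SMALLINT     NULL,
            MessageTime   DATETIME     NOT NULL,
            MessageIO     INT          NULL,
            CONSTRAINT PK_DeviceMessage
                PRIMARY KEY CLUSTERED (DeviceID, MessageTime, MessageID)
        );

        -- one supporting index for time-range queries across devices
        CREATE NONCLUSTERED INDEX IX_DeviceMessage_Time
            ON dbo.DeviceMessage (MessageTime, DeviceID);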

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come across in my 9 months of work at the company (working as a software engineer, but I am planning to take over database maintenance since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate a report covering the previous month for all their cars; some clients have 100+ cars, and some have a few. This is when the real issue starts: they pull their data from our server over the internet while 2,000 units are sending data to the server, and they keep getting read timeouts since SQL Server prioritizes the inserts and holds all the select commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve. The problem is that whoever wrote the C# app used hard-coded SQL statements,
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, talking about reports: there are summary reports, stop reports, zone reports, etc., and most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so? (See the sketch below.)
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I had better use stored procedures and views rather than hard-coded select statements; what difference will this bring when talking about performance?
    Thanks in advance.
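
    As a hedged illustration of the snapshot suggestion discussed above: on SQL Server 2005 or later (it is not available on older releases), row versioning can be enabled at the database level so that readers stop blocking behind the inserts, without resorting to READ UNCOMMITTED. The database name TrackingDB is hypothetical.

        -- make the default READ COMMITTED level use row versioning automatically;
        -- existing SELECTs then read the last committed row version instead of waiting on insert locks
        ALTER DATABASE TrackingDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

        -- alternatively, allow full snapshot isolation and opt in per reporting session
        ALTER DATABASE TrackingDB SET ALLOW_SNAPSHOT_ISOLATION ON;
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;   -- issued by the reporting connection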

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised not to find a best-practice document for how to distribute Microsoft SQL databases. With other database formats, it's common to distribute them as scripts. That feature seems rather limited with the built-in tools Microsoft provides; there appear to be limits on the length of the script. We're looking to distribute a database several GB in size. We could detach the database or provide a backup, but that has its own disadvantages by limiting which versions of SQL Server will accept the database.
    What do you recommend and can you point me to some documentation that handles this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrades, and these include the "Copy Database" wizard, BACKUP/RESTORE, detach/attach, script generation, the Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge the data.
    Backing up and restoring your database is by far the most reliable method of distributing it (a sketch follows below), but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs, unless you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it may prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. This generates downtime for your detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine and is not as practical as the other options, but it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each offering its own advantages and drawbacks, which allows them to align with different business requirements.
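
    As a rough sketch of the backup/restore option mentioned above (file paths, database name, and logical file names are illustrative only):

        -- on the source instance
        BACKUP DATABASE ProductDB
            TO DISK = N'C:\dist\ProductDB.bak'
            WITH INIT;

        -- on the customer's instance (same or newer SQL Server version)
        RESTORE DATABASE ProductDB
            FROM DISK = N'C:\dist\ProductDB.bak'
            WITH MOVE 'ProductDB'     TO N'D:\data\ProductDB.mdf',
                 MOVE 'ProductDB_log' TO N'D:\data\ProductDB_log.ldf';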

  • Best practice for tracking database changes...?

    Dear Oracle gurus,
    I'm still relatively new to database administration, and recently I ran into a situation for which I'm not sure there is a textbook scenario analysis or practice.
    I find it hard to track all the database changes across different servers. Our company develops software that uses the Oracle database, so we have development and test servers set up here and there, with really minimal control over them. Problems arise when we make rapid design changes to our system, which require multiple and rapid changes to the databases. I find it really hard to keep track of everything, because sometimes I can't patch a server because people are still using it for development/testing/investigation/etc.
    So, is there some kind of good practice for tracking database changes (for which we even write patches), monitoring schema modifications, or maybe even versioning database objects? I've tried to find some information, but I think I did not look in the right places or ask the right questions.
    Any help is appreciated.
    Best regards,
    Peter Tung

    The first thing I would start with is:
    Find a version control system that will allow you to store files and version them (PVCS, for example). You could, for example, store all the SQL scripts. Whenever a change is needed, the user could check the script out from the version control tool, make changes, and check it back in. Besides SQL scripts, you could also store binary files or any type of source code files in a version control system. This would at least put some things in order. In a version control system, you could associate a number or a string with all the files within a patch.
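
    To complement the version control suggestion, one common pattern (purely illustrative, with hypothetical names) is a small table in each database that records which versioned patch scripts have already been applied:

        -- each patch script checked into the VCS ends by recording itself here
        CREATE TABLE schema_version (
            patch_id     VARCHAR2(20)  NOT NULL,
            description  VARCHAR2(200),
            applied_by   VARCHAR2(30)  DEFAULT USER     NOT NULL,
            applied_on   DATE          DEFAULT SYSDATE  NOT NULL,
            CONSTRAINT pk_schema_version PRIMARY KEY (patch_id)
        );

        INSERT INTO schema_version (patch_id, description)
        VALUES ('2008.10-003', 'add index on orders(customer_id)');
        COMMIT;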

  • Best Practice Internet Security with ADO / OraMTS / OraOLEDB and 9i?

    Hi people,
    I have the following scenario to support and I URGENTLY need some information regarding the security model vs performance envelope of these platforms.
    We are currently developing a web application using IE 5.0^ as our browser, IIS 5.0 as our server, ASP (JScript) as our component glue, and custom C++ COM+ middle-tier components using ADO / Oracle OLE DB to talk to a Solaris-based Oracle 9i instance.
    Now it comes to light from the application requirements that the system should, if at all possible, support Virtual Private Database for subscribers [plus we need to ease backend data service development, and row-level security combined with fine-grained audit seems the way to go].
    How does one use Oracle's superior row-level security model in this situation?
    How does one get the MS middle tier to authenticate with the database, given that our COM+ ADO components are all required to go through ONE connection string? [Grrrr]
    Can we somehow give proxy rights to this identity so that it can "become" and authenticate with an OID/LDAP as an "Enterprise User"? If so, how?
    I have seen a few examples of JDBC and OCI middle-tier authentication, but how does one achieve the same result as efficiently as possible from the MS platform?
    It almost appears, due to connection pooling, that each call to the database on each open connection could potentially require a different application context. How does one achieve this efficiently?
    If this is not the way to go - how could it work?
    What performance tradeoffs do we have using this architecture? (And potentially how will we migrate to .Net on the middle tier?)
    As you can see, my questions are both architectural and technical. So, are there any case studies, white papers or best practice monographs on this subject that are available to either Technet members or Oracle Partners?
    Alternatively, anyone else come up against this issue before?
    Thanks for your attention,
    Lachlan Pitts
    Developer DBA (Oracle)
    SoftWorks Australia Pty Ltd

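    As a hedged sketch only, with all names hypothetical: the combination being asked about is usually built from Oracle proxy authentication for the shared middle-tier login plus a VPD (row-level security) policy on the subscriber data.

        -- allow the pooled middle-tier login APP_POOL to connect on behalf of a named end user
        ALTER USER alice GRANT CONNECT THROUGH app_pool;

        -- attach a row-level security (VPD) policy to the subscriber data
        BEGIN
          DBMS_RLS.ADD_POLICY(
            object_schema   => 'APPDATA',
            object_name     => 'SUBSCRIBER_DATA',
            policy_name     => 'SUBSCRIBER_ISOLATION',
            function_schema => 'SECADM',
            policy_function => 'FN_SUBSCRIBER_PREDICATE',  -- returns a WHERE predicate per session context
            statement_types => 'SELECT,INSERT,UPDATE,DELETE');
        END;
        /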

  • Best Practices of security for developing applications

    I need information about a model to use for developing applications using Forms and Reports. I have read many documents about best security practices for the database, but I can't find information about how I can join the database security with my software, and how I can establish a standard for my programmers.
    Thank you for your help.

    There are a number of levels of implementation pain here-- best practices in a Fortune 500 company, for example, are likely to require a lot more infrastructure than best practices in a 5000 person organization. A Fortune 500 is also much more likely to have requirements based on the needs of a security team separate from the DBA group, requirements about auditing, etc.
    At the high end, everyone in your organization might be an enterprise user authenticated against a LDAP repository (such as Active Directory) with a variety of functional roles granted to those users and potentially something like fine-grained access control in the database. Depending on how applications are deployed, you might also be using proxy authentication to authenticate these individual users.
    Deploying this sort of infrastructure, though, will be somewhat time intensive and will create a degree of administrative overhead that you may not need. It will also potentially require a decent investment in development costs. Your needs may be far simpler (or more complex), so your security model ought to reflect that.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
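
    A tiny sketch of the "functional roles" idea Justin mentions, with hypothetical object and user names:

        -- group privileges into a role and grant the role to individual (or enterprise) users
        CREATE ROLE order_clerk;
        GRANT SELECT, INSERT, UPDATE ON app_owner.orders TO order_clerk;
        GRANT order_clerk TO alice;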

  • Any best practices for proxy databases

    Dear all,
    is there any caveat or best practice when using a proxy database?
    Is it secure and wise to create them on the master device? Can they grow? Or are they similar to an MSSQL linked server?
    Thank You for your patience,
    Arthur

    Hello,
    This statement applies to proxy databases as well.
    Note: For recovery purposes, Sybase recommends that you do not create other system or user databases or user objects on the master device.
    Adaptive Server Enterprise 15.7 ESD #2 > Configuration Guide for Windows > Adaptive Server Devices and System Databases
    http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc38421.1572/doc/html/san1335472527967.html?resultof=%22%6d%61%73%74%65%72%22%20%22%64%65%76%69%63%65%22%20%22%64%65%76%69%63%22%20%22%75%73%65%72%22%20%22%64%61%74%61%62%61%73%65%22%20%22%64%61%74%61%62%61%73%22%20
    The Component Integration Services Users Guide is a very good start. In some ways it is like a linked server, but the options are many and it all depends on your use case and remote source.
    Niclas
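
    A hedged sketch of the recommendation above, using approximate Adaptive Server syntax and hypothetical names: the proxy database is placed on a user data device rather than on the master device.

        -- create the proxy database on a user device, mapped to a remote database via CIS
        create database salesproxy
            on data_dev1 = 100
            with default_location = 'REMOTE_SRV.salesdb'
            for proxy_update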

  • Symantec antivirus best practice for Oracle database on Windows Server 2003

    Hi all,
    I have an Oracle 10.2.0.4 database server on the Windows Server 2003 platform. What would be the best practice for running Symantec antivirus on that server, and which database files should be excluded from scanning?
    My server has rebooted unexpectedly many times; in the event log I have event ID 6008. What may be the cause of it?

    Normally, you don't run a virus scanner on a database server because your database server isn't vulnerable to viruses. It's behind firewalls, people aren't reading mail on it, people aren't plugging thumb drives into it, etc. If you do decide that you need to run a virus scanner on a database server, at least exclude the Oracle data files from the scan. Oracle gets very unhappy if someone else tries to open its data files (or, worse, if someone opens a data file before it gets the chance to acquire exclusive access).
    Justin

  • Best Practice for monitoring database targets configured for Data Guard

    We are in the process of migrating our DB targets to 12c Cloud Control. 
    In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B.  Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation.  One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
    For our new OEM CC environment we are setting up CC A and CC B.  However, I would like to determine if it would be smarter to monitor all DB targets (Primary and Standby) from the same CC console.  In other words, monitor and administer DB Primary and Standby from the same OEM CC Console.  I am trying to determine the best practice.  I am not sure whether administering a switchover from Cloud Control from Primary to Standby requires that both targets are monitored in the same environment or not.
    I am interested in feedback.   I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online).   Thanks for your input and thoughts.  I am deliberately trying to keep this as concise as possible.

    OMS is a tool; it is not required that it monitor both your primary and standby, which is what I meant by the comment.
    The reason you need the same OMS to monitor both the primary and the standby is that the Data Guard administration screen will show both targets. You will also have the option of doing switchovers and failovers, as well as converting the primary or standby. One of the options is also to move all the jobs that are scheduled against the primary over to the standby during a switchover or failover.
    There is no document that states that you need to have all targets on one OMS, but that is the best method given the reason for having OMS: it is a tool to have all targets in a central repository. If you start having different OMS servers and OMS repositories, you will need to log into separate OMS instances to administer the targets.

  • Best Practice for the database owner of an SAP database.

    We recently had a user account removed from our SAP system when this person left the agency. The account was associated with the SAP database (he created the database a couple of years ago).
    I'd like to change the owner of the database to <domain>\<sid>adm (ex: XYZ\dv1adm), as this is the system admin account used on the host server and is a login for the SQL Server. I don't want to associate the database with another admin user, as that will change over time.
    What is the best practice for the database owner of an SAP database?
    Thanks
    Laurie McGinley

    Hi Laurie
    I'm not sure if this is best practice or not, but I've always had the SA user as the owner of the database. It just makes it easier for restores to other systems, etc.
    Ken
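
    A minimal sketch of what Ken describes, with an illustrative database name:

        -- change the owner of the SAP database to sa
        USE [DV1];
        EXEC sp_changedbowner 'sa';

        -- on newer SQL Server releases the equivalent statement is:
        -- ALTER AUTHORIZATION ON DATABASE::[DV1] TO [sa];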

  • Best practice for securing confidential legal documents in DMS?

    We have a requirement to store confidential legal documents in DMS and are looking at options to secure access to those documents. We are curious to know: what is the best practice, and how are other companies doing it?
    TIA,
    Margie
    Perrigo Co.

    Hi,
    The standard practice for such scenarios is to use the 'authorization' concept. You can give each user the authorization to create, change, or display these confidential documents; in this way, you can control access authorization. The SAP DMS system monitors how you work and prevents you from displaying or changing originals if you do not have the required authorization.
    The link below will provide you with an improved understanding of the authorization concept and its application in DMS:
    http://help.sap.com/erp2005_ehp_04/helpdata/en/c1/1c24ac43c711d1893e0000e8323c4f/frameset.htm
    Regards,
    Pradeepkumar Haragoldavar
