Best Practice for Distributing Databases to Customers

I did a little searching and was surprised not to find a best-practice document on how to distribute Microsoft SQL Server databases. With other database formats, it's common to distribute them as scripts, but that approach seems rather limited with the built-in tools Microsoft provides: there appear to be limits on the length of the script. We're looking to distribute a database several GB in size. We could detach the database or provide a backup, but each of those has its own disadvantage of limiting which versions of SQL Server will accept the database.
What do you recommend, and can you point me to some documentation that covers this practice?
Thank you.

It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support upgrading the database version along the way, including the Copy Database wizard, BACKUP/RESTORE, detach/attach, script generation, the Microsoft Sync Framework, and a few others.
Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge the data.
Backing up and restoring your database is by far the most reliable method of distributing it. It may not be practical in some cases, because you'll need to generate a new backup every time a schema change occurs; that overhead disappears, though, if you already have an automated backup/maintenance routine in your environment.
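For illustration, a minimal BACKUP/RESTORE sketch; the database name, paths, and logical file names below are hypothetical and would need to match your own database:

```sql
-- On the source instance: produce the distributable backup file.
BACKUP DATABASE MyProductDB
TO DISK = N'C:\Dist\MyProductDB.bak'
WITH INIT;

-- On the customer's instance (same or newer SQL Server version):
RESTORE DATABASE MyProductDB
FROM DISK = N'C:\Dist\MyProductDB.bak'
WITH MOVE N'MyProductDB'     TO N'D:\Data\MyProductDB.mdf',
     MOVE N'MyProductDB_log' TO N'D:\Data\MyProductDB_log.ldf',
     RECOVERY;
```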
As an alternative, you can use the Copy Database functionality in SSMS, although it can prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
Another option is to detach your database, copy its files, and then attach them on both the source and destination instances. This incurs downtime for the detached database, so there are usually better methods for distribution.
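A hedged sketch of that approach, again with hypothetical names and paths:

```sql
-- On the source instance: detach; the database is offline here until reattached.
EXEC sp_detach_db @dbname = N'MyProductDB';

-- Copy the .mdf/.ldf files to the destination, then attach on each instance:
CREATE DATABASE MyProductDB
ON (FILENAME = N'D:\Data\MyProductDB.mdf'),
   (FILENAME = N'D:\Data\MyProductDB_log.ldf')
FOR ATTACH;
```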
And then there is the previously mentioned method of generating scripts for the schema and then loading data with INSERT statements or with the Import Data wizard in SSMS (which is very practical and internally builds an SSIS package that can be saved for repeated executions). It works fine and is less convenient than the other options, but it is the best way to distribute a database to an older version (a downgrade).
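The output of that approach is just a plain T-SQL file, along these lines (the table here is a hypothetical example):

```sql
-- Schema portion, as a scripting tool would emit it:
CREATE TABLE dbo.Customer
(
    CustomerID int           NOT NULL PRIMARY KEY,
    Name       nvarchar(100) NOT NULL
);

-- Data portion: plain INSERTs, which an older server version can run as well.
INSERT INTO dbo.Customer (CustomerID, Name) VALUES (1, N'Contoso');
INSERT INTO dbo.Customer (CustomerID, Name) VALUES (2, N'Fabrikam');
```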
With all this said, there is no single "best practice" for this. There are multiple features, each offering its own advantages and drawbacks, which allows them to align with different business requirements.

Similar Messages

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best
    practices to be followed for distributing/releasing J2EE applications, in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton
    and additional classes for the application.
    Most app servers can generate the required classes while deploying the EJBs in the
    application, i.e. at install time, while some (BEA WebLogic and IBM WebSphere are
    two that we are aware of) allow these classes to be generated before installation
    time, in which case the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear" . There are two ways
    in which the classes can be generated. The first is using 'ejbc' ( assume we are
    using BEA Weblogic ), which generates the stub, skeleton and additional classes for
    the application and returns the file, say, "Deployable_myapp.ear" containing all
    the necessary classes and files. This file is the one that is then installed. The
    other option is to install the file "myapp.ear" and let the Weblogic App. server
    itself, generate the required classes at the installation time.
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to
    separately generate the stubs for each version of the app server that we support?
    I.e., if we generate a deployable file containing the required classes using the 'ejbc'
    of WebLogic 5.1, can the same file be installed on WebLogic 6.1, or do we
    have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the
    nature/magnitude of the risk that we are taking in terms of the installation failing?
    Any links to useful resources as well as comments/suggestions will be appreciated.
    TIA
    Regards,
    Aasif


  • Best practice for tracking database changes...?

    Dear Oracle gurus,
    I'm still relatively new to database administrating, and recently I ran into a situation which I'm not sure if there's some text-book scenario analysis or practice.
    I find it hard to track all the database changes across different servers. Our company develops software that uses the Oracle database, so we have development and test servers set up here and there, with really minimal control over them. Problems arise when we make rapid design changes to our system, which require multiple, rapid changes to the databases. I find it really hard to keep track of everything, because sometimes I can't patch a server because people are still using it for development/testing/investigation/etc.
    So, is there some kind of good practices for tracking database changes (which we even write patches for), monitoring schema modifications, or maybe even versioning database objects? I've tried to find some information but I think I did not look in the right places or ask the right questions.
    Any help is appreciated.
    Best regards,
    Peter Tung

    The first thing I would start with is:
    Find a version control system that will allow you to store files and version them (PVCS, for example). You could, for example, store all the SQL scripts. Whenever a change is needed, the user checks the script out of the version control tool, makes the change, and checks it back in. Besides SQL scripts, you can also store binary files or any other type of source file in a version control system. This would at least put some things in order. In a version control system, you can associate a number or a string with all the files within a patch.
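    One pattern that complements the version control system, sketched here in Oracle SQL with hypothetical table and column names: have every checked-in patch script record itself in a version table, so any server can be queried for the patches it has received.

    ```sql
    -- Bookkeeping table, created once per database.
    CREATE TABLE schema_version (
        version_number NUMBER        PRIMARY KEY,
        applied_on     DATE          DEFAULT SYSDATE NOT NULL,
        description    VARCHAR2(200) NOT NULL
    );

    -- Every patch script checked into the VCS ends by recording itself.
    INSERT INTO schema_version (version_number, description)
    VALUES (42, 'Add region column to customer');
    COMMIT;

    -- To see how far any given server has been patched:
    SELECT MAX(version_number) AS current_version FROM schema_version;
    ```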

  • Best Practice for the database owner of an SAP database.

    We recently had a user account removed from our SAP system when this person left the agency.  The account was associated with the SAP database (he created the database a couple of years ago). 
    I'd like to change the owner of the database to <domain>\<sid>adm  (ex: XYZ\dv1adm)  as this is the system admin account used on the host server and is a login for the sql server.  I don't want to associate the database with another admin user as that will change over time.
    What is the best practice for the database owner of an SAP database?
    Thanks
    Laurie McGinley

    Hi Laurie
    I'm not sure if this is best practice or not, but I've always had the SA user as the owner of the database. It just makes restores to other systems etc. easier.
    Ken
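    For what it's worth, changing the owner is a one-liner either way; the database name below is a hypothetical SID-style name:

    ```sql
    -- Classic approach; run from inside the target database:
    USE DV1;
    EXEC sp_changedbowner 'sa';

    -- Equivalent syntax on SQL Server 2005 and later:
    ALTER AUTHORIZATION ON DATABASE::DV1 TO sa;
    ```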

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that sells tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2,500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day, and each table currently holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    The table columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come across in my nine months at the company (I'm working as a software engineer, but I'm planning to take over database maintenance, since no one is maintaining it right now and there is nothing more I can do in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, some have a few. This is when the real issues start: they pull their data from our server over the internet while 2,000 units are sending data to that server, and they keep getting read timeouts, since SQL Server gives priority to the inserts and holds all the SELECT commands. I worked around it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve. The problem is that whoever wrote the C# app used hard-coded SQL statements, AND the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, talking about reports: there are summary reports, stop reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that SELECT statements don't get kicked out in favor of INSERT commands, but does SQL Server automatically select from the snapshots, or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case, since our database size is 78 GB.
    When I run code analysis on the app, Visual Studio tells me I had better use stored procedures and views rather than hard-coded SELECT statements; what difference will this make in terms of performance?
    Thanks in advance.
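    Regarding the snapshot question above: on SQL Server 2005 and later this is a database-level setting, and with READ_COMMITTED_SNAPSHOT enabled, plain SELECTs read row versions automatically, with no application changes. A hedged sketch (hypothetical database name; this option does not exist on SQL Server 2000):

    ```sql
    -- Allow explicit SNAPSHOT transactions (opt-in per session):
    ALTER DATABASE TrackingDB SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Make the default READ COMMITTED level use row versioning automatically,
    -- so report readers no longer block behind the device inserts:
    ALTER DATABASE TrackingDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
    ```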

  • Best Practice for Distributed TREX NFS vs cluster file systems

    Hi,
    We are planning to implement a distributed TREX, using RedHat on X64, but we are wondering which could be the best practice or approach to configure the "file server" used on the TREX distributed environment. The guides mention file server, that seems to be another server connected to a SAN exporting or sharing the file systems required to be mounted in all the TREX systems (Master, Backup and Slaves), but we know that the BI accelerator uses OCFS2 (cluster file systems) to access the storage, in the case of RedHat we have GFS or even OCFS.
    Basically we would like to know which is the best practice and how other companies are doing it, for a TREX distributed environment using either network file systems or cluster file systems.
    Thanks in advance,
    Zareh

    I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system with TREX because the BI Accelerator does, but maybe that is not supported; it does not seem to be clear in the TREX guides.
    That should have been the initial question:
    Are cluster file system solutions supported on a plain TREX implementation?
    Thanks again,
    Zareh

  • Symantec antivirus best practice for Oracle database on Windows Server 2003

    Hi all,
    I have an Oracle 10.2.0.4 database server on the Windows Server 2003 platform. What would be the best practice for running Symantec antivirus on that server, and which database files should be excluded from scanning?
    My server has rebooted unexpectedly many times; in the event log I see event ID 6008. What may be the cause of that?

    Normally, you don't run a virus scanner on a database server because your database server isn't vulnerable to viruses. It's behind firewalls, people aren't reading mail on it, people aren't plugging thumb drives into it, etc. If you do decide that you need to run a virus scanner on a database server, at least exclude the Oracle data files from the scan. Oracle gets very unhappy if someone else tries to open its data files (or, worse, if someone opens a data file before it gets the chance to acquire exclusive access).
    Justin

  • Any best practices for proxy databases

    Dear all,
    is there any caveat or best practice when using a proxy database?
    Is it secure and wise to create them on the master device? Can they grow? Or is it similar to an MSSQL linked server?
    Thank You for your patience,
    Arthur

    Hello,
    This statement applies to proxy databases as well.
    Note: For recovery purposes, Sybase recommends that you do not create other system or user databases or user objects on the master device.
    Adaptive Server Enterprise 15.7 ESD #2 > Configuration Guide for Windows > Adaptive Server Devices and System Databases
    http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc38421.1572/doc/html/san1335472527967.html?resultof=%22%6d%61%73%74%65%72%22%20%22%64%65%76%69%63%65%22%20%22%64%65%76%69%63%22%20%22%75%73%65%72%22%20%22%64%61%74%61%62%61%73%65%22%20%22%64%61%74%61%62%61%73%22%20
    The Component Integration Services Users Guide is a very good start. In some respects a proxy database is like a linked server, but the options are many and it all depends on your use case and remote source.
    Niclas

  • Best Practice for monitoring database targets configured for Data Guard

    We are in the process of migrating our DB targets to 12c Cloud Control. 
    In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B.  Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation.  One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
    For our new OEM CC environment we are setting up CC A and CC B. However, I would like to determine whether it would be smarter to monitor all DB targets (Primary and Standby) from the same CC console; in other words, monitor and administer DB Primary and Standby from the same OEM CC console. I am trying to determine the best practice. I am not sure whether administering a switchover from Cloud Control, from Primary to Standby, requires that both targets be monitored in the same environment or not.
    I am interested in feedback.   I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online).   Thanks for your input and thoughts.  I am deliberately trying to keep this as concise as possible.

    OMS is a tool; it is not required that the same OMS monitor your primary and standby, which is what I meant by the comment.
    The reason you want the same OMS to monitor both the primary and the standby is that the Data Guard administration screen will then show both targets. You will also have the option of doing switchovers and failovers, as well as converting the primary or standby. One of the options is also to move all the jobs scheduled against the primary over to the standby during a switchover or failover.
    There is no document that states that you need to have all targets on one OMS, but it is the best method, given the reason OMS exists: it is a tool for keeping all targets in a central repository. If you start having different OMS servers and OMS repositories, you will need to log into each OMS separately to administer its targets.

  • Best practice for making database connection to Forms 10 apps?

    Hi
    To upgrade our Forms applications we are moving from version 3 to 10.
    Our old system runs Forms applications and the connection to the database is based on the individual user. This means that any tables or views used require that the user has specific access granted to them. We have a bespoke system to manage this which generates scripts (GRANT statements) based on lists of tables and users and their appropriate access.
    I have concerns that managing the table access for thousands of individual users in the Forms 10 environment is going to be technically difficult, especially with RADs to consider. Is it feasible to generate and frequently refresh RAD scripts to maintain the current list of users and their permissions?
    I am trying to decide if it is better to:
    A) Connect with the same database user (such as "APP_USER") which has access to everything
    or
    B) Connect with individual usernames/passwords
    Currently, the individual user database passwords are generated weekly and users have means to obtain them (once signed in) rather than setting and remembering them. Some views refer to the Oracle system parameter "USER" to decide what data is returned so this functionality would need to be preserved.
    Any help is greatly appreciated, especially if you can tell me if option A or B is how you connect at your site.

    Thanks for the advice so far.
    It would appear that connecting with individual usernames is not a fundamental error, which I was concerned about.
    Will it still be necessary to create and refresh RAD scripts, or is this only an issue when using OID? We have OID here already because we have a website using Oracle Portal. The sign-on process for this connects to Active Directory for authentication.
    I do not like the idea of having to schedule a refresh of RAD scripts, perhaps 3 times a day, just to keep it current. I do not think the RADs are expected to change as frequently as this, but perhaps other forum members have experience of this?

  • TestStand best practices for distribution

    We're looking for best practices for distributing TestStand
    systems.  I've found the TestStand
    Style Guide but it's a little sparse on how to set up the
    distributed systems.  We're looking for guidelines on where to put
    configuration data, where to put sequence files, how to manage users,
    and similar. 
    We'll be distributing systems to various contract manufacturers in
    China as well as using the systems in multiple locations in-house.
    What have you done with distributions and what problems have you seen?
    Right now we're planning to separate the Deployment Engine from our
    sequence files and putting all our configuration into our distribution
    kit for the Deployment Engine.

    The TestStand Reference manual provides good information on system
    distribution.  Chapter 14 : Deploying TestStand Systems covers the
    necessary information for distributing your TestStand
    application.  I do have a few suggestions and caveats:
    (1)  Make sure you use a Work Space when distributing your
    files.  A Work Space makes it easy to package all of your files
    and dependencies.  Moreover, the distribution wizard provides a
    feature that displays all the files that will be included in the
    installation package into an easy to use Tree View.
    (2)  TestStand currently does not look for embedded DLL
    dependencies.  So, if your code is calling a DLL module that calls
    a DLL module, be sure to include the embedded dependency DLL in your
    WorkSpace.
    (3)  StationGlobals and Custom Data Types are usually missed in
    installation.  Be sure that you are including your
    StationGlobals.ini file and MyTypes.ini file from the Cfg directory
    into your installation workspace.
    If you have more specific questions, please feel free to post them here!
    Good Luck!
    Tyler Tigue
    Applications Engineer
    National Instruments

  • BEST PRACTICES FOR CREATING DISCOVERER DATABASE CONNECTION -PUBLIC VS. PRIV

    I have enabled SSO for Discoverer. So when you browse to http://host:port/discoverer/viewer you get prompted for your SSO
    username/password. I have enabled users to create their own private
    connections. I log in as portal and created a private connection. I then from
    Oracle Portal create a portlet and add a discoverer worksheet using the private
    connection that I created as the portal user. This works fine: users accessing
    the portal can see the worksheet. When they click the analyze link, the
    users are prompted to enter a password for the private connection. The
    following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or
    because the public connection password was invalid. Please enter the correct
    password now to continue.
    I originally created a public connection and then followed the same steps from Oracle Portal to create the portlet and display the
    worksheet. The worksheet is displayed properly from Portal, and when users click the
    analyze link they are taken to Discoverer Viewer without having to enter a
    password. The problem with this is that when a user browses to
    http://host:port/discoverer/viewer they enter their SSO information and then
    any user with an SSO account can see the public connection...very insecure!
    When private connections are used, no connection information is displayed to
    SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the Worksheet portlet from Portal, I enter the following for Database
    Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is what are the best practices for creating Discoverer Database
    Connections.
    Is there a way to create a public connection, but not display it in at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    So overall, I want roughly 40 users to have access to my Portal Page Group. I then want to
    display portlets with Discoverer worksheets. Certain worksheets I want to have
    the ability to display the analyze link. When the SSO user clicks on this they
    will be taken to Discoverer Viewer and prompted for no logon information. All
    SSO users will see the same data...there is no need to restrict access based on
    SSO username...1 database user will be set up in either the public or private
    connection.

    You can make this happen by creating a private connection for the 40 users via a capi script and, when creating the portlet, selecting the 2nd option in the Users Logged In section. That way the portlet uses each user's own private connection every time the user logs in, so it won't ask for a password.
    Another thing: if your version is 10.1.2.2, there is an option in ASC, in the Discoverer section, to require entering a password or not. Let me know if you need more information.
    Thanks,
    Kiran

  • Best Practice for Removing Zeroes from Database

    Does anyone have some clever bits of code or best practices for evaluating a database for instances of zeroes? I'm working on cleaning up our rules file, and I'm thinking the best way to start would be to write some code to look for zeroes and write them to a log file. This would at least indicate whether there is even a problem with zeroes (there may or may not be).
    Any suggestions out there / utilities / code samples?
    Thanks.

    We accomplished this using data extracts from a subset of scenarios/years/entities/accounts, to ensure that all of our potential rules could be checked to verify they were not writing zeroes. This worked pretty well for our purposes. A text editor called EmEditor supports VB macros, so we could write a quick macro to check for strings ending in "; 0." You may also want to review the "calculated" checkbox in your extract and see whether the zeroes are a result of calculations. A rule output could work pretty well too, although it would take some defining: you would have to write it out in a sub and make sure you capture the data of all subroutines, whether your zeroes are rule-driven or actual inputs. You may also want to check whether very small, insignificant values are being written; I have seen items with a value 13 places to the right of the decimal point that were not really significant.
    JTF

  • Noticing a lot of database index fragmentation yet no Health Analyzer alerts...? Best practice for database maintenance in 2013?

    Could someone point me to a best-practices document for database maintenance with SharePoint 2013? I have read the 2010 document, but I'm hoping there is an updated one that I'm just missing.
    My problem is that our DBA recently noticed that many of our SharePoint databases have high index fragmentation. I have the Health Analyzer rules for index fragmentation enabled and they run daily, but I've never received an alert, despite the majority of our databases having greater than 40% fragmentation, and some even above 95%.
    Obviously it has our attention now and we want to get it addressed. My understanding (which I now fear is at best incomplete, more likely just plain wrong) was that a maintenance plan wasn't needed for index fragmentation in 2010/2013 like it was in 2007.
    Thanks,
    Troy

    It depends. Here are the rules for that job (Sampled mode):
    page count > 24 and avg fragmentation in percent > 5,
    or
    page count > 8 and avg page space used in percent < fill_factor * 0.9
    (the fill factor in SharePoint 2013 varies from 80 to 100 depending on the index; it is important not to adjust index fill factors).
    I have seen cases where the indexes are not automatically managed by the rule and require a manual defragmentation with a Full Scan, instead of Sampled. Once the Full Scan defrag completed, the timer job started handling the index fragmentation automatically.
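    If it helps, fragmentation can be checked by hand with the same inputs the rule uses; a sketch (the AllDocs rebuild at the end is just an example target):

    ```sql
    -- Report fragmentation per index; pass 'DETAILED' instead of 'SAMPLED'
    -- to get the full-scan numbers.
    SELECT  OBJECT_NAME(ips.object_id)         AS table_name,
            i.name                             AS index_name,
            ips.page_count,
            ips.avg_fragmentation_in_percent,
            ips.avg_page_space_used_in_percent
    FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
    JOIN    sys.indexes AS i
        ON  i.object_id = ips.object_id AND i.index_id = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- One-off manual defragmentation of a heavily fragmented table's indexes:
    ALTER INDEX ALL ON dbo.AllDocs REBUILD;
    ```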
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Best Practices for Using Service Controller for Entity Framework Database

    I'm running into an issue the first time creating a web service with a .NET backend on Azure. I designed a database in Entity Framework and had it create the models, but I couldn't create a controller for the table unless I made the model inherit from EntityData. Here's the catch: the database model has an int Id, but EntityData has a string Id, so, of course, I'm getting errors. What is the best practice for what I'm trying to do?
    Michael DiLeo

    Hi Michael,
    Thanks for your posting!
    Sorry, I do not totally understand your issue. Two points need your confirmation:
    1. I am confused by "service controller". Do you mean an MVC controller, or the ServiceController class (http://www.codeproject.com/Articles/31688/Using-the-ServiceController-in-C-to-stop-and-start)?
    2. Does the type of the ID in the model match the database? In other words, does the ID type in the .edmx match the database column?
    By the way, it seems that this issue is more related to EF. You could post it on the EF discussion forum for better support.
    Thanks & Regards,
    Will
