Best Practice for Designing Database Tables?

Hi,
I work at a company that sells GPS tracking devices. Our SQL Server database is designed with one table per device we sell; there are currently 2,500 tables in the database, all with identical columns, differing only in name. Each device sends about 4K records per day.
Currently each table holds between 10K and 300K records.
What is the best practice for designing a database in this situation?
When accessing the database from a C# application, which is better to use: direct SQL commands or views?
A detailed description of what is best to do in such a scenario would be great.
Thanks in advance.
Edit:
The table columns are:
[MessageID]
      ,[MessageUnit]
      ,[MessageLong]
      ,[MessageLat]
      ,[MessageSpeed]
      ,[MessageTime]
      ,[MessageDate]
      ,[MessageHeading]
      ,[MessageSatNumber]
      ,[MessageInput]
      ,[MessageCreationDate]
      ,[MessageInput2]
      ,[MessageInput3]
      ,[MessageIO]
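
For illustration, here is a minimal sketch of a consolidated single-table alternative: one messages table keyed by device, so a new device becomes a row in a lookup table rather than a new table. The table name, index, and data types below are assumptions, not the existing schema:

-- Illustrative only: all devices share one table; DeviceID replaces the per-device table name.
CREATE TABLE dbo.DeviceMessages (
    DeviceID            INT           NOT NULL,
    MessageID           BIGINT        NOT NULL,
    MessageTime         DATETIME      NOT NULL,   -- assumes date and time are combined here
    MessageLong         DECIMAL(9, 6) NULL,
    MessageLat          DECIMAL(9, 6) NULL,
    MessageSpeed        SMALLINT      NULL,
    MessageHeading      SMALLINT      NULL,
    MessageSatNumber    TINYINT       NULL,
    MessageInput        INT           NULL,
    MessageInput2       INT           NULL,
    MessageInput3       INT           NULL,
    MessageIO           INT           NULL,
    MessageCreationDate DATETIME      NOT NULL,
    CONSTRAINT PK_DeviceMessages
        PRIMARY KEY CLUSTERED (DeviceID, MessageTime, MessageID)
);

-- A typical report query then filters on one device and a date range,
-- which the clustered index above supports without scanning other devices' rows:
-- SELECT ... FROM dbo.DeviceMessages
-- WHERE DeviceID = @DeviceID AND MessageTime >= @From AND MessageTime < @To;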

Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I've run into during my 9 months at the company (I work as a software engineer, but I am planning to take over database maintenance, since no one is maintaining it right now and there is nothing more I can do in the code to make it faster).
At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, and some have a few. This is when the real issues start: they pull their data from our server over the internet while 2,000 units are sending data to it, and they keep getting read timeouts because SQL Server gives priority to the inserts and holds all the select commands. I worked around it temporarily in the code by using "Read Uncommitted" when I initialize a connection from C#.
The other issue is that generating reports for a month or two takes a long time when selecting 100+ units. That is what I want to solve. The problem is that whoever wrote the C# app used hard-coded SQL statements
AND
the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003. 
Now, about the reports: there are summary reports, stop reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands, but does SQL Server automatically read from the snapshots, or do I have to tell it to do so?
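For reference, versioned (snapshot-based) reads are a database-level option that has to be enabled explicitly; they are not on by default. Once READ_COMMITTED_SNAPSHOT is on, ordinary SELECT statements use row versions automatically, with no application changes. A sketch, assuming SQL Server 2005 or later and a placeholder database name:

-- Allow transactions to request SNAPSHOT isolation explicitly.
ALTER DATABASE TrackingDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Make plain READ COMMITTED selects read the last committed row version
-- instead of blocking behind the inserts.
ALTER DATABASE TrackingDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- A session can also opt in per transaction once ALLOW_SNAPSHOT_ISOLATION is on:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.DeviceMessages;  -- placeholder query
COMMIT TRANSACTION;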
Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78 GB.
When I run code analysis on the app, Visual Studio tells me I'm better off using stored procedures or views than hard-coded SELECT statements; what difference will that make in terms of performance?
Thanks in advance. 
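
On the stored-procedure question, the gain is less about raw speed and more about plan reuse, parameterization, and keeping the SQL out of the C# strings. Below is a minimal sketch of a parameterized report procedure; the procedure, parameter, and table names are assumptions tied to the consolidated-table sketch above:

-- Illustrative only: one parameterized procedure instead of a hard-coded SELECT string per unit.
CREATE PROCEDURE dbo.GetUnitMessages
    @DeviceID INT,
    @FromDate DATETIME,
    @ToDate   DATETIME
AS
BEGIN
    SET NOCOUNT ON;

    SELECT MessageTime, MessageSpeed, MessageIO, MessageSatNumber
    FROM dbo.DeviceMessages        -- placeholder name
    WHERE DeviceID = @DeviceID
      AND MessageTime >= @FromDate
      AND MessageTime <  @ToDate
    ORDER BY MessageTime;
END;

-- From C# this is called with CommandType.StoredProcedure and SqlParameter values,
-- which also removes the injection risk of concatenated statements.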

Similar Messages

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised not to find a best practice document for how to distribute Microsoft SQL databases. With other database formats, it's common to distribute them as scripts. That capability seems rather limited with the built-in tools Microsoft provides; there appear to be limits on the length of the script. We're looking to distribute a database several GBs in size. We could detach the database or provide a backup, but that has its own disadvantages, limiting which versions of SQL Server will accept the database.
    What do you recommend and can you point me to some documentation that handles this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrades, including the Copy Database wizard, BACKUP/RESTORE, detach/attach, script generation, Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge the data.
    Backing up and restoring your database is by far the most reliable method of distributing it. It may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs, but that is not an issue if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it can be unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them on both the source and destination instances. This causes downtime for the detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine and, while not as practical as the other options, it is the best way to distribute databases when the version is being downgraded.
    With all that said, there is no single "best practice" for this. There are multiple features, each offering its own advantages and drawbacks, which allows them to align with different business requirements.
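
    To make the backup/restore route concrete, here is a minimal T-SQL sketch; the database name, logical file names, and paths are placeholders, and the MOVE clauses are only needed when the destination's file layout differs from the source:

    -- On the source instance: produce a full backup to ship to the customer.
    BACKUP DATABASE SalesDb
        TO DISK = N'C:\Backups\SalesDb.bak'
        WITH INIT;

    -- On the destination instance: restore it, relocating the files if necessary.
    RESTORE DATABASE SalesDb
        FROM DISK = N'C:\Backups\SalesDb.bak'
        WITH MOVE N'SalesDb'     TO N'D:\Data\SalesDb.mdf',
             MOVE N'SalesDb_log' TO N'D:\Data\SalesDb_log.ldf',
             RECOVERY;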

  • Best practice for tracking database changes...?

    Dear Oracle gurus,
    I'm still relatively new to database administrating, and recently I ran into a situation which I'm not sure if there's some text-book scenario analysis or practice.
    I find it hard to track all the database changes across different servers. Our company develops software that uses the Oracle database, so we have development and test servers set up here and there, with minimal control over them. Problems arise when we make rapid design changes to our system, which require multiple, rapid changes to the databases. I find it really hard to keep track of everything, because sometimes I can't patch a server since people are still using it for development/testing/investigation/etc.
    So, are there some good practices for tracking database changes (which we even write patches for), monitoring schema modifications, or maybe even versioning database objects? I've tried to find some information, but I think I did not look in the right places or ask the right questions.
    Any help is appreciated.
    Best regards,
    Peter Tung

    The first thing I would start with is:
    Find a version control system that will allow you to store files and version them (PVCS, for example). You could, for example, store all the SQL scripts. Whenever a change is needed, the user checks the script out of the version control tool, makes the change, and checks it back in. Besides SQL scripts, you could also store binary files or any type of source code in a version control system. This would at least put some things in order. In a version control system, you can associate a number or a string with all the files within a patch.
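
    One common companion to the version-control approach (an illustrative assumption, not something mentioned above) is a small version table inside each database, so every patch script records itself and you can see at a glance which server is at which level:

    -- Created once per database; every patch script ends by inserting its own row.
    CREATE TABLE schema_version (
        version_label  VARCHAR(32)   NOT NULL,
        applied_on     DATE          NOT NULL,
        description    VARCHAR(200)
    );

    -- Hypothetical last statement of patch 1.4.2:
    INSERT INTO schema_version (version_label, applied_on, description)
    VALUES ('1.4.2', SYSDATE, 'Add index on ORDERS.CUSTOMER_ID');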

  • Best Practice for the database owner of an SAP database.

    We recently had a user account removed from our SAP system when this person left the agency.  The account was associated with the SAP database (he created the database a couple of years ago). 
    I'd like to change the owner of the database to <domain>\<sid>adm  (ex: XYZ\dv1adm)  as this is the system admin account used on the host server and is a login for the sql server.  I don't want to associate the database with another admin user as that will change over time.
    What is the best practice for the database owner of an SAP database?
    Thanks
    Laurie McGinley

    Hi Laurie
    I'm not sure if this is best practice or not, but I've always had the SA user as the owner of the database. It just makes restores to other systems etc. easier.
    Ken
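
    For reference, changing the owner is a single statement either way; the database name below is a placeholder, and the login is the one suggested in the question:

    -- SQL Server 2005 and later:
    ALTER AUTHORIZATION ON DATABASE::DV1 TO [XYZ\dv1adm];

    -- Older equivalent, run inside the database in question:
    EXEC sp_changedbowner 'sa';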

  • JDeveloper 11.1.1.3: Best Practice for Checkboxes in Table

    Hi there,
    I'm having problems with checkboxes inside the table component.
    Can someone please fill me in as to what the best practice is to use checkboxes inside a table ?
    In the database, we are storing values Y and N.
    Thanks,
    Mark

    Hi Mark,
    I suppose you are talking about ADF Faces applications. If so, then I have two preferred approaches, tested and used in real practice:
    *1) First approach: Create a simple converter, define it in the faces-config.xml and set it in the <af:selectBooleanCheckbox> tags' "converter" attribute, for example:*
    package mypackage;
    import javax.faces.application.FacesMessage;
    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.convert.Converter;
    import javax.faces.convert.ConverterException;
    public class BooleanYNConverter implements Converter {
      public BooleanYNConverter() {
      }

      public Object getAsObject(FacesContext facesContext, UIComponent uiComponent, String string) {
        if (string == null) return null;
        String s = string.trim();
        if (s.length() == 0) return null;
        if (s.equalsIgnoreCase("true")) return "Y";
        if (s.equalsIgnoreCase("false")) return "N";
        FacesMessage errorMessage = new FacesMessage(FacesMessage.SEVERITY_ERROR,
            "Cannot convert " + string + " to Y/N. It must be either true or false",
            "Cannot convert " + string + " to Y/N. It must be either true or false");
        throw new ConverterException(errorMessage);
      }

      public String getAsString(FacesContext facesContext, UIComponent uiComponent, Object object) {
        if (object == null) return "";
        if (object.equals("Y")) return "true";
        if (object.equals("N")) return "false";
        FacesMessage errorMessage = new FacesMessage(FacesMessage.SEVERITY_ERROR,
            "Cannot convert " + object + " to true/false. It must be either Y or N",
            "Cannot convert " + object + " to true/false. It must be either Y or N");
        throw new ConverterException(errorMessage);
      }
    }

    In faces-config.xml:
      <converter>
        <converter-id>BooleanYNConverter</converter-id>
        <converter-class>mypackage.BooleanYNConverter</converter-class>
      </converter>

    In the JSF page:
    <af:selectBooleanCheckbox ... converter="BooleanYNConverter"/>
    N.B. If you use this approach, the ViewObject attribute's Control Type should be set to "Default" instead of "Checkbox" (see the attribute's Control Hints section in the dialog box)!
    *2) Second approach: In the PageDef, define a button binding for the Y/N values and map the corresponding item in the table binding to this button binding. In this way you remap the VO attribute's Y/N value to the true/false the checkbox component expects. No converter or additional configuration is necessary, but you have to repeat this for each checkbox field:*
    <?xml version="1.0" encoding="UTF-8" ?>
    <pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel" version="11.1.1.56.60" id="TestPagePageDef" Package="view.pageDefs">
      <parameters/>
      <executables>
        <variableIterator id="variables"/>
        <iterator Binds="DeptViewRO" RangeSize="25" DataControl="AppModuleDataControl" id="DeptViewROIterator"/>
      </executables>
      <bindings>
        <tree IterBinding="DeptViewROIterator" id="DeptViewRO">
          <nodeDefinition DefName="model.DeptViewRO" Name="DeptViewRO0">
            <AttrNames>
              <Item Value="DeptID"/>
              <Item Value="DeptName"/>
              <Item Value="Flag" Binds="MyFlag"/>
            </AttrNames>
          </nodeDefinition>
        </tree>
        <button IterBinding="DeptViewROIterator" StaticList="true" id="MyFlag">
          <AttrNames>
            <Item Value="Flag"/>
          </AttrNames>
          <ValueList>
            <Item Value="Y"/>
            <Item Value="N"/>
          </ValueList>
        </button>
      </bindings>
    </pageDefinition>

    In the sample above the target VO attribute is called Flag. Have a look at the line <Item Value="Flag" Binds="MyFlag"/>. This line does the magic.
    Hope I've been a bit helpful.
    Dimitar
    Edited by: Dimitar Dimitrov on Nov 13, 2010 1:53 PM
    There was a little mistake: Instead of BooleanYNConverter I had written BooleanYNCheckbox in the <af:selectBooleanCheckbox> tag.

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all cost. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, unsecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse system generated by an ETL process might be exempt; but the process that creates that data is not exempt - that process and ultimately the data - must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed, there is a documented procedure for restoring the system to a valid working state.
    The deployments I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party assists in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say it simply cannot be that easy for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.
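
    As a purely illustrative example of the kind of scripted, ordered deployment being described, each object and its approved seed data would live in its own numbered script (the file names, table, and data below are hypothetical):

    -- 010_create_lookup_status.sql : run first, in the documented order.
    CREATE TABLE lookup_status (
        status_code  VARCHAR(10)  NOT NULL PRIMARY KEY,
        description  VARCHAR(100) NOT NULL
    );

    -- 020_seed_lookup_status.sql : one explicit, business-approved row per insert.
    INSERT INTO lookup_status (status_code, description) VALUES ('OPEN', 'Record is active');
    INSERT INTO lookup_status (status_code, description) VALUES ('CLOSED', 'Record is archived');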

  • Best practice for existing target table

    We want to develop mappings for existing target tables.
    1. The tables are imported from the data dictionary.
    2. The tables are used in the mappings.
    Is it a good idea to set deploy to "no" for these tables, to prevent them from being used by the deployment manager (default action: create)?
    Thanks in advance for a good practice.
    Stp
    Message was edited by:
    user444776

    Hi,
    Yes, you are right. For the tables which already exist in the database schema, the deployment action should be "None", unless you have planned and made changes to the table structures.
    Cheers
    Mahesh

  • Is OWB the Best Tool for Designing Warehouse Tables?

    My current site uses OWB for ETL but mandates the use of TOAD Data Modeller for design and build of the warehouse tables.
    I always thought that OWB could do this task just as well as TOAD, but I'm not very experienced with OWB.
    Can anyone advise?

    You can design tables with OWB and deploy them to the database, but it is better to create the database objects [tables, views, etc.] with SQL DDL scripts and import those objects into OWB.
    You can define physical properties, keys, constraints, partitions, indexes, grants, etc. for a table in a single SQL script.
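
    A minimal sketch of what such a single-table script can bundle (Oracle-style DDL; the table, columns, and partitioning scheme are illustrative assumptions):

    -- Table definition with a range partition per month.
    CREATE TABLE sales_fact (
        sale_id    NUMBER        NOT NULL,
        sale_date  DATE          NOT NULL,
        amount     NUMBER(12, 2),
        CONSTRAINT pk_sales_fact PRIMARY KEY (sale_id)
    )
    PARTITION BY RANGE (sale_date) (
        PARTITION p_2024_01 VALUES LESS THAN (DATE '2024-02-01'),
        PARTITION p_2024_02 VALUES LESS THAN (DATE '2024-03-01')
    );

    -- Supporting index and grant kept in the same script.
    CREATE INDEX ix_sales_fact_date ON sales_fact (sale_date) LOCAL;
    GRANT SELECT ON sales_fact TO etl_reader;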

  • Symantec antivirus Best practice for oracle database on windows server 2003

    Hi all,
    I have an Oracle database server, version 10.2.0.4, on the Windows Server 2003 platform. What would be the best practice for running Symantec antivirus on that server, and which database files should be excluded from scanning?
    My server has rebooted unexpectedly many times; in the event log I have event ID 6008. What may be the cause of it?

    Normally, you don't run a virus scanner on a database server because your database server isn't vulnerable to viruses. It's behind firewalls, people aren't reading mail on it, people aren't plugging thumb drives into it, etc. If you do decide that you need to run a virus scanner on a database server, at least exclude the Oracle data files from the scan. Oracle gets very unhappy if someone else tries to open its data files (or, worse, if someone opens a data file before it gets the chance to acquire exclusive access).
    Justin

  • Best practices for "designer - developer" interaction in Flex Mobile

    Hi,
    I'm starting development of a mobile software application and Flex Mobile is the platform I've chosen for that.
    What is the best practice / recommended workflow for designer-developer interaction? For example, in a web application the designer provides HTML/CSS templates that the developer integrates into the application. What is the analogue in Flex Mobile? What should I request as input from the designer?
    I'll appreciate any hints, links, advises or previous experience on the topic.
    Thanks!
    Best Regards,
    Dinko

    If you're using Adaptive Web Design (CSS3 media queries), you can maintain one site with CSS Layouts optimized for different device widths.
    http://www.adobe.com/devnet/dreamweaver/articles/introducing-media-queries.html
    jQuery Mobile
    http://jquerymobile.com/
    If you're actually running separate web sites for mobile and non-mobile devices, have a look at this recent discussion:
    http://forums.adobe.com/message/4177360#4177360
    IMO, there is nothing wrong with providing links for mobile and non-mobile users to choose which site they would prefer to use -- especially for tablet users, who may have an interest in both.
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists 
    http://alt-web.com/
    http://twitter.com/altweb

  • Any best practices for proxy databases

    Dear all,
    is there any caveat or best practice when using a proxy database?
    Is it secure and wise to create them on the master device? Can they grow? Or is it similar to an MSSQL linked server?
    Thank You for your patience,
    Arthur

    Hello,
    This statement applies to proxy databases as well.
    Note: For recovery purposes, Sybase recommends that you do not create other system or user databases or user objects on the master device.
    Adaptive Server Enterprise 15.7 ESD #2 > Configuration Guide for Windows > Adaptive Server Devices and System Databases
    http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc38421.1572/doc/html/san1335472527967.html?resultof=%22%6d%61%73%74%65%72%22%20%22%64%65%76%69%63%65%22%20%22%64%65%76%69%63%22%20%22%75%73%65%72%22%20%22%64%61%74%61%62%61%73%65%22%20%22%64%61%74%61%62%61%73%22%20
    The Component Integration Services Users Guide is a very good start. In some ways it is like a linked server, but the options are many, and it all depends on your use case and remote source.
    Niclas

  • Best Practice for monitoring database targets configured for Data Guard

    We are in the process of migrating our DB targets to 12c Cloud Control. 
    In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B.  Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation.  One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
    For our new OEM CC environment we are setting up CC A and CC B.  However, I would like to determine if it would be smarter to monitor all DB targets (Primary and Standby) from the same CC console.  In other words, monitor and administer DB Primary and Standby from the same OEM CC Console.   I am trying to determine the best practice.  I am not sure if administering a switchover from Cloud Control from Primary to Standby requires that both targets are monitored in the same environment or not.
    I am interested in feedback.   I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online).   Thanks for your input and thoughts.  I am deliberately trying to keep this as concise as possible.

    OMS is a tool; it is not needed to monitor your primary and standby, which is what I meant by the comment.
    The reason you need the same OMS to monitor both the primary and the standby is that the Data Guard administration screen will show both targets. You will also have the option of doing switchovers and failovers, as well as converting the primary or standby. One of the options is also to move all the jobs scheduled against the primary over to the standby during a switchover or failover.
    There is no document that states that you need to have all targets on one OMS, but that is the best method, and it is the reason for having OMS: OMS is a tool to keep all targets in a central repository. If you start having different OMS servers and OMS repositories, you will need to log into separate OMSes to administer the targets.

  • Best practice for designing a directory tree

    Hello all,
    I wanted to know what you all think is the best way to design a directory tree. It seems to me that experience makes the man perfect, so I wanted to hear your thoughts on how to design a directory tree model.
    Here is my view: I believe that keeping at least the user/employee information flat makes more sense. I know it's a good idea to consider the organizational hierarchy when designing a tree, but there seems to be a problem with that too. If an employee moves from one department to another within the same organization, the directory administrator has to change the location/department of the user. However, if all users are under the same namespace, i.e. ou=Users, the administrator does not have to move the user's entry to another location in the directory.
    Also, while designing a directory there could be several hierarchies: one is the organizational hierarchy, another could be organizing user info based on job roles, i.e. CEO->VPs->Managers->Consultants, and so on. So which one should be considered? By using the ou=Users namespace, we avoid the risk of favoring one type of hierarchy.
    The trade-off in using a flat ou=Users namespace is that browsing the directory looks less interesting.
    Anyway, this is my view. I would like to know your views too, and I hope this thread will help others in building their own directory tree.
    Thanks
    Syed Suhail Ahmad

    Honestly, it really depends on what you need your system to do and how you want to handle administration. I have read the argument about keeping things simple by using ou=USERS, and it seems like the best approach for administration purposes. Currently my client wants all identity management to be handled in OID, so I am in the process of designing our hierarchy as well. I still have to explore the abilities of OID to see how I can set things up.
    There are a few good books and websites that are good for referencing LDAP. One book I use is LDAP Programming, Management, and Integration by Donley. For websites, just search on Google or check out www.ldapguru.com or www.openldap.org (possibly).
    Hope this helps a little. I honestly can't speak on the best way since I haven't implemented my own yet, nor do I think one way will always be best for every situation.

  • Best practice for designing a print environment?

    Greetings,
    If there is a better location for this, please let me know.
    Goal:
    Redesign and redeploy my print environment with best practices in mind.
    Overview: VMware environment running 2008 R2, with ~200 printers. The majority are HP printers ranging from 10 years old to brand new: LaserJets, MFPs, OfficeJets, etc., in addition to Konica, Xerox, and Savin copiers. Many of our printer models aren't supported on 2008, let alone x64.
    Our future goals include ePrint services, as well as a desire to manage print quality and consumption levels through something like Web Jetadmin.
    Presently we have a 2003 x86 server running our very old printers; until 6 months ago the rest were on a single 2008 R2 x64 server. We ended up not giving it the attention to detail it needed and the drivers became very cluttered. This led to a single UPD PCL6 update that corrupted several drivers across the UPD PCL5 and PCL6 spectrum. At that time we brought up a second 2008 R2 server and began to migrate those affected. In some instances we were forced to manually delete the drivers from the clients' system32->Spool->Driver folder and reinstall.
    I haven't had much luck finding good best practice information and figured I'd ask. Some documents I came across suggested that I should isolate each universal driver to a single server, such as three servers for PCL5, PCL6, and PS. Then there is the need to deal with my various copiers.
    I appreciate your advice, thank you!

    This forum is focused on consumer level products.  For your question you may have better results in the HP Enterprise forum here.
    Bob Headrick,  HP Expert
    I am not an employee of HP, I am a volunteer posting here on my own time.

  • Best practice for replicating Partitioned table

    Hi SQL Gurus,
    Requesting your help on the design consideration for replicating a partitioned table.
    1. 4 Partitioned tables (1 master table with foreign key constraints to 3 tables) partitioned based on monthly YYYYMM
    2. 1 table has a XML column in it
    3. Monthly switch partition to remove old data, since it is having foreign key constraint; disable until the switch is complete
    4. 1 month partitioned data is 60 GB
    having said the above, wanted to create a copy of the same tables to a different servers.
    I can think of
    1. Transactional replication, but then worried about the XML column,snapshot size and the alter switch will make the same thing
    on the subscriber or row by row delete.
    2. Logshipping with standby with every 15 minutes, but then it will be for the entire database; because I have other partitioned monthly table which is of 250 GB worth.
    3. Thinking about replicating the Partitioned table as Non Partitioned, in that case how the alter switch will work. Is it possible to ignore delete when setting up the replication.
    3. SSIS or Stored procedure method of moving data on a daily basis.
    4. Backup and restore on a daily basis, but this will not work when the source partition is removed.
    Ganesh

    Please refer to
    http://msdn.microsoft.com/en-us/library/cc280940.aspx
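
    For context on the monthly purge mentioned in the question, here is a minimal sketch of the partition-switch pattern on the publisher side; the table names and partition number are illustrative assumptions:

    -- Move the oldest month out of the partitioned table as a metadata-only operation.
    -- Assumes dbo.Orders_Archive has an identical structure and sits on the same
    -- filegroup as partition 1 of dbo.Orders.
    ALTER TABLE dbo.Orders
        SWITCH PARTITION 1 TO dbo.Orders_Archive;

    -- The switched-out rows can then be dropped cheaply without row-by-row deletes.
    TRUNCATE TABLE dbo.Orders_Archive;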
