Newbie's DAO and database design

I have NodeDAO (which creates Node objects from a 'node' table). Node objects have a NodeType object which is handled by the NodeTypeDAO (accessing the 'node_type' table). For the NodeDAO to return a fully populated Node object, it must first give it a NodeType.
Now should I:
a) Let NodeDAO use NodeTypeDAO to handle NodeType objects?
b) Let NodeDAO access the 'node_type' table directly, which should normally only be handled by the NodeTypeDAO?
c) Let NodeDAO return an incomplete Node object, and let the complete assembly/disassembly be handled elsewhere (e.g. a TransferObjectAssembler)?
I think option c) is the most obvious, as it gives the clearest design. I'm going to need a TransferObjectAssembler eventually because I need to access a different type of NodeDAO for each kind of NodeType (Category, Image, Document, etc.). It's the part about returning a not fully populated object that doesn't seem quite right.
Have I overlooked another solution to this problem?
Any advice is appreciated, thanks.
--Zta.
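For comparison, option (a) usually comes down to plain DAO composition. Below is a minimal sketch, assuming a DataSource-backed NodeDAO, a NodeTypeDAO with a findById method, and made-up table and column names; Node and NodeType are the domain classes described in the question, with the obvious setters assumed.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Sketch of option (a): NodeDAO delegates NodeType loading to NodeTypeDAO.
// Table and column names are assumptions; only NodeDAO, NodeTypeDAO, Node and
// NodeType come from the question.
public class NodeDAO {

    private final DataSource dataSource;
    private final NodeTypeDAO nodeTypeDAO;   // collaborator, injected by whoever wires the DAOs

    public NodeDAO(DataSource dataSource, NodeTypeDAO nodeTypeDAO) {
        this.dataSource = dataSource;
        this.nodeTypeDAO = nodeTypeDAO;
    }

    public Node findById(long id) throws SQLException {
        String sql = "SELECT id, name, node_type_id FROM node WHERE id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return null;
                }
                Node node = new Node();
                node.setId(rs.getLong("id"));
                node.setName(rs.getString("name"));
                // Delegate to the other DAO rather than querying node_type here.
                node.setNodeType(nodeTypeDAO.findById(rs.getLong("node_type_id")));
                return node;
            }
        }
    }
}

The trade-off is a compile-time dependency from NodeDAO on NodeTypeDAO; option (c) removes that dependency at the cost of an extra assembly layer.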

Hi Zta,
There are a number of new techniques in Java EE 5 that can be used. DAOs and DTOs may no longer be required in your application if you are using the Java Persistence API.
Please take a look at the Java EE 5 tutorial located at http://java.sun.com/javaee/5/docs/tutorial/doc, specifically in the Persistence section located at http://java.sun.com/javaee/5/docs/tutorial/doc/PersistenceIntro.html to help inform you of the latest methods and technologies.
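As a rough illustration of that suggestion: with JPA the Node-to-NodeType relationship can be declared once on the entity, and the persistence provider populates it when the Node is loaded, so no hand-written assembly step is needed. A minimal sketch, assuming the table names from the question and a node_type_id foreign key column (an assumption):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.Table;

// Sketch only: entity names follow the question; column names are assumptions.
@Entity
@Table(name = "node")
public class Node {

    @Id
    private Long id;

    private String name;

    // The persistence provider loads the NodeType from the node_type table
    // automatically when the Node is fetched; no separate DAO call is needed.
    @ManyToOne
    @JoinColumn(name = "node_type_id")
    private NodeType nodeType;

    public NodeType getNodeType() { return nodeType; }
}

@Entity
@Table(name = "node_type")
class NodeType {

    @Id
    private Long id;

    private String name;
}

A lookup then becomes a single entityManager.find(Node.class, id) call, with the NodeType already populated.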
Hope this helps - Thanks - Mark

Similar Messages

  • Clustering and database design question

    Hi,
    One of the things we are seeing with running clusters is that, reasonably, as we get more users (due to increased capacity of the cluster), we get more hits on the database. As these are now coming from multiple servers, it is more likely they are in separate transactions and so are less coordinated. As a result, what we see is more deadlocks at the database (Sybase 11.9.2). We have introduced row-level locking and this helps, and stored procedures have shown some improvement in really over-used tables, but I can't help thinking this is a general design concern.
    The way I have been intending to scale our cluster is to have identical server deployments of all beans and use load-balancing for the client connections to the cluster. However, all our cluster members will be using the same database, so the chance of deadlocks must increase for each server I add.
    Is this a general design problem, or are there design solutions I can adopt to help with this? Can anyone give some good advice, highlight specific WebLogic features, discuss similar designs or point to some good documents on such scalability issues? Perhaps I should add that the majority of our database access is by entity beans using bean-managed persistence.
    Thanks
    Sioux

    Give TopLink a try:
    Clustering - Refresh: Yes. Ability to refresh cached beans and objects between nodes of a cluster of application servers.
    Clustering - Synchronization: Yes*. TopLink WebLogic contains support for synchronous or asynchronous cache synchronization between nodes in a cluster. TopLink WebSphere cache synchronization will be supported in 01Q1.
    Cocobase has a similar distributed caching scheme, but WebGain should be a better bet because they are 40% owned by BEA (better integration with WebLogic).
    Jim Zhou.
              "Mike Reiche" <[email protected]> wrote:
              >
              >
              ><whine>
              >I haven't seen a solution to this yet. Clustering prevents the application
              >server
              >from caching database data - which is the reason we use applications
              >in the first
              >place. All data is read from the db at the beginning of a transaction
              >and written
              >back at the end of a transaction.
              ></whine>
              >
              >We are trying to avoid this by pinning entity bean instances to a single
              >WL instance,
              >then we can set db-is-shared=false and the WL instance can cache the
              >entity bean.
              > Stateful session beans seem more suited to scaling than entity beans
              >- there
              >is only ever one instance of a session bean - if you access it from a
              >different
              >WL instance, it is accessed via RMI. For entity beans - multiple copies
              >exist
              >on the different servers.
              >
              >Read-only entity beans are ok if your data is truly read-only. Still,
              >each WL
              >instance will need to read the data from the DB and each copy of the
              >same entity
              >bean instance on each WL instance takes up memory. From the db's point
              >of view
              >- the more you scale, the worse things get. The only thing that WL clustering
              >scales is CPU. WL clustering does not really help to scale the number
              >of entity
              >beans instances.
              >
              >The Read-Mostly pattern exists - but I find this gives me the worst of
              >both worlds.
              >My beans are not always up-to-date and I have all these duplicate instances.
              >
              >Said that, I would love for someone from BEA to contradict me - and tell
              >me how
              >I can make entity beans scale in a WL cluster.
              >
              >
              >- Mike
              >
              >"Sioux France" <[email protected]> wrote:
              >>Hi,
              >>One of the things we are seeing with running clusters is that, reasonably,
              >>as we get more users (due to increased capacity of the cluster), we
              >get
              >>more
              >>hits on the database. As these are now coming from multiple servers
              >it
              >>is
              >>more likely they are in separate transactions and so are less coordinated.
              >>As a result, what we see, is more deadlocks at the database( Sybase
              >11.9.2).
              >>We have introduced row-level locking and this helps, also stored procedures
              >>have shown some improvement in really over-used tables, but I can't
              >help
              >>thinking this is a general design concern.
              >>The way I have been intending to scale our cluster is to have identical
              >>server deployments of all beans and use load-balancing for the client
              >>connections to the cluster. However, all our cluster members will be
              >>using
              >>the same database so the chance of deadlocks must increase for each
              >server
              >>I
              >>add.
              >>Is this a general design problem or are there design solutions I can
              >>adopt
              >>to help with this? Can anyone give some good advice, highlight specific
              >>weblogic features, discuss similar designs or point to some good documents
              >>on such scalability issues? Perhaps I should add the majority of our
              >>database access is by entity beans using bean managed persistence.
              >>Thanks
              >>Sioux
              >>
              >>
              >
              

  • Web Page and Database Design Problem

    Ok, so I'm trying to develop an app, but am having a few problems and need some advice/help.
    I have a webpage made up of JSP pages. These pages will contain forms that will either list info from the database, or allow users to enter data to submit to the DB.
    So I will have Servlets that will process the form information.
    I have also written DAO interfaces for different tables. For example, I have a config table which holds keys and their values.
    This information will only ever be displayed, so I have an interface with getAll() and get(String key).
    I want to avoid putting code like the following in the DAO:
    ctx = new InitialContext();
    javax.sql.DataSource ds
    = (javax.sql.DataSource) ctx.lookup (dataSource);
    conn = ds.getConnection();
    PreparedStatement stmt = conn.prepareStatement(query);
    ResultSet records = stmt.executeQuery();
    I'd prefer to make calls from my DAO getAll() method to another class which would create the connection, query the db, and return the ResultSet, so that I can store/manipulate it any way I wish before passing the results back to the servlet.
    Problem is the records seem to come back null!

    ctx = new InitialContext();
    javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup (dataSource);
    conn = ds.getConnection();
    You should have a connection pool create the connection and hand it out to whoever asks for it.
    Check out Hibernate. Hibernate can manage your connections (you need to set up the XML configuration first); it's another layer that encapsulates your JDBC connections, queries, etc.
    Also check out the Spring Framework. Spring makes transactions much easier to implement (using aspects), and it provides a handful of useful APIs for you to work with, like JdbcTemplate and HibernateTemplate.
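    A likely cause of the null results described above is that the ResultSet is read after its Connection has been closed or returned to the pool; a ResultSet is only valid while its Statement and Connection are open. One common fix is to copy the rows into a plain collection before closing anything. A minimal sketch, assuming a config table with key/value columns; ConfigDAO, the JNDI name and all column names here are illustrative, not from the original post.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    // Sketch only: the JNDI name, table name and column names are assumptions.
    public class ConfigDAO {

        private static final String DATA_SOURCE = "java:comp/env/jdbc/myDataSource";

        // Returns a detached copy of the data, so the caller never touches a
        // ResultSet whose statement or connection has already been closed.
        public Map<String, String> getAll() throws NamingException, SQLException {
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup(DATA_SOURCE);

            Map<String, String> result = new LinkedHashMap<String, String>();
            Connection conn = ds.getConnection();
            try {
                PreparedStatement stmt =
                        conn.prepareStatement("SELECT config_key, config_value FROM config");
                try {
                    ResultSet rs = stmt.executeQuery();
                    while (rs.next()) {
                        // Copy each row out while the connection is still open.
                        result.put(rs.getString("config_key"), rs.getString("config_value"));
                    }
                    rs.close();
                } finally {
                    stmt.close();
                }
            } finally {
                conn.close();   // returns the connection to the container's pool
            }
            return result;
        }
    }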

  • Difference between database design and schema design

    Hi, I have visited many database websites and I found many people saying "we can design a database for you." Are schema design and database design the same thing? In many places I see people saying we have to design the database first in order to create a physical database, so I am a little bit confused. Are they the same? And also, what is the difference between a data model and a schema?

    > the definition i found for logical data model, physical data model and the definition you gave for logical database design, physical database design are the same.
    Not correct. The physical design is the implementation of the logical design. These two designs are at different levels. Also, the logical design will be the same irrespective of the RDBMS product used.
    What is incorrect is a designer/architect designing a logical design specifically for Oracle, or specifically for SQL Server. A logical design has nothing to do with the RDBMS product (or the h/w platforms, app servers, web servers and operating systems used).
    So the logical design will always be the same - it is RDBMS independent.
    The physical design is fully dependent on the RDBMS product used. The same logical design will be implemented as different physical designs for Oracle and for SQL Server.

  • Database design (ERD) for Inventory Management System

    Dear All,
    I am going to develop a simple Inventory Management System using C# .NET for my own learning. After searching different forums, many people have suggested first creating a database design for the software. I want a database design, in short an ERD diagram, for a simple Inventory Management System which shows the proper entities (tables), attributes and relationships between entities.
    It would be very helpful for me, as I am a newbie to C# and databases.
    Thanks,
    momersaleem

    Dear Rebecca,
    Thanks for your suggestions.
    As I am going to develop the IMS for learning purposes, I don't think I need to go into detail regarding customer names and addresses. However, I am still thinking of adding a country attribute to the customers table, which I think will be helpful for sorting customers.
    > What's the difference between a purchase and an order? They're usually the same thing, which doesn't mean you're wrong, but what are you picturing here?
    The purchase entity will be used to keep a record of purchases I made, and the order entity will be used to keep a record of orders that customers placed.
    > Pricing: any order system needs to manage two very distinct bits of data that are easy to confuse. The price in the Product entity is the current price. The price in the Order entity is the selling price. Not at all the same thing - the current price is almost certainly going to change over time. The selling price won't.
    Does this mean that I should rename the price attribute on Product to current_price and add selling_price to the order table, which will keep a record of the price at the time of the order?
    > Why did you include a quantity field in the Products table? Is it meant to represent stock on hand?
    Yes, you are right. It represents stock on hand.
    Could you please recheck the entity relationships, as I am not sure whether they are correct or not?
    Thanks,
    momersaleem
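    For what it's worth, the point about the two prices can also be sketched in code (Java here purely for illustration; the class and field names are made up, not taken from the actual design):

    import java.math.BigDecimal;

    // Illustrative sketch: the current price lives on the product, while the
    // selling price is frozen onto the order line at the moment the order is placed.
    class Product {
        long id;
        String name;
        BigDecimal currentPrice;   // changes over time
        int quantityOnHand;        // stock on hand
    }

    class OrderLine {
        long orderId;
        long productId;
        int quantity;
        BigDecimal sellingPrice;   // copied from Product.currentPrice at order time

        static OrderLine place(Product product, long orderId, int quantity) {
            OrderLine line = new OrderLine();
            line.orderId = orderId;
            line.productId = product.id;
            line.quantity = quantity;
            line.sellingPrice = product.currentPrice;  // later price changes won't affect this order
            return line;
        }
    }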

  • Multi-lingual Software & Database Design

    I am interested in how the industry is approaching multi-lingual software and database design. Specifically, I would like to know if there are any good resources (whitepapers, web sites, books) that get into the details of what the object model and data model of a multi-lingual design would look like and how the two fit together. I am working on a project that requires Product information to be stored in both English and French. Thank you.

    http://java.sun.com/j2se/1.3/docs/guide/intl/
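    The guide linked above covers the Java side (locales, resource bundles, formatting). On the data-model side, a common shape is one row of translatable text per product and locale (for example, a product_translation table); a minimal sketch of that idea follows, with all names made up for illustration:

    import java.util.HashMap;
    import java.util.Locale;
    import java.util.Map;

    // Sketch of per-locale product text. In the database this would be a
    // product_translation table keyed by (product_id, locale); the map stands in for it.
    class ProductText {

        private final Map<Locale, String> descriptions = new HashMap<Locale, String>();

        void putDescription(Locale locale, String text) {
            descriptions.put(locale, text);
        }

        // Fall back to English if the requested language has no translation.
        String getDescription(Locale locale) {
            String text = descriptions.get(locale);
            return text != null ? text : descriptions.get(Locale.ENGLISH);
        }

        public static void main(String[] args) {
            ProductText text = new ProductText();
            text.putDescription(Locale.ENGLISH, "Stainless steel water bottle");
            text.putDescription(Locale.FRENCH, "Bouteille d'eau en acier inoxydable");

            System.out.println(text.getDescription(Locale.FRENCH));  // French copy
            System.out.println(text.getDescription(Locale.GERMAN));  // falls back to English
        }
    }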

  • Database design and pl/sql vs external procedures

    hi,
    My project involves predicting the arrival time of a bus at a bus stop, given statistical data of traffic patterns on the previous 'n' (say 3) days, as well as the current location of the bus (latitude/longitude).
    Given the current bus location, I derive the distance until the destination bus stop, which must be translated into time until arrival.
    I've listed the triggers and procedures involved in making the prediction. These procedures, especially the determination of perpendicular distances, involve some complex trigonometric operations. I would like to know if my approach is correct and whether my database design is suited to the operations to be performed.
    Will it be more efficient to implement the procedures as external procedures or as PL/SQL blocks?
    This is my database design:
    LINKS (a link is the road segment between adjacent bus stops)
        LINK_ID          NUMBER    [PRIMARY KEY]
        START_LATITUDE   NUMBER
        START_LONGITUDE  NUMBER
        START_STOP_ID    NUMBER
        END_LATITUDE     NUMBER
        END_LONGITUDE    NUMBER
        END_STOP_ID      NUMBER
        LINK_LENGTH      NUMBER
    BUS_ROUTE
        ROUTE_ID         NUMBER
        LINKS_ENROUTE    VARRAY(30) OF NUMBER
        STOPS_ENROUTE    VARRAY(30) OF NUMBER
    TRACK (keeps track of the current location of the bus)
        BUS_ID           NUMBER    [PRIMARY KEY]
        ROUTE            VARCHAR2(20)
        LATITUDE         NUMBER
        LONGITUDE        NUMBER
        TS               TIMESTAMP
        LINK_ID          NUMBER
        START_STOP       NUMBER
        END_STOP         NUMBER
    ARRIVAL_TIMES (actual arrival times of the bus, updated by TRACK)
        BUS_ID           NUMBER    [PRIMARY KEY]
        BUS_ROUTE        VARCHAR2(20)
        ARRIVAL          TIMESTAMP
        STOP_ID          NUMBER
    ETA (expected time of arrival)
        BUS_ID           NUMBER
        BUS_ROUTE        VARCHAR2(20)
        BUS_STOP_ID      NUMBER
        ARR_TIME         VARRAY(5) OF TIMESTAMP
    Triggers and procedures
    1) TRACK_TRIGGER
    On insert/update of TRACK, determine which link the bus is currently on.
    Invoke a procedure that calculates the perpendicular distance from the current location to all links en route (cursor on LINKS).
    Results are stored in a temporary table. Select the link_id of the tuple whose perpendicular distance is minimum; this is the link the bus is currently on. Place link_id, start_stop_id and end_stop_id in the corresponding row of TRACK.
    2) ARRIVAL_TRIGGER
    a) Update ARRIVAL_TIMES: store in ARRIVAL_TIMES the start-stop id with the timestamp of the current TRACK record.
    b) Update ETA: find the bus stops that come before the START_STOP of the current TRACK record. All these rows are deleted from the ETA table, as the bus has already crossed these stops.
    3) Prediction Algorithm Procedure
    Determine the distance until destination for each stop, up to 20 stops down from the current location.
    Determine the current average speed of the bus over a 2-hour window, by dividing the total distance travelled by the time taken.
    Calculate the time until arrival T1 = distance until destination / current average speed.
    From the records of the previous 'n' days (say n = 3), find those buses on the same route that were near the link the bus is currently on. Again determine the average speed over a 2-hour window.
    Calculate travel time T(i) = distance until destination / speed, for i = 2, 3, 4, ...
    The final predicted arrival time is a weighted sum of all T(i).
    I hope I'm not asking for too much; any help would be greatly appreciated.
    Thank you,
    Amina
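    The final prediction step is simple arithmetic once the average speeds are known. Below is a minimal sketch of the weighted combination described above; the weights, the sample speed values and all names are illustrative assumptions, not part of the original design:

    // Sketch of the weighted ETA combination described in the post.
    // Each estimate is time = distance / averageSpeed; the weighting scheme is an assumption.
    public class EtaEstimator {

        /**
         * @param distanceRemaining distance to the destination stop (e.g. km)
         * @param currentAvgSpeed   the bus's own average speed over the recent window (km/h)
         * @param historicalSpeeds  average speeds of buses on this route near this link on previous days (km/h)
         * @param currentWeight     weight given to today's estimate, between 0 and 1
         * @return predicted travel time in hours
         */
        public static double predictTravelTime(double distanceRemaining,
                                                double currentAvgSpeed,
                                                double[] historicalSpeeds,
                                                double currentWeight) {
            double current = distanceRemaining / currentAvgSpeed;        // T1
            if (historicalSpeeds.length == 0) {
                return current;
            }
            double historical = 0.0;
            for (double speed : historicalSpeeds) {                      // T2..Tn
                historical += distanceRemaining / speed;
            }
            historical /= historicalSpeeds.length;
            return currentWeight * current + (1.0 - currentWeight) * historical;
        }

        public static void main(String[] args) {
            // 4.5 km to go, travelling 18 km/h now, 20 and 22 km/h on previous days.
            double hours = predictTravelTime(4.5, 18.0, new double[] {20.0, 22.0}, 0.5);
            System.out.printf("Predicted arrival in %.1f minutes%n", hours * 60.0);
        }
    }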

    hello,
    Actually, I can manage ETA without a varray, since there will be a maximum of 3-4 values of expected arrival times at each stop; this can be done with separate columns.
    Though I don't quite understand how LAG() will help me. From what I understand, LAG() is used to access values of previous rows, but in the ETA table each element in the varray (if there is one) is going to be the expected arrival time of buses on a particular route at that particular stop, and is different from the arrival time at a previous stop (i.e. row).
    But for my other table, BUS_ROUTE, I have two varrays describing the links and stops en route. In quite a few procedures I have to loop through these arrays and perform some calculations in every iteration. Is a varray the best way to go, or nested tables?
    Thank you
    Amina
    As an aside, external procedures tend by their very nature to be slow - there's an overhead incurred each time we step outside the database. Therefore you really ought to avoid using a C extproc unless your calculations really cannot be done in PL/SQL or a Java stored procedure.
    Also, before you go down the VARRAY route you should consider the virtues of analytic functions, notably [url=http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/functions56a.htm#83619]LAG()[/url]. I think you really ought to do some benchmarking of performance before you start adding denormalised columns like ETA. You may find the overhead of maintaining those columns exceeds their perceived benefits.
    Cheers, APC

  • Logical Database design and physical database implementation

    Hi
    I am an Oracle DBA, basically, and we started a proactive server dashboard portal which reports on all aspects of our infrastructure (Dev, QA and Prod: performance, capacity, number of servers, number of CPUs, decommission date, OS level, database patch level, etc.).
    This has to be done entirely by our DBA team, as this is not an externally funded project. Now I have been asked to do "logical database design and physical database implementation".
    Even though I know roughly what that means (like designing the whole set of tables in a star schema format), I have never done this before.
    In my mind I have a rough set of tables that could be used, but again I think there is a lot of engineering involved in this area to make sure we do it properly.
    I am wondering whether you might have some recommendations for me on where to start. Are there any documents online? Are there any books on this topic? Are there any documents which explain this process with examples?
    Also, what exactly is the difference between logical database design and physical database implementation?
    Thanks and Regards

    Logical database design is the process of taking a business or conceptual data model (often described in the form of an Entity-Relationship Diagram) and transforming that into a logical representation of that model using the specific semantics of the database management system. In the case of an RDBMS such as Oracle, this representation would be in the form of definitions of relational tables, primary, unique and foreign key constraints and the appropriate column data types supported by the RDBMS.
    Physical database implementation is the process of taking the logical database design and translating that into the actual DDL statements supported by the target RDBMS that will create the database objects in a target RDBMS database. This will generally include specific physical implementation details such as the specification of tablespaces, use of specialised indexing (bitmap, clustered etc), partitioning, compression and anything else that relates to how data will actually be physically stored inside the database.
    It sounds like you already have a physical implementation? If so, you can reverse engineer this implementation into a design tool such as SQL Developer Data Modeller. This will create a logical design by examining the contents of the Oracle data dictionary. Even if you don't have an existing database, Data Modeller is a good tool to use as a starting point for logical and even conceptual/business models.
    If you want to read anything about logical design, "An Introduction to Database Systems" by Date is always a good starting point. "Database Systems - A Practical Approach to Design, Implementation and Management" by Connolly & Begg is also an excellent reference.

  • Oracle database normalisation and logical design documentation

    Hi,
    I am a beginner at DB design. Can anyone help me find Oracle database normalisation and logical design documentation for reference?
    Thanks,
    Swaroop

    Database logical design and normalization are typically DBMS (Database Management System) independent, meaning that you could do a whole logical design without regard to the platform on which it will be based. There are many, many resources on the internet if you search for them. If you use terms like "logical database design" or "database normalization" on Google or your search engine of choice, I imagine you'll come up with many results.
    When it comes to the actual physical design of the database (as in tablespaces, datafiles, indexes, etc) I would first consult the "Oracle Concepts Guide", and then something like the "Application Developer's Guide." This documentation is all available at:
    http://otn.oracle.com
    Hope this helps!

  • Good database design and modelling books

    Hi ,
    I need to work on designing a database from scratch by creating a logical database design and then a physical database design. I'm new to database design.
    Can someone please point me to some good database design and modelling related books /tutorials.
    Regards,
    Bharath.

    bharathDBA wrote:
    Hi Girish, thanks for the information.
    I would definitely look into this book later.
    I don't mind paying any amount of money, if that book gives me the knowledge I want.
    As this book is an international edition, shipping takes 8-10 business days, and by that time I need to complete designing my database, so I'll probably need some other book.
    Is this a school assignment? I hope so. Referring back to your opening statement, "I need to work on designing a database from scratch by creating a logical database design and then a physical database design. I'm new to database design," I can only say that database design is a very big subject. If you are starting from a position of no knowledge at all, I'm afraid there is nothing that is going to give you the knowledge you need in the time frame you have. I will say you need to start by learning the rules of data normalization. Make your logical design Third Normal Form. Google can be your friend. There is actually a pretty good write-up on data normalization on Wikipedia.

  • Should I learn database design and development skills?

    Hi everyone,
    I am a junior Oracle DBA with 3 years' experience. I have the Oracle 10g and 11g OCP certifications.
    I know how to install, configure, monitor and maintain databases, but I don't know how to design and develop databases.
    I know that employers demand more and more of us. Database design and development skills are basic requirements.
    Should I start learning database design?
    If yes, please recommend me the books(or Oracle Docs) of Getting Started.
    Thank you very much
    Edited by: user8096439 on Feb 24, 2009 11:59 AM

    user8096439 wrote:
    Are the following books suitable for a getting started designer?
    2 Day Developer Guide      
    2 Day Plus Application Express Developer's Guide      
    2 Day Plus Java Developer Guide      
    2 Day Plus .NET Developer Guide      
    2 Day Plus Locator Developer Guide      
    2 Day Plus PHP Developer Guide
    You could do worse, but I think before you plunge into specific technologies (Java, .NET, etc.) I'd study up on basic data analysis and normalization.
    Google "data normalization" and study up on 1st, 2nd, and 3rd Normal Form.
    Go through the Oracle docs and get very familiar with the different data types (varchar, number, date, etc.).
    Read the Tom Kyte Books.
    Programmers keep wanting to re-invent what the database already does, and treat the database as a data dump. As a result, I'd focus on analysis and design issues before approaching books on programming technology. (and I was an applications programmer/analyst for about 15 years before transitioning to DBA)

  • Requirements for database design and installation

    Hi,
    As a database administrator, how do I find out whether the database is designed and installed properly?
    Can you please tell me what requirements should be considered in the database design for application developers?
    thanks heaps !!!!

    Mohamed ELAzab wrote:
    > Regarding that the number of executions is the main thing that affects performance, I said that already above ("the application executed it 30 000"), but you didn't read my answer correctly.
    I did not respond to that "answer" of yours, as it was not part of the posting of yours that I responded to. The response of yours which I quoted talked about non-sharable SQL retrieving 20 rows and, after 3 years, retrieving a million rows. This has no bearing on whether the SQL is sharable or not.
    > I don't agree with you that the design is not done by considering the performance bottlenecks.
    So you decide on what the bottlenecks are up front, and then use these as database design considerations? I fail to see any logic or merit in such an approach.
    > I want to let you know that we in the telecoms environment have many problems in our databases because the people who designed those applications didn't take performance into consideration.
    I understand too well - and it is not that they did not take performance into consideration when designing the database, it is that the design is just plain wrong from the start.
    You do not need to consider the amount of memory available, the number and speed of CPUs, or the bandwidth and speed of the I/O system in order to design a database. These have no relevance at all during the design phase, especially as the h/w that will run the design in production in a year's time can be drastically different from the h/w used today.
    No, instead you use a proper and correct design methodology and data modeling approach. Why? Because such a design, by its very nature, will make optimal use of h/w resources and will provide data integrity, scalability and performance.
    > Again, I think the design of the database application must take database performance bottlenecks into consideration, like an application which doesn't use bind variables; if that were taken into consideration it would help the DBA in the future, but unfortunately most people don't do that.
    And as I said, using bind variables or not has absolutely nothing to do with the basic question asked in this thread: "what are the requirements of database design?"
    How does using or not using bind variables influence the design of a table? Or determine whether an entity is in 3NF? Or what the unique identifiers are for an entity? These are the design considerations for a database, not bind variables.
    Yes, SQL that does not use bind variables can cause performance problems. Not paying the electricity bill can cause a power outage for the database server. So what? These issues have no relevance to database design.
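    For readers following the argument, the bind-variable point is purely about how the SQL is issued at run time, not about how the tables are designed. A minimal JDBC illustration; the table and column names are made up:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Illustrative only: "customers" and its columns are made-up names.
    public class BindVariableExample {

        static String withoutBindVariable(Connection conn, long customerId) throws SQLException {
            // Non-sharable: every customer id produces a different SQL text,
            // forcing the database to hard-parse each statement.
            Statement stmt = conn.createStatement();
            try {
                ResultSet rs = stmt.executeQuery(
                        "SELECT name FROM customers WHERE customer_id = " + customerId);
                return rs.next() ? rs.getString(1) : null;
            } finally {
                stmt.close();
            }
        }

        static String withBindVariable(Connection conn, long customerId) throws SQLException {
            // Sharable: one SQL text, reused with different bind values.
            PreparedStatement stmt = conn.prepareStatement(
                    "SELECT name FROM customers WHERE customer_id = ?");
            try {
                stmt.setLong(1, customerId);
                ResultSet rs = stmt.executeQuery();
                return rs.next() ? rs.getString(1) : null;
            } finally {
                stmt.close();
            }
        }
    }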

  • I just want itunes music and database to be on external HD not internal

    What am I doing wrong? I have my iTunes music library on my external HD, and my iTunes preferences say that my iTunes music folder location is just that. It's about 41 GB. OK, but why is there a 7 GB iTunes folder in the default user music folder on my system drive, with certain titles in there that did not make it to the external drive? The database files are in there as well. Shouldn't everything be located in the external iTunes music folder, including the database files? How do I have iTunes automatically update the external HD folder, put the database files in there as well, and stop using the internal system drive for storage and the database?

    I did that a while ago. But now it seems there are 7 GB of stray files. Is there a sync feature that moves only the itunes files that are outside the designated itunes folder, in my case they are living on the system drive for some reason...

  • Suggestion:  Create a Database Design Forum

    I recommend the creation of a new forum dealing exclusively with database design questions, such as setting Primary Keys, Unique constraints, Check constraints, Indexes, schema-creation scripts, etc. There is no forum devoted exclusively to this topic now and I feel it would be very helpful to the user community. It would certainly make searching for answers to design questions much easier.

    Billy Verreynne wrote:
    Prohan wrote:
    I don't agree there.
    1. How to create a relational model certainly IS relevant to Oracle, which is a RELATIONAL DBMS.
    Oracle also supports data warehousing (star schema designs), network/hierarchical designs, object-relational designs - or pretty much any data model that you may come up with. Calling it just a relational DBMS is incorrect.
    2. Your point that logical models are independent of specific technology is correct. What you're missing is that if a specific technology makes use of a certain foundational body of knowledge, that knowledge is a legitimate topic for a forum whose users use that specific technology.
    That is putting the cart in front of the horse, IMO.
    I would rather see data modeling and logical database design done in a way that is untainted by specific vendor implementations and the technology used. There needs to be a clear line dividing the design from the implementation. If not, then design decisions can (and will) be made based not on correct logical data modeling principles, but on whether it can be "handled" by the technology. A design that is tainted like that will always be less than optimal (especially as technology is continually evolving and changing).
    An OTN forum for database design will invariably be tainted with Oracle technology - and instead of learning sound data modeling fundamentals, a warped view of data modeling will be conveyed, where doing abc will be acceptable (when it is not) because Oracle has feature xyz that can make the flawed design work (in a fashion).
    Excellent points. I think (or at least hope) such a forum would attract some number of pure theorists to straighten out the view. This might make for a lively forum, might actually influence the real products, and might even get the cart on the right side of the horse.
    Hmmm, I guess I do sound hopelessly optimistic.

  • If Auto Update Statistics is enabled in the database design, why do we need to update statistics as a maintenance plan?

    Hi Experts,
    If Auto Update Statistics is enabled in the database design, why do we need to update statistics as a daily/weekly maintenance plan?
    Vinai Kumar Gandla

    Hi Vikki,
    Many systems rely solely on SQL Server to update statistics automatically (AUTO UPDATE STATISTICS enabled). However, based on my research, large tables, tables with uneven data distributions, tables with ever-increasing keys and tables that have significant changes in distribution often require manual statistics updates, for the following reasons.
    1. If a table is very big, then waiting for 20% of the rows to change before SQL Server automatically updates the statistics could mean that millions of rows are modified, added or removed before it happens. Depending on the workload patterns and the data, this could mean the optimizer is choosing substandard execution plans long before SQL Server reaches the threshold where it invalidates the statistics for a table and starts to update them automatically. In such cases, you might consider updating statistics manually for those tables on a defined schedule (while leaving AUTO UPDATE STATISTICS enabled so that SQL Server continues to maintain statistics for other tables).
    2. In cases where you know the data distribution in a column is "skewed", it may be necessary to update statistics manually with a full sample, or to create a set of filtered statistics, in order to generate query plans of good quality. Remember, however, that sampling with FULLSCAN can be costly for larger tables, and must be done so as not to affect production performance.
    3. It is quite common to see an ascending key, such as an IDENTITY or date/time data type, used as the leading column in an index. In such cases, the statistics for the key rarely match the actual data unless we update the statistics manually after every insert.
    So in the cases above, we could perform manual statistics updates by creating a maintenance plan that runs the UPDATE STATISTICS command on a regular schedule. For more information about the process, please refer to the article:
    https://www.simple-talk.com/sql/performance/managing-sql-server-statistics/
    Regards,
    Michelle Li
