Noticing a lot of database index fragmentation yet no Health Analyzer alerts...? Best practice for database maintenance in 2013?

Could someone point me to a document on best practices for database maintenance with SharePoint 2013? I have read the 2010 document, but I'm hoping there is an updated one that I'm just missing.
My problem is that our DBA recently noticed that many of our SharePoint databases have high index fragmentation. I have the Health Analyzer rules enabled for index fragmentation and they run daily, but I've never received an alert, despite the majority
of our databases having greater than 40% fragmentation and some even above 95%.
Obviously it has our attention now and we want to get this addressed.  My understanding (which I now fear is at best incomplete, more likely just plain wrong) was that a maintenance plan wasn't needed for index fragmentation in 2010/2013 like it was
in 2007. 
Thanks,
Troy

It depends. Here are the rules for that job:
Sampled mode:
Page count > 24 and avg fragmentation in percent > 5,
or
Page count > 8 and avg page space used in percent < fill factor * 0.9
(The fill factor in SharePoint 2013 varies from 80 to 100 depending on the index; it is important not to adjust index fill factors.)
I have seen cases where the indexes are not automatically managed by the rule and require a manual defragmentation with a Full Scan, instead of Sampled. Once the Full Scan defrag completed, the timer job started handling the index fragmentation automatically.
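A minimal sketch of the manual check and defragmentation (assuming direct T-SQL access to the content database; 'DETAILED' is the scan mode corresponding to a full scan, and the index/table names in the rebuild are placeholders):

SELECT i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ips
JOIN sys.indexes AS i
  ON ips.object_id = i.object_id
 AND ips.index_id = i.index_id
WHERE ips.page_count > 24
  AND ips.avg_fragmentation_in_percent > 5
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Rebuild a badly fragmented index. REBUILD keeps the fill factor
-- SharePoint defined for the index, so nothing needs adjusting:
ALTER INDEX [IX_Example] ON [dbo].[ExampleTable] REBUILD;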
Trevor Seward
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

Similar Messages

  • Best practice for PK and indexes?

    Dear All,
    What is the best practice for making primary keys and indexes? Should we keep them in the same tablespace as the table, or should we create a separate tablespace for all indexes and the primary key? Please note I am talking about a table that has 21 million rows at the moment, increasing by 10k to 20k rows daily. This table is also heavily involved in daily reports and causing slow performance. Currently the complete table, with all associated objects such as indexes and the PK, is stored in one separate tablespace. Please advise how I can improve the performance of retrieval and DML operations on this table.
    Thanks in advance..
    Zia Shareef

    Well, thanks for the valuable advice... I am using Oracle 8i, and let me tell you the exact problem...
    My billing database has two major tables with almost 21 million rows each... one has collection data and the other one invoices... many reports show the data by joining the Customer + Collection + Invoices tables.
    There are 5 common fields between the invoices (reading) and collection tables:
    YEAR, MONTH, AREA_CODE, CONS_CODE, BILL_TYPE(adtl)
    One of my batch processes has the following update, and it is VERY, VERY slow:
    UPDATE reading r
    SET bamount = (SELECT SUM(camount)
                   FROM collection cl
                   WHERE r.ryear = cl.byear
                     AND r.rmonth = cl.bmonth
                     AND r.area_code = cl.area_code
                     AND r.cons_code = cl.cons_code
                     AND r.adtl = cl.adtl)
    WHERE area_code = 1
    Tentatively, area_code 1 has 20,000 consumers.
    Each consumer may have 72 invoices, and against these invoices there may be 200 rows in the collection table (the system has provision to record partial payments against one invoice).
    NOTE: Presently my process is based on cursors, so the above query runs for one consumer at a time, but just to give an idea I have written it for the whole area.
    Mr. Yingkuan, can you please tell me how I can check whether the table's statistics are current, and how to make them current? Do stale statistics really affect performance?
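    A minimal sketch of checking and refreshing statistics (a hedged example, assuming the DBMS_STATS package available since Oracle 8i; the table names come from this thread):

    -- When were the tables last analyzed? (NULL means never)
    SELECT table_name, num_rows, last_analyzed
    FROM   user_tables
    WHERE  table_name IN ('READING', 'COLLECTION');

    -- Refresh statistics so the optimizer sees current row counts
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,
        tabname => 'READING',
        cascade => TRUE);  -- also gathers index statistics
    END;
    /

    Stale or missing statistics really can hurt performance: the cost-based optimizer chooses join and access paths from these row counts.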

  • BEST PRACTICES FOR CREATING DISCOVERER DATABASE CONNECTION -PUBLIC VS. PRIV

    I have enabled SSO for Discoverer. So when you browse to http://host:port/discoverer/viewer you get prompted for your SSO
    username/password. I have enabled users to create their own private
    connections. I logged in as portal and created a private connection. Then, from
    Oracle Portal, I created a portlet and added a Discoverer worksheet using the private
    connection that I created as the portal user. This works fine... users access
    the portal and can see the worksheet. When they click the analyze link, the
    users are prompted to enter a password for the private connection. The
    following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or
    because the public connection password was invalid. Please enter the correct
    password now to continue.
    I originally created a public connection... and then followed the same steps from Oracle Portal to create the portlet and display the
    worksheet. The worksheet is displayed properly from Portal; when users click the
    analyze link they are taken to Discoverer Viewer without having to enter a
    password. The problem with this is that when a user browses to
    http://host:port/discoverer/viewer they enter their SSO information and then
    any user with an SSO account can see the public connection...very insecure!
    When private connections are used, no connection information is displayed to
    SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the Worksheet portlet from Portal, I enter the following for Database
    Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is what are the best practices for creating Discoverer Database
    Connections.
    Is there a way to create a public connection, but not display it at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    So overall, I want roughly 40 users to have access to my Portal Page Group. I then want to
    display portlets with Discoverer worksheets. Certain worksheets I want to have
    the ability to display the analyze link. When the SSO user clicks on this they
    will be taken to Discoverer Viewer and prompted for no logon information. All
    SSO users will see the same data...there is no need to restrict access based on
    SSO username...1 database user will be set up in either the public or private
    connection.

    You can make this happen by creating a private connection for the 40 users via a CAPI script, and, when creating the portlet, selecting the second option in the Users Logged In section. With that, the portlet uses each user's own private connection every time the user logs in, so it won't ask for a password.
    Another thing: there is an option of entering a password or not in ASC, in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks,
    Kiran

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised not to find a best practice document for how to distribute Microsoft SQL databases. With other database formats, it's common to distribute them as scripts. That feature seems rather limited with the built-in tools Microsoft provides; there appear to be limits on the length of the script. We're looking to distribute a database several GBs in size. We could detach the database or provide a backup, but that has its own disadvantages, limiting which versions of SQL Server will accept the database.
    What do you recommend and can you point me to some documentation that handles this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrades, and these include the "Copy Database" wizard, BACKUP/RESTORE, detach/attach, script generation, Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most RELIABLE method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs; that is not an issue if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it can be unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. This generates downtime for the detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine, and while not as practical as the other options, it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no "best practice" for this. There are multiple features, each offering their own advantages and downfalls which allow them to align to different business requirements.
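    A minimal sketch of the backup/restore approach (the database name, paths, and logical file names are placeholders; the destination must be the same or a newer SQL Server version):

    -- On the source instance
    BACKUP DATABASE [SalesDB]
    TO DISK = N'C:\Backups\SalesDB.bak'
    WITH INIT;

    -- On the destination instance. The logical file names for MOVE can be
    -- listed first with: RESTORE FILELISTONLY FROM DISK = N'C:\Backups\SalesDB.bak'
    RESTORE DATABASE [SalesDB]
    FROM DISK = N'C:\Backups\SalesDB.bak'
    WITH MOVE 'SalesDB' TO N'E:\Data\SalesDB.mdf',
         MOVE 'SalesDB_log' TO N'E:\Logs\SalesDB_log.ldf';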

  • Best Practice for Removing Zeroes from Database

    Does anyone have some clever bits of code or best practices for evaluating a database and instances of zeroes? I'm working on cleaning up our rules file and am thinking the best way to start would be to write some code to look for zeroes and write them to a log file. This would at least indicate if there was even a problem with zeroes (which there may or may not be).
    Any suggestions out there / utilities / code samples?
    Thanks.

    We accomplished this using data extracts from a subset of scenarios/years/entities/accounts, to ensure that all of our potential rules could be checked for writing zeros. This worked pretty well for our purposes. A text editor called EmEditor allows VB macros fairly easily, and we could write a quick macro to check for strings ending in "; 0". You may also want to review the "calculated" checkbox in your extract and see whether the zeros are a result of calculations. A rule output could work pretty well too, although it would take some defining: you would have to write it out in a sub and make sure that you capture the data of all subroutines, if your zeros are rule-driven rather than actual inputs. You may also want to check whether very small, insignificant values are being written; we have seen items with a value 13 places to the right of the decimal that was not really significant.
    JTF

  • Best Practice for the database owner of an SAP database.

    We recently had a user account removed from our SAP system when this person left the agency.  The account was associated with the SAP database (he created the database a couple of years ago). 
    I'd like to change the owner of the database to <domain>\<sid>adm  (ex: XYZ\dv1adm)  as this is the system admin account used on the host server and is a login for the sql server.  I don't want to associate the database with another admin user as that will change over time.
    What is the best practice for the database owner of an SAP database?
    Thanks
    Laurie McGinley

    Hi Laurie
    I'm not sure if this is best practice or not, but I've always had the SA user as the owner of the database. It just makes restores to other systems etc. easier.
    Ken
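    For SQL Server 2005 and later, a minimal sketch of the ownership change (the database name is a placeholder; the login XYZ\dv1adm is the example from the question):

    -- Make the <domain>\<sid>adm login the database owner
    ALTER AUTHORIZATION ON DATABASE::[DV1] TO [XYZ\dv1adm];

    The older sp_changedbowner procedure does the same thing but is deprecated in recent versions.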

  • Best Practices for Using Service Controller for Entity Framework Database

    I'm running into an issue in my first time creating a Web Service with a .NET backend with Azure. I designed a database in Entity Framework and had it create the models, but I couldn't create a controller for the table unless I made the model inherit from
    EntityData. Here's the catch, the Database Model has int Id, but EntityData has string Id, so, of course, I'm getting errors. What is best practice for what I'm trying to do?
    Michael DiLeo

    Hi Michael,
    Thanks for your posting!
    Sorry, I don't fully understand your issue. There are two points that need your confirmation:
    1. I am confused by "service controller". Do you mean an MVC controller, or the ServiceController class (http://www.codeproject.com/Articles/31688/Using-the-ServiceController-in-C-to-stop-and-start)?
    2. Does the type of the Id in the model match the database? In other words, is the type of the Id in the .edmx matched to the database?
    By the way, it seems that this issue is more related to EF. You could post it in the EF discussion forum for better support.
    Thanks & Regards,
    Will

  • What are the best practices for Database management and performance tuning?

    Hello,
    I want to ensure that I am using the best practices for managing and maintaining our Database.
    Is there any documentation out there that outlines how to maintain and ensure top performance out of our database?
    Thank you!
    John Sefton

    I appreciate the responses; however, this is not the information I am looking for.
    I am specifically looking for best practices involving management and performance tuning.
    For example: are there tools I can install that will monitor the size and response time of the database and alert me if there is degradation in performance?
    Are there specific periodic activities I should be doing to guarantee that my database will continue to function the way it is supposed to?
    Or is this a fire-and-forget solution that does not need this attention?
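    If the platform is SQL Server, one simple periodic check is tracking file sizes per database (a hedged sketch using catalog views available in SQL Server 2005 and later):

    -- Data/log file sizes for every database; size is stored in 8 KB pages
    SELECT d.name AS database_name,
           mf.name AS logical_file,
           mf.type_desc,
           mf.size * 8 / 1024 AS size_mb
    FROM sys.master_files AS mf
    JOIN sys.databases AS d
      ON mf.database_id = d.database_id
    ORDER BY d.name, mf.type_desc;

    Scheduled as an Agent job that logs to a table, this gives a growth history you can alert on.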

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company for tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2,500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Tables columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]
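    For reference, the consolidation usually recommended in this situation is a single table for all devices, keyed by a device column (a hedged sketch; the types are assumptions and the remaining Message* columns are elided):

    CREATE TABLE dbo.DeviceMessages (
        MessageID    BIGINT IDENTITY(1,1) NOT NULL,
        DeviceID     INT          NOT NULL,  -- replaces the per-device table name
        MessageUnit  INT          NULL,
        MessageLong  DECIMAL(9,6) NULL,
        MessageLat   DECIMAL(9,6) NULL,
        MessageSpeed SMALLINT     NULL,
        MessageTime  DATETIME     NULL,
        -- ...remaining Message* columns...
        CONSTRAINT PK_DeviceMessages PRIMARY KEY CLUSTERED (DeviceID, MessageID)
    );

    -- Supports the monthly per-device report queries
    CREATE NONCLUSTERED INDEX IX_DeviceMessages_DeviceTime
        ON dbo.DeviceMessages (DeviceID, MessageTime);

    One table means one set of indexes and statistics to maintain, and queries across devices stop needing 2,500-way unions.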

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come across in my 9 months of work at the company (working as a software engineer, but I am planning to take over database maintenance, since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, and some have a few. This is when the real issues start: they are pulling their data from our server over the internet while 2,000 units are sending data to it, and they keep getting read timeouts, since SQL Server gives priority to the inserts and holds all the select commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve. The problem is that whoever wrote the C# app used hard-coded SQL statements,
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003. 
    Now talking about reports: there are summary reports, stop reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands, but does SQL Server automatically select from the snapshots, or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case, since our database size is 78 GB.
    When I run code analysis on the app, Visual Studio tells me I had better use stored procedures and views rather than hard-coded select statements; what difference will this make in terms of performance?
    Thanks in advance. 
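    On the snapshot question: with the READ_COMMITTED_SNAPSHOT database option enabled, readers use row versions automatically under the default isolation level, so no application change is needed (a hedged sketch, assuming SQL Server 2005 or later; TrackingDB is a placeholder name, and row versioning does add tempdb load):

    -- Needs exclusive access to the database; run in a maintenance window
    ALTER DATABASE TrackingDB SET READ_COMMITTED_SNAPSHOT ON;
    -- From then on, ordinary SELECTs read the last committed row version
    -- instead of blocking behind in-flight INSERTs/UPDATEs.

    Unlike the "Read Uncommitted" workaround, this does not return dirty data.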

  • Best Practice For Database Parameter ARCH_LAG_TARGET and DBWR CHECKPOINT

    Hi,
    For best practice, I need to know the recommendation or guideline concerning these two database parameters.
    I found that for ARCH_LAG_TARGET, Oracle recommends setting it to 1800 sec (30 min).
    Maybe some one can guide me with these 2 parameters...
    Cheers

    Dear unsolaris,
    First of all, if you want to track full and incremental checkpoints, set the LOG_CHECKPOINTS_TO_ALERT parameter to TRUE. You will then see the checkpoint SCNs and completion times in the alert log.
    A full checkpoint is triggered when a log switch happens, and the checkpoint position in the controlfile is written into the datafile headers. For just a really tiny amount of time the database can be consistent even though it is open and in read/write mode.
    The ARCH_LAG_TARGET parameter is disabled (set to 0) by default. Here is the definition of that parameter:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams009.htm
    If you do want to set this parameter, Oracle recommends 1800, as you have said. This can vary from database to database, and it is better to verify the value by experiment.
    Regards.
    Ogan
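    A minimal sketch of setting both parameters dynamically (assuming an spfile; the values are the ones discussed above, not universal recommendations):

    -- Force a log switch (and archive) at least every 1800 seconds
    ALTER SYSTEM SET ARCH_LAG_TARGET = 1800 SCOPE = BOTH;

    -- Record checkpoint start and completion in the alert log
    ALTER SYSTEM SET LOG_CHECKPOINTS_TO_ALERT = TRUE SCOPE = BOTH;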

  • Best practice for database migration in 11g

    Hello,
    Database migration is required due to an OS change. Here, I have two database instances, say A and B, on the old server, where RDBMS_VERSION is 11.1.0.7.0. They need to be migrated to a new OS where Oracle has been installed at version 11.2.0.2.0.
    Since all data + objects need to be migrated into the new server, I want to know what the best practice is and how to do that. Thanks in advance for your necessary guidance.
    Thanks and Regards,
    Prosenjit

    Hi Prosenjit,
    you have some options.
    1. RMAN Restore: you can restore your database via rman to the new host, and then upgrade it.
        Please follow instruction from MOS Note: RMAN Restore of Backups as Part of a Database Upgrade (Doc ID 790559.1)
    2. Data Guard: check the MOS Note: Mixed Oracle Version support with Data Guard Redo Transport Services (Doc ID 785347.1)
    3. Full Export / Import (DataPump)
    Borys
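    A minimal sketch of option 3 (hedged: the credentials and dump file names are placeholders, and DATA_PUMP_DIR is the default directory object; run expdp on the 11.1.0.7 host and impdp on the 11.2.0.2 host after copying the dump file over):

    expdp system/password FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_db.dmp LOGFILE=full_exp.log

    impdp system/password FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_db.dmp LOGFILE=full_imp.log

    Data Pump handles the version difference here because importing into a newer release is supported.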

  • Best Practice for Plan for Every Part (PFEP) Database/Dashboard?

    Hello All-
    I was wondering if anyone had experience with implementing / developing a Plan for Every Part (PFEP) Database in SAP. My company is looking to migrate its existing PFEP solution (Custom developed Excel/Access system) into SAP. If you are unfamiliar, a PFEP is a dashboard view of a part/material that provides various business groups with dedicated views to data from Material Masters, Info Records, and Vendor Master Records and combines it with historical/forecasting information. The goal is to provide a single source to all the part/material settings for a given part.
    Is there a Best Practice PFEP in SAP? Or is this something that most companies custom develop in ERP or BI?
    Thanks in advance.
    -Ron

    I think you will likely get a response in SAP ERP - Logistics Materials Management (SAP MM)
    Additionally, you might want to do some searches based on SAP Lean Inventory, perhaps Kanban. I am assuming you are not using WM or EWM either?
    Where I have seen PFEP incorporated into the supply chain strategy, it typically requires not inconsiderable additions to the alternate UoM in MM, dropping of automatic replenishment levels (reorder level), and rethinking aspects of the MRP plan, so be prepared for significant additional data management work if you haven't already started on that. I believe Ryder Logistics uses PFEP and their SAP infrastructure is managed by IBM; it might be an idea to try to find a LinkedIn resource from there. You may also find one of the ASUG supply chain, logistics, MM or WM SIGs a good place to ask questions and look for answers.

  • Best practice for database move to new disk

    Good morning,
    Hopefully this is a straightforward question/answer, but we know how these things go...
    We want to move a SQL Server Database data file (user database, not system) from the D: drive to the E: drive.
    Is there a best practice method?
    My colleague has offered "ALTER DATABASE XXXX MODIFY FILE" whilst I'm more inclined to use "sp_detach_db".
    Is there a best practice method or is it much of a muchness?
    Regards,
    Andy

    Hello,
    A quick search on MSDN blogs does not show any official statement about ALTER DATABASE – MODIFY FILE vs. ATTACH. However, you can see a huge number of articles promoting and supporting the use of ALTER DATABASE in any scenario (replication, mirroring, snapshots, Always On, SharePoint, Service Broker).
    http://blogs.msdn.com/b/sqlserverfaq/archive/2010/04/27/how-to-move-publication-database-and-distribution-database-to-a-different-location.aspx
    http://blogs.msdn.com/b/sqlcat/archive/2010/04/05/moving-the-transaction-log-file-of-the-mirror-database.aspx
    http://blogs.msdn.com/b/dbrowne/archive/2013/07/25/how-to-move-a-database-that-has-database-snapshots.aspx
    http://blogs.msdn.com/b/sqlserverfaq/archive/2014/02/06/how-to-move-databases-configured-for-sql-server-alwayson.aspx
    http://blogs.msdn.com/b/joaquint/archive/2011/02/08/sharepoint-and-the-importance-of-tempdb.aspx
    You cannot find the same about ATTACH. In fact, I found the following article:
    http://blogs.msdn.com/b/sqlcat/archive/2011/06/20/why-can-t-i-attach-a-database-to-sql-server-2008-r2.aspx?Redirected=true
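    A minimal sketch of the ALTER DATABASE approach (the XXXX name comes from the question; the logical file name and target path are placeholders):

    -- 1. Point the catalog entry at the new location
    ALTER DATABASE XXXX MODIFY FILE
        (NAME = 'XXXX_Data', FILENAME = N'E:\Data\XXXX.mdf');

    -- 2. Take the database offline, copy the file from D: to E: in the OS,
    --    then bring it back online
    ALTER DATABASE XXXX SET OFFLINE WITH ROLLBACK IMMEDIATE;
    ALTER DATABASE XXXX SET ONLINE;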
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Symantec antivirus Best practice for oracle database on windows server 2003

    Hi all,
    I have an Oracle database server, version 10.2.0.4, on the Windows Server 2003 platform. What would be the best practice for running Symantec antivirus on that server, and which database files should be excluded from scanning?
    My server has rebooted unexpectedly many times; in the event log I have event ID 6008. What may be the cause of it?

    Normally, you don't run a virus scanner on a database server because your database server isn't vulnerable to viruses. It's behind firewalls, people aren't reading mail on it, people aren't plugging thumb drives into it, etc. If you do decide that you need to run a virus scanner on a database server, at least exclude the Oracle data files from the scan. Oracle gets very unhappy if someone else tries to open its data files (or, worse, if someone opens a data file before it gets the chance to acquire exclusive access).
    Justin

  • Best practice for tracking database changes...?

    Dear Oracle gurus,
    I'm still relatively new to database administration, and recently I ran into a situation for which I'm not sure there is a textbook scenario analysis or practice.
    I find it hard to track all the database changes across different servers. Our company develops software that uses the Oracle database, so we have development and test servers set up here and there, with really minimal control over them. Problems arise when we make rapid design changes to our system, which require multiple and rapid changes to the databases. I find it really hard to keep track of everything, because sometimes I can't patch a server because people are still using it for development/testing/investigation/etc.
    So, are there good practices for tracking database changes (which we even write patches for), monitoring schema modifications, or maybe even versioning database objects? I've tried to find some information, but I think I did not look in the right places or ask the right questions.
    Any help is appreciated.
    Best regards,
    Peter Tung

    The first thing I would start with is:
    Find a version control system that will allow you to store files and version them (PVCS for example). You could for example, store all the sql scripts. Whenever a change is needed, the user could check the program out from the version control tool and make changes and check it back in. Besides sql scripts, you could also store binary files or any type of source code files in a version control system. This would at least put some things in order. In a version control system, you could associate a number or a string with all the files within a patch.
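    A common complement to the version-control approach is to have every patch script record itself in the database, so each server reports exactly which changes it has received (a hedged sketch; the table and column names are illustrative):

    -- Run once per database
    CREATE TABLE schema_version (
      patch_number NUMBER        NOT NULL PRIMARY KEY,
      description  VARCHAR2(200),
      applied_on   DATE          DEFAULT SYSDATE NOT NULL
    );

    -- Last statement of every patch script kept in version control
    INSERT INTO schema_version (patch_number, description)
    VALUES (42, 'Example: add index on invoice lookup columns');
    COMMIT;

    Comparing this table across development and test servers shows immediately which patches a given server is missing.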
