Index fill factor

Hi All,
We have around 150 tables and we are planning to change the fill factor to 90% on all of them. If any of you have a script for this, please suggest one.
Regards
subu

Use the following script to change the fill factor of the indexes. Just put your database name in place of AdventureWorks2008R2. The script rebuilds all indexes on all tables of the database with a fill factor of 90%.
DECLARE @Database VARCHAR(255)  
DECLARE @Table VARCHAR(255)  
DECLARE @cmd NVARCHAR(500)  
DECLARE @fillfactor INT
SET @fillfactor = 90
DECLARE DatabaseCursor CURSOR FOR  
SELECT name FROM master.dbo.sysdatabases  
WHERE name IN ('AdventureWorks2008R2')  
ORDER BY 1  
OPEN DatabaseCursor  
FETCH NEXT FROM DatabaseCursor INTO @Database  
WHILE @@FETCH_STATUS = 0  
BEGIN  
   SET @cmd = 'DECLARE TableCursor CURSOR FOR SELECT ''['' + table_catalog + ''].['' + table_schema + ''].['' +
  table_name + '']'' as tableName FROM [' + @Database + '].INFORMATION_SCHEMA.TABLES
  WHERE table_type = ''BASE TABLE'''  
   -- create table cursor  
   EXEC (@cmd)  
   OPEN TableCursor  
   FETCH NEXT FROM TableCursor INTO @Table  
   WHILE @@FETCH_STATUS = 0  
   BEGIN  
       IF (@@MICROSOFTVERSION / POWER(2, 24) >= 9)
       BEGIN
           -- SQL 2005 or higher command
           SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ')'
           EXEC (@cmd)
       END
       ELSE
       BEGIN
          -- SQL 2000 command
          DBCC DBREINDEX(@Table, '', @fillfactor)  -- '' rebuilds all indexes on the table
       END
       FETCH NEXT FROM TableCursor INTO @Table  
   END  
   CLOSE TableCursor  
   DEALLOCATE TableCursor  
   FETCH NEXT FROM DatabaseCursor INTO @Database  
END  
CLOSE DatabaseCursor  
DEALLOCATE DatabaseCursor 
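Once the script has run, it is worth verifying the result. A minimal check, run in the target database (sys.indexes reports fill_factor = 0 for indexes that use the server-wide default):
-- List the fill factor recorded for every index on every user table.
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
       OBJECT_NAME(i.object_id)        AS table_name,
       i.name                          AS index_name,
       i.fill_factor
FROM sys.indexes AS i
JOIN sys.tables AS t ON t.object_id = i.object_id
WHERE i.type > 0   -- skip heaps
ORDER BY schema_name, table_name, index_name;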

Similar Messages

  • Merge table indexes fill factor

    Shouldn't indexes such as MSmerge_current_partition_mappings.ncMSmerge_current_partition_mappings have a fill factor other than 0? I am getting many page splits on this and other merge table indexes; my server has many of these indexes set to 0.
    We are using merge replication with 120 subscribers on SQL Server 2008 R2, on the distribution server.

    These are large tables: 62 million rows in MSmerge_current_partition_mappings. We are seeing high IO numbers, so I ran this query, with the results below:
    SELECT COUNT(1) AS NumberOfSplits, AllocUnitName, Context
    FROM fn_dblog(NULL, NULL)
    WHERE Operation = 'LOP_DELETE_SPLIT'
    GROUP BY AllocUnitName, Context
    ORDER BY NumberOfSplits DESC
    NumberOfSplits  AllocUnitName                                                                 Context
    984             dbo.MSmerge_current_partition_mappings.ncMSmerge_current_partition_mappings   LCX_INDEX_LEAF
    443             dbo.MSmerge_contents.uc1SycContents                                           LCX_CLUSTERED
    340             dbo.MSmerge_contents.nc5MSmerge_contents                                      LCX_INDEX_LEAF
    268             dbo.MSmerge_current_partition_mappings.cMSmerge_current_partition_mappings    LCX_CLUSTERED
    208             dbo.MSmerge_contents.nc3MSmerge_contents                                      LCX_INDEX_LEAF
    159             dbo.MSmerge_contents.nc4MSmerge_contents                                      LCX_INDEX_LEAF
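    For reference, a fill factor can be set on a single one of these indexes with a plain rebuild (a sketch; the 80 here is an arbitrary starting value, not a recommendation, and changes to replication system tables should be tested on a non-production copy first):
    -- Rebuild one merge system index with an explicit fill factor (80 is an arbitrary example).
    ALTER INDEX ncMSmerge_current_partition_mappings
    ON dbo.MSmerge_current_partition_mappings
    REBUILD WITH (FILLFACTOR = 80);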

  • SP2013 Default Index Fill factor

    Hi - does anyone have a definitive answer on what the fill factor setting should be for SP2013? At the moment I have it at 80, but I just read the post below and am now thinking I should change it to zero:
    http://thesharepointfarm.com/2013/04/the-fill-factor-mystery/
    Thanks
    J

    The fill factor varies. For the default server setting, keep it at 80; SharePoint sets its own fill factor on each index.
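    If you want to confirm the instance-wide default, a quick check looks like this (a sketch; 'show advanced options' must be enabled before the setting is visible):
    -- Display the server-wide default fill factor (0 and 100 both mean "fill pages completely").
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'fill factor (%)';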
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Fill Factor and Too Much Space Used

    Okay, I am on SQL Server 2012 SP2. I am doing a simple update on some large tables: there is an INT column that allows NULLs, and I am changing the NULL values to be equal to the values in another integer column of the same table. (Please don't ask why I am duplicating data, as that is a long story!)
    So it's a very simple update, and these tables are about 65 million rows, so you can calculate how much space it should increase by. Basically, it should increase by 8 bytes * 65 million = ~500 MB, right?
    However, when I run these updates the space increases by about 3 GB per table. What would cause this behavior?
    Also, the fill factor on the server is 90%, and this column is not in the PK or any of the 7 nonclustered indexes. The table is used in horizontal partitioning, but the column is not part of the constraint.
    Any help is much appreciated...

    Hi CLM,
    Some information about the INT data type before going into the details of the update process:
    - an INT is 4 bytes, not 8!
    - an INT is a fixed-length data type
    Unfortunately we don't know anything about the table structure (columns, indexes), but based on your observation I presume a table with multiple indexes, and in particular a nonclustered, non-unique index on the column you are updating.
    To understand why an update of an INT attribute doesn't affect the space of the table itself, you need to know a few things about the record structure. The first 4 bytes of a record header describe the structure and the type of the record. Please take the following table structure (it is a HEAP) as the example for the descriptions that follow:
    CREATE TABLE dbo.demo
    (
        Id INT NOT NULL IDENTITY (1, 1),
        c1 INT NULL,
        c2 CHAR(100) NOT NULL DEFAULT ('just a filler')
    );
    The table is a HEAP with no indexes and the column [c1] is NULLable. After 10,000 records have been added to the table...
    SET NOCOUNT ON;
    GO
    INSERT INTO dbo.demo WITH (TABLOCK) DEFAULT VALUES
    GO 10000
    ... the record structure for the first record looks like the following. I will first determine the physical position of the record and then create an output with DBCC PAGE:
    SELECT pc.*, d.Id, d.c1
    FROM dbo.demo AS d CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS pc
    WHERE d.Id = 1;
    In my example the first record is allocated on page 168. To examine this page you can use DBCC PAGE, but keep in mind that it isn't documented. You have to enable the output of DBCC PAGE by turning on trace flag 3604 with DBCC TRACEON:
    DBCC TRACEON (3604);
    DBCC PAGE (demo_db, 1, 168, 1);
    The output of the above command shows all data records allocated on page 168. The next output represents the first record:
    Slot 0, Offset 0x60, Length 115, DumpStyle BYTE
    Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP Record Size = 115
    The above output shows the record header which describes the type of record and it gives some information about the structure of the record. It is a PRIMARY RECORD which contains NULLable columns. The length of the record is 115 bytes.
    The next output shows the first bytes of the record and how to interpret them:
    10 00 PRIMARY RECORD and NULLable columns
    70 00 OFFSET for information about number of columns (112)
    01 00 00 00 Value of ID
    00 00 00 00 Value of C1
    6a757374 Value (begin) of C2
    It might seem complicated, but you will find a very good explanation of record structures in the book "SQL Server Internals" by Kalen Delaney.
    The first two bytes (0x1000) describe the record type and its structure. Bytes 3 and 4 define the offset in the record where the information about the number of columns can be found. As you can see from the value, that offset points near the end of the record. The reason is quite simple: this information is stored BEHIND the fixed-length data of the record. As the output above shows, the position is 112; subtracting the 4 bytes of the record header leaves 108, and Id (4 bytes) + C1 (4 bytes) + C2 (100 bytes) = 108. All the columns are FIXED-length columns, including C1, because INT is a fixed-length data type!
    The next 4 bytes hold the value stored in Id (0x01000000, which is 1). C1 is filled with placeholders for a possible value. If we update it with a new value, the preallocated space in the record structure is filled and NO extra space is used. So, based on this simple table structure, growth of the table itself WILL NOT OCCUR!
    Given that finding, the question is: WHAT causes the additional allocation of space?
    It can only be the nonclustered indexes. Let's assume we have an index on c1 (which is full of NULLs). If you update the table with values, the NCI is updated too: for each update from NULL to a value, a new record is added to the NCI. What is the size of the new record in the NCI?
    We have the record header, which is 4 bytes. If the table is a heap we also have a RID, which is 8 bytes. If the table has a clustered index, the size instead depends on the size of the clustered key(s); if it is only an INT, it is 4 bytes. In the given example I have to add 8 bytes because it is a HEAP!
    On top of that (now 12 bytes) we have to add the size of the column itself, which is 4 bytes. Last but not least, additional space is allocated if the index isn't unique (+4 bytes), allows NULL, ...
    In the given example a nonclustered index record will consume 4 bytes for the header + 8 bytes for the RID + 4 bytes for C1 + 4 bytes because the index isn't unique + 2 bytes for the NULL bitmap = 22 bytes!
    Now multiply that size by your number of records. Then add the calculated size for EACH additional record, and don't forget page splits, too! If the values for the index are not contiguous you will have hundreds of page splits when the data is added to the index(es) :). In this case the fill factor is worthless because of the huge amount of data...
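    If you want to verify the actual record sizes instead of calculating them by hand, something like this should work (a sketch based on the dbo.demo example above; note that DETAILED mode reads every page of the index):
    -- Average and maximum record size per index level, including nonclustered indexes.
    SELECT OBJECT_NAME(ps.object_id)   AS table_name,
           i.name                      AS index_name,
           ps.index_level,
           ps.avg_record_size_in_bytes,
           ps.max_record_size_in_bytes
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.demo'), NULL, NULL, 'DETAILED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id AND i.index_id = ps.index_id;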
    Get more information about my arguments here:
    Calculation of index size:
    http://msdn.microsoft.com/en-us/library/ms190620.aspx
    Structure of a record:
    http://www.sqlskills.com/blogs/paul/inside-the-storage-engine-anatomy-of-a-record/
    PS: I remember my first international speaking engagement, which was in 2013 in Stockholm (Erland may remember it!). I was talking about the internal structures of the database engine as a starting point for my "INSERT / UPDATE / DELETE - deep dive" session. There was one guy who asked, in a rather bored manner: "Why do we need to know this nonsense?" I was stumbling because I didn't have the right answer then. Now I would answer that knowing about record structure and internals lets you estimate future storage size much more accurately :)
    You can watch it here, but I wouldn't recommend it :)
    http://www.sqlpass.org/sqlrally/2013/nordic/Agenda/VideoRecordings.aspx
    MCM - SQL Server 2008
    MCSE - SQL Server 2012
    db Berater GmbH
    SQL Server Blog (German only)

  • How can I test the effect of fill factor?

    I noticed that the fill factor in my database script is set to 80, and I believe I could optimize performance by setting it to 100. After setting it to 100, how can I test whether it has had a positive effect or not? Currently my database has about 50 tables and thousands of records. Please advise.
    mayooran99

    You have to monitor the Page Splits/sec counter. If this counter increases considerably after you change the fill factor (FF) from 80 to 100, it may be an indicator that your FF setting is too high. You can monitor the page splits counter in Perfmon, but a caveat is that this counter accumulates page splits across all databases on a particular SQL Server. If the clustered index is on an ever-increasing numeric field (like an identity column), page splits do happen at the end as data gets added; this is not necessarily bad, but the Perfmon counter (Page Splits/sec) includes the counts for this type of page split too, and those should be ignored.
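    If you prefer to sample the counter from T-SQL instead of Perfmon, something like this works (a sketch; the value is cumulative since instance start, so take two snapshots some time apart and compare):
    -- Cumulative page splits since the instance started (all databases combined).
    SELECT object_name, counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page Splits/sec'
      AND object_name LIKE '%Access Methods%';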
    Check your daily index fragmentation rate. This can be done by storing the index fragmentation levels in a custom table and comparing them with the values after you increase FF to 100.
    However, for heavily inserted/updated tables, try changing the FF value to 90 first (there is no point in changing FF to 100 for heavily inserted/updated tables, as they are bound to incur page splits).
    In general, changing the FF from 80 to 100 may improve the read performance of your queries, as more data fits into a single page.
    There is no blanket percentage that is appropriate or optimal for all tables. It all depends on the data and how frequently the key column is updated. So the only correct answer is TEST, TEST, TEST...
    Satish Kartan www.sqlfood.com

  • Noticing a lot of database index fragmentation yet no Health Analyzer alerts...? Best practice for database maintenance in 2013?

    Could someone point me to a document on best practices for database maintenance with SharePoint 2013? I have read the 2010 document, but I'm hoping there is an updated one that I'm just missing.
    My problem is that our DBA recently noticed that many of our SharePoint databases have high index fragmentation. I have the Health Analyzer rules for index fragmentation enabled and they run daily, but I've never received an alert despite the majority of our databases having greater than 40% fragmentation; some are even above 95%.
    Obviously it has our attention now and we want to get it addressed. My understanding (which I now fear is at best incomplete, more likely just plain wrong) was that a maintenance plan wasn't needed for index fragmentation in 2010/2013 like it was in 2007.
    Thanks,
    Troy

    It depends. Here are the rules for that job:
    Sampled mode:
    - Page count > 24 and avg fragmentation in percent > 5, or
    - Page count > 8 and avg page space used in percent < fill_factor * 0.9
    (Fill factor in SharePoint 2013 varies from 80 to 100 depending on the index; it is important not to adjust index fill factors.)
    I have seen cases where the indexes are not automatically managed by the rule and require a manual defragmentation with a Full Scan, instead of Sampled. Once the Full Scan defrag completed, the timer job started handling the index fragmentation automatically.
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Invalid column name 'origfillfactor'

    Hi,
    Getting the message "Failed to load source model. [Microsoft][ODBC SQL Server Driver][SQL Server]Invalid column name 'origfillfactor'" when the OMWB is loading the source model.
    I am using OMWB 9.2.0.1.7 Build 20031209 with the MS SQL Server 7.0 plugin. The source DB is SQL Server 7.
    In case this is related to the index fill factor: the current setting on SQL Server (SQL Server Properties - Database Settings) is Default (optimal). Should this be changed to a fixed value?
    Any hints to solve this... thanks

    Hi,
    I found that there is a column named "OrigFillFactor" in the sysindexes table in SQL Server.
    Has anyone come across problems related to field names (especially of system tables) while migrating from SQL Server 7 to Oracle 8? I am not sure why this error is popping up, but I wanted to know how such field-name problems were resolved; maybe the same might work for me too!
    Thanks in anticipation...

  • Need a Walkthrough on How to Create Database & Transaction Log Backups

    Is this the proper forum to ask for this type of guidance? There has been bad blood between my department (Research) and the MIS department for 30 years, and long story short, I have been "given" a virtual server and cut loose by my MIS department: installs, updates, backups, etc. are all my responsibility now. I believe I have everything running really well, with the exception of my transaction log backups; my storage unit is running out of space on a daily basis, so I feel like I have to be doing something wrong.
    If this is the proper forum, I'll supply the details of how I currently have things set up, and I'm hoping that with some loving guidance I can work the kinks out of my backup plan. High level: this is for a SQL Server 2012 instance running on a Windows Server 2012 machine...

    Thanks all, after posting this I'm going to read the materials provided above.  As for the details:
    I'm running on a virtual Windows Server 2012 Standard, Intel Xeon CPU 2.6 GHz with 16 GB of RAM; 64 bit OS.  The computer name is e275rd8
    Drives (NTFS, Compression off, Indexing on):
    DB_HVSQL_SQL-DAT_RD8-2(E:) 199 GB (47.2 used; 152 free)
    DB_HVSQL_SQL-Dat_RD8(F:) 199 GB (10.1 used; 189 free)
    DB_HVSQL_SQL-LOG_RD8-2(L:) 199 GB (137 used; 62 free) **
    DB_HVSQL_SQL-BAK_RDu-2(S:) 99.8 GB (64.7 used; 35 free)
    DB_HVSQL_SQL-TMP_RD8-2(T:) 99.8 GB (10.6 used; 89.1 free)
    SQL Server:
    Product: SQL Server Enterprise (64-bit)
    OS: Windows NT 6.2 (9200)
    Platform: NT x64
    Version: 11.0.5058.0
    Memory: 16384 (MB)
    Processors: 4
    Root Directory: f:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL
    Is Clustered: False
    Is HADR Enabled: False
    Database Settings:
    Default index fill factor: 0
    Default backup media retention (in days): 0
    Compress backup is checkmarked/on
    Database default locations:
      Data: E:\SQL\Data
      Log: L:\SQL\LOGs
      Backup: S:\SQLBackups
    There is currently only one database: DistrictAssessmentDW
    To create my backups, I'm using two maintenance plans, and this is where I'm pretty sure I'm not doing something correctly.  My entire setup is me just guessing what to do, so feel free to offer suggestions...
    Maintenance Plan #1: Backup DistrictAssessmentDW
      Scheduled to run daily Monday Through Friday at 3:33 AM
      Step 1: Backup Database (Full) 
        Backup set expires after 8 days 
        Back up to Disk (S:\SQLBackups)
        Set backup compression: using the default server setting
      Step 2: Maintenance Cleanup Task
        Delete files of the following type: Backup files
        Search folder and delete files based on an extension:
          Folder: L:\SQL\Logs
          File extension: trn
          Include first-level subfolders: checkmarked/on
        File age: Delete files based on the age of the file at task run time older than 1 Day
      Step 3: Maintenance Cleanup Task
        Delete files of the following type: Backup files
        Search folder and delete files based on an extension:
          Folder: S:\SQLBackups
          File extension: bak
          Include first-level subfolders: checkmarked/on
        File age: Delete files based on the age of the file at task run time older than 8 Days
    Maintenance Plan #2: Backup DistrictAssessmentDW TRANS LOG ONLY
      Scheduled to run daily Monday through Friday; every 20 minutes starting at 6:30 AM & ending at 7:00 PM
      Step 1: Backup Database Task
        Backup Type: Transaction Log
        Database(s): Specific databases (DistrictAssessmentDW)
        Backup Set will expire after 1 day
        Backup to Disk (L:\SQL\Logs\)
        Set backup compression: Use the default server setting
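    For reference, the T-SQL equivalent of what this second plan runs every 20 minutes looks roughly like this (a sketch; the file name shown is hypothetical, as the plan generates a unique name per run):
    -- Log backup to a uniquely named file; COMPRESSION keeps the .trn files much smaller on disk.
    BACKUP LOG DistrictAssessmentDW
    TO DISK = 'L:\SQL\Logs\DistrictAssessmentDW_log_1.trn'   -- hypothetical file name
    WITH COMPRESSION, CHECKSUM;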
    Around 2:30 each day my transaction log backup drive (L:) runs out of space. As you can see, transaction logs are backed up every 20 minutes, and the average size of the backup files is about 5,700,000 KB (roughly 5.4 GB).
    I hope this covers everything, if not please let me know what other information I need to provide...

  • Transactional replication very slow with indexes on Subscriber table

    I have set up transactional replication for one of our databases, where one table with about 5 million records is replicated to a Subscriber database. With every replication run, about 500,000-600,000 changed records are sent to the Subscriber.
    For the last month I have seen very strange behaviour when I add about 10 indexes to the Subscriber table. As soon as I have added the indexes, replication becomes extremely slow (almost 3 hours for 600k records). As soon as I remove the indexes, replication is again very fast: about 3 minutes for the same number of records.
    I've searched a lot on the internet but can't find any explanation for this strange behaviour after adding the indexes. As far as I know it shouldn't be a problem to add indexes to a Subscriber table, and it hasn't been on another replication configuration we use.
    Some information from the Replication Log:
    With indexes on the Subscriber table
    Total Run Time (ms) : 9589938 Total Work Time : 9586782
    Total Num Trans : 3 Num Trans/Sec : 0.00
    Total Num Cmds : 616245 Num Cmds/Sec : 64.28
    Total Idle Time : 0 
    Writer Thread Stats
    Total Number of Retries : 0 
    Time Spent on Exec : 9580752 
    Time Spent on Commits (ms): 2687 Commits/Sec : 0.00
    Time to Apply Cmds (ms) : 9586782 Cmds/Sec : 64.28
    Time Cmd Queue Empty (ms) : 5499 Empty Q Waits > 10ms: 172
    Total Time Request Blk(ms): 5499 
    P2P Work Time (ms) : 0 P2P Cmds Skipped : 0
    Reader Thread Stats
    Calls to Retrieve Cmds : 2 
    Time to Retrieve Cmds (ms): 10378 Cmds/Sec : 59379.94
    Time Cmd Queue Full (ms) : 9577919 Full Q Waits > 10ms : 6072
    Without indexes on the Subscriber table
    Total Run Time (ms) : 89282 Total Work Time : 88891
    Total Num Trans : 3 Num Trans/Sec : 0.03
    Total Num Cmds : 437324 Num Cmds/Sec : 4919.78
    Total Idle Time : 0 
    Writer Thread Stats
    Total Number of Retries : 0 
    Time Spent on Exec : 86298 
    Time Spent on Commits (ms): 282 Commits/Sec : 0.03
    Time to Apply Cmds (ms) : 88891 Cmds/Sec : 4919.78
    Time Cmd Queue Empty (ms) : 1827 Empty Q Waits > 10ms: 113
    Total Time Request Blk(ms): 1827 
    P2P Work Time (ms) : 0 P2P Cmds Skipped : 0
    Reader Thread Stats
    Calls to Retrieve Cmds : 2 
    Time to Retrieve Cmds (ms): 2812 Cmds/Sec : 155520.63
    Time Cmd Queue Full (ms) : 86032 Full Q Waits > 10ms : 4026
    Can someone please help me with this issue? Any ideas? 
    Pim 

    Hi Megens:
    Insert statements can be slowed down not only by indexes but by a few other things too:
    0) SQL DB blocking during inserts
    1) Insert triggers, if any exist
    2) Constraints, if any
    3) Index fragmentation
    4) Page splits / fill factor
    Without indexes, inserts are fast. With indexes, for every new row SQL Server must
    1) check for room on the target page; if there is no room, a page split happens so the record can be placed in the right position
    2) update all the indexes once the record is inserted
    3) and all this extra work can make the insert statement noticeably slower.
    It's better to run index maintenance jobs frequently to avoid fragmentation.
    If everything is clear on the SQL Server side, you need to look at disk IO, network latency between the servers, and so on.
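    One common workaround (a general technique, not something from this thread) is to disable the nonclustered indexes on the subscriber table before a large sync and rebuild them afterwards, so the bulk of the changes is applied without per-row index maintenance. A sketch with hypothetical names:
    -- Disable each nonclustered index before the big sync (never disable the clustered index,
    -- as that makes the table inaccessible).
    ALTER INDEX IX_SubscriberTable_Col1 ON dbo.SubscriberTable DISABLE;
    -- ...let replication apply the changes, then rebuild everything in one pass:
    ALTER INDEX ALL ON dbo.SubscriberTable REBUILD;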
    Thanks,
    Thanks, Satish Kumar. Please mark this post as answered if my answer helped you resolve your issue :)

  • Rebuild Indexes question - What is better?

    Is there an advantage of running this via a maintenance task in ConfigMgr. vs. SQL maintenace plan? What about the Fill Factor and other options?

    hi,
    well... that depends on your SQL maintenance plan :)
    AFAIK SCCM rebuilds an index if the fragmentation rate is above 30% and reorganizes it for any value below that threshold.
    Steve Thompson provided some useful information about fragmentation on his blog, including how to determine whether the maintenance task is doing its job:
    http://stevethompsonmvp.wordpress.com/2013/04/19/how-to-determine-if-the-configmgr-rebuild-indexes-site-maintenance-task-is-running/
    kind regards

  • Slow secondary indexes creation

    I have a database table where each record is a fixed length of 16 bytes and the key length is 4 bytes.
    The table has ~3 billion records, giving a file size of 61 GB.
    The access method is DB_QUEUE.
    I'm using version 5.1.25.
    I'm trying to create two secondary indexes using DB_HASH as the access method.
    I set the page size to 65K and the cache size to 65M.
    When I start building the indexes, the estimated creation time is something like 3 days, which I think is reasonable. But after a day, performance drops drastically, giving an estimated creation time of about 1 month.
    What should I do in order to boost performance in such a case?

    Hi,
    > I set the pagesize to 65K, cachesize to 65M.
    You probably mean 64KB; see the DB->set_pagesize() documentation.
    > When I start to build the indexes I see that the creation time of the indexes will take me something like 3 days which I think is reasonable. But after a day the performances are dropping drastically, giving me approximately creation time of 1 month.
    Are you recreating the secondary indexes on a regular basis? If so, what is the reason for this?
    Once you open, associate and populate the secondary (using the DB_CREATE flag for the associate() call), you should keep the secondary databases around and not remove them just to recreate them again. Whenever the primary database is updated, the appropriate updates are performed in the secondaries as well to reflect the changes; see the Secondary indexes documentation section.
    Opening (creating a new database file handle) and populating a secondary database on open are potentially very expensive operations; see the subheading on DB_CREATE in the DB->associate() documentation section.
    > What should I do in order to boost performance on such case?
    The best approach would be to not remove and recreate the secondary indexes regularly, but rather to create, populate and keep them around for as long as the application runs.
    If however you need to recreate them again and again, and given that they use the Hash access method, here are some suggestions:
    1. If you have a support contract and can access MOS (My Oracle Support) have a look over Doc ID 463613.1 and Doc ID 463507.1 notes in the KM repository (they discuss tuning Hash databases' configuration for speeding up insertion times).
    If you cannot access them, let me know and I can make a summary with further guidance.
    2. Review the Hash access method specific configuration doc section and try to experiment with appropriately calculated values for the density / page fill factor, hash table size (estimated no of elements that will be stored in the database), and with a hash function that evenly distributes the (secondary) keys across buckets (and eventually with a Hash key comparison function that sorts the keys appropriately respecting their structure/format, data type and size).
    3. I assume that this is happening in a transactional environment. If so, it might be worth trying to use a transaction handle in the DB->associate() call, opened with the DB_TXN_WRITE_NOSYNC or DB_TXN_NOSYNC flag (see DB_ENV->txn_begin()).
    Some general tuning suggestions, that could be used are also given in the following documentation sections: Access method tuning, and Transaction tuning.
    Regards,
    Andrei

  • What happens inside ...... when Index Reorganize is fired

    Hi all,
    what happens inside ...... when an Index Reorganize is fired?
    Thanks
    vijay

    Logical scan fragmentation is removed (where possible). This can improve performance of certain queries, because it reduces the large disk-head movements needed to go through all the pages.
    In addition, it brings the amount of unused space back in line with the original fill factor of the index. If you didn't specify a fill factor when the index was first created, then all unused space will be removed (where possible). It does this by joining the rows of multiple pages onto fewer pages and releasing the empty pages. This can improve performance of many queries, since fewer I/Os and less memory are now needed to access the same amount of row data.
    When the command runs, it locks just one or a few pages at a time. This way, most blocking is avoided, and any blocking that does occur is short-lived.
    This also means that the reorganization occurs in place, within the pages and extents that are already allocated to the specific index. This is one of the big differences from rebuilding, where new pages (and extents) are allocated.
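    As a concrete illustration (a sketch; the table and index names are hypothetical), a reorganize is a single statement, and since REORGANIZE does not update statistics the way a rebuild does, refreshing them separately can be useful:
    -- Defragment the leaf level in place; only a few pages are locked at a time.
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;
    -- REORGANIZE does not update statistics, so refresh them if needed.
    UPDATE STATISTICS dbo.Orders IX_Orders_CustomerId;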
    Gert-Jan

  • Datafile size after index rebuld?

    Hi experts,
    I recently rebuilt indexes on some tables having fragmentation > 80% and page count > 40,000 via a job, setting the database to BULK_LOGGED mode during the rebuild; most of the indexes were over 90% fragmented. It was an offline index rebuild. After the rebuild I noticed that the used space in my data file decreased from 38 GB to 29 GB instead of increasing; see the table below for the before and after status. Suggestions please.
    Before rebuild
    SIZE_MB  USED_MB  FREE_SPACE  %_FREE  AUTO_GROWTH  NAME
    42305    38952    3353        8       10%          ABC
    2200     18       2182        99      10%          ABC_LOG
    After rebuild
    SIZE_MB  USED_MB  FREE_SPACE  %_FREE  AUTO_GROWTH  NAME
    42305    29861    12444       29      10%          ABC
    2200     228      1972        89      10%          ABC_LOG
    thanks

    Hi,
    If you are thinking the defragmentation operation has deleted something, rest assured that has not happened; what you are seeing is normal behavior with an offline index rebuild. Perhaps the fill factor used during the rebuild was a good one and packed the pages more densely, requiring fewer pages (maybe), hence the decrease in used space.
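    If you want to track this yourself, the used space per data file can be captured before and after a rebuild with something like this (a sketch using FILEPROPERTY; run it in the database in question):
    -- Size and used space per database file, in MB (a page is 8 KB, so 128 pages = 1 MB).
    SELECT name                                           AS file_name,
           size / 128                                     AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
    FROM sys.database_files;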
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • What is database re indexing

    hi
    what is database reindexing, and what is the concept behind it?
    adil

    One can REBUILD or REORGANIZE an existing index.  
    BOL: "This topic describes how to reorganize or rebuild a fragmented index in SQL Server 2012 by using SQL Server Management Studio or Transact-SQL. The SQL Server Database Engine automatically maintains indexes whenever insert, update, or delete operations
    are made to the underlying data. Over time these modifications can cause the information in the index to become scattered in the database (fragmented). Fragmentation exists when indexes have pages in which the logical ordering, based on the key value, does
    not match the physical ordering inside the data file. Heavily fragmented indexes can degrade query performance and cause your application to respond slowly.
    You can remedy index fragmentation by reorganizing or rebuilding an index. For partitioned indexes built on a partition scheme, you can use either of these methods on a complete index or a single partition of an index. Rebuilding an index drops and re-creates
    the index. This removes fragmentation, reclaims disk space by compacting the pages based on the specified or existing fill factor setting, and reorders the index rows in contiguous pages. When ALL is specified, all indexes on the table are dropped and rebuilt
    in a single transaction. Reorganizing an index uses minimal system resources. It defragments the leaf level of clustered and nonclustered indexes on tables and views by physically reordering the leaf-level pages to match the logical, left to right, order of
    the leaf nodes. Reorganizing also compacts the index pages. Compaction is based on the existing fill factor value."
    LINK: http://technet.microsoft.com/en-us/library/ms189858.aspx
    The following blog demonstrates how to REBUILD all the indexes in a database:
    http://www.sqlusa.com/bestpractices2008/rebuild-all-indexes/
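    To put the quoted guidance into practice, a common approach (a sketch; the 5% and 30% thresholds are the usual rule of thumb from the documentation linked above) is to let the measured fragmentation suggest REORGANIZE or REBUILD per index:
    -- Suggest an action for each index based on its measured fragmentation.
    SELECT OBJECT_NAME(ps.object_id) AS table_name,
           i.name                    AS index_name,
           ps.avg_fragmentation_in_percent,
           CASE
               WHEN ps.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
               WHEN ps.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
               ELSE 'OK'
           END AS suggested_action
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    WHERE ps.index_id > 0;   -- skip heaps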
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Loss of space after reorg of compressed index

    Hi Oracle experts,
    In 2008 we compressed indexes of ECC system as per sap note 1109743 and
    obtained high compression ratio.
    Now we have upgraded to oracle 11.2.0.2 and BRTOOLS 7.20 (10).
    While using brtools to do table reorg/compression as per note 1431296
    indexes loose compression factor.
                                                                  before                     after        compression
    GLFUNCA           PSAPGLFUNCA       41,574,400             11,468,800 enabled
    GLFUNCA~0       PSAPSTABI             699,712                    5,555,648 enabled
    GLFUNCA~1       PSAPSTABI             263,872                     2,112,384 enabled
    GLFUNCA~2       PSAPSTABI             446,528                     3,531,072 enabled
    GLFUNCA~3       PSAPSTABI          1,051,904                      5,047,872 enabled
    I have tried uncompressing and compressing indexes again but cannot compress to
    same factor, results are same using brtools or oracle command.
    I have tested on coy of system where table GLFUNCA is uncompressed and indexes compressed.
    reorg of indexes result in same bad  compression factor.
    key used for compression is same.
    any  advise, how to regain index compression factor  or anyone having similar experirence.

    Hello Daljit,
    This is the first time I have heard of this kind of behavior. I have seen an index keep the same size after compression, but that happened because the number of columns to compress was defined incorrectly.
    My suggestion:
    1. Uncompress one index (rebuild without compress);
    2. Identify the number of columns that should be compressed;
    3. Rebuild with compress, nologging;
    4. Reactivate logging.
    Be aware of all the related notes below:
    1289494 - FAQ Oracle compression
    1109743 - Use of Index Key Compression for Oracle Databases
    Regards,
    Jairo Pedroza
