Merge table indexes fill factor

Shouldn't indexes such as MSmerge_current_partition_mappings.ncMSmerge_current_partition_mappings have a fill factor other than 0? I am getting many page splits on this and other merge table indexes, and many of the indexes on my server are set to 0.
We are using merge replication with 120 subscribers on SQL Server 2008 R2, on the distribution server.

These are large tables; MSmerge_current_partition_mappings alone has 62 million rows. We are seeing high I/O numbers, so I ran this query against the transaction log and got the results below:
SELECT COUNT(1) AS NumberOfSplits,
       AllocUnitName,
       Context
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_SPLIT'
GROUP BY AllocUnitName, Context
ORDER BY NumberOfSplits DESC
NumberOfSplits  AllocUnitName                                                                 Context
984             dbo.MSmerge_current_partition_mappings.ncMSmerge_current_partition_mappings  LCX_INDEX_LEAF
443             dbo.MSmerge_contents.uc1SycContents                                          LCX_CLUSTERED
340             dbo.MSmerge_contents.nc5MSmerge_contents                                     LCX_INDEX_LEAF
268             dbo.MSmerge_current_partition_mappings.cMSmerge_current_partition_mappings   LCX_CLUSTERED
208             dbo.MSmerge_contents.nc3MSmerge_contents                                     LCX_INDEX_LEAF
159             dbo.MSmerge_contents.nc4MSmerge_contents                                     LCX_INDEX_LEAF
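
If I do decide to set an explicit fill factor on one of these indexes, I assume a rebuild along the following lines is the way to do it. This is only a sketch: the value of 90 is a guess, SORT_IN_TEMPDB is optional, and since these are replication system tables I would test it on a non-production copy first.

-- Sketch only: rebuild the nonclustered index from the page-split output with an explicit fill factor.
ALTER INDEX ncMSmerge_current_partition_mappings
    ON dbo.MSmerge_current_partition_mappings
    REBUILD WITH (FILLFACTOR = 90, SORT_IN_TEMPDB = ON);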

Similar Messages

  • Index fill factor

    Hi All,
    We have around 150 tables for which we are planning to change the fill factor to 90%. If anyone has a script for this, please suggest one.
    Regards
    subu

    Use the following script to change the fill factor of indexes. Just change the database name in place of AdventureWorks2008R2. This script alters the fill factor to 90% for all indexes on all tables of the database.
    DECLARE @Database VARCHAR(255)  
    DECLARE @Table VARCHAR(255)  
    DECLARE @cmd NVARCHAR(500)  
    DECLARE @fillfactor INT
    SET @fillfactor = 90
    DECLARE DatabaseCursor CURSOR FOR  
    SELECT name FROM master.dbo.sysdatabases  
    WHERE name IN ('AdventureWorks2008R2')  
    ORDER BY 1  
    OPEN DatabaseCursor  
    FETCH NEXT FROM DatabaseCursor INTO @Database  
    WHILE @@FETCH_STATUS = 0  
    BEGIN  
       SET @cmd = 'DECLARE TableCursor CURSOR FOR SELECT ''['' + table_catalog + ''].['' + table_schema + ''].['' +
      table_name + '']'' as tableName FROM [' + @Database + '].INFORMATION_SCHEMA.TABLES
      WHERE table_type = ''BASE TABLE'''  
       -- create table cursor  
       EXEC (@cmd)  
       OPEN TableCursor  
       FETCH NEXT FROM TableCursor INTO @Table  
       WHILE @@FETCH_STATUS = 0  
       BEGIN  
           IF (@@MICROSOFTVERSION / POWER(2, 24) >= 9)
           BEGIN
               -- SQL 2005 or higher command
               SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ')'
               EXEC (@cmd)
           END
           ELSE
           BEGIN
              -- SQL 2000 command
              DBCC DBREINDEX(@Table,' ',@fillfactor)  
           END
           FETCH NEXT FROM TableCursor INTO @Table  
       END  
       CLOSE TableCursor  
       DEALLOCATE TableCursor  
       FETCH NEXT FROM DatabaseCursor INTO @Database  
    END  
    CLOSE DatabaseCursor  
    DEALLOCATE DatabaseCursor 
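    Before and after running the script you can review the fill factors that are actually in effect. The query below is a small sketch against sys.indexes (a fill_factor of 0 means the index uses the server default):
    -- List current fill factors for all user-table indexes in the current database.
    SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
           OBJECT_NAME(i.object_id)        AS table_name,
           i.name                          AS index_name,
           i.fill_factor
    FROM sys.indexes AS i
    WHERE i.index_id > 0                                      -- skip heaps
      AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    ORDER BY schema_name, table_name, index_name;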

  • SP2013 Default Index Fill factor

    Hi - does anyone have a definitive answer as to what the fill factor setting should be for SP2013? At the moment I have it at 80, but I just read the post below and am now wondering whether I should change it to zero.
    http://thesharepointfarm.com/2013/04/the-fill-factor-mystery/
    Thanks
    J

    The fill factor varies. For the default server setting, keep it at 80. SharePoint will specify specific fill factors for each index.
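    If you need to check (or reset) that instance-wide default, something along these lines should work; note that 'fill factor (%)' is an advanced option and a change only takes effect after the SQL Server service is restarted:
    -- Inspect the instance-wide default fill factor (0 and 100 both mean "fill pages completely").
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'fill factor (%)';
    -- To set it to 80 as recommended above:
    -- EXEC sp_configure 'fill factor (%)', 80;
    -- RECONFIGURE;   -- takes effect after a service restart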
    Trevor Seward

  • Fill Factor and Too Much Space Used

    Okay, I am on SQL Server 2012 SP2. I am doing a simple update on some large tables where there is an int column that allows NULLs, and I am changing the NULL values to be equal to the values in another integer column in the same table. (Please don't ask why I am duplicating data, as that is a long story!)
    So it's a very simple update, and these tables are about 65 million rows I believe, so you can calculate how much space it should increase by. Basically, it should increase by 8 bytes * 65 million = ~500 MB, right?
    However, when I run these updates the space increases by about 3 GB per table. What would cause this behavior?
    Also, the fill factor on the server is 90% and this column is not in the PK or any of the 7 nonclustered indexes. The table is used in horizontal partitioning, but this column is not part of the constraint.
    Any help is much appreciated...

    Hi CLM,
    some information about the INT data type before going into the detail of the update process:
    an INT data type is 4 bytes, not 8!
    an INT data type is a fixed-length data type
    Unfortunately we don't know anything about the table structure (columns, indexes), but based on your observation I presume a table with multiple indexes. Furthermore, I presume a nonclustered, non-unique index on the column you are updating.
    To understand why an update of an INT attribute doesn't affect the space of the table itself, you need to know a few things about the record structure. The first 4 bytes of a record header describe the structure and the type of the record! Please take the following table structure (it is a HEAP) as an example for the descriptions that follow:
    CREATE TABLE dbo.demo
    (
        Id INT NOT NULL IDENTITY (1, 1),
        c1 INT NULL,
        c2 CHAR(100) NOT NULL DEFAULT ('just a filler')
    );
    The table is a HEAP with no indexes and the column [c1] is NULLable. After 10,000 records have been added to the table...
    SET NOCOUNT ON;
    GO
    INSERT INTO dbo.demo WITH (TABLOCK) DEFAULT VALUES
    GO 10000
    ... the record structure for the first record looks like the following. I will first evaluate the position of the record and then create an output with DBCC PAGE:
    SELECT pc.*, d.Id, d.c1
    FROM dbo.demo AS d CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS pc
    WHERE d.Id = 1;
    In my example the first record is allocated on page 168. To examine this page you can use DBCC PAGE, but keep in mind that it isn't documented. You have to enable the output of DBCC PAGE by turning on trace flag 3604 with DBCC TRACEON.
    DBCC TRACEON (3604);
    DBCC PAGE (demo_db, 1, 168, 1);
    The output of the above command shows all data records which are allocated on the page 168. The next output represents the first record:
    Slot 0, Offset 0x60, Length 115, DumpStyle BYTE
    Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP Record Size = 115
    The above output shows the record header which describes the type of record and it gives some information about the structure of the record. It is a PRIMARY RECORD which contains NULLable columns. The length of the record is 115 bytes.
    The next output shows the first bytes of the record and their interpretation:
    10 00 PRIMARY RECORD and NULLable columns
    70 00 OFFSET for information about number of columns (112)
    01 00 00 00 Value of ID
    00 00 00 00 Value of C1
    6a757374 Value (begin) of C2
    It might seem complicated, but you will find a very good explanation of record structures in the book "SQL Server Internals" by Kalen Delaney.
    The first two bytes (0x1000) describe the record type and its structure. Bytes 3 and 4 define the offset in the record where the information about the number of columns can be picked up. As you may see from the value, that offset is very near the end of the record.
    The reason is quite simple: this information is stored BEHIND the fixed-length data of a record. As you can see from the "code" above, the position is 112. 112 - 4 bytes for the record header is 108, and Id (4) + C1 (4) + C2 (100) = 108. All the columns are FIXED-length columns, and that includes C1, because it has a fixed-length data size!
    The next 4 bytes represent the value stored in Id (0x01000000), which is 1. C1 is filled with placeholders for a possible value. If we update it with a new value, the preallocated space in the record structure will be filled and NO extra space will be used. So, based on this simple table structure, growth WILL NOT OCCUR!
    Based on this finding, the question is WHAT will cause additional allocation of space?
    It can only be nonclustered indexes. Let's assume we have an index on c1 (which is full of NULLs). If you update the table with values, the NCI will be updated too. For each update from NULL to a value, a new record will be added to the NCI. What is the size of the new record in the NCI?
    We have the record header, which is 4 bytes. If the table is a heap, we have to deal with a RID, which is 8 bytes. If your table is a clustered index, the size depends on the size of the clustered key(s); if it is only an INT, it is 4 bytes. In the given example I have to add 8 bytes because it is a HEAP!
    On top of that - now 12 bytes - we have to add the size of the column itself, which is 4 bytes. Last but not least, additional space will be allocated if the index isn't unique (+4 bytes), allows NULL, ...
    In the given example a nonclustered index will consume 4 bytes for the header + 8 bytes for the RID + 4 bytes for C1 + 4 bytes if the index isn't unique + 2 bytes for the NULL bitmap = 22 bytes!!!
    Now multiply that size by your number of records. And next... add the calculated size for EACH additional record, and don't forget page splits, too! If the values for the index are not contiguous you will have hundreds of page splits when the data is added to the index(es) :). In this case the fill factor is of little help because of the huge amount of data...
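    To get a feeling for the numbers in your case (illustrative arithmetic only; the real growth depends on the actual key sizes, how many indexes contain the column, and the page-split overhead):
    -- ~22 bytes per new nonclustered index row * ~65 million updated rows, before page splits:
    SELECT CAST(22 AS bigint) * 65000000 / 1048576 AS approx_index_growth_mb;   -- roughly 1.3 - 1.4 GB per affected index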
    Get more information about my arguments here:
    Calculation of index size:
    http://msdn.microsoft.com/en-us/library/ms190620.aspx
    Structure of a record:
    http://www.sqlskills.com/blogs/paul/inside-the-storage-engine-anatomy-of-a-record/
    PS: I remember my first international speaking engagement, which was in 2013 in Stockholm (Erland may remember it!). I was talking about the internal structures of the database engine as a starting point for my "INSERT / UPDATE / DELETE - deep dive" session.
    There was one guy who asked in a rather bored manner: "Why do we need to know this nonsense?" I was stumbling because I didn't have the right answer to it. Now I would answer that by knowing about record structure and internals you can calculate the future storage size much better :)
    You can watch it here but I wouldn't recommend it :)
    http://www.sqlpass.org/sqlrally/2013/nordic/Agenda/VideoRecordings.aspx
    MCM - SQL Server 2008
    MCSE - SQL Server 2012
    db Berater GmbH
    SQL Server Blog (german only)

  • How can I test the effect of fill factor?

    I noticed that the fill factor in my database script is set to 80 and I believe I could optimize performance by setting it to 100. After setting it to 100, how can I test whether it has had a positive effect or not? Currently my database has about 50 tables and thousands of records. Please advise.
    mayooran99

    You have to monitor the Page Splits/sec counter. If this counter increases considerably after you change the fill factor (FF) from 80 to 100, it might be an indicator that your FF setting is too high. You can monitor the page splits counter in Perfmon, but a caveat is that this counter accumulates page splits across all databases on a particular SQL Server instance. If the clustered index is on an ever-increasing numeric field (like an identity field), page splits do happen at the end as data gets added - this is not necessarily bad, but the Perfmon counter (Page Splits/sec) includes counts for this type of page split too, which should be ignored.
    Check your daily index fragmentation rate - this can be done by storing the index fragmentation levels in a custom table and comparing them with the values after you increase FF to 100.
    However, for heavily inserted/updated tables, try changing the FF value to 90 first (there is no point in changing FF to 100 for heavily inserted/updated tables as they are bound to incur page splits).
    In general, changing the FF from 80 to 100 may improve the read performance of your queries as more data fits into a single page.
    There is no blanket percentage that will be appropriate/optimal for all tables. It all depends on the data and how frequently the key column is updated. So the only correct answer is TEST, TEST, TEST...
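    As a sketch of the "store fragmentation levels in a custom table" idea above (dbo.IndexFragHistory is a hypothetical table; create it with matching columns first):
    -- Capture current fragmentation for all user indexes in the current database.
    INSERT INTO dbo.IndexFragHistory (capture_time, schema_name, table_name, index_name, avg_frag_pct)
    SELECT SYSDATETIME(),
           OBJECT_SCHEMA_NAME(ips.object_id),
           OBJECT_NAME(ips.object_id),
           i.name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.index_id > 0;   -- ignore heaps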
    Satish Kartan www.sqlfood.com

  • BWA Fact Table Index Size

    Hi
    Can anybody tell me how the BWA decides when a fact table index gets split into multiple parts? We have a number of very large cubes that are indexed; some have a fact table index that consists of one logical index made up of multiple physical indexes, but other, similarly sized cubes just have one very large physical index for the fact table.
    With the one very large physical index we seem to get an overload problem, but when it is split into multiple parts we don't.
    Thanks
    Martin

    Hi Martin,
    this depends on the reorg configuration and the attributes of the index. You can manually trigger a split of an index via the command 'ROUNDROBIN x', where x stands for the number of parts the index will be split into. To do this, go into the TREXAdmin standalone tool -> Landscape, right-click on the index -> split/merge index...
    If you want an automatic split, you have to set up your reorg settings. Go to the TREXAdmin standalone tool -> Reorg tab -> Options -> here you can choose the type of algorithm. Have a look at notes 1313260 and 1163149.
    Do you have a scheduled reorg job?
    Regards,
    Jens
    PS: Every black box can be understood...

  • Problem with table-indexes when using select-options in select

    Hello experts,
    is it right that table indexes will not be used if you use select-options to select data from the database?
    in detail:
    I have built a table index for one of our DB tables and tested it via a test program. The first test with '=' comparisons worked fine; every key of the index was used, checked via ST05!
    e.g.:    SELECT * FROM TABLEA INTO ITAB WHERE keya = '1' AND keyb = '2' AND keyc = '3'.
    Now I started the test with select-options,
    e.g.:   SELECT * FROM TABLEA INTO ITAB WHERE keya IN seltabA  AND keyb IN seltabB AND keyc IN seltabC.
    First of all I filled the seltabs with only one value each, e.g. seltabA:  SIGN = 'I'   OPTION = 'EQ'   LOW = '1'     etc.
    Everything worked fine; every key of the index was used.
    But then I put more than one entry in the seltabs, e.g.
    seltabA:      SIGN = 'I'   OPTION = 'EQ'   LOW = '1'
                       SIGN = 'I'   OPTION = 'EQ'   LOW = '2'
                       SIGN = 'I'   OPTION = 'EQ'   LOW = '3'
    From then on, the index was no longer used completely (with all keys).
    Isn't that strange? How can I use select-options or selection ranges and still use the complete table index?
    Thanks a lot,
    Marcel

    Hi Hermann,
    i hope this helps:
    this is the first one, which uses the complete index:
    SELECT                                                                     
      "KOWID" , "LIFNR" , "KLPOS" , "ORGID" , "KOART" , "MATNR" , "GLTVON" ,   
      "GLTBIS" , "WERT" , "ABLIF" , "FAKIV" , "AENAM" , "AEDAT" , "AFORM" ,    
      "HERSTELLER" , "ARTGRP" , "OE_FREITXT" , "ARTFREITEXT" , "STATUS" ,      
      "TERDAT"                                                                 
    FROM                                                                       
      "/dbcon/01_con"                                                       
    WHERE                                                                      
      "MANDT" = ? AND "LIFNR" = ? AND "ORGID" = ? AND "KOART_BASIS" = ? AND    
      "STATUS" = ? AND "GEWAEHR_KOWID" < ? AND ( "STATUS" = ? OR "STATUS" = ? OR
      "STATUS" = ? )  WITH UR                 
    RESULT: 5 IXSCAN /dbcon/01_con05 #key columns:  4
    And the second one, which does not use the complete index! The 3 ranges are each filled with 2 values. Remember: when I fill them each with only one value, the result is the same as shown above (/dbcon/01_con05 #key columns:  4):
    SELECT                                                                     
      "KOWID" , "LIFNR" , "KLPOS" , "ORGID" , "KOART" , "MATNR" , "GLTVON" ,   
      "GLTBIS" , "WERT" , "ABLIF" , "FAKIV" , "AENAM" , "AEDAT" , "AFORM" ,    
      "HERSTELLER" , "ARTGRP" , "OE_FREITXT" , "ARTFREITEXT" , "STATUS" ,      
      "TERDAT"                                                                 
    FROM                                                                       
      "/dbcon/01_con"                                                       
    WHERE                                                                      
      "MANDT" = ? AND "LIFNR" IN ( ? , ? ) AND "ORGID" IN ( ? , ? ) AND        
      "KOART_BASIS" IN ( ? , ? ) AND "GEWAEHR_KOWID" < ? AND ( "STATUS" = ? OR 
      "STATUS" = ? OR "STATUS" = ? )  WITH UR                                  
    and here the access-plan
       0 SELECT STATEMENT ( Estimated Costs =  5,139E+01 [timerons] )                                                                               
    5     1 RETURN                                                                               
    5     2 NLJOIN                                                                               
    5     3 [O] TBSCAN                                                                               
    5     4 SORT                                                                               
    5 TBSCAN GENROW                                                                               
    5     6 <i> FETCH /dbcon/01_con                                                                               
    7 IXSCAN /dbcon/01_con05 #key columns:  2   
    As you can see, only 2 keys were taken for indexed selection!
    Any idea?
    Kind regards,
    MArcel

  • How to obtain the table index in Word using the LabVIEW Report Generation Toolkit for Microsoft Office

    I created a Word template and it has several tables. When I use the "Word Edit Cell" function in the LabVIEW Report Generation Toolkit for Microsoft Office, the function needs a "table index", and I didn't find any function to get or set the table index in the Word document. How can I write a value to a specified table cell using the "Word Edit Cell" function?
    Thanks for reply!
    YangAfreet

    Hi yangafreet
    You do not need to get the table index for the Word Edit Cell.vi from anywhere. LabVIEW will automatically index all the tables in the document. See the attached VI for an example.
    Rich
    Attachments:
    Table Edit.vi (23 KB)

  • To use a Cursor or a TYPE table INDEX BY PLS_INTEGER

    Hi All,
    Let's say I have a table with 19,26,20,000 records.
    If I want to loop through all the records, which will be the more optimized way: to use a cursor, or a TYPE table INDEX BY PLS_INTEGER?
    Please guide.
    Thanks.

    What is it you want to do to/with the rows you're looping through?
    Ideally you want to avoid looping, as that's row by row (aka slow by slow) processing and it's expensive time-wise.
    If you're doing DML (insert/update/delete) then you're best off doing it in one sql statement, rather than looping.
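    For example, if the goal is simply to update a column across all of those rows, a single set-based statement (the table and column names below are made up) will normally beat any row-by-row loop:
    -- One set-based UPDATE instead of fetching and updating row by row.
    UPDATE big_table
       SET col_b = col_a
     WHERE col_b IS NULL;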

  • Fact Table index vs BIA Index

    BIA gurus..
    Prior to our BIA implementation we had the drop and rebuild index process variants in our process chains.
    Now after the BIA implementation we have the BIA index roll-up process variant included in the process chain.
    Is it still required to have the drop and rebuild index process variants during data load?
    Do the infocube fact table indexes ever get hit after the BIA implementation ?
    Thanks,
    Ajay Pathak.

    I think you still need the delete/create Index variants as it not only helps in query performance but also speeds up the load to your cubes.
    Documentation in Perfomance tab:
    "Indices can be deleted before the load process and after the loading is finished be recreated. This accelerates the data loading. However, simultaneous read processes to a cube are negatively influenced: they slow down dramatically. Therefore, this method should only be used if no read processes take place during the data loading."
    More details at:
    [http://help.sap.com/saphelp_nw70/helpdata/EN/80/1a6473e07211d2acb80000e829fbfe/frameset.htm]

  • Need to find the total no. of tables/indexes/m.views in my database

    Hello Everyone;
    How can I find the total number of tables/indexes/m.views in my database?
    When I googled, I found the following command:
    SQL> Select count(1) from user_tables where table_name not like '%$%' /
      COUNT(1)
    but I don't understand what '%$%' indicates?
    Thanks all;

    consider to simply Read The Fine Manual YOURSELF!
    Oracle Database Search Results: like
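    For what it's worth, LIKE '%$%' simply matches any name containing a dollar sign; the NOT LIKE filter is there to exclude internal objects (recycle-bin entries and the like). A minimal sketch for counting your own tables, indexes and materialized views in one go:
    -- Counts per object type for the current schema; drop the NOT LIKE filters to count everything.
    SELECT 'TABLE' AS object_type, COUNT(*) AS total FROM user_tables  WHERE table_name NOT LIKE '%$%'
    UNION ALL
    SELECT 'INDEX', COUNT(*) FROM user_indexes WHERE index_name NOT LIKE '%$%'
    UNION ALL
    SELECT 'MVIEW', COUNT(*) FROM user_mviews;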

  • New tables & indexes created do not show up in dba_segments view

    Dear all,
    I have created 3 tables and some indexes, but these objects do not show up in the dba_segments view. Is this normal behaviour? Previously, with a dictionary-managed tablespace, I could specify the minimum extent to create when the table/index was created, but I'm not sure how locally managed tablespaces work here. Please do advise. Thank you very much in advance.
    I'm using Oracle 11g R2 (11.2.0.1.0) for Microsoft Windows (x64), running on Windows 7.
    For the purpose of reproducing this issue, I have created the tablespaces as follow:
    CREATE TABLESPACE CUST_DATA
    DATAFILE 'd:\app\asus\oradata\orcl11gr2\CUST_DATA01.DBF' SIZE 512K
    AUTOEXTEND ON NEXT 256K MAXSIZE 2000K
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
    SEGMENT SPACE MANAGEMENT AUTO;
    CREATE TABLESPACE CUST_INDX
    DATAFILE 'd:\app\asus\oradata\orcl11gr2\CUST_INDX.DBF' SIZE 256K
    AUTOEXTEND ON NEXT 128K MAXSIZE 2000K
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
    SEGMENT SPACE MANAGEMENT AUTO;
    CREATE TABLE CUSTOMER_MASTER (CUST_ID VARCHAR2 (10),
    CUST_NAME VARCHAR2 (30),
    EMAIL VARCHAR2 (30),
    DOB DATE,
    ADD_TYPE CHAR (2) CONSTRAINT CK_ADD_TYPE CHECK (ADD_TYPE IN ('B1','B2','H1','H2')),
    CRE_USER VARCHAR2 (5) DEFAULT USER,
    CRE_TIME TIMESTAMP (3) DEFAULT SYSTIMESTAMP,
    MOD_USER VARCHAR2 (5),
    MOD_TIME TIMESTAMP (3),
    CONSTRAINT PK_CUSTOMER_MASTER PRIMARY KEY (CUST_ID) USING INDEX TABLESPACE CUST_INDX)
    TABLESPACE CUST_DATA;
    SQL> SELECT TABLE_NAME, TABLESPACE_NAME
    2 FROM USER_TABLES
    3 WHERE TABLE_NAME LIKE 'CUST%';
    TABLE_NAME TABLESPACE_NAME
    CUSTOMER_MASTER CUST_DATA
    SQL> SELECT INDEX_NAME, TABLESPACE_NAME
    2 FROM USER_INDEXES
    3 WHERE TABLE_NAME LIKE '%CUST%';
    INDEX_NAME TABLESPACE_NAME
    PK_CUSTOMER_MASTER CUST_INDX
    SQL> SELECT SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, BYTES
    2 FROM USER_SEGMENTS;
    no rows selected

    Prior to 11g, when you created a table or whatever, one extent was automatically allocated.
    This is no longer true; it depends on a parameter I don't remember.
    dba_segments is a summary of dba_extents.
    Obviously, if no extent is allocated, the table will not show up (the view is defined with an inner join).
    You could qualify this as a bug and submit an SR to Oracle, but then the performance impact may be huge.
    Sybrand Bakker
    Senior Oracle DBA
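    For reference, the parameter in question is deferred_segment_creation (TRUE by default in 11.2): with it enabled, the segment is only created when the first row is inserted. A small sketch to make the new table show up in user_segments without waiting for data:
    -- Check the setting (SQL*Plus):
    SHOW PARAMETER deferred_segment_creation
    -- Force segment creation for the new table (the PK index segment appears once rows are inserted):
    ALTER TABLE customer_master ALLOCATE EXTENT;
    SELECT segment_name, segment_type, tablespace_name, bytes
      FROM user_segments;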

  • Is it permissible to extend an SAP-provided table index?

    (Please note I realize this might not be the best forum for this post; I did look at ABAP development, SAP on Oracle, and a few others, but given some other threads, it seemed like this might be the best place for it.  Apologies if not).
    We have a very large table (GMIA), and I noticed that two customer-created indexes can essentially be combined into one because the first index is RGRANT_NBR plus fields A and B, and the second index is RGRANT_NBR plus fields A, B, C, D, and E.  So I might as well get rid of the first index and just keep the second one having RGRANT_NBR plus fields A through E.
    However, I noticed that SAP-provided index 4 contains simply one field - RGRANT_NBR.  So ideally, I could just add fields A, B, C, D, and E to index 4, then I could get rid of my second customer-created index.
    Question:  Is it permissible to extend an SAP-provided index like this?  As a developer, I'm not in the business of modifying SAP objects, but this is the first time I've been presented with this situation for a table index.  Given that our GMIA table has MILLIONS of records in it, getting rid of another customer-created index completely might be a great opportunity.
    Thoughts?
    Dave

    I really don't have a requirement for this.  I'm a developer, and I've noticed some of our biggest timeout issues concern programs that hit table GMIA.  So I thought I'd take a look at GMIA and our indexes to learn more about it via SE11 and DB02.
    In our Production environment, we have over 88 million records in this table for a table size of 38.23 GB.  Aside from the 6 SAP-provided indexes, 8 customer indexes have been created by others over time.  It was in looking at these indexes that I noticed our 8th customer index, ZS8, is essentially the same 3 fields as ZS3, plus a few more fields.  Ideally, ZS8 should NOT have been created, and ZS3 should have simply been extended with the additional fields.
    It was suggested to me in another thread a long while back that I could potentially get rid of ZS3 as well and just make SAP index 4 look like ZS8 because SAP index 4 is just indexed by Grant Number (RGRANT_NBR).  ZS8's first index field is Grant Number followed by 5 or 6 additional fields.  That's why I was wondering if it was even a "thing" or a possibility to extend an SAP index, but customizing an SAP component makes Dave a very, VERY nervous boy.
    Basically, I'm alarmed at the number of records in the table and the number of indexes we have.  There's no archiving strategy here, so I probably can't do anything about the number of records in GMIA, which go back to 2006 when we first went live with SAP.  But I can clearly get rid of one customer-created index (ZS3).  And if I can deactivate SAP index 4, I would assume the system would then automatically use ZS8 since the first field is Grant Number for situations where it would have used SAP index 4.
    So that's the background here.  Honestly, I don't know how much improvement these things will make, but getting rid of ZS3 will save 5 GB of space, and presumably "deactivating" SAP index 4 would save almost 5 GB as well.  I'm assuming we might see some negligible performance gains on our table operations involving GMIA, but it's still a beast with a large number of indexes, so I don't know.
    I'm really, really interested in hearing from others' thoughts and recommendations -- your input is MOST welcome here!
    Dave

  • SQL*LOADER (8i): Loading variable-size fields into multiple tables (FILLER)

    Product: ORACLE SERVER
    Date written: 2004-10-29
    ==================================================================
    SQL*LOADER (8i): Loading variable-size fields into multiple tables (FILLER)
    ==================================================================
    PURPOSE
    This note explains how to load a data file with variable-length records and
    variable-size fields into multiple tables with SQL*Loader,
    using the FILLER clause (a new feature in 8i).
    Explanation
    SQL*LOADER SYNTAX
    To load into multiple tables, the control file contains clauses like the following:
    INTO TABLE emp
    INTO TABLE emp1
    To load the same data from a data file with fixed-length fields into several
    tables, it looks like this:
    INTO TABLE emp
    (empno POSITION(1:4) INTEGER EXTERNAL,
    INTO TABLE emp1
    (empno POSITION(1:4) INTEGER EXTERNAL,
    As shown above, positions 1 to 4 of each input record can be loaded into the empno
    field of both tables. However, if the field lengths are variable, the POSITION clause
    cannot be used on each field in this way.
    Example
    Example 1>
    create table one (
    field_1 varchar2(20),
    field_2 varchar2(20),
    empno varchar(10) );
    create table two (
    field_3 varchar2(20),
    empno varchar(10) );
    Assume the records to be loaded are comma-separated and of variable length.
    << data.txt >> - data file to load
    "this is field 1","this is field 2",12345678,"this is field 4"
    << test.ctl >> - control file
    load data infile 'data.txt'
    discardfile 'discard.txt'
    into table one
    replace
    fields terminated by ","
    optionally enclosed by '"' (
    field_1,
    field_2,
    empno )
    into table two
    replace
    fields terminated by ","
    optionally enclosed by '"' (
    field_3,
    dummy1 filler position(1),
    dummy2 filler,
    empno )
    The dummy1 field is declared as FILLER; a field declared as FILLER is not loaded into the table.
    Table two has no field called dummy1, and position(1) means that the first field, counted from the
    beginning of the current record, is loaded into the dummy1 filler item. The second field is then
    loaded into the dummy2 filler item. The third field, the employee number that was loaded into
    table one, is loaded into table two as well.
    << Execution >>
    $sqlldr scott/tiger control=test.ctl data=data.txt log=test.log bindsize=300000
    $sqlplus scott/tiger
    SQL> select * from one;
    FIELD_1 FIELD_2 EMPNO
    this is field 1 this is field 2 12345678
    SQL> select * from two;
    FIELD_3 EMPNO
    this is field 4 12345678
    Example 2>
    create table testA (c1 number, c2 varchar2(10), c3 varchar2(10));
    << data1.txt >> - data file to load
    7782,SALES,CLARK
    7839,MKTG,MILLER
    7934,DEV,JONES
    << test1.ctl >>
    LOAD DATA
    INFILE 'data1.txt'
    INTO TABLE testA
    REPLACE
    FIELDS TERMINATED BY ","
    (
      c1 INTEGER EXTERNAL,
      c2 FILLER CHAR,
      c3 CHAR
    )
    << Execution >>
    $ sqlldr scott/tiger control=test1.ctl data=data1.txt log=test1.log
    $ sqlplus scott/tiger
    SQL> select * from testA;
    C1 C2 C3
    7782 CLARK
    7839 MILLER
    7934 JONES
    Reference Documents
    <Note:74719.1>

  • TIPS(29): Finding the indexes on a table

    Product: SQL*PLUS
    Date written: 1996-12-27
    TIPS(29): Finding the indexes on a table
    ===========================================
    /* show_index.sql USAGE: Show the indexes on a table */
    /* This script prompts the user for the table owner and name then gets */
    /* the indexed columns for any indexes on the table */
    column index_name format A20
    column column_name format A25
    column column_position format 999 heading 'Pos'
    column uniq format a5
    set verify off
    break on index_name skip 1
    select C.index_name,substr(I.uniqueness,1,1) uniq, C.column_name,
    C.column_position
    from all_ind_columns C
    ,all_indexes I
    where C.table_owner = upper('&table_owner')
    and C.table_name = upper('&table_name')
    and C.index_owner = I.owner
    and C.index_name = I.index_name
    order by 2 desc,1,4
    /
