Compression on table and on clustered index

Hi all,
If I have a table with a clustered index on it, is rebuilding the table with compression the same as rebuilding the clustered index with compression?
Is this
ALTER TABLE dbo.TABLE_001 REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = ROW, ONLINE=ON)
equivalent to this?
ALTER INDEX PK_TABLE_001 ON dbo.TABLE_001 REBUILD WITH (DATA_COMPRESSION=ROW, ONLINE=ON)
where PK_TABLE_001 is the clustered index of the table TABLE_001

Andrea,
A clustered index is the table itself, organized according to the clustering key. Applying compression to the clustered index therefore compresses the table internally, and because the clustered index contains all columns of the table, the complete table is compressed. In that respect the two statements are equivalent.
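A quick way to confirm that either statement leaves the data in the same state is to look at the partition metadata afterwards. A minimal sketch, using the table and index names from the question (adjust for your schema):

-- data_compression_desc should read ROW for every partition of the clustered index (index_id = 1)
SELECT OBJECT_NAME(p.object_id) AS table_name,
       i.name                   AS index_name,
       p.partition_number,
       p.data_compression_desc
FROM   sys.partitions AS p
JOIN   sys.indexes    AS i
       ON i.object_id = p.object_id AND i.index_id = p.index_id
WHERE  p.object_id = OBJECT_ID('dbo.TABLE_001')
AND    p.index_id  = 1;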

Similar Messages

  • Deadlock when updating different rows on a single table with one clustered index

    Deadlock when updating different rows on a single table with one clustered index. Can anyone explain why?
    <event name="xml_deadlock_report" package="sqlserver" timestamp="2014-07-30T06:12:17.839Z">
      <data name="xml_report">
        <value>
          <deadlock>
            <victim-list>
              <victimProcess id="process1209f498" />
            </victim-list>
            <process-list>
              <process id="process1209f498" taskpriority="0" logused="1260" waitresource="KEY: 8:72057654588604416 (8ceb12026762)" waittime="1396" ownerId="1145783115" transactionname="implicit_transaction"
    lasttranstarted="2014-07-30T02:12:16.430" XDES="0x3a2daa538" lockMode="X" schedulerid="46" kpid="7868" status="suspended" spid="262" sbid="0" ecid="0" priority="0"
    trancount="2" lastbatchstarted="2014-07-30T02:12:16.440" lastbatchcompleted="2014-07-30T02:12:16.437" lastattention="1900-01-01T00:00:00.437" clientapp="Internet Information Services" hostname="CHTWEB-CH2-11P"
    hostpid="12776" loginname="chatuser" isolationlevel="read uncommitted (1)" xactid="1145783115" currentdb="8" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
               <inputbuf>
    UPDATE analyst_monitor SET cam_status = N'4', cam_event_data = N'sales1', cam_event_time = current_timestamp , cam_modified_time = current_timestamp , cam_room = '' WHERE cam_analyst_name=N'ABCD' AND cam_window= 2   </inputbuf>
              </process>
              <process id="process9cba188" taskpriority="0" logused="2084" waitresource="KEY: 8:72057654588604416 (2280b457674a)" waittime="1397" ownerId="1145783104" transactionname="implicit_transaction"
    lasttranstarted="2014-07-30T02:12:16.427" XDES="0x909616d28" lockMode="X" schedulerid="23" kpid="8704" status="suspended" spid="155" sbid="0" ecid="0" priority="0"
    trancount="2" lastbatchstarted="2014-07-30T02:12:16.440" lastbatchcompleted="2014-07-30T02:12:16.437" lastattention="1900-01-01T00:00:00.437" clientapp="Internet Information Services" hostname="CHTWEB-CH2-11P"
    hostpid="12776" loginname="chatuser" isolationlevel="read uncommitted (1)" xactid="1145783104" currentdb="8" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
                <inputbuf>
    UPDATE analyst_monitor SET cam_status = N'4', cam_event_data = N'sales2', cam_event_time = current_timestamp , cam_modified_time = current_timestamp , cam_room = '' WHERE cam_analyst_name=N'12345' AND cam_window= 1   </inputbuf>
              </process>
            </process-list>
            <resource-list>
              <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor" indexname="IX_Clust_scam_an_name_window" id="lock4befe1100" mode="X" associatedObjectId="72057654588604416">
                <owner-list>
                  <owner id="process9cba188" mode="X" />
                </owner-list>
                <waiter-list>
                  <waiter id="process1209f498" mode="X" requestType="wait" />
                </waiter-list>
              </keylock>
              <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor" indexname="IX_Clust_scam_an_name_window" id="lock18ee1ab00" mode="X" associatedObjectId="72057654588604416">
                <owner-list>
                  <owner id="process1209f498" mode="X" />
                </owner-list>
                <waiter-list>
                  <waiter id="process9cba188" mode="X" requestType="wait" />
                </waiter-list>
              </keylock>
            </resource-list>
          </deadlock>
        </value>
      </data>
    </event>

    To be honest, I don't think the transaction is necessary, but the developers put it there anyway. The select statement puts the result cam_status
    into a variable and then, depending on its value, decides whether to execute the second update statement or not. I still can't upload the screenshot, because it says it needs to verify my account first. No clue at all. But it is very simple, just
    like:
    Clustered Index Update
    [analyst_monitor].[IX_Clust_scam_an_name_window]
    cost: 100%
    By the way, for some reason, I can't find the object based on the associatedObjectId listed in the XML
    <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor"
    indexname="IX_Clust_scam_an_name_window" id="lock4befe1100" mode="X" associatedObjectId="72057654588604416">
    For example: 
    SELECT * FROM sys.partitions WHERE hobt_id = 72057654588604416
    This returns nothing. Not sure why.
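    One thing worth checking: sys.partitions is database-scoped, so the lookup only works when run in the database the deadlock report points at (dbid = 8). A minimal sketch for resolving the hobt_id to an object and index name from within that database:

    SELECT o.name AS object_name,
           i.name AS index_name
    FROM   sys.partitions AS p
    JOIN   sys.objects    AS o ON o.object_id = p.object_id
    JOIN   sys.indexes    AS i ON i.object_id = p.object_id AND i.index_id = p.index_id
    WHERE  p.hobt_id = 72057654588604416;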

  • How to get list of block identifiers in a empty table and an empty index

    We have an application that has an issue with ITL waits: this application runs many INSERT statements on a table that has 2 NUMBER columns and one primary key index. The application is designed to run INSERT statements that are never committed (this is a software package).
    To check which ITL slots are really allocated, I know that I can dump the data block, but I don't know how to get the block identifiers/numbers for an "empty" table and an "empty" index. Does anyone know how to do that?
    PS: I already had a look at the Metalink notes and I have a Metalink SR for that, but maybe the OTN forum is faster?

    You should be able to find the first data/index block with the following, even on an empty table/index.
    select header_file, header_block +1
    from dba_segments
    where segment_name = '<index or table name>';
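    Once the file and block numbers are known, the block can be dumped to a trace file and the ITL entries inspected there. A minimal sketch, assuming the query above returned file 7 and block 130 (substitute your own values):

    -- writes a formatted block dump, including the ITL slots, to a trace file
    ALTER SYSTEM DUMP DATAFILE 7 BLOCK 130;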

  • Will reorg of tables and reindex of indexes help ? And How?

    Users suddenly reported that one of their applications has become very slow. They also identified millions of rows of history data, which they deleted, but that did not help. Should we go for a reorg of tables and a rebuild of indexes? Will this help at all? Will it help to regain space, or will it help with speed? How should I do it?
    Reorg tables
    1. First take export dmp of all the tables of that particular schema.
    2. Then truncate the tables.
    3. Then import that export backup.
    reindex indexes
    1. Find out how much space is used by the indexes.
    2. Add that much of space in a different tablespace
    3. then move the indexes there.
    I am not sure about the steps which I missed. I need some advice from you.
    And how can I gain the space in the Unix server after this delete operation?

    With problems like this, I think the first response should not be that we need to reorg tables/indexes. Steps are as follows.
    1) Ask users which portion of the application is running slow, and how slow it is.
    2) Also, as others have suggested, generate an AWR report or statspack (or bstat/estat) and see where time is being spent.
    3) The problem could be that the execution plan of some SQL has changed and is now performing badly; this is the most likely reason. Another possibility is some other contention in the database or an I/O problem. All of this you can find from these reports.
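    If a plan change is the suspicion, AWR already holds per-plan history. A minimal sketch against dba_hist_sqlstat (the sql_id substitution variable is a placeholder for the suspect statement):

    SELECT   sql_id,
             plan_hash_value,
             SUM(executions_delta)                AS executions,
             ROUND(SUM(elapsed_time_delta)/1e6,1) AS elapsed_seconds
    FROM     dba_hist_sqlstat
    WHERE    sql_id = '&sql_id'
    GROUP BY sql_id, plan_hash_value
    ORDER BY elapsed_seconds DESC;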

  • Table and index compression

    Hello all,
    is there any command to set all tables and/or all indexes in one tablespace to COMPRESS YES?
    Regards
    Olaf

    Hi Olaf,
    strange.
    I think it is related to the different kinds of quotation marks ' " ...
    I modified the command a little bit by putting the double quotation marks " into single ones ' '; the following runs in my environment:
    coe6p001:db2x72 17> db2 "SELECT 'ALTER TABLE '||RTRIM(TABSCHEMA)||'.''"'||RTRIM(TABNAME)||'"''  COMPRESS YES ; ' from    syscat.tables where tabschema='SAPX72' and TYPE='T' and VOLATILE NOT LIKE 'C' and TBSPACEID=14"
    1                                                                               
    ALTER TABLE SAPX72.'||RTRIM(TABNAME)||'  COMPRESS YES ;                                                                               
    ALTER TABLE SAPX72.'||RTRIM(TABNAME)||'  COMPRESS YES ;                                                                               
    ALTER TABLE SAPX72.'||RTRIM(TABNAME)||'  COMPRESS YES ;
    Edit: Sorry, I just saw that it produces the same (unresolved) output.
    So, I put the following into a script compr.sql instead:
    SELECT 'ALTER TABLE '||RTRIM(TABSCHEMA)||'."'||RTRIM(TABNAME)||'" COMPRESS YES;'
            from    syscat.tables
            where   tabschema='SAPX72'
            and     TYPE='T'
            and VOLATILE NOT LIKE 'C'
            and TBSPACEID=14
    (do not forget the termination character ; at the end )
    When I run it with db2 -tvf compr.sql, it produces the following output:
    coe6p001:db2x72 21> db2 -tvf compr.sql
    SELECT 'ALTER TABLE '||RTRIM(TABSCHEMA)||'."'||RTRIM(TABNAME)||'" COMPRESS YES;' from    syscat.tables where   tabschema='SAPX72' and     TYPE='T' and VOLATILE NOT LIKE 'C' and TBSPACEID=14
    1                                                                               
    ALTER TABLE SAPX72."DYNPTXTLD" COMPRESS YES;                                                                               
    ALTER TABLE SAPX72."CO2MAP" COMPRESS YES;                                                                               
    ALTER TABLE SAPX72."CO2MAPINF" COMPRESS YES;                                                                               
    ALTER TABLE SAPX72."D344L" COMPRESS YES;                                                                               
    ALTER TABLE SAPX72."O2PAGINC" COMPRESS YES;                                                                               
    ALTER TABLE SAPX72."O2PAGRT" COMPRESS YES;                                                                               
    ALTER TABLE SAPX72."DDFTX" COMPRESS YES;                                                                               
    ALTER TABLE SAPX72."D342L" COMPRESS YES;                                                                               
    ALTER TABLE SAPX72."D346T" COMPRESS YES;                                                                               
    9 record(s) selected.
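    If the indexes should be compressed as well, the same generator pattern works against SYSCAT.INDEXES. A sketch, assuming DB2 9.7 or later (index compression does not exist before that) and reusing the table filter from above; note that each index only becomes compressed after a subsequent reorg/rebuild:

    SELECT 'ALTER INDEX '||RTRIM(I.INDSCHEMA)||'."'||RTRIM(I.INDNAME)||'" COMPRESS YES;'
    FROM   SYSCAT.INDEXES I
    JOIN   SYSCAT.TABLES  T
           ON T.TABSCHEMA = I.TABSCHEMA AND T.TABNAME = I.TABNAME
    WHERE  T.TABSCHEMA = 'SAPX72'
    AND    T.TYPE = 'T'
    AND    T.VOLATILE NOT LIKE 'C'
    AND    T.TBSPACEID = 14;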

  • Clustered indexes and deadlocks

    Hi,
    I have run into some problems with clustered indexes and deadlocks. I have found some breadcrumbs about this on the web but didn't really understand everything. I am a DBA by accident and mainly a BI and DWH developer. The article most relevant to the problem
    seems to be the following:
    SQL Server Deadlocks Caused By Clustered Index Scan .
    The database is running with the READ COMMITTED SNAPSHOT transaction isolation level. The second query seems to be the problematic one. First a row is inserted into table SubjectRevisionEntity. Second, a row is inserted into table Partner, which has a foreign key
    on SubjectRevisionEntity. This foreign key is validated using a Clustered Index Seek.
    Having done some research on the topic using the internet, my hypothesis is as follows:
    - The insert from Query 1 in Session 1 locks a page in table SubjectRevisionEntity.
    - Now a new session (Session 2) is started. The insert from Query 1 in Session 2 probably locks the same page in table SubjectRevisionEntity.
    - The insert from Query 2 in Session 1 locks a page in table Partner.
       A lock on the page in table SubjectRevisionEntity is necessary to do the Clustered Index Seek, but this page is already locked by Session 2. Session 2, in turn, needs to lock the page in table Partner, which is already locked by Session 1 --> a deadlock
    occurs.
    Does this make any sense? At the moment I don't have the means to test the hypothesis, but I will look into that.
    I am just thinking about countermeasures to take. What about configuring the index to avoid page locks? All other queries seem to be fine, I suppose, as they operate on only one table. My colleagues from software engineering favour replacing clustered indexes
    with nonclustered indexes, as they have done in the past. However, I think the disadvantages of tables without a clustered index (heaps) regarding storage (forwarded records) and query performance are much bigger than their benefit for problems like these.
    Regarding the article, I did not understand the author's point that two simultaneous table scans on one table by two sessions won't work. I thought that this is no problem, as the sessions would use a shared lock on the table.
    Thank you very much for sharing your expertise in advance!
    Martin

    As you describe this that cannot be the cause of your deadlock.
    After session 1 executes query 1, it will have an IX (Intent Exclusive) lock on the clustered index of table SubjectRevisionEntity (note that a lock on a clustered index is a lock on the table since the table is contained in the clustered index), also an
    IX lock on the page in the index where the new row will be inserted and an X (exclusive) lock on the key that you just inserted.
    When session 2 executes it also needs an IX lock on the clustered index of table SubjectRevisionEntity, this is allowed because multiple sessions can have IX locks on the same resource at the same time, also an IX lock on the page in the index where the
    new row will be inserted (also allowed even if this entry is in the same page as the row inserted by session 1), and an X lock on the key that session 2 inserted.  This is also allowed UNLESS session 2 and session 1 are trying to insert a row with the
    SAME primary key value.  From your description, I gather that session 1 and session 2 are trying to insert different keys.
    Then session 1 attempts to insert a row in Partner which has a foreign key reference to the row in SubjectRevisionEntity.  That means it must check for the existence of the row that session 1 inserted.  But it can do this because all it needs is
    an S (shared) lock on the SubjectRevisionEntity table, and an S lock on the page.  It can get those even though session 2 has an IX lock on those resources, because S locks and IX locks are compatible.  It also needs an S lock on the row in SubjectRevisionEntity. 
    That is no problem unless it is trying to reference the row which session 2 just entered.  (Once again, I assume this is not the case in your situation?)  It then inserts the row in Partner, getting an IX lock on the Partner table, an IX lock on the
    page and an X lock on the new key in Partner.
    Then session 2 attempts to insert a row in Partner.  That will work unless either it is inserting a row in Partner with the same primary key as the row inserted by session 1 or it is trying to reference the same row in SubjectRevisionEntity that session
    1 inserted.
    So this cannot be the cause of your deadlock unless both sessions are entering the same key values.  The fact that they may both be entering keys on the same page should not be causing you deadlock problems.
    Regarding your question about two scans of the entire table (or of the entire clustered index, if the table has one): you are correct, the scans do get shared locks and there is no deadlock problem UNLESS both sessions are holding locks
    that are incompatible with S locks.  For example, if you were in read committed mode and session 1 had inserted a row with clustered index key = 47 and session 2 had inserted a row with clustered index key = 23, and both sessions then attempt to do a complete
    clustered index scan (for example, by doing something like SELECT <blah blah> FROM <table> WHERE <some nonindexed column> = 0), then both sessions will try to get S locks on every row, so session 1 will be stopped at key = 23 and session
    2 will be stopped at key = 47, and that is a deadlock.  BUT
    You are using READ COMMITTED SNAPSHOT, not READ COMMITTED.  In READ COMMITTED SNAPSHOT, writes do not block reads.  So the situation above does not apply to you since neither of the sessions would be blocked because it was attempting to read a
    row which was locked by an update from another session.
    Tom
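    If you want to see the lock picture described above while reproducing the scenario, the granted and waiting locks per session can be listed from a third connection. A minimal sketch:

    -- shows the IX/X/S locks per session and resource for the current database
    SELECT request_session_id,
           resource_type,
           resource_associated_entity_id,
           request_mode,
           request_status
    FROM   sys.dm_tran_locks
    WHERE  resource_database_id = DB_ID()
    ORDER BY request_session_id, resource_type;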

  • Table and index in the same tablespace

    I have a table and its associated indexes all in one tablespace. Now, will creating a separate tablespace just for the indexes, and dropping and recreating the table's indexes in that new tablespace, help?
    Thanks

    In all probability, no. This may have some housekeeping/recovery benefits but it is very unlikely to have performance benefits. Most modern disk arrays are configured so that data access is spread pretty evenly across all the spindles. If your disk is configured such that you have "hot" and "cold" disks, and if you can move the index to a different tablespace that can be placed on one of the "cold" disks, this will tend to balance out disk I/O and may improve performance.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
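    Should you decide to relocate them anyway, there is no need to drop and recreate the indexes; a rebuild can move an index in place. A minimal sketch with placeholder names:

    ALTER INDEX my_schema.my_index REBUILD TABLESPACE indexes_ts;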

  • Analyze table and indexes

    Hi,
    I want to write a dynamic script to analyze the tables and their corresponding indexes.
    Could anybody please provide a dynamic script? I have been trying to write one but have not succeeded.
    I have 321 tables and around 500 corresponding indexes to ANALYZE...
    Thank you very much..

    user11175946 wrote:
        I want to write a dynamic script to analyze the tables and their corresponding indexes. I have 321 tables and around 500 corresponding indexes to ANALYZE...
    Please do NOT waste your time doing so.
    "For the collection of most statistics, use the DBMS_STATS package, which lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine tune your statistics collection in other ways. See Oracle Database PL/SQL Packages and Types Reference for more information on the DBMS_STATS package.
    Use the ANALYZE statement (rather than DBMS_STATS) for statistics collection not related to the cost-based optimizer:
        To use the VALIDATE or LIST CHAINED ROWS clauses
        To collect information on freelist blocks"
    (http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_4005.htm#i2086320)
    For 10g and later, statistics are gathered automatically every 24 hours by default.
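    A minimal sketch of the DBMS_STATS route, gathering a whole schema including its indexes in one call (the schema name is a placeholder):

    BEGIN
      -- cascade => TRUE also gathers statistics on the indexes of each table
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname => 'MY_SCHEMA',
        cascade => TRUE);
    END;
    /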

  • Table and Index compression in data warehouse - thoughts?

    Hi,
    We have a data warehouse with large fact tables and materialized views of this data.
    Approx. 3 million inserts per day, around 12 million at weekends.
    The fact tables are expected to hold 200 million rows, and a couple will hold 1-3 billion.
    Tables are partitioned and have bitmap indexes.
    Just wondered what your thoughts were about compressing large fact tables and mviews, both from the point of view of ETL into them and of reporting from them afterwards.
    I take it you can compress/uncompress accordingly without any problem?
    Many Thanks

    After compression, most SELECT statements would not get slower. Actually, many can get faster due to reduced I/O and buffer needs.
    The situation with DML is more complex. It depends on the exact compression options (basic or advanced) and the kind of DML (INSERT, UPDATE, direct load, ...), but generally DML is negatively affected by compression.
    In data warehouses (DWs), it is usually quite beneficial to compress partitions or tables that contain data that is not supposed to be modified (read only or read mostly). Please note that in many cases you do not have to compress while you are loading the data – you can do that later.
    You can also consider compressing some of your B-tree indexes (if you use them in your DW system).
    Iordan Iotzov
    http://iiotzov.wordpress.com/
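    A sketch of the "compress later" approach for a partition that has finished loading (object names are placeholders; basic compression shown, which suits read-mostly data). Moving the partition marks its local index partitions UNUSABLE, so they need a rebuild afterwards:

    ALTER TABLE fact_sales MOVE PARTITION p_2013_q1 COMPRESS;
    ALTER INDEX ix_fact_sales_prod REBUILD PARTITION p_2013_q1;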

  • DB02 view is empty on Table and Index analyses  DB2 9.7 after system copy

    Dear All,
                 I did the quality-system refresh by the system copy export/import method. ECC6 on HP-UX, DB2 9.7.
    After the import, the RUNSTATS status in DB02 for table and index analysis was empty and all values show '-1', even though
    a) all standard background jobs are scheduled in SM36
    b) automatic runstats is enabled in the DB2 parameters
    c) REORGCHK is scheduled periodically from DB13 and has already run twice
    d) 'reorgchk update statistics on table all' was also run at the DB2 level.
    But the RUNSTATS status in DB02 is not getting updated. It is empty.
    Please suggest.
    Regards
    Vinay

    Hi Deepak,
    Yes, that is possible (but only with an offline backup). But for the new features like reclaimable tablespaces (to lower the high-water mark),
    it's better to export/import with a system copy.
    Also, with a system copy you can use index compression.
    After backup and restore you can also get reclaimable tablespaces, but you have to create new tablespaces
    and then use db6conv and online table move to move the contents online into the new ones.
    Best regards,
    Joachim
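    For completeness, once a tablespace has been recreated with reclaimable storage (DB2 9.7+), lowering the high-water mark is a single statement; a sketch with a placeholder tablespace name (this assumes the tablespace really is of the reclaimable-storage kind):

    ALTER TABLESPACE my_data_ts REDUCE MAX;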

  • Heap tables and index organized tables

    I am performing a migration from MS SQL Server to Oracle 10gR2; in MS SQL Server all tables have a clustered PK index. Is it necessary to use index-organized tables for that migration, or are ordinary heap-organized tables enough? And what are the differences between those table types and the MS SQL Server tables?
    Thanx

    In Oracle, the typical table is a standard 'heap' table. Stuff goes into the heap table randomly, and randomly comes out.
    An index-organized table (IOT) is somewhat similar to a clustered index in SQL Server. It can have some performance advantages over a heap table that has an associated index on the primary key.
    The IOT can also have some disadvantages, such as the need for an overflow segment to handle the extra data when a row doesn't conveniently fit in a block (implying multiple I/Os), and an extra mapping table if bitmap indexes are required (implying extra I/Os).
    An unintelligent developer will generally believe that Oracle and SQL Server are the same - after all they both run SQL - and will attempt to port by a simple translation of syntax.
    An intelligent developer will test both styles of tables, during a port. Such a developer will also be quick to learn about the changes in internals (such as locking mechanisms) and will realize that different styles of coding are required for many application situations.
    I recommend reading Tom Kyte's books to get handle on pros and cons as well as testing techniques to help a developer become intelligent.
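    A minimal sketch of the IOT flavour, useful for such a side-by-side test (table, column names and the overflow tablespace are placeholders):

    CREATE TABLE orders (
      order_id    NUMBER PRIMARY KEY,
      customer_id NUMBER NOT NULL,
      notes       VARCHAR2(4000)
    )
    ORGANIZATION INDEX
    PCTTHRESHOLD 20
    OVERFLOW TABLESPACE users;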

  • SQL Server 2014 Bug - with Clustered Index On Temp Table with Identity Column

    Hi,
    Here's a stored procedure.  (It could be shorter, but this will show the problem.)
              CREATE PROCEDURE dbo.SGTEST_TBD_20141030 AS
              IF OBJECT_ID('TempDB..#Temp') IS NOT NULL
                 DROP TABLE #Temp
              CREATE TABLE #Temp
              (
               Col1        INT NULL
              ,Col2        INT IDENTITY NOT NULL
              ,Col3        INT NOT NULL
              )
              CREATE CLUSTERED INDEX ix_cl ON #Temp(Col2)
              SET IDENTITY_INSERT #Temp ON;
              RAISERROR('Preparing to INSERT INTO #Temp...',0,1) WITH NOWAIT;
              INSERT INTO #Temp (Col1, Col2, Col3)
              SELECT 1,2,3
              RAISERROR('Insert Success!!!',0,1) WITH NOWAIT;
              SET IDENTITY_INSERT #Temp OFF;
    In SQL Server 2014, if I execute this (EXEC dbo.SGTEST_TBD_20141030) it works. If I execute it a second time, it fails hard with:
            Msg 0, Level 11, State 0, Line 0
            A severe error occurred on the current command.  The results, if any, should be discarded.
            Msg 0, Level 20, State 0, Line 0
            A severe error occurred on the current command.  The results, if any, should be discarded.
    In SQL Server 2012, I can execute it over and over with no problem. I've discovered two workarounds:
    1) Add "WITH RECOMPILE" to the CREATE/ALTER PROCEDURE statement, or
    2) Declare the clustered index in the CREATE TABLE statement, e.g. ",UNIQUE CLUSTERED (COL2)".
    The second option only works, though, if the column is unique. I've opted for "WITH RECOMPILE" for now. But I thought I should share.
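    A sketch of what the second workaround looks like when spelled out (it replaces the separate CREATE CLUSTERED INDEX statement and, as noted above, requires the column to be unique):

    CREATE TABLE #Temp
    (
     Col1  INT NULL
    ,Col2  INT IDENTITY NOT NULL
    ,Col3  INT NOT NULL
    ,UNIQUE CLUSTERED (Col2)
    );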

    Hi,
    I did not get any error Message:
             CREATE TABLE #Temp
              (
               Col1        INT NULL
              ,Col2        INT IDENTITY NOT NULL
              ,Col3        INT NOT NULL
              )
              CREATE CLUSTERED INDEX ix_cl ON #Temp(Col2)
              SET IDENTITY_INSERT #Temp ON;
              RAISERROR('Preparing to INSERT INTO #Temp...',0,1) WITH NOWAIT;
              INSERT INTO #Temp (Col1, Col2, Col3)
              SELECT 1,2,3
              RAISERROR('Insert Success!!!',0,1) WITH NOWAIT;
              SET IDENTITY_INSERT #Temp OFF;
    SELECT * FROM #Temp
    OUTPUT:
    Col1  Col2  Col3
    1     2     3
    1     2     3
    1     2     3
    Select @@version
    --Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64) 
    Oct 19 2012 13:38:57 
    Copyright (c) Microsoft Corporation
    Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

  • ORA-12815 while reorg/compression of tables without LONG and LOB with 11g

    Hello fellows,
    I am in the luxury situation that I got a copy of our production R/3 environment that was left over from a project and is no longer required by any of our developers.
    As we are still on Oracle 9.2.0.7, I upgraded this copy to 11.2 in a two-step process (from 9i to 10g to 11g).
    I got myself the SAP dbatools 7.20 (3) and Note 1431296 - LOB conversion and table compression with BRSPACE 7.20.
    I started with some small tablespaces, but after a while I thought I'd try to reorg/compress the worst of all tablespaces... PSAPPOOLD with ~15,000 tables.
    I first converted the tables with LONG fields online that can be compressed, then the ones that cannot be compressed, then I reorged the tables that contain old LOB fields online. With these different executions of the brspace commands, which are also mentioned in the above note, I managed to move ~3,000 tables without any issues.
    But now I started with the biggest bunch of tables, the compression of tables without LONG and LOB fields online.
    This is the command I used:
    brspace -u / -p reorgEXCL.tab -f tbreorg -a reorg -o sapr3 -s PSAPPOOLD -t allsel -n psapreorg -i psapreorgi -c ctab -SCT
    ...after a few checks that are performed by brspace, I end up in the screen
    Options for reorganization of tables (which is still nothing I wouldn't have expected)
    1 * Reorganization action (action) ............ [reorg]
    2 - Reorganization mode (mode) ................ [online]
    3 - Create DDL statements (ddl) ............... [yes]
    4 ~ New destination tablespace (newts) ........ [PSAPREORG]
    5 ~ Separate index tablespace (indts) ......... [PSAPREORGI]
    6 - Parallel threads (parallel) ............... [1]
    7 ~ Table/index parallel degree (degree) ...... []
    8 ~ Category of initial extent size (initial) . []
    9 ~ Sort by fields of index (sortind) ......... []
    10 # Index for IOT conversion (iotind) ......... [FIRST]
    11 - Compression action (compress) ............. [none]
    12 # LOB compression degree (lobcompr) ......... [medium]
    13 # Index compression method (indcompr) ....... [ora_proc]
    But independent of what I enter at points 6 and 7, I always end up with the errors below during the reorg/compression of the outstanding tables.
    Just one sample, but the issue is always the same:
    BR0301E SQL error -12815 in thread 2 at location tab_onl_reorg-26, SQL statement:
    'CREATE UNIQUE INDEX "SAPR3"."RTXTF_____0#$" ON "SAPR3"."RTXTF#$" ("MANDT", "APPLCLASS", "TEXT_NAME", "TEXT_TYPE", "FROM_LINE",
    "FROM_POS")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 1662976 NEXT 655360 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSAPREORGI" PARALLEL ( INSTANCES 0) '
    ORA-12815: value for INSTANCES must be greater than 0
    Just in case, here it the OBJECT DDL:
    CREATE UNIQUE INDEX "SAPR3"."RTXTF_____0"
        ON "SAPR3"."RTXTF"  ("MANDT", "APPLCLASS", "TEXT_NAME",
        "TEXT_TYPE", "FROM_LINE", "FROM_POS")
        TABLESPACE "PSAPPOOLI" PCTFREE 10 INITRANS 2 MAXTRANS 255
        STORAGE ( INITIAL 1624K NEXT 640K MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
        LOGGING
    Perhaps someone has already gained some experience with brspace compression and can give me a hint.
    Many thanks
    Florian

    Hello Florian,
    > Perhaps someone already gained some experience on the compression with brspace and can give me a hint.
    I have not performed any compression operations on Oracle 11g R2 with brspace yet, but this error seems fairly obvious.
    It seems like SAP is still not using the procedure DBMS_REDEFINITION.COPY_TABLE_DEPENDENT to create the indexes (and NOT NULL constraints) on Oracle 11g R2. No idea why; I can only think of one reason (creating a DDL file before the reorganisation so the DDL parameters can be changed through the reorganisation in some way).
    So in your case it seems like SAP is generating incorrect SQL for creating the index on the interim table.
    You can try to create the DDL file first, correct the parameters, and after that run the reorganisation again.
    Please check SAP Note 646681 (remark 5) for more information about the procedure of creating the DDL first and then doing the reorg with edited parameters.
    Regards
    Stefan
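    Based on the error text above, the clause brspace generates is PARALLEL ( INSTANCES 0 ), which Oracle rejects with ORA-12815. A sketch of what the edited DDL for the interim index could look like once that clause is replaced (the rest of the statement is taken from the error message; NOPARALLEL is one valid substitute):

    CREATE UNIQUE INDEX "SAPR3"."RTXTF_____0#$" ON "SAPR3"."RTXTF#$"
      ("MANDT", "APPLCLASS", "TEXT_NAME", "TEXT_TYPE", "FROM_LINE", "FROM_POS")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      TABLESPACE "PSAPREORGI" NOPARALLEL;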

  • Large tables without clustered indexes -- a bad idea, or ok?

    We have several large tables that don't have clustered indexes, but do have primary keys and assorted other indexes. We're seeing issues that look like they are related to statistics even though there is a nightly update statistics job.
    So my question is, does the existence of a clustered index impact other indexes or the update statistics process? Are our tables ok the way they are?
    Adding clustered indexes would be a big project for non-development-related reasons so I would need to have good reasons to advocate for this.

    RSingh,
    "By default" is the confusing phrase here. If you do it with the wizard, I agree. But with T-SQL there is no "by default"; it can be anything.
    I admit I assumed that, as I don't do much with the wizard. :)
    Try the below:
    --Here by default, its non-clustered
    create table Test_table(Col1 int , Col2 int not null )
    Create clustered index IX__Test_Table on Test_Table (Col1)
    Alter table test_Table
    Add Constraint PK_testTable
    Primary Key (Col2);
    Drop table test_table
    --Here by default, its clustered
    create table Test_table(Col1 int , Col2 int not null Primary key)
    Create index IX__Test_Table on Test_Table (Col1)
    Drop table test_table
    Neither is "by default" a confusing phrase, nor does this happen only in the wizard. Let me clarify below. I said "when a primary key constraint is created, it creates a unique clustered index by default". Run the DDL below and check whether
    a clustered index is created or not, i.e.
    CREATE TABLE Persons
    (
    P_Id int NOT NULL,
    LastName varchar(255) NOT NULL,
    FirstName varchar(255),
    Address varchar(255),
    City varchar(255),
    CONSTRAINT pk_PersonID PRIMARY KEY (P_Id,LastName)
    )
    Regards, RSingh
    That did not really make sense in my context, as I am NOT disputing that it will create a clustered index; in fact, my code says the same. My point is only that "by default" is confusing.
    Let me explain what I mean by "by default"....
    Create index IX_Indexname on Tablename(Colname)
    The above will create a nonclustered index BY DEFAULT. In any situation, the above creates only a nonclustered index, irrespective of the other indexes present on the table.
    Hope the above is clear. Maybe it's just my way of looking at "by default".
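    Either way, the result is easy to verify after running one of the examples above (and before dropping the table); a minimal sketch against the Persons table:

    SELECT name, type_desc, is_primary_key, is_unique
    FROM   sys.indexes
    WHERE  object_id = OBJECT_ID('Persons');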

  • What are the exact differences between a cluster table and a pooled table

    Hi,
       can you tell me, Ravi, what the exact differences between a cluster table and a pooled table are?
    With regards,
    anilreddyg

    Hi Anil Reddy
    Pooled Tables, Table Pools, Cluster Tables, and Table Clusters
    These types of tables are not transparent in the sense that they are not readable or manageable directly using the underlying database system tools. They are managed from within the R/3 environment, from the ABAP Dictionary, and also at runtime when they are loaded into application memory. Pool and cluster tables are logical tables. Physically, these logical tables are arranged as records of transparent tables. The pool and cluster tables are grouped together in other tables, which are of the transparent type. The tables that group together pool tables are known as table pools, or just pools; similarly, table clusters, or just clusters, are the tables which group cluster tables. Not all operations that can be performed on transparent tables can be executed on pool or cluster tables.
    For instance, you can manage these tables using Open SQL calls from ABAP, but not Native SQL. These tables are meant to be buffered and loaded in memory, because they are commonly used for storing internal control information and other types of data with no external (business) relevance. SAP recommends that tables of pool or cluster type be used exclusively for control information such as
    program parameters, documentation, and so on. Transaction and application data should be stored in transparent tables.
    Table Pools
    From the point of view of the underlying DBMS, as well as from the point of view of the ABAP Dictionary, a table pool is a transparent table containing a group of pooled tables which, when created, were assigned to this table pool.
    Field Type Description
    TABNAME CHAR(10) Table name
    VARKEY CHAR(n) Maximum key length n =< 110
    DATALN INT2(5) Length of the VARDATA record returned
    VARDATA RAW(m) Maximum length of the data varies according to DBMS
    Table Clusters
    Similarly to pooled tables, cluster tables are logical tables which, when created, are assigned to a table cluster. Therefore a table cluster, or just cluster, groups together several tables of cluster type. Several logical rows from different cluster tables are brought together in a single physical record. The records
    from the cluster tables assigned to a cluster are thus stored in a single common table in the database. A cluster contains a transparent cluster key, which must be located at the start of the key of all logical cluster tables to be included in the cluster. As well, a cluster contains a long field (VARDATA), which contains the
    data of the cluster tables for this key. If the data does not fit into the field, continuation records are created.
    Field Type Description
    CLKEY1 CHAR(*) First key fields
    CLKEY2 CHAR(*) Second key field
    CLKEYN CHAR(*) nth key field
    PAGENO INT2(5) Number of the next page
    TIMESTMP CHAR(14) Time stamp
    PAGELG INT2(5) Length of the VARDATA record returned
    VARDATA RAW(*) Maximum length of the data section; varies according to database system
    Working with Tables
    The dictionary includes many functions for working with tables. There are five basic operations you can perform on tables: display, create, delete, modify, copy. Please do not confuse displaying a table with displaying the table entries (table contents). In order to display a table, it must previously exist; otherwise the system will display an error message in the status bar. For the following example, the table TABNA is used. To display this table, from the main dictionary screen, enter the table name in the Object name
    input field with the radio button selected next to Tables. Then, click on the Display button at the bottom of the screen, or press the F7 function key, or, alternatively,
    select Dictionary object Display from the menu.
    In this screen, you can see table information such as
    - Table type, shown next to the name of the object. In the example, it is a transparent table.
    - Short text description.
    - Name of the user who made the last change, and the date of the change.
    - Master language.
    - Table status. On the screen, you can see this table is saved and active.
    - Development class. For information on development classes, refer to Chap. 6.
    - Delivery class, which sets the maintenance group for the table. It controls how tables will behave during client copy procedures, upgrades, and so forth.
    - Tab. Maint. Allowed flag, which indicates whether you can generate a screen for maintaining table entries.
    Then, on the lower part of the screen, you can see the table fields with all associated characteristics such as:
    - Field name.
    - Key indicator. When set, this field is the primary key, or part of it.
    - Data element.
    - Basic data type.
    - Length.
    - Check table.
    - Short text, describing the field.
    Additional information about the table can be displayed by selecting the corresponding functions from the menu or directly from the application toolbar, such as keys, indexes, or technical settings
    Standard table:
    The key access to a standard table uses a sequential search. The time required for an access is linearly dependent on the number of entries in the internal table.
    You should usually access a standard table with index operations.
    Sorted table:
    The table is always stored internally sorted by its key. Key access to a sorted table can therefore use a binary search. If the key is not unique, the entry with the lowest index is accessed. The time required for an access is logarithmically dependent on the number of entries in the internal table.
    Index accesses to sorted tables are also allowed. You should usually access a sorted table using its key.
    Hash table:
    The table is internally managed with a hash procedure. All the entries must have a unique key. The time required for a key access is constant, that is it does not depend on the number of entries in the internal table.
    You cannot access a hash table with an index. Accesses must use generic key operations (SORT, LOOP, etc.).
    Index table:
    The table can be a standard table or a sorted table.
    Index access is allowed to such an index table. Index tables can be used to define the type of generic parameters of a FORM (subroutine) or a function module.
    Just have a look at these links:
    http://help.sap.com/saphelp_nw04/helpdata/en/90/8d7304b1af11d194f600a0c929b3c3/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/74/83015785d811d295a800a0c929b3c3/frameset.htm
    Regards
    Sreeni
