Avoid Index Growth

Can anybody tell me how I can keep an index from growing too fast
when I insert presorted data into the corresponding table?
Is there any way to force rebalancing of the index (to get a fill rate of nearly 100%)?
Thanks, Charles
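For reference, Oracle can repack a sparse B*tree index on demand; a minimal sketch, assuming a hypothetical index name ix_demo:

ALTER INDEX ix_demo COALESCE;            -- merge adjacent, sparsely filled leaf blocks in place
ALTER INDEX ix_demo REBUILD PCTFREE 0;   -- or recreate the index packed to nearly 100%

COALESCE works in place and is cheaper; REBUILD recreates the whole index segment.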

Jonathan Lewis wrote:
huh? wrote:
Our developer passed this code along to us as we are seeing some performance and behaviour issues in our 9.2.0.7 DB. When this process runs, our 4MB index grows to over 1.5 GB and we are forced to re-org it to bring it back down.
Your table definition shows the indexed column as CHAR(200) - which means fixed length - so 4MB equates to about 20,000 rows. If this grows to 1.5GB and rebuilds to 4MB, then at some point your index runs to about 80KB per index entry. If, as you describe, you only delete about 40% of the data and re-insert a similar quantity just once, then you must have hit a bug.
You're on 9.2.0.7 (buggy) - were you using ASSM (also buggy)? I came across a bug with ASSM on one occasion that resulted in a process using only 3 blocks from each index extent that it allocated during a PL/SQL loop to modify a couple of hundred rows. Different circumstances from yours, and an earlier version - but you may have hit something similar.
A few thoughts:
- If it's in ASSM, move the index to a freelist-managed tablespace to see what happens.
- If you need the index, could you make it unusable for the load (see the sketch below)?
- Should you have the index at all?
- Should this table be list-partitioned on LIST_DOMAIN? Then you could use partition exchange to load data like this.
Committing between the delete and the insert generally helps in cases like this.
Committing between each delete/insert pair should (in the absence of bugs) help.
Regards
Jonathan Lewis

Yes, we are using ASSM. Yes, we have been told the same - that this sounds like a common bug. One thing we will do is the two commits, as well as your suggestion of moving or altering the index, and see how that fares. Thanks for your help.
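The "make it unusable" suggestion translates roughly into the following sketch (the index name is hypothetical, and skipping unusable indexes during DML only works for non-unique indexes):

ALTER INDEX t_team_list_val_ix UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;  -- DML now bypasses the unusable index
-- ... run the delete/insert load here ...
ALTER INDEX t_team_list_val_ix REBUILD;          -- recreate the index compactly afterwards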

Similar Messages

  • A better SQL to avoid Index growth

    Our developer passed this code along to us as we are seeing some performance and behaviour issues in our 9.2.0.7 DB. When this process runs, our 4MB index grows to over 1.5 GB and we are forced to re-org it to bring it back down.
    BEGIN
    PRETERR := 'NO ERROR';
    PRETCODE := 0;
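    -- Refresh each LIST_DOMAIN slice: delete the old rows for a domain, then
    -- re-insert the current values (the same pattern repeats for each domain below).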
    DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectalias';
    INSERT INTO T_TEAM_LIST_VAL
    (LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
    SELECT DISTINCT 'selectalias', V.PVR_ALIAS, NULL, NULL
    FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
    WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
    AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
    AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
    PVR_EFFECTIVE_END_DATE
    AND PT.PTY_PARTY_STATUS <> 'D'
    AND V.PVR_ALIAS IS NOT NULL
    AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
    ORDER BY 2;
    DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectlender';
    INSERT INTO T_TEAM_LIST_VAL
    (LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
    SELECT DISTINCT 'selectlender', V.PVR_BUSINESS_NAME, NULL, NULL
    FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
    WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
    AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
    AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
    PVR_EFFECTIVE_END_DATE
    AND PT.PTY_PARTY_STATUS <> 'D'
    AND V.PVR_BUSINESS_NAME IS NOT NULL
    AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
    ORDER BY 2;
    DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectlendername';
    INSERT INTO T_TEAM_LIST_VAL
    (LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
    SELECT DISTINCT 'selectlendername', PP.PRP_COLUMN_1, NULL, NULL
    FROM T_POLICY, T_POLICY_PARTY, T_POLICY_PARTY_PROPERTY PP
    WHERE POL_POLICY_ID = PPA_POL_POLICY_ID
    AND PPA_EFFECTIVE_END_DATE = TO_DATE('9999-01-01', 'RRRR-MM-DD')
    AND PPA_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
    AND POL_POLICY_ID = PP.PRP_POL_POLICY_ID
    AND PP.PRP_EFFECTIVE_END_DATE = TO_DATE('9999-01-01', 'RRRR-MM-DD')
    AND PP.PRP_COLUMN_1 IS NOT NULL
    ORDER BY 2;
    DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectinsinstcode';
    INSERT INTO T_TEAM_LIST_VAL
    (LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
    SELECT DISTINCT 'selectinsinstcode', V.PVR_INSTITUTION_CODE, NULL, NULL
    FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
    WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
    AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
    AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
    PVR_EFFECTIVE_END_DATE
    AND PT.PTY_PARTY_STATUS <> 'D'
    AND V.PVR_INSTITUTION_CODE IS NOT NULL
    AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
    ORDER BY 2;
    DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectinstransit';
    INSERT INTO T_TEAM_LIST_VAL
    (LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
    SELECT DISTINCT 'selectinstransit', V.PVR_TRANSIT_NUM, NULL, NULL
    FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
    WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
    AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
    AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
    PVR_EFFECTIVE_END_DATE
    AND PT.PTY_PARTY_STATUS <> 'D'
    AND V.PVR_TRANSIT_NUM IS NOT NULL
    AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
    ORDER BY 2;
    DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectSubInstCode';
    INSERT INTO T_TEAM_LIST_VAL
    (LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
    SELECT DISTINCT 'selectSubInstCode', V.PVR_INSTITUTION_CODE, NULL, NULL
    FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
    WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
    AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
    AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
    PVR_EFFECTIVE_END_DATE
    AND PT.PTY_PARTY_STATUS <> 'D'
    AND V.PVR_INSTITUTION_CODE IS NOT NULL
    AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
    ORDER BY 2;
    DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectSubTransNum';
    INSERT INTO T_TEAM_LIST_VAL
    (LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
    SELECT DISTINCT 'selectSubTransNum', V.PVR_TRANSIT_NUM, NULL, NULL
    FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
    WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
    AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
    AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
    PVR_EFFECTIVE_END_DATE
    AND PT.PTY_PARTY_STATUS <> 'D'
    AND V.PVR_TRANSIT_NUM IS NOT NULL
    AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
    ORDER BY 2;
    DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectFileOwner';
    INSERT INTO T_TEAM_LIST_VAL
    (LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
    SELECT DISTINCT 'selectFileOwner',
    PVR_FIRST_NAME || ' ' ||
    DECODE(PVR_MIDDLE_NAME,
    NULL,
    PVR_MIDDLE_NAME || ' ') || PVR_LAST_NAME,
    NULL,
    PTY_PARTY_CODE
    FROM T_ROLE TR, T_USER_ROLE TUR, T_PARTY TP, T_PARTY_VERSION TPV
    WHERE TP.PTY_PARTY_ID = TPV.PVR_PTY_PARTY_ID
    AND SYSDATE BETWEEN TPV.PVR_EFFECTIVE_START_DATE AND
    TPV.PVR_EFFECTIVE_END_DATE
    AND TP.PTY_PARTY_ID = TUR.ULE_PFY_PTY_PARTY_ID
    AND (SYSDATE BETWEEN TUR.ULE_START_DATE AND TUR.ULE_END_DATE OR
    TUR.ULE_END_DATE IS NULL)
    AND TR.RLE_ROLE_ID = TUR.ULE_RLE_ROLE_ID
    AND PTY_PARTY_CODE NOT LIKE 'LTEST%'
    AND TR.RLE_ROLE_NAME IN
    ('VPOPS', /*'VPRSK2', */
    'OPSLEADER', 'TEAMLEADER', 'OPSLEVEL5', 'OPSLEVEL4', 'OPSLEVEL3',
    'OPSLEVEL2', 'OPSLEVEL1', 'OPSTRNG')
    AND PTY_PARTY_CODE IS NOT NULL
    ORDER BY 2;
    DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectbrokername';
    INSERT INTO T_TEAM_LIST_VAL
    (LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
    SELECT DISTINCT 'selectbrokername', PP.PPA_NAME_BROKER_TEXT, NULL, NULL
    FROM T_POLICY, T_POLICY_PARTY PP
    WHERE POL_POLICY_ID = PPA_POL_POLICY_ID
    AND PPA_EFFECTIVE_END_DATE = TO_DATE('9999-01-01', 'RRRR-MM-DD')
    AND PPA_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
    AND TRIM(PP.PPA_NAME_BROKER_TEXT) IS NOT NULL
    ORDER BY 2;
    DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectorg';
    INSERT INTO T_TEAM_LIST_VAL
    (LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
    SELECT DISTINCT 'selectorg', PT.PTY_PARTY_CODE, NULL, NULL
    FROM T_PARTY_VERSION V, T_PARTY PT, T_PARTY_FUNCTION F
    WHERE PT.PTY_PARTY_ID = V.PVR_PTY_PARTY_ID
    AND F.PFY_PTY_PARTY_ID = PT.PTY_PARTY_ID
    AND SYSDATE BETWEEN PVR_EFFECTIVE_START_DATE AND
    PVR_EFFECTIVE_END_DATE
    AND PT.PTY_PARTY_STATUS <> 'D'
    AND PT.PTY_PARTY_CODE IS NOT NULL
    AND F.PFY_SHR_STAKE_HOLDER_FN_CODE = 'LENDER'
    ORDER BY 2;
    COMMIT;
    END;
    Question 1: with a delete and an insert one after another, will this cause the index to grow rapidly? Would it be better to run all the deletes first, commit, and then run the inserts and commit?
    Question 2: at a high level, what method would be better? We thought of a truncate; however, the developer only deletes 40% of the data.

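    Jonathan's suggestion of committing between each delete/insert pair would look roughly like the following sketch for one domain (tables taken from the posted code; the joins are trimmed here for brevity):
    BEGIN
    DELETE FROM T_TEAM_LIST_VAL WHERE LIST_DOMAIN = 'selectalias';
    COMMIT; -- release the deleted index entries for reuse before re-inserting
    INSERT INTO T_TEAM_LIST_VAL
    (LIST_DOMAIN, LIST_VALUE_1, LIST_VALUE_2, LIST_VALUE_3)
    SELECT DISTINCT 'selectalias', V.PVR_ALIAS, NULL, NULL
    FROM T_PARTY_VERSION V
    WHERE V.PVR_ALIAS IS NOT NULL;
    COMMIT;
    END;
    /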

  • Database growth following index key compression in Oracle 11g

    Hi,
    We have recently implemented index key compression in our SAP R/3 environments, but unexpectedly this has not resulted in any reduction of index growth rates.
    What I mean by this is that while the indexes have compressed on average 3-fold (over the entire DB), we are not seeing this reflected in the DB growth going forward.
    I.e. we were experiencing ~15GB/month growth in our database prior to compression, but this figure doesn't seem to have changed much in the 2-3 months since we implemented it in our production environments.
    Our trial with ACO compression seemed to yield a reduction of table growth rates that corresponded to the compression ratio (i.e. table data growth rates dropped to a third after compression), but we haven't seen this with index compression.
    Does anyone know whether, after a rebuild with index key compression, future records inserted into the tables will be compressed too (as I assumed), or does it only compress what's there already?
    Cheers
    Theo

    Hello Theo,
    Does anyone know whether, after a rebuild with index key compression, future records inserted into the tables will be compressed too (as I assumed), or does it only compress what's there already?
    I wrote a blog post about index key compression internals a long time ago ([Oracle] Index key compression), but I have now noticed that one important statement is missing. Yes, future entries are compressed too - index key compression is a "live compression" feature.
    We were experiencing ~15GB/month growth in our database prior to compression, but this figure doesn't seem to have changed much in the 2-3 months since we implemented it in our production environments.
    Do you mean that your DB size still increases by ~15GB per month overall, or just the index segments? Depending on which segment types are growing, indexes may be only a small part of your system anyway.
    If you have enabled compression and performed a reorg of the indexes, you can run into one-time effects like 50/50 block splits due to fully packed blocks, etc. It also depends on the way the data is inserted/updated and which indexes are compressed.
    Regards
    Stefan
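    For reference, a minimal sketch of enabling key compression on an existing index (the index name is hypothetical; the right prefix-column count depends on the key):
    ALTER INDEX demo_ix REBUILD COMPRESS 1;  -- compress on the first key column
    -- entries inserted after the rebuild are compressed as well ("live compression")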

  • Difference b/w index and unique

    Hi,
    What is the difference between an index and unique?

    hi,
    The optional additions UNIQUE or NON-UNIQUE determine whether the key is to be unique or non-unique, that is, whether the table can accept duplicate entries. If you do not specify UNIQUE or NON-UNIQUE for the key, the table type is generic in this respect. As such, it can only be used for specifying types. When you specify the table type simultaneously, you must note the following restrictions:
    You cannot use the UNIQUE addition for standard tables. The system always generates the NON-UNIQUE addition automatically.
    You must always specify the UNIQUE option when you create a hashed table.
    INDEX:
    An index can be considered a copy of a database table that has been reduced to certain fields. This copy is always in sorted form. Sorting provides faster access to the data records of the table, for example using a binary search. The index also contains a pointer to the corresponding record of the actual table so that the fields not contained in the index can also be read.
    The primary index is distinguished from the secondary indexes of a table. The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database.
    You can also create further indexes on a table in the ABAP Dictionary. These are called secondary indexes. This is necessary if the table is frequently accessed in a way that does not take advantage of the sorting of the primary index for the access.
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The system automatically creates the primary index. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVINGclauses, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You create secondary indexes using the ABAP Dictionary. There you can create its columns and define it as UNIQUE. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to, at most, five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    What is the difference between primary index and secondary index?
    http://help.sap.com/saphelp_47x200/helpdata/en/cf/21eb2d446011d189700000e8322d00/frameset.htm
    A distinction is made between primary and secondary indexes to a table. The primary index consists of the key fields of the table and a pointer to the non-key fields of the table. The primary index is generated automatically when a table is created, and is created in the database at the same time as the table. It is also possible to define further indexes to a table in the ABAP/4 Dictionary, which are then referred to as secondary indexes.
    Message was edited by:
            Roja Velagapudi
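    In plain database terms, the unique/non-unique distinction looks like the following sketch (table and index names are hypothetical; ABAP Dictionary indexes ultimately become such statements on the database):
    CREATE TABLE parts (part_id NUMBER, plant VARCHAR2(4), qty NUMBER);
    CREATE UNIQUE INDEX parts_uk ON parts (part_id);   -- duplicate keys are rejected
    CREATE INDEX parts_plant_ix ON parts (plant);      -- non-unique: duplicates allowed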

  • Is the spotlight index stored on the boot drive or individual volumes?

    Hello!
    My questions are pretty simple, and are summarized here (from the block of text below):
    First, if I have several external hard drives, is the Spotlight index file for each drive stored on the external hard drive, or on OS X's boot drive?
    Secondly, if it's stored on the boot drive, can I copy these files to another Tiger-running Mac to eliminate the need to re-index the hard drives? Where do I find them?
    Third: Leopard's Spotlight has quite a few new features. Even if the Tiger index files are stored on the external drive, will it need to re-index the drive to support the new feature set?
    Now for the long explanation:
    I have been preparing to upgrade my Powerbook to Leopard within the next month or two, and as such, I bought a new hard drive to dedicate one of my old drives to Leopard and Time Machine. Because I have three external hard drives, I needed to move about 500 GB of data between the three drives to make room for what was on the now-dedicated drive. Of course, this requires that Spotlight re-index the "new" drives.
    Today, I left all of the drives attached to my old G4, running Tiger, while it sits there indexing the three drives, totaling around a terabyte of external storage. Because Spotlight tends to hog all of the available CPU time, and the G4 should be done indexing the drives, it would save me a lot of time if I could copy the index files over from the G4 to my Powerbook to avoid indexing them all over again. That is, if the index files are not on the external drives already. If they are on the G4's boot drive, where do I find these files?
    Finally, I plan to reformat the internal hard drive of my Powerbook when I install Leopard - I make a habit of doing a fresh install for every major upgrade (e.g. 10.3, 10.4, 10.5, etc.). To estimate the amount of time needed to upgrade, it'd be nice to know if Leopard will need to re-index the files on the external drives - even if Tiger's Spotlight index is stored on the externals - for Leopard's new Spotlight features. Will it need to re-index? I would assume that until Leopard arrives, most people wouldn't know this, of course.
    Thanks a bunch,
    -Dan

    Each drive's Spotlight index is located on that drive.

  • CO table growth rate

    Hi,
    We have recently gone live with SAP ECC for a retail scenario. Our database is growing 3 GB per day, which includes both data and index growth.
    Modules configured:
    SD (Retail), MM, HR and FI/CO.
    COPA is configured for reporting purposes, to find article-wise sales details per day, and COPA summarization has not been done.
    Total sales orders created per day on average: 4000
    Total sales order line items per day on average: 25000
    Total purchase orders created per day on average: 1000
    Please suggest whether database growth of 3 GB per day is normal for our scenario, or whether we should do something to restrict the growth.
    The fastest-growing tables are:
    CE11000     Operating Concern fo
    CE31000     Operating Concern fo
    ACCTIT     Compressed Data from FI/CO Document
    BSIS     Accounting: Secondary Index for G/L Accounts
    GLPCA     EC-PCA: Actual Line Items
    FAGLFLEXA      General Ledger: Actual Line Items
    VBFA     Sales Document Flow
    RFBLG     Cluster for accounting document
    FAGL_SPLINFO     Splittling Information of Open Items
    S120     Sales as per receipts
    MSEG     Document Segment: Article
    VBRP     Billing Document: Item Data
    ACCTCR     Compressed Data from FI/CO Document - Currencies
    CE41000_ACCT     Operating Concern fo
    S033     Statistics: Movements for Current Stock (Individual Records)
    EDIDS     Status Record (IDoc)
    CKMI1     Index for Accounting Documents for Article
    LIPS     SD document: Delivery: Item data
    VBOX     SD Document: Billing Document: Rebate Index
    VBPA     Sales Document: Partner
    BSAS     Accounting: Secondary Index for G/L Accounts (Cleared Items)
    BKPF     Accounting Document Header
    FAGL_SPLINFO_VAL     Splitting Information of Open Item Values
    VBAP     Sales Document: Item Data
    KOCLU     Cluster for conditions in purchasing and sales
    COEP     CO Object: Line Items (by Period)
    S003     SIS: SalesOrg/DistCh/Division/District/Customer/Product
    S124     Customer / article
    SRRELROLES     Object Relationship Service: Roles
    S001     SIS: Customer Statistics
    Is there any way we can reduce the data growth without affecting the functionality configured?
    Will COPA summarization configuration help reduce the growth of the FI/CO tables?
    Regards,
    Nalla.

    user480060 wrote:
    Dear all,
    Oracle 9.2 on AIX 5.3
    In one of our databases, one table has a very fast growth rate.
    How can I check whether the table growth is normal or not?
    Please advise
    The question is: what is a "very fast growth rate"?
    What is the DDL of the table, and what data types does it use?
    One potential issue could be the way the table is populated: if you constantly insert into the table using direct-path inserts (APPEND hint) and subsequently delete rows, then your table will grow faster than required, because the deleted rows won't be reused by the direct-path insert - it always writes above the current high-water mark of your table.
    Maybe you want to check your application for such a case if you think that the table grows faster than the actual amount of data it contains.
    You could use the ANALYZE command to get information about empty blocks and average free space in the blocks, or use the procedures provided by the DBMS_SPACE package to find out more about the current usage of your segment.
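    As a small demonstration of the effect described above (the table is hypothetical; row counts are arbitrary):
    CREATE TABLE t AS SELECT rownum AS x FROM all_objects WHERE rownum <= 10000;
    DELETE FROM t;
    COMMIT;  -- the freed blocks stay below the high-water mark
    INSERT /*+ APPEND */ INTO t SELECT rownum FROM all_objects WHERE rownum <= 10000;
    COMMIT;  -- the direct-path insert writes above the HWM, so the table grows
             -- even though the freed space would have been sufficient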
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • PARCONV phase Index could not be created ERROR

    Upgrading Solution Manager 3.2 to 7.0, Windows NT, SQL Server 2000
    We are in the PARCONV phase now and getting errors regarding tables with missing indexes.
    Example from PARCONV.ELG file:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    CONVERSION ERRORS and RETURN CODE in NCONV00.TS3
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    2EEGT236 The SQL statement was not executed
    2EEDI006 Index " " could not be created completely in the database
      Long text:
        Cause
          It was not possible to create the index in the database.
          This could be caused by the fact that an index with the same name
          exists in the database, but it is unknown to the ABAP/4 Dictionary.
          Activating the index in the ABAP/4 Dictionary is possible, but it is
          not possible to create it in the database.
        System Response
        What to do
          For more information about the cause of the error, analyze the SQL
          error messages in this log.
    2EEGT221 Creation of secondary indexes for table "BBP_PDHGP" failed
      Long text:
        Cause
          There might be a SQL error.
        System Response
        What to do
          Check the SQL codes for errors.
    2EEGT239 Error in step "BBP_PDHGP-STEP6"
    2EEGT094 Conversion could not be restarted
      Long text:
        Cause
          The conversion could not be restarted, i.e. it was not possible to
          continue the conversion at the point where it was interrupted.
        System Response
        What to do
          Information about the cause of the error can be found in the restart
          log.
    2EEGT236 The SQL statement was not executed
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    I have gone into SE14; the index name exists, but the index doesn't exist on the database. I've tried using Create to create the index on the database, but that fails saying the name may already exist.
    How do I clear these errors?
    Thanks
    Laurie  McGinley

    It appears from all the ABAP Dictionary: Restart Log lists that the same error is occurring when creating the secondary indexes...
    Database error 170 at EXE
    Line 1: incorrect syntax near '('
    The same error occurs if I use SE14 to "Continue Adjustment".
    Note 1180553 - Syntax error 170 during index creation on SQL 2000 and
    Note 1346662 - Avoid index creation errors in 7.0 EHP and higher upgrades.
    Note 1365755 - Scripts to fix fillfactor issues on SQLServer 2000
    These seem to be the key.... but...
    I ran the scripts in 1180553 ... no change in the errors. In fact, there isn't a dbo.[TATAF~] table, and the dbo.[TATAF] table doesn't have the 'WITH FILLFACTOR' entry. So I wonder if I'm at a different phase?
    I'm looking at 1365755 and think I'd like to give that one a try, but it says to unpack the ZIP file in the <upgr dir>\abap\bin directory. Well, there isn't one. I have the upgrade dir \usr\sap\put and its subdirectories, but no \abap\bin dir.
    So I'm not sure where to unpack this to give it a try.
    Perhaps it would be better to go back to the beginning of the UPGRADE (after PREPARE) and start again?
    I didn't upgrade to SQL Server 2005 because the version of SolMan we are on isn't certified on it, so I thought I'd get through the SolMan upgrade and then do SQL Server 2005... maybe I need to rethink this too?
    I'm open to your suggestions, ideas, answers!
    Thanks
    Laurie McGinley
    Edited by: Laurie McGinley on Sep 28, 2009 12:23 PM

  • Case Insensitive Indexes

    In relation to switching on case insensitive queries using
    alter session set NLS_COMP=LINGUISTIC;
    Can anyone answer the following?
    Yes, it works... but I can't for the life of me figure out how to build a linguistic index that the LIKE clause will actually use. Building an index thus, for example:
    create index bin_ai on names(NLSSORT("NAME",'nls_sort=''BINARY_AI'''));
    makes an index which does get used to good effect by queries such as
    select name from names where name = 'Johny Jacobson';
    but not by
    select name from names where name like 'Johny%';
    Hence, in a real-world test with 100,000 records, the LIKE query runs about 100 times slower than the '=' query (3 sec compared to 0.03 sec). Not very scalable. Is there a way to speed this up?
    Also, is it possible to set session variables such as nls_comp at a database/schema/user level?

    Hi,
    select name from names where name like 'Johny%';
    Performance when using the SQL "like" clause can be tricky because the wildcard "%" operator can invalidate the index. For example, a last_name index would be OK with "like 'SMI%'", but unusable with "like '%SMI%'".
    One obscure trick for indexing queries of the form "like '%SON'" is to create a REVERSE index and then programmatically reverse the like clause to read "like 'NOS%'", effectively indexing on the other side of the text.
    You might want to look at Oracle*Text indexes, if your database has low DML:
    http://www.dba-oracle.com/oracle_tips_like_sql_index.htm
    If you are on 10gR2:
    Oracle 10g release 2 has introduced a case-insensitive search method for SQL that avoids index invalidation and unnecessary full-table scans. You can also employ Oracle Text indexes to remove full-table scans when using the LIKE operator. Prior to Oracle 10g release 2, case-insensitive queries required special planning:
    1 - Transform the data in the query to make it case insensitive (note that this can invalidate indexes unless you use a function-based index):
    create index upper_full_name on customer ( upper(full_name));
    select full_name from customer
    where upper(full_name) = 'DON BURLESON';
    2 - Use a trigger to transform the data to make it case insensitive (or store the data with the to_lower or to_upper BIF).
    3 - Use Alter session commands:
    alter session set NLS_COMP=ANSI;
    alter session set NLS_SORT=GENERIC_BASELETTER;
    select * from customer where full_name = 'Don Burleson'
    Hope this helps. . .
    Don Burleson
    Oracle Press author
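    One commonly used alternative, not spelled out in the replies above, is a function-based index, which supports case-insensitive prefix searches with LIKE:
    create index names_upper_ix on names (upper(name));
    select name from names where upper(name) like 'JOHNY%';  -- eligible for an index range scan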

  • Bad INSERT performance when using GUIDs for indexes

    Hi,
    we use an Oracle 9.2.0.6 DB on Win XP Pro. The application (.NET v1.1) is using ODP.NET. All PKs of the tables are GUIDs, represented in Oracle as RAW(16) columns.
    When testing with mass data, we see more and more a problem with bad INSERT performance on some tables that contain many rows (~10M). Those tables have a RAW(16) PK and an additional non-unique index that is also on a RAW(16) column (both are standard B*tree). A PerfStat report shows that there is much activity on the index tablespace.
    When I analyze the related table and its indexes, I see a very, very high clustering factor.
    Is there a way to improve the insert performance in this case? Use another type of index? Generally avoid indexed RAW columns?
    Please help.
    Daniel
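    For illustration only (both tables are hypothetical): random GUID keys scatter inserts across the whole primary key index, which is one reason the clustering factor ends up high, while a sequence-based key always inserts at the right-hand edge of its index:
    CREATE TABLE t_guid (id RAW(16) DEFAULT SYS_GUID() PRIMARY KEY, payload VARCHAR2(40));
    INSERT INTO t_guid (payload) VALUES ('row');  -- lands on a "random" leaf block
    CREATE SEQUENCE t_seq;
    CREATE TABLE t_num (id NUMBER PRIMARY KEY, payload VARCHAR2(40));
    INSERT INTO t_num (id, payload) VALUES (t_seq.NEXTVAL, 'row');  -- right-edge insert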

    Hi
    After my last tests, I conclude the following:
    The query returns 1-30 records
    Test 1: Using Form Builder
    -     Execution time 7-8 seconds
    Test 2: Using Jdeveloper/Toplink/EJB 3.0/ADF and Oracle AS 10.1.3.0
    -     Execution time 25-27 seconds
    Test 3: Using JDBC/ADF and Oracle AS 10.1.3.0
    - Execution time 17-18 seconds
    When I use:
    session.setLogLevel(SessionLog.FINE) and
    session.setProfiler(new PerformanceProfiler())
    I don’t see any improvement in the execution time of the query.
    Thank you
    Thanos

  • Creating Index to a master table

    Hello All,
    I have a database table with the following fields
    CLIENT 
    MATNR
    KORDX
    KCATV
    VARIANT_NBR
    With the first four fields making up the primary key (CLIENT, MATNR, KORDX, KCATV).
    I would like to add an index to the table on the fields (CLIENT, VARIANT_NBR), because there are many reads in the program on the field VARIANT_NBR. Could you please share your views on this? What factors should I consider, from a performance perspective, when creating an index?
    Thanks in advance
    Sudha
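    At the database level, the proposed secondary index would correspond to something like this sketch (the table name is hypothetical; in practice you would define the index in SE11 and let the system generate the DDL):
    CREATE INDEX zmaster_var_ix ON zmaster (client, variant_nbr);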

    Hi,
    Regarding index information, check this link...
    http://help.sap.com/saphelp_nw04/helpdata/en/cc/7c58b369022e46b629bdd93d705c8c/content.htm
    and
    http://www.ncsu.edu/it/mirror/mysql/doc/maxdb/en/6a/c943401a306f13e10000000a1550b0/content.htm
    And also go through the below information...
    They may help you in optimizing your program.
    The Optimizer
    Each database system uses an optimizer whose task is to create the execution plan for SQL statements (for example, to determine whether to use an index or table scan). There are two kinds of optimizers:
    1) Rule based
    Rule based optimizers analyze the structure of an SQL statement (mainly the SELECT and WHERE clauses without their values) and the table index or indexes. They then use an algorithm to work out which method to use to execute the statement.
    2) Cost based
    Cost based optimizers use the above procedure, but also analyze some of the values in the WHERE clause and the table statistics. The statistics contain low and high values of the fields, or a histogram containing the distribution of data in the table. Since the cost based optimizer uses more information about the table, it usually leads to faster database access. Its disadvantage is that the statistics have to be periodically updated.
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE. If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set. You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not.
    However, you should not create secondary indexes to cover all possible combinations of fields. Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table. If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column's selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
    Use Positive conditions
    The database system only supports queries that describe the result in positive terms, for example, EQ or LIKE. It does not support negative expressions like NE or NOT LIKE. If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception to this are OR expressions at the outside of conditions. You should try to reformulate conditions that apply OR expressions to columns relevant to the index, for example, into an IN condition.
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system.
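    For instance, the OR-to-IN rewrite mentioned above looks like this (table and column names are hypothetical):
    SELECT * FROM t WHERE status = 'A' OR status = 'B';   -- OR may stop the index search
    SELECT * FROM t WHERE status IN ('A', 'B');           -- IN keeps the column usable for the index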
    Hope this information has helped you.
    Regards
    Narin Nandivada.

  • Audit growth

    I am working with Oracle 11.2 on HP-UX. My audit parameters are showing FALSE... but there is enormous growth of .aud files in the audit folder...
    Audit parameter
    NAME VALUE
    audit_file_dest /oracle11g/app/oracle/product/11.2/rdbms/audit
    audit_syslog_level
    audit_sys_operations FALSE
    audit_trail NONE
    How can I avoid the growth of the audit files?
    Rgds....

    Dear user537350,
    Are you talking about the audit files that are being generated in the adump directory? (*.aud files)
    Here is an example of it;
    vals1:/oracle/product/10.2.0/db_1/admin/optprod/adump#cat ora_18725.aud
    Audit file /oracle/product/10.2.0/db_1/admin/optprod/adump/ora_18725.aud
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORACLE_HOME = /oracle/product/10.2.0/db_1
    System name:    HP-UX
    Node name:      vals1
    Release:        B.11.31
    Version:        U
    Machine:        ia64
    Instance name: optprod
    Redo thread mounted by this instance: 1
    Oracle process number: 132
    Unix process pid: 18725, image: oracle@vals1
    Wed Dec 15 14:51:52 2010
    LENGTH : '135'
    ACTION :[7] 'CONNECT'
    DATABASE USER:[3] 'SYS'
    PRIVILEGE :[6] 'SYSDBA'
    CLIENT USER:[0] ''
    CLIENT TERMINAL:[7] 'unknown'
    STATUS:[1] '0'
    As you can see, it stores the SYSDBA connection/login information. So it stores that information even though I have the same parameter values on my database as you have on yours;
    SQL> show parameter audit
    NAME                                 TYPE        VALUE
    audit_file_dest                      string      /oracle/product/10.2.0/db_1/ad
                                                     min/optprod/adump
    audit_sys_operations                 boolean     FALSE
    audit_syslog_level                   string
    audit_trail                          string      DB_EXTENDED
    There is no connection here between the DB_EXTENDED or NONE value of audit_trail and the .aud files. As far as I know, there is simply no way to disable the creation of those files, just as you cannot disable the writing of the alert.log file.
    Hope That Helps.
    Ogan
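    On 11.2 specifically, one option worth checking (not mentioned in the thread) is the DBMS_AUDIT_MGMT package, which can purge OS audit files:
    BEGIN
      -- one-time initialization for the OS (*.aud) audit trail
      DBMS_AUDIT_MGMT.INIT_CLEANUP(
        audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
        default_cleanup_interval => 24);
      -- purge the OS audit files written so far
      DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
        audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
        use_last_arch_timestamp => FALSE);
    END;
    /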

  • Steps for creating a database index

    Do we just create it from SE11? Does Basis need to be involved for any further steps?

    Hi Amrutha,
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE. If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set. You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not.
    However, you should not create secondary indexes to cover all possible combinations of fields. Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table. If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column's selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    Index:
    http://help.sap.com/saphelp_nw04/helpdata/en/cf/21eb20446011d189700000e8322d00/content.htm
    Creating Secondary Index
    http://help.sap.com/saphelp_nw04/helpdata/en/cf/21eb47446011d189700000e8322d00/content.htm
    regards,
    keerthi.

  • How to deal with xcode indexing?

    I upgraded to Xcode 4 for my C++ project, but I can no longer use my MacBook Pro since the new Xcode always uses all the CPU for indexing!
    In my project I include Boost quite often. Is there a way to avoid indexing libraries? Or to index Boost only once?
    Thanks

    Hi
    There is no possibility of making your process-enabled org EAM-enabled; you will have to create a discrete org which you can then make EAM-enabled.
    In R12/11i, process-enabled items/inventory items are dealt with only by process manufacturing, and that is the reason we cannot maintain an asset item in a process-enabled org, which is a constraint. Hence it is advised that you create a discrete inventory org and enable it for EAM.
    Hope this is clear.
    Regards
    Raj
    Sierra Atlantic

  • Sharing TREX indexes between portal instances

    Hi people,
    We created several TREX search indexes in our development portal and we need to reuse them in our productive environment.
    As I read in "Migrating Content, changing namespace", I understand that the indexes cannot be transported between portal instances.
    My question is: Is there any way to access existing TREX indexes in new portal instances ?
    Thanks

    Hi Daniel,
    of course you can use the same TREX server from two different portals.
    I assume, though, that the issue will be that the resource IDs, and subsequently the access URLs, will not be the same on the two portals. Or that, for other reasons, you will not be able to ensure absolute synchronicity of KM content in the two portals.
    => You cannot use the same index for content that is not exactly the same...
    Regards, Karsten
    PS: In this rough context, please also have a look at the linked thread and at the text below, concerning TREX use by totally different apps.
    TRex index_service
    When running several solutions against one TREX, currently you, in the project, have to ensure the following:
    - availability of a TREX Server release that is compatible to all the respective solution releases (check the PPMS or PAM entries of DMS, EP, SRM, etc.)
    - adequate sizing (see above link to distributed systems)
    - maintenance of the proper TREX address in all solutions
    AND MOST IMPORTANTLY:
    Although TREX does offer index namespaces, most solutions do not address that yet and create indexes in one common directory on the TREX Server.
    Thus, if more than one of the solutions creates technical index names on the TREX Server that are likely to reoccur (e.g. "Test" is created on the TREX Server with that directory name, rather than with some techID as directory name), you have to educate the future administrators/power users of the project to create only indexes that follow a naming convention, to avoid index name conflicts (e.g. start all index names with a solution prefix: "DMS_...", "xRPM_...").
    Note:
    Using one TREX instance instead of several only provides "IT-side integration". It does not produce an information integration. Each solution will only search in its own indexes.
    An enterprise search over KM, ERP, BI and more will be offered with the SAP NetWeaver release after 2004s.

  • Reduce the disk space

    Hi
    Oracle DB Local Disk = 54% used (18.8GB Free)
    Oracle EBS Local Disk = 97% used (2.3GB Free)
    So please tell me how to reduce the disk space.
    Plat form and version: HP-UX Itanium 11.31
    Application version: 12.0.4
    Database version: 10g
    Regards
    RK
    Edited by: 806171 on May 4, 2011 7:34 AM

    Hi,
    There is no specific way to reduce space; you need to understand the data in your system and purge unnecessary data from the db and unnecessary logs from the filesystem.
    Also, checking fragmentation in the database and then removing unnecessary data will help you gain some space:-
    Reclaiming unused space in APPLSYSD tablespace [ID 303709.1]
    Note: 1019709.6 Script to Report Tablespace Free and Fragmentation
    Note: 298698.1 Avoiding abnormal growth of FND_LOBS table
    Note: 189800.1 FND Related Tablespaces Growing at Rapid and Excessive Rate
    Note: 77635.1 How to Determine Real Space used by a Table
    Note: 115586.1 How to Deallocate Unused Space from a Table, Index
    Note: 224027.1 Objects Created When Creating a Queue table
    Note: 304522.1 How to Move Queue Tables without using export import
    Note: 130814.1 How to move LOB Data to Another Tablespace
    Note: 1029252.6 How to Resize a Datafile (to reduce the size of a datafile)
    Note: 130866.1 How to Resolve ORA-03297 When Resizing a Datafile by Finding the Table Highwatermark
    Note: 1019474.6 Script: To Create Tablespace Block Map
    Note: 230627.1 9i Export/Import Process for Oracle Applications Release 11i
    Note: 269291.1 Oracle Applications Tablespace Migration Utility User Documentation
    Please check below MOS for db:-
    Note:298698.1 - Avoiding abnormal growth of FND_LOBS table in Applications 11i:
    Note:375630.1 - FND_ENV_CONTEXT Table Growing Large
    Note:178009.1 - APPLSYSD and APPLSYSX Tablespaces Running Out Of Space:
    Note:298550.1 - Troubleshooting Workflow Data Growth Issues:
    Note:248857.1 - Oracle Applications Tablespace Model (OATM) Whitepaper.
    Note:269293.1 - OATM FAQ
    Below notes for purging:-
    Purging Strategy for eBusiness Suite 11i [ID 732713.1]
    A Detailed Approach To Purging Oracle Workflow Runtime Data [ID 144806.1]
    Reducing Your Oracle E-Business Suite Data Footprint using Archiving, Purging, and Information Lifecycle Management [ID 752322.1]
    Also you can find some temp files on server that can be deleted.
    Thanks,
    JD
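    For the datafile resize mentioned in Note 1029252.6, the statement takes this form (the path and size here are hypothetical, and a file can only be shrunk down to its high-water mark):
    ALTER DATABASE DATAFILE '/oracle/oradata/PROD/applsysd01.dbf' RESIZE 2000M;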

Maybe you are looking for

  • Internet connection suddenly slow for iMac but OK for iPad and iPod
  • Cannot select HTML message format in Active Sync - Treo 750
  • Files without extensions read as UNIX exe files
  • Tracker doesn't work in deskbar-applet
  • Marketing execution over the top of ORDM ?
    We are implementing ORDM here for its MIS/BI capabilities and have noted that most of the Data Mining and Analytics that the Marketing team use in their own tool (Alterian) exist within BI (segment analysis / basket analysis / promotion analysis etc)