Degenerated Index

Hi,
I have a problem: when I run RSRV it gives a yellow warning
"ORACLE: Index /BIC/SNKDOCSCL1~0 has possibly degenerated".
I ran "Correct Error", but I still get the same warning.
I also ran the "Entries Not Used in the Dimension of an InfoCube" test on each individual dimension and corrected it; that test shows green.
But when I run "Database indexes of an InfoCube and its aggregates" again, it gives the same warning.
Can anybody help me, please?
Thanks

Hi Rob,
Try deleting the indexes of the InfoCube and its aggregates (Manage --> Performance tab).
Then create the indexes again.
Let me know if it helps.
Derya

Similar Messages

  • Performance Problems - Index and Statistics

    Dear Gurus,
    I am having problems with losing indexes and statistics on cubes. It seems my indexes are "too old", although in fact they are not old at all (they were created just a month back). We check the indexes daily, and the Manage tab shows them as RED.
    Please help.

    Dear Mr Syed,
    The solution steps I mentioned in my previous reply already describe the so-called re-org of tables; however, to clarify the issue further:
    Occasionally the Oracle Cost-Based Optimizer calculates a lower estimated cost for a full table scan than for an index scan, even though the actual runtime of an access via the index would be considerably lower than that of the full table scan. A few points should be considered in order to improve performance in problem areas such as long runtimes for change runs and aggregate activation/fill-up.
    Performance problems based on such a wrong optimizer decision indicate that something serious is missing at the database level; the degenerated indexes need to be reorganized in order to improve overall performance and to avoid daily manual activities (RSRV + RSANAORA) on the same indexes.
    Three options are available for reorganizing degenerated indexes:
    1) DROP INDEX ..., and CREATE INDEX ...
    2) ALTER INDEX <index name> REBUILD (ONLINE PARALLEL x NOLOGGING)
    3) ALTER INDEX <index name> COALESCE [as of Oracle 8i (8.1) only]
    Each option has its pros and cons; option 2 has the most advantages (a minimal example follows the list below).
    Advantages of option 2:
    1) Fast storage in a different tablespace is possible
    2) Creates a new index tree
    3) Gives the option to change storage parameters without deleting the index
    4) As of Oracle 8i (8.1), you can avoid a lock on the table by specifying the ONLINE option. In this case, Oracle waits until the resource has been released and then starts the rebuild; the "resource busy" error no longer occurs.
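    For illustration only (this sketch is not from the original reply): option 2 issued from a small JDBC program. The index name, connection URL and credentials are placeholders, the program assumes you are connected as the index owner, and whether ONLINE/PARALLEL/NOLOGGING is appropriate depends on your Oracle release and maintenance window.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    public class RebuildIndex {
        public static void main(String[] args) throws Exception {
            // Placeholder connection data -- adjust host, SID, user and password.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:BWP", "sapuser", "secret");
                 Statement stmt = con.createStatement()) {
                // Option 2 from the list above: rebuild the degenerated index in place.
                // ONLINE avoids the table lock (Oracle 8i and later); the quoted name
                // is only an example of a BW-generated index name.
                stmt.execute("ALTER INDEX \"/BIC/SNKDOCSCL1~0\" REBUILD ONLINE NOLOGGING");
            }
        }
    }
    The same statement can of course be run directly in SQL*Plus or BRSPACE; the JDBC wrapper is only for illustration.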
    I would still leave it to the database team to judge and take the final call on these.
    This procedure could be established for all affected cubes and their indexes as well.
    However, I leave the thought with you.
    Hope it Helps
    Chetan
    @CP..

  • Index size greater than table size

    Hi all,
    We are running BI7.0 in our environment.
    The index size of one of the tables is much greater than the table itself. The details are listed below:
    Table Name: RSBERRORLOG
    Total Table Size: 141,795,392  KB
    Total Index Size: 299,300,576 KB
    Index:
    F5: Index Size / Allocated Size: 50%
    Is there any reason why the index should grow larger than the table? If so, would reorganizing the index help, and can this be controlled?
    Please let me know, as I am not very familiar with the database side.
    Thanks and Regards,
    Raghavan

    Hi Hari,
    This is basically a degenerated index. You can follow the steps below:
    1. Delete some entries from RSBERRORLOG (see also the thread "BI database growing at 1 GB per day while no data update on ECC").
    2. Re-organize this table with BRSPACE; afterwards the table will be much smaller. I do not remember whether this table has a LONG RAW field (in that case an export/import of the table would be required) -- this is a Basis job.
    3. Delete and recreate the index on this table.
    You will gain a lot of space.
    I assumed you are on Oracle.
    More information on reorganization can be found in the thread "TABLE SPACE REORGANIZATION !! QUICK EXPERT INPUTS".
    Regards
    Anindya
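    Before reorganizing, it can help to measure how much of the index actually consists of deleted entries. The following is an illustration only (not from the thread): the connection data and the index name RSBERRORLOG~0 are placeholders, and ANALYZE ... VALIDATE STRUCTURE locks the table, so it should be run in a quiet window.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    public class CheckIndexBloat {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:BWP", "sapuser", "secret");
                 Statement stmt = con.createStatement()) {
                // Populate INDEX_STATS for one index (placeholder name, quoted
                // because SAP index names contain special characters).
                stmt.execute("ANALYZE INDEX \"RSBERRORLOG~0\" VALIDATE STRUCTURE");
                // INDEX_STATS holds exactly one row: the index just analyzed.
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT name, lf_rows, del_lf_rows, "
                      + "ROUND(100 * del_lf_rows / GREATEST(lf_rows, 1), 1) AS pct_deleted "
                      + "FROM index_stats")) {
                    while (rs.next()) {
                        System.out.printf("%s: %d leaf rows, %d deleted (%.1f%%)%n",
                                rs.getString(1), rs.getLong(2), rs.getLong(3), rs.getDouble(4));
                    }
                }
            }
        }
    }
    A high share of deleted leaf rows is the classic sign of a degenerated index; the SAP Notes referenced elsewhere on this page (771929, 332677) give the authoritative criteria for when a rebuild is worthwhile.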

  • Deadlock error while updating data into a cube

    We have a scenario of a daily truncate and upload of data into a cube, with volumes of about 2 million records per day. We use the parallel processing setting (PSA and data targets in parallel) in the InfoPackage to speed up the data load. The entire process runs through a process chain.
    We are facing a deadlock issue every day. How can we avoid this?
    In general, deadlocks occur because of degenerated indexes when the volumes are very high. So my question is: does deleting the cube indexes every day, along with the "delete data target contents" process step, help to avoid the deadlock?
    We also observed that updating values into one InfoObject takes a long time, approximately 3 minutes per data packet. That InfoObject is placed in a dimension defined as a line-item dimension, because the volumes for that specific object are very high.
    So that is the overall scenario.
    Two questions:
    1) Will deletion and recreation of the indexes help to avoid the deadlock?
    2) Any idea why the insertion into the InfoObject is taking so long (the SQL trace shows a direct read on the SID table of that object)?
    Regards.

    Hello,
    1) Will deletion and recreation of the indexes help to avoid the deadlock?
    Ans:
    Yes. To avoid this problem, drop the indexes of the cube before uploading the data, and rebuild them after the load (a generic sketch of this pattern follows below).
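    A generic, hedged sketch of the drop / load / recreate pattern in plain JDBC. Table, index, column and connection names are placeholders; in BW the same thing is normally done with the "Delete index" and "Create index" process chain steps rather than hand-written code.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;
    public class DropLoadRecreate {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:BWP", "sapuser", "secret")) {
                try (Statement ddl = con.createStatement()) {
                    // 1. Drop the secondary index before the mass load (placeholder name).
                    ddl.execute("DROP INDEX \"/BIC/FMYCUBE~010\"");
                }
                // 2. Mass insert without maintaining the index row by row.
                try (PreparedStatement ins = con.prepareStatement(
                        "INSERT INTO \"/BIC/FMYCUBE\" (KEY_FIELD, AMOUNT) VALUES (?, ?)")) {
                    for (int i = 0; i < 100000; i++) {
                        ins.setInt(1, i);
                        ins.setDouble(2, i * 1.5);
                        ins.addBatch();
                        if (i % 10000 == 9999) ins.executeBatch();
                    }
                    ins.executeBatch();
                }
                // 3. Recreate the index once, after the load.
                try (Statement ddl = con.createStatement()) {
                    ddl.execute("CREATE INDEX \"/BIC/FMYCUBE~010\" ON \"/BIC/FMYCUBE\" (KEY_FIELD)");
                }
            }
        }
    }
    Because the index is built once at the end instead of being updated for every inserted row, the load avoids both the index-maintenance overhead and the index-block contention that can contribute to deadlocks.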
    Also:
    In SM12, find out which process is holding the lock and delete that lock entry.
    In SM66, find out which process has been running for a very long time and stop it.
    Check in SM50 the number of work processes available in the system; if they are not adequate, you will have to increase them with the help of the Basis team.
    2) Any idea why the insertion into the InfoObject is taking so long (the SQL trace shows a direct read on the SID table of that object)?
    Ans:
    A line-item dimension is one of the ways to improve data load as well as query performance, because it eliminates the need for a dimension table. So while loading and reading there is one table less to deal with.
    Check in the transformation whether any routine or formula is written in the mapping of that characteristic; if so, this can lead to more processing time for that InfoObject.
    Storing mass data in InfoCubes at document level is generally not recommended, because when data is loaded a huge SID table is created for the document-number line-item dimension.
    Check whether your InfoObject is similar to a document number.
    Regards,
    Dhanya

  • Performance when using bind variables

    I'm trying to show myself that bind variables improve performance (I believe it, I just want to see it).
    I've created a simple table of 100,000 records each row a single column of type integer. I populate it with a number between 1 and 100,000
    Now, with a JAVA program I delete 2,000 of the records by performing a loop and using the loop counter in my where predicate.
    My first Java program runs without using bind variables, as follows:
    for (int i = 1; i <= 2000; i++) {
        stmt.executeUpdate("delete from nobind_test where id = " + i);
    }
    My second Java program uses bind variables, as follows:
    pstmt = conn.prepareStatement("delete from bind_test where id = ?");
    for (int i = 1; i <= 2000; i++) {
        pstmt.setString(1, String.valueOf(i));
        pstmt.executeUpdate();
    }
    Monitoring of v$SQL shows that program one doesn't use bind variables, and program two does use bind variables.
    The trouble is that the program that does not use bind variables runs faster than the bind variable program.
    Can anyone tell me why this would be? Is my test too simple?
    Thanks.

    [email protected] wrote:
    I'm trying to show myself that bind variables improve performance (I believe it, I just want to see it).
    I've created a simple table of 100,000 records each row a single column of type integer. I populate it with a number between 1 and 100,000
    Now, with a JAVA program I delete 2,000 of the records by performing a loop and using the loop counter in my where predicate.
    Monitoring of v$SQL shows that program one doesn't use bind variables, and program two does use bind variables.
    The trouble is that the program that does not use bind variables runs faster than the bind variable program.
    Can anyone tell me why this would be? Is my test too simple?
    The point is that you have to find out where your test is spending most of the time.
    If you've just populated a table with 100,000 records and then start to delete randomly 2,000 of them, the database has to perform a full table scan for each of the records to be deleted.
    So probably most of the time is spent scanning the table over and over again, although most of the blocks might already be in your database buffer cache.
    The difference between the hard parse and the soft parse of such a simple statement might be negligible compared to the effort it takes to fulfil each delete execution.
    You might want to change the setup of your test: add a primary key constraint to your test table and delete the rows using this primary key as the predicate. Then the time it takes to locate the row to delete should be negligible compared to the hard parse / soft parse difference.
    You probably also need to increase your iteration count, because deleting only 2,000 records this way takes too little time and introduces measurement issues. Try to delete more rows; then you should be able to spot a significant and consistent difference between the two approaches.
    In order to prevent any performance issues from a potentially degenerated index due to the numerous DML activities, you could also just change your test case to query for a particular column of the row matching your predicate rather than deleting it (see the sketch below).
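    A minimal sketch of such a revised test (an illustration, not Randolf's code), assuming a table BIND_TEST with a primary key column ID filled with the values 1 to 100,000 and an already opened Connection:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;
    public class BindTest {
        // Compares literal SQL against a bind variable for cheap primary-key lookups.
        static void compare(Connection conn) throws Exception {
            long start = System.nanoTime();
            // Variant 1: literal SQL -> a hard parse for every distinct statement text.
            try (Statement stmt = conn.createStatement()) {
                for (int i = 1; i <= 100000; i++) {
                    try (ResultSet rs = stmt.executeQuery(
                            "select id from bind_test where id = " + i)) {
                        rs.next();
                    }
                }
            }
            long literalMs = (System.nanoTime() - start) / 1000000;
            // Variant 2: bind variable -> parsed once, then only re-executed.
            start = System.nanoTime();
            try (PreparedStatement pstmt = conn.prepareStatement(
                    "select id from bind_test where id = ?")) {
                for (int i = 1; i <= 100000; i++) {
                    pstmt.setInt(1, i);
                    try (ResultSet rs = pstmt.executeQuery()) {
                        rs.next();
                    }
                }
            }
            long bindMs = (System.nanoTime() - start) / 1000000;
            System.out.println("literals: " + literalMs + " ms, binds: " + bindMs + " ms");
        }
    }
    With a unique index on ID each lookup is a cheap index access, so the remaining difference between the two loops is essentially the parse overhead that the bind-variable version avoids.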
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • VT01N : SAP getting slower

    Hi to all,
    I am facing a strange problem. User XYZ creates a shipment via VT01N and selects 10 outbound deliveries out of 20, which means the remaining 10 should be unlocked for other shipment processing.
    Once user XYZ saves the shipment containing the 10 deliveries, another user ABC starts processing a new shipment for the remaining 10 deliveries, but for some reason the system issues message VW 846, "The delivery & is currently being processed by user XYZ", even though user XYZ has already saved the shipment and left VT01N/VT02N.
    It takes 10 to 20 minutes before user ABC can process the new shipment for the remaining 10 deliveries.
    I made the changes described in SAP Notes 495564 and 494674, but the result is the same: the system still takes 10 to 20 minutes before the new shipment can be processed.
    If you have a solution for this, please let me know.
    Nitin

    As you know, there can be many reasons for bad performance (e.g. DB and R/3 parameterization, patches, OS settings, memory, CPU, table statistics, indexes, bad programming, network, Oracle bugs, etc.).
    Did you have a look at the execution plan of the slow statement? How big is this table? Is a full table scan or an index scan chosen? How good is the index, i.e. what is its storage quality? Do you think the used index is the right index for this statement? And many other questions.
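    As an aside (an illustration, not part of the original reply): one quick way to look at the execution plan of a single statement is EXPLAIN PLAN plus DBMS_XPLAN, for example from a small JDBC program. The statement, table name and connection data below are placeholders; a PLAN_TABLE is available by default as of Oracle 10g.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    public class ShowPlan {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:PRD", "sapuser", "secret");
                 Statement stmt = con.createStatement()) {
                // Explain the suspected slow statement (placeholder SQL with a literal).
                stmt.execute("EXPLAIN PLAN FOR "
                        + "SELECT * FROM VTTK WHERE TKNUM = '0000012345'");
                // Print the plan that was just generated.
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY())")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }
    From inside the SAP system, transaction ST05 (SQL trace with "Explain") shows the same execution plan without any external tooling.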
    If you have "bad" values for the storage quality of the used index, you can rebuild the index, and you should collect statistics again after rebuilding. Please read the following SAP Notes very carefully to check the storage quality of the index used in the statement:
    771929 - FAQ: Index fragmentation
    332677 - Reorganizing degenerated indexes
    Which Oracle database version do you have?
    Did you implement the database parameters and patches according to the following SAP Notes?
    124361 - Oracle parameterization (R/3 >= 4.x, Oracle 8.x/9.x)
    830576 - Parameter recommendations for Oracle 10g
    871096 - Oracle Database 10g: Patch sets/patches for 10.2.0
    Best regards
    Baran

  • Urgent: ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated

    Hi,
    I ran the "Database Indexes of an InfoCube and Its Aggregates" test for one of my cubes.
    After running the test I get the following 7 warnings:
    ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
    ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
    ORACLE: Index /BI0/ICOORDER~0 has possibly degenerated
    ORACLE: Index /BI0/ICOSTCENTER~0 has possibly degenerated     
    ORACLE: Index /BI0/ICOSTCENTER00 has possibly degenerated     
    ORACLE: Index /BI0/IWBS_ELEMT~0 has possibly degenerated
    ORACLE: Index /BI0/ICOORDER~001 has possibly degenerated
    How can I repair the indexes for the above objects? Can this be a cause of poor query performance?
    Thank you,
    sam

    Check the thread "Index possibly Degenerated".
    We have this warning on several of our InfoCubes. If there is no noticeable performance issue, it is not a cause for concern.
    Also refer to OSS Notes 771929 and 332677.

  • Delete cube request - before or after index creation?

    Hi Folks,
    A quick question: for one cube I plan to delete requests that are older than 3 days (the cube only hosts short-term reporting). Now I wonder whether this should be done after the new load but before the index is created, or after the index is created.
    I guess it is after the index is created, otherwise it would take longer to find the requests that should be deleted. The index will be slightly degenerated due to the deletion, but that should only be marginal.
    Am I right or wrong?
    Thanks for all replies in advance,
    Axel

    hi,
    A quick question: for one cube I plan to delete requests that are older than 3 days (the cube only hosts short-term reporting). Now I wonder whether this should be done after the new load but before the index is created, or after the index is created.
    The deletion should be done before index creation: once the indexes are created, the index entries remain even after the corresponding data is deleted, and this unnecessarily increases the index size.
    regards,
    Arvind.

  • Index is possibly degenerated

    When I ran transaction RSRV to test the indexes for a cube, there was a warning saying that one InfoObject index has possibly degenerated.
    I tried to use the "Correct Errors" function in the toolbar, but it didn't work.
    Does anyone have a clue about this?
    Thanks,
    Glen

    Hi,
    The problems occur under at least one of the following conditions:
    (1) The queries or aggregates use hierarchies that are reloaded regularly.
    (2) Requests have been deleted from the InfoCube and new data has been loaded afterwards.
    (3) You carry out regular roll-ups without rebuilding the aggregate indexes in between.
    Solution:
    (a) DROP INDEX ..., and CREATE INDEX ... afterwards
    (b) ALTER INDEX <index name> REBUILD
    Check OSS Note 323090.
    -Vikram

  • Which to use - Sy-index or sy-tabix ??

    This is what I found out about SY-INDEX and SY-TABIX, but which one should I use when I want to delete a line from an internal table? I tried both sy-index and sy-tabix; both work and return the expected output for me, but which one is better to use?
    SY-TABIX :- For Internal Table, Current Line Index
    SY-INDEX :- For Loops, Current Loop Pass
    The code below is where I use the DELETE:
    LOOP AT dmg.
            CONCATENATE
                   dmg-dmg00
                   dmg-dmg01
                   dmg-dmg02
                   dmg-dmg03
                   dmg-dmg04
                   dmg-dmg07
                   dmg-dmg08
                   dmg-dmg09 INTO tli_down1 SEPARATED BY '*'.
            APPEND tli_down1. CLEAR tli_down1.
            DELETE dmg INDEX sy-index.
            EXIT.
          ENDLOOP.

    Right. Just as the others said above, sy-tabix is the best choice.
    One more thing: if you want to concatenate the fields of table dmg and append them to tli_down1 one by one, you should not use EXIT after DELETE dmg.
    With the EXIT in place, only one line can be appended to table tli_down1.

  • Indexes on cubes or aggregates on InfoObjects

    Hello,
    Please tell me whether it is possible to put indexes on cubes: are they added automatically, or is this something I have to create myself?
    I do not really understand indexes -- are they like aggregates?
    I need to find information that explains this.
    Thanks for the help.
    Newbie

    Indexes are quite different from aggregates.
    An aggregate is a slice of a cube that speeds up data retrieval when a query is executed on the cube. Basically it is a kind of snapshot of key figures and characteristics that can answer the initial query run directly.
    An index, in turn, is a database structure that reduces query response time. When an object is activated, the system automatically creates the primary indexes; optionally, you can create additional indexes, called secondary indexes. Before loading data it is advisable to delete the indexes and recreate them after the load.
    Indexes act like pointers for quickly getting to the data. The "delete index" step drops the indexes and the "create index" step rebuilds them.
    We delete them before loading because, during the load, the database would otherwise have to maintain the existing indexes for every inserted record, which hurts data load performance; dropping and recreating them afterwards takes less time than updating the existing ones row by row.
    One more thing to take care of: if you have more than 50 million records, doing this with every load is not good practice; instead, you can delete and recreate the indexes during the weekend when there are no users.

  • LIKE, LIKEC and Index usage

    I have a table that contains about 20 million rows, and I've created an index on a VARCHAR2(100) column. It works well if I do:
    SELECT * FROM MY_TABLE WHERE MY_COL LIKE 'FOO%';
    But if I change the query to use LIKEC (to make Unicode-escaped strings work):
    SELECT * FROM MY_TABLE WHERE MY_COL LIKEC 'FOO%';
    I always get a full table scan in the explain plan.
    I tried using NVARCHAR, and an index created with TO_NCHAR, but I always end up with a full table scan.
    Should I create the index in some special way, or do something else, to get the index to be used?

    Just a gut feeling: is the database using character semantics or byte semantics?
    My gut feeling, after looking up the documentation, is that it should be character semantics. A quick way to check the column definition is sketched below.
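    For illustration (not part of the original reply): a minimal JDBC sketch that checks whether the column was defined with byte or character length semantics. MY_TABLE, MY_COL and the connection data are placeholders taken from the question's naming.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    public class CheckSemantics {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT column_name, data_type, char_used "   // CHAR_USED: 'B' = byte, 'C' = char semantics
                       + "FROM all_tab_columns "
                       + "WHERE table_name = ? AND column_name = ?")) {
                ps.setString(1, "MY_TABLE");   // placeholder table from the question
                ps.setString(2, "MY_COL");     // placeholder column from the question
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " " + rs.getString(2)
                                + " char_used=" + rs.getString(3));
                    }
                }
            }
        }
    }
    This only confirms how the column is defined; whether it explains the full table scan with LIKEC still has to be checked against the actual execution plan.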
    BTW: Not posting version info decreases the chance you get an adequate reply.
    Sybrand Bakker
    Senior Oracle DBA

  • Index with "or" clause (BUG still exists?)

    The change log for 2.3.10 mentions "Fixed a bug that caused incorrect query plans to be generated for predicates that used the "or" operator in conjunction with indexes [#15328]."
    But looks like the Bug still exists.
    I am listing the steps to reproduce. Let me know if I have missed something (or whether the bug indeed still needs to be fixed).
    DATA
    dbxml> openContainer test.dbxml
    dbxml> getDocuments
    2 documents found
    dbxml> print
    <node><value>a</value></node>
    <node><value>b</value></node>
    INDEX (just one string equality index on node "value")
    dbxml> listIndexes
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    Index: node-element-equality-string for node {}:value
    2 indexes found.
    QUERY
    setVerbose 2 2
    preload test.dbxml
    query 'let $temp := fn:compare("test", "test") = 0
    let $results := for $i in collection("test.dbxml")
    where ($temp or $i/node[value = ("a")])
    return $i
    return <out>{$temp}{$results}</out>'
    When $temp is true I expected the result set to contain both records, but that was not the case with the index in place. It works correctly when there is no index!
    Result WITH INDEX
    dbxml> print
    <out>true<node><value>a</value></node></out>
    Result WITHOUT INDEX
    dbxml> print
    <out>true<node><value>a</value></node><node><value>b</value></node></out>

    Hi Vijay,
    This is a completely different bug, relating to predicate expressions that do not examine nodes. Please try the following patch, to see if it fixes this bug for you:
    --- dbxml-2.3.10-original/dbxml/src/dbxml/optimizer/QueryPlanGenerator.cpp     2007-04-18 10:05:24.000000000 +0100
    +++ dbxml-2.3.10/dbxml/src/dbxml/optimizer/QueryPlanGenerator.cpp     2007-08-08 11:32:10.000000000 +0100
    @@ -1566,11 +1572,12 @@
         else if(name == Or::name) {
              UnionQP *unionOp = new (&memMgr_) UnionQP(&memMgr_);
    +          result.operation = unionOp;
              for(VectorOfASTNodes::iterator i = args.begin(); i != args.end(); ++i) {
                   PathResult ret = generate(*i, ids);
                   unionOp->addArg(ret.operation);
    +               if(ret.operation == 0) result.operation = 0;
    -          result.operation = unionOp;
    // These operators use the presence of the node arguments, not their value
    John

  • INDEX vs TABIX

    Hi,
    Can somebody explain the clear difference between SY-INDEX and SY-TABIX, with one or two small examples? I am a little bit confused.
    Thanks.

    Hi,
    SY-INDEX
    In a DO or WHILE loop, SY-INDEX contains the number of loop passes including the current pass.
    SY-TABIX
    Current line of an internal table. SY-TABIX is set by the statements below, but only for index tables. The field is either not set or is set to 0 for hashed tables.
    APPEND sets SY-TABIX to the index of the last line of the table, that is, it contains the overall number of entries in the table.
    COLLECT sets SY-TABIX to the index of the existing or inserted line in the table. If the table has the type HASHED TABLE, SY-TABIX is set to 0.
    LOOP AT sets SY-TABIX to the index of the current line at the beginning of each loop pass. At the end of the loop, SY-TABIX is reset to the value it had before entering the loop. It is set to 0 if the table has the type HASHED TABLE.
    READ TABLE sets SY-TABIX to the index of the table line read. If you use a binary search and the system does not find a line, SY-TABIX contains the total number of lines, or one more than the total number of lines. SY-TABIX is undefined if a linear search fails to return an entry.
    SEARCH <itab> FOR sets SY-TABIX to the index of the table line in which the search string is found.
    regards,
    madhu

  • What is "LINE-COL2 = SY-INDEX ** 2."

    Can you explain what "**" does?
    FIELD-SYMBOLS <FS> LIKE LINE OF ITAB.
    DO 4 TIMES.
      LINE-COL1 = SY-INDEX.
      LINE-COL2 = SY-INDEX ** 2.  "what this will do
      APPEND LINE TO ITAB.
    ENDDO.

    Hi sunil,
    1. ** means "to the power of".
    2. E.g. 5 ** 2 = 25
       (5 to the power of 2 = 25)
    regards,
    amit m.

Maybe you are looking for

  • ADF Application and Oracle Portal Login Page

    We have developed ADF application and deployed it in Oracle AS 10.1.2 along with the custom JAAS module, which is working fine with the application custom login page. As a next page, I want to use Oracle Portal login page for the authentication and a

  • Error while applying patch MLR#4

    hi, I am getting error while applying patch MLR#4. Can you please let me know whats error can be resolved? Please see below the snapshots from log file. Thanks, Vaibhav Execute perf_wid_pid_pcid.sql Error: Abort transaction java.lang.NullPointerExcep

  • How to delete cookies in windows 7

    You give information about how to delete cookies in windows XP and the button 'extra' I can't find the button 'extra' in my version of windows 7

  • Is there a way to allow "send to lab" export in lightroom?

    I would like to develop e plugin for my clients in lightroom. I need a plugin who let clients directly send photographs to my lab. For example in an ftp site. I've seen time ago something like that and i would like to do it for my lab. Is it quite po

  • Flash Loader Not Working

    I am currently creating a website www.offnitemusic.com using Flash CS3 and Actionscript 3.0. I dynamically create a Loader in Actionsscript to load external SWF's once a button is clicked. The loader and the files load and work fine when I run Test M