Secondary index report

I would like to create a report that lists all secondary indexes including the columns indexed.  What tables contain information on secondary indexes?  Is there an FM to get information on secondary indexes?
Thanks In Advance.

Hi Brad,
The two tables you are looking for are DD12L and DD17S. Table DD12L gives you the list of secondary indexes, and table DD17S gives you the list of all the fields in a secondary index.
1) Retrieve field INDEXNAME from table DD12L where SQLTAB = <Table name>.
2) Retrieve field FIELDNAME from table DD17S where SQLTAB = <Table name> and INDEXNAME = DD12L-INDEXNAME.
This should be a simple two-select report.
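A minimal sketch of such a report, assuming the DD12L/DD17S field names above (the report name is made up, and DD17S-POSITION is assumed to hold the field's position within the index):
REPORT zsec_index_list.
PARAMETERS: p_tab TYPE dd12l-sqltab OBLIGATORY.
DATA: lt_index  TYPE STANDARD TABLE OF dd12l,
      ls_index  TYPE dd12l,
      lt_fields TYPE STANDARD TABLE OF dd17s,
      ls_field  TYPE dd17s.
* 1) All secondary indexes defined for the table
SELECT * FROM dd12l INTO TABLE lt_index
  WHERE sqltab = p_tab.
* 2) The fields belonging to those indexes
IF NOT lt_index[] IS INITIAL.
  SELECT * FROM dd17s INTO TABLE lt_fields
    FOR ALL ENTRIES IN lt_index
    WHERE sqltab    = p_tab
      AND indexname = lt_index-indexname.
ENDIF.
SORT lt_fields BY indexname position.
* Simple list output: one line per index, indented lines for its fields
LOOP AT lt_index INTO ls_index.
  WRITE: / ls_index-sqltab, ls_index-indexname.
  LOOP AT lt_fields INTO ls_field WHERE indexname = ls_index-indexname.
    WRITE: /5 ls_field-fieldname.
  ENDLOOP.
ENDLOOP.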
Let me know if this helps.
Regards,
Mark

Similar Messages

  • DBACOCKPIT reports missing secondary indexes even though they exist

    Hi all,
    On our BI system (7.0), DBACOCKPIT reports some secondary indexes as missing even though they already exist.
         Any suggestions for the cause of the problem or what to investigate?
    regards,
    K

    Hi,
    When were the statistics last updated? Update the statistics and check whether the problem still occurs.

  • Creation of secondary indexes due to heavy flow of messages in ECC SMQ2

    Hi gurus,
    We have been facing SMQ2 issues in the ECC system for the last 15 days, due to "Time Limit Exceeded" and sometimes "Object is Locked by the user xxxx" errors.
    So we finally decided to create secondary indexes on the ECC side. My question is: is anything required from the PI side while creating the secondary indexes? I am asking because there are no issues on the PI end; only after the messages reach the target (ECC) system do they get stuck in SMQ2 and cause issues. Below are the interface details.
    ECC-FSCM
    SAP ECC 6.0
    FSCM
    CreditCommitment_In and CreditCommitment_Out
    CC_ProxySender_FSCM
    CC_ProxyReceiver_FSCM
    Please reply back.
    Regards
    Madhu

    If you had gone through the replies to your previous post, Iñaki Vila and I already provided the name of the report which needs to be scheduled to clear the messages in the queues.
    Please provide the permanent fix for this issue.

  • DSO activation problem after creating the secondary indexes

    Hi,
    I am facing a problem with DSO activation after creating secondary indexes.
    •  Compared with InfoCubes, there is no functionality available which allows dropping and re-creating a secondary index before/after the data activation.
    As a workaround I can write a simple report which drops and re-creates the indexes at database level.
    By using a process chain, we can simply insert the drop-index report before the data activation and the create-index report after the data activation process.
    Can anybody help me with a step-by-step procedure or material for writing the delete-index and create-index reports for a DSO object?
    Thanks in advance for your help.
    Thanks & Regards,
    Bala

    Hi,
    In BI, if you are using the DSO for reporting, then you can simply check the DSO settings for SID generation.
    There is no need to create or delete the indexes.
    If the DSO is not used for reporting, then there is no need for indexes at all.
    Ramesh
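    If the indexes are kept, the drop/re-create step at database level can be sketched roughly as below. All names here are placeholders ("/BIC/AZSALES00" for the active table of the DSO, "~Z01" for the index, "/BIC/ZCUST" and "/BIC/ZMAT" for the indexed fields), the DDL is Oracle-style, and no error handling is shown:
    REPORT zdso_index_toggle.
    PARAMETERS: p_drop TYPE c AS CHECKBOX DEFAULT 'X'.
    IF p_drop = 'X'.
      " process chain step before the DSO activation
      EXEC SQL.
        DROP INDEX "/BIC/AZSALES00~Z01"
      ENDEXEC.
    ELSE.
      " process chain step after the DSO activation
      EXEC SQL.
        CREATE INDEX "/BIC/AZSALES00~Z01"
          ON "/BIC/AZSALES00" ("/BIC/ZCUST", "/BIC/ZMAT")
      ENDEXEC.
    ENDIF.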

  • Database table AUFM access is taking a long time even though a secondary index was created

    Hi Friends,
    There is a report for "Goods movement rel. to Service orders + Acc. indicator".
    We have two testing systems (EBQ for developers and PEQ on the client side).
    The EBQ system receives a replica of PEQ every month.
    This report does not take much time in EBQ, but it takes a long time in PEQ. For the selection criteria I have given, both systems have the same data (and return the same output).
    The report has the following fields on the selection screen:
    A_MJAHR     Material Doc. Year (Mandatory)
    S_BLDAT     Document Date(Optional)
    S_BUDAT     Posting Date(Optional)
    S_LGORT     Storage Location(Optional)
    S_MATNR     Material(Optional)
    S_MBLNR     Material Document (Optional)
    S_WERKS     Plant(Optional)
    The client does not agree to make Material Document mandatory.
    The main (first) table hit is on the AUFM table. As there are also non-key fields in the WHERE condition, we have created a secondary index for the AUFM table on the following fields:
    BLDAT
    BUDAT
    MATNR
    WERKS
    LGORT 
    Even then, in the PEQ system the report takes a very long time; sometimes we do not even get the ALV output.
    What can be done to get the report to execute faster?
    <removed by moderator>
    The relevant part of the report source code is below:
    <long code part removed by moderator>
    Thanks and Regards,
    Rama chary.P
    Moderator message: please stay within the 2500 character limit to preserve formatting, only post relevant portions of the code, also please read the following sticky thread before posting.
    Please Read before Posting in the Performance and Tuning Forum
    locked by: Thomas Zloch on Sep 15, 2010 11:40 AM


  • Is it worth creating a secondary index on the BKPF table?

    Hello,
    One of my clients is using ECC version 5.0. I have a requirement wherein I need to fetch data from the BKPF table based on AWKEY, BUKRS and GJAHR. There is no standard secondary index available.
    I have decided to create a secondary index on these fields in the following order:
    1) MANDT
    2) AWKEY
    3) BUKRS
    4) GJAHR
    I know that creating secondary indexes does improve performance during data retrieval. But when I checked the total number of entries in the BKPF table in the production system, there are more than 20 lakh (2 million) records.
    I am worried that creating the secondary index will create another database structure of that size in production, with the data sorted on the above fields. Also, the RAM of the production system is only 6 GB.
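    For reference, the access path the new index (MANDT, AWKEY, BUKRS, GJAHR) is meant to support is essentially the following lookup (the selection values below are only placeholders):
    DATA: lt_bkpf  TYPE STANDARD TABLE OF bkpf,
          lv_awkey TYPE bkpf-awkey.
    lv_awkey = '0090000001'.          " placeholder reference key (e.g. a billing document number)
    " MANDT is added automatically by Open SQL, so it does not appear in the WHERE clause
    SELECT * FROM bkpf
      INTO TABLE lt_bkpf
      WHERE awkey = lv_awkey
        AND bukrs = '1000'            " placeholder company code
        AND gjahr = '2011'.           " placeholder fiscal year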
    Please suggest whether creating the secondary index is a good measure, or should I recommend that the client increase the system RAM?
    Regards,
    Danish.
    Edited by: Thomas Zloch on Oct 3, 2011 3:01 PM

    Hi,
    The secondary index on BKPF-AWKEY has been successfully created in production. The report which used to time out in the foreground as well as in the background now executes in less than 10 seconds!
    The client is very satisfied with this. But there is one problem we are facing now: when a user changes an existing billing document via VF02 and saves, the system takes a very long time to save the changes.
    We monitored the processes via SM50 and found that there was a sequential read on the BKPF table.
    Before the index was transported to production, the system took no more than about 5 seconds to save a billing document. Now it takes more than 20 minutes just to save the billing document via VF02.
    I really don't know what has gone wrong. I can't figure out if I have missed any step while creating the index.
    I did the following,
    1) Created a secondary index on BKPF, saved it in a transport request and activated it. Since the index did not yet exist in the Oracle database, I activated and adjusted the table via SE14. Now the index exists in the database as well. It works perfectly in development.
    2) Imported the transport request into production and checked in SE11: the index exists and is active. It also exists in the Oracle database.
    Have I missed anything? Is it required to activate and adjust the database via SE14 in production too?
    Regards,
    Danish.

  • Select query with secondary index

    hi,
    I have a report which has performance issues with a particular select query on the KONH table.
    The select query doesn't use the primary key fields and the table already has around 19 million entries, so a secondary index was created for the fields used in the query.
    Now, KONH is a client-specific table and hence has MANDT as its first field. When the table is not indexed it is sorted according to the order of its fields, i.e. first MANDT, then the primary key fields and then the remaining fields (correct me if I am wrong).
    But the secondary index that was created doesn't have MANDT in it (yes, a mistake!).
    But instead of correcting the secondary index, I am told to change the select query.
    So I used the CLIENT SPECIFIED syntax to address the issue, but I don't understand where I should put the "WHERE mandt EQ sy-mandt" clause.
    Should I put it right after all my secondary index fields? And what happens to the order of the fields which are not present in the secondary index?
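    In other words, the statement ends up looking roughly like this (KSCHL and ERDAT only stand in for the fields that are actually in the secondary index):
    DATA lt_konh TYPE STANDARD TABLE OF konh.
    " With CLIENT SPECIFIED the automatic client handling is switched off,
    " so the client must be restricted explicitly (by convention as the first condition)
    SELECT * FROM konh CLIENT SPECIFIED
      INTO TABLE lt_konh
      WHERE mandt = sy-mandt
        AND kschl = 'ZPR0'
        AND erdat GE '20080101'.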
    Kindly help.
    Thanks.

    Hi chinmay kulkarni,
    It's better if you can ask the person concerned to add the MANDT field to your index as well.
    Indexes and MANDT
    If a table begins with the mandt field, so should its indexes. If a table begins with mandt and an index doesn't, the optimizer might not use the index.
    Remember, if you will, Open SQL's automatic client handling feature. When select * from ztxlfa1 where land1 = 'US' is executed, the actual SQL sent to the database is select * from ztxlfa1 where mandt = sy-mandt and land1 = 'US'. Sy-mandt contains the current logon client. When you select rows from a table using Open SQL, the system automatically adds sy-mandt to the where clause, which causes only those rows pertaining to the current logon client to be found.
    When you create an index on a table containing mandt, therefore, you should also include mandt in the index. It should come first in the index, because it will always appear first in the generated SQL.
    Index: Technical key of a database table.
    Primary index: The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database.
    Secondary index: Additional indexes could be created considering the most frequently accessed dimensions of the table.
    Structure of an Index
    An index can be used to speed up the selection of data records from a table.
    An index can be considered to be a copy of a database table reduced to certain fields. The data is stored in sorted form in this copy. This sorting permits fast access to the records of the table (for example using a binary search). Not all of the fields of the table are contained in the index. The index also contains a pointer from the index entry to the corresponding table entry to permit all the field contents to be read.
    When creating indexes, please note that:
    An index can only be used up to the last specified field in the selection! The fields which are specified in the WHERE clause for a large number of selections should be in the first position.
    Only those fields whose values significantly restrict the amount of data are meaningful in an index.
    When you change a data record of a table, you must adjust the index sorting. Tables whose contents are frequently changed therefore should not have too many indexes.
    Make sure that the indexes on a table are as disjunctive as possible.
    (That is they should contain as few fields in common as possible. If two indexes on a table have a large number of common fields, this could make it more difficult for the optimizer to choose the most selective index.)
    For Example...
    SELECT KUNNR KUNN2 INTO TABLE T_CUST_TERR
    FROM KNVP CLIENT SPECIFIED
    WHERE MANDT = SY-MANDT " here MANDT should be first
    AND KUNN2 IN S_TERR
    AND PARVW LIKE 'Z%'.
    Accessing tables using Indexes
    The database optimizer decides which index on the table should be used by the database to access data records.
    You must distinguish between the primary index and secondary indexes of a table. The primary index contains the key fields of the table. The primary index is automatically created in the database when the table is activated. If a large table is frequently accessed such that it is not possible to apply primary index sorting, you should create secondary indexes for the table.
    The indexes on a table have a three-character index ID. '0' is reserved for the primary index. Customers can create their own indexes on SAP tables; their IDs must begin with Y or Z.
    If the index fields have key function, i.e. they already uniquely identify each record of the table, an index can be called a unique index. This ensures that there are no duplicate index fields in the database.
    When you define a secondary index in the ABAP Dictionary, you can specify whether it should be created on the database when it is activated. Some indexes only result in a gain in performance for certain database systems. You can therefore specify a list of database systems when you define an index. The index is then only created on the specified database systems when activated
    Also, please have a look at the link below:
    http://www.sapfans.com/sapfans/forum/devel/messages/30240.html
    Hope this solves your problem.
    Reward points if useful...
    Thanks & Regards
    ilesh 24x7

  • Q1: Secondary Index. Q2: Fiscal Year/Period

    Hi gurus,
    I have two questions need your explain:
    Q1: To improve query performance I am going to create some secondary indexes on an ODS. I am wondering whether this action will remove the data from the ODS (so that I would have to redo the data loading), and whether these indexes are maintained automatically during data loading?
    Q2: About fiscal year/period (technical name 0FISCPER): we know that FY/Period exists in the sales order header and item, purchase order and billing ODSs or cubes. But there are still other dates, e.g. delivery date, goods issue date, goods receipt date, document date, posting date and so on. So which date is used to generate FY/Period?
    Thanks in advance
    Edited by: Leon Ouyang on Dec 3, 2008 4:27 PM
    Edited by: Leon Ouyang on Dec 4, 2008 1:26 AM

    Hi,
    1) The creation of an index has no effect on the data stored, and you need not delete or reload the data.
    The system will adjust the index automatically with every data load to the DSO.
    2) The fiscal year/period depends on the module from which you are taking the data, and you can tweak the logic to suit your needs.
    If the DataSource provides the fiscal period values, then well and good.
    If not, then generally it is mapped to the date which is the driving date of the query, or the date which the user will use for the input selections in the report.
    For example, if you want sales based on the posting date in the reports, and the user only wants sales by month rather than by day, then you map the posting date to fiscal period in the transformation and use that in reporting.
    This can vary; there is no hard-and-fast rule about which date it should be mapped to, and it all depends on the reporting requirement.
    Good design practice is to fill it with the same date that fills the 0CALDAY and 0CALMONTH objects. This keeps the granularity the same, but if you want some other selection in the report then you can map 0CALDAY to one date and fiscal period to something else.
    Ajeet

  • Issue with new secondary index

    Hi, I have created a new secondary index on the VEPO table (MANDT, WERKS, LGORT, SONUM). The index is activated, but when I run "Activate and adjust database" in SE14 it gives warnings and an error.
    Warning 1:
    Enhancement category 3 possible, but include or subty. not yet classified.
    Index VEPO-ZS1 must be created in the database.
    Test activation of table VEPO successful.
    Activation and DDL statements for table VEPO required.
    Warning and error:
    Enhancement category 4 possible, but include or subty. not yet classified.
    CREATE UNIQUE INDEX "VEPO~ZS1" ON "VEPO" ("MANDT", "WERKS", "LGORT", "SONUM")
    PCTFREE 10
    INITRANS 002
    TABLESPACE PSAPBTABI
    STORAGE ( INITIAL 0000447920 K
    NEXT 0000447920 K
    MINIEXTENTS 0000000001
    MAXEXTENTS UNLIMITED
    PCTINCREASE 0000
    FREELISTS 001)
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    DDL time(___1): ...378,823 milliseconds
    The SQL statement was not executed
    Statements for Table VEPO could not be executed
    Request for VEPO could not be executed
    When I execute the select statements below, they take a long time. The statements are:
    if not t_vbap[] is initial.
      select venum vbeln unvel vemng matnr lgort sonum werks
        from vepo
        into table t_vepo
        for all entries in t_vbap
        where matnr in s_matnr
          and werks =  p_werks
          and lgort in s_lgort
          and sonum =  t_vbap-sonum.
    else.
      select venum vbeln unvel vemng matnr lgort sonum werks
        from vepo
        into table t_vepo
        where matnr in s_matnr
          and werks =  p_werks
          and lgort in s_lgort.
    endif.

    Oracle gives you the answer:
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    So you cannot create a unique index with those fields; add another field (or even the whole primary key), or remove the unique flag. (Of course, in some of your environments, e.g. DEV, there may be no duplicate entries for those fields, but in at least one of your environments the problem arose.)
    Adding a unique constraint to an SAP standard table is not a good idea: even if it succeeds in a young (recently gone live) system, you could get into trouble when standard programs dump because of this unexpected constraint (and they will, according to [Murphy's law|http://en.wikipedia.org/wiki/Murphy%27s_law]).
    Regards,
    Raymond

  • Issue with select query for secondary index

    Hi all,
    I have created a secondary index A on the MARA table with the fields MANDT and Packaging Material Type (VHART).
    Now I am trying to write a report:
    Tables : mara.
    data : begin of itab occurs 0.
    include structure mara.
    data  : end of itab.
    select * from mara into table itab
    CLIENT SPECIFIED where
      MANDT = SY-MANDT and
      VHART = 'WER'.
    I'm getting the error:
    Unable to interpret "CLIENT". Possible causes of error: Incorrect spelling or comma error.          
    If I change my select query to
    select * from mara into table itab
      where
      MANDT = SY-MANDT and
      VHART = 'WER'.
    I'm getting the error:
    Without the addition "CLIENT SPECIFIED", you cannot specify the client field "MANDT" in the WHERE condition.
    Let me know if I am wrong; we are on 4.6C.
    Thanks

    Like I already said, even if you have added the MANDT field to the secondary index, there is no need to use it in the select statement.
    Let me elaborate on my earlier reply. If you had created a UNIQUE index, which I don't think you have, then you would have to include the client in the index: a unique index on a client-dependent table must contain the client field.
    Additional info:
    The accessing speed does not depend on whether or not an index is defined as a unique index. A unique index is simply a means of defining that certain field combinations of data records in a table are unique.
    Even if you have defined a secondary index, this does not automatically mean that this index is used. That depends on the database optimizer: the optimizer will determine which index is best and use it. So before transporting this index, you should make sure that the index is actually used. To check this, have a look at the link:
    [check if index is used|http://help.sap.com/saphelp_nw70/helpdata/EN/cf/21eb3a446011d189700000e8322d00/content.htm]
    Edited by: Micky Oestreich on May 13, 2008 10:09 PM
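    A side note on the original syntax error: CLIENT SPECIFIED is an addition to the FROM clause and has to follow the table name directly, so a variant that at least compiles would look roughly like this (same example value 'WER' as above):
    DATA itab TYPE STANDARD TABLE OF mara.
    SELECT * FROM mara CLIENT SPECIFIED
      INTO TABLE itab
      WHERE mandt = sy-mandt
        AND vhart = 'WER'.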

  • Secondary index on the VBFA table

    I have a very expensive statement running against the VBFA table. It comes from a custom report and executes this SQL:
    SELECT /*+ FIRST_ROWS */ "VBELV"
    FROM "VBFA"
    WHERE "MANDT" = :A0 AND "VBELN" = :A1 AND ROWNUM <= :A2
    It has this execution plan:
    SELECT STATEMENT ( Estimated Costs = 96.009 , Estimated #Rows = 41 )
    |
    --- COUNT STOPKEY
    |
    INDEX RANGE SCAN VBFA~0
    As you can see, it is a very expensive statement, because VBFA is a huge table and because I only have the VBFA~0 index with columns:
    UNIQUE Index VBFA~0
    COLUMN DISTINCT VALUES
    MANDT      1
    VBELV       1.589.207
    POSNV      4.184
    VBELN       3.202.114
    POSNN       58.173
    VBTYP_N    18
    In order to improve the performance of this report, would you recommend creating a secondary index, and would it be on the columns MANDT, VBELN, VBELV?
    I have not seen this type of secondary index in the SAP community (most of the time I see a secondary index on the MANDT, VBELN and POSNN columns), so that is why I want to double-check it before I deploy it.
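    For context, SQL of that shape (a bind variable for ROWNUM plus the FIRST_ROWS hint) typically comes from an Open SQL access like the following sketch (variable names and the document number are made up):
    DATA: lv_vbelv TYPE vbfa-vbelv,
          lv_vbeln TYPE vbfa-vbeln.
    lv_vbeln = '0090000001'.          " placeholder follow-on document number
    " UP TO n ROWS is what becomes ROWNUM <= :A2 (and the FIRST_ROWS hint) on Oracle
    SELECT vbelv FROM vbfa
      INTO lv_vbelv
      UP TO 1 ROWS
      WHERE vbeln = lv_vbeln.
    ENDSELECT.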
    Regards,
    Andrija

    Hi,
    Indexes speed up access to rows in a table. They can be created for a single column or for a series of columns.
    MANDT and VBELN do not have an index yet, so create an index on these columns.
    The EXPLAIN statement can be used to check the effect of creating or deleting indexes on the choice of search strategy for the specified SQL statement. You can also estimate the time needed by the database system to process the statement. The specified query is not executed while the EXPLAIN statement is being executed.
    To be frank, to analyze this you should generate an SQL trace file and analyze it.
    Oracle claims that FIRST_ROWS(n) optimization results in faster response times for certain queries, but we must remember that the performance is achieved via a change to the costing.
    Use the FIRST_ROWS hint when you need only the first few hits of a query. When you need the entire result set, do not use this hint as it might result in poorer performance.
    So collect statistics, analyze the table and create the index; your query will execute faster.
    Analyze the generated trace file. To see whether the indexes are used or not, have a look at the explain plan.
    Regards
    Vinod

  • Secondary Index for the table MSEG

    Hi folks,
    One of my reports is comparatively very slow when it is executed on the production server. When I checked, a query on the MSEG table was taking most of the time. I want to create a secondary index for that table. Could anybody tell me the procedure for creating a secondary index for the table MSEG, as I am not aware of it?
    Thanks in advance,
    Shyam.

    Dear Shyam,
    Please search SDN before posting a thread; there are thousands of threads available for your question. Anyhow, for your reference here is one link:
    http://help.sap.com/search/highlightContent.jsp
    Cheers
    fareed

  • Recurring missing secondary indexes in BI systems

    Hi Gurus,
    I've been experiencing the problem of missing secondary indexes for some time, and I would like to get to the bottom of it.
    These are a few of the missing indexes (from DB02):
    /BIC/F100294-900
    /BIC/F100365-900
    /BIC/F100366-900
    /BIC/F100368-900
    /BIC/F100369-900
    /BIC/F100386-900
    /BIC/FZCFIGRLP-900
    /BIC/FZCGLECRD-900
    /BIC/FZCGLECRD2-900
    /BIC/FZCGLECRD3-900
    TSP03-1
    TSP03L-A
    Any idea what could be causing this?
    And how do I solve this problem? And how can I prevent it from recurring?
    Thanks.
    Cheers,
    KeatSeong

    I see missing indexes in DB02 as well and ran SAP_INFOCUBE_INDEXES_REPAIR, and I just have a couple more questions that I think are relevant to this chain:
    1) It repaired some fact indexes like /BIC/F* and dimension indexes like /BIC/D*, but the report showed the following for those cubes:
    0  ZPP_C008     :          0  secondary indexes repaired.
    2) I had some other dimension indexes like /BIC/D* that did not get repaired and I'm wondering why. See 4) below for how I repaired those.
    3) I ran RSRV and these missing indexes like /BIC/D* do not show up as being an issue.
    4) When I run RSDU_INFOCUBE_INDEXES_REPAIR on a missing index like /BIC/D*, I get E_REPAIRED = 0 and the system error RSDU_INFOCUBE_INDEXES_REPA_ORA, but when I refresh DB02 the index is gone. When I go to RSDU_INFOCUBE_INDEXES_REPA_ORA and run it, I got e_repaired - 11.
    Thanks for all your help.
    Mike
    Edited by: Michael Hill on May 26, 2010 6:49 PM

  • Secondary Indexes

    Hi experts,
    I need an opinion on the following performance tuning question.
    A report was taking a long time fetching entries from the tables, so secondary indexes were created on the BSAD, BSID, BSAK and BSIK tables, which reduced the runtime from 8 hours to 2 hours.
    Now, because of these indexes, other processes are being hampered, such as a job in BW that uses the same tables: earlier it used to run in 5-10 minutes, now it takes around 45-50 minutes, sometimes 2 hours.
    So how can we improve performance without hampering any other processes?
    Thanks,
    Rashmi

    Hi Rashmi,
    rashmi N purohit wrote:
    Hi experts,
    >
    > I need an opinion on the following performance tuning question.
    >
    > A report was taking a long time fetching entries from the tables, so secondary indexes were created on the BSAD, BSID, BSAK and BSIK tables, which reduced the runtime from 8 hours to 2 hours.
    >
    > Now, because of these indexes, other processes are being hampered, such as a job in BW that uses the same tables: earlier it used to run in 5-10 minutes, now it takes around 45-50 minutes, sometimes 2 hours.
    >
    > So how can we improve performance without hampering any other processes?
    >
    > Thanks,
    > Rashmi
    Good question! Creating a new index always bears the risk of interfering with other statements/applications. There is no way around
    analyzing the important (frequently executed) SQL statements and coming up with an index design that fits all of them.
    You have to analyze the SQL statements that are slow now in more detail.
    In order to answer your question, all the details (SQL statements, indexes, statistics, execution plans, ...) would be needed.
    There is no easy answer to your question.
    Kind regards,
    Hermann

  • Help me with points to be noted while creating a secondary index

    Hi,
    There is a secondary index created on a Z table which already has 9 secondary indexes. This is the 10th index that has been created.
    The index is created as follows:
    MANDT             Client
    DOCNUM_REF     Document number
    The table data is huge; it has more than 4 lakh (400,000) records.
    I have done the following analysis:
    1) The fields to be used are not in any other indexes.
    2) The data is mostly distinct, i.e. the document reference field has distinct values:
      select a~docnum a~werks_o a~docnum_ref a~natop
             a~m_icms a~m_ipi
             b~lgort_out b~mov_est
             b~item b~matnr
        appending table ti_requisicoes
        from zsytmm_reqnfcb as a
       inner join zsytmm_reqnfit as b
          on a~branch_o eq b~branch_o
         and a~requi    eq b~requi
         and a~mask     eq b~mask
         for all entries in ti_doc_saida_del
       where a~docnum_ref   eq ti_doc_saida_del-docnum.
    What else do I need to check so that the index improves performance? The table is used in 300 reports.

    Ravishankar Lanjewar wrote:
    @SAP LEARNER,
    >
    > There is a secondary index created on a Z table which already has 9 secondary
    > indexes. This is the 10th index that has been created.
    >
    >
    > I will not recommend creating any more indexes on this table. If you want to create a secondary index on the table you have mentioned above: you should not create an index just for the performance of a single program/report.
    >
    > Don't create more than 4 to 5 indexes on a single table.
    >
    > Refer to the thread: [Why to create Secondary index?|Re: When is a secondary index used (in select or where)]
    Sounds like another myth that is widely spread across the SAP community.
    Remember, complex systems like modern DBMSs do not like generalizations. I personally know standard SAP tables with 9 standard secondary indexes. In the meantime, hardware has become fast enough to update several secondary indexes without really significant problems. Of course indexes require space, and of course one should create indexes that are not similar and that differ from each other by more than one field, for example. In other words, think before creating!
    But it is again incorrect to say "do not create it because you already have 9 others". You did not even ask which fields are in the affected table or what the existing indexes are.
    Ravishankar Lanjewar wrote:
    > I would recommend deleting all the indexes and creating 4 to 5 fresh indexes, after analyzing all the SQL statements (using the where-used list) and their WHERE conditions.
    I am now trying to imagine the effort of analyzing 300 reports that use this table and all the corresponding selects. And you will probably drop supporting indexes for several reports. Some of them may be CEO/CFO-relevant reports that will then run for minutes/hours instead of seconds. And I would like to see the reaction of the CEO when he finds out that this was the result of your "optimization" activities.
