Too Many Indexes on a Table

Dear Gurus,
I have performance problems with a specific table called CC_FICHA_FINANCEIRA; its structure is described below:
NUMBER OF RECORDS: ABOUT 1,600,000
NAME NULL TYPE
CD_FUNDACAO NOT NULL VARCHAR2(2)
NUM_INSCRICAO NOT NULL VARCHAR2(9)
CD_PLANO NOT NULL VARCHAR2(4)
CD_TIPO_CONTRIBUICAO NOT NULL VARCHAR2(2)
ANO_REF NOT NULL VARCHAR2(4)
MES_REF NOT NULL VARCHAR2(2)
SEQ_CONTRIBUICAO NOT NULL NUMBER(5)
CD_OPERACAO NOT NULL VARCHAR2(1)
SRC NUMBER(15,2)
REMUNERACAO NUMBER(15,2)
CONTRIB_PARTICIPANTE NUMBER(15,2)
CONTRIB_EMPRESA NUMBER(15,2)
DIF_CONTRIB_PARTICIPANTE NUMBER(15,2)
DIF_CONTRIB_EMPRESA NUMBER(15,2)
TAXA_ADM_PARTICIPANTE NUMBER(15,2)
TAXA_ADM_EMPRESA NUMBER(15,2)
QTD_COTA_RP_PARTICIPANTE NUMBER(15,6)
QTD_COTA_FD_PARTICIPANTE NUMBER(15,6)
QTD_COTA_RP_EMPRESA NUMBER(15,6)
QTD_COTA_FD_EMPRESA NUMBER(15,6)
ANO_COMP NOT NULL VARCHAR2(4)
MES_COMP NOT NULL VARCHAR2(2)
CD_ORIGEM VARCHAR2(2)
EXPORTADO VARCHAR2(1)
SEQ_PP_PR_PAR NUMBER(10)
ANO_PP_PR_PAR NUMBER(5)
SEQ_PP_PR_EMP NUMBER(10)
ANO_PP_PR_EMP NUMBER(5)
SEQ_PP_PR_TX_PAR NUMBER(10)
ANO_PP_PR_TX_PAR NUMBER(5)
SEQ_PP_PR_TX_EMP NUMBER(10)
ANO_PP_PR_TX_EMP NUMBER(5)
I think the indexes on this table may be the problem; there are too many of them. I describe them below:
INDEX COLUMNS
CC_FICHA_FINANCEIRA_PK CD_FUNDACAO
NUM_INSCRICAO
CD_PLANO
CD_TIPO_CONTRIBUICAO
ANO_REF
MES_REF
SEQ_CONTRIBUICAO
CD_OPERACAO
ANO_COMP
MES_COMP
CC_FICHA_FINANCEIRA_IDX_002 CD_FUNDACAO
NUM_INSCRICAO
CD_PLANO
CD_TIPO_CONTRIBUICAO
ANO_COMP
ANO_REF
MES_COMP
MES_REF
SRC
CC_FICHA_FINANCEIRA_IDX_006 CD_ORIGEM
CC_FICHA_FINANCEIRA_IDX_007 CD_TIPO_CONTRIBUICAO
CC_FICHA_FINANCEIRA_IDX2 CD_FUNDACAO
ANO_REF
MES_REF
NUM_INSCRICAO
CD_PLANO
CD_TIPO_CONTRIBUICAO
CONTRIB_EMPRESA
CC_FICHA_FINANCEIRA_IDX3 CD_FUNDACAO
ANO_REF
MES_REF
CD_PLANO
CD_TIPO_CONTRIBUICAO
SEQ_CONTRIBUICAO
Some columns appear in 4 indexes. Is that right? What is the best way to analyze these indexes?
Regards...

Hi,
You can monitor index usage to find out whether an index is actually used by the application.
See Metalink note 136642.1, "Identifying Unused Indexes with the ALTER INDEX MONITORING USAGE Command".
Nicolas.
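A minimal sketch of that approach, using one of the index names from the thread (syntax per the note; assumes Oracle 9i or later):

```sql
-- Enable usage monitoring on a suspect index
ALTER INDEX cc_ficha_financeira_idx_006 MONITORING USAGE;

-- After a representative workload has run, check whether it was used
-- (V$OBJECT_USAGE shows indexes of the current schema only)
SELECT index_name, monitoring, used, start_monitoring, end_monitoring
  FROM v$object_usage
 WHERE index_name = 'CC_FICHA_FINANCEIRA_IDX_006';

-- Switch monitoring off again when finished
ALTER INDEX cc_ficha_financeira_idx_006 NOMONITORING USAGE;
```

Note that USED = 'YES' only tells you the index was touched at least once; keep the monitoring window long enough to cover month-end and other periodic jobs before dropping anything.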

Similar Messages

  • Can you have too many rows in a table?

How many rows would you consider to be too many for a single table, and how would you rearrange the data if
asked?
Any answers?
    sukai

    I have some tables with over 100 million rows that still perform well, and I'm sure much larger tables are possible.  The exact number of rows would vary significantly depending on a number of factors including:
    Power of the underlying hardware
    Use of the table – frequency of queries and updates
    Number of columns and data types of the columns
    Number of indexes
    Ultimately the answer probably comes down to performance – if queries, updates, inserts, index rebuilds, backups, etc. all perform well, then you do not yet have too many rows.
The best way to rearrange the data would be horizontal partitioning.  It distributes the rows into multiple files, which provides a number of advantages, including the potential to perform well with a larger number of rows.
    http://msdn.microsoft.com/en-us/library/ms190787.aspx
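As a rough illustration of horizontal partitioning in SQL Server (all object names and boundary values below are hypothetical):

```sql
-- Partition function: splits rows into ranges by an int key
CREATE PARTITION FUNCTION pf_by_year (int)
AS RANGE RIGHT FOR VALUES (2012, 2013, 2014);

-- Partition scheme: maps each range to a filegroup (all to PRIMARY here)
CREATE PARTITION SCHEME ps_by_year
AS PARTITION pf_by_year ALL TO ([PRIMARY]);

-- The table is created on the scheme; SQL Server routes each row
-- to a partition based on its year_col value
CREATE TABLE dbo.BigTable (
    id       int          NOT NULL,
    year_col int          NOT NULL,
    payload  varchar(50)  NULL
) ON ps_by_year (year_col);
```

In practice you would spread the partitions over separate filegroups rather than placing them all on PRIMARY.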

  • Too many information in 1 table line - alternative solution ?

    Hi,
Just joined today, as I am new to Adobe LiveCycle.
I am using Adobe LiveCycle 8.1.2.
As the title explains, I am currently designing an interactive Adobe form (orientation: Portrait) which has a table with "add/remove" buttons. The "add/remove" works perfectly (I can add or remove lines from the table).
The problem is that I have too much information (read: columns) in one table line. As a result:
1. The table line is not sufficient to display all the information, OR
2. I can squeeze all the columns into one line and play with a smaller font size, but the form looks ugly.
Has anyone encountered this kind of situation before?
Can you share your design (if possible, attach the form) for a similar situation?
Any bright input/ideas are most welcome.
Thanks in advance!
Note:
1. Landscape is NOT an option.
2. I still want the "add/remove" functionality.

    Hi,
Suppose you have set the number of columns to 6 for every row: the 1st row has 6 columns, and the 2nd row initially has 6 columns. But you can merge some of the columns into one: select the columns you want to merge, then Right Click -> Merge Cells. This may lead to another problem with the captions of the merged columns, but if it is a text field, or fields of the same type, you can set the option to place the caption on top.
    Thanks,
    Bibhu.

  • Too many sessions inserting in table

    Hi All
I am new to batch program processing. Please help me with this.
I have an application in which more than one session
will try to insert millions of records into the same table.
I need to know what care I should take here.
Will the table be locked, or do I not need to worry about this?
Further on in the program, one session may delete several hundred thousand records while
another session is still inserting into the table.
What do I do in this case?
Also, if one session is trying to update while another session is inserting/deleting records,
what care do I take?
Thanks
Ashwin N.

It is a very bad idea to have more than one session attempt to insert millions of rows (if that is a literal statement of the requirement). If you have paid for it, use the Parallel Server option, but always try to have only a single batch process running unless it is absolutely unavoidable. Otherwise you are increasing the risk of latch contention, both for the table itself and for the rollback segments.
    Other things to check (these are database type things):
- have a big rollback segment assigned to the batch session;
    - consider committing every 1000 records to cut down on rollback usage;
    - if you're really going to have several sessions doing monster inserts like this give your table lots of freelist groups;
    - make sure the table and its indexes have enough empty space by pre-assigning extents
    - make sure that the tablespaces are big enough in case additional extents are required
As for row contention: rows you insert cannot be locked by someone else, nor are they affected by someone else inserting, updating or deleting other rows (although you might suffer from block header contention), unless that other session has issued a LOCK TABLE statement. Only once you have committed your insert will other sessions be able to see, update and delete your new rows (because Oracle doesn't support DIRTY_READ :-) )
    One way of reading your question is that you think someone may be trying to delete the records you're inserting. I hope that's not the case but if it is someone ought to have a look at the business model.
    rgds, APC
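A PL/SQL sketch of the "commit every 1000 records" advice above (table names are placeholders; it assumes the staging and target tables have identical column lists):

```sql
DECLARE
  v_count PLS_INTEGER := 0;
BEGIN
  FOR rec IN (SELECT * FROM staging_table) LOOP
    INSERT INTO target_table VALUES rec;  -- record-based insert, Oracle 9iR2+
    v_count := v_count + 1;
    IF MOD(v_count, 1000) = 0 THEN
      COMMIT;  -- release undo/rollback space periodically
    END IF;
  END LOOP;
  COMMIT;  -- commit the final partial batch
END;
/
```

Be aware that periodic commits make the job non-atomic: if it fails midway, you need a way to resume rather than simply rerun it.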

  • Weird behavior when deleting too many rows from a table

    Hello ADFr's
I have the following code in my managed bean, which removes data from an ADF table. I get an id set from a calling process in my app.
DCIteratorBinding tableIterator = getIterator(); // returns the iterator of the table being modified
RowSetIterator rsi = tableIterator.getRowSetIterator();
for (Integer id : idSet) {
    Key key = new Key(new Object[]{id.toString()});
    Row foundRow = rsi.findByKey(key, 1)[0];
    foundRow.remove();
}
My issue:
This seems to work when the idSet is small, around 30 ids. But when the idSet reaches about 50 (maybe a little less), none of the rows seem to get deleted. Any clue?
I added this test to check whether any rows had been deleted after the loop above, and all the records were still printed out:
for (Row row : tableIterator.getAllRowsInRange()) {
    System.out.println("row with name " + row.getAttribute("name") + " still exists");
}
Again, this works fine when the set is under 30 or so.
Why doesn't it work for an idSet of any size?

    That was it.. thanks Puthanampatti.
I thought about this right after I posted.
    Well maybe this post will help someone else (-;

  • E Table of aggregate has too many partitions

    Hi,
While checking the performance info of a query, I get a red light on one of the lines saying that aggregate 100067 has too many partitions in its E table.
Could you please let me know what this is about and how to resolve it?
    Thanks
    R

Does your cube have "compress after rollup" enabled on the aggregates?
The aggregate is partitioned in the same way as the E fact table. You should choose the "compress after rollup" option on the rollup tab of the InfoCube. Set it, and then if possible deactivate the aggregate and do another rollup.

  • Reg. indexes on a table

    Hi,
Will having too many indexes on a table (like an index on each column) have any impact on DML performance?
    TIA
    Edited by: suma_ys on Jan 20, 2010 2:05 AM

Pavan - a DBA wrote:
"I don't think so. Sorry, I can't show you a demo as I don't have a test machine now..."
Here is a small test case:
    SQL> select * from v$version ;
    BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL> create table t3 (tid int, tdesc varchar2(10) default 'T3') ;
    Table created.
    SQL> create index t3_idx on t3(tdesc) ;
    Index created.
    SQL> analyze index t3_idx validate structure ;
    Index analyzed.
    SQL> select rows_per_key, del_lf_rows from index_stats ;
    ROWS_PER_KEY DEL_LF_ROWS
               0           0
    SQL> insert into t3 (tid) values (1) ;
    1 row created.
    SQL> insert into t3 (tid) values (2) ;
    1 row created.
    SQL> commit ;
    Commit complete.
    SQL> analyze index t3_idx validate structure ;
    Index analyzed.
    SQL> select rows_per_key, del_lf_rows from index_stats ;
    ROWS_PER_KEY DEL_LF_ROWS
               2           0
    SQL> delete from t3 where tid = 1 ;
    1 row deleted.
    SQL> commit ;
    Commit complete.
    SQL> analyze index t3_idx validate structure ;
    Index analyzed.
    SQL> select rows_per_key, del_lf_rows from index_stats ;
    ROWS_PER_KEY DEL_LF_ROWS
               2           1

  • SQL subquery returning too many rows with Max function

Hello, I hope someone can help me; I've been working on this all day. I need to get the max value, plus the date and id that the max value is associated with, within a specific date range. Here is my code. I have tried many different versions, but it still returns more than one id and date.
    Thanks in advance
SELECT DISTINCT
       bw_s.id,
       avs.carProd,
       cd_s.RecordDate,
       cd_s.milkProduction  AS MilkProd,
       cd_s.WaterProduction AS WaterProd
FROM tblTest bw_s
INNER JOIN tblTestCp cd_s WITH (NOLOCK)
        ON bw_s.id = cd_s.id
       AND cd_s.recorddate BETWEEN '08/06/2014' AND '10/05/2014'
INNER JOIN
       (SELECT id, MAX(CarVol) AS carProd
        FROM tblTestCp
        WHERE recorddate BETWEEN '08/06/2014' AND '10/05/2014'
        GROUP BY id) avs
        ON avs.id = bw_s.id
    id RecordDate carProd       MilkProd WaterProd
    47790 2014-10-05   132155   0 225
    47790 2014-10-01   13444    0 0
    47790 2014-08-06   132111    10 100
    47790 2014-09-05   10000    500 145
    47790 2014-09-20   10000    800 500
    47791 2014-09-20   10000    300 500
    47791 2014-09-21   10001    400 500
    47791 2014-08-21   20001    600 500
    And the result should be ( max carprod)
    id RecordDate carProd       MilkProd WaterProd
    47790 2014-10-05   132155  0 225
    47791 2014-08-21   20001    600 500

    Help your readers help you.  Remember that we cannot see your screen, do not know your data, do not understand your schema, and cannot test a query without a complete script.  So - remove the derived table (to which you gave the alias "avs")
    and the associated columns from your query.  Does that generate the correct results?  I have my doubts since you say "too many" and the derived table will generate a single row per ID.  That suggests that your join between the first
    2 tables is the source of the problem.  In addition, the use of DISTINCT is generally a sign that the query logic is incorrect, that there is a schema issue, or that there is a misunderstanding of the schema. 
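One common fix for this kind of "row with the max value per group" requirement, sketched under the assumption that all the needed columns live in tblTestCp (names inferred from the post): rank the rows per id with ROW_NUMBER() and keep only the top row, instead of joining back to a MAX() derived table.

```sql
WITH ranked AS (
    SELECT cd.id,
           cd.RecordDate,
           cd.CarVol          AS carProd,
           cd.milkProduction  AS MilkProd,
           cd.WaterProduction AS WaterProd,
           ROW_NUMBER() OVER (PARTITION BY cd.id
                              ORDER BY cd.CarVol DESC) AS rn
    FROM tblTestCp AS cd
    WHERE cd.recorddate BETWEEN '20140806' AND '20141005'
)
SELECT id, RecordDate, carProd, MilkProd, WaterProd
FROM ranked
WHERE rn = 1;  -- exactly one row per id: the one with the highest CarVol
```

Using unambiguous YYYYMMDD literals also avoids regional date-format surprises with strings like '08/06/2014'.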

  • Bex Query: Too many table names in the query The maximum allowable is 256

    Hi Experts,
I need your help. I'm working on a query using a MultiProvider over 2 DataStores. I need to work with cells to assign specific account values to specific rows and columns, so I was creating a structure with elements from a hierarchy, but I get this error when I'm halfway through the structure:
"Too many table names in the query. The maximum allowable is 256. Incorrect syntax near ')'. Incorrect syntax near 'O1'."
Any idea what is happening? Is it possible to fix it? Do I need to ask for a modification of my InfoProviders? Someone told me it is possible to combine 2 queries; is that true?
Thanks a lot for your time and patience.

    Hi,
The maximum allowable limit of 256 holds true. It is the maximum number of characteristics and key figures that can be used on the column side. While creating a structure, you create key figures (restricted or calculated), formulas, etc.; the objects that you use to create these should not number more than 256.
http://help.sap.com/saphelp_nw70/helpdata/EN/4d/e2bebb41da1d42917100471b364efa/frameset.htm
Not sure if a combination of 2 queries is possible. You can use RRI, or have a workbook with 2 queries.
    Hope it helps.

  • Too many table columns

    Hi,
    I have to create a pdf with a table having multiple columns.
But the table has too many columns to fit on an A4 page.
I do not want to change the paper type; I have to use A4 only.
Is there a way to show the table records in two consecutive rows,
so that I can split the columns into two rows?
There would be two header rows and two data rows (only the data rows repeat):
data row 1 would hold fields 1 to 10, and data row 2 would hold fields 11 to 20.
    Regards
    Reema.

As far as I know there's probably no way ID is going to do what you want.
You can place a table across a multi-page spread, but the odds of being able to do that and still keep the file printable are marginal, at best. You can't, for example, leave blank space at the gutters unless you are able to add a blank column that spans the gutter, and the limit for multi-page spreads is 10 pages wide, which doesn't sound like enough to hold almost 600 columns.
Perhaps placing the table using named ranges of appropriate widths would work...

  • Special Ledger Roll up: too many records created in receiver table

    Hello,
When I execute a roll-up from one special ledger to another special ledger (leaving out a number of fields), 4 records (postings) are selected, but 92 records (4 x 23) are created in the receiver table, which is 88 records too many. For each selected record in the sender table, 7 records contain only zero amounts and have an incorrect RPMAX value (ranging from 16 up to 336); the 8th record has an amount and the correct RPMAX value (352). In both the sender and receiver ledger, the fiscal year variant Daily Balance is assigned (each day is a period). Any hint on how to prevent the empty records?

Installing the patch, re-importing the SAP Provisioning Framework (I selected 'update'), and recreating the jobs didn't yield any result.
When examining pass 'ReadABAPRoles' of the job 'AS ABAP - Initial Load' -> tab 'source', there are no scripts used.
After applying the patch we decided to verify the scripts (sap_getRoles, sap_getUserRepositories) in our Identity Center against those of Note 1398312 ('SAP NW IdM Provisioning Framework for SAP Systems'), and they are different.
The file sizes of SAP Provisioning Framework_Folder.mcc for SP3 Patch 0 and Patch 1 are also exactly the same.
    Opening file SAP Provisioning Framework_Folder.mcc with Wordpad : searched for 'sap_getRoles'  :
    <GLOBALSCRIPT>
    <SCRIPTREVISIONNUMBER/>
    <SCRIPTLASTCHANGE>2009-05-07 08:00:23.54</SCRIPTLASTCHANGE>
    <SCRIPTLANGUAGE>JScript</SCRIPTLANGUAGE>
    <SCRIPTID>30</SCRIPTID>
    <SCRIPTDEFINITION> ... string was too long to copy
    paste ... </SCRIPTDEFINITION>
    <SCRIPTLOCKDATE/>
    <SCRIPTHASH>0940f540423630687449f52159cdb5d9</SCRIPTHASH>
    <SCRIPTDESCRIPTION/>
    <SCRIPTNAME>sap_getRoles</SCRIPTNAME>
    <SCRIPTLOCKSTATE>0</SCRIPTLOCKSTATE>
-> Script last change 2009-05-07 08:00:23.54 -> that's no update!
So I assume the updates mentioned in Note 1398312 aren't included in SP3 Patch 1. I manually replaced the current scripts with those from the note and re-tested: no luck, same issue.
    Thanks again for the help,
    Kevin

  • Select from (too many) tables

    Hi all,
I'm a proud Oracle Apex developer. We have developed an Interactive Report that is generated from many joined tables in a remote system. I've read that to improve performance we can do the following:
1) Create a temporary table on our system that stores the app_user id and the columns resulting from the query
2) Create a procedure that does something like:
declare
param1 := :PXX_item;
param2 := :PXY_item;
param3 := V('APP_USER');
begin
insert into <our_table>
(select param3, <query from remote system>);
commit;
end;
3) Redirect to a query page where the IR reads from this temp table
On the "Exit" button there is a procedure that purges that user's data (delete from temp where user = V('APP_USER')), so the temp table only holds the necessary data.
Do you see any inconvenience? The application will be used by about 500 users, with about 50 concurrent users at a time.
    Thank you!

1) "We don't have control of the source system, we can only perform queries on it"
I was referring to a materialized view on the system where Apex is installed, not on the source database.
2) "There are many tables involved"
I don't understand why this is a problem. Too much data I can see, but too many tables... not so much.
3) "Data has to be in real time, with no delay"
This would be a problem for an MV or for collections. The collections would store the data as of the initial query; any IRs using the collection after the fact would be using stale data. If you absolutely have to have the data as of right now every time, then the full query must run on the remote system every time. Tuning that query is the only option to make it faster.
4) "There are many transactions on the source tables (they are the core of the source system), so the MV could not be refreshed fast enough"
It probably could be, with fast refresh enabled, but that is not necessarily practical. As I indicated in 3, you have painted yourself into a corner here. You have stated a need for a real-time query, and that eliminates a number of possibilities for query-once, use-many performance solutions.
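For completeness, a sketch of the materialized-view idea over a database link (all names here are hypothetical, and it uses a complete refresh on demand, since fast refresh across a link has extra prerequisites):

```sql
CREATE MATERIALIZED VIEW mv_report
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
  SELECT t1.id, t1.col_a, t2.col_b
    FROM table_one@remote_db t1
    JOIN table_two@remote_db t2 ON t2.id = t1.id;

-- Refresh explicitly, e.g. from a scheduler job ('C' = complete refresh)
BEGIN
  DBMS_MVIEW.REFRESH(list => 'MV_REPORT', method => 'C');
END;
/
```

The Apex IR then queries mv_report locally; the trade-off, as noted above, is that the data is only as fresh as the last refresh.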

  • Calling Overloaded Procedures from Table Adapter - PLS-00307: too many..

I have called overloaded Oracle procedures from .NET code in the past. I now want to add a procedure call to a table adapter. The procedure is overloaded, and when I add it I get the following:
PLS-00307: too many declarations of 'prc_[my proc name]' match this call.
I looked in the designer class and all the parameters are there, just as in my own code, but I still get the message above even with all the parameter names in place.
Is there a way to call overloaded procedures from a table adapter?

    Any Oracle folks care to provide some input on why Table Adapters cannot call Overloaded Stored Procs?
    Edited by: SURFThru on Jul 8, 2011 11:37 AM

  • Access 2010 table has too many fields for web database - how to split into two web-compatible tables?

    Hello, 
    I'm in the process of converting an Access 2010 database into a web database and I'm having some trouble. I have a table which has 236 fields, which is more than the 220 field limit for web-compatible tables. I have tried to split this table into two tables
    with a one-to-one relationship, but web tables can only use lookups as relationships. I've tried to connect the tables with a lookup and then synthesize a one-to-one relationship by using data macros but I'm not having much luck.
I realize that 236 fields is a lot, but it must be set up this way because each field represents a task and is a yes/no box verifying that the task has been completed; the records are the employees for whom the tasks need to be completed.
    Could someone please help me figure out a way to make this table web compatible?
    Thank you, 
    Ryan

    Hi,
I found that you've cross-posted the question on our Answers forum; are you satisfied with the reply there?
    http://answers.microsoft.com/en-us/office/forum/office_2010-access/access-2010-table-has-too-many-fields-for-web/06ee81ea-24ab-48b8-9b8f-0ed08a868bac
    Regards,
    George Zhao
    TechNet Community Support

  • Too many object match the primary key on master-detail tables

    Hi all,
    I am using Jdeveloper 11.1.1.2 and ADFBC.
    I have three table: tableA (with fields IdA,AttributeA), tableB (with fields IdB,AttributeB), tableC (with fields IdC,IdTableA,IdTableB,AttributeC).
    Table C has a composition relation with tableA and tableB.
I have a panelTabbed with two tabs. In the first tab I can create a row in tableA, and automatically (by setting some parameters) a row is created in tableC. In the second tab I see tableC.
I get this strange behavior when I try to create the row in tableA for the first time:
I click the create button, insert the values for tableA, and commit, and I get this error:
too many objects match the primary key oracle.jbo.Key [132]
Instead, if I click the create button, insert the values for tableA, move to the second tab, return to the first tab, and then commit, I get no errors and the rows are correctly created in tableA and tableC.
    How can I solve it?
    Thank you
    Andrea

    It indicates that an Entity is being added to the Entity cache with the same primary key as an existing entity: http://download.oracle.com/docs/cd/E14571_01/apirefs.1111/e10653/oracle/jbo/CSMessageBundle.html#EXC_TOO_MANY_OBJECTS
If the primary key based on a sequence is assigned declaratively and your view link is based on an association, ensure that the association is marked as a composition in the Association settings.
    As this has been asked a few times in the forum, double-check for solutions here: http://forums.oracle.com/forums/search.jspa?threadID=&q=JBO-25013&objID=f83&dateRange=lastyear&userID=&numResults=15&rankBy=10001
