Index slow

Hi All,
An index exists on a table. If we perform DML operations, why are the DML operations slow?
Please explain.
Thanks
Ramesh

Let's see ... according to you ... with an unknown version of Oracle ... at an unknown patch level ... accessing tables created with unknown DDL ... with an unknown variety of types and structures of indexes and constraints ... and possibly also triggers, views, and other dependent objects ... some DML statement(s) of unknown type, with or without query rewrite transformation ... producing potentially an unknown explain plan report output ... running in an unknown amount of time ... are slower than you think they should be.
I suggest you increase the voltage of the electricity powering your computer. <g>
Seriously ... if you do not understand that what you posted is inadequate by any standard for receiving help ... take a beginning SQL class.
About all I got from what you posted is a chuckle ... not even a laugh ... and I doubt that is what either of us was looking for. <g>

Similar Messages

  • Insert with unique index slow in 10g

    Hi,
    We are experiencing very slow response when a duplicate key is inserted into a table with a unique index under 10g. The scenario can be demonstrated in SQL*Plus with 'timing on':
    CREATE TABLE yyy (Col_1 VARCHAR2(5 BYTE) NOT NULL, Col_2 VARCHAR2(10 BYTE) NOT NULL);
    CREATE UNIQUE INDEX yyy on yyy(col_1,col_2);
    insert into yyy values ('1','1');
    insert into yyy values ('1','1');
    The 2nd insert results in a "unique constraint" error, but under our 10g the response time is consistently in the range of 00:00:00.64. The 1st insert only took 00:00:00.01. BTW, with no index or a non-unique index you can insert many times and all of them return fast. Under our 9.2 DB the response time is always under 00:00:00.01 with no index, a unique index, or a non-unique index.
    We are on AIX 5.3 & 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production.
    Has anybody seen this scenario?
    Thanks,
    David

    It seems that in 10g Oracle is simply doing something more.
    I used your example and ran the following script on 9.2 and 10.2. The hardware is the same, i.e. these are two instances on the same box.
    begin
      for i in 1..10000 loop
        begin
          insert into yyy values ('1','1');
        exception when others then null;
        end;
      end loop;
    end;
    /
    On 10g it took 01:15.08 and on 9i 00:47.06.
    Running a trace showed that there was a difference between 9i and 10g in the plan of the following recursive SQL:
    9i plan:
    select c.name, u.name
    from
    con$ c, cdef$ cd, user$ u  where c.con# = cd.con# and cd.enabled = :1 and
      c.owner# = u.user#
    call     count       cpu    elapsed       disk      query    current        rows
    Parse    10000      0.43       0.43          0          0          0           0
    Execute  10000      1.09       1.07          0          0          0           0
    Fetch    10000      0.23       0.19          0      20000          0           0
    total    30000      1.76       1.70          0      20000          0           0
    Misses in library cache during parse: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 2)
    Rows     Row Source Operation
          0  NESTED LOOPS 
          0   NESTED LOOPS 
          0    TABLE ACCESS BY INDEX ROWID CDEF$
          0     INDEX RANGE SCAN I_CDEF4 (object id 53)
          0    TABLE ACCESS BY INDEX ROWID CON$
          0     INDEX UNIQUE SCAN I_CON2 (object id 49)
          0   TABLE ACCESS CLUSTER USER$
          0    INDEX UNIQUE SCAN I_USER# (object id 11)
    10g plan:
    select c.name, u.name
    from
    con$ c, cdef$ cd, user$ u  where c.con# = cd.con# and cd.enabled = :1 and
      c.owner# = u.user#
    call     count       cpu    elapsed       disk      query    current        rows
    Parse    10000      0.21       0.20          0          0          0           0
    Execute  10000      1.20       1.31          0          0          0           0
    Fetch    10000      2.37       2.59          0      20000          0           0
    total    30000      3.79       4.11          0      20000          0           0
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 2)
    Rows     Row Source Operation
          0  HASH JOIN  (cr=2 pr=0 pw=0 time=301 us)
          0   NESTED LOOPS  (cr=2 pr=0 pw=0 time=44 us)
          0    TABLE ACCESS BY INDEX ROWID CDEF$ (cr=2 pr=0 pw=0 time=40 us)
          0     INDEX RANGE SCAN I_CDEF4 (cr=2 pr=0 pw=0 time=27 us)(object id 53)
          0    TABLE ACCESS BY INDEX ROWID CON$ (cr=0 pr=0 pw=0 time=0 us)
          0     INDEX UNIQUE SCAN I_CON2 (cr=0 pr=0 pw=0 time=0 us)(object id 49)
          0   TABLE ACCESS FULL USER$ (cr=0 pr=0 pw=0 time=0 us)
    So in 10g it used a hash join instead of a nested loop join, at least for this particular select. Probably time to gather stats on SYS tables?
    The difference in time wasn't that big, though (4.11 vs 1.70 seconds), so it doesn't explain all the time taken.
    But you can check whether you see a bigger difference.
    You can also download Tom Kyte's runstats_pkg and run it in both environments to compare whether some statistics or latches differ significantly.
    Gints Plivna
    http://www.gplivna.eu
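    For what it's worth, the failing-insert loop above can be re-created portably with Python's sqlite3 module (a sketch only: the index name yyy_uq is invented, since SQLite, unlike Oracle, does not let an index share its table's name, and absolute timings will of course differ from Oracle's):

    ```python
    import sqlite3
    import time

    # Re-creation of the PL/SQL loop: 10,000 inserts that all violate a
    # unique index, with the error swallowed each time.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE yyy (col_1 TEXT NOT NULL, col_2 TEXT NOT NULL)")
    conn.execute("CREATE UNIQUE INDEX yyy_uq ON yyy (col_1, col_2)")  # name assumed
    conn.execute("INSERT INTO yyy VALUES ('1', '1')")

    start = time.perf_counter()
    failures = 0
    for _ in range(10000):
        try:
            conn.execute("INSERT INTO yyy VALUES ('1', '1')")
        except sqlite3.IntegrityError:
            failures += 1  # every attempt hits the unique constraint
    elapsed = time.perf_counter() - start
    print(f"{failures} failed inserts in {elapsed:.2f}s")
    ```

    Comparing the per-failure cost across versions (as the trace above does with tkprof) is the key step; the loop itself just generates a measurable workload.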

  • Text query using a Multi Column datastore index slow

    I have created a text index using a multi-column datastore preference. I have specified two CLOB columns in my preference. Searching on this new index works, but it is slower than I expected.
    I have done the following comparison:
    My original two clob columns are: DocumentBody and DocumentFields. I have built an individual text index on each column. My new column with Multi Column index is DocumentBodyAndFields;
    I did two queries:
    1. search 'dog' on DocumentBody UNION search 'dog' on DocumentFields;
    2. search 'dog' on DocumentBodyAndFields;
    I would have thought the second search should be faster than the first one because it is a single query, but this is not the case: the second query is consistently slower than the first by about 10-20%.
    Things get much worse when I search with a leading wildcard. If I search '%job', the multi-column index is twice as slow as the first query! I am very confused by this result. Is this a bug?

    I am unable to reproduce the performance problem. In my tests, the search that uses the multicolumn_datastore performs better, as demonstrated below. Can you provide a similar test case that shows the table structure, datastore, index creations, and explain plan?
    SCOTT@orcl_11g> CREATE TABLE your_tab
      2    (DocumentId            NUMBER,
      3       DocumentBody            CLOB,
      4       DocumentFields            CLOB,
      5       DocumentBodyAndFields  VARCHAR2 (1))
      6  /
    Table created.
    SCOTT@orcl_11g> INSERT ALL
      2  INTO your_tab VALUES (-1, 'adog', 'bdog', NULL)
      3  INTO your_tab VALUES (-2, 'adog', 'whatever', NULL)
      4  INTO your_tab VALUES (-3, 'whatever', 'bdog', NULL)
      5  SELECT * FROM DUAL
      6  /
    3 rows created.
    SCOTT@orcl_11g> INSERT INTO your_tab
      2  SELECT object_id, object_name, object_name, NULL
      3  FROM   all_objects
      4  /
    69063 rows created.
    SCOTT@orcl_11g> BEGIN
      2    CTX_DDL.CREATE_PREFERENCE
      3        ('your_datastore', 'MULTI_COLUMN_DATASTORE');
      4    CTX_DDL.SET_ATTRIBUTE
      5        ('your_datastore', 'COLUMNS', 'DocumentBody, DocumentFields');
      6  END;
      7  /
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11g> CREATE INDEX your_idx1 ON your_tab (DocumentBody)
      2  INDEXTYPE IS CTXSYS.CONTEXT
      3  /
    Index created.
    SCOTT@orcl_11g> CREATE INDEX your_idx2 ON your_tab (DocumentFields)
      2  INDEXTYPE IS CTXSYS.CONTEXT
      3  /
    Index created.
    SCOTT@orcl_11g> CREATE INDEX your_idx3 ON your_tab (DocumentBodyAndFields)
      2  INDEXTYPE IS CTXSYS.CONTEXT
      3  PARAMETERS ('DATASTORE your_datastore')
      4  /
    Index created.
    SCOTT@orcl_11g> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'YOUR_TAB')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11g> SET TIMING ON
    SCOTT@orcl_11g> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11g> SELECT DocumentId FROM your_tab
      2  WHERE  CONTAINS (DocumentBody, '%dog') > 0
      3  UNION
      4  SELECT DocumentId FROM your_tab
      5  WHERE  CONTAINS (DocumentFields, '%dog') > 0
      6  /
    DOCUMENTID
            -3
            -2
            -1
    Elapsed: 00:00:00.65
    Execution Plan
    Plan hash value: 4118340734
    | Id  | Operation                     | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |           |     4 |   576 |     2 (100)| 00:00:01 |
    |   1 |  SORT UNIQUE                  |           |     4 |   576 |     2 (100)| 00:00:01 |
    |   2 |   UNION-ALL                   |           |       |       |            |          |
    |   3 |    TABLE ACCESS BY INDEX ROWID| YOUR_TAB  |     2 |   288 |     0   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | YOUR_IDX1 |       |       |     0   (0)| 00:00:01 |
    |   5 |    TABLE ACCESS BY INDEX ROWID| YOUR_TAB  |     2 |   288 |     0   (0)| 00:00:01 |
    |*  6 |     DOMAIN INDEX              | YOUR_IDX2 |       |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("CTXSYS"."CONTAINS"("DOCUMENTBODY",'%dog')>0)
       6 - access("CTXSYS"."CONTAINS"("DOCUMENTFIELDS",'%dog')>0)
    SCOTT@orcl_11g> SELECT DocumentId FROM your_tab
      2  WHERE  CONTAINS (DocumentBodyAndFields, '%dog') > 0
      3  /
    DOCUMENTID
            -1
            -2
            -3
    Elapsed: 00:00:00.28
    Execution Plan
    Plan hash value: 65113709
    | Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |           |     4 |    76 |     0   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| YOUR_TAB  |     4 |    76 |     0   (0)| 00:00:01 |
    |*  2 |   DOMAIN INDEX              | YOUR_IDX3 |       |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("CTXSYS"."CONTAINS"("DOCUMENTBODYANDFIELDS",'%dog')>0)
    SCOTT@orcl_11g>

  • TOC/Index slow to load

    Hi all,
    I have just followed Peter Grainge's methods for splitting a
    large project, and creating merged webhelp (thanks Peter!).
    Everything is working as expected, except when the webhelp is
    first called the TOC and index take about 3 seconds to load from
    the UK. This increases to over 12 seconds in New York, which is
    pretty unacceptable.
    When we had one large project the appearance of the TOC was
    instantaneous here in the UK, and much quicker in NY too.
    Does anyone know if it is possible to speed this up?
    I am publishing with Optimize Speed For Web Site, and I've
    tried excluding the Java Applet but this made no difference (I
    thought it might cut out some of the processing time). The actual
    help topic in the right pane appears instantaneously.
    Many thanks,
    Kate

    Some things to consider:
    I also notice that a large merged project loads OK, up to
    displaying the welcome topic.
    The TOC is sometimes a little slow, and a large index and search database can take a while.
    It wasn't always this way. I've surmised the server is busier than it used to be.
    Also, WebHelp output raises a lot of red flags with html and
    JavaScript checkers. Some of them may be related to slow loading.
    I routinely make these fixes:
    whdata/whftdata0.htm
    It's missing a </body> tag before </html>
    whdata/whftwdata.htm
    Same as above
    whfdhtml.htm
    Has </head> not followed by <body>. Either move
    </head> down to nearly the end, or insert <body> and
    then near the end, insert </body>
    whtdhtml.htm
    Has head and body sections with a block of scripts between
    them. Should be in one or the other.
    whskin_tbars.htm
    Has no proper <body> tag.
    Same for
    whfform.htm
    Also have a look at the launch file,
    myproject.htm
    After the </head> there are scripts not enclosed in a
    head or body section. I move </head> down, just before the
    framesets.
    whskin_tw.htm
    No <head> ... </head> tags before and after the
    block of scripts.
    same for:
    whskin_frmset01.htm and ...10.htm.
    whskin_plist.htm
    A couple of timeout/reload pieces may be involved.
    in whstart.js, near the top, is a reload value 5000. Try
    2000.
    In whtbar.js, just a few lines from the end, is a function
    tryReload
    Comment out the document.location.reload line like this:
    // document.location.reload
    As usual, I must caution that this seems to work without introducing new problems in my environment(s). It may not be the same for you.
    Harvey

  • Index slows down query execution

    Hello everybody,
    I have reordered the join conditions for the query...
    select (first_name||' '||middle_name||' '||last_name) name,regn_no,age,gender,
    (select loc_name from locations where loc_id=location_code and loc_h_id='L6') district,
    person_id from persons p,musers u where reg_center_id=u.center_id and
    p.ipop='RG' and u.user_id = '8832' and u.eff_end_dt is null and p.CID = '1' order by p.crt_dt desc
    like this...
    select (first_name||' '||middle_name||' '||last_name) name,regn_no,age,gender,
    (select loc_name from locations where loc_id=location_code and loc_h_id='L6') district,
    person_id from musers u, persons p where reg_center_id=u.center_id and u.user_id = '8832'
    and p.ipop='RG' and u.eff_end_dt is null and p.CID = '1'
    because
    select count(*) from persons p, musers u where reg_center_id=u.center_id and
    p.ipop='RG' is 13002
    and
    select count(*) from persons p, musers u where reg_center_id=u.center_id and u.user_id = '8832' is 1007.
    In this exercise I have a couple of questions:
    1. This did not show any difference in the CPU time.
    and,
    I have created an index 'idx_ipop_persons' on persons(ipop) "create index idx_ipop_persons on persons(ipop)".
    2. The query is taking more time to execute than it was before creating the index.
    Please help me...
    Thanks,
    Aswin.

    Please post the execution plan for your query.
    I also need some details:
    select count(*) from person where ipop='RG'; -- how many records does it fetch?
    select distinct ipop from persons; -- how many records does it fetch?
    Regards
    RajaBaskar
    Execution plan:
    Execution Plan
    0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=921 Card=176 Bytes=11088)
    1  0     TABLE ACCESS (BY INDEX ROWID) OF 'LOCATIONS' (TABLE) (Cost=2 Card=1 Bytes=38)
    2  1       INDEX (RANGE SCAN) OF 'IDX_LOCID_LOCHDR_LOCATIONS' (INDEX) (Cost=1 Card=1)
    3  0     TABLE ACCESS (BY INDEX ROWID) OF 'PERSONS' (TABLE) (Cost=918 Card=176 Bytes=9152)
    4  3     NESTED LOOPS (Cost=921 Card=176 Bytes=11088)
    5  4       TABLE ACCESS (BY INDEX ROWID) OF 'MUSERS' (TABLE) (Cost=3 Card=1 Bytes=11)
    6  5         INDEX (RANGE SCAN) OF 'PK_MUSERS' (INDEX (UNIQUE)) (Cost=2 Card=1)
    7  4       INDEX (RANGE SCAN) OF 'IDX2_PERSONS' (INDEX) (Cost=1 Card=1464)
    select count(*) from person where ipop='RG';
    count(*)
    12135
    select distinct ipop from persons;
    distinct ipop
    RG
    OP
    IP
    RF
    CR

  • Spotlight indexing slow/stalled in 10.7.5

    Hi,
    I know there is another thread to do with the interface between spotlight and time machine, but I just would like to have one thread about the issues with spotlight alone.  I turned spotlight off (so I could do a TM backup, as advised on many previous threads) and then when I turned it on again the time required to index 300 GB is unacceptable.  On top of that I have a laptop which I carry from place to place, so it is out of the question to leave it running for the 9 days, 3 weeks, 2 years or whatever it takes to index.  It keeps changing its mind about how much time it is going to need; this is the current display (the blue bar never gets further than this point, for this particular display it has been running for 5 days already)
    I tried the command
    sudo opensnoop -n mdworker
    to see what is going on and I see that it sometimes gets stuck on this line
    501   2845 mdworker32    5 /var/folders/b9/d9f8j9l9611ds62x0168b20m0000gn/C//sandbox-cache.db
    501   2845 mdworker32    4 /Users/paula/Library/Preferences/ByHost/.GlobalPreferences.0FBF59E7-8956-529B-BBA4-3E756EEB02E8.plist
    501   2845 mdworker32    4 /Users/paula/Library/Preferences/.GlobalPreferences.plist
    501   2845 mdworker32    4 /Library/Preferences/.GlobalPreferences.plist
    Should I delete GlobalPreferences.plist or is this too dangerous?
    I tried dragging drives in and out of the privacy list on spotlight, rebooting etc. but nothing helps.
    I spent several hours on the phone to Apple - no help, not even referred to someone who knew what I was talking about.
    Any advice out there...?  If, to back up with TM I have to every time turn off spotlight, then wait 3 weeks for it to reindex - this isn't possible..
    I can't roll back to a previous version because I don't have any TM backup from a previous operating system.

    Hi,
    before you delete anything, you should try this update for 10.7.5:
    http://support.apple.com/kb/DL1599
    I can't tell if it solves the problem, but the release notes refer to this bug specifically.
    I lost my nerve before it came out and installed 10.8.2 instead. That solved the problem for me.
    Dirk

  • INDEX and DML

    Hello every one,
    I have a maybe-crazy question, but I am confused.
    1. Do indexes store the column data in the database somewhere other than the base table, or just references to the table? Please clear this doubt up for me. I have already read the docs.
    2. When we do DML on a table, does the index get updated on every commit, on every DML statement, at certain intervals, or only when the index is analyzed or rebuilt?
    Maybe these questions are helpful to others also.
    Thank you in advance

    An index acts like table data in some ways:
    it is changed when you modify your data, and it generates undo and redo. If the transaction is committed, then practically no additional actions are taken. When a transaction is rolled back, the Oracle Database performs many operations to restore the table data and the index data to the state they were in before the transaction began.
    Please note that indexes SLOW DOWN DML operations! The more indexes you have, the MORE time you need to perform the same DML operations.
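    That slowdown is easy to measure. Here is a minimal sketch using Python's sqlite3 module (table and index names are invented; the effect is the same in Oracle, only the magnitude differs):

    ```python
    import sqlite3
    import time

    def timed_inserts(with_indexes: bool, n: int = 20000) -> float:
        """Insert n rows into a fresh table and return the elapsed seconds."""
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (c1 INTEGER, c2 TEXT)")
        if with_indexes:
            # Every insert now also has to maintain these two B-trees.
            conn.execute("CREATE INDEX t_c1 ON t (c1)")
            conn.execute("CREATE INDEX t_c2 ON t (c2)")
        start = time.perf_counter()
        conn.executemany("INSERT INTO t VALUES (?, ?)",
                         ((i, f"row-{i}") for i in range(n)))
        conn.commit()
        elapsed = time.perf_counter() - start
        conn.close()
        return elapsed

    print(f"no indexes:  {timed_inserts(False):.3f}s")
    print(f"two indexes: {timed_inserts(True):.3f}s")
    ```

    On most runs the indexed case is measurably slower, and the gap grows with each additional index.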

  • I want information on a practical scenario: how to create indexes?

    Hi,
    I want complete information about indexes. How do you create indexes in real time?

    " Secondary Database
    First it must be stated that table design is more of a logical exercise, while index design is rather technical. In table design it might make sense to place certain fields (client, company code, ...) at the beginning. In index design, this is not advisable. Very important for an index is that it contains very selective fields at the beginning. Those are fields like object numbers. Not selective are client, company code, ...
    Indexes should be small (few fields). The database optimizer can combine two or more indexes to execute a query.
    Indexes on one table should be disjoint (have few common fields), in order not to confuse the optimizer about which index to use.
    Note that each index slows down inserts into the table. Updates are only slowed down if indexed fields are updated. In general, heavily inserted tables should have only a few indexes, while heavily selected tables might have more.
    " Creating Secondary Indexes
    Procedure
    1. In the maintenance screen of the table, choose Indexes.
    If indexes already exist on the table, a list of these indexes is displayed. Choose .
    2. In the next dialog box, enter the index ID and choose 
    The maintenance screen for indexes appears.
    3. Enter an explanatory text in the field Short text.
    You can then use the short text to find the index at a later time, for example with the R/3 Repository Information System.
    4. Select the table fields to be included in the index using the input help for the Field name column.
    The order of the fields in the index is very important. See What to Keep in Mind for Secondary Indexes.
    5. If the values in the index fields already uniquely identify each record of the table, select Unique index.
    A unique index is always created in the database at activation, because it also has a functional meaning (it prevents duplicate entries of the index fields).
    6. If it is not a unique index, leave Non-unique index selected.
    In this case you can use the radio buttons to define whether the index should be created for all database systems, for selected database systems, or not at all in the database.
    7. Select For selected database systems if the index should only be created for selected database systems.
    Click on the arrow behind the radio buttons. A dialog box appears in which you can define up to 4 database systems with the input help. Select Selection list if the index should only be created on the given database systems. Select Exclusion list if the index should not be created on the given database systems. Choose .
    8. Choose Activate.
    " Result
    The secondary index is automatically created in the database during activation if the corresponding table was already created there and index creation was not excluded for the database system.
    You can find information about the activation flow in the activation log, which you can call with Utilities ® Activation log. If errors occurred when activating the index, the activation log is automatically displayed.
    " How to Check if an Index is Used
    Procedure
    1. Open a second session and choose System -> Utilities -> Performance trace.
    The Trace Requests screen appears.
    2. Select Trace on.
    The SQL trace is activated for your user, that is, all the database operations under your user are recorded.
    3. In the first window, perform the action in which the index should be used.
    If your database system uses a cost-based optimizer, you should perform this action with data that is as representative as possible. A cost-based optimizer tries to determine the best index based on the statistics.
    4. In the second session, choose Trace off and then Trace list.
    Result
    The format of the generated output depends on the database system used. You can determine the index that the database used for your action with the EXPLAIN function for the critical statements (PREPARE, OPEN, REOPEN).
    " What to Keep in Mind for Secondary Indexes
    How well an existing index supports data selection from a table largely depends on whether the data selected with the index represents the data that will ultimately be selected. This can best be shown using an example.
    ' Example  :
    An index is defined on fields FIELD1, FIELD2, FIELD3 and FIELD4 of table BSPTAB in this order. This table is accessed with the SELECT statement:
    SELECT * FROM BSPTAB WHERE FIELD1 = X1 AND FIELD2 = X2 AND FIELD4 = X4.
    Since FIELD3 is not specified more exactly, only the index sorting up to FIELD2 is of any use. If the database system accesses the data using this index, it will quickly find all the records for which FIELD1 = X1 and FIELD2 = X2. You then have to select all the records for which FIELD4 = X4 from this set.
    The order of the fields in the index is very important for the accessing speed. The first fields should be those which have constant values for a large number of selections. During selection, an index is only of use up to the first unspecified field.
    Only those fields that significantly restrict the set of results in a selection make sense for an index.
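    The BSPTAB example can be sketched in any database with a plan display; here with Python's sqlite3 module (the index name idx is invented, and SQLite's EXPLAIN QUERY PLAN plays the role of the EXPLAIN function mentioned above):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE BSPTAB (FIELD1, FIELD2, FIELD3, FIELD4)")
    conn.execute("CREATE INDEX idx ON BSPTAB (FIELD1, FIELD2, FIELD3, FIELD4)")

    # FIELD3 is unspecified, so the index is only usable up to FIELD2;
    # FIELD4 must be filtered from the result set afterwards.
    plan = conn.execute(
        "EXPLAIN QUERY PLAN "
        "SELECT * FROM BSPTAB WHERE FIELD1 = ? AND FIELD2 = ? AND FIELD4 = ?",
        ("X1", "X2", "X4"),
    ).fetchall()
    print(plan[0][-1])  # the plan's detail column names the index used
    ```

    The detail line typically shows the index being searched with equality terms on FIELD1 and FIELD2 only, which is exactly the "index of use up to the first unspecified field" rule described above.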
    Girish

  • DML,Transactions and index updates

    Hi,
    It's known that adding indexes slows down DML on the table, i.e. every time the table data changes, the index has to be recalculated. What I am trying to understand is whether the index is recalculated as soon as Oracle sees the change.
    To elaborate, let's say I have a table abc with 4 columns: column1, column2, column3 and column4. I have two indexes: one unique on column1 and another, non-unique, on column2.
    So when I update column4, which is not indexed, will there be any transactional data generated for this operation? Will it be generated if I update column2 (with the non-unique index)?
    What I am interested to know is how transaction boundaries impact the recalculation of the index. Will Oracle always generate transactional entries and recalculate affected indexes even before the transaction is committed and the data change is made permanent?

    user9356129 wrote:
    > Its known adding indexes slows down the DML on the table. i.e. every time table data changes, the index has to be recalculated.
    Yes, but only when involved (i.e. indexed) columns are changed. And indexes are not "recalculated". Assuming the index is of type B-tree (by far the most commonly used type), the B-tree is "maintained". How that's done can be found in elementary computer science materials, which you can probably find using Google.
    > So when i am trying to update column4, which is not indexed, will there be any transactional data generated for this operation?
    You'll need to clarify what you mean by "transactional data". But in this case the block(s) that hold(s) the table row(s) in which you have updated column4 will be changed, in memory, to reflect your update. And as column4 is not involved in any index, no index blocks will be changed.
    > Will it be generated if i am updating column2 (with non-unique index)?
    In this case not only table blocks will be changed to reflect your update, but also index blocks (that hold B-tree information) will be changed (in memory).
    > Will oracle always generate transactional entries and recalculate affected indexes even before the transaction is committed and the data change is made permanent?
    Yes, to the part of the latter sentence following 'and'. (I don't know what you mean by "transactional entries".)
    Toon
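    Toon's point about non-indexed columns can be seen in miniature with Python's sqlite3 module (a sketch; the table and column names follow the thread's example, while the index names are made up):

    ```python
    import sqlite3
    import time

    # Updating a non-indexed column skips index maintenance entirely,
    # while updating an indexed column must also rewrite index entries.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE abc (column1, column2, column3, column4)")
    conn.execute("CREATE UNIQUE INDEX abc_u1 ON abc (column1)")  # name assumed
    conn.execute("CREATE INDEX abc_n2 ON abc (column2)")         # name assumed
    conn.executemany("INSERT INTO abc VALUES (?, ?, ?, ?)",
                     [(i, i % 50, i, 0) for i in range(20000)])
    conn.commit()

    def timed(sql: str) -> float:
        start = time.perf_counter()
        conn.execute(sql)
        conn.commit()
        return time.perf_counter() - start

    t4 = timed("UPDATE abc SET column4 = column4 + 1")  # no index touched
    t2 = timed("UPDATE abc SET column2 = column2 + 1")  # abc_n2 maintained
    print(f"non-indexed column: {t4:.3f}s, indexed column: {t2:.3f}s")
    ```

    The indexed-column update is usually the slower of the two, since every changed row also requires a delete and re-insert of its entry in abc_n2.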

  • Criteria for choosing to create index on column?

    Hello,
    This may be a trivial question. I have been reading about performance tuning of SQL queries. What criteria are used to determine which columns need an index?
    If a query is as follows:
    Select * from table where column1=<value> and column2=<value> and column3=<value>;
    What are the criteria for choosing a particular column to be indexed? Is it the column containing the higher number of unique values, so that index selectivity is high?
    Thanks

    For that particular query, an index on all three columns would be best. However, that's probably not the only query on that table, and you have to take them into account as well.
    What's the main use of your table? What functionality is used most or is most important? Design your index scheme to assist those queries if possible.
    If you have a book, would you use the index? If it's a technical reference book, or an encyclopedia, yes. If it's a novel you read from start to finish, probably not. And if you are reading the Oracle Concepts Manual and you want to search for the paragraphs containing the word "database", would you use an index? And if you search for the logwriter process (LGWR)? It's just about common sense in general: you'll want an index to be selective.
    Another side of the story is that indexes slow down inserts, deletes and updates, so you'll have to make a trade-off: is this penalty worth the speedup of a few queries?
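    The trade-off above can be sketched with Python's sqlite3 module (names are invented; the same reasoning applies to Oracle): a single composite index on all three columns serves the query, at the price of extra work on every insert, update, and delete.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (column1, column2, column3, payload)")
    # One composite index covering all three predicates of the query.
    conn.execute("CREATE INDEX t_c123 ON t (column1, column2, column3)")
    conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
                     [(i % 10, i % 100, i, "x") for i in range(1000)])

    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM t "
        "WHERE column1 = ? AND column2 = ? AND column3 = ?",
        (1, 1, 1),
    ).fetchall()
    print(plan[0][-1])  # the detail column shows the composite index in use
    ```

    Whether that index is worth its DML cost depends on how often this query runs relative to how often the table is modified.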
    Regards,
    Rob.

  • Indexing Speed

    Recently I have been using Oracle Text and I have noticed that the indexing speed is rather low. I have changed the max index memory to 2GB and the default to 2GB as well, created a partitioned table made of 8 partitions, and use the 'local' and 'parallel 8' keywords when indexing.
    I am running this on a server with 32GB of ram and 8 processors.
    It takes about 3.3 hours to index 3GB (360k documents).
    I am using a FILE_DATASTORE and the documents are stored on a RAID.
    I have used other search engines on the same machine, and this is considerably slower. Any ideas what I can do to speed this up?

    You might try increasing SORT_AREA_SIZE. Also, if your memory settings are so high as to cause paging, they can make indexing slower instead of faster.

  • Time Machine backups to WD My Book Live terribly slow

    I got a Western Digital My Book Live for Christmas and have set it up to take Time Machine backups over my home Wifi network.
    Unlike many other posts I've seen, my initial backup took a reasonable amount of time in my opinion.  My initial backup was about 400 GB and it did it over night in about 12 hours.  I did however plug my Macbook into my router via ethernet in hopes that would be faster (the My Book is also plugged into the same router via ethernet).  This was 2-3 weeks ago.
    Since then, I have set Time Machine to off as I use my Macbook at work each day and manually tell Time Machine to backup when I'm at home with my computer.  I didn't want Time Machine to constantly have errors while looking for my backup disc when I was at work and not at home.
    I have been trying to do a manual backup all weekend now with no luck. I have approximately 5 GB to back up, and Time Machine is only backing up 50 MB an hour, if that. I have even plugged in my computer via ethernet again.
    Any ideas on how to speed this up?  I've also read a lot about Spotlight indexing slowing TM down, but I don't think Spotlight is indexing right now.

    Rick Wall wrote:
    Time Machine is only backing up 50 MB an hour if that.
    Agreed that is crazy slow.
    WD used to be problematic for Macs; is this still the case?
    Did you reformat the WD with the correct partitioning scheme and format, and get rid of the proprietary backup software?
    Also, Time Machine, if left on, can take local snapshots, which may be a benefit to you.
    http://support.apple.com/kb/HT4878
    http://support.apple.com/kb/PH11394
    Does not answer your question I know.

  • Spotlight indexing after every turn on

    Every single time this machine is turned on spotlight begins to index, slowing the performance for the first couple of minutes.

    In total, the first-time indexing can take several hours. Its usefulness is in searching for a particular document you have written something in when you can't remember where, as well as other searches. Searching is quite fast once the indexing is finished. If you are already able to search using Spotlight, then the initial indexing has been completed. If you are not yet able to use it, it's because this initial indexing is not yet complete.
    After the initial indexing is done, incremental indexing happens any time the computer has free processing capacity and will not be noticeable at startup.
    Neville

  • Memory Upgrade for T61

    Hi friends,
    I have a T61 7663-xxx system with 1 GB PC2-5300 667 MHz RAM. My front-side bus is 800 MHz (I think so!). Everest software reports that my memory controller is dual-channel capable, and ~85% physical utilization of RAM. I have an nVidia Quadro NVS 140M 128 MB graphics card, but Everest reports 512 MB?
    I would like to upgrade my RAM, and buy some online.
    a) Should I buy a similar 2 GB PC2-5300 module for a performance boost, running Windows XP 32-bit? If so, should I buy Lenovo RAM only, to keep my 3-year warranty intact?
    (I read that only ThinkPad and Lenovo memory is covered by the ThinkPad and Lenovo notebook systems limited warranty.)
    b) Besides Lenovo, what are the best alternatives for good-quality, reliable laptop memory?
    c) Would it be better to buy RAM that matches my FSB speed? Would that have any performance benefit? Does RAM at that speed even exist?
    Kindly suggest top memory makers, if any.
    I could not find any 800 MHz memory on Lenovo's website.
    Kindly help me solve my memory upgrade problem.
    I am a mechanical engineering student, so I don't know much about FSB and the like; I request the learned members to shed light on this area.
    Info about reputable online retailers in the US with fast delivery would be much appreciated.
    Thanks in advance.
    Best Regards,
    Raiden
    TU Hamburg, Germany
    Solved!
    Go to Solution.

    Hmm.  As far as I know, the internal graphics card can't be upgraded.  There's a dock you can buy that lets you add a graphics card for use with an external monitor, but I don't know much about it.  The CPU is more likely to be upgradeable, but don't attempt it unless you know a decent amount about hardware servicing.  Also, make sure the CPU is supported by the motherboard!  Best not to buy one with a clock speed higher than what Lenovo sold (Core 2 Duo at 2.6 GHz).
    Why do you want to upgrade so much?  I run a 2.1 GHz CPU with discrete graphics and 2 GB of RAM under Vista, and it runs beautifully.  If you have integrated graphics, that might be a bit of a pain, but hey, you've got better battery life!
    Anyway, I'd look into de-cluttering your computer to speed things up.  Remove anything you don't need, especially Lenovo tools that run in the background.  Turn off search indexing (slower searches, but faster overall performance).  There are some nice guides online if you google "vista tuneup."
    Good luck!  Enjoy your T61 as it is!  It's a lovely laptop.

  • Welcome to the SQL Server Disaster Recovery and Availability Forum

    (Edited 8/14/2009 to correct links - Paul)
    Hello everyone and welcome to the SQL Server Disaster Recovery and Availability forum. The goal of this Forum is to offer a gathering place for SQL Server users to discuss:
    Using backup and restore
    Using DBCC, including interpreting output from CHECKDB and related commands
    Diagnosing and recovering from hardware issues
    Planning/executing a disaster recovery and/or high-availability strategy, including choosing technologies to use
    The forum will have Microsoft experts in all these areas and so we should be able to answer any question. Hopefully everyone on the forum will contribute not only questions, but opinions and answers as well. I’m looking forward to seeing this becoming a vibrant forum.
    This post has information to help you understand what questions to post here, and where to post questions about other technologies as well as some tips to help you find answers to your questions more quickly and how to ask a good question. See you in the group!
    Paul Randal
    Lead Program Manager, SQL Storage Engine and SQL Express
    Be a good citizen of the Forum
    When an answer resolves your problem, please mark the thread as Answered. This makes it easier for others to find the solution when they search for it later. If you find a post particularly helpful, click the link indicating that it was helpful.
    What to post in this forum
    It seems obvious, but this forum is for discussion and questions around disaster recovery and availability using SQL Server. When you want to discuss something that is specific to those areas, this is the place to be. There are several other forums related to specific technologies you may be interested in, so if your question falls into one of these areas where there is a better batch of experts to answer your question, we’ll just move your post to that Forum so those experts can answer. Any alerts you set up will move with the post, so you’ll still get notification. Here are a few of the other forums that you might find interesting:
    SQL Server Setup & Upgrade – This is where to ask all your setup and upgrade related questions. (http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/threads)
    Database Mirroring – This is the best place to ask Database Mirroring how-to questions. (http://social.msdn.microsoft.com/Forums/en-US/sqldatabasemirroring/threads)
    SQL Server Replication – If you’ve already decided to use Replication, check out this forum. (http://social.msdn.microsoft.com/Forums/en-US/sqlreplication/threads)
    SQL Server Database Engine – Great forum for general information about engine issues such as performance, FTS, etc. (http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/threads)
    How to find your answer faster
    There is a wealth of information already available to help you answer your questions. Finding an answer via a few quick searches is much quicker than posting a question and waiting for an answer. Here are some great places to start your research:
    SQL Server 2005 Books Online
    Search it online at http://msdn2.microsoft.com
    Download the full version of the BOL from here
    Microsoft Support Knowledge Base:
    Search it online at http://support.microsoft.com
    Search the SQL Storage Engine PM Team Blog:
    The blog is located at https://blogs.msdn.com/sqlserverstorageengine/default.aspx
    Search other SQL Forums and Web Sites:
    MSN Search: http://www.bing.com/
    Or use your favorite search engine
    How to ask a good question
    Make sure to give all the pertinent information that people will need to answer your question. Questions like “I got an IO error, any ideas?” or “What’s the best technology for me to use?” will likely go unanswered, or at best just result in a request for more information. Here are some ideas of what to include:
    For the “I got an IO error, any ideas?” scenario:
    The exact error message. (The SQL Errorlog and Windows Event Logs can be a rich source of information. See the section on error logs below.)
    What were you doing when you got the error message?
    When did this start happening?
    Any troubleshooting you’ve already done. (e.g. “I’ve already checked all the firmware and it’s up-to-date” or "I've run SQLIOStress and everything looks OK" or "I ran DBCC CHECKDB and the output is <blah>")
    Any unusual occurrences before the error occurred (e.g. someone tripped the power switch, a disk in a RAID5 array died)
    If relevant, the output from ‘DBCC CHECKDB (yourdbname) WITH ALL_ERRORMSGS, NO_INFOMSGS’
    The SQL Server version and service pack level
    For the “What’s the best technology for me to use?” scenario:
    What exactly are you trying to do? Enable local hardware redundancy? Geo-clustering? Instance-level failover? Minimize downtime during recovery from IO errors with a single-system?
    What are the SLAs (Service Level Agreements) you must meet? (e.g. an uptime percentage requirement, a minimum data-loss in the event of a disaster requirement, a maximum downtime in the event of a disaster requirement)
    What hardware restrictions do you have? (e.g. “I’m limited to a single system” or “I have several worldwide mirror sites but the size of the pipe between them is limited to X Mbps”)
    What kind of workload does your application have? (Or is it a mixture of applications consolidated on a single server, each with different SLAs?) How much transaction log volume is generated?
    What kind of regular maintenance does your workload demand that you perform (e.g. “the update pattern of my main table is such that fragmentation increases in the clustered index, slowing down the most common queries so there’s a need to perform some fragmentation removal regularly”)
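    As an illustration of the regular index maintenance mentioned above, here is a minimal T-SQL sketch. The table and index names are hypothetical, and whether you REORGANIZE or REBUILD depends on the fragmentation level and your maintenance window:

```sql
-- Check fragmentation first (dbo.Orders and IX_Orders_OrderDate are hypothetical names)
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED');

-- Light, online defragmentation of the hypothetical index
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;

-- Heavier rebuild, typically used when fragmentation is high (roughly above 30%)
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;
```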
    Finding the Logs
    You will often find more information about an error by looking in the Error and Event logs. There are two sets of logs that are interesting:
    SQL Error Log: default location: C:\Program Files\Microsoft SQL Server\MSSQL.#\MSSQL\LOG (Note: The # changes depending on the ID number for the installed instance. This is 1 for the first installation of SQL Server, but if you have multiple instances, you will need to determine the ID number you're working with. See the BOL for more information about instance ID numbers.)
    Windows Event Log: Go to the Event Viewer in the Administrative Tools section of the Start Menu. The System event log will show details of IO subsystem problems. The Application event log will show details of SQL Server problems.
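    As a supplement to browsing the log files directly, the error log can also be read from a query window. `sp_readerrorlog` is an undocumented but commonly used wrapper around `xp_readerrorlog`; treat the parameters shown here as a sketch and verify them against your version:

```sql
-- Read the current SQL Server error log (0 = current, 1 = previous, ...)
EXEC sp_readerrorlog 0;

-- Filter the current log (1 = SQL Server log, as opposed to 2 = Agent log)
-- for I/O-related messages
EXEC sp_readerrorlog 0, 1, 'I/O';
```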

    Hi, I have a question on SQL database high availability. I have tried database mirroring on SQL Server Standard Edition, where synchronous mode is the only option available, and it is causing problems: my applications have been getting SQL timeout errors since I put the mirroring in place. As asynchronous mode is only available in Enterprise Edition, are there any suggestions on this? Thanks --- vijay
